Apple fixes dangerous 'GAZEploit' Vision Pro security flaw

Apple has fixed a Vision Pro security flaw that enabled hackers to steal passwords just by observing a user's eye movements.

Apple's Vision Pro has a way of showing the world a virtual version of you while you interact with others in virtual reality. Unfortunately, this very feature – called Persona – could've been used by hackers to steal a Vision Pro user's sensitive data.

The security flaw was discovered by a group of six computer scientists from the University of Florida's Department of Computer Science, and it was first reported on by Wired.

The GAZEploit attack, as the researchers dubbed it, works by tracking the eye movements of a user's Persona to identify when they're typing something on the Vision Pro's virtual keyboard. The researchers found that users tend to fixate on the specific key they're about to select, and they built an algorithm that used those fixations to reconstruct what users were typing. The results were quite accurate: the algorithm identified the correct letters of users' passwords 77 percent of the time, and when it came to detecting what people were typing in a message, it was accurate 92 percent of the time.
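The core idea — mapping gaze fixations to the nearest virtual key — can be illustrated with a small sketch. This is not the researchers' actual algorithm; the key coordinates, fixation heuristic, and thresholds below are all hypothetical, chosen only to show the general approach of turning a gaze trace into inferred keystrokes.

```python
import math

# Hypothetical key-center coordinates on a virtual keyboard (normalized units).
KEY_CENTERS = {
    "q": (0.05, 0.1), "w": (0.15, 0.1), "e": (0.25, 0.1),
    "a": (0.07, 0.3), "s": (0.17, 0.3), "d": (0.27, 0.3),
}

def nearest_key(point):
    """Return the key whose center lies closest to the gaze point."""
    return min(KEY_CENTERS, key=lambda k: math.dist(point, KEY_CENTERS[k]))

def infer_keystrokes(gaze_trace, fixation_len=3, radius=0.03):
    """Infer typed keys from a sequence of (x, y) gaze samples.

    A 'fixation' is crudely approximated as fixation_len consecutive
    samples that all stay within `radius` of the first sample; each
    fixation is then mapped to the nearest key center.
    """
    keys = []
    i = 0
    while i + fixation_len <= len(gaze_trace):
        anchor = gaze_trace[i]
        window = gaze_trace[i:i + fixation_len]
        if all(math.dist(anchor, p) <= radius for p in window):
            keys.append(nearest_key(anchor))
            i += fixation_len  # skip past this fixation
        else:
            i += 1
    return keys

# A trace that dwells on "w", saccades away, then dwells on "d".
trace = [(0.15, 0.1)] * 3 + [(0.5, 0.5)] + [(0.27, 0.3)] * 3
print(infer_keystrokes(trace))  # → ['w', 'd']
```

The real attack works from video of a Persona rather than raw gaze coordinates, so it must first estimate gaze direction from the rendered eyes — but the fixation-to-key mapping step is conceptually similar.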

The researchers disclosed the vulnerability to Apple back in April, and Apple fixed it in visionOS 1.3, which came out in July. In the release notes, Apple said the flaw enabled inputs to the virtual keyboard to be inferred from Persona.

"The issue was addressed by suspending Persona when the virtual keyboard is active," Apple wrote in the release notes. Vision Pro users who haven't yet updated to the latest version are advised to do so as soon as possible.

Suspending Persona while the user types was a simple fix, but the flaw raises the question of just how much information a malicious hacker could infer simply by observing a virtual version of you.

The researchers said the attack hasn't been used against anyone's Persona in the real world. But what makes it particularly dangerous is that it only requires a video recording of someone's Persona while they were typing, meaning an attacker could run it against an older video. For now, the only apparent way to mitigate the issue is to erase any publicly available videos in which your Persona is visible while typing; we've reached out to Apple for clarification on what can be done to protect your data.