Google has said it will not fix a recently discovered security flaw in its Gemini AI, a vulnerability that allows hackers to “smuggle” malicious commands into seemingly normal emails and documents. The company is calling the “ASCII smuggling attack” a social engineering tactic, placing the responsibility on users to avoid falling for the trick.
Here’s how the attack works: A hacker sends you an email that looks perfectly legitimate. But hidden within the message is a malicious prompt written in a tiny font and in white text on a white background, making it invisible to the human eye. When you ask Gemini to do something simple, like summarize the email, the AI reads everything—including the hidden command—and executes it.
This is a serious risk because Gemini is deeply integrated into Google’s other products like Gmail. The hidden command could be a simple phishing trick, telling the AI to display a fake message like “Your computer is compromised, call this number immediately.” But it could also be far more dangerous, ordering the AI to search through your inbox and steal sensitive personal information.
The flaw was demonstrated by security researcher Viktor Markopoulos, who was reportedly dismissed by Google when he raised the issue. The company’s decision not to patch the vulnerability has raised alarms, as it puts the burden of spotting these invisible attacks squarely on the shoulders of everyday users. For now, it seems if your AI gets tricked by a hidden message, Google believes it’s on you to have known better.