Gemini has climbed up the AI race ladder against ChatGPT at a rapid pace but Google’s AI assistant could easily become a victim to the latest cybersecurity threat. Prompt injections are going to be the Achilles
heels for the tech segment in the near future as these attacks are capable of crippling even the most advanced AI models and bringing systems down to its knees.
Researchers are actively fighting these challenges by doing their own mock attacks and Gemini AI assistant was in the firing line of the latest prompt injection salvo. And the target was a private Google Calendar which was leaked using the AI chatbot and seeking its support.
Gemini AI Prompt Attack: How Google Calendar Helped
As mentioned in its report by Bleeping Computer, the researchers didn’t apply any technical nuance to attack the AI assistant and relied on basic language prompts. And you can do this by sending a Google Calendar invite injected with the malicious prompt payload, which when a person analyses using Gemini triggers the attack. So, the researchers at Miggo Security, quoted in the report, were able to use a strong point of the AI assistant and turned it into a massive weapon to target the victim.
Prompts are what trigger the AI chatbots into action and deliver the tasks you assign to them. Gemini linked with a Google account means it can access Gmail inbox, your documents for analysing and more. The same access becomes a nightmare if the hacker manages to bypass the security barriers of Gemini and these injection attacks make it look rather simpler than it should be which is scary by all means.
Gemini can summarise all your data, even create new items like a Calendar invite, and when you activate the payload inside these invites with the prompt, the dirty work starts behind the scenes, leaving all your data exposed.
Now, imagine if the same AI chatbot is fully controlling your smartphone, and it can execute any task you command, that’s the level of security threat these attacks are going to pose in the years to come. Google, quoted in the same report, has assured that it is actively working on solutions to fight these threats and is happy for more researchers to show the chinks that need to be tightened at its end.
Even then, it is imperative that the company and other tech giants need a strong mechanism to thwart these advanced cyber threats before they become too dangerous for people to think twice about deploying AI at such a personal and deep level into their ecosystem.












