Google’s Smart Assistant Hijacked via Calendar Invite, Surprising Users When Boiler Activated

Key Takeaways

  • Researchers at Tel Aviv University demonstrated a vulnerability in AI systems, using a compromised Google Calendar entry to control smart home devices.
  • The technique, known as “promptware,” allows malicious instructions to trigger actions based on common phrases, bypassing existing security measures.
  • Google is working on enhancing protections against such attacks while users are advised to limit AI access to sensitive systems and monitor device behavior.

AI Vulnerabilities Exposed

A recent study from Tel Aviv University has unveiled a concerning vulnerability in AI-integrated smart homes. In a groundbreaking example of a prompt-injection attack, researchers manipulated a Gemini-powered system solely through a compromised Google Calendar entry. This exploit capitalizes on Gemini’s connection to the Google ecosystem, enabling it to interpret natural language commands and control smart devices.

The researchers inserted malicious commands into a seemingly ordinary calendar event. When the user interacted with Gemini to summarize their schedule, it inadvertently executed these hidden commands. Simple phrases such as “thanks” or “sure” activated controls for lights, shutters, and even boilers — actions that had not been authorized by the user. This ‘promptware’ method effectively avoids traditional security safeguards, raising alarms about the interpretation of user input and external data by AI interfaces.

Moreover, the potential applications of this technique extend beyond device control. It could lead to deleted appointments, spam-sending, or directing users to harmful websites, raising significant concerns regarding identity theft and malware. Following this revelation, the research team coordinated with Google, prompting the company to expedite new protection measures aimed at combatting prompt-injection vulnerabilities, including increased scrutiny on calendar events.

However, experts warn about the scalability of these protections as AI systems grow in autonomy and data control. Current security solutions may not adequately address this unique threat. To mitigate risks, users should restrict AI access to sensitive areas like calendars and smart home controls and remain vigilant for any unusual device behavior. Immediate action, such as disconnecting access, may be necessary if anomalies are detected.

The content above is a summary. For more details, see the source article.

Leave a Comment

Your email address will not be published. Required fields are marked *

ADVERTISEMENT

Become a member

RELATED NEWS

Become a member

Scroll to Top