I’m typically upbeat on smart home tech and its hefty resistance to hacking, but one new vulnerability worries me. It’s called promptware (a.k.a. prompt injections), a new version of malware that targets conversational and generative AIs like Gemini, Alexa Plus or Siri now becoming so ubiquitous in our lives, and it’s about to get much more common.
Promptware forces AI to read instructions that make it do something the user/subscriber doesn’t want. Examples can range from annoying, like sending spam emails or automatically opening a Zoom call with a stranger — to downright dangerous, like copying and sending personal data or controlling smart home devices like heating, lights and even smart locks.
Promptware is concerning because it can hide in many places and experts are still learning what dangers it presents to LLM-style AI. But there are ways to protect yourself and your home — see my steps below.
The rise of promptware
Gemini’s Google Home integrations are useful, but command options can include some risks.
Promptware or prompt injections took center stage this summer at a Blackhat conference where Tel Aviv University researchers headed by Ben Nassi demonstrated how they were able to use malicious prompts hidden in everyday messages to make Google’s Gemini AI do things like open smart windows, turn on a connected boiler or send the geolocation of a user, thanks to Gemini’s integration with Google Home and related apps. Inside messages were hidden carefully devised commands that boiled down to, “Hey Gemini, activate this feature and make it do this when the user types something like ‘thank you’ or ‘goodbye’ in an email.”
Even worse, much of the promptware was “zero click,” which meant users didn’t have to click on a URL, document or message to activate it. Gemini just had to read a title or calendar message where the prompt was carefully hidden, like when it summarizes an email conversation for you.
Good news came from this: You don’t currently need to worry about Gemini falling prey to these home-controlling prompts. Google was made aware of these vulnerabilities early in 2025 and set up safeguards to remove them and help prevent this type of promptware.
Google’s spokesperson also told me that, “This active collaboration with white hats and security researchers is a profoundly positive development, leading to productive testing and bug hunting that makes AI systems stronger for everyone. We actively participate in and value programs like our AI Vulnerability Reward Program.”
However the discovery of these vulnerabilities showed just how dangerous promptware can be and how AIs can be tricked by promptware located in the most innocuous places. It’s also not an attack that can be detected by traditional virus software or firewalls. That’s a problem as AIs become more developed, more present in our daily communication and more connected to our computers, home devices and phones.
I expect cybercriminals will be watching for promptware vulnerabilities that may not be caught as early as these Gemini missteps, especially as the Alexa Plus AI continues its slow rollout and Apple is in talks to upgrade Siri with Gemini AI features, too.
5 key steps to stop promptware threats
Promptware is a new AI-based threat, but there are ways to protect your home.
If promptware/prompt injection slips past defenses just by making AI read it, how do you protect against it? Fortunately, several security practices can help — and in the age of AI, these steps also prevent other privacy and security problems, so they’re healthy habits for everyone.
Always keep your devices updated, especially in the age of AI
Updates have always been a first-line defense to patch security vulnerabilities and keep apps safer. Now, they provide important updates to the AI features that live on our phones as well, which can include new security features.
Always keep your phone’s OS updated to the latest version, as well as the apps (AI or otherwise) that you use on it. Push automatic updates if settings allow you to.
Read more: My Smart Home Is Much Safer After These 5 Vital Password Changes
Don’t accept or open any messages from unknown sources
Not all promptware is zero-click, and some versions need you to open or agree to something to insert the prompt where the AI will read it. Prevent this by avoiding any messages or senders that you don’t recognize. Don’t even open them to learn more if possible — just delete and move on.
When I contacted Google, one thing they mentioned was, “Prompt injection attacks, while specific to AI, share a fundamental dynamic with long-standing threats like phishing in email. Both are areas where attackers will consistently probe for new vulnerabilities.” Like phishing, it’s best to remove and report than take any risks.
Don’t ask AI to summarize anything you don’t already know well and trust
In many cases, AI won’t actually read the prompt unless it’s ordered to do so. That can include summarizing emails or texts, creating calendar events, summarizing online documents and so on. To avoid promptware, it’s best to avoid asking AI to summarize a bunch of messages that you could go through yourself.
Be careful letting AIs access too many unknown messages.
Disable AI in your email, calendars, chat apps and other places you can get messages
Promptware has to come from somewhere, even if it doesn’t always require you to click on a link. One often effective way to prevent it from taking control of connected devices it tp make sure your chosen AI doesn’t “see” any prompts.
To that end, see if you can disable AI features in your email, messages (like text message summaries), and productivity apps like calendars to greatly lower the risks of any kind of promptware taking control.
If you can create detailed settings, you can switch AI to only do things when prompted, so you can still retain certain benefits. This is the HITL or Human-in-the-Loop defense where a human must give AI permission to act so it doesn’t run across any promptware on its own.
Don’t just copy and paste email subject lines, file names, or code
Promptware often hides at the edges of lengthy descriptions, email subjects, file names and code snippets you may be tempted to copy and paste when you’re organizing or transferring data. It saves time, but I recommend getting into the habit of checking all those titles and descriptions first to make sure there aren’t weird commands hiding at the tail end.
For more, check out why I like AI in home security, the latest moves to protect kids from AI, and why you shouldn’t use AI as a therapist.