Meta has fixed a security bug that allowed users of its Meta AI chatbot to view the private prompts and AI-generated responses of other users.
Sandeep Hodkasia, the founder of security testing firm AppSecure, exclusively told TechCrunch that Meta paid him $10,000 in a bug bounty reward for privately disclosing the bug, which he reported on December 26, 2024.
Hodkasia said Meta deployed a fix on January 24, 2025, and found no evidence that the bug had been maliciously exploited.
Hodkasia told TechCrunch that he identified the bug after examining how Meta AI allows its logged-in users to edit their AI prompts to regenerate text and images. He discovered that when a user edits their prompt, Meta’s back-end servers assign the prompt and its AI-generated response a unique number. By analyzing the network traffic in his browser while editing an AI prompt, Hodkasia found he could change that unique number and Meta’s servers would return a prompt and AI-generated response of someone else entirely.
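In outline, the probe amounts to replaying the request seen in the browser's network traffic with a different identifier. Here is a minimal sketch of that shape, assuming a hypothetical endpoint, URL layout, and IDs; none of these names come from Meta's actual API:

```python
import requests

# Illustrative only: replay the request observed in browser network traffic,
# but with a different numeric ID. The host, path, and IDs are invented.
session = requests.Session()  # assume it carries a logged-in user's cookies

my_prompt_id = 100045  # the unique number the server assigned to this user's edited prompt

# Changing the unique number in the request returned another user's
# prompt and AI-generated response.
other_id = my_prompt_id - 1
resp = session.get(f"https://example.com/api/prompts/{other_id}")
print(resp.status_code, resp.text)
```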
The bug meant that Meta’s servers were not properly checking to ensure that the user requesting the prompt and its response was authorized to see it. Hodkasia said the prompt numbers generated by Meta’s servers were “easily guessable,” potentially allowing a malicious actor to scrape users’ original prompts by rapidly changing prompt numbers using automated tools.
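That class of flaw is commonly called an insecure direct object reference: the server trusts the identifier supplied in the request and skips the ownership check. A minimal sketch of the gap and the kind of fix it implies, using invented names and data rather than anything from Meta's code:

```python
# Illustrative sketch of the authorization gap, with invented data and names.
PROMPTS = {
    100044: {"owner_id": 7,  "prompt": "draft my resignation letter", "response": "..."},
    100045: {"owner_id": 42, "prompt": "draw a red bicycle",          "response": "..."},
}

def get_prompt_vulnerable(prompt_id: int, requesting_user_id: int) -> dict:
    # Returns whatever record matches the ID -- no ownership check, so any
    # logged-in user who guesses a sequential ID sees someone else's data.
    return PROMPTS[prompt_id]

def get_prompt_fixed(prompt_id: int, requesting_user_id: int) -> dict:
    # The fix: confirm the requester owns the record before returning it.
    record = PROMPTS[prompt_id]
    if record["owner_id"] != requesting_user_id:
        raise PermissionError("not authorized to view this prompt")
    return record
```

Random, hard-to-guess identifiers would also slow the kind of automated enumeration Hodkasia describes, but the server-side authorization check is the actual fix.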
When reached by TechCrunch, Meta confirmed it fixed the bug in January, and spokesperson Ryan Daniels said the company “found no evidence of abuse and rewarded the researcher.”
News of the bug comes at a time when tech giants are scrambling to launch and refine their AI products, despite the many security and privacy risks associated with their use.
Meta AI’s stand-alone app, which debuted earlier this year to compete with rival apps like ChatGPT, got off to a rocky start after some users inadvertently shared publicly what they thought were private conversations with the chatbot.