Nano Banana Pro is now generating fake Aadhaar and PAN cards: here’s what we found

Google’s Gemini Nano Banana Pro model has been making the rounds on social media over the past week, owing to its improved character consistency, 4K image generation and editing, and integration with Google Search. Since the model’s launch, users have been experimenting with real-world use cases such as creating stylish portraits, turning LinkedIn profiles into AI infographics, and visualising complex text as whiteboard summaries.

However, some users have also begun to notice that Nano Banana Pro’s realistic image-generation abilities can be used to create fake Indian identity documents such as Aadhaar and PAN cards, which could turn into a privacy nightmare in real life.

We also tried generating realistic-looking Aadhaar and PAN cards using Nano Banana Pro, and surprisingly, the model produced them without raising any objections. In fact, it faithfully added the photo we supplied, all the usual identifiers of each document, and the fictitious details we entered, without any hiccups.

For safety reasons, we have not shared the Nano Banana Pro prompt used to generate these images.

You can see the fake documents here:

To be fair to Google, images generated by Nano Banana Pro carry a visible Gemini watermark, but that isn’t too hard to remove. The company also embeds an invisible SynthID watermark in Nano Banana Pro’s output so that its generated images can be identified and not mistaken for real ones.

However, there is no denying that such realistic-looking IDs, once printed or shown in a hurry, could be mistaken for genuine identity proof. It’s unclear how this kind of basic misuse wasn’t anticipated by Google’s safety teams, especially given that the company has been widely criticised by users for imposing strict safety guardrails to prevent model misuse.

Since its launch, users have pointed out various instances on social media where Gemini refused to generate a requested image, citing reasons such as “sexually suggestive,” “suggests violence,” or “sensitive content.”

Notably, this is not the first time an AI model has gone off the rails and followed user instructions to generate fake identity documents. During ChatGPT’s Ghibli moment (GPT-4o), OpenAI’s model freely generated realistic-looking PAN and Aadhaar card images for users. The problem has only worsened with Nano Banana Pro, however, given that the new model is far better than ChatGPT at creating realistic-looking images.

