People are using ChatGPT to write their text messages – here’s how you can tell

[Image: text message. Kirill Stytsenko/Getty Images]

ZDNET’s key takeaways

  • People are using AI to write sensitive messages to loved ones.
  • Detecting AI-generated text is becoming more difficult as chatbots evolve.
  • Some tech leaders have promoted this use of AI in their marketing strategies.

Everyone loves receiving a handwritten letter, but those take time, patience, effort, and sometimes multiple drafts to compose. Most of us at one time or another have given a Hallmark card to a loved one or friend. Not because we don’t care; more often than not, because it’s convenient — or maybe we just don’t know what to say.

These days, some people are turning to AI chatbots like ChatGPT to express their congratulations, condolences, and other sentiments, or just to make idle chitchat. 

AI-generated messages

One Reddit user in the r/ChatGPT subreddit this past weekend, for example, posted a screenshot of a text they’d received from their mom during their divorce, which they suspected might have been written by the chatbot.

“I’m thinking of you today, and I want you to know how proud I am of your strength and courage,” the message read. “It takes a brave person to choose what’s best for your future, even when it’s hard. Today is a turning point — one that leads you toward more peace, healing, and happiness. I love you so much, and I’m walking beside you — always ❤️😘”

Also: Anthropic wants to stop AI models from turning evil – here’s how

The redditor wrote that the message raised some “red flags” since it was “SO different” from the language their mom usually used in texts.

In the comments, many other users defended the mother’s suspected use of AI — arguing, basically, that it’s the thought that counts. “People tend to use ChatGPT when they aren’t sure what to say or how to say it, and most important stuff fits into that category,” one person wrote. “I’m sure it’s very off-putting, but I think the intentions in this case were really good.”

As public use of generative AI has grown in recent years, so too has the number of online detection tools designed to distinguish AI-generated text from human writing. One of those, a site called GPTZero, reported a 97% probability that the text from the redditor’s mom had been written by AI. Detection is becoming more difficult, however, as chatbots grow more advanced.

Also: How to prove your writing isn’t AI-generated with Grammarly’s free new tool

On Friday, another user in the same subreddit posted a screenshot of a text they suspected had also been generated by ChatGPT. This one was more casual, with the sender discussing their life after college, but as with the message to the recent divorcée, something about its tone and language set off an instinctive alarm for the recipient. (The redditor behind that post commented that they replied using ChatGPT, offering a glimpse of a strange and perhaps not-so-distant future in which a growing number of text conversations are handled entirely by AI.)

AI-induced guilt

Others are wrestling with feelings of guilt after using AI to communicate with loved ones. In June, a redditor wrote that they felt “so bad” after they used ChatGPT to respond to their aunt: “it gave me a terrific reply that answered all her questions in a very thoughtful way and addressed every point,” the redditor wrote. “She then responded and said that it was the nicest text anyone has ever sent to her and it brought tears to her eyes. I feel guilty about this!”

AI-generated sentimentality has been actively encouraged by some within the AI industry. During the summer Olympics last year, for example, Google aired an ad depicting a mom using Gemini, the company’s proprietary AI chatbot, to compose a fan letter on behalf of her daughter to US Olympic runner Sydney McLaughlin-Levrone. 

Google removed the ad after receiving significant backlash from critics who pointed out that using a computer to speak on behalf of a child was perhaps not the most dignified or desirable technological future we should be aspiring to.

How can you tell?

Just as image-generating AI tools tend to garble words, add the occasional extra finger, and fail in other predictable ways, there are a few telltale signs of AI-generated text.

Also: I found 5 AI content detectors that can correctly identify AI text 100% of the time

The first and most obvious: a message supposedly from a loved one will lack the usual tone and style that person exhibits in their written communication. Similarly, AI chatbots generally won’t reference specific, real-life memories or people (unless prompted to do so), as humans so often do when writing to one another. A text that reads as a little too polished can be another indicator. And, of course, always look out for ChatGPT’s favorite punctuation: the em dash.
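The surface signals above can even be checked mechanically. Below is a toy sketch, not a real detector: it simply scans a message for em dashes and a short list of stock sentimental phrases. The phrase list and thresholds are illustrative assumptions only; actual tools like GPTZero rely on statistical models of the text, not keyword matching.

```python
# Toy heuristic, for illustration only. Real AI-text detectors use
# statistical models; this just flags two surface signals mentioned
# in the article: em dashes and stock sentimental phrasing.

STOCK_PHRASES = [
    "i want you to know",
    "proud of your strength",
    "walking beside you",
    "turning point",
]

def ai_text_signals(message: str) -> list[str]:
    """Return a list of surface-level AI-style signals found in the message."""
    signals = []
    if "—" in message:  # the em dash, ChatGPT's favorite punctuation
        signals.append("em dash")
    lowered = message.lower()
    hits = [p for p in STOCK_PHRASES if p in lowered]
    if hits:
        signals.append("stock phrases: " + ", ".join(hits))
    return signals

print(ai_text_signals("Today is a turning point — I’m proud of your strength."))
# → ['em dash', 'stock phrases: proud of your strength, turning point']
```

A casual text full of shorthand and typos would return an empty list, which is the point: these are weak hints, not proof, and an affectionate relative who happens to like em dashes would trip it too.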

You can also check for AI-generated text using GPTZero or another online AI text detection tool.
