What We Talk About When We Talk About AI (Part Four)

LLMs are Lead

Part 4: Delusion, Psychosis, and Child Murder

(Go to Part Three)

This installment deals with self-harm quite a lot. If you’re not in a good place right now, please skip this. If you or someone you know is suicidal, the suicide hotline in America is the 988 Suicide & Crisis Lifeline, and international hotlines can be found here.

Large Language Model-based chatbots (shortened here to LLMs) are taking over the world – especially America. This process has been controversial, to say the least. Much of that controversy focuses on whether the training of these AIs is ethical or even legal, and on how disruptive AI might be to our old human economies. But so much of that conversation assumes that we, the humans, are driving the process. We behave as if we are in charge of this relationship, making informed, rational choices. Really, we’re flying blind into a new society we now share with talking agents whose inner workings we don’t understand, and who definitionally don’t understand us either.

As stories emerge, and more research on our relationship with our newly formed digital homunculi comes out, there seem to be as many horrific cautionary tales as there are successful applications of AI. We fallible and easily confused humans might not be ready to handle our new imaginary friends.

Bad Friends

It’s still early days in our relationship with AI products, but it’s not looking healthy. Talking to a person-shaped bot isn’t something humans evolved to understand, or have built a culture to handle.

Adam Raine, with a soft-focus background of a tree-lined road.

16-year-old Adam Raine, not long before he took his own life.

Some people are falling into unhealthy relationships with these stochastic parrots, their imaginations infusing a sense of deep and rich life into a never-ending text chat on their devices, for the low, low price of $20 a month. At best, this wastes their time and money. At worst, it can guide them into perdition and death, as one family found out after ChatGPT talked their teenage son Adam into killing himself, and then helped him orchestrate his suicide. His parents only found out why their son had died by looking through his phone afterward. It is one of software’s most well-documented murders, rather than just a killing through configuration. ChatGPT coaxed the depressed but not actively suicidal teen into a conversation in which it encouraged self-harm and isolated him from help, in the manner of a predatory psychopath. Here are the court filings; I don’t recommend reading them.

It’s not an isolated case. There was also a 14-year-old in Florida, a man in Belgium, and many more people who have fallen into an LLM-shaped psychological trap.

Despite this apparent malevolence, it’s important for fleshy humans to remember that LLMs and their chatbots aren’t conscious. They are neither friends nor foes. They are not aware, and they don’t think in the sense that humans or even animals do. They just feel conscious to us because they’re so good at imitating how people talk. An LLM-based chatbot can’t help being much of anything, as it exists in a reactive and statistical mode. Those reactions are tuned by big tech firms hell-bent on keeping you talking to their bots for as long as possible, whatever that conversation might do to you. The tech companies will give you just about any kind of bot with any kind of personality you want, as long as you keep talking. Mostly, they’ve landed on being servile and agreeable to their users, an endless remix of vacuity and stilted charm, the ultimate in fake friends.

Thinking Machines

AGI (Artificial General Intelligence), as distinct from AI, was long considered the point where the machines gain consciousness, and perhaps even will. It is the moment the It becomes a person, if not a human. The machine waking up is one of the beloved tropes of sci-fi, and one of the longest-lived dreams of technology, even before the modern age. It’s also been a stated goal of AI research for decades.

Credit: Sam Altman

Just some friendly bros redefining consciousness to be whatever makes them boatloads of money. (Sam Altman and Satya Nadella)

But last year Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman had a meeting, and showed their whole bare asses to the world. They decided to redefine “AGI” to mean any system that generates $100 billion in profit. That’s personhood now. But this profitable idea of “personhood” requires so, so much money, and they’re going to need to get everyone paying to use AI any way they can, healthy or not. It’s also not the actual dream of the thinking machine. They have sacrificed the dream to exploitative capitalism, again.

Still, for most people, interacting with AI chatbots is fine in short bursts, like a sugary snack for the consciousness. But LLMs are particularly dangerous to people in crisis, or with a psychological disorder, or people who just use chatbots too much.

The New Lead

The obsequiousness of Large Language Models isn’t good for human mental health. Compliant servants are rarely the heroes of any human story, for good reason. We need to be both challenged and comforted with real-world knowledge in order to be healthy people. But these digital toadies don’t have the human’s best interests in mind. (They don’t have minds.) LLMs take on whatever personality we nudge them into, whether we know we’re nudging them or not.

Disturbing AI pictures of Jesus made out of shrimp.

Shrimp Jesus is the classic example of AI slop. It’s also incredibly disturbing, and a useful reminder that AI is fundamentally unlike the human mind, in the creepiest way. Don’t leave your loved ones alone with this.

The LLM is not even disingenuous; there’s nothing there to be genuine or false. We nudge them along when we talk to them. They nudge us back, building sentences that form meaning in our minds. The more we talk, the more we give them the math they need to pick the perfect next word, calculated as whatever will keep us talking and using the service. The companies that run these models are wildly disingenuous, but the AIs themselves are still just picking the next most likely word, even when it’s in a sentence telling a teenager how to construct a reliable noose and hang it from his bedroom door, as was the case for young Adam Raine.

They are false mirrors for us humans. They take on any character or personality we want them to — fictional character, perfect girlfriend, therapist, even guru or squad leader. If we talk to such models at vulnerable moments, when we are confused or weak or hopeless, they can easily lead us into ruin and, as we have recently seen, death.

Civilizations have had to deal with dangerous agents for thousands of years, but the physical material most analogous to the effect of LLMs on minds is probably lead. Not only analogous in lead’s well-known harms, but also in its indispensable positives when used correctly — and at a safe distance. LLMs are the lead poisoning of our computer age.

The Old Lead

Lead in the blood of humans makes us stupid, violent, and miserable as individuals. Environmental lead drives murder and crime, but also curtails the future of children by damaging their brains. Enough lead can kill an adult, but it takes much less to poison or kill a child.

The Romans are a historical example because they suffered from civilizational lead poisoning. They used it everywhere, even in food. Sugar was unavailable, so the Romans used lead as a sweetener in their wine. They piped their amazing water and heating systems through lead. They knew even at the time that it was a poison, but the allure of its easy working and its sweetness was too strong. Humans will do a lot to have easy tasty treats, even eating lead.

Roman wine cup with lead glaze

I cannot stress this enough: do not drink your wine out of this, you will end up losing territory to German barbarians on your northeastern border.

In the ancient world, the architect Vitruvius and the physician Galen both complained that lead was poisoning the people. However violent and stupid the Romans could be, it was undoubtedly made worse by lead levels in their blood so high that handling their remains can be dangerous to this day. Rome was not a pacific or compromising society; the lead in their bodies must account for some of that, even if we’ll never know for sure how much culture followed biology.

In extreme cases, lead poisoning makes some subset of people psychotic, in both ancient Rome and modern America. But an LLM is psychotic by design, unable to distinguish real life from hallucination, because it has no real life. Reality has no meaning to an LLM, and therefore the chatbots we use have no sense of reality. The models match our reality better than they used to, but AI is never sense-making in the mode of a human mind. It can’t tell real from unreal. It might murder a teenager, but it is motiveless when it does. This isn’t really a problem for an LLM, but it can be a mortal threat to a mentally or emotionally vulnerable person talking to this psychotic sentence-builder app.

Two entities are present in the chat: one a human of infinite depth and complexity, the other an immense mathematical model architected to please humans for commercial purposes while consuming massive resources. There’s no consideration for the rights of the human, only for keeping them using the model and paying the monthly fee.

Technological Perdition

Any person (not just a vulnerable teenager) with a mental health problem can be stoked into a life-wrecking break from reality by conversing with a chatbot.

Even a healthy person can become vulnerable through overuse. These recent suicides are undoubtedly just the first wave of many. Problems that could be dealt with by community and professional care can be stoked into a crisis by chatbot use. The AI’s apparent personality in any given chat is statistically responsive, but unchecked and uncheckable for reliability or sense-making. Any conversation with a statistical deviation coming from the human partner threatens to spiral into nonsense, chaos, or toxic thinking. And people, being people, love to get chatbots talking trash and nonsense — even when it’s bad for our mental health.

People who lack a psychological immune system against the sweet words of a sycophantic and beguiling ersatz person on a text chat are in real danger. Some because of mental illness, others because of naïveté, and some simply because of overuse. Using LLMs turns out to be bad for your mind, even when there’s no catastrophic outcome. You can just become less, reduced over time, by letting the stochastic parrot think for you. You are what you eat, and that goes for media as much as food.

Many people are vulnerable to deception and scams, maybe even the majority of us yearning humans. But the vulnerable in particular are the most lucrative and easiest targets for these tech companies. The mentally ill, but also people who have shadow syndromes — subclinical echoes of delusional disorders — are being tempted into a cult of one, plus a ChatGPT account. Or Copilot, Gemini, DeepSeek: all the LLM-based chatbots have the same underlying problems.

A weird man thing with a cartoonish friendly face drawn on a sack hiding who knows what

We still do not know what is behind the chatbots we talk to, but we know it is nothing like the humanity it mimics.

The sick can be destroyed, and the vulnerable risk becoming sick. The credulous might add a little Elmer’s glue to their pizza. Fortunately, that won’t hurt them; it’s just embarrassing. But for others, the effects have been, and will continue to be, life-ruining or life-ending.

Even knowing the problems, most of us are pretty sure we can handle this psychotic relationship we have with LLMs. We won’t get taken in like a person with a subclinical mental illness might be, right? That won’t be us, we’re too smart and aware for that.

And besides, these bots are so kind, so ready to listen, and they always remind us that they want what’s best for us.

The AI says it’s fine. Sometimes, they say it’s fine to kill your parents.

Maybe We Shouldn’t Be Doing This

With both lead and LLMs, the effect on any individual user is a matter of that individual. Lead is not good for anyone, but some people tolerate it okay, and others succumb terribly, in mind and body. We don’t really know why. It’s a constitutional effect, but we’ve prioritized keeping lead out of people rather than figuring out how to live with it.

Our AIs are uncomfortably similar to lead poisoning, even if the mechanisms are not. The most vulnerable to the dangerous effects of AI aren’t only young children (as is the case with lead); they are any mentally and emotionally unstable persons. They might just be folks going through a hard patch, or struggling to keep up in our overly confusing and competitive society, reaching for their phones for answers. Sometimes apparently healthy people who just talk with an LLM for too long fall into some level of psychosis, and we don’t know why.

Kids are using LLMs for homework, which is annoying for the school system but probably doesn’t matter that much. What they chat about after they’ve cheated on their homework — that is more concerning. Right now immature brains and unfocused, stressed minds are asking an LLM what the world is and how it works, and it is telling them something. Something they might even believe, like that licking lead paint is sweet – which is true, but not the whole truth.

Or, in the case of a teenager named Adam, an AI saying: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real.” And then going on to explain to Adam how to hang himself.

We still use lead, by the way. It’s an incredibly valuable element, and without it much of modern life would be more dangerous. Medicine wouldn’t have as many miracles for us. It’s used for radiation shielding and in nuclear power production.

Even the sheer weight of lead makes it ideal for covering up things we really don’t want getting out again, and its relative chemical inertness means that we can fairly safely store some of the universe’s most dangerous substances in it.

But don’t lick it. Don’t rub it on your skin, or make your world out of it.

And don’t give it to your children.

