Is the “A.I.” “bubble” “bursting”? – by Max Read

Greetings from Read Max HQ! In today’s edition, an examination of the “A.I. bubble,” various vibe shifts, and the long shadow cast by the crypto bubble over how we talk about it.

A reminder: This newsletter is totally free to read, but it costs me hours of labor to produce: Reading, talking, writing, taking long walks, complaining to my wife, staring at the wall, etc. I want to keep these weekly columns free, but in order to do so, because of the nature of the newsletter business, I need to keep growing. If you value what you’re reading here, and find it helpful to you as you navigate existence–and if you’d like to also receive the second weekly, paywalled newsletter of excellent book-and-movie recommendations–please consider upgrading to a paid subscription. It costs about the price of one beer a month.

Eons ago, I wrote a piece called “The A.I. backlash backlash,” about a pendulum swing, then occurring in what I suppose we’d call “the discourse,” against a previously dominant cycle of A.I. backlash (which itself was a reaction to a dominant cycle of A.I. hype that dated back to the debut of ChatGPT). At the time, L.L.M. chatbots had improved significantly over the preceding 18 months; many people had managed to incorporate A.I. into their work in ways that seemed useful to them; and the “vibe” in Silicon Valley, as New York Times columnist Kevin Roose wrote at the time, had “shifted” to anticipate so-called “artificial general intelligence” on a short timeline. In the hothouse hubs of A.I. Discourse (X.com, Substack, Bluesky), the hype was bubbling up, and the skeptics and critics seemed to be in retreat.

But, as the man says, Want to feel old? That was March. In the five months since Ezra Klein wrote in his Times column that “person after person… has been coming to me saying… We’re about to get to artificial general intelligence,” Meta has announced efforts to reorganize and downsize its A.I. division; NVIDIA’s “tepid” revenue forecast has suggested a broader slowdown; Sam Altman is warning that “investors as a whole are overexcited about AI”; and Gary Marcus, prince of L.L.M. haters, is on his fifth or sixth victory lap. The renewed hype has sputtered; the most fervent enthusiasts have become disillusioned; critics reign triumphant: The backlash to the backlash to the backlash has arrived.

What’s happened? It wouldn’t be wrong to point to widespread disappointment in OpenAI’s new flagship model, GPT-5, as the most important inflection point. Long hinted to be a significant step toward “A.G.I.,” if not the thing itself, and teased by Sam Altman with an image of the Death Star, the model seems to represent, overall, a minor improvement over its predecessors, subject to many of the same problems and errors that have afflicted L.L.M. chatbots since the earliest days. If nothing else, it has been a good reminder of how unreliable A.I. researchers and investors can be as judges of the significance of their own work: Remember those whispers and rumors of imminent epochal progress from this past winter, frequently cited to legitimize the new hype cycle? “Are you feeling the AGI?” Well, as it turns out, not really.

But beyond the A.I.-enthusiast distress at OpenAI’s uncharacteristic under-delivery, the model’s weaknesses crystallize an existential question for broader investment in A.I. research: Is this about as far as we can go with large language models? If OpenAI, one of the small number of companies that can be said to have the resources (capital, technical, and intellectual) to push L.L.M. capabilities forward, is finding itself reaching the point of diminishing returns, how much further can we really go with this paradigm?

Accompanying this return to technical skepticism has been a background drumbeat of anxiety about the increasingly large role of A.I. investment in the wider economy. A widely circulated Wall Street Journal article by Christopher Mims from earlier this month put in stark terms the scale of new investment:

A look at one key line item in company earnings reports—capital expenditures—shows that the most valuable tech companies are buying and building stuff at a record pace. The Magnificent 7 tech firms have collectively spent a record $102.5 billion on capex in their most recent quarters, nearly all from Meta, Alphabet (Google), Microsoft and Amazon. (Apple, Nvidia and Tesla together contributed a mere $6.7 billion.)

Investor and tech pundit Paul Kedrosky says that, as a percentage of gross domestic product, spending on AI infrastructure has already exceeded spending on telecom and internet infrastructure from the dot-com boom—and it’s still growing. He also argues that one explanation for the U.S. economy’s ongoing strength, despite tariffs, is that spending on IT infrastructure is so big that it’s acting as a sort of private-sector stimulus program.

Capex spending for AI contributed more to growth in the U.S. economy in the past two quarters than all of consumer spending, says Neil Dutta, head of economic research at Renaissance Macro Research, citing data from the Bureau of Economic Analysis.

Whether this investment is “propping up the U.S. economy,” as Brian Merchant argues, or, alternately, “crowding out other activities,” as Jason Furman has it, it’s a staggering amount of money being put toward a technology whose measurably productive uses are far from clear, and it’s hard to imagine it not affecting the economy in important ways. (At the very least, the increased demand represented by these data centers is likely to drive up electricity prices for Americans.) Two weeks ago Joe Weisenthal wondered if the poor reception of GPT-5 might intersect with this ambient unease about A.I. investment to produce an elite vibe shift:

What I’m interested in is whether there’s risk of elite perception changing on AI. […] At this point, there’s zero indication of any kind of AI slowdown. Certainly we’re not seeing anything of the sort that matters. And to some extent, it’s not clear that anyone’s opinion matters, unless they’re putting money to work (and the stock market’s holding up just fine). But if there is a growing sense that the pace of progress is slowing, and if all the new datacenters are perceived to be adding strain to the grid, it’ll be interesting to watch whether the mainstream view on all of this begins to tilt from enthusiasm to anxiety.

I’d say the vibe shift is already on us, as signaled by former Google C.E.O. Eric Schmidt, who took to the Times op-ed pages last week to castigate his tech-industry peers for, effectively, scaring the hoes:

It is uncertain how soon artificial general intelligence can be achieved. We worry that Silicon Valley has grown so enamored with accomplishing this goal that it’s alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists. […] There’s a widening schism between the technologists who feel the A.G.I. — a mantra for believers who see themselves on the cusp of the technology — and members of the general public who are skeptical about the hype and see A.I. as a nuisance in their daily lives. With some experts issuing dire warnings about A.I., the public is naturally even less enthused about the technology.

Schmidt–who was claiming “the contours of an AGI future are beginning to take shape” as recently as February–is the definition of an elite bellwether, an enormously respected figure not just on the Burning Man playa but in foreign policy circles, and his op-ed should be seen as more than the idle musings of a respected former businessman uneasy with his peers’ avidity. Henry Farrell argues that it’s best read as the swan song for a totalizing (if short-lived) geopolitical worldview and alliance Farrell calls “tech unilateralism,” built around a firm belief in imminent A.G.I. and now in its twilight:

A year ago, the US appeared to have a coherent world view that tied together a particular perspective on AI with a particular view of American power. […] If the U.S. could get to AGI first – and AGI was capable both of making itself better and advancing technological progress on a variety of fronts, then the losses in terms of leverage over China would be trivial. The U.S. would have created a scientific cornucopia, a self-licking ice cream cone of technological advantage that would secure its long term dominance.

But if the AGI bet is a bad one, then much of the rationale for this Consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to. […] a few short months ago, the United States believed that it could use its chokehold on advanced semiconductors to create a grand global scheme for controlling the development of AI, and secure its long term dominance. Now, that is in tatters, and not just because of Trump. The technological bets that underlay the grand strategy of unilateral domination look to be going bad. Even if the same people were in charge now as in January, they would have to deal with some extremely difficult questions.

So, just to recap, over the past month we’ve seen:

  • An extremely public and disappointing demonstration of diminishing returns to scale and a slowing pace of progress in frontier L.L.M.s, plus

  • Increasing and vocal concern among investors, economists, and financial commentators about the outsize role of A.I. investment in the wider economy, as well as

  • Prominent tech elites signaling an abandonment of the A.G.I. dream both as a marketing strategy and as a policy goal.

This is more than enough to constitute a “vibe shift” on its own, but to the list of suddenly relevant technical, financial, and elite-political concerns I’d add a long-brewing moral concern that’s helping drive and sustain the renewed A.I. backlash: A spate of troubling news stories about delusional and vulnerable people developing unhealthy attachments to, or even being “talked into” harmful behavior by, their chipper A.I. chatbot companions–culminating this week in an almost unreadably tragic and infuriating Times report about a depressed teenager whose eventual suicide was abetted by ChatGPT. Unlike the fantastical warnings of robotical paperclip omnicide–or even the more standard, if still somewhat abstract, fears about people learning how to build chemical weapons from a chatbot–these horrifying stories reveal a set of immediate, potentially mortal, inescapably grim dangers of widespread L.L.M. chatbots, now taking center stage just as whatever ultimate benefits might be said to justify that danger recede into a far more distant future.

But does this mean–as many recent headlines would have it–that the “A.I. bubble is popping”? The answer depends, annoyingly, on how you define “A.I.,” and how you define “bubble,” and, also, how you define “popping.”

Is the “A.I.” of “A.I. bubble” the entire field of machine learning? Only large language models? Only chatbots? The implementation thereof into pre-existing software? And is the “bubble” of “A.I. bubble” excessive equity valuations? Inflated expectations for or faith in L.L.M. performance? Excessive industry and management directives around A.I. use? Too many annoying guys on X.com talking too much about “A.I.”?

And, maybe most importantly, what would it mean for it to “pop”? A stock market crash and a recession? A few V.C.s losing their shirts? An “A.I. winter”? Google removing “A.I. Overview” and a reversion to the pre-L.L.M. web? Fewer annoying guys on X.com?

In the financial press, at least, the “A.I. bubble popping” means a sharp decline in equity valuations for A.I.-focused companies. Fair enough: It seems more likely than not that a financial bubble of some size will pop soon–even Sam Altman says so–though the extent of total damage is hard to gauge from here.

But I suspect that for many people the “A.I. bubble” is something larger and slightly harder to define than “equity valuations”–a way of articulating and describing the inflated rhetoric and excessive encroachment around A.I. that we’ve all experienced over the last few years. To say “the A.I. bubble is popping” is to say something like “all this L.L.M. bullshit is going away soon.”

Unfortunately, fat chance. Much of the Online A.I. Discourse, and especially the meta-discourse about the Online A.I. Discourse, which is of course what this newsletter specializes in, is occurring in the shadow of the COVID-era crypto mania–the eager promises of the blockchain, the suffocating coverage of “web3,” and the subsequent embarrassment brought about by the implosion of FTX. A few weeks ago I made a cameo appearance on the podcast “Hard Fork,” asking my friends Kevin Roose and Casey Newton, in my capacity as an A.I. critic, how their experience of covering the web3 era had changed their approach to writing about A.I. I was struck by this portion of Kevin’s reply:

I think the crypto era was, in some ways, a traumatic incident for the tech journalism community.

I think a lot of our peers — and maybe, to even a certain extent, you and I — felt like we were duped. It felt like we fell for something, like we’d wasted all of our time trying to understand and explain this technology, taking this stuff seriously, only to have it all come crashing down.

There is a pervasive suspicion among A.I. skeptics (many of whom were and are also crypto skeptics) that the A.I. boom is a redux of the crypto boom–which is to say, effectively, a grift forced on consumers and abetted by unwitting journalists and other eager marks. And this suspicion can be converted by the more cavalier A.I. haters on Bluesky into a kind of fantasy of righteous vindication: When the A.I. bubble pops, “A.I.” will be revealed as a scam and a waste of time, and “all come crashing down.” Financial ruin for bad actors, eternal shame for duped boosters, and a rollback of all stupid L.L.M. implementations online and at the workplace.

There are good reasons to say that the A.I. bubble and the crypto bubble are fundamentally different. (For starters: large language models, on a technical level, don’t have a speculative securities market literally built in.) But even if you believe that L.L.M.s are a scam and a waste of time, I’m not sure the crypto bubble provides cheering precedent. It’s true that the hype died down, and some people were prosecuted, and major food-and-beverage conglomerates mostly stopped launching memecoins on Twitter. But only two years after the industry’s supposed collapse it “accounted for nearly half of all corporate money” donated to SuperPACs in the 2024 election. If I can quote myself:

In retrospect, what died with FTX wasn’t crypto but “web3,” both the world-historically wack and grating “culture” and the inescapable speculators, investors, bubble-inflators, and book-talkers–and their marks and stenographers and bag-holders–who insisted well beyond reason on the cultural power and social value of the blockchain. Mashallah: Reasonable human beings need never again see a “cryptopunk” avatar, or read the word “fren,” and the people who tried to make this stuff happen have moved on to sports gambling, prediction markets, A.I., and other annoying schemes. But despite its comparative absence from the public consciousness–and despite S.E.C. chair Gary Gensler’s ongoing attempts to regulate crypto like any other security–crypto the asset class, and especially Bitcoin itself, have, dismayingly, survived the FTX debacle–and been hitting new record highs since the spring.

One way of thinking about the vibe shift described in the first part of this post is as the broader A.I. “discursive bubble” of hype and prophecy leaking air. (Dave Karpf calls this “the end of naive A.I. futurism.”) But as the crypto precedent suggests, you wouldn’t want to mistake it for the end of A.I., or of its deep-pocketed champions.

