    How Long Will A.I.’s ‘Slop’ Era Last?

    David Wallace-Wells

    Remember the season of A.I. doom? It wasn’t that long ago, in the spring of last year, that the new chatbots were trailed by various “godfathers” of the technology, warning of existential risk; by researchers suggesting a 10 percent chance of human extinction at the hands of robots; by executives speculating about future investment rounds tabulated in the trillions.

    Now the reckoning is happening on very different terms. In a note from Barclays, one analyst warned that today’s A.I. investments might be three times as large as expected returns, while another analyst, in several assessments published by Sequoia Capital, calculated that investment in A.I. was outpacing projected profits by at least several hundred billion dollars annually. (He called this “A.I.’s $600 billion question” and warned of “investment incineration.”) In a similarly bearish Goldman Sachs report, the firm’s head of global equity research estimated that the cost of A.I. infrastructure build-out over the next several years would reach $1 trillion. “Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed,” he noted. “The crucial question is: What $1 trillion problem will A.I. solve?”

    A decade ago, venture capital provided Americans the “millennial lifestyle subsidy”: investors keeping the price of Uber and DoorDash and dozens of other services artificially low for years. Today the same millennial might read about that trillion-dollar A.I. expenditure, more than the United States spends annually on its military, and think: What exactly is that money going toward? What is A.I. even for?

    One increasingly intuitive answer is “garbage.” The neuroscientist Erik Hoel has called it “A.I. pollution,” and the physicist Anthony Aguirre “something like noise” and “A.I.-generated dross.” More recently, it has inspired a more memorable neologism of revulsion, “A.I. slop”: often uncanny, frequently misleading material, now flooding web browsers and social-media platforms like spam in old inboxes. Years deep into national hysteria over the threat of internet misinformation pushed on us by bad actors, we’ve sleepwalked into a new internet in which meaningless, nonfactual slop is casually mass-produced and force-fed to us by A.I.

    When Thomas Crooks tried to assassinate Donald Trump, for instance, X’s A.I. sputtered out a whole string of cartoonishly false trending topics, including that it was Kamala Harris who had been shot. Where Google searches not long ago surfaced the very best results first, we can now find instead potentially plagiarized and often inaccurate paragraph summaries of answers to our queries — including, reportedly, that only 17 American presidents were white, that Barack Obama is a Muslim and that Andrew Johnson, who became president in 1865 and died in 1875, earned 13 college degrees between 1947 and 2012. We can also read that geologists advise eating at least one rock a day, that Elmer’s glue should be added to pizza sauce for thickening and that it’s completely chill to run with scissors.

    Sometimes, of course, you can get reliable information too; maybe even most of the time. But you can also get bad advice about A.D.H.D., about chemotherapy, about Ozempic — some potentially delicate subjects. And while the internet was never perfectly trustworthy, one epoch-defining breakthrough of Google was that it got us pretty close. Now the company’s chief executive acknowledges that hallucinations are “inherent” to the technology it has celebrated as a kind of successor to ranked-order search results, which are now often found far below not just the A.I. summary but a whole stack of “sponsored” results as well.

    But not all A.I.s are large language models like ChatGPT, Gemini or Claude, each of which was trained on gobsmackingly large quantities of text to better simulate interaction with humans and bring them closer to approximations of humanlike thinking, at least in theory. Peer away from those chatbots and you can see a very different story, with different robot protagonists: machine-learning tools trained much more narrowly and focused less on producing a conversational, natural-language interface than on processing data dumps much more efficiently than human minds ever could. These products are less eerie, which means they have generated little existential angst. They are also — for now, at least — much more reliable and productive.

    This month, KoBold Metals announced the largest discovery of new copper deposits in a decade — a green-energy gold mine, so to speak, delivered with the help of its own proprietary A.I., which integrated information about subatomic particles detected underground with century-old mining reports and radar imagery to make predictions about where minerals critical for the green transition might be found. Machine learning may help make our electricity grid as much as 40 percent more efficient at delivering power than it is today, when many of its routing decisions are made by individual humans on the basis of experience and intuition. (At some crucial points, A.I. has cut decision time to one minute from 10.) It has already helped drive down the cost and drive up the performance of next-gen batteries and solar photovoltaic cells, whose performance can also be improved, even after the panels have been manufactured and installed on your roof, by as much as 25 percent. Our models of ice-sheet melt and rainforest degradation are much sharper now, too.

    When in 2021 DeepMind revealed that it had effectively solved the protein-folding problem, making the three-dimensional structure of biological building blocks easily predictable for researchers for the first time, the breakthrough made global news, even if the headlines flew over the heads of most readers, who might not have known how significant a roadblock that had been in biomedical research. A few years later, A.I. is designing new proteins, rapidly accelerating drug discovery and speeding up clinical trials testing new medicines and therapies.

    A year ago, as normies were only just toying around with chatbots, there were already dozens of drugs developed in part by A.I. proceeding in those trials. The number is much larger today, whether you choose to be impressed by the reported performance of ChatGPT on the LSAT or frustrated that Claude cannot solve logic problems intuitive to a kindergartner (or even reliably tell you whether 9.11 is greater than 9.9, as a Windows 95 calculator could).

    And so if one account of this experience is that we are drowning in A.I. “pollution,” another is that plenty of good news from A.I. is being drowned out by it, too. Perhaps a more optimistic outlook can be drawn by analogy to what economists call the “environmental Kuznets curve,” which suggests that, as nations develop, they tend to first pollute a lot more and then, over time, as they grow richer, they ultimately pollute less.

    Even in describing regular old pollution, this framework has its shortcomings, especially because it treats as automatic the eventual progress that has always required tooth-and-nail fights against some very stubborn bad actors. As it happens, A.I. is generating an awful lot of genuine pollution, too — both Google and Microsoft, which each pledged in 2019 to reach zero emissions by 2030, have instead expanded their carbon footprints by nearly 50 percent in the interim. The same political concerns about the ownership and control of A.I. tools, and how they might be turned toward harmful or exploitative ends, remain largely unaddressed. And there are reasons to suspect that “hallucination” may be an unsolvable problem for generative A.I. as we know it.

    Which all suggests that the answer to the many problems of generative A.I. may not be what boosters call “scaling,” especially given the possibility that new training runs deliver some very expensive diminishing returns. It may instead be a matter of weeding the bad out from the good, in the name of intellectual hygiene. Hoel, the neuroscientist, has called for “the equivalent of a Clean Air Act: a Clean Internet Act,” including the unscrubbable watermarking of anything produced by A.I. Actually scrubbing the slop would be even better.
