Republicans want unregulated AI. What could go wrong?
A provision in Trump's big bill would eliminate state-level regulation for 10 years.
PN is possible thanks to paid subscribers. If you appreciate our fiercely independent coverage of American politics, please support us.
The great artificial intelligence bait-and-switch is almost complete.
Here’s how it worked. First, industry leaders including Sam Altman, the head of OpenAI, went before Congress in 2023 and expressed their deep concern that the amazing tools they were developing might someday cause extraordinary harm, so there really ought to be some federal regulation of the industry. Senators were charmed and praised Altman for his openness.
Even Elon Musk advocated regulation (before his embrace of Donald Trump).
That regulation never came to pass, and perhaps the tech leaders knew it wouldn’t. Congress’s bias, after all, is always toward inaction rather than action, particularly when members don’t understand the technology at issue.
Fast forward to today, and with fears that China might best us in the battle for AI dominance, Congress has done a 180-degree turn, and so have the CEOs whose future wealth and power depend on AI. Now they’re saying: You can’t possibly regulate us, because if you do, it will destroy the glorious AI future we’re bringing you. And not only that, what if the Chicoms beat us to the punch!
Tucked into the House’s gigantic budget bill, among tax giveaways and cuts to Medicaid, is a measure forbidding the states from regulating artificial intelligence for a period of 10 years.
“We have to be careful not to have 50 different states regulating AI, because it has national security implications, right?” said Speaker of the House Mike Johnson.
Many states have already passed laws regulating AI, most of which cover specific uses to which the technology is put. A California law, for instance, requires that certain kinds of AI-generated content contain detection tools that enable it to be identified as an AI creation. An Illinois law prohibits employers from using AI in ways that result in discrimination against job applicants. A Utah law requires companies to disclose to consumers that they are interacting with an AI system. All could be nullified if the Republican budget is passed, as could many of the hundreds of bills now being considered in state legislatures around the country.
Though both Republican and Democratic state officials have supported state laws to regulate AI, it is Republicans in Congress who are pushing the moratorium. The effort is being led by Ted Cruz of Texas, who in the last year or so has become a full-time podcaster with a side gig as a senator. Cruz confronted a problem after the bill passed the House: the Byrd Rule, which requires that only measures with a direct effect on the budget can be included in a reconciliation bill, the kind of legislation that can pass the Senate with a simple majority.
Cruz believes he solved that problem by changing the language: rather than an outright ban on state regulation, the federal government would withhold broadband expansion money, which every state wants, from any state that regulates AI.
“All of this busybody bureaucracy, whether Biden’s industrial policy on chip exports or industry and regulator-approved ‘guidance’ documents, is a wolf in sheep’s clothing,” says Cruz. “To lead in AI, the US cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption.”
Mark that word: “innovation.” It’s what you often hear from businesses that want complete freedom to do dangerous things without pesky oversight or restrictions from the government. It’s the same thing Wall Street said in the years leading up to the crash of 2008: With all the creative new financial instruments they had devised, we had truly entered a golden age of innovation. Until the global economy collapsed.
Cruz’s measure has a number of Republican critics in the Senate, including Marsha Blackburn of Tennessee and Josh Hawley of Missouri. But there’s no telling whether they’d be willing to bring down the entire budget, the most significant piece of legislation of Trump’s second term, over this issue.
The apocalypse isn’t what we should be afraid of
The New York Times recently published a horrifying story about people who have been convinced by ChatGPT that they are essentially living in The Matrix, leading them to mental health crises. At one point, the chatbot told a man that he could jump off the roof of his building and fly, if only he believed.
There isn’t a straightforward regulatory answer to the challenges that kind of story presents. But given how quickly AI is changing, and how readily tech companies push out new tools before they understand what those tools are capable of or what the unintended consequences might be, the idea of a 10-year moratorium on regulation seems crazy. Even within the tech industry you can find a wide range of beliefs about what the technology will look like a decade from now, but almost no one thinks it won’t be dramatically different in ways that are hard to predict.
And the attitude in Silicon Valley toward AI is not exactly one of restraint and thoughtfulness. AI companies insist that if investors pour hundreds of billions of dollars into this nascent industry, almost infinite riches will be created. Meta just announced that it will spend $14 billion on a new research lab to pursue “superintelligence”; having sunk $46 billion into the laughingstock that is the Metaverse, Mark Zuckerberg is clearly desperate not to get left behind. We can all trust him to make sure no unforeseen harms come to pass in his relentless pursuit of dominance over our mindspace, right?
A note from Aaron: Enjoying this piece from Paul? Then please sign up to support our work. Public Notice is 100 percent reader-funded.
On one hand, the tech companies tell us that the technology is evolving with stunning speed, so we don’t know what it will look like a year or even a few months from now. On the other hand, they say that we shouldn’t put up any legal guardrails to keep it from doing extraordinary harm.
But we’re not talking about rogue AIs achieving sentience, then deciding to kill us all. Without going too deeply into the debate about whether that’s a possibility, it certainly isn’t going to happen any time soon, if ever. Instead, the threat we face is less existential but more immediate. AI is going to be an increasing presence in our lives, and in part that presence will take the form of a relentless accumulation of slop, scams, job displacement, and the general degradation of everything we do online.
It’s already happening. AI is accelerating what Cory Doctorow calls “enshittification,” the process by which tech products get worse as the companies that make them exploit their monopolistic position to suck up more and more money. For instance, you’ve no doubt noticed how awful Google search has become lately, as you have to wade through a pile of AI-generated junk sites to find the thing you’re looking for. As much as you hate it, it’s making Google more money, because the longer it takes you to complete your search, the more ads the company can shove in front of your eyeballs.
That’s the kind of “innovation” tech companies want to be free to pursue without restraint, along with violating your privacy in almost every way imaginable.
That isn’t to say we should be confident that state legislatures will devise smart and informed regulations. But that at least is a democratic process, one subject to amendment and revision if things go wrong. Given the tech companies’ record up until now, believing they will ever choose the public interest over their own wealth, power, and fantastical dreams of a post-human future seems rather naïve.
In a recently published study, the Pew Research Center compared how the general public views AI to what experts working in the field believe. Not surprisingly, the tech people are much more optimistic, while the public is much more wary.
Interestingly enough, majorities of both the public (58 percent) and AI experts (53 percent) said they were more worried that the federal government will not go far enough in regulating AI’s use than that it will go too far. But AI experts and the leadership of the companies producing it are not the same people. Those at the top of the tech pyramid want what they always have: maximum autonomy, free from oversight or restraint by government, to shape the world we live in however they please.
When they argue that a “patchwork” of regulation is inefficient, they aren’t necessarily wrong. But perhaps in this case, we can tolerate a little inefficiency when the alternative is enormous risk. That may be inimical to the “move fast and break things” mindset of the tech industry, but that’s precisely why we don’t want to leave all the decision-making in their hands.
We just saw with Elon Musk’s disastrous siege of the federal government what happens when you give too much power to an aggressive tech ignoramus with no concern for the damage he might do, and we’ll be dealing with the consequences for years if not decades. If the government is forced to step aside thanks to Trump’s big bill and simply puts its faith in the good sense and public-spiritedness of the Silicon Valley elite, America’s future will become even more enshittified.
That’s it for today
We’ll be back with more tomorrow. If you appreciate today’s newsletter, please support our work by signing up. Paid subscribers make Public Notice possible.
Thanks for reading, and for your support.