Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.
Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his could “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination.”
To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help. Or both.
As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”
Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.
Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and their relationships with other humans.
In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”
Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.
What I’ve been reading
Capitalist punishment
Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.
Founders keepers
Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.
Superstore me
The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.