More stories

  • When the tech boys start asking for new regulations, you know something’s up | John Naughton

    Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.

    Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination.”

    To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help. Or both.

    As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”

    Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.

    Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and relationships with other humans.

    In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

    Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.

    What I’ve been reading

    Capitalist punishment
    Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.

    Founders keepers
    Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.

    Superstore me
    The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.

  • OpenAI CEO calls for laws to mitigate ‘risks of increasingly powerful’ AI

    The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said “regulation of AI is essential” as he testified in his first appearance in front of the US Congress.

    Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms.

    “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his prepared remarks.

    Altman suggested the US government might consider licensing and testing requirements for the development and release of AI models. He proposed establishing a set of safety standards and a specific test models would have to pass before they can be deployed, as well as allowing independent auditors to examine the models before they are launched. He also argued existing frameworks like Section 230, which releases platforms from liability for the content their users post, would not be the right way to regulate the system.

    “For a very new technology we need a new framework,” Altman said.

    Both Altman and Gary Marcus, an emeritus professor of psychology and neural science at New York University who also testified at the hearing, called for a new regulatory agency for the technology. AI is complicated and moving fast, Marcus argued, making “an agency whose full-time job” is to regulate it crucial.

    Throughout the hearing, senators drew parallels between social media and generative AI, and the lessons lawmakers had learned from the government’s failure to act on regulating social platforms.

    Yet the hearing was far less contentious than those at which the likes of the Meta CEO, Mark Zuckerberg, testified. Many lawmakers gave Altman credit for his calls for regulation and acknowledgment of the pitfalls of generative AI. Even Marcus, brought on to provide skepticism about the technology, called Altman’s testimony sincere.

    The hearing came as renowned and respected AI experts and ethicists, including former Google researchers Dr Timnit Gebru, who co-led the company’s ethical AI team, and Meredith Whittaker, have been sounding the alarm about the rapid adoption of generative AI, arguing the technology is over-hyped. “The idea that this is going to magically become a source of social good … is a fantasy used to market these programs,” Whittaker, now the president of secure messaging app Signal, recently said in an interview with Meet the Press Reports.

    Generative AI is a probability machine “designed to spit out things that seem plausible” based on “massive amounts of effectively surveillance data that has been scraped from the web”, she argued.

    Senators Josh Hawley and Richard Blumenthal said this hearing was just the first step in understanding the technology.

    Blumenthal said he recognized what he described as the “promises” of the technology, including “curing cancer, developing new understandings of physics and biology, or modeling climate and weather”.

    Potential risks Blumenthal said he was worried about include deepfakes, weaponized disinformation, housing discrimination, harassment of women and impersonation frauds.

    “For me, perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers,” he said.

    Altman said that while OpenAI was building tools that will one day “address some of humanity’s biggest challenges like climate change and curing cancer”, the current systems were not capable of doing these things yet.

    But he believes the benefits of the tools deployed so far “vastly outweigh the risks”, and said the company conducts extensive testing and implements safety and monitoring systems before releasing any new system.

    “OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives but also that it creates serious risks that we have to work together to manage,” Altman said.

    Altman said the technology will significantly affect the job market but he believes “there will be far greater jobs on the other side of this”.

    “The jobs will get better,” he said. “I think it’s important to think of GPT as a tool, not a creature … GPT-4 and tools like it are good at doing tasks, not jobs. GPT-4 will, I think, entirely automate away some jobs and it will create new ones that we believe will be much better.”

    Altman also said he was very concerned about the impact that large language model services will have on elections and misinformation, particularly ahead of the primaries.

    “There’s a lot that we can and do do,” Altman said in response to a question from Senator Amy Klobuchar about a tweet ChatGPT crafted that listed fake polling locations. “There are things that the model won’t do and there is monitoring. At scale … we can detect someone generating a lot of those [misinformation] tweets.”

    Altman didn’t have an answer yet for how content creators whose work is being used in AI-generated songs, articles or other works can be compensated, saying the company is engaged with artists and other entities on what that economic model could look like. When asked by Klobuchar how he plans to remedy threats to local news publications whose content is being scraped and used to train these models, Altman said he hopes the tool would help journalists, but that “if there are things that we can do to help local news, we’d certainly like to”.

    Touched upon but largely missing from the conversation was the potential danger of a small group of power players dominating the industry, a dynamic Whittaker has warned risks entrenching existing power dynamics.

    “There are only a handful of companies in the world that have the combination of data and infrastructural power to create what we’re calling AI from nose-to-tail,” she said in the Meet the Press interview. “We’re now in a position that this overhyped technology is being created, distributed and ultimately shaped to serve the economic interests of these same handful of actors.”

  • Breakfast with Chad: Techno-feudalism

  • Mind Blowing: The Startling Reality of Conscious Machines

  • Breakfast with Chad: Posthumanism

  • Breakfast with Chad: Who sabotaged the Nord Stream pipelines?