More stories

  • Top tech firms commit to AI safeguards amid fears over pace of change

    Top players in the development of artificial intelligence, including Amazon, Google, Meta, Microsoft and OpenAI, have agreed to new safeguards for the fast-moving technology, Joe Biden announced on Friday.

    Among the guidelines brokered by the Biden administration are watermarks for AI content, to make it easier to identify, and third-party testing of the technology to try to spot dangerous flaws.

    Speaking at the White House, Biden said the companies’ commitments were “real and concrete” and would help “develop safe, secure and trustworthy” technologies that benefit society and uphold values.

    “Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs in industries,” he said. “These commitments are a promising step, but we have a lot more work to do together.”

    The president said AI brings “incredible opportunities”, as well as risks to society and the economy. The agreement, he said, would underscore three fundamental principles: safety, security and trust.

    The White House said seven US companies had agreed to the voluntary commitments, which are meant to ensure their AI products are safe before they are released.

    The announcement comes as critics charge that AI’s breakneck expansion threatens to allow real damage to occur before laws catch up. The voluntary commitments are not legally binding, but may serve as a stopgap while more comprehensive action is developed.

    A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

    The tech companies agreed to eight measures:
    Using watermarking on audio and visual content to help identify content generated by AI.
    Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
    Sharing trust and safety information with the government and other companies.
    Investing in cybersecurity measures.
    Encouraging third parties to uncover security vulnerabilities.
    Reporting societal risks such as inappropriate uses and bias.
    Prioritizing research on AI’s societal risks.
    Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
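    The first measure, watermarking AI-generated media, is the most concretely technical item on the list. As a toy illustration only (none of the companies has published its scheme; practical provenance marks are statistical or metadata-based, and the 8-bit signature here is invented for the example), a least-significant-bit mark shows the basic embed-and-detect idea:

```python
# Toy least-significant-bit (LSB) watermark: purely illustrative, not any
# vendor's actual scheme. The 8-bit signature below is an invented example.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]

def embed(pixels, mark=WATERMARK):
    """Overwrite the lowest bit of the first len(mark) pixels with the mark."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the mark bit
    return out

def detect(pixels, mark=WATERMARK):
    """True if the first len(mark) pixel LSBs reproduce the signature."""
    return [p & 1 for p in pixels[:len(mark)]] == mark

image = [200, 13, 255, 90, 41, 77, 120, 65, 10]  # toy grayscale pixel values
marked = embed(image)
assert detect(marked) and not detect(image)
```

    A mark like this changes each pixel by at most one intensity level, so it is invisible to the eye – but it is also destroyed by compression, resizing or screenshots, which is why robust watermarking of AI content remains an open research problem rather than a solved one.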
    The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

    Some advocates for AI regulation said Biden’s move is a start, but that more needs to be done to hold the companies and their products accountable.

    “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the non-profit Common Sense Media.

    Some critics have argued that the guidelines, detailed only at a high level in a fact sheet the White House released, do not go far enough in addressing concerns over how AI could affect society, and give the administration few, if any, remedies for enforcement if the companies do not abide by them.

    “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models,” said Amba Kak, executive director of the research group the AI Now Institute.

    “A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” Kak said. “What this list covers is a set of problems that are comfortable to business as usual, but we also need to be looking at what’s not on the list – things like competition concerns, discriminatory impacts of these systems. The companies have said they’ll ‘research’ privacy and bias, but we already have robust bodies of research on both – what we need is accountability.”

    Voluntary guidelines amount to little more than self-regulation, said Caitriona Fitzgerald, deputy director at the non-profit research group the Electronic Privacy Information Center (Epic). A similar approach was taken with social media platforms, she said, and it didn’t work.
    “It’s internal compliance checking, and it’s similar to what we’ve seen in FTC consent orders from the past, when they required Facebook to do internal privacy impact assessments and they just became a box-checking exercise.”

    The Senate majority leader, Chuck Schumer, has said he will introduce legislation to regulate AI. He has held a number of briefings with government officials to educate senators about an issue that has attracted bipartisan interest.

    A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, the vice-president, Kamala Harris, and other officials.

    Senator Mark Warner said the guidelines released on Friday are a start, but that “we need more than industry commitments”.

    “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse,” Warner said in a statement.

    But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.

    The software trade group BSA, which includes Microsoft as a member, said on Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.

    “Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.

    Several countries have been looking at ways to regulate AI, including European Union lawmakers, who have been negotiating sweeping AI rules for the 27-country bloc. The details of the European legislation are still being hashed out, but the EU AI Act contains robust regulations that would create significant consumer protections against the overreach, privacy violations and biases of certain types of high-risk AI models.

    Meanwhile, conversations in the US remain in the early stages. Fitzgerald, of Epic, said that while the voluntary guidelines are just one in a series the White House has released on AI, she worries they might cause Congress to slow its push to create regulations. “We need the rules of the road before it gets too big to regulate,” she said.

    The UN secretary general, António Guterres, recently said the United Nations was “the ideal place” to adopt global standards, and appointed a board that will report back on options for global AI governance by the end of the year.

    The UN chief also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

    The White House said on Friday that it had already consulted on the voluntary commitments with a number of countries.

    Associated Press contributed to this story

  • Oppenheimer biographer supports US bill to bar use of AI in nuclear launches

    A biographer whose Pulitzer prize-winning book inspired the new movie Oppenheimer has expressed support for a US senator’s attempt to bar the use of artificial intelligence in nuclear weapons launches.

    “Humans must always maintain sole control over nuclear weapons,” Kai Bird, author of American Prometheus, said in a statement reported by Politico.

    “This technology is too dangerous to gamble with. This bill will send a powerful signal to the world that the United States will never take the reckless step of automating our nuclear command and control.”

    In Washington on Thursday, Bird met Ed Markey, the Democratic Massachusetts senator who is attempting to add the AI-nuclear provision to a major defense spending bill. Markey, Politico said, was a friend of Bird’s co-author, the late Tufts University professor Martin J Sherwin.

    A spokesperson for the senator told Politico that Markey and Bird “shared their mutual concerns over the proliferation of artificial intelligence in national security and defense without guardrails, and the risks of using nuclear weapons in south Asia and elsewhere.

    “They also discussed ways to increase awareness of nuclear issues among the younger set.”

    J Robert Oppenheimer was the driving force behind the US development of the atomic bomb at the end of the second world war.

    Bird and Sherwin’s book is now the inspiration for Oppenheimer, Christopher Nolan’s summer blockbuster starring Cillian Murphy in the title role. The movie opens in the US on Friday – in competition with Barbie, Greta Gerwig’s film about the popular children’s doll.

    On Friday, Nolan told the Guardian: “International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs.

    “It’s reassuringly difficult to make nuclear weapons, and so it’s relatively easy to spot when a country is doing that. I don’t believe any of that applies to AI.”

    Nolan also noted “very strong parallels” between Oppenheimer and AI experts now calling for such technology to be controlled.

    Nolan said he had “been interested to talk to some of the leading researchers in the AI field, and hear from them that they view this as their ‘Oppenheimer moment’. And they’re clearly looking to his story for some kind of guidance … as a cautionary tale in terms of what it says about the responsibility of somebody who’s putting this technology to the world, and what their responsibilities would be in terms of unintended consequences.”

    Bird and Sherwin’s biography, subtitled The Triumph and Tragedy of J Robert Oppenheimer, was published in 2008.

    Reviewing for the Guardian, James Buchan saluted the authors’ presentation of “the cocktails and wire-taps and love affairs of Oppenheimer’s existence, his looks and conversation, the way he smoked the cigarettes and pipe that killed him, his famous pork-pie hat and splayed walk, and all the tics and affectations that his students imitated and the patriots and military men despised.

    “It is as if these authors had gone back to James Boswell, who said of Dr Johnson: ‘Everything relative to so great a man is worth observing.’”

  • ‘An evolution in propaganda’: a digital expert on AI influence in elections

    Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.

    Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.

    The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.

    Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?

    When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago, there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate, perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.

    There have been two major developments in the last six months. First is the rise of ChatGPT, which generates text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools. At the same time, text-to-still-image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.

    Why did the political use of deepfakes not materialize?

    The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it, factcheck it and respond to it. You might call it a natural immune response.

    Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.

    How do you think large language models can change political propaganda?

    I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content. It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain the differentiation between the tactic a new technology enables and the claim that it is going to swing an election.

    As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots. You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time-consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.

    And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.

    That seems tied to another aspect you’ve written about: that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.

    It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.

    When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with them. The public acclimated to the idea that Photoshop existed and that not everything you see with your eyes is necessarily as it seems – the idea that the woman you see on the magazine cover probably does not actually look like that.
    Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.

    Now anybody can make it look like the pope is wearing Balenciaga.

    Exactly.

    In the US, it seems like meaningful federal regulation is pretty far away, if it’s going to come at all. Absent that, what are some of the short-term ways to mitigate these risks?

    First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.

    But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify if automation is in play.

    On the image front, social media platforms, as well as generative AI companies, are starting to come together to try to determine what kind of watermarking might be useful, so that platforms and others can determine computationally whether an image is generated.

    Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends. How effective do you see those policies being?

    It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology, and you can argue you’ve made a values-based decision to not allow your products to generate particular types of content. On the flip side, though, there are models that are open source and anyone can go and get access to them. Some of the things being done with some of the open-source models and image generation are deeply harmful, but once the model is open sourced, the ability to control its use is much more limited.

    And it’s a very big debate right now in the field. You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.

    What’s the media or the public getting wrong about AI and disinformation?

    One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities – where people are going to continue to hold a belief in something they’ve seen on the internet as long as it tells them what they want. Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.
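    The automation pattern DiResta describes – thousands of accounts posting the exact same thing at the exact same time – maps onto a simple detection heuristic: group posts by identical text within a short time window and flag any message pushed by many accounts at once. The sketch below is illustrative only; the field names and thresholds are assumptions for the example, not any platform’s actual rules:

```python
# Illustrative sketch of the copy-paste botnet heuristic DiResta describes:
# bucket posts by (identical text, short time window) and flag messages
# posted verbatim by many distinct accounts. Thresholds are assumptions.
from collections import defaultdict

def flag_coordinated(posts, window_secs=60, min_accounts=5):
    """Return message texts posted verbatim by many accounts near-simultaneously."""
    buckets = defaultdict(set)  # (text, time bucket) -> set of posting accounts
    for p in posts:
        key = (p["text"], p["ts"] // window_secs)
        buckets[key].add(p["account"])
    return {text for (text, _), accts in buckets.items() if len(accts) >= min_accounts}

# Six bot accounts push the same line within seconds; one human posts normally.
posts = [{"account": f"bot{i}", "text": "Candidate X is surging!", "ts": 1000 + i}
         for i in range(6)]
posts.append({"account": "alice", "text": "Nice weather today", "ts": 1003})
flag_coordinated(posts)  # -> {"Candidate X is surging!"}
```

    Her point is that generative AI defeats exactly this check: once every fake account can post a uniquely worded variant of the same message, exact-match grouping finds nothing, so platforms have to detect the automation itself rather than repeated text.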

  • Disinformation reimagined: how AI could erode democracy in the 2024 US elections

    A banal dystopia where manipulative content is so cheap to make and so easy to produce on a massive scale that it becomes ubiquitous: that’s the political future digital experts are worried about in the age of generative artificial intelligence (AI).

    In the run-up to the 2016 presidential election, social media platforms were vectors for misinformation as far-right activists, foreign influence campaigns and fake news sites worked to spread false information and sharpen divisions. Four years later, the 2020 election was overrun with conspiracy theories and baseless claims about voter fraud that were amplified to millions, fueling an anti-democratic movement to overturn the election.

    Now, as the 2024 presidential election comes into view, experts warn that advances in AI have the potential to take the disinformation tactics of the past and breathe new life into them.

    AI-generated disinformation not only threatens to deceive audiences, but also to erode an already embattled information ecosystem by flooding it with inaccuracies and deceptions, experts say.

    “Degrees of trust will go down, the job of journalists and others who are trying to disseminate actual information will become harder,” said Ben Winters, a senior counsel at the Electronic Privacy Information Center, a privacy research non-profit. “It will have no positive effects on the information ecosystem.”

    New tools for old tactics

    Artificial intelligence tools that can create photorealistic images, mimic voice audio and write convincingly human text have surged in use this year, as companies such as OpenAI have released their products on the mass market. The technology, which has already threatened to upend numerous industries and exacerbate existing inequalities, is increasingly being employed to create political content.

    In past months, an AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market. AI audio parodies of US presidents playing video games became a viral trend. AI-generated images that appeared to show Donald Trump fighting off police officers trying to arrest him circulated widely on social media platforms. The Republican National Committee released an entirely AI-generated ad showing images of various imagined disasters that would take place if Biden were re-elected, while the American Association of Political Consultants warned that video deepfakes present a “threat to democracy”.

    In some ways, these images and ads are not so different from the manipulated images and video, misleading messages and robocalls that have been a feature of society for years. But disinformation campaigns formerly faced a range of logistical hurdles – creating individualized messages for social media was incredibly time-consuming, as was Photoshopping images and editing videos.

    Now, though, generative AI has made the creation of such content accessible to anyone with even basic digital skills, amid limited guardrails or effective regulation to curtail it. The potential effect, experts warn, is a sort of democratization and acceleration of propaganda right at a time when several countries enter major election years.

    AI lowers the bar for disinformation

    The potential harms of AI on elections can read like a greatest hits of concerns from past decades of election interference. Social media bots that pretend to be real voters, manipulated videos or images, and even deceptive robocalls are all easier to produce and harder to detect with the help of AI tools.

    There are also new opportunities for foreign countries to attempt to influence US elections or undermine their integrity, as federal officials have long warned Russia and China are working to do. Language barriers to creating deceptive content are eroding, and telltale signs of scammers or disinformation campaigns – repetitive phrasing or strange word choices – are being replaced with more believable texts.

    “If you’re sitting in a troll farm in a foreign country, you no longer need to be fluent to produce a fluent-sounding article in the language of your target audience,” said Josh Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology. “You can just have a language model spit out an article with the grammar and vocabulary of a fluent speaker.”

    AI technology may also intensify voter suppression campaigns targeting marginalized communities. Two far-right activists admitted last year to making more than 67,000 robocalls targeting Black voters in the midwest with election misinformation, and experts such as Winters note that AI could hypothetically be used to replicate such a campaign on a greater scale with more personalized information. Audio that mimics elected leaders or trusted personalities could tell select groups of voters misleading information about polls and voting, or cause general confusion.

    Generating letter-writing campaigns or fake engagement could also create a sort of false constituency, making it unclear how voters are actually responding to issues. As part of a research experiment published earlier this year, the Cornell University professors Sarah Kreps and Doug Kriner sent tens of thousands of emails to more than 7,000 state legislators across the country. The emails purported to be from concerned voters but were split between AI-generated letters and ones written by a human. The responses were virtually the same, with human-written emails receiving only a 2% higher rate of reply than the AI-generated ones.

    Campaigns test the waters

    Campaigns have already begun dabbling in using AI-generated content for political purposes.
    After Florida’s governor, Ron DeSantis, announced his candidacy during a Twitter live stream in May, Donald Trump mocked his opponent with a parody video of the announcement that featured the AI-generated voices of DeSantis, Elon Musk and Adolf Hitler. Last month, the DeSantis campaign shared AI-generated images of Trump embracing and kissing Anthony Fauci.

    During the 2016 and 2020 elections, Trump’s campaign leaned heavily on memes and videos made by his supporters – including deceptively edited videos that made it seem as if Biden was slurring his words or saying that he shouldn’t be president. The AI version of that strategy is creeping in, election observers warn, with Trump sharing a deepfake video in May of the CNN host Anderson Cooper telling viewers that they had just watched “Trump ripping us a new asshole here on CNN’s live presidential town hall”.

    With about 16 months to go until the presidential election, and widespread generative AI use still in its early days, it’s an open question what role artificial intelligence will play in the vote. The creation of misleading AI-generated content alone doesn’t mean that it will have an effect on an election, researchers say, and measuring the impact of disinformation campaigns is a notoriously difficult task. It’s one thing to monitor the engagement of fake materials, but another to gauge the secondary effects of polluting the information ecosystem to the point where people generally distrust any information they consume online.

    But there are concerning signs. Just as the use of generative AI is increasing, many of the social media platforms that bad actors rely on to spread disinformation have begun rolling back some of their content moderation measures – YouTube reversed its election integrity policy, Instagram allowed the anti-vaccine conspiracy theorist Robert F Kennedy Jr back on its platform, and Twitter’s head of content moderation left the company in June amid a general fall in standards under Elon Musk.

    It remains to be seen how effective media literacy and traditional means of factchecking can be in pushing back against a deluge of misleading text and images, researchers say, as the potential scale of generated content represents a new challenge.

    “AI-generated images and videos can be created much more quickly than factcheckers can review and debunk them,” Goldstein said, adding that hype over AI can also corrode trust by making the public believe anything could be artificially generated.

    Some generative AI services, including ChatGPT, do have policies and safeguards against generating misinformation, and in certain cases are able to block the service from being used for that purpose. But it’s still unclear how effective those are, and several open-source models lack such policies and features.

    “There’s not really going to be sufficient control of dissemination,” Winters said. “There’s no shortage of robocallers, robo-emailers or texters, and mass email platforms. There’s nothing limiting the use of those.”

  • In America’s ‘Voltage Valley’, hopes of car-making revival turn sour

    When Lordstown Motors, an electric vehicle (EV) manufacturer in Ohio’s Mahoning Valley, declared bankruptcy last month, it was the latest blow to a region that has seen decades of extravagant promises fail to deliver.

    The 5,000 new jobs executives vowed to create in 2020 generated fresh hope for the shuttered General Motors Lordstown plant, which once functioned as an economic engine for the area and a critical piece of the nation’s industrial heartland.

    Local leaders rebranded Mahoning Valley “Voltage Valley”, claiming the EV revolution would revive the region’s fortunes. Donald Trump, then the president, trumpeted a major victory. “The area was devastated when General Motors moved out,” he said. “It’s incredible what’s happened in the area. It’s booming now. It’s absolutely booming.”

    But Lordstown Motors’ failure, and its decision to sue its major investor, the electronics giant Foxconn, over a soured investment partnership, have dented Voltage Valley’s fortunes. Years of similar failures have given some residents here “savior fatigue”; many have largely given up hope that the Lordstown plant can ever be fully rebooted.

    “I really want the plant to do well and succeed, but we’ve experienced so many ‘Hey, we’re gonna come in and save the day’ promises that never happen,” said David Green, the regional director of the United Auto Workers (UAW), who started working at Lordstown in 1995.

    Green said he was especially skeptical of Foxconn. The company has put up nets to prevent workers from killing themselves at one of its Chinese plants, he said, and has failed to live up to other promises of job creation across the US: “This is the savior company? I don’t have warm feelings toward them.”

    Still, some local leaders are optimistic. They insist Foxconn, which is attempting to scale up autonomous tractor production at Lordstown and lure a different EV startup, will save the plant.

    “I think Foxconn will be successful,” said Lordstown’s mayor, Arno Hill. “They are fairly confident they are going to be here for a while.”

    Hill and other leaders said Lordstown Motors was not the only new employer in town. GM partnered with LG Corporation to build an EV battery plant that employs about 1,300 people next door to Lordstown, and a new TJX warehouse has hired about 1,000 workers. A new industrial park is planned in the region, as are two gas plants.

    The feelings of those not in the business of promoting the region are more nuanced. In nearby Warren, where many Lordstown employees have lived since GM originally opened the plant in 1966, mentions of Foxconn saving Lordstown or the Mahoning Valley drew a mix of eye-rolls, scoffs and blank looks from residents in the city’s downtown.

    “There are words, but I have seen no action,” said Leslie Dunlap, owner of the FattyCakes Soap Company and several other Warren businesses, as she worked at a farmers’ market. “People here have lost faith in big companies.”

    Warren’s fortune is tied to that of the plant – when the plant’s employment numbers dipped, “people stopped spending money here, started selling houses, walking away from properties,” Dunlap said.

    Residents on a recent Tuesday afternoon said they were “cautiously optimistic” about the region’s economic future. Warren’s downtown shopfronts are full. But the city also bears the scars of rust belt decline, with vacant industrial buildings and blighted neighborhoods.

    A few miles down the road at Lordstown, the lots around the well-kept offices where a few hundred Foxconn employees work are repaved. But the rest of the 6.2m sq ft factory looks like a depressing relic. Weeds sprout from the cracked pavement of the vast, unused blacktop lots surrounding it.

    Lordstown employed 11,000 people at its peak, but between the mid-1990s and 2016 the workforce in Trumbull county, where Lordstown sits, dropped by 63%.
Just a few thousand remained when Lordstown closed in 2018.

Some still hold a shred of hope that GM will repurchase the plant – it is next door to an EV battery factory, and batteries are expensive to ship. It makes sense, said Josh Ayers, the bargaining chairman for UAW 1112.

“I have a pit in my stomach every time I drive past Lordstown,” he said. “Foxconn is in there but I don’t see a future for them.”

Regardless of the plant’s potential, local labor leaders say they have largely moved on and trained their attention on GM’s nearby Ultium electric-vehicle plant. A small explosion, fires and chemical leaks at the plant have recently injured employees, who work there for as little as $16 per hour – less than the amount the local Waffle House offers, and low enough that some employees need government assistance, Ayers noted.

Some local leaders tout the region’s job openings. Ayers said they exist because turnover is high. “People used to run through walls to work at Lordstown,” he said. “Nobody is running through walls to work at Ultium.”

It is not the first time that a politician’s promises have left locals disappointed.

‘This plant is about to shift into high gear’

As the Great Recession battered the nation in late 2009, Barack Obama traveled to General Motors’ mammoth Lordstown plant to promise laid-off autoworkers a brighter future.

Obama’s 2009 GM bailout became a lifeline: ramping up production of the Chevrolet Cobalt would bring back over 1,000 workers, the president told the anxious crowd.

“Because of the steps we have taken, this plant is about to shift into high gear,” Obama bellowed over loud cheers. The plan soon fizzled, however, and by 2019 GM had shed the plant’s workforce and sold it to Lordstown Motors.

In 2014 Obama declared Youngstown the center for 3D-printing technology, though the industry has brought few jobs.
The failure to revive the area, in part, helped Trump defeat Hillary Clinton in 2016.

Mahoning Valley was once steel country, and residents here trace their economic troubles back to 1977’s Black Monday, when two steel plants abruptly closed and 5,000 workers lost their jobs. Since then, the promises to pull the region out of its slow tailspin have been plentiful.

An eccentric businessman from nearby Youngstown briefly revived the Avanti car company until slow sales and poor management killed it by 1990, leaving its workforce jobless.

A glass company that recently received tax incentives to build a large plant “never made one fuckin’ bottle”, UAW’s Green said.

Perhaps most infamously, Trump, in a July 2017 Youngstown speech, promised residents auto jobs “are all coming back. Don’t move, don’t sell your house.” A year later, GM idled the plant and, as residents here are keen to highlight, it did so after receiving billions in taxpayer assistance, including $60m in state subsidies in exchange for a promise to keep the plant open through 2027.

In 2019, Trump tweeted that he had been “working nicely with GM to get” the Lordstown deal done. But Lordstown Motors floundered almost from the start, suffering from scandals over inflated sales figures and battery range. By 2022, a new savior arrived: Foxconn. It agreed to buy the plant and a 55% stake in Lordstown Motors for $230m. That relationship soured, and Foxconn quit making the payments this year. The deal collapsed.

In a sign of how little impact this “booming” transformation has had, the name “Foxconn” hardly registered with some Warren residents. They squinted as they tried to recall where they had heard it.
Others pointed to different ventures they felt could have more impact – a proposed science-fiction museum and businesses at the farmers’ market.

Outside the county courthouse, an employee who did not want their name printed said they knew of the Lordstown Motors collapse, but it was not top of mind for anyone they knew: “Lordstown is not where the money is. I don’t know where it’s at.”

‘Foxconn didn’t come through’

About 450 miles from Lordstown, in Mount Pleasant, Wisconsin, Foxconn in 2017 promised to build a hi-tech factory campus that would employ 13,000 people in exchange for $4.5bn in tax incentives. Residents were forced from their homes to make way for the factory, but very little was built.

Kelly Gallaher is among those who fought the project, and she sees a replay in Lordstown as Foxconn promises big things while its deal falls apart. Mount Pleasant residents tried to warn Lordstown on social media when Foxconn showed interest in the plant, she said.

“Lordstown needed a savior angel, and they weren’t in a position with any other backup choices. But it isn’t a surprise that Foxconn didn’t come through,” Gallaher said.

Guy Coviello, the chief executive of the Youngstown/Warren Chamber of Commerce, dismissed such concerns. Foxconn is not asking for incentives or making big promises, he said, claiming that the problems in Wisconsin were largely “political ballyhooing”.

The idea that autonomous tractors will save Lordstown is not landing with many residents. But one thing everyone around Lordstown seems to agree on is that the region’s manufacturing heyday is never returning, if only because automation has made it impossible. Manufacturers simply don’t need the labor force they once did.

Mahoning still has much to offer. Its population loss is stabilizing, the cost of living is low, it is near other major population centers and it offers a huge workforce, Ayers said.

Those selling points may bring more investment.
But after so many broken promises, any floated idea is met with skepticism. Reflecting on Obama’s speech, Green said the president’s reassurance was a “great feeling that day”.

“What a stark contrast to 10 years later.”


    Republicans attack FTC chair and big tech critic Lina Khan at House hearing

    Lina Khan, the chair of the Federal Trade Commission, faced a grueling four hours of questioning during a House judiciary committee oversight hearing on Thursday.

Republicans criticized Khan – an outspoken critic of big tech – for “mismanagement” and for “politicizing” legal action against large companies such as Twitter and Google as head of the powerful antitrust agency.

In his opening statement, the committee chair Jim Jordan, an Ohio Republican, said Khan has given herself and the FTC “unchecked power” by taking aggressive steps to regulate practices at big tech companies such as Twitter, Meta and Google.

He said Khan carried out “targeted harassment against Twitter” by asking for all communications related to Elon Musk, including conversations with journalists, following Musk’s acquisition, because she does not share his political views.

Khan, a former journalist, said the company has “a history of lax security and privacy policies” that did not begin with Musk.

Other Democrats agreed. “Protecting user privacy is not political,” said congressman Jerry Nadler, a Democrat of New York, in response to Jordan’s remarks.

Republicans also condemned Khan for allegedly wasting government money by pursuing more legal action to prevent mergers than her predecessors – but losing. On Tuesday, a federal judge ruled against the FTC’s bid to delay Microsoft from acquiring the video game company Activision Blizzard, saying the agency failed to prove the deal would decrease competition and harm consumers.
The FTC is appealing against that ruling.

“She has pushed investigations to burden parties with vague and costly demands without any substantive follow-through, or, frankly, logic, for the requests themselves,” said Jordan.

Another Republican member, Darrell Issa of California, called Khan a “bully” for trying to prevent mergers.

“I believe you’ve taken the idea that companies should have to be less competitive in order to merge, [and] that every merger has to be somehow bad for the company and good for the consumer – a standard that cannot be met,” Issa said.

Khan earlier came under scrutiny from Republicans for participating in an FTC case reviewing Meta’s bid to acquire a virtual reality company despite a recommendation from an ethics official that she recuse herself. She defended her decision to remain on the case on Thursday, saying she had consulted with the ethics official. Khan testified she had “not a penny” invested in the company’s stock and thus did not violate ethics laws.

But enforcing antitrust laws against big tech companies such as Twitter has traditionally been a bipartisan issue.

“It’s a little strange that you have this real antipathy among the Republicans of Lina Khan, who in many ways is doing exactly what the Republicans say needs to be done, which is bringing a lot more antitrust scrutiny of big tech,” said Daniel Crane, a professor of antitrust law and enforcement at the University of Michigan Law School.

“There’s a broad consensus that we need to do more, but that’s kind of where the agreement ends,” he said.

Republicans distrust big tech companies over issues of censorship, political bias and cultural influence, whereas Democrats come from a traditional scrutiny of corporations and concentration of economic power, said Crane.

“I don’t fundamentally think she’s doing something other than what she was put in office to do,” he said.

Congress has not yet passed a major antitrust statute that would be favorable to the FTC in these court battles and does not seem to be
pursuing one any time soon, said Crane. “They’re just going to lose a lot of cases, and that’s foreseen.”

The FTC’s list of battles with big tech companies is growing.

Hours earlier on Thursday, Twitter – which now legally goes by X Corp – asked a federal court to terminate a 2011 settlement with the FTC that placed restrictions on its user data and privacy practices. Khan noted Twitter voluntarily entered into that agreement.

Also on Thursday, the Washington Post reported the FTC had opened an investigation into OpenAI over whether its chatbot, ChatGPT, is harmful to consumers. A spokesperson for the FTC would not comment on the OpenAI investigation, but Khan said during the hearing that “it has been publicly reported”.

In 2017, Khan, now 34, gained fame for an academic article she wrote as a law student at Yale that used Amazon’s business practices to explain gaps in US antitrust policy. Biden announced he intended to nominate the antitrust researcher to head the FTC in March 2021. She was sworn in that June.

“Chair Khan has delivered results for families, consumers, workers, small businesses, and entrepreneurs,” the White House spokesperson Michael Kikukawa said in a statement.


    You think the internet is a clown show now? You ain’t seen nothing yet | John Naughton

    Robert F Kennedy Jr is a flake of Cadbury proportions with a famous name. He’s the son of Robert Kennedy, who was assassinated in 1968 when he was running for the Democratic presidential nomination (and therefore also JFK’s nephew). Let’s call him Junior. For years – even pre-Covid-19 – he’s been running a vigorous anti-vaccine campaign and peddling conspiracy theories. In 2021, for example, he was claiming that Dr Anthony Fauci was in cahoots with Bill Gates and the big pharma companies to run a “powerful vaccination cartel” that would prolong the pandemic and exaggerate its deadly effects with the aim of promoting expensive vaccinations. And it went without saying (of course) that the mainstream media and big tech companies were also in on the racket and busily suppressing any critical reporting of it.

Like most conspiracists, Junior was big on social media, but then in 2021 his Instagram account was removed for “repeatedly sharing debunked claims about the coronavirus or vaccines”, and in August last year his anti-vaccination Children’s Health Defense group was removed by Facebook and Instagram on the grounds that it had repeatedly violated Meta’s medical-misinformation policies.

But guess what? On 4 June, Instagram rescinded Junior’s suspension, enabling him to continue beaming his baloney, without let or hindrance, to his 867,000 followers. How come? Because he announced that he’s running against Joe Biden for the Democratic nomination, and Meta, Instagram’s parent, has a policy that users should be able to engage with posts from “political leaders”. “As he is now an active candidate for president of the United States,” it said, “we have restored access to Robert F Kennedy Jr’s Instagram account.”

Which naturally is also why the company allowed Donald Trump back on to its platform.
So in addition to anti-vax propaganda, American voters can also look forward in 2024 to a flood of denialism about the validity of the 2020 election on their social media feeds, as Republican acolytes of Trump stand for election and get a free pass from Meta and co.

All of which led the technology journalist Casey Newton, an astute observer of these things, to advance an interesting hypothesis last week about what’s happening. We may, he said, have passed “peak trust and safety”. Translation: we may have passed the point at which tech platforms stop caring about moderating what happens on their platforms. From now on, (almost) anything goes.

If that’s true, then we have reached the most pivotal moment in the evolution of the tech industry since 1996. That was the year when two US legislators inserted a short clause – section 230 – into the Communications Decency Act that was then going through Congress. In 26 words, the clause guaranteed immunity for online computer services with respect to third-party content generated by their users. It basically meant that if you ran an online service on which people could post whatever they liked, you bore no legal liability for any of the bad stuff that could happen as a result of those publications.

On the basis of that get-out-of-jail-free card, corporations such as Google, Meta and Twitter prospered mightily for years. Bad stuff did indeed happen, but no legal shadow fell on the owners of the platforms on which it was hosted. Of course it often led to bad publicity – but that was ameliorated or avoided by recruiting large numbers of (overseas and poorly paid) moderators, whose job was to ensure that the foul things posted online did not sully the feeds of delicate and fastidious users in the global north.

But moderation is difficult and often traumatising work. And, given the scale of the problem, keeping social media clean is an impossible, Sisyphean task.
The companies employ many thousands of moderators across the globe, but they can’t keep up with the deluge. For a time, these businesses argued that artificial intelligence (meaning machine-learning technology) would enable them to get on top of it. But the AI that can outwit the ingenuity of the bad actors who lurk in the depths of the internet has yet to be invented.

And, more significantly perhaps, times have suddenly become harder for tech companies. The big ones are still very profitable, but that’s partly because they have been shedding jobs at a phenomenal rate. And many of those who have been made redundant worked in areas such as moderation, or what the industry came to call “trust and safety”. After all, if there’s no legal liability for the bad stuff that gets through whatever filters there are, why keep these worthy custodians on board?

Which is why democracies will eventually have to contemplate what was hitherto unthinkable: rethink section 230 and its overseas replications and make platforms legally liable for the harms that they enable. And send Junior back to the soapbox he deserves.

What I’ve been reading

Here’s looking at us
Techno-Narcissism is Scott Galloway’s compelling blogpost on his No Mercy / No Malice site about the nauseating hypocrisy of the AI bros.

Ode to Joyce
The Paris Review website has the text of novelist Sally Rooney’s 2022 TS Eliot lecture, Misreading Ulysses.

Man of letters
Remembering Robert Gottlieb, Editor Extraordinaire is a lovely New Yorker piece by David Remnick on one of his predecessors, who has just died.


    New electric cars won’t have AM radio. Rightwingers claim political sabotage

    Charlie Kirk, radio host and founder of the rightwing youth group Turning Point USA, believes that a conspiracy may be afoot. “Whether they’re doing this intentionally or not, the consequence will be … an all-out attack on AM radio,” he told the listeners of his popular syndicated show.

In an appearance on Fox, the television and radio host Sean Hannity gave his viewers a similar warning: “This would be a direct hit politically on conservative talk radio in particular, which is what most people go to AM radio to listen to.” Mark Levin, another longtime radio host, agreed: “They finally figured out how to attack conservative talk radio,” he told his listeners in April.

What are they all so worried about? It turns out, a minor manufacturing change announced by car companies including Volkswagen and Mazda: they will be removing AM radios from their forthcoming fleets of electric vehicles, citing technical issues. Tesla, BMW, Audi and Volvo have already dispensed with AM in their electric cars, because AM’s already unpolished reception is subject to even more buzz, crackling and interference when installed near an electric motor. While some manufacturers have found workarounds for the interference, others appear to have decided that it’s not worth the engineering expense.

Many on the right have been quick to declare the move political sabotage. The Texas senator Ted Cruz, while promoting a federal bill that would require automakers to install AM radios in new cars, claimed he smelled something fishy: “There’s a reason big car companies were open to taking down AM radio … let’s be clear: big business doesn’t like things that are overwhelmingly conservative.”

AM is the oldest commercial radio technology in the US. In the 1920s, when AM was all there was, listeners would gather around neighborhood and living room radio sets to hear everything from music to boxing matches, soap operas and presidential speeches.
They would listen through AM’s constant (if now somewhat nostalgic) hum. By mid-century, music was king on the radio as many dramatic programs shifted over to the new medium of television. And in the 1960s, the comparatively crystal clear FM band overtook AM as the band of choice. Many music stations deserted AM, leaving it floundering in lo-fi isolation and struggling to secure advertising dollars, until it found its salvation in talk radio.

Initially there was a wide variety of political perspectives on AM, but the deregulation of content and consolidation of ownership of radio during the 1980s edged many minority voices and local owners off the air. Following the model of the nationally syndicated Rush Limbaugh Show, conservative talk became the cost-effective default for the risk-averse corporations that now dominated the radio dial. The humble AM band played a starring role in the rise of social conservatism in the US and was a precursor to outlets like Fox News.

These days, AM radio is somewhat synonymous in the public imagination with conservative blowhards, a place where false claims about the 2020 election, racist notions of a “great replacement” and other conspiracy theories fester and escape into the atmosphere without accountability. Far-right programming is not only ubiquitous, it’s monotonous – with a few national radio chains syndicating the same handful of shows to “local” stations, many of which have almost no local content. In cities and towns across the country, listeners hear much of the same one-sided, syndicated programming.

But the idea that AM radio is made up of nothing more than conservative talk is a myth that has dangerous implications for the medium.

It is true that conservatives and far-right pundits have claimed near dominion over talk radio – a medium that still ranks nearly neck-and-neck with social media for how Americans get their news.
Seventeen of the top 20 most-listened-to US talk radio hosts are conservative, while only one is liberal. But that’s not the whole story: while syndicated rightwing voices are the best platformed on AM radio, what is less known is that the band is home to many of the country’s increasingly rare local stations and non-English-language radio shows. And ownership of AM radio stations is more diverse than that of FM stations: according to a 2021 FCC report, 13% of commercial AM stations were majority-owned by a Black, Hispanic or Asian American broadcaster; on the FM band, that figure was only 7%. Often lacking the financial and political resources available to chain-owned conservative talk stations, it is these local and diverse voices – not nationally syndicated conservative talkers like Sean Hannity and Mark Levin – that are likely to be the hardest hit by any changes to the band.

“AM is, generally, the least expensive route to broadcast station ownership,” says Jim Winston, president and CEO of the National Association of Black Owned Broadcasters (Nabob), a trade organization serving Black- and minority-owned radio stations. And though the 1980s and 1990s saw a decrease in local and minority ownership, Winston says a disproportionate number of the stations he works with today are on the AM dial. “There are many communities where the only Black-owned station is an AM station,” he says. “And Black owners, for the most part, are local owners.”

In cities across the country, AM stations remain a crucial resource for those who are rarely served by other media. Detroit’s WNZK, known as the “station of nations”, runs a variety of non-English and English language programming for the area’s immigrant communities.
In Chicago, WNVR broadcasts in Polish, and many AM stations in California and New York run talk and music programs in Vietnamese and Chinese.

The time-tested technology of AM radio has also given the medium a particularly important role in small towns and rural areas. “Out here, it does serve a very distinct purpose, because AM frequency travels very differently from FM,” says Austin Roof, general manager at KSDP in Sand Point, Alaska, on the Aleutian Islands. AM is better than FM at getting through mountains and other barriers. Plus, Roof says, “once AM hits water, it just carries really well”. For a radio station serving island residents and those who work on the area’s fishing boats, that value can’t be overstated. “One kilowatt of AM can outperform thousands of kilowatts of FM in our environment.”

Satellite internet has only recently become available in much of KSDP’s coverage area, and the region’s geography means that even the few local newspapers have limited distribution. So radio stations like KSDP – which serves an area nearly twice the size of Massachusetts – can be a lifeline. In recent years, as the islands have experienced some of their largest earthquakes and subsequent tsunamis, the radio has played a crucial role in spreading emergency alerts and instructions. (Between emergency updates after a 2021 earthquake, station staff played songs like AC/DC’s You Shook Me All Night Long and the Surfaris’ Wipe Out.) “Your cellphone can lose its charge,” says Winston of Nabob. “You could be … out someplace where your cellphone signal is not being picked up.” But radio, he says, is ubiquitous, and it’s very important “that people be able to receive radio when they can’t receive anything else”.

AM stations are not just of value during emergencies: in small towns and rural areas across the country, AM stations are a rare tool for civic engagement, especially with the decline in local newspapers.
Roof says KSDP’s most popular broadcasts are those that listeners can’t find anywhere else: “Local, state news, local meetings, sports,” he says. “It’s the hyper-local content that matters.” The story is similar on the Yakama Reservation in Washington state, where the program director Reggie George says the hyperlocal AM station KYNR broadcasts public service announcements and coverage of local events such as government meetings and powwows, in addition to a steady playlist of both oldies and Native American music. When a technical snag or bad weather temporarily silences the station, residents react. “We get calls right away when we go off the air,” says George, one of two paid staff at KYNR.

Many AM stations have tried to prepare for an uncertain future by meeting their listeners on other platforms, such as FM simulcasts, podcasts and web streams. Alaska’s KSDP has managed to get its content simulcast on one full-power and three low-power FM signals that serve nearby towns, and on a well-utilized online audio stream. But finding the money to stay afloat while supporting those other platforms hasn’t been easy. “We’ve begged, borrowed and stolen for hardware,” Roof says. Roof personally climbs the radio tower to replace equipment and touch up paint, has taken pay cuts, and has opted out of company healthcare to keep more money in the station. But other hyperlocal AM stations haven’t had the budget to make the expansion.

To some in the radio industry, the removal of AM radios from electric vehicles feels like a death sentence for their already struggling medium. Others are less worried. “I think a lot of these places that are really benefiting from AM … are not where electric cars are really going to serve up the most benefits,” says Roof. In his part of the country, there’s no infrastructure to support EVs yet, and not many people can afford a Tesla or a BMW.
“If you think someone in Sand Point, Alaska, is getting an electric car any time in the near future, you’re crazy,” he says. “Is getting rid of [AM radio] in electric vehicles going to do away with it? Absolutely not.”

There remains a lurking sense, however, that the removal of AM from EVs is a symptom of a larger shift away from the AM band. And if other changes come to pass, it will probably be the local, diverse stations – the unlauded heroes of AM – that are at greatest risk, not the well-resourced nationally syndicated conservative talk hosts who dominate talk radio. “Those voices are not going to be shut down, no matter what happens with AM radio,” says Winston. If AM radio does become harder to access, he says, “there are serious casualties.”
    Katie Thornton is a freelance print and audio journalist. Her Peabody-winning podcast series The Divided Dial, made with WNYC’s On the Media, reveals how the American right came to dominate talk radio.