More stories

  • Google accused of spending billions to block rivals as landmark trial continues

    The court battle between the US justice department and Google has entered its second day, as the United States government seeks to prove that the tech behemoth illegally leveraged its power to maintain a monopoly over internet search engines. The trial is a major test of antitrust law and could have far-reaching implications for the tech industry and for how people engage with the internet.
    The question at the heart of the trial is whether Google’s place as the search engine for most Americans is the result of anti-competitive practices that gave internet users no other choice but to use its services.
    On the first day of the trial, attorneys for the justice department and the dozens of states that have joined in the suit accused Google of shutting out competition through billion-dollar agreements with companies such as Apple and Samsung.
    The justice department lawyer Kenneth Dintzer alleged Google spends $10bn a year in deals to ensure it is the default search engine on devices such as the iPhone, effectively blocking meaningful competition and positioning Google as the gatekeeper of the internet. “They knew these agreements crossed antitrust lines,” Dintzer said.
    Google’s opening statement gave a window into how the company and its lead attorney, John Schmidtlein, plan to defend against the accusations. Schmidtlein argued that Google has achieved its dominance over online search – the government estimates it holds about a 90% market share – because it is simply a better product than alternatives such as Microsoft’s Bing search engine. Consumers are free to switch default settings with “a few easy clicks” and use other search engines if they please, Schmidtlein told the court on Tuesday.
    The justice department called its first witness, Google’s chief economist Hal Varian. Over the course of two hours, Dintzer presented Varian with internal memos and documents dating back to the 2000s that showed him discussing how search defaults could be strategically important. One internal communication from Varian warned over antitrust issues that “we should be careful about what we say in both public and private”.
    On Wednesday, the justice department called the former Google executive Chris Barton, who had worked in partnerships and was an employee from 2004 to 2011. The department questioned Barton about the value of those partnerships in establishing dominance over the market. “As we recognized the opportunity for search on mobile phones, we began to build a product team,” Barton said, according to Reuters.
    As with the first day of the trial, the government has tried to show that Google saw the importance early on of making deals and securing its position as the default search engine on devices. The documents and witnesses it has brought up have so far been from over a decade ago, when the government says Google was first beginning to forge agreements that helped it monopolize search.
    The justice department has also alleged that Google was aware of possible antitrust violations and has consciously tried to obscure its actions. The government presented a document in court from an internal Google presentation on antitrust, which warned employees to avoid mentioning “market share” or “dominance”.
    The trial is set to last 10 weeks and feature numerous witnesses, as well as internal Google documents that the justice department hopes will show that monopolizing search has long been a top priority at the company. Judge Amit Mehta will decide the case, and there is no jury in the trial.

  • TechScape: As the US election campaign heats up, so could the market for misinformation

    X, the platform formerly known as Twitter, announced it will allow political advertising back on the platform – reversing a global ban on political ads in place since 2019. The move is the latest to stoke concerns about the ability of big tech to police online misinformation ahead of the 2024 elections – and X is not the only platform being scrutinised.
    Social media firms’ handling of misinformation and divisive speech reached a breaking point in the 2020 US presidential elections, when Donald Trump used online platforms to rile up his base, culminating in the storming of the Capitol building on 6 January 2021. But in the time since, companies have not strengthened their policies to prevent such crises, instead slowly stripping protections away. This erosion of safeguards, coupled with the rise of artificial intelligence, could create a perfect storm for 2024, experts warn.
    As the election cycle heats up, Twitter’s move this week is not the first to raise major concerns about the online landscape for 2024 – and it won’t be the last.
    Musk’s free speech fantasy
    Twitter’s change to election advertising policies is hardly surprising to those following the platform’s evolution under the leadership of Elon Musk, who purchased the company in 2022. In the months since his takeover, the erratic billionaire has made a number of unilateral changes to the site – not least of all the rebrand of Twitter to X.
    Many of these changes have centered on Musk’s goal to make Twitter profitable at all costs. The platform, he complained, was losing $4m per day at the time of his takeover, and he stated in July that its cash flow was still in the negative. More than half of the platform’s top advertisers have fled since the takeover – roughly 70% of the platform’s leading advertisers were not spending there as of last December. For his part, this week Musk threatened to sue the Anti-Defamation League, saying, “based on what we’ve heard from advertisers, ADL seems to be responsible for most of our revenue loss”. Whatever the reason, his decision to re-allow political advertisers could help boost revenue at a time when X sorely needs it.
    But it’s not just about money. Musk has identified himself as a “free speech absolutist” and seems hell-bent on turning the platform into a social media free-for-all. Shortly after taking the helm of Twitter, he lifted bans on the accounts of Trump and other rightwing super-spreaders of misinformation. Ahead of the elections, he has expressed a goal of turning Twitter into a “digital town square” where voters and candidates can discuss politics and policies – solidified recently by its (disastrous) hosting of the Republican governor Ron DeSantis’s campaign announcement.
    Misinformation experts and civil rights advocates have said this could spell disaster for future elections. “Elon Musk is using his absolute control over Twitter to exert dangerous influence over the 2024 election,” said Imran Ahmed, head of the Center for Countering Digital Hate, a disinformation and hate speech watchdog that Musk himself has targeted in recent weeks.
    In addition to the policy changes, experts warn that the massive workforce reduction Twitter has carried out under Musk could impair its ability to deal with misinformation, as trust and safety teams are now reported to be woefully understaffed.
    Let the misinformation wars begin
    While Musk’s decisions have been the most high profile in recent weeks, X is not the only platform whose policies have raised alarm. In June, YouTube reversed its election integrity policy, and now allows content contesting the validity of the 2020 elections to remain on the platform. Meanwhile, Meta has also reinstated accounts of high-profile spreaders of misinformation, including Donald Trump and Robert F Kennedy Jr.
    Experts say these reversals could create an environment similar to the one that fundamentally threatened democracy in 2020. But now there is an added risk: the meteoric rise of artificial intelligence tools. Generative AI, which has increased its capabilities in the last year, could streamline the ability to manipulate the public on a massive scale.
    Meta has a longstanding policy that exempts political ads from its misinformation policies and has declined to state whether that immunity will extend to manipulated and AI-generated images in the upcoming elections. Civil rights watchdogs have envisioned a worst-case scenario in which voters’ feeds are flooded with deceptively altered and fabricated images of political figures, eroding their ability to trust what they read online and chipping away at the foundations of democracy.
    While Twitter is not the only company rolling back its protections against misinformation, its extreme stances are moving the goalposts for the entire industry. The Washington Post reported this week that Meta had considered banning all political advertising on Facebook, but reversed course to better compete with its rival Twitter, which Musk had promised to transform into a haven for free speech. Meta also dissolved its Facebook Journalism Project, tasked with promoting accurate information online, and its “responsible innovation team”, which monitored the company’s products for potential risks, according to the Washington Post.
    Twitter may be the most scrutinised platform in recent weeks, but it’s clear that almost all platforms are moving towards an environment in which they throw up their hands and say they cannot or will not police dangerous misinformation online – and that should concern us all.
    The wider TechScape
    • David Shariatmadari goes deep with the co-founder of DeepMind about the mind-blowing potential of artificial intelligence in biotech in this long read.
    • New tech news site 404 Media has published a distressing investigation into AI-generated mushroom-foraging books on Amazon. In a space where misinformation could mean the difference between eating something delicious and something deadly, the stakes are high.
    • If you can’t beat them, join them: celebrities have been quietly working to sign deals licensing their artificially generated likenesses as the AI arms race continues.
    • Elsewhere in AI – scammers are on the rise, and their tactics are terrifying. And the Guardian has blocked OpenAI from trawling its content.
    • Can you be “shadowbanned” on a dating app? Some users are convinced their profiles are not being prioritised in the feed. A look into this very modern anxiety, and how the algorithms of online dating actually work.

  • A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

    It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.
    The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations would be left unhindered in maximising shareholder value while incinerating the planet.
    So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it. Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.
    In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated, but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly because social media are engines for disseminating BS and disinformation at light-speed.
    And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI tools – ChatGPT, Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.
    So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects”, and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.
    You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine. We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it?
    That’s what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).
    The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?
    What I’ve been reading
    • Shake it up: David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.
    • Dish the dirt: There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.
    • Truth bombs: What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.

  • Biden’s China investment ban: who’s targeted and what does it mean for the 2024 US election?

    Joe Biden has moved to restrict US investment in Chinese technology, signing an executive order that focuses on a few sensitive hi-tech sectors, including semiconductors, quantum computing and artificial intelligence (AI).
    It is the latest in a series of measures taken by the US to restrict China’s access to the most advanced technology, and it comes as the president embarks on a multi-state tour of the south-west to tout his plans to revive American manufacturing after decades of decline.
    The restrictions are expected to take effect next year – and come at a sensitive time in the US-China relationship. The Biden administration has launched diplomatic overtures to Beijing in recent months, seeking to mend ties after a series of incidents, while still attempting to bolster its position against China on military, economic and technological fronts.
    What are the latest restrictions?
    As a result of previous Biden administration measures, the US already bans or restricts the export to China of many of the technologies covered in these new measures. The aim of Wednesday’s executive order is to prevent US funds from helping China build its own domestic capabilities, which could undermine the existing export controls.
    Under the executive order, the US Treasury has been directed to regulate certain US investments in semiconductors and microelectronics, quantum computing and artificial intelligence.
    China, Hong Kong and Macau are listed as the “countries of concern”, but a senior Biden official has told Reuters other countries could be added in the future. The rules are not retroactive and apply to future investments, with officials saying the goal is to regulate investments in areas that could give China military and intelligence advantages.
    Britain and the European Union have signalled their intention to move along similar lines, and the Group of Seven advanced economies agreed in June that restrictions on outbound investments should be part of an overall toolkit.
    Biden’s plan has been criticised by Republicans, many of whom say it does not go far enough. The Republican senator Marco Rubio has called it “almost laughable”, saying the plan is “riddled with loopholes … and fails to include industries China’s government deems critical”.
    How has China reacted?
    A spokesperson for the Chinese embassy in Washington said the White House had ignored “China’s repeated expression of deep concerns” about the plan. The embassy warned that it would affect more than 70,000 US companies that do business in China, hurting both Chinese and American businesses.
    The country’s commerce ministry said it reserved the right to take countermeasures and encouraged the US to respect the laws of the market economy and the principle of fair competition.
    What part do these measures play in Biden’s re-election bid?
    As the executive order was made public, Biden was speaking in New Mexico, touting his government’s success in boosting manufacturing jobs in the renewable energy sector. “Where’s it written that America can’t lead the world again in manufacturing? Because we’re going to do just that,” Biden said at the groundbreaking of a new factory manufacturing wind turbine towers in the city of Belen. “Instead of exporting American jobs, we’re creating American jobs and we’re exporting American products,” he added.
    However, polling shows that for many, the perception of the president’s economic policies – “Bidenomics”, as his communications team likes to call them – is at odds with a range of positive indicators. US inflation has dropped to its lowest level since 2021 and the administration has repeatedly touted months of consistent jobs growth; despite this, multiple polls show that only a minority of Americans support Biden’s handling of the economy.
    The cornerstones of Biden’s refreshed bid to voters are two major bills he shepherded through Congress and signed into law a year ago: the Chips and Science Act – which pumps huge funding into semiconductor manufacturing, research and development – and the Inflation Reduction Act (IRA), a law funding megaprojects that boost green investment.
    The chips act aims to further freeze China’s semiconductor industry in place, while pouring billions of dollars in subsidies into the US chip industry. Both laws, along with the growing restrictions on Chinese industry, are positioned to win back portions of the working-class vote that felt left behind by globalisation and turned to Donald Trump at previous elections.
    What’s next?
    The ban is a step in a broad and ongoing push to undermine China’s efforts to achieve independence in a number of technological areas, in particular the development of advanced semiconductors. In recent months, the US government has signalled it still wants to close some of the loopholes Chinese businesses are using to get their hands on the most advanced semiconductors.
    In response to previous chip bans, Nvidia, one of the world’s leading chip companies, has started offering a less advanced chip, the A800, to Chinese buyers. But new curbs being considered by Washington would restrict even those products.
    In possible anticipation of such a move, China’s tech giants – including Baidu, TikTok-owner ByteDance, Tencent and Alibaba – have placed orders worth $1bn to acquire about 100,000 A800 processors from Nvidia to be delivered this year, the Financial Times has reported. The Chinese groups had also bought a further $4bn worth of graphics processing units to be delivered in 2024, according to the report.
    Reuters and Agence France-Presse contributed to this report

  • Republicans shelve Zuckerberg contempt vote in ‘censorship’ inquiry ‘for now’

    Mark Zuckerberg, the chief executive of Meta, is no stranger to Capitol Hill, where he has sparred with Republicans and Democrats over how he runs his platforms. A Republican-led panel was set to vote on Thursday on a resolution to hold him in contempt of Congress for allegedly failing to turn over internal documents on content moderation.
    However, the House judiciary committee chair, Jim Jordan, a Republican of Ohio, temporarily suspended the vote. Jordan announced on Twitter that the committee “decided to hold contempt in abeyance. For now” and, hours ahead of the hearing, posted a series of tweets of alleged internal communications among Meta executives.
    “To be clear, contempt is still on the table and WILL be used if Facebook fails to cooperate in FULL,” Jordan said.
    Republican lawmakers have repeatedly accused Meta – along with other big names such as Google, Apple and Microsoft – of suppressing conservative speech on their platforms.
    Jordan had alleged that Meta failed to turn over requested internal company documents to an investigation into tech companies and “willfully refused to comply in full with a congressional subpoena”, according to a report released on Tuesday. He also subpoenaed the chief executives at Alphabet, Microsoft, Amazon and Apple in February. Zuckerberg is so far the only one facing additional scrutiny.
    But regulating tech companies is a rare area of bipartisan support, even if the reasons behind it are different. Meta has come under fire from Democrats over privacy concerns and its marketing toward kids and teens. In 2020, Zuckerberg, along with the then Twitter chief executive, Jack Dorsey, faced intense questioning during a Senate judiciary hearing where Democrats condemned the executives for amplifying misinformation, such as false claims of election fraud, and raised antitrust concerns.
    Meta says it has fully complied with the congressional investigation. “For many months, Meta has operated in good faith with this committee’s sweeping requests for information. We began sharing documents before the committee’s February subpoena and have continued to do so,” said a Meta spokesperson, Andy Stone, in a statement posted in response to the hearing notice on Tuesday.
    He said Meta had so far delivered more than 53,000 pages of internal and external documents and “made nearly a dozen current and former employees available to discuss external and internal matters, including some scheduled this very week”, according to the statement.
    Politico reported that Meta handed over more documents hours before Jordan announced the Thursday vote, but that the Ohio Republican was not satisfied. “They’ve given us documents because we’re pushing and because we’re talking about this – we appreciate that, but we are convinced that it’s way short of what they should be providing us,” Jordan reportedly said in an interview.
    One social media company, Twitter – which now goes by X – has escaped much of the scrutiny, as its chief executive, Elon Musk, has been seen as friendly to conservatives. In his February letter to tech companies, Jordan called Twitter a model of transparency and praised its “Twitter files” – which many experts flagged as sensationalized.
    Meta’s second-quarter revenue defied expectations after its earnings release on Wednesday, and Zuckerberg’s own net worth surged on Thursday.

  • Top tech firms commit to AI safeguards amid fears over pace of change

    Top players in the development of artificial intelligence, including Amazon, Google, Meta, Microsoft and OpenAI, have agreed to new safeguards for the fast-moving technology, Joe Biden announced on Friday.
    Among the guidelines brokered by the Biden administration are watermarks for AI content, to make it easier to identify, and third-party testing of the technology to try to spot dangerous flaws.
    Speaking at the White House, Biden said the companies’ commitments were “real and concrete” and would help “develop safe, secure and trustworthy” technologies that benefit society and uphold values. “Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs in industries,” he said. “These commitments are a promising step, but we have a lot more work to do together.”
    The president said AI brings “incredible opportunities”, as well as risks to society and the economy. The agreement, he said, would underscore three fundamental principles – safety, security and trust.
    The White House said seven US companies had agreed to the voluntary commitments, which are meant to ensure their AI products are safe before they release them.
    The announcement comes as critics charge that AI’s breakneck expansion threatens to allow real damage to occur before laws catch up. The voluntary commitments are not legally binding, but they may create a stopgap while more comprehensive action is developed.
    A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
    The tech companies agreed to eight measures:
    Using watermarking on audio and visual content to help identify content generated by AI.
    Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
    Sharing trust and safety information with the government and other companies.
    Investing in cybersecurity measures.
    Encouraging third parties to uncover security vulnerabilities.
    Reporting societal risks such as inappropriate uses and bias.
    Prioritizing research on AI’s societal risks.
    Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
    The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
    Some advocates for AI regulation said Biden’s move is a start, but more needs to be done to hold the companies and their products accountable. “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the non-profit Common Sense Media.
    Some critics have argued that the guidelines, detailed only at a high level in a fact sheet released by the White House, do not go far enough in addressing concerns over the way AI could impact society, and give the administration little to no remedy for enforcement if the companies do not abide by them.
    “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models,” said Amba Kak, the executive director of the research group the AI Now Institute.
    “A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” Kak said. “What this list covers is a set of problems that are comfortable to business as usual, but we also need to be looking at what’s not on the list – things like competition concerns, discriminatory impacts of these systems. The companies have said they’ll ‘research’ privacy and bias, but we already have robust bodies of research on both – what we need is accountability.”
    Voluntary guidelines amount to little more than self-regulation, said Caitriona Fitzgerald, the deputy director at the non-profit research group the Electronic Privacy Information Center (Epic). A similar approach was taken with social media platforms, she said, and it didn’t work. “It’s internal compliance checking and it’s similar to what we’ve seen in the FTC consent orders from the past when they required Facebook to do internal privacy impact assessments and they just became a box-checking exercise.”
    The Senate majority leader, Chuck Schumer, has said he will introduce legislation to regulate AI. He has held a number of briefings with government officials to educate senators about an issue that’s attracted bipartisan interest.
    A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, vice-president Kamala Harris and other officials.
    Senator Mark Warner said the guidelines released on Friday are a start but that “we need more than industry commitments”. “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse,” Warner said in a statement.
    But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
    The software trade group BSA, which includes Microsoft as a member, said on Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems. “Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.
    Several countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-country bloc. The details of the European legislation are still being hashed out, but the EU AI Act contains robust regulations that would create significant consumer protections against the overreach, privacy violations and biases of certain types of high-risk AI models.
    Meanwhile, conversations in the US remain in the early stages. Fitzgerald, of Epic, said that while the voluntary guidelines are just one in a series the White House has released on AI, she worries they might cause Congress to slow down its push to create regulations. “We need the rules of the road before it gets too big to regulate,” she said.
    The UN secretary general, António Guterres, recently said the United Nations was “the ideal place” to adopt global standards, and he appointed a board that will report back on options for global AI governance by the end of the year. The UN chief also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
    The White House said on Friday that it had already consulted on the voluntary commitments with a number of countries.
    Associated Press contributed to this story

  • Oppenheimer biographer supports US bill to bar use of AI in nuclear launches

    A biographer whose Pulitzer prize-winning book inspired the new movie Oppenheimer has expressed support for a US senator’s attempt to bar the use of artificial intelligence in nuclear weapons launches.
    “Humans must always maintain sole control over nuclear weapons,” Kai Bird, author of American Prometheus, said in a statement reported by Politico. “This technology is too dangerous to gamble with. This bill will send a powerful signal to the world that the United States will never take the reckless step of automating our nuclear command and control.”
    In Washington on Thursday, Bird met Ed Markey, the Democratic Massachusetts senator who is attempting to add the AI-nuclear provision to a major defense spending bill. Markey, Politico said, was a friend of Bird’s co-author, the late Tufts University professor Martin J Sherwin.
    A spokesperson for the senator told Politico Markey and Bird “shared their mutual concerns over the proliferation of artificial intelligence in national security and defense without guardrails, and the risks of using nuclear weapons in south Asia and elsewhere.
    “They also discussed ways to increase awareness of nuclear issues among the younger set.”
    J Robert Oppenheimer was the driving force behind US development of the atomic bomb at the end of the second world war. Bird and Sherwin’s book is now the inspiration for Oppenheimer, Christopher Nolan’s summer blockbuster starring Cillian Murphy in the title role. The movie opens in the US on Friday – in competition with Barbie, Greta Gerwig’s film about the popular children’s doll.
    On Friday, Nolan told the Guardian: “International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs.
    “It’s reassuringly difficult to make nuclear weapons and so it’s relatively easy to spot when a country is doing that. I don’t believe any of that applies to AI.”
    Nolan also noted “very strong parallels” between Oppenheimer and AI experts now calling for such technology to be controlled. Nolan said he had “been interested to talk to some of the leading researchers in the AI field, and hear from them that they view this as their ‘Oppenheimer moment’. And they’re clearly looking to his story for some kind of guidance … as a cautionary tale in terms of what it says about the responsibility of somebody who’s putting this technology to the world, and what their responsibilities would be in terms of unintended consequences.”
    Bird and Sherwin’s biography, subtitled The Triumph and Tragedy of J Robert Oppenheimer, was published in 2008. Reviewing for the Guardian, James Buchan saluted the authors’ presentation of “the cocktails and wire-taps and love affairs of Oppenheimer’s existence, his looks and conversation, the way he smoked the cigarettes and pipe that killed him, his famous pork-pie hat and splayed walk, and all the tics and affectations that his students imitated and the patriots and military men despised.
    “It is as if these authors had gone back to James Boswell, who said of Dr Johnson: ‘Everything relative to so great a man is worth observing.’”

  • ‘An evolution in propaganda’: a digital expert on AI influence in elections

    Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.
    Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.
    The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.
    Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?
    When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago, there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate, perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.
    There have been two major developments in the last six months. First is the rise of ChatGPT, which generates text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools. At the same time, text-to-still-image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.
    Why did the political use of deepfakes not materialize?
    The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it and factcheck it and respond to it. You might call it a natural immune response.
    Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.
    How do you think large language models can change political propaganda?
    I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content. It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain that differentiation between this is the tactic that a new technology enables versus this is going to swing an election.
    As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots. You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.
    And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.
    That seems tied to another aspect you’ve written about: that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.
    It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.
    When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with it. The public acclimated to the idea that Photoshop existed and that not everything that you see with your eyes is a thing that necessarily is as it seems – the idea that the woman that you see on the magazine cover probably does not actually look like that. Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.
    Now anybody can make it look like the pope is wearing Balenciaga.
    Exactly.
    In the US, it seems like meaningful federal regulation is pretty far away, if it’s going to come at all. Absent that, what are some of the short-term ways to mitigate these risks?
    First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.
    But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify if automation is in play.
    On the image front, social media platforms, as well as generative AI companies, are starting to come together to try to determine what kind of watermarking might be useful so that platforms and others can determine computationally whether an image is generated.
    Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends. How effective do you see those policies being?
    It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology and you can argue you’ve made a values-based decision to not allow your products to generate particular types of content. On the flip side, though, there are models that are open source and anyone can go and get access to them. Some of the things that are being done with some of the open-source models and image generation are deeply harmful, but once the model is open sourced, the ability to control its use is much more limited.
    And it’s a very big debate right now in the field. You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.
    What’s the media or the public getting wrong about AI and disinformation?
    One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities – where people are going to continue to hold the belief in something that they’ve seen on the internet, as long as it tells them what they want. Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.