More stories

  • US orders immediate stop to some AI chip exports to China; Lloyds profits up but lending margins fall – business live

    Good morning, and welcome to our live, rolling coverage of business, economics and financial markets.

    The US has ordered the immediate halt of exports to China of hi-tech computer chips used for artificial intelligence, chipmaker Nvidia has said.

    Nvidia said the US had brought forward a ban which had given the company 30 days from 17 October to stop shipments. Instead of a grace period, the ban is “effective immediately”, the company said in a statement to US regulators.

    The company did not say why the ban had been brought forward so abruptly, but it comes amid a deep rivalry between the US and China over who will dominate the AI boom.

    Nvidia said that shipments of its A100, A800, H100, H800, and L40S chips would be affected. Those chips, which retail at several thousand dollars apiece, are specifically designed for use in datacentres to train AI and large language models.

    Demand for AI chips has soared as excitement has grown about the capabilities of generative AI, which can produce new text, images and video based on the inputs of huge volumes of data.

    Nvidia said it “does not anticipate that the accelerated timing of the licensing requirements will have a near-term meaningful impact on its financial results”.

    Lloyds profits up but competition squeezes margins

    In the UK, Lloyds Banking Group has reported a rise in profits even as it said competition was hitting its margins as mortgage rates fall back.

    Britain’s biggest bank said it made £1.9bn in profits from July to September, an increase compared to the £576m for the same period last year. The comparison has an important caveat, however: the bank has restated its financials to conform to new accounting rules.

    Net interest margin – the measure of the difference between the cost of borrowing and what it charges customers when it lends – was 3.08% in the third quarter, down 0.06 percentage points in the quarter “given the expected mortgage and deposit pricing headwinds”, it said.

    The bank did set aside £800m to deal with rising defaults from borrowers, but said that it was still seeing “broadly stable credit trends and resilient asset quality”.

    The agenda

    An EY-linked auditor to the Adani Group is under scrutiny from India’s accounting regulator, Bloomberg News has reported.

    The National Financial Reporting Authority, or NFRA, has started an inquiry into S.R. Batliboi, a member firm of EY in India, Bloomberg said, citing unnamed sources.

    S.R. Batliboi is the auditor for five Adani companies which account for about half of Adani’s revenues.

    Bloomberg reported that representatives for NFRA and the Adani Group didn’t respond to an emailed request for comments. A representative for EY and S.R. Batliboi declined to comment to Bloomberg.

    China’s economic slowdown is causing worries at home, as well as in Germany and other big trade partners.

    A series of Chinese government actions have signalled their concern about slowing growth, which could cause problems for an authoritarian regime.

    Xi Jinping, China’s president, visited the People’s Bank of China for the first time, according to reports yesterday. “The purpose of the visit was not immediately known,” said Reuters, ominously.

    State media also reported that China had sharply lifted its 2023 budget deficit to about 3.8% of GDP because of an extra $137bn in government borrowing. That was up from 3%.
    The Global Times, a state-controlled newspaper, said the move would “benefit home consumption and the country’s economic growth”, citing an unnamed official.

    Germany’s economic fortunes were better than expected in October, according to a closely watched indicator – but whether it’s overall good news or bad depends on who you ask.

    The ifo business climate index rose from 85.8 to 86.9 points, higher than the 85.9 expected by economists polled beforehand by Reuters.

    Germany has been struggling as growth slows in China, a key export market, as well as with the costs of switching from Russian gas to fuel its economy. You can read more context here:

    Franziska Palmas, senior Europe economist at Capital Economics, a consultancy, is firmly in team glass half empty. She said:
    The small rise in the Ifo business climate index (BCI) in October still left the index in contractionary territory, echoing the downbeat message from the composite PMI released yesterday. This chimes with our view that the German economy is again in recession.
    Despite the improvement in October, the bigger picture remains that the German economy is struggling. The Ifo current conditions index, which has a better relationship with GDP than the BCI, is still consistent with GDP contracting by around 1% quarter-on-quarter. This is an even worse picture than that painted by the composite PMI, which fell in October but points to output dropping by “only” 0.5% quarter-on-quarter.
    But journalist Holger Zschaepitz said it looks like things are improving:

    UK house prices will continue to slide this year and in 2024 and will not start to recover until 2025, Lloyds Banking Group has forecast.

    The lender, which owns Halifax and is Britain’s largest mortgage provider, said that by the end of 2023 UK house prices will have fallen 5% over the course of the year and are likely to fall another 2.4% in 2024.

    Those forecasts, which were released alongside its third-quarter financial results on Wednesday, suggest UK house prices will have dropped 11% from their peak last year, when the market was still being fuelled by a rush for larger homes in the wake of the coronavirus pandemic.

    Lloyds said the first signs of growth would only start to emerge in 2025, with its economists predicting a 2.3% increase in house prices that year. You can read the full report here:

    The Israel-Hamas conflict adds another cloud on the horizon for the global economy, according to the head of the International Monetary Fund (IMF).

    Kristalina Georgieva was at “Davos in the desert”, a big conference hosted by Saudi Arabia.

    The Future Investment Initiative conference was the subject of boycotts five years ago when Saudi crown prince Mohammed bin Salman allegedly ordered the murder of exiled critic Jamal Khashoggi. The distaste of global leaders has apparently faded since, however.

    Speaking on the Israel-Hamas conflict, Georgieva said (via Reuters):
    What we see is more jitters in what has already been an anxious world. And on a horizon that had plenty of clouds, one more – and it can get deeper.
    The war has been devastating for Israel and Gaza. Hamas killed more than 1,400 people and took more than 220 people as hostages in an assault on Israel. The health ministry in Gaza, which is run by Hamas, said last night that Gaza’s total death toll after 18 days of retaliatory bombing was 5,791 people, including 2,360 children.

    The broader economic impacts have been relatively limited, but Georgieva said that some neighbouring countries were feeling them:
    Egypt, Lebanon, Jordan. There, the channels of impact are already visible. Uncertainty is a killer for tourist inflows. Investors are going to be shy to go to that place.
    Reckitt, the maker of Dettol bleach and Finish dishwasher products, has missed sales expectations as revenues dropped 3.6% year-on-year in the third quarter.

    Its shares were down 2.3% on Wednesday morning, despite it also committing to buy back £1bn in shares.

    It missed expectations because of the comparison with strong sales in the same period last year in its nutrition division, which makes baby milk powder.

    Kris Licht, Reckitt’s chief executive, said:
    Reckitt delivered a strong quarter with 6.7% like-for-like growth across our hygiene and health businesses and has maintained market leadership in our US nutrition business.
    We are firmly on track to deliver our full year targets, despite some tough prior year comparatives that we continue to face in our US Nutrition business and across our OTC [over-the-counter medicines] portfolio in the fourth quarter.
    Speaking of Deutsche Bank, it posted its own earnings this morning: third-quarter profits dropped by 8%, but that was better than expected by analysts.

    Shares in Deutsche, which has struggled in the long shadow of the financial crisis, are up 4.2% in early trading.

    Reuters reported:
    The bank was slightly more optimistic on its revenue outlook for the full year, forecasting it would reach €29bn ($30.73bn), the top end of its previous guidance range, as it upgraded the outlook for revenue at the retail division.
    Net profit attributable to shareholders at Germany’s largest bank was €1.031bn, better than analyst expectations for profit of around €937m.
    Though earnings dropped, it marked the 13th consecutive profitable quarter, a considerable streak in the black after years of hefty losses.
    Here are the opening snaps from across Europe’s stock market indices, via Reuters:
    EUROPE’S STOXX 600 DOWN 0.1%
    FRANCE’S CAC 40 DOWN 0.4%
    SPAIN’S IBEX DOWN 0.3%
    EURO STOXX INDEX DOWN 0.2%
    EURO ZONE BLUE CHIPS DOWN 0.3%
    European indices appeared to be taking their lead from the US, where Google owner Alphabet’s share price dropped in after-hours trading last night. That dragged down futures for US tech companies, even though another tech titan, Microsoft, pleased investors.

    Analysts led by Jim Reid at Deutsche Bank said:
    Microsoft saw its shares rise +3.95% in after-market trading as revenues of $56.52bn (+13% y/y) beat estimates of $54.54bn and EPS came in at $2.99 (v $2.65 expected). The beat comes on the back of recovering cloud-computing growth with corporate customers spending more than expected. The other megacap, Alphabet, missed on their cloud revenue estimates at $8.4bn (v $8.6bn) with the share price falling -5.93% after hours as operating income and margins both surprised slightly to the downside.
    You can read more about Google’s performance here:

    We’re off to the races on the London Stock Exchange this morning: and the FTSE 100 has dipped at the open.

    Shares on London’s blue-chip index are down by 0.15% in the early trades. Lloyds Banking Group shares initially moved higher, but now they are down 2.1% after the bank flagged increasing competition hitting net interest margins.

  • TechScape: As the US election campaign heats up, so could the market for misinformation

    X, the platform formerly known as Twitter, announced it will allow political advertising back on the platform – reversing a global ban on political ads since 2019. The move is the latest to stoke concerns about the ability of big tech to police online misinformation ahead of the 2024 elections – and X is not the only platform being scrutinised.

    Social media firms’ handling of misinformation and divisive speech reached a breaking point in the 2020 US presidential elections when Donald Trump used online platforms to rile up his base, culminating in the storming of the Capitol building on 6 January 2021. But in the time since, companies have not strengthened their policies to prevent such crises, instead slowly stripping protections away. This erosion of safeguards, coupled with the rise of artificial intelligence, could create a perfect storm for 2024, experts warn.

    As the election cycle heats up, Twitter’s move this week is not the first to raise major concerns about the online landscape for 2024 – and it won’t be the last.

    Musk’s free speech fantasy

    Twitter’s change to election advertising policies is hardly surprising to those following the platform’s evolution under the leadership of Elon Musk, who purchased the company in 2022. In the months since his takeover, the erratic billionaire has made a number of unilateral changes to the site – not least of all the rebrand of Twitter to X.

    Many of these changes have centered on Musk’s goal to make Twitter profitable at all costs. The platform, he complained, was losing $4m per day at the time of his takeover, and he stated in July that its cash flow was still in the negative. More than half of the platform’s top advertisers have fled since the takeover – roughly 70% of the platform’s leading advertisers were not spending there as of last December. For his part, this week Musk threatened to sue the Anti-Defamation League, saying, “based on what we’ve heard from advertisers, ADL seems to be responsible for most of our revenue loss”. Whatever the reason, his decision to re-allow political advertisers could help boost revenue at a time when X sorely needs it.

    But it’s not just about money. Musk has identified himself as a “free speech absolutist” and seems hell bent on turning the platform into a social media free-for-all. Shortly after taking the helm of Twitter, he lifted bans on the accounts of Trump and other rightwing super-spreaders of misinformation. Ahead of the elections, he has expressed a goal of turning Twitter into a “digital town square” where voters and candidates can discuss politics and policies – solidified recently by its (disastrous) hosting of Republican governor Ron DeSantis’s campaign announcement.

    Misinformation experts and civil rights advocates have said this could spell disaster for future elections. “Elon Musk is using his absolute control over Twitter to exert dangerous influence over the 2024 election,” said Imran Ahmed, head of the Center for Countering Digital Hate, a disinformation and hate speech watchdog that Musk himself has targeted in recent weeks.

    In addition to the policy changes, experts warn that the massive workforce reduction Twitter has carried out under Musk could impact the ability to deal with misinformation, as trust and safety teams are now reported to be woefully understaffed.

    Let the misinformation wars begin

    While Musk’s decisions have been the most high profile in recent weeks, Twitter is not the only platform whose policies have raised alarm.
    In June, YouTube reversed its election integrity policy, now allowing content contesting the validity of the 2020 elections to remain on the platform. Meanwhile, Meta has also reinstated accounts of high-profile spreaders of misinformation, including Donald Trump and Robert F Kennedy Jr.

    Experts say these reversals could create an environment similar to that which fundamentally threatened democracy in 2020. But now there is an added risk: the meteoric rise of artificial intelligence tools. Generative AI, which has increased its capabilities in the last year, could streamline the ability to manipulate the public on a massive scale.

    Meta has a longstanding policy that exempts political ads from its misinformation policies and has declined to state whether that immunity will extend to manipulated and AI-generated images in the upcoming elections. Civil rights watchdogs have envisioned a worst-case scenario in which voters’ feeds are flooded with deceptively altered and fabricated images of political figures, eroding their ability to trust what they read online and chipping away at the foundations of democracy.

    While Twitter is not the only company rolling back its protections against misinformation, its extreme stances are moving the goalposts for the entire industry. The Washington Post reported this week that Meta was considering banning all political advertising on Facebook, but reversed course to better compete with its rival Twitter, which Musk had promised to transform into a haven for free speech. Meta also dissolved its Facebook Journalism Project, tasked with promoting accurate information online, and its “responsible innovation team,” which monitored the company’s products for potential risks, according to the Washington Post.

    Twitter may be the most scrutinised in recent weeks, but it’s clear that almost all platforms are moving towards an environment in which they throw up their hands and say they cannot or will not police dangerous misinformation online – and that should concern us all.

    The wider TechScape

    • David Shariatmadari goes deep with the co-founder of DeepMind about the mind-blowing potential of artificial intelligence in biotech in this long read.
    • New tech news site 404 Media has published a distressing investigation into AI-generated mushroom-foraging books on Amazon. In a space where misinformation could mean the difference between eating something delicious and something deadly, the stakes are high.
    • If you can’t beat them, join them: celebrities have been quietly working to sign deals licensing their artificially generated likenesses as the AI arms race continues.
    • Elsewhere in AI – scammers are on the rise, and their tactics are terrifying. And the Guardian has blocked OpenAI from trawling its content.
    • Can you be “shadowbanned” on a dating app? Some users are convinced their profiles are not being prioritised in the feed. A look into this very modern anxiety, and how the algorithms of online dating actually work.

  • A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

    It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.

    The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations unhindered in maximising shareholder value while incinerating the planet.

    So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it. Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.

    In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated, but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly for the way social media are engines for disseminating BS and disinformation at light-speed.

    And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

    So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects” and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.

    You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine.
    We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it?

    That’s what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).

    The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

    What I’ve been reading

    Shake it up
    David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.

    Dish the dirt
    There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.

    Truth bombs
    What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.

  • Top tech firms commit to AI safeguards amid fears over pace of change

    Top players in the development of artificial intelligence, including Amazon, Google, Meta, Microsoft and OpenAI, have agreed to new safeguards for the fast-moving technology, Joe Biden announced on Friday.

    Among the guidelines brokered by the Biden administration are watermarks for AI content to make it easier to identify and third-party testing of the technology that will try to spot dangerous flaws.

    Speaking at the White House, Biden said the companies’ commitments were “real and concrete” and will help “develop safe, secure and trustworthy” technologies that benefit society and uphold values.

    “Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs in industries,” he said. “These commitments are a promising step, but we have a lot more work to do together.”

    The president said AI brings “incredible opportunities”, as well as risks to society and the economy. The agreement, he said, would underscore three fundamental principles – safety, security and trust.

    The White House said seven US companies had agreed to the voluntary commitments, which are meant to ensure their AI products are safe before they release them.

    The announcement comes as critics charge that AI’s breakneck expansion threatens to allow real damage to occur before laws catch up. The voluntary commitments are not legally binding, but may create a stopgap while more comprehensive action is developed.

    A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

    The tech companies agreed to eight measures:
    Using watermarking on audio and visual content to help identify content generated by AI.
    Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
    Sharing trust and safety information with the government and other companies.
    Investing in cybersecurity measures.
    Encouraging third parties to uncover security vulnerabilities.
    Reporting societal risks such as inappropriate uses and bias.
    Prioritizing research on AI’s societal risks.
    Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
    The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

    Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

    “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the non-profit Common Sense Media.

    Some critics have argued that the guidelines, as detailed at a high level in a fact sheet released by the White House, do not go far enough in addressing concerns over the way AI could impact society, and give the administration little to no remedy for enforcement if the companies do not abide by them. “We need a much more wide-ranging public deliberation and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models,” said Amba Kak, the executive director of research group the AI Now Institute.

    “A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” Kak said. “What this list covers is a set of problems that are comfortable to business as usual, but we also need to be looking at what’s not on the list – things like competition concerns, discriminatory impacts of these systems. The companies have said they’ll ‘research’ privacy and bias, but we already have robust bodies of research on both – what we need is accountability.”

    Voluntary guidelines amount to little more than self-regulation, said Caitriona Fitzgerald, the deputy director at the non-profit research group the Electronic Privacy Information Center (Epic). A similar approach was taken with social media platforms, she said, and it didn’t work. “It’s internal compliance checking and it’s similar to what we’ve seen in the FTC consent orders from the past when they required Facebook to do internal privacy impact assessments and they just became a box-checking exercise.”

    The Senate majority leader, Chuck Schumer, has said he will introduce legislation to regulate AI.
    He has held a number of briefings with government officials to educate senators about an issue that’s attracted bipartisan interest.

    A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, vice-president Kamala Harris and other officials.

    Senator Mark Warner said the guidelines released on Friday are a start but that “we need more than industry commitments”.

    “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse,” Warner said in a statement.

    But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.

    The software trade group BSA, which includes Microsoft as a member, said on Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.

    “Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.

    Several countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-country bloc.

    The details of the European legislation are still being hashed out, but the EU AI Act contains robust regulations that would create significant consumer protections against the overreach, privacy violations and biases of certain types of high-risk AI models.

    Meanwhile, conversations in the US remain in the early stages. Fitzgerald, of Epic, said that while the voluntary guidelines are just one in a series of guidelines the White House has released on AI, she worries it might cause Congress to slow down their push to create regulations. “We need the rules of the road before it gets too big to regulate,” she said.

    The UN secretary general, António Guterres, recently said the United Nations was “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.

    The United Nations chief also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

    The White House said on Friday that it had already consulted on the voluntary commitments with a number of countries.

    Associated Press contributed to this story

  • Oppenheimer biographer supports US bill to bar use of AI in nuclear launches

    A biographer whose Pulitzer prize-winning book inspired the new movie Oppenheimer has expressed support for a US senator’s attempt to bar the use of artificial intelligence in nuclear weapons launches.

    “Humans must always maintain sole control over nuclear weapons,” Kai Bird, author of American Prometheus, said in a statement reported by Politico.

    “This technology is too dangerous to gamble with. This bill will send a powerful signal to the world that the United States will never take the reckless step of automating our nuclear command and control.”

    In Washington on Thursday, Bird met Ed Markey, the Democratic Massachusetts senator who is attempting to add the AI-nuclear provision to a major defense spending bill.

    Markey, Politico said, was a friend of Bird’s co-author, the late Tufts University professor Martin J Sherwin.

    A spokesperson for the senator told Politico Markey and Bird “shared their mutual concerns over the proliferation of artificial intelligence in national security and defense without guardrails, and the risks of using nuclear weapons in south Asia and elsewhere.

    “They also discussed ways to increase awareness of nuclear issues among the younger set.”

    J Robert Oppenheimer was the driving force behind US development of the atomic bomb, at the end of the second world war.

    Bird and Sherwin’s book is now the inspiration for Oppenheimer, Christopher Nolan’s summer blockbuster starring Cillian Murphy in the title role.

    The movie opens in the US on Friday – in competition with Barbie, Greta Gerwig’s film about the popular children’s doll.

    On Friday, Nolan told the Guardian: “International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs.

    “It’s reassuringly difficult to make nuclear weapons and so it’s relatively easy to spot when a country is doing that. I don’t believe any of that applies to AI.”

    Nolan also noted “very strong parallels” between Oppenheimer and AI experts now calling for such technology to be controlled.

    Nolan said he had “been interested to talk to some of the leading researchers in the AI field, and hear from them that they view this as their ‘Oppenheimer moment’. And they’re clearly looking to his story for some kind of guidance … as a cautionary tale in terms of what it says about the responsibility of somebody who’s putting this technology to the world, and what their responsibilities would be in terms of unintended consequences.”

    Bird and Sherwin’s biography, subtitled The Triumph and Tragedy of J Robert Oppenheimer, was published in 2008.

    Reviewing for the Guardian, James Buchan saluted the authors’ presentation of “the cocktails and wire-taps and love affairs of Oppenheimer’s existence, his looks and conversation, the way he smoked the cigarettes and pipe that killed him, his famous pork-pie hat and splayed walk, and all the tics and affectations that his students imitated and the patriots and military men despised.

    “It is as if these authors had gone back to James Boswell, who said of Dr Johnson: ‘Everything relative to so great a man is worth observing.’”

  • ‘An evolution in propaganda’: a digital expert on AI influence in elections

    Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.

    Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.

    The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.

    Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?

    When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate, perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.

    There’s been two major developments in the last six months. First is the rise of ChatGPT, which is generated text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools. At the same time, text-to-still image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.

    Why did the political use of deepfakes not materialize?

    The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it and factcheck it and respond to it. You might call it a natural immune response.

    Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.

    How do you think large language models can change political propaganda?

    I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content.
    It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain that differentiation between this is the tactic that a new technology enables versus that this is going to swing an election.

    As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots. You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.

    And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.

    That seems tied to another aspect you’ve written about, that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.

    It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.

    When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with it. The public acclimated to the idea that Photoshop existed and that not everything that you see with your eyes is a thing that necessarily is as it seems – the idea that the woman that you see on the magazine cover probably does not actually look like that. Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.

    Now anybody can make it look like the pope is wearing Balenciaga.

    Exactly.

    In the US, it seems like meaningful federal regulation is pretty far away if it’s going to come at all. Absent of that, what are some of the sort of short-term ways to mitigate these risks?

    First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.

    But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify if automation is in play.

    On the image front, social media platforms, as well as generative AI companies, are starting to come together to try and determine what kind of watermarking might be useful so that platforms and others can determine computationally whether an image is generated.

    Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends.
    How effective do you see those policies being?

    It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology and you can argue you’ve made a values-based decision to not allow your products to generate particular types of content. On the flip side, though, there are models that are open source and anyone can go and get access to them. Some of the things that are being done with some of the open source models and image generation are deeply harmful, but once the model is open sourced, the ability to control its use is much more limited.

    And it’s a very big debate right now in the field. You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.

    What’s the media or the public getting wrong about AI and disinformation?

    One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create that content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities – where people are going to continue to hold the belief in something that they’ve seen on the internet as long as it tells them what they want. Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.

  • The Guardian view on Sunak’s foreign policy: a Europe-shaped hole | Editorial

    The alliance between Britain and the US, resting on deep foundations of shared history and strategic interest, is not overly affected by the personal relationship between a prime minister and a president.

    Sometimes individual affinity is consequential, as when Margaret Thatcher and Ronald Reagan were aligned over cold war doctrine, or when Tony Blair put Britain in lockstep with George W Bush for the march to war in Iraq. But there is no prospect of Rishi Sunak forming such a partnership – for good or ill – with Joe Biden at this week’s Washington summit.

    Viewed from the White House, the prime minister cuts an insubstantial figure – the caretaker leader of a country that has lost its way. That doesn’t jeopardise the underlying relationship. Britain is a highly valued US ally, most notably in the fields of defence, security and intelligence. On trade and economics, Mr Sunak’s position is less comfortable. The prime minister is a poor match with a president who thinks Brexit was an epic blunder and whose flagship policy is a rebuttal of the sacred doctrines of the Conservative party.

    Mr Biden is committed to shoring up American primacy by means of massive state support for green technology, tax breaks for foreign investment and reconfiguring supply chains with a focus on national security. Mr Sunak’s instincts are more laissez-faire, and his orthodox conservative budgets preclude interventionist statecraft.

    The two men disagree on a fundamental judgment about the future direction of the global economy, but only one of them has a hand on the steering wheel. Mr Sunak looks more like a passenger, or a pedestrian, since Britain bailed out of the EU – the vehicle that allows European countries to aggregate mid-range economic heft into continental power.

    London lost clout in the world by surrendering its seat in Brussels, but that fact is hard for Brexit ideologues to process. Their worldview is constructed around the proposition that EU membership depleted national sovereignty and that leaving the bloc would open more lucrative trade routes. Top of the wishlist was a deal with Washington, and Mr Biden has said that won’t happen. Even if it did, the terms would be disadvantageous to Britain as the supplicant junior partner.

    If Mr Sunak grasps that weakness, he dare not voice it. Instead, Downing Street emits vague noises about Britain’s leading role in AI regulation. But, in governing uses of new technology, Brussels matters more to Washington. London is not irrelevant, but British reach is reduced when ministers are excluded from the rooms where their French, German and other continental counterparts develop policy.

    Those are the relationships that Mr Sunak must cultivate with urgency. But his view of Europe is circumscribed by Brexit ideology and parochial campaign issues. His meetings with the French president, Emmanuel Macron, have been dominated by the domestic political obsession with small-boat migration across the Channel. The prime minister has no discernible relationship with the German chancellor, Olaf Scholz. He has not visited Berlin.

    Negotiating the Windsor framework to stabilise Northern Ireland’s status in post-Brexit trade was a vital step in repairing damage done by Boris Johnson and Liz Truss to UK relations with the EU. But there is still a gaping European hole in Britain’s foreign policy. It is visible all the way across the Atlantic, even if the prime minister refuses to see it.

  • ‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI

    Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls “2am brain”, a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. “It’s just like baking,” she says. “You can’t force it, you can’t turn the temperature up, you can’t make it go faster. It will take however long it takes. And when it’s done baking, it will present itself.”

    It was Chowdhury’s 2am brain that first coined the phrase “moral outsourcing” for a concept that now, as one of the leading thinkers on artificial intelligence, has become a key point in how she considers accountability and governance when it comes to the potentially revolutionary impact of AI.

    Moral outsourcing, she says, applies the logic of sentience and choice to AI, allowing technologists to effectively reallocate responsibility for the products they build onto the products themselves – technical advancement becomes predestined growth, and bias becomes intractable.

    “You would never say ‘my racist toaster’ or ‘my sexist laptop’,” she said in a Ted Talk from 2018. “And yet we use these modifiers in our language about artificial intelligence. And in doing so we’re not taking responsibility for the products that we build.” Writing ourselves out of the equation produces systematic ambivalence on par with what the philosopher Hannah Arendt called the “banality of evil” – the wilful and cooperative ignorance that enabled the Holocaust. “It wasn’t just about electing someone into power that had the intent of killing so many people,” she says. “But it’s that entire nations of people also took jobs and positions and did these horrible things.”

    Chowdhury does not really have one title, she has dozens, among them Responsible AI fellow at Harvard, AI global policy consultant and former head of Twitter’s Meta team (Machine Learning Ethics, Transparency and Accountability). AI has been giving her 2am brain for some time. Back in 2018 Forbes named her one of the five people “building our AI future”.

    A data scientist by trade, she has always worked in a slightly undefinable, messy realm, traversing the realms of social science, law, philosophy and technology, as she consults with companies and lawmakers in shaping policy and best practices. Around AI, her approach to regulation is unique in its staunch middle-ness – both welcoming of progress and firm in the assertion that “mechanisms of accountability” should exist.

    Effervescent, patient and soft-spoken, Chowdhury listens with disarming care. She has always found people much more interesting than what they build or do. Before skepticism around tech became reflexive, Chowdhury had fears too – not of the technology itself, but of the corporations that developed and sold it.

    As the global lead at the responsible AI firm Accenture, she led the team that designed a fairness evaluation tool that pre-empted and corrected algorithmic bias. She went on to start Parity, an ethical AI consulting platform that seeks to bridge “different communities of expertise”. At Twitter – before it became one of the first teams disbanded under Elon Musk – she hosted the company’s first-ever algorithmic bias bounty, inviting outside programmers and data scientists to evaluate the site’s code for potential biases.
    The exercise revealed a number of problems, including that the site’s photo-cropping software seemed to overwhelmingly prefer faces that were young, feminine and white.

    This is a strategy known as red-teaming, in which programmers and hackers from outside an organization are encouraged to try and curtail certain safeguards to push a technology to “do bad things to identify what bad things it’s capable of”, says Chowdhury. These kinds of external checks and balances are rarely implemented in the world of tech because of technologists’ fear of “people touching their baby”.

    She is currently working on another red-teaming event for Def Con – a convention hosted by the hacker organization AI Village. This time, hundreds of hackers are gathering to test ChatGPT, with the collaboration of its founder OpenAI, along with Microsoft, Google and the Biden administration. The “hackathon” is scheduled to run for over 20 hours, providing them with a dataset that is “totally unprecedented”, says Chowdhury, who is organizing the event with Sven Cattell, founder of AI Village, and Austin Carson, president of the responsible AI non-profit SeedAI.

    In Chowdhury’s view, it’s only through this kind of collectivism that proper regulation – and regulation enforcement – can occur. In addition to third-party auditing, she also serves on multiple boards across Europe and the US helping to shape AI policy. She is wary, she tells me, of the instinct to over-regulate, which could lead models to overcorrect and not address ingrained issues. When asked about gay marriage, for example, ChatGPT and other generative AI tools “totally clam up”, trying to make up for the amount of people who have pushed the models to say negative things. But it’s not easy, she adds, to define what is toxic and what is hateful. “It’s a journey that will never end,” she tells me, smiling. “But I’m fine with that.”

    Early on, when she first started working in tech, she realized that “technologists don’t always understand people, and people don’t always understand technology”, and sought to bridge that gap. In its broadest interpretation, she tells me, her work deals with understanding humans through data. “At the core of technology is this idea that, like, humanity is flawed and that technology can save us,” she says, noting language like “body hacks” that implies a kind of optimization unique to this particular age of technology. There is an aspect of it that kind of wishes we were “divorced from humanity”.

    Chowdhury has always been drawn to humans, their messiness and cloudiness and unpredictability. As an undergrad at MIT, she studied political science, and, later, after a disillusioning few months in non-profits in which she “knew we could use models and data more effectively, but nobody was”, she went to Columbia for a master’s degree in quantitative methods.

    In the last month, she has spent a week in Spain helping to carry out the launch of the Digital Services Act, another in San Francisco for a cybersecurity conference, another in Boston for her fellowship, and a few days in New York for another round of Def Con press. After a brief while in Houston, where she’s based, she has upcoming talks in Vienna and Pittsburgh on AI nuclear misinformation and Duolingo, respectively.

    At its core, what she prescribes is a relatively simple dictum: listen, communicate, collaborate.
    And yet, even as Sam Altman, the founder and CEO of OpenAI, testifies before Congress that he’s committed to preventing AI harms, she still sees familiar tactics at play. When an industry experiences heightened scrutiny, barring off prohibitive regulation often means taking control of a narrative – ie calling for regulation, while simultaneously spending millions in lobbying to prevent the passing of regulatory laws.

    The problem, she says, is a lack of accountability. Internal risk analysis is often distorted within a company because risk management doesn’t often employ morals. “There is simply risk and then your willingness to take that risk,” she tells me. When the risk of failure or reputational harm becomes too great, it moves to an arena where the rules are bent in a particular direction. In other words: “Let’s play a game where I can win because I have all of the money.”

    But people, unlike machines, have indefinite priorities and motivations. “There are very few fundamentally good or bad actors in the world,” she says. “People just operate on incentive structures.” Which in turn means that the only way to drive change is to make use of those structures, ebbing them away from any one power source. Certain issues can only be tackled at scale, with cooperation and compromise from many different vectors of power, and AI is one of them.

    Though, she readily attests that there are limits. Points where compromise is not an option. The rise of surveillance capitalism, she says, is hugely concerning to her. It is a use of technology that, at its core, is unequivocally racist and therefore should not be entertained. “We cannot put lipstick on a pig,” she said at a recent talk on the future of AI at New York University’s School of Social Sciences. “I do not think ethical surveillance can exist.”

    Chowdhury recently wrote an op-ed for Wired in which she detailed her vision for a global governance board. Whether it be surveillance capitalism or job disruption or nuclear misinformation, only an external board of people can be trusted to govern the technology – one made up of people like her, not tied to any one institution, and one that is globally representative. On Twitter, a few users called her framework idealistic, referring to it as “blue sky thinking” or “not viable”. It’s funny, she tells me, given that these people are “literally trying to build sentient machines”.

    She’s familiar with the dissonance. “It makes sense,” she says. We’re drawn to hero narratives, the assumption that one person is and should be in charge at any given time. Even as she organizes the Def Con event, she tells me, people find it difficult to understand that there is a team of people working together every step of the way. “We’re getting all this media attention,” she says, “and everybody is kind of like, ‘Who’s in charge?’ And then we all kind of look at each other and we’re like, ‘Um. Everyone?’”