More stories

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump as part of an effort to avoid being used to distract from or misinform about the 2024 US presidential election.
    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.
    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”
    AI-generated imagery has recently become a concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift triggered lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images.
    The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material with or without the use of AI.
    Holz’s comments come as safeguards created by image-generator operators are playing a game of cat-and-mouse with users to prevent the creation of questionable content.
    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”.
    “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted. It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”
    Still, the image-generation company Inflection AI said in October that the company’s chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.
    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI.
    “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.
    OpenAI chief executive Sam Altman said at an event recently: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”
    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation.
    The FCC later announced a ban on AI-generated voices in robocalls.
    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.
    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun and Trump meeting Pope Francis in the White House.
    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”.
    Bloomberg noted that what is permitted or not permitted varies according to the software version used. An older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.
    But if Midjourney bans the generation of AI-generated political images, consumers – among them voters – will probably be unaware.
    “We’ll probably just hammer it and not say anything,” Holz said.

  • Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live | André Spicer

    During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”
    It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).
    The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, where misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.
    Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not determine just the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.
    While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade, we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.
    In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible, but aren’t actually the correct answer to whatever the question was.
    Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world.
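    The “statistically informed guesses” described above can be pictured with a toy sketch. The snippet below is purely illustrative – the candidate continuations and their probabilities are invented, not taken from any real model – but it shows how sampling from a probability distribution usually returns the right answer and occasionally returns a fluent, confident-sounding wrong one, which is the seed of a “hallucination”.

```python
import random

# Invented next-token distribution for the prompt
# "The battle of Waterloo was fought in ...".
# A real model derives these probabilities from training data;
# the sampling step below works the same way either way.
next_token_probs = {
    "1815": 0.80,      # correct continuation
    "1805": 0.12,      # plausible but wrong
    "1915": 0.05,      # wrong
    "Belgium": 0.03,   # grammatical, but dodges the question
}

def sample_next_token(probs: dict) -> str:
    """Pick one continuation at random, weighted by the model's probabilities."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Most samples are right, but a minority are confident-sounding errors:
# in miniature, that minority is what the article calls a hallucination.
print([sample_next_token(next_token_probs) for _ in range(10)])
```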
    In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as to come up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as when did the battle of Waterloo happen). The real problems arise when the outputs of generative AI have important consequences and the outputs can’t easily be verified.
    If AI-produced hallucinations are used to answer important but difficult to verify questions, such as the state of the economy or the war in Ukraine, there is a real danger it could create an environment where some people start to make important voting decisions based on an entirely illusory universe of information. There is a danger that voters could end up living in generated online realities that are based on a toxic mixture of AI hallucinations and political expediency.
    Although AI technologies pose dangers, there are measures that could be taken to limit them. Technology companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure AIs are trained on authoritative information sources. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of deceptive AI-generated information. Most importantly, voters could exercise their critical judgment by reality-checking important pieces of information they are unsure about.
    The rise of generative AI has already started to fundamentally change many professions and industries. Politics is likely to be at the forefront of this change. The Brookings Institution points out that there are many positive ways generative AI could be used in politics. But at the moment its negative uses are most obvious, and more likely to affect us imminently. It is vital we strive to ensure that generative AI is used for beneficial purposes and does not simply lead to more botshit.
    André Spicer is professor of organisational behaviour at the Bayes Business School at City, University of London. He is the author of the book Business Bullshit

  • US orders immediate stop to some AI chip exports to China; Lloyds profits up but lending margins fall – business live

    Good morning, and welcome to our live, rolling coverage of business, economics and financial markets.
    The US has ordered the immediate halt of exports to China of hi-tech computer chips used for artificial intelligence, chipmaker Nvidia has said.
    Nvidia said the US had brought forward a ban which had given the company 30 days from 17 October to stop shipments. Instead of a grace period, the ban is “effective immediately”, the company said in a statement to US regulators.
    The company did not say why the ban had been brought forward so abruptly, but it comes amid a deep rivalry between the US and China over who will dominate the AI boom.
    Nvidia said that shipments of its A100, A800, H100, H800, and L40S chips would be affected. Those chips, which retail at several thousand dollars apiece, are specifically designed for use in datacentres to train AI and large language models.
    Demand for AI chips has soared as excitement has grown about the capabilities of generative AI, which can produce new text, images and video based on the inputs of huge volumes of data.
    Nvidia said it “does not anticipate that the accelerated timing of the licensing requirements will have a near-term meaningful impact on its financial results”.
    Lloyds profits up but competition squeezes margins
    In the UK, Lloyds Banking Group has reported a rise in profits even as it said competition was hitting its margins as mortgage rates fall back.
    Britain’s biggest bank said it made £1.9bn in profits from July to September, an increase compared to the £576m for the same period last year. The comparison has an important caveat, however: the bank has restated its financials to conform to new accounting rules.
    Net interest margin – the measure of the difference between the cost of borrowing and what it charges customers when it lends – was 3.08% in the third quarter, down 0.06 percentage points in the quarter “given the expected mortgage and deposit pricing headwinds”, it said.
    The bank did set aside £800m to deal with rising defaults from borrowers, but said that it was still seeing “broadly stable credit trends and resilient asset quality”.
    The agenda
    An EY-linked auditor to the Adani Group is under scrutiny from India’s accounting regulator, Bloomberg News has reported.
    The National Financial Reporting Authority, or NFRA, has started an inquiry into S.R. Batliboi, a member firm of EY in India, Bloomberg said, citing unnamed sources.
    S.R. Batliboi is the auditor for five Adani companies which account for about half Adani’s revenues.
    Bloomberg reported that representatives for NFRA and the Adani Group didn’t respond to an emailed request for comments. A representative for EY and S.R. Batliboi declined to comment to Bloomberg.
    China’s economic slowdown is causing worries at home, as well as in Germany and other big trade partners.
    A series of Chinese government actions have signalled their concern about slowing growth, which could cause problems for an authoritarian regime.
    Xi Jinping, China’s president, visited the People’s Bank of China for the first time, according to reports yesterday. “The purpose of the visit was not immediately known,” said Reuters, ominously.
    State media also reported that China had sharply lifted its 2023 budget deficit to about 3.8% of GDP because of an extra $137bn in government borrowing. That was up from 3%.
    The Global Times, a state-controlled newspaper, said the move would “benefit home consumption and the country’s economic growth”, citing an unnamed official.
    Germany’s economic fortunes were better than expected in October, according to a closely watched indicator – but whether it’s overall good news or bad depends on who you ask.
    The ifo business climate index rose from 85.8 to 86.9 points, higher than the 85.9 expected by economists polled beforehand by Reuters.
    Germany has been struggling as growth slows in China, a key export market, as well as with the costs of switching from Russian gas to fuel its economy. You can read more context here.
    Franziska Palmas, senior Europe economist at Capital Economics, a consultancy, is firmly in team glass half empty. She said:
    The small rise in the Ifo business climate index (BCI) in October still left the index in contractionary territory, echoing the downbeat message from the composite PMI released yesterday. This chimes with our view that the German economy is again in recession.
    Despite the improvement in October, the bigger picture remains that the German economy is struggling. The Ifo current conditions index, which has a better relationship with GDP than the BCI, is still consistent with GDP contracting by around 1% quarter-on-quarter. This is an even worse picture than that painted by the composite PMI, which fell in October but points to output dropping by “only” 0.5% quarter-on-quarter.
    But journalist Holger Zschaepitz said it looks like things are improving.
    UK house prices will continue to slide this year and in 2024 and will not start to recover until 2025, Lloyds Banking Group has forecast.
    The lender, which owns Halifax and is Britain’s largest mortgage provider, said that by the end of 2023 UK house prices will have fallen 5% over the course of the year and are likely to fall another 2.4% in 2024.
    Those forecasts, which were released alongside its third-quarter financial results on Wednesday, suggest UK house prices will have dropped 11% from their peak last year, when the market was still being fuelled by a rush for larger homes in the wake of the coronavirus pandemic.
    Lloyds said the first signs of growth would only start to emerge in 2025, with its economists predicting a 2.3% increase in house prices that year. You can read the full report here.
    The Israel-Hamas conflict adds another cloud on the horizon for the global economy, according to the head of the International Monetary Fund (IMF).
    Kristalina Georgieva was at “Davos in the desert”, a big conference hosted by Saudi Arabia.
    The Future Investment Initiative conference was the subject of boycotts five years ago when Saudi crown prince Mohammed bin Salman allegedly ordered the murder of exiled critic Jamal Khashoggi. The distaste of global leaders has apparently faded since, however.
    Speaking on the Israel-Hamas conflict, Georgieva said (via Reuters):
    What we see is more jitters in what has already been an anxious world. And on a horizon that had plenty of clouds, one more – and it can get deeper.
    The war has been devastating for Israel and Gaza. Hamas killed more than 1,400 people and took more than 220 people as hostages in an assault on Israel. The health ministry in Gaza, which is run by Hamas, said last night that Gaza’s total death toll after 18 days of retaliatory bombing was 5,791 people, including 2,360 children.
    The broader economic impacts have been relatively limited, but Georgieva said that some neighbouring countries were feeling them:
    Egypt, Lebanon, Jordan. There, the channels of impact are already visible. Uncertainty is a killer for tourist inflows. Investors are going to be shy to go to that place.
    Reckitt, the maker of Dettol bleach and Finish dishwasher products, has missed sales expectations as revenues dropped 3.6% year-on-year in the third quarter.
    Its shares were down 2.3% on Wednesday morning, despite it also committing to buy back £1bn in shares.
    It missed expectations because of the comparison with strong sales in the same period last year in its nutrition division, which makes baby milk powder.
    Kris Licht, Reckitt’s chief executive, said:
    Reckitt delivered a strong quarter with 6.7% like-for-like growth across our hygiene and health businesses and has maintained market leadership in our US nutrition business.
    We are firmly on track to deliver our full year targets, despite some tough prior year comparatives that we continue to face in our US Nutrition business and across our OTC [over-the-counter medicines] portfolio in the fourth quarter.
    Speaking of Deutsche Bank, it posted its own earnings this morning: third-quarter profits dropped by 8%, but that was better than expected by analysts.
    Shares in Deutsche, which has struggled in the long shadow of the financial crisis, are up 4.2% in early trading.
    Reuters reported:
    The bank was slightly more optimistic on its revenue outlook for the full year, forecasting it would reach €29bn ($30.73bn), the top end of its previous guidance range, as it upgraded the outlook for revenue at the retail division.
    Net profit attributable to shareholders at Germany’s largest bank was €1.031bn, better than analyst expectations for profit of around €937m.
    Though earnings dropped, it marked the 13th consecutive profitable quarter, a considerable streak in the black after years of hefty losses.
    Here are the opening snaps from across Europe’s stock market indices, via Reuters:
    EUROPE’S STOXX 600 DOWN 0.1%
    FRANCE’S CAC 40 DOWN 0.4%
    SPAIN’S IBEX DOWN 0.3%
    EURO STOXX INDEX DOWN 0.2%
    EURO ZONE BLUE CHIPS DOWN 0.3%
    European indices appeared to be taking their lead from the US, where Google owner Alphabet’s share price dropped in after-hours trading last night. That dragged down futures for US tech companies, even though another tech titan, Microsoft, pleased investors.
    Analysts led by Jim Reid at Deutsche Bank said:
    Microsoft saw its shares rise +3.95% in after-market trading as revenues of $56.52bn (+13% y/y) beat estimates of $54.54bn and EPS came in at $2.99 (v $2.65 expected). The beat comes on the back of recovering cloud-computing growth with corporate customers spending more than expected. The other megacap, Alphabet, missed on their cloud revenue estimates at $8.4bn (v $8.6bn) with the share price falling -5.93% after hours as operating income and margins both surprised slightly to the downside.
    You can read more about Google’s performance here.
    We’re off to the races on the London Stock Exchange this morning, and the FTSE 100 has dipped at the open.
    Shares on London’s blue-chip index are down by 0.15% in the early trades. Lloyds Banking Group shares initially moved higher, but now they are down 2.1% after the bank flagged increasing competition hitting net interest margins.

  • TechScape: As the US election campaign heats up, so could the market for misinformation

    X, the platform formerly known as Twitter, announced it will allow political advertising back on the platform – reversing a global ban on political ads since 2019. The move is the latest to stoke concerns about the ability of big tech to police online misinformation ahead of the 2024 elections – and X is not the only platform being scrutinised.
    Social media firms’ handling of misinformation and divisive speech reached a breaking point in the 2020 US presidential elections when Donald Trump used online platforms to rile up his base, culminating in the storming of the Capitol building on 6 January 2021. But in the time since, companies have not strengthened their policies to prevent such crises, instead slowly stripping protections away. This erosion of safeguards, coupled with the rise of artificial intelligence, could create a perfect storm for 2024, experts warn.
    As the election cycle heats up, Twitter’s move this week is not the first to raise major concerns about the online landscape for 2024 – and it won’t be the last.
    Musk’s free speech fantasy
    Twitter’s change to election advertising policies is hardly surprising to those following the platform’s evolution under the leadership of Elon Musk, who purchased the company in 2022. In the months since his takeover, the erratic billionaire has made a number of unilateral changes to the site – not least of all the rebrand of Twitter to X.
    Many of these changes have centered on Musk’s goal to make Twitter profitable at all costs. The platform, he complained, was losing $4m per day at the time of his takeover, and he stated in July that its cash flow was still in the negative. More than half of the platform’s top advertisers have fled since the takeover – roughly 70% of the platform’s leading advertisers were not spending there as of last December. For his part, this week Musk threatened to sue the Anti-Defamation League, saying, “based on what we’ve heard from advertisers, ADL seems to be responsible for most of our revenue loss”. Whatever the reason, his decision to re-allow political advertisers could help boost revenue at a time when X sorely needs it.
    But it’s not just about money. Musk has identified himself as a “free speech absolutist” and seems hell-bent on turning the platform into a social media free-for-all. Shortly after taking the helm of Twitter, he lifted bans on the accounts of Trump and other rightwing super-spreaders of misinformation. Ahead of the elections, he has expressed a goal of turning Twitter into a “digital town square” where voters and candidates can discuss politics and policies – solidified recently by its (disastrous) hosting of Republican governor Ron DeSantis’s campaign announcement.
    Misinformation experts and civil rights advocates have said this could spell disaster for future elections. “Elon Musk is using his absolute control over Twitter to exert dangerous influence over the 2024 election,” said Imran Ahmed, head of the Center for Countering Digital Hate, a disinformation and hate speech watchdog that Musk himself has targeted in recent weeks.
    In addition to the policy changes, experts warn that the massive workforce reduction Twitter has carried out under Musk could impact the ability to deal with misinformation, as trust and safety teams are now reported to be woefully understaffed.
    Let the misinformation wars begin
    While Musk’s decisions have been the most high profile in recent weeks, X is not the only platform whose policies have raised alarm.
    In June, YouTube reversed its election integrity policy, now allowing content contesting the validity of the 2020 elections to remain on the platform. Meanwhile, Meta has also reinstated accounts of high-profile spreaders of misinformation, including Donald Trump and Robert F Kennedy Jr.
    Experts say these reversals could create an environment similar to that which fundamentally threatened democracy in 2020. But now there is an added risk: the meteoric rise of artificial intelligence tools. Generative AI, which has increased its capabilities in the last year, could streamline the ability to manipulate the public on a massive scale.
    Meta has a longstanding policy that exempts political ads from its misinformation policies and has declined to state whether that immunity will extend to manipulated and AI-generated images in the upcoming elections. Civil rights watchdogs have envisioned a worst-case scenario in which voters’ feeds are flooded with deceptively altered and fabricated images of political figures, eroding their ability to trust what they read online and chipping away at the foundations of democracy.
    While Twitter is not the only company rolling back its protections against misinformation, its extreme stances are moving the goalposts for the entire industry. The Washington Post reported this week that Meta was considering banning all political advertising on Facebook, but reversed course to better compete with its rival Twitter, which Musk had promised to transform into a haven for free speech. Meta also dissolved its Facebook Journalism Project, tasked with promoting accurate information online, and its “responsible innovation team,” which monitored the company’s products for potential risks, according to the Washington Post.
    Twitter may be the most scrutinised in recent weeks, but it’s clear that almost all platforms are moving towards an environment in which they throw up their hands and say they cannot or will not police dangerous misinformation online – and that should concern us all.
    The wider TechScape
    • David Shariatmadari goes deep with the co-founder of DeepMind about the mind-blowing potential of artificial intelligence in biotech in this long read.
    • New tech news site 404 Media has published a distressing investigation into AI-generated mushroom-foraging books on Amazon. In a space where misinformation could mean the difference between eating something delicious and something deadly, the stakes are high.
    • If you can’t beat them, join them: celebrities have been quietly working to sign deals licensing their artificially generated likenesses as the AI arms race continues.
    • Elsewhere in AI – scammers are on the rise, and their tactics are terrifying. And the Guardian has blocked OpenAI from trawling its content.
    • Can you be “shadowbanned” on a dating app? Some users are convinced their profiles are not being prioritised in the feed. A look into this very modern anxiety, and how the algorithms of online dating actually work.

  • A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

    It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.
    The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations unhindered in maximising shareholder value while incinerating the planet.
    So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it. Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.
    In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated, but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly for the way social media are engines for disseminating BS and disinformation at light-speed.
    And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.
    So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects” and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.
    You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine.
    We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it?
    That’s what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).
    The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?
    What I’ve been reading
    Shake it up
    David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.
    Dish the dirt
    There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.
    Truth bombs
    What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.

  • Top tech firms commit to AI safeguards amid fears over pace of change

    Top players in the development of artificial intelligence, including Amazon, Google, Meta, Microsoft and OpenAI, have agreed to new safeguards for the fast-moving technology, Joe Biden announced on Friday.
    Among the guidelines brokered by the Biden administration are watermarks for AI content to make it easier to identify and third-party testing of the technology that will try to spot dangerous flaws. (A simplified sketch of the watermarking idea follows the list of measures below.)
    Speaking at the White House, Biden said the companies’ commitments were “real and concrete” and will help “develop safe, secure and trustworthy” technologies that benefit society and uphold values.
    “Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs in industries,” he said. “These commitments are a promising step, but we have a lot more work to do together.”
    The president said AI brings “incredible opportunities”, as well as risks to society and economy. The agreement, he said, would underscore three fundamental principles – safety, security and trust.
    The White House said seven US companies had agreed to the voluntary commitments, which are meant to ensure their AI products are safe before they release them.
    The announcement comes as critics charge AI’s breakneck expansion threatens to allow real damage to occur before laws catch up. The voluntary commitments are not legally binding, but may create a stopgap while more comprehensive action is developed.
    A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
    The tech companies agreed to eight measures:
    Using watermarking on audio and visual content to help identify content generated by AI.
    Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
    Sharing trust and safety information with the government and other companies.
    Investing in cybersecurity measures.
    Encouraging third parties to uncover security vulnerabilities.
    Reporting societal risks such as inappropriate uses and bias.
    Prioritizing research on AI’s societal risks.
    Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
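    The first of these measures, watermarking, is described only at a high level here, but the underlying idea can be sketched in a few lines: the generator attaches a machine-checkable label to its output, and anyone holding the verification key can later test whether a piece of content still carries an intact label. The snippet below is a deliberate simplification with invented names (a shared HMAC key, a “demo-image-model” label); the companies’ actual schemes – invisible pixel-level watermarks, signed provenance manifests and the like – are more robust and are not specified in the commitments.

```python
import hmac
import hashlib
import json

# Hypothetical shared key: the generator signs with it, verifiers check with it.
# A real deployment would use proper key management, not a hard-coded constant.
SIGNING_KEY = b"demo-signing-key"

def tag_as_ai_generated(content: bytes, model_name: str) -> dict:
    """Attach a signed 'AI-generated' label to a piece of content."""
    label = {"generator": model_name, "ai_generated": True}
    payload = content + json.dumps(label, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"label": label, "signature": signature}

def verify_tag(content: bytes, tag: dict) -> bool:
    """Check that the label is present and the content has not been altered."""
    payload = content + json.dumps(tag["label"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

image_bytes = b"...rendered pixels..."       # placeholder for real image data
tag = tag_as_ai_generated(image_bytes, "demo-image-model")
print(verify_tag(image_bytes, tag))          # True: label intact
print(verify_tag(image_bytes + b"x", tag))   # False: content was altered
```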
    The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
    Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
    “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the non-profit Common Sense Media.
    The guidelines, as detailed at a high level in a fact sheet the White House released, do not go far enough, some critics have argued, in addressing concerns over the way AI could impact society, and give the administration little to no remedies for enforcement if the companies do not abide by them.
    “We need a much more wide-ranging public deliberation and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models,” said Amba Kak, the executive director of research group the AI Now Institute.
    “A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” Kak said. “What this list covers is a set of problems that are comfortable to business as usual, but we also need to be looking at what’s not on the list – things like competition concerns, discriminatory impacts of these systems. The companies have said they’ll ‘research’ privacy and bias, but we already have robust bodies of research on both – what we need is accountability.”
    Voluntary guidelines amount to little more than self-regulation, said Caitriona Fitzgerald, the deputy director at the non-profit research group the Electronic Privacy Information Center (Epic). A similar approach was taken with social media platforms, she said, and it didn’t work. “It’s internal compliance checking and it’s similar to what we’ve seen in the FTC consent orders from the past when they required Facebook to do internal privacy impact assessments and they just became a box-checking exercise.”
    The Senate majority leader, Chuck Schumer, has said he will introduce legislation to regulate AI.
    He has held a number of briefings with government officials to educate senators about an issue that’s attracted bipartisan interest.
    A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, vice-president Kamala Harris and other officials.
    Senator Mark Warner said the guidelines released on Friday are a start but that “we need more than industry commitments”.
    “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse,” Warner said in a statement.
    But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
    The software trade group BSA, which includes Microsoft as a member, said on Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.
    “Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.
    Several countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-country bloc.
    The details of the European legislation are still being hashed out, but the EU AI Act contains robust regulations that would create significant consumer protections against the overreach, privacy violations and biases of certain types of high-risk AI models.
    Meanwhile conversations in the US remain in the early stages. Fitzgerald, of Epic, said while the voluntary guidelines are just one in a series of guidelines the White House has released on AI, she worries it might cause Congress to slow down their push to create regulations. “We need the rules of the road before it gets too big to regulate,” she said.
    The UN secretary general, António Guterres, recently said the United Nations was “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
    The United Nations chief also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
    The White House said on Friday that it had already consulted on the voluntary commitments with a number of countries.
    Associated Press contributed to this story

  • Oppenheimer biographer supports US bill to bar use of AI in nuclear launches

    A biographer whose Pulitzer prize-winning book inspired the new movie Oppenheimer has expressed support for a US senator’s attempt to bar the use of artificial intelligence in nuclear weapons launches.
    “Humans must always maintain sole control over nuclear weapons,” Kai Bird, author of American Prometheus, said in a statement reported by Politico.
    “This technology is too dangerous to gamble with. This bill will send a powerful signal to the world that the United States will never take the reckless step of automating our nuclear command and control.”
    In Washington on Thursday, Bird met Ed Markey, the Democratic Massachusetts senator who is attempting to add the AI-nuclear provision to a major defense spending bill.
    Markey, Politico said, was a friend of Bird’s co-author, the late Tufts University professor Martin J Sherwin.
    A spokesperson for the senator told Politico Markey and Bird “shared their mutual concerns over the proliferation of artificial intelligence in national security and defense without guardrails, and the risks of using nuclear weapons in south Asia and elsewhere.
    “They also discussed ways to increase awareness of nuclear issues among the younger set.”
    J Robert Oppenheimer was the driving force behind US development of the atomic bomb, at the end of the second world war.
    Bird and Sherwin’s book is now the inspiration for Oppenheimer, Christopher Nolan’s summer blockbuster starring Cillian Murphy in the title role.
    The movie opens in the US on Friday – in competition with Barbie, Greta Gerwig’s film about the popular children’s doll.
    On Friday, Nolan told the Guardian: “International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs.
    “It’s reassuringly difficult to make nuclear weapons and so it’s relatively easy to spot when a country is doing that. I don’t believe any of that applies to AI.”
    Nolan also noted “very strong parallels” between Oppenheimer and AI experts now calling for such technology to be controlled.
    Nolan said he had “been interested to talk to some of the leading researchers in the AI field, and hear from them that they view this as their ‘Oppenheimer moment’. And they’re clearly looking to his story for some kind of guidance … as a cautionary tale in terms of what it says about the responsibility of somebody who’s putting this technology to the world, and what their responsibilities would be in terms of unintended consequences.”
    Bird and Sherwin’s biography, subtitled The Triumph and Tragedy of J Robert Oppenheimer, was published in 2008.
    Reviewing for the Guardian, James Buchan saluted the authors’ presentation of “the cocktails and wire-taps and love affairs of Oppenheimer’s existence, his looks and conversation, the way he smoked the cigarettes and pipe that killed him, his famous pork-pie hat and splayed walk, and all the tics and affectations that his students imitated and the patriots and military men despised.
    “It is as if these authors had gone back to James Boswell, who said of Dr Johnson: ‘Everything relative to so great a man is worth observing.’”

  • ‘An evolution in propaganda’: a digital expert on AI influence in elections

    Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.
    Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.
    The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.
    Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?
    When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate, perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.
    There’s been two major developments in the last six months. First is the rise of ChatGPT, which is generated text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools. At the same time, text-to-still image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.
    Why did the political use of deepfakes not materialize?
    The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it and factcheck it and respond to it. You might call it a natural immune response.
    Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.
    How do you think large language models can change political propaganda?
    I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content.
    It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain that differentiation between this is the tactic that a new technology enables versus that this is going to swing an election.
    As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots. You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.
    And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.
    That seems tied to another aspect you’ve written about, that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.
    It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.
    When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with it. The public acclimated to the idea that Photoshop existed and that not everything that you see with your eyes is a thing that necessarily is as it seems – the idea that the woman that you see on the magazine cover probably does not actually look like that. Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.
    Now anybody can make it look like the pope is wearing Balenciaga.
    Exactly.
    In the US, it seems like meaningful federal regulation is pretty far away if it’s going to come at all. Absent of that, what are some of the sort of short-term ways to mitigate these risks?
    First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.
    But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify if automation is in play.
    On the image front, social media platforms, as well as generative AI companies, are starting to come together to try and determine what kind of watermarking might be useful so that platforms and others can determine computationally whether an image is generated.
    Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends.
    How effective do you see those policies being?
    It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology and you can argue you’ve made a values-based decision to not allow your products to generate particular types of content. On the flip side, though, there are models that are open source and anyone can go and get access to them. Some of the things that are being done with some of the open source models and image generation are deeply harmful, but once the model is open sourced, the ability to control its use is much more limited.
    And it’s a very big debate right now in the field. You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.
    What’s the media or the public getting wrong about AI and disinformation?
    One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create that content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities – where people are going to continue to hold the belief in something that they’ve seen on the internet as long as it tells them what they want. Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.