More stories

  • Can US Congress control the abuse of AI in the 2024 election? – podcast

    In January, voters in New Hampshire answered a phone call from what sounded like President Joe Biden. What turned out to be an AI-generated robocall caused a stir because it was trying to convince Democratic voters not to turn up to polling stations on election day.
    In response to this scam, just a couple of weeks later, the US government outlawed robocalls that use voices generated by artificial intelligence. But experts are warning that this story is just one example of why 2024 will be a year of unprecedented election disinformation in the US and around the world.
    This week, Jonathan Freedland and Rachel Leingang discuss why people are so worried about the influence of artificial intelligence on November’s presidential election, and what politicians can do to catch up.


  • Political operative and firms behind Biden AI robocall sued for thousands

    A political operative and two companies that facilitated a fake robocall using AI to impersonate Joe Biden should be required to pay thousands of dollars in damages and should be barred from taking similar future actions, a group of New Hampshire voters and a civic action group said in a federal lawsuit filed on Thursday.

    The suit comes weeks after Steve Kramer, a political operative, admitted that he was behind the robocall that spoofed Biden’s voice on the eve of the New Hampshire primary and urged Democrats in the state not to vote. Kramer was working for Biden’s challenger Dean Phillips, but Phillips’s campaign said he had nothing to do with the call, and Kramer has said he did it as an act of civil disobedience to draw attention to the dangers of AI in elections. The incident may have been the first time AI was used to interfere in a US election.

    Lawyers for the plaintiffs – three New Hampshire voters who received the calls and the League of Women Voters, a voting rights group – said they believed it was the first lawsuit of its kind seeking redress for using AI in robocalls in elections. The New Hampshire attorney general’s office is investigating the matter. Two Texas companies, Life Corporation and Lingo Telecom, also helped facilitate the calls.

    “If Defendants are not permanently enjoined from deploying AI-generated robocalls, there is a strong likelihood that it will happen again,” the lawsuit says.

    The plaintiffs say Kramer and the two companies violated a provision of the Voting Rights Act that prohibits voter intimidation, as well as a ban in the Telephone Consumer Protection Act on delivering a prerecorded call to someone without their consent. They also say the calls violated New Hampshire state laws that require disclosure of the source of politically related calls.

    The plaintiffs are seeking up to $7,500 in damages for each plaintiff who received a call that violated federal and state law. The recorded call was sent to anywhere between 5,000 and 25,000 people.

    “It’s really imperative that we address the threat that these defendants are creating for voters,” Courtney Hostetler, a lawyer with the civic action group Free Speech for People, which is helping represent the plaintiffs, said in a press call with reporters on Thursday. “The other hope of this lawsuit is that it will demonstrate to other people who might attempt similar campaigns that this is illegal, that there are parties out there like the League of Women Voters who are prepared to challenge this sort of illegal voter intimidation, and these illegal deceptive practices, hopefully make them think twice before they do the same,” she added.

    NBC News reported that Kramer paid a street magician in New Orleans $150 to create the call using a script Kramer prepared. “This is a way for me to make a difference, and I have,” he said in the interview last month. “For $500, I got about $5m worth of action, whether that be media attention or regulatory action.”

    Mark Herring, a former Virginia attorney general who is helping represent the plaintiffs, told reporters on Thursday that kind of justification was “self-serving”. “Regardless of the motivation, the intent here was to suppress the vote, and to threaten and coerce voters into not voting out of fear that they might lose their right to vote,” he said.

  • ‘Disinformation on steroids’: is the US prepared for AI’s influence on the election?

    The AI election is here.

    Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.

    It’s not clear whether the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that has been pushing for federal and state regulation of AI’s use in politics. “I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said.

    Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings might have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.

    But US regulations aren’t ready for the boom in fast-paced AI technology and how it could influence voters. Soon after the fake call in New Hampshire, the Federal Communications Commission announced a ban on robocalls that use AI audio. The Federal Election Commission (FEC) has yet to put rules in place to govern the use of AI in political ads, though states are moving quickly to fill the gap in regulation.

    The US House launched a bipartisan taskforce on 20 February that will research ways AI could be regulated and issue a report with recommendations. But with partisan gridlock ruling Congress, and US regulation trailing the pace of AI’s rapid advance, it’s unclear what, if anything, could be in place in time for this year’s elections.

    Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real. AI – in the form of text, bots, audio, photo or video – can be used to make it look like candidates are saying or doing things they didn’t do, either to damage their reputations or mislead voters. It can be used to beef up disinformation campaigns, making imagery that looks real enough to create confusion for voters.

    Audio content, in particular, can be even more manipulative because the technology for video isn’t as advanced yet, and recipients of AI-generated calls lose some of the contextual clues that might expose a fake in a deepfake video. Experts also fear that AI-generated calls will mimic the voices of people the recipient knows in real life, which could have an outsized influence because the caller would seem like someone they know and trust. In what is commonly called the “grandparent” scam, callers can now use AI to clone a loved one’s voice to trick the target into sending money. That could theoretically be applied to politics and elections.

    “It could come from your family member or your neighbor and it would sound exactly like them,” Gilbert said. “The ability to deceive from AI has put the problem of mis- and disinformation on steroids.”

    There are less misleading uses of the technology to underscore a message, like the recent creation of AI audio calls using the voices of kids killed in mass shootings, aimed at swaying lawmakers to act on gun violence. Some political campaigns even use AI to show alternate realities to make their points, like a Republican National Committee ad that used AI to create a fake future if Biden is re-elected. But some AI-generated imagery can seem innocuous at first, like the rampant faked images of people next to carved wooden dog sculptures popping up on Facebook, and then be used to dispatch nefarious content later on.

    People wanting to influence elections no longer need to “handcraft artisanal election disinformation”, said Chester Wisniewski, a cybersecurity expert at Sophos. Now, AI tools help dispatch bots that sound like real people more quickly, “with one bot master behind the controls like the guy on the Wizard of Oz”.

    Perhaps most concerning, though, is that the advent of AI can make people question whether anything they’re seeing is real, introducing a heavy dose of doubt at a time when the technologies themselves are still learning how best to mimic reality.

    “There’s a difference between what AI might do and what AI is actually doing,” said Katie Harbath, who formerly worked in policy at Facebook and now writes about the intersection between technology and democracy. People will start to wonder, she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”

    Even without government regulation, companies that manage AI tools have announced and launched plans to limit their potential influence on elections, such as having their chatbots direct people to trusted sources on where to vote and not allowing chatbots that imitate candidates. A recent pact among companies such as Google, Meta, Microsoft and OpenAI includes “reasonable precautions” such as additional labeling of and education about AI-generated political content, though it wouldn’t ban the practice.

    But bad actors often flout or skirt around government regulations and limitations put in place by platforms. Think of the “do not call” list: even if you’re on it, you still probably get some spam calls.

    At the national level, or with major public figures, debunking a deepfake happens fairly quickly, with outside groups and journalists jumping in to spot a spoof and spread the word that it’s not real. When the scale is smaller, though, there are fewer people working to debunk something that could be AI-generated, and narratives begin to set in. In Baltimore, for example, recordings posted in January of a local principal allegedly making offensive comments could be AI-generated – the matter is still under investigation.

    In the absence of FEC regulations, a handful of states have instituted laws over the use of AI in political ads, and dozens more have filed bills on the subject. At the state level, regulating AI in elections is a bipartisan issue, Gilbert said. The bills often call for clear disclosures or disclaimers in political ads to make sure voters understand that content was AI-generated; without such disclosure, many of the bills ban the use of AI outright, she said.

    The FEC opened a rule-making process for AI last summer, and the agency has said it expects to resolve it sometime this summer, the Washington Post has reported. Until then, political ads with AI may have some state regulations to follow, but otherwise aren’t restricted by any AI-specific FEC rules.

    “Hopefully we will be able to get something in place in time, so it’s not kind of a wild west,” Gilbert said. “But it’s closing in on that point, and we need to move really fast.”

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact on Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes that the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”. “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility not to use AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and, most recently, Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month. And just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”. It said the companies will focus on transparency to users about their policies and will work to educate the public about how they can avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know whether what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies largely to govern themselves. The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step”, but he would still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued on Friday that the accord is “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.

    Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment on Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.

    In a statement on Friday, X’s CEO, Linda Yaccarino, said “every citizen and company has a responsibility to safeguard free and fair elections”. “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump, as part of an effort to avoid being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a pressing concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images. The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material, with or without the use of AI.

    Holz’s comments come as the safeguards created by image-generator operators play a game of cat and mouse with users trying to create questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”. “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted, adding: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, Inflection AI said in October that the company’s chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.

    OpenAI’s chief executive, Sam Altman, said at a recent event: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation. The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun, and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”. Bloomberg noted that what is permitted varies according to the software version used: an older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney bans the generation of political images, consumers – among them voters – will probably be unaware. “We’ll probably just hammer it and not say anything,” Holz said.

  • Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live | André Spicer

    During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”

    It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).

    The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost, and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, where misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.

    Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not just determine the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.

    While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.

    In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible but aren’t actually the correct answer to whatever the question was.

    Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low: for example, if generative AI were used for a task that was not very important (such as coming up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as the date of the battle of Waterloo). The real problems arise when the outputs of generative AI have important consequences and can’t easily be verified.

    If AI-produced hallucinations are used to answer important but difficult-to-verify questions, such as the state of the economy or the war in Ukraine, there is a real danger that some people will start to make important voting decisions based on an entirely illusory universe of information. Voters could end up living in generated online realities built on a toxic mixture of AI hallucinations and political expediency.

    Although AI technologies pose dangers, there are measures that could be taken to limit them. Technology companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure AIs are trained on authoritative information sources. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of deceptive AI-generated information. Most importantly, voters could exercise their critical judgment by reality-checking important pieces of information they are unsure about.

    The rise of generative AI has already started to fundamentally change many professions and industries. Politics is likely to be at the forefront of this change. The Brookings Institution points out that there are many positive ways generative AI could be used in politics. But at the moment its negative uses are most obvious, and more likely to affect us imminently. It is vital we strive to ensure that generative AI is used for beneficial purposes and does not simply lead to more botshit.
    André Spicer is professor of organisational behaviour at the Bayes Business School at City, University of London. He is the author of the book Business Bullshit

  • US orders immediate stop to some AI chip exports to China; Lloyds profits up but lending margins fall – business live

    Good morning, and welcome to our live, rolling coverage of business, economics and financial markets.

    The US has ordered the immediate halt of exports to China of hi-tech computer chips used for artificial intelligence, the chipmaker Nvidia has said.

    Nvidia said the US had brought forward a ban that had given the company 30 days from 17 October to stop shipments. Instead of a grace period, the ban is “effective immediately”, the company said in a statement to US regulators. The company did not say why the ban had been brought forward so abruptly, but it comes amid a deep rivalry between the US and China over who will dominate the AI boom.

    Nvidia said that shipments of its A100, A800, H100, H800 and L40S chips would be affected. Those chips, which retail at several thousand dollars apiece, are specifically designed for use in datacentres to train AI and large language models. Demand for AI chips has soared as excitement has grown about the capabilities of generative AI, which can produce new text, images and video based on the inputs of huge volumes of data.

    Nvidia said it “does not anticipate that the accelerated timing of the licensing requirements will have a near-term meaningful impact on its financial results”.

    Lloyds profits up but competition squeezes margins

    In the UK, Lloyds Banking Group has reported a rise in profits even as it said competition was hitting its margins as mortgage rates fall back. Britain’s biggest bank said it made £1.9bn in profits from July to September, an increase on the £576m for the same period last year. The comparison comes with an important caveat, however: the bank has restated its financials to conform to new accounting rules.

    Net interest margin – the measure of the difference between the cost of borrowing and what it charges customers when it lends – was 3.08% in the third quarter, down 0.06 percentage points in the quarter “given the expected mortgage and deposit pricing headwinds”, it said. The bank did set aside £800m to deal with rising defaults from borrowers, but said that it was still seeing “broadly stable credit trends and resilient asset quality”.

    An EY-linked auditor to the Adani Group is under scrutiny from India’s accounting regulator, Bloomberg News has reported. The National Financial Reporting Authority, or NFRA, has started an inquiry into S.R. Batliboi, a member firm of EY in India, Bloomberg said, citing unnamed sources. S.R. Batliboi is the auditor for five Adani companies, which account for about half of Adani’s revenues. Bloomberg reported that representatives for the NFRA and the Adani Group didn’t respond to an emailed request for comment. A representative for EY and S.R. Batliboi declined to comment.

    China’s economic slowdown is causing worries at home, as well as in Germany and other big trade partners. A series of Chinese government actions have signalled concern about slowing growth, which could cause problems for an authoritarian regime. Xi Jinping, China’s president, visited the People’s Bank of China for the first time, according to reports yesterday. “The purpose of the visit was not immediately known,” said Reuters, ominously.

    State media also reported that China had sharply lifted its 2023 budget deficit to about 3.8% of GDP, up from 3%, because of an extra $137bn in government borrowing. The Global Times, a state-controlled newspaper, said the move would “benefit home consumption and the country’s economic growth”, citing an unnamed official.

    Germany’s economic fortunes were better than expected in October, according to a closely watched indicator – but whether it’s overall good news or bad depends on who you ask. The ifo business climate index rose from 85.8 to 86.9 points, higher than the 85.9 expected by economists polled beforehand by Reuters. Germany has been struggling as growth slows in China, a key export market, as well as with the costs of switching from Russian gas to fuel its economy.

    Franziska Palmas, senior Europe economist at Capital Economics, a consultancy, is firmly in team glass half empty. She said:
    The small rise in the Ifo business climate index (BCI) in October still left the index in contractionary territory, echoing the downbeat message from the composite PMI released yesterday. This chimes with our view that the German economy is again in recession.
    Despite the improvement in October, the bigger picture remains that the German economy is struggling. The Ifo current conditions index, which has a better relationship with GDP than the BCI, is still consistent with GDP contracting by around 1% quarter-on-quarter. This is an even worse picture than that painted by the composite PMI, which fell in October but points to output dropping by “only” 0.5% quarter-on-quarter.
    But the journalist Holger Zschaepitz said it looks like things are improving.

    UK house prices will continue to slide this year and in 2024, and will not start to recover until 2025, Lloyds Banking Group has forecast. The lender, which owns Halifax and is Britain’s largest mortgage provider, said that by the end of 2023 UK house prices will have fallen 5% over the course of the year, and are likely to fall another 2.4% in 2024. Those forecasts, which were released alongside its third-quarter financial results on Wednesday, suggest UK house prices will have dropped 11% from their peak last year, when the market was still being fuelled by a rush for larger homes in the wake of the coronavirus pandemic. Lloyds said the first signs of growth would only start to emerge in 2025, with its economists predicting a 2.3% increase in house prices that year.

    The Israel-Hamas conflict adds another cloud on the horizon for the global economy, according to the head of the International Monetary Fund (IMF). Kristalina Georgieva was at “Davos in the desert”, a big conference hosted by Saudi Arabia. The Future Investment Initiative conference was the subject of boycotts five years ago, when the Saudi crown prince, Mohammed bin Salman, allegedly ordered the murder of the exiled critic Jamal Khashoggi. The distaste of global leaders has apparently faded since, however. Speaking on the Israel-Hamas conflict, Georgieva said (via Reuters):
    What we see is more jitters in what has already been an anxious world. And on a horizon that had plenty of clouds, one more – and it can get deeper.
    The war has been devastating for Israel and Gaza. Hamas killed more than 1,400 people and took more than 220 hostages in an assault on Israel. The health ministry in Gaza, which is run by Hamas, said last night that Gaza’s total death toll after 18 days of retaliatory bombing was 5,791 people, including 2,360 children.

    The broader economic impacts have been relatively limited, but Georgieva said that some neighbouring countries were feeling them:
    Egypt, Lebanon, Jordan. There, the channels of impact are already visible. Uncertainty is a killer for tourist inflows. Investors are going to be shy to go to that place.
    Reckitt, the maker of Dettol bleach and Finish dishwasher products, has missed sales expectations as revenues dropped 3.6% year-on-year in the third quarter. Its shares were down 2.3% on Wednesday morning, even though it also committed to buying back £1bn in shares. It missed expectations because of the comparison with strong sales in the same period last year in its nutrition division, which makes baby milk powder. Kris Licht, Reckitt’s chief executive, said:
    Reckitt delivered a strong quarter with 6.7% like-for-like growth across our hygiene and health businesses and has maintained market leadership in our US nutrition business.
    We are firmly on track to deliver our full year targets, despite some tough prior year comparatives that we continue to face in our US Nutrition business and across our OTC [over-the-counter medicines] portfolio in the fourth quarter.
    Speaking of Deutsche Bank, it posted its own earnings this morning: third-quarter profits dropped by 8%, but that was better than analysts expected. Shares in Deutsche, which has struggled in the long shadow of the financial crisis, are up 4.2% in early trading. Reuters reported:
    The bank was slightly more optimistic on its revenue outlook for the full year, forecasting it would reach €29bn ($30.73bn), the top end of its previous guidance range, as it upgraded the outlook for revenue at the retail division.
    Net profit attributable to shareholders at Germany’s largest bank was €1.031bn, better than analyst expectations for profit of around €937m.
    Though earnings dropped, it marked the 13th consecutive profitable quarter, a considerable streak in the black after years of hefty losses.
    Here are the opening snaps from across Europe’s stock market indices, via Reuters:
    EUROPE’S STOXX 600 DOWN 0.1%
    FRANCE’S CAC 40 DOWN 0.4%
    SPAIN’S IBEX DOWN 0.3%
    EURO STOXX INDEX DOWN 0.2%
    EURO ZONE BLUE CHIPS DOWN 0.3%
    European indices appeared to be taking their lead from the US, where the Google owner Alphabet’s share price dropped in after-hours trading last night. That dragged down futures for US tech companies, even though another tech titan, Microsoft, pleased investors. Analysts led by Jim Reid at Deutsche Bank said:
    Microsoft saw its shares rise +3.95% in after-market trading as revenues of $56.52bn (+13% y/y) beat estimates of $54.54bn and EPS came in at $2.99 (v $2.65 expected). The beat comes on the back of recovering cloud-computing growth with corporate customers spending more than expected. The other megacap, Alphabet, missed on their cloud revenue estimates at $8.4bn (v $8.6bn) with the share price falling -5.93% after hours as operating income and margins both surprised slightly to the downside.
    We’re off to the races on the London Stock Exchange this morning, and the FTSE 100 has dipped at the open. Shares on London’s blue-chip index are down 0.15% in early trading. Lloyds Banking Group shares initially moved higher, but are now down 2.1% after the bank flagged increasing competition hitting net interest margins.

  • TechScape: As the US election campaign heats up, so could the market for misinformation

    X, the platform formerly known as Twitter, announced it will allow political advertising back on the platform – reversing a global ban on political ads in place since 2019. The move is the latest to stoke concerns about the ability of big tech to police online misinformation ahead of the 2024 elections – and X is not the only platform being scrutinised.

    Social media firms’ handling of misinformation and divisive speech reached a breaking point in the 2020 US presidential elections, when Donald Trump used online platforms to rile up his base, culminating in the storming of the Capitol building on 6 January 2021. But in the time since, companies have not strengthened their policies to prevent such crises; instead they have slowly stripped protections away. This erosion of safeguards, coupled with the rise of artificial intelligence, could create a perfect storm for 2024, experts warn. As the election cycle heats up, Twitter’s move this week is not the first to raise major concerns about the online landscape for 2024 – and it won’t be the last.

    Musk’s free speech fantasy

    Twitter’s change to election advertising policies is hardly surprising to those following the platform’s evolution under the leadership of Elon Musk, who purchased the company in 2022. In the months since his takeover, the erratic billionaire has made a number of unilateral changes to the site – not least of all the rebrand of Twitter to X.

    Many of these changes have centered on Musk’s goal to make Twitter profitable at all costs. The platform, he complained, was losing $4m per day at the time of his takeover, and he stated in July that its cash flow was still negative. More than half of the platform’s top advertisers have fled since the takeover – roughly 70% of the platform’s leading advertisers were not spending there as of last December. For his part, this week Musk threatened to sue the Anti-Defamation League, saying, “based on what we’ve heard from advertisers, ADL seems to be responsible for most of our revenue loss”. Whatever the reason, his decision to re-allow political advertisers could help boost revenue at a time when X sorely needs it.

    But it’s not just about money. Musk has identified himself as a “free speech absolutist” and seems hellbent on turning the platform into a social media free-for-all. Shortly after taking the helm of Twitter, he lifted bans on the accounts of Trump and other rightwing super-spreaders of misinformation. Ahead of the elections, he has expressed a goal of turning Twitter into a “digital town square” where voters and candidates can discuss politics and policies – solidified recently by its (disastrous) hosting of the Republican governor Ron DeSantis’s campaign announcement.

    Misinformation experts and civil rights advocates have said this could spell disaster for future elections. “Elon Musk is using his absolute control over Twitter to exert dangerous influence over the 2024 election,” said Imran Ahmed, head of the Center for Countering Digital Hate, a disinformation and hate speech watchdog that Musk himself has targeted in recent weeks.

    In addition to the policy changes, experts warn that the massive workforce reduction Twitter has carried out under Musk could hamper its ability to deal with misinformation, as trust and safety teams are now reported to be woefully understaffed.

    Let the misinformation wars begin

    While Musk’s decisions have been the most high profile in recent weeks, Twitter is not the only platform whose policies have raised alarm.
    In June, YouTube reversed its election integrity policy, now allowing content contesting the validity of the 2020 elections to remain on the platform. Meanwhile, Meta has also reinstated accounts of high-profile spreaders of misinformation, including Donald Trump and Robert F Kennedy Jr.

    Experts say these reversals could create an environment similar to the one that fundamentally threatened democracy in 2020. But now there is an added risk: the meteoric rise of artificial intelligence tools. Generative AI, whose capabilities have leapt forward in the last year, could streamline the ability to manipulate the public on a massive scale.

    Meta has a longstanding policy that exempts political ads from its misinformation policies, and it has declined to state whether that immunity will extend to manipulated and AI-generated images in the upcoming elections. Civil rights watchdogs have envisioned a worst-case scenario in which voters’ feeds are flooded with deceptively altered and fabricated images of political figures, eroding their ability to trust what they read online and chipping away at the foundations of democracy.

    While Twitter is not the only company rolling back its protections against misinformation, its extreme stances are moving the goalposts for the entire industry. The Washington Post reported this week that Meta was considering banning all political advertising on Facebook, but reversed course to better compete with its rival Twitter, which Musk had promised to transform into a haven for free speech. Meta also dissolved its Facebook Journalism Project, tasked with promoting accurate information online, and its “responsible innovation team”, which monitored the company’s products for potential risks, according to the Washington Post.

    Twitter may be the most scrutinised in recent weeks, but it’s clear that almost all platforms are moving towards an environment in which they throw up their hands and say they cannot or will not police dangerous misinformation online – and that should concern us all.

    The wider TechScape

    • David Shariatmadari goes deep with the co-founder of DeepMind about the mind-blowing potential of artificial intelligence in biotech in this long read.

    • New tech news site 404 Media has published a distressing investigation into AI-generated mushroom-foraging books on Amazon. In a space where misinformation could mean the difference between eating something delicious and something deadly, the stakes are high.

    • If you can’t beat them, join them: celebrities have been quietly working to sign deals licensing their artificially generated likenesses as the AI arms race continues.

    • Elsewhere in AI – scammers are on the rise, and their tactics are terrifying. And the Guardian has blocked OpenAI from trawling its content.

    • Can you be “shadowbanned” on a dating app? Some users are convinced their profiles are not being prioritised in the feed. A look into this very modern anxiety, and how the algorithms of online dating actually work.