More stories

  • House votes to force TikTok owner ByteDance to divest or face US ban

    The House of Representatives passed a bill on Wednesday that would require the TikTok owner ByteDance to sell the social media platform or face a total ban in the United States.

    The vote was a landslide, with 352 members of Congress voting in favor and only 65 against. The bill, which was fast-tracked to a vote after being unanimously approved by a committee last week, gives China-based ByteDance 165 days to divest from TikTok. If it failed to do so, app stores including the Apple App Store and Google Play would be legally barred from hosting TikTok or providing web hosting services to ByteDance-controlled applications.

    The vote in the House represents the most concrete threat to TikTok in an ongoing political battle over allegations that the China-based company could collect sensitive user data and politically censor content. TikTok has repeatedly stated it has not and would not share US user data with the Chinese government.

    Despite those arguments, TikTok faced an attempted ban by Donald Trump in 2020 and a state-level ban passed in Montana in 2023. Courts blocked both of those bans on grounds of first amendment violations, and Trump has since reversed his stance, now opposing a ban on TikTok.

    The treasury-led Committee on Foreign Investment in the United States (CFIUS) demanded in March 2023 that ByteDance sell its TikTok shares or face the possibility of the app being banned, Reuters reported, but no action has been taken.

    The bill’s future is less certain in the Senate. Some Senate Democrats have publicly opposed it, citing freedom of speech concerns, and suggested measures that would address concerns of foreign influence across social media without targeting TikTok specifically. “We need curbs on social media, but we need those curbs to apply across the board,” Senator Elizabeth Warren said.

    The Democratic senator Mark Warner, who proposed a separate bill last year to give the White House new powers over TikTok, said he had “some concerns about the constitutionality of an approach that names specific companies”, but would take “a close look at this bill”.

    The White House has backed the legislation, with the press secretary, Karine Jean-Pierre, saying the administration wants “to see this bill get done so it can get to the president’s desk”.

    Authors of the bill have argued it does not constitute a ban, as it gives ByteDance the opportunity to sell TikTok and avoid being blocked in the US. Representative Mike Gallagher, the Republican chairman of the House select China committee, and Representative Raja Krishnamoorthi, the panel’s top Democrat, introduced the legislation to address national security concerns posed by Chinese ownership of the app.

    “TikTok could live on and people could do whatever they want on it provided there is that separation,” Gallagher said, urging US ByteDance investors to support a sale. “It is not a ban – think of this as a surgery designed to remove the tumor and thereby save the patient in the process.”

    TikTok, which has 170 million users in the US, has argued otherwise, stating that it is not clear whether China would approve any sale, or whether TikTok could be divested in six months.

    “This legislation has a predetermined outcome: a total ban of TikTok in the United States,” the company said after the committee vote. “The government is attempting to strip 170 million Americans of their constitutional right to free expression. This will damage millions of businesses, deny artists an audience, and destroy the livelihoods of countless creators across the country.”

    Following the committee’s passage of the bill, staffers complained that TikTok supporters had flooded Congress with phone calls after the app pushed out a notification urging users to oppose the legislation.

    “Why are Members of Congress complaining about hearing from their constituents? Respectfully, isn’t that their job?” TikTok said on X.

    Although the bill was written with TikTok in mind, it is possible other China-owned platforms could be affected, including the US operations of Tencent’s WeChat, which Trump also sought to ban in 2020. Gallagher said he would not speculate on what other impacts the bill could have, but said “going forward we can debate what companies fall” under the bill.

    Reuters contributed to this report

  • ‘New text, same problems’: inside the fight over child online safety laws

    Sharp divisions between advocates for children’s safety online have emerged as a historic bill has gathered enough votes to pass in the US Senate. Amendments to the bill have appeased some former detractors, who now support the legislation; its fiercest critics, however, have become even more entrenched in their demands for changes.

    The Kids Online Safety Act (Kosa), introduced more than two years ago, reached 60 backers in the Senate in mid-February. A number of human rights groups still vehemently oppose the legislation, underscoring ongoing divisions among experts, lawmakers and advocates over how to keep young people safe online.

    “The Kids Online Safety Act is our best chance to address social media’s toxic business model, which has claimed far too many children’s lives and helped spur a mental health crisis,” said Josh Golin, the executive director of the children’s online safety group Fairplay.

    Opponents say alterations to the bill are not enough and that their concerns remain unchanged.

    “A one-size-fits-all approach to kids’ safety won’t keep kids safe,” said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology. “This bill still rests on the premise that there is consensus around the types of content and design features that cause harm. There isn’t, and this belief will limit young people from exercising their agency and accessing the communities they need to online.”

    What is the Kids Online Safety Act?

    Sponsored by the Connecticut Democrat Richard Blumenthal and the Tennessee Republican Marsha Blackburn, Kosa would be the biggest change to American tech legislation in decades. The bill would require platforms like Instagram and TikTok to mitigate online dangers via design changes or opt-outs of algorithm-based recommendations, among other measures. Enforcement would demand much more fundamental modifications to social networks than current regulations require.

    When it was first introduced in 2022, Kosa prompted an open letter signed by more than 90 human rights organizations united in strong opposition. The groups warned the bill could be “weaponized” by conservative state attorneys general – who were charged with determining what content is harmful – to censor online resources and information for queer and trans youth or people seeking reproductive healthcare.

    In response to the critiques, Blumenthal amended the bill, notably shifting some enforcement decisions to the Federal Trade Commission rather than state attorneys general. At least seven LGBTQ+ advocacy organizations that previously spoke out against the bill, including Glaad, the Human Rights Campaign and the Trevor Project, dropped their opposition, citing “considerable changes” to Kosa that “significantly mitigate the risk of it being misused to suppress LGBTQ+ resources or stifle young people’s access to online communities”.

    To the critics who now support Kosa, Blumenthal’s amendments solved the legislation’s major issues. However, the majority of those who signed the initial letter still oppose the bill, including the Center for Democracy and Technology, the Electronic Frontier Foundation, Fight for the Future and the ACLU.

    “New bill text, same problems,” said Adam Kovacevich, chief executive of the tech industry policy coalition the Chamber of Progress, which is supported by corporate partners including Airbnb, Amazon, Apple and Snap. “The changes don’t address a lot of its potential abuses.” Snap and X, formerly Twitter, have publicly supported Kosa.

    Is Kosa overly broad or a net good?

    Kovacevich said the latest changes fail to address two primary concerns with the legislation: that vague language will lead social media platforms to over-moderate in order to limit their liability, and that allowing state attorneys general to enforce the legislation could enable targeted and politicized content restriction, even with the federal government assuming more of the bill’s authority.

    The vague language targeted by groups that still oppose the bill is the “duty of care” provision, which states that social media firms have “a duty to act in the best interests of a minor that uses the platform’s products or services” – a goal subject to an enforcer’s interpretation. The legislation would also require platforms to mitigate harms by creating “safeguards for minors”, but with little direction as to what content would be deemed harmful. Opponents argue the legislation is therefore likely to encourage companies to filter content more aggressively, which could lead to unintended consequences.

    “Rather than protecting children, this could impact access to protected speech, causing a chilling effect for all users and incentivizing companies to filter content on topics that disproportionately impact marginalized communities,” said Prem M Trivedi, policy director at the Open Technology Institute, which opposes Kosa.

    Trivedi said he and other opponents fear that important but charged topics like gun violence and racial justice could be interpreted as having a negative impact on young users, and be filtered out by algorithms. Many have expressed concern that LGBTQ+-related topics would be targeted by conservative regulators, leading to fewer available resources for young users who rely on the internet to connect with their communities. Blackburn, the bill’s co-sponsor, has previously stated her intention to “protect minor children from the transgender [sic] in this culture and that influence”.

    An overarching concern among opponents of the bill is that it is too broad in scope, and that more targeted legislation would achieve similar goals with fewer unintended impacts, said Bhatia.

    “There is a belief that there are these magic content silver bullets that a company can apply, and that what stands between a company applying those tools and not applying those tools is legislation,” she said. “But those of us who study the impact of these content filters still have reservations about the bill.”

    Many with reservations acknowledge that the bill does feature broadly beneficial provisions, said Mohana Mukherjee, visiting faculty at George Washington University, who has studied technology’s impact on teenagers and young adults. She said the bill’s inclusion of a “Kosa council” – a coalition of stakeholders including parents, academic experts, health professionals and young social media users to provide advice on how best to implement the legislation – is groundbreaking.

    “It’s absolutely crucial to involve young adults and youth who are facing these problems, and to have their perspective on the legislation,” she said.

    Kosa’s uncertain future

    Kosa is likely to be voted on in the Senate this session, but other legislation targeting online harms threatens its momentum. A group of senators is increasingly pushing a related bill that would ban children under the age of 13 from social media. Its author, Brian Schatz, has requested a panel that would potentially couple the bill with Kosa. Blumenthal, the author of Kosa, has cautioned that such a move could slow the passage of both bills, and he has spoken out against the markup.

    “We should move forward with the proposals that have the broadest support, but at the same time, have open minds about what may add value,” he said, according to the Washington Post. “This process is the art of addition not subtraction often … but we should make sure that we’re not undermining the base of support.”

    The bill’s future in the House is likewise unclear. Other bills with similar purported goals are floating around Congress, including the Invest in Child Safety Act – introduced by the Democratic senator Ron Wyden of Oregon and the representatives Anna G Eshoo and Brian Fitzpatrick – which would invest more than $5bn in investigating online sexual abusers.

    With so much legislation swirling around the floors of Congress, it’s unclear when – or if – a vote will be taken on any of them. But experts agree that Congress has at least begun trying to bolster children’s online safety.

    “This is an emotionally fraught topic – there are urgent online safety issues and awful things that happen to our children at the intersection of the online world and the offline world,” said Trivedi. “In an election year, there are heightened pressures on everyone to demonstrate forward movement on issues like this.”

  • The Lie Detectives: Trump, US politics and the disinformation damage done

    Most of Joe Biden’s past supporters see him as too old. An 81-year-old president with an unsteady step is a turn-off. But Donald Trump, Biden’s malignant, 77-year-old predecessor, vows to be a dictator for “a day”, calls for suspending the constitution and threatens Nato. “Russia, if you’re listening”, his infamous 2016 shout-out to Vladimir Putin, still haunts us eight years on. Democracy is on the ballot again.

    Against this bleak backdrop, Sasha Issenberg delivers The Lie Detectives, an examination of disinformation in politics. It is a fitting follow-up to The Victory Lab, his look at GOTV (“getting out the vote”), which was published weeks before the 2012 US election.

    Issenberg lectures at UCLA and writes for Monocle. He has covered presidential campaigns for the Boston Globe and he co-founded Votecastr, a private venture designed to track, project and publish real-time results. Voting science, though, is nothing if not tricky. A little after 4pm on election day 2016, hours before polls closed, Votecastr calculations led Slate to pronounce: Hillary Clinton Has to Like Where She Stands in Florida.

    The Victory Lab and The Lie Detectives are of a piece, focused on the secret sauce of winning campaigns. More than a decade ago, Issenberg gave props to Karl Rove, the architect of George W Bush’s successful election drives, and posited that micro-targeting voters had become key to finishing first. He also observed that ideological conflicts had become marbled through American politics. On that front, there has been an acceleration. These days, January 6 and its aftermath linger but much of the country has moved on, averting its gaze or embracing alternative facts.

    In 2016, Issenberg and Joshua Green of Businessweek spoke to Trump campaign digital gurus who bragged of using the internet to discourage prospective Clinton supporters.

    “We have three major voter suppression operations under way,” Issenberg and Green quoted a senior official as saying. “They’re aimed at three groups Clinton needs to win overwhelmingly: idealistic white liberals, young women and African Americans.”

    It was micro-targeting on steroids.

    The exchange stuck with Issenberg. “I thought back often to that conversation with the Trump officials in the years that followed,” he writes now. “I observed so much else online that was manufactured and perpetuated with a similarly brazen impunity.”

    In The Lie Detectives, Issenberg pays particular attention and respect to Jiore Craig and her former colleagues at Greenberg Quinlan Rosner Research, a leading Democratic polling and strategy firm founded by Stan Greenberg, Bill Clinton’s pollster. Issenberg also examines the broader liberal ecosystem and its members, including the billionaire Reid Hoffman, a founder of LinkedIn and PayPal. The far-right former Brazilian president Jair Bolsonaro and his “office of hate” come under the microscope too.

    Craig’s experience included more than a dozen elections across six continents, but until Trump’s triumph she had not worked on a domestic race. To her, to quote Issenberg, US politics was essentially “a foreign country”. Nonetheless, Craig emerged as the Democrats’ go-to for countering disinformation.

    “It was a unique moment in time where everybody who had looked for an answer up until that point had been abundantly wrong,” Craig says. “The fact that I had to start every race in a new country with the building blocks allowed me to see things that you couldn’t.”

    No party holds a monopoly on disinformation. In a 2017 special election for the US Senate in Alabama, Democratic-aligned consultants launched Project Birmingham, a $100,000 disinformation campaign under which Republicans were urged to cast write-in ballots instead of voting for Roy Moore, the controversial GOP candidate.

    The project posed as a conservative operation. Eventually, Hoffman acknowledged funding it, but disavowed knowledge of the disinformation and said sorry. Doug Jones, the Democrat, won by fewer than 22,000 votes. The write-in total was 22,819.

    More recently, Steve Kramer, a campaign veteran working for Dean Phillips, a long-shot candidate for the Democratic nomination against Biden, launched an AI-generated robocall that impersonated the president.

    Comparing himself to Paul Revere and Thomas Paine, patriots who challenged the mother country, Kramer, who also commissioned a deepfake impersonation of Senator Lindsey Graham, said Phillips was not in on the effort. If the sorry little episode showed anything, it showed disinformation is here to stay.

    Under the headline “Disinformation on steroids: is the US prepared for AI’s influence on the election?”, a recent Guardian story said: “Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real.”

    Free speech is on the line. Last fall, the US court of appeals for the fifth circuit – “the Trumpiest court in America”, as Vox put it – unanimously held that Biden, the surgeon general, the Centers for Disease Control and Prevention (CDC) and the FBI violated the first amendment by seeking to tamp down on Covid-related misinformation.

    In the court’s view, social media platforms were impermissibly “coerced” or “significantly encouraged” to suppress speech government officials viewed as dangerously inaccurate or misleading. The matter remains on appeal, with oral argument before the supreme court set for later this month.

    Issenberg reminds us that Trump’s current presidential campaign has pledged that a second Trump administration will bar government agencies from assisting any effort to “label domestic speech as mis- or dis-information”. A commitment to free speech? Not exactly. More like Putinism, US-style.

    According to Kash Patel, a Trump administration veteran and true believer, a second Trump administration will target journalists for prosecution.

    “We will go out and find the conspirators, not just in government but in the media,” Patel told Steve Bannon, Trump’s former campaign chair and White House strategist. “Yes, we’re going to come after the people in the media who lied about American citizens, who helped Joe Biden rig presidential elections. We’re going to come after you.”

    Welcome to the Trump Vengeance tour.

    The Lie Detectives is published in the US by Columbia University’s Columbia Global Reports

  • ‘Disinformation on steroids’: is the US prepared for AI’s influence on the election?

    The AI election is here.

    Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.

    It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.

    “I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said.

    Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings might have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.

    But US regulations aren’t ready for the boom in fast-paced AI technology and how it could influence voters. Soon after the fake call in New Hampshire, the Federal Communications Commission announced a ban on robocalls that use AI audio. The Federal Election Commission (FEC) has yet to put rules in place to govern the use of AI in political ads, though states are moving quickly to fill the gap in regulation.

    The US House launched a bipartisan taskforce on 20 February that will research ways AI could be regulated and issue a report with recommendations. But with partisan gridlock ruling Congress, and US regulation trailing the pace of AI’s rapid advance, it’s unclear what, if anything, could be in place in time for this year’s elections.

    Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real. AI – in the form of text, bots, audio, photo or video – can be used to make it look like candidates are saying or doing things they didn’t do, either to damage their reputations or mislead voters. It can be used to beef up disinformation campaigns, making imagery that looks real enough to create confusion for voters.

    Audio content, in particular, can be even more manipulative, because the technology for video isn’t as advanced yet and because recipients of AI-generated calls lose some of the contextual clues to fakery that they might find in a deepfake video. Experts also fear that AI-generated calls will mimic the voices of people a recipient knows in real life, which has the potential for a bigger influence because the caller would seem like someone they know and trust. In what is commonly called the “grandparent” scam, callers can already use AI to clone a loved one’s voice to trick the target into sending money; that could theoretically be applied to politics and elections.

    “It could come from your family member or your neighbor and it would sound exactly like them,” Gilbert said. “The ability to deceive from AI has put the problem of mis- and disinformation on steroids.”

    There are less misleading uses of the technology to underscore a message, like the recent AI-generated audio calls that used the voices of kids killed in mass shootings to urge lawmakers to act on gun violence. Some political campaigns even use AI to show alternate realities to make their points, like a Republican National Committee ad that used AI to create a fake future if Biden is re-elected. And some AI-generated imagery can seem innocuous at first – like the rampant faked images of people next to carved wooden dog sculptures popping up on Facebook – only to be used to dispatch nefarious content later on.

    People wanting to influence elections no longer need to “handcraft artisanal election disinformation”, said Chester Wisniewski, a cybersecurity expert at Sophos. Now, AI tools help dispatch bots that sound like real people more quickly, “with one bot master behind the controls like the guy on the Wizard of Oz”.

    Perhaps most concerning, though, is that the advent of AI can make people question whether anything they’re seeing is real or not, introducing a heavy dose of doubt at a time when the technologies themselves are still learning how to best mimic reality.

    “There’s a difference between what AI might do and what AI is actually doing,” said Katie Harbath, who formerly worked in policy at Facebook and now writes about the intersection between technology and democracy. People will start to wonder, she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”

    Even without government regulation, companies that manage AI tools have announced and launched plans to limit its potential influence on elections, such as having their chatbots direct people to trusted sources on where to vote and not allowing chatbots that imitate candidates. A recent pact among companies such as Google, Meta, Microsoft and OpenAI includes “reasonable precautions” such as additional labeling of and education about AI-generated political content, though it wouldn’t ban the practice.

    But bad actors often flout or skirt around government regulations and limitations put in place by platforms. Think of the “do not call” list: even if you’re on it, you still probably get some spam calls.

    At the national level, or with major public figures, debunking a deepfake happens fairly quickly, with outside groups and journalists jumping in to spot a spoof and spread the word that it’s not real. When the scale is smaller, though, there are fewer people working to debunk something that could be AI-generated, and narratives begin to set in. In Baltimore, for example, recordings posted in January of a local principal allegedly making offensive comments could be AI-generated – the case is still under investigation.

    In the absence of regulations from the FEC, a handful of states have instituted laws over the use of AI in political ads, and dozens more states have filed bills on the subject. At the state level, regulating AI in elections is a bipartisan issue, Gilbert said. The bills often call for clear disclosures or disclaimers in political ads to make sure voters understand that content was AI-generated; without such disclosure, many of the bills would ban the use of AI outright, she said.

    The FEC opened a rule-making process for AI last summer, and the agency said it expects to resolve it sometime this summer, the Washington Post has reported. Until then, political ads with AI may have some state regulations to follow, but otherwise aren’t restricted by any AI-specific FEC rules.

    “Hopefully we will be able to get something in place in time, so it’s not kind of a wild west,” Gilbert said. “But it’s closing in on that point, and we need to move really fast.”

  • Want to come up with a winning election ad campaign? Just be honest | Torsten Bell

    There are so many elections this year but how to go about winning them? Labour has a sub-optimal but impressively consistent strategy: waiting (usually a decade and a half in opposition).

    It’s paying off again, with huge swings to the party in last week’s two byelections. But this approach requires patience, and most parties around the world are less keen on waiting that long. So they spend a lot of time and money trying to win, which means election adverts. In the US, TV ads are centre stage. In the UK, such ads are largely banned (even GB News is meant to be providing news when Tory MPs interview each other), but online ads are big business.

    Those involved in politics have very strong views about the kind of ads that work. They absolutely have to be positive about your offer. Or negative about your ghastly opponent. It’s imperative they’re about issues, not personalities. Or the opposite. The only problem with those election gurus’ certainties? Different kinds of ads work at different times and places. So finds research with access to an intriguing data source: experiments conducted by campaign teams during the 2018 and 2020 US elections to test ad options before choosing which to air; 617 ads were tested in 146 survey experiments.

    The researchers showed that quality matters – it’s not unusual for an advert to be 50% more or less persuasive than average. But no one kind of ad is generally more persuasive than the rest, and the types that worked in 2018 didn’t have the same effect in 2020.

    So, if you’re trying to get yourself elected, my advice is to base your campaign on the evidence, not just your hunch. See it as good practice. After all, we’d ideally run the country that way.

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact on Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”.

    “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility for not using AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and, most recently, Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

    Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”.

    It says the companies will focus on transparency to users about their policies, and will work to educate the public about how to avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know whether what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.

    The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation”, as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step”, but he’d still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued on Friday that the accord is “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include the chatbot developers Anthropic and Inflection AI; the voice-clone startup ElevenLabs; the chip designer Arm Holdings; the security companies McAfee and TrendMicro; and Stability AI, known for making the image generator Stable Diffusion.

    Notably absent is another popular AI image generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment on Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.

    In a statement on Friday, the X CEO, Linda Yaccarino, said “every citizen and company has a responsibility to safeguard free and fair elections”.

    “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump, as part of an effort to avoid its tools being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a pressing concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images.

    The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material, with or without the use of AI.

    Holz’s comments come as image-generator operators play a game of cat and mouse with users, building safeguards to prevent the creation of questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”.

    “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted. It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, the AI company Inflection AI said in October that its chatbot, Pi, would not be allowed to advocate for any political candidate. The co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI.

    “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” OpenAI said in a blog post last month.

    The OpenAI chief executive, Sam Altman, said at a recent event: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation. The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”.

    Bloomberg noted that what is permitted varies according to the software version used: an older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney does ban the generation of political images, consumers – among them voters – will probably be unaware.

    “We’ll probably just hammer it and not say anything,” Holz said.

  • When dead children are just the price of doing business, Zuckerberg’s apology is empty | Carole Cadwalladr

    I don’t generally approve of blood sports but I’m happy to make an exception for the hunting and baiting of Silicon Valley executives in a congressional committee room. But then I like expensive, pointless spectacles. And waterboarding tech CEOs in Congress is right up there with firework displays: a brief, thrillingly meaningless sensation on the retina and then darkness.

    Last week’s grilling of Mark Zuckerberg and his fellow Silicon Valley Übermenschen was a classic of the genre: front pages, headlines, and a genuinely stand-out moment of awkwardness in which he was forced to face victims for the first time ever and apologise – stricken parents holding the photographs of their dead children lost to cyberbullying and sexual exploitation on his platform.

    Less than six hours later, his company delivered its quarterly results: Meta’s stock price surged by 20.3%, delivering a $200bn bump to the company’s market capitalisation and, if you’re counting, which as CEO he presumably does, a $700m sweetener for Zuckerberg himself. Those who listened to the earnings call tell me there was no mention of dead children.

    A day later, Biden announced, “If you harm an American, we will respond”, and dropped missiles on more than 80 targets across Syria and Iraq. Sure bro, just so long as the Americans aren’t teenagers with smartphones. US tech companies routinely harm Americans, and in particular American children, though to be fair they routinely harm all other nationalities’ children too: the Wall Street Journal has shown that Meta’s algorithms enable paedophiles to find each other. New Mexico’s attorney general is suing the company for being the “largest marketplace for predators and paedophiles globally”. A coroner in Britain found that 14-year-old Molly Jane Russell “died from an act of self-harm while suffering from depression and the negative effects of online content” – which included Instagram videos depicting suicide.

    And while dispatching a crack squad of Navy Seals to Menlo Park might be too much to hope for, there are other responses that the US Congress could have mandated, such as, here’s an idea, a law. Any law. One that, say, prohibits tech companies from treating dead children as just a cost of doing business.

    Because demanding that tech companies don’t enable paedophiles to find and groom children is the lowest of all low-hanging fruit in the tech regulation space. And yet even that hasn’t happened. What America urgently needs is to act on its anti-trust laws and break up these companies as a first basic step. It needs to take an axe to Section 230, the law that gives platforms immunity from lawsuits for hosting harmful or illegal content.

    It needs basic product safety legislation. Imagine GlaxoSmithKline launched an experimental new wonder drug last year. A drug that has shown incredible benefits, including curing some forms of cancer and slowing down ageing. It might also cause brain haemorrhages and abort foetuses, but the data on that is not yet in so we’ll just have to wait and see. There’s a reason that doesn’t happen. They’re called laws. Drug companies go through years of testing. Because they have to. Because at some point, a long time ago, Congress and other legislatures across the world did their job.

    Yet Silicon Valley’s latest extremely disruptive technology, generative AI, was released into the wild last year without even the most basic federally mandated product testing. Last week, deepfake porn images of the most famous female star on the planet, Taylor Swift, flooded social media platforms, which had no legal obligation to take them down – and hence many of them didn’t.

    But who cares? It’s only violence being perpetrated against a woman. It’s only non-consensual sexual assault, algorithmically distributed to millions of people across the planet. Punishing women is the first step in the rollout of any disruptive new technology, so get used to that, and if you think deepfakes are going to stop with pop stars, good luck with that too.

    You thought misinformation during the US election and Brexit vote in 2016 was bad? Well, let’s wait and see what 2024 has to offer. Could there be any possible downside to releasing this untested new technology – one that enables the creation of mass disinformation at scale for no cost – at the exact moment in which more people will go to the polls than at any time in history?

    You don’t actually have to imagine where that might lead because it’s already happened. A deepfake targeting a progressive candidate dropped days before the Slovakian general election in October. It’s impossible to know what impact it had or who created it, but the candidate lost, and the opposition pro-Putin candidate won. CNN reports that the messaging of the deepfake echoed that put out by Russia’s foreign intelligence service just an hour before it dropped. And where was Facebook in all of this, you ask? Where it usually is, refusing to take many of the deepfake posts down.

    Back in Congress, grilling tech execs is something to do to fill the time in between the difficult job of not passing tech legislation. It’s now six years since the Cambridge Analytica scandal, when Zuckerberg became the first major tech executive to be commanded to appear before Congress. That was a revelation because it felt like Facebook might finally be brought to heel.

    But Wednesday’s outing was Zuckerberg’s eighth. And neither Facebook, nor any other tech platform, has been brought to heel. The US has passed not a single federal law. Meanwhile, Facebook has done some exculpatory techwashing of its name to remove the stench of data scandals and Kremlin infiltration, and occasionally offers up its CEO for a ritual slaughtering on the Senate floor.

    To understand America’s end-of-empire waning dominance in the world, its broken legislature and its capture by corporate interests, the symbolism of a senator forcing Zuckerberg to apologise to bereaved parents while Congress – that big white building stormed by insurrectionists who found each other on social media platforms – does absolutely nothing to curb his company’s singular power is as good a place to start as any.

    We’ve had eight years to learn the lessons of 2016 and yet here we are. Britain has responded by weakening the body that protects our elections and degrading our data protection laws to “unlock post-Brexit opportunities”. American congressional committees are now a cargo cult that goes through ritualised motions of accountability. Meanwhile, there’s a new tech wonder drug on the market that may create untold economic opportunities or lethal bioweapons and the destabilisation of what is left of liberal democracy. Probably both.

    Carole Cadwalladr is a reporter and feature writer for the Observer