More stories

  • First came the bots, then came the bosses – we’re entering Musk and Zuck’s new era of disinformation | Joan Donovan

    I’m a researcher of media manipulation, and watching the 2024 US election returns was like seeing the Titanic sink. Every day leading up to 5 November, more and more outrageous claims spread across social media in an attempt to undermine election integrity: conspiracy theories about a tidal wave of immigrants plotting to undermine the right wing, allegations that millions of excess ballots were circulating in California, and rumors that the voting machines had already been corrupted by malicious algorithms.

    All of the disinformation about corrupt vote counts turned out not to be necessary, as Donald Trump won the election decisively. But the election proved that disinformation is no longer the province of anonymous accounts amplified by bots to mimic human engagement, as it was in 2016. In 2024, lies travel further and faster across social media, which is now a battleground for narrative dominance. And the owners of the platforms circulating the most incendiary lies now have direct access to the Oval Office.

    We talk a lot about social media “platforms”. The word is interesting because it means both a stated political position and a technological communication system. Over the past decade, we have watched social media platforms warp public opinion by deciding what is seen and when users see it, as algorithms double as newsfeed and timeline editors. When tech CEOs encode their political beliefs into the design of platforms, it is a form of technofascism, in which technology is used to suppress speech and to repress the organization of resistance to the state or capitalism.

    Content moderation at these platforms now reflects the principles of the CEO and what that person believes is in the public’s interest. The political opinions of tech’s overlords, like Musk and Zuckerberg, are now directly embedded in their algorithms.

    For example, Meta has limited the circulation of critical discussions about political power, reportedly even downranking posts that use the word “vote” on Instagram. Meta’s Twitter clone, Threads, suspended journalists for reporting on Trump’s former chief of staff describing Trump’s admiration of Hitler. Threads also built in a politics filter that is turned on by default.

    These filtering mechanisms mark a sharp break from 2016, when Meta embraced politicians with personalized white-glove service, embedding employees directly in political campaigns to advise on branding and reaching new audiences. It is also a striking reversal of Zuckerberg’s free speech position in 2019, when he gave a presentation at Georgetown University claiming that he was inspired to create Facebook because he wanted to give students a voice during the Iraq war. That historical revisionism was quickly skewered in the media. (Facebook’s predecessor allowed users to rate the appearance of female Harvard freshmen; misogyny was the core of its design.) Nevertheless, his false origin story encapsulated a vision of how Zuckerberg once believed society and politics should be organized, with political discussion as his guiding reason to bring people into community.

    He now appears to have abandoned that position in favor of disincentivizing political discussion altogether. Recently, Zuckerberg wrote to the Republican Jim Jordan saying he regretted his content moderation decisions during the pandemic because he had acted under pressure from the Biden administration.

    The letter itself was an obvious attempt to curry favor as Trump rose as the Republican presidential candidate. Zuckerberg has reason to fear Trump, who has mentioned wanting to arrest Zuckerberg for deplatforming him on Meta products after the January 6 Capitol riot.

    X, meanwhile, seems to have embraced the disinformation chaos and fully fused Trump’s campaign into the design of its content strategies. Outrageous assertions circle the drain on X, including false claims that immigrants are eating pets in Ohio, that Kamala Harris’s Jamaican grandmother was white, and that immigrants are siphoning Fema disaster aid. It is also worth noting that Musk is the biggest purveyor of anti-immigrant conspiracy theories on X. The hiss and crackle of disinformation is as ambient as it is unsettling.

    There is no clearer sign of Musk’s willingness to use platform power than his relentless amplification of his own account, as well as Trump’s, in X’s “For You” algorithm. Moreover, Musk bemoaned Twitter’s suppression in 2020 of links to the Hunter Biden laptop story, then hypocritically worked with the Trump campaign in 2024 to ban accounts and links to leaked Trump campaign documents that painted JD Vance in a negative light.

    Musk understands that he will personally benefit from being close to power. He supported Trump with a controversial political action committee that gave away cash to those who signed his online petition. Musk also paid millions for canvassers and spent many evenings in Pennsylvania stumping for Trump. With Trump’s win, the president-elect will need to make good on his promise of placing Musk in a position at the not-yet-created “Department of Government Efficiency” (Doge – also the name of Musk’s favorite cryptocurrency). While it sure seems like a joke taken too far, Musk has said he plans to cut $2tn from the national budget, which would wreak havoc on the economy and could be devastating when coupled with the mass deportation of 10 million people.

    In short, what we learn from the content strategies of X and Meta is simple: the design of platforms is now inextricable from the politics of the owner.

    This wasn’t inevitable. In 2016, there was a public reckoning with the fact that social media had been weaponized by foreign adversaries and domestic actors to spread disinformation on a number of wedge issues to millions of unsuspecting users. Hundreds of studies were conducted in the intervening years, by internal corporate researchers and independent academics, showing that platforms amplify and expose audiences to conspiracy theories and fake news, which can lead to networked incitement and political violence.

    By 2020, disinformation had become its own industry, and the need for anonymity lessened as rightwing media makers directly impugned election results, culminating in January 6. That led to an unprecedented decision by social media companies to ban Trump, who was still the sitting president, and a number of other high-profile rightwing pundits, illustrating just how powerful social media platforms had become as political actors.

    In reaction to this unprecedented move to curb disinformation, the richest man in the world, Musk, bought Twitter in 2022, laid off much of its staff, and handed internal company communications to journalists and politicians.

    Major investigations of university researchers and government agencies ensued, naming and shaming those who had engaged with Twitter’s former leadership and appealed for the company to enforce its own terms of service during the 2020 election.

    Since then, these CEOs have ossified their political beliefs in the design of their algorithms and, by extension, dictated political discourse for the rest of us.

    Whether it is Musk’s strategy of overloading users with posts from himself and Trump, or Zuckerberg’s silencing of political discussion, it is citizens who suffer from such chilling of speech. Of course, there is no way to know decisively how disinformation affected individual voters, but a recent Ipsos poll shows that Trump voters believed disinformation on a number of wedge issues, holding that immigration, crime and the economy are all worse than the data indicate. For now, let this knowledge be the canary warning of technofascism, in which the US is ruled not only by elected politicians but also by technological authoritarians who control speech on a global scale.

    If we are to disarm disinformers, we need a whole-of-society approach that values real TALK (Timely, Accurate, Local Knowledge) and community safety. This might look like states passing legislation to fund local journalism in the public interest, because local news can bridge divides between neighbors and bring some accountability to government. It will require our institutions, such as medicine, journalism and academia, to fight for truth and justice, even in the face of anticipated retaliation. But most of all, it is going to require that you and I do something quickly to protect those already in the crosshairs of Trump’s new world order, by donating to or joining community organizations tackling issues such as women’s rights and immigration. Even subscribing to a local news outlet is a profound political act these days. Let that sink in.

    Joan Donovan is the founder of the Critical Internet Studies Institute and assistant professor of journalism at Boston University

  • Jeff Bezos, Mark Zuckerberg and other business leaders congratulate Trump

    Business leaders were swift to offer their congratulations to Donald Trump on his election victory, less than four years after they criticized him for his role in the January 6 insurrection.

    Some of tech’s most prominent executives, including Amazon’s Jeff Bezos, Meta’s Mark Zuckerberg and Apple’s Tim Cook, publicly congratulated Trump on his win.

    “Big congratulations to our 45th and now 47th President on an extraordinary political comeback and decisive victory,” Bezos said in a statement. “No nation has bigger opportunities.”

    “Congratulations to President Trump on a decisive victory. We have great opportunities ahead of us as a country,” Zuckerberg wrote on Threads. “Looking forward to working with you and your administration.”

    “Congratulations President Trump on your victory! We look forward to engaging with you and your administration,” Cook wrote on X.

    The Business Roundtable, a powerful lobbying group whose more than 200 members are the chief executives of companies such as JPMorgan, Walmart, Google and Pepsi, said in a statement: “Business Roundtable congratulates President-elect Donald Trump on his election as the 47th President of the United States.”

    “We look forward to working with the incoming Trump Administration and all federal and state policymakers,” the group said.

    The billionaire Mark Cuban, who endorsed Kamala Harris, was one of the first to congratulate Trump, just after 1am ET. “Congrats @realDonaldTrump. You won fair and square,” Cuban wrote. “Congrats to @elonmusk as well.”

    Elon Musk, Trump’s highest-profile business backer, celebrated with a post on X declaring victory for himself. “It is morning in America again,” he wrote. Trump has floated giving Musk an influential role in his administration.

    The reaction presents a stark contrast to how these leaders responded to Trump after the 2020 election. Cook had called the insurrection “a shameful chapter in our nation’s history”, while Zuckerberg said: “I believe the former president should be responsible for his words.”

    Bezos, meanwhile, had congratulated Joe Biden on his victory four years ago. “Unity, empathy and decency are not characteristics of a bygone era,” he said on Instagram, posting a picture of Biden and Kamala Harris.

    The about-face had been visible in the lead-up to the election. Trump had started to brag that executives such as Google’s Sundar Pichai and Zuckerberg were calling him, seemingly trying to rebuild relationships that had been strained during Biden’s presidency.

    Bezos has had a particularly fraught relationship with Trump. But in October the Bezos-owned Washington Post chose not to endorse any candidate in the US presidential election, despite having planned to endorse the vice-president.

    While coalitions of former executives had endorsed Harris, and said that many CEOs were probably going to vote for her, the business community appears poised to transition to a second Trump term. By Wednesday afternoon, US stock markets were soaring on news of Trump’s victory.


  • Amazon, Tesla and Meta among world’s top companies undermining democracy – report

    Some of the world’s largest companies have been accused of undermining democracy across the world by financially backing far-right political movements, funding and exacerbating the climate crisis, and violating trade union and human rights, in a report published on Monday by the International Trade Union Confederation (ITUC).

    Amazon, Tesla, Meta, ExxonMobil, Blackstone, Vanguard and Glencore are the corporations included in the report. The companies’ lobbying arms are attempting to shape global policy at the United Nations Summit of the Future in New York City on 22 and 23 September.

    The report notes that Amazon’s size and role – it is the fifth-largest employer in the world and the largest online retailer and cloud computing provider – have had a profound impact on the industries and communities in which it operates.

    “The company has become notorious for its union busting and low wages on multiple continents, monopoly in e-commerce, egregious carbon emissions through its AWS data centres, corporate tax evasion, and lobbying at national and international level,” the report states.

    The report cites, among other cases, Amazon’s high injury rates in the US, its challenge to the constitutionality of the National Labor Relations Board (NLRB), its efforts in Canada to overturn labor law, the banning of its lobbyists from the European parliament for refusing to attend hearings on worker violations, and its refusal to negotiate with unions in Germany. Amazon has also funded far-right political groups’ efforts to undermine women’s rights and antitrust legislation, and its retail website has been used by hate groups to raise money and sell products.

    For Tesla, the report cites the company’s anti-union campaigns in the US, Germany and Sweden; human rights violations within its supply chains; its challenges to the NLRB; and Elon Musk’s personal opposition to unions and democracy, including his support for political leaders such as Donald Trump, Javier Milei in Argentina and Narendra Modi in India.

    The report cites Meta, the largest social media company in the world, for its vast role in permitting and enabling far-right propaganda and movements to use its platforms to grow membership and garner support in the US and abroad. It also cites the company’s retaliation against regulatory measures in Canada and its expensive lobbying against laws to regulate data privacy.

    Glencore, the largest mining company in the world by revenue, was included for its role in financing campaigns globally against Indigenous communities and activists.

    Blackstone, the private equity firm led by Stephen Schwarzman, a billionaire backer of Donald Trump, was cited for funding far-right political movements, investing in fossil fuel projects, and contributing to deforestation in the Amazon. “Blackstone’s network has spent tens of millions of dollars supporting politicians and political forces who promise to prevent or eliminate regulations that might hold it to account,” the report noted.

    The Vanguard Group was included due to its role in financing some of the world’s most anti-democratic corporations. ExxonMobil was cited for funding research that undermines climate science and for aggressive lobbying against environmental regulations.

    Even in “robust democracies”, workers’ demands “are overwhelmed by corporate lobbying operations, either in policymaking or the election in itself”, said Todd Brogan, director of campaigns and organizing at the ITUC.

    “This is about power, who has it, and who sets the agenda. We know as trade unionists that unless we’re organized, the boss sets the agenda in the workplace, and we know as citizens in our countries that unless we’re organized and demanding responsive governments that actually meet the needs of people, it’s corporate power that’s going to set the agenda.

    “They’re playing the long game, and it’s a game about shifting power away from democracy at every level into one where they’re not concerned about the effects on workers – they’re concerned about maximizing their influence and their extractive power and their profit,” added Brogan. “Now is the time for international and multi-sectoral strategies, because these are, in many cases, multinational corporations that are more powerful than states, and they have no democratic accountability whatsoever, except for workers organized.”

    The ITUC includes labor affiliates from 169 nations and territories representing 191 million workers, including the AFL-CIO, the largest federation of labor unions in the US, and the Trades Union Congress in the UK.

    With 4 billion people around the world set to participate in elections in 2024, the federation is pushing for a binding international treaty – being drafted by a UN open-ended intergovernmental working group – to hold transnational corporations accountable under international human rights law.

  • Meta bans Russian state media outlets over ‘foreign interference activity’

    Facebook owner Meta said on Monday it was banning RT, Rossiya Segodnya and other Russian state media networks, alleging the outlets used deceptive tactics to carry out influence operations while evading detection on the social media company’s platforms.

    “After careful consideration, we expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT and other related entities are now banned from our apps globally for foreign interference activity,” the company said in a written statement.

    Enforcement of the ban would roll out over the coming days, it said. In addition to Facebook, Meta’s apps include Instagram, WhatsApp and Threads. The Russian embassy did not immediately respond to a Reuters request for comment.

    The ban marks a sharp escalation in actions by the world’s biggest social media company against Russian state media, after years of more limited steps such as blocking the outlets from running ads and reducing the reach of their posts.

    It came after the US filed money-laundering charges earlier this month against two RT employees for what officials said was a scheme to hire a US company to produce online content to influence the 2024 election.

    On Friday, the US secretary of state, Antony Blinken, announced new sanctions against the Russian state-backed media company, formerly known as Russia Today, after new information gleaned from the outfit’s employees showed it was “functioning like a de facto arm of Russia’s intelligence apparatus”.

    “Today, we’re exposing how Russia deploys similar tactics around the world,” Blinken said. “Russian weaponization of disinformation to subvert and polarize free and open societies extends to every part of the world.”

    The Russian government in 2023 established a new unit in RT with “cyber operational capabilities and ties to Russian intelligence”, Blinken claimed, with the goal of spreading Russian influence around the world through information operations, covert influence and military procurement.

    Blinken said the US treasury would sanction three entities and two individuals tied to Rossiya Segodnya, the Russian state media company. The decision came after the announcement earlier this month that RT had funneled nearly $10m to conservative US influencers through a local company to produce videos meant to influence the outcome of the US presidential election in November.

    Speaking to reporters from the state department on Friday, Blinken accused RT of crowdfunding weapons and equipment for Russian soldiers in Ukraine, including sniper rifles, weapon sights, body armor, night-vision equipment, drones, radio equipment and diesel generators. Some of the equipment, including the recon drones, could be sourced from China, he said.

    Blinken also detailed how the organization had targeted countries in Europe, Africa and North and South America. In particular, he said, RT leadership had coordinated directly with the Kremlin to target the October 2024 elections in Moldova, a former Soviet state where Russia has been accused of waging a hybrid war to exert greater influence. RT’s leadership, he said, had “attempted to foment unrest in Moldova, likely with the specific aim of causing protests to turn violent”.

    “RT is aware of and prepared to assist Russia’s plans to incite protests should the election not result in a Russia-preferred candidate winning the presidency,” Blinken said.

    Andrew Roth contributed reporting

  • Meta lifts restrictions on Trump’s Facebook and Instagram accounts

    Meta has removed the restrictions previously placed on the Facebook and Instagram accounts of Donald Trump as the 2024 election nears, the company announced on Friday.

    Trump was allowed to return to the social networks in 2023 with “guardrails” in place, after being banned over his online behavior during the 6 January insurrection. Those guardrails have now been removed.

    “In assessing our responsibility to allow political expression, we believe that the American people should be able to hear from the nominees for president on the same basis,” Meta said in a blogpost, citing the Republican national convention, slated for next week, which will formalize Trump as the party’s candidate.

    As a result, Meta said, Trump’s accounts will no longer be subject to heightened suspension penalties, which the company said were created in response to “extreme and extraordinary circumstances” and “have not had to be deployed”.

    “All US presidential candidates remain subject to the same community standards as all Facebook and Instagram users, including those policies designed to prevent hate speech and incitement to violence,” the company’s blogpost reads.

    Since his return to Meta’s social networks, Trump has primarily used his accounts to share campaign information, attacks on the Democratic candidate, Joe Biden, and memes.

    Critics of Trump and online safety advocates have expressed concern that his return could lead to a rise in misinformation and incitement to violence, as was seen during the Capitol riot that prompted his initial ban.

    The Biden campaign condemned Meta’s decision in a statement on Friday, calling it a “greedy, reckless decision” that constitutes “a direct attack on our safety and our democracy”.

    “Restoring his access is like handing your car keys to someone you know will drive your car into a crowd and off a cliff,” said the campaign spokesperson Charles Kretchmer Lutvak. “It is holding a megaphone for a bonafide racist who will shout his hate and white supremacy from the rooftops and try to take it mainstream.”

    In addition to Meta’s platforms, other major social media firms banned Trump over his online activity surrounding the 6 January attack, including Twitter (now X), Snapchat and YouTube.

    Trump was allowed back on X last year by the decision of Elon Musk, who bought the company in 2022, though he has not yet tweeted. He returned to YouTube in March 2023 and remains banned from Snapchat. Trump founded his own social network, Truth Social, in early 2022.

  • Battle lines drawn as US states take on big tech with online child safety bills

    On 6 April, Maryland became the first state in the US to pass a “Kids Code” bill, which aims to prevent tech companies from engaging in predatory data collection on children and from using design features that could cause them harm. Vermont’s legislature held its final hearing before a full vote on its own Kids Code bill on 11 April.

    The measures are the latest in a salvo of proposed policies that, in the absence of federal rules, have made state capitols a major battlefield in the war between parents and child advocates, who lament that there are too few protections for minors online, and Silicon Valley tech companies, which protest that the proposed restrictions would hobble both business and free speech.

    Known as Age-Appropriate Design Code or Kids Code bills, these measures call for special data safeguards for underage users online, as well as blanket prohibitions on children under certain ages using social media. Maryland’s measure passed with unanimous votes in its house and senate.

    In all, nine states across the country – among them Maryland, Vermont, Minnesota, Hawaii, Illinois, New Mexico, South Carolina and Nevada – have introduced and are now hashing out bills aimed at improving online child safety. Minnesota’s bill passed its house committee in February.

    Lawmakers in multiple states have accused lobbyists for tech firms of deception during public hearings. Tech companies have also spent a quarter of a million dollars lobbying against the Maryland bill, to no avail.

    Carl Szabo, vice-president and general counsel of the tech trade association NetChoice, spoke against the Maryland bill at a state senate finance committee meeting in mid-2023 as a “lifelong Maryland resident, parent, [spouse] of a child therapist”.

    Later in the hearing, a Maryland state senator asked: “Who are you, sir? … I don’t believe it was revealed at the introduction of your commentary that you work for NetChoice. All I heard was that you were here testifying as a dad. I didn’t hear you had a direct tie as an employee and representative of big tech.”

    For the past two years, technology giants have been lobbying directly in some states looking to pass online safety bills. In Maryland alone, tech giants racked up more than $243,000 in lobbying fees in 2023, the year the bill was introduced. Google spent $93,076, Amazon $88,886 and Apple $133,449 last year, according to state disclosure forms.

    Amazon, Apple, Google and Meta hired in-state lobbyists in Minnesota and sent employees to lobby directly in 2023. In 2022, the four companies also spent a combined $384,000 on lobbying in Minnesota, the highest total up to that point, according to the Minnesota campaign finance and public disclosure board.

    The bills require tech companies to take a series of steps aimed at safeguarding children’s experiences on their websites and assessing their “data protection impact”. Companies must configure all default privacy settings provided to children by online products to offer a high level of privacy, “unless the covered entity can demonstrate a compelling reason that a different setting is in the best interests of children”. They must also provide privacy information and terms of service in clear, understandable language for children, and provide responsive tools to help children, or their parents or guardians, exercise their privacy rights and report concerns.

    The legislation leaves it to tech companies to determine whether users are underage but does not require verification by documents such as a driver’s license. Age could be inferred from the data profiles companies hold on a user, or established by self-declaration, where users enter their birth date – a practice known as “age-gating” (a minimal sketch of that flow appears after this story). Critics argue that the process of tech companies guessing a child’s age may lead to privacy invasions.

    “Generally, this is how it will work: to determine whether a user in a state is under a specific age and whether the adult verifying a minor over that designated age is truly that child’s parent or guardian, online services will need to conduct identity verification,” said a spokesperson for NetChoice.

    The bills’ supporters argue that users of social media should not be required to upload identity documents, since the companies already know their age. “They’ve collected so many data points on users that they are advertising to kids because they know the user is a kid,” said a spokesperson for the advocacy group the Tech Oversight Project. “Social media companies’ business models are based on knowing who their users are.”

    NetChoice – and by extension, the tech industry – has several alternative proposals for improving child safety online. They include digital literacy and safety education in the classroom, so that children form “an understanding of healthy online practices in a classroom environment to better prepare them for modern challenges”.

    At a meeting in February to debate a proposed online child safety bill, NetChoice’s director, Amy Bos, argued that parental safety controls introduced by social media companies, and parental interventions such as taking away children’s phones after too much screen time, were better courses of action than regulation. Asking parents to opt into protecting their children often fails to achieve wide adoption, though: Snapchat and Discord told the US Senate in February that fewer than 1% of under-18 users on either social network had parents who monitor their online behavior using parental controls.

    Bos also ardently argued that the proposed bill breached first amendment rights. Her testimony prompted a Vermont state senator to ask: “You said, ‘We represent eBay and Etsy.’ Why would you mention those before TikTok and X in relation to a bill about social media platforms and teenagers?”

    NetChoice is also promoting the bipartisan Invest in Child Safety Act, which it says is aimed at giving “cops the needed resources to put predators behind bars”, highlighting that less than 1% of reported child sexual abuse material (CSAM) violations are investigated by law enforcement due to a lack of resources and capacity.

    Critics of NetChoice’s stance, however, argue that more needs to be done proactively to prevent children from harm in the first place, and that tech companies should take responsibility for ensuring safety rather than placing it on the shoulders of parents and children.

    “Big Tech and NetChoice are mistaken if they think they’re still fooling anybody with this ‘look there not here’ act,” said Sacha Haworth, executive director of the Tech Oversight Project. “The latest list of alleged ‘solutions’ they propose is just another feint to avoid any responsibility and kick the can down the road while continuing to profit off our kids.”

    All the state bills have faced opposition from tech companies, whether in strenuous statements or in-person lobbying by the firms’ representatives. Like Bos and Szabo, other tech lobbyists have needed prompting to disclose their tech patrons during testimony at hearings on child safety bills – if they notified legislators at all.

    A registered Amazon lobbyist who has spoken at two hearings on New Mexico’s version of the Kids Code bill said he represented the Albuquerque Hispano Chamber of Commerce and the New Mexico Hospitality Association. He never mentioned the e-commerce giant. At the same Vermont hearing where Bos’s motives and affiliations were questioned, a representative of another tech trade group did not disclose his organization’s backing from Meta – arguably the company that would be most affected by the bill’s stipulations.

    The bills’ supporters say these speakers are deliberately concealing whom they work for to make their messaging more convincing to lawmakers. “We see a clear and accelerating pattern of deception in anti-Kids Code lobbying,” said Haworth, whose group supports the bills. “Big tech companies that profit billions a year off kids refuse to face outraged citizens and bereaved parents themselves in all these states, instead sending front-group lobbyists in their place to oppose this legislation.”

    NetChoice denied the accusations. In a statement, a spokesperson for the group said: “We are a technology trade association. The claim that we are trying to conceal our affiliation with the tech industry is ludicrous.”

    These state-level bills follow attempts in California to introduce regulations aimed at protecting children’s privacy online. The California Age-Appropriate Design Code Act is based on similar legislation from the UK that became law in October. The California law, however, was blocked from taking effect in late 2023 by a federal judge, who granted NetChoice a preliminary injunction citing potential threats to the first amendment. Rights groups such as the American Civil Liberties Union also opposed the bill. Supporters in other states say they have learned from the fight in California, pointing out that language in the other states’ bills has been updated to address concerns raised in the Golden State.

    The online safety bills come amid increasing scrutiny of Meta’s products for their alleged roles in facilitating harm against children. Mark Zuckerberg, its CEO, was told he had “blood on his hands” at a January US Senate judiciary committee hearing on digital sexual exploitation; Zuckerberg turned and apologized to a group of assembled parents. In December, the New Mexico attorney general’s office filed a lawsuit against Meta for allegedly allowing its platforms to become a marketplace for child predators. The suit follows a 2023 Guardian investigation that revealed how child traffickers were using Meta platforms, including Instagram, to buy and sell children into sexual exploitation.

    “In time, as Meta’s scandals have piled up, their brand has become toxic to public policy debates,” said Jason Kint, CEO of Digital Content Next, a trade association focused on the digital content industry. “NetChoice leading with Apple, but then burying that Meta and TikTok are members in a hearing focused on social media harms sort of says it all.”

    A Meta spokesperson said the company wants teens to have age-appropriate experiences online and that it has developed more than 30 child safety tools. “We support clear, consistent legislation that makes it simple for parents to manage their teens’ online experiences,” the spokesperson said. “While some laws align with solutions we support, we have been open about our concerns over state legislation that holds apps to different standards in different states. Instead, parents should approve their teen’s app downloads, and we support legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps.”
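
    A minimal sketch of the “age-gating” self-declaration flow referenced above, under stated assumptions: the 18-year cutoff, the setting names and the example dates are all hypothetical, drawn from neither the bills nor any company’s system. It only shows how a service could compute an age from a self-declared birth date and apply high-privacy defaults to minors – and, as the critics quoted above note, nothing in this flow verifies the declared date.

    ```python
    # Hypothetical "age-gating" sketch: the threshold and setting names are
    # illustrative assumptions, not taken from the bills discussed above.
    from datetime import date

    ASSUMED_ADULT_AGE = 18  # the bills set varying cutoffs by state

    def age_on(birth_date: date, today: date) -> int:
        """Whole years elapsed between birth_date and today."""
        years = today.year - birth_date.year
        # Subtract a year if this year's birthday has not yet occurred.
        if (today.month, today.day) < (birth_date.month, birth_date.day):
            years -= 1
        return years

    def default_privacy_settings(birth_date: date) -> dict:
        """Return high-privacy defaults when the self-declared age is under the cutoff."""
        is_minor = age_on(birth_date, date.today()) < ASSUMED_ADULT_AGE
        return {
            "public_profile": not is_minor,    # minors default to private profiles
            "personalized_ads": not is_minor,  # no ad personalization for minors
            "location_sharing": not is_minor,  # location off by default for minors
        }

    # A self-declared child gets restrictive defaults; a self-declared adult does not.
    print(default_privacy_settings(date(2012, 5, 1)))
    print(default_privacy_settings(date(1990, 5, 1)))
    ```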

  • Facebook and Instagram to label digitally altered content ‘made with AI’

    Meta, the owner of Facebook and Instagram, announced major changes on Friday to its policies on digitally created and altered media, ahead of elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.

    The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.

    Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.

    The approach shifts the company’s treatment of manipulated content, from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

    Meta had previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time (a naive illustration of such a marker check appears after this story).

    A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.

    The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and the generative AI market leader OpenAI.

    In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately. The footage was permitted to stay up, because Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

    The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and to videos depicting people doing things they never actually did.
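
    A naive illustration of the invisible-marker idea referenced above, under stated assumptions: the article does not specify Meta’s detection method, so this sketch checks for one publicly documented provenance signal, the IPTC digital source type value “trainedAlgorithmicMedia” that some generators embed in image metadata. Real detectors parse the metadata containers properly, and such markers can be stripped or simply never added.

    ```python
    # Naive scan for one public AI-provenance marker. Illustrative only:
    # this is not Meta's detector, and a raw byte search is easily defeated.
    from pathlib import Path

    AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC digital source type for generative AI

    def has_ai_marker(path: str) -> bool:
        """Return True if the marker string appears anywhere in the file's bytes."""
        return AI_MARKER in Path(path).read_bytes()

    if __name__ == "__main__":
        for name in ("photo.jpg", "generated.png"):  # hypothetical local files
            try:
                verdict = "AI marker found" if has_ai_marker(name) else "no marker"
            except FileNotFoundError:
                verdict = "file not found"
            print(f"{name}: {verdict}")
    ```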

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact on Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”. “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility not to use AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and, most recently, Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month. And just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”. It says the companies will be transparent with users about their policies and will work to educate the public about how to avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know whether what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies largely to govern themselves. The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes circulating on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation”, as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step”, but that he would still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued on Friday that the accord is “not enough”, and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include the chatbot developers Anthropic and Inflection AI; the voice-clone startup ElevenLabs; the chip designer Arm Holdings; the security companies McAfee and TrendMicro; and Stability AI, known for making the image generator Stable Diffusion.

    Notably absent is another popular AI image generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment on Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.

    In a statement on Friday, the X CEO, Linda Yaccarino, said “every citizen and company has a responsibility to safeguard free and fair elections”. “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.