More stories

  • Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos

    Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies”.

    “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play Whac-a-Mole and finding everything that you think may mislead someone.”

    Several political leaders from Europe and the US also joined Friday’s announcement. Vera Jourová, the European Commission vice-president, said that while such an agreement can’t be comprehensive, “it contains very impactful and positive elements”. She also urged fellow politicians to take responsibility not to use AI tools deceptively, and warned that AI-fueled disinformation could bring about “the end of democracy, not only in the EU member states”.

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024.
    Bangladesh, Taiwan, Pakistan and most recently Indonesia have already done so.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked the US president Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

    Just days before Slovakia’s elections last September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

    Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    The accord calls on platforms to “pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”.

    It said the companies will focus on transparency to users about their policies and will work to educate the public about how they can avoid falling for AI fakes.

    Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out, and the companies have faced pressure to do more.

    That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.

    The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Many social media companies already have policies in place to deter deceptive posts about electoral processes – AI-generated or not. Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a “positive step”, but he would still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice-president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems”.

    In addition to the companies that helped broker Friday’s agreement, other signatories include the chatbot developers Anthropic and Inflection AI; the voice-clone startup ElevenLabs; the chip designer Arm Holdings; the security companies McAfee and TrendMicro; and Stability AI, known for making the image generator Stable Diffusion.

    Notably absent is another popular AI image generator, Midjourney. The San Francisco-based startup didn’t immediately respond to a request for comment Friday.

    The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the surprises of Friday’s agreement.
    Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free-speech absolutist”.

    In a statement Friday, X’s CEO, Linda Yaccarino, said “every citizen and company has a responsibility to safeguard free and fair elections”.

    “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

  • AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump, as part of an effort to avoid being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a prominent concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images.

    The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material, with or without the use of AI.

    Holz’s comments come as image-generator operators play a game of cat and mouse with users, adding safeguards to prevent the creation of questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”.

    “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted. It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, the AI company Inflection AI said in October that its chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI.

    “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.

    OpenAI’s chief executive, Sam Altman, said at a recent event: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation.
    The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents. Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”.

    Bloomberg noted that what is permitted or not varies according to the software version used. An older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney bans the generation of political images, consumers – among them voters – will probably be unaware.

    “We’ll probably just hammer it and not say anything,” Holz said.

  • SolarWinds hack was work of 'at least 1,000 engineers', tech executives tell Senate

    Tech executives revealed that a historic cybersecurity breach that affected about 100 US companies and nine federal agencies was larger and more sophisticated than previously known.

    The revelations came during a hearing of the US Senate’s select committee on intelligence on Tuesday on last year’s hack of SolarWinds, a Texas-based software company. Using SolarWinds and Microsoft programs, hackers believed to be working for Russia were able to infiltrate the companies and government agencies. Servers run by Amazon were also used in the cyber-attack, but that company declined to send representatives to the hearing.

    Representatives from the affected firms, including SolarWinds, Microsoft and the cybersecurity firms FireEye Inc and CrowdStrike Holdings, told senators that the true scope of the intrusions is still unknown, because most victims are not legally required to disclose attacks unless they involve sensitive information about individuals. But they described an operation of stunning size.

    Brad Smith, the Microsoft president, said its researchers believed “at least 1,000 very skilled, very capable engineers” worked on the SolarWinds hack. “This is the largest and most sophisticated sort of operation that we have seen,” Smith told senators.

    Smith said the hacking operation’s success was due to its ability to penetrate systems through routine processes. SolarWinds makes network monitoring software, which works deep in the infrastructure of information technology systems to identify and patch problems, and provides an essential service for companies around the world.

    “The world relies on the patching and updating of software for everything,” Smith said. “To disrupt or tamper with that kind of software is to in effect tamper with the digital equivalent of our Public Health Service. It puts the entire world at greater risk.”

    “It’s a little bit like a burglar who wants to break into a single apartment but manages to turn off the alarm system for every home and every building in the entire city,” he added. “Everybody’s safety is put at risk. That is what we’re grappling with here.”

    Smith said many techniques used by the hackers have not come to light and that the attacker might have used up to a dozen different means of getting into victim networks during the past year.

    Microsoft disclosed last week that the hackers had been able to read the company’s closely guarded source code for how its programs authenticate users. At many of the victims, the hackers manipulated those programs to access new areas inside their targets.

    Smith stressed that such movement was due not to programming errors on Microsoft’s part but to poor configurations and other controls on the customer’s part, including cases “where the keys to the safe and the car were left out in the open”.

    George Kurtz, the CrowdStrike chief executive, explained that in the case of his company, hackers used a third-party vendor of Microsoft software, which had access to CrowdStrike systems, and tried but failed to get into the company’s email.
    Kurtz turned the blame on Microsoft for its complicated architecture, which he called “antiquated”.

    “The threat actor took advantage of systemic weaknesses in the Windows authentication architecture, allowing it to move laterally within the network” and reach the cloud environment while bypassing multifactor authentication, Kurtz said.

    Where Smith appealed for government help in providing remedial instruction for cloud users, Kurtz said Microsoft should look to its own house and fix problems with its widely used Active Directory and Azure.

    “Should Microsoft address the authentication architecture limitations around Active Directory and Azure Active Directory, or shift to a different methodology entirely, a considerable threat vector would be completely eliminated from one of the world’s most widely used authentication platforms,” Kurtz said.

    The executives argued for greater transparency and information-sharing about breaches, with liability protections and a system that does not punish those who come forward, similar to airline disaster investigations.

    “It’s imperative for the nation that we encourage and sometimes even require better information-sharing about cyber-attacks,” Smith said.

    Lawmakers spoke with the executives about how threat intelligence can be more easily and confidentially shared among competitors and lawmakers to prevent large hacks like this in the future. They also discussed what kinds of repercussions nation-state-sponsored hacks warrant. The Biden administration is reportedly considering sanctions against Russia over the hack, according to the Washington Post.

    “This could have been exponentially worse and we need to recognize the seriousness of that,” said Senator Mark Warner of Virginia. “We can’t default to security fatalism. We’ve got to at least raise the cost for our adversaries.”

    Lawmakers berated Amazon for not appearing at the hearing, threatening to compel the company to testify at subsequent panels.

    “I think [Amazon has] an obligation to cooperate with this inquiry, and I hope they will voluntarily do so,” said Senator Susan Collins, a Republican. “If they don’t, I think we should look at next steps.”

    Reuters contributed to this report.

  • Russian hackers targeting US political campaigns ahead of elections, Microsoft warns

    The same Russian military intelligence outfit that hacked the Democrats in 2016 has attempted similar intrusions into the computer systems of organizations involved in the 2020 elections, Microsoft said Thursday.

    Those efforts, which have targeted more than 200 organizations including political parties and consultants, appear to be part of a broader increase in targeting of US political campaigns and related groups, the company said.

    “What we’ve seen is consistent with previous attack patterns that not only target candidates and campaign staffers but also those who they consult on key issues,” Tom Burt, a Microsoft vice-president, said in a blogpost.

    Most of the infiltration attempts by Russian, Chinese and Iranian agents were halted by Microsoft security software and the targets notified, he said. The company would not comment on who may have been successfully hacked or the impact.

    Microsoft did not assess which foreign adversary poses the greater threat to the integrity of the November presidential election. The consensus among cybersecurity experts is that Russian interference is the gravest. Senior Trump administration officials have disputed that, though without offering any evidence.

    Intelligence officials have found that – as in 2016 – the Russian government is attempting to undermine the Democratic candidate and boost Donald Trump’s chances of winning. In 2016, actors working on behalf of the Russian government hacked email accounts of the Democratic National Committee and publicly released stolen files and emails. The Russian government also funded “troll farms” in St Petersburg where nationals pretending to be from the US would post misinformation online to sow unrest.

    “This is the actor from 2016, potentially conducting business as usual,” said John Hultquist, the director of intelligence analysis at the cybersecurity firm FireEye. “We believe that Russian military intelligence continues to pose the greatest threat to the democratic process.”

    The subject of Russian interference has been an ongoing frustration for Trump, who has disputed the country’s meddling in the 2016 elections despite extensive evidence, calling it a “witch hunt”. Trump loyalists at the Department of Homeland Security have also manipulated and fabricated intelligence reports to downplay the threat of Russian interference, a whistleblower claimed on Wednesday.

    A spokeswoman for the Trump campaign said it takes cybersecurity threats “very seriously” and does not publicly comment on specific efforts it is making.

    “As President Trump’s re-election campaign, we are a large target, so it is not surprising to see malicious activity directed at the campaign or our staff,” she said. “We work closely with our partners, Microsoft and others, to mitigate these threats.”

    The attempted hacks come at a time when election security concerns are remarkably high, given that many people will be voting with mail-in ballots due to the Covid-19 pandemic. An international body in August called this “the most challenging” US election in recent decades.

    Campaigns are also at heightened risk of hacking given that many employees are now working from home, without the stronger security measures that may exist on workplace computers, said Bob Stevens, a vice-president at the mobile security firm Lookout.

    “Mobile devices now exist at the intersection of our work and personal lives,” he said.
    “Considering how reliant we are on them to support all aspects of our lives, bad actors have taken note.”

    The Microsoft revelations on Thursday show that Russian military intelligence continues to pursue election-related targets undeterred by US indictments, sanctions and other countermeasures, Hultquist said.

    Microsoft, which has visibility into these efforts because its software is both ubiquitous and highly rated for security, did not address whether US officials who manage elections or operate voting systems have been targeted by state-backed hackers this year. US intelligence officials say they have so far seen no evidence of that.

  • America's billionaires are giving to charity – but much of it is self-serving rubbish | Robert Reich

    Well-publicized philanthropy shows how afraid the super-rich are of a larger social safety net – and higher taxes. Jeff Bezos’s $100m donation, for example, amounts to about 11 days of his income. As millions of jobless Americans […]