More stories

  • Social Media Companies Still Boost Election Fraud Claims, Report Says

    The major social media companies all say they are ready to deal with a torrent of misinformation surrounding the midterm elections in November. A report released on Monday, however, claimed that they continued to undermine the integrity of the vote by allowing election-related conspiracy theories to fester and spread.

    In the report, the Stern Center for Business and Human Rights at New York University said the social media companies still host and amplify “election denialism,” threatening to further erode confidence in the democratic process. The companies, the report argued, bear a responsibility for the false but widespread belief among conservatives that the 2020 election was fraudulent — and that the coming midterms could be, too. The report joins a chorus of warnings from officials and experts that the results in November could be fiercely, even violently, contested.

    “The malady of election denialism in the U.S. has become one of the most dangerous byproducts of social media,” the report warned, “and it is past time for the industry to do more to address it.”

    The major platforms — Facebook, Twitter, TikTok and YouTube — have all announced promises or initiatives to combat disinformation ahead of the 2022 midterms, saying they were committed to protecting the election process. But the report said those measures were ineffective, haphazardly enforced or simply too limited.

    Facebook, for example, announced that it would ban ads that called into question the legitimacy of the coming elections, but it exempted politicians from its fact-checking program. That, the report says, allows candidates and other influential leaders to undermine confidence in the vote by questioning ballot procedures or other rules. In the case of Twitter, an internal report released as part of a whistle-blower’s complaint from a former head of security, Peiter Zatko, disclosed that the company’s site integrity team had only two experts on misinformation.

    The New York University report, which incorporated responses from all the companies except YouTube, called for greater transparency in how companies rank, recommend and remove content. It also said they should enhance fact-checking efforts and remove provably untrue claims, rather than simply label them false or questionable.

    A spokeswoman for Twitter, Elizabeth Busby, said the company was undertaking a multifaceted approach to ensuring reliable information about elections. That includes efforts to “pre-bunk” false information and to “reduce the visibility of potentially misleading claims via labels.”

    In a statement, YouTube said it agreed with “many of the points” made in the report and had already carried out many of its recommendations. “We’ve already removed a number of videos related to the midterms for violating our policies,” the statement said, “and the most viewed and recommended videos and channels related to the election are from authoritative sources, including news channels.” TikTok did not respond to a request for comment.

    There are already signs that the integrity of the vote in November will be as contentious as it was in 2020, when President Donald J. Trump and some of his supporters refused to accept the outcome, falsely claiming widespread fraud. Inattention by social media companies in the interim has allowed what the report describes as a coordinated campaign to take root among conservatives claiming, again without evidence, that wholesale election fraud is being used to tip elections to Democrats.

    “Election denialism,” the report said, “was evolving in 2021 from an obsession with the former president’s inability to accept defeat into a broader, if equally baseless, attack on the patriotism of all Democrats, as well as non-Trump-loving Republicans, and legions of election administrators, many of them career government employees.”

  • Political Campaigns Flood Streaming Video With Custom Voter Ads

    The targeted political ads could spread some of the same voter-influence techniques that proliferated on Facebook to an even less regulated medium.

    Over the last few weeks, tens of thousands of voters in the Detroit area who watch streaming video services were shown different local campaign ads pegged to their political leanings. Digital consultants working for Representative Darrin Camilleri, a Democrat in the Michigan House who is running for State Senate, targeted 62,402 moderate, female — and likely pro-choice — voters with an ad promoting reproductive rights. The campaign also ran a more general video ad for Mr. Camilleri, a former public-school teacher, directed at 77,836 Democrats and independents who have voted in past midterm elections. Viewers in Mr. Camilleri’s target audience saw the messages while watching shows on Lifetime, Vice and other channels on ad-supported streaming services like Samsung TV Plus and LG Channels.

    Although millions of American voters may not be aware of it, the powerful data-mining techniques that campaigns routinely use to tailor political ads to consumers on sites and apps are making the leap to streaming video. The targeting has become so precise that next-door neighbors streaming the same true-crime show on the same streaming service may now be shown different political ads — based on data about their voting record, party affiliation, age, gender, race or ethnicity, estimated home value, shopping habits or views on gun control.

    Political consultants say the ability to tailor streaming video ads to small swaths of viewers could be crucial this November for candidates like Mr. Camilleri who are facing tight races. In 2016, Mr. Camilleri won his first state election by just several hundred votes.

    “Very few voters wind up determining the outcomes of close elections,” said Ryan Irvin, the co-founder of Change Media Group, the agency behind Mr. Camilleri’s ad campaign. “Very early in an election cycle, we can pull from the voter database a list of those 10,000 voters, match them on various platforms and run streaming TV ads to just those 10,000 people.”

    Targeted political ads on streaming platforms — video services delivered via internet-connected devices like TVs and tablets — seemed like a niche phenomenon during the 2020 presidential election. Two years later, streaming has become the most highly viewed TV medium in the United States, according to Nielsen. Savvy candidates and advocacy groups are flooding streaming services with ads in an effort to reach cord-cutters and “cord nevers,” people who have never watched traditional cable or broadcast TV.

    The trend is growing so fast that political ads on streaming services are expected to generate $1.44 billion — or about 15 percent — of the projected $9.7 billion in ad spending for the 2022 election cycle, according to a report from AdImpact, an ad tracking company. That would for the first time put streaming on par with political ad spending on Facebook and Google.

    The quick proliferation of the streaming political messages has prompted some lawmakers and researchers to warn that the ads are outstripping federal regulation and oversight. For example, while political ads running on broadcast and cable TV must disclose their sponsors, federal rules on political ad transparency do not specifically address streaming video services. Unlike broadcast TV stations, streaming platforms are also not required to maintain public files about the political ads they sold. The result, experts say, is an unregulated ecosystem in which streaming services take wildly different approaches to political ads.

    “There are no rules over there, whereas, if you are a broadcaster or a cable operator, you definitely have rules you have to operate by,” said Steve Passwaiter, a vice president at Kantar Media, a company that tracks political advertising.

    The boom in streaming ads underscores a significant shift in the way that candidates, party committees and issue groups may target voters. For decades, political campaigns have blanketed local broadcast markets with candidate ads or tailored ads to the slant of cable news channels. With such bulk media buying, viewers watching the same show at the same time as their neighbors saw the same political messages. But now campaigns are employing advanced consumer-profiling and automated ad-buying services to deliver different streaming video messages, tailored to specific voters.

    “In the digital ad world, you’re buying the person, not the content,” said Mike Reilly, a partner at MVAR Media, a progressive political consultancy that creates ad campaigns for candidates and advocacy groups.

    Targeted political ads are being run on a slew of different ad-supported streaming channels. Some smart TV manufacturers air the political ads on proprietary streaming platforms, like Samsung TV Plus and LG Channels. Viewers watching ad-supported streaming channels via devices like Roku may also see targeted political ads.

    Policies on political ad targeting vary. Amazon prohibits political party and candidate ads on its streaming services. YouTube TV and Hulu allow political candidates to target ads based on viewers’ ZIP code, age and gender, but they prohibit political ad targeting by voting history or party affiliation. Roku, which maintains a public archive of some political ads running on its platform, declined to comment on its ad-targeting practices. Samsung and LG, which has publicly promoted its voter-targeting services for political campaigns, did not respond to requests for comment. Netflix declined to comment about its plans for an ad-supported streaming service.

    Targeting political ads on streaming services can involve more invasive data-mining than the consumer-tracking techniques typically used to show people online ads for sneakers. Political consulting firms can buy profiles on more than 200 million voters, including details on an individual’s party affiliations, voting record, political leanings, education levels, income and consumer habits. Campaigns may employ that data to identify voters concerned about a specific issue — like guns or abortion — and hone video messages to them.

    In addition, internet-connected TV platforms like Samsung, LG and Roku often use data-mining technology, called “automated content recognition,” to analyze snippets of the videos people watch and segment viewers for advertising purposes. Some streaming services and ad tech firms allow political campaigns to provide lists of specific voters to whom they wish to show ads. To serve those messages, ad tech firms employ precise delivery techniques — like using IP addresses to identify devices in a voter’s household. The device mapping allows political campaigns to aim ads at certain voters whether they are streaming on internet-connected TVs, tablets, laptops or smartphones.

    Using IP addresses, “we can intercept voters across the nation,” Sten McGuire, an executive at a4 Advertising, said in a webinar in March announcing a partnership to sell political ads on LG channels. His company’s ad-targeting worked, Mr. McGuire added, “whether you are looking to reach new cord cutters or ‘cord nevers’ streaming their favorite content, targeting Spanish-speaking voters in swing states, reaching opinion elites and policy influencers or members of Congress and their staff.”

    Some researchers caution that targeted video ads could spread some of the same voter-influence techniques that have proliferated on Facebook to a new, and even less regulated, medium. Facebook and Google, the researchers note, instituted some restrictions on political ad targeting after Russian operatives used digital platforms to try to disrupt the 2016 presidential election. With such restrictions in place, political advertisers on Facebook, for instance, should no longer be able to target users interested in Malcolm X or Martin Luther King with paid messages urging them not to vote. Facebook and Google have also created public databases that enable people to view political ads running on the platforms.

    But many streaming services lack such targeting restrictions and transparency measures. The result, these experts say, is an opaque system of political influence that runs counter to basic democratic principles.

    “This occupies a gray area that’s not getting as much scrutiny as ads running on social media,” said Becca Ricks, a senior researcher at the Mozilla Foundation who has studied the political ad policies of popular streaming services. “It creates an unfair playing field where you can precisely target, and change, your messaging based on the audience — and do all of this without some level of transparency.”

    Some political ad buyers are shying away from more restricted online platforms in favor of more permissive streaming services. “Among our clients, the percentage of budget going to social channels, and on Facebook and Google in particular, has been declining,” said Grace Briscoe, an executive overseeing candidate and political issue advertising at Basis Technologies, an ad tech firm. “The kinds of limitations and restrictions that those platforms have put on political ads has disinclined clients to invest as heavily there.”

    Senators Amy Klobuchar and Mark Warner have introduced the Honest Ads Act, which would require online political ads to include disclosures similar to those on broadcast TV ads. Members of Congress have introduced a number of other bills that would curb voter-targeting or require digital ads to adhere to the same rules as broadcast ads, but the measures have not yet been enacted.

    Amid widespread covertness in the ad-targeting industry, Mr. Camilleri, the member of the Michigan House running for State Senate, was unusually forthcoming about how he was using streaming services to try to engage specific swaths of voters. In prior elections, he said, he sent postcards introducing himself to voters in neighborhoods where he planned to make campaign stops. During this year’s primaries, he updated the practice by running streaming ads introducing himself to certain households a week or two before he planned to knock on their doors.

    “It’s been working incredibly well because a lot of people will say, ‘Oh, I’ve seen you on TV,’” Mr. Camilleri said, noting that many of his constituents did not appear to understand the ads were shown specifically to them and not to a general broadcast TV audience. “They don’t differentiate” between TV and streaming, he added, “because you’re watching YouTube on your television now.”
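    The audience-building step described above (pulling a narrow segment from a commercial voter file and matching it to a streaming platform as a custom audience) can be illustrated with a rough Python sketch. This is a hypothetical example based only on the article's description; the file name, column names and flag values are assumptions, not any vendor's actual schema or tooling.

        import csv

        # Hypothetical voter file; commercial files differ by vendor and state.
        # Assumed columns: voter_id, gender, ideology, prochoice_flag, voted_2018
        VOTER_FILE = "voter_file.csv"

        def load_voters(path):
            with open(path, newline="") as f:
                return list(csv.DictReader(f))

        def target_segment(voters):
            """Select roughly the audience the article describes: moderate,
            female voters modeled as likely pro-choice who voted in a past midterm."""
            return [
                v["voter_id"]
                for v in voters
                if v.get("gender") == "F"
                and v.get("ideology") == "moderate"
                and v.get("prochoice_flag") == "1"
                and v.get("voted_2018") == "1"
            ]

        if __name__ == "__main__":
            audience = target_segment(load_voters(VOTER_FILE))
            # The resulting ID list would then be matched to households on a
            # streaming platform (for example, by hashed name and address)
            # to build the custom audience for the ad buy.
            print(f"{len(audience)} voters selected for the streaming ad audience")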

  • To Fight Election Falsehoods, Social Media Companies Ready a Familiar Playbook

    The election dashboards are back online, the fact-checking teams have reassembled, and warnings about misleading content are cluttering news feeds once again. As the United States marches toward another election season, social media companies are steeling themselves for a deluge of political misinformation. Those companies, including TikTok and Facebook, are trumpeting a series of election tools and strategies that look similar to their approaches in previous years.

    Disinformation watchdogs warn that while many of these programs are useful — especially efforts to push credible information in multiple languages — the tactics proved insufficient in previous years and may not be enough to combat the wave of falsehoods pushed this election season.

    Here are the anti-misinformation plans for Facebook, TikTok, Twitter and YouTube.

    Facebook

    Facebook’s approach this year will be “largely consistent with the policies and safeguards” from 2020, Nick Clegg, president of global affairs for Meta, Facebook’s parent company, wrote in a blog post last week. Posts rated false or partly false by one of Facebook’s 10 American fact-checking partners will get one of several warning labels, which can force users to click past a banner reading “false information” before they can see the content. In a change from 2020, those labels will be used in a more “targeted and strategic way” for posts discussing the integrity of the midterm elections, Mr. Clegg wrote, after users complained that they were “over-used.”

    Facebook will also expand its efforts to address harassment and threats aimed at election officials and poll workers. Misinformation researchers said the company has taken greater interest in moderating content that could lead to real-world violence after the Jan. 6 attack on the U.S. Capitol.

    Facebook greatly expanded its election team after the 2016 election, to more than 300 people, and Mark Zuckerberg, Facebook’s chief executive, took a personal interest in safeguarding elections. But Meta has changed its focus since the 2020 election. Mr. Zuckerberg is now focused instead on building the metaverse and tackling stiff competition from TikTok. The company has dispersed its election team and signaled that it could shut down CrowdTangle, a tool that helps track misinformation on Facebook, some time after the midterms.

    “I think they’ve just come to the conclusion that this is not really a problem that they can tackle at this point,” said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit focused on technology and democracy.

    In a statement, a spokesman from Meta said its elections team was absorbed into other parts of the company and that more than 40 teams are now focused on the midterms.

    TikTok

    In a blog post announcing its midterm plans, Eric Han, the head of U.S. safety, said the company would continue its fact-checking program from 2020, which prevents some videos from being recommended until they are verified by outside fact checkers. It also introduced an election information portal, which provides voter information like how to register, six weeks earlier than it did in 2020.

    Even so, there are already clear signs that misinformation has thrived on the platform throughout the primaries. “TikTok is going to be a massive vector for disinformation this cycle,” Mr. Lehrich said, adding that the platform’s short video and audio clips are harder to moderate, enabling “massive amounts of disinformation to go undetected and spread virally.”

    TikTok said its moderation efforts would focus on stopping creators who are paid for posting political content in violation of the company’s rules. TikTok has never allowed paid political posts or political advertising, but the company said that some users were circumventing or ignoring those policies during the 2020 election. A representative from the company said TikTok would start approaching talent management agencies directly to outline its rules.

    Disinformation watchdogs have criticized the company for a lack of transparency over the origins of its videos and the effectiveness of its moderation practices. Experts have called for more tools to analyze the platform and its content — the kind of access that other companies provide.

    “The consensus is that it’s a five-alarm fire,” said Zeve Sanderson, the founding executive director at New York University’s Center for Social Media and Politics. “We don’t have a good understanding of what’s going on there,” he added.

    Last month, Vanessa Pappas, TikTok’s chief operating officer, said the company would begin sharing some data with “selected researchers” this year.

    Twitter

    In a blog post outlining its plans for the midterm elections, the company said it would reactivate its Civic Integrity Policy — a set of rules adopted in 2018 that the company uses ahead of elections around the world. Under the policy, warning labels, similar to those used by Facebook, will once again be added to false or misleading tweets about elections, voting or election integrity, often pointing users to accurate information or additional context. Tweets that receive the labels are not recommended or distributed by the company’s algorithms, and the company can also remove false or misleading tweets entirely.

    Those labels were redesigned last year, resulting in 17 percent more clicks for additional information, the company said. Interactions, like replies and retweets, fell on tweets that used the modified labels.

    The strategy reflects Twitter’s attempts to limit false content without always resorting to removing tweets and banning users. The approach may help the company navigate difficult freedom-of-speech issues, which have dogged social media companies as they try to limit the spread of misinformation. Elon Musk, the Tesla executive, made freedom of speech a central criticism during his attempts to buy the company earlier this year.

    YouTube

    Unlike the other major online platforms, YouTube has not released its own election misinformation plan for 2022 and has typically stayed quiet about its election misinformation strategy. “YouTube is nowhere to be found still,” Mr. Sanderson said. “That sort of aligns with their general P.R. strategy, which just seems to be: Don’t say anything and no one will notice.”

    Google, YouTube’s parent company, published a blog post in March emphasizing its efforts to surface authoritative content through the streamer’s recommendation engine and remove videos that mislead voters. In another post aimed at creators, Google details how channels can receive “strikes” for sharing certain kinds of misinformation; after three strikes within a 90-day period, the channel is terminated.

    The video streaming giant has played a major role in distributing political misinformation, giving an early home to conspiracy theorists like Alex Jones, who was later banned from the site. It has taken a stronger stance against medical misinformation, stating last September that it would remove all videos and accounts sharing vaccine misinformation. The company ultimately banned some prominent conservative personalities.

    More than 80 fact checkers at independent organizations around the world signed a letter in January warning YouTube that its platform is being “weaponized” to promote voter fraud conspiracy theories and other election misinformation.

    In a statement, Ivy Choi, a YouTube spokeswoman, said its election team had been meeting for months to prepare for the midterms and added that its recommendation engine is “continuously and prominently surfacing midterms-related content from authoritative news sources and limiting the spread of harmful midterms-related misinformation.”
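    The three-strikes rule Google describes for YouTube creators amounts to a sliding-window count: a channel is terminated once it accumulates three misinformation strikes within any 90-day period. Below is a minimal sketch of that logic, assuming a simple list of strike dates; the function and data structure are illustrative, not YouTube's actual implementation.

        from datetime import date, timedelta

        STRIKE_WINDOW = timedelta(days=90)
        STRIKE_LIMIT = 3

        def should_terminate(strike_dates, today):
            """Return True if three or more strikes fall within the trailing 90 days."""
            recent = [d for d in strike_dates if today - d <= STRIKE_WINDOW]
            return len(recent) >= STRIKE_LIMIT

        # Example: two old strikes plus two recent ones; only the recent pair counts.
        strikes = [date(2022, 1, 5), date(2022, 2, 1), date(2022, 8, 20), date(2022, 9, 15)]
        print(should_terminate(strikes, today=date(2022, 10, 1)))  # False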

  • Russian National Charged With Spreading Propaganda Through U.S. Groups

    Federal authorities say the man recruited several American political groups and used them to sow discord and interfere with elections.

    MIAMI — The Russian man with a trim beard and patterned T-shirt appeared in a Florida political group’s YouTube livestream in March, less than three weeks after his country had invaded Ukraine, and falsely claimed that what had happened was not an invasion. “I would like to address the free people around the world to tell you that Western propaganda is lying when they say that Russia invaded Ukraine,” he said through an interpreter.

    His name was Aleksandr Viktorovich Ionov, and he described himself as a “human rights activist.” But federal authorities say he was working for the Russian government, orchestrating a yearslong influence campaign to use American political groups to spread Russian propaganda and interfere with U.S. elections. On Friday, the Justice Department revealed that it had charged Mr. Ionov with conspiring to have American citizens act as illegal agents of the Russian government.

    Mr. Ionov, 32, who lives in Moscow and is not in custody, is accused of recruiting three political groups in Florida, Georgia and California from December 2014 through March, providing them with financial support and directing them to publish Russian propaganda. On Friday, the Treasury Department imposed sanctions against him.

    David Walker, the top agent in the F.B.I.’s Tampa field office, called the allegations “some of the most egregious and blatant violations we’ve seen by the Russian government in order to destabilize and undermine trust in American democracy.”

    In 2017 and 2019, Mr. Ionov supported the campaigns of two candidates for local office in St. Petersburg, Fla., where one of the American political groups was based, according to a 24-page indictment. He wrote to a Russian official in 2019 that he had been “consulting every week” on one of the campaigns, the indictment said. “Our election campaign is kind of unique,” a Russian intelligence officer wrote to Mr. Ionov, adding, “Are we the first in history?” Mr. Ionov later referred to the candidate, who was not named in the indictment, as the one “whom we supervise.”

    In 2016, according to the indictment, Mr. Ionov paid for the St. Petersburg group to conduct a four-city protest tour supporting a “Petition on Crime of Genocide Against African People in the United States,” which the group had previously submitted to the United Nations at his direction.

    “The goal is to heighten grievances,” Peter Strzok, a former top F.B.I. counterintelligence official, said of the sort of behavior Mr. Ionov is accused of carrying out. “They just want to fund opposing forces. It’s a means to encourage social division at a low cost. The goal is to create strife and division.”

    The Russian government has a long history of trying to sow division in the U.S., in particular during the 2016 presidential campaign. Mr. Strzok said the Russians were known to plant stories with fringe groups in an effort to introduce disinformation into the media ecosystem.

    Federal investigators described Mr. Ionov as the founder and president of the Anti-Globalization Movement of Russia and said it was funded by the Russian government. They said he worked with at least three Russian officials and in conjunction with the F.S.B., a Russian intelligence agency.

    The indictment issued on Friday did not name the U.S. political groups, their leaders or the St. Petersburg candidates, who were identified only as Unindicted Co-conspirator 3 and Unindicted Co-conspirator 4. Mr. Ionov is the only person who has been charged in the case. But leaders of the Uhuru Movement, which is based in St. Petersburg and is part of the African People’s Socialist Party, said that their office and their chairman’s home had been raided by federal agents on Friday morning as part of the investigation.

    “They handcuffed me and my wife,” the chairman, Omali Yeshitela, said on Facebook Live from outside the group’s new headquarters in St. Louis. He said he did not take Russian government money but would not be “morally opposed” to accepting funds from Russians or “anyone else who wants to support the struggles for Black people.”

    The indictment said that Mr. Ionov paid for the founder and chairman of the St. Petersburg group — identified as Unindicted Co-conspirator 1 — to travel to Moscow in 2015. Upon his return, the indictment said, the chairman said in emails with other group leaders that Mr. Ionov wanted the group to be “an instrument” of the Russian government, which did not “disturb us.”

    “Yes, I have been to Russia,” Mr. Yeshitela said in his Facebook Live appearance on Friday, without addressing when he went and who paid for his trip. He added that he has also been to other countries, including South Africa and Nicaragua.

    In St. Petersburg, Akilé Anai of the Uhuru Movement said in a news conference that federal authorities had seized her car and other personal property. She called the investigation an attack on the Uhuru Movement, which has long been a presence in St. Petersburg but has had little success in local politics. “We can have relationships with whoever we want to,” she said, adding that the Uhuru Movement has made no secret of backing Russia in the war in Ukraine. “We are in support of Russia.”

    Ms. Anai ran for the City Council in 2017 and 2019 as Eritha “Akilé” Cainion. She received about 18 percent of the vote in the 2019 runoff election.

    Mr. Ionov is also accused of directing an unidentified political group in Sacramento that pushed for California’s secession from the United States. The indictment said that he helped fund a 2018 protest in the State Capitol and encouraged the group’s leader to try to get into the governor’s office. And Mr. Ionov is accused of directing an unidentified political group in Atlanta, paying for its members to travel to San Francisco this year to protest at the headquarters of a social media company that restricted pro-Russian posts about the invasion of Ukraine. Mr. Ionov even provided designs for protest signs, according to the indictment.

    After Russia invaded Ukraine in February, the indictment said, Mr. Ionov told his Russian intelligence associates that he had asked the St. Petersburg group to support Russia in the “information war unleashed” by the West.

  • YouTube Deletes Jan. 6 Video That Included Clip of Trump Sharing Election Lies

    The House select committee investigating the Jan. 6 riot has been trying to draw more eyes to its televised hearings by uploading clips of the proceedings online. But YouTube has removed one of those videos from its platform, saying the committee was advancing election misinformation.

    The excerpt, which was uploaded June 14, included recorded testimony from former Attorney General William P. Barr. But the problem for YouTube was that the video also included a clip of former President Donald J. Trump sharing lies about the election on the Fox Business channel. “We had glitches where they moved thousands of votes from my account to Biden’s account,” Mr. Trump said falsely, before suggesting the F.B.I. and Department of Justice may have been involved.

    The excerpt of the hearing did not include Mr. Barr’s perspective, stated numerous times elsewhere in the hearing, that Mr. Trump’s assertion that the election was stolen was wrong. The video initially was replaced with a black box stating that the clip had been removed for violating YouTube’s terms of service.

    “Our election integrity policy prohibits content advancing false claims that widespread fraud, errors or glitches changed the outcome of the 2020 U.S. presidential election, if it does not provide sufficient context,” Ivy Choi, a YouTube spokeswoman, said in a statement. “We enforce our policies equally for everyone, and have removed the video uploaded by the Jan. 6 committee channel.”

    The message on the video page has since been changed to “This video is private,” which may mean that YouTube would allow the committee to upload a version of the clip that makes clear that Mr. Trump’s claims are false.

  • Jan. 6 Committee Subpoenas Twitter, Meta, Alphabet and Reddit

    The panel investigating the attack on the Capitol is demanding information from Alphabet, Meta, Reddit and Twitter.

    WASHINGTON — The House committee investigating the Jan. 6 attack on the Capitol issued subpoenas on Thursday to four major social media companies — Alphabet, Meta, Reddit and Twitter — criticizing them for allowing extremism to spread on their platforms and saying they had failed to cooperate adequately with the inquiry.

    In letters accompanying the subpoenas, the panel named Facebook, a unit of Meta, and YouTube, which is owned by Alphabet’s Google subsidiary, as among the worst offenders that contributed to the spread of misinformation and violent extremism. The committee said it had been investigating how the companies “contributed to the violent attack on our democracy, and what steps — if any — social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.”

    “It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,” said the panel’s chairman, Representative Bennie Thompson, Democrat of Mississippi.

    The committee sent letters in August to 15 social media companies — including sites where misinformation about election fraud spread, such as the pro-Trump website TheDonald.win — seeking documents pertaining to efforts to overturn the election and any domestic violent extremists associated with the Jan. 6 rally and attack. After months of discussions with the companies, only the four large corporations were issued subpoenas on Thursday, because the committee said the firms were “unwilling to commit to voluntarily and expeditiously” cooperating with its work. A committee aide said investigators were in various stages of negotiations with the other companies.

    In the year since the events of Jan. 6, social media companies have been heavily scrutinized over whether their sites played an instrumental role in organizing the attack.

    In the months surrounding the 2020 election, employees inside Meta raised warning signs that Facebook posts and comments containing “combustible election misinformation” were spreading quickly across the social network, according to a cache of documents and photos reviewed by The New York Times. Many of those employees criticized Facebook leadership’s inaction when it came to the spread of the QAnon conspiracy group, which they said also contributed to the attack. Frances Haugen, a former Facebook employee turned whistle-blower, said the company relaxed its safeguards too quickly after the election, which then led it to be used in the storming of the Capitol.

    Critics say that other platforms also played an instrumental role in the spread of misinformation while contributing to the events of Jan. 6. In the days after the attack, Reddit banned a discussion forum dedicated to former President Donald J. Trump, where tens of thousands of Mr. Trump’s supporters regularly convened to express solidarity with him. On Twitter, many of Mr. Trump’s followers used the site to amplify and spread false allegations of election fraud, while connecting with other Trump supporters and conspiracy theorists. And on YouTube, some users broadcast the events of Jan. 6 using the platform’s video streaming technology.

    Representatives for the tech companies have been in discussions with the investigating committee, though how much in the way of evidence or user records the firms have handed over remains unclear. The committee said letters to the four firms accompanied the subpoenas.

    The panel said YouTube served as a platform for “significant communications by its users that were relevant to the planning and execution of Jan. 6 attack on the United States Capitol,” including livestreams of the attack as it was taking place. “To this day, YouTube is a platform on which user video spread misinformation about the election,” Mr. Thompson wrote.

    The panel said Facebook and other Meta platforms were used to share messages of “hate, violence and incitement; to spread misinformation, disinformation and conspiracy theories around the election; and to coordinate or attempt to coordinate the Stop the Steal movement.” Public accounts about Facebook’s civic integrity team indicate that Facebook has documents that are critical to the select committee’s investigation, the panel said.

    “Meta has declined to commit to a deadline for producing or even identifying these materials,” Mr. Thompson wrote to Mark Zuckerberg, Meta’s chief executive.

  • YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.

    Share of Election-Related Posts on Social Platforms Linking to Videos Making Claims of Fraud. Source: Center for Social Media and Politics at New York University.

    YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.

    Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

    But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election. The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

    The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7.

    To reach their findings, researchers collected a random sampling of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using a Facebook-owned social media analytics tool, CrowdTangle. From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud, using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021.

    Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.

    Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy. “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said.

    Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results. But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and N.Y.U.’s own research.

    “It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”
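    The filtering step the researchers describe (isolating posts that link to YouTube, then flagging those whose linked videos match election-fraud keywords such as “Stop the Steal” and “Sharpiegate”) can be sketched roughly as follows. This is an illustrative reconstruction based only on the article's description; the post schema and most of the keyword list are assumptions, not the study's actual code.

        import re

        # Two keywords named in the article plus illustrative additions;
        # the study's full keyword list is not given here.
        FRAUD_KEYWORDS = ["stop the steal", "sharpiegate", "election fraud", "rigged election"]

        YOUTUBE_LINK = re.compile(r"youtube\.com/watch|youtu\.be/", re.IGNORECASE)

        def is_youtube_share(post_text):
            """True if the post links to a YouTube video."""
            return bool(YOUTUBE_LINK.search(post_text))

        def makes_fraud_claim(video_title):
            """True if the linked video's title matches an election-fraud keyword."""
            title = video_title.lower()
            return any(kw in title for kw in FRAUD_KEYWORDS)

        def fraud_share_proportion(posts):
            """posts: list of dicts with 'text' and 'video_title' keys (assumed schema).
            Returns the share of YouTube-linking posts whose videos make fraud claims,
            a simplification of the proportions reported in the article."""
            youtube_posts = [p for p in posts if is_youtube_share(p["text"])]
            if not youtube_posts:
                return 0.0
            fraud_posts = [p for p in youtube_posts if makes_fraud_claim(p["video_title"])]
            return len(fraud_posts) / len(youtube_posts)

        # Toy example with two sampled posts:
        sample = [
            {"text": "watch https://youtube.com/watch?v=abc123", "video_title": "Stop the Steal rally live"},
            {"text": "watch https://youtu.be/def456", "video_title": "Cooking pasta at home"},
        ]
        print(fraud_share_proportion(sample))  # 0.5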

  • Germany Struggles to Stop Online Abuse Ahead of Election

    Scrolling through her social media feed, Laura Dornheim is regularly stopped cold by a new blast of abuse aimed at her, including from people threatening to kill or sexually assault her. One person last year said he looked forward to meeting her in person so he could punch her teeth out.

    Ms. Dornheim, a candidate for Parliament in Germany’s election on Sunday, is often attacked for her support of abortion rights, gender equality and immigration. She flags some of the posts to Facebook and Twitter, hoping that the platforms will delete the posts or that the perpetrators will be barred. She’s usually disappointed. “There might have been one instance where something actually got taken down,” Ms. Dornheim said.

    Harassment and abuse are all too common on the modern internet. Yet it was supposed to be different in Germany. In 2017, the country enacted one of the world’s toughest laws against online hate speech. It requires Facebook, Twitter and YouTube to remove illegal comments, pictures or videos within 24 hours of being notified about them or risk fines of up to 50 million euros, or $59 million. Supporters hailed it as a watershed moment for internet regulation and a model for other countries.

    But an influx of hate speech and harassment in the run-up to the German election, in which the country will choose a new leader to replace Angela Merkel, its longtime chancellor, has exposed some of the law’s weaknesses. Much of the toxic speech, researchers say, has come from far-right groups and is aimed at intimidating female candidates like Ms. Dornheim.

    Some critics of the law say it is too weak, with limited enforcement and oversight. They also maintain that many forms of abuse are deemed legal by the platforms, such as certain kinds of harassment of women and public officials. And when companies do remove illegal material, critics say, they often do not alert the authorities or share information about the posts, making prosecutions of the people publishing the material far more difficult. Another loophole, they say, is that smaller platforms like the messaging app Telegram, popular among far-right groups, are not subject to the law.

    Free-expression groups criticize the law on other grounds. They argue that the law should be abolished not only because it fails to protect victims of online abuse and harassment, but also because it sets a dangerous precedent for government censorship of the internet.

    The country’s experience may shape policy across the continent. German officials are playing a key role in drafting one of the world’s most anticipated new internet regulations, a European Union law called the Digital Services Act, which will require Facebook and other online platforms to do more to address the vitriol, misinformation and illicit content on their sites. Ursula von der Leyen, a German who is president of the European Commission, the 27-nation bloc’s executive arm, has called for an E.U. law that would list gender-based violence as a special crime category, a proposal that would include online attacks.

    “Germany was the first to try to tackle this kind of online accountability,” said Julian Jaursch, a project director at the German think tank Stiftung Neue Verantwortung, which focuses on digital issues. “It is important to ask whether the law is working.”

    Marc Liesching, a professor at HTWK Leipzig who published an academic report on the policy, said that of the posts that had been deleted by Facebook, YouTube and Twitter, a vast majority were classified as violating company policies, not the hate speech law. That distinction makes it harder for the government to measure whether companies are complying with the law. In the second half of 2020, Facebook removed 49 million pieces of “hate speech” based on its own community standards, compared with the 154 deletions that it attributed to the German law, he found. The law, Mr. Liesching said, “is not relevant in practice.”

    With its history of Nazism, Germany has long tried to balance free speech rights against a commitment to combat hate speech. Among Western democracies, the country has some of the world’s toughest laws against incitement to violence and hate speech. Targeting religious, ethnic and racial groups is illegal, as are Holocaust denial and displaying Nazi symbols in public.

    To address concerns that companies were not alerting the authorities to illegal posts, German policymakers this year passed amendments to the law. They require Facebook, Twitter and YouTube to turn over data to the police about accounts that post material that German law would consider illegal speech. The Justice Ministry was also given more powers to enforce the law.

    “The aim of our legislative package is to protect all those who are exposed to threats and insults on the internet,” Christine Lambrecht, the justice minister, who oversees enforcement of the law, said after the amendments were adopted. “Whoever engages in hate speech and issues threats will have to expect to be charged and convicted.”

    Facebook and Google have filed a legal challenge to block the new rules, arguing that providing the police with personal information about users violates their privacy.

    Facebook said that as part of an agreement with the government it now provided more figures about the complaints it received. From January through July, the company received more than 77,000 complaints, which led it to delete or block about 11,500 pieces of content under the German law, known as NetzDG. “We have zero tolerance for hate speech and support the aims of NetzDG,” Facebook said in a statement.

    Twitter, which received around 833,000 complaints and removed roughly 81,000 posts during the same period, said a majority of those posts did not fit the definition of illegal speech, but still violated the company’s terms of service. “Threats, abusive content and harassment all have the potential to silence individuals,” Twitter said in a statement. “However, regulation and legislation such as this also has the potential to chill free speech by emboldening regimes around the world to legislate as a way to stifle dissent and legitimate speech.”

    YouTube, which received around 312,000 complaints and removed around 48,000 pieces of content in the first six months of the year, declined to comment other than saying it complies with the law.

    The amount of hate speech has become increasingly pronounced during election season, according to researchers at Reset and HateAid, organizations that track online hate speech and are pushing for tougher laws. The groups reviewed nearly one million comments on far-right and conspiratorial groups across about 75,000 Facebook posts in June, finding that roughly 5 percent were “highly toxic” or violated the online hate speech law. Some of the worst material, including messages with Nazi symbolism, had been online for more than a year, the groups found. Of 100 posts reported by the groups to Facebook, roughly half were removed within a few days, while the others remain online. The election has also seen a wave of misinformation, including false claims about voter fraud.

    Annalena Baerbock, the 40-year-old leader of the Green Party and the only woman among the top candidates running to succeed Ms. Merkel, has been the subject of an outsize amount of abuse compared with her male rivals from other parties, including sexist slurs and misinformation campaigns, according to researchers. Others have stopped running altogether. In March, a former Syrian refugee running for the German Parliament, Tareq Alaows, dropped out of the race after experiencing racist attacks and violent threats online.

    While many policymakers want Facebook and other platforms to be aggressive in screening user-generated content, others have concerns about private companies making decisions about what people can and can’t say. The far-right party Alternative for Germany, which has criticized the law for unfairly targeting its supporters, has vowed to repeal the policy “to respect freedom of expression.”

    Jillian York, an author and free speech activist with the Electronic Frontier Foundation in Berlin, said the German law encouraged companies to remove potentially offensive speech that is perfectly legal, undermining free expression rights. “Facebook doesn’t err on the side of caution, they just take it down,” Ms. York said. Another concern, she said, is that less democratic countries such as Turkey and Belarus have adopted laws similar to Germany’s so that they could classify certain material critical of the government as illegal.

    Renate Künast, a former government minister who once invited a journalist to accompany her as she confronted individuals in person who had targeted her with online abuse, wants to see the law go further. Victims of online abuse should be able to go after perpetrators directly for libel and financial settlements, she said. Without that ability, she added, online abuse will erode political participation, particularly among women and minority groups. In a survey of more than 7,000 German women released in 2019, 58 percent said they did not share political opinions online for fear of abuse.

    “They use the verbal power of hate speech to force people to step back, leave their office or not to be candidates,” Ms. Künast said.

    Ms. Dornheim, the Berlin candidate, who has a master’s degree in computer science and used to work in the tech industry, said more restrictions were needed. She described getting her home address removed from public records after somebody mailed a package to her house during a particularly bad bout of online abuse. Yet, she said, the harassment has only steeled her resolve. “I would never give them the satisfaction of shutting up,” she said.