More stories

  • YouTube Restores Donald Trump’s Account Privileges

    The Google-owned video platform became the latest of the big social networks to reverse the former president’s account restrictions.

    YouTube suspended former President Donald J. Trump’s account on the platform six days after the Jan. 6 attack on the Capitol. The video platform said it was concerned that Mr. Trump’s lies about the 2020 election could lead to more real-world violence.

    YouTube, which is owned by Google, reversed that decision on Friday, permitting Mr. Trump to once again upload videos to the popular site. The move came after similar decisions by Twitter and Meta, which owns Facebook and Instagram.

    “We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election,” YouTube said on Twitter on Friday. Mr. Trump’s account will have to comply with the site’s content rules like any other account, YouTube added.

    After false claims that the 2020 presidential election was stolen circulated online and helped stoke the Jan. 6 attack, social media giants suspended Mr. Trump’s account privileges. Two years later, the platforms have started to soften their content rules. Under Elon Musk’s ownership, Twitter has unwound many of its content moderation efforts. YouTube recently laid off members of its trust and safety team, leaving one person in charge of setting political misinformation policies.

    Mr. Trump announced in November that he was seeking a second term as president, setting off deliberations at social media companies over whether to allow him back on their platforms. Days later, Mr. Musk polled Twitter users on whether he should reinstate Mr. Trump, and 52 percent of respondents said yes. Like YouTube, Meta said in January that it was important that people hear what political candidates are saying ahead of an election.

    The former president’s reinstatement is one of the first significant content decisions that YouTube has taken under its new chief executive, Neal Mohan, who got the top job last month. YouTube also recently loosened its profanity rules so that creators who used swear words at the start of a video could still make money from the content.

    YouTube’s announcement on Friday echoes a pattern of the company and its parent Google making polarizing content decisions after a competitor has already taken the same action. YouTube followed Meta and Twitter in suspending Mr. Trump after the Capitol attack, and in reversing the bans.

    Since losing his bid for re-election in 2020, Mr. Trump has sought to make a success of his own social media service, Truth Social, which is known for its loose content moderation rules.

    Mr. Trump on Friday posted on his Facebook page for the first time since his reinstatement. “I’M BACK!” Mr. Trump wrote, alongside a video in which he said, “Sorry to keep you waiting. Complicated business. Complicated.”

    Despite his Twitter reinstatement, Mr. Trump has not returned to posting from that account. In his last tweet, dated Jan. 8, 2021, he said he would not attend the coming inauguration, held at the Capitol.

  • Facebook Failed to Stop Ads Threatening Election Workers

    The ads, submitted by researchers, were rejected by YouTube and TikTok.

    Facebook says it does not allow content that threatens serious violence. But when researchers submitted ads threatening to “lynch,” “murder” and “execute” election workers around Election Day this year, the company’s largely automated moderation systems approved many of them.

    Out of the 20 ads submitted by researchers containing violent content, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. Researchers deleted the approved ads before they were published.

    Ten of the test ads were submitted in Spanish. Facebook approved six of those ads, compared with nine of the 10 ads in English.

    TikTok and YouTube rejected all the ads and suspended the accounts that attempted to submit them, the researchers said.

    The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections, especially as poll workers face additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had surged.

    In a statement, a spokesman for Meta, Facebook’s parent company, said it later disabled several accounts that had submitted the ads. The company also said that ads continue to be reviewed after they go live.

    “This is a small sample of ads that are not representative of what people see on our platforms,” the spokesman said. “Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

    The research adds to previous tests conducted by the same groups, including one this year in which 20 ads containing political misinformation were submitted. In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that tried to submit the ads.

    But this round used more overt language that tested Facebook’s largely automated moderating system. The ads included direct threats of violence, sourced from real statements by election deniers and other far-right extremists in the United States. One ad, which was approved by Facebook, read: “We are going to kill you all.” Another ad, which threatened to abuse children, was also approved.

    “It was really quite shocking to see the results,” said Damon McCoy, an associate professor at N.Y.U. “I thought a really simple keyword search would have flagged this for manual review.”

    In a statement, the researchers also said they wanted to see social networks like Facebook increase content moderation efforts and offer more transparency around the moderation actions they take.

    “The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible,” they wrote.

  • Ahead of Midterms, Disinformation Is Even More Intractable

    On the morning of July 8, former President Donald J. Trump took to Truth Social, a social media platform he founded with people close to him, to claim that he had in fact won the 2020 presidential vote in Wisconsin, despite all evidence to the contrary.

    Barely 8,000 people shared that missive on Truth Social, a far cry from the hundreds of thousands of responses his posts on Facebook and Twitter had regularly generated before those services suspended his megaphones after the deadly riot on Capitol Hill on Jan. 6, 2021.

    And yet Mr. Trump’s baseless claim pulsed through the public consciousness anyway. It jumped from his app to other social media platforms — not to mention podcasts, talk radio and television.

    Within 48 hours of Mr. Trump’s post, more than one million people saw his claim on at least a dozen other sites. It appeared on Facebook and Twitter, from which he has been banished, but also YouTube, Gab, Parler and Telegram, according to an analysis by The New York Times.

    The spread of Mr. Trump’s claim illustrates how, ahead of this year’s midterm elections, disinformation has metastasized since experts began raising alarms about the threat. Despite years of efforts by the media, by academics and even by social media companies themselves to address the problem, it is arguably more pervasive and widespread today.

    “I think the problem is worse than it’s ever been, frankly,” said Nina Jankowicz, an expert on disinformation who briefly led an advisory board within the Department of Homeland Security dedicated to combating misinformation. The creation of the panel set off a furor, prompting her to resign and the group to be dismantled.

    Not long ago, the fight against disinformation focused on the major social media platforms, like Facebook and Twitter. When pressed, they often removed troubling content, including misinformation and intentional disinformation about the Covid-19 pandemic.

    Today, however, there are dozens of new platforms, including some that pride themselves on not moderating — censoring, as they put it — untrue statements in the name of free speech.

    Other figures followed Mr. Trump in migrating to these new platforms after being “censored” by Facebook, YouTube or Twitter. They included Michael Flynn, the retired general who served briefly as Mr. Trump’s first national security adviser; L. Lin Wood, a pro-Trump lawyer; Naomi Wolf, a feminist author and vaccine skeptic; and assorted adherents of QAnon and the Oath Keepers, the far-right militia.

    At least 69 million people have joined platforms, like Parler, Gab, Truth Social, Gettr and Rumble, that advertise themselves as conservative alternatives to Big Tech, according to statements by the companies. Though many of those users are ostracized from larger platforms, they continue to spread their views, which often appear in screen shots posted on the sites that barred them.

    “Nothing on the internet exists in a silo,” said Jared Holt, a senior manager on hate and extremism research at the Institute for Strategic Dialogue. “Whatever happens in alt platforms like Gab or Telegram or Truth makes its way back to Facebook and Twitter and others.”

    [Photo: Users have migrated to apps like Truth Social after being “censored” by Facebook, YouTube or Twitter. Credit: Leon Neal/Getty Images]

    The diffusion of the people who spread disinformation has radicalized political discourse, said Nora Benavidez, senior counsel at Free Press, an advocacy group for digital rights and accountability.

    “Our language and our ecosystems are becoming more caustic online,” she said.

    The shifts in the disinformation landscape are becoming clear with the new cycle of American elections. In 2016, Russia’s covert campaign to spread false and divisive posts seemed like an aberration in the American political system. Today disinformation, from enemies foreign and domestic, has become a feature of it.

    The baseless idea that President Biden was not legitimately elected has gone mainstream among Republican Party members, driving state and county officials to impose new restrictions on casting ballots, often based on mere conspiracy theories percolating in right-wing media.

    Voters must now sift through not only an ever-growing torrent of lies and falsehoods about candidates and their policies, but also information on when and where to vote. Officials appointed or elected in the name of fighting voter fraud have put themselves in the position to refuse to certify outcomes that are not to their liking.

    The purveyors of disinformation have also become increasingly sophisticated at sidestepping the major platforms’ rules, while the use of video to spread false claims on YouTube, TikTok and Instagram has made them harder for automated systems to track than text.

    TikTok, which is owned by the Chinese tech giant ByteDance, has become a primary battleground in today’s fight against disinformation. A report last month by NewsGuard, an organization that tracks the problem online, showed that nearly 20 percent of videos presented as search results on TikTok contained false or misleading information on topics such as school shootings and Russia’s war in Ukraine.

    [Photo: Katie Harbath in Facebook’s “war room,” where election-related content was monitored on the platform, in 2018. Credit: Jeff Chiu/Associated Press]

    “People who do this know how to exploit the loopholes,” said Katie Harbath, a former director of public policy at Facebook who now leads Anchor Change, a strategic consultancy.

    With the midterm elections only weeks away, the major platforms have all pledged to block, label or marginalize anything that violates company policies, including disinformation, hate speech or calls to violence.

    Still, the cottage industry of experts dedicated to countering disinformation — think tanks, universities and nongovernment organizations — says the industry is not doing enough. The Stern Center for Business and Human Rights at New York University warned last month, for example, that the major platforms continued to amplify “election denialism” in ways that undermined trust in the democratic system.

    Another challenge is the proliferation of alternative platforms for those falsehoods and even more extreme views.

    Many of those new platforms have flourished in the wake of Mr. Trump’s defeat in 2020, though they have not yet reached the size or reach of Facebook and Twitter. They portray Big Tech as beholden to the government, the deep state or the liberal elite.

    Parler, a social network founded in 2018, was one of the fastest-growing sites — until Apple’s and Google’s app stores kicked it off after the deadly riot on Jan. 6, which was fueled by disinformation and calls for violence online. It has since returned to both stores and begun to rebuild its audience by appealing to those who feel their voices have been silenced.

    “We believe at Parler that it is up to the individual to decide what he or she thinks is the truth,” Amy Peikoff, the platform’s chief policy officer, said in an interview.

    She argued that the problem with disinformation or conspiracy theories stemmed from the algorithms that platforms use to keep people glued online — not from the unfettered debate that sites like Parler foster.

    On Monday, Parler announced that Kanye West had agreed in principle to purchase the platform, a deal that the rapper and fashion designer, now known as Ye, cast in political terms.

    “In a world where conservative opinions are considered to be controversial, we have to make sure we have the right to freely express ourselves,” he said, according to the company’s statement.

    Parler’s competitors now are BitChute, Gab, Gettr, Rumble, Telegram and Truth Social, with each offering itself as a sanctuary from the moderating policies of the major platforms on everything from politics to health policy.

    A new survey by the Pew Research Center found that 15 percent of prominent accounts on those seven platforms had previously been banished from others like Twitter and Facebook.

    [Photo: Apps like Gettr market themselves as alternatives to Big Tech. Credit: Elijah Nouvelage/Getty Images]

    Nearly two-thirds of the users of those platforms said they had found a community of people who share their views, according to the survey. A majority are Republicans or lean Republican.

    A result of this atomization of social media sources is to reinforce the partisan information bubbles within which millions of Americans live.

    At least 6 percent of Americans now regularly get news from at least one of these relatively new sites, which often “highlight non-mainstream world views and sometimes offensive language,” according to Pew. One in 10 posts on these platforms that mentioned L.G.B.T.Q. issues involved derisive allegations, the survey found.

    These new sites are still marginal compared with the bigger platforms; Mr. Trump, for example, has four million followers on Truth Social, compared with 88 million when Twitter kicked him off in 2021.

    Even so, Mr. Trump has increasingly resumed posting with the vigor he once showed on Twitter. The F.B.I. raid on Mar-a-Lago thrust his latest pronouncements into the eye of the political storm once again.

    For the major platforms, the financial incentive to attract users — and their clicks — remains powerful and could undo the steps they took in 2021. There is also an ideological component. The emotionally laced appeal to individual liberty in part drove Elon Musk’s bid to buy Twitter, which appears to have been revived after months of legal maneuvering.

    Nick Clegg, the president of global affairs at Meta, Facebook’s parent company, even suggested recently that the platform might reinstate Mr. Trump’s account in 2023 — ahead of what could be another presidential run. Facebook had previously said it would do so only “if the risk to public safety has receded.”

    [Photo: Nick Clegg, Meta’s president for global affairs. Credit: Patrick T. Fallon/Agence France-Presse — Getty Images]

    A study of Truth Social by Media Matters for America, a left-leaning media monitoring group, examined how the platform had become a home for some of the most fringe conspiracy theories. Mr. Trump, who began posting on the platform in April, has increasingly amplified content from QAnon, the online conspiracy theory.

    He has shared posts from QAnon accounts more than 130 times. QAnon believers promote a vast and complex falsehood that centers on Mr. Trump as a leader battling a cabal of Democratic Party pedophiles. Echoes of such views reverberated through Republican election campaigns across the country during this year’s primaries.

    Ms. Jankowicz, the disinformation expert, said the nation’s social and political divisions had churned the waves of disinformation.

    The controversies over how best to respond to the Covid-19 pandemic deepened distrust of government and medical experts, especially among conservatives. Mr. Trump’s refusal to accept the outcome of the 2020 election led to, but did not end with, the Capitol Hill violence.

    “They should have brought us together,” Ms. Jankowicz said, referring to the pandemic and the riots. “I thought perhaps they could be kind of this convening power, but they were not.”

  • Social Media Companies Still Boost Election Fraud Claims, Report Says

    The major social media companies all say they are ready to deal with a torrent of misinformation surrounding the midterm elections in November. A report released on Monday, however, claimed that they continued to undermine the integrity of the vote by allowing election-related conspiracy theories to fester and spread.

    In the report, the Stern Center for Business and Human Rights at New York University said the social media companies still host and amplify “election denialism,” threatening to further erode confidence in the democratic process.

    The companies, the report argued, bear a responsibility for the false but widespread belief among conservatives that the 2020 election was fraudulent — and that the coming midterms could be, too. The report joins a chorus of warnings from officials and experts that the results in November could be fiercely, even violently, contested.

    “The malady of election denialism in the U.S. has become one of the most dangerous byproducts of social media,” the report warned, “and it is past time for the industry to do more to address it.”

    The major platforms — Facebook, Twitter, TikTok and YouTube — have all announced promises or initiatives to combat disinformation ahead of the 2022 midterms, saying they were committed to protecting the election process. But the report said those measures were ineffective, haphazardly enforced or simply too limited.

    Facebook, for example, announced that it would ban ads that called into question the legitimacy of the coming elections, but it exempted politicians from its fact-checking program. That, the report says, allows candidates and other influential leaders to undermine confidence in the vote by questioning ballot procedures or other rules.

    In the case of Twitter, an internal report released as part of a whistle-blower’s complaint from a former head of security, Peiter Zatko, disclosed that the company’s site integrity team had only two experts on misinformation.

    The New York University report, which incorporated responses from all the companies except YouTube, called for greater transparency in how companies rank, recommend and remove content. It also said they should enhance fact-checking efforts and remove provably untrue claims, and not simply label them false or questionable.

    A spokeswoman for Twitter, Elizabeth Busby, said the company was undertaking a multifaceted approach to ensuring reliable information about elections. That includes efforts to “pre-bunk” false information and to “reduce the visibility of potentially misleading claims via labels.”

    In a statement, YouTube said it agreed with “many of the points” made in the report and had already carried out many of its recommendations.

    “We’ve already removed a number of videos related to the midterms for violating our policies,” the statement said, “and the most viewed and recommended videos and channels related to the election are from authoritative sources, including news channels.”

    TikTok did not respond to a request for comment.

    There are already signs that the integrity of the vote in November will be as contentious as it was in 2020, when President Donald J. Trump and some of his supporters refused to accept the outcome, falsely claiming widespread fraud.

    Inattention by social media companies in the interim has allowed what the report describes as a coordinated campaign to take root among conservatives claiming, again without evidence, that wholesale election fraud is bent on tipping elections to Democrats.

    “Election denialism,” the report said, “was evolving in 2021 from an obsession with the former president’s inability to accept defeat into a broader, if equally baseless, attack on the patriotism of all Democrats, as well as non-Trump-loving Republicans, and legions of election administrators, many of them career government employees.”

  • Political Campaigns Flood Streaming Video With Custom Voter Ads

    The targeted political ads could spread some of the same voter-influence techniques that proliferated on Facebook to an even less regulated medium.

    Over the last few weeks, tens of thousands of voters in the Detroit area who watch streaming video services were shown different local campaign ads pegged to their political leanings.

    Digital consultants working for Representative Darrin Camilleri, a Democrat in the Michigan House who is running for State Senate, targeted 62,402 moderate, female — and likely pro-choice — voters with an ad promoting reproductive rights.

    The campaign also ran a more general video ad for Mr. Camilleri, a former public-school teacher, directed at 77,836 Democrats and Independents who have voted in past midterm elections. Viewers in Mr. Camilleri’s target audience saw the messages while watching shows on Lifetime, Vice and other channels on ad-supported streaming services like Samsung TV Plus and LG Channels.

    Although millions of American voters may not be aware of it, the powerful data-mining techniques that campaigns routinely use to tailor political ads to consumers on sites and apps are making the leap to streaming video. The targeting has become so precise that next-door neighbors streaming the same true crime show on the same streaming service may now be shown different political ads — based on data about their voting record, party affiliation, age, gender, race or ethnicity, estimated home value, shopping habits or views on gun control.

    Political consultants say the ability to tailor streaming video ads to small swaths of viewers could be crucial this November for candidates like Mr. Camilleri who are facing tight races. In 2016, Mr. Camilleri won his first state election by just several hundred votes.

    “Very few voters wind up determining the outcomes of close elections,” said Ryan Irvin, the co-founder of Change Media Group, the agency behind Mr. Camilleri’s ad campaign. “Very early in an election cycle, we can pull from the voter database a list of those 10,000 voters, match them on various platforms and run streaming TV ads to just those 10,000 people.”

    [Photo: Representative Darrin Camilleri, a member of the Michigan House who is running for State Senate, targeted local voters with streaming video ads before he campaigned in their neighborhoods. Credit: Emily Elconin for The New York Times]

    Targeted political ads on streaming platforms — video services delivered via internet-connected devices like TVs and tablets — seemed like a niche phenomenon during the 2020 presidential election. Two years later, streaming has become the most highly viewed TV medium in the United States, according to Nielsen.

    Savvy candidates and advocacy groups are flooding streaming services with ads in an effort to reach cord-cutters and “cord nevers,” people who have never watched traditional cable or broadcast TV.

    The trend is growing so fast that political ads on streaming services are expected to generate $1.44 billion — or about 15 percent — of the projected $9.7 billion in ad spending for the 2022 election cycle, according to a report from AdImpact, an ad tracking company. That would for the first time put streaming on par with political ad spending on Facebook and Google.

    The quick proliferation of streaming political messages has prompted some lawmakers and researchers to warn that the ads are outstripping federal regulation and oversight.

    For example, while political ads running on broadcast and cable TV must disclose their sponsors, federal rules on political ad transparency do not specifically address streaming video services. Unlike broadcast TV stations, streaming platforms are also not required to maintain public files about the political ads they sold.

    The result, experts say, is an unregulated ecosystem in which streaming services take wildly different approaches to political ads.

    “There are no rules over there, whereas, if you are a broadcaster or a cable operator, you definitely have rules you have to operate by,” said Steve Passwaiter, a vice president at Kantar Media, a company that tracks political advertising.

    The boom in streaming ads underscores a significant shift in the way that candidates, party committees and issue groups may target voters. For decades, political campaigns have blanketed local broadcast markets with candidate ads or tailored ads to the slant of cable news channels. With such bulk media buying, viewers watching the same show at the same time as their neighbors saw the same political messages.

    But now campaigns are employing advanced consumer-profiling and automated ad-buying services to deliver different streaming video messages, tailored to specific voters.

    “In the digital ad world, you’re buying the person, not the content,” said Mike Reilly, a partner at MVAR Media, a progressive political consultancy that creates ad campaigns for candidates and advocacy groups.

    Targeted political ads are being run on a slew of different ad-supported streaming channels. Some smart TV manufacturers air the political ads on proprietary streaming platforms, like Samsung TV Plus and LG Channels. Viewers watching ad-supported streaming channels via devices like Roku may also see targeted political ads.

    Policies on political ad targeting vary. Amazon prohibits political party and candidate ads on its streaming services. YouTube TV and Hulu allow political candidates to target ads based on viewers’ ZIP code, age and gender, but they prohibit political ad targeting by voting history or party affiliation.

    Roku, which maintains a public archive of some political ads running on its platform, declined to comment on its ad-targeting practices.

    Samsung and LG, which has publicly promoted its voter-targeting services for political campaigns, did not respond to requests for comment. Netflix declined to comment about its plans for an ad-supported streaming service.

    Targeting political ads on streaming services can involve more invasive data-mining than the consumer-tracking techniques typically used to show people online ads for sneakers.

    Political consulting firms can buy profiles on more than 200 million voters, including details on an individual’s party affiliation, voting record, political leanings, education level, income and consumer habits. Campaigns may employ that data to identify voters concerned about a specific issue — like guns or abortion — and hone video messages to them.

    In addition, internet-connected TV platforms like Samsung, LG and Roku often use data-mining technology, called “automated content recognition,” to analyze snippets of the videos people watch and segment viewers for advertising purposes.

    Some streaming services and ad tech firms allow political campaigns to provide lists of specific voters to whom they wish to show ads.

    To serve those messages, ad tech firms employ precise delivery techniques — like using IP addresses to identify devices in a voter’s household. The device mapping allows political campaigns to aim ads at certain voters whether they are streaming on internet-connected TVs, tablets, laptops or smartphones.

    [Photo: Sten McGuire, an executive at a4 Advertising, presented a webinar in March announcing a partnership to sell political ads on LG channels. Credit: New York Times]

    Using IP addresses, “we can intercept voters across the nation,” Sten McGuire, an executive at a4 Advertising, said in a webinar in March announcing a partnership to sell political ads on LG channels. His company’s ad targeting worked, Mr. McGuire added, “whether you are looking to reach new cord cutters or ‘cord nevers’ streaming their favorite content, targeting Spanish-speaking voters in swing states, reaching opinion elites and policy influencers or members of Congress and their staff.”

    Some researchers caution that targeted video ads could spread some of the same voter-influence techniques that have proliferated on Facebook to a new, and even less regulated, medium.

    Facebook and Google, the researchers note, instituted some restrictions on political ad targeting after Russian operatives used digital platforms to try to disrupt the 2016 presidential election. With such restrictions in place, political advertisers on Facebook, for instance, should no longer be able to target users interested in Malcolm X or Martin Luther King with paid messages urging them not to vote.

    Facebook and Google have also created public databases that enable people to view political ads running on the platforms.

    But many streaming services lack such targeting restrictions and transparency measures. The result, these experts say, is an opaque system of political influence that runs counter to basic democratic principles.

    “This occupies a gray area that’s not getting as much scrutiny as ads running on social media,” said Becca Ricks, a senior researcher at the Mozilla Foundation who has studied the political ad policies of popular streaming services. “It creates an unfair playing field where you can precisely target, and change, your messaging based on the audience — and do all of this without some level of transparency.”

    Some political ad buyers are shying away from more restricted online platforms in favor of more permissive streaming services.

    “Among our clients, the percentage of budget going to social channels, and on Facebook and Google in particular, has been declining,” said Grace Briscoe, an executive overseeing candidate and political issue advertising at Basis Technologies, an ad tech firm. “The kinds of limitations and restrictions that those platforms have put on political ads has disinclined clients to invest as heavily there.”

    [Photo: Senators Amy Klobuchar and Mark Warner introduced the Honest Ads Act, which would require online political ads to include disclosures similar to those on broadcast TV ads. Credit: Al Drago for The New York Times]

    Members of Congress have introduced a number of bills that would curb voter targeting or require digital ads to adhere to the same rules as broadcast ads. But the measures have not yet been enacted.

    Amid widespread covertness in the ad-targeting industry, Mr. Camilleri, the member of the Michigan House running for State Senate, was unusually forthcoming about how he was using streaming services to try to engage specific swaths of voters.

    In prior elections, he said, he sent postcards introducing himself to voters in neighborhoods where he planned to make campaign stops. During this year’s primaries, he updated the practice by running streaming ads introducing himself to certain households a week or two before he planned to knock on their doors.

    “It’s been working incredibly well because a lot of people will say, ‘Oh, I’ve seen you on TV,’” Mr. Camilleri said, noting that many of his constituents did not appear to understand the ads were shown specifically to them and not to a general broadcast TV audience. “They don’t differentiate” between TV and streaming, he added, “because you’re watching YouTube on your television now.”

  • To Fight Election Falsehoods, Social Media Companies Ready a Familiar Playbook

    The election dashboards are back online, the fact-checking teams have reassembled, and warnings about misleading content are cluttering news feeds once again.

    As the United States marches toward another election season, social media companies are steeling themselves for a deluge of political misinformation. Those companies, including TikTok and Facebook, are trumpeting a series of election tools and strategies that look similar to their approaches in previous years.

    Disinformation watchdogs warn that while many of these programs are useful — especially efforts to push credible information in multiple languages — the tactics proved insufficient in previous years and may not be enough to combat the wave of falsehoods pushed this election season.

    Here are the anti-misinformation plans for Facebook, TikTok, Twitter and YouTube.

    Facebook

    Facebook’s approach this year will be “largely consistent with the policies and safeguards” from 2020, Nick Clegg, president of global affairs for Meta, Facebook’s parent company, wrote in a blog post last week.

    Posts rated false or partly false by one of Facebook’s 10 American fact-checking partners will get one of several warning labels, which can force users to click past a banner reading “false information” before they can see the content. In a change from 2020, those labels will be used in a more “targeted and strategic way” for posts discussing the integrity of the midterm elections, Mr. Clegg wrote, after users complained that they were “over-used.”

    [Photo: Warning labels prevent users from immediately seeing or sharing false content. Credit: Provided by Facebook]

    Facebook will also expand its efforts to address harassment and threats aimed at election officials and poll workers. Misinformation researchers said the company has taken greater interest in moderating content that could lead to real-world violence after the Jan. 6 attack on the U.S. Capitol.

    Facebook greatly expanded its election team after the 2016 election, to more than 300 people. Mark Zuckerberg, Facebook’s chief executive, took a personal interest in safeguarding elections.

    But Meta has changed its focus since the 2020 election. Mr. Zuckerberg is now focused instead on building the metaverse and tackling stiff competition from TikTok. The company has dispersed its election team and signaled that it could shut down CrowdTangle, a tool that helps track misinformation on Facebook, some time after the midterms.

    “I think they’ve just come to the conclusion that this is not really a problem that they can tackle at this point,” said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit focused on technology and democracy.

    In a statement, a spokesman from Meta said its elections team was absorbed into other parts of the company and that more than 40 teams are now focused on the midterms.

    TikTok

    In a blog post announcing its midterm plans, Eric Han, the head of U.S. safety, said the company would continue its fact-checking program from 2020, which prevents some videos from being recommended until they are verified by outside fact checkers. It also introduced an election information portal, which provides voter information like how to register, six weeks earlier than it did in 2020.

    Even so, there are already clear signs that misinformation has thrived on the platform throughout the primaries.

    “TikTok is going to be a massive vector for disinformation this cycle,” Mr. Lehrich said, adding that the platform’s short video and audio clips are harder to moderate, enabling “massive amounts of disinformation to go undetected and spread virally.”

    TikTok said its moderation efforts would focus on stopping creators who are paid for posting political content in violation of the company’s rules. TikTok has never allowed paid political posts or political advertising. But the company said that some users were circumventing or ignoring those policies during the 2020 election. A representative from the company said TikTok would start approaching talent management agencies directly to outline its rules.

    Disinformation watchdogs have criticized the company for a lack of transparency over the origins of its videos and the effectiveness of its moderation practices. Experts have called for more tools to analyze the platform and its content — the kind of access that other companies provide.

    “The consensus is that it’s a five-alarm fire,” said Zeve Sanderson, the founding executive director at New York University’s Center for Social Media and Politics. “We don’t have a good understanding of what’s going on there,” he added.

    Last month, Vanessa Pappas, TikTok’s chief operating officer, said the company would begin sharing some data with “selected researchers” this year.

    Twitter

    In a blog post outlining its plans for the midterm elections, the company said it would reactivate its Civic Integrity Policy — a set of rules adopted in 2018 that the company uses ahead of elections around the world. Under the policy, warning labels, similar to those used by Facebook, will once again be added to false or misleading tweets about elections, voting or election integrity, often pointing users to accurate information or additional context. Tweets that receive the labels are not recommended or distributed by the company’s algorithms. The company can also remove false or misleading tweets entirely.

    Those labels were redesigned last year, resulting in 17 percent more clicks for additional information, the company said. Interactions, like replies and retweets, fell on tweets that used the modified labels.

    [Photo: In Twitter’s tests, the redesigned warning labels increased click-through rates for additional context by 17 percent. Credit: Provided by Twitter]

    The strategy reflects Twitter’s attempts to limit false content without always resorting to removing tweets and banning users.

    The approach may help the company navigate difficult freedom of speech issues, which have dogged social media companies as they try to limit the spread of misinformation. Elon Musk, the Tesla executive, made freedom of speech a central criticism during his attempts to buy the company earlier this year.

    YouTube

    Unlike the other major online platforms, YouTube has not released its own election misinformation plan for 2022 and has typically stayed quiet about its election misinformation strategy.

    “YouTube is nowhere to be found still,” Mr. Sanderson said. “That sort of aligns with their general P.R. strategy, which just seems to be: Don’t say anything and no one will notice.”

    Google, YouTube’s parent company, published a blog post in March emphasizing its efforts to surface authoritative content through the streamer’s recommendation engine and remove videos that mislead voters. In another post aimed at creators, Google details how channels can receive “strikes” for sharing certain kinds of misinformation; after three strikes within a 90-day period, the channel is terminated.

    The video streaming giant has played a major role in distributing political misinformation, giving an early home to conspiracy theorists like Alex Jones, who was later banned from the site. It has taken a stronger stance against medical misinformation, stating last September that it would remove all videos and accounts sharing vaccine misinformation. The company ultimately banned some prominent conservative personalities.

    More than 80 fact checkers at independent organizations around the world signed a letter in January warning YouTube that its platform is being “weaponized” to promote voter fraud conspiracy theories and other election misinformation.

    In a statement, Ivy Choi, a YouTube spokeswoman, said its election team had been meeting for months to prepare for the midterms and added that its recommendation engine is “continuously and prominently surfacing midterms-related content from authoritative news sources and limiting the spread of harmful midterms-related misinformation.”

  • in

    Russian National Charged With Spreading Propaganda Through U.S. Groups

Federal authorities say the man recruited several American political groups and used them to sow discord and interfere with elections.

MIAMI — The Russian man with a trim beard and patterned T-shirt appeared in a Florida political group's YouTube livestream in March, less than three weeks after his country had invaded Ukraine, and falsely claimed that what had happened was not an invasion.

"I would like to address the free people around the world to tell you that Western propaganda is lying when they say that Russia invaded Ukraine," he said through an interpreter.

His name was Aleksandr Viktorovich Ionov, and he described himself as a "human rights activist."

But federal authorities say he was working for the Russian government, orchestrating a yearslong influence campaign to use American political groups to spread Russian propaganda and interfere with U.S. elections. On Friday, the Justice Department revealed that it had charged Mr. Ionov with conspiring to have American citizens act as illegal agents of the Russian government.

Mr. Ionov, 32, who lives in Moscow and is not in custody, is accused of recruiting three political groups in Florida, Georgia and California from December 2014 through March, providing them with financial support and directing them to publish Russian propaganda. On Friday, the Treasury Department imposed sanctions against him.

David Walker, the top agent in the F.B.I.'s Tampa field office, called the allegations "some of the most egregious and blatant violations we've seen by the Russian government in order to destabilize and undermine trust in American democracy."

In 2017 and 2019, Mr. Ionov supported the campaigns of two candidates for local office in St. Petersburg, Fla., where one of the American political groups was based, according to a 24-page indictment. He wrote to a Russian official in 2019 that he had been "consulting every week" on one of the campaigns, the indictment said.

"Our election campaign is kind of unique," a Russian intelligence officer wrote to Mr. Ionov, adding, "Are we the first in history?" Mr. Ionov later referred to the candidate, who was not named in the indictment, as the one "whom we supervise."

In 2016, according to the indictment, Mr. Ionov paid for the St. Petersburg group to conduct a four-city protest tour supporting a "Petition on Crime of Genocide Against African People in the United States," which the group had previously submitted to the United Nations at his direction.

"The goal is to heighten grievances," Peter Strzok, a former top F.B.I. counterintelligence official, said of the sort of behavior Mr. Ionov is accused of carrying out. "They just want to fund opposing forces. It's a means to encourage social division at a low cost. The goal is to create strife and division."

Members of the Uhuru Movement spoke to reporters in Florida on Friday. (Martha Asencio-Rhine/Tampa Bay Times, via Associated Press)

The Russian government has a long history of trying to sow division in the U.S., in particular during the 2016 presidential campaign. Mr. Strzok said the Russians were known to plant stories with fringe groups in an effort to introduce disinformation into the media ecosystem.

Federal investigators described Mr. Ionov as the founder and president of the Anti-Globalization Movement of Russia and said it was funded by the Russian government. They said he worked with at least three Russian officials and in conjunction with the F.S.B., a Russian intelligence agency.
The indictment issued on Friday did not name the U.S. political groups, their leaders or the St. Petersburg candidates, who were identified only as Unindicted Co-conspirator 3 and Unindicted Co-conspirator 4. And Mr. Ionov is the only person who has been charged in the case.

But leaders of the Uhuru Movement, which is based in St. Petersburg and is part of the African People's Socialist Party, said that their office and their chairman's home had been raided by federal agents on Friday morning as part of the investigation.

"They handcuffed me and my wife," the chairman, Omali Yeshitela, said on Facebook Live from outside the group's new headquarters in St. Louis. He said he did not take Russian government money but would not be "morally opposed" to accepting funds from Russians or "anyone else who wants to support the struggles for Black people."

The indictment said that Mr. Ionov paid for the founder and chairman of the St. Petersburg group — identified as Unindicted Co-conspirator 1 — to travel to Moscow in 2015. Upon his return, the indictment said, the chairman wrote in emails to other group leaders that Mr. Ionov wanted the group to be "an instrument" of the Russian government, which did not "disturb us."

"Yes, I have been to Russia," Mr. Yeshitela said in his Facebook Live appearance on Friday, without addressing when he went or who paid for his trip. He added that he had also been to other countries, including South Africa and Nicaragua.

In St. Petersburg, Akilé Anai of the Uhuru Movement said at a news conference that federal authorities had seized her car and other personal property.

She called the investigation an attack on the Uhuru Movement, which has long been a presence in St. Petersburg but has had little success in local politics.

"We can have relationships with whoever we want to," she said, adding that the Uhuru Movement had made no secret of backing Russia in the war in Ukraine. "We are in support of Russia."

Ms. Anai ran for the City Council in 2017 and 2019 as Eritha "Akilé" Cainion. She received about 18 percent of the vote in the 2019 runoff election.

Mr. Ionov is also accused of directing an unidentified political group in Sacramento that pushed for California's secession from the United States. The indictment said that he helped fund a 2018 protest in the State Capitol and encouraged the group's leader to try to get into the governor's office.

And Mr. Ionov is accused of directing an unidentified political group in Atlanta, paying for its members to travel to San Francisco this year to protest at the headquarters of a social media company that had restricted pro-Russian posts about the invasion of Ukraine. Mr. Ionov even provided designs for protest signs, according to the indictment.

After Russia invaded Ukraine in February, the indictment said, Mr. Ionov told his Russian intelligence associates that he had asked the St. Petersburg group to support Russia in the "information war unleashed" by the West.

Adam Goldman