More stories

  • The Rich Are Crazier Than You and Me

Robert F. Kennedy Jr. is raving. His positions are a mix of right-wing fantasies and remnants of the progressive he once was: bitcoin worship, anti-vaccine conspiracy theories, claims that Prozac causes mass shootings, opposition to U.S. support for Ukraine, although he also speaks well of single-payer health insurance. If it weren't for his last name, nobody would pay him any attention, and despite that last name he has zero chance of winning the Democratic presidential nomination.

Yet now that Ron DeSantis's campaign (with its slogan: "Woke, immigrants, woke, woke") appears to be sputtering, Kennedy is suddenly drawing support from some of the biggest names in Silicon Valley. Jack Dorsey, the founder of Twitter, has endorsed him, while other prominent tech figures have organized fund-raisers on his behalf. Elon Musk, who is in the process of destroying what Dorsey built, hosted him in a Twitter Spaces event.

But what does all this tell us about the role of tech billionaires in modern American political life? I recently wrote about a number of tech bros who have become recession and inflation truthers, insisting that news of an improving economy is fake (I forgot to mention Dorsey's 2021 declaration that hyperinflation was "happening"; how's that going?). What Kennedy's little Silicon Valley boomlet shows is that this is actually part of a broader phenomenon.

What seems to attract some tech moguls to RFK Jr. is his contrarianism: his disdain for conventional wisdom and expert opinion. So before getting into the specifics of the tech guys of this strange political moment, let me say a few things about contrarianism.

A sad but true fact of life is that, most of the time, conventional wisdom and expert opinion are right; finding the points where they are wrong, however, can bring big personal and social rewards. The trick to pulling this off is to strike a balance between excessive skepticism and excessive credulity.

It's very easy to fall off that knife's edge in either direction. When I was a young, ambitious academic, I used to laugh at boring older economists whose reaction to any new idea was: "It's trivial, it's wrong, and I said it in 1962." These days, I sometimes worry that I've become that guy.

On the other hand, as the economist Adam Ozimek puts it, reflexive contrarianism is a "brain-rotting drug." Those who succumb to it "lose the ability to judge others they consider contrarians, become incapable of distinguishing good evidence from bad, which produces a total detachment from belief that leads them to latch onto low-quality contrarian fads."

Tech guys seem especially susceptible to contrarian brain rot. Their financial success often convinces them that they are uniquely brilliant, able to master any subject instantly, with no need to consult people who have actually worked hard to understand the issues. And in many cases they got rich by defying conventional wisdom, which predisposes them to believe that such defiance is justified wherever they look.

Add to this the fact that great wealth makes it all too easy to surround yourself with people who tell you what you want to hear and validate your belief in your own brilliance, a sort of intellectual version of the emperor's new clothes.

And when contrarian tech guys talk, it's to one another. The tech entrepreneur and writer Anil Dash tells us that "it's impossible to overstate the degree to which many big tech C.E.O.s and venture capitalists are being radicalized by living within their own cultural and social bubble." He calls this phenomenon "VC QAnon," a concept I find helpful in explaining many of the strange positions tech billionaires have taken lately.

Let me add a personal speculation. It might seem odd to see men of immense wealth and influence buying into conspiracy theories about elites running the world. Aren't they the elite? But I suspect that rich, famous men can feel especially frustrated by their inability to control events, or even to stop people from ridiculing them on the internet. So rather than accept that the world is a complicated place nobody can control, they are susceptible to the idea that there are secret conspiracies targeting them.

There is historical precedent here. Watching Elon Musk's descent, I know I'm not the only one thinking of Henry Ford, who remains in many ways the definitive example of a celebrated, influential businessman, and who also became a raving, antisemitic conspiracy theorist. He even paid for the reprinting of The Protocols of the Elders of Zion, a forgery probably promoted by the Russian secret police (time is a flat circle).

In any case, what we're seeing now is something extraordinary. Arguably, the craziest faction in American politics right now isn't red-hatted blue-collar workers in diners; it's tech billionaires who live in huge mansions and fly on private jets. In a way, it's quite funny. But unfortunately, these people have enough money to do a lot of damage.

Paul Krugman has been an Opinion columnist since 2000 and is also a distinguished professor at the City University of New York Graduate Center. He won the 2008 Nobel Memorial Prize in Economic Sciences for his work on international trade and economic geography. @PaulKrugman

  • A.I.’s Use in Elections Sets Off a Scramble for Guardrails

Gaps in campaign rules allow politicians to spread images and messaging generated by increasingly powerful artificial intelligence technology.

In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park.

In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop.

In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality.

What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Increasingly, political consultants, election researchers and lawmakers say setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect A.I. content, have failed to do much to slow the tide.

As the 2024 U.S. presidential race starts to heat up, some of the campaigns are already testing the technology. The Republican National Committee released a video with artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official.

The Democratic Party experimented with fund-raising messages drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.

Some politicians see artificial intelligence as a way to help reduce campaign costs, by using it to create instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts.

At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of false narratives churned out by computer or a fabricated image of urban decay can reinforce prejudices and widen the partisan divide by showing voters what they expect to see, experts say.

The technology is already far more powerful than manual manipulation — not perfect, but fast improving and easy to learn. In May, the chief executive of OpenAI, Sam Altman, whose company helped kick off an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee that he was nervous about election season.

He said the technology’s ability “to manipulate, to persuade, to provide sort of one-on-one interactive disinformation” was “a significant area of concern.”

Representative Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle “is poised to be the first election where A.I.-generated content is prevalent.” She and other congressional Democrats, including Senator Amy Klobuchar of Minnesota, have introduced legislation that would require political ads that used artificially generated material to carry a disclaimer. A similar bill in Washington State was recently signed into law.

The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code.

“People are going to be tempted to push the envelope and see where they can take things,” said Larry Huynh, the group’s incoming president. “As with any tool, there can be bad uses and bad actions using them to lie to voters, to mislead voters, to create a belief in something that doesn’t exist.”

The technology’s recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.

A conservative candidate in the race, Anthony Furey, a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content to help make his tough-on-crime case.

A closer look clearly showed that many of the images were not real: One laboratory scene featured scientists who looked like alien blobs. A woman in another rendering wore a pin on her cardigan with illegible lettering; similar markings appeared in an image of caution tape at a construction site. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.

Anthony Furey, a candidate in Toronto’s mayoral election on Monday, used an A.I. image of a woman with three arms.

The other candidates mined that image for laughs in a debate this month: “We’re actually using real pictures,” said Josh Matlow, who showed a photo of his family and added that “no one in our pictures have three arms.”

Still, the sloppy renderings were used to amplify Mr. Furey’s argument. He gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. In the same debate, he acknowledged using the technology in his campaign, adding that “we’re going to have a couple of laughs here as we proceed with learning more about A.I.”

Political experts worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey’s rivals said in a debate that while members of her staff used ChatGPT, they always fact-checked its output.

“If someone can create noise, build uncertainty or develop false narratives, that could be an effective way to sway voters and win the race,” Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can nudge people in one direction or another could end up being decisive.”

Increasingly sophisticated A.I. content is appearing more frequently on social networks that have been largely unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, a company that offers services to detect A.I. content. The feeble oversight allows unlabeled synthetic content to do “irreversible damage” before it is addressed, he said.

“Explaining to millions of users that the content they already saw and shared was fake, well after the fact, is too little, too late,” Mr. Colman said.

For several days this month, a Twitch livestream has run a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as simulated “A.I. entities,” but if an organized political campaign created such content and it spread widely without any disclosure, it could easily degrade the value of real material, disinformation experts said.

Politicians could shrug off accountability and claim that authentic footage of compromising actions was not real, a phenomenon known as the liar’s dividend.

Ordinary citizens could make their own fakes, while others could entrench themselves more deeply in polarized information bubbles, believing only the sources they chose to believe.

“If people can’t trust their eyes and ears, they may just say, ‘Who knows?’” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could foster a move from healthy skepticism that encourages good habits (like lateral reading and searching for reliable sources) to an unhealthy skepticism that it is impossible to know what is true.”

  • G.O.P. Targets Researchers Who Study Disinformation Ahead of 2024 Election

A legal campaign against universities and think tanks seeks to undermine the fight against false claims about elections, vaccines and other hot political topics.

On Capitol Hill and in the courts, Republican lawmakers and activists are mounting a sweeping legal campaign against universities, think tanks and private companies that study the spread of disinformation, accusing them of colluding with the government to suppress conservative speech online.

The effort has encumbered its targets with expansive requests for information and, in some cases, subpoenas — demanding notes, emails and other information related to social media companies and the government dating back to 2015. Complying has consumed time and resources and already affected the groups’ ability to do research and raise money, according to several people involved.

They and others warned that the campaign undermined the fight against disinformation in American society at a time when the problem is, by most accounts, on the rise — and when another presidential election is around the corner. Many of those behind the Republican effort had also joined former President Donald J. Trump in falsely challenging the outcome of the 2020 presidential election.

“I think it’s quite obviously a cynical — and I would say wildly partisan — attempt to chill research,” said Jameel Jaffer, the executive director of Columbia University’s Knight First Amendment Institute, an organization that works to safeguard freedom of speech and the press.

The House Judiciary Committee, which came under Republican majority control in January, has sent scores of letters and subpoenas to the researchers — only some of which have been made public. It has threatened legal action against those who have not responded quickly or fully enough.

A conservative advocacy group led by Stephen Miller, the former adviser to Mr. Trump, filed a class-action lawsuit last month in U.S. District Court in Louisiana that echoes many of the committee’s accusations and focuses on some of the same defendants.

Targets include Stanford, Clemson and New York Universities and the University of Washington; the Atlantic Council, the German Marshall Fund and the National Conference on Citizenship, all nonpartisan, nongovernmental organizations in Washington; the Wikimedia Foundation in San Francisco; and Graphika, a company that researches disinformation online.

In a related line of inquiry, the committee has also issued a subpoena to the World Federation of Advertisers, a trade association, and the Global Alliance for Responsible Media it created. The committee’s Republican leaders have accused the groups of violating antitrust laws by conspiring to cut off advertising revenue for content that researchers and tech companies found to be harmful.

A House subcommittee was created to scrutinize what Republicans have charged is a government effort to silence conservatives. Kenny Holston/The New York Times

The committee’s chairman, Representative Jim Jordan of Ohio, a close ally of Mr. Trump, has accused the organizations of “censorship of disfavored speech” involving issues that have galvanized the Republican Party: the policies around the Covid-19 pandemic and the integrity of the American political system, including the outcome of the 2020 election.

Much of the disinformation surrounding both issues has come from the right. Many Republicans are convinced that researchers who study disinformation have pressed social media platforms to discriminate against conservative voices.

Those complaints have been fueled by Twitter’s decision under its new owner, Elon Musk, to release selected internal communications between government officials and Twitter employees. The communications show government officials urging Twitter to take action against accounts spreading disinformation but stopping short of ordering them to do so, as some critics claimed.

Patrick L. Warren, an associate professor at Clemson University, said researchers at the school had provided documents to the committee and given some staff members a short presentation. “I think most of this has been spurred by our appearance in the Twitter files, which left people with a pretty distorted sense of our mission and work,” he said.

Last year, the Republican attorneys general of Missouri and Louisiana sued the Biden administration in U.S. District Court in Louisiana, arguing that government officials effectively cajoled or coerced Twitter, Facebook and other social media platforms by threatening legislative changes. The judge, Terry A. Doughty, rejected a defense motion to dismiss the lawsuit in March.

The current campaign’s focus is not government officials but rather private individuals working for universities or nongovernmental organizations. They have their own First Amendment guarantees of free speech, including in their interactions with the social media companies.

The group behind the class action, America First Legal, named as defendants two researchers at the Stanford Internet Observatory, Alex Stamos and Renée DiResta; a professor at the University of Washington, Kate Starbird; an executive of Graphika, Camille François; and the senior director of the Atlantic Council’s Digital Forensic Research Lab, Graham Brookie.

Renée DiResta, a researcher at the Stanford Internet Observatory, is among the defendants named in a lawsuit filed by America First Legal, a conservative group. Manuel Balce Ceneta/Associated Press

If the lawsuit proceeds, they could face trial and, potentially, civil damages if the accusations are upheld.

Mr. Miller, the president of America First Legal, did not respond to a request for comment. In a statement last month, he said the lawsuit was “striking at the heart of the censorship-industrial complex.”

Stephen Miller, a former adviser to former President Donald J. Trump, leads America First Legal. Kevin Dietsch/Getty Images

The researchers, who have been asked by the House committee to submit emails and other records, are also defendants in the lawsuit brought by the attorneys general of Missouri and Louisiana. The plaintiffs include Jill Hines, a director of Health Freedom Louisiana, an organization that has been accused of spreading disinformation, and Jim Hoft, the founder of the Gateway Pundit, a right-wing news site. The court in the Western District of Louisiana has, under Judge Doughty, become a favored venue for legal challenges against the Biden administration.

The attacks use “the same argument that starts with some false premises,” said Jeff Hancock, the founding director of the Stanford Social Media Lab, which is not a party to any of the legal action. “We see it in the media, in the congressional committees and in lawsuits, and it is the same core argument, with a false premise about the government giving some type of direction to the research we do.”

The House Judiciary Committee has focused much of its questioning on two collaborative projects. One was the Election Integrity Partnership, which Stanford and the University of Washington formed before the 2020 election to identify attempts “to suppress voting, reduce participation, confuse voters or delegitimize election results without evidence.” The other, also organized by Stanford, was called the Virality Project and focused on the spread of disinformation about Covid-19 vaccines.

Both subjects have become political lightning rods, exposing the researchers to partisan attacks online that have at times become ominously personal.

In the case of the Stanford Internet Observatory, the requests for information — including all emails — have even extended to students who volunteered to work as interns for the Election Integrity Partnership.

A central premise of the committee’s investigation — and of the other complaints about censorship — is that the researchers or government officials had the power or ability to shut down accounts on social media. They did not, according to former employees at Twitter and Meta, which owns Facebook and Instagram, who said the decision to punish users who violated platform rules belonged solely to the companies.

No evidence has emerged that government officials coerced the companies to take action against accounts, even when the groups flagged problematic content.

“We have not only academic freedom as researchers to conduct this research but freedom of speech to tell Twitter or any other company to look at tweets we might think violate rules,” Mr. Hancock said.

The universities and research organizations have sought to comply with the committee’s requests, though collecting years of emails has been a time-consuming task complicated by issues of privacy. They face mounting legal costs and questions from directors and donors about the risks raised by studying disinformation. Online attacks have also taken a toll on morale and, in some cases, scared away students.

In May, Mr. Jordan, the committee’s chairman, threatened Stanford with unspecified legal action for not complying with a previously issued subpoena, even though the university’s lawyers have been negotiating with the committee’s lawyers over how to shield students’ privacy. (Several of the students who volunteered are identified in the America First Legal lawsuit.)

The committee declined to discuss details of the investigation, including how many requests or subpoenas it has filed in total. Nor has it disclosed how it expects the inquiry to unfold — whether it will prepare a final report or make criminal referrals and, if so, when. In its statements, though, it appears to have already reached a broad conclusion.

“The Twitter files and information from private litigation show how the federal government worked with social media companies and other entities to silence disfavored speech online,” a spokesman, Russell Dye, said in a statement. “The committee is working hard to get to the bottom of this censorship to protect First Amendment rights for all Americans.”

The partisan controversy is having an effect on not only the researchers but also the social media giants.

Twitter, under Mr. Musk, has made a point of lifting restrictions and restoring accounts that had been suspended, including the Gateway Pundit’s. YouTube recently announced that it would no longer ban videos that advanced “false claims that widespread fraud, errors or glitches occurred in the 2020 and other past U.S. presidential elections.”

  • Possible Cyberattack Disrupts The Philadelphia Inquirer

The Inquirer, citing “anomalous activity” on its computer systems, said it was unable to print its regular Sunday edition and told staff members not to work in the newsroom at least through Tuesday.

A possible cyberattack on The Philadelphia Inquirer disrupted the newspaper’s print operation over the weekend and prompted it to close its newsroom through at least Tuesday, when its staff will be covering an expensive and fiercely contested mayoral primary.

Elizabeth H. Hughes, the publisher and chief executive of The Inquirer, said that the newspaper discovered “anomalous activity on select computer systems” on Thursday and “immediately took those systems offline.”

But The Inquirer was unable to print its regular Sunday edition, the newspaper reported. Instead, print subscribers received a Sunday “early edition,” which went to press on Friday night. The newspaper also reported on Sunday that its ability to post and update stories on its website, Inquirer.com, was “sometimes slower than normal.”

The Monday print editions of The Inquirer and The Philadelphia Daily News, which The Inquirer also publishes, were distributed as scheduled, Evan Benn, a company spokesman, said.

But employees will not be permitted to work in the newsroom at least through Tuesday because access to The Inquirer’s internet servers has been disrupted, Ms. Hughes said in an email to the staff on Sunday evening that was shared with The New York Times.

Ms. Hughes said that the company was looking for a co-working space for Tuesday, when The Inquirer will be covering a closely contested Democratic primary that is all but certain to determine the next mayor of Philadelphia — the largest city in Pennsylvania, a presidential swing state.

“I truly don’t think it will impact it at all, short of us not being able to be together in the formal newsroom,” said Diane Mastrull, an editor who is president of The Newspaper Guild of Greater Philadelphia, the union that represents reporters, photographers and other staff members at The Inquirer. “Covid has certainly taught us to do our jobs remotely.”

She said on Monday that the newspaper’s content management system, which staff members use to write and edit stories, was “operating with continued workarounds.”

“I would not use the word ‘normal,’” Ms. Mastrull said.

Ms. Hughes said that The Inquirer had notified the F.B.I. and had “implemented alternative processes to enable publication of print editions.”

The newspaper was also working with Kroll, a corporate investigation firm, to restore its systems and to investigate the episode, Ms. Hughes said.

The Inquirer, in its news story on the “apparent cyberattack,” said it was the most significant disruption to the publication of the newspaper since January 1996, when a major blizzard dropped more than 30 inches of snow on Philadelphia.

The newspaper reported that Ms. Hughes, citing a continuing investigation, had declined to answer detailed questions about the episode, including who was behind it, whether The Inquirer or its employees appeared to have been specifically targeted, or whether any sensitive employee or subscriber information might have been compromised.

In an email on Monday, Mr. Benn, the company spokesman, said: “As our investigation is ongoing, we are unable to provide additional information at this time. Should we discover that any personal data was affected, we will notify and support” anyone who might have been affected.

Special Agent E. Edward Conway of the F.B.I. field office in Philadelphia said that while the agency was aware of the issue, it was the bureau’s practice not to comment on specific cyber incidents. “However, when the F.B.I. learns about potential cyberattacks, it’s customary that we offer our assistance in these matters,” Mr. Conway said in an email.

Ms. Mastrull, who was working as an editor over the weekend, said that staff members had noticed on Saturday that they could not log on to the content management system.

They were given a workaround, she said, but the process created “very, very difficult working conditions” as the staff covered the last weekend of campaign events before the primary, Taylor Swift concerts at Lincoln Financial Field and Game 7 of the Eastern Conference semifinals between the Boston Celtics and the Philadelphia 76ers.

Employees were “a little concerned that there weren’t enough protections against this, and very frustrated that the company’s communication was lacking specifics,” Ms. Mastrull said.

In 2018, The Los Angeles Times said that a cyberattack had disrupted its printing operations and those at newspapers in San Diego and Florida. Unnamed sources cited by The Los Angeles Times suggested that the newspaper might have been hit by ransomware — a pernicious attack that scrambles computer programs and files before demanding that the victim pay a ransom to unscramble them.

The Guardian reported that it was hit by a ransomware attack in December in which the personal data of staff members in Britain was compromised. The attack forced it to close its offices for several months.

In an email to the staff of The Inquirer on Sunday night, Ms. Mastrull summarized the day’s news and paid tribute to the staff members who covered it, “despite a publishing system rendered virtually inoperable.”

“Now all we have to do is find some co-working space so we can cover a really important election Tuesday,” she wrote. “Can’t keep us down!”

  • Misinformation Defense Worked in 2020, Up to a Point, Study Finds

    Nearly 68 million Americans still visited untrustworthy websites 1.5 billion times in a month, according to Stanford researchers, causing concerns for 2024.Not long after misinformation plagued the 2016 election, journalists and content moderators scrambled to turn Americans away from untrustworthy websites before the 2020 vote.A new study suggests that, to some extent, their efforts succeeded.When Americans went to the polls in 2020, a far smaller portion had visited websites containing false and misleading narratives compared with four years earlier, according to researchers at Stanford. Although the number of such sites ballooned, the average visits among those people dropped, along with the time spent on each site.Efforts to educate people about the risk of misinformation after 2016, including content labels and media literacy training, most likely contributed to the decline, the researchers found. Their study was published on Thursday in the journal Nature Human Behaviour.“I am optimistic that the majority of the population is increasingly resilient to misinformation on the web,” said Jeff Hancock, the founding director of the Stanford Social Media Lab and the lead author of the report. “We’re getting better and better at distinguishing really problematic, bad, harmful information from what’s reliable or entertainment.”“I am optimistic that the majority of the population is increasingly resilient to misinformation on the web,” said Jeff Hancock, the lead author of the Stanford report.Ian C. Bates for The New York TimesStill, nearly 68 million people in the United States checked out websites that were not credible, visiting 1.5 billion times in a month in 2020, the researchers estimated. That included domains that are now defunct, such as theantimedia.com and obamawatcher.com. Some people in the study visited some of those sites hundreds of times.As the 2024 election approaches, the researchers worry that misinformation is evolving and splintering. 
    Beyond web browsers, many people are exposed to conspiracy theories and extremism simply by scrolling through mobile apps such as TikTok. More dangerous content has shifted onto encrypted messaging apps with difficult-to-trace private channels, such as Telegram or WhatsApp. The boom in generative artificial intelligence, the technology behind the popular ChatGPT chatbot, has also raised alarms about deceptive images and mass-produced falsehoods.

    The Stanford researchers said that even limited or concentrated exposure to misinformation could have serious consequences. Baseless claims of election fraud incited a riot at the Capitol on Jan. 6, 2021. More than two years later, congressional hearings, criminal trials and defamation court cases are still addressing what happened.

    The Stanford researchers monitored the online activity of 1,151 adults from Oct. 2 through Nov. 9, 2020, and found that 26.2 percent visited at least one of 1,796 unreliable websites. They noted that the time frame did not include the postelection period, when baseless claims of voter fraud were especially pronounced. That was down from an earlier, separate report that found that 44.3 percent of adults had visited at least one of 490 problematic domains in 2016.

    The shrinking audience may have been influenced by attempts, including by social media companies, to mitigate misinformation, according to the researchers. They noted that 5.6 percent of the visits to untrustworthy sites in 2020 originated from Facebook, down from 15.1 percent in 2016. Email also played a smaller role in sending users to such sites in 2020.

    Other researchers have highlighted more ways to limit the lure of misinformation, especially around elections.
    The Bipartisan Policy Center suggested in a report this week that states adopt direct-to-voter texts and emails that offer vetted information.

    Social media companies should also do more to discourage performative outrage and so-called groupthink on their platforms — behavior that can fortify extreme subcultures and intensify polarization, said Yini Zhang, an assistant professor of communication at the University at Buffalo. Professor Zhang, who published a study this month about QAnon, said tech companies should instead encourage more moderate engagement, even by renaming “like” buttons to something like “respect.”

    “For regular social media users, what we can do is dial back on the tribal instincts, to try to be more introspective and say: ‘I’m not going to take the bait. I’m not going to pile on my opponent,’” she said.

    With next year’s presidential election looming, researchers said they are concerned about populations known to be vulnerable to misinformation, such as older people, conservatives and people who do not speak English. More than 37 percent of people older than 65 visited misinformation sites in 2020 — a far higher rate than younger groups but an improvement from 56 percent in 2016, according to the Stanford report. In 2020, 36 percent of people who supported President Donald J. Trump in the election visited at least one misinformation site, compared with nearly 18 percent of people who supported Joseph R. Biden Jr. The participants also completed a survey that included questions about their preferred candidate.

    Mr. Hancock said that misinformation should be taken seriously, but that its scale should not be exaggerated.
    The Stanford study, he said, showed that the news consumed by most Americans was not misinformation, but that certain groups of people were most likely to be targeted. Treating conspiracy theories and false narratives as an ever-present, wide-reaching threat could erode the public’s trust in legitimate news sources, he said.

    “I still think there’s a problem, but I think it’s one that we’re dealing with and that we’re also recognizing doesn’t affect most people most of the time,” Mr. Hancock said. “If we are teaching our citizens to be skeptical of everything, then trust is undermined in all the things that we care about.”


    Attacks on Dominion Voting Persist Despite High-Profile Lawsuits

    Unproven claims about Dominion Voting Systems still spread widely online.

    With a series of billion-dollar lawsuits, including a $1.6 billion case against Fox News headed to trial this month, Dominion Voting Systems sent a stark warning to anyone spreading falsehoods that the company’s technology contributed to fraud in the 2020 election: Be careful with your words, or you might pay the price.

    Not everyone is heeding the warning.

    “Dominion, why don’t you show us what’s inside your machines?” Mike Lindell, the MyPillow executive and prominent election denier, shouted during a livestream last month. He added that the company, which has filed a $1.3 billion defamation lawsuit against him, was engaged in “the biggest cover-up for the biggest crime in United States history — probably in world history.”

    Claims that election software companies like Dominion helped orchestrate widespread fraud in the 2020 election have been widely debunked in the years since former President Donald J. Trump and his allies first pushed the theories. But far-right Americans on social media and influencers in the news media have continued in recent weeks and months to make unfounded assertions about the company and its electronic voting machines, pressuring government officials to scrap contracts with Dominion, sometimes successfully.

    The enduring attacks illustrate how Mr. Trump’s voter fraud claims have taken root in the shared imagination of his supporters. And they reflect the daunting challenge that Dominion, and any other group that draws the attention of conspiracy theorists, faces in putting false claims to rest.

    The attacks on Dominion have not reached the fevered pitch of late 2020, when the company was cast as a central villain in an elaborate and fictitious voter fraud story.
    In that tale, the company swapped votes between candidates, injected fake ballots or allowed glaring security vulnerabilities to remain on voting machines. Dominion says all those claims have been made without proof to support them.

    “Nearly two years after the 2020 election, no credible evidence has ever been presented to any court or authority that voting machines did anything other than count votes accurately and reliably in all states,” Dominion said in an emailed statement.

    On Friday, the judge in Delaware overseeing the Fox defamation case ruled that it was “CRYSTAL clear” that Fox News and Fox Business had made false claims about the company — a major setback for the network.

    Many prominent influencers have avoided mentioning the company since Dominion started suing prominent conspiracy theorists in 2021. Fox News fired Lou Dobbs that year — only days after it was sued by Smartmatic, another election software company — saying the network was focusing on “new formats.” Mr. Dobbs is also a defendant in Dominion’s case against Fox, which is scheduled to go to trial on April 17.

    Yet there have been nearly nine million mentions of Dominion across social media websites, broadcasts and traditional media since Dominion filed its first lawsuit in January 2021, including nearly a million that have mentioned “fraud” or related conspiracy theories, according to Zignal Labs, a media monitoring company. Some of the most widely shared posts came from Representative Marjorie Taylor Greene, Republican of Georgia, who tweeted last month that the lawsuits were politically motivated, and Kari Lake, the former Republican candidate for governor of Arizona, who has advanced voter fraud theories about election machines since her defeat last year.

    Mr. Lindell remains one of the loudest voices pushing unproven claims against Dominion and electronic voting machines, posting hundreds of videos to Frank Speech, his news site, attacking the company with tales of voter fraud.

    Last month, Mr. Lindell celebrated on his livestream after Shasta County, a conservative stronghold in Northern California, voted to use paper ballots after ending its contract with Dominion. A county supervisor had flown to meet privately with Mr. Lindell before the vote, discussing how to run elections without voting machines, according to Mr. Lindell. The supervisor ultimately voted to switch to paper ballots.

    In an interview this week with The New York Times, Mr. Lindell claimed to have spent millions of dollars on campaigns to end election fraud, focusing on abolishing electronic voting systems and replacing them with paper ballots and hand counting.

    “I will never back down, ever, ever, ever,” he said in the interview. He added that Dominion’s lawsuit against him, which is continuing after the United States Supreme Court declined to consider his appeal, was “frivolous” and that the company was “guilty.”

    “They can’t deny it, nobody can deny it,” Mr. Lindell said.

    Joe Oltmann, the host of “Conservative Daily Podcast” and a promoter of voter fraud conspiracy theories, hosted an episode in late March titled “Dominion Is FINISHED,” in which he claimed that there was a “device that’s used in Dominion machines to actually transfer ballots,” offering only speculative support. “This changes everything,” Mr. Oltmann said.

    Dominion sent Mr. Oltmann a letter in 2020 demanding that he preserve documents related to his claims about the company, which is often the first step in a defamation lawsuit.

    In a livestream last month on Rumble, the streaming platform popular among right-wing influencers, Tina Peters, a former county clerk in Colorado who was indicted on 10 charges related to allegations that she tampered with Dominion’s election equipment, devoted more than an hour to various election fraud claims, many of them featuring Dominion. The discussion included a suggestion that because boxes belonging to Dominion were stamped with “Made in China,” the election system was vulnerable to manipulation by the Chinese Communist Party.

    Mr. Oltmann and Ms. Peters did not respond to requests for comment.

    The Fox lawsuit has also added fuel to the conspiracy theory fire. Far-right news sites have largely ignored the finding that Fox News hosts disparaged voter fraud claims privately, even as they gave them significant airtime. Instead, the Gateway Pundit, a far-right site known for pushing voter fraud theories, focused on separate documents showing that Dominion executives “knew its voting systems had major security issues,” the site wrote.

    The documents showed frenzied private messages between Dominion employees as they were troubleshooting problems, with one employee remarking, “our products suck.” In an email, a Dominion spokeswoman noted the remark was about a splash screen that was hiding an error message.

    In February, Mr. Trump shared the Gateway Pundit story on Truth Social, his right-wing social network, stoking a fresh wave of attacks against the company.

    “We will not be silent,” said one far-right influencer whose messages are sometimes shared by Mr. Trump on Truth Social. “Dominion is the enemy!”


    YouTube Restores Donald Trump’s Account Privileges

    The Google-owned video platform became the latest of the big social networks to reverse the former president’s account restrictions.

    YouTube suspended former President Donald J. Trump’s account on the platform six days after the Jan. 6 attack on the Capitol. The video platform said it was concerned that Mr. Trump’s lies about the 2020 election could lead to more real-world violence.

    YouTube, which is owned by Google, reversed that decision on Friday, permitting Mr. Trump to once again upload videos to the popular site. The move came after similar decisions by Twitter and Meta, which owns Facebook and Instagram.

    “We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election,” YouTube said on Twitter on Friday. Mr. Trump’s account will have to comply with the site’s content rules like any other account, YouTube added.

    After false claims that the 2020 presidential election was stolen circulated online and helped stoke the Jan. 6 attack, social media giants suspended Mr. Trump’s account privileges. Two years later, the platforms have started to soften their content rules. Under Elon Musk’s ownership, Twitter has unwound many of its content moderation efforts. YouTube recently laid off members of its trust and safety team, leaving one person in charge of setting political misinformation policies.

    Mr. Trump announced in November that he was seeking a second term as president, setting off deliberations at social media companies over whether to allow him back on their platforms. Days later, Mr. Musk polled Twitter users on whether he should reinstate Mr. Trump, and 52 percent of respondents said yes.
    Like YouTube, Meta said in January that it was important for people to hear what political candidates are saying ahead of an election.

    The former president’s reinstatement is one of the first significant content decisions YouTube has made under its new chief executive, Neal Mohan, who got the top job last month. YouTube also recently loosened its profanity rules so that creators who used swear words at the start of a video could still make money from the content.

    YouTube’s announcement on Friday echoes a pattern of the company and its parent, Google, making polarizing content decisions only after a competitor has taken the same action. YouTube followed Meta and Twitter both in suspending Mr. Trump after the Capitol attack and in reversing the bans.

    Since losing his bid for re-election in 2020, Mr. Trump has sought to make a success of his own social media service, Truth Social, which is known for its loose content moderation rules.

    Mr. Trump posted on his Facebook page on Friday for the first time since his reinstatement. “I’M BACK!” Mr. Trump wrote, alongside a video in which he said, “Sorry to keep you waiting. Complicated business. Complicated.”

    Despite his Twitter reinstatement, Mr. Trump has not returned to posting from that account. In his last tweet, dated Jan. 8, 2021, he said he would not attend the coming inauguration, held at the Capitol.


    Plans in Congress on China and TikTok Face Hurdles After Spy Balloon Furor

    With budgets tight and political knives drawn, lawmakers seeking to capitalize on a bipartisan urgency to confront China are setting their sights on narrower measures.

    WASHINGTON — Republicans and Democrats are pressing for major legislation to counter rising threats from China, but mere weeks into the new Congress, a bipartisan consensus is at risk of dissipating amid disputes over what steps to take and a desire among many Republicans to wield the issue as a weapon against President Biden.

    In the House and Senate, leading lawmakers in both parties have managed, in an otherwise bitterly divided Congress, to stay unified about the need to confront the dangers posed by China’s militarization, its deepening ties with Russia and its ever-expanding economic footprint.

    But a rising chorus of Republican vitriol directed at Mr. Biden after a Chinese spy balloon flew over the United States this month upended that spirit — giving way to G.O.P. accusations that the president was “weak on China” — and suggested that the path ahead for any bipartisan action is exceedingly narrow.

    “When the balloon story popped, so to speak, it felt like certain people used that as an opportunity to bash President Biden,” said Representative Raja Krishnamoorthi of Illinois, the top Democrat on the select panel the House created to focus on competition with China. “And it felt like no matter what he did, they wanted to basically call him soft on the C.C.P., and unable to protect America,” he said, referring to the Chinese Communist Party. “That’s where I think we can go wayward politically.”

    For now, only a few, mostly narrow ventures have drawn enough bipartisan interest to have a chance at advancing amid the political tide.
    They include legislation to ban TikTok, the Beijing-based social media platform that lawmakers have warned for years is an intelligence-gathering gold mine for the Chinese government; bills that would ban Chinese purchases of farmland and other agricultural real estate, especially in areas near sensitive military sites; and measures to limit U.S. exports and outbound investments to China.

    Such initiatives are limited in scope, predominantly defensive and relatively cheap — which lawmakers say are important factors in getting legislation over the hurdles posed by this split Congress. And, experts point out, none are issues that would be felt keenly by voters, or translate particularly well into political pitches on the 2024 campaign trail.

    “There would be nervousness among Republicans about giving the administration a clear win, but I’m just not sure that the kind of legislation they’ll be looking at would be doing that,” said Zack Cooper, who researches U.S.-China competition at the American Enterprise Institute. “It’s more things that would penalize China than be focused on investing in the U.S. in the next couple of years.”

    At the start of the year, the momentum behind bipartisan efforts to confront China seemed strong, with Republicans and Democrats banding together to pass the bill setting up the select panel and legislation to deny China crude oil exports from the U.S. Strategic Petroleum Reserve. A resolution condemning Beijing for sending the spy balloon over the United States passed unanimously after Republican leaders decided not to take the opportunity to rebuke Mr. Biden, as many on the right had clamored for.

    But with partisan divisions beginning to intensify and a presidential election looming, it appears exceedingly unlikely that Congress will be able to muster an agreement as large or significant as last year’s major legislation to subsidize microchip manufacturing and scientific research — a measure that members of both parties described as only one of many policy changes that would be needed to counter China.

    “The biggest challenge is just the overall politicized environment that we’re in right now and the lack of trust between the parties,” said Representative Mike Gallagher of Wisconsin, the chairman of the new select panel, who has committed to making his committee an “incubator and accelerator” for China legislation. “Everyone has their guard up.”

    Still, there are some areas of potential compromise.
    Many lawmakers are eyeing 2023 as the year Congress can close any peepholes China may have into the smartphones of more than 100 million American TikTok users, but they have yet to agree on how to do so.

    Some Republicans have proposed imposing sanctions to ice TikTok out of the United States, while Representative Michael McCaul, Republican of Texas and the chairman of the Foreign Affairs Committee, wants to allow the president to block the platform by lifting statutory prohibitions on banning foreign information sources.

    Senator Marco Rubio of Florida, the top Republican on the Senate Intelligence Committee, and Senator Angus King, independent of Maine and a member of the panel, want to prevent social media companies under Chinese or Russian influence from operating in the United States unless they divest from foreign ownership.

    But none have yet earned a seal of approval from Senator Mark Warner of Virginia, the Democrat who is chairman of the committee and whose support is considered critical to any bill’s success. He was the chief architect of last year’s sweeping China competition bill, known as the CHIPS and Science Act, and he wants to tackle foreign data collection more broadly.

    “We’ve had a whack-a-mole approach on foreign technology that poses a national security risk,” Mr. Warner said in an interview, bemoaning that TikTok was only the latest in a long line of foreign data firms, like the Chinese telecom giant Huawei and the Russian cybersecurity firm Kaspersky Lab, to be targeted by Congress. “We need an approach that is constitutionally defensible.”

    There is a similar flurry of activity among Republican and Democratic lawmakers proposing bans on Chinese purchases of farmland in sensitive areas.
    But lawmakers remain split over how broad such a ban should be, whether agents of other adversary nations ought also to be subject to the prohibition, and whether Congress ought to update the whole process of reviewing foreign investment transactions by including the Agriculture Department in the Committee on Foreign Investment in the United States, an interagency group.

    “It’s actually kind of a more fraught issue than you would imagine,” Mr. Gallagher said.

    Lawmakers in both parties who want to put forth legislation to limit U.S. goods and capital from reaching Chinese markets are also facing challenges. The Biden administration has already started to take unilateral action on the issue, and further steps could box lawmakers out. Even if Congress can stake out a role for itself, it is not entirely clear which committee would take the lead on a matter that straddles a number of areas of jurisdiction.

    Even before the balloon incident, existential policy differences between Republicans and Democrats, particularly around spending, made for slim odds that Congress could achieve sweeping legislative breakthroughs regarding China. Architects of last year’s law were dour about the prospect of the current Congress attempting anything on a similar scale.

    “The chances of us passing another major, comprehensive bill are not high,” said Senator John Cornyn of Texas, the lead Republican on the CHIPS effort, who noted that with the slim G.O.P. majority in the House, it would be difficult to pass a costly investment bill.

    G.O.P. lawmakers have been demanding cuts to the federal budget, and House Speaker Kevin McCarthy, Republican of California, has indicated that even military spending might be on the chopping block.
    Though no one has specifically advocated cutting programs related to countering China, the possibility has some lawmakers nervous, particularly since certain recent ventures Congress created to beef up security assistance to Taiwan have already failed to secure funding at their intended levels.

    That backdrop could complicate even bipartisan ventures seeking to authorize new programs to counter China diplomatically and militarily, such as a proposal in the works from Senator Robert Menendez of New Jersey, the chairman of the Foreign Relations Committee, and Senator James Risch of Idaho, the top Republican, to step up foreign aid and military assistance to American allies in Beijing’s sphere of influence.

    That likely means that action on any comprehensive China bill would need to be attached to another must-pass measure, such as the annual defense authorization bill, to break through the political logjams of this Congress, said Richard Fontaine, the chief executive of the Center for a New American Security.

    “China has risen as a political matter and things are possible that weren’t before, but it has not risen so high as to make the hardest things politically possible,” Mr. Fontaine said.