More stories

  • Crisis at Gaza’s Main Hospital, and More

    The New York Times Audio app is home to journalism and storytelling, and provides news, depth and serendipity. If you haven’t already, download it here — available to Times news subscribers on iOS — and sign up for our weekly newsletter.

    The Headlines brings you the biggest stories of the day from the Times journalists who are covering them, all in about 10 minutes.

    [Photo: Intense, close-quarters combat is taking place near Al-Shifa Hospital, the largest in the Gaza Strip. Credit: Khader Al Zanoun/Agence France-Presse — Getty Images]

    On Today’s Episode:

    • Crisis Heightens at Gaza’s Main Hospital Amid Dispute Over Desperately Needed Fuel
    • Tim Scott Suspends 2024 Campaign, After Sunny Message Failed to Resonate
    • Can’t Think, Can’t Remember: More Americans Say They’re in a Cognitive Fog

    Emily Lang

  • Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.

    It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

    Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.

    But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.

    These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

    ‘Filter bubbles’ and democracy

    Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to that used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

    But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.

    The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

    The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber is Destroying Democracy,” Wired Magazine claimed a few weeks later.

    Changing information doesn’t change minds

    But without rigorous testing, it’s been hard to figure out whether the filter bubble effect was real. The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

    Meta gave unprecedented access to the researchers during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions, and Meta was only able to veto requests that would violate users’ privacy.

    The studies, taken together, suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation,” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

    “I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

    When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

    Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative.

    But it also has implications that are far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views. Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research.

    “From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship. “We’re seeing null effects on both factual views and attitudes.”

    As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.

  • Misinformation Defense Worked in 2020, Up to a Point, Study Finds

    Nearly 68 million Americans still visited untrustworthy websites 1.5 billion times in a month, according to Stanford researchers, raising concerns for 2024.

    Not long after misinformation plagued the 2016 election, journalists and content moderators scrambled to turn Americans away from untrustworthy websites before the 2020 vote. A new study suggests that, to some extent, their efforts succeeded.

    When Americans went to the polls in 2020, a far smaller portion had visited websites containing false and misleading narratives compared with four years earlier, according to researchers at Stanford. Although the number of such sites ballooned, the average visits among those people dropped, along with the time spent on each site.

    Efforts to educate people about the risk of misinformation after 2016, including content labels and media literacy training, most likely contributed to the decline, the researchers found. Their study was published on Thursday in the journal Nature Human Behaviour.

    “I am optimistic that the majority of the population is increasingly resilient to misinformation on the web,” said Jeff Hancock, the founding director of the Stanford Social Media Lab and the lead author of the report. “We’re getting better and better at distinguishing really problematic, bad, harmful information from what’s reliable or entertainment.”

    [Photo: Jeff Hancock, the lead author of the Stanford report. Credit: Ian C. Bates for The New York Times]

    Still, nearly 68 million people in the United States checked out websites that were not credible, visiting 1.5 billion times in a month in 2020, the researchers estimated. That included domains that are now defunct, such as theantimedia.com and obamawatcher.com. Some people in the study visited some of those sites hundreds of times.

    As the 2024 election approaches, the researchers worry that misinformation is evolving and splintering. Beyond web browsers, many people are exposed to conspiracy theories and extremism simply by scrolling through mobile apps such as TikTok. More dangerous content has shifted onto encrypted messaging apps with difficult-to-trace private channels, such as Telegram or WhatsApp.

    The boom in generative artificial intelligence, the technology behind the popular ChatGPT chatbot, has also raised alarms about deceptive images and mass-produced falsehoods.

    The Stanford researchers said that even limited or concentrated exposure to misinformation could have serious consequences. Baseless claims of election fraud incited a riot at the Capitol on Jan. 6, 2021. More than two years later, congressional hearings, criminal trials and defamation court cases are still addressing what happened.

    The Stanford researchers monitored the online activity of 1,151 adults from Oct. 2 through Nov. 9, 2020, and found that 26.2 percent visited at least one of 1,796 unreliable websites. They noted that the time frame did not include the postelection period, when baseless claims of voter fraud were especially pronounced.

    That was down from an earlier, separate report, which found that 44.3 percent of adults visited at least one of 490 problematic domains in 2016.

    The shrinking audience may have been influenced by attempts, including by social media companies, to mitigate misinformation, according to the researchers. They noted that 5.6 percent of the visits to untrustworthy sites in 2020 originated from Facebook, down from 15.1 percent in 2016. Email also played a smaller role in sending users to such sites in 2020.

    Other researchers have highlighted more ways to limit the lure of misinformation, especially around elections. The Bipartisan Policy Center suggested in a report this week that states adopt direct-to-voter texts and emails that offer vetted information.

    Social media companies should also do more to discourage performative outrage and so-called groupthink on their platforms — behavior that can fortify extreme subcultures and intensify polarization, said Yini Zhang, an assistant communication professor at the University at Buffalo.

    Professor Zhang, who published a study this month about QAnon, said tech companies should instead encourage more moderate engagement, even by renaming “like” buttons to something like “respect.”

    “For regular social media users, what we can do is dial back on the tribal instincts, to try to be more introspective and say: ‘I’m not going to take the bait. I’m not going to pile on my opponent,’” she said.

    [Photo: A QAnon flag on a vehicle headed to a pro-Trump rally in October. Credit: Brittany Greeson for The New York Times]

    With next year’s presidential election looming, researchers said they are concerned about populations known to be vulnerable to misinformation, such as older people, conservatives and people who do not speak English.

    More than 37 percent of people older than 65 visited misinformation sites in 2020 — a far higher rate than younger groups, but an improvement from 56 percent in 2016, according to the Stanford report. In 2020, 36 percent of people who supported President Donald J. Trump in the election visited at least one misinformation site, compared with nearly 18 percent of people who supported Joseph R. Biden Jr. The participants also completed a survey that included questions about their preferred candidate.

    Mr. Hancock said that misinformation should be taken seriously, but that its scale should not be exaggerated. The Stanford study, he said, showed that the news consumed by most Americans was not misinformation, but that certain groups of people were most likely to be targeted. Treating conspiracy theories and false narratives as an ever-present, wide-reaching threat could erode the public’s trust in legitimate news sources, he said.

    “I still think there’s a problem, but I think it’s one that we’re dealing with and that we’re also recognizing doesn’t affect most people most of the time,” Mr. Hancock said. “If we are teaching our citizens to be skeptical of everything, then trust is undermined in all the things that we care about.”

  • Meet the People Working on Getting Us to Hate Each Other Less

    Affective polarization — “a poisonous cocktail of othering, aversion and moralization” — has prompted an explosion of research as the threat to democratic norms and procedures mounts.

    Intensely felt divisions over race, ethnicity and culture have become more deeply entrenched in the American political system, reflected in part in the election denialism found in roughly a third of the electorate and in state legislative initiatives giving politicians the power to overturn election results.

    Many researchers have begun to focus on this question: Is there a causal relationship between the intensification of hostility between Democrats and Republicans and the deterioration of support for democratic standards?

    “Growing affective polarization and negative partisanship,” Jennifer McCoy and Murat Somer, political scientists at Georgia State University and Koç University-Istanbul, write in a 2019 essay, “Toward a Theory of Pernicious Polarization and How It Harms Democracies: Comparative Evidence and Possible Remedies,”

      contribute to a perception among citizens that the opposing party and their policies pose a threat to the nation or an individual’s way of life. Most dangerously for democracy, these perceptions of threat open the door to undemocratic behavior by an incumbent and his/her supporters to stay in power, or by opponents to remove the incumbent from power.

    What is affective polarization? In 2016, Lilliana Mason, a political scientist at Johns Hopkins, wrote that when a voter’s “partisan social identity” merges with his or her racial, religious, sexual and cultural identities, “these various identities work together to drive an emotional type of polarization that cannot be explained by parties or issues alone.”

    Mason argues that “threats to a party’s status tend to drive anger, while reassurances drive enthusiasm” so that

      a party loss generates very negative, particularly angry, emotional reactions. This anger is driven not simply by dissatisfaction with potential policy consequences, but by a much deeper, more primal psychological reaction to group threat. Partisans are angered by a party loss because it makes them, as individuals, feel like losers too.

    One optimistic proposal to reduce partisan animosity is to focus public attention on the commonality of Democratic and Republican voters in their shared identity as Americans. Matthew Levendusky, a political scientist at the University of Pennsylvania, has written extensively on this subject, including in his 2018 paper “Americans, Not Partisans: Can Priming American National Identity Reduce Affective Polarization?” and in his soon-to-be-published book, “Our Common Bonds: Using What Americans Share to Help Bridge the Partisan Divide.”

    “I show,” Levendusky contends in his 2018 paper, “that when subjects’ sense of American national identity is heightened, they come to see members of the opposing party as fellow Americans rather than rival partisans. As a result, they like the opposing party more, thereby reducing affective polarization.”

    There are serious problems, however, with a depolarization strategy based on American identity, problems that go to the heart of the relentless power of issues of race, ethnicity and immigration to splinter the electorate.

    In their December 2022 paper, “‘American’ Is the Eye of the Beholder: American Identity, Racial Sorting, and Affective Polarization among White Americans,” Ryan Dawkins and Abigail Hanson write that

      White Democrats and White Republicans have systematically different ideas about what attributes are essential to being a member of the national community. Second, the association between partisanship and these competing conceptions of American identity among White Americans has gotten stronger during the Trump era, largely because of Democrats adopting a more racially inclusive conception of American identity. Lastly, appeals to American identity only dampen out-partisan animosity when the demographic composition of the opposing party matches their racialized conception of American identity. When there is a mismatch between people’s racialized conception of American identity and the composition of the opposition party, American identity is associated with higher levels of partisan hostility.

    Dawkins and Hanson acknowledge that “national identity is perhaps the only superordinate identity that holds the promise of uniting partisans and closing the social distance between White Democrats and White Republicans,” but, they continue,

      If conceptions of national identity itself become the subject of the very sorting process that is driving affective polarization, then it can no longer serve as a unifying identity that binds the entire country together. In fact, frames that highlight the association of American identity to historic norms of whiteness can ultimately divide the country further, especially as the United States transitions into a majority-minority country. Indeed, continued demographic change will likely make the schism between White Democrats and White Republicans wider before things have any hope to improve.

    I asked Levendusky about the Dawkins-Hanson paper. He replied by email that he was now “convinced that there is no simple path from animosity (or affective polarization) to far downstream outcomes (albeit important ones)” — adding that “there’s a long way from ‘I dislike members of the other party’ to ‘I will vote for a candidate who broke democratic norms rather than a candidate from the other party’ and the process is likely complex and subtle.”

    In an August 2022 paper, “Does Affective Polarization Undermine Democratic Norms or Accountability? Maybe Not,” David E. Broockman, a political scientist at Berkeley, Joshua L. Kalla, a political scientist at Yale, and Sean J. Westwood, a political scientist at Dartmouth, pointedly reject the claim made by a number of scholars “that if citizens were less affectively polarized, they would be less likely to endorse norm violations, overlook copartisan politicians’ shortcomings, oppose compromise, adopt their party’s views, or misperceive economic conditions. A large, influential literature speculates as such.”

    Instead, Broockman, Kalla and Westwood contend, their own studies “find no evidence that these changes in affective polarization influence a broad range of political behaviors — only interpersonal attitudes. Our results suggest caution about the widespread assumption that reducing affective polarization would meaningfully bolster democratic norms or accountability.”

    Broockman and his co-authors measured the effect of reducing affective polarization on five domains: “electoral accountability, adopting one’s party’s policy positions, support for legislative bipartisanship, support for democratic norms, and perceptions of objective conditions.”

    “Our results,” they write, “run contrary to the literature’s widespread speculation: in these political domains, our estimates of the causal effects of reducing affective polarization are consistently null.”

    In an email, Westwood argued that the whole endeavor “to fix anti-democratic attitudes by changing levels of partisan animosity sounds promising, but it is like trying to heal a broken bone in a gangrenous leg when the real problem is the car accident that caused both injuries in the first place.”

    Westwood’s point is well-taken. In a country marked by battles over sex, race, religion, gender, regional disparities in economic growth, traditionalist-vs-postmaterialist values and, broadly, inequality, it is difficult to see how relatively short, survey-based experiments could produce a significant, long-term dent in partisan hostility.

    Jan G. Voelkel, a sociologist at Stanford, and eight of his colleagues report similar results in their October 2022 article “Interventions Reducing Affective Polarization Do Not Necessarily Improve Anti-democratic Attitudes.” “Scholars and practitioners alike,” they write, “have invested great effort in developing depolarization interventions that reduce affective polarization. Critically, however, it remains unclear whether these interventions reduce anti-democratic attitudes, or only change sentiments toward outpartisans.”

    Why?

      Because much prior work has focused on treating affective polarization itself, and assumed that these interventions would in turn improve downstream outcomes that pose consequential threats to democracy. Although this assumption may seem reasonable, there is little evidence evaluating its implications for the benefits of depolarization interventions.

    In “Megastudy Identifying Successful Interventions to Strengthen Americans’ Democratic Attitudes,” a separate analysis of 32,059 American voters “testing 25 interventions designed to reduce anti-democratic attitudes and partisan animosity,” however, Voelkel and many of his co-authors — Michael N. Stagnaro, James Chu, Sophia Pink, Joseph S. Mernyk, Chrystal Redekopp, Matthew Cashman, James N. Druckman, David G. Rand and Robb Willer — significantly amended their earlier findings.

    In an email, Willer explained what was going on:

      One of the key findings of this new study is that we found some overlap between the interventions that reduced affective polarization and the interventions that reduced one specific anti-democratic attitude: support for undemocratic candidates. Specifically, we found that several of the interventions that were most effective in reducing American partisans’ dislike of rival partisans also made them more likely to say that they would not vote for a candidate from their party who engaged in one of several anti-democratic actions, such as not acknowledging the results of a lost election or removing polling stations from areas that benefit the rival party.

    Voelkel and his co-authors found that two interventions were the most effective.

    The first is known as the “Braley intervention,” for Alia Braley, a political scientist at Berkeley and the lead author of “The Subversion Dilemma: Why Voters Who Cherish Democracy Participate in Democratic Backsliding.” In the Braley intervention, participants are “asked what people from the other party believe when it comes to actions that undermine how democracy works (e.g., using violence to block laws, reducing the number of polling stations to help the other party, or not accepting the results of elections if they lose).” They are then given “the correct answer,” and “the answers make clear the other party does not support actions that undermine democracy.”

    The second “top-performing intervention” was to give participants “a video showing vivid imagery of societal instability and violence following democratic collapse in several countries, before concluding with imagery of the Jan. 6 U.S. Capitol attack.”

    “To our knowledge,” Willer wrote in his email, “this is the first evidence that the same stimuli could both reduce affective polarization and improve some aspect of Americans’ democratic attitudes, and it suggests these two factors may be causally linked, more than prior work — including our own — would suggest.”

    Kalla disputed the conclusions Willer drew from the megastudy:

      The most successful interventions in the megastudy for reducing anti-democratic views were interventions that directly targeted those anti-democratic views. For example, Braley et al.’s successful intervention was able to reduce anti-democratic views by correcting misperceptions about the other party’s willingness to subvert democracy.

    This intervention, Kalla continued,

      was not about affective polarization. What this suggests is that for practitioners interested in reducing anti-democratic attitudes, they should use interventions that directly speak to and target those anti-democratic views. As our work finds and Voelkel et al. replicates, obliquely attempting to reduce anti-democratic views through the causal pathway of affective polarization does not appear to be a successful strategy.

    I sent Kalla’s critique to Willer, who replied:

      I agree with Josh’s point that the most effective interventions for reducing support for undemocratic practices and candidates were interventions that were pretty clearly crafted with the primary goal in mind of targeting democratic attitudes. And while we find some relationships here that suggest there is a path to reducing support for undemocratic candidates via reducing affective polarization, the larger point that most interventions reducing affective polarization do not affect anti-democratic attitudes still stands, and our evidence continues to contradict the widespread popular assumption that affective polarization and anti-democratic attitudes are closely linked. We continue to find evidence in this newest study against that idea.

    One scholar, Herbert P. Kitschelt, a political scientist at Duke, contended that too much of the debate over affective polarization and democratic backsliding has been restricted to the analysis of competing psychological pressures, when in fact the scope is much larger. “The United States,” Kitschelt wrote in an email,

      has experienced a “black swan” confluence, interaction and mutual reinforcement of general factors that affect all advanced knowledge societies with specific historical and institutional factors unique to the U.S. that have created a poisonous concoction threatening U.S. democracy more so than that of any other Western society. Taken together, these conditions have created the scenario in which affective polarization thrives.

    Like most of the developed world, the United States is undergoing three disruptive transformations, compounded by three additional historical factors specific to the United States, Kitschelt suggests. These transformations, he wrote, are:

    • “The postindustrial change of the occupational structure expanding higher education and the income and status educational dividend, together with a transformation of gender and family relations, dismantling the paternalist family and improving the bargaining power of women, making less educated people — and especially males — the more likely socio-economic and cultural losers of the process.”

    • “The expansion of education goes together with a secularization of society that has undercut the ideological foundations of paternalism, but created fierce resistance in certain quarters.”

    • “The sociocultural and economic divisions furthermore correlate with residential patterns in which the growing higher educated, younger, secular and more gender-egalitarian share of the population lives in metropolitan and suburban areas, while the declining, less educated, older, more religious and more paternalist share of the population lives in exurbia or the countryside.”

    The three factors unique to this country, in his view, are:

    • “The legacy of enslavement and racial oppression in the United States in which — following W.E.B. DuBois — the white lower class of less skilled laborers derived a ‘quasi-wage’ satisfaction from racist subordination of the minority, the satisfaction of enjoying a higher rank in society than African Americans.”

    • “The vibrancy of evangelical ‘born again’ Christianity, sharply separated from the old European moderate, cerebral mainline Protestantism. The former attracts support over-proportionally among less educated people, and strictly segregates churches by race, thereby making it possible to convert white Evangelical churches into platforms of white racism. They have become political transmission belts of right-wing populism in the United States, with 80 percent of those whites who consider themselves ‘born again’ voting for the Trump presidential candidacy.”

    • “The institutional particularities of the U.S. voting system that tends to divide populations into two rival parties, the first-past-the-post electoral system for the U.S. legislature and the directly elected presidency. While received wisdom has claimed that it moderates divisions, under conditions of mutually reinforcing economic, social, and cultural divides, it is likely to have the opposite effect. The most important additional upshot of this system is the overrepresentation of the countryside (i.e., the areas where the social, economic, and cultural losers of knowledge society tend to be located) in the legislative process and presidential elections/Electoral College.”

    Kitschelt argues that in order to understand affective polarization it is necessary to go “beyond the myopic and US-centric narrow vision field of American political psychologists.” The incentives “for politicians to prime this polarization and stoke the divides, including fanning the flames of affective polarization, can be understood only against the backdrop of these underlying socio-economic and cultural legacies and processes.”

    Kitschelt is not alone in this view. He pointed to a 2020 book, “American Affective Polarization in Comparative Perspective,” by Noam Gidron, James Adams and Will Horne, political scientists at Harvard, the University of California-Davis and Georgia State University, in which they make a case that

      Americans’ dislike of partisan opponents has increased more rapidly than in most other Western publics. We show that affective polarization is more intense when unemployment and inequality are high, when political elites clash over cultural issues such as immigration and national identity and in countries with majoritarian electoral institutions.

    Writing just before the 2020 election, Gidron, Adams and Horne point out that the

      issue of cultural disagreements appears highly pertinent in light of the ongoing nationwide protests in support of racial justice and the Black Lives Matter movement which has sparked a wider cultural debate over questions relating to race, police funding and broader questions over interpretations of America’s history. In a July 4th speech delivered at Mt. Rushmore, President Trump starkly framed these types of “culture war” debates as a defining political and social divide in America, asserting “our nation is witnessing a merciless campaign to wipe out our history, defame our heroes, erase our values and indoctrinate our children.”

    The study of affective polarization sheds light on how vicious American politics has become, and on how this viciousness has enabled Trump and those Republicans who have followed his lead, while hurting Democrats, whose policy and legislative initiatives have been obstructed as much as they have succeeded.

    Richard Pildes, a professor of constitutional law at N.Y.U., addressed this point when he delivered the following remarks from his paper “Political Fragmentation in Democracies of the West” in 2021 at a legal colloquium in New York:

      There is little question that recent decades have seen a dramatic decline in the effectiveness of government, whether measured in the number of important bills Congress is able to enact, the proportion of all issues people identify as most important that Congress manages to address, or the number of enacted bills that update old policies enacted many decades earlier. Social scientists now write books with titles like Can America Govern Itself? Longitudinal data confirm the obvious, which is that the more polarized Congress is, the less it enacts significant legislation; in the ten most polarized congressional terms, a bit more than 10.6 significant laws were enacted, while in the ten least polarized terms, that number goes up 60 percent, to around 16 significant enactments per term. The inability of democratic governments to deliver on the issues their populations care most about poses serious risks.

    What are the chances of reversing this trend?

  • Resistance to Misinformation Is Weakening on Twitter, a Report Found

    Concerns about misinformation on Twitter have flared in the days since Elon Musk’s takeover on Oct. 27, pushing away advertisers, rattling researchers and increasing fears that conspiracy theories and false narratives could pollute the political discourse on the platform ahead of the midterm elections.

    Researchers at the Fletcher School at Tufts University said in a report that “early signs show the platform is heading in the wrong direction under his leadership — at a particularly inconvenient time for American democracy.”

    The researchers said they had tracked narratives about civil war, election fraud, citizen policing of voting, and allegations of pedophilia and grooming on Twitter from July through October. They said they had found that the discussion reflected a commitment to combating misinformation, hate speech and toxic ideas.

    “Post-Musk takeover, the quality of the conversation has decayed,” as more extremists and misinformation peddlers tested the platform’s boundaries, the researchers wrote.

    Before Mr. Musk took control of Twitter, posts pushing back against misinformation, hate and other toxic speech were usually many times greater than the original false or misleading posts, the Tufts researchers discovered.

    Conspiracy theories focused on unfounded allegations of pedophilia or “grooming,” which advance an anti-L.G.B.T.Q. message, have encountered less resistance from a Musk-led Twitter, the Tufts report found. Earlier spikes in the topic were accompanied by strong condemnation; after Oct. 28, researchers wrote, “the conversation deteriorated quickly” as users tested Twitter moderators by repeatedly writing “GROOMER,” in an echo of a coordinated campaign to spread antisemitic content as the platform adjusted to Mr. Musk.

    On Monday, with hours to go before the vote, Mr. Musk tweeted out a link to Twitter’s rules, which he said “will evolve over time.” Watchdog groups quickly noticed that the page did not explicitly address misinformation, although it did prohibit users from using the platform to manipulate or interfere in elections, employ misleading and deceptive identities or share harmful synthetic or manipulated media. A separate page about misinformation in Twitter’s “Help Center” section remained live.

    Fears about ads appearing in proximity to misinformation and other problematic posts have led General Mills, United Airlines and several other large companies to pause their spending on Twitter in recent days. Content moderation has sparked heated exchanges on Madison Avenue with and about Mr. Musk.

  • Survey Looks at Acceptance of Political Violence in U.S.

    One in five adults in the United States would be willing to condone acts of political violence, a new national survey commissioned by public health experts found, revelations that they say capture the escalation in extremism that was on display during the Jan. 6 attack on the Capitol.

    The online survey of more than 8,600 adults in the United States was conducted from mid-May to early June by the research firm Ipsos on behalf of the Violence Prevention Research Program at the University of California, Davis, which released the results on Tuesday.

    The group that said they would be willing to condone such violence amounted to 20.5 percent of those surveyed, with the majority of that group answering that “in general” the use of force was at least “sometimes justified” — the remaining 3 percent answered that such violence was “usually” or “always” justified.

    About 12 percent of survey respondents answered that they would be at least “somewhat willing” to resort to violence themselves to threaten or intimidate a person.

    And nearly 12 percent of respondents also thought it was at least “sometimes justified” to use violence if it meant returning Donald J. Trump to the presidency.

  • Meta Will Give Researchers More Information on Political Ad Targeting

    Meta, which owns Facebook and Instagram, said that it planned to give outside researchers more detailed information on how political ads are targeted across its platform, providing insight into the ways that politicians, campaign operatives and political strategists buy and use ads ahead of the midterm elections.

    Starting on Monday, academics and researchers who are registered with an initiative called the Facebook Open Research and Transparency project will be allowed to see data on how each political or social ad was used to target people. The information includes which interest categories — such as “people who like dogs” or “people who enjoy the outdoors” — were chosen to aim an ad at someone.

    In addition, Meta said it planned to include summaries of targeting information for some of its ads in its publicly viewable Ad Library starting in July. The company created the Ad Library in 2019 so that journalists, academics and others could obtain information and help safeguard elections against the misuse of digital advertising.

    While Meta has given outsiders some access to how its political ads were used in the past, it has restricted the amount of information that could be seen, citing privacy reasons. Critics have claimed that the company’s system has been flawed and sometimes buggy, and have frequently asked for more data.

    That has led to conflicts. Meta previously clashed with a group of New York University academics who tried ingesting large amounts of self-reported data on Facebook users to learn more about the platform. The company cut off access to the group last year, citing violations of its platform rules.

    The new data that is being added to the Facebook Open Research and Transparency project and the Ad Library is a way to share information on political ad targeting while trying to keep data on its users private, the company said.

    “By making advertiser targeting criteria available for analysis and reporting on ads run about social issues, elections and politics, we hope to help people better understand the practices used to reach potential voters on our technologies,” the company said in a statement.

    With the new data, for example, researchers browsing the Ad Library could see that over the course of a month, a Facebook page ran 2,000 political ads and that 40 percent of the ad budget was targeted to “people who live in Pennsylvania” or “people who are interested in politics.”

    Meta said it had been bound by privacy rules and regulations on what types of data it could share with outsiders. In an interview, Jeff King, a vice president in Meta’s business integrity unit, said the company had hired thousands of workers over the past few years to review those privacy issues.

    “Every single thing we release goes through a privacy review now,” he said. “We want to make sure we give people the right amount of data, but still remain privacy conscious while we do it.”

    The new data on political ads will cover the period from August 2020, three months before the last U.S. presidential election, to the present day.

  • What Happened When Facebook Employees Warned About Election Misinformation

    Company documents show that the social network’s employees repeatedly raised red flags about the spread of misinformation and conspiracies before and after the contested November vote.

    Sixteen months before last November’s presidential election, a researcher at Facebook described an alarming development. She was getting content about the conspiracy theory QAnon within a week of opening an experimental account, she wrote in an internal report.

    On Nov. 5, two days after the election, another Facebook employee posted a message alerting colleagues that comments with “combustible election misinformation” were visible below many posts.

    Four days after that, a company data scientist wrote in a note to his co-workers that 10 percent of all U.S. views of political material — a startlingly high figure — were of posts that alleged the vote was fraudulent.

    In each case, Facebook’s employees sounded an alarm about misinformation and inflammatory content on the platform and urged action — but the company failed or struggled to address the issues. The internal dispatches were among a set of Facebook documents obtained by The New York Times that give new insight into what happened inside the social network before and after the November election, when the company was caught flat-footed as users weaponized its platform to spread lies about the vote.