More stories

  • in

    Facebook, Preparing for Chauvin Verdict, Will Limit Posts That Might Incite Violence

    Facebook on Monday said it planned to limit posts that contain misinformation and hate speech related to the trial of Derek Chauvin, the former Minneapolis police officer charged with the murder of George Floyd, to keep them from spilling over into real-world harm.

    As closing arguments began in the trial and Minneapolis braced for a verdict, Facebook said it would identify and remove posts on the social network that urged people to bring arms to the city. It also said it would protect members of Mr. Floyd’s family from harassment and take down content that praised, celebrated or mocked his death.

    “We know this trial has been painful for many people,” Monika Bickert, Facebook’s vice president of content policy, wrote in a blog post. “We want to strike the right balance between allowing people to speak about the trial and what the verdict means, while still doing our part to protect everyone’s safety.”

    Facebook, which has long positioned itself as a site for free speech, has become increasingly proactive in policing content that might lead to real-world violence. The Silicon Valley company has been under fire for years over the way it has handled sensitive news events. That includes last year’s presidential election, when online misinformation about voter fraud galvanized supporters of former President Donald J. Trump. Believing the election to have been stolen from Mr. Trump, some supporters stormed the Capitol building on Jan. 6.

    Leading up to the election, Facebook took steps to fight misinformation, foreign interference and voter suppression. The company displayed warnings on more than 150 million posts with election misinformation, removed more than 120,000 posts for violating its voter interference policies and took down 30 networks that posted false messages about the election.

    But critics said Facebook and other social media platforms did not do enough. After the storming of the Capitol, the social network stopped Mr. Trump from being able to post on the site.
    The company’s independent oversight board is now debating whether the former president will be allowed back on Facebook and has said it plans to issue its decision “in the coming weeks,” without giving a definite date.

    The death of Mr. Floyd, who was Black, led to a wave of Black Lives Matter protests across the nation last year. Mr. Chauvin, a former Minneapolis police officer who is white, faces charges of manslaughter, second-degree murder and third-degree murder for Mr. Floyd’s death. The trial began in late March. Mr. Chauvin did not testify.

    Facebook said on Monday that it had determined that Minneapolis was, at least temporarily, “a high-risk location.” It said it would remove pages, groups, events and Instagram accounts that violated its violence and incitement policy; take down attacks against Mr. Chauvin and Mr. Floyd; and label misinformation and graphic content as sensitive.

    The company did not have any further comment.

    “As the trial comes to a close, we will continue doing our part to help people safely connect and share what they are experiencing,” Ms. Bickert said in the blog post.

  • in

    I Used to Think the Remedy for Bad Speech Was More Speech. Not Anymore.

    I used to believe that the remedy for bad speech is more speech. Now that seems archaic. Just as the founders never envisioned how the right of a well-regulated militia to own slow-loading muskets could apply to mass murderers with bullet-spewing military-style semiautomatic rifles, they could not have foreseen speech so twisted to malevolent intent as it is now.

    Cyber-libertarianism, the ethos of the internet with roots in 18th-century debate about the free market of ideas, has failed us miserably. Well after the pandemic is over, the infodemic will rage on — so long as it pays to lie, distort and misinform.

    Just recently, we saw the malignancies of our premier freedoms on display in the mass shooting in Boulder, Colo. At the center of the horror was a deeply disturbed man with a gun created for war, with the capacity to kill large numbers of humans, quickly. Within hours of the slaughter at the supermarket, a Facebook account with about 60,000 followers wrote that the shooting was fake — a so-called false flag, meant to cast blame on the wrong person.

    So it goes. Toxic misinformation, like AR-15-style weapons in the hands of men bent on murder, is just something we’re supposed to live with in a free society. But there are three things we could do now to clean up the river of falsities poisoning our democracy.

    First, teach your parents well. Facebook users over the age of 65 are far more likely to post articles from fake news sites than people under the age of 30, according to multiple studies.

    Certainly, the “I don’t know it for a fact, I just know it’s true” sentiment, as the Bill Maher segment has it, is not limited to seniors. But too many older people lack the skills to detect a viral falsity.

    That’s where the kids come in. March 18 was “MisinfoDay” in many Washington State high schools. On that day, students were taught how to spot a lie — training they could share with their parents and grandparents.

    Media literacy classes have been around for a while.
    No one should graduate from high school without being equipped with the tools to recognize bogus information. It’s like elementary civics. By extension, we should encourage the informed young to pass this on to their misinformed elders.

    Second, sue. What finally made the misinformation merchants on television and the web close the spigot on the Big Lie about the election were lawsuits seeking billions. Dominion Voting Systems and Smartmatic, two election technology companies, sued Fox News and others, claiming defamation.

    “Lies have consequences,” Dominion’s lawyers wrote in their complaint. “Fox sold a false story of election fraud in order to serve its own commercial purposes, severely injuring Dominion in the process.”

    In response to the Smartmatic suit, Fox said, “This lawsuit strikes at the heart of the news media’s First Amendment mission to inform on matters of public concern.” No, it doesn’t. There is no “mission” to misinform.

    The fraudsters didn’t even pretend they weren’t peddling lies. Sidney Powell, the lawyer who was one of the loudest promoters of the falsehood that Donald Trump won the election, was named in a Dominion lawsuit. “No reasonable person would conclude that the statements were truly statements of fact,” her lawyers wrote, absurdly, of her deception.

    Tell that to the majority of Republican voters who said they believed the election was stolen. They didn’t see the wink when Powell went on Fox and Newsmax to claim a massive voter fraud scheme.

    Dominion should sue Trump, the man at the top of the falsity food chain. The ex-president has shown he will repeat a lie over and over until it hurts him financially. That’s how the system works. And the bar for a successful libel suit, it should be noted, is very high.

    Finally, we need to disincentivize social media giants from spreading misinformation. This means striking at the algorithms that drive traffic — the lines of code that push people down rabbit holes of unreality.

    The Capitol Hill riot on Jan. 6 might not have happened without the platforms that spread false information, while fattening the fortunes of social media giants.

    “The last few years have proven that the more outrageous and extremist content social media platforms promote, the more engagement and advertising dollars they rake in,” said Representative Frank Pallone Jr., chairman of the House committee that recently questioned big tech chief executives.

    Taking away their legal shield — Section 230 of the Communications Decency Act — is the strongest threat out there. Sure, removing social media’s immunity from the untruthful things said on their platforms could mean the end of the internet as we know it. True. But that’s not necessarily a bad thing.

    So far, the threat has been mostly idle — all talk. At the least, lawmakers could more effectively use this leverage to force social media giants to redo their recommendation algorithms, making bogus information less likely to spread. When YouTube took such a step, promotion of conspiracy theories decreased significantly, according to researchers at the University of California, Berkeley, who published their findings in March 2020.

    Republicans may resist most of the above. Lies help them stay in power, and a misinformed public is good for their legislative agenda. They’re currently pushing a wave of voter suppression laws to fix a problem that doesn’t exist.

    I still believe the truth may set us free. But it has little chance of surviving amid the babble of orchestrated mendacity.

    Timothy Egan (@nytegan) is a contributing opinion writer who covers the environment, the American West and politics. He is a winner of the National Book Award and author, most recently, of “A Pilgrimage to Eternity.”

  • in

    Zuckerberg, Dorsey and Pichai testify about disinformation.

    The chief executives of Google, Facebook and Twitter are testifying before the House on Thursday about how disinformation spreads across their platforms, an issue that the tech companies were scrutinized for during the presidential election and after the Jan. 6 riot at the Capitol.

    The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

    The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

    Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

    Republicans sent the executives letters this month asking them about the decisions to remove conservative personalities and stories from their platforms, including an October article in The New York Post about President Biden’s son Hunter.

    Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

    Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

    “By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

    The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.

  • in

    How Anti-Asian Activity Online Set the Stage for Real-World Violence

    On platforms such as Telegram and 4chan, racist memes and posts about Asian-Americans have created fear and dehumanization.

    In January, a new group popped up on the messaging app Telegram, named after an Asian slur. Hundreds of people quickly joined. Many members soon began posting caricatures of Asians with exaggerated facial features, memes of Asian people eating dog meat and images of American soldiers inflicting violence during the Vietnam War.

    This week, after a gunman killed eight people — including six women of Asian descent — at massage parlors in and near Atlanta, the Telegram channel linked to a poll that asked, “Appalled by the recent attacks on Asians?” The top answer, with 84 percent of the vote, was that the violence was “justified retaliation for Covid.”

    The Telegram group was a sign of how anti-Asian sentiment has flared up in corners of the internet, amplifying racist and xenophobic tropes just as attacks against Asian-Americans have surged. On messaging apps like Telegram and on internet forums like 4chan, anti-Asian groups and discussion threads have been increasingly active since November, especially on far-right message boards such as The Donald, researchers said.

    The activity follows a rise in anti-Asian misinformation last spring after the coronavirus, which first emerged in China, began spreading around the world. On Facebook and Twitter, people blamed the pandemic on China, with users posting hashtags such as #gobacktochina and #makethecommiechinesepay. Those hashtags spiked when former President Donald J. Trump last year called Covid-19 the “Chinese virus” and “Kung Flu.”

    While some of the online activity tailed off ahead of the November election, its re-emergence has helped lay the groundwork for real-world actions, researchers said.
    The fatal shootings in Atlanta this week, which have led to an outcry over treatment of Asian-Americans even as the suspect said he was trying to cure a “sexual addiction,” were preceded by a swell of racially motivated attacks against Asian-Americans in places like New York and the San Francisco Bay Area, according to the advocacy group Stop AAPI Hate.

    “Surges in anti-Asian rhetoric online means increased risk of real-world events targeting that group of people,” said Alex Goldenberg, an analyst at the Network Contagion Research Institute at Rutgers University, which tracks misinformation and extremism online.

    He added that the anti-China coronavirus misinformation — including the false narrative that the Chinese government purposely created Covid-19 as a bioweapon — had created an atmosphere of fear and invective.

    Anti-Asian speech online has typically not been as overt as that found in anti-Semitic or anti-Black groups, memes and posts, researchers said. On Facebook and Twitter, posts expressing anti-Asian sentiments have often been woven into conspiracy theory groups such as QAnon and into white nationalist and pro-Trump enclaves. Mr. Goldenberg said forms of hatred against Black people and Jews have deep roots in extremism in the United States and that the anti-Asian memes and tropes have been more “opportunistically weaponized.”

    But that does not make the anti-Asian hate speech online less insidious. Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation, said the misinformation and racist speech have led to a “dehumanization” of certain groups of people and to an increased risk of violence.

    Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus.
    That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets.

    Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, usage of the term “Chinese virus” jumped 650 percent on Twitter; a day later there was an 800 percent increase in its usage in conservative news articles, the study found.

    Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment to an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

    In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese.

    “There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

    Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment.

    Misinformation linking the coronavirus to anti-Asian beliefs also rose last year.
    Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm.

    In one example, a Fox News article from April that went viral baselessly said that the coronavirus was created in a lab in the Chinese city of Wuhan and intentionally released. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.

    By the middle of last year, the misinformation had started subsiding as election-related commentary increased. The anti-Asian sentiment ended up migrating to platforms like 4chan and Telegram, researchers said.

    But it still occasionally flared up, such as when Dr. Li-Meng Yan, a researcher from Hong Kong, made unproven assertions last fall that the coronavirus was a bioweapon engineered by China. In the United States, Dr. Yan became a right-wing media sensation. Her appearance on Tucker Carlson’s Fox News show in September has racked up at least 8.8 million views online.

    In November, anti-Asian speech surged anew. That was when conspiracies about a “new world order” related to President Biden’s election victory began circulating, said researchers from the Network Contagion Research Institute. Some posts that went viral painted Mr. Biden as a puppet of the Chinese Communist Party.

    In December, slurs about Asians and the term “Kung Flu” rose by 65 percent on websites and apps like Telegram, 4chan and The Donald, compared with the monthly average mentions from the previous 11 months on the same platforms, according to the Network Contagion Research Institute.
    The activity remained high in January and last month.

    During this second surge, calls for violence against Asian-Americans became commonplace.

    “Filipinos are not Asians because Asians are smart,” read a post in a Telegram channel that depicted a dog holding a gun to its head.

    After the shootings in Atlanta, a doctored screenshot of what looked like a Facebook post from the suspect circulated on Facebook and Twitter this week. The post featured a miasma of conspiracies about China engaging in a Covid-19 cover-up and wild theories about how it was planning to “secure global domination for the 21st century.”

    Facebook and Twitter eventually ruled that the screenshot was fake and blocked it. But by then, the post had been shared and liked hundreds of times on Twitter and more than 4,000 times on Facebook.

  • in

    Facebook Ends Ban on Political Advertising

    The social network had prohibited political ads on its site indefinitely after the November election. Such ads have been criticized for spreading misinformation.

    March 3, 2021

    SAN FRANCISCO — Facebook said on Wednesday that it planned to lift its ban on political advertising across its network, resuming a form of digital promotion that has been criticized for spreading misinformation and falsehoods and inflaming voters.

    The social network said it would allow advertisers to buy new ads about “social issues, elections or politics” beginning on Thursday, according to a copy of an email sent to political advertisers and viewed by The New York Times. Those advertisers must complete a series of identity checks before being authorized to place the ads, the company said.

    “We put this temporary ban in place after the November 2020 election to avoid confusion or abuse following Election Day,” Facebook said in a blog post. “We’ve heard a lot of feedback about this and learned more about political and electoral ads during this election cycle. As a result, we plan to use the coming months to take a closer look at how these ads work on our service to see where further changes may be merited.”

    Political advertising on Facebook has long faced questions.
    Mark Zuckerberg, Facebook’s chief executive, has said he wished to maintain a largely hands-off stance toward speech on the site — including political ads — unless it posed an immediate harm to the public or individuals, saying that he “does not want to be the arbiter of truth.”

    But after the 2016 presidential election, the company and intelligence officials discovered that Russians had used Facebook ads to sow discontent among Americans. Former President Donald J. Trump also used Facebook’s political ads to amplify claims about an “invasion” on the Mexican border in 2019, among other incidents.

    Facebook had banned political ads late last year as a way to choke off misinformation and threats of violence around the November presidential election. In September, the company said it planned to forbid new political ads for the week before Election Day and would act swiftly against posts that tried to dissuade people from voting. Then in October, Facebook expanded that action by declaring it would prohibit all political and issue-based advertising after the polls closed on Nov. 3 for an undetermined length of time.

    The company eventually clamped down on groups and pages that spread certain kinds of misinformation, such as discouraging people from voting or registering to vote. It has spent billions of dollars to root out foreign influence campaigns and other types of meddling from malicious state agencies and other bad actors.

    In December, Facebook lifted the ban to allow some advertisers to run political issue and candidacy ads in Georgia for the January runoff Senate election in the state. But the ban otherwise remained in effect for the remaining 49 states.

    Attitudes around how political advertising should be treated across Facebook are decidedly mixed.
    Politicians who are not well known often can raise their profile and awareness of their campaigns by using Facebook.

    “Political ads are not bad things in and of themselves,” said Siva Vaidhyanathan, a media studies professor and the author of a book studying Facebook’s effects on democracy. “They perform an essential service, in the act of directly representing the candidate’s concerns or positions.”

    He added, “When you ban all campaign ads on the most accessible and affordable platform out there, you tilt the balance toward the candidates who can afford radio and television.”

    Representative Alexandria Ocasio-Cortez, Democrat of New York, has also said that political advertising on Facebook can be a crucial component of Democratic digital campaign strategies.

    Some political ad buyers applauded the lifting of the ads ban.

    “The ad ban was something that Facebook did to appease the public for the misinformation that spread across the platform,” said Eileen Pollet, a digital campaign strategist and founder of Ravenna Strategies. “But it really ended up hurting good actors while bad actors had total free rein. And now, especially since the election is over, the ban had really been hurting nonprofits and local organizations.”

    Facebook has long sought to thread the needle between forceful moderation of its policies and a lighter touch. For years, Mr. Zuckerberg defended politicians’ right to say what they wanted on Facebook, but that changed last year amid rising alarm over potential violence around the November election.

    In January, Facebook barred Mr. Trump from using his account and posting on the platform after he took to social media to delegitimize the election results and incited a violent uprising among his supporters, who stormed the U.S. Capitol.

    Facebook said Mr. Trump’s suspension was “indefinite.” The decision is now under review by the Facebook Oversight Board, a third-party entity created by the company and composed of journalists, academics and others that adjudicates some of the company’s thorny content policy enforcement decisions. A decision is expected to come within the next few months.

    On Thursday, political advertisers on Facebook will be able to submit new ads or turn on existing political ads that have already been approved, the company said. Each ad will appear with a small disclaimer, stating that it has been “paid for by” a political organization. For those buying new ads, Facebook said it could take up to a week to clear the identity authorization and advertising review process.

  • in

    Why Is Big Tech Policing Free Speech? Because the Government Isn’t

    Deplatforming President Trump showed that the First Amendment is broken — but not in the way his supporters think.

    Jan. 26, 2021

    In the months leading up to the November election, the social media platform Parler attracted millions of new users by promising something competitors, increasingly, did not: unfettered free speech. “If you can say it on the streets of New York,” promised the company’s chief executive, John Matze, in a June CNBC interview, “you can say it on Parler.”

    The giants of social media — Facebook, Twitter, YouTube, Instagram — had more stringent rules. And while they still amplified huge amounts of far-right content, they had started using warning labels and deletions to clamp down on misinformation about Covid-19 and false claims of electoral fraud, including in posts by President Trump. Conservative figures, including Senator Ted Cruz, Eric Trump and Sean Hannity, grew increasingly critical of the sites and beckoned followers to join them on Parler, whose investors include the right-wing activist and heiress Rebekah Mercer. The format was like Twitter’s, but with only two clear rules: no criminal activity and no spam or bots. On Parler, you could say what you wanted without being, as conservatives complained, “silenced.”

    After the election, as Trump sought to overturn his defeat with a barrage of false claims, Matze made a classic First Amendment argument for letting the disinformation stand: More speech is better. Let the marketplace of ideas run without interference.
    “If you don’t censor, if you don’t — you just let him do what he wants, then the public can judge for themselves,” Matze said of Trump’s Twitter account on the New York Times podcast “Sway.” “Just sit there and say: ‘Hey, that’s what he said. What do you guys think?’”

    Matze was speaking to the host of “Sway,” Kara Swisher, on Jan. 7 — the day after Trump told supporters to march on the U.S. Capitol and fight congressional certification of the Electoral College vote. In the chaos that followed Trump’s speech, the American marketplace of ideas clearly failed. Protecting democracy, for Trump loyalists, had become a cry to subvert and even destroy it. And while Americans’ freedoms of speech and the press were vital to exposing this assault, they were also among its causes. Right-wing media helped seed destabilizing lies; elected officials helped them grow; and the democratizing power of social media spread them, steadily, from one node to the next.

    Social media sites effectively function as the public square where people debate the issues of the day. But the platforms are actually more like privately owned malls: They make and enforce rules to keep their spaces tolerable, and unlike the government, they’re not obligated to provide all the freedom of speech offered by the First Amendment. Like the bouncers at a bar, they are free to boot anyone or anything they consider disruptive.

    In the days after Jan. 6, they swiftly cracked down on whole channels and accounts associated with the violence. Reddit removed the r/DonaldTrump subreddit. YouTube tightened its policy on posting videos that called the outcome of the election into doubt. TikTok took down posts with hashtags like #stormthecapitol. Facebook indefinitely suspended Trump’s account, and Twitter — which, like Facebook, had spent years making some exceptions to its rules for the president — took his account away permanently.

    Parler, true to its stated principles, did none of this.
But it had a weak point: It was dependent on other private companies to operate. In the days after the Capitol assault, Apple and Google removed Parler from their app stores. Then Amazon Web Services stopped hosting Parler, effectively cutting off its plumbing. Parler sued, but it had agreed, in its contract, not to host content that “may be harmful to others”; having promised the streets of New York, it was actually bound by the rules of a kindergarten playground. In a court filing, Amazon provided samples of about 100 posts it had notified Parler were in violation of its contract in the weeks before the Capitol assault. “Fry ’em up,” one said, with a list of targets that included Nancy Pelosi and Chuck Schumer. “We are coming for you and you will know it.” On Jan. 21, a judge denied Parler’s demand to reinstate Amazon’s services.

It’s unlikely the volume of incendiary content on Parler could rival that of Twitter or Facebook, where groups had openly planned for Jan. 6. But Parler is the one that went dark. A platform built to challenge the oligopoly of its giant rivals was deplatformed by other giants, in a demonstration of how easily they, too, could block speech at will.

Over all, the deplatforming after Jan. 6 had the feeling of an emergency response to a wave of lies nearly drowning our democracy. For years, many tech companies had invoked the American ethos of free speech while letting disinformation and incitement spread abroad, even when it led to terrible violence. Now they leapt to action as if, with America in trouble, American ideals no longer applied. Parler eventually turned to overseas web-hosting services to get back online.

“We couldn’t beat you in the war of ideas and discourse, so we’re pulling your mic” — that’s how Archon Fung, a professor at Harvard’s Kennedy School of Government, put it, in expressing ambivalence about the moves. 
It seemed curiously easier to take on Trump and his allies in the wake of Democrats’ victories in the Senate runoffs in Georgia, giving them control of both chambers of Congress along with the White House. (Press officers for Twitter and Facebook said no election outcome influenced the companies’ decision.) And in setting an example that might be applied to the speech of other groups — foreign dissidents, sex-worker activists, Black Lives Matter organizers — the deplatforming takes on an ominous cast.

Fadi Quran, a campaign director for the global human rights group Avaaz, told me he, too, found the precedent worrying. “Although the steps may have been necessary to protect American lives against violence,” he said, “they are a reminder of the power big tech has over our information infrastructure. This infrastructure should be governed by deliberative democratic processes.”

But what would those democratic processes be? Americans have a deep and abiding suspicion of letting the state regulate speech. At the moment, tech companies are filling the vacuum created by that fear. But do we really want to trust a handful of chief executives with policing spaces that have become essential parts of democratic discourse? We are uncomfortable with government doing it; we are uncomfortable with Silicon Valley doing it. But we are also uncomfortable with nobody doing it at all. This is a hard place to be — or, perhaps, two rocks and a hard place.

When Twitter banned Trump, he found a seemingly unlikely defender: Chancellor Angela Merkel of Germany, who criticized the decision as a “problematic” breach of the right to free speech. This wasn’t necessarily because Merkel considered the content of Trump’s speech defensible. 
The deplatforming troubled her because it came from a private company; instead, she said through a spokesman, the United States should have a law restricting online incitement, like the one Germany passed in 2017 to prevent the dissemination of hate speech and fake news stories.

Among democracies, the United States stands out for its faith that free speech is the right from which all other freedoms flow. European countries are more apt to fight destabilizing lies by balancing free speech with other rights. It’s an approach informed by the history of fascism and the memory of how propaganda, lies and the scapegoating of minorities can sweep authoritarian leaders to power. Many nations shield themselves from such anti-pluralistic ideas. In Canada, it’s a criminal offense to publicly incite hatred “against any identifiable group.” South Africa prosecutes people for uttering certain racial slurs. A number of countries in Europe treat Nazism as a unique evil, making it a crime to deny the Holocaust.

In the United States, laws like these surely wouldn’t survive Supreme Court review, given the current understanding of the First Amendment — an understanding that comes out of our country’s history and our own brushes with suppressing dissent. The First Amendment did not prevent the administration of John Adams from prosecuting more than a dozen newspaper editors for seditious libel or the Socialist and labor leader Eugene V. Debs from being convicted of sedition over a speech, before a peaceful crowd, opposing involvement in World War I. 
In 1951, the Supreme Court upheld the convictions of Communist Party leaders for “conspiring” to advocate the overthrow of the government, though the evidence showed only that they had met to discuss their ideological beliefs.

It wasn’t until the 1960s that the Supreme Court enduringly embraced the vision of the First Amendment expressed, decades earlier, in a dissent by Justice Oliver Wendell Holmes Jr.: “The ultimate good desired is better reached by free trade in ideas.” In Brandenburg v. Ohio, that meant protecting the speech of a Ku Klux Klan leader at a 1964 rally, setting a high bar for punishing inflammatory words. Brandenburg “wildly overprotects free speech from any logical standpoint,” the University of Chicago law professor Geoffrey R. Stone points out. “But the court learned from experience to guard against a worse evil: the government using its power to silence its enemies.”

This era’s concept of free speech still differed from today’s in one crucial way: The court was willing to press private entities to ensure they allowed different voices to be heard. As another University of Chicago law professor, Genevieve Lakier, wrote in a law-review article last year, a hallmark of the 1960s was the court’s “sensitivity to the threat that economic, social and political inequality posed” to public debate. As a result, the court sometimes required private property owners, like TV broadcasters, to grant access to speakers they wanted to keep out.

But the court shifted again, Lakier says, toward interpreting the First Amendment “as a grant of almost total freedom” for private owners to decide who could speak through their outlets. In 1974, it struck down a Florida law requiring newspapers that criticized the character of political candidates to offer them space to reply. 
Chief Justice Warren Burger, in his opinion for the majority, recognized that barriers to entry in the newspaper market meant this placed the power to shape public opinion “in few hands.” But in his view, there was little the government could do about it.

Traditionally, conservatives have favored that libertarian approach: Let owners decide how their property is used. That’s changing now that they find their speech running afoul of tech-company rules. “Listen to me, America, we were wiped out,” the right-wing podcaster Dan Bongino, an investor in Parler, said in a Fox News interview after Amazon pulled its services. “And to all the geniuses out there, too, saying this is a private company, it’s not a First Amendment fight — really, it’s not?” The law that prevents the government from censoring speech should still apply, he said, because “these companies are more powerful than a de facto government.” You needn’t sympathize with him to see the hit Parler took as the modern equivalent of, in Burger’s terms, disliking one newspaper and taking the trouble to start your own, only to find no one will sell you ink to print it.

One problem with private companies’ holding the ability to deplatform any speaker is that they’re in no way insulated from politics — from accusations of bias to advertiser boycotts to employee walkouts. 
Facebook is a business, driven by profit and with no legal obligation to explain its decisions the way a court or regulatory body would. Why, for example, hasn’t Facebook suspended the accounts of other leaders who have used the platform to spread lies and bolster their power, like the president of the Philippines, Rodrigo Duterte? A spokesman said suspending Trump was “a response to a specific situation based on risk” — but so is every decision, and the risks can be just as high overseas.

“It’s really media and public pressure that is the difference between Trump coming down and Duterte staying up,” says Evelyn Douek, a lecturer at Harvard Law School. “But the winds of public opinion are a terrible basis for free-speech decisions! Maybe it seems like it’s working right now. But in the longer run, how do you think unpopular dissidents and minorities will fare?”

Deplatforming works, at least in the short term. There are indications that in the weeks after the platforms cleaned house — with Twitter suspending not just Trump but some 70,000 accounts, including many QAnon influencers — conversations about election fraud decreased significantly across several sites. After Facebook reintroduced a scoring system to promote news sources based on its judgment of their quality, the list of top performers, usually filled by hyperpartisan sources, featured CNN, NPR and local news outlets.

But there’s no reason to think the healthier information climate will last. The very features that make social media so potent work both to the benefit and the detriment of democracy. YouTube, for instance, changed its recommendation algorithm in 2019, after researchers and reporters (including Kevin Roose at The New York Times) showed how it pushed some users toward radicalizing content. It’s also telling that, since the election, Facebook has stopped recommending civic groups for people to join. After Jan. 6, the researcher Aric Toler at Bellingcat surfaced a cheery video, automatically created by Facebook to promote its groups, which imposed the tagline “community means a lot” over images of a militia brandishing weapons and a photo of Robert Gieswein, who has since been charged in the assault on the Capitol.

“I’m afraid that the technology has upended the possibility of a well-functioning, responsible speech environment,” the Harvard law professor Jack Goldsmith says. “It used to be we had masses of speech in a reasonable range, and some extreme speech we could tolerate. Now we have a lot more extreme speech coming from lots of outlets and mouthpieces, and it’s more injurious and harder to regulate.”

For decades, tech companies mostly responded to such criticism with proud free-speech absolutism. But external pressures, and the absence of any other force to contain users, gradually dragged them into the expensive and burdensome role of policing their domains. Facebook, for one, now has legions of low-paid workers reviewing posts flagged as harmful, a task gruesome enough that the company has agreed to pay $52 million in mental-health compensation to settle a lawsuit by more than 10,000 moderators.

Perhaps because it’s so easy to question their motives, some executives have taken to begging for mercy. “We are facing something that feels impossible,” said Jack Dorsey, Twitter’s chief executive, while being grilled by Congress last year. And Facebook’s founder and chief executive, Mark Zuckerberg, has agreed with lawmakers that the company has too much power over speech. Two weeks after suspending Trump, Facebook said its new oversight board, an independent group of 20 international experts, would review the decision, with the power to make a binding ruling.

Zuckerberg and Dorsey have also suggested openness to government regulation that would hold platforms to external standards. 
That might include, for example, requiring rules for slowing the spread of disinformation from known offenders. European lawmakers, with their more skeptical free-speech tradition (and lack of allegiance to American tech companies), have proposed requiring platforms to show how their recommendations work and giving users more control over them, as has been done in the realm of privacy. Steps like these seem better suited to combating misinformation than eliminating, as is often suggested, the immunity platforms currently enjoy from lawsuits, which directly affects only a narrow range of cases, mostly involving defamation.

There is no consensus on a path forward, but there is precedent for some intervention. When radio and television radically altered the information landscape, Congress passed laws to foster competition, local control and public broadcasting. From the 1930s until the 1980s, anyone with a broadcast license had to operate in the “public interest” — and starting in 1949, that explicitly included exposing audiences to multiple points of view in policy debates. The court let the elected branches balance the rights of private ownership with the collective good of pluralism.

This model coincided with relatively high levels of trust in media and low levels of political polarization. That arrangement has been rare in American history. It’s hard to imagine a return to it. But it’s worth remembering that radio and TV also induced fear and concern, and our democracy adapted and thrived. The First Amendment of the era aided us. The guarantee of free speech is for democracy; it is worth little, in the end, apart from it. More

  • in

    How Facebook Incubated the Insurrection

Opinion
How Facebook Incubated the Insurrection
Right-wing influencers embraced extremist views and Facebook rewarded them.
Illustration by Yoshi Sodeoka
By Stuart A. Thompson and Charlie Warzel. Mr. Thompson is a writer and editor in Opinion. Mr. Warzel is Opinion’s writer-at-large.
Jan. 14, 2021

Dominick McGee didn’t enter the Capitol during the siege on Jan. 6. He was on the grounds when the mob of Donald Trump supporters broke past police barricades and began smashing windows. But he turned around, heading back to his hotel. Property destruction wasn’t part of his plan. Plus, his phone had died, ending his Facebook Live video midstream. He needed to find a charger. After all, Facebook was a big part of why he was in Washington in the first place.

Mr. McGee is 26, a soft-spoken college student and an Army veteran from Augusta, Ga. Look at his Facebook activity today, and you’ll find a stream of pro-Trump fanfare and conspiracy theories.

But for years, his feed was unremarkable — a place to post photos of family and friends, musings about love and motivational advice. More

  • in

    Uganda Blocks Facebook Ahead of Contentious Election

Uganda Blocks Facebook Ahead of Contentious Election
President Yoweri Museveni accused the company of “arrogance” after it removed fake accounts and pages linked to his re-election campaign.
President Yoweri Museveni of Uganda has 10 rivals in the election scheduled for Thursday, including the rapper-turned-lawmaker Bobi Wine, whose real name is Robert Kyagulanyi. Credit…Baz Ratner/Reuters
Jan. 13, 2021, updated 5:33 a.m. ET

NAIROBI, Kenya — President Yoweri Museveni of Uganda has blocked Facebook from operating in his country, just days after the social media company removed fake accounts linked to his government ahead of a hotly contested general election set to take place on Thursday.

In a televised address late on Tuesday night, Mr. Museveni accused Facebook of “arrogance” and said he had instructed his government to close the platform, along with other social media outlets, although Facebook was the only one he named.

“That social channel you are talking about, if it is going to operate in Uganda, it should be used equitably by everybody who has to use it,” Mr. Museveni said. “We cannot tolerate this arrogance of anybody coming to decide for us who is good and who is bad,” he added.

The ban on Facebook comes at the end of an election period that has been dogged by a crackdown on the political opposition, harassment of journalists and nationwide protests that have led to at least 54 deaths and hundreds of arrests, according to officials.

Mr. Museveni, 76, who is running for a sixth term in office, is facing 10 rivals, including the rapper-turned-lawmaker Bobi Wine, 38. Mr. Wine, whose real name is Robert Kyagulanyi, has been beaten, sprayed with tear gas and charged in court with allegedly flouting coronavirus rules while on the campaign trail. Last week, Mr. Wine filed a complaint with the International Criminal Court accusing Mr. Museveni and other top current and former security officials of sanctioning a wave of violence and human rights violations against citizens, political figures and human rights lawyers.

Facebook announced this week that it had taken down a network of accounts and pages in the East African nation that engaged in what it called “coordinated inauthentic behavior” aimed at manipulating public debate around the election. The company said the network was linked to the Government Citizens Interaction Center, an initiative that is part of Uganda’s Ministry of Information and Communications Technology and National Guidance.

In a statement, a Facebook representative said the network “used fake and duplicate accounts to manage pages, comment on other people’s content, impersonate users, re-share posts in groups to make them appear more popular than they were.”

Facebook’s investigation into the network began after research from the Atlantic Council’s Digital Forensic Research Lab showcased a network of social media accounts that had engaged in a campaign to criticize the opposition and promote Mr. Museveni and the governing party, the National Resistance Movement. After the research was published, Twitter also said it had shut down accounts linked to the election.

Hours before Mr. Museveni’s speech, social media users across Uganda confirmed restrictions on their online communications, with the digital rights group NetBlocks reporting that platforms including Facebook, WhatsApp, Instagram and Twitter had been affected. 
On Wednesday, MTN Uganda, the country’s largest telecommunication company, confirmed it had received a directive from the Uganda Communications Commission to “suspend access and use, direct or otherwise of all social media platforms and online messaging applications over the network until further notice.”

Felicia Anthonio, a campaigner with the digital rights nonprofit Access Now, said the authorities had blocked more than 100 virtual private networks, or VPNs, which could help users circumvent the censorship and safely browse the internet.

Uganda blocked the internet during the 2016 elections, and in 2018, it introduced a social media tax aimed at raising revenue and curbing what the government called online “gossip.” The move, which was criticized as a threat to freedom of expression, had a negative effect on internet use over all, with millions of Ugandans giving up internet services altogether.

In anticipation of another shutdown this week, a group of organizations that work to end internet cutoffs worldwide sent a letter to Mr. Museveni and the leaders of telecom companies in Uganda pleading with them to keep the internet and social media platforms accessible during the election.

Mr. Museveni did not heed their call. On Tuesday night, he said the decision to block Facebook was “unfortunate” but “unavoidable.”

“I am very sorry about the inconvenience,” he said, adding that he himself had been using the platform to interact with young voters. He has almost a million followers on Facebook and two million on Twitter.

Striking a defiant note, Mr. Museveni said that if Facebook was going to “take sides,” then it would not be allowed to operate in the country.

“Uganda is ours,” he said. More