More stories

  • Zuckerberg, Dorsey and Pichai testify about disinformation.

    The chief executives of Google, Facebook and Twitter are testifying at the House on Thursday about how disinformation spreads across their platforms, an issue that the tech companies were scrutinized for during the presidential election and after the Jan. 6 riot at the Capitol.

The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

The hearing will also be the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic. Republicans sent the executives letters this month asking them about the decisions to remove conservative personalities and stories from their platforms, including an October article in The New York Post about President Biden’s son Hunter.

Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.

  • How Anti-Asian Activity Online Set the Stage for Real-World Violence

    On platforms such as Telegram and 4chan, racist memes and posts about Asian-Americans have created fear and dehumanization.

In January, a new group popped up on the messaging app Telegram, named after an Asian slur. Hundreds of people quickly joined. Many members soon began posting caricatures of Asians with exaggerated facial features, memes of Asian people eating dog meat and images of American soldiers inflicting violence during the Vietnam War.

This week, after a gunman killed eight people — including six women of Asian descent — at massage parlors in and near Atlanta, the Telegram channel linked to a poll that asked, “Appalled by the recent attacks on Asians?” The top answer, with 84 percent of the vote, was that the violence was “justified retaliation for Covid.”

The Telegram group was a sign of how anti-Asian sentiment has flared up in corners of the internet, amplifying racist and xenophobic tropes just as attacks against Asian-Americans have surged. On messaging apps like Telegram and on internet forums like 4chan, anti-Asian groups and discussion threads have been increasingly active since November, especially on far-right message boards such as The Donald, researchers said.

The activity follows a rise in anti-Asian misinformation last spring after the coronavirus, which first emerged in China, began spreading around the world. On Facebook and Twitter, people blamed the pandemic on China, with users posting hashtags such as #gobacktochina and #makethecommiechinesepay. Those hashtags spiked when former President Donald J. Trump last year called Covid-19 the “Chinese virus” and “Kung Flu.”

While some of the online activity tailed off ahead of the November election, its re-emergence has helped lay the groundwork for real-world actions, researchers said. The fatal shootings in Atlanta this week, which have led to an outcry over treatment of Asian-Americans even as the suspect said he was trying to cure a “sexual addiction,” were preceded by a swell of racially motivated attacks against Asian-Americans in places like New York and the San Francisco Bay Area, according to the advocacy group Stop AAPI Hate.

“Surges in anti-Asian rhetoric online means increased risk of real-world events targeting that group of people,” said Alex Goldenberg, an analyst at the Network Contagion Research Institute at Rutgers University, which tracks misinformation and extremism online. He added that the anti-China coronavirus misinformation — including the false narrative that the Chinese government purposely created Covid-19 as a bioweapon — had created an atmosphere of fear and invective.

Anti-Asian speech online has typically not been as overt as anti-Semitic or anti-Black speech in groups, memes and posts, researchers said. On Facebook and Twitter, posts expressing anti-Asian sentiments have often been woven into conspiracy theory groups such as QAnon and into white nationalist and pro-Trump enclaves. Mr. Goldenberg said forms of hatred against Black people and Jews have deep roots in extremism in the United States and that the anti-Asian memes and tropes have been more “opportunistically weaponized.”

But that does not make the anti-Asian hate speech online less insidious. Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation, said the misinformation and racist speech has led to a “dehumanization” of certain groups of people and to an increased risk of violence.

Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus. That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets.

Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, usage of the term “Chinese virus” jumped 650 percent on Twitter; a day later, there was an 800 percent increase in usage of the term in conservative news articles, the study found.

Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment at an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese.

“There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment. Misinformation linking the coronavirus to anti-Asian beliefs also rose last year.
Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm.

In one example, a Fox News article from April that went viral baselessly said that the coronavirus was created in a lab in the Chinese city of Wuhan and intentionally released. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.

By the middle of last year, the misinformation had started subsiding as election-related commentary increased. The anti-Asian sentiment ended up migrating to platforms like 4chan and Telegram, researchers said.

But it still occasionally flared up, such as when Dr. Li-Meng Yan, a researcher from Hong Kong, made unproven assertions last fall that the coronavirus was a bioweapon engineered by China. In the United States, Dr. Yan became a right-wing media sensation. Her appearance on Tucker Carlson’s Fox News show in September has racked up at least 8.8 million views online.

In November, anti-Asian speech surged anew. That was when conspiracies about a “new world order” related to President Biden’s election victory began circulating, said researchers from the Network Contagion Research Institute. Some posts that went viral painted Mr. Biden as a puppet of the Chinese Communist Party.

In December, slurs about Asians and the term “Kung Flu” rose by 65 percent on websites and apps like Telegram, 4chan and The Donald, compared with the monthly average mentions from the previous 11 months on the same platforms, according to the Network Contagion Research Institute.
The activity remained high in January and last month. During this second surge, calls for violence against Asian-Americans became commonplace.

“Filipinos are not Asians because Asians are smart,” read a post in a Telegram channel that depicted a dog holding a gun to its head.

After the shootings in Atlanta, a doctored screenshot of what looked like a Facebook post from the suspect circulated on Facebook and Twitter this week. The post featured a miasma of conspiracies about China engaging in a Covid-19 cover-up and wild theories about how it was planning to “secure global domination for the 21st century.”

Facebook and Twitter eventually ruled that the screenshot was fake and blocked it. But by then, the post had been shared and liked hundreds of times on Twitter and more than 4,000 times on Facebook.

  • Liberals want to blame rightwing 'misinformation' for our problems. Get real | Thomas Frank

    One day in March 2015, I sat in a theater in New York City and took careful notes as a series of personages led by Hillary Clinton and Melinda Gates described the dazzling sunburst of liberation that was coming our way thanks to entrepreneurs, foundations and Silicon Valley. The presentation I remember most vividly was that of a famous TV actor who rhapsodized about the wonders of Twitter, Facebook and the rest: “No matter which platform you prefer,” she told us, “social media has given us all an extraordinary new world, where anyone, no matter their gender, can share their story across communities, continents and computer screens. A whole new world without ceilings.”

Six years later and liberals can’t wait for that extraordinary new world to end. Today we know that social media is what gives you things like Donald Trump’s lying tweets, the QAnon conspiracy theory and the Capitol riot of 6 January. Social media, we now know, is a volcano of misinformation, a non-stop wallow in hatred and lies, generated for fun and profit, and these days liberal politicians are openly pleading with social media’s corporate masters to pleez clamp a ceiling on it, to stop people from sharing their false and dangerous stories.

A “reality crisis” is the startling name a New York Times story recently applied to this dismal situation. An “information disorder” is the more medical-sounding label that other authorities choose to give it. Either way, the diagnosis goes, we Americans are drowning in the semiotic swirl. We have come loose from the shared material world, lost ourselves in an endless maze of foreign disinformation and rightwing conspiracy theory.

In response, Joe Biden has called upon us as a nation to “defend the truth and defeat the lies”. A renowned CNN journalist advocates a “harm reduction model” to minimize “information pollution” and deliver the “rational views” that the public wants. A New York Times writer has suggested the president appoint a federal “reality czar” who would “help” the Silicon Valley platform monopolies mute the siren song of QAnon and thus usher us into a new age of sincerity.

These days Democratic politicians lean on anyone with power over platforms to shut down the propaganda of the right. Former Democratic officials pen op-eds calling on us to get over free speech. Journalists fantasize about how easily and painlessly Silicon Valley might monitor and root out objectionable speech. In a recent HBO documentary on the subject, journalist after journalist can be seen rationalizing that, because social media platforms are private companies, the first amendment doesn’t apply to them … and, I suppose, neither should the American tradition of free-ranging, anything-goes political speech.

In the absence of such censorship, we are told, the danger is stark. In a story about Steve Bannon’s ongoing Trumpist podcasts, for example, ProPublica informs us that “extremism experts say the rhetoric still feeds into an alternative reality that breeds anger and cynicism, which may ultimately lead to violence”.

In liberal circles these days there is a palpable horror of the uncurated world, of thought spaces flourishing outside the consensus, of unauthorized voices blabbing freely in some arena where there is no moderator to whom someone might be turned in. The remedy for bad speech, we now believe, is not more speech, as per Justice Brandeis’s famous formula, but an “extremism expert” shushing the world.

What an enormous task that shushing will be! American political culture is and always has been a matter of myth and idealism and selective memory. Selling, not studying, is our peculiar national talent. Hollywood, not historians, is who writes our sacred national epics. There were liars-for-hire in this country long before Roger Stone came along. Our politics has been a bath in bullshit since forever.
People pitching the dumbest of ideas prosper fantastically in this country if their ideas happen to be what the ruling class would prefer to believe.

“Debunking” was how the literary left used to respond to America’s Niagara of nonsense. Criticism, analysis, mockery and protest: these were our weapons. We were rational-minded skeptics, and we had a grand old time deflating creationists, faith healers, puffed-up militarists and corporate liars of every description.

Censorship and blacklisting were, with important exceptions, the weapons of the puritanical right: those were their means of lashing out against rap music or suggestive plays or leftwingers who were gainfully employed.

What explains the clampdown mania among liberals? The most obvious answer is that they need an excuse. Consider the history: the right has enjoyed tremendous success over the last few decades, and it is true that conservatives’ capacity for hallucinatory fake-populist appeals has helped them to succeed. But that success has also happened because the Democrats, determined to make themselves the party of the affluent and the highly educated, have allowed the right to get away with it.

There have been countless times over the years when Democrats might have reappraised this dumb strategy and changed course. But again and again they chose not to, blaming their failure on everything but their glorious postindustrial vision. In 2016, for example, liberals chose to blame Russia for their loss rather than look in the mirror. On other occasions they assured one another that they had no problems with white blue-collar workers – until it became undeniable that they did, whereupon liberals chose to blame such people for rejecting them.

And now we cluck over a lamentable “information disorder”. The Republicans didn’t suffer the landslide defeat they deserved last November; the right is still as potent as ever; therefore Trumpist untruth is responsible for the malfunctioning public mind. Under no circumstances was it the result of the Democrats’ own lackluster performance, their refusal to reach out to the alienated millions with some kind of FDR-style vision of social solidarity.

Or perhaps this new taste for censorship is an indication of Democratic healthiness. This is a party that has courted professional-managerial elites for decades, and now they have succeeded in winning them over, along with most of the wealthy areas where such people live. Liberals scold and supervise like an offended ruling class because to a certain extent that’s who they are. More and more, they represent the well-credentialed people who monitor us in the workplace, and more and more do they act like it.

What all this censorship talk really is, though, is a declaration of defeat – defeat before the Biden administration has really begun. To give up on free speech is to despair of reason itself. (Misinformation, we read in the New York Times, is impervious to critical thinking.) The people simply cannot be persuaded; something more forceful is in order; they must be guided by we, the enlightened; and the first step in such a program is to shut off America’s many burbling fountains of bad takes.

Let me confess: every time I read one of these stories calling on us to get over free speech or calling on Mark Zuckerberg to press that big red “mute” button on our political opponents, I feel a wave of incredulity sweep over me. Liberals believe in liberty, I tell myself. This can’t really be happening here in the USA.

But, folks, it is happening. And the folly of it all is beyond belief. To say that this will give the right an issue to campaign on is almost too obvious.
To point out that it will play straight into the right’s class-based grievance-fantasies requires only a little more sophistication. To say that it is a betrayal of everything we were taught liberalism stood for – a betrayal that we will spend years living down – may be too complex a thought for our punditburo to consider, but it is nevertheless true.

  • Rightwing 'super-spreader': study finds handful of accounts spread bulk of election misinformation

    A handful of rightwing “super-spreaders” on social media were responsible for the bulk of election misinformation in the run-up to the Capitol attack, according to a new study that also sheds light on the staggering reach of falsehoods pushed by Donald Trump.

A report from the Election Integrity Partnership (EIP), a group that includes Stanford and the University of Washington, analyzed social media platforms including Facebook, Twitter, Instagram, YouTube, and TikTok during several months before and after the 2020 elections. It found that “super-spreaders” – responsible for the most frequent and most impactful misinformation campaigns – included Trump and his two elder sons, as well as other members of the Trump administration and the rightwing media.

The study’s authors and other researchers say the findings underscore the need to disable such accounts to stop the spread of misinformation.

“If there is a limit to how much content moderators can tackle, have them focus on reducing harm by eliminating the most effective spreaders of misinformation,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news but was not involved in the EIP report. “Rather than trying to enforce the rules equally across all users, focus enforcement on the most powerful accounts.”

The report analyzed social media posts featuring words like “election” and “voting” to track key misinformation narratives related to the 2020 election, including claims of mail carriers throwing away ballots, legitimate ballots strategically not being counted, and other false or unproven stories. The report studied how these narratives developed and the effect they had.
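This kind of keyword-based collection can be illustrated with a minimal sketch. This is not the EIP's actual pipeline; the keyword list and sample posts below are hypothetical, and the real study tracked narratives across millions of posts on multiple platforms:

```python
# Minimal sketch of keyword-based post filtering, loosely modeled on the
# collection approach described above. The keyword list and sample posts
# are hypothetical; the actual study analyzed millions of posts.

KEYWORDS = ("election", "voting", "ballot")

def matches_keywords(text, keywords=KEYWORDS):
    """Return True if any tracked keyword appears as a substring of the post."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in keywords)

posts = [
    "Mail carriers are throwing away ballots!",
    "Great weather for a walk today.",
    "Officials certify the election results in every county.",
]

# Keep only the posts that mention a tracked narrative keyword.
flagged = [post for post in posts if matches_keywords(post)]
print(flagged)  # the first and third posts match
```

A real system would then cluster the matching posts into narratives and track their spread over time; the filter above is only the first step.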
The report found that during this time period, popular rightwing Twitter accounts “transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud”.

Ultimately, the “false claims and narratives coalesced into the meta-narrative of a ‘stolen election’, which later propelled the January 6 insurrection”, the report said.

“The 2020 election demonstrated that actors – both foreign and domestic – remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy,” the authors concluded.

Next to no factchecking, with Trump as the super-spreader-in-chief

In monitoring Twitter, the researchers analyzed more than 22 million tweets sent between 15 August and 12 December. The study determined which accounts were most influential by the size and speed with which they spread misinformation.

“Influential accounts on the political right rarely engaged in factchecking behavior, and were responsible for the most widely spread incidents of false or misleading information in our dataset,” the report said.

Out of the 21 top offenders, 15 were verified Twitter accounts – which are particularly dangerous when it comes to election misinformation, the study said. The “repeat spreaders” responsible for the most widely spread misinformation included Eric Trump, Donald Trump, Donald Trump Jr. and influencers like James O’Keefe, Tim Pool, Elijah Riot, and Sidney Powell. All 21 of the top accounts for misinformation leaned rightwing, the study showed.

“Top-down mis- and disinformation is dangerous because of the speed at which it can spread,” the report said.
“If a social media influencer with millions of followers shares a narrative, it can garner hundreds of thousands of engagements and shares before a social media platform or factchecker has time to review its content.”

On nearly all the platforms analyzed in the study – including Facebook, Twitter, and YouTube – Donald Trump played a massive role.

The report pinpointed 21 incidents in which a tweet from Trump’s official @realDonaldTrump account jumpstarted the spread of a false narrative across Twitter. For example, Trump’s tweets baselessly claiming that the voting equipment manufacturer Dominion Voting Systems was responsible for election fraud played a large role in amplifying the conspiracy theory to a wider audience. False or baseless tweets sent by Trump’s account – which had 88.9m followers at the time – garnered more than 460,000 retweets.

Meanwhile, Trump’s YouTube channel was linked to six distinct waves of misinformation that, combined, were the most viewed of any repeat-spreader’s videos. His Facebook account had the most engagement of all those studied.

The Election Integrity Partnership study is not the first to show the massive influence Trump’s social media accounts have had on the spread of misinformation. In one year – between 1 January 2020 and 6 January 2021 – Donald Trump pushed disinformation in more than 1,400 Facebook posts, a report from Media Matters for America released in February found. Trump was ultimately suspended from the platform in January, and Facebook is debating whether he will ever be allowed back.

Specifically, 516 of his posts contained disinformation about Covid-19, 368 contained election disinformation, and 683 contained harmful rhetoric attacking his political enemies. Allegations of election fraud earned over 149.4 million interactions, or an average of 412,000 interactions per post, and accounted for 16% of interactions on his posts in 2020.

Trump had a unique ability to amplify news stories that would otherwise have remained contained in smaller outlets and subgroups, said Matt Gertz of Media Matters for America.

“What Trump did was take misinformation from the rightwing ecosystem and turn it into a mainstream news event that affected everyone,” he said. “He was able to take these absurd lies and conspiracy theories and turn them into national news. And if you do that, and inflame people often enough, you will end up with what we saw on January 6.”

Effects of false election narratives on voters

“Super-spreader” accounts were ultimately very successful in undermining voters’ trust in the democratic system, the report found. Citing a poll by the Pew Research Center, the study said that, of the 54% of people who voted in person, approximately half had cited concerns about voting by mail, and only 30% of respondents were “very confident” that absentee or mail-in ballots had been counted as intended.

The report outlined a number of recommendations, including removing “super-spreader” accounts entirely. Outside experts agree that tech companies should more closely scrutinize top accounts and repeat offenders.

Researchers said the refusal to take action or establish clear rules for when action should be taken helped to fuel the prevalence of misinformation. For example, only YouTube had a publicly stated “three-strike” system for offenses related to the election. Platforms like Facebook reportedly had three-strike rules as well but did not make the system publicly known.

Only four of the top 20 Twitter accounts cited as top spreaders were actually removed, the study showed – including Donald Trump’s in January. Twitter has maintained that its ban of the former president is permanent. YouTube’s chief executive officer stated this week that Trump would be reinstated on the platform once the “risk of violence” from his posts passes. Facebook’s independent oversight board is now considering whether to allow Trump to return.

“We have seen that he uses his accounts as a way to weaponize disinformation. It has already led to riots at the US Capitol; I don’t know why you would give him the opportunity to do that again,” Gertz said. “It would be a huge mistake to allow Trump to return.”
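The strike-based enforcement discussed above (YouTube's publicly stated "three-strike" system) can be sketched in a few lines. The penalty names and escalation order here are illustrative assumptions for demonstration, not any platform's actual policy:

```python
# Illustrative sketch of a "three-strike" escalation policy of the kind
# the report credits YouTube with publishing. The penalty names and the
# escalation order are assumptions, not any platform's real rules.

from collections import defaultdict

# Escalating consequences: first strike warns, third strike removes.
PENALTIES = ["warning", "temporary suspension", "permanent removal"]

class StrikeTracker:
    def __init__(self):
        self.strikes = defaultdict(int)

    def record_violation(self, account):
        """Add a strike to the account and return the resulting penalty."""
        self.strikes[account] += 1
        # Cap at the final penalty once three or more strikes accumulate.
        level = min(self.strikes[account], len(PENALTIES)) - 1
        return PENALTIES[level]

tracker = StrikeTracker()
first = tracker.record_violation("repeat_offender")
second = tracker.record_violation("repeat_offender")
third = tracker.record_violation("repeat_offender")
print(first, second, third)  # warning, temporary suspension, permanent removal
```

The point researchers make is less about the mechanism, which is simple, than about applying it consistently to influential accounts rather than exempting them.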

  • Fixing What the Internet Broke

    How sites like Facebook and Twitter can help reduce election misinformation.

March 4, 2021, 12:26 p.m. ET

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

January’s riot at the U.S. Capitol showed the damage that can result when millions of people believe an election was stolen despite no evidence of widespread fraud.

The Election Integrity Partnership, a coalition of online information researchers, published this week a comprehensive analysis of the false narrative of the presidential contest and recommended ways to avoid a repeat.

Internet companies weren’t solely to blame for the fiction of a stolen election, but the report concluded that they were hubs where false narratives were incubated, reinforced and cemented. I’m going to summarize here three of the report’s intriguing suggestions for how companies such as Facebook, YouTube and Twitter can change to help create a healthier climate of information about elections and everything else.

One broad point: It can feel as if the norms and behaviors of people online are immutable and inevitable, but they’re not. Digital life is still relatively new, and what’s good or toxic is the result of deliberate choices by companies and all of us. We can fix what’s broken. And as another threat against the Capitol this week shows, it’s imperative we get this right.

1) A higher bar for people with the most influence and the repeat offenders: Kim Kardashian can change more minds than your dentist. And research about the 2020 election has shown that a relatively small number of prominent organizations and people, including President Donald Trump, played an outsize role in establishing the myth of a rigged vote.

Currently, sites like Facebook and YouTube mostly consider the substance of a post or video, divorced from the messenger, when determining whether it violates their policies. World leaders are given more leeway than the rest of us, and other prominent people sometimes get a pass when they break the companies’ guidelines.

This doesn’t make sense.

If internet companies did nothing else, it would make a big difference if they changed how they treated the influential people who were most responsible for spreading falsehoods or twisted facts — and tended to do so again and again.

The EIP researchers suggested three changes: create stricter rules for influential people; prioritize faster decisions on prominent accounts that have broken the rules before; and escalate consequences for habitual superspreaders of bogus information.

YouTube has long had such a “three strikes” system for accounts that repeatedly break its rules, and Twitter recently adopted versions of this system for posts that it considers misleading about elections or coronavirus vaccinations.

The hard part, though, is not necessarily making policies. It’s enforcing them when doing so could trigger a backlash.

2) Internet companies should tell us what they’re doing and why: Big websites like Facebook and Twitter have detailed guidelines about what’s not allowed — for example, threatening others with violence or selling drugs.

But internet companies often apply their policies inconsistently and don’t always provide clear reasons when people’s posts are flagged or deleted.
The EIP report suggested that online companies do more to inform people about their guidelines and share evidence to support why a post broke the rules.3) More visibility and accountability for internet companies’ decisions: News organizations have reported on Facebook’s own research identifying ways that its computer recommendations steered some to fringe ideas and made people more polarized. But Facebook and other internet companies mostly keep such analyses a secret.The EIP researchers suggested that internet companies make public their research into misinformation and their assessments of attempts to counter it. That could improve people’s understanding of how these information systems work.The report also suggested a change that journalists and researchers have long wanted: ways for outsiders to see posts that have been deleted by the internet companies or labeled false. This would allow accountability for the decisions that internet companies make.There are no easy fixes to building Americans’ trust in a shared set of facts, particularly when internet sites enable lies to travel farther and faster than the truth. But the EIP recommendations show we do have options and a path forward. Before we go …Amazon goes big(ger) in New York: My colleagues Matthew Haag and Winnie Hu wrote about Amazon opening more warehouses in New York neighborhoods and suburbs to make faster deliveries. A related On Tech newsletter from 2020: Why Amazon needs more package hubs closer to where people live.Our homes are always watching: Law enforcement officials have increasingly sought videos from internet-connected doorbell cameras to help solve crimes but The Washington Post writes that the cameras have sometimes been a risk to them, too. In Florida, a man saw F.B.I. 
agents coming through his home camera and opened fire, killing two people.Square is buying Jay-Z’s streaming music service: Yes, the company that lets the flea market vendor swipe your credit card is going to own a streaming music company. No, it doesn’t make sense. (Square said it’s about finding new ways for musicians to make money.)Hugs to thisA kitty cat wouldn’t budge from the roof of a train in London for about two and a half hours. Here are way too many silly jokes about the train-surfing cat. (Or maybe JUST ENOUGH SILLY JOKES?)We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.If you don’t already get this newsletter in your inbox, please sign up here.AdvertisementContinue reading the main story More


    Facebook Ends Ban on Political Advertising

    The social network had prohibited political ads on its site indefinitely after the November election. Such ads have been criticized for spreading misinformation.

March 3, 2021 | Updated 6:16 p.m. ET

SAN FRANCISCO — Facebook said on Wednesday that it planned to lift its ban on political advertising across its network, resuming a form of digital promotion that has been criticized for spreading misinformation and falsehoods and for inflaming voters.

The social network said it would allow advertisers to buy new ads about “social issues, elections or politics” beginning on Thursday, according to a copy of an email sent to political advertisers and viewed by The New York Times. Those advertisers must complete a series of identity checks before being authorized to place the ads, the company said.

“We put this temporary ban in place after the November 2020 election to avoid confusion or abuse following Election Day,” Facebook said in a blog post. “We’ve heard a lot of feedback about this and learned more about political and electoral ads during this election cycle. As a result, we plan to use the coming months to take a closer look at how these ads work on our service to see where further changes may be merited.”

Political advertising on Facebook has long faced questions. Mark Zuckerberg, Facebook’s chief executive, has said he wished to maintain a largely hands-off stance toward speech on the site — including political ads — unless it posed an immediate harm to the public or individuals, saying that he “does not want to be the arbiter of truth.”

But after the 2016 presidential election, the company and intelligence officials discovered that Russians had used Facebook ads to sow discontent among Americans. Former President Donald J. Trump also used Facebook’s political ads to amplify claims about an “invasion” on the Mexican border in 2019, among other incidents.

Facebook had banned political ads late last year as a way to choke off misinformation and threats of violence around the November presidential election. In September, the company said it planned to forbid new political ads for the week before Election Day and would act swiftly against posts that tried to dissuade people from voting. Then in October, Facebook expanded that action by declaring it would prohibit all political and issue-based advertising after the polls closed on Nov. 3 for an undetermined length of time.

The company eventually clamped down on groups and pages that spread certain kinds of misinformation, such as discouraging people from voting or registering to vote. It has spent billions of dollars to root out foreign influence campaigns and other types of meddling by malicious state agencies and other bad actors.

In December, Facebook lifted the ban to allow some advertisers to run political issue and candidacy ads in Georgia for the January runoff Senate election in the state. But the ban otherwise remained in effect for the remaining 49 states.

Attitudes about how political advertising should be treated across Facebook are decidedly mixed. Politicians who are not well known can often raise their profile and awareness of their campaigns by using Facebook.

“Political ads are not bad things in and of themselves,” said Siva Vaidhyanathan, a media studies professor and the author of a book studying Facebook’s effects on democracy. “They perform an essential service, in the act of directly representing the candidate’s concerns or positions.”

He added, “When you ban all campaign ads on the most accessible and affordable platform out there, you tilt the balance toward the candidates who can afford radio and television.”

Representative Alexandria Ocasio-Cortez, Democrat of New York, has also said that political advertising on Facebook can be a crucial component of Democratic digital campaign strategies.

Some political ad buyers applauded the lifting of the ban.

“The ad ban was something that Facebook did to appease the public for the misinformation that spread across the platform,” said Eileen Pollet, a digital campaign strategist and founder of Ravenna Strategies. “But it really ended up hurting good actors while bad actors had total free rein. And now, especially since the election is over, the ban had really been hurting nonprofits and local organizations.”

Facebook has long sought to thread the needle between forceful moderation and a lighter touch. For years, Mr. Zuckerberg defended politicians’ right to say what they wanted on Facebook, but that changed last year amid rising alarm over potential violence around the November election.

In January, Facebook barred Mr. Trump from using his account and posting on the platform after he took to social media to delegitimize the election results and incited a violent uprising among his supporters, who stormed the U.S. Capitol. Facebook said Mr. Trump’s suspension was “indefinite.” The decision is now under review by the Facebook Oversight Board, a third-party entity created by the company and composed of journalists, academics and others, which adjudicates some of the company’s thorny content policy enforcement decisions. A decision is expected within the next few months.

On Thursday, political advertisers on Facebook will be able to submit new ads or turn on existing political ads that have already been approved, the company said. Each ad will appear with a small disclaimer stating that it has been “paid for by” a political organization. For those buying new ads, Facebook said it could take up to a week to clear the identity authorization and advertising review process.


    Optimizing for outrage: ex-Obama digital chief urges curbs on big tech

    A former digital strategist for Barack Obama has demanded an end to big tech’s profit-driven optimization of outrage and called for regulators to curb online disinformation and division.
    Michael Slaby – author of a new book, For All the People: Redeeming the Broken Promises of Modern Media and Reclaiming Our Civic Life – described tech giants Facebook and Google as “two gorillas” crushing the very creativity needed to combat conspiracy theories spread by former US president Donald Trump and others.
    “The systems are not broken,” Slaby, 43, told the Guardian by phone from his home in Rhinebeck, New York. “They are working exactly as they were designed for the benefit of their designers. They can be designed differently. We can express and encourage a different set of public values about the public goods that we need from our public sphere.”
    Facebook has almost 2.8 billion global monthly active users with a total of 3.3 billion using any of the company’s core products – Facebook, WhatsApp, Instagram and Messenger – on a monthly basis. Its revenue in the fourth quarter of last year was $28bn, up 33% from a year earlier, and profits climbed 53% to $11.2bn.
    But the social network founded by Mark Zuckerberg stands accused of poisoning the information well. Critics say it polarises users and allows hate speech and conspiracy theories to thrive, and that people who join extremist groups are often directed by the platform’s algorithm. The use of Facebook by Trump supporters involved in the 6 January insurrection at the US Capitol has drawn much scrutiny.
    Slaby believes Facebook and Twitter were too slow to remove Trump from their platforms. “This is where I think they hide behind arguments like the first amendment,” he said. “The first amendment is about government suppression of speech; it doesn’t have anything to do with your access to Facebook.”