More stories

  • Zuckerberg, Dorsey and Pichai testify about disinformation.

    The chief executives of Google, Facebook and Twitter are testifying before the House on Thursday about how disinformation spreads across their platforms, an issue the tech companies were scrutinized for during the presidential election and after the Jan. 6 riot at the Capitol.

    The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

    The hearing will also be the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

    Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attack, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic. Republicans sent the executives letters this month asking them about their decisions to remove conservative personalities and stories from their platforms, including an October article in The New York Post about President Biden’s son Hunter.

    Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

    Some lawmakers will push for changes to Section 230 of the Communications Decency Act, the 1996 law that shields the platforms from lawsuits over their users’ posts. They are trying to strip those protections in cases where the companies’ algorithms amplify certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

    “By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

    The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.

  • Zuckerberg faces Capitol attack grilling as Biden signals tougher line on big tech

    Mark Zuckerberg, the head of Facebook, could be in for a rough ride on Thursday when he testifies to Congress for the first time about the 6 January insurrection at the Capitol in Washington DC, amid growing questions over his platform’s role in fuelling the violence.

    The testimony will come after signs that the new administration of Joe Biden is preparing to take a tougher line on the tech industry’s power, especially when it comes to the social media platforms and their role in spreading misinformation and conspiracy theories.

    Zuckerberg will be joined by Sundar Pichai and Jack Dorsey, the chief executives of Google and Twitter respectively, at a hearing pointedly entitled “Disinformation nation: social media’s role in promoting extremism and misinformation” by the House of Representatives’ energy and commerce committee.

    The scrutiny comes after a report found that Facebook allowed groups linked to the QAnon, boogaloo and militia movements to glorify violence during the 2020 election and the weeks leading up to the deadly mob violence at the US Capitol.

    Avaaz, a non-profit advocacy group, says it identified 267 pages and groups on Facebook that spread “violence-glorifying content” in the heat of the 2020 election to a combined following of 32 million users. More than two-thirds of the groups and pages had names aligned with several domestic extremist movements. The top 100 most popular false or misleading stories on Facebook related to the elections received an estimated 162m views, the report found. Avaaz called on the White House and Congress to open an investigation into Facebook’s failures and urgently pass legislation to protect American democracy.

    Fadi Quran, its campaign director, said: “This report shows that American voters were pummeled with false and misleading information on Facebook every step of the 2020 election cycle. We have over a year’s worth of evidence that the platform helped drive billions of views to pages and content that confused voters, created division and chaos, and, in some instances, incited violence. But the most worrying finding in our analysis is that Facebook had the tools and capacity to better protect voters from being targets of this content, but the platform only used them at the very last moment, after significant harm was done.”

    Facebook claimed that Avaaz had used flawed methodology. Andy Stone, a spokesperson, said: “We’ve done more than any other internet company to combat harmful content, having already banned nearly 900 militarized social movements and removed tens of thousands of QAnon pages, groups and accounts from our apps.” He acknowledged: “Our enforcement isn’t perfect, which is why we’re always improving it while also working with outside experts to make sure that our policies remain in the right place.”

    But the report is likely to prompt tough questions for Zuckerberg in what is part of a wider showdown between Washington and Silicon Valley. Another flashpoint on Thursday could be Section 230 of the 1996 Communications Decency Act, which shields social media companies from liability for content their users post. Repealing the law is one of the few things on which Biden and his predecessor as president, Donald Trump, agree, though for different reasons. Democrats are concerned that Section 230 allows disinformation and conspiracy theories such as QAnon to flourish, while Trump and other Republicans have argued that it protects companies from consequences for censoring conservative voices.

    More generally, critics say that tech companies are too big and that the coronavirus pandemic has only increased their dominance. The cosy relationship between Barack Obama’s administration and Silicon Valley is a thing of the past, while libertarian Republicans who oppose government interference are a fading force. Amazon, Apple, Facebook and Google have all come under scrutiny from Congress and regulators in recent years. The justice department, the Federal Trade Commission (FTC) and state attorneys general are suing the behemoths over various alleged antitrust violations.

    In a letter this week to Biden and Merrick Garland, the new attorney general, a coalition of 29 progressive groups wrote: “It’s clear that the ability of Big Tech giants like Google to acquire monopoly power has been abetted by the leadership deficit at top enforcement agencies such as the FTC … We need a break from past, failed leadership, and we need it now.”

    There are signs that Biden is heeding such calls and spoiling for a confrontation. On Monday he nominated Lina Khan, an antitrust scholar who wants stricter regulation of internet companies, to the FTC. Earlier this month Tim Wu, a Columbia University law professor among the most outspoken critics of big tech, was appointed to the national economic council.

    There is support in Congress from the likes of David Cicilline, chairman of the House judiciary committee’s antitrust panel, which last year released a 449-page report detailing abuses of market power by Apple, Amazon, Google and Facebook. The Democratic congressman is reportedly poised to issue at least 10 legislative initiatives targeting big tech, a blitz that will make it harder for the companies and their lobbyists to focus their opposition on a single piece of legislation.

    Cicilline, also working on a separate bill targeting Section 230, told the Axios website: “My strategy is you’ll see a number of bills introduced, both because it’s harder for [the tech companies] to manage and oppose, you know, 10 bills as opposed to one. It also is an opportunity for members of the committee who have expressed a real interest or enthusiasm about a particular issue, to sort of take that on and champion it.”

  • Trump will use ‘his own platform’ to return to social media after Twitter ban

    Donald Trump will soon use “his own platform” to return to social media, an adviser said on Sunday, months after the former president was banned from Twitter for inciting the US Capitol riot.

    Trump has chafed in relative silence at his Mar-a-Lago resort in Florida since losing his Twitter account and the protections and powers of office. Recently he has released short statements which many have likened to his tweets of old. Speculation has been rife that Trump might seek to create his own TV network in an attempt to prise viewers from Fox News, which was first to call the crucial state of Arizona for Joe Biden on election night, to Trump’s considerable anger. But on Sunday adviser Jason Miller said social media was the immediate target.

    “The president’s been off of social media for a while,” he told Fox News Media Buzz host Howard Kurtz, “[but] his press releases, his statements have actually been getting almost more play than he ever did on Twitter before.”

    Miller said he had been told by a reporter the statements were “much more elegant” and “more presidential” than Trump’s tweets, but added: “I do think that we’re going to see President Trump returning to social media in probably about two or three months here with his own platform. And this is something that I think will be the hottest ticket in social media, it’s going to completely redefine the game, and everybody is going to be waiting and watching to see what exactly President Trump does. But it will be his own platform.”

    Asked if Trump was going to create the platform himself or with a company, Miller said: “I can’t go much further than what I was able to just share, but I can say that it will be big once he starts. There have been a lot of high-power meetings he’s been having at Mar-a-Lago with some teams of folks who have been coming in, and … it’s not just one company that’s approached the president, there have been numerous companies. But I think the president does know what direction he wants to head here and this new platform is going to be big and everyone wants him, he’s gonna bring millions and millions, tens of millions of people to this new platform.”

    Trump, his supporters and prominent conservatives alleged bias from social media companies even before the events of 6 January, when five people including a police officer died as a mob stormed the Capitol, seeking at Trump’s urging to overturn his election defeat. In the aftermath of the attack, Trump was also suspended from Facebook and Instagram. Rightwing platforms including Gab and Parler have come under intense scrutiny amid investigations of the Capitol putsch.

    Trump was impeached for inciting the attack but acquitted when only seven Republican senators voted to convict. He therefore remains free to run for office and has dominated polls regarding prospective Republican nominees in 2024, raising impressive sums in political donations even while his business fortunes suffer amid numerous legal threats.

    Miller emphasised the hold Trump retains on his party. “He’s already had over 20 senators, over 50 members of Congress either call or make the pilgrimage to Mar-a-Lago to ask for [his] endorsement,” he said. With the sort of performative hyperbole Trump aides often display for their watching boss, Miller claimed endorsements from the former president were “the most important in world history. There’s never ever been this type of endorsement that’s carried this much weight.”

    Saying the media should “pay attention to Georgia on Monday”, Miller said an endorsement there would “really shake things up in the political landscape”. Trump faces an investigation in Georgia over a call to a Republican official in which he sought to overturn defeat by Joe Biden. In January, Democrats won both Georgia seats in the US Senate.

  • How Anti-Asian Activity Online Set the Stage for Real-World Violence

    On platforms such as Telegram and 4chan, racist memes and posts about Asian-Americans have created fear and dehumanization.

    In January, a new group popped up on the messaging app Telegram, named after an Asian slur. Hundreds of people quickly joined. Many members soon began posting caricatures of Asians with exaggerated facial features, memes of Asian people eating dog meat and images of American soldiers inflicting violence during the Vietnam War. This week, after a gunman killed eight people — including six women of Asian descent — at massage parlors in and near Atlanta, the Telegram channel linked to a poll that asked, “Appalled by the recent attacks on Asians?” The top answer, with 84 percent of the vote, was that the violence was “justified retaliation for Covid.”

    The Telegram group was a sign of how anti-Asian sentiment has flared up in corners of the internet, amplifying racist and xenophobic tropes just as attacks against Asian-Americans have surged. On messaging apps like Telegram and on internet forums like 4chan, anti-Asian groups and discussion threads have been increasingly active since November, especially on far-right message boards such as The Donald, researchers said.

    The activity follows a rise in anti-Asian misinformation last spring after the coronavirus, which first emerged in China, began spreading around the world. On Facebook and Twitter, people blamed the pandemic on China, with users posting hashtags such as #gobacktochina and #makethecommiechinesepay. Those hashtags spiked when former President Donald J. Trump last year called Covid-19 the “Chinese virus” and “Kung Flu.”

    While some of the online activity tailed off ahead of the November election, its re-emergence has helped lay the groundwork for real-world actions, researchers said. The fatal shootings in Atlanta this week, which have led to an outcry over the treatment of Asian-Americans even as the suspect said he was trying to cure a “sexual addiction,” were preceded by a swell of racially motivated attacks against Asian-Americans in places like New York and the San Francisco Bay Area, according to the advocacy group Stop AAPI Hate.

    “Surges in anti-Asian rhetoric online means increased risk of real-world events targeting that group of people,” said Alex Goldenberg, an analyst at the Network Contagion Research Institute at Rutgers University, which tracks misinformation and extremism online. He added that the anti-China coronavirus misinformation — including the false narrative that the Chinese government purposely created Covid-19 as a bioweapon — had created an atmosphere of fear and invective.

    Anti-Asian speech online has typically not been as overt as the anti-Semitic or anti-Black content in groups, memes and posts, researchers said. On Facebook and Twitter, posts expressing anti-Asian sentiments have often been woven into conspiracy theory groups such as QAnon and into white nationalist and pro-Trump enclaves. Mr. Goldenberg said forms of hatred against Black people and Jews have deep roots in extremism in the United States and that the anti-Asian memes and tropes have been more “opportunistically weaponized.” But that does not make the anti-Asian hate speech online less insidious. Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation, said the misinformation and racist speech have led to a “dehumanization” of certain groups of people and to an increased risk of violence.

    Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus. That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets. Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, usage of the term “Chinese virus” jumped 650 percent on Twitter; a day later there was an 800 percent increase in its usage in conservative news articles, the study found.

    Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment to an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

    In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese. “There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

    Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment. Misinformation linking the coronavirus to anti-Asian beliefs also rose last year.
    Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm. In one example, a Fox News article from April that went viral baselessly said that the coronavirus was created in a lab in the Chinese city of Wuhan and intentionally released. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.

    By the middle of last year, the misinformation had started subsiding as election-related commentary increased. The anti-Asian sentiment ended up migrating to platforms like 4chan and Telegram, researchers said. But it still occasionally flared up, such as when Dr. Li-Meng Yan, a researcher from Hong Kong, made unproven assertions last fall that the coronavirus was a bioweapon engineered by China. In the United States, Dr. Yan became a right-wing media sensation. Her appearance on Tucker Carlson’s Fox News show in September has racked up at least 8.8 million views online.

    In November, anti-Asian speech surged anew. That was when conspiracies about a “new world order” related to President Biden’s election victory began circulating, said researchers from the Network Contagion Research Institute. Some posts that went viral painted Mr. Biden as a puppet of the Chinese Communist Party.

    In December, slurs about Asians and the term “Kung Flu” rose by 65 percent on websites and apps like Telegram, 4chan and The Donald, compared with the monthly average mentions from the previous 11 months on the same platforms, according to the Network Contagion Research Institute. The activity remained high in January and last month. During this second surge, calls for violence against Asian-Americans became commonplace. “Filipinos are not Asians because Asians are smart,” read a post in a Telegram channel that depicted a dog holding a gun to its head.

    After the shootings in Atlanta, a doctored screenshot of what looked like a Facebook post from the suspect circulated on Facebook and Twitter this week. The post featured a miasma of conspiracies about China engaging in a Covid-19 cover-up and wild theories about how it was planning to “secure global domination for the 21st century.” Facebook and Twitter eventually ruled that the screenshot was fake and blocked it. But by then, the post had been shared and liked hundreds of times on Twitter and more than 4,000 times on Facebook.

  • Rightwing ‘super-spreader’: study finds handful of accounts spread bulk of election misinformation

    A handful of rightwing “super-spreaders” on social media were responsible for the bulk of election misinformation in the run-up to the Capitol attack, according to a new study that also sheds light on the staggering reach of falsehoods pushed by Donald Trump.

    A report from the Election Integrity Partnership (EIP), a group that includes Stanford and the University of Washington, analyzed social media platforms including Facebook, Twitter, Instagram, YouTube and TikTok during several months before and after the 2020 elections. It found that “super-spreaders” – responsible for the most frequent and most impactful misinformation campaigns – included Trump and his two elder sons, as well as other members of the Trump administration and the rightwing media.

    The study’s authors and other researchers say the findings underscore the need to disable such accounts to stop the spread of misinformation. “If there is a limit to how much content moderators can tackle, have them focus on reducing harm by eliminating the most effective spreaders of misinformation,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news but was not involved in the EIP report. “Rather than trying to enforce the rules equally across all users, focus enforcement on the most powerful accounts.”

    The report analyzed social media posts featuring words like “election” and “voting” to track key misinformation narratives related to the 2020 election, including claims of mail carriers throwing away ballots, legitimate ballots strategically not being counted, and other false or unproven stories. The report studied how these narratives developed and the effect they had. It found that during this time period, popular rightwing Twitter accounts “transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud”. Ultimately, the “false claims and narratives coalesced into the meta-narrative of a ‘stolen election’, which later propelled the January 6 insurrection”, the report said.

    “The 2020 election demonstrated that actors – both foreign and domestic – remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy,” the authors concluded.

    Next to no factchecking, with Trump as the super-spreader-in-chief

    In monitoring Twitter, the researchers analyzed more than 22 million tweets sent between 15 August and 12 December. The study determined which accounts were most influential by the size and speed with which they spread misinformation. “Influential accounts on the political right rarely engaged in factchecking behavior, and were responsible for the most widely spread incidents of false or misleading information in our dataset,” the report said.

    Out of the 21 top offenders, 15 were verified Twitter accounts – which are particularly dangerous when it comes to election misinformation, the study said. The “repeat spreaders” responsible for the most widely spread misinformation included Eric Trump, Donald Trump, Donald Trump Jr. and influencers like James O’Keefe, Tim Pool, Elijah Riot and Sidney Powell. All 21 of the top accounts for misinformation leaned rightwing, the study showed.

    “Top-down mis- and disinformation is dangerous because of the speed at which it can spread,” the report said. “If a social media influencer with millions of followers shares a narrative, it can garner hundreds of thousands of engagements and shares before a social media platform or factchecker has time to review its content.”

    On nearly all the platforms analyzed in the study – including Facebook, Twitter and YouTube – Donald Trump played a massive role. It pinpointed 21 incidents in which a tweet from Trump’s official @realDonaldTrump account jumpstarted the spread of a false narrative across Twitter. For example, Trump’s tweets baselessly claiming that the voting equipment manufacturer Dominion Voting Systems was responsible for election fraud played a large role in amplifying the conspiracy theory to a wider audience. False or baseless tweets sent by Trump’s account – which had 88.9m followers at the time – garnered more than 460,000 retweets. Meanwhile, Trump’s YouTube channel was linked to six distinct waves of misinformation that, combined, were the most viewed of any repeat-spreader’s videos. His Facebook account had the most engagement of all those studied.

    The Election Integrity Partnership study is not the first to show the massive influence Trump’s social media accounts have had on the spread of misinformation. In one year – between 1 January 2020 and 6 January 2021 – Donald Trump pushed disinformation in more than 1,400 Facebook posts, a report from Media Matters for America released in February found. Trump was ultimately suspended from the platform in January, and Facebook is debating whether he will ever be allowed back. Specifically, 516 of his posts contained disinformation about Covid-19, 368 contained election disinformation, and 683 contained harmful rhetoric attacking his political enemies. Allegations of election fraud earned over 149.4 million interactions, or an average of 412,000 interactions per post, and accounted for 16% of interactions on his posts in 2020.

    Trump had a unique ability to amplify news stories that would have otherwise remained contained in smaller outlets and subgroups, said Matt Gertz of Media Matters for America. “What Trump did was take misinformation from the rightwing ecosystem and turn it into a mainstream news event that affected everyone,” he said. “He was able to take these absurd lies and conspiracy theories and turn them into national news. And if you do that, and inflame people often enough, you will end up with what we saw on January 6.”

    Effects of false election narratives on voters

    “Super-spreader” accounts were ultimately very successful in undermining voters’ trust in the democratic system, the report found. Citing a poll by the Pew Research Center, the study said that, of the 54% of people who voted in person, approximately half had cited concerns about voting by mail, and only 30% of respondents were “very confident” that absentee or mail-in ballots had been counted as intended.

    The report outlined a number of recommendations, including removing “super-spreader” accounts entirely. Outside experts agree that tech companies should more closely scrutinize top accounts and repeat offenders. Researchers said the refusal to take action or establish clear rules for when action should be taken helped to fuel the prevalence of misinformation. For example, only YouTube had a publicly stated “three-strike” system for offenses related to the election. Platforms like Facebook reportedly had three-strike rules as well but did not make the system publicly known.

    Only four of the top 20 Twitter accounts cited as top spreaders were actually removed, the study showed – including Donald Trump’s in January. Twitter has maintained that its ban of the former president is permanent. YouTube’s chief executive officer stated this week that Trump would be reinstated on the platform once the “risk of violence” from his posts passes. Facebook’s independent oversight board is now considering whether to allow Trump to return.

    “We have seen that he uses his accounts as a way to weaponize disinformation. It has already led to riots at the US Capitol; I don’t know why you would give him the opportunity to do that again,” Gertz said. “It would be a huge mistake to allow Trump to return.”

  • Fran Lebowitz Isn’t Buying What Jack Dorsey Is Selling

    Sway, February 11, 2021: The 70-year-old social commentator and humorist doesn’t have a smartphone. That doesn’t stop her from having a take on big tech (and everything else).

  • Claim of anti-conservative bias by social media firms is baseless, report finds

    Republicans including Donald Trump have raged against Twitter and Facebook in recent months, alleging anti-conservative bias, censorship and a silencing of free speech. According to a new report from New York University, none of that is true.

    Disinformation expert Paul Barrett and researcher J Grant Sims found that far from suppressing conservatives, social media platforms have, through algorithms, amplified rightwing voices, “often affording conservatives greater reach than liberal or nonpartisan content creators”.

    Barrett and Sims’s report comes as Republicans step up their campaign against social media companies. Conservatives have long complained that platforms such as Twitter, Facebook and YouTube show bias against the right, complaints that intensified when Trump was banned from all three platforms for inciting the attack on the US Capitol which left five people dead.

    The NYU study, released by the Stern Center for Business and Human Rights, found that the claim of anti-conservative bias “is itself a form of disinformation: a falsehood with no reliable evidence to support it”. “There is no evidence to support the claim that the major social media companies are suppressing, censoring or otherwise discriminating against conservatives on their platforms,” Barrett said. “In fact, it is often conservatives who gain the most in terms of engagement and online attention, thanks to the platforms’ systems of algorithmic promotion of content.”

    The report found that Twitter, Facebook and other companies did not show bias when deleting incendiary tweets around the Capitol attack, as some on the right have claimed. Prominent conservatives including Ted Cruz, the Texas senator, have sought to crack down on big tech companies as they claim to be victims of suppression – which Barrett and Sims found does not exist.

    The researchers did outline problems social media companies face when accused of bias, and recommended a series of measures. “What is needed is a robust reform agenda that addresses the very real problems of social media content regulation as it currently exists,” Barrett said. “Only by moving forward from these false claims can we begin to pursue that agenda in earnest.”

    A 2020 study by the Pew Research Center reported that a majority of Americans believe social media companies censor political views. Pew found that 90% of Republicans believed views were being censored, and 69% of Republicans or people who leant Republican believed social media companies “generally support the views of liberals over conservatives”.

    Republicans including Trump have pushed to repeal section 230 of the Communications Decency Act, which protects social media companies from legal liability, claiming it allows platforms to suppress conservative voices. The NYU report suggests section 230 should be amended, with companies persuaded to “accept a range of new responsibilities related to policing content”, or risk losing liability protections.

  • Twitter Troll Tricked 4,900 Democrats in Vote-by-Phone Scheme, U.S. Says

    Douglass Mackey, a right-wing provocateur, was accused of spreading memes that made Hillary Clinton supporters falsely believe they could cast ballots in 2016 via text message.

    Jan. 27, 2021

    A man who was known as a far-right Twitter troll was arrested on Wednesday and charged with spreading disinformation online that tricked Democratic voters in 2016 into trying to cast their ballots by phone instead of going to the polls. Federal prosecutors accused Douglass Mackey, 31, of coordinating with co-conspirators to spread memes on Twitter falsely claiming that Hillary Clinton’s supporters could vote by sending a text message to a specific phone number.

    The co-conspirators were not named in the complaint, but one of them was Anthime Gionet, a far-right media personality known as “Baked Alaska,” who was arrested after participating in the Jan. 6 riot at the U.S. Capitol, according to a person briefed on the investigation, who spoke on the condition of anonymity to discuss an ongoing investigation.

    As a result of the misinformation campaign, prosecutors said, at least 4,900 unique phone numbers texted the number in a futile effort to cast votes for Mrs. Clinton. Mr. Mackey was arrested on Wednesday morning in West Palm Beach, Fla., in what appeared to be the first criminal case in the country involving voter suppression through the spread of disinformation on Twitter.

    “With Mackey’s arrest, we serve notice that those who would subvert the democratic process in this manner cannot rely on the cloak of internet anonymity to evade responsibility for their crimes,” said Seth DuCharme, the acting United States attorney in Brooklyn, whose office is prosecuting the case.

    Mrs. Clinton was not named in the complaint, but a person briefed on the investigation confirmed that she was the presidential candidate described in the charging documents. A lawyer for Mr. Mackey declined to comment.

    Mr. Mackey, who was released from custody on Wednesday on a $50,000 bond, faces an unusual charge: conspiracy to violate rights, which makes it illegal for people to conspire to “oppress” or “intimidate” anyone from exercising a constitutional right, such as voting. The charge carries a maximum sentence of 10 years in prison. The case will test the novel use of federal civil rights laws as a tool to hold people accountable for misinformation campaigns intended to interfere with elections, a problem that has recently become an urgent priority for social media platforms and law enforcement officials to stop.

    It has become a game of whack-a-mole to police users like Mr. Mackey, who prosecutors said would simply open new Twitter accounts after his old ones were suspended. Mr. Mackey used four different Twitter accounts from 2014 to 2018, the complaint said, always seeking to hide his true identity from the public. The goal of Mr. Mackey’s campaign, according to prosecutors, was to influence people to vote in a “legally invalid manner.”

    In 2018, Mr. Mackey was revealed to be the operator of a Twitter account using the pseudonym Ricky Vaughn, which boosted former President Donald J. Trump while spreading anti-Semitic and white nationalist propaganda. Mr. Mackey’s account had such a large following that it made the M.I.T. Media Lab’s list of the top 150 influencers in the 2016 election, ranking ahead of the Twitter accounts for NBC News, Drudge Report and CBS News.

    Twitter shut down the account in 2016, one month before the election, for violating the company’s rules by “participating in targeted abuse.” At that time, the account had about 58,000 followers. Three days later, an associate of Mr. Mackey’s opened a new account for him, prosecutors said, which was also quickly suspended.

    It was not clear how Mr. Mackey became connected to Mr. Gionet, or “Baked Alaska,” who was also a popular social media figure among white nationalists and far-right activists. A lawyer for Mr. Gionet declined to comment. Mr. Mackey is a Vermont native who graduated from Middlebury College. He worked for five years as an economist at a Brooklyn-based research firm, John Dunham & Associates, until his termination in the summer of 2016, a company representative said.

    The complaint showed a surgical precision in the disinformation campaign by Mr. Mackey and his four co-conspirators. In private group conversations on Twitter, they discussed how to insert their memes into trending conversations online, and dissected changes in wording and colors to make their messages more effective. Mr. Mackey was obsessed with his posts going viral, the complaint said, once telling his associates, “THE MEMES ARE SPREADING.” He and his co-conspirators joked about tricking “dopey” liberals.

    Their effort to misinform voters began after the group saw a similar campaign intended to deceive voters in the 2016 referendum in Britain on whether to leave the European Union, also known as Brexit, according to the complaint. Mr. Mackey and his associates created their own version, sharing photos that urged Mrs. Clinton’s supporters to vote for her on Election Day using a hashtag on Twitter or Facebook. To make the images look more legitimate, they affixed the logo of her campaign and linked to her website.

    Some of their memes appeared to target Black and Latino voters. One image had a Black woman standing in front of a sign supporting Mrs. Clinton, telling people to vote for Mrs. Clinton by texting a specific number. Mr. Mackey shared a similar image written in Spanish, prosecutors said. Less than a week before Election Day, the complaint said, Mr. Mackey sent a message on Twitter: “Obviously, we can win Pennsylvania. The key is to drive up turnout with non-college whites, and limit black turnout.”

    Around that time, Twitter began removing the images with false information and suspended Mr. Mackey’s account. But the memes had already taken on a life of their own, prosecutors said, as his associates continued to share them with a wider audience.

    Alan Feuer contributed reporting.