More stories

  • in

    How Trump Steered Supporters Into Unwitting Donations

    Online donors were guided into weekly recurring contributions. Demands for refunds spiked. Complaints to banks and credit card companies soared. But the money helped keep Donald Trump’s struggling campaign afloat.

    Stacy Blatt was in hospice care last September listening to Rush Limbaugh’s dire warnings about how badly Donald J. Trump’s campaign needed money when he went online and chipped in everything he could: $500.

    It was a big sum for a 63-year-old battling cancer and living in Kansas City on less than $1,000 per month. But that single contribution — federal records show it was his first ever — quickly multiplied. Another $500 was withdrawn the next day, then $500 the next week and every week through mid-October, without his knowledge — until Mr. Blatt’s bank account had been depleted and frozen. When his utility and rent payments bounced, he called his brother, Russell, for help.

    What the Blatts soon discovered was $3,000 in withdrawals by the Trump campaign in less than 30 days. They called their bank and said they thought they were victims of fraud.

    “It felt,” Russell said, “like it was a scam.”

    But what the Blatts believed was duplicity was actually an intentional scheme by the Trump campaign and the for-profit company that processed its online donations, WinRed, to boost revenue. Facing a cash crunch and getting badly outspent by the Democrats, the campaign had begun last September to set up recurring donations by default for online donors, for every week until the election.

    Contributors had to wade through a fine-print disclaimer and manually uncheck a box to opt out.

    As the election neared, the Trump team made that disclaimer increasingly opaque, an investigation by The New York Times showed. It introduced a second prechecked box, known internally as a “money bomb,” that doubled a person’s contribution.
    Eventually its solicitations featured lines of text in bold and capital letters that overwhelmed the opt-out language.

    The tactic ensnared scores of unsuspecting Trump loyalists — retirees, military veterans, nurses and even experienced political operatives. Soon, banks and credit card companies were inundated with fraud complaints from the president’s own supporters about donations they had not intended to make, sometimes for thousands of dollars.

    “Bandits!” said Victor Amelino, a 78-year-old Californian, who made a $990 online donation to Mr. Trump in early September via WinRed. It recurred seven more times — adding up to almost $8,000. “I’m retired. I can’t afford to pay all that damn money.”

    The sheer magnitude of the money involved is staggering for politics. In the final two and a half months of 2020, the Trump campaign, the Republican National Committee and their shared accounts issued more than 530,000 refunds worth $64.3 million to online donors. All campaigns make refunds for various reasons, including to people who give more than the legal limit. But the sum the Trump operation refunded dwarfed that of Joseph R. Biden Jr.’s campaign and his equivalent Democratic committees, which made 37,000 online refunds totaling $5.6 million in that time.

    The recurring donations swelled Mr. Trump’s treasury in September and October, just as his finances were deteriorating. He was then able to use tens of millions of dollars he raised after the election, under the guise of fighting his unfounded fraud claims, to help cover the refunds he owed.

    In effect, the money that Mr. Trump eventually had to refund amounted to an interest-free loan from unwitting supporters at the most important juncture of the 2020 race.

    Russell Blatt’s brother, Stacy, who was a supporter of Mr. Trump, died of cancer in February. Credit…Katie Currid for The New York Times

    Marketers have long used ruses like prechecked boxes to steer American consumers into unwanted purchases, like magazine subscriptions.
    But consumer advocates said deploying the practice on voters in the heat of a presidential campaign — at such volume and with withdrawals every week — had much more serious ramifications.

    “It’s unfair, it’s unethical and it’s inappropriate,” said Ira Rheingold, the executive director of the National Association of Consumer Advocates.

    Harry Brignull, a user-experience designer in London who coined the term “dark patterns” for manipulative digital marketing practices, said the Trump team’s techniques were a classic of the “deceptive design” genre.

    “It should be in textbooks of what you shouldn’t do,” he said.

    Political strategists, digital operatives and campaign finance experts said they could not recall ever seeing refunds at such a scale. Mr. Trump, the R.N.C. and their shared accounts refunded far more money to online donors in the last election cycle than every federal Democratic candidate and committee in the country combined.

    Over all, the Trump operation refunded 10.7 percent of the money it raised on WinRed in 2020; the Biden operation’s refund rate on ActBlue, the parallel Democratic online donation-processing platform, was 2.2 percent, federal records show.

    [Chart: How Refunds to Trump Donors Soared in 2020. Refunds are shown as the percentage of money received by each operation to date via WinRed and ActBlue.]
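The arithmetic of the prechecked defaults described above can be made concrete with a short sketch. This is a hypothetical illustration, not any real payment system's code: the function and parameter names are invented, and only the dollar figures come from the article's example of Mr. Blatt.

```python
# Hypothetical sketch of how prechecked defaults compound one intended gift:
# a "money bomb" box doubles it, and a recurrence box adds a charge each week
# until the donor notices and opts out.

def total_withdrawn(intended, extra_weeks, money_bomb=True, opted_out=False):
    """Total charged if the donor never unchecks the prechecked boxes."""
    if opted_out:
        return intended                  # donor found and cleared the boxes
    total = intended                     # the gift the donor meant to make
    if money_bomb:
        total += intended                # second prechecked box doubles it
    total += intended * extra_weeks      # one recurring charge per week
    return total

# One intended $500 gift, a next-day doubling, then four more weekly
# charges reaches the $3,000 the Blatts discovered.
print(total_withdrawn(500, extra_weeks=4))  # 3000
```

The sketch also shows why refund volume tracks the design: every charge past the first exists only because a default stayed checked.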

  • in

    I Used to Think the Remedy for Bad Speech Was More Speech. Not Anymore.

    I used to believe that the remedy for bad speech is more speech. Now that seems archaic. Just as the founders never envisioned how the right of a well-regulated militia to own slow-loading muskets could apply to mass murderers with bullet-spewing military-style semiautomatic rifles, they could not have foreseen speech so twisted to malevolent intent as it is now.

    Cyber-libertarianism, the ethos of the internet with roots in 18th-century debate about the free market of ideas, has failed us miserably. Well after the pandemic is over, the infodemic will rage on — so long as it pays to lie, distort and misinform.

    Just recently, we saw the malignancies of our premier freedoms on display in the mass shooting in Boulder, Colo. At the center of the horror was a deeply disturbed man with a gun created for war, with the capacity to kill large numbers of humans, quickly. Within hours of the slaughter at the supermarket, a Facebook account with about 60,000 followers wrote that the shooting was fake — a so-called false flag, meant to cast blame on the wrong person.

    So it goes. Toxic misinformation, like AR-15-style weapons in the hands of men bent on murder, is just something we’re supposed to live with in a free society. But there are three things we could do now to clean up the river of falsities poisoning our democracy.

    First, teach your parents well. Facebook users over the age of 65 are far more likely to post articles from fake news sites than people under the age of 30, according to multiple studies.

    Certainly, the “I don’t know it for a fact, I just know it’s true” sentiment, as the Bill Maher segment has it, is not limited to seniors. But too many older people lack the skills to detect a viral falsity.

    That’s where the kids come in. March 18 was “MisinfoDay” in many Washington State high schools. On that day, students were taught how to spot a lie — training they could share with their parents and grandparents.

    Media literacy classes have been around for a while.
    No one should graduate from high school without being equipped with the tools to recognize bogus information. It’s like elementary civics. By extension, we should encourage the informed young to pass this on to their misinformed elders.

    Second, sue. What finally made the misinformation merchants on television and the web close the spigot on the Big Lie about the election were lawsuits seeking billions. Dominion Voting Systems and Smartmatic, two election technology companies, sued Fox News and others, claiming defamation.

    “Lies have consequences,” Dominion’s lawyers wrote in their complaint. “Fox sold a false story of election fraud in order to serve its own commercial purposes, severely injuring Dominion in the process.”

    In response to the Smartmatic suit, Fox said, “This lawsuit strikes at the heart of the news media’s First Amendment mission to inform on matters of public concern.” No, it doesn’t. There is no “mission” to misinform.

    The fraudsters didn’t even pretend they weren’t peddling lies. Sidney Powell, the lawyer who was one of the loudest promoters of the falsehood that Donald Trump won the election, was named in a Dominion lawsuit. “No reasonable person would conclude that the statements were truly statements of fact,” her lawyers wrote, absurdly, of her deception.

    Tell that to the majority of Republican voters who said they believed the election was stolen. They didn’t see the wink when Powell went on Fox and Newsmax to claim a massive voter fraud scheme.

    Dominion should sue Trump, the man at the top of the falsity food chain. The ex-president has shown he will repeat a lie over and over until it hurts him financially. That’s how the system works. And the bar for a successful libel suit, it should be noted, is very high.

    Finally, we need to disincentivize social media giants from spreading misinformation. This means striking at the algorithms that drive traffic — the lines of code that push people down rabbit holes of unreality.

    The Capitol Hill riot on Jan. 6 might not have happened without the platforms that spread false information, while fattening the fortunes of social media giants.

    “The last few years have proven that the more outrageous and extremist content social media platforms promote, the more engagement and advertising dollars they rake in,” said Representative Frank Pallone Jr., chairman of the House committee that recently questioned big tech chief executives.

    Taking away their legal shield — Section 230 of the Communications Decency Act — is the strongest threat out there. Sure, removing social media’s immunity from the untruthful things said on their platforms could mean the end of the internet as we know it. True. But that’s not necessarily a bad thing.

    So far, the threat has been mostly idle — all talk. At the least, lawmakers could more effectively use this leverage to force social media giants to redo their recommendation algorithms, making bogus information less likely to spread. When YouTube took such a step, promotion of conspiracy theories decreased significantly, according to researchers at the University of California, Berkeley, who published their findings in March 2020.

    Republicans may resist most of the above. Lies help them stay in power, and a misinformed public is good for their legislative agenda. They’re currently pushing a wave of voter suppression laws to fix a problem that doesn’t exist.

    I still believe the truth may set us free. But it has little chance of surviving amid the babble of orchestrated mendacity.

    Timothy Egan (@nytegan) is a contributing opinion writer who covers the environment, the American West and politics. He is a winner of the National Book Award and author, most recently, of “A Pilgrimage to Eternity.”

  • in

    Zuckerberg, Dorsey and Pichai testify about disinformation.

    The chief executives of Google, Facebook and Twitter are testifying at the House on Thursday about how disinformation spreads across their platforms, an issue for which the tech companies were scrutinized during the presidential election and after the Jan. 6 riot at the Capitol.

    The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

    The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

    Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

    Republicans sent the executives letters this month asking them about the decisions to remove conservative personalities and stories from their platforms, including an October article in The New York Post about President Biden’s son Hunter.

    Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

    Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

    “By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

    The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.

  • in

    How Anti-Asian Activity Online Set the Stage for Real-World Violence

    On platforms such as Telegram and 4chan, racist memes and posts about Asian-Americans have created fear and dehumanization.

    In January, a new group popped up on the messaging app Telegram, named after an Asian slur. Hundreds of people quickly joined. Many members soon began posting caricatures of Asians with exaggerated facial features, memes of Asian people eating dog meat and images of American soldiers inflicting violence during the Vietnam War.

    This week, after a gunman killed eight people — including six women of Asian descent — at massage parlors in and near Atlanta, the Telegram channel linked to a poll that asked, “Appalled by the recent attacks on Asians?” The top answer, with 84 percent of the vote, was that the violence was “justified retaliation for Covid.”

    The Telegram group was a sign of how anti-Asian sentiment has flared up in corners of the internet, amplifying racist and xenophobic tropes just as attacks against Asian-Americans have surged. On messaging apps like Telegram and on internet forums like 4chan, anti-Asian groups and discussion threads have been increasingly active since November, especially on far-right message boards such as The Donald, researchers said.

    The activity follows a rise in anti-Asian misinformation last spring after the coronavirus, which first emerged in China, began spreading around the world. On Facebook and Twitter, people blamed the pandemic on China, with users posting hashtags such as #gobacktochina and #makethecommiechinesepay. Those hashtags spiked when former President Donald J. Trump last year called Covid-19 the “Chinese virus” and “Kung Flu.”

    While some of the online activity tailed off ahead of the November election, its re-emergence has helped lay the groundwork for real-world actions, researchers said.
    The fatal shootings in Atlanta this week, which have led to an outcry over treatment of Asian-Americans even as the suspect said he was trying to cure a “sexual addiction,” were preceded by a swell of racially motivated attacks against Asian-Americans in places like New York and the San Francisco Bay Area, according to the advocacy group Stop AAPI Hate.

    “Surges in anti-Asian rhetoric online means increased risk of real-world events targeting that group of people,” said Alex Goldenberg, an analyst at the Network Contagion Research Institute at Rutgers University, which tracks misinformation and extremism online.

    He added that the anti-China coronavirus misinformation — including the false narrative that the Chinese government purposely created Covid-19 as a bioweapon — had created an atmosphere of fear and invective.

    Anti-Asian speech online has typically not been as overt as that of anti-Semitic or anti-Black groups, memes and posts, researchers said. On Facebook and Twitter, posts expressing anti-Asian sentiments have often been woven into conspiracy theory groups such as QAnon and into white nationalist and pro-Trump enclaves. Mr. Goldenberg said forms of hatred against Black people and Jews have deep roots in extremism in the United States and that the anti-Asian memes and tropes have been more “opportunistically weaponized.”

    But that does not make the anti-Asian hate speech online less insidious. Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation, said the misinformation and racist speech have led to a “dehumanization” of certain groups of people and to an increased risk of violence.

    Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus.
    That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets.

    Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, usage of the term “Chinese virus” jumped 650 percent on Twitter; a day later there was an 800 percent increase in the terms’ usage in conservative news articles, the study found.

    Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment to an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

    In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese.

    “There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

    Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment.

    Misinformation linking the coronavirus to anti-Asian beliefs also rose last year.
    Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm.

    In one example, a Fox News article from April that went viral baselessly said that the coronavirus was created in a lab in the Chinese city of Wuhan and intentionally released. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.

    By the middle of last year, the misinformation had started subsiding as election-related commentary increased. The anti-Asian sentiment ended up migrating to platforms like 4chan and Telegram, researchers said.

    But it still occasionally flared up, such as when Dr. Li-Meng Yan, a researcher from Hong Kong, made unproven assertions last fall that the coronavirus was a bioweapon engineered by China. In the United States, Dr. Yan became a right-wing media sensation. Her appearance on Tucker Carlson’s Fox News show in September has racked up at least 8.8 million views online.

    In November, anti-Asian speech surged anew. That was when conspiracies about a “new world order” related to President Biden’s election victory began circulating, said researchers from the Network Contagion Research Institute. Some posts that went viral painted Mr. Biden as a puppet of the Chinese Communist Party.

    In December, slurs about Asians and the term “Kung Flu” rose by 65 percent on websites and apps like Telegram, 4chan and The Donald, compared with the monthly average mentions from the previous 11 months on the same platforms, according to the Network Contagion Research Institute.
    The activity remained high in January and last month.

    During this second surge, calls for violence against Asian-Americans became commonplace. “Filipinos are not Asians because Asians are smart,” read a post in a Telegram channel that depicted a dog holding a gun to its head.

    After the shootings in Atlanta, a doctored screenshot of what looked like a Facebook post from the suspect circulated on Facebook and Twitter this week. The post featured a miasma of conspiracies about China engaging in a Covid-19 cover-up and wild theories about how it was planning to “secure global domination for the 21st century.”

    Facebook and Twitter eventually ruled that the screenshot was fake and blocked it. But by then, the post had been shared and liked hundreds of times on Twitter and more than 4,000 times on Facebook.

    Ben Decker

  • in

    Fixing What the Internet Broke

    On Tech

    How sites like Facebook and Twitter can help reduce election misinformation.

    Credit…Angie Wang

    March 4, 2021, 12:26 p.m. ET

    This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

    January’s riot at the U.S. Capitol showed the damage that can result when millions of people believe an election was stolen despite no evidence of widespread fraud.

    The Election Integrity Partnership, a coalition of online information researchers, published this week a comprehensive analysis of the false narrative of the presidential contest and recommended ways to avoid a repeat.

    Internet companies weren’t solely to blame for the fiction of a stolen election, but the report concluded that they were hubs where false narratives were incubated, reinforced and cemented. I’m going to summarize here three of the report’s intriguing suggestions for how companies such as Facebook, YouTube and Twitter can change to help create a healthier climate of information about elections and everything else.

    One broad point: It can feel as if the norms and behaviors of people online are immutable and inevitable, but they’re not. Digital life is still relatively new, and what’s good or toxic is the result of deliberate choices by companies and all of us. We can fix what’s broken. And as another threat against the Capitol this week shows, it’s imperative we get this right.

    1) A higher bar for people with the most influence and the repeat offenders: Kim Kardashian can change more minds than your dentist.
    And research about the 2020 election has shown that a relatively small number of prominent organizations and people, including President Donald Trump, played an outsize role in establishing the myth of a rigged vote.

    Currently, sites like Facebook and YouTube mostly consider the substance of a post or video, divorced from the messenger, when determining whether it violates their policies. World leaders are given more leeway than the rest of us, and other prominent people sometimes get a pass when they break the companies’ guidelines.

    This doesn’t make sense. If internet companies did nothing else, it would make a big difference if they changed how they treated the influential people who were most responsible for spreading falsehoods or twisted facts — and tended to do so again and again.

    The EIP researchers suggested three changes: create stricter rules for influential people; prioritize faster decisions on prominent accounts that have broken the rules before; and escalate consequences for habitual superspreaders of bogus information.

    YouTube has long had such a “three strikes” system for accounts that repeatedly break its rules, and Twitter recently adopted versions of this system for posts that it considers misleading about elections or coronavirus vaccinations.

    The hard part, though, is not necessarily making policies. It’s enforcing them when doing so could trigger a backlash.

    2) Internet companies should tell us what they’re doing and why: Big websites like Facebook and Twitter have detailed guidelines about what’s not allowed — for example, threatening others with violence or selling drugs.

    But internet companies often apply their policies inconsistently and don’t always provide clear reasons when people’s posts are flagged or deleted.
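The "three strikes" escalation with a stricter bar for influential accounts, described under point 1 above, can be sketched in a few lines. This is a hypothetical model of the idea, not any platform's actual rules: the account type, follower threshold, and penalty tiers are all invented for illustration.

```python
# Hypothetical sketch of an escalating-strikes moderation policy:
# repeat offenders climb penalty tiers, and influential accounts
# (here, a made-up 1M-follower threshold) escalate one tier faster.

from dataclasses import dataclass

PENALTIES = ["warning", "temporary_suspension", "permanent_ban"]

@dataclass
class Account:
    handle: str
    followers: int
    strikes: int = 0

def penalty(account):
    """Penalty tier implied by the account's current strike count."""
    if account.strikes == 0:
        return None                      # no violations on record
    tier = account.strikes - 1           # one tier per strike
    if account.followers >= 1_000_000:
        tier += 1                        # influential accounts skip the warning
    return PENALTIES[min(tier, len(PENALTIES) - 1)]

celeb = Account("bigname", followers=5_000_000, strikes=1)
small = Account("smalltimer", followers=200, strikes=1)
print(penalty(celeb))  # temporary_suspension: first strike already escalates
print(penalty(small))  # warning
```

The point of the model is the asymmetry: the same first violation produces only a warning for an ordinary account but an immediate suspension for a high-reach one, which is the "higher bar" the researchers recommend.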
    The EIP report suggested that online companies do more to inform people about their guidelines and share evidence to support why a post broke the rules.

    3) More visibility and accountability for internet companies’ decisions: News organizations have reported on Facebook’s own research identifying ways that its computer recommendations steered some to fringe ideas and made people more polarized. But Facebook and other internet companies mostly keep such analyses a secret.

    The EIP researchers suggested that internet companies make public their research into misinformation and their assessments of attempts to counter it. That could improve people’s understanding of how these information systems work.

    The report also suggested a change that journalists and researchers have long wanted: ways for outsiders to see posts that have been deleted by the internet companies or labeled false. This would allow accountability for the decisions that internet companies make.

    There are no easy fixes to building Americans’ trust in a shared set of facts, particularly when internet sites enable lies to travel farther and faster than the truth. But the EIP recommendations show we do have options and a path forward.

    Before we go …

    Amazon goes big(ger) in New York: My colleagues Matthew Haag and Winnie Hu wrote about Amazon opening more warehouses in New York neighborhoods and suburbs to make faster deliveries. A related On Tech newsletter from 2020: Why Amazon needs more package hubs closer to where people live.

    Our homes are always watching: Law enforcement officials have increasingly sought videos from internet-connected doorbell cameras to help solve crimes, but The Washington Post writes that the cameras have sometimes been a risk to them, too. In Florida, a man saw F.B.I. agents coming through his home camera and opened fire, killing two people.

    Square is buying Jay-Z’s streaming music service: Yes, the company that lets the flea market vendor swipe your credit card is going to own a streaming music company. No, it doesn’t make sense. (Square said it’s about finding new ways for musicians to make money.)

    Hugs to this: A kitty cat wouldn’t budge from the roof of a train in London for about two and a half hours. Here are way too many silly jokes about the train-surfing cat. (Or maybe JUST ENOUGH SILLY JOKES?)

  • in

    Facebook Ends Ban on Political Advertising

    The social network had prohibited political ads on its site indefinitely after the November election. Such ads have been criticized for spreading misinformation.

    Mark Zuckerberg, the Facebook chief executive, testifying in October. Before the ban on political ads, he had said he wanted to maintain a hands-off approach toward speech on Facebook. Credit…Pool photo by Michael Reynolds

    March 3, 2021, updated 6:16 p.m. ET

    SAN FRANCISCO — Facebook said on Wednesday that it planned to lift its ban on political advertising across its network, resuming a form of digital promotion that has been criticized for spreading misinformation and falsehoods and inflaming voters.

    The social network said it would allow advertisers to buy new ads about “social issues, elections or politics” beginning on Thursday, according to a copy of an email sent to political advertisers and viewed by The New York Times. Those advertisers must complete a series of identity checks before being authorized to place the ads, the company said.

    “We put this temporary ban in place after the November 2020 election to avoid confusion or abuse following Election Day,” Facebook said in a blog post. “We’ve heard a lot of feedback about this and learned more about political and electoral ads during this election cycle. As a result, we plan to use the coming months to take a closer look at how these ads work on our service to see where further changes may be merited.”

    Political advertising on Facebook has long faced questions.
    Mark Zuckerberg, Facebook’s chief executive, has said he wished to maintain a largely hands-off stance toward speech on the site — including political ads — unless it posed an immediate harm to the public or individuals, saying that he “does not want to be the arbiter of truth.”

    But after the 2016 presidential election, the company and intelligence officials discovered that Russians had used Facebook ads to sow discontent among Americans. Former President Donald J. Trump also used Facebook’s political ads to amplify claims about an “invasion” on the Mexican border in 2019, among other incidents.

    Facebook had banned political ads late last year as a way to choke off misinformation and threats of violence around the November presidential election. In September, the company said it planned to forbid new political ads for the week before Election Day and would act swiftly against posts that tried to dissuade people from voting. Then in October, Facebook expanded that action by declaring it would prohibit all political and issue-based advertising after the polls closed on Nov. 3 for an undetermined length of time.

    The company eventually clamped down on groups and pages that spread certain kinds of misinformation, such as discouraging people from voting or registering to vote. It has spent billions of dollars to root out foreign influence campaigns and other types of meddling from malicious state agencies and other bad actors.

    In December, Facebook lifted the ban to allow some advertisers to run political issue and candidacy ads in Georgia for the January runoff Senate election in the state. But the ban otherwise remained in effect for the other 49 states.

    Attitudes around how political advertising should be treated across Facebook are decidedly mixed. Politicians who are not well known can often raise their profile and awareness of their campaigns by using Facebook.

    “Political ads are not bad things in and of themselves,” said Siva Vaidhyanathan, a media studies professor and the author of a book studying Facebook’s effects on democracy. “They perform an essential service, in the act of directly representing the candidate’s concerns or positions.”

    He added, “When you ban all campaign ads on the most accessible and affordable platform out there, you tilt the balance toward the candidates who can afford radio and television.”

    Representative Alexandria Ocasio-Cortez, Democrat of New York, has also said that political advertising on Facebook can be a crucial component of Democratic digital campaign strategies.

    Some political ad buyers applauded the lifting of the ban.

    “The ad ban was something that Facebook did to appease the public for the misinformation that spread across the platform,” said Eileen Pollet, a digital campaign strategist and founder of Ravenna Strategies. “But it really ended up hurting good actors while bad actors had total free rein. And now, especially since the election is over, the ban had really been hurting nonprofits and local organizations.”

    Facebook has long sought to thread the needle between forceful moderation of its policies and a lighter touch. For years, Mr. Zuckerberg defended politicians’ right to say what they wanted on Facebook, but that changed last year amid rising alarm over potential violence around the November election.

    In January, Facebook barred Mr. Trump from using his account and posting on the platform after he took to social media to delegitimize the election results and incited a violent uprising among his supporters, who stormed the U.S. Capitol.

    Facebook said Mr. Trump’s suspension was “indefinite.” The decision is now under review by the Facebook Oversight Board, a third-party entity created by the company and composed of journalists, academics and others, which adjudicates some of the company’s thorny content policy enforcement decisions. A decision is expected within the next few months.

    On Thursday, political advertisers on Facebook will be able to submit new ads or turn on existing political ads that have already been approved, the company said. Each ad will appear with a small disclaimer stating that it has been “paid for by” a political organization. For those buying new ads, Facebook said it could take up to a week to clear the identity authorization and advertising review process. More

  • in

    Twitter will test letting some users fact-check tweets.

    Tracking Viral Misinformation

    Jan. 25, 2021, 1:00 p.m. ET

    False claims about the coronavirus and the election remain common on Twitter. Credit…Thomas White/Reuters

    Twitter said on Monday that it would allow some users to fact-check misleading tweets, the latest effort by the company to combat misinformation.

    Users who join the program, called Birdwatch, can add notes to rebut false or misleading posts and rate the reliability of the fact-checking annotations made by other users. Users in the United States who verify their email addresses and phone numbers with Twitter, and who have not violated Twitter’s rules in recent months, can apply to join Birdwatch.

    Twitter will start Birdwatch as a small pilot program with 1,000 users, and the fact-checking they produce will not be visible on Twitter but will appear on a separate site. If the experiment is successful, Twitter plans to expand the program to more than 100,000 people in the coming months and will make their contributions visible to all users.

    Twitter continues to grapple with misinformation on the platform. In the months before the U.S. presidential election, Twitter added fact-check labels written by its own employees to tweets from prominent accounts, temporarily disabled its recommendation algorithm, and added more context to trending topics. Still, false claims about the coronavirus and elections have proliferated on Twitter despite the company’s efforts to remove them. But Twitter has also faced backlash from some users who have argued that the company removes too much information.

    Giving some control over moderation directly to users could help restore trust and allow the company to move more quickly to address false claims, Twitter said.

    “We apply labels and add context to tweets, but we don’t want to limit efforts to circumstances where something breaks our rules or receives widespread public attention,” Keith Coleman, a vice president of product at Twitter, wrote in a blog post announcing the program. “We also want to broaden the range of voices that are part of tackling this problem, and we believe a community-driven approach can help.” More

  • in

    Bernie Sanders, the Internet Loves You

    The Vermont senator at a news conference in Mexico, on the “Star Trek” spaceship, in a Leonardo da Vinci fresco. Sanders is, once again, the star of a meme.

    Senator Bernie Sanders of Vermont comfortably watching the inauguration ceremonies on Wednesday. Credit…Brendan Smialowski/Agence France-Presse — Getty Images

    Mike Ives, Jan. 21, 2021, 12:14 ET

    Senator Bernie Sanders of Vermont is a fervent advocate of fair wages and a former presidential candidate who lost the Democratic nomination to now-President Joe Biden. Thanks to his practical wardrobe choices, he is also now at the center of a seemingly endless flood of altered photos that dominated some corners of the internet in the hours after Biden’s socially distanced inauguration on Wednesday.

    Amid the dark suits and bright coats dotting the Capitol steps, Sanders was photographed sitting in a mask, legs crossed, bundled in a bulky coat and mittens against the frigid Washington, D.C., weather. Soon after, the image, taken by the photographer Brendan Smialowski for Getty Images, began circulating on social media, inserted into a wide range of photographs, movie scenes and works of art.

    “This could’ve been an email” pic.twitter.com/kn68z6eDhY— Ashley K. (@AshleyKSmalls) January 20, 2021

    On a day that was all about Biden, it was somehow fitting that Sanders, whose strongest political support in the presidential race came from young voters, was nevertheless the star of the day’s biggest meme for doing nothing more than sitting down and crossing his arms. In the primaries, Sanders enjoyed a significantly larger online following than Biden, especially among those who tend to communicate through memes.

    Although other memes starring Sanders were often used to make a point (he wore what appears to be the same coat in a 2019 fundraising video in which he is “once again asking for your financial support,” a line that has been repurposed in countless ways), there was no such deeper meaning in the latest meme. Rather than using his image to advance an idea, people simply placed him in new contexts, with his pose, his outfit and his expression as the joke.

    Although the day belonged to Biden, the meme served as an amusing sideshow, a bit of fun and frivolity after four years in which presidential politics gave Sanders’s supporters few reasons to be in good spirits.

    It was not the only meme inspired by Inauguration Day: others focused on the outfit of the former first lady Michelle Obama and on Lady Gaga, who sang the national anthem dressed not unlike a character from The Hunger Games. But even with Janet Yellen, Biden’s nominee for Treasury secretary, bundled up just as warmly as the senator, it was Sanders, the incoming chairman of the Senate Budget Committee, who seemed to be the favorite.

    The first posts about him began as simple reviews of his practical, relatively unglamorous outfit. Some people saw their uncles and fathers in his choice to put warmth over style.

    Then came the memes, in which social media users took the original image of Sanders and found new settings for him and his coat. They inserted him into history. They sat him at the bowling alley with The Dude. He enjoyed the sun on a closed state beach in New Jersey with that state’s former governor, Chris Christie.

    Others took Sanders’s image to the movies, showing him on the bridge of the Enterprise in Star Trek and as a member of the Avengers.

    The National Bobblehead Hall of Fame cashed in by selling its own version of the pose. Nick Sawhney, a software engineer in New York, built a tool that lets people insert Sanders into any Google Maps Street View address.

    Some posts were political. Others were “possibly blasphemous.”

    The senator’s avatar seemed busy. He visited a museum and sat on the Iron Throne from Game of Thrones. He dropped in on a curling match and snuck into a Leonardo da Vinci painting.

    He made a cameo in Mario Kart, a news conference in Mexico and a trip to the surface of the moon. He took a tour of New York City.

    BuzzFeed News reported that Sanders got his mittens from Jen Ellis, a second-grade teacher in Essex Junction, Vt. She said she sent him a pair after he lost the 2016 Democratic presidential nomination.

    Ellis tweeted that the mittens were made of repurposed wool and lined with fleece.

    In an interview with CBS, Sanders laughed off the attention.

    “In Vermont, we dress warm, we know something about the cold,” he told Gayle King. “And we’re not so concerned about good fashion. We just want to keep warm. And that’s what I did today.”

    “Mission accomplished,” King said.

    Yonette Joseph contributed reporting.

    Daniel Victor is a London-based reporter covering a wide variety of stories with a focus on breaking news. He joined The Times in 2012 after leaving ProPublica. @bydanielvictor More