More stories

  • What Happened When Facebook Employees Warned About Election Misinformation

    Company documents show that the social network’s employees repeatedly raised red flags about the spread of misinformation and conspiracies before and after the contested November vote.

    Sixteen months before last November’s presidential election, a researcher at Facebook described an alarming development. She was getting content about the conspiracy theory QAnon within a week of opening an experimental account, she wrote in an internal report.

    On Nov. 5, two days after the election, another Facebook employee posted a message alerting colleagues that comments with “combustible election misinformation” were visible below many posts.

    Four days after that, a company data scientist wrote in a note to his co-workers that 10 percent of all U.S. views of political material — a startlingly high figure — were of posts that alleged the vote was fraudulent.

    In each case, Facebook’s employees sounded an alarm about misinformation and inflammatory content on the platform and urged action — but the company failed or struggled to address the issues. The internal dispatches were among a set of Facebook documents obtained by The New York Times that give new insight into what happened inside the social network before and after the November election, when the company was caught flat-footed as users weaponized its platform to spread lies about the vote.

  • Trump Finds Backing for His Own Media Venture

    A merger could give the former president access to nearly $300 million in cash — and perhaps a new platform.

    Former President Donald J. Trump said on Wednesday that he had lined up the investment money to create his own publicly traded media company, an attempt to reinsert himself into the online public conversation from which he has largely been absent since Twitter and Facebook banned him after the Jan. 6 insurrection.

    If finalized, the deal could give the new Trump company access to nearly $300 million in spending money.

    In a statement announcing the new venture, Mr. Trump and his investors said that the new company would be called Trump Media & Technology Group and that they would create a new social network called Truth Social. Its purpose, according to the statement, is “to create a rival to the liberal media consortium and fight back against the ‘Big Tech’ companies of Silicon Valley.”

    Since he left office and became the only American president to be impeached twice, Mr. Trump has had an active presence in conservative media. But he lacks the ability he once had to sway news cycles and dominate the national political debate. He filed a lawsuit this month asking Twitter to reinstate his account.

    The announcement on Wednesday also pointed to a promised new app listed for pre-sale on the App Store, with mock-up illustrations bearing more than a passing resemblance to Twitter.

    The details of Mr. Trump’s latest partnership were vague. The statement he issued was reminiscent of the kind of claims he made about his business dealings in New York as a real estate developer. It was replete with high-dollar amounts and superlatives that could not be verified.

    Rumors of Mr. Trump’s interest in starting his own media businesses have circulated since he was defeated in the November 2020 election. None materialized. Despite early reports that he was interested in starting a cable channel to rival Fox News, the idea never got far, given the immense costs and time required. A close adviser, Jason Miller, started a rival social media platform for Trump supporters called Gettr. But Mr. Trump never signed on.

    In a statement on Wednesday night, Mr. Miller said of his and Mr. Trump’s negotiations, “We just couldn’t come to terms on a deal.”

    Mr. Trump’s partner is Digital World Acquisition, a special purpose acquisition company, or SPAC. These so-called blank-check companies are an increasingly popular type of investment vehicle that sells shares to the public with the intention of using the proceeds to buy private businesses.

    Digital World was incorporated in Miami a month after Mr. Trump lost the 2020 election.

    The company filed for an initial public stock offering this spring, and it sold shares to the public on the Nasdaq stock exchange last month. The I.P.O. raised about $283 million, and Digital World drummed up another $11 million by selling shares to investors through a so-called private placement.

    Digital World is backed by some marquee Wall Street names and others with high-powered connections. In regulatory filings after the I.P.O., major hedge funds including D.E. Shaw, Highbridge Capital Management, Lighthouse Partners and Saba Capital Management reported owning substantial percentages of Digital World.

    Digital World’s chief executive is Patrick F. Orlando, a former employee of investment banks including Deutsche Bank, where he specialized in the trading of financial instruments known as derivatives. He created his own investment bank, Benessere Capital, in 2012, according to a recent regulatory filing.

    Digital World’s chief financial officer, Luis Orleans-Braganza, is a member of Brazil’s National Congress.

    Mr. Orlando disclosed in a recent filing that he owned nearly 18 percent of the company’s outstanding stock. Mr. Orlando and representatives for Digital World did not immediately respond to requests for comment.

    This is not Mr. Orlando’s first blank-check company. He has created at least two others, including one, Yunhong International, that is incorporated in the offshore tax haven of the Cayman Islands.

    At the time that investors bought shares in Digital World, it had not disclosed which companies, if any, it planned to acquire. On its website, Digital World said that its goal was “to focus on combining with a leading tech company.”

    At least one of the investors, Saba Capital Management, did not know at the time of the initial public offering that Digital World would be doing a transaction with Mr. Trump, according to a person familiar with the matter.

    Mr. Trump, who has repeatedly lied about the results of the 2020 election while accusing the mainstream news media of publishing “fake” stories to discredit him, leaned hard into the notion of truth as his new company’s governing ethos.

    “We live in a world where the Taliban has a huge presence on Twitter, yet your favorite American president has been silenced,” Mr. Trump said in his written statement, vowing to publish his first item soon. “This is unacceptable.”

  • YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.

    [Chart: Share of Election-Related Posts on Social Platforms Linking to Videos Making Claims of Fraud. Source: Center for Social Media and Politics at New York University. By The New York Times]

    YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.

    Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

    But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.

    The proportion fell further after Jan. 7, when YouTube announced that any channel that violated its election misinformation policy would receive a “strike,” and that channels receiving three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

    The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7.

    To reach their findings, researchers collected a random sampling of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool.

    From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud, using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021.

    Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.

    Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy. “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said.

    Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results.

    But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and in N.Y.U.’s own research.

    “It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”
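
    A minimal sketch of the kind of filtering step the researchers describe: start from a sample of tweets, keep those linking to YouTube, then flag election-fraud videos by keyword. This is illustrative only; the record fields and the two keywords (the examples cited above) stand in for the researchers’ actual code and full keyword list.

        # Python sketch of the keyword-filtering approach described above.
        # The record fields ("text", "urls") and keyword list are assumptions.
        from urllib.parse import urlparse

        FRAUD_KEYWORDS = {"stop the steal", "sharpiegate"}  # examples cited in the article

        def links_to_youtube(url):
            # True if the URL points at YouTube (youtube.com or the youtu.be shortener).
            host = urlparse(url).netloc.lower()
            return host.endswith("youtube.com") or host == "youtu.be"

        def mentions_fraud(text):
            # True if the tweet text contains any fraud-related keyword.
            lowered = text.lower()
            return any(keyword in lowered for keyword in FRAUD_KEYWORDS)

        def daily_fraud_share(tweets):
            # Share of YouTube-linking tweets whose text matches a fraud keyword.
            youtube_tweets = [t for t in tweets if any(links_to_youtube(u) for u in t["urls"])]
            if not youtube_tweets:
                return 0.0
            flagged = [t for t in youtube_tweets if mentions_fraud(t["text"])]
            return len(flagged) / len(youtube_tweets)

        # Example with two mock tweet records:
        sample = [
            {"text": "Stop the Steal rally tonight", "urls": ["https://www.youtube.com/watch?v=abc"]},
            {"text": "Election night recap", "urls": ["https://youtu.be/xyz"]},
        ]
        print(daily_fraud_share(sample))  # 0.5

    Tracking that share day by day, as the researchers did, is what produces the trend lines summarized in the chart above.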

  • Whistle-Blower to Accuse Facebook of Contributing to Jan. 6 Riot, Memo Says

    In an internal memo, Facebook defended itself and said that social media was not a primary cause of polarization.

    SAN FRANCISCO — Facebook, which has been under fire from a former employee who has revealed that the social network knew of many of the harms it was causing, was bracing for new accusations over the weekend from the whistle-blower and said in a memo that it was preparing to mount a vigorous defense.

    The whistle-blower, whose identity has not been publicly disclosed, planned to accuse the company of relaxing its security safeguards for the 2020 election too soon after Election Day, allowing the platform to be used in the storming of the U.S. Capitol on Jan. 6, according to the internal memo obtained by The New York Times. The whistle-blower planned to discuss the allegations on “60 Minutes” on Sunday, the memo said, and was also set to say that Facebook had contributed to political polarization in the United States.

    The 1,500-word memo, written by Nick Clegg, Facebook’s vice president of policy and global affairs, was sent to employees on Friday to pre-empt the whistle-blower’s interview. Mr. Clegg pushed back strongly on what he said were the coming accusations, calling them “misleading.” “60 Minutes” published a teaser of the interview in advance of its segment on Sunday.

    “Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” he wrote. “But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”

    Facebook has been in an uproar for weeks because of the whistle-blower, who has shared thousands of pages of company documents with lawmakers and The Wall Street Journal. The Journal has published a series of articles based on the documents, which show that Facebook knew how its apps and services could cause harm, including worsening body image issues among teenage girls using Instagram.

    Facebook has since scrambled to contain the fallout, as lawmakers, regulators and the public have said the company needs to account for the revelations. On Monday, Facebook paused the development of an Instagram service for children ages 13 and under. Its global head of safety, Antigone Davis, also testified on Thursday as irate lawmakers questioned her about the effects of Facebook and Instagram on young users.

    A Facebook spokesman declined to comment. A spokesman for “60 Minutes” did not immediately respond to a request for comment.

    Inside Facebook, executives including Mr. Clegg and the “Strategic Response” teams have called a series of emergency meetings to try to extinguish some of the outrage. Mark Zuckerberg, Facebook’s chief executive, and Sheryl Sandberg, the chief operating officer, have been briefed on the responses and have approved them, but have remained behind the scenes to distance themselves from the negative press, people with knowledge of the company have said.

    The firestorm is far from over. Facebook anticipated more allegations during the whistle-blower’s “60 Minutes” interview, according to the memo. The whistle-blower, who plans to reveal her identity during the interview, was set to say that Facebook had turned off some of its safety measures around the election — such as limits on live video — too soon after Election Day, the memo said. That allowed misinformation to flood the platform and groups to congregate online and plan the Jan. 6 storming of the Capitol building.

    Mr. Clegg said that was an inaccurate view and cited many of the safeguards and security mechanisms that Facebook had built over the past five years. He said the company had removed millions of groups such as the Proud Boys and others related to causes like the conspiracy theory QAnon and #StopTheSteal election fraud claims.

    The whistle-blower was also set to claim that many of Facebook’s problems stemmed from changes in the News Feed in 2018, the memo said. That was when the social network tweaked its algorithm to emphasize what it called Meaningful Social Interactions, or MSI, which prioritized posts from users’ friends and family and de-emphasized posts from publishers and brands.

    The goal was to make sure that Facebook’s products were “not just fun, but are good for people,” Mr. Zuckerberg said in an interview about the change at the time.

    But according to Friday’s memo, the whistle-blower would say that the change contributed to even more polarization among Facebook’s users. The whistle-blower was also set to say that Facebook then reaped record profits as its users flocked to the divisive content, the memo said.

    Mr. Clegg warned that the period ahead could be difficult for employees who might face questions from friends and family about Facebook’s role in the world. But he said that societal problems and political polarization have long predated the company and the advent of social networks in general.

    “The simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization,” he wrote. “Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking whatsoever, including private messaging apps like iMessage and WhatsApp.”

    Mr. Clegg, who is scheduled to appear on the CNN program “Reliable Sources” on Sunday morning, also tried to relay an upbeat note to employees.

    “We will continue to face scrutiny — some of it fair and some of it unfair,” he said in the memo. “But we should also continue to hold our heads up high.”

    Here is Mr. Clegg’s memo in full:

    OUR POSITION ON POLARIZATION AND ELECTIONS

    You will have seen the series of articles about us published in the Wall Street Journal in recent days, and the public interest it has provoked. This Sunday night, the ex-employee who leaked internal company material to the Journal will appear in a segment on 60 Minutes on CBS. We understand the piece is likely to assert that we contribute to polarization in the United States, and suggest that the extraordinary steps we took for the 2020 elections were relaxed too soon and contributed to the horrific events of January 6th in the Capitol.

    I know some of you – especially those of you in the US – are going to get questions from friends and family about these things so I wanted to take a moment as we head into the weekend to provide what I hope is some useful context on our work in these crucial areas.

    Facebook and Polarization

    People are understandably anxious about the divisions in society and looking for answers and ways to fix the problems. Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out. So it’s natural for people to ask whether it is part of the problem. But the idea that Facebook is the chief cause of polarization isn’t supported by the facts – as Chris and Pratiti set out in their note on the issue earlier this year.

    The rise of polarization has been the subject of swathes of serious academic research in recent years. In truth, there isn’t a great deal of consensus. But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.

    The increase in political polarization in the US pre-dates social media by several decades. If it were true that Facebook is the chief cause of polarization, we would expect to see it going up wherever Facebook is popular. It isn’t. In fact, polarization has gone down in a number of countries with high social media use at the same time that it has risen in the US.

    Specifically, we expect the reporting to suggest that a change to Facebook’s News Feed ranking algorithm was responsible for elevating polarizing content on the platform. In January 2018, we made ranking changes to promote Meaningful Social Interactions (MSI) – so that you would see more content from friends, family and groups you are part of in your News Feed. This change was heavily driven by internal and external research that showed that meaningful engagement with friends and family on our platform was better for people’s wellbeing, and we further refined and improved it over time as we do with all ranking metrics.

    Of course, everyone has a rogue uncle or an old school classmate who holds strong or extreme views we disagree with – that’s life – and the change meant you are more likely to come across their posts too. Even so, we’ve developed industry-leading tools to remove hateful content and reduce the distribution of problematic content. As a result, the prevalence of hate speech on our platform is now down to about 0.05%.

    But the simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization. Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking whatsoever, including private messaging apps like iMessage and WhatsApp.

    Elections and Democracy

    There’s perhaps no other topic that we’ve been more vocal about as a company than on our work to dramatically change the way we approach elections. Starting in 2017, we began building new defenses, bringing in new expertise, and strengthening our policies to prevent interference. Today, we have more than 40,000 people across the company working on safety and security.

    Since 2017, we have disrupted and removed more than 150 covert influence operations, including ahead of major democratic elections. In 2020 alone, we removed more than 5 billion fake accounts — identifying almost all of them before anyone flagged them to us. And, from March to Election Day, we removed more than 265,000 pieces of Facebook and Instagram content in the US for violating our voter interference policies.

    Given the extraordinary circumstances of holding a contentious election in a pandemic, we implemented so called “break glass” measures – and spoke publicly about them – before and after Election Day to respond to specific and unusual signals we were seeing on our platform and to keep potentially violating content from spreading before our content reviewers could assess it against our policies.

    These measures were not without trade-offs – they’re blunt instruments designed to deal with specific crisis scenarios. It’s like shutting down an entire town’s roads and highways in response to a temporary threat that may be lurking somewhere in a particular neighborhood. In implementing them, we know we impacted significant amounts of content that did not violate our rules to prioritize people’s safety during a period of extreme uncertainty. For example, we limited the distribution of live videos that our systems predicted may relate to the election. That was an extreme step that helped prevent potentially violating content from going viral, but it also impacted a lot of entirely normal and reasonable content, including some that had nothing to do with the election. We wouldn’t take this kind of crude, catch-all measure in normal circumstances, but these weren’t normal circumstances.

    We only rolled back these emergency measures – based on careful data-driven analysis – when we saw a return to more normal conditions. We left some of them on for a longer period of time through February this year and others, like not recommending civic, political or new Groups, we have decided to retain permanently.

    Fighting Hate Groups and other Dangerous Organizations

    I want to be absolutely clear: we work to limit, not expand hate speech, and we have clear policies prohibiting content that incites violence. We do not profit from polarization, in fact, just the opposite. We do not allow dangerous organizations, including militarized social movements or violence-inducing conspiracy networks, to organize on our platforms. And we remove content that praises or supports hate groups, terrorist organizations and criminal groups.

    We’ve been more aggressive than any other internet company in combating harmful content, including content that sought to delegitimize the election. But our work to crack down on these hate groups was years in the making. We took down tens of thousands of QAnon pages, groups and accounts from our apps, removed the original #StopTheSteal Group, and removed references to Stop the Steal in the run up to the inauguration. In 2020 alone, we removed more than 30 million pieces of content violating our policies regarding terrorism and more than 19 million pieces of content violating our policies around organized hate. We designated the Proud Boys as a hate organization in 2018 and we continue to remove praise, support, and representation of them. Between August last year and January 12 this year, we identified nearly 900 militia organizations under our Dangerous Organizations and Individuals policy and removed thousands of Pages, groups, events, Facebook profiles and Instagram accounts associated with these groups.

    This work will never be complete. There will always be new threats and new problems to address, in the US and around the world. That’s why we remain vigilant and alert – and will always have to.

    That is also why the suggestion that is sometimes made that the violent insurrection on January 6 would not have occurred if it was not for social media is so misleading. To be clear, the responsibility for those events rests squarely with the perpetrators of the violence, and those in politics and elsewhere who actively encouraged them. Mature democracies in which social media use is widespread hold elections all the time – for instance Germany’s election last week – without the disfiguring presence of violence. We actively share with Law Enforcement material that we can find on our services related to these traumatic events. But reducing the complex reasons for polarization in America – or the insurrection specifically – to a technological explanation is woefully simplistic.

    We will continue to face scrutiny – some of it fair and some of it unfair. We’ll continue to be asked difficult questions. And many people will continue to be skeptical of our motives. That’s what comes with being part of a company that has a significant impact in the world. We need to be humble enough to accept criticism when it is fair, and to make changes where they are justified. We aren’t perfect and we don’t have all the answers. That’s why we do the sort of research that has been the subject of these stories in the first place. And we’ll keep looking for ways to respond to the feedback we hear from our users, including testing ways to make sure political content doesn’t take over their News Feeds.

    But we should also continue to hold our heads up high. You and your teams do incredible work. Our tools and products have a hugely positive impact on the world and in people’s lives. And you have every reason to be proud of that work.

  • Germany Struggles to Stop Online Abuse Ahead of Election

    Scrolling through her social media feed, Laura Dornheim is regularly stopped cold by a new blast of abuse aimed at her, including from people threatening to kill or sexually assault her. One person last year said he looked forward to meeting her in person so he could punch her teeth out.

    Ms. Dornheim, a candidate for Parliament in Germany’s election on Sunday, is often attacked for her support of abortion rights, gender equality and immigration. She flags some of the posts to Facebook and Twitter, hoping that the platforms will delete the posts or that the perpetrators will be barred. She’s usually disappointed.

    “There might have been one instance where something actually got taken down,” Ms. Dornheim said.

    Harassment and abuse are all too common on the modern internet. Yet it was supposed to be different in Germany. In 2017, the country enacted one of the world’s toughest laws against online hate speech. It requires Facebook, Twitter and YouTube to remove illegal comments, pictures or videos within 24 hours of being notified about them or risk fines of up to 50 million euros, or $59 million. Supporters hailed it as a watershed moment for internet regulation and a model for other countries.

    But an influx of hate speech and harassment in the run-up to the German election, in which the country will choose a new leader to replace Angela Merkel, its longtime chancellor, has exposed some of the law’s weaknesses. Much of the toxic speech, researchers say, has come from far-right groups and is aimed at intimidating female candidates like Ms. Dornheim.

    Some critics of the law say it is too weak, with limited enforcement and oversight. They also maintain that many forms of abuse are deemed legal by the platforms, such as certain kinds of harassment of women and public officials. And when companies do remove illegal material, critics say, they often do not alert the authorities or share information about the posts, making prosecutions of the people publishing the material far more difficult. Another loophole, they say, is that smaller platforms like the messaging app Telegram, popular among far-right groups, are not subject to the law.

    Free-expression groups criticize the law on other grounds. They argue that the law should be abolished not only because it fails to protect victims of online abuse and harassment, but also because it sets a dangerous precedent for government censorship of the internet.

    The country’s experience may shape policy across the continent. German officials are playing a key role in drafting one of the world’s most anticipated new internet regulations, a European Union law called the Digital Services Act, which will require Facebook and other online platforms to do more to address the vitriol, misinformation and illicit content on their sites. Ursula von der Leyen, a German who is president of the European Commission, the 27-nation bloc’s executive arm, has called for an E.U. law that would list gender-based violence as a special crime category, a proposal that would include online attacks.

    “Germany was the first to try to tackle this kind of online accountability,” said Julian Jaursch, a project director at the German think tank Stiftung Neue Verantwortung, which focuses on digital issues. “It is important to ask whether the law is working.”

    [Photo: Campaign billboards in Germany’s race for chancellor, showing, from left, Annalena Baerbock of the Green Party, Olaf Scholz of the Social Democrats and Christian Lindner of the Free Democrats. Credit: Sean Gallup/Getty Images]

    Marc Liesching, a professor at HTWK Leipzig who published an academic report on the policy, said that of the posts that had been deleted by Facebook, YouTube and Twitter, a vast majority were classified as violating company policies, not the hate speech law. That distinction makes it harder for the government to measure whether companies are complying with the law. In the second half of 2020, Facebook removed 49 million pieces of “hate speech” based on its own community standards, compared with the 154 deletions that it attributed to the German law, he found.

    The law, Mr. Liesching said, “is not relevant in practice.”

    With its history of Nazism, Germany has long tried to balance free speech rights against a commitment to combat hate speech. Among Western democracies, the country has some of the world’s toughest laws against incitement to violence and hate speech. Targeting religious, ethnic and racial groups is illegal, as are Holocaust denial and displaying Nazi symbols in public.

    To address concerns that companies were not alerting the authorities to illegal posts, German policymakers this year passed amendments to the law. They require Facebook, Twitter and YouTube to turn over data to the police about accounts that post material that German law would consider illegal speech. The Justice Ministry was also given more powers to enforce the law.

    “The aim of our legislative package is to protect all those who are exposed to threats and insults on the internet,” Christine Lambrecht, the justice minister, who oversees enforcement of the law, said after the amendments were adopted. “Whoever engages in hate speech and issues threats will have to expect to be charged and convicted.”

    [Photo: Germans will vote for a leader to replace Angela Merkel, the country’s longtime chancellor. Credit: Markus Schreiber/Associated Press]

    Facebook and Google have filed a legal challenge to block the new rules, arguing that providing the police with personal information about users violates their privacy.

    Facebook said that as part of an agreement with the government it now provided more figures about the complaints it received. From January through July, the company received more than 77,000 complaints, which led it to delete or block about 11,500 pieces of content under the German law, known as NetzDG.

    “We have zero tolerance for hate speech and support the aims of NetzDG,” Facebook said in a statement.

    Twitter, which received around 833,000 complaints and removed roughly 81,000 posts during the same period, said a majority of those posts did not fit the definition of illegal speech, but still violated the company’s terms of service.

    “Threats, abusive content and harassment all have the potential to silence individuals,” Twitter said in a statement. “However, regulation and legislation such as this also has the potential to chill free speech by emboldening regimes around the world to legislate as a way to stifle dissent and legitimate speech.”

    YouTube, which received around 312,000 complaints and removed around 48,000 pieces of content in the first six months of the year, declined to comment other than saying it complies with the law.

    The amount of hate speech has become increasingly pronounced during election season, according to researchers at Reset and HateAid, organizations that track online hate speech and are pushing for tougher laws.

    The groups reviewed nearly one million comments on far-right and conspiratorial groups across about 75,000 Facebook posts in June, finding that roughly 5 percent were “highly toxic” or violated the online hate speech law. Some of the worst material, including messages with Nazi symbolism, had been online for more than a year, the groups found. Of 100 posts reported by the groups to Facebook, roughly half were removed within a few days, while the others remain online.

    The election has also seen a wave of misinformation, including false claims about voter fraud.

    Annalena Baerbock, the 40-year-old leader of the Green Party and the only woman among the top candidates running to succeed Ms. Merkel, has been the subject of an outsize amount of abuse compared with her male rivals from other parties, including sexist slurs and misinformation campaigns, according to researchers.

    [Photo: Ms. Baerbock, the Green Party candidate for chancellor, taking a selfie with one of her supporters. Credit: Laetitia Vancon for The New York Times]

    Others have stopped running altogether. In March, a former Syrian refugee running for the German Parliament, Tareq Alaows, dropped out of the race after experiencing racist attacks and violent threats online.

    While many policymakers want Facebook and other platforms to be aggressive in screening user-generated content, others have concerns about private companies making decisions about what people can and can’t say. The far-right party Alternative for Germany, which has criticized the law for unfairly targeting its supporters, has vowed to repeal the policy “to respect freedom of expression.”

    Jillian York, an author and free speech activist with the Electronic Frontier Foundation in Berlin, said the German law encouraged companies to remove potentially offensive speech that is perfectly legal, undermining free expression rights.

    “Facebook doesn’t err on the side of caution, they just take it down,” Ms. York said. Another concern, she said, is that less democratic countries such as Turkey and Belarus have adopted laws similar to Germany’s so that they could classify certain material critical of the government as illegal.

    Renate Künast, a former government minister who once invited a journalist to accompany her as she confronted individuals in person who had targeted her with online abuse, wants to see the law go further. Victims of online abuse should be able to go after perpetrators directly for libel and financial settlements, she said. Without that ability, she added, online abuse will erode political participation, particularly among women and minority groups.

    In a survey of more than 7,000 German women released in 2019, 58 percent said they did not share political opinions online for fear of abuse.

    “They use the verbal power of hate speech to force people to step back, leave their office or not to be candidates,” Ms. Künast said.

    [Photo: The Reichstag, where the German Parliament convenes, in Berlin. Credit: Emile Ducke for The New York Times]

    Ms. Dornheim, the Berlin candidate, who has a master’s degree in computer science and used to work in the tech industry, said more restrictions were needed. She described getting her home address removed from public records after somebody mailed a package to her house during a particularly bad bout of online abuse.

    Yet, she said, the harassment has only steeled her resolve.

    “I would never give them the satisfaction of shutting up,” she said.

  • How They Failed: California Republicans, Media Critics and Facebook

    In a special Opinion Audio bonanza, Jane Coaston (The Argument), Ezra Klein (The Ezra Klein Show) and Kara Swisher (Sway) sit down to discuss what went wrong for the G.O.P. in the recall election of Gov. Gavin Newsom of California. “This was where the nationalization of politics really bit back for Republicans,” Jane says.

    The three hosts then debate whether the media industry’s criticism of itself does any good at all. “The media tweets like nobody’s watching,” Ezra says.

    Then the hosts turn to The Wall Street Journal’s revelations in “The Facebook Files” and discuss how to hold Facebook accountable. “We’re saying your tools in the hands of malevolent players are super dangerous,” Kara says, “but we have no power over them whatsoever.”

    And last, Ezra, Jane and Kara offer recommendations to take you deep into history, fantasy and psychotropics.

    [You can listen to this episode of “The Argument” on Apple, Spotify or Google, or wherever you get your podcasts.]

    Read more about the subjects in this episode:

    Jane Coaston, Vox: “How California conservatives became the intellectual engine of Trumpism”

    Ezra Klein: “Gavin Newsom Is Much More Than the Lesser of Two Evils” and “A Different Way of Thinking About Cancel Culture”

    Kara Swisher: “The Endless Facebook Apology,” “Don’t Get Bezosed,” “The Medium of the Moment,” “‘They’re Killing People’? Biden Isn’t Quite Right, but He’s Not Wrong.” and “The Terrible Cost of Mark Zuckerberg’s Naïveté”

    (A full transcript of the episode will be available midday on the Times website.)

    Photographs courtesy of The New York Times

    Thoughts? Email us at argument@nytimes.com or leave us a voice mail message at (347) 915-4324. We want to hear what you’re arguing about with your family, your friends and your frenemies. (We may use excerpts from your message in a future episode.) By leaving us a message, you are agreeing to be governed by our reader submission terms and agreeing that we may use and allow others to use your name, voice and message.

    This episode was produced by Phoebe Lett, Annie Galvin and Rogé Karma. It was edited by Stephanie Joyce, Alison Bruzek and Nayeema Raza. Engineering, music and sound design by Isaac Jones and Sonia Herrero. Fact-checking by Kate Sinclair, Michelle Harris and Kristin Lin. Audience strategy by Shannon Busta. Special thanks to Matt Kwong, Daphne Chen and Blakeney Schick.

  • The Alarming Rise of Peter Thiel, Tech Mogul and Political Provocateur

    THE CONTRARIAN: Peter Thiel and Silicon Valley’s Pursuit of Power. By Max Chafkin.

    A few years ago, on a podcast called “This Is Actually Happening,” a penitent white supremacist recalled a formative childhood experience. One night his mother asked him: “You enjoying your burger?” She went on, “Did you know it’s made out of a cow?”

    “Something died?” the boy, then 5, replied.

    “Everything living dies,” she said. “You’re going to die.”

    Plagued thereafter by terror of death, the boy affected a fear-concealing swagger, which eventually became a fascist swagger.

    By chance, I’d just heard this episode when I opened “The Contrarian,” Max Chafkin’s sharp and disturbing biography of the Silicon Valley tech billionaire Peter Thiel, another far-right figure, though unrepentant.

    An epiphany from Thiel’s childhood sounded familiar. When he was 3, according to Chafkin, Thiel asked his father about a rug, which his father, Klaus Thiel, explained was cowhide. “Death happens to all animals. All people,” Klaus said. “It will happen to me one day. It will happen to you.”

    A near identical far-right coming-of-age tale — a Rechtsextremebildungsroman? The coincidence kicked off a wave of despair that crashed over me as I read Chafkin’s book. Where did these far-right Americans, powerful and not, ashamed and proud, come from? Why does a stock lecture about mortality lead some 3-to-5-year-old boys to develop contempt for the frailties in themselves — and in everyone else? Like the anonymous white supremacist, Thiel never recovered from bummer death news, and, according to Chafkin, still returns compulsively to “the brutal finality of the thing.” Thiel also turned to swaggering and, later, an evolving, sometimes contradictory, hodgepodge of libertarian and authoritarian beliefs.

    Thiel stalks through Chafkin’s biography “as if braced for a collision,” spoiling for a fight with whomever he designates a “liberal” — meaning anyone he suspects of snubbing him. Unsmiling, solipsistic and at pains to conceal his forever wounded vanity, Thiel in Chafkin’s telling comes across as singularly disagreeable, which is evidently the secret to both his worldly successes and his moral failures.

    Young Thiel had the usual dandruff-club hobbies: He played Dungeons & Dragons, read Tolkien and aced the SATs. He was arrogant, and set his worldview against those who mocked him for it. One of Thiel’s classmates at Stanford told Chafkin, “He viewed liberals through a lens as people who were not nice to him.” Looking back on Thiel’s anti-elitist and eventually illiberal politics, Chafkin is succinct: “He’d chosen to reject those who’d rejected him.”

    Chafkin serves as a tour guide to the ideological roadhouses where Thiel threw back shots of ultraconservative nostrums on his way to serve Donald Trump in 2016. There was his home life, where — first in Cleveland, then in South Africa and, finally, in suburban California — he ingested his German family’s complicity in apartheid (his father helped build a uranium mine in the Namib desert) and enthusiasm for Reagan; his requisite enlightenment via the novels of Ayn Rand; his excoriations of libs at Stanford, which (Chafkin reminds readers) still shows the influence of its eugenicist founding president, David Starr Jordan; and his depressing stint at a white-shoe corporate law firm, where he was disappointed to find “no liberals to fight.”

    These stages of the cross led Thiel to Silicon Valley in the mid-1990s, hot to leave big law and gamble on young Randian Übermenschen. An early bet on a coder named Max Levchin hit it big. The two devised PayPal, the company Thiel is famous for, which supercharged his antipathies with capital. Thiel, who’d published a book called “The Diversity Myth,” “made good on his aversion to multiculturalism,” Chafkin writes. “Besides youth, PayPal’s other defining quality was its white maleness.”

    In 2000, PayPal got in business with Elon Musk. “Peter thinks Musk is a fraud and a braggart,” one source tells Chafkin. “Musk thinks Peter is a sociopath.” According to Chafkin, Thiel remained coldblooded during the dot-com crash that year, as PayPal loopholed its way to market dominance. The company rebounded with a growth strategy known as “blitzscaling,” as well as the use of some supremely nasty tactics. “Whereas [Steve] Jobs viewed business as a form of cultural expression, even art,” Chafkin writes, “for Thiel and his peers it was a mode of transgression, even activism.”

    When PayPal went public, Thiel took out tens of millions and turned to investing full time. With various funds he scouted for more entrepreneurial twerps, and in the mid-2000s he latched onto Mark Zuckerberg of Facebook. He also set up a hedge fund called Clarium, where, according to Chafkin, Thiel’s staffers styled themselves as intellectuals and savored the wit of VDARE, an anti-immigration website that regularly published white nationalists. Hoping to make death less inevitable, at least for himself, Thiel also began to patronize the Alcor Life Extension Foundation, which has been steadily freezing the corpses of moneyed narcissists in liquid nitrogen since 1976.

    Thiel passed on investing in Tesla, telling Musk (according to Musk) that he didn’t “fully buy into the climate change thing.” But he gave Zuckerberg a loan for Facebook, which intermittently let him keep a leash on the young founder. After Sept. 11, Chafkin reports, Thiel also panicked about “the threat posed by Islamic terrorism — and Islam itself.” Libertarianism deserted him; he created Palantir, a data-analytics surveillance tech company designed, in essence, to root out terrorists. The C.I.A. used it, the N.Y.P.D. used it and Thiel became a contractor with big government. By 2006 his Clarium had $2 billion under management.

    Around this time, the wily Nick Denton, of the gossip empire Gawker, took notice of what Chafkin calls Thiel’s “extremist politics and ethically dubious business practices.” Gawker’s Valleywag site dragged Thiel, whose homosexuality was an open secret, suggesting he was repressed. This enraged Thiel, who by 2008 seemed to have lost it, firing off a floridly religious letter to Clarium investors warning of the imminent apocalypse and urging them to save their immortal souls and “accumulate treasures in heaven, in the eternal City of God.”

    The planet avoided the apocalypse, as it tends to do, but that year the financial crash laid the economy to waste. Several big investors pulled out of Thiel’s fund. In Chafkin’s telling, Thiel unaccountably blamed Denton for scaring away ultraconservatives by outing him. He determined to put Denton out of business, and in 2016, by clandestinely bankrolling a nuisance lawsuit designed to bankrupt Gawker, he did.

    Chafkin’s chronicle of Thiel’s wild abandon during the Obama years contains some of the most suspenseful passages in the book, as the narrative hurtles toward his acquisition of actual political power. Thiel seemed intoxicated by the rise of Obama, who galvanized the liberals Thiel most loved to hate. Chafkin recounts decadent parties at Thiel’s homes with barely clad men, along with his investments in nutjob projects, like seasteading, which promised life on floating ocean platforms free from government regulation. In a widely read essay, he argued that democracy and capitalism were at odds, because social programs and women’s suffrage curbed the absolute freedom of above-the-law capitalists like himself. He was officially antidemocracy.

    Thiel then began to direct money to nativist political candidates and causes, and to collaborate — via Palantir — with Lt. Gen. Michael Flynn, the strange right-wing figure who would later become a zealous Trumpite embraced by the QAnon cult. He built an army of mini-Thiels, the Thiel fellows, teenage boys (along with a few girls) whom he paid to quit college, forfeit normal social life and try to get rich in the Valley.

    Thiel backed Ron Paul for president in 2012, and helped Ted Cruz win a Texas Senate seat. (Gawker noted that Thiel’s support for the anti-gay Cruz was “no crazier than paying kids to drop out of school, cure death or create a floating libertarian ocean utopia.”) He contributed to Tea Party politicians with the aim of building a bigger “neo-reactionary” political movement, and in 2015, he gave his followers their own holy book when he published “Zero to One,” a compendium of antidemocracy, pro-monopoly blitzscaling tips.

    [Photo: Peter Thiel, speaking at the Republican National Convention in July 2016. After Donald Trump won the nomination, Thiel decided Trump was a delightful disrupter and kindred spirit and urged voters to take him “seriously, but not literally.” Credit: Stephen Crowley/The New York Times]

    At the same time, by investing in Lyft, TaskRabbit and Airbnb with his Founders Fund, Thiel seemed to be on the right side of history. When he spoke before mainstream audiences, he sometimes softened his extreme views and even laughed off his more gonzo follies — seasteading, for one.

    Yet one friend described Thiel to Chafkin as “Nazi-curious” (though the friend later said he was just being glib), and during this period Thiel also became, Chafkin writes, closer to Curtis Yarvin, a noxious avatar of the alt-right who had ties to Steve Bannon. He turned to survivalist prepping, kitting out a giant estate in New Zealand, where he took citizenship, making it possible that at a moment’s notice he could slip the knot of what, Chafkin says, had become his ultimate nemesis: the U.S. government itself.

    In the mid-2010s, a Palantir rep was also meeting with Cambridge Analytica, the creepy English data-mining firm that was later recorded boasting about using twisted data shenanigans to all but give the 2016 presidential election to Donald Trump.

    Like just about every powerful figure who eventually went all in for Trump, Thiel was initially skeptical, according to Chafkin. But once Trump won the nomination Thiel decided he was a delightful disrupter and kindred spirit. High from crushing Gawker, Thiel spoke for Trump at the Republican National Convention, and poured money into Rebekah Mercer’s PAC to rescue the campaign as Trump revealed increasing madness on the stump. He also urged voters to take Trump “seriously, but not literally.” Simultaneously, at Thiel’s recommendation, Chafkin suggests, Zuckerberg continued to allow popular content, including potentially misleading far-right articles, to stay at the top of Facebook’s trending stories, where they could attract more clicks and spike more get-out-the-vote cortisol.

    Why did Thiel go to such lengths for Trump? Chafkin quotes an anonymous longtime investor in Thiel’s firms: “He wanted to watch Rome burn.” Trump won, which meant that Thiel’s money and his burn-it-down ideology also won.

    Chafkin recounts that some of Thiel’s friends found this concretization of his cosmology too much to bear, and turned on him. But most did what most Trump opponents did for four years: waited it out, tried to wish away the erosion of American democracy and turned to their affairs.

    For his part, Thiel embraced the role of kingmaker, and Palantir benefited handsomely from contracts the Trump administration sent its way. Thiel found another winning sponsee: Josh Hawley, then Missouri’s attorney general, with whom he fought Google, which threatened the stability of many Thiel-backed companies, and which Hawley saw as communist, or something.

    Chafkin, a writer and editor at Bloomberg Businessweek, is especially interested in the friction between Zuckerberg and Thiel, who drifted apart for a time as Thiel became more involved in conservative politics. The words spent on discord in this relationship — and on tension between Thiel and other tech titans — distract from the more urgent chronicle of Thiel’s rise as one of the pre-eminent authors of the contemporary far-right movement.

    “The Contrarian” is chilling — literally chilling. As I read it, I grew colder and colder, until I found myself curled up under a blanket on a sunny day, icy and anxious. Scared people are scary, and Chafkin’s masterly evocation of his subject’s galactic fear — of liberals, of the U.S. government, of death — turns Thiel himself into a threat. I tried to tell myself that Thiel is just another rapacious solipsist, in it for the money, but I used to tell myself that about another rapacious solipsist, and he became president.

    By way of conclusion, Chafkin reports that Thiel rode out much of the pandemic in Maui, losing faith in Trump. Evidently Thiel considers the devastating coronavirus both an economic opportunity for Palantir, which went public in 2020 and has benefited from Covid-related government contracts, and a vindication of his predictions that the world as we know it is finished.

  • These Two Rumors Are Going Viral Ahead of California’s Recall Election

    As California’s Sept. 14 election over whether to recall Gov. Gavin Newsom draws closer, unfounded rumors about the event are growing. Here are two that are circulating widely online, how they spread and why, state and local officials said, they are wrong.

    Rumor No. 1: Holes in the ballot envelopes were being used to screen out votes that say “yes” to a recall.

    On Aug. 19, a woman posted a video on Instagram of herself placing her California special election ballot in an envelope. “You have to pay attention to these two holes that are in front of the envelope,” she said, bringing the holes close to the camera so viewers could see them. “You can see if someone has voted ‘yes’ to recall Newsom. This is very sketchy and irresponsible in my opinion, but this is asking for fraud.”

    The idea that the ballot envelope’s holes were being used to weed out the votes of those who wanted Gov. Newsom, a Democrat, to be recalled rapidly spread online, according to a review by The New York Times. The Instagram video collected nearly half a million views. On the messaging app Telegram, posts that said California was rigging the special election amassed nearly 200,000 views. And an article about the ballot holes on the far-right site The Gateway Pundit reached up to 626,000 people on Facebook, according to data from CrowdTangle, a Facebook-owned social media analytics tool.

    State and local officials said the ballot holes were not new and were not being used nefariously. The holes were placed in the envelope, on either end of a signature line, to help low-vision voters know where to sign it, said Jenna Dresner, a spokeswoman for the California Secretary of State’s Office of Election Cybersecurity.

    The ballot envelope’s design has been used for several election cycles, and civic design consultants recommended the holes for accessibility, added Mike Sanchez, a spokesman for the Los Angeles County registrar. He said voters could choose to put the ballot in the envelope in such a way that no ballot markings were revealed through the holes.

    Instagram has since appended a fact-check label to the original video to note that it could mislead people. The fact check has reached up to 20,700 people, according to CrowdTangle data.

    Rumor No. 2: A felon stole ballots to help Governor Newsom win the recall election.

    On Aug. 17, the police in Torrance, Calif., published a post on Facebook that said officers had responded to a call about a man who was passed out in his car in a 7-Eleven parking lot. The man had items such as a loaded firearm, drugs and thousands of pieces of mail, including more than 300 unopened mail-in ballots for the special election, the police said.

    Far-right sites such as Red Voice Media and Conservative Firing Line claimed the incident was an example of Democrats’ trying to steal an election through mail-in ballots. Their articles were then shared on Facebook, where they collectively reached up to 1.57 million people, according to CrowdTangle data.

    Mark Ponegalek, a public information officer for the Torrance Police Department, said the investigation into the incident was continuing. The U.S. postal inspector was also involved, he said, and no conclusions had been reached. As a result, he said, online articles and posts concluding that the man was attempting voter fraud were “baseless.”

    “I have no indication to tell you one way or the other right now” whether the man intended to commit election fraud with the ballots he collected, Mr. Ponegalek said. He added that the man may have intended to commit identity fraud.