More stories

  • Facebook whistleblower accuses firm of serially misleading over safety

    Frances Haugen filed at least eight complaints against the company regarding its approach to safety

    Dan Milmo, Global technology editor. Tue 5 Oct 2021 07.50 EDT (last modified Tue 5 Oct 2021 10.23 EDT)

    The Facebook whistleblower, Frances Haugen, who testifies at the US Congress on Tuesday, has filed at least eight complaints with the US financial watchdog accusing the social media company of serially misleading investors about its approach to safety and the size of its audience.

    The complaints, published online by the news programme 60 Minutes late on Monday, hours before Haugen’s testimony to US senators at 10am EDT (3pm BST), are based on tens of thousands of internal documents that Haugen copied shortly before she quit Facebook in May.

    The complaints and testimony from Haugen, who stepped forward on Sunday as the source of a damning series of revelations in the Wall Street Journal, are taking place against a backdrop of operational chaos for Facebook, whose platforms, including Instagram and WhatsApp, went offline around the world for nearly six hours on Monday.

    The first whistleblower complaint filed to the US Securities and Exchange Commission relates to the 6 January riots in Washington, when crowds of protesters stormed the Capitol, and alleges that Facebook knowingly chose to permit political misinformation and contests statements made by its chief executive, Mark Zuckerberg, to the contrary.

    “Our anonymous client is disclosing original evidence showing that Facebook … has, for years past and ongoing, violated US securities laws by making material misrepresentations and omissions in statements to investors and prospective investors,” the sweeping opening statement reads, “including, inter alia, through filings with the SEC, testimony to Congress, online statements and media stories.”

    The complaints against Facebook, which reflect a series of reports in the Wall Street Journal in recent weeks, also cover:
    The company’s approach to hate speech.
    Its approach to teenage mental health.
    Its monitoring of human trafficking.
    How the company’s algorithms promoted hate speech.
    Preferential disciplinary treatment for VIP users.
    Promoting ethnic violence.
    Failing to inform investors about a shrinking user base in certain demographics.
    The first complaint, regarding 6 January, contests testimony given to Congress in March by Facebook’s founder and chief executive, Mark Zuckerberg, in which he stated that: “We remove language that incites or facilitates violence, and we ban groups that proclaim a hateful and violent mission.”

    The complaint rebuts this, claiming that the company’s own records show it “knowingly chose to permit political misinformation and violent content/groups and failed to adopt or continue measures to combat these issues, including as related to the 2020 US election and the 6 January insurrection, in order to promote virality and growth on its platforms.”

    According to one internal Facebook document quoted in the complaints, the company admits: “For example, we estimate that we may action as little as 3-5% of hate [speech] and ~0.6% of V&V [violent and inciting content] on Facebook.”

    A complaint also alleges that Facebook misrepresented its “reach and frequency”, which are key metrics for the advertisers who provide the majority of Facebook’s revenue. That included concealing a decline in the key demographic of young users, the complaint stated. “During Covid, every cohort’s use of Facebook increased, except for those 23 and under, which continued to decline,” the complaint said.

    “For years, Facebook has misrepresented core metrics to investors and advertisers including the amount of content produced on its platforms and growth in individual users,” it said, adding this applied particularly in “high-value demographics” such as US teenagers.

    Facebook has been approached for comment.

    The human trafficking complaint alleges that Facebook and its photo-sharing app, Instagram, were aware in 2019 that the platforms were being used to “promote human trafficking and domestic servitude”. The hate speech complaint quotes another internal document that states: “We only take action against approximately 2% of the hate speech on the platform.” The teen health complaint focuses on the most damaging allegation from the WSJ series: that Instagram knew the app caused anxiety about body image among teenage girls.

    A complaint about Facebook’s approach to algorithms alleges that a tweak to the app’s News Feed product – a key part of users’ interaction with the app – led to the prioritisation of divisive content, while the complaint about ethnic violence contains an excerpt from an internal study that claims “in the Afghanistan market, the action rate for hate speech is worryingly low”.

    Facebook has issued a series of statements downplaying Haugen’s document leaks, saying: its Instagram research showed that many teens found the app helpful; it was investing heavily in security at the expense of its bottom line; polarisation had been growing in the US for decades before Facebook appeared; and the company had “made fighting misinformation and providing authoritative information a priority”.

    Responding to accusations that Facebook had misled the public and regulators, the company said: “We stand by our public statements and are ready to answer any questions regulators may have about our work.”

  • Facebook whistleblower to take her story before the US Senate

    Frances Haugen, who came forward accusing the company of putting profit over safety, will testify in Washington on Tuesday

    Dan Milmo and Kari Paul. Mon 4 Oct 2021 23.00 EDT (last modified Mon 4 Oct 2021 23.23 EDT)

    A former Facebook employee who has accused the company of putting profit over safety will take her damning accusations to Washington on Tuesday when she testifies to US senators.

    Frances Haugen, 37, came forward on Sunday as the whistleblower behind a series of damaging reports in the Wall Street Journal that have heaped further political pressure on the tech giant. Haugen told the news program 60 Minutes that Facebook’s priority was making money over doing what was good for the public.

    “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money,” she said.

    Haugen is expected to tell lawmakers that Facebook faces little oversight, and will urge Congress to take action. “As long as Facebook is operating in the dark, it is accountable to no one. And it will continue to make choices that go against the common good,” she wrote in her written testimony.

    Haugen was called to testify before the US Senate’s commerce subcommittee on the risks the company’s products pose to children. Lawmakers called the hearing in response to a Wall Street Journal story based on Haugen’s documents that showed Facebook was aware of the damage its Instagram app was causing to teen mental health and wellbeing. One survey in the leaked research estimated that 30% of teenage girls felt Instagram made dissatisfaction with their body worse.

    She is expected to compare Facebook to big tobacco, which resisted telling the public that smoking damaged consumers’ health. “When we realized tobacco companies were hiding the harms it caused, the government took action. When we figured out cars were safer with seatbelts, the government took action,” Haugen wrote. “I implore you to do the same here.”

    Haugen will argue that Facebook’s closed design means it has no oversight, even from its own oversight board, a regulatory group that was formed in 2020 to make decisions independent of Facebook’s corporate leadership.

    “This inability to see into the actual systems of Facebook and confirm that Facebook’s systems work like they say is like the Department of Transportation regulating cars by watching them drive down the highway,” she wrote in her testimony. “Imagine if no regulator could ride in a car, pump up its wheels, crash test a car, or even know that seatbelts could exist.”

    Senator Richard Blumenthal, the Democrat whose committee is holding Tuesday’s hearing, told the Washington Post’s Technology 202 newsletter that lawmakers will also ask Haugen about her remarks on the 2020 presidential election.

    Haugen alleged on 60 Minutes that following Joe Biden’s win in the election, Facebook prematurely reinstated old algorithms that valued engagement over all else, a move that she said contributed to the 6 January attack on the Capitol.

    “As soon as the election was over, they turned them back off or they changed the settings back to what they were before, to prioritize growth over safety. And that really feels like a betrayal of democracy to me,” she said.

    Following the election, Facebook also disbanded its civic integrity team, a group that worked on issues related to political elections worldwide and which Haugen worked on. Facebook has said the team’s functions were distributed across the company.

    Haugen joined Facebook in 2019 as a product manager on the civic integrity team after spending more than a decade working in the tech industry, including at Pinterest and Google.

    Tuesday’s hearing is the second in mere weeks to focus on Facebook’s impact on children. Last week, lawmakers grilled Antigone Davis, Facebook’s global head of safety, and accused the company of “routinely” putting growth above children’s safety.

    Facebook has aggressively contested the accusations. On Friday, the company’s vice-president of policy and public affairs, Nick Clegg, wrote to Facebook employees ahead of Haugen’s public appearance. “Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” he said. “But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”

    On Monday, Facebook asked a federal judge to throw out a revised antitrust lawsuit brought by the Federal Trade Commission (FTC) that seeks to force the company to sell Instagram and WhatsApp.

  • Facebook ‘tearing our societies apart’: key excerpts from a whistleblower

    Frances Haugen tells US news show why she decided to reveal inside story about social networking firm

    Dan Milmo, Global technology editor. Mon 4 Oct 2021 08.33 EDT (last modified Mon 4 Oct 2021 10.30 EDT)

    Frances Haugen’s interview with the US news programme 60 Minutes contained a litany of damning statements about Facebook. Haugen, a former Facebook employee who had joined the company to help it combat misinformation, told the CBS show the tech firm prioritised profit over safety and was “tearing our societies apart”.

    Haugen will testify in Washington on Tuesday, as political pressure builds on Facebook. Here are some of the key excerpts from Haugen’s interview.

    Choosing profit over the public good

    Haugen’s most cutting words echoed what is becoming a regular refrain from politicians on both sides of the Atlantic: that Facebook puts profit above the wellbeing of its users and the public. “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money.”

    She also accused Facebook of endangering public safety by reversing changes to its algorithm once the 2020 presidential election was over, allowing misinformation to spread on the platform again. “And as soon as the election was over, they turned them [the safety systems] back off or they changed the settings back to what they were before, to prioritise growth over safety. And that really feels like a betrayal of democracy to me.”

    Facebook’s approach to safety compared with others

    In a 15-year career as a tech professional, Haugen, 37, has worked for companies including Google and Pinterest but she said Facebook had the worst approach to restricting harmful content. She said: “I’ve seen a bunch of social networks and it was substantially worse at Facebook than anything I’d seen before.” Referring to Mark Zuckerberg, Facebook’s founder and chief executive, she said: “I have a lot of empathy for Mark. And Mark has never set out to make a hateful platform. But he has allowed choices to be made where the side-effects of those choices are that hateful, polarising content gets more distribution and more reach.”

    Instagram and mental health

    The document leak that had the greatest impact was a series of research slides that showed Facebook’s Instagram app was damaging the mental health and wellbeing of some teenage users, with 30% of teenage girls feeling that it made dissatisfaction with their body worse.

    She said: “And what’s super tragic is Facebook’s own research says, as these young women begin to consume this eating disorder content, they get more and more depressed. And it actually makes them use the app more. And so, they end up in this feedback cycle where they hate their bodies more and more. Facebook’s own research says it is not just that Instagram is dangerous for teenagers, that it harms teenagers, it’s that it is distinctly worse than other forms of social media.”

    Facebook has described the Wall Street Journal’s reporting on the slides as a “mischaracterisation” of its research.

    Why Haugen leaked the documents

    Haugen said “person after person” had attempted to tackle Facebook’s problems but had been ground down. “Imagine you know what’s going on inside of Facebook and you know no one on the outside knows. I knew what my future looked like if I continued to stay inside of Facebook, which is person after person after person has tackled this inside of Facebook and ground themselves to the ground.”

    Having joined the company in 2019, Haugen said she decided to act this year and started copying tens of thousands of documents from Facebook’s internal system, which she believed show that Facebook is not, despite public comments to the contrary, making significant progress in combating online hate and misinformation. “At some point in 2021, I realised, ‘OK, I’m gonna have to do this in a systemic way, and I have to get out enough that no one can question that this is real.’”

    Facebook and violence

    Haugen said the company had contributed to ethnic violence, a reference to Burma. In 2018, following the massacre of Rohingya Muslims by the military, Facebook admitted that its platform had been used to “foment division and incite offline violence” relating to the country. Speaking on 60 Minutes, Haugen said: “When we live in an information environment that is full of angry, hateful, polarising content it erodes our civic trust, it erodes our faith in each other, it erodes our ability to want to care for each other. The version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world.”

    Facebook and the Washington riot

    The 6 January riot, when crowds of rightwing protesters stormed the Capitol, came after Facebook disbanded the Civic Integrity team of which Haugen was a member. The team, which focused on issues linked to elections around the world, was dispersed to other Facebook units following the US presidential election. “They told us: ‘We’re dissolving Civic Integrity.’ Like, they basically said: ‘Oh good, we made it through the election. There wasn’t riots. We can get rid of Civic Integrity now.’ Fast-forward a couple months, we got the insurrection. And when they got rid of Civic Integrity, it was the moment where I was like, ‘I don’t trust that they’re willing to actually invest what needs to be invested to keep Facebook from being dangerous.’”

    The 2018 algorithm change

    Facebook changed the algorithm on its news feed – Facebook’s central feature, which supplies users with a customised feed of content such as friends’ photos and news stories – to prioritise content that increased user engagement. Haugen said this made divisive content more prominent.

    “One of the consequences of how Facebook is picking out that content today is it is optimising for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarising – it’s easier to inspire people to anger than it is to other emotions.” She added: “Facebook has realised that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money.”

    Haugen said European political parties contacted Facebook to say that the news feed change was forcing them to take more extreme political positions in order to win users’ attention. Describing politicians’ concerns, she said: “You are forcing us to take positions that we don’t like, that we know are bad for society. We know if we don’t take those positions, we won’t win in the marketplace of social media.”

    In a statement to 60 Minutes, Facebook said: “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place. We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true. If any research had identified an exact solution to these complex challenges, the tech industry, governments, and society would have solved them a long time ago.”

  • Whistle-Blower to Accuse Facebook of Contributing to Jan. 6 Riot, Memo Says

    In an internal memo, Facebook defended itself and said that social media was not a primary cause of polarization.

    SAN FRANCISCO — Facebook, which has been under fire from a former employee who has revealed that the social network knew of many of the harms it was causing, was bracing for new accusations over the weekend from the whistle-blower and said in a memo that it was preparing to mount a vigorous defense.

    The whistle-blower, whose identity has not been publicly disclosed, planned to accuse the company of relaxing its security safeguards for the 2020 election too soon after Election Day, which then led it to be used in the storming of the U.S. Capitol on Jan. 6, according to the internal memo obtained by The New York Times. The whistle-blower planned to discuss the allegations on “60 Minutes” on Sunday, the memo said, and was also set to say that Facebook had contributed to political polarization in the United States.

    The 1,500-word memo, written by Nick Clegg, Facebook’s vice president of policy and global affairs, was sent on Friday to employees to pre-empt the whistle-blower’s interview. Mr. Clegg pushed back strongly on what he said were the coming accusations, calling them “misleading.” “60 Minutes” published a teaser of the interview in advance of its segment on Sunday.

    “Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” he wrote. “But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”

    Facebook has been in an uproar for weeks because of the whistle-blower, who has shared thousands of pages of company documents with lawmakers and The Wall Street Journal. The Journal has published a series of articles based on the documents, which show that Facebook knew how its apps and services could cause harm, including worsening body image issues among teenage girls using Instagram.

    Facebook has since scrambled to contain the fallout, as lawmakers, regulators and the public have said the company needs to account for the revelations. On Monday, Facebook paused the development of an Instagram service for children ages 13 and under. Its global head of safety, Antigone Davis, also testified on Thursday as irate lawmakers questioned her about the effects of Facebook and Instagram on young users.

    A Facebook spokesman declined to comment. A spokesman for “60 Minutes” did not immediately respond to a request for comment.

    Inside Facebook, executives including Mr. Clegg and the “Strategic Response” teams have called a series of emergency meetings to try to extinguish some of the outrage. Mark Zuckerberg, Facebook’s chief executive, and Sheryl Sandberg, the chief operating officer, have been briefed on the responses and have approved them, but have remained behind the scenes to distance themselves from the negative press, people with knowledge of the company have said.

    The firestorm is far from over. Facebook anticipated more allegations during the whistle-blower’s “60 Minutes” interview, according to the memo. The whistle-blower, who plans to reveal her identity during the interview, was set to say that Facebook had turned off some of its safety measures around the election — such as limits on live video — too soon after Election Day, the memo said. That allowed for misinformation to flood the platform and for groups to congregate online and plan the Jan. 6 storming of the Capitol building.

    Mr. Clegg said that was an inaccurate view and cited many of the safeguards and security mechanisms that Facebook had built over the past five years. He said the company had removed millions of groups such as the Proud Boys and others related to causes like the conspiracy theory QAnon and #StopTheSteal election fraud claims.

    The whistle-blower was also set to claim that many of Facebook’s problems stemmed from changes in the News Feed in 2018, the memo said. That was when the social network tweaked its algorithm to emphasize what it called Meaningful Social Interactions, or MSI, which prioritized posts from users’ friends and family and de-emphasized posts from publishers and brands.

    The goal was to make sure that Facebook’s products were “not just fun, but are good for people,” Mr. Zuckerberg said in an interview about the change at the time.

    But according to Friday’s memo, the whistle-blower would say that the change contributed to even more polarization among Facebook’s users. The whistle-blower was also set to say that Facebook then reaped record profits as its users flocked to the divisive content, the memo said.

    Mr. Clegg warned that the period ahead could be difficult for employees who might face questions from friends and family about Facebook’s role in the world. But he said that societal problems and political polarization have long predated the company and the advent of social networks in general.

    “The simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization,” he wrote. “Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking whatsoever, including private messaging apps like iMessage and WhatsApp.”

    Mr. Clegg, who is scheduled to appear on the CNN program “Reliable Sources” on Sunday morning, also tried to relay an upbeat note to employees.

    “We will continue to face scrutiny — some of it fair and some of it unfair,” he said in the memo. “But we should also continue to hold our heads up high.”

    Here is Mr. Clegg’s memo in full:

    OUR POSITION ON POLARIZATION AND ELECTIONS

    You will have seen the series of articles about us published in the Wall Street Journal in recent days, and the public interest it has provoked. This Sunday night, the ex-employee who leaked internal company material to the Journal will appear in a segment on 60 Minutes on CBS. We understand the piece is likely to assert that we contribute to polarization in the United States, and suggest that the extraordinary steps we took for the 2020 elections were relaxed too soon and contributed to the horrific events of January 6th in the Capitol.

    I know some of you – especially those of you in the US – are going to get questions from friends and family about these things so I wanted to take a moment as we head into the weekend to provide what I hope is some useful context on our work in these crucial areas.

    Facebook and Polarization

    People are understandably anxious about the divisions in society and looking for answers and ways to fix the problems. Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out. So it’s natural for people to ask whether it is part of the problem. But the idea that Facebook is the chief cause of polarization isn’t supported by the facts – as Chris and Pratiti set out in their note on the issue earlier this year.

    The rise of polarization has been the subject of swathes of serious academic research in recent years. In truth, there isn’t a great deal of consensus. But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.

    The increase in political polarization in the US pre-dates social media by several decades. If it were true that Facebook is the chief cause of polarization, we would expect to see it going up wherever Facebook is popular. It isn’t. In fact, polarization has gone down in a number of countries with high social media use at the same time that it has risen in the US.

    Specifically, we expect the reporting to suggest that a change to Facebook’s News Feed ranking algorithm was responsible for elevating polarizing content on the platform. In January 2018, we made ranking changes to promote Meaningful Social Interactions (MSI) – so that you would see more content from friends, family and groups you are part of in your News Feed. This change was heavily driven by internal and external research that showed that meaningful engagement with friends and family on our platform was better for people’s wellbeing, and we further refined and improved it over time as we do with all ranking metrics.

    Of course, everyone has a rogue uncle or an old school classmate who holds strong or extreme views we disagree with – that’s life – and the change meant you are more likely to come across their posts too. Even so, we’ve developed industry-leading tools to remove hateful content and reduce the distribution of problematic content. As a result, the prevalence of hate speech on our platform is now down to about 0.05%.

    But the simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization. Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking whatsoever, including private messaging apps like iMessage and WhatsApp.

    Elections and Democracy

    There’s perhaps no other topic that we’ve been more vocal about as a company than on our work to dramatically change the way we approach elections. Starting in 2017, we began building new defenses, bringing in new expertise, and strengthening our policies to prevent interference. Today, we have more than 40,000 people across the company working on safety and security.

    Since 2017, we have disrupted and removed more than 150 covert influence operations, including ahead of major democratic elections. In 2020 alone, we removed more than 5 billion fake accounts — identifying almost all of them before anyone flagged them to us. And, from March to Election Day, we removed more than 265,000 pieces of Facebook and Instagram content in the US for violating our voter interference policies.

    Given the extraordinary circumstances of holding a contentious election in a pandemic, we implemented so called “break glass” measures – and spoke publicly about them – before and after Election Day to respond to specific and unusual signals we were seeing on our platform and to keep potentially violating content from spreading before our content reviewers could assess it against our policies.

    These measures were not without trade-offs – they’re blunt instruments designed to deal with specific crisis scenarios. It’s like shutting down an entire town’s roads and highways in response to a temporary threat that may be lurking somewhere in a particular neighborhood. In implementing them, we know we impacted significant amounts of content that did not violate our rules to prioritize people’s safety during a period of extreme uncertainty. For example, we limited the distribution of live videos that our systems predicted may relate to the election. That was an extreme step that helped prevent potentially violating content from going viral, but it also impacted a lot of entirely normal and reasonable content, including some that had nothing to do with the election. We wouldn’t take this kind of crude, catch-all measure in normal circumstances, but these weren’t normal circumstances.

    We only rolled back these emergency measures – based on careful data-driven analysis – when we saw a return to more normal conditions. We left some of them on for a longer period of time through February this year and others, like not recommending civic, political or new Groups, we have decided to retain permanently.

    Fighting Hate Groups and other Dangerous Organizations

    I want to be absolutely clear: we work to limit, not expand hate speech, and we have clear policies prohibiting content that incites violence. We do not profit from polarization, in fact, just the opposite. We do not allow dangerous organizations, including militarized social movements or violence-inducing conspiracy networks, to organize on our platforms. And we remove content that praises or supports hate groups, terrorist organizations and criminal groups.

    We’ve been more aggressive than any other internet company in combating harmful content, including content that sought to delegitimize the election. But our work to crack down on these hate groups was years in the making. We took down tens of thousands of QAnon pages, groups and accounts from our apps, removed the original #StopTheSteal Group, and removed references to Stop the Steal in the run up to the inauguration. In 2020 alone, we removed more than 30 million pieces of content violating our policies regarding terrorism and more than 19 million pieces of content violating our policies around organized hate in 2020. We designated the Proud Boys as a hate organization in 2018 and we continue to remove praise, support, and representation of them. Between August last year and January 12 this year, we identified nearly 900 militia organizations under our Dangerous Organizations and Individuals policy and removed thousands of Pages, groups, events, Facebook profiles and Instagram accounts associated with these groups.

    This work will never be complete. There will always be new threats and new problems to address, in the US and around the world. That’s why we remain vigilant and alert – and will always have to.

    That is also why the suggestion that is sometimes made that the violent insurrection on January 6 would not have occurred if it was not for social media is so misleading. To be clear, the responsibility for those events rests squarely with the perpetrators of the violence, and those in politics and elsewhere who actively encouraged them. Mature democracies in which social media use is widespread hold elections all the time – for instance Germany’s election last week – without the disfiguring presence of violence. We actively share with Law Enforcement material that we can find on our services related to these traumatic events. But reducing the complex reasons for polarization in America – or the insurrection specifically – to a technological explanation is woefully simplistic.

    We will continue to face scrutiny – some of it fair and some of it unfair. We’ll continue to be asked difficult questions. And many people will continue to be skeptical of our motives. That’s what comes with being part of a company that has a significant impact in the world. We need to be humble enough to accept criticism when it is fair, and to make changes where they are justified. We aren’t perfect and we don’t have all the answers. That’s why we do the sort of research that has been the subject of these stories in the first place. And we’ll keep looking for ways to respond to the feedback we hear from our users, including testing ways to make sure political content doesn’t take over their News Feeds.

    But we should also continue to hold our heads up high. You and your teams do incredible work. Our tools and products have a hugely positive impact on the world and in people’s lives. And you have every reason to be proud of that work.

  • Germany Struggles to Stop Online Abuse Ahead of Election

    Scrolling through her social media feed, Laura Dornheim is regularly stopped cold by a new blast of abuse aimed at her, including from people threatening to kill or sexually assault her. One person last year said he looked forward to meeting her in person so he could punch her teeth out.

    Ms. Dornheim, a candidate for Parliament in Germany’s election on Sunday, is often attacked for her support of abortion rights, gender equality and immigration. She flags some of the posts to Facebook and Twitter, hoping that the platforms will delete the posts or that the perpetrators will be barred. She’s usually disappointed.

    “There might have been one instance where something actually got taken down,” Ms. Dornheim said.

    Harassment and abuse are all too common on the modern internet. Yet it was supposed to be different in Germany. In 2017, the country enacted one of the world’s toughest laws against online hate speech. It requires Facebook, Twitter and YouTube to remove illegal comments, pictures or videos within 24 hours of being notified about them or risk fines of up to 50 million euros, or $59 million. Supporters hailed it as a watershed moment for internet regulation and a model for other countries.

    But an influx of hate speech and harassment in the run-up to the German election, in which the country will choose a new leader to replace Angela Merkel, its longtime chancellor, has exposed some of the law’s weaknesses. Much of the toxic speech, researchers say, has come from far-right groups and is aimed at intimidating female candidates like Ms. Dornheim.

    Some critics of the law say it is too weak, with limited enforcement and oversight. They also maintain that many forms of abuse are deemed legal by the platforms, such as certain kinds of harassment of women and public officials. And when companies do remove illegal material, critics say, they often do not alert the authorities or share information about the posts, making prosecutions of the people publishing the material far more difficult. Another loophole, they say, is that smaller platforms like the messaging app Telegram, popular among far-right groups, are not subject to the law.

    Free-expression groups criticize the law on other grounds. They argue that the law should be abolished not only because it fails to protect victims of online abuse and harassment, but also because it sets a dangerous precedent for government censorship of the internet.

    The country’s experience may shape policy across the continent. German officials are playing a key role in drafting one of the world’s most anticipated new internet regulations, a European Union law called the Digital Services Act, which will require Facebook and other online platforms to do more to address the vitriol, misinformation and illicit content on their sites. Ursula von der Leyen, a German who is president of the European Commission, the 27-nation bloc’s executive arm, has called for an E.U. law that would list gender-based violence as a special crime category, a proposal that would include online attacks.

    “Germany was the first to try to tackle this kind of online accountability,” said Julian Jaursch, a project director at the German think tank Stiftung Neue Verantwortung, which focuses on digital issues. “It is important to ask whether the law is working.”

    [Photo: Campaign billboards in Germany’s race for chancellor, showing, from left, Annalena Baerbock of the Green Party, Olaf Scholz of the Social Democrats and Christian Lindner of the Free Democrats. Credit: Sean Gallup/Getty Images]

    Marc Liesching, a professor at HTWK Leipzig who published an academic report on the policy, said that of the posts that had been deleted by Facebook, YouTube and Twitter, a vast majority were classified as violating company policies, not the hate speech law. That distinction makes it harder for the government to measure whether companies are complying with the law. In the second half of 2020, Facebook removed 49 million pieces of “hate speech” based on its own community standards, compared with the 154 deletions that it attributed to the German law, he found.

    The law, Mr. Liesching said, “is not relevant in practice.”

    With its history of Nazism, Germany has long tried to balance free speech rights against a commitment to combat hate speech. Among Western democracies, the country has some of the world’s toughest laws against incitement to violence and hate speech. Targeting religious, ethnic and racial groups is illegal, as are Holocaust denial and displaying Nazi symbols in public.

    To address concerns that companies were not alerting the authorities to illegal posts, German policymakers this year passed amendments to the law. They require Facebook, Twitter and YouTube to turn over data to the police about accounts that post material that German law would consider illegal speech. The Justice Ministry was also given more powers to enforce the law.

    “The aim of our legislative package is to protect all those who are exposed to threats and insults on the internet,” Christine Lambrecht, the justice minister, who oversees enforcement of the law, said after the amendments were adopted. “Whoever engages in hate speech and issues threats will have to expect to be charged and convicted.”

    [Photo: Germans will vote for a leader to replace Angela Merkel, the country’s longtime chancellor. Credit: Markus Schreiber/Associated Press]

    Facebook and Google have filed a legal challenge to block the new rules, arguing that providing the police with personal information about users violates their privacy.

    Facebook said that as part of an agreement with the government it now provided more figures about the complaints it received. From January through July, the company received more than 77,000 complaints, which led it to delete or block about 11,500 pieces of content under the German law, known as NetzDG.

    “We have zero tolerance for hate speech and support the aims of NetzDG,” Facebook said in a statement.

    Twitter, which received around 833,000 complaints and removed roughly 81,000 posts during the same period, said a majority of those posts did not fit the definition of illegal speech, but still violated the company’s terms of service.

    “Threats, abusive content and harassment all have the potential to silence individuals,” Twitter said in a statement. “However, regulation and legislation such as this also has the potential to chill free speech by emboldening regimes around the world to legislate as a way to stifle dissent and legitimate speech.”

    YouTube, which received around 312,000 complaints and removed around 48,000 pieces of content in the first six months of the year, declined to comment other than saying it complies with the law.

    The amount of hate speech has become increasingly pronounced during election season, according to researchers at Reset and HateAid, organizations that track online hate speech and are pushing for tougher laws.

    The groups reviewed nearly one million comments on far-right and conspiratorial groups across about 75,000 Facebook posts in June, finding that roughly 5 percent were “highly toxic” or violated the online hate speech law. Some of the worst material, including messages with Nazi symbolism, had been online for more than a year, the groups found. Of 100 posts reported by the groups to Facebook, roughly half were removed within a few days, while the others remain online.

    The election has also seen a wave of misinformation, including false claims about voter fraud.

    Annalena Baerbock, the 40-year-old leader of the Green Party and the only woman among the top candidates running to succeed Ms. Merkel, has been the subject of an outsize amount of abuse compared with her male rivals from other parties, including sexist slurs and misinformation campaigns, according to researchers.

    [Photo: Ms. Baerbock, the Green Party candidate for chancellor, taking a selfie with one of her supporters. Credit: Laetitia Vancon for The New York Times]

    Others have stopped running altogether. In March, a former Syrian refugee running for the German Parliament, Tareq Alaows, dropped out of the race after experiencing racist attacks and violent threats online.

    While many policymakers want Facebook and other platforms to be aggressive in screening user-generated content, others have concerns about private companies making decisions about what people can and can’t say. The far-right party Alternative for Germany, which has criticized the law for unfairly targeting its supporters, has vowed to repeal the policy “to respect freedom of expression.”

    Jillian York, an author and free speech activist with the Electronic Frontier Foundation in Berlin, said the German law encouraged companies to remove potentially offensive speech that is perfectly legal, undermining free expression rights.

    “Facebook doesn’t err on the side of caution, they just take it down,” Ms. York said. Another concern, she said, is that less democratic countries such as Turkey and Belarus have adopted laws similar to Germany’s so that they could classify certain material critical of the government as illegal.

    Renate Künast, a former government minister who once invited a journalist to accompany her as she confronted individuals in person who had targeted her with online abuse, wants to see the law go further. Victims of online abuse should be able to go after perpetrators directly for libel and financial settlements, she said. Without that ability, she added, online abuse will erode political participation, particularly among women and minority groups.

    In a survey of more than 7,000 German women released in 2019, 58 percent said they did not share political opinions online for fear of abuse.

    “They use the verbal power of hate speech to force people to step back, leave their office or not to be candidates,” Ms. Künast said.

    [Photo: The Reichstag, where the German Parliament convenes, in Berlin. Credit: Emile Ducke for The New York Times]

    Ms. Dornheim, the Berlin candidate, who has a master’s degree in computer science and used to work in the tech industry, said more restrictions were needed. She described getting her home address removed from public records after somebody mailed a package to her house during a particularly bad bout of online abuse.

    Yet, she said, the harassment has only steeled her resolve.

    “I would never give them the satisfaction of shutting up,” she said.

  • How They Failed: California Republicans, Media Critics and Facebook

    In a special Opinion Audio bonanza, Jane Coaston (The Argument), Ezra Klein (The Ezra Klein Show) and Kara Swisher (Sway) sit down to discuss what went wrong for the G.O.P. in the recall election of Gov. Gavin Newsom of California. “This was where the nationalization of politics really bit back for Republicans,” Jane says. The three hosts then debate whether the media industry’s criticism of itself does any good at all. “The media tweets like nobody’s watching,” Ezra says. Then the hosts turn to The Wall Street Journal’s revelations in “The Facebook Files” and discuss how to hold Facebook accountable. “We’re saying your tools in the hands of malevolent players are super dangerous,” Kara says, “but we have no power over them whatsoever.”

    And last, Ezra, Jane and Kara offer recommendations to take you deep into history, fantasy and psychotropics.

    [You can listen to this episode of “The Argument” on Apple, Spotify or Google or wherever you get your podcasts.]

    Read more about the subjects in this episode:

    Jane Coaston, Vox: “How California conservatives became the intellectual engine of Trumpism”

    Ezra Klein: “Gavin Newsom Is Much More Than the Lesser of Two Evils” and “A Different Way of Thinking About Cancel Culture”

    Kara Swisher: “The Endless Facebook Apology,” “Don’t Get Bezosed,” “The Medium of the Moment,” “‘They’re Killing People’? Biden Isn’t Quite Right, but He’s Not Wrong.” and “The Terrible Cost of Mark Zuckerberg’s Naïveté”

    (A full transcript of the episode will be available midday on the Times website.)

    Photographs courtesy of The New York Times

    Thoughts? Email us at argument@nytimes.com or leave us a voice mail message at (347) 915-4324. We want to hear what you’re arguing about with your family, your friends and your frenemies. (We may use excerpts from your message in a future episode.)

    By leaving us a message, you are agreeing to be governed by our reader submission terms and agreeing that we may use and allow others to use your name, voice and message.

    This episode was produced by Phoebe Lett, Annie Galvin and Rogé Karma. It was edited by Stephanie Joyce, Alison Bruzek and Nayeema Raza. Engineering, music and sound design by Isaac Jones and Sonia Herrero. Fact-checking by Kate Sinclair, Michelle Harris and Kristin Lin. Audience strategy by Shannon Busta. Special thanks to Matt Kwong, Daphne Chen and Blakeney Schick.

  • In Canada, Will Young Voters Turn Out for the NDP and Jagmeet Singh?

    Ditching a collared dress shirt for a sleeveless hoodie, Jagmeet Singh, the leader of the left-leaning New Democratic Party, sways to the music in a recent TikTok video recreating a viral dance trend, with text overlaid about how youth voters are “going to make history” this election.

    But political analysts aren’t convinced TikToks and streams on Twitch — another social media platform he has appeared on — will translate into votes.

    Mr. Singh has continued to leverage social media as a campaign strategy as he did in the 2019 election. The party is also emphasizing issues like income distribution and taxing the ultra-wealthy, said Lars Osberg, an economics professor at Dalhousie University in Nova Scotia, a move reminiscent of Canada’s 1972 election. That is when David Lewis of the N.D.P. rose to prominence on the campaign slogan of getting rid of “corporate welfare bums.”

    But is all this enough to get young voters, one of the least dependable demographics, to the polls, and to get them to vote for the N.D.P.?

    “Young people did turn out back in 2015, because they really wanted to get rid of Stephen Harper,” said Professor Osberg, referring to the former Conservative Party leader. (The current one, Erin O’Toole, has made himself a less polarizing figure by reshaping his party to broaden its appeal.)

    But it was Justin Trudeau who captured the youth vote in 2015.

    The New Democrats may do well in some areas with large Indigenous populations, whose vote is generally split between that party and Mr. Trudeau’s Liberal Party.

    The Liberals have the greatest number of incumbent candidates who are Indigenous, but 28 of the total 50 Indigenous candidates are running with the New Democrats, according to a list compiled by the Assembly of First Nations.

    In a campaign where Indigenous issues have largely been sidelined, Mr. Singh has hit Mr. Trudeau for falling short on his promise to bring clean drinking water to all Indigenous communities. And Indigenous voters may be losing confidence in the Liberals.

    “Right now, it’s looking like a lot of people in the community are saying, no, we’re not with you this time,” said Cameron Holmstrom, an Indigenous consultant who has worked with the New Democrats.

    Ian Austen contributed reporting.

  • Jeffrey Katzenberg Talks About His Billion-Dollar Flop

    The public failure of his start-up Quibi hasn’t stopped Jeffrey Katzenberg from doubling down on tech. A Hollywood power broker, he headed up Disney in the 1980s and ’90s and co-founded a rival studio, DreamWorks, before finding a puzzle he could not yet solve: getting people to pay for short-format content. Investors gave him and the former Hewlett-Packard C.E.O. and California gubernatorial candidate Meg Whitman $1.75 billion to build a video platform, but not enough customers opened up their wallets, at $4.99 a month, and Quibi folded within a year of its launch. Katzenberg says the problems were product-market fit and the Covid pandemic, not competition from TikTok or YouTube.

    [You can listen to this episode of “Sway” on Apple, Spotify, Google or wherever you get your podcasts.]

    In this conversation, Kara Swisher and Katzenberg delve into Quibi’s demise, the shifting power dynamics in Hollywood and his pivot to Silicon Valley. They also discuss his influence in another sphere: politics. And the former Hollywood executive, who co-chaired a fund-raiser to help fend off California’s recent recall effort, offers some advice to Gov. Gavin Newsom.

    (A full transcript of the episode will be available midday on the Times website.)

    Photograph by WndrCo

    Thoughts? Email us at sway@nytimes.com.

    “Sway” is produced by Nayeema Raza, Blakeney Schick, Matt Kwong, Daphne Chen and Caitlin O’Keefe and edited by Nayeema Raza; fact-checking by Kate Sinclair; music and sound design by Isaac Jones; mixing by Carole Sabouraud and Sonia Herrero; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Liriel Higa.