More stories

  • I Worked at Facebook. It’s Not Ready for This Year’s Election Wave.

    The world is not ready for the coming electoral tsunami. Neither is Facebook. With so many elections on the horizon — France, Kenya, Australia, Brazil, the Philippines and the United States will hold elections this year — the conversation now should focus on how Facebook is preparing.

    I know what it’s like to prepare for an election at Facebook. I worked there for 10 years, and from 2014 through the end of 2019, I led the company’s work across elections globally. Facebook has poured more than $13 billion into building up its safety and security efforts in the United States since the 2016 elections, when the platform was too slow to recognize how its products could be weaponized to spread misinformation.

    Responsible election plans cannot be spun up in days or weeks. It takes time not only to organize internally but also to make meaningful and necessary connections with the communities around the world working to secure elections. Facebook must begin serious, concerted, well-funded efforts today.

    For some of the elections happening in the first half of this year, Facebook is cutting it close. But there’s still time for Facebook to commit to a publicly available road map that outlines how it plans to build up its resources to fight misinformation and hate speech around the world. Algorithms that find hate speech and election-related content; labels that give people more context, like those applied in the United States to content that questioned the election results; and efforts to get people accurate information about where, when and how to vote should all be part of the baseline protections Facebook deploys across the globe. On top of these technical protections, it needs people with country-specific language and culture expertise to make tough decisions about speech or behavior that might violate the platform’s rules.

    I’m proud of the progress the company made in bringing more transparency to political and issue ads, developing civil society partnerships and taking down influence operations. None of that progress happened spontaneously. To combat the Internet Research Agency, a Russian troll farm that exposed 126 million Americans to its content before and after the 2016 elections, for example, Facebook needed new policies, new expertise and a revamped team dedicated to these issues. Because of those innovations, the company was able to take down 52 influence networks in 2021.

    Facebook couldn’t do this work alone. Partnerships with organizations such as the Atlantic Council, the National Democratic Institute, the International Republican Institute and many others were crucial.

    But even then, providing the technical infrastructure to combat misinformation is only half the battle. Facebook faced scrutiny again in 2020 and 2021 for how it handled everything from President Donald Trump’s Facebook account to false election fraud claims and Jan. 6. Many of the conversations I had at the time revolved around balancing the right to free speech with the harm that speech could cause someone.

    This is one of the central dilemmas companies like Facebook grapple with. What is the right call for company administrators when a sitting president of the United States violates their platform’s community standards, even as they believe that people should be able to hear what he has to say? When are people exercising their right to organize and protest against their government, as opposed to preparing for a violent insurrection?

    Similar issues come up in other countries. Last year the Russian government pressured Apple and Google to remove an app created by allies of Aleksei Navalny, an opponent of President Vladimir Putin’s. Refusing the government would have put their employees in Russia at risk. Complying would go against free-expression standards. The companies chose to protect their employees.

    These are the kinds of difficult questions that crop up in every country, but Facebook also needs country-specific monitoring. Human expertise is the only way to truly understand how heated discussions are shifting in real time and to be sensitive to linguistic and cultural nuances. The Russian word for dill, “ukrop,” for example, has been used as a slur against Ukrainians. Some Ukrainians, however, reclaimed the word and even named a political party after it. A global framework that fails to account for these kinds of situations, or that is overly reliant on technology to address them, is not prepared to confront the reality of our complex world.

    Facebook has invested billions in this kind of work. But a majority of its investment for classifying misinformation, for example, has focused on the United States, even though daily active users in other countries make up the vast majority of the user base. And it’s not clear which efforts Facebook will extend from U.S. elections to those in other countries. It’s unlikely that within the next two years, much less the next few months, Facebook can build up protections in every country. But it must start planning now for how it will exponentially scale up people, products and partnerships to handle so many elections at once in 2022 and 2024.

    It should be transparent about how it will determine what to build in each country. In 2019, Facebook had more than 500 full-time employees and 30,000 people working on safety and security overall. Even with that amount of human talent, it could cover the national elections in only three major countries at once. At least that many people were needed for the United States in 2020. In two years, people in the United States, India, Indonesia, Ukraine, Taiwan, Mexico and Britain are to go to the polls in national elections. Facebook will need to consider hiring at least 1,000 more full-time employees to be ready for the next big election cycle. If the company is cutting it close for 2022, it has just enough time to be really ready for 2024.

    These problems are not ones that Facebook can fix on its own. Its parent, Meta, is a private company but one with tremendous influence on society and democratic discourse. Facebook needs to continue to recognize the responsibility it has to protect elections around the world and invest accordingly. Governments, civil society and the public should hold it accountable for doing so.

    Katie Harbath is the chief executive of Anchor Change, a company focused on issues at the intersection of tech and democracy. She formerly worked at Facebook, where she helped lead its work on elections.

  • Jan. 6 Committee Subpoenas Twitter, Meta, Alphabet and Reddit

    The panel investigating the attack on the Capitol is demanding information from Alphabet, Meta, Reddit and Twitter.

    WASHINGTON — The House committee investigating the Jan. 6 attack on the Capitol issued subpoenas on Thursday to four major social media companies — Alphabet, Meta, Reddit and Twitter — criticizing them for allowing extremism to spread on their platforms and saying they have failed to cooperate adequately with the inquiry.

    In letters accompanying the subpoenas, the panel named Facebook, a unit of Meta, and YouTube, which is owned by Alphabet’s Google subsidiary, as among the worst offenders that contributed to the spread of misinformation and violent extremism. The committee said it had been investigating how the companies “contributed to the violent attack on our democracy, and what steps — if any — social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.”

    “It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,” said the panel’s chairman, Representative Bennie Thompson, Democrat of Mississippi.

    The committee sent letters in August to 15 social media companies — including sites where misinformation about election fraud spread, such as the pro-Trump website TheDonald.win — seeking documents pertaining to efforts to overturn the election and any domestic violent extremists associated with the Jan. 6 rally and attack.

    After months of discussions with the companies, only the four large corporations were issued subpoenas on Thursday, because the committee said the firms were “unwilling to commit to voluntarily and expeditiously” cooperating with its work. A committee aide said investigators were in various stages of negotiations with the other companies.

    In the year since the events of Jan. 6, social media companies have been heavily scrutinized over whether their sites played an instrumental role in organizing the attack.

    In the months surrounding the 2020 election, employees inside Meta raised warning signs that Facebook posts and comments containing “combustible election misinformation” were spreading quickly across the social network, according to a cache of documents and photos reviewed by The New York Times. Many of those employees criticized Facebook leadership’s inaction when it came to the spread of the QAnon conspiracy group, which they said also contributed to the attack.

    Frances Haugen, a former Facebook employee turned whistle-blower, said the company relaxed its safeguards too quickly after the election, which then led it to be used in the storming of the Capitol.

    Critics say that other platforms also played an instrumental role in the spread of misinformation while contributing to the events of Jan. 6.

    In the days after the attack, Reddit banned a discussion forum dedicated to former President Donald J. Trump, where tens of thousands of Mr. Trump’s supporters regularly convened to express solidarity with him.

    On Twitter, many of Mr. Trump’s followers used the site to amplify and spread false allegations of election fraud, while connecting with other Trump supporters and conspiracy theorists. And on YouTube, some users broadcast the events of Jan. 6 using the platform’s video streaming technology.

    Representatives for the tech companies have been in discussions with the investigating committee, though how much in the way of evidence or user records the firms have handed over remains unclear. The committee said letters to the four firms accompanied the subpoenas.

    The panel said YouTube served as a platform for “significant communications by its users that were relevant to the planning and execution of the Jan. 6 attack on the United States Capitol,” including livestreams of the attack as it was taking place.

    “To this day, YouTube is a platform on which user video spread misinformation about the election,” Mr. Thompson wrote.

    The panel said Facebook and other Meta platforms were used to share messages of “hate, violence and incitement; to spread misinformation, disinformation and conspiracy theories around the election; and to coordinate or attempt to coordinate the Stop the Steal movement.”

    Public accounts about Facebook’s civic integrity team indicate that Facebook has documents that are critical to the select committee’s investigation, the panel said.

    “Meta has declined to commit to a deadline for producing or even identifying these materials,” Mr. Thompson wrote to Mark Zuckerberg, Meta’s chief executive.

  • End the Secrecy. Open Up Adoption Records.

    More from our inbox:
    • Facebook Misinformation
    • Erasing Older Women at the Art Institute of Chicago
    • A Conversation With Voters

    Sam Anthony, left, with his birth father, Craig Nelson, at Mr. Anthony’s home in Falls Church, Va., in August. (Debra Steidel Wall)

    To the Editor:

    Re “With DNA and Friend’s Help, a Dying Son Finds His Father” (front page, Oct. 10):

    If we continue to keep the process of finding one’s birth family and opening birth records as difficult as possible, as it was for Sam Anthony, profiled in your article, we are preventing valuable family connections that should be a basic human right.

    Adoptees are often completely cut off from our birth families the second our adoption papers are finalized. If it weren’t for DNA testing, I would never have discovered that two half-siblings of mine had been adopted into a different family a few states away.

    Adoptees should not have to go to great lengths to reconnect with their birth family. But, unfortunately, the complicated and often expensive process of DNA testing and hiring private investigators is often the only way to find biological relatives.

    When birth records are sealed, adoptees suffer in order to uphold an archaic standard that was meant to shroud adoptions in secrecy to prevent shame. We live in a different era now and, like Sam, deserve a right to our records.

    Melissa Guida-Richards
    Milford, Pa.
    The writer is the author of “What White Parents Should Know About Transracial Adoption.”

    To the Editor:

    This is the latest article in The Times exposing the egregious practice of denying adoptees the truth about their beginnings and hiding the babies’ fate from their birth parents.

    Steve Inskeep’s March 28 essay, “I Was Denied My Birth Story,” revealed his fury about not knowing “the story of how I came to live on this earth. Strangers hid part of me from myself.”

    Lisa Belkin reviewed Gabrielle Glaser’s book “American Baby” (Book Review, Jan. 24), another tragic tale about what happens when adoptions are closed.

    How many tragic tales do we have to hear to understand that birth parents, adoptive families and adoptees need to know one another? How many children must lie awake at night wondering why they were given away? How many adoptees do not know their genetic history?

    The solution is easy — open adoption, in which birth parents and adoptive families choose each other and stay in touch through social media, texts, photos and visits.

    With Ancestry.com and 23andMe, closed adoptions do not remain closed. Why not avoid the emotional pain by sharing the truth from the beginning?

    Nancy Kors
    Walnut Creek, Calif.
    The writer is an adoption facilitator.

    Facebook Misinformation

    To the Editor:

    Re “Misinformation Tripped Alarms Inside Facebook” (front page, Oct. 24):

    New disclosures that point to a disconnect between the self-serving public statements of Facebook executives and the internal expressions of concern of lower-level employees surrounding the 2020 election paint a picture of a company policy that enables and protects misinformation.

    These revelations, especially those involving the Jan. 6 insurrection, suggest that management overlooks or even accepts incendiary content in its pursuit of profits — a practice that is often out of sync with the conscience of its employees and is at odds with the best interests of the public.

    Taken together with the recent testimony of the whistle-blower Frances Haugen, who detailed to Congress a corporate culture that places profits ahead of its users’ mental health, this new documentation clearly strengthens the case for congressional oversight and public awareness.

    Facebook’s reach and influence are so vast that its apparent unwillingness to filter misinformation exceeds the bounds of free speech, harming its users and putting democracy at risk. The company has had a good run, but the days of its free ride may be numbered.

    Roger Hirschberg
    South Burlington, Vt.

    Erasing Older Women at the Art Institute of Chicago

    To the Editor:

    Re “Museum Ousts Volunteers in Diversity Push. Uproar Ensues.” (news article, Oct. 22):

    Alas, the invisible old woman! While your article on the Art Institute of Chicago’s decision to end the volunteer careers of 82 docents focused on the controversy over the racial makeup of the docents, it neglected to really deal with the overt age discrimination that such otherwise worthwhile pushes for greater diversity can promote.

    Not all docents are older or female, but they tend to be. Largely, they can volunteer with such expertise and loyalty because, after long careers and/or raising families, many finally have the time to turn to volunteering in their communities. Yet the museum — along with much of our society — invalidates these older women, erasing their presence.

    Dee Baer
    Wilmington, Del.
    The writer is a senior guide at the Delaware Art Museum.

    A Conversation With Voters

    To the Editor:

    Re “The Anti-Robocall: Listening to Voters Talk” (news article, Oct. 21):

    This wonderful article identifies a way to improve the minimal communication that currently prevails among those holding different opinions regarding values and public policy.

    As psychologists and spiritual teachers have long observed, deep, nonjudgmental listening to others with diverse perspectives can increase compassion for one another and perhaps lead to compromise solutions to the serious problems afflicting our nation and the world.

    Would that our Congress might take heed and schedule such listening sessions about the national issues too often discussed secretly, leaving the public uninformed. Broadcasting honest dialogues that state positions, rather than just attacks on the other side, on TV and the internet would manifest a concern for an informed citizenry.

    Bruce Kerievsky
    Monroe Township, N.J.


  • What Happened When Facebook Employees Warned About Election Misinformation

    Company documents show that the social network’s employees repeatedly raised red flags about the spread of misinformation and conspiracies before and after the contested November vote.

    Sixteen months before last November’s presidential election, a researcher at Facebook described an alarming development. She was getting content about the conspiracy theory QAnon within a week of opening an experimental account, she wrote in an internal report.

    On Nov. 5, two days after the election, another Facebook employee posted a message alerting colleagues that comments with “combustible election misinformation” were visible below many posts.

    Four days after that, a company data scientist wrote in a note to his co-workers that 10 percent of all U.S. views of political material — a startlingly high figure — were of posts that alleged the vote was fraudulent.

    In each case, Facebook’s employees sounded an alarm about misinformation and inflammatory content on the platform and urged action — but the company failed or struggled to address the issues. The internal dispatches were among a set of Facebook documents obtained by The New York Times that give new insight into what happened inside the social network before and after the November election, when the company was caught flat-footed as users weaponized its platform to spread lies about the vote.

  • Trump Finds Backing for His Own Media Venture

    A merger could give the former president access to nearly $300 million in cash — and perhaps a new platform.

    Former President Donald J. Trump said on Wednesday that he had lined up the investment money to create his own publicly traded media company, an attempt to reinsert himself into the public conversation online, from which he has largely been absent since Twitter and Facebook banned him after the Jan. 6 insurrection.

    If finalized, the deal could give the new Trump company access to nearly $300 million in spending money.

    In a statement announcing the new venture, Mr. Trump and his investors said that the new company would be called Trump Media & Technology Group and that they would create a new social network called Truth Social. Its purpose, according to the statement, is “to create a rival to the liberal media consortium and fight back against the ‘Big Tech’ companies of Silicon Valley.”

    Since he left office and became the only American president to be impeached twice, Mr. Trump has had an active presence in conservative media. But he lacks the ability he once had to sway news cycles and dominate the national political debate. He filed a lawsuit this month asking Twitter to reinstate his account.

    The announcement on Wednesday also pointed to a promised new app listed for pre-sale on the App Store, with mock-up illustrations bearing more than a passing resemblance to Twitter.

    The details of Mr. Trump’s latest partnership were vague. The statement he issued was reminiscent of the kind of claims he made about his business dealings in New York as a real estate developer. It was replete with high-dollar amounts and superlatives that could not be verified.

    Rumors of Mr. Trump’s interest in starting his own media businesses have circulated since he was defeated in the November 2020 election. None materialized. Despite early reports that he was interested in starting his own cable channel to rival Fox News, that idea never got very far, given the immense cost and time it would require. A close adviser, Jason Miller, started a rival social media platform for Trump supporters called Gettr. But Mr. Trump never signed on.

    In a statement on Wednesday night, Mr. Miller said of his and Mr. Trump’s negotiations, “We just couldn’t come to terms on a deal.”

    Mr. Trump’s partner is Digital World Acquisition, a special purpose acquisition company, or SPAC. These so-called blank-check companies are an increasingly popular type of investment vehicle that sells shares to the public with the intention of using the proceeds to buy private businesses.

    Digital World was incorporated in Miami a month after Mr. Trump lost the 2020 election. The company filed for an initial public stock offering this spring, and it sold shares to the public on the Nasdaq stock exchange last month. The I.P.O. raised about $283 million, and Digital World drummed up another $11 million by selling shares to investors through a so-called private placement.

    Digital World is backed by some marquee Wall Street names and others with high-powered connections. In regulatory filings after the I.P.O., major hedge funds including D.E. Shaw, Highbridge Capital Management, Lighthouse Partners and Saba Capital Management have reported owning substantial percentages of Digital World.

    Digital World’s chief executive is Patrick F. Orlando, a former employee of investment banks including Deutsche Bank, where he specialized in the trading of financial instruments known as derivatives. He created his own investment bank, Benessere Capital, in 2012, according to a recent regulatory filing.

    Digital World’s chief financial officer, Luis Orleans-Braganza, is a member of Brazil’s National Congress.

    Mr. Orlando disclosed in a recent filing that he owned nearly 18 percent of the company’s outstanding stock. He and representatives for Digital World did not immediately respond to requests for comment.

    This is not Mr. Orlando’s first blank-check company. He has created at least two others, including one, Yunhong International, that is incorporated in the offshore tax haven of the Cayman Islands.

    At the time investors bought shares in Digital World, it had not disclosed what companies, if any, it planned to acquire. On its website, Digital World said that its goal was “to focus on combining with a leading tech company.”

    At least one of the investors, Saba Capital Management, did not know at the time of the initial public offering that Digital World would be doing a transaction with Mr. Trump, according to a person familiar with the matter.

    Mr. Trump, who has repeatedly lied about the results of the 2020 election while accusing the mainstream news media of publishing “fake” stories to discredit him, leaned hard into the notion of truth as his new company’s governing ethos.

    “We live in a world where the Taliban has a huge presence on Twitter, yet your favorite American president has been silenced,” Mr. Trump said in his written statement, vowing to publish his first item soon. “This is unacceptable.”

  • YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.


    Chart: Share of Election-Related Posts on Social Platforms Linking to Videos Making Claims of Fraud. (Source: Center for Social Media and Politics at New York University)

    YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.

    Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

    But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.

    The proportion fell further after Jan. 7, when YouTube announced that any channel that violated its election misinformation policy would receive a “strike,” and that channels receiving three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

    The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7.

    To reach their findings, the researchers collected a random sample of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool. From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud, using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021. (A minimal sketch of this filtering step appears after this story.)

    Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.

    Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy. “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said.

    Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results.

    But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and in N.Y.U.’s own research.

    “It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”
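
    The filtering the researchers describe reduces to a few operations: keep posts that link to YouTube, flag the ones matching fraud-related keywords, and track the daily share. Here is a minimal Python sketch of that approach, assuming a simple post schema; the field names, and any keywords beyond the two terms the article cites, are illustrative assumptions, not the center’s actual pipeline.

        from collections import defaultdict
        from urllib.parse import urlparse

        # Keywords the article cites ("Stop the Steal", "Sharpiegate"); the
        # full list the researchers used is not given, so this is illustrative.
        FRAUD_KEYWORDS = ["stop the steal", "sharpiegate"]

        def links_to_youtube(url):
            # True if a shared URL points at YouTube.
            host = urlparse(url).netloc.lower()
            return host.endswith("youtube.com") or host.endswith("youtu.be")

        def mentions_fraud(text):
            # True if the post text matches any fraud-related keyword.
            lowered = text.lower()
            return any(kw in lowered for kw in FRAUD_KEYWORDS)

        def daily_fraud_share(posts):
            # posts: iterable of dicts with "date", "text" and "urls" keys
            # (an assumed schema, not Twitter's or CrowdTangle's real format).
            totals = defaultdict(int)
            fraud = defaultdict(int)
            for post in posts:
                if not any(links_to_youtube(u) for u in post["urls"]):
                    continue  # keep only posts that share YouTube videos
                totals[post["date"]] += 1
                if mentions_fraud(post["text"]):
                    fraud[post["date"]] += 1
            return {day: fraud[day] / totals[day] for day in totals}

    Plotting the returned daily shares over November to January would reproduce the shape of the trend the researchers report, with drops after the Dec. 8 and Jan. 7 policy changes.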

  • Whistle-Blower to Accuse Facebook of Contributing to Jan. 6 Riot, Memo Says

    In an internal memo, Facebook defended itself and said that social media was not a primary cause of polarization.

    SAN FRANCISCO — Facebook, which has been under fire from a former employee who has revealed that the social network knew of many of the harms it was causing, was bracing for new accusations over the weekend from the whistle-blower and said in a memo that it was preparing to mount a vigorous defense.

    The whistle-blower, whose identity has not been publicly disclosed, planned to accuse the company of relaxing its security safeguards for the 2020 election too soon after Election Day, which then led the platform to be used in the storming of the U.S. Capitol on Jan. 6, according to the internal memo obtained by The New York Times. The whistle-blower planned to discuss the allegations on “60 Minutes” on Sunday, the memo said, and was also set to say that Facebook had contributed to political polarization in the United States.

    The 1,500-word memo, written by Nick Clegg, Facebook’s vice president of policy and global affairs, was sent to employees on Friday to pre-empt the whistle-blower’s interview. Mr. Clegg pushed back strongly on what he said were the coming accusations, calling them “misleading.” “60 Minutes” published a teaser of the interview in advance of its segment on Sunday.

    “Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” he wrote. “But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”

    Facebook has been in an uproar for weeks because of the whistle-blower, who has shared thousands of pages of company documents with lawmakers and The Wall Street Journal. The Journal has published a series of articles based on the documents, which show that Facebook knew how its apps and services could cause harm, including worsening body image issues among teenage girls using Instagram.

    Facebook has since scrambled to contain the fallout, as lawmakers, regulators and the public have said the company needs to account for the revelations. On Monday, Facebook paused the development of an Instagram service for children ages 13 and under. Its global head of safety, Antigone Davis, also testified on Thursday as irate lawmakers questioned her about the effects of Facebook and Instagram on young users.

    A Facebook spokesman declined to comment. A spokesman for “60 Minutes” did not immediately respond to a request for comment.

    Inside Facebook, executives including Mr. Clegg and the “Strategic Response” teams have called a series of emergency meetings to try to extinguish some of the outrage. Mark Zuckerberg, Facebook’s chief executive, and Sheryl Sandberg, the chief operating officer, have been briefed on the responses and have approved them, but have remained behind the scenes to distance themselves from the negative press, people with knowledge of the company have said.

    The firestorm is far from over. Facebook anticipated more allegations during the whistle-blower’s “60 Minutes” interview, according to the memo. The whistle-blower, who plans to reveal her identity during the interview, was set to say that Facebook had turned off some of its safety measures around the election — such as limits on live video — too soon after Election Day, the memo said. That allowed misinformation to flood the platform and groups to congregate online and plan the Jan. 6 storming of the Capitol building.

    Mr. Clegg said that was an inaccurate view and cited many of the safeguards and security mechanisms that Facebook had built over the past five years. He said the company had removed millions of groups such as the Proud Boys and others related to causes like the conspiracy theory QAnon and #StopTheSteal election fraud claims.

    The whistle-blower was also set to claim that many of Facebook’s problems stemmed from changes in the News Feed in 2018, the memo said. That was when the social network tweaked its algorithm to emphasize what it called Meaningful Social Interactions, or MSI, which prioritized posts from users’ friends and family and de-emphasized posts from publishers and brands. (A toy sketch of this kind of source-weighted ranking appears after this story.)

    The goal was to make sure that Facebook’s products were “not just fun, but are good for people,” Mr. Zuckerberg said in an interview about the change at the time.

    But according to Friday’s memo, the whistle-blower would say that the change contributed to even more polarization among Facebook’s users. The whistle-blower was also set to say that Facebook then reaped record profits as its users flocked to the divisive content, the memo said.

    Mr. Clegg warned that the period ahead could be difficult for employees who might face questions from friends and family about Facebook’s role in the world. But he said that societal problems and political polarization have long predated the company and the advent of social networks in general.

    “The simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization,” he wrote. “Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking whatsoever, including private messaging apps like iMessage and WhatsApp.”

    Mr. Clegg, who is scheduled to appear on the CNN program “Reliable Sources” on Sunday morning, also tried to relay an upbeat note to employees.

    “We will continue to face scrutiny — some of it fair and some of it unfair,” he said in the memo. “But we should also continue to hold our heads up high.”

    Here is Mr. Clegg’s memo in full:

    OUR POSITION ON POLARIZATION AND ELECTIONS

    You will have seen the series of articles about us published in the Wall Street Journal in recent days, and the public interest it has provoked. This Sunday night, the ex-employee who leaked internal company material to the Journal will appear in a segment on 60 Minutes on CBS. We understand the piece is likely to assert that we contribute to polarization in the United States, and suggest that the extraordinary steps we took for the 2020 elections were relaxed too soon and contributed to the horrific events of January 6th in the Capitol.

    I know some of you – especially those of you in the US – are going to get questions from friends and family about these things, so I wanted to take a moment as we head into the weekend to provide what I hope is some useful context on our work in these crucial areas.

    Facebook and Polarization

    People are understandably anxious about the divisions in society and looking for answers and ways to fix the problems. Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out. So it’s natural for people to ask whether it is part of the problem. But the idea that Facebook is the chief cause of polarization isn’t supported by the facts – as Chris and Pratiti set out in their note on the issue earlier this year.

    The rise of polarization has been the subject of swathes of serious academic research in recent years.
    In truth, there isn’t a great deal of consensus. But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.

    The increase in political polarization in the US pre-dates social media by several decades. If it were true that Facebook is the chief cause of polarization, we would expect to see it going up wherever Facebook is popular. It isn’t. In fact, polarization has gone down in a number of countries with high social media use at the same time that it has risen in the US.

    Specifically, we expect the reporting to suggest that a change to Facebook’s News Feed ranking algorithm was responsible for elevating polarizing content on the platform. In January 2018, we made ranking changes to promote Meaningful Social Interactions (MSI) – so that you would see more content from friends, family and groups you are part of in your News Feed. This change was heavily driven by internal and external research that showed that meaningful engagement with friends and family on our platform was better for people’s wellbeing, and we further refined and improved it over time as we do with all ranking metrics.

    Of course, everyone has a rogue uncle or an old school classmate who holds strong or extreme views we disagree with – that’s life – and the change meant you are more likely to come across their posts too. Even so, we’ve developed industry-leading tools to remove hateful content and reduce the distribution of problematic content. As a result, the prevalence of hate speech on our platform is now down to about 0.05%.

    But the simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization. Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking whatsoever, including private messaging apps like iMessage and WhatsApp.

    Elections and Democracy

    There’s perhaps no other topic that we’ve been more vocal about as a company than our work to dramatically change the way we approach elections. Starting in 2017, we began building new defenses, bringing in new expertise, and strengthening our policies to prevent interference. Today, we have more than 40,000 people across the company working on safety and security.

    Since 2017, we have disrupted and removed more than 150 covert influence operations, including ahead of major democratic elections. In 2020 alone, we removed more than 5 billion fake accounts — identifying almost all of them before anyone flagged them to us. And, from March to Election Day, we removed more than 265,000 pieces of Facebook and Instagram content in the US for violating our voter interference policies.

    Given the extraordinary circumstances of holding a contentious election in a pandemic, we implemented so-called “break glass” measures – and spoke publicly about them – before and after Election Day to respond to specific and unusual signals we were seeing on our platform and to keep potentially violating content from spreading before our content reviewers could assess it against our policies.

    These measures were not without trade-offs – they’re blunt instruments designed to deal with specific crisis scenarios. It’s like shutting down an entire town’s roads and highways in response to a temporary threat that may be lurking somewhere in a particular neighborhood.
    In implementing them, we know we impacted significant amounts of content that did not violate our rules to prioritize people’s safety during a period of extreme uncertainty. For example, we limited the distribution of live videos that our systems predicted may relate to the election. That was an extreme step that helped prevent potentially violating content from going viral, but it also impacted a lot of entirely normal and reasonable content, including some that had nothing to do with the election. We wouldn’t take this kind of crude, catch-all measure in normal circumstances, but these weren’t normal circumstances.

    We only rolled back these emergency measures – based on careful data-driven analysis – when we saw a return to more normal conditions. We left some of them on for a longer period of time through February this year, and others, like not recommending civic, political or new Groups, we have decided to retain permanently.

    Fighting Hate Groups and Other Dangerous Organizations

    I want to be absolutely clear: we work to limit, not expand, hate speech, and we have clear policies prohibiting content that incites violence. We do not profit from polarization; in fact, just the opposite. We do not allow dangerous organizations, including militarized social movements or violence-inducing conspiracy networks, to organize on our platforms. And we remove content that praises or supports hate groups, terrorist organizations and criminal groups.

    We’ve been more aggressive than any other internet company in combating harmful content, including content that sought to delegitimize the election. But our work to crack down on these hate groups was years in the making. We took down tens of thousands of QAnon pages, groups and accounts from our apps, removed the original #StopTheSteal Group, and removed references to Stop the Steal in the run-up to the inauguration. In 2020 alone, we removed more than 30 million pieces of content violating our policies regarding terrorism and more than 19 million pieces of content violating our policies around organized hate. We designated the Proud Boys as a hate organization in 2018 and we continue to remove praise, support and representation of them. Between August last year and January 12 this year, we identified nearly 900 militia organizations under our Dangerous Organizations and Individuals policy and removed thousands of Pages, groups, events, Facebook profiles and Instagram accounts associated with these groups.

    This work will never be complete. There will always be new threats and new problems to address, in the US and around the world. That’s why we remain vigilant and alert – and will always have to.

    That is also why the suggestion that is sometimes made that the violent insurrection on January 6 would not have occurred if it were not for social media is so misleading. To be clear, the responsibility for those events rests squarely with the perpetrators of the violence, and those in politics and elsewhere who actively encouraged them. Mature democracies in which social media use is widespread hold elections all the time – for instance Germany’s election last week – without the disfiguring presence of violence. We actively share with law enforcement material that we can find on our services related to these traumatic events. But reducing the complex reasons for polarization in America – or the insurrection specifically – to a technological explanation is woefully simplistic.

    We will continue to face scrutiny – some of it fair and some of it unfair.
    We’ll continue to be asked difficult questions. And many people will continue to be skeptical of our motives. That’s what comes with being part of a company that has a significant impact in the world. We need to be humble enough to accept criticism when it is fair, and to make changes where they are justified. We aren’t perfect and we don’t have all the answers. That’s why we do the sort of research that has been the subject of these stories in the first place. And we’ll keep looking for ways to respond to the feedback we hear from our users, including testing ways to make sure political content doesn’t take over their News Feeds.

    But we should also continue to hold our heads up high. You and your teams do incredible work. Our tools and products have a hugely positive impact on the world and in people’s lives. And you have every reason to be proud of that work.
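
    To make the MSI change discussed above concrete: ranking systems of this general kind score each candidate post and sort the feed by score. The Python toy below illustrates source-weighted scoring only, with invented weights and engagement signals; Facebook’s actual model, signals and values are not public, so nothing here should be read as the real algorithm.

        from dataclasses import dataclass

        # Invented weights favoring personal connections over publishers,
        # in the spirit of the MSI change; not Facebook's real values.
        SOURCE_WEIGHTS = {"friend": 3.0, "family": 3.0, "group": 2.0,
                          "publisher": 0.5, "brand": 0.5}

        @dataclass
        class Post:
            source_type: str           # e.g. "friend" or "publisher"
            predicted_comments: float  # hypothetical engagement predictions
            predicted_shares: float

        def msi_style_score(post):
            # Weight predicted interactions by how personal the source is.
            engagement = post.predicted_comments + post.predicted_shares
            return SOURCE_WEIGHTS.get(post.source_type, 1.0) * engagement

        feed = [Post("publisher", 40.0, 25.0), Post("friend", 12.0, 6.0)]
        # The friend's post (score 54.0) now outranks the publisher's (32.5),
        # even though the publisher's post has more predicted engagement.
        ranked = sorted(feed, key=msi_style_score, reverse=True)

    The point of the toy is the trade-off both sides describe: reweighting toward personal sources changes what wins the ranking without changing what anyone posted.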