More stories

  • Getting the Public Behind the Fight on Misinformation

    Misinformation is false or inaccurate information communicated regardless of any intention to deceive. Its spread undermines trust in politics and the media, a problem exacerbated by social media platforms that encourage emotional responses: users often read only headlines, engage with false posts and share credible sources less. Once hesitant to respond, social media companies are increasingly taking steps to stop the spread of misinformation. But why have these efforts failed to gain greater public support? 

    A 2021 poll from the Pearson Institute found that 95% of Americans believed that the spread of misinformation was concerning, with over 70% blaming, among others, social media companies. Though Americans overwhelmingly agree that misinformation must be addressed, why is there little public consensus on the appropriate solution? 


    To address this, we ran a national web survey with 1,050 respondents via Qualtrics, using gender, age and regional quota sampling. Our research suggests several challenges to combating misinformation. 

    First, there are often misconceptions about what social media companies can do. As private entities, they have the legal right to moderate content on their platforms, whereas the First Amendment applies only to government restrictions on speech. When asked to evaluate the statement “social media companies have a right to remove posts on their platform,” a clear majority of 58.7% agreed. Yet a partisan divide emerges: 74.3% of Democrats agreed with the statement, compared to only 43.5% of Republicans.

    Ignorance of the scope of the First Amendment may partially explain these findings, as may the belief among respondents that, even if companies have the legal right to remove content, they should not do so. A history of tech companies initially couching policies in free speech principles only to later backtrack adds to the confusion. For example, Twitter once maintained “a devotion to a fundamental free speech standard” of content neutrality, but by 2017 had shifted to a policy under which not only posts but even accounts without offensive tweets could be removed. 


    Second, while most acknowledge that social media companies should do something, there is little agreement on what that something should be. Overall, 70% of respondents, including a majority of both Democrats (84%) and Republicans (57.6%), agreed with the statement that “social media companies should take steps to restrict false information online, even if it limits freedom of information.”

    We then asked respondents if they would support five different means to combat misinformation. Here, none of the five proposed means mentioned in the survey found majority support, with the most popular option — providing factual information directly under posts labeled as misinformation — supported only by 46.6% of respondents. This was also the only option that a majority of Democrats supported (56.4%).

    Moreover, over a fifth of respondents (20.6%) did not support any of the options. Even focusing only on respondents who stated that social media companies should take steps failed to find broad support for most options. 

    So what might increase public buy-in to these efforts? Transparent policies are necessary so that responses do not appear ad hoc or inconsistent. While many users may not pay attention to terms of service, consistent policies may counter perceptions that enforcement is selective or targets only certain ideological viewpoints.

    Recent research finds that while almost half of Americans have seen posts labeled as potentially being misinformation on social media, they are wary of trusting fact-checks because they are unsure how information is identified as inaccurate. Greater explanation of the fact-checking process, including using multiple third-party services, may also help address this concern.


    Rather than relying solely on moderating content, social media companies may also wish to adopt subtle measures that encourage users to evaluate their own posting behavior. Twitter and Facebook have already nodded in this direction with prompts suggesting that users read articles before sharing them. 

    Various crowdsourcing efforts may also serve to signal the accuracy of posts or the frequency with which they are being fact-checked. These efforts attempt to address the underlying hesitancy to combat misinformation while providing an alternative to content moderation that users may not see as transparent. While Americans overwhelmingly agree that misinformation is a problem, designing an effective solution requires a multi-faceted approach. 

    [Funding for this survey was provided by the Institute for Humane Studies.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • Peter Thiel, PayPal founder and Trump ally, to step down from Meta board

    Thiel, a major donor to the Republican party, was seen by critics as part of the reason why Facebook did not censor Trump.

    Peter Thiel, the co-founder of PayPal and Palantir Technologies, is stepping down from the board of Facebook’s parent company, Meta, after 17 years.

    Thiel, Facebook’s longest-serving board member and a major donor to the Republican party, plans to focus on backing Donald Trump’s allies in the November midterm elections, according to the New York Times. He recently donated $10m each to the Senate campaigns of Blake Masters, who is running for a seat in Arizona, and JD Vance, who is running in Ohio. Masters is the chief operating officer of Thiel’s family office, and Vance used to work at one of Thiel’s venture funds.

    Thiel has long been a controversial figure on Facebook’s 10-person board, particularly as one of the few major tech figures who vocally supported Trump. Thiel, who donated millions of dollars to Trump’s campaign and served on the ex-president’s transition team, was seen by critics as part of the reason Facebook did not take down Trump’s posts that violated its community standards.

    Thiel is a close confidant of Zuckerberg’s. He accompanied him to a private dinner with Trump in 2019 and successfully advocated that Zuckerberg withstand pressure to take political speech and ads off the platform. But recently he has publicly criticized Facebook’s content moderation decisions, saying he’d “take QAnon and Pizzagate conspiracy theories any day over a Ministry of Truth”.

    Thiel joined Facebook’s board in 2005, a year after the company was founded and seven years before it made its debut on Wall Street. The company said on Monday that he would stay on until Meta’s next shareholder meeting later this year, where he would not stand for re-election.

    “Peter has been a valuable member of our board and I’m deeply grateful for everything he’s done for our company,” said Mark Zuckerberg, chief executive of Meta, in a statement. “Peter is truly an original thinker who you can bring your hardest problems and get unique suggestions.”

    In a statement on Monday, Thiel called Zuckerberg “one of the great entrepreneurs of our time” and praised his “intelligence, energy and conscientiousness”.

    The Associated Press and Reuters contributed reporting.

  • Welcome to the Metaverse: The Peril and Potential of Governance

    The final chapter of Don DeLillo’s epic 1997 novel “Underworld” has proven a prescient warning of the dangers of the digitized life and culture into which we’ve communally plunged headfirst. Yet no sentiment, no open question posed in his 800-page opus rings as ominously, or remains as unsettling today, as this: “Is cyberspace a thing within the world or is it the other way around? Which contains the other, and how can you tell for sure?”


    Regrettably, people’s opinions on the metaverse currently depend on whether they view owning and operating a “digital self” through the lens of dystopia (“The Matrix”) or harmless fun (“Fortnite”). It is additionally unfortunate that an innovative space as dynamic and potentially revolutionary as the metaverse has become, in the public’s imagination, the intellectual property of one company.

    But the fact that future users so readily associate the metaverse with Facebook is a temporary result of PR and a wave of talent migration, and will be replaced by firsthand experiences gained through our exposure to the metaverse itself, and not a single firm’s vision for it.

    Meta Power

    So, what does this all mean? How will the metaverse shape the way we do business, the way we live our lives, the way we govern ourselves? Who owns the metaverse? Why do we need it? Who will be in charge?

    Taking a lead from this stellar primer, if we simply replace the word “metaverse” with the word “internet” wherever we see it, its application and significance suddenly become easier to grasp. It also becomes clear that Facebook’s rebranding as Meta is not so much a reference to the creation of the metaverse as an expression of the company’s desire to become this new territory’s most enthusiastic homesteader. Facebook is not so much creating the metaverse as hoping — like every other firm and government should hope — that it won’t be left behind in this new world.


    As far as the metaverse’s impact, its political implications might end up being its least transformative. In the United States, for instance, the digitization of political campaigning has carved a meandering path to the present that is too simplistically summed up thus: Howard Dean crawled so that Barack Obama could walk so that Donald Trump could run so that Joe Biden could drop us all off at No Malarkey Station.

    Where this train goes next, both in the United States and globally, will be a function of individual candidates’ goals, and the all-seeing eye of algorithm-driven voter outreach. But the bottom line is that there will be campaign advertisements in the metaverse because, well, there are campaign advertisements everywhere, all the time.

    More interesting to consider is how leaders will engage the metaverse once in power. Encouragingly, from the governmental side, capabilities and opportunities abound to redefine the manner in which citizens reach their representatives and participate in their own governance. Early public sector adopters of metaversal development have but scratched the surface of these possibilities.

    For starters, the tiny island nation of Barbados has staked out the first metaversal embassy. The openness to technology and the renewed focus on citizen interaction evidenced in this move are laudable, demonstrating the metaverse’s democratic value as a means of increased transparency in government and truly borderless global engagement. Though novel, Barbados’ digital embassy is no gimmick. You can be sure that additional diplomatic missions will soon follow suit in establishing a presence in the metaverse and will perhaps wish they had thought to do so earlier.


    Another happy marriage of innovation and democracy is underway in South Korea. Its capital city has taken the mission of digitizing democracy a step further by setting the ambitious goal of creating a Metaverse Seoul by 2023 for the express purpose of transforming its citizenry’s access to municipal government. Things like virtual public hearings, a virtually accessible mayor’s office, virtual tourism, virtual conventions, markets and events will all be on the table as one of the world’s most economically and culturally rich metropolises opens its digital doors to all who wish to step inside.

    Digital Twinning

    Any time technology is employed in the service of empowering people and holding governments more accountable, such advancements should be celebrated. The metaverse can and must become a vehicle for freedom. It need not provide a tired, easy analog to Don DeLillo’s ominous underworld.

    But then there’s China. While some of its cities and state-run firms are making plans to embrace what functionality is afforded via metaversal innovation, there can be no question that the government in Beijing will have a tremendous say in what development, access and behavior is and isn’t permitted in any Chinese iterations of the metaverse. It is hard to imagine, for instance, certain digital assets, products or symbols making their way past the same level of censorship beneath which China already blankets its corner of cyberspace.

    Yet China’s most intriguing metaverse-related trend involves the spike in interest in digital property ownership occurring while its real-world real estate market continues to sputter. Such a considerable reallocation of resources away from physical assets into digital ones mirrors the increasing popularity of cryptocurrency as a safe haven from the risk of inflation. Call it a technological inevitability or a societal symptom of COVID-fueled pessimism, but the digital world now appears (to some) to present fewer risks and more forward-looking stability than the physical.   


    China may be an extreme example, but the need to balance transparency, openness and prosperity with safety and control will exist for all governments in the metaverse just as it does in non-virtual reality. Real-world governmental issues will not find easy answers in the metaverse, but they might find useful twins. And as is the case in the industry, the digital twinning of democracy will give its willing practitioners the chance to experiment, to struggle, to build and rebuild, and to fail fast and often enough to eventually get some things right.

    Championing commendable applications of this new technology in government and business will position the metaverse as a useful thing within the real world, something that enriches real lives, that serves real people — not the other way around.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • ‘Breeding grounds for radicalization’: Capitol attack panel signals loss of patience with big tech

    Subpoenas are an escalation in the committee’s efforts to get answers after the companies ignored requests for information.

    The House select committee investigating the January 6 insurrection at the Capitol has ordered several social media firms to hand over data relating to the attack, a significant step toward transparency that could have broader privacy implications.

    The committee on Thursday subpoenaed Twitter, Meta, Alphabet and Reddit for private messages exchanged on the platforms about the attack, as well as information regarding moderation policies that allowed communities to remain online even as they incited violence in early 2021.

    Congressman Bennie Thompson, the chairman of the select committee, said the committee is seeking to answer two key questions: how the spread of misinformation contributed to the violent attack, and what steps social media companies took to prevent their platforms from “being breeding grounds for radicalizing people to violence”.

    The subpoenas mark an escalation in the committee’s efforts to get answers from the tech companies. Thompson added in his letter that the subpoenas came after “months of engagement” with the firms and that the four companies have so far ignored requests for information. “We cannot allow our important work to be delayed any further,” he said.

    The panel in August asked 15 tech companies, including the four subpoenaed on Thursday as well as TikTok, Snapchat, Parler and 4chan, for records related to the riot. In letters sent this week to the tech firms, Thompson lamented their lack of response. In a letter to Meta CEO Mark Zuckerberg, Thompson said that “despite repeated and specific requests for documents” related to Facebook’s practices on election misinformation and violent content, the committee had still not received these materials.

    Following the January 6 attack, social media platforms have been scrutinized for amplifying calls to violence, spreading misinformation and serving as an organizing tool for the rioters. Last March, lawmakers grilled the CEOs of Google, Twitter and Facebook about the platforms’ role in the Capitol riot. And in the months since, the major platforms have all announced initiatives to curb the spread of misinformation through their products.

    But still, much about the content moderation policies of major tech firms remains a black box, with executives slow to reveal details of how misinformation and hate speech are moderated and how many resources are dedicated to mitigating such issues. Now, increased transparency could come by means of subpoena.

    For lawmakers, the problem came even more acutely into focus with papers leaked by the whistleblower Frances Haugen in October 2021, which showed how Facebook failed to enforce policies that would rein in hate speech because they were detrimental to its bottom line. Speaking to Congress, Haugen called for more transparency from Facebook and other companies, including an independent oversight board.

    In its letter to Zuckerberg, the select committee cited revelations from Haugen, requesting access to the company’s internal analyses of the spread of misinformation and calls to violence relating to the 2020 election. In particular, the committee requested more information on the “Stop the Steal” movement and how it was regulated. A “Stop the Steal” Facebook group amassed hundreds of thousands of members and was used to coordinate some of the actions on January 6. While Facebook eventually took it down, other related pages stayed online, said Imran Ahmed, CEO of the Center for Countering Digital Hate. “It is absolutely crucial to understand the decision-making process that led them to leave those pages online – how they executed enforcement of their policies against violence, encouraging violence, intimidation, extremism and hate.”

    Similarly, Reddit has been requested to provide information on its community r/The_Donald, which was used to plan the January 6 action before it was banned weeks later on 27 January. Lawmakers were also seeking materials from Alphabet, the parent company of YouTube, because the video platform hosted significant communications by key players in the Capitol attack, including Trump’s former chief strategist Steve Bannon and rioters livestreaming their movements on January 6.

    Activists say the need to hold companies accountable for how their policies contributed to the Capitol riots should be held in balance with civil rights and privacy protections. The subpoenas may bring up privacy concerns, said Evan Greer, deputy director of digital rights group Fight for the Future. “Forcing companies to hand over private messages of its users could have major privacy implications,” Greer said. “It’s essential to remember that government surveillance and demands for data from private companies are primarily weaponized against marginalized communities,” they said. “The white supremacists who stormed the Capitol deserve to be held accountable, but we should never cheer on expansions of surveillance or government overreach.”

    Twitter, Meta, Alphabet and Reddit did not immediately respond to the Guardian’s request for comment.

  • Capitol attack panel subpoenas Google, Facebook and Twitter for digital records

    The select committee seeks records related to the January 6 attack; the move suggests the panel is ramping up its inquiry into social media posts.

    The House select committee investigating the Capitol attack subpoenaed Twitter, Meta, Alphabet and Reddit on Thursday for records related to the 6 January insurrection, as it seeks to review data that could potentially incriminate the Trump White House. Facebook is part of Meta and Google is part of Alphabet.

    The move by the select committee suggests the panel is ramping up its examination of social media posts and messages that could provide evidence as to who might have been in contact with the Trump White House around 6 January, one source said.

    Congressman Bennie Thompson, the chairman of the select committee, said in a statement that he authorized the four subpoenas because those platforms were used to communicate plans about the Capitol attack, and yet the social media companies ignored earlier requests. The subpoenas were the last straw for the select committee after repeated engagements with the platforms went unheeded, Thompson said in letters that amounted to stinging rebukes over the platforms’ lack of cooperation.

    Thompson said in the subpoena letter to Twitter that the select committee was interested in obtaining key documents House investigators suspect the company is withholding that could shed light on how users used the platform to plan and execute the Capitol attack. He said the select committee was interested in records from Reddit, since the “r/The_Donald” subreddit that eventually migrated to a website of the same name hosted significant discussion and planning related to the Capitol attack. And he said House investigators were seeking materials from Alphabet, the parent company of YouTube, which was a platform for significant communications by users who played key roles in the attack.

    The select committee has been examining digital fingerprints left by the Trump White House and other individuals connected to the Capitol attack since the outset of the investigation, on everything from posts that show geolocations to metadata, the source said. To that end, the select committee issued data preservation requests to 35 telecom and social media companies in August, demanding that they save the materials in the event the panel’s technical team required their release, the source said. The Guardian first reported that month that the select committee had requested the telecom and social media firms preserve the records of the former Trump White House chief of staff Mark Meadows, in addition to those of a dozen House Republicans.

    The select committee gave the social media companies a 27 January deadline to comply with the subpoenas, but it was not clear whether the organizations would do so. Spokespeople for Twitter and Meta did not immediately respond to requests for comment.

    Congressman Kevin McCarthy, the Republican House minority leader, who refused a request for cooperation from the select committee late on Wednesday, has previously threatened telecom and social media companies if they comply with the bipartisan panel’s investigation. “If these companies comply with the Democrat order to turn over private information, they are in violation of federal law,” McCarthy said in August. “A Republican majority will not forget and will stand with Americans to hold them fully accountable under the law.”

  • Facebook’s very bad year. No, really, it might be the worst yet

    From repeated accusations of fostering misinformation to multiple whistleblowers, the company weathered some battles in 2021.

    It’s a now-perennial headline: Facebook has had a very bad year. Years of mounting pressure from Congress and the public culminated in repeated PR crises, blockbuster whistleblower revelations and pending regulation over the past 12 months. And while the company’s bottom line has not yet wavered, 2022 is not looking to be any better than 2021, with more potential privacy and antitrust actions on the horizon. Here are some of the major battles Facebook has weathered in the past year.

    Capitol riots launch a deluge of scandals

    Facebook’s year started with allegations that a deadly insurrection at the US Capitol was largely planned on its platform. Regulatory uproar over the incident reverberated for months, leading lawmakers to call CEO Mark Zuckerberg before Congress to answer for his platform’s role in the attack.

    In the aftermath, Zuckerberg defended his decision not to take action against Donald Trump, though the former president stoked anger and separatist flames on his personal and campaign accounts. Facebook’s inaction led to a rare public employee walkout, and Zuckerberg later reversed the hands-off approach to Trump. Barring Trump from Facebook platforms sparked backlash once again – this time from Republican lawmakers alleging censorship.

    What ensued was a months-long back-and-forth between Facebook and its independent oversight board, with each entity punting the decision of whether to keep Trump off the platform. Ultimately, Facebook decided to extend Trump’s suspension to two years. Critics said this underscored the ineffectiveness of the body. “What is the point of the oversight board?” asked the Real Oversight Board, an activist group monitoring Facebook, after the non-verdict.

    Whistleblowers take on Facebook

    The scandal with perhaps the biggest impact on the company this year came in the form of the employee-turned-whistleblower Frances Haugen, who leaked internal documents that exposed some of the inner workings of Facebook and just how much the company knew about the harmful effects its platform was having on users and society. Haugen’s revelations, first reported by the Wall Street Journal, showed Facebook was aware of many of its grave public health impacts and had the means to mitigate them – but chose not to do so.

    For instance, documents show that since at least 2019, Facebook had studied the negative impact Instagram had on teenage girls, yet did little to mitigate the harms and publicly denied that was the case. Those findings in particular led Congress to summon company executives to multiple hearings on the platform and its teen users. Facebook has since paused its plans to launch an Instagram app for kids and introduced new safety measures encouraging users to take breaks if they use the app for long periods of time. In a Senate hearing on 8 December, the Instagram executive Adam Mosseri called on Congress to launch an independent body tasked with regulating social media more comprehensively, sidestepping calls for Instagram to regulate itself.

    Haugen also alleged that Facebook’s tweaks to its algorithm, which turned off some safeguards intended to fight misinformation, may have led to the Capitol attack, and she provided information underscoring how little of the company’s resources are dedicated to moderating non-English-language content. In response to the Haugen documents, Congress has promised legislation and drafted a handful of new bills to address Facebook’s power. One controversial measure would target Section 230, a portion of the Communications Decency Act that exempts companies from liability for content posted on their platforms.

    Haugen was not the only whistleblower to take on Facebook in 2021. In April, the former Facebook data scientist turned whistleblower Sophie Zhang revealed to the Guardian that Facebook repeatedly allowed world leaders and politicians to use its platform to deceive the public or harass opponents. Zhang has since been called to testify on these findings before parliaments in the UK and India.

    Lawmakers around the world are eager to hear from the Facebook whistleblowers. Haugen also testified in the UK regarding the documents she leaked, telling MPs Facebook “prioritizes profit over safety”. Such testimony is likely to influence impending legislation, including the Online Safety Bill: a proposed act in the UK that would task the communications authority Ofcom with regulating content online and require tech firms to protect users from harmful posts or face substantial fines.

    Zuckerberg and Cook feud over Apple update

    Though Apple has had its fair share of regulatory battles, Facebook did not find an ally in its fellow tech firm while facing down the onslaught of consumer and regulatory pressure that 2021 brought. The iPhone maker in April launched a new notification system to alert users when and how Facebook was tracking their browsing habits, supposedly as a means to give them more control over their privacy.

    Facebook objected to the new policy, arguing Apple was doing so to “self-preference their own services and targeted advertising products”. It said the feature would negatively affect small businesses relying on Facebook to advertise. Apple pressed on anyway, rolling it out in April and promising additional changes in 2022. Preliminary reports suggest Apple is, indeed, profiting from the change, while Google and Facebook have seen advertising profits fall.

    Global outage takes out all Facebook products

    In early October, just weeks after Haugen’s revelations, things took a sudden turn for the worse when the company faced a global service outage. Perhaps Facebook’s largest and most sustained tech failure in recent history, the glitch left billions of users unable to access Facebook, Instagram or WhatsApp for six hours on 4 and 5 October. Facebook’s share price dropped 4.9% that day, cutting Zuckerberg’s personal wealth by $6bn, according to Bloomberg.

    Other threats to Facebook

    As Facebook faces continuing calls for accountability, its time as the wunderkind of Silicon Valley has come to a close and it has become a subject of bipartisan contempt. Republicans have repeatedly accused Facebook of being biased against conservatism, while liberals have targeted the platform for its monopolistic tendencies and failure to police misinformation.

    In July, the Biden administration began to take a harder line with the company over vaccine misinformation, which Joe Biden said was “killing people” and the US surgeon general said was “spreading like wildfire” on the platform. Meanwhile, the appointment of the antitrust thought leader Lina Khan as head of the FTC spelled trouble for Facebook. She has been publicly critical of the company and other tech giants in the past, and in August refiled a failed FTC case accusing Facebook of anti-competitive practices.

    After a year of struggles, Facebook has thrown something of a Hail Mary: changing its name. The company announced it would now be called Meta, a reference to its new “metaverse” project, which will create a virtual environment where users can spend time. The name change was met with derision and skepticism from critics. But it remains to be seen whether Facebook, by any other name, will beat the reputation that precedes it.

  • Facebook revelations: what is in cache of internal documents?

    FacebookFacebook revelations: what is in cache of internal documents?Roundup of what we have learned after release of papers and whistleblower’s testimony to MPs Dan Milmo Global technology editorMon 25 Oct 2021 14.42 EDTLast modified on Mon 25 Oct 2021 16.04 EDTFacebook has been at the centre of a wave of damaging revelations after a whistleblower released tens of thousands of internal documents and testified about the company’s inner workings to US senators.Frances Haugen left Facebook in May with a cache of memos and research that have exposed the inner workings of the company and the impact its platforms have on users. The first stories based on those documents were published by the Wall Street Journal in September.Facebook whistleblower Frances Haugen calls for urgent external regulationRead moreHaugen gave further evidence about Facebook’s failure to act on harmful content in testimony to US senators on 5 October, in which she accused the company of putting “astronomical profits before people”. She also testified to MPs and peers in the UK on Monday, as a fresh wave of stories based on the documents was published by a consortium of news organisations.Facebook’s products – the eponymous platform, the Instagram photo-sharing app, Facebook Messenger and the WhatsApp messaging service – are used by 2.8 billion people a day and the company generated a net income – a US measure of profit – of $29bn (£21bn) last year.Here is what we have learned from the documents, and Haugen, since the revelations first broke last month.Teenage mental healthThe most damaging revelations focused on Instagram’s impact on the mental health and wellbeing of teenage girls. One piece of internal research showed that for teenage girls already having “hard moments”, one in three found Instagram made body issues worse. 
A further slide shows that one in three people who were finding social media use problematic found Instagram made it worse, with one in four saying it made issues with social comparison worse. Facebook described reports on the research, by the WSJ in September, as a "mischaracterisation" of its internal work. Nonetheless, the Instagram research has galvanised politicians on both sides of the Atlantic seeking to rein in Facebook.

Violence in developing countries

Haugen has warned that Facebook is fanning ethnic violence in countries including Ethiopia and is not doing enough to stop it. She said that 87% of Facebook's spending on combating misinformation goes to English-language content, even though only 9% of users are English speakers. According to the news site Politico on Monday, just 6% of Arabic-language hate content was detected on Instagram before it made its way on to the platform.

Haugen told Congress on 5 October that Facebook's use of engagement-based ranking – where the platform decides whether to put a piece of content in front of users based on the amount of interaction it gets from people – was endangering lives. "Facebook … knows, they have admitted in public, that engagement-based ranking is dangerous without integrity and security systems, but then not rolled out those integrity and security systems to most of the languages in the world. And that's what is causing things like ethnic violence in Ethiopia," she said.

Divisive algorithm changes

In 2018 Facebook changed the way it tailored content for users of its news feed feature, a key part of people's experience of the platform. The emphasis on boosting "meaningful social interactions" between friends and family meant that the feed leant towards reshared material, which was often misinformed and toxic. "Misinformation, toxicity and violent content are inordinately prevalent among reshares," said internal research.
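The engagement-based ranking Haugen describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea – posts ordered purely by predicted interactions, with no integrity check – and the names, weights and scoring formula are illustrative assumptions, not Facebook's actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    reshares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: reshares count most, since reshared content
    # spreads furthest -- the behaviour the 2018 news-feed change boosted.
    return post.likes + 2 * post.comments + 5 * post.reshares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement ranking: nothing here asks whether a post is
    # misinformation, which is the gap Haugen's testimony highlights.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm news report", likes=120, comments=10, reshares=3),
    Post("outrage-bait rumour", likes=40, comments=60, reshares=50),
])
print(feed[0].text)  # the heavily reshared rumour ranks first
```

Under any weighting of this shape, content that provokes reactions outranks content that merely gets read, which is why reshared, emotive material dominates such a feed.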
Facebook said it had an integrity team that was tackling the problematic content "as efficiently as possible".

Tackling falsehoods about the US presidential election

The New York Times reported that internal research showed how, at one point after the US presidential election last year, 10% of all US views of political material on Facebook – a very high proportion for the platform – were of posts alleging that Joe Biden's victory was fraudulent. One internal review criticised attempts to tackle "Stop the Steal" groups spreading claims that the election was rigged. "Enforcement was piecemeal," said the research. The revelations have reignited concerns about Facebook's role in the 6 January riots.

Facebook said: "The responsibility for the violence that occurred … lies with those who attacked our Capitol and those who encouraged them." However, the WSJ has also reported that Facebook's automated systems were taking down posts generating only an estimated 3-5% of total views of hate speech.

Disgruntled Facebook staff

Within the files disclosed by Haugen are testimonies from dozens of Facebook employees frustrated by the company's failure either to acknowledge the harms it generates or to properly support efforts to mitigate or prevent those harms. "We are FB, not some naive startup. With the unprecedented resources we have, we should do better," wrote one employee quoted by Politico in the wake of the 6 January attack on the US Capitol.

"Never forget the day Trump rode down the escalator in 2015, called for a ban on Muslims entering the US, we determined that it violated our policies, and yet we explicitly overrode the policy and didn't take the video down," wrote another.
"There is a straight line that can be drawn from that day to today, one of the darkest days in the history of democracy … History will not judge us kindly."

Facebook is struggling to recruit young users

A section of a complaint filed by Haugen's lawyers with the US financial watchdog refers to young users in "more developed economies" using Facebook less. This is a problem for a company that relies on advertising for its income, because young users, with unformed spending habits, can be lucrative to marketers. The complaint quotes an internal document stating that Facebook's daily teenage and young adult (18-24) users have "been in decline since 2012-13" and that "only users 25 and above are increasing their use of Facebook". Further research reveals that "engagement is declining for teens in most western, and several non-western, countries".

Haugen said engagement was a key metric for Facebook because it meant users spent longer on the platform, which in turn appealed to advertisers, who targeted users with adverts that accounted for $84bn (£62bn) of the company's $86bn annual revenue. On Monday, Bloomberg said "time spent" for US teenagers on Facebook was down 16% year-on-year, and that young adults in the US were also spending 5% less time on the platform.

Facebook is built for divisive content

On Monday the NYT reported an internal memo warning that Facebook's "core product mechanics", or its basic workings, had let hate speech and misinformation grow on the platform. The memo added that the basic functions of Facebook were "not neutral". "We also have compelling evidence that our core product mechanics, such as virality, recommendations and optimising for engagement, are a significant part of why these types of speech flourish on the platform," said the 2019 memo.

A Facebook spokesperson said: "At the heart of these stories is a premise which is false.
Yes, we are a business and we make profit, but the idea that we do so at the expense of people's safety or wellbeing misunderstands where our own commercial interests lie. The truth is we have invested $13bn and have over 40,000 people to do one job: keep people safe on Facebook."

Facebook avoids confrontations with US politicians and rightwing news organisations

A document seen by the Financial Times showed a Facebook employee claiming that Facebook's public policy team blocked decisions to take down posts "when they see that they could harm powerful political actors". The document said: "In multiple cases the final judgment about whether a prominent post violates a certain written policy are made by senior executives, sometimes Mark Zuckerberg." The memo said moves to take down content from repeat offenders against Facebook's guidelines, such as rightwing publishers, were often reversed because the publishers might retaliate.

The wave of stories on Monday was based on disclosures made to the Securities and Exchange Commission – the US financial watchdog – and provided to Congress in redacted form by Haugen's legal counsel. The redacted versions were obtained by a consortium of news organisations including the NYT, Politico and Bloomberg.


    Facebook boss ‘not willing to protect public from harm’

Frances Haugen says the chief executive has not shown any desire to shield users from the consequences of harmful content

Dan Milmo, The Observer
Sat 23 Oct 2021 21.02 EDT

The Facebook whistleblower whose revelations have tipped the social media giant into crisis has launched a stinging new criticism of Mark Zuckerberg, saying he has not shown any readiness to protect the public from the harm his company is causing.

Frances Haugen told the Observer that Facebook's founder and chief executive had not displayed a desire to run the company in a way that shields the public from the consequences of harmful content.

Her intervention came as pressure mounted on the near-$1tn (£730bn) business following a fresh wave of revelations based on documents leaked by Haugen, a former Facebook employee. The New York Times reported that workers had repeatedly warned that Facebook was being flooded with false claims that the 2020 presidential election result was fraudulent, and believed the company should have done more to tackle them.

Haugen, who appears before MPs and peers in Westminster on Monday, said Zuckerberg, who controls the business via a majority of its voting shares, has not shown any willingness to protect the public.

"Right now, Mark is unaccountable. He has all the control. He has no oversight, and he has not demonstrated that he is willing to govern the company at the level that is necessary for public safety."

She added that giving all shareholders an equal say in the running of the company would result in changes at the top. "I believe in shareholder rights and the shareholders, or shareholders minus Mark, have been asking for years for one share, one vote.
And the reason for that is, I am pretty sure the shareholders would choose other leadership if they had an option."

Haugen, who quit as a Facebook product manager in May, said she had leaked tens of thousands of documents to the Wall Street Journal and to Congress because she had realised that the company would not change otherwise.

She said: "There are great companies that have done major cultural changes. Apple did a major cultural change; Microsoft did a major cultural change. Facebook can change too. They just have to get the will."

This weekend, a consortium of US news organisations released a fresh wave of stories based on the Haugen documents. The New York Times reported that internal research showed how, at one point after the US presidential election last year, 10% of all US views of political material on Facebook – a very high proportion for the platform – were of posts falsely alleging that Joe Biden's victory was fraudulent. One internal review criticised attempts to tackle Stop the Steal groups spreading claims on the platform that the election was rigged. "Enforcement was piecemeal," said the research.

The revelations have reignited concerns about Facebook's role in the 6 January riots, in which a mob seeking to overturn the election result stormed the Capitol in Washington. The New York Times added that some of the reporting for the story was based on documents not released by Haugen.

A Facebook spokesperson said: "At the heart of these stories is a premise which is false. Yes, we're a business and we make profit, but the idea that we do so at the expense of people's safety or wellbeing misunderstands where our commercial interests lie.
The truth is we've invested $13bn and have over 40,000 people to do one job: keep people safe on Facebook."

Facebook's vice-president of integrity, Guy Rosen, said the company had put in place multiple measures to protect the public during and after the election, and that "responsibility for the [6 January] insurrection lies with those who broke the law during the attack and those who incited them".

It was also reported on Friday that a new Facebook whistleblower had come forward and, like Haugen, had filed a complaint to the Securities and Exchange Commission, the US financial regulator, alleging that the company declined to enforce safety rules for fear of angering Donald Trump or affecting Facebook's growth.

Haugen will testify in person on Monday to the joint committee scrutinising the draft online safety bill, which would impose a duty of care on social media companies to protect users from harmful content, and allow the communications regulator, Ofcom, to fine those who breach it. The maximum fine is 10% of global turnover, so in the case of Facebook this could run into billions of pounds. Facebook, whose services also include Instagram and WhatsApp, has 2.8 billion daily users and generated revenue of $86bn last year.

As well as issuing detailed rebuttals of Haugen's revelations, Facebook is reportedly planning a major change that would attempt to put some distance between the company and its main platform. Zuckerberg could announce a rebranding of Facebook's corporate identity on Thursday, according to a report that said the company is keen to emphasise its future as a player in the "metaverse", a digital world in which people interact and lead their social and professional lives virtually.

Haugen said Facebook must be compelled by all regulators to be more transparent with the information at its disposal internally, as detailed in her document leaks.
She said one key reform would be to set up a formal structure whereby regulators could demand reports from Facebook on any problem that they identify.

"Let's imagine there was a brand of car that was having five times as many car accidents as other cars. We wouldn't accept that car company saying, 'this is really hard, we are trying our best, we are sorry, we are trying to do better in the future'. We would never accept that as an answer and we are hearing that from Facebook all the time. There needs to be an avenue where we can escalate a concern and they actually have to give us a response."
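The online safety bill's penalty ceiling mentioned above is simple arithmetic. A minimal sketch, using the $86bn annual revenue figure cited in these articles as a stand-in for "global turnover" (the variable names are illustrative, not from the bill or Ofcom's methodology):

```python
# Draft online safety bill: maximum fine is 10% of global turnover.
# Using Facebook's reported $86bn annual revenue as the turnover figure.
annual_revenue_usd = 86_000_000_000
max_fine_usd = 0.10 * annual_revenue_usd
print(f"maximum fine: ${max_fine_usd / 1e9:.1f}bn")  # maximum fine: $8.6bn
```

Hence the article's "billions of pounds": even at 2021 exchange rates, a ceiling of roughly $8.6bn comfortably exceeds £6bn.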