More stories

  • in

    'Your business model is the problem': tech CEOs grilled over role in Capitol attack

    The CEOs of America’s biggest technology companies faced a grilling from Congress about the 6 January insurrection at the Capitol, as protesters outside the hearing denounced the platforms for playing a role in fueling the violence.

    Sundar Pichai of Google, Mark Zuckerberg of Facebook and Jack Dorsey of Twitter were called to testify on Thursday before two committees of the House of Representatives on social media’s role in promoting extremism and misinformation.

    Protesters who had gathered outside the Capitol building ahead of the hearing portrayed the tech executives as the violent insurrectionists whose images went viral in the days after the 6 January riots. One cutout erected on the grounds showed Zuckerberg as the “QAnon Shaman”, a part-time actor in a horned furry hat who participated in the riot.

    “The platforms’ inability to deal with the violence, hate and disinformation they promote on their platforms shows that these companies are failing to regulate themselves,” said Emma Ruby-Sachs, the executive director of SumofUs, the human rights organization behind the protests. “After the past five years of manipulation, data harvesting and surveillance, the time has come to rein in big tech.”

    Lawmakers opened the hearing with video testimonies criticizing the platforms for their role in the 6 January violence, as well as in the spread of medical misinformation about the Covid-19 vaccine.

    “You failed to meaningfully change after your platform has played a role in fomenting insurrection, abetting the spread of the virus and trampling American civil liberties,” said the Democratic representative Frank Pallone, the chair of the energy and commerce committee. “Your business model itself has become the problem and the time for self-regulation is over. It’s time we legislate to hold you accountable.”

    “You’re not passive bystanders – you are not non-profits or religious organizations that are trying to do a good job for humanity – you’re making money,” Pallone later said. “The point we’re trying to make today is that when you spread misinformation, when extremists are actively promoted and amplified, you do it because you make more money.”

    “The witnesses here today have demonstrated, time and time again, that self-regulation has not worked,” echoed Jan Schakowsky, a Democratic representative from Illinois. “They must be held accountable for allowing disinformation and misinformation to spread.”

    Meanwhile, Republican lawmakers quickly turned to the topic of “cancel culture” and perceived, but unproven, bias against conservatives on social media.

    In his opening statement, Facebook’s Zuckerberg argued that the tech companies should not be making the decisions about what is allowed online, and stressed Facebook’s efforts to combat misinformation and to spread accurate vaccine information. Google’s Pichai, too, sought to highlight his company’s role in connecting users with vaccine information and other Covid-19 resources.

    Thursday’s session was the latest in a record number of hearings for the big technology players in the past year, as executives have repeatedly been called to the Hill to testify on antitrust issues, misinformation and hate speech. The hearing, titled “Disinformation nation: social media’s role in promoting extremism and misinformation”, was held by the House of Representatives’ energy and commerce committee.

    Lawmakers repeatedly pressed the CEOs on how their platforms were tackling hate speech and misinformation more widely. The Democratic representative Doris Matsui, of California, raised the issue of anti-Asian hate speech and directly asked Dorsey and Zuckerberg what they were doing to address it. She also asked why they took so long to remove racist hashtags that blamed the coronavirus pandemic on Asian Americans, citing the recent attack on Asian women in Atlanta as a consequence of these policies.

    “The issues we are discussing here are not abstract,” she said. “They have real-world consequences and implications that are too often measured in human lives.” She also cited a study that showed a substantial rise in hate speech the week after Donald Trump first used the term “China flu” in a tweet.

    Dorsey countered that he would not ban the racist hashtags outright because “a lot of these hashtags contain counter speech”, or posts refuting the racism the hashtags initiated. Zuckerberg similarly said that hate speech policies at Facebook are “nuanced” and that the company has an obligation to protect free speech.

    Congressman Tony Cárdenas of California asked Zuckerberg how the company addresses the major problem of misinformation that targets Latino users, noting that studies have shown Facebook catches less false content in Spanish than in English. Zuckerberg responded that Facebook has an international factchecking program with workers in more than 80 countries speaking “a bunch of languages”, including Spanish. He also said Facebook translates accurate information about Covid-19 vaccines and other issues from English into a number of languages.

    Cárdenas cited the example of his Spanish-speaking mother-in-law, who said she did not want to get a vaccine because she heard on social media it would place a microchip in her arm. “For God’s sake, that to me is unbelievable, that she got that information on social media platforms,” he said. “Clearly Spanish language misinformation is an issue.”

  • in

    The Other Side of the Indian Farmers’ Protests

    In November 2020, the Friedrich Ebert Foundation published an article by Paul Nemitz and Matthias Pfeffer on the threat to digital sovereignty in Europe. They called attention to Europe’s need for “decentralised digital technologies”, which they see as essential for preserving “a flourishing medium-sized business sector, growing tax revenues, rising prosperity, a functioning democracy and rule of law.”

    The authors felt encouraged by the fact that the European Council was at last looking at challenging the US tech platforms that dominate global cyberspace: Google, Amazon, Facebook, Apple and Microsoft. Europe appears ready to draft laws that would impose targeted regulation strategies different from those that apply to “small and medium-sized actors, or sectoral actors generally.”

    There are multiple reasons for such a move, which will inevitably be attacked by the corporations as violating the sacrosanct principle of free trade. Nemitz and Pfeffer recognize the complexity of the implicit goal, to ensure “strategic autonomy while preserving an open economy.” Besides the threat to traditional businesses incapable of competing with the platforms, they cite the fact that “unregulated digitalisation of the public sphere has already endangered the systemic role of the media in two respects” to the extent that 80% of “online advertising revenues today flow to just two corporations: Google and Facebook.” This threatens the viability of “costly professional journalism that is vital for democracy.”

    Europe is struggling to find a solution. In the context of the farmers’ protests in India, the Joint Action Committee Against Foreign Retail and E-commerce (JACAFRE) recently took an emphatic stand on the same subject by publishing an open letter addressed to Prime Minister Narendra Modi. In this case, the designated culprits are the US powerhouses of retail commerce, Amazon and Walmart, but the authors include what they see as a Quisling Indian company: the mega-corporation, Reliance Industries.

    The giant conglomerate claims to be “committed to innovation-led, exponential growth in the areas of hydrocarbon exploration and production, petroleum refining and marketing, petrochemicals, retail and telecommunications.” JACAFRE suspects it may also be committed to the idea of monopolistic control. It complains that Reliance’s propensity for establishing partnerships with Facebook and Google is akin to letting the fox into the henhouse. This has less to do with the platforms’ direct action than with the coercive power that their ever-increasing possession and control of data represents. “If the new farm laws are closely examined,” JACAFRE’s authors claim, “it will be evident that unregulated digitalisation is a very important aspect of them.”

    Today’s Daily Devil’s Dictionary definition:

    Unregulated digitalization:

    A pandemic that grew slowly in the first two decades of the 21st century with the effect of undermining most human economic activities, personal relationships and even mental equilibrium

    Contextual Note

    Three years ago, Walmart purchased the Indian retailer Flipkart. Interviewed at the time, Parminder Jeet Singh, the executive director of IT for Change, complained that the data controlled by e-commerce companies is no longer limited to patterns of consumption but also extends to production and logistics. “They know everything, who needs it, when they need it, who should produce it, who should move it, when it should be moved, the complete control of the data of the whole system,” he said. That capacity is more than invasive. It is tantamount to omniscient and undetectable industrial spying combined with forms of social control that are potentially as powerful as China’s much decried social credit system.

    In 2018, Singh appeared to worry more about Walmart than Facebook or Amazon, because Walmart represents the physical economy. The day US companies dominate both the data and the physical resources of the Indian economy, Singh believes, it would be “game over” for Indian economic independence. He framed it in these terms: “If these two companies become a duopoly in the e-commerce sector, it’s actually a duopoly over the whole economy.”

    On the positive side, he insisted that, contrary to many other countries, India has the “digitally industrialized” culture that would allow it not only to resist the domination of a US-based global company but also to succeed in building a native equivalent. He viewed Flipkart before Walmart’s takeover as a successful Indian company that had no need of a monopolistic US company to ensure its future growth.

    Historical Note

    Fair Observer’s founder, CEO and editor-in-chief, Atul Singh, recently collaborated with analyst Manu Sharma on an article debunking the simplistic view shared across international media that persists in painting India’s protesting farmers as a David challenging a globalized Goliath insidiously promoted by Narendra Modi’s government. The Western media’s narrative puts the farmers in the role of resistance heroes against a new form of market-based tyranny.

    But as Singh and Sharma point out, this requires ignoring history and refusing to recognize the pressing need to move away from a “Soviet-inspired model” that ended up creating pockets of privilege and artificial dependence. These relics of India’s post-independence past became obstacles not only to productivity but to justice as well, to the extent that the existing system favored those who had learned to successfully exploit it.

    Singh and Sharma highlight the incoherence of a system that risks provoking deeper crises. Does that mean that Modi’s proposed reform is viable and without risk? The two authors acknowledge the very real fear farmers feel “that big private players will offer good money to farmers in the beginning, kill off their competition and then pay little for agricultural produce.” They realistically concede that, once in place, “India’s agricultural reforms will have intended and unintended consequences, both positive and negative.”

    But there may be more to the story. From JACAFRE’s perspective, the farmers’ instincts are correct. Their fear of the big players leveraging their clout in the traditional marketplace by exercising discretionary control of production and distribution becomes exponentially greater when considering that, thanks to their mastery of data, their control is not limited to the commodities themselves. It extends to all the data associated not only with the modes and means of production, but also with the channels of distribution and even habits of consumption. That explains why JACAFRE sees the 2018 takeover of Flipkart by Walmart as particularly foreboding.

    This dimension of the issue should also help us to understand why Prime Minister Modi has recently been playing cat and mouse with both Jeff Bezos of Amazon and Mark Zuckerberg of Facebook. At some point, the purely rhetorical game that even a mouse with a 56-inch chest can play while dodging the bite of a pair of voracious and muscular cats (Amazon and Walmart) has its limits. India is faced with a major quandary. It needs to accelerate its development of domestic resources in a manner that allows it to control the future economic consequences for its population but must, at the same time, look abroad for the investment that will fund such endeavors.

    In a recent article on foreign direct investment (FDI) and foreign portfolio investment (FPI) in India, Singh and Sharma noted that the recent flood of cash can be attributed to the fact that “corporations from the US and the Gulf have bought big stakes in Reliance Industries, India’s biggest conglomerate. They are also buying shares in Indian companies. In effect, they are betting on future growth.” The problem with all foreign investment is that while it is focused on growth, the growth that investors are targeting is the value of their own investment and its contribution to augmenting their global power. From the investors’ point of view, the growth of the Indian economy is at best only a side-effect. The case of Reliance in particular will need to be monitored.

    In December 2020, Reliance’s chairman, Mukesh Ambani, promised a “more equal India … with increased incomes, increased employment, and improved quality of life for 1 billion Indians at the middle and bottom of the economic pyramid” thanks to the achievement of a $5-trillion economy by 2025. While reminding readers that “Facebook and Google are already partnered with Reliance and own stakes in Jio Platforms,” the Deccan Herald reports that the three companies have joined hands again to “set up a national digital payment network.” The question some may be asking is this: When three partners occupy a central place in expanding Asia’s second-largest economy, who are the foxes and who are the hens?

    *[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of The Daily Devil’s Dictionary on Fair Observer.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • in

    Zuckerberg faces Capitol attack grilling as Biden signals tougher line on big tech

    Mark Zuckerberg, the head of Facebook, could be in for a rough ride on Thursday when he testifies to Congress for the first time about the 6 January insurrection at the Capitol in Washington DC, amid growing questions over his platform’s role in fuelling the violence.

    The testimony will come after signs that the new administration of Joe Biden is preparing to take a tougher line on the tech industry’s power, especially when it comes to the social media platforms and their role in spreading misinformation and conspiracy theories.

    Zuckerberg will be joined by Sundar Pichai and Jack Dorsey, the chief executives of Google and Twitter respectively, at a hearing pointedly entitled “Disinformation nation: social media’s role in promoting extremism and misinformation” by the House of Representatives’ energy and commerce committee.

    The scrutiny comes after a report found that Facebook allowed groups linked to the QAnon, boogaloo and militia movements to glorify violence during the 2020 election and the weeks leading up to the deadly mob violence at the US Capitol.

    Avaaz, a non-profit advocacy group, says it identified 267 pages and groups on Facebook that spread “violence-glorifying content” in the heat of the 2020 election to a combined following of 32 million users. More than two-thirds of the groups and pages had names aligned with several domestic extremist movements. The top 100 most popular false or misleading stories on Facebook related to the elections received an estimated 162m views, the report found. Avaaz called on the White House and Congress to open an investigation into Facebook’s failures and urgently pass legislation to protect American democracy.

    Fadi Quran, its campaign director, said: “This report shows that American voters were pummeled with false and misleading information on Facebook every step of the 2020 election cycle. We have over a year’s worth of evidence that the platform helped drive billions of views to pages and content that confused voters, created division and chaos, and, in some instances, incited violence.

    “But the most worrying finding in our analysis is that Facebook had the tools and capacity to better protect voters from being targets of this content, but the platform only used them at the very last moment, after significant harm was done.”

    Facebook claimed that Avaaz had used flawed methodology. Andy Stone, a spokesperson, said: “We’ve done more than any other internet company to combat harmful content, having already banned nearly 900 militarized social movements and removed tens of thousands of QAnon pages, groups and accounts from our apps.” He acknowledged: “Our enforcement isn’t perfect, which is why we’re always improving it while also working with outside experts to make sure that our policies remain in the right place.”

    But the report is likely to prompt tough questions for Zuckerberg in what is part of a wider showdown between Washington and Silicon Valley. Another flashpoint on Thursday could be Section 230 of the 1996 Communications Decency Act, which shields social media companies from liability for content their users post.

    Repealing the law is one of the few things on which Biden and his predecessor as president, Donald Trump, agree, though for different reasons. Democrats are concerned that Section 230 allows disinformation and conspiracy theories such as QAnon to flourish, while Trump and other Republicans have argued that it protects companies from consequences for censoring conservative voices.

    More generally, critics say that tech companies are too big and that the coronavirus pandemic has only increased their dominance. The cosy relationship between Barack Obama’s administration and Silicon Valley is a thing of the past, while libertarian Republicans who oppose government interference are a fading force.

    Amazon, Apple, Facebook and Google have all come under scrutiny from Congress and regulators in recent years. The justice department, the Federal Trade Commission (FTC) and state attorneys general are suing the behemoths over various alleged antitrust violations.

    In a letter this week to Biden and Merrick Garland, the new attorney general, a coalition of 29 progressive groups wrote: “It’s clear that the ability of Big Tech giants like Google to acquire monopoly power has been abetted by the leadership deficit at top enforcement agencies such as the FTC … We need a break from past, failed leadership, and we need it now.”

    There are signs that Biden is heeding such calls and spoiling for a confrontation. On Monday he nominated Lina Khan, an antitrust scholar who wants stricter regulation of internet companies, to the FTC. Earlier this month Tim Wu, a Columbia University law professor among the most outspoken critics of big tech, was appointed to the national economic council.

    There is support in Congress from the likes of David Cicilline, chairman of the House judiciary committee’s antitrust panel, which last year released a 449-page report detailing abuses of market power by Apple, Amazon, Google and Facebook. The Democratic congressman is reportedly poised to issue at least 10 legislative initiatives targeting big tech, a blitz that will make it harder for the companies and their lobbyists to focus their opposition on a single piece of legislation.

    Cicilline, who is also working on a separate bill targeting Section 230, told the Axios website: “My strategy is you’ll see a number of bills introduced, both because it’s harder for [the tech companies] to manage and oppose, you know, 10 bills as opposed to one. It also is an opportunity for members of the committee who have expressed a real interest or enthusiasm about a particular issue, to sort of take that on and champion it.”

  • in

    Rightwing 'super-spreader': study finds handful of accounts spread bulk of election misinformation

    A handful of rightwing “super-spreaders” on social media were responsible for the bulk of election misinformation in the run-up to the Capitol attack, according to a new study that also sheds light on the staggering reach of falsehoods pushed by Donald Trump.

    A report from the Election Integrity Partnership (EIP), a group that includes Stanford and the University of Washington, analyzed social media platforms including Facebook, Twitter, Instagram, YouTube and TikTok during several months before and after the 2020 elections. It found that “super-spreaders” – responsible for the most frequent and most impactful misinformation campaigns – included Trump and his two elder sons, as well as other members of the Trump administration and the rightwing media.

    The study’s authors and other researchers say the findings underscore the need to disable such accounts to stop the spread of misinformation. “If there is a limit to how much content moderators can tackle, have them focus on reducing harm by eliminating the most effective spreaders of misinformation,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news but was not involved in the EIP report. “Rather than trying to enforce the rules equally across all users, focus enforcement on the most powerful accounts.”

    The report analyzed social media posts featuring words like “election” and “voting” to track key misinformation narratives related to the 2020 election, including claims of mail carriers throwing away ballots, legitimate ballots strategically not being counted, and other false or unproven stories. The report studied how these narratives developed and the effect they had. It found that during this time period, popular rightwing Twitter accounts “transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud”.

    Ultimately, the “false claims and narratives coalesced into the meta-narrative of a ‘stolen election’, which later propelled the January 6 insurrection”, the report said. “The 2020 election demonstrated that actors – both foreign and domestic – remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy,” the authors concluded.

    Next to no factchecking, with Trump as the super-spreader-in-chief

    In monitoring Twitter, the researchers analyzed more than 22 million tweets sent between 15 August and 12 December. The study determined which accounts were most influential by the size and speed with which they spread misinformation. “Influential accounts on the political right rarely engaged in factchecking behavior, and were responsible for the most widely spread incidents of false or misleading information in our dataset,” the report said.

    Out of the 21 top offenders, 15 were verified Twitter accounts – which are particularly dangerous when it comes to election misinformation, the study said. The “repeat spreaders” responsible for the most widely spread misinformation included Eric Trump, Donald Trump, Donald Trump Jr and influencers like James O’Keefe, Tim Pool, Elijah Riot and Sidney Powell. All 21 of the top accounts for misinformation leaned rightwing, the study showed.

    “Top-down mis- and disinformation is dangerous because of the speed at which it can spread,” the report said. “If a social media influencer with millions of followers shares a narrative, it can garner hundreds of thousands of engagements and shares before a social media platform or factchecker has time to review its content.”

    On nearly all the platforms analyzed in the study – including Facebook, Twitter and YouTube – Donald Trump played a massive role. The report pinpointed 21 incidents in which a tweet from Trump’s official @realDonaldTrump account jumpstarted the spread of a false narrative across Twitter. For example, Trump’s tweets baselessly claiming that the voting equipment manufacturer Dominion Voting Systems was responsible for election fraud played a large role in amplifying the conspiracy theory to a wider audience. False or baseless tweets sent by Trump’s account – which had 88.9m followers at the time – garnered more than 460,000 retweets.

    Meanwhile, Trump’s YouTube channel was linked to six distinct waves of misinformation that, combined, were the most viewed of any repeat-spreader’s videos. His Facebook account had the most engagement of all those studied.

    The Election Integrity Partnership study is not the first to show the massive influence Trump’s social media accounts have had on the spread of misinformation. In one year – between 1 January 2020 and 6 January 2021 – Donald Trump pushed disinformation in more than 1,400 Facebook posts, a report from Media Matters for America released in February found. Trump was ultimately suspended from the platform in January, and Facebook is debating whether he will ever be allowed back. Specifically, 516 of his posts contained disinformation about Covid-19, 368 contained election disinformation, and 683 contained harmful rhetoric attacking his political enemies. Allegations of election fraud earned over 149.4 million interactions, or an average of 412,000 interactions per post, and accounted for 16% of interactions on his posts in 2020.

    Trump had a unique ability to amplify news stories that would otherwise have remained contained in smaller outlets and subgroups, said Matt Gertz of Media Matters for America. “What Trump did was take misinformation from the rightwing ecosystem and turn it into a mainstream news event that affected everyone,” he said. “He was able to take these absurd lies and conspiracy theories and turn them into national news. And if you do that, and inflame people often enough, you will end up with what we saw on January 6.”

    Effects of false election narratives on voters

    “Super-spreader” accounts were ultimately very successful in undermining voters’ trust in the democratic system, the report found. Citing a poll by the Pew Research Center, the study said that, of the 54% of people who voted in person, approximately half had cited concerns about voting by mail, and only 30% of respondents were “very confident” that absentee or mail-in ballots had been counted as intended.

    The report outlined a number of recommendations, including removing “super-spreader” accounts entirely. Outside experts agree that tech companies should more closely scrutinize top accounts and repeat offenders. Researchers said the refusal to take action or establish clear rules for when action should be taken helped fuel the prevalence of misinformation. For example, only YouTube had a publicly stated “three-strike” system for offenses related to the election. Platforms like Facebook reportedly had three-strike rules as well but did not make the system publicly known.

    Only four of the top 20 Twitter accounts cited as top spreaders were actually removed, the study showed – including Donald Trump’s in January. Twitter has maintained that its ban of the former president is permanent. YouTube’s chief executive officer stated this week that Trump would be reinstated on the platform once the “risk of violence” from his posts passes. Facebook’s independent oversight board is now considering whether to allow Trump to return.

    “We have seen that he uses his accounts as a way to weaponize disinformation. It has already led to riots at the US Capitol; I don’t know why you would give him the opportunity to do that again,” Gertz said. “It would be a huge mistake to allow Trump to return.”

  • in

    Optimizing for outrage: ex-Obama digital chief urges curbs on big tech

    A former digital strategist for Barack Obama has demanded an end to big tech’s profit-driven optimization of outrage and called for regulators to curb online disinformation and division.
    Michael Slaby – author of a new book, For All the People: Redeeming the Broken Promises of Modern Media and Reclaiming Our Civic Life – described tech giants Facebook and Google as “two gorillas” crushing the very creativity needed to combat conspiracy theories spread by former US president Donald Trump and others.
    “The systems are not broken,” Slaby, 43, told the Guardian by phone from his home in Rhinebeck, New York. “They are working exactly as they were designed for the benefit of their designers. They can be designed differently. We can express and encourage a different set of public values about the public goods that we need from our public sphere.”
    Facebook has almost 2.8 billion global monthly active users with a total of 3.3 billion using any of the company’s core products – Facebook, WhatsApp, Instagram and Messenger – on a monthly basis. Its revenue in the fourth quarter of last year was $28bn, up 33% from a year earlier, and profits climbed 53% to $11.2bn.
    But the social network founded by Mark Zuckerberg stands accused of poisoning the information well. Critics say it polarises users and allows hate speech and conspiracy theories to thrive, and that people who join extremist groups are often directed to them by the platform’s algorithm. The use of Facebook by Trump supporters involved in the 6 January insurrection at the US Capitol has drawn particular scrutiny.
    Slaby believes Facebook and Twitter were too slow to remove Trump from their platforms. “This is where I think they hide behind arguments like the first amendment,” he said. “The first amendment is about government suppression of speech; it doesn’t have anything to do with your access to Facebook.”

  • in

    'It let white supremacists organize': the toxic legacy of Facebook's Groups

    Mark Zuckerberg, the Facebook CEO, announced last week that the platform will no longer algorithmically recommend political groups to users, in an attempt to “turn down the temperature” on online divisiveness.

    But experts say such policies are difficult to enforce, much less quantify, and that the toxic legacy of the Groups feature, and the algorithmic incentives promoting it, will be difficult to erase.

    “This is like putting a Band-Aid on a gaping wound,” said Jessica J González, the co-founder of the anti-hate speech group Change the Terms. “It doesn’t do enough to combat the long history of abuse that’s been allowed to fester on Facebook.”

    Groups – a place to create ‘meaningful social infrastructure’

    Facebook launched Groups, a feature that allows people with shared interests to communicate in closed forums, in 2010, but began a more concerted effort to promote the feature around 2017, after the Cambridge Analytica scandal cast a shadow on the platform’s Newsfeed.

    In a long blogpost in February 2017 called Building Global Community, Zuckerberg argued there was “a real opportunity” through groups to create “meaningful social infrastructure in our lives”.

    He added: “More than one billion people are active members of Facebook groups, but most don’t seek out groups on their own – friends send invites or Facebook suggests them. If we can improve our suggestions and help connect one billion people with meaningful communities, that can strengthen our social fabric.”

    After growing its group suggestions and advertising the feature extensively – including during a 60-second spot in the 2020 Super Bowl – Facebook did see a rise in use. In February 2017 there were 100 million people on the platform in groups they considered “meaningful”. Today, that number is up to more than 600 million.

    That fast rise, however, came with little oversight and proved messy.
    In shifting its focus to Groups, Facebook began to rely more heavily on unpaid moderators to police hate speech on the platform. Groups proved a more private place to speak, for conspiracy theories to proliferate and for some users to organize real-life violence – all with little oversight from outside experts or moderators.

    Facebook in 2020 introduced a number of new rules to “keep Facebook groups safe”, including new consequences for individuals who violate rules and increased responsibility for the admins of groups to keep users in line. The company says it has hired 35,000 people to address safety on Facebook, including engineers, moderators and subject matter experts, and has invested in AI technology to spot posts that violate its guidelines.

    “We apply the same rules to Groups that we apply to every other form of content across the platform,” a Facebook company spokesperson said. “When we find Groups breaking our rules we take action – from reducing their reach to removing them from recommendations, to taking them down entirely. Over the years we have invested in new tools and AI to find and remove harmful content and developed new policies to combat threats and abuse.”

    Researchers have long complained that little is shared publicly about how, exactly, Facebook’s algorithms work, what is being shared privately on the platform, and what information Facebook collects on users. The increased popularity of Groups made it even more difficult to keep track of activity on the platform.

    “It is a black box,” said González of Facebook’s policy on Groups. “This is why many of us have been calling for years for greater transparency about their content moderation and enforcement standards.”

    Meanwhile, the platform’s algorithmic recommendations sucked users further down the rabbit hole.
    Little is known about exactly how Facebook’s algorithms work, but it is clear the platform recommends that users join groups similar to the ones they are already in, based on keywords and shared interests. Facebook’s own researchers found that “64% of all extremist group joins are due to our recommendation tools”, according to an internal report from 2016.

    “Facebook has let white supremacists and conspiracy theorists organize all over its platform and has failed to contain that problem,” González said. “In fact it has significantly contributed to the spread of that problem through its recommendation system.”

    ‘We need to do something to stop these conversations’

    Facebook’s own research showed that algorithmic recommendations of groups may have contributed to the rise of violence and extremism. On Sunday, the Wall Street Journal reported that internal documents showed executives were aware of the risks posed by groups and were repeatedly warned by researchers to address them. In one presentation in August 2020, researchers said roughly “70% of the top 100 most active US Civic Groups are considered non-recommendable for issues such as hate, misinfo, bullying and harassment”.

    “We need to do something to stop these conversations from happening and growing as quickly as they do,” the researchers wrote, according to the Wall Street Journal, suggesting measures to slow the growth of Groups until more could be done to address the issues.

    Several months later, Facebook halted algorithmic recommendations for political groups ahead of the US elections – a move that has been extended indefinitely with the policy announced last week.
    The change seemed to be motivated by the 6 January insurrection, which the FBI found had been tied to organizing on Facebook.

    In response to the story in the Wall Street Journal, Guy Rosen, Facebook’s vice-president of integrity, who oversees content moderation policies on the platform, said the problems were indicative of emerging threats rather than an inability to address long-term problems. “If you’d have looked at Groups several years ago, you might not have seen the same set of behaviors,” he said.

    But researchers say the use of Groups to organize and radicalize users is an old problem. Facebook groups had been tied to a number of harmful incidents and movements long before January’s violence.

    “Political groups on Facebook have always advantaged the fringe, and the outsiders,” said Joan Donovan, a lead researcher at Data and Society who studies the rise of hate speech on Facebook. “It’s really about reinforcement – the algorithm learns what you’ve clicked on and what you like and it tries to reinforce those behaviors. The groups become centers of coordination.”

    Facebook was criticized for its inability to police terror groups such as Islamic State and al-Qaida using the platform as early as 2016. It was used extensively in the organizing of the Unite the Right rally in Charlottesville in 2017, where white nationalists and neo-Nazis violently marched. Militarized groups including the Proud Boys, Boogaloo Bois and militia groups have all organized, promoted and grown their ranks on Facebook. In 2020, officials arrested men who had planned on Facebook a violent kidnapping of the Michigan governor, Gretchen Whitmer.
    A 17-year-old from Illinois shot three people, killing two, at a protest organized on Facebook.

    These same algorithms have allowed the anti-vaccine movement to thrive on Facebook, with hundreds of groups amassing hundreds of thousands of members over the years. A Guardian report in 2019 found the majority of search results for the term “vaccination” were anti-vaccine, led by two misinformation groups, “Stop Mandatory Vaccination” and “Vaccination Re-education Discussion Forum”, each with more than 140,000 members. These groups were ultimately tied to harassment campaigns against doctors who support vaccines.

    In September 2020, Facebook stopped health groups from being algorithmically recommended in an effort to curb such misinformation. It has also added other rules to stop the spread of misinformation, including banning users from creating a new group if an existing group they administered has been banned.

    The origin of the QAnon movement has been traced to a post on a message board in 2017. By the time Facebook banned content related to the movement in 2020, a Guardian report had exposed that Facebook groups dedicated to the dangerous QAnon conspiracy theory were spreading on the platform at a rapid pace, with thousands of groups and millions of members.

    ‘The calm before the storm’

    Zuckerberg said in 2020 that the company had removed more than 1m groups in the previous year, but experts say that action, coupled with the new policy on group recommendations, falls short.

    The platform promised to stop recommending political groups to users ahead of the elections in November, and then victoriously claimed to have halved political group recommendations.
    But a report from the Markup showed that 12 of the top 100 groups recommended to users in its Citizen Browser project, which tracks links and group recommendations served to a nationwide panel of Facebook users, were political in nature.

    Indeed, the Stop the Steal groups that emerged to cast doubt on the results of the election, and that ultimately led to the violent insurrection of 6 January, amassed hundreds of thousands of followers – all while Facebook’s algorithmic recommendations of political groups were paused. Many researchers also worry that legitimate organizing groups will be swept up in Facebook’s actions against partisan political groups and extremism.

    “I don’t have a whole lot of confidence that they’re going to be able to actually sort out what a political group is or isn’t,” said Heidi Beirich, co-founder of the Global Project Against Hate and Extremism, who sits on the Real Facebook Oversight Board, a group of academics and watchdogs criticizing Facebook’s content moderation policies.

    “They have allowed QAnon, militias and other groups to proliferate for so long that remnants of these movements remain all over the platform,” she added. “I don’t think this is something they are going to be able to sort out overnight.”

    “It doesn’t actually take a mass movement, or a massive sea of bodies, to do the kind of work on the internet that allows for small groups to have an outsized impact on the public conversation,” added Donovan. “This is the calm before the storm.”

  • in

    Claim of anti-conservative bias by social media firms is baseless, report finds

    Republicans including Donald Trump have raged against Twitter and Facebook in recent months, alleging anti-conservative bias, censorship and a silencing of free speech. According to a new report from New York University, none of that is true.

    Disinformation expert Paul Barrett and researcher J Grant Sims found that, far from suppressing conservatives, social media platforms have, through their algorithms, amplified rightwing voices, “often affording conservatives greater reach than liberal or nonpartisan content creators”.

    Barrett and Sims’s report comes as Republicans step up their campaign against social media companies. Conservatives have long complained that platforms such as Twitter, Facebook and YouTube are biased against the right, complaints that intensified when Trump was banned from all three platforms for inciting the attack on the US Capitol, which left five people dead.

    The NYU study, released by the Stern Center for Business and Human Rights, found that the claim of anti-conservative bias “is itself a form of disinformation: a falsehood with no reliable evidence to support it”.

    “There is no evidence to support the claim that the major social media companies are suppressing, censoring or otherwise discriminating against conservatives on their platforms,” Barrett said.
    “In fact, it is often conservatives who gain the most in terms of engagement and online attention, thanks to the platforms’ systems of algorithmic promotion of content.”

    The report found that Twitter, Facebook and other companies did not show bias when deleting incendiary tweets around the Capitol attack, as some on the right have claimed.

    Prominent conservatives including Ted Cruz, the Texas senator, have sought to crack down on big tech companies, claiming to be victims of a suppression that Barrett and Sims found does not exist.

    The researchers did outline problems social media companies face when accused of bias, and recommended a series of measures.

    “What is needed is a robust reform agenda that addresses the very real problems of social media content regulation as it currently exists,” Barrett said. “Only by moving forward from these false claims can we begin to pursue that agenda in earnest.”

    A 2020 study by the Pew Research Center reported that a majority of Americans believe social media companies censor political views. Pew found that 90% of Republicans believed views were being censored, and 69% of Republicans or Republican-leaning people believed social media companies “generally support the views of liberals over conservatives”.

    Republicans including Trump have pushed to repeal section 230 of the Communications Decency Act, which protects social media companies from legal liability for content posted by their users, claiming it allows platforms to suppress conservative voices.

    The NYU report suggests section 230 should be amended, with companies persuaded to “accept a range of new responsibilities related to policing content” or risk losing liability protections.

  • in

    Big tech facilitated QAnon and the Capitol attack. It’s time to hold them accountable

    Donald Trump’s election lies and the 6 January attack on the US Capitol have highlighted how big tech has led our society down a path of conspiracies and radicalism by ignoring the mounting evidence that its products are dangerous.

    But the spread of deadly misinformation on a global scale was enabled by the federal government’s failure to use antitrust enforcement to rein in out-of-control monopolies such as Facebook and Google. And there is a real risk the social media giants could sidestep accountability once again.

    Trump’s insistence that he won the election was an attack on democracy that culminated in the attack on the US Capitol. The events were as much the fault of Sundar Pichai, Jack Dorsey and Mark Zuckerberg – the CEOs of Google, Twitter and Facebook, respectively – as they were the fault of Trump and his cadre of co-conspirators.

    During the early days of social media, no service operated at the scale of today’s Goliaths. Adoption was limited and online communities lived in small and isolated pockets. When the Egyptian uprisings of 2011 proved the power of these services, the US state department became their cheerleader, offering them a veneer of exceptionalism that would protect them from scrutiny as they grew exponentially.

    Later, dictators and anti-democratic actors would study and co-opt these tools for their own purposes. As the megaphones got larger, the voices of bad actors got louder. As the networks got bigger, the feedback loop amplifying those voices became stronger. It is unimaginable that QAnon could have gained a mass following without the tech companies’ dangerous indifference.

    Eventually, these platforms became immune to the forces of competition in the marketplace – they became information monopolies with runaway scale. Absent any accountability from watchdogs or the marketplace, fringe conspiracy theories enjoyed unchecked propagation.
    We can trace networked conspiracies from birtherism to QAnon as straight lines through the same coterie of misinformers who came to power alongside Trump.

    Today, most global internet activity happens on services owned by either Facebook or Alphabet, which includes YouTube and Google. The internet has calcified into a pair of monopolies that protect their size by optimizing to maximize “engagement”. Sadly, algorithms designed to increase dependency and usage are far more profitable than ones that would encourage timely, local, relevant and, most importantly, accurate information. The truth, in a word, is boring. Facts rarely animate the kind of compulsive engagement rewarded by recommendation and search algorithms.

    The best tool – if not the only tool – to hold big tech accountable is antitrust enforcement: enforcing the existing antitrust laws designed to rein in companies’ influence over other political, economic and social institutions.

    Antitrust enforcement has historically been the US government’s greatest weapon against such firms. From breaking up the trusts at the start of the 20th century to the present day, antitrust enforcement spurs competition and ingenuity while re-empowering citizens. Most antitrust historians agree that absent US v Microsoft in 1998, which stopped Microsoft from bundling products and effectively killing off other browsers, the modern internet would have been strangled in the crib.

    Ironically, Google and Facebook were the beneficiaries of such enforcement. Yet more than two decades would pass before US authorities brought antitrust suits against Google and Facebook last year. Until then, antitrust had languished as a tool to counterbalance abusive monopolies.
    Big tech sees an existential threat in the renewed calls for antitrust enforcement, and these companies have lobbied aggressively to ensure that key vacancies in the Biden administration are filled by their friends.

    The Democratic party is especially vulnerable to soft capture by these tech firms. Big tech executives are mostly left-leaning and donate millions to progressive causes while spouting feelgood rhetoric about inclusion and connectivity. During the Obama administration, Google and Facebook were treated as exceptional, avoiding any meaningful regulatory scrutiny. Democratic Senate leadership, specifically Senator Chuck Schumer, has recently signaled that it will treat these companies with kid gloves.

    The Biden administration cannot repeat the Obama legacy of installing big tech-friendly individuals in these critical but often under-the-radar roles. The new administration, in consultation with Schumer, will be tasked with appointing a new assistant attorney general for antitrust at the Department of Justice and up to three members of the Federal Trade Commission. Figures friendly to big tech in those positions could abruptly settle the pending litigation against Google or Facebook.

    President Joe Biden and Schumer must reject any candidate who has worked in the service of big tech. Any former White House or congressional personnel who gave these companies a pass during the Obama administration should also be disqualified from consideration. Allowing big tech’s lawyers and plants to run the antitrust agencies would be the equivalent of letting a climate-change-denying big oil executive run the Environmental Protection Agency.

    The public is beginning to recognize the harms to society wrought by big tech, and a vibrant, bipartisan anti-monopoly movement of diverse scholars and activists has risen over the past few years.
    Two-thirds of Democratic voters believe, along with a majority of Republicans, that Biden should “refuse to appoint executives, lobbyists, or lawyers for these companies to positions of power or influence in his administration while this legal activity is pending”. This gives the Democratic party an opportunity to do the right thing for the country and attract new voters by fighting for the web we want.

    Big tech played a central role in the dangerous attack on the US Capitol and in all the events that led up to it. Biden’s antitrust appointees will decide whether there are any consequences to be paid.