More stories

  • ‘It let white supremacists organize’: the toxic legacy of Facebook’s Groups

    Mark Zuckerberg, the Facebook CEO, announced last week that the platform will no longer algorithmically recommend political groups to users, in an attempt to “turn down the temperature” on online divisiveness.

    But experts say such policies are difficult to enforce, much less quantify, and the toxic legacy of the Groups feature and the algorithmic incentives promoting it will be difficult to erase.

    “This is like putting a Band-Aid on a gaping wound,” said Jessica J González, the co-founder of the anti-hate speech group Change the Terms. “It doesn’t do enough to combat the long history of abuse that’s been allowed to fester on Facebook.”

    Groups – a place to create ‘meaningful social infrastructure’

    Facebook launched Groups, a feature that allows people with shared interests to communicate on closed forums, in 2010, but began to make a more concerted effort to promote the feature around 2017, after the Cambridge Analytica scandal cast a shadow on the platform’s Newsfeed.

    In a long blogpost from February 2017 called Building Global Community, Zuckerberg argued there was “a real opportunity” through groups to create “meaningful social infrastructure in our lives”.

    He added: “More than one billion people are active members of Facebook groups, but most don’t seek out groups on their own – friends send invites or Facebook suggests them. If we can improve our suggestions and help connect one billion people with meaningful communities, that can strengthen our social fabric.”

    After growing its group suggestions and advertising the feature extensively – including during a 60-second spot in the 2020 Super Bowl – Facebook did see a rise in use. In February 2017 there were 100 million people on the platform who were in groups they considered “meaningful”. Today, that number is up to more than 600 million.

    That fast rise, however, came with little oversight and proved messy. In shifting its focus to Groups, Facebook began to rely more heavily on unpaid moderators to police hate speech on the platform. Groups proved a more private place to speak, for conspiracy theories to proliferate and for some users to organize real-life violence – all with little oversight from outside experts or moderators.

    Facebook in 2020 introduced a number of new rules to “keep Facebook groups safe”, including new consequences for individuals who violate rules and increased responsibility given to admins of groups to keep users in line. The company says it has hired 35,000 people to address safety on Facebook, including engineers, moderators and subject matter experts, and has invested in AI technology to spot posts that violate its guidelines.

    “We apply the same rules to Groups that we apply to every other form of content across the platform,” a Facebook company spokesperson said. “When we find Groups breaking our rules we take action – from reducing their reach to removing them from recommendations, to taking them down entirely. Over the years we have invested in new tools and AI to find and remove harmful content and developed new policies to combat threats and abuse.”

    Researchers have long complained that little is shared publicly about how, exactly, Facebook’s algorithms work, what is being shared privately on the platform, and what information Facebook collects on users. The increased popularity of Groups made it even more difficult to keep track of activity on the platform.

    “It is a black box,” said González of Facebook’s policy on Groups.
    “This is why many of us have been calling for years for greater transparency about their content moderation and enforcement standards.”

    Meanwhile, the platform’s algorithmic recommendations sucked users further down the rabbit hole. Little is known about exactly how Facebook’s algorithms work, but it is clear the platform recommends that users join groups similar to the ones they are already in, based on keywords and shared interests. According to an internal report from 2016, Facebook’s own researchers found that “64% of all extremist group joins are due to our recommendation tools”.

    “Facebook has let white supremacists organize and conspiracy theorists organize all over its platform and has failed to contain that problem,” González said. “In fact it has significantly contributed to the spread of that problem through its recommendation system.”

    ‘We need to do something to stop these conversations’

    Facebook’s own research showed that algorithmic recommendations of groups may have contributed to the rise of violence and extremism. On Sunday, the Wall Street Journal reported that internal documents showed executives were aware of the risks posed by groups and were warned repeatedly by researchers to address them. In one presentation from August 2020, researchers said roughly “70% of the top 100 most active US Civic Groups are considered non-recommendable for issues such as hate, misinfo, bullying and harassment”.

    “We need to do something to stop these conversations from happening and growing as quickly as they do,” the researchers wrote, according to the Wall Street Journal, and suggested taking measures to slow the growth of Groups until more could be done to address the issues.

    Several months later, Facebook halted algorithmic recommendations for political groups ahead of the US elections – a move that has been extended indefinitely with the policy announced last week. The decision to extend it appeared to be motivated by the 6 January insurrection, which the FBI found had been tied to organizing on Facebook.

    In response to the story in the Wall Street Journal, Guy Rosen, Facebook’s vice-president of integrity, who oversees content moderation policies on the platform, said the problems were indicative of emerging threats rather than an inability to address long-term ones. “If you’d have looked at Groups several years ago, you might not have seen the same set of behaviors,” he said.

    But researchers say the use of Groups to organize and radicalize users is an old problem. Facebook groups had been tied to a number of harmful incidents and movements long before January’s violence.

    “Political groups on Facebook have always advantaged the fringe, and the outsiders,” said Joan Donovan, a lead researcher at Data and Society who studies the rise of hate speech on Facebook. “It’s really about reinforcement – the algorithm learns what you’ve clicked on and what you like and it tries to reinforce those behaviors. The groups become centers of coordination.”

    Facebook was criticized for its inability to police terror groups such as the Islamic State and al-Qaida using it as early as 2016. It was used extensively in organizing the Unite the Right rally in Charlottesville in 2017, where white nationalists and neo-Nazis violently marched. Militarized groups including the Proud Boys, Boogaloo Bois and militia groups all organized, promoted and grew their ranks on Facebook.
    In 2020 officials arrested men who had used Facebook to plan a violent kidnapping of the Michigan governor, Gretchen Whitmer. A 17-year-old in Illinois shot three people, killing two, at a protest organized on Facebook.

    These same algorithms have allowed the anti-vaccine movement to thrive on Facebook, with hundreds of groups amassing hundreds of thousands of members over the years. A Guardian report in 2019 found the majority of search results for the term “vaccination” were anti-vaccine, led by two misinformation groups, “Stop Mandatory Vaccination” and “Vaccination Re-education Discussion Forum”, with more than 140,000 members each. These groups were ultimately tied to harassment campaigns against doctors who support vaccines.

    In September 2020, Facebook stopped health groups from being algorithmically recommended in an effort to curb such misinformation. It has also added other rules to stop the spread of misinformation, including banning users from creating a new group if an existing group they had administrated is banned.

    The origin of the QAnon movement has been traced to a post on a message board in 2017. By the time Facebook banned content related to the movement in 2020, a Guardian report had exposed that Facebook groups dedicated to the dangerous conspiracy theory were spreading on the platform at a rapid pace, with thousands of groups and millions of members.

    ‘The calm before the storm’

    Zuckerberg said in 2020 that the company had removed more than 1m groups in the previous year, but experts say such actions, coupled with the new policy on group recommendations, fall short.

    The platform promised to stop recommending political groups to users ahead of the elections in November and then victoriously claimed to have halved political group recommendations. But a report from the Markup, whose Citizen Browser project tracks links and group recommendations served to a nationwide panel of Facebook users, showed that 12 of the top 100 groups recommended to users were political in nature.

    Indeed, the Stop the Steal groups that emerged to cast doubt on the results of the election, and that ultimately led to the violent insurrection of 6 January, amassed hundreds of thousands of followers – all while Facebook’s algorithmic recommendations of political groups were paused. Many researchers also worry that legitimate organizing groups will be swept up in Facebook’s actions against partisan political groups and extremism.

    “I don’t have a whole lot of confidence that they’re going to be able to actually sort out what a political group is or isn’t,” said Heidi Beirich, the co-founder of the Global Project Against Hate and Extremism, who sits on the Real Facebook Oversight Board, a group of academics and watchdogs criticizing Facebook’s content moderation policies.

    “They have allowed QAnon, militias and other groups to proliferate for so long that remnants of these movements remain all over the platform,” she added. “I don’t think this is something they are going to be able to sort out overnight.”

    “It doesn’t actually take a mass movement, or a massive sea of bodies, to do the kind of work on the internet that allows for small groups to have an outsized impact on the public conversation,” added Donovan. “This is the calm before the storm.”

  • Claim of anti-conservative bias by social media firms is baseless, report finds

    Republicans including Donald Trump have raged against Twitter and Facebook in recent months, alleging anti-conservative bias, censorship and a silencing of free speech. According to a new report from New York University, none of that is true.

    The disinformation expert Paul Barrett and researcher J Grant Sims found that far from suppressing conservatives, social media platforms have, through their algorithms, amplified rightwing voices, “often affording conservatives greater reach than liberal or nonpartisan content creators”.

    Barrett and Sims’s report comes as Republicans step up their campaign against social media companies. Conservatives have long complained that platforms such as Twitter, Facebook and YouTube show bias against the right, complaints which intensified when Trump was banned from all three platforms for inciting the attack on the US Capitol which left five people dead.

    The NYU study, released by the Stern Center for Business and Human Rights, found that the claim of anti-conservative bias “is itself a form of disinformation: a falsehood with no reliable evidence to support it”.

    “There is no evidence to support the claim that the major social media companies are suppressing, censoring or otherwise discriminating against conservatives on their platforms,” Barrett said. “In fact, it is often conservatives who gain the most in terms of engagement and online attention, thanks to the platforms’ systems of algorithmic promotion of content.”

    The report found that Twitter, Facebook and other companies did not show bias when deleting incendiary tweets around the Capitol attack, as some on the right have claimed.

    Prominent conservatives including Ted Cruz, the Texas senator, have sought to crack down on big tech companies as they claim to be victims of suppression – which Barrett and Sims found does not exist.

    The researchers did outline problems social media companies face when accused of bias, and recommended a series of measures.

    “What is needed is a robust reform agenda that addresses the very real problems of social media content regulation as it currently exists,” Barrett said. “Only by moving forward from these false claims can we begin to pursue that agenda in earnest.”

    A 2020 study by the Pew Research Center reported that a majority of Americans believe social media companies censor political views. Pew found that 90% of Republicans believed views were being censored, and 69% of Republicans or people who leant Republican believed social media companies “generally support the views of liberals over conservatives”.

    Republicans including Trump have pushed to repeal section 230 of the Communications Decency Act, which shields social media companies from legal liability for content posted by users, claiming it allows platforms to suppress conservative voices.

    The NYU report suggests section 230 should be amended, with companies persuaded to “accept a range of new responsibilities related to policing content”, or risk losing liability protections.

  • Big tech facilitated QAnon and the Capitol attack. It’s time to hold them accountable

    Donald Trump’s election lies and the 6 January attack on the US Capitol have highlighted how big tech has led our society down a path of conspiracies and radicalism by ignoring the mounting evidence that their products are dangerous.

    But the spread of deadly misinformation on a global scale was enabled by the federal government’s failure to use antitrust enforcement to rein in out-of-control monopolies such as Facebook and Google. And there is a real risk the social media giants could sidestep accountability once again.

    Trump’s insistence that he won the election was an attack on democracy that culminated in the attack on the US Capitol. The events were as much the fault of Sundar Pichai, Jack Dorsey and Mark Zuckerberg – the CEOs of Google, Twitter and Facebook, respectively – as they were the fault of Trump and his cadre of co-conspirators.

    During the early days of social media, no service operated at the scale of today’s Goliaths. Adoption was limited and online communities lived in small and isolated pockets. When the Egyptian uprisings of 2011 proved the power of these services, the US state department became their cheerleader, offering them a veneer of exceptionalism which would protect them from scrutiny as they grew exponentially.

    Later, dictators and anti-democratic actors would study and co-opt these tools for their own purposes. As the megaphones got larger, the voices of bad actors also got louder. As the networks got bigger, the feedback loop amplifying those voices became stronger. It is unimaginable that QAnon could have gained a mass following without the tech companies’ dangerous indifference.

    Eventually, these platforms became immune to the forces of competition in the marketplace – they became information monopolies with runaway scale. Absent any accountability from watchdogs or the marketplace, fringe conspiracy theories enjoyed unchecked propagation. We can trace networked conspiracies from birtherism to QAnon as straight lines through the same coterie of misinformers who came to power alongside Trump.

    Today, most global internet activity happens on services owned by either Facebook or Alphabet, which includes YouTube and Google. The internet has calcified into a pair of monopolies that protect their size by optimizing to maximize “engagement”. Sadly, algorithms designed to increase dependency and usage are far more profitable than ones that would encourage timely, local, relevant and, most importantly, accurate information. The truth, in a word, is boring. Facts rarely animate the kind of compulsive engagement rewarded by recommendation and search algorithms.

    The best tool – if not the only tool – to hold big tech accountable is antitrust enforcement: enforcing the existing antitrust laws designed to rein in companies’ influence over other political, economic and social institutions.

    Antitrust enforcement has historically been the US government’s greatest weapon against such firms. From breaking up the trusts at the start of the 20th century to the present day, antitrust enforcement spurs competition and ingenuity while re-empowering citizens.
    Most antitrust historians agree that absent US v Microsoft in 1998, which stopped Microsoft from bundling products and effectively killing off other browsers, the modern internet would have been strangled in the crib.

    Ironically, Google and Facebook were the beneficiaries of such enforcement. Over two decades would pass before US authorities brought antitrust suits against Google and Facebook last year. Until then, antitrust had languished as a tool to counterbalance abusive monopolies. Big tech sees an existential threat in the renewed calls for antitrust enforcement, and these companies have aggressively lobbied to ensure key vacancies in the Biden administration are filled by their friends.

    The Democratic party is especially vulnerable to soft capture by these tech firms. Big tech executives are mostly left-leaning and donate millions to progressive causes while spouting feelgood rhetoric of inclusion and connectivity. During the Obama administration, Google and Facebook were treated as exceptional, avoiding any meaningful regulatory scrutiny. Democratic Senate leadership – specifically Senator Chuck Schumer – has recently signaled it will treat these companies with kid gloves.

    The Biden administration cannot repeat the Obama legacy of installing big tech-friendly individuals in these critical but often under-the-radar roles. The new administration, in consultation with Schumer, will be tasked with appointing a new assistant attorney general for antitrust at the Department of Justice and up to three members of the Federal Trade Commission. Figures friendly to big tech in those positions could abruptly settle the pending litigation against Google or Facebook.

    President Joe Biden and Schumer must reject any candidate who has worked in the service of big tech. Any former White House or congressional personnel who gave these companies a pass during the Obama administration should also be disqualified from consideration. Allowing big tech’s lawyers and plants to run the antitrust agencies would be the equivalent of allowing a climate-change-denying big oil executive to run the Environmental Protection Agency.

    The public is beginning to recognize the harms to society wrought by big tech, and a vibrant, bipartisan anti-monopoly movement of diverse scholars and activists has risen over the past few years. Two-thirds of Democratic voters believe, along with a majority of Republicans, that Biden should “refuse to appoint executives, lobbyists, or lawyers for these companies to positions of power or influence in his administration while this legal activity is pending”. This gives the Democratic party an opportunity to do the right thing for our country and attract new voters by fighting for the web we want.

    Big tech played a central role in the dangerous attack on the US Capitol and all of the events which led to it. Biden’s antitrust appointees will be the ones who decide if there are any consequences to be paid.

  • The silencing of Trump has highlighted the authoritarian power of tech giants | John Naughton

    It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’s been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

    The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

    In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US with its sacred first amendment on the one hand and European and other societies, which have more ambivalent histories of moderating speech. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

    A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.

    Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?

    What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

    All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist.
    Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

    In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere, it suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, whereas in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster.

    Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation incorporated in California. And these subsidiaries obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they pay taxes on their local revenues.

    What I’ve been reading

    Capitol ideas
    What Happened? is a blog post by the Duke sociologist Kieran Healy, which is the most insightful attempt I’ve come across to explain the 6 January attack on Washington’s Capitol building.

    Tweet and sour
    How @realDonaldTrump Changed Politics – and America. Derek Robertson in Politico on how Trump “governed” 140 characters at a time.

    Stay safe
    The Plague Year is a terrific New Yorker essay by Lawrence Wright that includes some very good reasons not to be blasé about Covid.

  • The Guardian view of Trump's populism: weaponised and silenced by social media | Editorial

    Donald Trump’s incitement of a mob attack on the US Capitol was a watershed moment for free speech and the internet. Bans against both the US president and his prominent supporters have spread across social media as well as email and e-commerce services. Parler, a social network popular with neo-Nazis, was ditched from mobile phone app stores and then forced offline entirely. These events suggest that the most momentous year of modern democracy was not 1989 – when the Berlin wall fell – but 1991, when web servers first became publicly available.

    There are two related issues at stake here: the chilling power afforded to huge US corporations to limit free speech; and the vast sums they make from algorithmically privileging and amplifying deliberate disinformation. The doctrines, regulations and laws that govern the web were constructed to foster growth in an immature sector. But the industry has grown into a monster – one which threatens democracy by commercialising the swift spread of controversy and lies for political advantage.

    What is required is a complete rethink of the ideological biases that have created the conditions for tech giants to have such authority – and which have laid their users open to manipulation for profit. Social media companies currently do not have legal liability for the consequences of the activities that their platforms enable. Big tech can no longer go unpunished. Companies have had to make judgments about what their customers can expect to see when they visit their sites. It is only right that they are held accountable for the “terms and conditions” that embed consumer safeguards. It would be a good start if measures within the UK online harms bill, which go some way to protecting users from being exposed to violent extremism and hate, were to be enacted.

    In a society people also desire, and need, the ability to express themselves to become fully functioning individuals. Freedom of expression is important in a democracy, where voters need to weigh up competing arguments and appreciate different ideas for themselves. John Milton optimistically wrote in Areopagitica: “Let Truth and Falsehood grapple; whoever knew Truth put to the worse in a free and open encounter?” But 17th-century England did not know 21st-century Silicon Valley. Today, speech takes place online much more than in public streets. Politics is so polarised that Mr Trump and his Republican allies claimed without any factual basis that electoral fraud was rampant.

    Facebook and Twitter can limit, control and censor speech as much as or more than the government. Until now, such firms exempted politicians from their own hate speech policies, arguing that what they said was worthy of public debate. This rests in part on the US supreme court. The legal academic Miguel Schor argued that the bench stood Orwell on his head in 2012 by concluding that “false statements of fact enjoyed the same protection as core political speech”. He said judges feared creating an Orwellian ministry of truth, but argued they miscalculated because the US “does have an official ministry of truth in the form of the president’s bully pulpit which Trump used to normalise lying”.

    Silicon Valley bosses did not silence Mr Trump in a fit of conscience, but because they think they can stave off anti-trust actions by a Democrat-controlled Congress.
    Elizabeth Warren threatened to break up big tech and blasted Facebook for “spreading Trump’s lies and disinformation.” Her plan to turn social media into “platform utilities” offers a way to advantage social values such as truth telling over the bottom line.

    Impunity for corporations, technology and politicians has grown so much that it is incompatible with a functioning democracy. Populists the world over have distorted speech to maintain power by dividing the electorate into separate camps, each convinced that the other is the victim of their opponent’s ideology. To achieve this, demagogues did not need an authoritarian state. As Mr Trump has demonstrated, an unregulated marketplace of ideas, where companies thrive by debasing politics, was enough.

  • Opinion divided over Trump's ban from social media

    As rioters were gathering around the US Capitol last Wednesday, a familiar question began to echo around the offices of the large social networks: what should they do about Donald Trump and his provocative posts?

    The answer has been emphatic: ban him.

    First he was suspended from Twitter, then from Facebook. Snapchat, Spotify, Twitch, Shopify and Stripe have all followed suit, while Reddit, TikTok, YouTube and even Pinterest announced new restrictions on posting in support of the president or his actions. Parler, a social media platform that sells itself on a lack of moderation, was removed from app stores and refused service by Amazon.

    The action has sparked a huge debate about free speech and whether big technology companies – or, to be more precise, their billionaire chief executives – are fit to act as judge and jury in high-profile cases. So what are the arguments on both sides – and who is making them?

    FOR

    For many, such social media bans were the right thing to do – if too late. After all, the incitement had already occurred and the Capitol had already been stormed.

    “While I’m pleased to see social media platforms like Facebook, Twitter and YouTube take long-belated steps to address the president’s sustained misuse of their platforms to sow discord and violence, these isolated actions are both too late and not nearly enough,” said Mark Warner, a Democratic senator from Virginia. “Disinformation and extremism researchers have for years pointed to broader network-based exploitation of these platforms.”

    Greg Bensinger, a member of the editorial board of the New York Times, said what happened on 6 January “ought to be social media’s day of reckoning”.

    He added: “There is a greater calling than profits, and Mr Zuckerberg and Twitter’s CEO, Jack Dorsey, must play a fundamental role in restoring truth and decency to our democracy and democracies around the world.

    “That can involve more direct, human moderation of high-profile accounts; more prominent warning labels; software that can delay posts so that they can be reviewed before going out to the masses, especially during moments of high tension; and a far greater willingness to suspend or even completely block dangerous accounts like Mr Trump’s.”

    Even observers who had previously argued against taking action had changed their minds by the weekend. “Turn off Trump’s account,” wrote the tech analyst Ben Thompson.

    “My preferred outcome to yesterday’s events is impeachment. Encouraging violence to undo an election result that one disagrees with is sedition, surely a high crime or misdemeanor, and I hold out hope that Congress will act over the next few days, as unlikely as that seems … Sometimes, though, the right level doesn’t work, yet the right thing needs to be done.”

    The free speech activist Jillian C York agreed that action had to be taken, but, she said on Monday: “I’m cautious about praising any of these companies, to be honest. I think that in particular Facebook deserves very little praise. They waited until the last moment to do anything, despite months of calls.

    “When it comes to Twitter, I think we can be a little bit more forgiving. They tried for many, many months to take cautious decisions. Yes, this is a sitting president; taking them down is a problem.
    “And it is problematic, even if there is a line at which it becomes the right choice.”

    Some have wondered whether the platforms’ convenient decision to grow a backbone has less to do with the violence of the day and more with political manoeuvring.

    “It took blood & glass in the halls of Congress – and a change in the political winds – for the most powerful tech companies to recognise, at the last possible moment, the threat of Trump,” tweeted Senator Richard Blumenthal, from Connecticut.

    AGAINST

    Predictably, opposition to Trump’s ban came from his own family. “Free speech is dead and controlled by leftist overlords,” tweeted his son Donald Jr. “The ayatollah and numerous other dictatorial regimes can have Twitter accounts with no issue despite threatening genocide to entire countries and killing homosexuals etc… but The President of the United States should be permanently suspended. Mao would be proud.”

    But the ban, and the precedent that it could set, has worried some analysts and media experts.

    “Banning a sitting president from social media platforms is, whichever way you look at it, an assault on free speech,” the Sunday Times wrote in an editorial. “The fact that the ban was called for by, among others, Michelle Obama, who said on Thursday that the Silicon Valley platforms should stop enabling him because of his ‘monstrous behaviour’, will add to the suspicion that the ban was politically motivated.”

    On Monday, the German chancellor, Angela Merkel – hardly known for her affection for the US president – made it clear that she thought it was “problematic” that Trump had been blocked. Her spokesperson, Steffen Seibert, called freedom of speech “a fundamental right of elementary significance”.

    She said any restriction should be “according to the law and within the framework defined by legislators – not according to a decision by the management of social media platforms”.

    The ban has also worried those who are already concerned about the strength of Silicon Valley.

    “The institutions of American democracy have consistently failed to hold President Trump’s unrestrained authoritarianism, hate and racism accountable,” said Silkie Carlo, the director of Big Brother Watch, “but this corporate power grab does nothing to benefit American democracy in practice or in principle.”

    “American democracy is in peril if it relies on a corporate denial of service to protect the nation from its own president, rather than rely on accountable institutions of justice and democracy,” Carlo added.

    For York, such concerns are valid, but risk an over-emphasis on US politics and concerns. “The majority of the public doesn’t care about these issues on a day-to-day basis,” she said, citing world leaders such as Jair Bolsonaro and Narendra Modi as others who have engaged in hate speech and incitement on Twitter.

    “It’s only when it hits Trump, and that’s the problem. Because we should be thinking about this as a society day to day.”

  • Donald Trump being banned from social media is a dangerous distraction | Matt Stoller and Sarah Miller

    In the wake of Donald Trump’s instigation of a shocking attack on the US Capitol, it’s easy to demand that Trump be barred from social media.

    “These corporations should announce a permanent ban of his accounts,” said Representative Bennie Thompson, chair of the House homeland security committee. “Nothing short of that will meet this moment.”

    Indeed, Facebook, Google and Twitter have taken action, suspending the president from their platforms or removing videos.

    But whatever one thinks of stopping Trump fomenting violence by limiting his ability to communicate, the ability of democratically unaccountable monopolies with extraordinary control over communications infrastructure, like Facebook and Google, YouTube’s parent company, to silence political speech is exceptionally dangerous. It also sidesteps the underlying problem – that it’s their dominance and business model that promotes conspiratorial, fake and violent content to millions.

    Trump is not the first demagogue America has seen and he won’t be the last. But his power is amplified by a corrupted information ecosystem created by Google, Facebook and media barons like Rupert Murdoch. Those who came to the Capitol to riot sincerely believed they were stopping the subversion of American democracy because an entire information ecosystem encouraged them to discount any political or media institution that told them otherwise. That ecosystem of disinformation, extremism, rage and bigotry won’t go away by banning Trump or his supporters. That’s because the driving force behind it is profit: Facebook and Google make billions by fostering it.

    To understand why, policymakers must recognize the choices that enabled the rise of these toxic but wildly lucrative business models. Traditionally, US media regulation encouraged localized press and a neutral system of information distribution, starting with the Post Office in 1791. But beginning in the 1970s, policymakers changed their philosophy to encourage consolidation.

    They altered rules around advertising, publishing and information distribution markets, weakening antitrust laws, killing important protections like the Fairness Doctrine and passing the Telecommunications Act of 1996, which lifted local media ownership caps and unleashed a wave of mergers and acquisitions. They also enacted Section 230 of the Communications Decency Act, a provision that today allows tech platforms to escape liability for illegal content they help shape and monetize. And over the last 20 years, policymakers enabled Google and Facebook to roll up the entire digital advertising and communication space by permitting hundreds of mergers, without a single challenge.

    The net effect is that two giant corporations, Facebook and Google, dominate online communications, profiting by selling advertising against cheaply produced, addictive clickbait and conspiratorial content. Making matters worse, in seeking ad money and quick profits, Facebook and Google, as well as private equity, have killed the pro-social institutions on which we rely, such as local newspapers, by redirecting advertising revenue to themselves. More than one-fourth of American newspapers have disappeared in the last 15 years, with many of those left being hollowed out as “ghost papers” with no news-gathering ability.

    Filling their place are conspiracy theories like QAnon, which these platforms amplify to turn a handsome profit.
    Survey results show Google provided ad services to 86% of sites carrying coronavirus conspiracies.

    This isn’t a uniquely American problem: Facebook, with its addictive user interface designed to maximize engagement, has helped foster deadly mob attacks in India, Sri Lanka and Myanmar and bent to the will of autocrats elsewhere. It’s not just the dramatic, either. More than three in five Americans feel lonely, and there is evidence that social media usage isolates and alienates us, changing our brains and drawing some to political extremism.

    The problem, in other words, won’t go away with banning Trump, because the problem is that the steady supply of toxic, addictive content that keeps eyeballs on ads is at the heart of these monopolies’ business models. Trump is far from the only supplier of that content now, and there’s no doubt others will rise up to replace him, with a boost from Facebook and Google.

    The Biden administration and the new Congress can fix these twin problems of monopoly power and profit motive by returning to a traditional policy framework of fair competition, neutral communication networks and business models that finance local news and a diversity of voices.

    For the tech platforms, Congress and agencies like the Federal Trade Commission have the authority to ban targeted advertising, much in the same way Verizon, for example, is prohibited by law from listening to your private calls and using that information to directly or indirectly advertise to you based on that surveillance.

    Breaking up these goliaths and prohibiting mergers by dominant firms would force them to compete over users based on data privacy and safety, as Facebook once had to do when it was in a competitive social networking world in the early 2000s. And imposing neutrality, like non-discrimination rules and interoperability requirements, would end the tyranny of algorithms that push us towards incendiary content.

    The good news is Republican and Democratic attorneys general in 48 states have filed historic antitrust suits against Google and Facebook, seeking to break them up, and the Biden administration and many in Congress seem wide awake to the pernicious role of social media platforms, particularly Facebook and Google, in the fraying of America’s social fabric.

    But until political leaders recognize that these tech barons make their billions by selling tickets to the end of American democracy, it will continue to creep ever closer. Seeing Trump booted off Facebook may be emotionally satisfying and even potentially prevent dangerous behavior in the short term. But only a wholesale restructuring of our online communications infrastructure can preserve democracy.

  • Trump attempted a coup: he must be removed while those who aided him pay | Robert Reich

    A swift impeachment is imperative, but from Rudy Giuliani and Don Jr to Fox News and Twitter, the president did not act alone.

    Call me old-fashioned, but when the president of the United States encourages armed insurgents to breach the Capitol and threaten the physical safety of Congress, in order to remain in power, I call it an attempted coup.