More stories

  • 'Your business model is the problem': tech CEOs grilled over role in Capitol attack

    The CEOs of America’s biggest technology companies faced a grilling from Congress about the 6 January insurrection at the Capitol, as protesters outside the hearing denounced the platforms for playing a role in fueling the violence.

    Sundar Pichai of Google, Mark Zuckerberg of Facebook and Jack Dorsey of Twitter were called to testify on Thursday before two committees of the House of Representatives on social media’s role in promoting extremism and misinformation.

    Protesters who had gathered outside the Capitol building ahead of the hearing portrayed the tech executives as the violent insurrectionists whose images went viral in the days after the 6 January riots. One cutout erected on the grounds showed Zuckerberg as the “QAnon Shaman”, a part-time actor who participated in the riot wearing a horned fur hat.

    “The platforms’ inability to deal with the violence, hate and disinformation they promote on their platforms shows that these companies are failing to regulate themselves,” said Emma Ruby-Sachs, the executive director of SumOfUs, the human rights organization behind the protests. “After the past five years of manipulation, data harvesting and surveillance, the time has come to rein in big tech.”

    Lawmakers opened the hearing with video testimonies criticizing the platforms for their role in the 6 January violence, as well as in the spread of medical misinformation about the Covid-19 vaccine.

    “You failed to meaningfully change after your platform has played a role in fomenting insurrection, abetting the spread of the virus and trampling American civil liberties,” said the Democratic representative Frank Pallone, the chair of the energy and commerce committee. “Your business model itself has become the problem and the time for self-regulation is over. It’s time we legislate to hold you accountable.”

    “You’re not passive bystanders – you are not non-profits or religious organizations that are trying to do a good job for humanity – you’re making money,” Pallone later said. “The point we’re trying to make today is that when you spread misinformation, when extremists are actively promoted and amplified, you do it because you make more money.”

    “The witnesses here today have demonstrated time and time again that self-regulation has not worked,” echoed Jan Schakowsky, a Democratic representative from Illinois. “They must be held accountable for allowing disinformation and misinformation to spread.”

    Meanwhile, Republican lawmakers quickly turned to the topic of “cancel culture” and perceived, but unproven, bias against conservatives on social media.

    In his opening statement, Facebook’s Zuckerberg argued that tech companies should not be making the decisions about what is allowed online, and stressed Facebook’s efforts to combat misinformation and to promote accurate vaccine information. Google’s Pichai, too, sought to highlight his company’s role in connecting users with vaccine information and other Covid-19 resources.

    Thursday’s session was the latest in a record number of hearings for the big technology players in the past year, as executives have repeatedly been called to the Hill to testify on antitrust issues, misinformation and hate speech. The hearing, titled “Disinformation nation: social media’s role in promoting extremism and misinformation”, was held by the House of Representatives’ energy and commerce committee.

    Lawmakers repeatedly pressed the CEOs on how their platforms were tackling hate speech and misinformation more widely. The Democratic representative Doris Matsui, of California, raised the issue of anti-Asian hate speech and directly asked Dorsey and Zuckerberg what they were doing to address it. She also asked why they took so long to remove racist hashtags that blamed the coronavirus pandemic on Asian Americans, citing the recent attack on Asian women in Atlanta as a consequence of these policies.

    “The issues we are discussing here are not abstract,” she said. “They have real world consequences and implications that are too often measured in human lives.” She also cited a study that showed a substantial rise in hate speech in the week after Donald Trump first used the term “China flu” in a tweet.

    Dorsey countered that he would not ban the racist hashtags outright because “a lot of these hashtags contain counter speech”, or posts refuting the racism the hashtags initiated. Zuckerberg similarly said that hate speech policies at Facebook are “nuanced” and that the company has an obligation to protect free speech.

    Congressman Tony Cárdenas of California asked Zuckerberg how the company addresses the major problem of misinformation that targets Latino users, noting that studies have shown Facebook catches less false content in Spanish than in English. Zuckerberg responded that Facebook has an international factchecking program with workers in more than 80 countries speaking “a bunch of languages”, including Spanish. He also said Facebook translates accurate information about Covid-19 vaccines and other issues from English into a number of languages.

    Cárdenas noted the example of his Spanish-speaking mother-in-law, who said she did not want to get a vaccine because she had heard on social media it would place a microchip in her arm. “For God’s sake, that to me is unbelievable, that she got that information on social media platforms,” he said. “Clearly Spanish language misinformation is an issue.”

  • 'It let white supremacists organize': the toxic legacy of Facebook's Groups

    Mark Zuckerberg, the Facebook CEO, announced last week that the platform will no longer algorithmically recommend political groups to users, in an attempt to “turn down the temperature” on online divisiveness. But experts say such policies are difficult to enforce, much less quantify, and that the toxic legacy of the Groups feature, and of the algorithmic incentives promoting it, will be difficult to erase.

    “This is like putting a Band-Aid on a gaping wound,” said Jessica J González, the co-founder of the anti-hate speech group Change the Terms. “It doesn’t do enough to combat the long history of abuse that’s been allowed to fester on Facebook.”

    Groups – a place to create ‘meaningful social infrastructure’

    Facebook launched Groups, a feature that allows people with shared interests to communicate on closed forums, in 2010, but began to make a more concerted effort to promote the feature around 2017, after the Cambridge Analytica scandal cast a shadow on the platform’s Newsfeed.

    In a long blogpost in February 2017 called Building Global Community, Zuckerberg argued there was “a real opportunity” through groups to create “meaningful social infrastructure in our lives”. He added: “More than one billion people are active members of Facebook groups, but most don’t seek out groups on their own – friends send invites or Facebook suggests them. If we can improve our suggestions and help connect one billion people with meaningful communities, that can strengthen our social fabric.”

    After growing its group suggestions and advertising the feature extensively – including during a 60-second spot in the 2020 Super Bowl – Facebook did see a rise in use. In February 2017 there were 100 million people on the platform in groups they considered “meaningful”. Today, that number is more than 600 million.

    That fast rise, however, came with little oversight and proved messy. In shifting its focus to Groups, Facebook began to rely more heavily on unpaid moderators to police hate speech on the platform. Groups proved a more private place to speak, for conspiracy theories to proliferate and for some users to organize real-life violence – all with little oversight from outside experts or moderators.

    In 2020 Facebook introduced a number of new rules to “keep Facebook groups safe”, including new consequences for individuals who violate rules and increased responsibility for group admins to keep users in line. The company says it has hired 35,000 people to address safety on Facebook, including engineers, moderators and subject matter experts, and has invested in AI technology to spot posts that violate its guidelines.

    “We apply the same rules to Groups that we apply to every other form of content across the platform,” a Facebook spokesperson said. “When we find Groups breaking our rules we take action – from reducing their reach to removing them from recommendations, to taking them down entirely. Over the years we have invested in new tools and AI to find and remove harmful content and developed new policies to combat threats and abuse.”

    Researchers have long complained that little is shared publicly about how, exactly, Facebook’s algorithms work, what is being shared privately on the platform, and what information Facebook collects on users. The increased popularity of Groups made it even more difficult to keep track of activity on the platform.

    “It is a black box,” said González of Facebook’s policy on Groups. “This is why many of us have been calling for years for greater transparency about their content moderation and enforcement standards.”

    Meanwhile, the platform’s algorithmic recommendations sucked users further down the rabbit hole. Little is known about exactly how Facebook’s algorithms work, but it is clear the platform recommends that users join groups similar to the ones they are already in, based on keywords and shared interests. A 2016 internal report found that “64% of all extremist group joins are due to our recommendation tools”.

    “Facebook has let white supremacists organize and conspiracy theorists organize all over its platform and has failed to contain that problem,” González said. “In fact it has significantly contributed to the spread of that problem through its recommendation system.”

    ‘We need to do something to stop these conversations’

    Facebook’s own research showed that algorithmic recommendations of groups may have contributed to the rise of violence and extremism. On Sunday, the Wall Street Journal reported that internal documents showed executives were aware of the risks posed by groups and were repeatedly warned by researchers to address them. In one presentation in August 2020, researchers said roughly “70% of the top 100 most active US Civic Groups are considered non-recommendable for issues such as hate, misinfo, bullying and harassment”.

    “We need to do something to stop these conversations from happening and growing as quickly as they do,” the researchers wrote, according to the Wall Street Journal, suggesting measures to slow the growth of Groups until more could be done to address the issues.

    Several months later, Facebook halted algorithmic recommendations for political groups ahead of the US elections – a move extended indefinitely by the policy announced last week. The change appeared to be motivated by the 6 January insurrection, which the FBI found had been tied to organizing on Facebook.

    In response to the Wall Street Journal story, Guy Rosen, Facebook’s vice-president of integrity, who oversees content moderation policies on the platform, said the problems were indicative of emerging threats rather than an inability to address long-term ones. “If you’d have looked at Groups several years ago, you might not have seen the same set of behaviors,” he said.

    But researchers say the use of Groups to organize and radicalize users is an old problem: Facebook groups had been tied to a number of harmful incidents and movements long before January’s violence.

    “Political groups on Facebook have always advantaged the fringe, and the outsiders,” said Joan Donovan, a lead researcher at Data & Society who studies the rise of hate speech on Facebook. “It’s really about reinforcement – the algorithm learns what you’ve clicked on and what you like and it tries to reinforce those behaviors. The groups become centers of coordination.”

    Facebook was criticized for its inability to police terror groups such as Islamic State and al-Qaida using the platform as early as 2016. It was used extensively in organizing the Unite the Right rally in Charlottesville in 2017, where white nationalists and neo-Nazis violently marched. Militarized groups including the Proud Boys, Boogaloo Bois and militias organized, promoted and grew their ranks on Facebook. In 2020 officials arrested men who had used Facebook to plan a violent kidnapping of the Michigan governor, Gretchen Whitmer. A 17-year-old from Illinois shot three people, killing two, at a protest organized on Facebook.

    The same algorithms have allowed the anti-vaccine movement to thrive on Facebook, with hundreds of groups amassing hundreds of thousands of members over the years. A Guardian report in 2019 found the majority of search results for the term “vaccination” were anti-vaccine, led by two misinformation groups, “Stop Mandatory Vaccination” and “Vaccination Re-education Discussion Forum”, each with more than 140,000 members. These groups were ultimately tied to harassment campaigns against doctors who support vaccines.

    In September 2020, Facebook stopped algorithmically recommending health groups in an effort to stem such misinformation. It has also added other rules to curb the spread of misinformation, including banning users from creating a new group if an existing group they administered has been banned.

    The origin of the QAnon movement has been traced to a post on a message board in 2017. By the time Facebook banned content related to the movement in 2020, a Guardian report had exposed that Facebook groups dedicated to the dangerous conspiracy theory were spreading on the platform at a rapid pace, with thousands of groups and millions of members.

    ‘The calm before the storm’

    Zuckerberg said in 2020 that the company had removed more than 1m groups in the previous year, but experts say that action, coupled with the new policy on group recommendations, falls short.

    The platform promised to stop recommending political groups to users ahead of the elections in November, and then victoriously claimed to have halved political group recommendations. But a report from the Markup showed that 12 of the top 100 groups recommended to users in its Citizen Browser project, which tracks links and group recommendations served to a nationwide panel of Facebook users, were political in nature.

    Indeed, the Stop the Steal groups that emerged to cast doubt on the results of the election, and that ultimately led to the violent insurrection of 6 January, amassed hundreds of thousands of followers – all while Facebook’s algorithmic recommendations of political groups were paused. Many researchers also worry that legitimate organizing groups will be swept up in Facebook’s actions against partisan political groups and extremism.

    “I don’t have a whole lot of confidence that they’re going to be able to actually sort out what a political group is or isn’t,” said Heidi Beirich, the co-founder of the Global Project Against Hate and Extremism, who sits on the Real Facebook Oversight Board, a group of academics and watchdogs criticizing Facebook’s content moderation policies.

    “They have allowed QAnon, militias and other groups to proliferate for so long that remnants of these movements remain all over the platform,” she added. “I don’t think this is something they are going to be able to sort out overnight.”

    “It doesn’t actually take a mass movement, or a massive sea of bodies, to do the kind of work on the internet that allows for small groups to have an outsized impact on the public conversation,” added Donovan. “This is the calm before the storm.”

  • The silencing of Trump has highlighted the authoritarian power of tech giants | John Naughton

    It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he had been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

    The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was HL Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

    In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences between the US, with its sacred first amendment, on the one hand, and European and other societies, with their more ambivalent histories of moderating speech, on the other. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

    A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and less problematic, because anyone denied a platform could always go elsewhere.

    Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed the EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way: who elected the bosses of Facebook, Google, YouTube and Twitter?

    What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system while law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

    All of which suggests that it would be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

    In those terms, HSBC may be a “global” bank, but when it operates in the UK it has to obey British regulations; when it operates in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, when in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here – as Facebook UK, say – they are merely British subsidiaries of an American corporation incorporated in California. And those subsidiaries obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they pay taxes on their local revenues.

    What I’ve been reading

    Capitol ideas
    What Happened? is a blog post by the Duke sociologist Kieran Healy, which is the most insightful attempt I’ve come across to explain the 6 January attack on Washington’s Capitol building.

    Tweet and sour
    How @realDonaldTrump Changed Politics – and America. Derek Robertson in Politico on how Trump “governed” 140 characters at a time.

    Stay safe
    The Plague Year is a terrific New Yorker essay by Lawrence Wright that includes some very good reasons not to be blasé about Covid.

  • The Guardian view of Trump's populism: weaponised and silenced by social media | Editorial

    Donald Trump’s incitement of a mob attack on the US Capitol was a watershed moment for free speech and the internet. Bans on both the US president and his prominent supporters have spread across social media as well as email and e-commerce services. Parler, a social network popular with neo-Nazis, was ditched from mobile phone app stores and then forced offline entirely. These events suggest that the most momentous year for modern democracy was not 1989 – when the Berlin wall fell – but 1991, when web servers first became publicly available.

    There are two related issues at stake here: the chilling power afforded to huge US corporations to limit free speech; and the vast sums they make from algorithmically privileging and amplifying deliberate disinformation. The doctrines, regulations and laws that govern the web were constructed to foster growth in an immature sector. But the industry has grown into a monster – one that threatens democracy by commercialising the swift spread of controversy and lies for political advantage.

    What is required is a complete rethink of the ideological biases that have created the conditions for tech giants to wield such authority – and that have laid their users open to manipulation for profit. Social media companies currently bear no legal liability for the consequences of the activities their platforms enable. Big tech can no longer go unpunished. Companies have had to make judgments about what their customers can expect to see when they visit their sites. It is only right that they are held accountable for the “terms and conditions” that embed consumer safeguards. It would be a good start if measures within the UK online harms bill, which go some way to protecting users from exposure to violent extremism and hate, were enacted.

    People in a society also desire, and need, the ability to express themselves in order to become fully functioning individuals. Freedom of expression is important in a democracy, where voters need to weigh up competing arguments and judge different ideas for themselves. John Milton optimistically wrote in Areopagitica: “Let Truth and Falsehood grapple; whoever knew Truth put to the worse in a free and open encounter?” But 17th-century England did not know 21st-century Silicon Valley. Today, speech takes place online far more than in public streets. Politics is so polarised that Mr Trump and his Republican allies claimed, without any factual basis, that electoral fraud was rampant.

    Facebook and Twitter can limit, control and censor speech as much as or more than any government. Until now, such firms exempted politicians from their own hate speech policies, arguing that what they said was worthy of public debate. This stance rests in part on the US supreme court. The legal academic Miguel Schor argued that the bench stood Orwell on his head in 2012 by concluding that “false statements of fact enjoyed the same protection as core political speech”. Judges feared creating an Orwellian ministry of truth, he said, but they miscalculated, because the US “does have an official ministry of truth in the form of the president’s bully pulpit which Trump used to normalise lying”.

    Silicon Valley bosses did not silence Mr Trump in a fit of conscience, but because they think they can stave off antitrust actions by a Democrat-controlled Congress. Elizabeth Warren has threatened to break up big tech and blasted Facebook for “spreading Trump’s lies and disinformation”. Her plan to turn social media companies into “platform utilities” offers a way to privilege social values such as truth-telling over the bottom line.

    Impunity for corporations, technology and politicians has grown so much that it is incompatible with a functioning democracy. Populists the world over have distorted speech to maintain power by dividing the electorate into separate camps, each convinced that it is the victim of the other’s ideology. To achieve this, demagogues did not need an authoritarian state. As Mr Trump has demonstrated, an unregulated marketplace of ideas, where companies thrive by debasing politics, was enough.

  • Opinion divided over Trump's ban from social media

    As rioters were gathering around the US Capitol last Wednesday, a familiar question began to echo around the offices of the large social networks: what should they do about Donald Trump and his provocative posts? The answer has been emphatic: ban him.

    First he was suspended from Twitter, then from Facebook. Snapchat, Spotify, Twitch, Shopify and Stripe have all followed suit, while Reddit, TikTok, YouTube and even Pinterest have announced new restrictions on posting in support of the president or his actions. Parler, a social media platform that sells itself on a lack of moderation, was removed from app stores and refused service by Amazon.

    The action has sparked a huge debate about free speech and whether big technology companies – or, to be more precise, their billionaire chief executives – are fit to act as judge and jury in high-profile cases. So what are the arguments on both sides – and who is making them?

    FOR

    For many, such social media bans were the right thing to do – if too late. After all, the incitement had already occurred and the Capitol had already been stormed.

    “While I’m pleased to see social media platforms like Facebook, Twitter and YouTube take long-belated steps to address the president’s sustained misuse of their platforms to sow discord and violence, these isolated actions are both too late and not nearly enough,” said Mark Warner, a Democratic senator from Virginia. “Disinformation and extremism researchers have for years pointed to broader network-based exploitation of these platforms.”

    Greg Bensinger, a member of the editorial board of the New York Times, said what happened on 6 January “ought to be social media’s day of reckoning”. He added: “There is a greater calling than profits, and Mr Zuckerberg and Twitter’s CEO, Jack Dorsey, must play a fundamental role in restoring truth and decency to our democracy and democracies around the world.

    “That can involve more direct, human moderation of high-profile accounts; more prominent warning labels; software that can delay posts so that they can be reviewed before going out to the masses, especially during moments of high tension; and a far greater willingness to suspend or even completely block dangerous accounts like Mr Trump’s.”

    Even observers who had previously argued against taking action had changed their minds by the weekend. “Turn off Trump’s account,” wrote the tech analyst Ben Thompson. “My preferred outcome to yesterday’s events is impeachment. Encouraging violence to undo an election result that one disagrees with is sedition, surely a high crime or misdemeanor, and I hold out hope that Congress will act over the next few days, as unlikely as that seems … Sometimes, though, the right level doesn’t work, yet the right thing needs to be done.”

    The free speech activist Jillian C York agreed that action had to be taken, but said on Monday: “I’m cautious about praising any of these companies, to be honest. I think that in particular Facebook deserves very little praise. They waited until the last moment to do anything, despite months of calls.

    “When it comes to Twitter, I think we can be a little bit more forgiving. They tried for many, many months to take cautious decisions. Yes, this is a sitting president; taking them down is a problem. And it is problematic, even if there is a line at which it becomes the right choice.”

    Some have wondered whether the platforms’ convenient decision to grow a backbone has less to do with the violence of the day and more to do with political manoeuvring. “It took blood & glass in the halls of Congress – and a change in the political winds – for the most powerful tech companies to recognise, at the last possible moment, the threat of Trump,” tweeted Senator Richard Blumenthal of Connecticut.

    AGAINST

    Predictably, opposition to Trump’s ban came from his own family. “Free speech is dead and controlled by leftist overlords,” tweeted his son Donald Jr. “The ayatollah and numerous other dictatorial regimes can have Twitter accounts with no issue despite threatening genocide to entire countries and killing homosexuals etc… but The President of the United States should be permanently suspended. Mao would be proud.”

    But the ban, and the precedent it could set, has worried some analysts and media experts. “Banning a sitting president from social media platforms is, whichever way you look at it, an assault on free speech,” the Sunday Times wrote in an editorial. “The fact that the ban was called for by, among others, Michelle Obama, who said on Thursday that the Silicon Valley platforms should stop enabling him because of his ‘monstrous behaviour’, will add to the suspicion that the ban was politically motivated.”

    On Monday the German chancellor, Angela Merkel – hardly known for her affection for the US president – made it clear that she thought it “problematic” that Trump had been blocked. Her spokesperson, Steffen Seibert, called freedom of speech “a fundamental right of elementary significance”. She said any restriction should be “according to the law and within the framework defined by legislators – not according to a decision by the management of social media platforms”.

    The ban has also worried those already concerned about the strength of Silicon Valley. “The institutions of American democracy have consistently failed to hold President Trump’s unrestrained authoritarianism, hate and racism accountable,” said Silkie Carlo, the director of Big Brother Watch, “but this corporate power grab does nothing to benefit American democracy in practice or in principle.”

    “American democracy is in peril if it relies on a corporate denial of service to protect the nation from its own president, rather than rely on accountable institutions of justice and democracy,” Carlo added.

    For York, such concerns are valid but risk an over-emphasis on US politics. “The majority of the public doesn’t care about these issues on a day-to-day basis,” she said, citing world leaders such as Jair Bolsonaro and Narendra Modi as others who have engaged in hate speech and incitement on Twitter. “It’s only when it hits Trump, and that’s the problem. Because we should be thinking about this as a society day to day.”


    All I want for 2021 is to see Mark Zuckerberg up in court | John Naughton

It’s always risky making predictions about the tech industry, but this year looks like being different, at least in the sense that there are two safe bets. One is that the attempts to regulate the tech giants that began last year will intensify; the second that we will be increasingly deluged by sanctimonious cant from Facebook & co as they seek to avoid democratic curbing of their unaccountable power.

On the regulation front, last year in the US, Alphabet, Google’s corporate owner, found itself facing major antitrust suits from 38 states as well as from the Department of Justice. On this side of the pond, there are preparations for a Digital Markets Unit with statutory powers that will be able to neatly sidestep the tricky definitional questions of what constitutes a monopoly in a digital age. Instead, the unit will decide on a case-by-case basis whether a particular tech company has “strategic market status” if it possesses “substantial, entrenched market power in at least one digital activity” or if it acts as an online “gateway” for other businesses. And if a company is judged to have this status, then penalties and regulations will be imposed on it.

Over in Brussels, the European Union has come up with a new two-pronged legal framework for curbing digital power – the Digital Markets Act and the Digital Services Act. The Digital Markets Act is aimed at curbing anti-competitive practices in the tech industry (like buying up potential competitors before they can scale up) and will include fines of 10% of global revenues for infringers. The Digital Services Act, for its part, will oblige social media platforms to take more responsibility for illegal content on their platforms – scams, terrorist content, images of abuse, etc – for which they could face fines of up to 6% of global revenue if they fail to police content adequately.
So the US and UK approach focuses on corporate behaviour; the EU approach focuses on defining what is allowed legally. All of this action has been a long time coming and while it’s difficult to say exactly how it will play out, the bottom line is that the tech industry is – finally – going to become a regulated one. Its law-free bonanza is going to come to an end.

The big question, though, is: when? Antitrust actions proceed at a glacial pace because of the complexity of the issues and the bottomless legal budgets of the companies involved. The judge in one of the big American antitrust cases against Google has said that he expects the case to get to court only in late 2023 and then it could run for several years (as the Microsoft case did in the 1990s).

The problem with that, as the veteran anti-monopoly campaigner Matt Stoller has pointed out, is that the longer monopolistic behaviour goes on, the more damage (eg, to advertisers whose revenue is being stolen and other businesses whose property is being appropriated) is being done. Google had $170bn in revenue last year and is growing on average at 10-20% a year. On a conservative estimate of 10% growth, the company will add another $100bn to its revenue by 2025, when the case will still be in court. Facebook, says Stoller, “is at $80bn of revenue this year, but it is growing faster, so the net increase of revenue is a roughly similar amount. In other words, if the claims of the government are credible, then the lengthy case, while perhaps necessary, is also enabling these monopolists to steal an additional $100bn apiece.”

What could speed up bringing these monopolists to account? A key factor is the vigour with which the US Department of Justice prosecutes its case(s).
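Stoller’s $100bn figure is simple compound growth. A quick back-of-envelope sketch bears out the order of magnitude (the $170bn base, 10% rate and 2025 horizon come from the column; the function name is my own):

```python
# Back-of-envelope check of the column's estimate: Google's ~$170bn
# revenue, compounding at a conservative 10% a year for five years.
def projected_revenue(base_bn: float, annual_growth: float, years: int) -> float:
    """Compound growth: base * (1 + g) ** years, in billions of dollars."""
    return base_bn * (1 + annual_growth) ** years

extra = projected_revenue(170, 0.10, 5) - 170
print(f"Added annual revenue by 2025: ~${extra:.0f}bn")  # ~$104bn, i.e. roughly $100bn
```

At the top of the column’s 10-20% range the same sketch gives well over $250bn of added revenue, so the $100bn-apiece claim is, if anything, conservative.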
In the run-up to the 2020 election, the Democrats in Congress displayed an encouraging enthusiasm for tackling tech monopolies, but Joe Biden’s choices for top staff in his administration include a depressing proportion of former tech company stalwarts. And his vice-president-elect, Kamala Harris, consistently turned a blind eye to the anti-competitive acquisitions of the Silicon Valley giants throughout her time as California’s attorney general. So if people are hoping for antitrust zeal from the new US government, they may be in for disappointment.

Interestingly, Stoller suggests that another approach (inspired by the way trust-busters in the US acted in the 1930s) could have useful leverage on corporate behaviour from now on. Monopolisation isn’t just illegal, he points out, “it is in fact a crime, an appropriation of the rights and property of others by a dominant actor. The lengthy trial essentially means that bank robbers get to keep robbing banks until they are convicted, and can probably keep the additional loot.”

Since a basic principle of the rule of law is that crime shouldn’t pay, adding the possibility of criminal charges to the antitrust actions might, like the prospect of being hanged in the morning (pace Dr Johnson), concentrate minds at Facebook, Google, Amazon and Apple. As an eternal optimist, I cannot think of a nicer prospect for 2021 than the sight of Mark Zuckerberg and Sundar Pichai in the dock – with Nick Clegg in attendance, taking notes. Happy new year!

What I’ve been reading

Who knew?
What We Want Doesn’t Always Make Us Happy is a great Bloomberg column by Noah Smith.

Far out
Intriguing piece on how investors are using real-time satellite images to predict retailers’ sales (Stock Picks From Space), by Frank Partnoy on the Atlantic website.

An American dream
Lovely meditation on Nora Ephron’s New York, by Carrie Courogen on the Bright Wall/Dark Room website.


    If you think Biden's administration would rein in big tech, think again | John Naughton

Before the US presidential election I wondered aloud if Mark Zuckerberg had concluded that the re-election of Trump might be better for Facebook than a Biden victory. There were several reasons for thinking this. One was the strange way Zuckerberg appeared to be sucking up to Trump: at least one private dinner in the White House; the way he jumped on to Fox News when Twitter first placed a warning on a Trump tweet to say that Facebook would not be doing stuff like that; and the majority report of the House subcommittee on tech monopolies, in which it was clear that the Democrats had it in for the companies.

But the most significant piece of evidence for the belief that a Biden administration would finally tackle the tech giants, and Facebook in particular, came in the long interview Biden gave last January to the New York Times, in which he was highly critical of the company.

“I’ve never been a big Zuckerberg fan,” Biden said. “I think he’s a real problem … I’ve been in the view that not only should we be worrying about the concentration of power, we should be worried about the lack of privacy and them being exempt, which you’re not exempt. [The New York Times] can’t write something you know to be false and be exempt from being sued. But he can. The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms.”

As readers of this column know only too well, section 230 of the 1996 US Telecommunications Act is the clause that exempts tech platforms from legal liability for anything that users post on their platforms. It’s the nearest thing social media has to a kill switch. Pull it and their business models evaporate. Trump had been threatening to pull it before the election, but he lacked the attention span to be able to do anything about it. Biden, on the other hand, had already talked about it in January and would have people around him who knew what they were doing.
So maybe we were going to get some real progress in getting tech giants under control. And then he gets elected and what do we find? Biden’s transition team is packed with tech industry insiders. Tom Sullivan, from Amazon, is earmarked for the Department of State. Mark Schwartz, also from Amazon, is heading for the Office of Management and Budget, as are Divya Kumaraiah from Airbnb and Brandon Belford from Lyft, the ride-hailing company. The US Treasury gets Nicole Isaac from LinkedIn, Microsoft’s department of spam, and Will Fields, who was Sidewalk Labs’ senior development associate. (Sidewalk Labs was the organiser of Google’s attempt – eventually cancelled – to turn Toronto’s waterfront into a data-geyser for surveillance capitalism.) The Environmental Protection Agency, a body that Trump looted and sidelined, gets Ann Dunkin, who is Dell’s chief technology officer. And so on.

Well, I thought, perusing this sordid list, at least there’s nobody from Facebook on it. How innocent can you be? Politico reveals that the joint chair of Biden’s transition team, Jeff Zients, is a former Facebook board member. Another former board member is an adviser. And two others, one who was a Facebook director and another who was a company lobbyist, have, according to Politico, “taken leadership roles”. And then, to cap it all, it turns out that Biden himself has a friendly relationship with a guy called Nick Clegg, who was once a serious politician and now doubles as Mark Zuckerberg’s bagman and representative on Earth.

Truly, you couldn’t make this up. And just to add a touch of satire to it, the woman who is now a heartbeat away from the presidency, Kamala Harris, has a career-long record of cosying up to Silicon Valley. She participated, for example, in the marketing campaign for Lean In, Sheryl Sandberg’s anthem of capitalist feminism, even though at the time Harris was California’s law enforcement official most responsible for overseeing Facebook.
As the state’s attorney general, she took a semi-comatose view of the way the big tech companies were allowed to gobble up potential rivals and bulldoze their way into new industries. Facebook’s controversial acquisitions of WhatsApp and Instagram, perhaps the most obvious anti-competitive mergers in the short history of the tech industry, happened on her watch and triggered no regulatory reflex. If Silicon Valley could be said to have a darling, then Ms Harris is it. And all those campaign donations from tech companies and moguls may turn out to have been a shrewd investment after all.

Given these sobering circumstances, how should we calculate the odds of a Biden administration taking on the power of the tech giants? The answer: slightly better than those of a snowball staying cool in hell. But only slightly.

What I’ve been reading

Is 2020 just a taster?
Graeme Wood has written a riveting essay, titled The Next Decade Could Be Even Worse, on the work of Peter Turchin, a quantitative historian who believes he has discovered iron laws that predict the rise and fall of societies.

Birth of an iNation
What if we viewed tech giants as countries? A thoughtful essay in Tortoise Media considers Apple as a one-party state as secretive as China. But more liberal. Phew!

Is less Moore?
I enjoyed a lovely post by Venkatesh Rao on the Ribbonfarm blog, about the mindset induced by living in a world governed by Moore’s Law.