As is so often the case, Donald Trump gets to the heart of the problem. On 6 January, he was the president of the United States: probably the most powerful man in the world. He should be free to speak his mind, and voters should be free to listen. But he was also a habitual liar who, by the end of his term, had edged into repudiating the very democracy that had elevated him.
And then came his inflammatory words on that day, uttered even as rioters were breaking their way into the heart of US democracy. His words had a veneer of restraint – “We have to have peace, so go home.” But his statements were laced with lies, along with praise for the mob who terrorised lawmakers as they sought to certify Joe Biden as Trump’s successor – “We love you, you’re very special … great patriots … remember this day for ever.”
At 5.41pm and 6.15pm that day, Facebook removed two posts from Trump. The following day the company banned Trump from its platform indefinitely. Around the same time, Twitter also moved to ban the president – permanently.
So there was the problem that Donald Trump embodied – in a country whose commitment to free speech is baked into its core. The president might be a bitterly polarising figure, but surely he has a right to be heard – and voters a right to make up their own minds?
Facebook’s decision to the contrary would spark passionate debate within the United States. But it had a wider resonance. For how much longer would giant social media platforms act as an amplification system for any number of despots around the world? Would they, too, be banned?
The classic defence of free expression is that good speech defeats bad speech. Political speech – in some views – should be the most protected speech. It is vital we know who our leaders are. We have a right – surely? – to know if they are crooks, liars or demagogues.
On 7 January Facebook decided: no longer. And now the Facebook oversight board (OSB), of which I am a member, has published its own verdict on the decision: Facebook was both right and wrong. Right to remove his 6 January words and right, the following day, to ban the president from the platform. But wrong to ban him “indefinitely”.
The key word is “indefinitely” – if only because Facebook’s own policies do not appear to permit it. The OSB judgment doesn’t mince its words: “In applying a vague, standardless penalty and then referring this case to the board to resolve, Facebook seeks to avoid its responsibilities. The board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.” Ball squarely back in Facebook’s court.
What Facebook has to do now – in our judgment, which the company is bound to implement – is to re-examine the arbitrary penalty it imposed on 7 January. It should take account of the gravity of the violation and the prospect of future harm.
The case is the most prominent the OSB has decided since it was established as an independent entity and will inevitably focus more attention on its work. Why is such a body thought necessary?
Let’s assume we can agree that it’s a bad thing for one person, Mark Zuckerberg, to be in charge of the rules of speech for 2 billion or more people. He is clearly a wonderfully talented engineer – but nothing in his background suggests he is equipped to think deeply about the complexities involved in free expression.
Most people who have studied the behaviour of governments towards publishers and newspapers over the past 300 years would probably also agree that politicians are not the best people to be trusted with individual decisions about who gets to say what.
Into the void between those two polarities has stepped the OSB. At the moment we’re 19 individuals with backgrounds in journalism, law, academia and human rights: by the end of 2021 we hope to be nearer 40.
Are we completely independent from Facebook? It certainly feels that way. It’s true that Facebook was involved in selecting the first 20 members, but once the board reaches its full complement, we will decide who our future colleagues will be. Beyond a few early meetings to understand Facebook’s processes around moderation and similar matters, we have had nothing to do with the company.
We have our own board of distinguished trustees – again, free of any influence from Facebook. From what I’ve seen of my colleagues so far, they’re an odd bunch to have picked if you were in search of a quiet life.
The Trump decision was reached through the processes we’ve devised ourselves. A panel of five – with a good spread of regional backgrounds – did the initial heavy lifting, including sifting through more than 9,000 responses from the public.
The wider board fed in its own views. We looked at Facebook’s own values – what they call voice, safety and dignity – as well as its content policies and community standards. We also applied an international human rights lens in trying to balance freedom of expression against possible harms.
In the Trump case we looked at the UN Guiding Principles on Business and Human Rights (UNGPs), which establish a voluntary framework for the human rights responsibilities of private businesses. We also considered the right to freedom of expression set out in article 19 of the International Covenant on Civil and Political Rights (ICCPR), and the limits that article 20 places on incitement – as well as the qualifying articles to do with the rights to life, security of person, non-discrimination, participation in public affairs and so on.
We also considered the 2013 Rabat Plan of Action, which sets out a threshold test for identifying when speech amounts to incitement to hatred. We took into account a submission sent on behalf of Trump himself, and we sent Facebook 46 questions. They answered 37 fully and two partially.
And then we debated and argued – virtually, verbally and in writing. A number of drafts were circulated, with most board members pitching in with tweaks, challenges, corrections and disagreements. Gradually, a consensus developed – resulting in a closely argued 38-page decision that openly reflects the majority and minority opinions.
In addition to our ruling on the original removals and the “indefinite” ban, we’ve sent Facebook a number of policy advisory statements. One of these concentrates on the question of how social media platforms should deal with “influential users” (a more useful concept than “political leaders”).
Speed is clearly of the essence where potentially harmful speech is involved. While it’s important to protect people’s rights to hear political speech, we advised that “if the head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a determinate period sufficient to protect against imminent harm”.
As in previous judgments, we are critical of a lack of clarity in some of Facebook’s own rules, together with insufficient transparency about how they’re enforced. We would like to see Facebook carry out a comprehensive review of its potential contribution to the narrative around electoral fraud and to the exacerbated tensions that culminated in the violence of 6 January.
And then this: “This should be an open reflection on the design and policy choices that Facebook has made that may enable its platform to be abused.” Which many people will read as a not-so-coded reference to what is shorthanded as The Algorithm.
Social media is still in its infancy. Among the many thorny issues we periodically discuss as a board is this: what is the thing we’re regulating? The existing language – “platform”, “publisher”, “public square” – doesn’t adequately describe these new entities.
Most of the suggested forms of more interventionist regulation stub their toes on the sheer novelty of this infant space for the unprecedented mass exchange of views.
The OSB is also taking its first steps. The Trump judgment cannot possibly satisfy everyone. But this 38-page text is, I hope, a serious contribution to thinking about how to handle free speech in an age of information chaos.