Deepfakes are here and can be dangerous, but ignore the alarmists – they won’t harm our elections | Ciaran Martin

Sixteen days before the Brexit referendum, and only hours before the deadline to apply to cast a ballot, the IT system for voter registrations collapsed. The remain and leave campaigns were forced to agree a 48-hour registration extension. Around the same time, evidence was beginning to emerge of a major Russian “hack-and-leak” operation targeting the US presidential election. Inevitably, questions arose as to whether the Russians had successfully disrupted the Brexit vote.

The truth was more embarrassingly simple. A comprehensive technical investigation, supported by the National Cyber Security Centre – which I headed at the time – set out in detail what had happened. A TV debate on Brexit had generated unexpected interest. Applications spiked to double those projected. The website couldn’t cope and crashed. There was no sign of any hostile activity.

But this conclusive evidence did not stop a parliamentary committee, a year later, saying that it did “not rule out the possibility that there was foreign interference” in the incident. No evidence was provided for this remarkable assertion. What actually happened was a serious failure of state infrastructure, but it was not a hostile act.

This story matters because it has become too easy – even fashionable – to cast the integrity of elections into doubt. “Russia caused Brexit” is nothing more than a trope that provides easy comfort to the losing side. There was, and is, no evidence of any successful cyber operations or other digital interference in the UK’s 2016 vote.

But Brexit is far from the only example of such electoral alarmism. In its famous report on Russia in 2020, the Intelligence and Security Committee correctly said that the first detected attempt by Russia to interfere in British politics occurred in the context of the Scottish referendum campaign in 2014.

However, the committee did not add that the quality of such efforts was risible and their impact zero. Russia has been waging such campaigns against the UK and other western democracies for years. Thankfully, though, it hasn’t been very good at it. At least so far.

Over the course of the past decade, there have been only two instances where digital interference can credibly be seen to have severely affected a democratic election anywhere in the world. The US in 2016 is undoubtedly one. The other is Slovakia last year, when an audio deepfake seemed to have an impact on the polls late on.

The incident in Slovakia fuelled part of a new wave of hysteria about electoral integrity. Now the panic is all about deepfakes. But we risk making exactly the same mistake with deepfakes as we did with cyber-attacks on elections: confusing activity and intent with impact, and what might be technically possible with what is realistically achievable.

So far, it has proved remarkably hard to fool huge swathes of voters with deepfakes. Many of them, including much of China’s information operations, are poor in quality. Even some of the better ones – like a recent Russian fake of Ukrainian TV purporting to show Kyiv admitting it was behind the Moscow terror attacks – look impressive, but are so wholly implausible in substance they are not believed by anyone. Moreover, a co-ordinated response by a country to a deepfake can blunt its impact: think of the impressive British response to the attempt to smear Sadiq Khan last November, when the government security minister lined up behind the Labour mayor of London in exhorting the British media and public to pay no attention to a deepfake audio being circulated.

This was in marked contrast to events in Slovakia, where gaps in Meta’s removal policy, and the country’s electoral reporting restrictions, made it much harder to circulate the message that the controversial audio was fake. If a deepfake does cut through in next month’s British election, what matters is how swiftly and comprehensively it is debunked.

None of this is grounds for complacency about the reality that hostile states are trying to interfere in British politics. They are. And with fast-developing tech and techniques, the threat picture can change. “Micro” operations, such as a localised attempt to use AI to persuade voters in New Hampshire to stay at home during the primaries, are one such area of concern. In the course of the UK campaign, one of my main worries would be about targeted local disinformation and deepfake campaigns in individual contests. It is important that the government focuses resources and capabilities on blunting these operations.

But saying that hostile states are succeeding in interfering in our elections, or that they are likely to, without providing any tangible evidence is not a neutral act. In fact, it’s really dangerous. If enough supposedly credible voices loudly cast aspersions on the integrity of elections, at least some voters will start to believe them. And if that happens, we will have done the adversaries’ job for them.

There is a final reason why we should be cautious about the “something-must-be-done” tendency where the risk of electoral interference is concerned. State intervention in these matters is not some cost-free, blindingly obvious solution that the government is too complacent to use. If false information is so great a problem that it requires government action, that requires, in effect, creating an arbiter of truth. To which arm of the state would we wish to assign this task?

  • Ciaran Martin is a professor at the Blavatnik School of Government at the University of Oxford, and a former chief executive of the National Cyber Security Centre

