More stories

  • AI is coming to help national security – but could bring major risks, official report warns

    AI could have profound implications for national security – including posing a host of risks, a new government-commissioned report warns.

    Artificial intelligence is a valuable tool to help senior officials in government and intelligence make decisions, it says. But it could also lead to inaccuracies, confusion and other dangers.

    Senior officials must be trained to spot those problems, and there is a critical need for any AI systems to be carefully watched and continuously monitored to ensure they do not lead to more bias and errors, it warns.

    Problems may arise, for instance, because some officials believe that AI is far more capable and certain than it actually is. In fact, artificial intelligence often works on probabilities – and can be wildly wrong.

    Choosing not to use AI comes with its own risks, including missing patterns across data that could be central to keeping people safe, the report says. But using it also brings the risk of more bias and uncertainty. “There is a critical need for careful design, continuous monitoring, and regular adjustment of AI systems to mitigate the risk of amplifying human biases and errors in intelligence assessment,” the report says.

    Those are the conclusions of the new report from the Alan Turing Institute, the UK’s national research organisation for AI. It was commissioned by British intelligence bodies: the Joint Intelligence Organisation (JIO) and Government Communications Headquarters (GCHQ).

    The report did not give any information on how much AI is currently used by intelligence agencies, or how mature that technology is. But it urged that work to counteract the potentially major dangers should begin immediately, to ensure that any future introduction of AI is done safely.

    The government said that it would consider the report’s recommendations and that it was already working to combat the potential dangers the technology could bring.

    “We are already taking decisive action to ensure we harness AI safely and effectively, including hosting the inaugural AI Safety Summit and the recent signing of our AI Compact at the Summit for Democracy in South Korea,” said Oliver Dowden, the deputy prime minister.

    “We will carefully consider the findings of this report to inform national security decision makers to make the best use of AI in their work protecting the country.”

    The report was written by the Centre for Emerging Technology and Security (CETaS), which is based within the Alan Turing Institute. Officials there noted the importance of decision makers understanding the nature of information that has been informed by artificial intelligence.

    “Our research has found that AI is a critical tool for the intelligence analysis and assessment community. But it also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights,” said Alexander Babuta, director of The Alan Turing Institute’s Centre for Emerging Technology and Security.

    “As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research, to maximise the many opportunities that AI offers to help keep the country safe.”

    GCHQ, which jointly commissioned the report, said that it saw great potential in AI – but that it was also important to work on safe uses of the technology.

    “AI is not new to GCHQ or the intelligence assessment community, but the accelerating pace of change is,” said Anne Keast-Butler, director of GCHQ.

    “In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.”

  • New ‘magical thinking’ law puts everyone’s privacy at risk, warns Signal president

    Upcoming UK internet legislation is based on “magical thinking” that puts everyone’s privacy at risk, the head of secure messaging app Signal has warned. Meredith Whittaker, Signal’s president, […]

  • UK banning TikTok on official devices

    The UK government will ban TikTok on official devices. A review by the National Cyber Security Centre will advise that the Chinese-owned app should be barred from government […]

  • UK considering fully banning TikTok, minister says

    TikTok could be fully banned in the UK, the security minister has suggested. The app has faced a range of bans in countries across the world, including the […]

  • New UK rules could force people to provide ID before using Reddit or Google in attempt to stop children viewing pornography, campaigners warn

    New rules in the UK could force people to provide ID before they use Google or Reddit, campaigners have warned.

    The regulation attempts to restrict pornographic websites so that they cannot be viewed by children, by asking people to provide age verification before they can visit adult sites.

    But new changes to the rules attempt to take on websites that show pornographic content as just part of their offering, including social networks and search engines. That could mean that platforms which host some adult content – potentially including Google, Twitter, Reddit and other major platforms – are covered by the rules, and could be forced to check users’ ages before they are able to use those sites.

    While the precise way those checks will happen has still not been revealed, suggestions have included requiring people to provide credit card details or other personally identifying information.

    That is the latest warning from the Open Rights Group, which has been among a range of privacy activists and other campaigners fighting the new regulations.

    “There is no indication that this proposal will protect people from tracking and profiling porn viewing. We have to assume the same basic mistakes about privacy and security may be about to be made again,” said Jim Killock, executive director of the Open Rights Group.

    “The proposal could force people to age verify before using Google search or reading Reddit. This appears to be a huge boon to age verification companies, for little practical benefit for child safety, and much harm to people’s privacy.”

    The rules, sometimes referred to as “porn blocks”, are part of the Online Safety Bill. Such age verification schemes have been proposed for years – but have been repeatedly delayed and changed as regulators attempt to find practical ways to put them in place.