Spy agencies have big hopes for AI

WHEN IT comes to artificial intelligence (AI), spy agencies have been at it longer than most. In the cold war, America’s National Security Agency (NSA) and Britain’s Government Communications Headquarters (GCHQ) explored early AI to help transcribe and translate the enormous volumes of Soviet phone-intercepts they began hoovering up in the 1960s.

Yet the technology was immature. One former European intelligence officer says his service did not use automatic transcription or translation in Afghanistan in the 2000s, relying on native speakers instead. Now the spooks are hoping to do better. The trends that have made AI attractive for business—more data, better algorithms, and more processing power to make it all hum—are giving spy agencies big ideas, too.

On February 24th GCHQ published a paper on how AI might change its work. “Machine-assisted fact-checking” could spot faked images, check disinformation against trusted sources and identify social-media bots. AI might block cyber-attacks by “analysing patterns of activity on networks and devices”, and fight organised crime by spotting suspicious chains of financial transactions.

This sort of thing is now commonplace. The Nuclear Threat Initiative, an NGO, recently showed that applying machine learning to publicly available trade data could spot previously unknown companies suspected of involvement in the illicit nuclear trade. But spy agencies are not restricted to publicly available data.

Some hope that such modest applications, combined with spy agencies’ ability to snoop on private information, could pave the way to an AI-fuelled juggernaut. “AI will revolutionise the practice of intelligence,” gushed a report published on March 1st by America’s National Security Commission on Artificial Intelligence, a high-powered study group co-chaired by Eric Schmidt, a former executive chairman of Alphabet, Google’s parent company; and Bob Work, a former deputy defence secretary.

The report does not lack ambition. It says that by 2030 America’s 17 or so spy agencies ought to have built a “federated architecture of continually learning analytic engines” that crunches everything from human intelligence to satellite imagery to foresee looming threats. The commission points approvingly to the Pentagon’s response to covid-19, which integrated dozens of data sets to identify covid hotspots and manage demand for supplies.

Yet what is possible in public health is not always so easy in national security. Western intelligence agencies must contend with laws governing how private data may be gathered and used. In its paper, GCHQ says that it will be mindful of systemic bias, such as whether voice-recognition software is more effective with some groups than others, and transparent about margins of error and uncertainty in its algorithms. American spies say, more vaguely, that they will respect “human dignity, rights, and freedoms.” These differences may need to be ironed out. A recent task force of former American spooks, in a report published by the Center for Strategic and International Studies (CSIS) in Washington, suggested that the “Five Eyes” intelligence alliance—America, Australia, Britain, Canada and New Zealand—create a shared cloud server on which to store data.

In any case, the constraints facing AI in intelligence are as much practical as ethical. Machine learning is good at spotting patterns—such as distinctive patterns of mobile phone use—but poor at predicting individual behaviour. That is especially true when data are scarce, as in counter-terrorism. Predictive-policing models can crunch data from thousands of burglaries each year. Terrorism is much rarer.

That rarity creates another problem, familiar to medics pondering mass-screening programmes for rare diseases. Any predictive model will generate false positives, in which innocent people are flagged for investigation. Careful design can drive the false-positive rate down. But because the “base rate” is lower still—there are, mercifully, very few terrorists—even a well-designed system risks sending large numbers of spies off on wild-goose chases.
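To see the arithmetic, consider a back-of-the-envelope sketch. The figures below are invented purely for illustration, not drawn from any real screening system:

```python
# Illustrative base-rate arithmetic; all numbers are hypothetical.
# Suppose a population of 10m contains 100 genuine suspects, and a
# screening model catches 99% of them while wrongly flagging only
# 0.1% of everyone else.

population = 10_000_000
true_suspects = 100          # the base rate: one in 100,000
true_positive_rate = 0.99    # share of real suspects the model flags
false_positive_rate = 0.001  # share of innocents wrongly flagged

flagged_suspects = true_suspects * true_positive_rate                    # ~99
flagged_innocents = (population - true_suspects) * false_positive_rate  # ~10,000

precision = flagged_suspects / (flagged_suspects + flagged_innocents)
print(f"People flagged: {flagged_suspects + flagged_innocents:,.0f}")  # 10,099
print(f"Share who are genuine suspects: {precision:.1%}")              # 1.0%
```

On those hypothetical numbers, roughly 99 of every 100 people flagged would be innocent: the wild-goose-chase problem in miniature.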

Even the data that do exist may not be suitable. Data from drone cameras, reconnaissance satellites and intercepted phone calls, for instance, are not currently formatted or labelled in ways that are useful for machine learning. Fixing that is a “tedious, time-consuming, and still primarily human task exacerbated by differing labelling standards across and even within agencies,” notes the CSIS report. That may not be quite what would-be spies signed up for.