AI Policy Primer (February 2024)
Issue #7: Cybersecurity threat assessment, AI and agriculture, data policy
We’re back with our monthly roundup of AI policy news, now rebranded as the AI Policy Primer. We’ll be sharing the Primer alongside other content—such as essays and book reviews—regularly in the coming weeks and months on AI Policy Perspectives.
For this month’s Primer, we have an assessment of the cyber threat posed by AI from the UK’s National Cyber Security Centre, a look at AI’s use in agriculture, and a rundown of recent discussions focusing on access to training data.
Policymakers taking action
Near-term AI cyber threat ‘evolution, not revolution’
What happened: The UK’s National Cyber Security Centre (NCSC) published its assessment of the cyber threat from AI over the next two years. NCSC’s assessment uses the UK intelligence community’s formal probabilistic language (see yardstick on p.29) to conclude that AI will “almost certainly” increase the number and impact of cyber attacks. It notes, however, that the threat comes primarily from the “evolution and enhancement” of existing techniques and approaches - and, as NCSC CEO Lindy Cameron put it, “does not transform the risk landscape in the near term.” The NCSC expects the impacts through 2025 to include:
More convincing ‘social engineering’ attacks and information gathering capabilities - think fewer typos and more compelling prose in phishing emails - which will boost less sophisticated cyber criminals. The NCSC judges this “will likely” also contribute to the global ransomware threat.
More sophisticated uses of AI in cyber attacks - such as malware development and vulnerability research - “will continue to rely on human expertise” and are therefore “highly likely to be restricted to threat actors with access to quality training data, significant expertise...and resources”. In practice, this means highly capable state actors and some established (and capable) criminal groups.
What’s interesting: The cyber risks from AI steadily attracted policymaker attention in 2023, most prominently at the UK’s AI Safety Summit, where risks to cybersecurity featured heavily alongside biosecurity concerns. But as with many other areas of potential AI risk, there is a range of views on what exactly the threat landscape looks like, how attention should be allocated across current and future risks, and how imminent truly novel risks are. This assessment from the NCSC gives its best-effort response to some of these questions. How it compares with the assessment in the forthcoming ‘State of the Science’ report, commissioned at the UK AI Safety Summit and due in May, will be one to watch. The NCSC report is framed as further evidence of momentum following the UK Summit - and follows the UK’s publication of the first global guidelines on secure AI development, endorsed by 18 countries including the US, in late 2023.
Looking ahead: Made easier by well-established security alliances, cybersecurity and AI may prove to be a bright spot for international collaboration on AI governance in 2024, including at the South Korea and France Safety Summits. Watch for new international R&D collaborations on using AI for cyber defence and more formal information sharing agreements between allies on emerging cyber risks. It is also possible that the cyber conversation focuses more on the use of open source models by malicious actors.
Sector spotlight
Agricultural AI ploughs ahead
What happened: AI is being used by farmers around the world to enable precision farming, crop monitoring, and climate-resilient agricultural practices. The technology is also being deployed to measure the health of soil - which the Ecological Society of America says holds around 75% of all carbon stored on land - by underpinning the creation of ‘digital twins’ of farmland to quantify sequestration (the long-term storage of carbon in oceans, soils, vegetation, and geologic formations). A recent estimate put the global AI-in-agriculture market size at $1.44 billion, predicting that the sector will generate an estimated revenue of around $12 billion by 2032.
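As a back-of-the-envelope check on that projection (the base year is not stated in the estimate; the sketch below assumes the $1.44 billion figure applies to 2024, giving an eight-year horizon to 2032), the implied compound annual growth rate works out to roughly 30%:

```python
# Implied compound annual growth rate (CAGR) of the AI-in-agriculture market.
# Assumption (not stated in the source): $1.44B is the 2024 base, $12B the 2032 figure.
start, end, years = 1.44e9, 12e9, 8
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints: Implied CAGR: 30.3%
```

A shorter horizon (e.g. a 2023 base year, nine years to 2032) would lower the implied rate slightly, to around 27% per year.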
What’s interesting:
According to the Food and Agriculture Organization of the United Nations (FAO), almost half of the Earth’s population lives in households that are “linked” to livelihoods dependent on agrifood systems. While only about 3% of all employment in high-income countries is typically in the agricultural sector, the figure can reach as high as 85% in some countries. However, while AI can be used to boost yields and minimise loss, it also risks consolidating power in the hands of a small number of farming groups and creating labour displacement effects that fall disproportionately on low and middle-income countries. Additionally, its success is likely to be contingent on the provision of technological infrastructure, measures to boost data accessibility, and efforts to close skill gaps.
Our protein folding system, AlphaFold, has been used in research related to crops, plants, and agriculture. For example, it has been used to study potato blight, the plant pathogen white blister rust, and the growth of rice blast fungus. Google DeepMind has a number of additional former and current projects in this space, from historical efforts to study the impact of poaching, climate abnormalities, and agriculture on animal behaviour to the GraphCast model that provides faster and more accurate global weather forecasting.
Looking ahead: AI’s use in agriculture may primarily be driven by the United States and Europe in the near term, which could mitigate its immediate impact on employment in the agricultural sectors of low and middle-income countries. Over the long term, however, a core global policy challenge will be to ensure that productivity gains are realised in a way that protects livelihoods connected to the agrifood sector.
Issue spotlight
Policy discussions focus on data
What happened:
Data has become one of the focal points in AI policy discussions. Developers consider the availability of high-quality data a prerequisite for increases in capability, while policymakers are increasingly looking to regulate specific types of data used to train large models (e.g. copyrighted data or personal data). The recently agreed EU AI Act requires the developers of “general-purpose AI systems” to provide high-level disclosures of the copyrighted content used in their training.
Meanwhile, a number of high-profile lawsuits have emerged in which creators of certain types of content (such as news publishers) argue for compensation for the use of their data to train large models. They suggest that their proprietary data is particularly important to the usability and performance of certain AI systems, or is particularly sought after by those systems’ users.
What’s interesting:
The extent to which particular data sources elicit particular capabilities is unclear, however. Recent research finds that LLMs trained on “easy” data (for example, a dataset of grade-school subject questions) can perform well on “hard” tasks (for example, graduate-level STEM questions). The authors demonstrate that, surprisingly, models can learn to solve complex problems by training on easily obtained, simpler data.
The paper suggests that models may not actually need large datasets of specialised - and often copyrighted - content to reach high performance. Given that ‘hard’ datasets tend to be restricted and expensive, this dynamic has implications for the ability of different actors to train capable models. It may also diminish the importance of providers of highly specialised information - a question recently brought into focus by prompting techniques that enable general models to surpass those trained on proprietary data sources.
Looking ahead: The debate will continue through legislative action and in the courts, with parties taking hard stances on whether interventions are best focused at the level of inputs (e.g. hard restrictions on training models on certain types of data) or outputs (e.g. obligations to apply certain types of filters).