Bronwyn Howell asks: where artificial intelligence is as capricious as humans, how do you make rules that govern its risk?

The 2023 US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the 2024 European Union Artificial Intelligence Act are both based on the Precautionary Principle (PP): AI application developers must actively manage the risks of harm their applications pose to end users and society.

Risk management lies at the core of both sets of governance arrangements. The EU AI Act classifies AI applications according to the level of risk they are anticipated to pose, then specifies the processes they must complete before they may be marketed to or used by EU citizens. Applications assessed as “high risk” must undergo rigorous testing before permission to market is granted, and those responsible for them must maintain extensive post-market monitoring, reporting and operational controls. Developers of “low risk” applications face a less rigorous disclosure process. If unexpected harm arises, operators of high-risk AI systems will be held accountable under strict liability rules; others will face fault-based liability, with fault presumed unless the operator can prove it has met its duty of care. The NIST guidelines for AI risk management, applied voluntarily in the US, are based on the ISO 31000 risk management standard, which requires risks to be identified, analysed, evaluated, treated, monitored and communicated.

In the early days of AI technology, most applications used algorithms processing “big data”. These algorithms employed symbolic AI – the application of logical, reproducible rules and theorems in a scientific manner. They sought a precise answer to a problem, or an accurate classification of objects, where an answer was clearly either correct or not. They were reliably predictable: the same response was expected each time the same inputs were provided. Specific risks could be identified and managed, and the PP (see box) could be applied with reasonable confidence. More research (e.g. refining the algorithms) produced more accurate responses (e.g. forecasts with narrower error ranges), allowing providers to be more confident in using the algorithms for highly targeted purposes.

The Precautionary Principle
The Precautionary Principle (PP) has a long history of use in managing the risks associated with technological innovation where explicit scientific knowledge is lacking. It has found favour in a number of policy and regulatory areas, notably product and health safety and environmental risk management.

In the face of scientific uncertainty about the outcomes of deploying a new technology, and especially where there is a threat of serious or irreversible damage, it is “better to be safe than sorry”. This justifies strict regulatory controls on the release of the technology. Examples include the strict processes for developing and deploying pharmaceutical treatments. In the USA, the FDA requires extensive testing of new drugs, both in the laboratory and in controlled, supervised trials with human subjects, before market approval is given. Once a product is deployed, continued surveillance is required, given that not all possible consequences can be known or anticipated when it is released on the market. Moreover, the burden of proving that the intervention meets acceptable safety standards lies with the developer – as, potentially, does liability for unexpected harmful consequences (e.g. from failing to identify and test for the consequence pre-market, or failing to notify authorities and take remedial action as soon as it is identified post-release).

So far, PP-based regulation has dealt comparatively well with new technologies where the scientific uncertainty concerns a tightly defined and measurable question, and where the potential harms are easily identified, measured and verified in a specific population. For new drugs, it is usually possible to specify clinical and toxicological observations of harm and safety to facilitate responsible use. The populations likely to be harmed can be defined (e.g. the subjects taking a drug, or the area over which a chemical is applied), as can the components of toys unsuited to children under a specific age. These definitions may be very broad initially, when information is sparse. Still, as subsequent scientific investigation improves understanding (reduces uncertainty), they can be refined with greater precision: knowledge of the margin between a “safe” and a “toxic” dose may sharpen from an order of magnitude of g/ml to mg/ml, the susceptible population may be narrowed as more is learned about responses, or the toy design may be modified to eliminate the harmful components.

However, recent AI developments seek to replicate humans’ intuitive responses to stimuli, giving rise to complex, dynamic systems such as generative pre-trained transformers (GPTs). These algorithms are trained on massive amounts of data to produce outputs based on sophisticated probabilistic recombinations of human responses; Large Language Models (LLMs) such as ChatGPT, Claude and Llama are examples. Rather than providing precise, reproducible outputs, these AIs are instead prone to human-like idiosyncrasies. Indeed, the merit of LLMs is arguably their potentially near-infinite creativity: an LLM producing two identical responses to a prompt has, in a sense, “failed”. Even if the responses contain the same facts, the organisation of the data and the language in the two responses should differ. And whereas the precise rules used to create the output of a symbolic AI can be specified and well understood, no human or group of humans – nor even the GPT itself – fully understands how a GPT formulates its outputs.

Is it appropriate, or even possible, to use a regulatory risk management framework derived from the PP to regulate GPT AIs? When applications are narrowly targeted, the vectors along which harm could arise are well known, and the applications are amenable to further scientific inquiry to refine the conditions under which they can be deployed with limited harm occurring. However, if it is not possible to know how a GPT formulates its output, or to predict in advance what that output will be, then it will not be possible to specify with confidence the “safe” boundaries within which it can operate.

Furthermore, GPTs are also General Purpose Technologies: a wide variety of end users can deploy them for purposes not even contemplated by their developers. Who, then, should be liable? It may never be possible to determine whether a harmful outcome was due to a lack of due care in GPT development, whether an end user’s use (or misuse or abuse) caused the harm, or whether the harm arose from some interaction between the two. And because GPTs are not logic-based, there may be no way of scientifically investigating an individual case, or of systematically applying lessons from it to better manage the application’s current use or future development.

Moreover, for symbolic AI applications, only a limited number of risk dimensions and susceptible populations need to be analysed and monitored after the technology has been marketed. For GPTs the task is vastly larger. For example, the NIST guidelines identify 72 action subcategories for monitoring and managing symbolic AI applications; in the 2024 guidelines for GPTs this grew to 467. The administration costs of GPT risk management systems are therefore much larger than those of other PP applications. And compliance will not necessarily reduce the risk of harm occurring: harm could arise from a use case or user segment not anticipated or monitored by the application developer.

The context into which GPT AIs are being released is materially different from the contexts in which PP-inspired regulation has successfully balanced the safety of users and society against the economic benefits of deployment and use. These technologies are “general purpose”: the wide variety of use cases to which they are put is far removed from the narrow, focused use cases of drug development and toy safety management. This suggests that a single, one-size-fits-all set of rules and processes is not the appropriate form of regulation. While managing risks is still important, different rules are needed for different contexts – as we already observe with the regulations governing drug development and toy safety.

It follows that industry- and use-case-specific rules, rather than a separate, over-arching set of AI regulations, will best serve the needs of society and user safety. The rules required to govern AIs reading and diagnosing medical images – and the question of who is best placed to monitor and enforce compliance with them – must necessarily differ from those applying to the use of LLMs in the design industry, which will in turn differ from those governing their use in scientific research and education.

