You cannot “meaningfully interact” with active code. Coding is a highly specialized skill requiring expert knowledge. When code is embedded in other systems, few people, if any, are in a position to review it in real time, particularly people who also hold the relevant security clearance. The skill sets needed to interrogate these systems are required both at the point of build and regularly throughout their lifecycle, as they will inevitably fail or produce unintended outcomes.
People get bored when working with autonomous systems. The situations in which machines can act autonomously but still require human supervision are often the most dangerous: humans tune out and become bored or distracted – with disastrous effects. Research shows that humans cannot actively supervise machines for long periods without risk increasing, particularly where the systems are largely autonomous. Part of this risk also links to the magical thinking Weizenbaum warned about: humans often assume that systems cannot fail – and yet they do.
The complexity, speed, and scale of many autonomous, and even automatic, systems do not allow time to challenge them. The speed at which information is provided and at which time-sensitive decisions must be made will often render appropriate human intervention impossible.
If “meaningful human interaction on the loop” is remote, the risks are even greater. Delays – due to limited bandwidth, lag time, human cognitive delays, and information poverty (not having all the information you need, some of which cannot be captured by automated or autonomous systems) – amplify existing risks. Black boxes and lack of transparency prevent predictability. Even where risks appear low, the speed and scale of autonomy may expedite or expand the potential for serious harm.
The term “human” usually means a fairly narrow type of human, not representative of broader humanity or humankind. “Human” is often the most overlooked term in this phrase, and yet it does much of the heavy lifting. The humans who have access to, and an understanding of, these systems are a particularly narrow group with a particular worldview and set of values. They do not necessarily understand or represent those affected by the automated systems, and they are not always listened to in the face of commercial imperatives. Petrov had the power to override the system – would someone today be able to do the same?
Skills required to understand complex adaptive systems will be lost with increased automation. Those with the ability to understand and question the flow of an entire system, including whether it is performing ethically and in line with responsible codes of conduct, are increasingly sidelined, or their skills are lost altogether. Interdisciplinary skill sets will not be easy to replicate or replace in the long term. Complex adaptive systems can at times behave in ways that cannot be predicted, even by those with the right skill sets. The likelihood of low-probability, high-risk (“black swan”) events will increase as more automated and autonomous systems are connected and scaled.
Each time you hear about “meaningful human interaction (or control) on the loop,” challenge what this means in the context of the specific system. More useful is to ask whether automation or autonomy is in fact the right choice for the problem at hand. Is automation suitable, legally compliant, and ethical? Having experts with diverse and interdisciplinary skills involved throughout the development and life cycle of any system directed at solving a challenge will have far greater impact than any “human on the loop.” Once the system is active, merely being “on the loop” will almost certainly not give any human the capability to pause, reflect, question, and stop the trajectory of the machine, as Petrov was able to do.
Kobi Leins, University of Melbourne
Dr. Kobi Leins is visiting senior research fellow at King’s College, London; expert for Standards Australia providing technical advice to the International Organization for Standardization (ISO) on forthcoming AI standards; co-founder of IEEE’s Responsible Innovation of AI and the Life Sciences; non-resident fellow of the United Nations Institute for Disarmament Research; and advisory board member of the Carnegie Artificial Intelligence and Equality Initiative (AIEI). Leins is also the author of New War Technologies and International Law (Cambridge University Press, 2021).
Anja Kaspersen, AIEI Senior Fellow
Anja Kaspersen is a senior fellow at Carnegie Council for Ethics in International Affairs. Together with Senior Fellow Wendell Wallach, she co-directs the Carnegie Artificial Intelligence and Equality Initiative (AIEI), which seeks to understand the innumerable ways in which AI impacts equality, and in response, propose potential mechanisms to ensure the benefits of AI for all people.