Kobi Leins and Lesley Seebeck highlight the need for adequate AI governance that embeds not only economic but also political and societal values.
When we talk about artificial intelligence (AI) in this article, we are talking about software and hardware; data, both structured and unstructured; machine learning, both supervised and unsupervised; the sensors that provide input and the actuators that effect output; as well as all the data (always historical) that fuels the computing power, whatever form it takes. Artificial intelligence includes data, algorithms, deep learning, and hardware and software. Together, these pieces create a form of power.
Every technology has a history and a context. A prominent example from Winner’s book, The Whale and the Reactor, involves traffic overpasses in and around New York designed by Robert Moses. Many of the overpasses were deliberately built too low for public buses to pass beneath them, hence excluding low-income people. This design disproportionately impacted racial minorities, who depended entirely on public transportation. Winner argues that politics is built into everything we make, and that moral questions asked throughout history – including by Plato and Hannah Arendt – are questions relevant to technology: our experience of being free or unfree, the social arrangements that foster either equality or inequality, the kinds of institutions that hold and use power and authority.
The capabilities of AI, and the way that it is being used by corporations and by governments, raise these questions with renewed vigour. Current systems using facial recognition, or policing tools that reinforce prejudice, are examples of technology with politics built in. The difference is that, in digital systems, the politics may not be as obvious as a physical overpass. Nevertheless, they may be similarly challenging to rectify after they have been built, and may equally create outcomes that generate power and control over certain constituents. They also operate at speeds and scales that have enormous impacts in short periods of time.
But AI, we would argue, also represents something more than the possibility to bake in certain values. Emerging from a very particular historical moment, with a very narrow band of players, the technology itself embeds a certain set of political and economic values. The sheer size of the companies producing the software and hardware, and the need to collect and manage training and operational data sets, often make it difficult to compete. While increasing efforts to digitalise economies and societies are perceived as a fast track to success in developing economies, there remains concern over the power structures embedded in the systems powering that change. For example, much of the data collection resides with US companies such as Facebook or Google, or Chinese companies such as Alibaba or Tencent. Algorithms are typically ‘black-boxed’: their training regimens and biases opaque to users. Hardware is suspect: it may have back doors or access points. Singapore, Taiwan, and even Estonia are held up as successes to be followed. China, too, is often viewed as an exemplar, though as much for government control over unruly publics as for economic prosperity.
The reality is more complex. First, digitalisation is truly disruptive. Governments and companies seeking to harness digital technologies have a tiger by the tail. It is likely, if not inevitable, that digitalisation will distort, break or, optimistically, transform political and business models, often in unforeseen or even unpredictable ways. The ability of other states to intervene in or disrupt social media, for example, remains a real and current risk. Digital economies often replace other economic behaviour patterns – for better or for worse.
Second, with digitalisation comes cyber – the dark side of online engagement. The use of digital infrastructures increases risks and vulnerabilities. Some are built in. Some are simply the product of the fact that ‘everything is broken’ – a feature and a bug – a recognition that the Internet was not built with security in mind. Other risks include, but are not limited to, the hardware that supports the infrastructure, such as the often-overlooked undersea cables.
Third, Southeast Asian countries are essentially ‘customers’ rather than providers of technology. Even Singapore, which hosts many advanced companies, is more of a ‘customer’. That means taking on the built-in assumptions and cultural norms of others, including the leakage of data. Those assumptions – a world view and set of norms about privacy (or the lack thereof), human rights and other values – will be built into the technological systems acquired, rather than reflecting the geographic and human values of the places where they will be used.
Fourth, the relationship between governments and the corporations providing the technology will be key. As corporations are often larger in scope and economic power than many governments, the power play between the two has become increasingly complex. At times their interests may intersect, but at other times they may deeply diverge. At times, governments will outsource to platforms and then be deeply challenged because platforms, through speed and capability, usurp government roles – a challenge particularly for democracies – and pressure the social contract. Being aware of how this interplay may work, and ensuring that states’ individual interests, as well as regional interests, are protected, will remain key.
Last, Southeast Asia spreads across a technological fracture between the West and China, and to some extent, India. Talking about a single future of the Internet makes little sense in Asia more broadly, as it hosts at least two spheres of Internet and consequently different visions of the future. Decisions here can mean much more than consumer choice of platform, and may be limited by externally generated factors. That is because technologies exert both hard power, including through cyber, and soft power, such as through the culture increasingly embedded in technology. Choices around technology adoption, whether hardware or software, also imply choices around the value system underpinning the technology. That is inherently problematic in societies that potentially span both value systems.
Given that in many societies even connectivity remains an issue, technology choices by the more powerful, whether companies or governments – or even ‘influencers’ on social media – are likely to drive even greater inequalities in society. Connectivity is a necessary condition for democracy but is not of itself a sufficient condition – and that is even more fraught in a world in which connectivity itself may be captured through the technologies of others.
But there are many positives to offset the challenges. Europe and other regions are seeking new and interesting allegiances as allies in the digital world. This is an opportunity for ASEAN nations to find their own voice, and to position themselves, not just economically but also politically, in the AI debates. As with the overpass example above, assumptions about AI – and even the assumptions embedded in automation – are too easily overlooked because they are invisible, buried deep in the tech stack. New relationships and alliances can form new political imaginaries and possibilities, and be more robust than monocultures of digital acceptance.
We can expect inequalities to be solidified, if not exacerbated. That will generate political tensions, leading governments – often unaware of the implications of the use of technology – to succumb to the temptation of using technologies for control, and so the vicious cycle continues. Regulation can provide part of the answer: currently, bodies such as the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO) and others are scrambling to provide adequate governance, and regulation will almost certainly follow. But regulation can only go so far. Ultimately these are political, societal and economic decisions, reflecting civic values that lie at the heart of individual states and regions and their future.
Dr. Kobi Leins is Senior Research Fellow in Digital Ethics at the School of Engineering and IT and the Centre for AI and Digital Ethics (CAIDE), University of Melbourne, and Non-Resident Fellow at the United Nations Institute for Disarmament Research; Dr. Lesley Seebeck is Honorary Professor at the College of Engineering and Computer Science, Australian National University.