South Africa must find its voice in global AI governance debates
As lawmakers across the world scramble to regulate advanced artificial intelligence systems, South Africa could play a vital role in shaping the course of future systems by engaging directly with the private labs developing them, in addition to pursuing the usual regulatory routes for AI governance.
A version of this article by Tharin Pillay appeared in News24.com.
Advanced artificial intelligence systems are among the most important inventions of our time. As a general-purpose technology akin to electricity, they are poised to transform every sector of the global economy, from healthcare to education to agriculture. In particular, the release of ChatGPT just over a year ago has catalysed policymakers across the world, who recognise the urgent need to regulate this technology so that its benefits can be maximised while its risks are constrained.
But understanding what form regulations should take – in South Africa and elsewhere – is complicated, for three reasons: the nature of modern AI, the sociopolitical context in which regulations are being made, and the nature of the risks involved. To get a sense of the best path forward for South Africa, it will be useful to briefly examine each of these reasons in turn.
The nature of modern AI
AI is an evolving concept. In the 1990s, the term referred to systems like Deep Blue, the IBM computer that beat world chess champion Garry Kasparov in 1997, while today it refers to a range of systems powered by machine learning, from the content curation systems that decide the order in which information is encountered online to large generative models like ChatGPT. These latter systems – often called foundation or frontier models, because of their broad applicability across multiple tasks and their cutting-edge capabilities – are the focus of most present regulatory efforts.
Foundation models comprise three main parts: the computing power required to run them (imagine warehouses full of advanced computer chips), the oceans of data they are trained on, and the algorithms that govern how they process this data and make decisions. At present, scaling up the amounts of computing power and data used to train these systems has reliably led to improvements in their capabilities – often in unpredictable ways. For example, many technical experts were surprised by the performance of GPT-4, the model that currently underlies the paid version of ChatGPT.
The fact that even the developers of these models cannot say precisely what future models will be capable of (beyond that, if current trends continue, they will indeed become more capable), coupled with the fact that huge sums are being invested in training ever-larger systems, leaves policymakers working under considerable uncertainty.
Adding to this challenge is the fact that these systems are fundamentally opaque – their creators are currently unable to explain how, for a given input, a system arrives at a given output. This complicates demands that these systems be “explainable”, a requirement that often features in emerging governance frameworks.
Sociopolitical context
Another challenge is that AI technologies are being developed on an uneven playing field. Leading foundation models are currently being developed by a handful of private-sector companies (such as OpenAI, Anthropic, and Google DeepMind) rooted in the United States, with Chinese labs in second place. This gives disproportionate influence to the countries where these labs reside, as the labs are bound first by those countries’ national regulations. And since these models cost hundreds of millions of dollars to train, there are inherent constraints on who can spearhead their development.
Regulations are being built atop the pre-existing political cultures of different regions – hence the European Union’s comprehensive AI Act (still being negotiated), the US’ sector-specific and piecemeal approach, and China’s interim regulations which state that their models must “adhere to the core values of socialism” and “not generate incitement to subvert state power”.
The fact that foundation models have clear military and surveillance applications – making them powerful tools for authoritarian states – also threatens to undermine the international collaboration necessary to govern this transnational technology.
Nature of the risks
The third difficulty is that, because of their generality, foundation models present a wide range of risks and challenges. These include the tendency of these systems to perpetuate latent biases; their ability to supercharge mis- and disinformation; their disruption of copyright regimes; their impacts on labour markets; and the potential of future systems to cause catastrophic harm in the hands of malicious actors, who could use them, for example, to perpetrate advanced cyberattacks or to design and distribute bespoke pathogens.
Different actors in this space tend to emphasise different risks – and often disagree about which is most important – but there is an emerging consensus on both the principles that ought to guide global governance and the steps necessary to operationalise them: for instance, the need for developers to share detailed information about their models with regulators before and after deployment, and the need for third parties to audit these models to assess their safety, their security, and their human rights impacts.
South Africa’s role in global AI governance
It’s important to consider how to approach these problems locally. Our elections authority, for example, could consider whether to issue rules for political parties seeking to use AI systems in electioneering; and our data protection regulator could investigate whether there are sufficient privacy safeguards in the design and use of AI technologies within our borders.
But as the development of this world-altering technology is being driven by a handful of private companies based primarily in the United States, the central question for South Africa ought to be: how can we actively participate in the development and global governance of existing and future systems? It’s not ideal that private companies have acquired such outsized geopolitical influence, but this is the reality we find ourselves in.
At present, South Africa’s stance towards governing AI skews more toward broad rhetoric than thoughtful, concrete policy. Meanwhile, although leading labs like OpenAI have expressed the need for the technology to be “shaped by diverse perspectives reflecting the public interest”, they have continued releasing new and more capable products to the global public at pace.
Given the rapid pace of AI development, it’s inevitable that national AI governance policy will lag behind technological advances. To supplement the slow process of national policymaking, and to address the risks canvassed above, the governance of this emerging technology requires a novel, creative, and collaborative approach.
As many AI labs have acknowledged the need for regulation, and the need for voices from the Global South to be represented in the development and design of their products, the South African state and civil society have a potent opportunity to engage directly with these labs, representing the voices and interests of the South African people. Such direct advocacy for our own interests would be strengthened by collaborating with other African democracies, to speak collectively to the unique challenges and needs of the continent.
As we head into what is poised to be the “biggest election year in history”, in which increasingly advanced AI systems are likely to play a pivotal role in shaping public discourse, the need for such direct action to supplement existing regulatory efforts – and to steer the course of future systems – is more urgent than ever.
- Tharin Pillay is a writer, researcher, and Tech Rights Fellow at ALT Advisory.