South Africa requires comprehensive AI legislation – experts
“South Africa’s ambition to be a player in the global Artificial Intelligence [AI] space necessitates a regulatory regime that can regulate the robustness of AI and the possible threats that it may impose on individuals and organisations.”
This was argued by ICT legal experts at ENS Africa, Ridwaan Boda, Era Gunning and Lindo Ntuli, who prompted the discussion following the European Parliament's passing of the pioneering Artificial Intelligence Act on 13 March, the first piece of legislation of its kind globally.
South Africa currently has no explicit legal framework for AI despite showing an eagerness to be a key player in the space.
“Regulating AI is… crucial in ensuring that AI achieves outcomes that are in the interests of society,” said the experts.
Outline of AI risks
Boda, Gunning, and Ntuli outlined some (but not all) of the pertinent risks that prompt arguments for the need for AI legislation.
In the EU, AI risks are divided into four categories:
- Unacceptable Risk (banned);
- High Risk (strictly regulated);
- Limited Risk; and
- Minimal Risk.
“Not all AI can be regarded in the same way when it comes to AI risk management,” said the experts. For instance, the use of popular generative AI tools like ChatGPT differs significantly from integrating AI in core operations or developing custom AI systems.
AI also introduces new data privacy and cybersecurity concerns, with its ability to potentially de-anonymise data through pattern recognition, alongside traditional data breach issues.
The ENS experts said that this “predate[s] the emergence of AI, however, the advancement and robustness of AI models have the capacity to unmask anonymised data through inferences.”
According to Boda, Gunning and Ntuli, the complexity and lack of transparency in AI systems, particularly in the financial sector, also present challenges in understanding and questioning AI outcomes – leading to potential biases, incorrect decisions, and inappropriate models.
Embedded bias in AI can also systematically discriminate against certain individuals, a notable concern in customer classification processes.
Moreover, “hallucinations” in AI, where generative AI creates things like fictitious legal cases or spreads misinformation, underscore the ethical and security challenges in AI deployment.
There’s also the risk of “prompt hacking,” a situation where AI is tricked into facilitating unlawful activities, further stressing the importance of ensuring AI technologies are used responsibly and securely.
Lastly, Intellectual Property (IP) issues surrounding AI inventions and ownership remain unresolved globally, with organisations struggling to manage rights to AI development and outputs.
Going forward
Given the abovementioned risks, the ENS ICT experts say that it is therefore critical to implement comprehensive legislation to protect the best interests of society.
“While AI remains largely unregulated in South Africa, existing legislation like the Protection of Personal Information Act, 2013, does regulate some activities conducted by organisations using AI, by preventing the unlawful processing of personal information,” explained the experts.
“It is important to note that Boards in South Africa remain accountable for IT governance. [Allowing] AI to be used and implemented in an organisation without any governance mechanisms will, in the absence of regulation, not only create legal risk but also reputational, commercial and financial risk and, in some cases, embarrassment,” they added.
ENS says that until comprehensive regulation is implemented, companies should adopt responsible AI practices through training, policies, ethics governance, AI assessments, and proper AI contracts.