Artificial intelligence can benefit small businesses, but it’s crucial to tread carefully, because using AI can carry significant legal implications.
South Africa doesn’t currently have direct laws regulating the use of AI. However, the South African Department of Communications and Digital Technologies recently drafted a policy framework on AI, which suggests that AI regulations may be drafted and implemented in future.
Here, we aim to give you an idea of the legal implications you may face when using AI in your business.
Understanding the Legal Landscape of AI in South Africa
South African businesses currently operate in a space with no AI-specific laws, but existing legislation still applies to how they use AI. This makes it essential to understand laws such as the Protection of Personal Information Act (POPIA), the Consumer Protection Act (CPA), and intellectual property (IP) laws.
Additionally, it’s crucial to stay informed about global AI regulatory trends, as South Africa is likely to align with these international standards in the future. The evolving nature of AI technology and its applications creates legal grey areas. This emphasises the need for businesses to be cautious and adaptable in their compliance efforts.
The use of AI introduces legal uncertainties. An example of this would be the use of deepfakes and AI-generated art. Deepfakes, which are convincingly realistic fake videos or audio, can be used for defamation, fraud, and manipulation, posing challenges to laws on evidence and authenticity. On the other hand, AI-generated art raises questions around copyright and authorship, as it is unclear who owns the rights to AI-created art or reproductions.
What Can South Africa Learn from International AI Laws?
There are several international AI laws and frameworks that South Africa can draw on when shaping its own AI regulations. These include the following:
The European Union AI Act (EU AI Act)
The EU AI Act is the first comprehensive law globally to regulate AI. It prohibits AI systems that pose an unacceptable risk, such as social scoring systems that can lead to discrimination, and establishes requirements and regulations for high-risk AI applications. South African businesses can learn from this Act by evaluating the risks associated with using and implementing AI tools.
The OECD AI Principles
The OECD AI Principles, developed by the Organisation for Economic Co-operation and Development, are an intergovernmental standard on artificial intelligence. They aim to promote responsible and innovative AI use that respects human rights through five values-based principles:
- Inclusive growth, sustainable development and well-being.
- Human rights and democratic values, including fairness and privacy.
- Transparency and explainability.
- Robustness, security and safety.
- Accountability.
AI and Data Act (AIDA)
Canada’s proposed AI and Data Act (AIDA) focuses on regulating high-impact AI systems that affect health and safety. It requires businesses to ensure fairness and protect individual rights, and to implement data management practices that safeguard privacy.
Singapore’s Model AI Governance Framework
Singapore’s Model AI Governance Framework serves as the groundwork for businesses to responsibly deploy AI. It places an emphasis on accountability, data, trusted development and deployment, incident reporting, testing and assurance, and plenty of other key aspects to ensure responsible use.
Compliance Checklist for SMEs Using AI
For SMEs adopting AI tools and systems, this checklist will help ensure ethical, legal, and secure implementation.
1. Develop Internal Ethical AI Guidelines
Create a framework that prioritises fairness, accountability, and transparency in how AI is developed and used.
2. Audit AI Systems for Bias and Discrimination
Implement regular checks to ensure your AI tools do not reinforce unfair outcomes or exclude specific groups.
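As an illustration, one basic check compares outcome rates across groups. The sketch below, in Python, uses a hypothetical list of AI screening decisions tagged with a demographic attribute; the "disparate impact" ratio is one common rule of thumb, not a legal standard:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest approval rate.
    Values well below 1.0 suggest the tool may disadvantage a group."""
    return min(rates.values()) / max(rates.values())

# Hypothetical sample data: (group, approved?) from an AI screening tool
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = approval_rates(sample)
print(rates)
print(disparate_impact(rates))
```

A real audit would use far larger samples and proper statistical tests, but even a simple ratio like this can flag tools that need closer review.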
3. Align AI Practices with Human Rights Principles
Use international ethical standards as a reference to guide your AI development and deployment.
4. Stay Up to Date with AI Governance and Regulation
Monitor local and international legal developments to ensure compliance, especially around emerging AI-specific laws.
5. Implement a Clear Data Governance Policy
Standardise how data is collected, stored, and used across your organisation.
6. Ensure Compliance With South Africa’s Data Protection Laws (e.g., POPIA)
All personal data handled by AI must follow lawful processing, consent, and security principles.
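To make the consent principle concrete, here is a hypothetical Python sketch of a guard that refuses to pass personal data to an AI system unless consent is on record. The names and fields are illustrative only, not a POPIA-endorsed implementation:

```python
from dataclasses import dataclass

@dataclass
class DataSubject:
    name: str
    email: str
    consent_to_ai_processing: bool  # recorded when consent is obtained

class ConsentError(Exception):
    """Raised when personal data would be processed without consent."""

def process_with_ai(subject: DataSubject) -> str:
    """Refuse to hand personal data to an AI tool without recorded consent."""
    if not subject.consent_to_ai_processing:
        raise ConsentError(f"No consent on record for {subject.name}")
    # ... hand off to the AI system here ...
    return f"Processed {subject.email} with AI"

alice = DataSubject("Alice", "alice@example.com", consent_to_ai_processing=True)
print(process_with_ai(alice))
```

The point is structural: consent is checked in code at the boundary, so no AI processing path can silently skip it.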
7. Be Transparent About Data Usage
Clearly inform users how their data is being used by your AI systems, especially in customer-facing applications.
8. Apply Cybersecurity Best Practices to AI Systems
Protect against unauthorised access and attacks through encryption, secure APIs, and routine vulnerability assessments.
9. Conduct AI-Specific Risk Assessments
Regularly evaluate the potential operational, reputational, and legal risks associated with AI tools.
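One lightweight way to structure such an assessment is a simple likelihood × impact score per risk. The sketch below uses hypothetical risk entries and an arbitrary threshold; your own register and scoring scale would differ:

```python
# Hypothetical risk register: (risk description, likelihood 1-5, impact 1-5)
risks = [
    ("Biased output in customer screening", 3, 5),
    ("Personal data leaked via AI prompts", 2, 5),
    ("AI-generated content infringes copyright", 3, 3),
]

def assess(register, threshold=10):
    """Score each risk (likelihood * impact) and flag those needing action."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in register]
    flagged = [name for name, score in scored if score >= threshold]
    return scored, flagged

scored, flagged = assess(risks)
for name, score in scored:
    print(f"{score:2d}  {name}")
print("Needs mitigation:", flagged)
```

Even a spreadsheet version of this table, reviewed quarterly, goes a long way towards making AI risk visible to decision-makers.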
10. Establish an AI Incident Response Plan
Ensure your team is prepared to respond quickly to breaches, failures, or ethical concerns related to AI use.
11. Involve Multidisciplinary Teams in AI Decision-Making
Include legal, ethical, technical, and business perspectives when developing or deploying AI.
12. Train Staff on Ethical AI and Data Protection Practices
Build awareness and capacity across your team to manage and use AI responsibly.