Afriwise Blog

Regulating artificial intelligence in Nigeria

Written by Aelex | 2/07/2024

Artificial Intelligence (AI) represents one of humanity's greatest technological advancements, and since its emergence it has witnessed widespread global adoption. From the introduction of smart assistants like Alexa, Cortana, Siri and Google Assistant, to marketing chatbots, to the more recent introduction of ChatGPT, AI has continued to evolve.


In recent years, Nigeria has shown increased interest and investment in AI-related initiatives, such as Uniccon Group's Omeife, Africa's first humanoid robot; the deployment of AI for risk assessment by digital lenders; and the use of AI for diagnosis and scanning medical records in the health sector. Search trends released by Google revealed that Nigerians have become more interested than ever in AI, with interest growing by 100 percent in 2022 compared to 2021.


Despite its appeal, the use of AI in its various forms comes with drawbacks, and like any novel technology, maximising its benefits will require putting adequate guardrails in place around its use.

In this respect, some jurisdictions have made more progress than others. In early 2024, the European Parliament passed into law the EU Artificial Intelligence (AI) Act, a broad piece of legislation that empowers the EU and its member states to impose restrictions on the use of high-risk systems, prohibit some systems outright, and generally engender a safer environment for the adoption of AI systems where they affect EU citizens. The State of California, USA, has introduced a similar bill that would regulate the development and training of advanced AI models to ensure they are not exploited for nefarious purposes. There are also similar initiatives in China, the UK, and Brazil, to name a few.

While Nigeria has no AI-specific legislation, this article considers existing laws in the areas of data protection and privacy, cybercrime, intellectual property, consumer protection, and capital markets, and the extent of their applicability to AI adoption and use in Nigeria.


AI and Data Privacy/Protection 

One significant effect of the adoption of AI is the automation of several activities, with little or no human input.

The Nigeria Data Protection Act, 2023 (NDPA), the primary data protection legislation in Nigeria, restricts the exclusive use of automated decision-making processes, including profiling, for processing personal data where they result in legal or similarly significant effects on the data subject. Exceptions to this restriction include obtaining the data subject’s consent, fulfilment of a legal requirement, or where the processing is necessary for the performance of a contract involving the data subject. Businesses relying on AI for automated decision-making must therefore do so in compliance with the NDPA.

The Nigeria Data Protection Implementation Framework also mandates companies to adopt privacy by design, embedding data protection into technical systems from the start. For example, when developing software or applications that handle personal data, it is crucial to integrate data protection measures to ensure the safety of such personal data.


The Nigeria Data Protection Commission (NDPC), the agency primarily charged with enforcing the NDPA, also recently issued a draft General Application and Implementation Directive (GAID) to the NDPA. The GAID seeks to regulate the use of AI by requiring data controllers or processors who deploy, or intend to deploy, AI for processing personal data to take into consideration the provisions of the NDPA, the GAID itself, public policy, and other regulatory instruments issued by the NDPC. Specifically, the GAID provides that such controllers or processors, in putting in place the technical and organizational parameters for their AI deployment in data processing, should take into account the right of data subjects not to be subject to a decision based on automated processes, the right to be forgotten, safeguards for processing sensitive personal data, safeguards for children and other vulnerable groups, the regulation of cross-border data flows, and privacy by design. Businesses deploying AI for personal data processing are also mandatorily required to conduct a Data Privacy Impact Assessment (“DPIA”) on such processing activities.

AI and Intellectual Property 

In May 2024, Hollywood actress Scarlett Johansson alleged that OpenAI had copied her voice for its virtual assistant, Sky, without her consent. Before this, OpenAI was sued by The New York Times Company for allegedly infringing its copyright by training ChatGPT on millions of its articles.

While OpenAI may be at the center of these examples, it is not the only AI company involved in copyright infringement disputes. Other AI companies have faced accusations of training their systems on copyrighted material available online. The defense often hinges on the argument that, because the material is publicly accessible and the AI tools do not reproduce it in its entirety, such usage is permissible.

The Copyright Act 2022 (CA 2022) plays a crucial role in governing the use of AI, particularly in relation to intellectual property rights. It protects original works, including literary, musical, artistic and audiovisual works, sound recordings, and broadcasts. Content falling into these categories that is used by AI may be eligible for copyright protection, provided it is original and fixed in a tangible medium of expression.

Separately, although the CA 2022 attributes authorship and ownership to human creators, the emergence of AI challenges this concept, raising questions about who owns the copyright to works generated by AI. The CA 2022 does not currently address AI authorship explicitly, potentially leading to legal uncertainty. The use of AI in creating or distributing content can also give rise to copyright infringement: for instance, AI systems trained on copyrighted material without permission may infringe existing rights. The Act provides remedies for infringement, including injunctions, damages, and accounts of profits, which could apply to unauthorized use of copyrighted works by AI.

Despite the CA 2022’s limitations in addressing AI authorship and authors’ ability to establish copyright in materials used by AI, the Act empowers the Nigerian Copyright Commission (NCC) to demand information and access any database relating to copyright without a warrant. This means the NCC can potentially demand that an AI deployer provide access to the underlying data used in training its model, to ascertain whether it was developed using copyrighted information.


AI and Cybercrime 

In 2018, the camera of a facial recognition authentication system in China was hijacked, allowing hackers to gain access to privileged information and impersonate several users. Using this data, they were able to defraud local tax authorities of $77 million.

This example shows the vulnerability of AI-powered systems to cyberattacks, and the importance of protection to curb bad actors. The Cybercrimes Act, 2015 imposes sanctions for this kind of behaviour. It provides that anyone who, without authorization, intentionally accesses a computer system or network with the intent of obtaining computer data, securing access to any program, or obtaining commercial/industrial secrets or classified information, commits an offence and is liable to punishment.

Similarly, if an AI system used for transmitting computer data, content, or traffic data is unlawfully intercepted by technical means, such action would constitute an offence under the Cybercrimes Act.


AI and Consumer Protection

Increased adoption of generative AI in the commercial space has also come with increased consumer risks. For instance, in January 2024, AI-generated videos of Taylor Swift were circulated, falsely claiming the singer was giving away free cookware, with consumers only needing to pay a shipping fee.

The Federal Competition and Consumer Protection Act (FCCPA) provides that ‘an undertaking shall not knowingly apply to any goods a trade description that is likely to mislead consumers as to any matter implied or expressed in that trade description or alter, deface, cover, remove or obscure a trade description or trade mark applied to any goods in a manner calculated to mislead consumers’. Trade descriptions under the FCCPA are, generally, any form of branding that may lead a customer to request or order the goods; they can thus include product labels, advertisement content, product catalogues, email pitches, and business proposals.


--

Read the original publication at Ǽlex