Introduction
Artificial intelligence is advancing at an extraordinary pace. Today, it is increasingly difficult to determine whether a video, image, or audio clip is real or AI-generated. Many people now encounter online content and pause to ask a troubling question: Is this real, or was it created by AI?
This uncertainty marks a defining moment in the digital age. While AI has unlocked powerful tools for innovation and creativity, it has also created new avenues for deception. One of the most concerning consequences of this shift is the growing risk of defamation, where false or manipulated content can damage a person’s reputation before the truth has a chance to emerge.
How AI Is Being Used Today: Innovation and Imitation
AI-generated content is not inherently harmful. In fact, it is being used positively across many sectors, for example:
- Entertainment and media, where AI enhances visual effects and storytelling.
- Education, through simulations and interactive learning tools.
- Accessibility, by creating realistic avatars and voice tools for people with disabilities.
However, the same technology is also being misused to create:
- Doctored images and videos
- False narratives about private individuals and public figures
- Viral misinformation designed to provoke outrage and engagement
The danger lies not only in the creation of false content but also in how easily it is believed and shared. In many cases, people cannot readily tell that the content is fabricated, and by the time the truth emerges, reputations may already have been irreparably harmed.
Deepfakes and the Speed of Online Judgment
Social media thrives on immediacy. Allegations spread faster than clarifications, and outrage often travels farther than facts. When AI-generated or manipulated content enters this environment, it can quickly become a powerful weapon.
Deepfakes thrive in this ecosystem because they look convincing. A single image or short video can seem to “prove” a narrative, even when it is entirely false. This is where AI intersects dangerously with defamation.
Case Study: Asherkine and the Gifted Girl
A recent controversy illustrates this risk clearly. During an interview, influencer Asherkine asked a girl he intended to present with a gift whether she was in a relationship. She said she was not. Shortly after the interview, a photo surfaced online showing her appearing to be very intimate with a man.
The man who posted the photo claimed that they had been dating, that he had spent money on her, and that she had denied the relationship during the interview in order to receive gifts from Asherkine. The internet responded swiftly. The girl received intense backlash; she was dragged online, insulted, and labelled with various derogatory names.
Eventually, she shared her side of the story. She provided the original image, which showed that the man in the viral photo had not been present at all. According to her explanation, the image circulating online had been doctored using AI, with the man digitally inserted into the photograph. As public sentiment began to shift, attention turned to the man who had created the doctored image and posted it with a false narrative. He eventually issued a public apology.
According to TechCabal, the manipulated content circulated on X (formerly Twitter), demonstrating how quickly AI-generated falsehoods can escalate online.[1]
This sequence of events raises a critical legal question: Does this amount to defamation?
Understanding Defamation: A Legal Examination
Defamation refers to the publication of a false statement that injures a person’s reputation.[2] It exists to protect individuals from reputational harm caused by untrue assertions presented as facts.
Defamation generally appears in two forms:
- Libel: Defamation in written or published form, including online posts, images, and videos.
- Slander: Defamation in spoken form.
For a claim of defamation to succeed, certain elements must typically be established.
- A False Statement of Fact
The statement or representation must be false. Opinions are generally protected, but presenting a manipulated image as proof of a relationship or conduct that never occurred may qualify as a false statement of fact.
- Publication to a Third Party
The false statement must be communicated to someone other than the person defamed. Social media posts, reposts, and viral images easily satisfy this requirement.
- Fault
Depending on the circumstances, the person responsible must have acted negligently or with malicious intent. Posting AI-doctored content without verification, especially when it is likely to cause reputational harm, may amount to negligence.
- Damage to Reputation
There must be harm to the individual’s reputation. In the Asherkine case, the online backlash, name-calling, and public shaming demonstrate clear reputational damage.
Viewed through this lens, the circulation of an AI-manipulated image that falsely portrays an individual and exposes them to public ridicule closely aligns with the elements of defamation.[3] Since each of these elements appears to be present in the Asherkine case, legal action against the individual responsible for circulating the doctored content would likely have had a strong prospect of success.
AI Deepfakes and the Expanding Legal Grey Area
AI deepfakes stretch traditional defamation laws to their limits. Most defamation principles were developed long before the existence of realistic digital manipulation. Yet courts around the world have shown that new technology does not invalidate old legal principles; it merely tests how they are applied. In 2025, the Nigerian government charged activist and publisher Omoyele Sowore with defamation under the Criminal Code Act for allegedly spreading false information about President Bola Ahmed Tinubu online, alongside cybercrime charges. This highlights how social media posts can attract criminal defamation allegations under Nigerian law.[4]
Courts have also repeatedly treated doctored images, edited videos, and misleading online publications as libel, even where the medium is novel. The key question remains whether the content conveys a false statement of fact capable of harming reputation. Deepfakes, by their very nature, are designed to appear factual and believable, making them particularly dangerous in defamation claims.
Defamation and the Law: Nigerian and Comparative Perspectives
Under Nigerian law, defamation exists both as a civil wrong and a criminal offence. The Criminal Code and Penal Code criminalize the publication of false statements that harm another person’s reputation.[5] These provisions apply regardless of whether the defamatory content is published in print, spoken aloud, or circulated online.
In addition, the Cybercrimes (Prohibition, Prevention, etc.) Act 2015 provides a modern legal basis for addressing online defamation. Section 24 of the Act criminalizes the intentional transmission of false or offensive content through computer systems with the intent to cause harm, annoyance, or reputational damage. AI-generated images and videos shared online with harmful intent may fall squarely within this provision.[6]
Comparatively, courts in other jurisdictions have also adapted defamation law to digital realities. In the United Kingdom, for example, the Defamation Act 2013 requires proof of “serious harm” to reputation, a threshold that viral deepfakes can easily meet. In the United States, while online platforms often enjoy intermediary protections, individuals who create or knowingly share defamatory content may still be personally liable.[7]
This comparative approach reinforces a crucial point: the absence of specific deepfake legislation does not mean there is no legal remedy.
The Way Forward: Law, Literacy, and Accountability
Addressing AI-driven defamation requires a multi-layered approach:
- Legal reform, to explicitly recognize AI-generated and manipulated media as potential instruments of defamation.
- Digital literacy, so the public learns to question and verify viral content.
- Platform responsibility, including better detection tools and faster response mechanisms for false content.
AI itself is not the enemy. The problem lies in unregulated use, careless sharing, and the absence of accountability.
Conclusion: Protecting Reputation in the Age of Artificial Reality
The rapid rise of AI-generated deepfakes has fundamentally reshaped how truth is perceived and evaluated in the digital space. As the Asherkine controversy illustrates, a single manipulated image or video can ignite widespread outrage, inflict serious reputational damage, and subject individuals to public judgment before the facts are fully understood. In a digital environment where content spreads instantly and verification often comes too late, the consequences for those targeted can be profound and enduring.
This evolving technological landscape presents an urgent challenge for the law. Legal systems that were developed in an era of traditional media must now confront a reality in which highly convincing falsehoods can be manufactured and disseminated within seconds. While the core principles of defamation law remain relevant, courts and policymakers must increasingly grapple with how those principles apply to AI-generated fabrications that blur the line between reality and manipulation.
Ultimately, protecting reputation in the age of artificial reality will require a combination of legal adaptation, technological responsibility, and public awareness. Platforms must strengthen safeguards against the spread of manipulated media, creators must exercise greater ethical restraint, and individuals must approach viral content with a greater degree of scepticism and verification.
Until stronger regulatory and technological protections emerge, caution, accountability, and critical scrutiny remain the most effective defenses in a world where digital images and videos can no longer be accepted at face value. In this new era, the age-old legal struggle between truth and falsehood has taken on a more complex and urgent dimension, one in which seeing is no longer believing, and the protection of reputation has never been more vital.
Author

Jessica Onyinyechukwu Osademe
Trainee Associate
Rosewood Legal
josademe@rosewoodlegal.com
Co-author

Lateefat Omotomilola Hakeem-Bakare
Principal Partner
Rosewood Legal
lhakeem-bakare@rosewoodlegal.com
Published on March 9, 2026
References:
[1] TechCabal, "The Men Undressing Women with Grok," TechCabal, July 17, 2025.
[2] Criminal Code Act 2004 (Nigeria), Sections 373–375; Penal Code (Nigeria), Sections 391–392.
[3] BusinessDay Nigeria, "Right to the Freedom of Expression and the Nature of the Law on Defamation in Nigeria."
[4] Premium Times, Yakubu Mohammed, September 16, 2025.
[5] Criminal Code Act (Nigeria), Sections 373–375; Penal Code Act (Nigeria), Sections 391–392.
[6] Cybercrimes (Prohibition, Prevention, etc.) Act 2015 (Nigeria), Section 24.
[7] Defamation Act 2013 (UK); Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).