Future of Democracy in Nigeria’s 2027 Elections and the Rise of AI Deepfake Videos

As Nigeria’s 2027 elections draw near in this era of artificial intelligence, how could AI deepfakes shape democracy? It is a question many Nigerians are asking under the current administration. Last year, a rather surprising video of President Bola Ahmed Tinubu spread quickly online. In the clip, Tinubu appeared to say he was a Chelsea fan, so upset by the team’s losses that he wanted to buy the club. It looked real enough, but it wasn’t. The video was a deepfake, created by artificial intelligence. So, on the road to 2027, how can Nigeria start protecting its elections from AI deepfake threats?

Though it seemed harmless, the incident showed how easily AI can put false words into a leader’s mouth, and how fast such lies can travel before the truth catches up. Something clearly has to be done about it.

As Nigeria moves closer to the 2027 elections, the stakes are far higher. What happens when fake videos, voices, or images target politicians during campaigns? In a country where many people trust what they see and share online, the danger is clear. We saw similar chaos during the #EndSARS protests and again during the COVID-19 pandemic, when misinformation spread faster than facts.

Nigeria’s institutions are taking notice. The electoral commission, INEC, has set up an AI division, while fact-checkers, civil society groups, and tech experts are working on grassroots awareness and digital literacy campaigns. These efforts may prove crucial in protecting voters — and democracy itself — from the rising threat of AI-powered misinformation.

Road to 2027: Protecting Nigeria’s Elections from AI Deepfake Threats

AI Deepfakes and the 2027 Nigerian Elections: What Other Countries’ Experiences Show

Nigeria is not alone in facing the threat of AI-powered misinformation. Around the world, artificial intelligence is already being weaponised to spread political falsehoods.

In the United States, cloned voices have been used in robocalls to impersonate politicians. In South Africa, the daughter of former President Jacob Zuma circulated a deepfake of U.S. President Donald Trump endorsing the uMkhonto weSizwe (MK) party, sparking widespread debate online.

Indonesia’s presidential campaign last year saw fake videos of candidate Prabowo Subianto speaking Arabic to court Muslim voters. In Germany, a network known as “Storm-1516” set up AI-driven websites to smear politicians ahead of elections.

The pattern repeats across continents — from France and Argentina to Bangladesh, the Philippines, Canada and Spain.

The risk for Nigeria is that these same tactics could be even more damaging here. With high trust in visual and audio media, deep ethnic and religious divides, and low digital literacy in many communities, the country presents fertile ground for AI-driven manipulation.

How AI Deepfakes Could Shape Democracy: What’s at Stake with AI and Elections

For decades, Nigerian elections have carried the weight of mistrust. Allegations of ballot rigging, opaque collation, and political horse-trading have left many citizens skeptical of the process. Party loyalty is fluid, alliances are built on convenience rather than conviction, and campaigns often revolve around personalities instead of policies.

This fragile foundation has made every election a tense national event. Nigerians approach the polls with apprehension, fearing manipulation, violence, or outcomes they suspect are already decided. The result is a trust deficit that now collides with a new and more complex challenge: technology.

“In 2019 it was cheap fakes; in 2023 it was false edits and captions. Today, we face hyper-realistic voices and videos that ordinary citizens can hardly distinguish from reality,” says Dr. Chinonso E. Okoye, Senior Special Assistant on Cyber & Infrastructure Security to the Governor of Anambra State.

Funso Doherty, a former Lagos gubernatorial candidate, agrees. “Misinformation has always existed in politics,” he says. “But AI has the capacity to take this to another level.”

What Journalists and Fact-Checkers Have to Say

As AI tools grow more sophisticated at distorting reality, journalists and fact-checkers are scrambling to keep pace.

“There are a wide range of tools,” says Fatimah Quadri of The FactCheckHub. “The most common ones are Hive Moderation and Illuminarty AI. The challenge is speed. Misinformation often travels faster than our corrections.”

That speed is overwhelming. “Now that we are dealing with AI misinformation, people need to be kinder to journalists,” urges Nelly Kalu, Editorial Projects and Product Manager at the Center for Collaborative Investigative Journalism (CCIJ). AI content, she says, is often “too fast, too quick, and too much for them to deal with.”

To stay ahead, fact-checkers are increasingly turning to “prebunking” — offering voters verified facts before falsehoods take hold. “Trust is built not just by debunking, but by being proactive, transparent, and consistent,” Quadri explains.

Yet awareness remains a major issue. “Many voters can spot simple photo manipulations,” Quadri notes. “But deepfakes, AI-generated voices, or hyper-realistic images are far harder to detect. In Nigeria, where trust in visuals and voice recordings is high, that makes voters especially vulnerable.”

How Regulators are Preparing for Nigeria’s 2027 Elections in the Age of AI Deepfakes

In May 2025, Nigeria’s electoral commission, INEC, announced a new Artificial Intelligence Division, tasked with improving decision-making, boosting voter engagement, and combating disinformation.

But experts warn that setting up new units is not enough. “There is a need to invest in training electoral officials, cybersecurity experts, and fact-checkers. Educating the electorate about AI disinformation is crucial. And platforms must be held accountable for removing manipulated content quickly,” says Kingsley Owadara, AI ethicist and founder of the Pan-Africa Center for AI Ethics.

Owadara outlines a three-step strategy: restrict AI systems from producing harmful propaganda, detect synthetic content with forensic tools, and remove manipulated material swiftly through escalation protocols and evidence capture. Still, he admits detection remains an uphill task. “No detector is fully reliable as generators evolve. Detection must combine technology, human review, and clear confidence labels on content.”
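Owadara’s point that detection must combine technology, human review, and clear confidence labels can be illustrated with a minimal sketch. The thresholds, function name, and label wording below are invented for illustration; no real deepfake detector reduces to a single score this cleanly.

```python
# Hypothetical sketch of the "detect, review, label" idea described above.
# Thresholds and labels are invented; real pipelines are far more involved.
def label_content(detector_score: float, human_flagged: bool) -> str:
    """Combine a synthetic-media detector score (0.0-1.0) with the outcome
    of human review into a confidence label, rather than a bare yes/no."""
    if detector_score >= 0.9 and human_flagged:
        return "likely AI-generated (high confidence)"
    if detector_score >= 0.9 or human_flagged:
        return "possibly AI-generated (escalate for review)"
    if detector_score >= 0.5:
        return "inconclusive (monitor and re-check)"
    return "no manipulation detected (low confidence)"

# A viral clip scoring high on the detector and flagged by a reviewer:
print(label_content(0.95, True))   # likely AI-generated (high confidence)
print(label_content(0.60, False))  # inconclusive (monitor and re-check)
```

The design choice here mirrors his warning that “no detector is fully reliable”: the output is a graded label that preserves uncertainty, not a verdict.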

What Is the Solution for AI Deepfakes?

Owadara recommends that electoral bodies use auditing tools like IBM’s AI Fairness 360 to measure bias and strengthen safeguards.

Others point to deeper structural weaknesses. “Our cyber crime laws touch on internet fraud, but we lack a comprehensive AI policy,” says Victoria Oladipo, founder of Learn Politics. “We need guidelines for usage, clear consequences for misuse, and investment in training. Otherwise, AI misinformation will outpace our institutions.”

Other countries are already moving faster. In the Philippines, electoral authorities now require candidates to disclose their use of AI in campaign materials, with deepfakes considered an electoral offence during the May 2025 elections.

Nigeria’s legal system still lags. The 2022 Electoral Act bans publishing false statements about a candidate’s character, punishable by a ₦100,000 ($65) fine or six months in prison. The 2015 Cybercrimes Act also targets malicious online content but has been criticised for its vague scope and misuse against government critics.

“Regulation in this part of the world, sometimes, is not done honestly,” says Hamza Ibrahim of the Centre for Information Technology and Development (CITAD). He argues that laws are often drafted without wide consultation, leaving room for political bias. The danger, he warns, is that poorly designed laws may do little to stem disinformation while curbing legitimate dissent.

What Is IBM’s AI Fairness 360 (AIF360), and Will It Help Nigeria’s Elections?

IBM’s AI Fairness 360 (AIF360) is an open-source toolkit developed to help detect, measure, and reduce bias in artificial intelligence systems. It provides a set of algorithms, metrics, and educational resources that allow developers, researchers, and policymakers to test whether AI models treat people fairly across different groups, such as race, gender, or age. By offering both bias detection and bias mitigation techniques, AIF360 helps organizations build more transparent and trustworthy AI systems. It is widely used in areas like finance, healthcare, and governance, where fairness and accountability are critical.

How IBM’s AI Fairness 360 Works:

The toolkit runs tests on AI models to check whether they are treating people fairly. For example, it looks at whether an algorithm makes decisions that unintentionally favor or disadvantage a group based on gender, ethnicity, or age. It uses metrics to measure fairness, then applies bias-reduction techniques (called “mitigation algorithms”) to make the system’s outputs more balanced and transparent. Developers and auditors can also generate detailed reports showing where bias exists and what steps were taken to fix it.
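To make those metrics concrete, here is a minimal sketch, in plain Python, of two standard fairness measures of the kind AIF360 computes (it exposes them through classes such as BinaryLabelDatasetMetric). The data, group names, and the voter-verification scenario are invented for illustration only.

```python
# Illustrative sketch only, not the AIF360 API itself.
# Each record is (group, approved): 1 means verification succeeded.
records = [
    ("urban", 1), ("urban", 1), ("urban", 1), ("urban", 0),
    ("rural", 1), ("rural", 0), ("rural", 0), ("rural", 0),
]

def approval_rate(group: str) -> float:
    """Fraction of this group whose verification was approved."""
    outcomes = [ok for g, ok in records if g == group]
    return sum(outcomes) / len(outcomes)

p_urban = approval_rate("urban")  # 0.75
p_rural = approval_rate("rural")  # 0.25

# Statistical parity difference: 0.0 would mean equal approval rates.
spd = p_rural - p_urban
# Disparate impact ratio: values below ~0.8 are a common warning threshold.
di = p_rural / p_urban

print(f"statistical parity difference: {spd:.2f}")  # -0.50
print(f"disparate impact ratio: {di:.2f}")          # 0.33
```

In this invented example, rural voters are approved at a third of the urban rate, exactly the kind of gap an audit with the toolkit is meant to surface before a system is deployed.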

How it Can Help Nigerian Elections:

In Nigeria’s elections, AIF360 could be used in three key ways:

  1. Detecting Bias in Electoral Tools – If INEC uses AI for voter registration, identity verification, or monitoring misinformation, AIF360 can help ensure the system doesn’t unfairly exclude certain groups (like rural voters, women, or smaller ethnic groups).
  2. Checking AI Misinformation Detection Systems – Since AI is now being used to flag deepfakes or fake news, the toolkit can test whether detection algorithms work equally well across languages, dialects, and cultural contexts in Nigeria.
  3. Building Trust – By making election-related AI systems more transparent and auditable, AIF360 can boost public confidence that the technology is not being used to manipulate or suppress voters.

Who Will Benefit from IBM’s AI Fairness 360 (AIF360) if Nigeria Uses It to Prevent Deepfakes?

The Nigerian people and a range of organisations would directly benefit from using IBM’s AI Fairness 360 (AIF360) in the electoral and governance space.

Government & Electoral Bodies

  • Independent National Electoral Commission (INEC) – ensuring voter registration, verification, and election monitoring systems are fair and unbiased.
  • National Orientation Agency (NOA) – using AIF360 insights to design fair digital literacy campaigns.
  • National Information Technology Development Agency (NITDA) – aligning Nigeria’s AI policy framework with fairness and transparency.
  • Federal Ministry of Communications, Innovation & Digital Economy – supporting ethical AI adoption in governance.

Media & Fact-Checking Organisations

  • The FactCheckHub – testing fairness in misinformation detection tools.
  • Dubawa Nigeria – improving the credibility of AI-based fact-checking.
  • Centre for Collaborative Investigative Journalism (CCIJ) – auditing AI systems used for investigative reporting.

Academia & Research

  • Universities (e.g., University of Lagos, Covenant University, Ahmadu Bello University) – applying AIF360 in AI research, journalism, and governance studies.
  • Pan-Africa Center for AI Ethics (founded by Kingsley Owadara) – leveraging the toolkit for ethics research and training.

Civil Society & Advocacy Groups

  • Centre for Information Technology and Development (CITAD) – monitoring fairness in digital platforms.
  • Yiaga Africa – using AI audits to track election transparency.
  • Connected Development (CODE) – ensuring inclusivity in civic tech platforms like FollowTheMoney.
  • BudgIT Nigeria – checking fairness in AI-driven governance and accountability tools.

Ordinary Citizens

  • Voters in rural communities – protection from exclusion if AI voter verification systems are biased.
  • Ethnic and religious groups – assurance of equal treatment in AI-powered election tools.
  • Youth & women groups – benefiting from fairer digital platforms for political participation.


What Are Nigerian Tech Giants, Startups, and Academia Doing to Prevent the Harm Deepfakes May Cause in the Election?

The fight against AI-driven election misinformation cannot be left to INEC, regulators, or global tech giants alone. Nigerian startups, universities, and grassroots groups are stepping forward to fill the gap.

“Our edge is local innovation,” says Dr. Chinonso E. Okoye. “Startups can build detectors tuned to Nigerian voices and imagery. Academia can train AI models on local datasets. Fact-checkers can deploy AI-assisted claim matching to cut response times in half.”

Beyond INEC: How Nigerian Innovators Are Building Defences Against AI Election Misinformation

At Purplebee Technologies in Ekiti State, Operations Manager Omotayo Ibidunmoye believes that trust must be built from the ground up. “Digital literacy training, transparent communication, and community-driven information hubs are essential,” she says. “We must train young Nigerians to spot manipulated content and create local reporting channels to flag suspicious material.”

Civil society groups and newsrooms are also moving quickly, developing their own tools to stay ahead of AI-powered disinformation campaigns.

“We can fight AI misinformation by making AI tools that are intelligent and fast enough to counter it,” says CCIJ’s Nelly Kalu. “Think of it like Transformers—good machines fighting bad machines.”

CCIJ is currently developing ElectionWatch, a data-driven platform designed to track and analyse election misinformation on TikTok, Telegram, and other fast-growing platforms. The tool, supported by JournalismAI and the Google News Initiative, is being trained on Nigerian electoral data to identify disinformation trends in real time. “It’s the kind of tool we wish we had during our last election investigation,” Kalu adds, noting that the project could expand to other African countries in the near future.

FactCheckAfrica is another homegrown player. The organisation has launched MyAIFactChecker, an AI-powered news authenticator that allows users to submit headlines or claims and receive instant credibility assessments. The tool also provides summaries and tone analysis in local languages including Hausa, Yoruba, and Swahili. Recognised for its potential, Google selected MyAIFactChecker for its 2024 Startup Accelerator Program.

“There’s a need for more homegrown tools made by Nigerians and Africans,” says Prudence Emudianughe, Chief Operating Officer at FactCheckAfrica. “When a tool is built locally, it’s easier for people to relate to and trust.”

Can These Deepfake Threats Be Resolved?

Still, the challenge is far from solved. MyAIFactChecker cannot yet verify deepfake audio, video, or images—forms of manipulation increasingly weaponised in Nigerian politics. Ahead of the 2023 elections, an alleged phone call featuring opposition figures Atiku Abubakar, Ifeanyi Okowa, and Aminu Tambuwal went viral, purporting to show them plotting electoral malpractice. Analysts from the Collaborative Media Project and the Center for Democracy and Development (CDD) later flagged irregularities suggesting the clip was AI-generated.

To meet this threat, Emudianughe says her team is working on upgrading MyAIFactChecker to detect AI-generated multimedia content. “The next election will not only be about fake headlines,” she warns. “It will be about voices, faces, and videos that look and sound real—but are not.”

The Warning Signs and the Road Ahead

The warning signs are already here. Global platforms such as Google and TikTok have introduced watermarking tools like SynthID and automatic labels for synthetic content, while the 2024 Tech Accord has set a template for platform cooperation on safeguarding election integrity.

But experts say tools alone cannot solve the problem. “Detection alone is not enough,” Okoye warns. “It must be paired with policy, rapid response, and human judgment.”

For Nigeria, the path to the 2027 general elections is urgent and unambiguous: expand digital literacy across rural and urban communities, enforce accountability on platforms, empower fact-checkers with real-time data, and establish national protocols against AI-driven political propaganda.

Because in the age of artificial intelligence (with its benefits and risks), the challenge is no longer whether fake content will appear. The real test is whether Nigeria’s democracy can withstand the speed and scale at which it spreads.
