By Elizabeth Osayande
A researcher at the University of Georgia, USA, Moses Ubaka Okocha reflects on a growing concern at the intersection of technology, media, and public health. As advancements in artificial intelligence (AI) continue to transform industries, journalism stands at a pivotal moment, particularly in dynamic media environments like Nigeria. Okocha, a media expert, has taken a deep dive into the implications of AI-generated deepfakes (manipulated audio or video files that imitate real individuals) within the context of health misinformation.
In Nigeria, where media literacy varies significantly across demographics, deepfakes have become a tool for spreading false health information, often targeting vulnerable populations such as the elderly or those suffering from serious illnesses. These manipulations can have dire consequences, undermining public health efforts and eroding trust in credible journalistic sources.
Vanguard recently had a chat with him to shed more light on the issue.
Excerpt:
You have been conducting extensive research on the impact of AI deepfakes in the media, especially regarding public health. Can you start by explaining what deepfakes are and how they have infiltrated the Nigerian media landscape?
Deepfakes are essentially manipulated videos or audio recordings that create a convincing likeness of real people. In Nigeria, we’ve seen a troubling rise in their use, particularly in promoting health misinformation and disinformation.
This includes everything from fake endorsements of unverified treatments to impersonations of reputable journalists or public figures and media outlets. The sophistication of these deepfakes makes them persuasive, especially among vulnerable populations.
What are some specific examples you have encountered that illustrate the dangers of deepfakes?
Most of these journalistic deepfakes follow the same format: well-known journalists are impersonated to promote dubious health products for conditions like hypertension. People, especially the elderly or those dealing with serious health issues, may not have the media literacy needed to recognize these deepfakes.
This leads to serious risks—not just in terms of personal health, but also financial scams targeting unsuspecting individuals.
You mentioned that trust in visual content is under siege. How has this changed the way people consume media?
Traditionally, visual content has been seen as trustworthy; the saying “the camera never lies” reflects that belief. However, with the rise of deepfakes, this notion is becoming increasingly questionable. People find it harder to distinguish between fact and fabrication. The emotional engagement and appeals that visuals provide only complicate matters further, as deepfake videos can manipulate feelings and sway public perception with ease.
That is quite concerning. How do you feel these deepfakes influence public health messaging in particular?
The impact on public health is profound. Deepfakes can imitate credible health institutions, distorting scientific evidence and spreading dangerous misinformation. This is especially prevalent on social media, where visual content transcends cultural and demographic boundaries easily, leading to widespread misconceptions and potential harm.
You have pointed out that the threat of deepfakes is a global issue. Can you provide insights into how this has played out in other countries, like the United States?
Absolutely. Data shows that deepfake fraud worldwide increased more than tenfold from 2022 to 2023, with countries like the United States among those affected. We witnessed this unfold during the 2024 elections, where deepfakes were strategically deployed to influence public opinion and manipulate voter perceptions.
Although countries like the United States have taken steps to address this issue—such as the Executive Order issued by President Biden in 2023, which aims to harness the potential of AI while mitigating its risks—much more needs to be done to tackle this global challenge.
What do you see as the best way forward in combating this growing threat?
We must take a more proactive stance. Countries around the world need to invest in training and collaboration around media literacy, both for audiences and journalists. Education is key.
We must adopt a synergistic approach because the challenges posed by deepfakes are interconnected globally. What impacts one nation can easily ripple out and affect others.