Nikola Matev is a student at Fontys University of Applied Sciences studying Artificial Intelligence, and an intern at InsightHQ.
Misinformation is all around us!
AI-generated misinformation has emerged as a significant concern in 2025, with experts identifying it as one of the top global risks. The proliferation of large language models and generative AI tools has made it easier than ever to create convincing fake content at scale. According to NewsGuard, the number of AI-enabled fake news sites increased tenfold in 2023, with these sites often operating with little to no human supervision.
The threat extends beyond simple text generation. AI technologies can now create realistic deepfakes, manipulated audio, and synthetic videos that are increasingly difficult to distinguish from authentic content. During the 2024 election cycle, while the feared wave of targeted deepfakes didn’t fully materialise, AI was widely used to create memes and content whose artificial origins weren’t disguised, often shared openly by politicians and their supporters.
A week ago, just before bombing Iran, President Trump held a press conference in the Oval Office, and who was standing behind him? Juventus football club! My friend James Crawford, who is Managing Director of PR Agency One and a board director at AMEC, shared his opinion on the matter in a LinkedIn post. He stated: “At first glance, it’s absurd. Almost comical. But as someone who has commissioned his fair share of press photography and press briefings, I couldn’t stop thinking about this image, laughing and scratching my head at the same time”.
Now remember the “photo” Trump shared of himself dressed as the Pope. How do we make the distinction between real and fake? It seems obvious that a picture of Trump as the Pope is 100% fake, but I thought the same thing about an Italian team’s players standing behind a president (not their own) whilst he’s discussing military actions.
The point is that we as humans can no longer reliably distinguish real news from fake. The solution? We need help!
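One narrow form that help can take is automated screening. As a purely illustrative sketch (not a verdict on any particular tool), the snippet below queries a publicly hosted classifier that estimates whether a passage of text was machine-generated. The checkpoint name and its “Real”/“Fake” labels are assumptions about that specific model, and detectors of this kind are known to misfire on short or lightly edited text, so the output is a hint, never proof.

```python
# Illustrative only: screen text with a public AI-text detector.
# Assumes the `transformers` library and the hosted checkpoint
# "openai-community/roberta-base-openai-detector"; the "Real"/"Fake"
# labels are specific to that model, and such detectors are
# unreliable on short or edited passages.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

passages = [
    "The press conference took place in the Oval Office on Saturday.",
    "In a stunning turn of events, sources confirm the unprecedented pact.",
]

for text in passages:
    verdict = detector(text)[0]  # e.g. {'label': 'Fake', 'score': 0.87}
    print(f"{verdict['label']:>5} ({verdict['score']:.2f}) :: {text}")
```

Tools like this are, at best, one signal among many; the experts discussed below disagree sharply on how much weight such signals can ever carry.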
Garbage in, garbage out!
Two weeks ago, I was in Vienna for AMEC’s global summit. AMEC is the International Association for the Measurement and Evaluation of Communication, responsible for the Barcelona Principles and an integrated framework for measuring communications effectiveness, both widely used.
Geoffrey Hinton
Geoffrey Hinton, often called the “godfather of AI,” has become one of the most vocal critics of AI’s potential dangers, including its role in spreading misinformation. After leaving Google in 2023, Hinton has consistently warned about AI’s capacity to generate convincing fake content.
Hinton’s primary concerns centre on AI’s ability to make it “much easier for authoritarian governments to manipulate the electorate with fake news that is targeted to each individual”. He has called for legislation making it illegal to produce or share fake images or videos unless they are clearly marked as fake.
The AI pioneer emphasises that “the average person will not be able to know what is true anymore” due to AI-generated misinformation.
Yoshua Bengio
Yoshua Bengio, another member of the AI “godfathers” trio, has focused extensively on AI safety and the risks of misinformation. Bengio has been particularly concerned about AI’s role in electoral manipulation and its potential to “supercharge” disinformation.
He has noted that current safety protections implemented by AI companies are quickly defeated by hackers and academics, highlighting fundamental scientific challenges in making AI systems secure against misuse.
Bengio’s approach focuses on distinguishing between AI “scientists” (systems that investigate the world) and AI “executives” (systems that can act in the world), recommending restrictions on the latter to prevent potential misuse for disinformation purposes.
Yann LeCun
Yann LeCun, Meta’s chief AI scientist, presents a markedly different perspective from his fellow Turing Award winners. LeCun has consistently argued that concerns about AI-driven misinformation are overstated and that AI actually helps mitigate dangers like hate speech, misinformation, and propagandist attempts to corrupt electoral systems.
His perspective emphasises that misinformation has existed throughout history with every new communication technology, from the printing press to the internet, and that AI will ultimately serve as a solution rather than a problem. LeCun advocates for open-source AI development, arguing that it promotes transparency and prevents concentration of power in a few corporate entities.
Eliezer Yudkowsky
Eliezer Yudkowsky is the founder of the Machine Intelligence Research Institute. He takes perhaps the most extreme position on AI risks, though his focus extends far beyond misinformation to existential threats. Yudkowsky views AI misinformation as one component of a larger alignment problem that could ultimately threaten human survival.
While Yudkowsky’s primary concern is the development of artificial general intelligence that could become uncontrollable, he acknowledges that current AI systems pose immediate risks through their potential for misuse in creating convincing misinformation.
Ilya Sutskever
Ilya Sutskever, a co-founder and former chief scientist of OpenAI and now founder of Safe Superintelligence, has focused heavily on the alignment problem and the safety of advanced AI systems. While his primary concern is ensuring safe superintelligence, Sutskever recognises the immediate challenges posed by AI-generated misinformation.
Sutskever’s approach emphasises the need for “superintelligence that will not harm humanity at a large scale”. His departure from OpenAI was partly motivated by concerns about the pace of AI development and insufficient attention to safety measures, including the potential for AI systems to generate harmful misinformation.
Yuval Noah Harari
Yuval Noah Harari, while not an AI researcher, provides crucial insights into the societal implications of AI-driven misinformation. His analysis focuses on how AI challenges democracy and collective decision-making processes.
Harari warns that AI could become the centre of a new nexus of misinformation, potentially preventing future generations from uncovering its flaws. He emphasises the distinction between misinformation (honest mistakes) and disinformation (deliberate lies), noting that AI’s sophistication makes this distinction increasingly important.
He notes that social media platform owners like Elon Musk and Mark Zuckerberg don’t want to censor anyone; for them, this is a problem of freedom of speech. Harari, however, argues that the real problem is that corporate algorithms decide which stories to promote to audiences.
He believes the power lies with social media algorithms and their masters, and that it is their responsibility not to distribute fake or misleading content.
The relationship between AI and misinformation represents a complex challenge requiring multifaceted solutions. As we have already explored, misinformation degrades the quality of the data future models are trained on, and that is essential to address; otherwise we get stuck in an endless loop of reproducing and redistributing fake data.
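To make that loop concrete, here is a toy back-of-the-envelope simulation. Every number in it is a made-up assumption for illustration only: each “generation”, a fixed share of the corpus is replaced by fresh model output, and models trained on a polluted corpus are assumed to pollute slightly more than the data they learned from.

```python
# Toy simulation of the "garbage in, garbage out" feedback loop.
# Every parameter is a made-up illustrative assumption, not a
# measured quantity.
def synthetic_share(generations=10, share=0.05, turnover=0.20,
                    amplification=1.5):
    """Fraction of the corpus that is synthetic after each generation.

    Each generation, `turnover` of the corpus is replaced by fresh
    model output, which is assumed to be synthetic at `amplification`
    times the rate of the corpus the models were trained on.
    """
    history = [share]
    for _ in range(generations):
        new_content = min(1.0, amplification * share)  # polluted models pollute more
        share = (1 - turnover) * share + turnover * new_content
        history.append(share)
    return history

for gen, s in enumerate(synthetic_share()):
    print(f"generation {gen:2d}: {s:.1%} of the corpus is synthetic")
```

Under these assumptions the synthetic share compounds by roughly 10% per generation until it saturates; the exact numbers are irrelevant, the compounding is the point.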
While there are significant disagreements among AI leaders about the severity and nature of the risks, there is growing consensus that proactive measures are necessary to address both current harms and future risks.
As AI technologies continue to evolve, the challenge of distinguishing authentic from artificial content will only intensify. Success in addressing AI-driven misinformation will depend on continued collaboration between technologists, policymakers, educators, and civil society to develop comprehensive strategies that protect democratic discourse while preserving the benefits of AI innovation.