When AI distorts the news: a study warns about flaws in smart assistants

AI assistants are now becoming a source of information for millions of people, and a recent study highlights worrying gaps in their reliability.

Conducted by the European Broadcasting Union (EBU) in collaboration with the BBC and 22 public-service media outlets from 18 countries, this international investigation, unveiled at the EBU Assembly in Naples, reveals that nearly half of the answers provided by popular AI assistants contain significant errors.

As 2026 approaches and AI becomes ever more integrated into our lives, these results highlight a major challenge: how can we ensure reliable information in the age of algorithms? The Yiaho team takes a closer look at this study.

An in-depth analysis of AI’s flaws

To assess the performance of AI assistants, professional journalists put four popular platforms—ChatGPT, Copilot, Gemini and Perplexity—through a rigorous test.

Nearly 3,000 answers were generated from 30 current-affairs questions, then analyzed using strict criteria: accuracy, source quality, editorial clarity, and context.

The findings are troubling: 45% of answers contain at least one notable error, whether factual inaccuracies, questionable sources, or contextual distortions.

Among the issues identified:

  • 31% of answers suffer from serious flaws in source attribution, with references that are missing, incorrect, or misleading.
  • Even more worrying, 20% of answers contain major inaccuracies, including outdated or completely fabricated information—a phenomenon often referred to as “hallucination” in AI jargon.
  • Gemini stands out as the worst performer, with significant errors in 76% of its answers, a rate far higher than that of its competitors.

Another worrying point is the drop in “refusal rates.” Unlike earlier versions, which could decline to answer a complex question, today’s AI assistants tend to provide an answer, even at the risk of spreading unreliable information.

This behavior increases the risk of misinformation, especially in a context where trust in the media is already fragile.

Why does this concern all of us?

AI assistants are no longer simple tech gadgets: they are gradually replacing traditional search engines for a growing share of the population.

According to the Reuters Institute’s Digital News Report 2025, 15% of under-25s turn to these tools to stay informed, compared with only 7% of all online users.

This trend, especially pronounced among young people, makes the reliability of these technologies crucial.

Toward solutions for more reliable AI?

Several avenues are being considered: better source traceability, mechanisms for flagging uncertainty, and deeper integration of journalistic standards into algorithms.

As 2026 comes into view, AI technologies will likely continue to evolve, potentially correcting some of these shortcomings. But today’s results are a reminder that the race to innovate must not come at the expense of reliability.

For AI assistants to become true allies in access to information, they will need not only to mimic the authority of the media, but also to adopt its core values: rigor, transparency, and accountability.

In the meantime, users are encouraged to stay vigilant. Checking sources, cross-referencing information, and relying on trusted media remain essential habits in a world where AI, despite its promises, is not yet infallible.

Glen