Friday, August 29, 2025

Is AI feeding us lies? New study reveals the sources chatbots rely on and how they can fuel misinformation risks



For decades, Google functioned as the go-to “answer box,” delivering links from across the web. Users still had to sift through websites to find what they needed. Artificial intelligence chatbots like ChatGPT, Perplexity, and Google’s AI Mode have upended that model, offering direct answers that bypass traditional sources. But a new study raises serious concerns about where these bots are pulling their information from.

Reddit tops the list of AI’s go-to sources

According to research published by SEO platform Semrush, Reddit is by far the most frequently cited source across ChatGPT, Perplexity, and Google’s AI tools, accounting for more than 40 percent of all citations — a figure that dwarfs Wikipedia at 26 percent and YouTube at 23 percent. This dominance reflects years of users appending “Reddit” to Google searches in pursuit of authentic discussions. Now that AI automates this process, the risks of leaning heavily on user-generated commentary grow more serious.

Deals driving visibility

Industry partnerships may explain Reddit’s prominence. Reuters reported that Google struck a $60 million annual deal to license Reddit content for training its AI models, while OpenAI followed with a similar agreement to integrate Reddit discussions into ChatGPT. While this ensures vast, real-time data, it also means AI systems are pulling unverified, sometimes misleading, and even satirical content into responses without human discernment.

Why experts are worried

The Semrush study highlighted how citation overlap between AI chatbots and Google’s traditional top search results remains partial at best. Perplexity aligned most closely with Google’s top ten links, while ChatGPT showed the weakest overlap, pulling from sources that traditional search often overlooks. Experts warn that this mix of opaque sourcing and user-driven platforms risks amplifying misinformation. As OpenAI CEO Sam Altman himself admitted, he is surprised people trust ChatGPT so readily.

While Google’s AI Mode sometimes provides sidebars with citations, most AI responses offer little visibility into where their information originated. This lack of transparency leaves users vulnerable to taking flawed or comedic posts at face value. “AI systems can retrieve information from top domains, but they often draw from different subpages, including obscure forums and comments,” the study explained.


The findings underline a new reality: in the race to make chatbots conversational and convenient, accuracy and reliability risk being sidelined. User-generated platforms like Reddit, YouTube, and Facebook now shape much of what AI tells us, but these spaces were never designed to be definitive sources of truth.

