
Even the best AI chatbots often hallucinate

Even the best AI chatbots often hallucinate, demonstrating that their output cannot yet be fully trusted, according to researchers from various universities and the Allen Institute for Artificial Intelligence.

For their study, the researchers developed a benchmark called WildHallucinations to assess the factual accuracy of AI chatbot responses. The chatbots were required to answer a range of questions that users might commonly ask.

The researchers deliberately included topics without Wikipedia pages for about half of the questions, given that most AI chatbots are trained on Wikipedia data.
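To illustrate that selection step, a topic's Wikipedia coverage can be checked programmatically. The sketch below uses Wikipedia's public REST summary endpoint, which returns a 404 status for titles that do not exist; this is our own illustration, as the paper's actual selection method is not described here, and the topic names are made up.

```python
import requests
from urllib.parse import quote

def has_wikipedia_page(topic: str) -> bool:
    """Return True if English Wikipedia has a page for this topic.

    Wikipedia's REST summary endpoint answers 404 for missing titles.
    """
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{quote(topic)}"
    resp = requests.get(url, headers={"User-Agent": "coverage-check/0.1"}, timeout=10)
    return resp.status_code == 200

# Split hypothetical candidate topics by Wikipedia coverage.
topics = ["Python (programming language)", "A made-up local bakery"]
covered = [t for t in topics if has_wikipedia_page(t)]
uncovered = [t for t in topics if t not in covered]
```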

Among the models tested, Claude 3 Haiku performed well, but partly because it answered only 72 percent of the questions, skipping those it couldn’t answer.

The topic also plays a significant role; for example, the language model Mistral-7B hallucinated in over 40 percent of its responses on the topic of “people.”
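To make these figures concrete, here is a minimal sketch of how a response rate (like Claude 3 Haiku’s 72 percent) and per-topic hallucination rates (like Mistral-7B’s 40-plus percent on “people”) could be computed. The per-question records and field names are hypothetical, not taken from the study’s code.

```python
from collections import defaultdict

# Hypothetical per-question results; field names are illustrative only.
results = [
    {"topic": "people", "answered": True,  "hallucinated": True},
    {"topic": "people", "answered": True,  "hallucinated": False},
    {"topic": "places", "answered": False, "hallucinated": False},
    {"topic": "places", "answered": True,  "hallucinated": False},
]

answered = [r for r in results if r["answered"]]

# Response rate: share of questions the model attempted at all.
response_rate = len(answered) / len(results)

# Hallucination rate per topic, over answered questions only.
per_topic = defaultdict(lambda: [0, 0])  # topic -> [hallucinations, answers]
for r in answered:
    stats = per_topic[r["topic"]]
    stats[0] += int(r["hallucinated"])
    stats[1] += 1

print(f"response rate: {response_rate:.0%}")
for topic, (bad, total) in per_topic.items():
    print(f"{topic}: {bad / total:.0%} hallucination rate")
```

Note that the two numbers interact: a model that skips hard questions can show a lower hallucination rate without actually being more reliable.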

The researchers note that hallucinations are problematic when users trust the chatbot’s output. “The main takeaway from our study is that we cannot yet fully trust the output of model generations,” researcher Wenting Zhao told TechCrunch. “At present, even the best models can generate hallucination-free text in only 35 percent of cases.”

Please visit the TechCrunch article for more information.