Wednesday, July 24, 2024

When OpenAI introduced its ChatGPT chatbot, people were impressed by its humanlike responses and its ability to discuss a wide range of topics. It soon became apparent, however, that chatbots of this kind often generate false information. Google and Microsoft released similar chatbots that also produced inaccurate answers, and ChatGPT famously supplied fake court cases that ended up in a legal brief.

A start-up called Vectara, founded by former Google employees, is now studying how often chatbots invent information. Its research suggests that even in narrow tasks, chatbots hallucinate, or make things up, at least 3% of the time, with rates reaching as high as 27%. That behavior is especially concerning for sensitive material such as court documents or medical records. Because chatbots can respond to a prompt in a near-limitless number of ways, the exact frequency of hallucination cannot be measured definitively, so Vectara's researchers designed experiments around a constrained task: summarizing news articles. Even in that setting, the chatbots consistently invented information. The research also showed that hallucination rates vary widely among AI companies, with OpenAI's technology having the lowest rate, at around 3%, and Google's PaLM chat the highest, at 27%.

Vectara, which offers a service that retrieves information from a company's private collection of files, says it wants to raise awareness about the potential inaccuracy of chatbot-generated information, including from its own service. The researchers hope their findings will push the industry to address hallucination. While companies like OpenAI and Google are working to minimize the problem, it remains unclear whether hallucination can ever be eliminated entirely.
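To make the figures above concrete, a hallucination rate in a summarization benchmark is simply the fraction of generated summaries judged to contain claims unsupported by the source article. The sketch below illustrates that calculation in Python; it is not Vectara's actual methodology or code, and the record structure, model names, and sample data are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical evaluation records: each pairs a source article with a
# model-generated summary and a judgment (human or model-assisted) of
# whether the summary introduced facts not present in the article.
@dataclass
class SummaryEval:
    article_id: str
    model: str
    is_hallucinated: bool  # True if the summary contains unsupported claims


def hallucination_rate(evals: list[SummaryEval], model: str) -> float:
    """Fraction of a model's summaries judged to contain invented information."""
    scored = [e for e in evals if e.model == model]
    if not scored:
        return 0.0
    return sum(e.is_hallucinated for e in scored) / len(scored)


if __name__ == "__main__":
    # Toy data for illustration only; real benchmarks use hundreds of articles.
    sample = [
        SummaryEval("a1", "model-a", False),
        SummaryEval("a2", "model-a", True),
        SummaryEval("a3", "model-a", False),
        SummaryEval("a1", "model-b", True),
        SummaryEval("a2", "model-b", True),
        SummaryEval("a3", "model-b", False),
    ]
    for m in ("model-a", "model-b"):
        print(f"{m}: {hallucination_rate(sample, m):.0%} of summaries hallucinated")
```

Constraining the task to summarizing a supplied article is what makes this measurable at all: the source text gives reviewers a fixed ground truth to check each summary against, which is not possible for open-ended conversation.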
