Although AI models are advanced and can do extraordinary things, they are still prone to making mistakes and producing incorrect answers, known as hallucinations.
All of the major AI chatbots, including ChatGPT and Google Bard, are prone to these hallucinations. Both OpenAI and Google even include disclosures that their chatbots may produce incorrect information.
"ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers," says OpenAI in a ChatGPT blog post.
This generation of false information has raised widespread concerns about the spread of misinformation and its potential harms.
In a new research post, OpenAI shares that it may have found a way to make AI models act more logically and avoid hallucinations.
OpenAI trained a model that is capable of solving complex mathematical problems through "process supervision," a method that provides feedback for each individual step as opposed to "outcome supervision," which provides feedback on an end result.
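The difference between the two approaches can be sketched in a few lines of Python. This is an illustrative toy, not OpenAI's actual implementation: the function names, the reward values, and the arithmetic step checker are all hypothetical, chosen only to show that outcome supervision yields one label for the end result while process supervision labels each intermediate step.

```python
# Illustrative sketch (not OpenAI's implementation): contrast outcome
# supervision (one label for the final answer) with process supervision
# (one label per individual reasoning step).

def outcome_feedback(final_answer, correct_answer):
    """Outcome supervision: a single reward based only on the end result."""
    return [1.0 if final_answer == correct_answer else 0.0]

def process_feedback(steps, step_checker):
    """Process supervision: one reward per individual reasoning step."""
    return [1.0 if step_checker(step) else 0.0 for step in steps]

# Toy example: a two-step arithmetic solution whose second step is wrong
# (12 + 5 is 17, not 18).
steps = ["3 * 4 = 12", "12 + 5 = 18"]

def step_checker(step):
    # Hypothetical checker: re-evaluate the arithmetic in each step.
    lhs, rhs = step.split(" = ")
    return eval(lhs) == int(rhs)

print(outcome_feedback(final_answer=18, correct_answer=17))  # [0.0]
print(process_feedback(steps, step_checker))                 # [1.0, 0.0]
```

Under outcome supervision the model only learns that the whole solution failed; under process supervision it learns that step one was sound and the error entered at step two, which is the finer-grained signal the research relies on.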