Despite their impressive abilities, AI chatbots like ChatGPT remain highly unpredictable and difficult to tame – they often slip past their guardrails, producing misinformation or rambling, nonsensical output. This phenomenon is referred to as AI “hallucination,” and OpenAI has finally announced that it’s doing something about it.
The ChatGPT maker has revealed a new strategy for fighting hallucinations in which AI models are rewarded for each correct step of reasoning on the way to an answer, an approach called “process supervision.” This differs from “outcome supervision,” the current approach, where the reward is given only for a correct final conclusion.
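To make the distinction concrete, here is a minimal, hypothetical sketch in Python of the two reward schemes. The `Step` class, the correctness flags, and the example values are illustrative assumptions for this article, not OpenAI’s implementation; in practice, the per-step judgments would come from a trained reward model rather than hand-set booleans.

```python
from dataclasses import dataclass


@dataclass
class Step:
    """One reasoning step in a model's chain of thought (illustrative only)."""
    text: str
    is_correct: bool  # in practice, judged by a learned reward model


def outcome_reward(final_answer: str, gold_answer: str) -> float:
    """Outcome supervision: a single reward based only on the final conclusion."""
    return 1.0 if final_answer == gold_answer else 0.0


def process_rewards(steps: list[Step]) -> list[float]:
    """Process supervision: feedback for each individual reasoning step."""
    return [1.0 if step.is_correct else 0.0 for step in steps]


if __name__ == "__main__":
    # A toy worked example: simplifying sqrt(48).
    steps = [
        Step("48 = 16 * 3", True),
        Step("sqrt(48) = 4 * sqrt(3)", True),
        Step("4 * sqrt(3) is approximately 6.93", True),
    ]
    print(outcome_reward(final_answer="6.93", gold_answer="6.93"))  # 1.0
    print(process_rewards(steps))  # [1.0, 1.0, 1.0]
```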
Process supervision could lead to more explainable AI, according to researchers, since the strategy follows a more human-like chain of thought. OpenAI adds that mitigating hallucinations is a critical step towards creating AGI, or intelligence capable of understanding the world as well as any human.
OpenAI’s blog post provides multiple mathematical examples demonstrating the improvements in accuracy that process supervision brings. However, the company states that it’s “unknown” how well the approach will work beyond the domain of mathematics, adding that it will explore its impact in other domains.
OpenAI has prominently warned users against blindly trusting ChatGPT from the start, with the chatbot’s interface displaying a disclaimer that reads, “ChatGPT may produce inaccurate information about people, places, or facts.”
The company has once again acknowledged these shortcomings in its report: “Even state-of-the-art models are prone to producing falsehoods — they exhibit a tendency to invent facts in moments of uncertainty. These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution. Detecting and mitigating hallucinations is essential to improve reasoning capabilities.”
However, some experts and critics argue that OpenAI’s measures are not yet sufficient and that more transparency and regulation are needed.