OpenAI introduces a new strategy to fight ChatGPT hallucinations

Posted on 02/06/2023

Despite their impressive abilities, AI chatbots like ChatGPT remain highly unpredictable and difficult to tame: they often veer past their guardrails, producing misinformation or output that can only be described as nonsensical. This phenomenon is referred to as AI “hallucination,” and OpenAI has announced that it is finally doing something about it.

The ChatGPT maker has revealed a new strategy for fighting hallucinations: in an approach called “process supervision,” AI models are rewarded for each correct step of reasoning on the way to an answer. This differs from “outcome supervision,” the current approach, in which the reward is given only for a correct final conclusion.
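
To make the distinction concrete, the following is a minimal Python sketch of the two reward schemes. It is an illustration only, not OpenAI’s implementation; grade_step and grade_answer are hypothetical stand-ins for whatever verifier (a learned reward model or human labeler) actually scores the reasoning.

# Illustrative sketch only; not OpenAI's code. grade_step and
# grade_answer are hypothetical placeholders for a real verifier.

def grade_step(step: str) -> bool:
    """Hypothetical check that a single reasoning step is correct."""
    return "error" not in step  # placeholder logic for this sketch

def grade_answer(answer: str) -> bool:
    """Hypothetical check that the final answer is correct."""
    return answer == "42"  # placeholder logic for this sketch

def outcome_supervision_reward(steps: list[str], answer: str) -> float:
    # One reward signal per solution, based only on the final conclusion.
    return 1.0 if grade_answer(answer) else 0.0

def process_supervision_reward(steps: list[str], answer: str) -> float:
    # One reward signal per step, so a single logical error is
    # penalized where it occurs rather than only at the end.
    step_rewards = [1.0 if grade_step(s) else 0.0 for s in steps]
    final_reward = 1.0 if grade_answer(answer) else 0.0
    return (sum(step_rewards) + final_reward) / (len(steps) + 1)

steps = ["x = 84 / 2", "x = 42"]
print(outcome_supervision_reward(steps, "42"))  # 1.0
print(process_supervision_reward(steps, "42"))  # 1.0 (every step correct)

The practical difference is the granularity of feedback: outcome supervision produces one signal per solution, while process supervision localizes credit or blame to individual steps.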

Process supervision could lead to more explainable AI, according to researchers, since the strategy follows a more human-like chain of thought. OpenAI adds that mitigating hallucinations is a critical step towards creating AGI, that is, intelligence capable of understanding the world as well as any human.

OpenAI’s blog post provides multiple mathematical examples demonstrating the accuracy improvements that process supervision brings. However, the company states that it is “unknown” how well process supervision will work beyond the domain of mathematics, adding that it will explore its impact in other domains.

OpenAI has prominently warned users against blindly trusting ChatGPT right from the start, with the bot’s interface presenting a disclaimer that reads, “ChatGPT may produce inaccurate information about people, places, or facts.”

The company has once again acknowledged these shortcomings in its report: “Even state-of-the-art models are prone to producing falsehoods—they exhibit a tendency to invent facts in moments of uncertainty. These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution. Detecting and mitigating hallucinations is essential to improve reasoning capabilities.”

However, some experts and critics argue that the measures OpenAI has taken so far are insufficient, and that more transparency and regulation are needed.
