"Don't Trust It Too Much": OpenAI CEO Sam Altman Acknowledges ChatGPT's Fallibility

Sam Altman, the CEO of OpenAI, has advised users to exercise caution when using ChatGPT. He emphasised that while the AI is powerful, it often generates false information and should not be trusted without verification. Speaking on OpenAI's official podcast, Altman noted the surprising trust users place in ChatGPT despite its limitations. "People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates," he said. "It should be the tech that you don't trust that much."

Altman's remarks have sparked discussions among tech enthusiasts and regular users alike. Many rely on ChatGPT for various tasks such as writing, research, and parenting advice. However, Altman cautioned that ChatGPT can make convincing but incorrect claims and should be used with care.

Understanding AI Limitations

ChatGPT operates by predicting the next word in a sentence based on patterns from its training data. It lacks human-like understanding and sometimes produces inaccurate or fabricated information, a phenomenon known as "hallucination" in AI terms. Altman highlighted the need for transparency and managing user expectations by stating, "It's not super reliable." He stressed honesty about these limitations.
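The next-word prediction described above can be illustrated with a toy sketch. This is not how ChatGPT actually works internally (real models use neural networks trained on vast corpora); it is only a minimal frequency-counting analogy showing the core task of guessing the most likely next word from prior patterns. The corpus and function names here are illustrative.

```python
from collections import Counter, defaultdict

# Toy analogy for next-word prediction: count which word follows which
# in a tiny corpus, then predict the most frequent successor. The model
# has no understanding of meaning -- only patterns of co-occurrence,
# which is why pattern-based generation can produce fluent but wrong text.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The sketch also hints at why hallucination happens: the predictor always produces *something* statistically plausible, whether or not it is true.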

Despite its flaws, millions use ChatGPT daily. Altman acknowledged its popularity but warned against overreliance on its answers without scrutiny. This sentiment echoes ongoing debates within the AI community about the technology's reliability.

New Features and Concerns

Altman also discussed upcoming features for ChatGPT, including persistent memory and potential ad-supported models. These enhancements aim to improve personalisation and monetisation but have raised concerns about privacy and data usage.

Geoffrey Hinton, known as the "godfather of AI," has also shared his views on AI's reliability. In an interview with CBS, Hinton admitted that despite warning about superintelligent AI dangers, he tends to trust GPT-4 more than he should. "I tend to believe what it says, even though I should probably be suspicious," Hinton confessed.

A Cautionary Example

To illustrate GPT-4's limitations, Hinton tested it with a simple riddle: "Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?" GPT-4 answered incorrectly; the correct answer is one, since Sally herself is one of the two sisters each brother has. "It surprises me it still screws up on that," Hinton remarked, but expressed hope that future models like GPT-5 might perform better.
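The riddle's logic reduces to simple arithmetic, spelled out here step by step:

```python
# Hinton's riddle, worked out explicitly.
brothers = 3                      # Sally has three brothers
sisters_per_brother = 2           # each brother has the same two sisters
total_girls = sisters_per_brother # the brothers' sisters are all the girls in the family
sallys_sisters = total_girls - 1  # exclude Sally herself

print(sallys_sisters)  # 1
```

The trap is treating the counts as independent: the number of brothers is irrelevant, because all three share the same two sisters, one of whom is Sally.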

Both Altman and Hinton agree that while AI can be incredibly useful, it shouldn't be seen as an infallible source of truth. As AI becomes more integrated into everyday life, these warnings remind us to trust but verify information provided by such technologies.
