Elon Musk Praises ChatGPT 5 for Honest ‘I Don’t Know’ Response – A Big Step in AI Trust

Elon Musk praises ChatGPT 5 for admitting 'I don't know' after 34 seconds, marking a milestone in AI reliability.

Aug 20, 2025 - 09:54

In the fast-moving world of artificial intelligence, every update sparks a new debate. The latest one came after ChatGPT 5, OpenAI’s newest model, gave a surprisingly honest response. Instead of offering an inaccurate or “hallucinated” answer, it admitted, “I don’t know – and I can’t reliably find out.” This response, after 34 seconds of deep processing, left many users stunned. Interestingly, even Elon Musk — who has often criticized OpenAI while pushing his own AI model Grok — praised this move as “impressive.”

Let’s break down what happened, why it matters, and how it changes the conversation around AI trust and reliability.

ChatGPT 5 Takes a Different Approach

Instead of producing a half-true or misleading answer, ChatGPT 5 chose honesty. This is a big shift from earlier AI behavior, where models sometimes generated confident but incorrect responses.

Why Elon Musk Found It “Impressive”

Elon Musk, founder of xAI and the man behind Grok, rarely praises OpenAI. But this time, he admitted that ChatGPT’s response was significant. His reason was clear: it’s better for AI to admit its limits than spread misinformation.

Tackling the Problem of “Hallucination”

AI hallucination — when a chatbot makes up facts — has been one of the biggest concerns in the industry. By openly stating it doesn’t know, ChatGPT 5 reduces the risk of misleading users. This builds long-term trust between humans and machines.

OpenAI’s Consistent Efforts Since 2022

Since the first version of ChatGPT was released in late 2022, OpenAI has been working to minimize errors. ChatGPT 5 is the latest step in this journey. Although the company admits there is still a 10% chance of mistakes, the shift toward transparency marks clear progress.

Nick Turley’s Clear Message to Users

Nick Turley, head of ChatGPT, emphasized that the tool should not replace human experts. He advised users to always double-check facts. According to him, ChatGPT should be used as a second opinion — not the final source.

Users Appreciate Transparency

Many users on X (formerly Twitter) said the response made them trust ChatGPT more. Knowing that the chatbot won’t bluff its way through tough questions makes people feel safer relying on it for sensitive tasks.

The Bigger Goal: Artificial General Intelligence (AGI)

The ultimate aim for OpenAI and others in the AI race is AGI — machines with human-like intelligence. While AGI is still a concept for the future, small steps like this bring AI closer to being responsible, safe, and human-like in its reasoning.

Competition Heats Up Between ChatGPT and Grok

Elon Musk’s Grok chatbot is marketed as a smarter, bolder alternative to ChatGPT. That Musk would publicly acknowledge a rival’s strength shows how closely the two camps are watching each other, as both companies race to set new benchmarks for how AI interacts with humans.

ChatGPT 5 Expands Its Reach

Since its release on August 7, ChatGPT 5 has grown its user base quickly. OpenAI even launched ChatGPT Go — a budget-friendly subscription plan for India at ₹399 per month — making advanced AI accessible to more users.

What This Means for the Future

The “I don’t know” moment highlights a turning point in AI evolution. For users, it signals that big tech companies are serious about safety, ethics, and trust. For competitors, it raises the bar — honesty might just be the new gold standard in AI communication.

Conclusion

By choosing to admit uncertainty rather than risk inaccuracy, ChatGPT 5 has changed how we see AI. Elon Musk’s rare praise adds weight to this shift. As the AI race continues, transparency may become just as important as intelligence.