AI can be full of crippling self-doubt just like humans after being taught to say ‘I don’t know’

Ever asked ChatGPT what the meaning of life is on a particularly slow day at work?

It may try to give you an answer by confidently reinforcing your own worldview, mimicking human emotional pulls, or just making something up entirely in a so-called ‘hallucination’.

Hallucinations occur when the AI is incentivised to guess rather than simply admit it doesn’t know the answer, which can be particularly dangerous in a medical context.

This educated guessing has damaged the technology’s reputation amid reliability concerns, with the AI model itself admitting, when asked, that it can be ‘confidently wrong’.

This ‘overconfidence’ has seen TikTokers laugh openly when AI refuses to say whether a human’s silly hat looks ridiculous, or remains steadfast in its belief that December is spelt with an X.

But this hubris could be fatal, especially considering we are relying on AI models to drive us around or spot health problems.

Not everything chatbots say is correct (Picture: Getty)

Now, researchers have developed a solution that enables AI to recognise when it is dealing with unfamiliar or unseen information.

They say they took cues from the way the human brain handles the same problem.

Mimicking brain waves, they set the AI’s initial confidence to a low level, close to chance, significantly reducing its overconfidence bias.

Their training method helps models first learn the state of ‘I don’t know anything yet’ – like a newborn baby – before actual learning begins.
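The study’s warm-up procedure isn’t described here in enough detail to reproduce, but the core idea of starting at chance-level confidence can be sketched with a toy softmax classifier: near-zero output weights make every class roughly equally likely, so the untrained model’s ‘confidence’ sits near chance rather than near certainty. Every name and number below is illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_classes = 10
features = rng.normal(size=(1, 64))  # one input's feature vector

# Overconfident start: large random output weights push the softmax
# towards one class even though the model has learned nothing yet.
w_big = rng.normal(scale=1.0, size=(64, n_classes))
conf_big = softmax(features @ w_big).max()

# 'I don't know anything yet' start: near-zero output weights keep
# every class probability close to chance (1 / n_classes).
w_small = rng.normal(scale=1e-3, size=(64, n_classes))
conf_small = softmax(features @ w_small).max()

print(f"chance level:            {1 / n_classes:.3f}")
print(f"large-weight confidence: {conf_big:.3f}")
print(f"small-weight confidence: {conf_small:.3f}")
```

The point of the toy example is only the contrast: the second model begins in a calibrated ‘I don’t know’ state, which is what the warm-up training aims to establish before real learning starts.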

‘While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed a clear improvement in their ability to lower confidence and recognise that they “do not know”,’ researchers from the Korea Advanced Institute of Science and Technology explained.

‘This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans,’ Se-Bum Paik, an author of the study published in the journal Nature Machine Intelligence, said.

‘This is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer.’
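One common way for a system to act on that uncertainty (a standard calibration pattern, not necessarily the study’s mechanism) is a simple abstention rule: if the model’s top class probability falls below a threshold, it says ‘I don’t know’ instead of guessing.

```python
def answer_or_abstain(probs, threshold=0.8):
    """Return the index of the most likely class, or None
    ('I don't know') when the top probability is below the threshold."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    return top if probs[top] >= threshold else None

print(answer_or_abstain([0.05, 0.90, 0.05]))  # confident -> answers class 1
print(answer_or_abstain([0.40, 0.35, 0.25]))  # uncertain -> None
```

The threshold of 0.8 is an arbitrary illustrative choice; a rule like this only works if the underlying probabilities are well calibrated, which is exactly what the warm-up training targets.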

AI can completely hallucinate by:

  • Confidently answering a question when the correct answer is ‘I don’t know’
  • Inventing a book citation that doesn’t exist
  • Claiming a person said something they never said
  • Mixing up dates or facts
Chatbots can learn huge amounts of information, but still make up answers (Picture: Getty)

What doesn’t ChatGPT know?

Metro asked the AI itself ‘what don’t you know?’, and it appears the answer is ‘quite a lot’. This is what it said:

1. Anything beyond my knowledge cutoff (2025)
I don’t automatically know events, discoveries, or changes that happened after that—unless you tell me or I look them up. So very recent news, new laws, or current prices can be out of date.

2. Your personal life (unless you share it)
I don’t know your thoughts, experiences, identity, or situation unless you explicitly tell me. I’m not reading minds or accessing private data.

3. Unobservable or unknowable facts
Some things just aren’t known to anyone—like:

  • The exact future (I can only estimate probabilities)
  • Unsolved mysteries (e.g., what fully caused certain historical events)
  • Questions that don’t have clear answers (“What’s the meaning of life?” depends on perspective)

4. Real-time information (by default)
I don’t inherently “see” live data like:

  • Current weather
  • Stock prices
  • Live sports scores
    I can get these if I use tools, but I don’t constantly track them.

5. Hidden or private data
I don’t have access to:

  • Personal databases
  • Private company records
  • Your files, messages, or accounts (unless you share them here)

6. Perfect accuracy
Even within what I do know, I can:

  • Misinterpret a question
  • Give outdated or incomplete info
  • Be confidently wrong sometimes

OpenAI is currently valued at $852 billion.

Several lawsuits have sought damages from AI and tech companies over the influence of chatbots and social media on loved ones’ mental health.
