Podcast: Is 2023 an AI Hallucination Odyssey?

DOS Won’t Hunt explores how Stanley Kubrick’s prescient depiction of an overly confident HAL 9000 relates to AI hallucinations seen today.

Joao-Pierre S. Ruth, Senior Editor

September 5, 2023

There are healthy reasons to question the reliability of generative AI’s answers, no matter how confident the model may seem.

This week, VentureBeat covered Gleen AI raising $4.9 million, noting the startup is working on its own fix for AI hallucinations. According to the VentureBeat story, Gleen AI's approach is to limit the AI to working only with factual data: information must clear certain thresholds of factuality before it is fed to the model. In other words, curate the data before it has a chance to infect the model with digital shenanigans.
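Gleen AI has not published the details of its pipeline, but the general "curate before you feed the model" pattern the story describes can be sketched in a few lines of Python. Everything below -- the Document class, the factuality_score stand-in, the curate filter, and the threshold value -- is hypothetical and only illustrates the idea of gating data on a factuality check before it ever reaches the model.

# Illustrative sketch only; Gleen AI's actual implementation is not public.
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str
    verified: bool  # e.g., confirmed against a trusted record

FACTUALITY_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

def factuality_score(doc: Document) -> float:
    # Stand-in scorer: trust verified sources, discount everything else.
    return 1.0 if doc.verified else 0.3

def curate(docs: list[Document]) -> list[Document]:
    # Keep only documents that clear the factuality threshold
    # before they are handed to the model as context.
    return [d for d in docs if factuality_score(d) >= FACTUALITY_THRESHOLD]

if __name__ == "__main__":
    corpus = [
        Document("company-handbook", "Official return policy text", verified=True),
        Document("random-forum-post", "I heard the policy changed", verified=False),
    ]
    trusted = curate(corpus)
    print([d.source for d in trusted])  # only the verified handbook survives

In a real deployment the scoring step would be far more involved (source verification, human review, retrieval from vetted knowledge bases), but the gate sits in the same place: between the raw data and the model.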

Hallucination in AI is a growing concern, with regulators and companies looking for ways to mitigate the issue as the technology becomes increasingly ubiquitous. In July, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT over concerns that the platform was violating consumer protection laws and that the model was generating false and misleading statements.

Back in April, Reuters reported that a mayor in Australia, Brian Hood, was considering filing a defamation lawsuit against OpenAI because ChatGPT spat out false claims that he had served prison time for bribery. Hood wanted OpenAI to correct the issue, though there have been no further updates on whether he pursued the case or whether the misleading statements were addressed.

The trouble with AI hallucination, when AI churns out factually incorrect statements, is that the model can wholeheartedly “believe” its response was spot on. The false claims about Hood demonstrate how the phenomenon opens the door to damaging reputations and misleading the public.

When generative AI started to turn heads early this year, AI hallucination was already fairly evident. For example, some media outlets had been using the technology to produce articles that later had to be corrected for false information.

For the record, InformationWeek has established a policy on generative AI and our newsroom.

Now that we are months into the generative AI hype cycle, there are concerns that some AI platforms are getting “dumber,” generating wrong answers to math problems they solved correctly just months earlier.

If AI is very confident of its work and the people who rely on it do not catch the flaws in time, this might become a HAL 9000 problem.

Stanley Kubrick’s groundbreaking movie “2001: A Space Odyssey,” produced all the way back in 1968, introduced audiences to the granddaddy of all AI and robot overlord nightmares -- a computer called the HAL 9000 that was just so eager to fulfill a mission in space.

In the movie, HAL was described as being able to reproduce, or mimic, human brain functions with greater speed and reliability. This allegedly reliable computer was so enthusiastic about completing its task that it committed multiple homicides along the way. Even when confronted with the possibility that it was wrong, HAL doubled down and tried to gaslight the surviving crew into believing that the humans were the problem.

Confidence, or hubris, was rather evident early on when HAL responded to a question about its abilities this way: “The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

Prepare for a mission to Jupiter with a computer that is determined to finish its job, no matter the cost, as DOS Won’t Hunt looks at AI hallucination and “2001: A Space Odyssey.”

Listen to the full podcast here

What to Read Next:

Generative AI Faces an Existential IP Reckoning of Its Own Making

Podcast: Ghosts Out of the Shell -- AI vs Human Hallucinations

FTC Launches Investigation Into ChatGPT: Report

About the Author(s)

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

