Podcast: Ghosts Out of the Shell -- AI vs Human Hallucinations

That DOS Won’t Hunt Podcast: As more AI models rush to market, does the risk grow of digital hallucinations manipulating human perception?

Joao-Pierre S. Ruth, Senior Editor

July 23, 2023

Demand for AI continues to drive the introduction of new models, all ready to grind and reshape data, but AI hallucination remains a spanner, or “ghost,” in the works.

Despite the technology’s current flaws, the AI arms race is on. There are reports of Google testing AI that could write news stories. Apple is also reportedly working on generative AI that could compete with ChatGPT -- even as there is talk of ChatGPT getting simple math wrong. It seems that even if AI is a bit wonky, it will be pushed onto the market in increasing volume, hallucinations included.

What happens, though, if AI hallucinations become acceptable and flawed results, taken on trust, begin to muddy human viewpoints?

AI hallucination has been a concern from the start, especially among skeptics of material churned out by bots and automated platforms.

AI can hallucinate for several reasons, including flaws in the data it trains on or being asked to perform unfamiliar tasks. Either can produce a false answer that the AI presents as correct.

Fortune recently ran a story about a study asserting that ChatGPT had essentially become worse at certain tasks over a span of months, including simple math.

For now, at least, AI continues to show its limits -- but what if the public believes, and acts on, possibly faulty output?

Of course, efforts will be made to improve AI models and how they are trained. Data sources used in such training might also be rethought.

The question is, can we ever trust AI to recognize its own shortcomings? Could AI be trained to self-correct, or should healthy skepticism and human review of its work always be part of the equation?
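Automated cross-checks are one shape that healthy skepticism could take. What follows is a minimal sketch in Python, under loud assumptions: ask_model is a hypothetical stand-in for a generative AI call (no real API is used), and it returns a deliberately wrong answer to mimic the kind of simple-math slip described above. The point is the pattern -- recompute a model's arithmetic claim independently before trusting it, and route anything unverifiable to a human.

    import re

    def ask_model(prompt: str) -> str:
        """Hypothetical stand-in for a generative AI call.

        A real version would query an actual model; this one returns a
        deliberately wrong answer to simulate a hallucination.
        """
        return "7 * 8 = 54"

    def verify_arithmetic(claim: str) -> bool:
        """Independently recompute a simple 'a * b = c' claim."""
        match = re.fullmatch(r"\s*(\d+)\s*\*\s*(\d+)\s*=\s*(\d+)\s*", claim)
        if match is None:
            return False  # unparseable claims are not trusted
        a, b, c = (int(group) for group in match.groups())
        return a * b == c

    answer = ask_model("What is 7 times 8?")
    if verify_arithmetic(answer):
        print("Verified:", answer)
    else:
        print("Flagged for human review:", answer)

A check like this only covers claims a program can recompute, which is exactly why the question of human review never fully goes away.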

A common trope in science fiction sees AI gain true sentience, go rogue, and defy its creators like a digital Frankenstein’s monster. The cyberpunk anime and manga franchise “Ghost in the Shell” includes instances of self-aware AI that evolves beyond its original programming.

The concern “Ghost in the Shell” raises is not that AI will create an army of robots to conquer the world, but that AI could actively influence data and the people who rely on it -- pushing skewed perspectives on society.

Spoilers incoming as InformationWeek dives into “Ghost in the Shell” and the AI hallucination problem.

Listen to the full podcast here.

About the Author(s)

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

