Hallucination
When an AI model generates output that sounds plausible but is factually incorrect or fabricated.
In AI, a hallucination occurs when a language model produces output that is confident and coherent but factually wrong, fabricated, or unsupported by its training data. Hallucinations are an inherent characteristic of current LLM architectures: the models are optimized to produce fluent text, not necessarily accurate text.
Hallucinations carry particular risk in enterprise contexts. If an employee uses AI-generated content in a client deliverable, regulatory filing, or internal report, a hallucination becomes the organization's mistake. AI policies should therefore address how AI-generated content is verified before use.
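As a rough illustration of what "verified before use" can mean in practice, the sketch below shows one possible pre-publication gate: AI-generated claims without at least one attached source are routed to a human reviewer rather than used as-is. The names (`Claim`, `verification_gate`) and the single-rule policy are hypothetical assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch of a pre-publication verification gate for AI-generated text.
# All names and the policy rule are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str                                          # a factual statement extracted from AI output
    sources: list[str] = field(default_factory=list)   # citations the author attached


def verification_gate(claims: list[Claim]) -> tuple[list[Claim], list[Claim]]:
    """Split AI-generated claims into those cleared for use and those
    that must go to a human reviewer before publication."""
    cleared, needs_review = [], []
    for claim in claims:
        # Illustrative policy rule: a claim without at least one
        # verifiable source is never used as-is.
        if claim.sources:
            cleared.append(claim)
        else:
            needs_review.append(claim)
    return cleared, needs_review


if __name__ == "__main__":
    draft = [
        Claim("Revenue grew 12% in Q3.", sources=["internal-finance-report-q3"]),
        Claim("The regulator approved the filing last week."),  # unsupported claim
    ]
    cleared, flagged = verification_gate(draft)
    print(f"{len(cleared)} claim(s) cleared, {len(flagged)} sent for human review")
```

A real policy would pair a gate like this with substantive checks (source review, subject-matter sign-off); the point is simply that verification is an explicit step, not an afterthought.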