Yannic Explains: Why Hallucinations Happen

In this video, Yannic Kilcher, PhD, co-founder and CTO at DeepJudge, explains why large language models (LLMs) sometimes generate answers that sound convincing but aren’t true. Hallucinations happen because LLMs don’t “know” facts—they predict the most likely text based on their training data. That works well for stable, well-documented knowledge (like the height of Mount Everest) but breaks down with outdated, conflicting, or missing information. While hallucinations can’t be fully eliminated, better context and stronger search dramatically improve accuracy.
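The "better context and stronger search" point is often implemented as retrieval-augmented generation: search a document store first, then put the retrieved passages into the prompt so the model answers from that context instead of guessing from memory. The sketch below is a hypothetical illustration only (toy keyword retriever, made-up documents, and a placeholder for the actual LLM call); it is not DeepJudge's implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Documents, retriever, and the LLM call are hypothetical placeholders.

from collections import Counter

DOCUMENTS = [
    "The engagement letter for Acme Corp was signed on 12 March 2021.",
    "Mount Everest is 8,849 metres tall.",
    "The Acme Corp matter was closed in 2023 after settlement.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping terms between the query and a document (toy retriever)."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(doc.lower().split())
    return sum((q_terms & d_terms).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that pins the model's answer to the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return (
        "Answer using only the context below. If the context does not contain "
        "the answer, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # In a real system the prompt would be sent to an LLM, e.g. answer = call_llm(prompt).
    print(build_prompt("When was the Acme Corp engagement letter signed?"))
```

Grounding the model this way does not eliminate hallucinations, but it gives the model the relevant facts at answer time and an explicit instruction to decline when the context is missing, which is where stronger search pays off.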
