Hallucinations won't disappear, but roughly 80% of them come from four avoidable mistakes.

The four levers

  1. Give the model explicit permission to say "I don't know" (first sketch below).
  2. Ground answers with mandatory citations (enforced in the same sketch).
  3. Separate task from knowledge: the LLM reasons, the database remembers (second sketch).
  4. Tune temperature per task; use 0 for factual work (third sketch).
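
Levers 1 and 2 fit in a single system prompt plus a cheap output check. A minimal sketch, assuming an OpenAI-style chat client; the model name and the `grounded_answer` helper are placeholders, not a fixed recipe:

```python
# Levers 1 + 2: permission to abstain, citations required, uncited answers rejected.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer only from the provided sources. "
    "Cite every claim as [source_id]. "
    "If the sources do not contain the answer, reply exactly: I don't know."
)

def grounded_answer(question: str, sources: dict[str, str]) -> str:
    context = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # factual task, see lever 4
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    answer = resp.choices[0].message.content or ""
    # Lever 2 enforced in code: no citation and no abstention means no answer.
    if "I don't know" not in answer and not any(f"[{sid}]" in answer for sid in sources):
        return "I don't know"
    return answer
```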
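
Lever 3 means the model never answers from memory: a database owns the facts, and the prompt carries only what was retrieved. A sketch with a hypothetical SQLite catalog; the file, table, and `facts_for` names are made up:

```python
# Lever 3: the DB remembers, the LLM reasons. Only retrieved rows reach the prompt.
import sqlite3

def facts_for(product_id: str) -> dict[str, str]:
    con = sqlite3.connect("catalog.db")  # hypothetical database file
    try:
        row = con.execute(
            "SELECT name, price, stock FROM products WHERE id = ?", (product_id,)
        ).fetchone()
    finally:
        con.close()
    if row is None:
        return {}  # nothing retrieved -> the model must say "I don't know"
    name, price, stock = row
    return {"db": f"{name} costs {price} EUR and {stock} units are in stock."}

# Usage: grounded_answer("What does product 42 cost?", facts_for("42"))
```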
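
Lever 4 works best as an explicit lookup rather than one global default. The values below are assumed starting points, not universal constants:

```python
# Lever 4: temperature chosen per task type; 0 where facts matter.
TEMPERATURE_BY_TASK = {
    "factual_qa": 0.0,      # deterministic, lowest hallucination risk
    "summarization": 0.2,   # faithful, with slight rephrasing freedom
    "brainstorming": 0.9,   # diversity matters more than precision
}

def temperature_for(task: str) -> float:
    return TEMPERATURE_BY_TASK.get(task, 0.0)  # default to the safe end
```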

Numbers

LLMs don't count; they predict tokens, so character tallies, sums, and totals come out wrong. Always externalize arithmetic via tool use, as in the sketch below.
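
What externalizing looks like in practice, sketched with an OpenAI-style tool-calling API; the model name and the `count_occurrences` tool are placeholders:

```python
# Counting happens in code; the model only routes the request to the tool.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "count_occurrences",
        "description": "Count how often a substring occurs in a text.",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string"},
                "needle": {"type": "string"},
            },
            "required": ["text", "needle"],
        },
    },
}]

def count_occurrences(text: str, needle: str) -> int:
    return text.count(needle)  # exact, unlike a token-level guess

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    tools=TOOLS,
    tool_choice={"type": "function", "function": {"name": "count_occurrences"}},
)
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
print(count_occurrences(**args))  # 3, computed in Python, not by the model
```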