Hallucinations won't disappear, but roughly 80% of them stem from four avoidable mistakes.
The four levers
- Give the model permission to say "I don't know".
- Ground answers with mandatory citations.
- Separate task from knowledge — LLM reasons, DB remembers.
- Tune temperature per task; use 0 for factual queries.
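The four levers can be sketched as a single request builder. This is a minimal, SDK-agnostic illustration, assuming a generic chat-style API; the names `build_request` and `SYSTEM_PROMPT` are hypothetical, not part of any real library.

```python
# Hypothetical sketch: the four levers applied to one chat request.
SYSTEM_PROMPT = (
    "Answer only from the provided context. "          # task/knowledge separation
    "Cite the source id for every claim, e.g. [doc-3]. "  # mandatory citations
    'If the context does not contain the answer, say "I don\'t know".'  # permission
)

def build_request(question: str, context_docs: dict[str, str],
                  factual: bool = True) -> dict:
    """Assemble a chat request: the LLM reasons, the retrieved docs remember."""
    context = "\n".join(f"[{doc_id}] {text}"
                        for doc_id, text in context_docs.items())
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        # Lever 4: temperature 0 for factual work, higher for creative tasks.
        "temperature": 0.0 if factual else 0.7,
    }
```

The payload shape (a `messages` list plus `temperature`) mirrors common chat APIs, but check your provider's reference for the exact fields.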
Numbers
LLMs can't count or do arithmetic reliably. Externalize all numeric work via tool use.
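Externalizing numbers means the model requests a tool and the host runs deterministic code. A minimal sketch of that host side, with hypothetical names (`TOOLS`, `handle_tool_call`) standing in for whatever dispatch your framework provides:

```python
import re

def count_words(text: str) -> int:
    # Deterministic counting in code, never left to the model.
    return len(re.findall(r"\S+", text))

def add(a: float, b: float) -> float:
    # Arithmetic likewise runs in code.
    return a + b

# Registry the host consults when the model emits a tool call.
TOOLS = {"count_words": count_words, "add": add}

def handle_tool_call(name: str, *args):
    """Execute the tool the model requested and return the exact result,
    which is then fed back into the conversation."""
    return TOOLS[name](*args)
```

The model's answer then quotes the tool result verbatim instead of estimating it.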