AI Hallucinations: Amusing at home, unwelcome at work
How we’ve built accuracy and confidence into Coda Brain.
Glenn Jaume
Product Manager at Coda
AI · 6 min read
Why hallucinations don’t cut it in enterprise AI.
When using AI at work, the tolerance for inaccuracy is extremely low; it’s critical to have full confidence in the responses and to be able to verify their truthfulness. Without that trust, the benefits and efficiencies of AI are severely undermined.

Imagine you’re putting together a presentation for your senior leadership, and you need to collect sales numbers for the past quarter. You ask whatever AI you’re using to gather this data for you. Right before the presentation—or, worse, during it—you realize that the AI has hallucinated a few sales deals that don’t actually exist. Or, worse still, you’ve already made decisions based on that false data. Now you’ve lost all trust in the AI. Next time, you’ll hesitate to use it, or you’ll waste tons of time fact-checking its answers, which rather defeats the point of using AI in the first place. Clearly, this isn’t acceptable, and that’s why the bar for accuracy in enterprise AI is so high.

How we’ve built confidence into Coda Brain.
Right from the start of building Coda Brain, our turnkey enterprise AI platform, we knew that trust and accuracy were non-negotiable. There are four features we’ve built into Coda Brain to ensure the responses it gives are accurate and verifiable (a simplified sketch of how they fit together follows the list):

- Increasing relevancy with RAG.
- Showing our working with citations.
- Keeping the human in the loop.
- Protecting security with permissions-awareness.
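To make those ideas concrete, here is a minimal, hypothetical sketch of how permission-aware retrieval (RAG) and citations can work together. Everything in it is illustrative: the `Doc` class, the in-memory `INDEX`, and the keyword scoring stand in for a real embedding index and enterprise permission system. It is not Coda Brain’s actual implementation.

```python
# Conceptual sketch only: permission-aware retrieval plus citations.
# All names and data here are hypothetical, not Coda Brain's code.
from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_users: set[str]  # who is permitted to read this source


# A toy "index" standing in for an enterprise search backend.
INDEX = [
    Doc("sales-q3", "Q3 closed-won deals totalled $1.2M across 14 accounts.",
        {"alice", "bob"}),
    Doc("roadmap-2025", "The 2025 roadmap prioritises enterprise search.",
        {"alice"}),
]


def retrieve(query: str, user: str, k: int = 3) -> list[Doc]:
    """Return the top-k documents the requesting user is allowed to read.
    Keyword overlap stands in for embedding similarity."""
    visible = [d for d in INDEX if user in d.allowed_users]

    def score(d: Doc) -> int:
        return sum(word in d.text.lower() for word in query.lower().split())

    return sorted(visible, key=score, reverse=True)[:k]


def answer(query: str, user: str) -> str:
    """Ground the reply in retrieved sources and cite each one."""
    sources = retrieve(query, user)
    if not sources:
        return "I couldn't find anything you have access to on that topic."
    # A real system would have an LLM synthesise the answer from `sources`;
    # here we simply echo the grounded text with verifiable citations.
    body = " ".join(d.text for d in sources)
    citations = ", ".join(f"[{d.doc_id}]" for d in sources)
    return f"{body} (sources: {citations})"


print(answer("How did Q3 sales go?", user="bob"))
# -> "Q3 closed-won deals totalled $1.2M across 14 accounts. (sources: [sales-q3])"
```

The key design choice the sketch illustrates: permissions are applied at retrieval time, before any text reaches the model, so a user can never get an answer grounded in a document they aren’t allowed to read, and every claim in the reply carries a citation back to its source for easy verification.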