Dakshini℠ was designed with Odia as its primary focus, so accuracy in real conversational settings is one of its most important benchmarks. Because Odia has no established evaluation standards, the team built a custom testing process to measure how well the system understands intent, handles grammar, maintains context, and generates natural responses.
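The four axes named above can be sketched as a simple scoring harness. This is a minimal illustration only: the field names, equal weighting, and pass threshold are assumptions, not Dakshini℠'s actual evaluation rubric.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Evaluation:
    """One reviewer judgment of one response (hypothetical schema)."""
    intent_match: bool   # did the response address the user's intent?
    grammar: float       # 0-1 rating of Odia grammar quality
    context_kept: bool   # was multi-turn context preserved?
    naturalness: float   # 0-1 rating of fluent, natural phrasing

def score(e: Evaluation) -> float:
    """Combine the four axes into one 0-1 score (equal weights assumed)."""
    return mean([float(e.intent_match), e.grammar,
                 float(e.context_kept), e.naturalness])

def accuracy(evals: list[Evaluation], threshold: float = 0.75) -> float:
    """Fraction of responses whose combined score clears the threshold."""
    return sum(score(e) >= threshold for e in evals) / len(evals)
```

A harness like this makes the headline accuracy figure reproducible: each response becomes one `Evaluation` record, and the aggregate is just the pass rate over the test set.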
In Odia, Dakshini℠ consistently achieves 89–93% accuracy across different conversation types such as everyday queries, citizen-service questions, factual lookups, and multi-turn dialogues. This level was reached through continuous tuning, dataset refinement, and adjustments to the language-specific adapters. Human reviewers played a key role, evaluating responses for fluency, tone, correctness, and cultural fit. These feedback loops helped the model move closer to how real Odia speakers actually communicate.
For the multilingual component, Dakshini℠ performs strongly across the 75+ supported languages. While Odia remains the most optimized, languages such as Hindi, English, Bengali, Marathi, Tamil, Telugu, Spanish, and French show solid response quality with consistent intent recognition. The model automatically detects the input language and switches modes without requiring manual selection.
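The detect-then-route behavior described above can be sketched as a small dispatch layer. The script-range heuristic, adapter names, and registry below are illustrative assumptions; production systems typically use a trained language-identification classifier rather than Unicode ranges.

```python
# Unicode script blocks used by the toy detector (assumed heuristic).
ODIA_RANGE = range(0x0B00, 0x0B80)        # Odia script block
DEVANAGARI_RANGE = range(0x0900, 0x0980)  # Hindi/Marathi script block

def detect_language(text: str) -> str:
    """Naive script-based detection; returns an ISO 639-1 code."""
    for ch in text:
        if ord(ch) in ODIA_RANGE:
            return "or"
        if ord(ch) in DEVANAGARI_RANGE:
            return "hi"
    return "en"  # fallback for Latin-script input

# Hypothetical per-language adapter registry.
ADAPTERS = {"or": "odia-adapter", "hi": "hindi-adapter", "en": "english-adapter"}

def route(text: str) -> str:
    """Pick a language-specific adapter with no manual selection by the user."""
    return ADAPTERS.get(detect_language(text), ADAPTERS["en"])
```

The point of the sketch is the control flow: detection runs first on every input, so the user never selects a language mode by hand.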
To keep responses factual and prevent hallucinations, Dakshini℠ combines language generation with retrieval. This means every answer is checked against a knowledge source whenever possible, improving reliability and reducing errors. During large-scale tests, the system demonstrated stable response times and maintained accuracy even when handling high user traffic, making it suitable for government portals, enterprise systems, and public-facing applications.
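The retrieval-grounding idea can be sketched in a few lines: answer only from retrieved passages, and refuse when no knowledge source supports an answer. The keyword retriever and refusal message below are toy assumptions; real systems use vector search and condition generation on the retrieved text.

```python
def retrieve(query: str, corpus: dict[str, str]) -> list[str]:
    """Toy keyword-overlap retriever over a document store (assumed)."""
    terms = set(query.lower().split())
    return [doc for doc in corpus.values()
            if terms & set(doc.lower().split())]

def answer(query: str, corpus: dict[str, str]) -> str:
    """Ground every answer in a retrieved passage, or decline."""
    passages = retrieve(query, corpus)
    if not passages:
        # No supporting source found: refuse rather than hallucinate.
        return "I could not verify an answer for that."
    return passages[0]  # a real system would generate from the passages
```

The key design choice is the early return: when retrieval finds nothing, the system declines instead of generating an unsupported answer, which is what keeps hallucinations down.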