
The Role of GenAI Chatbots in Modern Organisations

One of the biggest challenges in rapidly developing and improving GenAI solutions is that their text-based output makes quality hard to measure without long, manual testing processes involving a complicated stakeholder landscape. Finding a way to make quality more objective and measurable can therefore give your organisation a lot of speed in its GenAI transformation. This is not a one-size-fits-all approach; you may need to think about it differently for different types of applications. At ADC, we’ve researched how to do this for chat-based assistants (internal or external).

In this article, you can expect to explore key elements of ADC’s Chatbot Quality Framework, including real-world case study examples and insights into how ADC implements these strategies.


What is ADC's Quality Framework?

At ADC, we have developed a Chatbot Quality Framework that seeks to address the challenge of ensuring chatbot excellence by providing a versatile tool designed to support in-house development, facilitate external benchmarking, and assist in project scoping and execution. Our framework is structured around several categories, each with its own set of criteria and considerations.  

This comprehensive approach ensures that chatbots, whether customer-facing or internal, achieve excellence and set new standards for operational efficiency. By defining key pillars along with specific criteria, we establish what constitutes a high-quality chatbot. Imagine having a chatbot that not only responds accurately but learns from each interaction to improve future responses. To support this, we have developed robust evaluation and scoring methods that ensure complete assessments. While this framework is intended for internal use, its value is evident in our projects, where ensuring chatbot quality is paramount.  

By leveraging this framework, we consistently deliver chatbots that meet stringent quality standards, reinforcing our commitment to excellence in the field of AI-driven solutions. 

Key Elements of the Chatbot Framework

Answer Quality

Evaluating the quality of a chatbot’s answers involves several important factors. A high-quality knowledge base ensures that responses are accurate and factually correct, drawing from reliable sources. Good prompting techniques, such as using clear and specific questions or providing context to guide the chatbot, further enhance the chatbot’s ability to understand user questions and deliver relevant answers. It is crucial that the chatbot conveys information clearly, without distorting or fabricating content. Ensuring consistent responses to similar questions, along with these factors, improves communication standards and ultimately leads to higher answer quality. 
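A minimal sketch of the prompting idea above: grounding the model in retrieved knowledge-base passages and instructing it not to fabricate. The template wording and the example passage are illustrative assumptions, not ADC’s actual prompts.

```python
# Sketch of a grounded prompt template: the model is told to answer only
# from the supplied knowledge-base passages, which discourages fabricated
# or distorted content. Instruction wording is illustrative.

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that restricts the model to the given sources."""
    context = "\n\n".join(f"[Source {i + 1}]\n{p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What is the refund period?",
    ["Refunds are accepted within 30 days of purchase."],
)
```

The same template applied to every query also helps with consistency: similar questions are framed identically for the model, so similar answers become more likely.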

Conversational Ability

To excel in conversation, an AI assistant must effectively integrate several key capabilities. Remembering past interactions is essential for providing contextually relevant answers, as it allows the chatbot to continue conversations seamlessly. When faced with unclear user input, the assistant should proactively ask clarifying questions to ensure understanding, demonstrating its ability to anticipate user needs and offer helpful guidance. To further elevate its conversational prowess, support for multiple languages is key, as it allows a diverse range of users to engage effectively with the chatbot.
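Conversation memory is often implemented as a rolling message list passed back to the model on every turn. A minimal sketch, assuming the common chat-API role convention and an arbitrary trimming threshold:

```python
# Rolling conversation memory: keep recent turns so follow-ups like
# "And on weekends?" can be resolved against earlier messages, while
# trimming old turns to keep the context window bounded.

class Conversation:
    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # One turn = a user message plus an assistant reply.
        self.messages = self.messages[-2 * self.max_turns:]

    def context(self) -> list[dict]:
        """History supplied to the model alongside the new user input."""
        return self.messages

chat = Conversation(max_turns=2)
chat.add("user", "What are your opening hours?")
chat.add("assistant", "We are open 9-17 on weekdays.")
chat.add("user", "And on weekends?")
```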

User Experience

Crafting a positive user experience with a chatbot involves blending adaptability and empathy. The tone of voice should resonate with users, aligning with their preferences and the purpose of the chatbot. It should also demonstrate empathy, recognising user emotions and responding with sensitivity. Clear and concise communication is essential to keep users engaged, ensuring responses are easy to read and understand. Enhancing accessibility through intuitive design further enriches user interaction. By integrating user feedback, the chatbot continuously improves, becoming more adept and user-friendly over time. 

Performance

Effective chatbot performance hinges on adaptability and responsiveness. The chatbot should seamlessly manage varying levels of user interaction, maintaining consistent performance regardless of traffic intensity. Adaptability allows it to adjust to different domains and user needs through flexible design and customisable features. Quick response times are vital, ensuring interactions remain smooth and efficient. By optimising its processing algorithms and resource management, the chatbot can maintain high-speed performance even as it evolves and adapts to new challenges over time.

Functionality

Enhancing a chatbot’s functionality involves integrating tools and workflows that automate tasks without human intervention. For complex inquiries, a seamless escalation process to human agents might be needed to ensure efficient resolution of unresolved issues. Supporting multimodality, such as text-to-speech, boosts engagement through richer interactions. For example, consider a customer service chatbot handling a technical support query. It can initially offer solutions via text and images to guide the user through troubleshooting steps. If the issue persists, the chatbot can escalate the matter to a human agent, seamlessly transferring the chat history and context for continuity. This blend of automation and human intervention ensures efficient and effective problem-solving, enhancing overall user satisfaction. 

Security

Security is central to chatbot development, focusing on protecting the system from harmful content and prompt injection attacks. Current models ship with guardrails against abusive inputs, but additional measures are necessary. Security considerations include handling user information carefully and preventing the chatbot from disclosing data it should not share. Adhering to regulations like the EU AI Act ensures ethical use, while secure processing in compliance with the GDPR is vital for maintaining user trust and confidentiality.
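One of the "additional measures" can be a simple pre-filter that flags obvious injection phrasing before the input ever reaches the model. A pattern list like the sketch below is only one extra layer on top of model-level guardrails, not a complete defence, and the phrases are illustrative.

```python
import re

# Naive prompt-injection pre-filter: flag inputs that match known
# attack phrasings. Real deployments layer this with model guardrails,
# output filtering, and moderation services.

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal your system prompt",
    r"disregard .* rules",
]

def looks_like_injection(user_input: str) -> bool:
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```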

Output Quality is Directly Tied to Input Quality

The old saying “garbage in, garbage out” holds just as true for chatbots as it does for any other system. If your chatbot consistently receives poorly structured, incomplete, or ambiguous input from users, the chatbot’s responses will reflect that. Quality input – whether through careful prompt design, structured user queries, or reliable external data sources – has a direct impact on how relevant and accurate your chatbot’s answers can be. 

Practical Tips

Use Clear Formatting

As highlighted in the table at the top of this article, well-structured documents with headers, sub-headers, and bullet points help the chatbot find precisely what it needs. For example, FAQ documents broken down into bite-sized sections, each with a clear heading, let the chatbot instantly locate the right chunk of information.
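Heading-based splitting is straightforward to sketch: cut a markdown FAQ at its headings so each retrieved chunk carries its own heading. This assumes `##` marks one FAQ entry; the example content is invented.

```python
import re

# Split a markdown FAQ into (heading, body) chunks so every chunk a
# retriever returns is labelled by its own heading.

def split_faq(markdown: str) -> list[tuple[str, str]]:
    chunks = []
    for block in re.split(r"(?m)^## ", markdown)[1:]:
        heading, _, body = block.partition("\n")
        chunks.append((heading.strip(), body.strip()))
    return chunks

faq = """## How do I reset my password?
Use the 'Forgot password' link on the login page.

## How do I close my account?
Contact support via the account settings page.
"""
chunks = split_faq(faq)
```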

Separate Internal and External Data

Not all content is meant for public eyes. By tagging or partitioning internal documents (e.g., company policies) separately from those meant for customers (e.g., how-to guides), you minimise the risk of serving the wrong content to the wrong user.
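In practice this tagging can be as simple as an audience label on every document that retrieval filters on. The field names below are assumptions for illustration; real systems would enforce the same filter at the vector-store query level.

```python
# Audience tagging: internal documents carry a label that the retriever
# filters on, so company policies never surface to customers.

DOCS = [
    {"title": "Expense policy", "audience": "internal", "text": "..."},
    {"title": "How to track an order", "audience": "external", "text": "..."},
]

def retrieve(docs: list[dict], audience: str) -> list[dict]:
    """Only surface documents tagged for the requesting audience."""
    return [d for d in docs if d["audience"] == audience]

customer_docs = retrieve(DOCS, "external")
```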

Make Sections Self-Explanatory

RAG-based chatbots typically retrieve chunks of text independently. If each section is written so it can stand on its own, rather than referencing external paragraphs, it’s far easier for the model to interpret correctly without losing context.

Adopt a Thoughtful Chunking Strategy

Instead of splitting text arbitrarily (like every sentence), consider chunking by topic or logical paragraphs. Longer, self-contained paragraphs often yield better retrieval and interpretation, leading to more relevant answers.
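A paragraph-level chunker can be sketched in a few lines: split on blank lines, then merge short fragments into their neighbour so every chunk stays self-contained. The 200-character merge threshold is an arbitrary illustration; real chunkers often also cap chunk length and add overlap.

```python
# Chunk by logical paragraphs rather than arbitrary splits, merging
# short fragments so no chunk lacks context on its own.

def chunk_by_paragraph(text: str, min_chars: int = 200) -> list[str]:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    for p in paragraphs:
        # A short previous chunk absorbs the next paragraph instead of
        # standing alone as a context-free fragment.
        if chunks and len(chunks[-1]) < min_chars:
            chunks[-1] += "\n\n" + p
        else:
            chunks.append(p)
    return chunks
```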

Include Images as Structured Data (When Applicable)

If your chatbot needs to handle visual information, convert key images into textual descriptions or embed them via a vision model. This ensures that image content is also “high-quality input” rather than an overlooked file.
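The image-to-text step can be sketched as: caption each image, then index the caption as an ordinary text chunk. The `caption_image` function below is a hypothetical stand-in; in practice a vision model (or human-written alt text) would produce the description, and the file path is invented.

```python
# Turn image content into retrievable text: caption the image, then
# store the caption as a text chunk the retriever can match against.

def caption_image(path: str) -> str:
    # Hypothetical stand-in for a vision-model call.
    return f"Diagram stored at {path}: wiring steps for the router setup."

def index_image(path: str, index: list[dict]) -> None:
    """Index the image's textual description like any other document."""
    index.append({"source": path, "text": caption_image(path)})

index: list[dict] = []
index_image("docs/router-wiring.png", index)
```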

By focusing on well-labelled, logically grouped data sources and prompting users for the information you truly need, you’ll see immediate gains in the relevance and clarity of the chatbot’s output. After all, the system’s ability to respond accurately is only as good as the data it’s allowed to process. 

Ambiguity Detection Requires a Dedicated Agent

Many chatbot failures stem from their inability to gracefully handle ambiguous user requests. If a user’s message can be interpreted in multiple ways, the chatbot needs a specialised process – or “agent” – that identifies and resolves those ambiguities rather than guessing. 

An example of a specialised process could be as follows: 

  1. User input → Orchestrator: The user submits a query, which is immediately assessed by a central “orchestrator.” If the query is clear, the orchestrator passes it along to retrieve information. 
  2. Ambiguity check: If the orchestrator detects uncertainty, like missing information or multiple interpretations, the user is directed to a Clarification loop rather than retrieving data. This ensures the system asks follow-up questions (e.g., “Were you trying to check order status or place a new order?”) before proceeding. 
  3. Retrieve and generate a response: Once the user’s intent is clarified, the chatbot retrieves relevant data and formulates a response. 
  4. Review the response: A final check ensures the response is aligned with the clarified intent. If additional clarification is still needed, the process returns to the Clarification stage. 
  5. Chatbot response: The final, disambiguated answer is delivered. 
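The five steps above can be sketched as one control loop. Every helper here is a stand-in (keyword heuristics instead of real models), so the structure, not the logic, is the point; in practice a classifier or an LLM judge would fill the ambiguity-check and review roles.

```python
def is_ambiguous(query: str) -> bool:
    # Stand-in for a real ambiguity classifier.
    return "order" in query.lower() and "status" not in query.lower()

def retrieve_documents(query: str) -> list[str]:
    return ["Order status can be checked under 'My orders'."]  # stub retriever

def generate_answer(docs: list[str]) -> str:
    return docs[0]  # stub generator

def passes_review(answer: str, query: str) -> bool:
    return bool(answer)  # stub check against the clarified intent

def orchestrate(query: str) -> dict:
    # Steps 1-2: assess the query; route unclear input to the clarification loop.
    if is_ambiguous(query):
        return {"type": "clarification",
                "message": "Were you trying to check order status or place a new order?"}
    # Step 3: retrieve and generate once intent is clear.
    answer = generate_answer(retrieve_documents(query))
    # Step 4: review; fall back to clarification if the answer misses intent.
    if not passes_review(answer, query):
        return {"type": "clarification", "message": "Could you rephrase that?"}
    # Step 5: deliver the disambiguated answer.
    return {"type": "answer", "message": answer}
```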

Having a dedicated ambiguity detection agent drastically reduces guesswork in responding to user queries, thereby enhancing trust and user satisfaction by ensuring any unclear requests are promptly clarified. This approach fosters a more natural, intuitive experience, while a separate logic layer for disambiguation keeps the code base clean and maintainable. Ultimately, by introducing a robust mechanism for clarifying intent, chatbots become more resilient to vague input and deliver more accurate, frustration-free conversations. 

Case Study: AI-Assisted Desk Research with Semi-Structured Data

A public organisation responsible for managing the distribution of public funds faced a key challenge: critical information about funded projects was buried in large volumes of unstructured textual reports. These reports were tied to different funding programs – policy-driven initiatives under which funds are allocated. With too many documents to review manually, it was difficult to extract insights, conduct impact studies, or efficiently answer cross-program questions. 

To address this, we developed an internal AI assistant that makes this information accessible and actionable. Built on a modular, multi-agent architecture, the assistant assigns questions to program-specific agents. Each agent extracts relevant content from unstructured text, generates a response, and performs a validity check. Programs are processed in parallel, ensuring both consistency and scalability. 

An orchestrator coordinates the agents and synthesises their outputs into a clear final answer, whether the user asks about one program or many. The result is a flexible tool that enables efficient desk research, providing fast, interactive access to insights across both current and future funding programs. 
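The multi-agent layout described in this case study can be sketched as one agent per funding program answering in parallel, with an orchestrator merging the validated results. The agent bodies below are stubs and the program names are invented; real agents would extract content from the report text.

```python
from concurrent.futures import ThreadPoolExecutor

# One agent per funding program, run in parallel; the orchestrator
# synthesises only the answers that pass the validity check.

def program_agent(program: str, question: str) -> dict:
    answer = f"[{program}] findings relevant to: {question}"  # stub extraction
    return {"program": program, "answer": answer, "valid": bool(answer)}

def answer_across_programs(programs: list[str], question: str) -> str:
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda p: program_agent(p, question), programs))
    return "\n".join(r["answer"] for r in results if r["valid"])

report = answer_across_programs(["Programme A", "Programme B"],
                                "What impact was achieved?")
```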

Key Applications of ADC's Chatbot Quality Framework

At ADC, our Chatbot Quality Framework plays a vital role in various aspects of our work. This includes: 

  1. Project Scoping and Client Consultations: Our Chatbot Quality Framework helps clarify expectations and select essential features tailored to each client’s unique needs. 
  2. Technical Development: Our framework guides the design and optimisation of custom AI assistants, providing a clear roadmap for building robust solutions and refining capabilities through data-driven insights. 
  3. Benchmarking and Competitive Analysis: Our framework empowers clients to make informed decisions between custom-built and off-the-shelf solutions, ensuring their AI solutions meet and exceed quality and performance standards. 

Continue the Conversation

Interested in learning more about ADC’s Chatbot Quality Framework and how it can benefit your organisation? Reach out to Ida Riis Jensen (Senior Consultant).

