OpenDialog Docs

Testing strategy

Testing your assistant is critical to the success of your application and to how the application reflects on your brand.

Types of testing

The different types of testing include the following.

  • Functional testing

    • Verify that the flow is correct, as designed and described in the journey diagram, e.g. when a user selects an option, confirm that the next prompt is the one designed


    • Test that integrations work

  • NLU testing

    • For intent-based NLU services, test that known user utterances (those used as sample utterances) are correctly classified, e.g. through automated testing and a confusion matrix


    • When LLMs are used as intent classifiers or extractors, test the prompts and examples they rely on


  • Coverage testing

    • Get user (and stakeholder) feedback on the extent of functionality and use case coverage, e.g. perhaps an outcome that is currently sent to a human to resolve could be handled automatically


    • Generate possible user utterances (using LLMs or input from target users) and test for coverage and correct intent recognition


  • Usability testing

    • Get user feedback on ease of use, how intuitive and clear the assistant is, the wording, and so on. Assess whether it is easy to use and useful (e.g. via the USERindex), and the likelihood that users will use it again and recommend it

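The automated NLU testing mentioned above can be sketched as a small harness that compares expected and predicted intents and tallies a confusion matrix. The test set, intent names, and the keyword-based `classify()` stub below are illustrative assumptions only; in practice, `classify()` would call your actual NLU service or LLM classifier.

```python
from collections import Counter

# Hypothetical labelled test set: (utterance, expected_intent) pairs.
TEST_SET = [
    ("I want to check my balance", "check_balance"),
    ("what's my account balance", "check_balance"),
    ("transfer money to savings", "transfer_funds"),
    ("send 50 to my savings account", "transfer_funds"),
    ("talk to a human", "handover"),
    ("my card was stolen", "report_fraud"),  # not covered by the stub below
]

def classify(utterance):
    # Placeholder classifier: replace with a call to your NLU service.
    keywords = {"balance": "check_balance", "transfer": "transfer_funds",
                "send": "transfer_funds", "human": "handover"}
    for word, intent in keywords.items():
        if word in utterance.lower():
            return intent
    return "no_match"

def confusion_matrix(test_set):
    # Counts (expected, predicted) pairs; off-diagonal entries are misclassifications.
    matrix = Counter()
    for utterance, expected in test_set:
        matrix[(expected, classify(utterance))] += 1
    return matrix

matrix = confusion_matrix(TEST_SET)
accuracy = sum(n for (e, p), n in matrix.items() if e == p) / len(TEST_SET)
print(f"accuracy: {accuracy:.0%}")
for (expected, predicted), count in sorted(matrix.items()):
    flag = "" if expected == predicted else "  <-- misclassified"
    print(f"{expected} -> {predicted}: {count}{flag}")
```

Running a harness like this after every change to the sample utterances or classifier configuration makes NLU regressions visible immediately: any new off-diagonal entry in the matrix points to a specific misclassified utterance.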

When to test

The motto should be: "Test early and often".

The different types of testing should occur throughout both the development and live stages of your assistant. The following sections describe the types of testing to complete at each stage of the development lifecycle.

Prototype testing

Once a prototype is available, testing can start. Some types of testing to undertake are:

  • Functional testing to ensure that the conversation flow is correctly implemented and that integrations work as expected

  • Usability testing to get user feedback on the interaction with the assistant

  • Coverage testing to discover additional utterances and intents that make the NLU interactions more robust

Testing in development

Functional, usability, NLU, and coverage testing can all be performed on sprint deliverables.

Once a beta version is available, end-to-end testing of the experience becomes feasible and critical.

Testing a live deployment

Once an assistant is live, continued testing is needed.

This includes recurring automated testing to ensure continued quality, performance, and security, especially following updates or other changes to the system or its integrations.
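Recurring automated testing for a live deployment can be as simple as a regression suite that replays known utterances and checks each reply. In the sketch below, `send_message()` is a stub with canned replies standing in for an HTTP call to the deployed assistant; the test cases and expected phrases are hypothetical, not OpenDialog's actual API or responses.

```python
# Hypothetical regression cases: each utterance should produce a reply
# containing the expected phrase.
REGRESSION_CASES = [
    {"utterance": "hello", "expect_contains": "welcome"},
    {"utterance": "restart", "expect_contains": "start over"},
]

def send_message(utterance):
    # Stub standing in for a call to the live assistant's chat endpoint.
    canned = {"hello": "Hi there, welcome!",
              "restart": "Sure, let's start over."}
    return canned.get(utterance, "Sorry, I didn't understand that.")

def run_regression(cases):
    # Collect (utterance, reply) pairs whose reply misses the expected phrase.
    failures = []
    for case in cases:
        reply = send_message(case["utterance"])
        if case["expect_contains"].lower() not in reply.lower():
            failures.append((case["utterance"], reply))
    return failures

failures = run_regression(REGRESSION_CASES)
print("all passed" if not failures else f"{len(failures)} failed: {failures}")
```

Scheduling a suite like this to run after each deployment, and on a timer in between, catches regressions introduced by system or integration changes before users report them.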

Consider using analytics, including the Analyze functionality in the OpenDialog platform, to gather data and improve the robustness of the assistant based on the insights gained.
