OpenDialog Docs

OpenDialog Conversation Engine


Last updated 8 months ago

The OpenDialog platform gives you flexibility in how you architect your assistant.

To build robust applications, and to be able to troubleshoot your design, it is critical to understand how the conversation engine decides where to look for the next action.

Starting and open behavior

Starting components (for instance, a turn marked as a starting turn) are only considered the first time you enter a level. After that first interaction, they are no longer considered.

Open turns are considered only once the starting turn is over, and they remain candidates from that point onwards.

Components can be both open and starting.

Because of these rules, it is important for a designer to deliberately choose the starting and open behavior of every component.
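The eligibility rules above can be sketched as follows. This is an illustrative model only; the `Turn` class, its flags, and the `eligible_turns` helper are assumptions for the sake of the example, not OpenDialog's actual API.

```python
# Hypothetical sketch of starting/open eligibility; names are illustrative.
from dataclasses import dataclass

@dataclass
class Turn:
    name: str
    starting: bool = False  # considered only on the first visit to a level
    open: bool = False      # considered on every later interaction

def eligible_turns(turns, first_visit: bool):
    """Starting turns are candidates only on the first visit;
    open turns are candidates from then on."""
    if first_visit:
        return [t for t in turns if t.starting]
    return [t for t in turns if t.open]

turns = [
    Turn("welcome", starting=True),
    Turn("help", starting=True, open=True),  # both starting and open
    Turn("faq", open=True),
]
print([t.name for t in eligible_turns(turns, first_visit=True)])   # ['welcome', 'help']
print([t.name for t in eligible_turns(turns, first_visit=False)])  # ['help', 'faq']
```

Note how a turn that is both starting and open (like `help` here) stays eligible across the whole interaction, while a purely starting turn drops out after the first visit.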

Conversation engine at the start of an interaction

At the start of an interaction, the conversation engine:

  • Explores all starting conversations, the starting scenes within those conversations, and the starting turns within those scenes; any request intents associated with those starting turns are considered possible starting intents

  • Considers the incoming utterance and attempts to match it to one of the possible starting intents

  • If the match is successful, updates the conversation state to that intent; the conversation is now in a fully defined state, down to the level of an intent
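The exploration step above walks the scenario hierarchy from conversations down to turns. The sketch below shows one way that traversal could look; the dictionary structure and all names are hypothetical stand-ins for the engine's real data model.

```python
# Illustrative traversal: conversations -> scenes -> turns, collecting the
# request intents of starting turns as possible starting intents.
scenario = {
    "conversations": [
        {"starting": True, "scenes": [
            {"starting": True, "turns": [
                {"starting": True, "request_intents": ["intent.user.Welcome"]},
                {"starting": False, "request_intents": ["intent.user.Other"]},
            ]},
        ]},
        {"starting": False, "scenes": []},  # skipped: not a starting conversation
    ]
}

def possible_starting_intents(scenario):
    intents = []
    for conv in scenario["conversations"]:
        if not conv["starting"]:
            continue
        for scene in conv["scenes"]:
            if not scene["starting"]:
                continue
            for turn in scene["turns"]:
                if turn["starting"]:
                    intents.extend(turn["request_intents"])
    return intents

print(possible_starting_intents(scenario))  # ['intent.user.Welcome']
```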

How a user intent is selected

When there are multiple available user intents in your conversation design, the conversation engine needs to decide which one to select.

First, the conversation engine runs the interpreter associated with each available user intent in the conversation design. The interpreter takes the user's input and attempts to match an intent to it. If the interpreted intent name matches the expected intent name in the conversation design, that intent remains in consideration. If the interpretation does not match, or yields no interpreted intents, the expected intent is discarded from consideration.

Using the example from the screenshot above, if the user were in this conversational position and said "add my partner to my policy", the conversation engine would run two interpreters: one for UpdatePolicy and one for AddDriver. If the interpreters are working well, the input would match the AddDriver intent, so the conversation engine would discard the UpdatePolicy intent. As there are no additional conditions on the AddDriver intent, it would be selected by the conversation engine.

If both interpreters were to return matching intents, the conversation engine would select the intent that appears first in the conversation design. In the example from the screenshot above, this means the UpdatePolicy intent would be selected if both intents were matched by their interpreters.
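The selection logic described above can be sketched as a single pass over the candidate intents in design order. The `select_intent` helper and the lambda interpreters are illustrative assumptions, not real OpenDialog components.

```python
# Sketch of design-order intent selection. Each candidate pairs an expected
# intent name with its interpreter; an interpreter returns an intent name
# for the utterance, or None if it found no match.
def select_intent(candidates, utterance):
    """Return the first candidate (in design order) whose interpreter
    matches its expected intent name, or None to trigger no-match handling."""
    for expected_name, interpret in candidates:
        if interpret(utterance) == expected_name:
            return expected_name
    return None

# Case 1: both interpreters match, so design order decides.
both_match = [
    ("UpdatePolicy", lambda u: "UpdatePolicy"),
    ("AddDriver", lambda u: "AddDriver"),
]
print(select_intent(both_match, "add my partner to my policy"))  # UpdatePolicy

# Case 2: only the AddDriver interpreter matches, so it wins despite order.
only_one = [
    ("UpdatePolicy", lambda u: None),  # interpreter found no match
    ("AddDriver", lambda u: "AddDriver"),
]
print(select_intent(only_one, "add my partner to my policy"))  # AddDriver
```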

Intent prioritisation for Semantic Intent Classifiers

Using the example from the screenshot above, let's presume that both intents are connected to a Semantic Intent Classifier in which ManagePolicy is an intent and UpdateMilage is one of its sub-intents. Under regular circumstances, if both intents matched, ManagePolicy would be selected as it appears first. For Semantic Intent Classifiers, however, when both intents match, the sub-intent is selected. In this example, an input of "update my milage" will match both intents, but UpdateMilage will be selected by the conversation engine because, as a sub-intent, it has the higher priority.
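This sub-intent preference can be modelled as a two-part ranking: sub-intent status first, design order as the tie-breaker. The function and data shapes below are assumptions for illustration only.

```python
# Sketch of sub-intent prioritisation for a Semantic Intent Classifier:
# a matching sub-intent outranks its parent intent; among equals, the
# intent appearing earlier in the design wins.
def pick_by_priority(matches):
    """matches: list of {'name', 'is_sub_intent'} dicts in design order."""
    return max(
        enumerate(matches),
        key=lambda im: (im[1]["is_sub_intent"], -im[0]),  # sub-intents first, then earlier position
    )[1]["name"]

matches = [
    {"name": "ManagePolicy", "is_sub_intent": False},
    {"name": "UpdateMilage", "is_sub_intent": True},
]
print(pick_by_priority(matches))  # UpdateMilage
```

With no sub-intents involved, the same function falls back to plain design order, matching the behaviour described earlier.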

During an interaction, these prioritization rules apply

  1. If the matching user intent defines a transition, the engine will follow that transition.

  2. If there is no transition, the engine will look for a matching app response intent within the turn.

  3. If there is no transition and no response intent is present in the turn, the engine will look for a next intent in another turn within the same scene.

  4. If the intent has the completing behavior, the engine returns to the scenario level after the intent is executed.

  5. If none of the rules in this section apply, a no match is triggered, either local or global (scenario level).

As Semantic Intent Classifiers allow you to define intents and sub-intents, it is more likely that interpreters using your classifier will match more than one intent. To ensure that you do not need to carefully order intents in the Designer, Semantic Intent Classifiers return their intents with a priority ranking. Sub-intents are ranked at the highest priority, as they are more specific and granular, while intents are ranked at a lower priority, as they are more general and broad.

Semantic Intent Classifiers

[Screenshot: the OpenDialog Designer showing two available user intents at this conversational position, UpdatePolicy and AddDriver.]

[Screenshot: the OpenDialog Designer showing two user intents, ManagePolicy and UpdateMilage.]