
Interpreter Orchestration

Use OpenDialog Interpreter Orchestration to set the highest-priority interpreter at different levels of your conversation.



Interpreter orchestration is a feature that is available ON DEMAND. If you would like to use this feature, please reach out to support@opendialog.ai.

The basics

Interpreter orchestration allows you to pick one interpreter to prioritise during your conversation. You can access it via the interpreter drop-down in the side panel of the conversation designer, which lists all of the interpreters available for that specific conversation. This functionality can be accessed at the following levels:

  • Scenario

  • Conversation

  • Scene

  • Turn

Integrating Interpreter Orchestration into your workflow lets you set interpreter priorities and create smoother, more efficiently running conversations with OpenDialog.
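Conceptually, the most specific component that sets an interpreter takes priority over settings made higher up. The sketch below illustrates that resolution order with hypothetical names; OpenDialog performs this resolution internally, so this is not a real platform API:

```python
# Illustrative sketch only: component structure and function name are hypothetical.
def resolve_interpreter(scenario, conversation=None, scene=None, turn=None):
    """Return the interpreter set on the most specific component,
    checking from turn (most specific) up to scenario (least specific)."""
    for component in (turn, scene, conversation, scenario):
        if component is not None and component.get("interpreter"):
            return component["interpreter"]
    return "opendialog"  # assumed platform default when nothing is set

scenario = {"interpreter": "opendialog"}
turn = {"interpreter": "my_language_service"}
print(resolve_interpreter(scenario, turn=turn))  # → my_language_service
```

Here a turn-level setting wins over the scenario-level one, matching the idea that you can prioritise a different interpreter for a specific component.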

Interpreter Orchestration in action

Organise your interpreters

When designing a conversation with OpenDialog, you can set up multiple interpreters, whether that is the OpenDialog interpreter, a language service you have built yourself, or a third-party service such as OpenAI or Amazon Lex. Although having access to all these interpreters is useful, it can be hard to know which ones are best for certain elements of your conversation.

Through Interpreter Orchestration, you can choose which interpreter to set as the highest priority, meaning it is checked first when interpreting an intent. This gives designers a tool to control which interpreter matters most for a specific part of their conversation.

Set interpreters throughout

Sometimes, just having one interpreter for your entire scenario doesn't work. With multiple conversations, turns and scenes, you may want to prioritise more than one interpreter throughout the journey.

With Interpreter Orchestration, you can do just that. The prioritisation drop-down is available from the side panel of the conversation designer at scenario, conversation, scene and turn levels, so you can set a priority for a specific component instead of for the entire scenario.

Where to find

The Interpreter Orchestration functionality can be accessed from the side panel of the conversation designer.

To access the Interpreter Orchestration feature within the intents sidebar for a given scenario:

  • Go to your workspace overview (Scenarios)

  • Select the scenario you would like to update

  • Click 'Interpreter' from the side panel

  • Choose the interpreter you would like to prioritise for that scenario

This functionality can also be accessed from different component levels:

  • Go to your workspace overview (Scenarios)

  • Select the scenario you would like to update

  • Select the conversation, scene or turn you would like to set your interpreter for

  • Click 'Interpreter' from the side panel

  • Choose the interpreter you would like to prioritise for that component

How it works

By using interpreter orchestration, you alter how the conversation engine reasons and the order in which it selects intents.

At the start of an interaction, the conversation engine:

  • Explores all starting conversations, the starting scenes within those conversations, and the starting turns within those scenes; any request intents associated with those starting turns are considered possible starting intents

  • Considers the incoming utterance and attempts to match it to one of the possible starting intents

  • If the match is successful, the conversation state is updated to that intent, giving a fully defined conversation state down to the level of an intent

This is how an interaction usually takes place. Interpreter orchestration, however, changes this process slightly:

At the start of an interaction, whilst using interpreter orchestration, the conversation engine:

  • Considers the incoming utterance and attempts to match it to one of the possible starting intents using the specified highest-priority interpreter, instead of making its way through all the available interpreters and language services

This streamlines intent matching and gives the designer more control over the flow of the conversation. By not running intents through lower-priority interpreters, the engine also works more efficiently.
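The difference in matching behaviour can be sketched as follows. This is illustrative pseudocode following the description above, not OpenDialog's actual engine code; interpreter and intent names are hypothetical, and the exact fallback behaviour in the platform may differ:

```python
# Illustrative only: interpreter names, intents, and behaviour are hypothetical.
INTERPRETERS = {
    "amazon_lex": lambda u: None,  # assume no match for these utterances
    "my_language_service": lambda u: "intent.app.Help" if "help" in u else None,
}

def match_intent(utterance, available, priority=None):
    """Without orchestration, every available interpreter is tried in turn.
    With a highest-priority interpreter set, only that one is consulted."""
    candidates = [priority] if priority else list(available)
    for name in candidates:
        intent = INTERPRETERS[name](utterance)
        if intent is not None:
            return name, intent
    return None, "intent.core.NoMatch"

# With orchestration, only the prioritised interpreter runs:
print(match_intent("I need help", ["amazon_lex", "my_language_service"],
                   priority="my_language_service"))
# → ('my_language_service', 'intent.app.Help')
```

Without a priority set, the engine would work through every interpreter; with one set, lower-priority interpreters are never invoked for that component.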

How to use - video