OpenDialog Docs

Leveraging Generative AI

The OpenDialog platform is tightly integrated with LLM capabilities to support a wide range of conversational applications - from simple question-answering bots to sophisticated multi-step experiences.


Last updated 8 months ago

How to use Generative AI in OpenDialog

There are three ways of using Generative AI in OpenDialog.

  • Semantic Interpretation: Using LLMs to understand what the user said, extract any relevant information and make that available to the conversation engine as an explicit intent and related entities (or other components).

Semantic Interpreters are set up in Language Services and can be shared across multiple scenarios: within a scenario you create an interpreter that points to a Language Service interpreter.

  • RAG with Semantic Search: Using LLMs to vectorise knowledge sources (URLs, files, etc.) and then query those sources to extract content that can be used in prompts to generate answers.

Knowledge Services are set up in Language Services and can be shared across multiple scenarios. They are accessed from LLM actions via a special query string.

  • LLM Actions: The most flexible LLM component. Actions can be used for a number of tasks, from generating answers and reasoning about the conversation to personalising responses, and more.

LLM Actions are specific to scenarios and can be found under the Integrate section of your scenario. At their core they are actions just like any other OpenDialog action.

These three capabilities are tightly integrated with each other and with the OpenDialog conversation model, enabling you to quickly design a number of different behaviours and flexibly combine them according to what your application needs.

We will look at each in brief here, and then you can dive into the detail throughout this section.

Interpretation

When a user says something we have two choices. We can either pass it directly to a prompt and generate a response (i.e. pass it straight to an LLM action in OpenDialog), or we can go through an interpreter first to get an indication of what sort of intent it is. For example, is the user just making small talk, are they asking a question (and if so, what is its topic), or are they doing something else, such as attempting to trip up or game the bot?

Understanding the intent enables us to pick a more appropriate subsequent step, LLM prompt, or knowledge source for content generation.

The main interpretation tool in OpenDialog is the Semantic Intent Classifier. This classifier takes semantic instructions (i.e. instructions written in natural language) and then uses an LLM prompt to categorise an utterance against the intents and/or sub-intents.

This gives you, as the conversation designer, an explicit representation of what the user said that you can use in design to reason about the appropriate next steps.
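To make the idea concrete, here is a minimal, hypothetical Python sketch of semantic classification: intents are described in natural language, an LLM prompt is assembled from those descriptions, and the reply is mapped back to a known intent. The helper names, intent names, and prompt shape are illustrative assumptions, not the OpenDialog API.

```python
# Illustrative sketch of semantic intent classification.
# Intent names and helpers are hypothetical, not the OpenDialog API.

INTENTS = {
    "intent.app.question": "The user asks a question about our product or service.",
    "intent.app.smalltalk": "The user is making small talk or chatting casually.",
    "intent.app.abuse": "The user is attempting to trip up or game the bot.",
}

def build_classifier_prompt(intents: dict, utterance: str) -> str:
    """Assemble an LLM prompt that classifies an utterance against
    natural-language intent descriptions (the 'semantic instructions')."""
    lines = ["Classify the user utterance into exactly one intent.", ""]
    for name, description in intents.items():
        lines.append(f"- {name}: {description}")
    lines += ["", f'Utterance: "{utterance}"', "Answer with the intent name only."]
    return "\n".join(lines)

def parse_intent(llm_reply: str, intents: dict, fallback: str = "intent.nomatch") -> str:
    """Map the raw LLM reply back to a known intent, falling back safely
    when the model returns something unexpected."""
    candidate = llm_reply.strip()
    return candidate if candidate in intents else fallback

prompt = build_classifier_prompt(INTENTS, "What plans do you offer?")
# A real deployment would send `prompt` to an LLM; here we mock the reply.
print(parse_intent("intent.app.question", INTENTS))   # intent.app.question
print(parse_intent("something unexpected", INTENTS))  # intent.nomatch
```

The fallback to a no-match intent is the important design choice: an explicit intent (including "no match") is what lets the conversation engine choose the next step deterministically.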

Semantic Search and RAG

The Knowledge Service allows you to split knowledge based on different topics, with each topic having multiple sources. This enables you to choose at runtime the most suitable knowledge source based on the type of question, the user, and the specific task that the user is performing.
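The retrieval flow, topic-scoped chunks ranked by similarity to the query and then fed into a prompt, can be sketched as follows. Real knowledge services use LLM embeddings and a vector store; here simple bag-of-words vectors stand in so the sketch runs without external services. All names and data are invented for illustration.

```python
# Minimal RAG retrieval sketch. Bag-of-words cosine similarity stands in
# for real LLM embeddings; topics and chunks are invented examples.
import math
from collections import Counter

TOPICS = {
    "billing": ["Invoices are issued monthly.", "Refunds take 5-10 business days."],
    "security": ["All data is encrypted at rest.", "We support single sign-on."],
}

def vectorise(text: str) -> Counter:
    """Toy stand-in for an embedding model: token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, topic: str, k: int = 1) -> list[str]:
    """Rank the chosen topic's chunks by similarity and keep the top k."""
    qv = vectorise(query)
    ranked = sorted(TOPICS[topic], key=lambda c: cosine(qv, vectorise(c)), reverse=True)
    return ranked[:k]

context = retrieve("how long do refunds take", topic="billing")
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: how long do refunds take"
print(context[0])  # Refunds take 5-10 business days.
```

Scoping retrieval to a topic (the `topic` argument above) mirrors the point in the text: the conversation can choose the most suitable knowledge source at runtime before any retrieval happens.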

LLM Actions

LLM Actions are tightly integrated with OpenDialog attributes and Semantic Search, enabling you to define not just specific prompts but prompt templates that are populated at runtime with attributes from a specific conversation.

The OpenDialog Knowledge Service is a flexible RAG service that you can use to ingest documents and then query, through an LLM action, using either a user query or some other input, to generate responses.

OpenDialog LLM Actions are the most flexible component. A powerful prompt manager, they enable you to define prompts and extract specific outputs that you can then use in your conversation. This could be the answer to a user question, but it could also be a reasoning task that identifies how to proceed with the conversation.
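Extracting specific outputs from an LLM reply so the conversation engine can branch on them might be sketched like this. The JSON schema, field names, and defaults are assumptions for illustration, not the platform's actual output-attribute format.

```python
# Sketch of turning an LLM action reply into output attributes the
# conversation can branch on. The JSON schema is an invented example.
import json

def extract_outputs(llm_reply: str) -> dict:
    """Parse a structured LLM reply into output attributes, falling back
    to a repair step when the reply is not valid JSON."""
    try:
        data = json.loads(llm_reply)
    except json.JSONDecodeError:
        return {"next_step": "repair", "answer": None}
    return {"next_step": data.get("next_step", "repair"), "answer": data.get("answer")}

reply = '{"next_step": "answer_question", "answer": "Refunds take 5-10 days."}'
outputs = extract_outputs(reply)
print(outputs["next_step"])  # answer_question
```

The defensive default matters: when the model's output cannot be parsed, the conversation falls through to a repair step rather than failing silently.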
