OpenDialog Docs

Use knowledge sources via RAG

Let's enhance your AI Agent's knowledge with Retrieval Augmented Generation.


Last updated 5 months ago

Introduction

Naturally, you will want your AI Agent to provide more relevant and precise information to your users than the static response you set up as an example.

If you want to use dynamic responses, you will need to take a few additional steps:

  • Make sure the AI Agent has the necessary knowledge at hand to respond on your new topic of discussion by adding additional knowledge sources to your RAG service.

  • Give the AI Agent instructions to generate a dynamic response based on that knowledge by creating an LLM Action.

  • Indicate to the AI Agent when to generate the response by referencing the LLM Action on the correct response intent.

  • Adapt the AI Agent’s message to use the generated content by referencing the LLM Action through an attribute in the message editor.
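The four steps above amount to a classic retrieve-then-generate loop. The following is a minimal sketch of that flow in Python; every function and name in it is illustrative only and is not part of OpenDialog's APIs:

```python
# Illustrative sketch of the retrieval-augmented generation flow.
# None of these names are OpenDialog internals; they only show the concept.

def retrieve(query: str, knowledge_base: dict[str, str]) -> str:
    """Pick the stored passage sharing the most words with the query."""
    def overlap(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    best_topic = max(knowledge_base, key=lambda t: overlap(knowledge_base[t]))
    return knowledge_base[best_topic]

def build_prompt(instructions: str, knowledge: str, question: str) -> str:
    """Inject the retrieved knowledge into the system prompt's <knowledge> section."""
    return f"{instructions}\n<knowledge>\n{knowledge}\n</knowledge>\nUser: {question}"

kb = {"AboutCompany": "Acme builds rockets and sells rocket insurance."}
prompt = build_prompt("Answer using only the knowledge below.",
                      retrieve("who builds rockets?", kb),
                      "Who builds rockets?")
```

In OpenDialog the retrieval and prompt assembly are handled for you by the RAG service and the LLM Action; the sketch only shows the shape of the data flow.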

See it in action

Step by step guide

Adding knowledge: Knowledge Services (RAG)

As a first step, you’ll need to add relevant content to your RAG service. Let’s walk through the process of adding and setting up an additional knowledge source.

Navigate to your prebuilt RAG Service

  • In the left-hand navigation panel, click on Language Services

  • Locate your prebuilt RAG service, labelled “ProjectName Knowledge”

  • Click on the link to navigate to the service

Within the RAG/Knowledge service, you’ll see predefined topics and existing knowledge sources that were uploaded during the initial setup. Now, let’s add a new topic for your AI Agent to reference, related to your new topic of discussion.

Adding a new knowledge topic

  • Click the Add Topic button in the top-right corner

  • Enter a name for your topic, such as “AboutCompany”

  • Provide a brief description of your topic aligned with the one you used earlier

Example: “Information about the brand, organisation, and related services.”

  • Click ‘Create Topic’ to save

Now, with your new topic created, it’s time to begin adding sources to it. OpenDialog provides you with the ability to choose from a variety of different source types.

  • URL: Link to a URL page that has information relevant to your topic.

  • Document: Upload relevant documents.

  • Text: Manually enter text as a knowledge source.

Adding knowledge sources to your topic

  • Click the Add sources button in the centre right of the screen

  • Select the source type you want to add (URL, document or text)

  • In the pop-up, follow the instructions

  • Click ‘Upload source’ to continue

Vectorize your knowledge source

To ensure the AI Agent can efficiently use the information you just added, you will need to transform it into a machine-readable, numerical format. This process is called vectorisation.

Vectorizing your knowledge sources

  • Select one or more knowledge sources using the checkbox next to each

  • Click the Vectorize button in the action menu above the knowledge source table

  • Confirm the vectorization in the pop-up

  • View the updated vectorisation status in the knowledge source table

Via this action menu, you can also delete a knowledge source or set a schedule to re-vectorize sources that update frequently, ensuring your AI Agent always has the latest information.
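Under the hood, vectorisation maps each source to a numeric vector so that semantically similar text ends up close together. The toy sketch below uses simple word counts instead of a real embedding model, purely to illustrate the idea; real RAG services use learned embeddings:

```python
import math

# Toy illustration of vectorisation. Real knowledge services use learned
# embedding models, not this word-count trick.

def embed(text: str, vocab: list[str]) -> list[float]:
    """Represent text as counts of each vocabulary word."""
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

vocab = ["rocket", "insurance", "pizza"]
doc = embed("rocket insurance for rocket launches", vocab)
query = embed("rocket", vocab)
# The query vector sits much closer to the rocket document than to an
# unrelated one, which is how retrieval finds the right passage.
```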

Retrieving your knowledge string

In order to use the vectorised knowledge in your conversation design setup and LLM Actions later, you will need the ability to reference it. In OpenDialog, you do this using a knowledge string.

A knowledge string follows this syntax: %%RAGServiceName.TopicName%%

For example: %%SpaceKnowledge.AboutCompany%%

For ease of use, locate the knowledge string for your new topic of discussion in the right-hand test panel and copy it into a note or blank document; you will need it later for LLM Action system prompts.
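To illustrate how a templating layer could expand such placeholders, here is a hypothetical resolver in Python; the KNOWLEDGE lookup table and function names are invented for this example and do not reflect OpenDialog internals:

```python
import re

# Hypothetical resolver showing how %%Service.Topic%% knowledge strings
# could be expanded inside a system prompt; not OpenDialog's actual code.

KNOWLEDGE = {
    ("SpaceKnowledge", "AboutCompany"):
        "Information about the brand, organisation, and related services.",
}

def expand_knowledge_strings(prompt: str) -> str:
    """Replace every %%Service.Topic%% token with the stored knowledge text."""
    pattern = re.compile(r"%%(\w+)\.(\w+)%%")
    return pattern.sub(
        lambda m: KNOWLEDGE.get((m.group(1), m.group(2)), m.group(0)), prompt)

prompt = "Use this: <knowledge>%%SpaceKnowledge.AboutCompany%%</knowledge>"
expanded = expand_knowledge_strings(prompt)
```

Unknown knowledge strings are left untouched in this sketch, so a typo in the service or topic name would remain visible in the prompt.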

Save your new knowledge topic and its sources

  • Scroll up to the top of the screen

  • Click 'Update Topic' to save your changes

Providing instructions: LLM Actions

In order to use your new knowledge source in a response, you will need to accomplish three more steps:

  1. Create an LLM action to provide the Large Language Model with instructions on how to use your knowledge sources to generate responses.

  2. Indicate to your AI Agent when to trigger a response generation by adding this LLM action to the app response intent.

  3. Reference the output attribute for this LLM action in the response message.

Navigate to LLM Actions in your scenario

  • In the left-hand navigation panel, hover over 'Scenarios'

  • Select your “ProjectName” scenario in the list of scenarios

  • Hover over ‘Integrate’ in the updated navigation panel

  • Select LLM Actions

Once in the LLM Actions overview, you’ll see prebuilt actions like “Topic Response Generator”, for example, which were automatically created to generate responses for your primary topic. We will be using this LLM action as a basis for our new LLM Action.

Let’s have a look at what this “Topic Response Generator” LLM action looks like.

View a pre-existing LLM Action

  • In the LLM Action overview, locate the “Topic Response Generator” LLM Action

  • Click on the card

An LLM action is made up of three main components:

  1. The LLM engine that powers it, visible under the Engine Settings tab.

  2. The prompt configuration that provides the LLM with its instructions, under the Prompt Configuration tab, along with settings that determine how the LLM response will be referenced through output attributes.

  3. Guardrails that constrain the LLM responses and configure their safety settings, under the Safeguarding tab.

In this initial guide, we are not going to dig any deeper into the preconfigured prompt configuration just yet. All you need to remember for now is that:

  • A knowledge source gets referenced in prompt instructions using a knowledge string

  • The knowledge string is used in a specific knowledge section of the prompt instructions, indicated as follows: <knowledge>

  • The LLM response that comes back when the action is run is saved in OpenDialog under an output attribute, which by default is the {llm_response} attribute

For this initial setup, we will use the same configuration as the ‘Topic Response Generator’ LLM Action.

Duplicate an existing LLM Action

  • Navigate back to the LLM Action overview using the link in the top-left corner of the central panel

  • On the 'Topic Response Generator Card' click on the 3-dotted menu

  • Select Duplicate

Now let’s tailor our new LLM action and provide it with instructions to reference our newly set up knowledge source.

👉🏻 You will also need the knowledge string you put aside earlier.

Editing an LLM Action

  • Click on the newly created LLM Action

  • Edit the name of your LLM Action, e.g. “About Company Response Generator”

  • Edit the description to specify that this action generates responses for your topic.

  • Select the LLM Engine you wish to use (*)

(*) The configuration of the LLM Action you duplicated will carry over. When selecting the OpenAI engine, the correct configuration will already be in place. You can use an OpenDialog-managed account, or use your own account credentials.

Now, we are going to update the prompt instructions in order to adapt to the additional knowledge source you have just added.

Updating prompt instructions

  • Navigate to the ‘Prompt Configuration’ tab

  • Go to the system prompt input field

  • Locate the knowledge section in the prompt instructions, <knowledge>

  • Replace the existing knowledge string with the one for your newly added topic, pasting the string you put aside earlier using Cmd+V (or Ctrl+V)

    For reference, it is formatted as follows: %%ServiceName.TopicName%%

  • Scroll back up to the top of the page

  • Click the ‘Save action’ button

When to trigger a response: adding an LLM Action to a response intent

You are now ready to return to your conversation design and update the response intent with your LLM Action.

To do so, navigate back to the Design section of your scenario, by clicking Design in the navigation bar and then Conversation. Now, using the filter buttons in the top left corner of the central panel, or the conversation nodes in the centre, navigate back to the topic intent you set up earlier.

Topic conversations > About Company > About Company > Intents > AboutCompanyResponse

Adding an LLM Action to an intent

  • View the intent settings panel

  • Locate “Add conditions, actions & attributes” on the bottom of the panel

  • Click the link

  • In the Actions section of the panel, select Add new action.

  • Select your newly created LLM Action by its name in the dropdown

  • The updated intent settings will autosave

You are all set! When your scenario matches this intent, your LLM prompts will be sent to the language model and the related llm_response attribute will be populated.

Adapting the message: the LLM response attribute

To display the LLM's response text in your scenario, you will need to update the intent's message in the message editor to reference the {llm_response} attribute. Remember, this is the attribute where the response generated by your LLM action gets stored in OpenDialog.
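The substitution itself is plain attribute templating. As a sketch (the render_message helper is invented for illustration; only the {llm_response} attribute name comes from this guide):

```python
import re

# Illustrative only: how a {llm_response} attribute might be substituted
# into a message template. The attribute name mirrors the guide; the code
# does not mirror OpenDialog's implementation.

attributes = {"llm_response": "Acme was founded in 1999 and builds rockets."}

def render_message(template: str, attrs: dict[str, str]) -> str:
    """Replace each {attribute} placeholder with its stored value."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: attrs.get(m.group(1), m.group(0)), template)

message = render_message("{llm_response}", attributes)
```

This is why deleting the static text and typing the attribute placeholder is all that is needed: the platform fills in the generated response at runtime.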

Updating your message to use the dynamically generated response

  • Go back to the Basic settings using the link on the bottom of the panel

  • Click the Edit Messages button in the panel

  • Click the Edit icon on the message card

  • Locate the text block

  • Delete the static message in the text block

  • Type an opening curly brace { to access the attribute autocomplete field

  • Start typing llm…

  • Select the desired attribute from the dropdown, in our case: llm_response

  • Scroll back up to the top of the page

  • Click “Save Message”

Your new knowledge source is now ready, and the AI Agent is set to generate informed responses based on the enriched content.

Your additional topic is now live! With these steps, you can empower your AI Agent to handle a wider range of user questions while maintaining a smooth, relevant conversation flow. To add more additional topics, go back to the top of this guide, rinse and repeat!

For more information on how to structure prompt instructions for LLM Actions, take a look at our further documentation.

Video: From RAG to responses (an overview of adding and using a knowledge source for RAG).