Use knowledge sources via RAG

Let's enhance your AI Agent's knowledge with Retrieval Augmented Generation (RAG).

Introduction

Naturally, you will want your AI Agent to provide more relevant and precise information to your users than the static response you set up as an example.

If you want to use dynamic responses, you will need to take a few additional steps:

  • Make sure the AI Agent has the necessary knowledge at hand to respond on your new topic of discussion, by adding additional knowledge sources to your RAG service.

  • Give the AI Agent instructions to generate a dynamic response based on that knowledge, by creating an LLM Action.

  • Indicate to the AI Agent when to generate the response, by referencing the LLM Action on the correct response intent.

  • Adapt the AI Agent's message to use the generated content, by referencing the LLM Action through an attribute in the message editor.

From RAG to responses

See it in action

Follow this video to add and use a knowledge source for RAG.

Step-by-step guide

Overview of adding and using a knowledge service

Adding knowledge: Knowledge Services (RAG)

As a first step, you’ll need to add relevant content to your RAG service. Let’s walk through the process of adding and setting up an additional knowledge source.

Within the RAG/Knowledge service, you'll see predefined topics and existing knowledge sources that were uploaded during the initial setup. Now, let's add a new topic for your AI Agent to reference, covering your new topic of discussion.

Now, with your new topic created, it's time to begin adding sources to it. OpenDialog lets you choose from a variety of source types:

  • URL: Link to a web page that contains information relevant to your topic.

  • Document: Upload relevant documents.

  • Text: Manually enter text as a knowledge source.

Vectorize your knowledge source

To ensure the AI Agent can efficiently use the information you just added, you will need to transform it into a machine-readable, numerical format. This process is called vectorisation.
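OpenDialog handles vectorisation for you, so there is nothing to code here. Purely as an illustration of what turning text into numbers means, here is a minimal sketch using the open-source sentence-transformers library (an assumption chosen for the example, not OpenDialog's internal tooling); the model name and sample chunks are likewise illustrative.

```python
# Illustrative only: OpenDialog performs this step for you.
# Each chunk of knowledge text is converted into a fixed-length
# vector of numbers, so that semantically similar text can later
# be found by comparing vectors.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small embedding model

chunks = [
    "Our company was founded in 2010 and is headquartered in London.",
    "We offer a 30-day money-back guarantee on all products.",
]

vectors = model.encode(chunks)
print(vectors.shape)  # (2, 384): two chunks, 384 numbers each
```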

Via the knowledge source's action menu, you can also delete a knowledge source or set a schedule to re-vectorise sources that update frequently, ensuring your AI Agent always has the latest information.

Retrieving your knowledge string

In order to use the vectorised knowledge in your conversation design setup and LLM Actions later, you will need the ability to reference it. In OpenDialog, you do this using a knowledge string.

A knowledge string follows this syntax: %%RAGServiceName.TopicName%%

For example: %%SpaceKnowledge.AboutCompany%%

For ease of use, locate the knowledge string for your new topic of discussion in the right-hand test panel and copy it into a note or blank document for later use in your LLM Action's system prompts.

Providing instructions: LLM Actions

In order to use your new knowledge source in a response, you will need to accomplish three more steps:

  1. Create an LLM Action to provide the Large Language Model with instructions on how to use your knowledge sources to generate responses.

  2. Indicate to your AI Agent when to trigger a response generation by adding this LLM Action to the app response intent.

  3. Reference the output attribute for this LLM Action in the response message.

Once in the LLM Actions overview, you'll see prebuilt actions such as “Topic Response Generator”, which were automatically created to generate responses for your primary topic. We will use this LLM Action as the basis for our new one.

Let's have a look at what the “Topic Response Generator” LLM Action contains.

An LLM Action is made up of three main components:

  1. The LLM engine that powers it, visible under the Engine settings tab.

  2. The prompt configuration that provides the LLM with instructions, under the Prompt configuration tab, along with further settings that determine how the LLM response will be referenced, via output attributes.

  3. Guardrails to constrain the LLM's responses and configure their safety settings, under the Safeguarding tab.

In this initial guide, we are not going to dig any deeper into the preconfigured prompt configuration just yet. All you need to remember for now is that:

  • A knowledge source gets referenced in prompt instructions using a knowledge string.

  • The knowledge string is placed in a dedicated knowledge section of the prompt instructions, delimited as follows: <knowledge> (see the sketch after this list).

  • The LLM response that comes back when the action is run is saved in OpenDialog under an output attribute, which by default is the {llm_response} attribute.
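To make this concrete, here is a minimal sketch of what prompt instructions with a knowledge section might look like. The wording is purely illustrative; only the <knowledge> tags and the knowledge string (reusing the %%SpaceKnowledge.AboutCompany%% example from above) follow the conventions described in this guide.

```
You are a helpful assistant for our company. Answer the user's question
using only the information provided below.

<knowledge>
%%SpaceKnowledge.AboutCompany%%
</knowledge>

If the answer cannot be found in the knowledge provided, say that you
don't know rather than guessing.
```

When the action runs, the knowledge string is replaced with content retrieved from your vectorised topic, the assembled prompt is sent to the LLM, and the reply is saved in the {llm_response} output attribute.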

For more information on how to structure prompt instructions for LLM Actions, you can take a look at our further documentation.

For this initial setup, we will use the same configuration as the “Topic Response Generator” LLM Action.

Now let's tailor our new LLM Action and provide it with instructions that reference our newly set up knowledge source.

👉🏻 You will also need the knowledge string you put aside earlier.

(*) The configuration of the LLM Action you duplicated will carry over. When you select the OpenAI engine, the correct configuration will already be in place. You can use an OpenDialog-managed account, or use your own account credentials.

Now, we are going to update the prompt instructions to account for the additional knowledge source you have just added.

When to trigger a response: adding an LLM Action to a response intent

You are now ready to return to your conversation design and update the response intent with your LLM Action.

To do so, navigate back to the Design section of your scenario, by clicking Design in the navigation bar and then Conversation. Now, using the filter buttons in the top left corner of the central panel, or the conversation nodes in the centre, navigate back to the topic intent you set up earlier.

Topic conversations > About Company > About Company > Intents > AboutCompanyResponse

You are all set! When your scenario matches this intent, your LLM prompts will be sent to the language model and the related {llm_response} attribute will be populated.

Adapting the message: the LLM response attribute

To display the LLM's response text in your scenario, you will need to update the response message in the message editor to reference the {llm_response} attribute. Remember, this is the attribute where the response generated by your LLM Action is stored in OpenDialog.
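As a minimal illustration (the exact markup depends on your message editor setup, so treat this as an assumption rather than canonical syntax), a text message that simply interpolates the attribute could look like this:

```
{llm_response}
```

You can also embed the attribute in surrounding copy, for example “Here's what I found: {llm_response}”, provided the attribute name matches the output attribute configured on your LLM Action.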

Your new knowledge source is now ready, and the AI Agent is set to generate informed responses based on the enriched content.

Your additional topic is now live! With these steps, you can empower your AI Agent to handle a wider range of user questions while maintaining a smooth, relevant conversation flow. To add more additional topics, go back to the top of this guide, rinse and repeat!
