
Google Gemini



Where to find

Semantic Intent Classifiers let your scenario understand user input by integrating with a large language model provider such as Google Gemini. They can be found under the Language Services section of your OpenDialog Workspace. Select 'Create new service' and then 'Semantic Intent Classifier' to create one, and choose Google Gemini to start setting up your Gemini integration.

What you'll need

To set up an integration between your Semantic Intent Classifier and a Google Gemini model, you will need a Google Cloud account. Within your account, you will need to create a project and enable the Vertex AI API. After this, create a Google Service Account with the role of "Vertex AI User" and download a JSON key (a quick local check of the downloaded key is sketched after the links below).

  1. How to create a project and enable the Vertex AI API

  2. How to create and download a JSON key
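
Before pasting anything into OpenDialog, you may want to confirm that the downloaded file really is a valid service account key and belongs to the project you expect. The following is a minimal sketch, not part of OpenDialog; it assumes the google-auth Python package and a locally saved key file named vertex-sa-key.json (a hypothetical filename).

```python
# Sanity-check a downloaded Google service account key file.
import json

from google.oauth2 import service_account  # pip install google-auth

KEY_PATH = "vertex-sa-key.json"  # hypothetical local filename

with open(KEY_PATH) as f:
    key_data = json.load(f)

# These two fields should match the project and service account you created.
print("Project ID:   ", key_data["project_id"])
print("Client email: ", key_data["client_email"])

# Raises an error if the file is not a well-formed service account key.
credentials = service_account.Credentials.from_service_account_file(KEY_PATH)
print("Key file parsed successfully.")
```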

To configure your Google Gemini Semantic Intent Classifier, you will need to provide the following four elements (a sketch for checking them together follows this list):

  • The Location, which is the Google Cloud region the integration should use, such as europe-west2.

  • The Project ID, which is the ID of the Google Cloud project the Service Account belongs to.

  • The Model, which is the specific Gemini model you would like to use, such as gemini-1.5-pro.

  • The JSON credentials, which are the contents of the JSON key file you downloaded for your Google Cloud Service Account.
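
If you want to verify these four values work together before entering them in OpenDialog, the sketch below calls the chosen Gemini model directly on Vertex AI. It is only an illustration, assuming the google-cloud-aiplatform package; the project ID and key filename shown are hypothetical placeholders for your own values.

```python
# Check that Location, Project ID, Model and JSON credentials work together.
import vertexai
from google.oauth2 import service_account
from vertexai.generative_models import GenerativeModel

LOCATION = "europe-west2"        # the Location value
PROJECT_ID = "my-gcp-project"    # the Project ID value (hypothetical)
MODEL = "gemini-1.5-pro"         # the Model value
KEY_PATH = "vertex-sa-key.json"  # file whose contents become the JSON credentials

# Load the service account key and initialise the Vertex AI client with it.
credentials = service_account.Credentials.from_service_account_file(KEY_PATH)
vertexai.init(project=PROJECT_ID, location=LOCATION, credentials=credentials)

# A successful response confirms the service account can reach the model.
model = GenerativeModel(MODEL)
response = model.generate_content("Reply with the single word: OK")
print(response.text)
```

If this call fails, check that the Vertex AI API is enabled for the project, that the service account has the "Vertex AI User" role, and that the chosen model is available in the chosen region.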

How to use

Setting up the Google Gemini Semantic Intent Classifier in OpenDialog

To set up your Google Gemini Semantic Intent Classifier, navigate to "Language services". Use the "Create new service" button to begin creating a new Language service, then select the "Semantic Intent Classifier" type.

After providing a name and a description for your Semantic Intent Classifier, select "Google Gemini" and provide the necessary account details. Hit 'Create Service' to create the integration so it can be used within your scenario.

Using the Google Gemini Semantic Intent Classifier in your scenario

To use your Google Gemini Semantic Intent Classifier within your scenario, you will need to create an Interpreter. Navigate to "Interpret" and click "Add new interpreter". Enter a name for the interpreter and select the Language Service type. This displays a field in which you need to select the Semantic Intent Classifier you created. Once you have selected the desired Language service, click "Save Configuration".

Now that you have created an interpreter, you can add it to an intent in the Designer. Navigate to "Design" and then into the conversation, scene and turn in which you would like to use the Semantic Intent Classifier. Create a user intent and select the interpreter you created, which will be annotated with a purple "SIC" label. After selecting the interpreter, you can choose one of its intents in the intent name field. Once you have done this, you have successfully connected your Semantic Intent Classifier to your scenario.

Testing the Google Gemini Semantic Intent Classifier in your scenario

You can check your Semantic Intent Classifier by testing your conversation in the Preview (Test - Preview). When you navigate to the part of the conversation that uses your classifier and provide a valid input, you should see the desired intent matched in the "Incoming Intent" panel. If you defined any output attributes, they will appear in the right-hand "Context" panel alongside the scenario's other attributes.
