OpenDialog Docs
Output attributes


Last updated 10 months ago

When using LLM actions you may get many output attributes alongside any custom output attributes you have defined. Please see the following sections for descriptions and examples of how you might use each of these attributes.

Default output attributes

action_success

This attribute is output by all types of actions (not just LLM actions). It is a boolean attribute that denotes whether the action succeeded or failed. An LLM action might fail for a few reasons:

  • The credentials are invalid

  • The provider's API is unresponsive

  • The input or output has been moderated

You could use this attribute in conditions in your conversation design. For example, if you had an LLM action to generate some response text but it failed, you could use the action_success attribute to condition a failure message that lets the user know something went wrong. This condition would use the Is True and Is False operators.
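As an illustrative sketch of the semantics only (not OpenDialog's implementation; the user context is modeled here as a plain Python dict), an Is True / Is False condition on action_success selects between the LLM's response and a fallback message:

```python
# Illustrative sketch: how an "Is True" / "Is False" condition on
# action_success might choose which message to show. In OpenDialog these
# are user context attributes; a dict stands in for them here.
def pick_message(attributes: dict) -> str:
    if attributes.get("action_success") is True:  # "Is True" operator
        return attributes.get("llm_response", "")
    # "Is False" operator: the action failed, so show a fallback message
    return "Sorry, something went wrong while generating a response."

print(pick_message({"action_success": True, "llm_response": "Hello!"}))  # Hello!
print(pick_message({"action_success": False}))
```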

llm_action_error

This attribute denotes the reason for an LLM action's failure. llm_action_error is a string attribute that can have the following values:

  • prompt_moderation: This value can only occur when content moderation is enabled. It is returned if an aspect of the prompt was flagged. The prompt includes any text or attributes you have configured in the system prompt and user prompt, as well as the user's utterance.

  • response_moderation: This value can only occur when content moderation is enabled. It is returned if an aspect of the LLM's response was flagged.

  • prompt_exclusion: This value can only occur when a user utterance exclusion list is provided. It is returned if the prompt contained any terms in the list. The prompt includes any text or attributes you have configured in the system prompt and user prompt, as well as the user's utterance.

  • response_exclusion: This value can only occur when a response utterance exclusion list is provided. It is returned if the LLM's response contained any terms in the list.

  • llm_request_failed: This value occurs when the request to the LLM fails, either through misconfiguration or connection issues. If the action is misconfigured, this value can be returned due to invalid API keys or model names. If the action is configured correctly, this value can still be returned due to issues with the LLM provider's API.

You could use this attribute in conditions in your conversation design. For example, if you had an LLM action to generate some response text but it failed, you could use the llm_action_error attribute to condition failure messages that let the user know what went wrong and if/how it can be resolved. These conditions would use the In Set operator, such as "user.llm_action_error In Set prompt_exclusion" to condition a message to only display if the LLM action failed due to an excluded term being present in the prompt.

llm_action_flagged_moderation_categories

If content moderation is enabled and the prompt or response has been flagged, then this attribute denotes the flagged categories. llm_action_flagged_moderation_categories is a string collection attribute that will contain zero or more categories. The list of possible moderation categories varies between OpenAI and Azure OpenAI.

You could use this attribute in conditions in your conversation design. For example, if you had an LLM action to generate some response text but it failed due to prompt or response moderation, you could use the llm_action_flagged_moderation_categories attribute to condition failure messages that let the user know which moderation category was flagged. These conditions would use the In Set operator, such as "user.llm_action_flagged_moderation_categories In Set violence" to condition a message to only display if the LLM action failed due to the violence category being flagged.

llm_action_prompt_exclusions

If a user utterance exclusion list is provided and any of the terms are found in the prompt, then this attribute denotes the found terms. llm_action_prompt_exclusions is a string collection attribute that will contain zero or more excluded terms.

You could use this attribute in conditions in your conversation design. For example, if you had an LLM action to generate some response text but it failed, you could use the llm_action_prompt_exclusions attribute to condition failure messages that let the user know which terms they used matched the exclusion list. These conditions would use the In Set operator, such as "user.llm_action_prompt_exclusions In Set complaint" to condition a message to only display if the prompt contained the term "complaint".

llm_action_response_exclusions

If a response utterance exclusion list is provided and any of the terms are found in the LLM's response, then this attribute denotes the found terms. llm_action_response_exclusions is a string collection attribute that will contain zero or more excluded terms.

You could use this attribute in conditions in your conversation design. For example, if you had an LLM action to generate some response text but it failed, you could use the llm_action_response_exclusions attribute to condition failure messages that let the user know which terms in the LLM's response matched the exclusion list. These conditions would use the In Set operator, such as "user.llm_action_response_exclusions In Set guarantee" to condition a message to only display if the LLM's response contained the term "guarantee".

Configurable output attributes

llm_response

The llm_response attribute is included by default for all new LLM actions. It is a string attribute which contains the generated text returned by the LLM. This attribute can be removed if it is not required.

You can use this attribute in a message to display the LLM's response.
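The In Set operator used throughout these examples tests membership in a string collection attribute. A minimal sketch of that semantics, again modeling attributes as a plain dict rather than OpenDialog's actual condition engine:

```python
# Illustrative sketch: "In Set" checks whether a value is present in a
# string-collection attribute such as llm_action_flagged_moderation_categories
# or llm_action_prompt_exclusions.
def in_set(attributes: dict, name: str, value: str) -> bool:
    return value in attributes.get(name, [])

attrs = {
    "llm_action_error": "prompt_moderation",
    "llm_action_flagged_moderation_categories": ["violence"],
}

# Condition a moderation-failure message on the flagged category:
if in_set(attrs, "llm_action_flagged_moderation_categories", "violence"):
    print("Your message was flagged and cannot be processed.")
```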