
Error Type

This article summarizes the potential exceptions and corresponding error types that may occur in different types of nodes.

General Error

  • System Error

    Typically caused by system issues, such as a disabled sandbox service or network connection problems.

  • Operational Error

    Occurs when a node is configured or used incorrectly by the developer.

Code Node

Code nodes support running Python and JavaScript code for data transformation in workflows or chatflows. Here are 4 common runtime errors:

  1. Code Node Error (CodeNodeError)

    This error occurs due to exceptions in developer-written code, such as missing variables, calculation logic errors, or treating a string-array (Array[String]) input as a string variable. You can locate the issue using the error message and the exact line number.

  2. Sandbox Network Issues (System Error)

    This error commonly occurs when there are network traffic or connection issues, such as when the sandbox service isn't running or a proxy service has interrupted the connection. You can resolve this through the following steps:

    1. Check network service quality

    2. Start the sandbox service

    3. Verify proxy settings

  3. Depth Limit Error (DepthLimitError)

    By default, the Code node supports nested structures up to 5 levels deep. An error occurs if the output exceeds 5 levels of nesting.

  4. Output Validation Error (OutputValidationError)

    An error occurs when the actual output variable type doesn't match the selected output variable type. Developers need to change the selected output variable type to avoid this issue (see the sketch after this list).
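
Below is a minimal sketch of a Python Code node body illustrating the first and last errors above. The variable names are hypothetical; the node is assumed to declare an "items" input of type Array[String] and a "result" output selected as String.

    def main(items: list) -> dict:
        # CodeNodeError: treating the Array[String] input as a single string,
        # e.g. items.upper(), would raise an AttributeError at runtime.

        # OutputValidationError: returning a non-string value here, e.g.
        # len(items), would not match the String type selected for "result".
        return {
            "result": ", ".join(items)  # matches the declared String output type
        }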

LLM Node

The LLM node is a core component of Chatflow and Workflow, utilizing the LLM's capabilities in dialogue, generation, classification, and processing to complete various tasks based on user input instructions. Here are 6 common runtime errors:

  1. Variable Not Found Error (VariableNotFoundError)

    This error occurs when a variable referenced in the system prompt or set in the context cannot be found. Application developers can resolve this by replacing the problematic variables.

  2. Invalid Context Structure Error (InvalidContextStructureError)

    An error occurs when the context within the LLM node receives an invalid data structure (such as array[object]).

    Context only supports string (String) data structures (see the sketch after this list).

  3. Invalid Variable Type Error (InvalidVariableTypeError)

    This error appears when the system prompt type is not in the standard Prompt text format or Jinja syntax format.

  4. Model Not Exist Error (ModelNotExistError)

    Each LLM node requires a configured model. This error occurs when no model is selected.

  5. LLM Authorization Required Error (LLMModeRequiredError)

    The model selected in the LLM node has no configured API Key. You can refer to the documentation for model authorization.

  6. No Prompt Found Error (NoPromptFoundError)

    An error occurs when the LLM node's prompt is empty, as prompts cannot be blank.
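
Since the context only accepts strings, one way to avoid InvalidContextStructureError is to flatten an Array[Object] variable into plain text in a Code node before wiring it into the LLM node's context. A minimal sketch, with hypothetical variable names:

    import json

    def main(records: list) -> dict:
        # Serialize each object as one JSON line so the context receives plain text.
        context_text = "\n".join(json.dumps(item, ensure_ascii=False) for item in records)
        return {
            "context_text": context_text  # declare this output variable as String
        }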

HTTP

HTTP nodes allow seamless integration with external services through customizable HTTP requests for data retrieval, webhook triggering, image generation, or file downloads. Here are 5 common errors for this node:

  1. Authorization Configuration Error (AuthorizationConfigError)

    This error occurs when authentication information (Auth) is not configured.

  2. File Fetch Error (FileFetchError)

    This error appears when file variables cannot be retrieved.

  3. Invalid HTTP Method Error (InvalidHttpMethodError)

    An error occurs when the request method is not one of the following: GET, HEAD, POST, PUT, PATCH, or DELETE (see the sketch after this list).

  4. Response Size Error (ResponseSizeError)

    HTTP response size is limited to 10MB. An error occurs if the response exceeds this limit.

  5. HTTP Response Code Error (HTTPResponseCodeError)

    An error occurs when the response returns a status code that doesn't start with 2 (such as 200 or 201). If exception handling is enabled, status codes such as 400, 404, and 500 are treated as errors; otherwise, they won't trigger errors.
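
For illustration only (this is not Dify's internal implementation), the sketch below mirrors the constraints described above: the six accepted request methods, the 10MB response limit, and the 2xx status check.

    import requests

    ALLOWED_METHODS = {"GET", "HEAD", "POST", "PUT", "PATCH", "DELETE"}
    MAX_RESPONSE_BYTES = 10 * 1024 * 1024  # the 10MB response limit described above

    def call_endpoint(method: str, url: str, **kwargs) -> str:
        # InvalidHttpMethodError: only the six methods above are accepted.
        if method.upper() not in ALLOWED_METHODS:
            raise ValueError(f"Unsupported HTTP method: {method}")

        response = requests.request(method.upper(), url, timeout=30, **kwargs)

        # ResponseSizeError: responses larger than 10MB are rejected.
        if len(response.content) > MAX_RESPONSE_BYTES:
            raise ValueError("Response exceeds the 10MB size limit")

        # HTTPResponseCodeError: status codes outside 2xx count as failures.
        if not 200 <= response.status_code < 300:
            raise ValueError(f"Unexpected status code: {response.status_code}")

        return response.text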

Tool

The following 3 errors commonly occur during runtime:

  1. Tool Execution Error (ToolNodeError)

    This error occurs during the tool's own execution, for example when the target API's request limit is reached.

  2. Tool Parameter Error (ToolParameterError)

    An error occurs when the configured tool node parameters are invalid, such as passing values that don't match the tool node's defined parameters (see the sketch after this list).

  3. Tool File Processing Error (ToolFileError)

    An error occurs when the tool node cannot find the required files.
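
As an illustration of the parameter mismatch behind ToolParameterError (again, not Dify's internal code), the sketch below validates a call against a hypothetical tool schema:

    # Hypothetical declared parameters for a search tool.
    DECLARED_PARAMETERS = {"query": str, "max_results": int}

    def validate_tool_parameters(params: dict) -> None:
        # Reject unknown names and values whose type doesn't match the declared schema.
        for name, value in params.items():
            if name not in DECLARED_PARAMETERS:
                raise ValueError(f"Unknown parameter: {name}")
            if not isinstance(value, DECLARED_PARAMETERS[name]):
                expected = DECLARED_PARAMETERS[name].__name__
                raise ValueError(f"Parameter '{name}' expects {expected}")

    validate_tool_parameters({"query": "dify", "max_results": "ten"})  # raises: wrong type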
