Add New Provider

Provider Configuration Methods

Providers support three configuration methods:

Predefined Model

Users only need to configure unified provider credentials to use the predefined models offered by the provider.

Customizable Model

Users need to add a credential configuration for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique model_uid. If you want to connect both, you need to configure a model_uid for each model.

Fetch from Remote

Similar to the predefined-model configuration method, users only need to configure unified provider credentials, and the models are fetched from the provider using the credential information.

For instance, with OpenAI, we can fine-tune multiple models on top of gpt-3.5-turbo, all under the same api_key. When configured as fetch-from-remote, developers only need to configure a unified api_key, and Dify Runtime will fetch all of the developer's fine-tuned models and connect them to Dify.

These three configuration methods can coexist, meaning a provider can support predefined-model + customizable-model, predefined-model + fetch-from-remote, and so on. In other words, configuring unified provider credentials makes the predefined models and the remotely fetched models usable, and additional custom models can be used once they are added.

Configuration Instructions

Terminology

  • module: A module is a Python Package, or more colloquially, a folder containing an __init__.py file and other .py files.

Steps

Adding a new provider mainly involves several steps. Here is a brief outline to give you an overall understanding. Detailed steps will be introduced below.

  • Create a provider YAML file and write it according to the Provider Schema.

  • Create provider code and implement a class.

  • Create corresponding model type modules under the provider module, such as llm or text_embedding.

  • Create same-named code files under the corresponding model module, such as llm.py, and implement a class.

  • If there are predefined models, create same-named YAML files under the model module, such as claude-2.1.yaml, and write them according to the AI Model Entity.

  • Write test code to ensure functionality is available.

Let's Get Started

To add a new provider, first determine the provider's English identifier, such as anthropic, and create a module named after it in model_providers.
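
Before any files are filled in, the resulting module layout looks roughly as follows. This is an orientation sketch using the Anthropic example with a predefined LLM model; the file names simply mirror the steps listed above:

model_providers
└── anthropic
    ├── __init__.py
    ├── _assets               # provider icons
    ├── anthropic.yaml        # provider YAML configuration
    ├── anthropic.py          # provider code
    └── llm                   # model type module
        ├── __init__.py
        ├── llm.py            # model calling code
        └── claude-2.1.yaml   # predefined model definition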

Under this module, we need to prepare the provider's YAML configuration first.

Preparing Provider YAML

Taking Anthropic as an example, preset the basic information of the provider, supported model types, configuration methods, and credential rules.

provider: anthropic  # Provider identifier
label:  # Provider display name, can be set in en_US English and zh_Hans Chinese. If zh_Hans is not set, en_US will be used by default.
  en_US: Anthropic
icon_small:  # Small icon of the provider, stored in the _assets directory under the corresponding provider implementation directory, same language strategy as label
  en_US: icon_s_en.png
icon_large:  # Large icon of the provider, stored in the _assets directory under the corresponding provider implementation directory, same language strategy as label
  en_US: icon_l_en.png
supported_model_types:  # Supported model types, Anthropic only supports LLM
- llm
configurate_methods:  # Supported configuration methods, Anthropic only supports predefined models
- predefined-model
provider_credential_schema:  # Provider credential rules, since Anthropic only supports predefined models, unified provider credential rules need to be defined
  credential_form_schemas:  # Credential form item list
  - variable: anthropic_api_key  # Credential parameter variable name
    label:  # Display name
      en_US: API Key
    type: secret-input  # Form type, secret-input here represents an encrypted information input box, only displaying masked information when editing.
    required: true  # Whether it is required
    placeholder:  # Placeholder text
      zh_Hans: 在此输入你的 API Key
      en_US: Enter your API Key
  - variable: anthropic_api_url
    label:
      en_US: API URL
    type: text-input  # Form type, text-input here represents a text input box
    required: false
    placeholder:
      zh_Hans: 在此输入你的 API URL
      en_US: Enter your API URL

If the connected provider offers customizable models, such as OpenAI, which provides fine-tuned models, you also need to add the model_credential_schema. Taking OpenAI as an example:

model_credential_schema:
  model: # Fine-tuned model name
    label:
      en_US: Model Name
      zh_Hans: 模型名称
    placeholder:
      en_US: Enter your model name
      zh_Hans: 输入模型名称
  credential_form_schemas:
  - variable: openai_api_key
    label:
      en_US: API Key
    type: secret-input
    required: true
    placeholder:
      zh_Hans: 在此输入你的 API Key
      en_US: Enter your API Key
  - variable: openai_organization
    label:
      zh_Hans: 组织 ID
      en_US: Organization
    type: text-input
    required: false
    placeholder:
      zh_Hans: 在此输入你的组织 ID
      en_US: Enter your Organization ID
  - variable: openai_api_base
    label:
      zh_Hans: API Base
      en_US: API Base
    type: text-input
    required: false
    placeholder:
      zh_Hans: 在此输入你的 API Base
      en_US: Enter your API Base

You can also refer to the YAML configuration information in the directories of other providers under the model_providers directory.

Implement Provider Code

We need to create a Python file with the same name under model_providers, such as anthropic.py, and implement a class that inherits from the __base.model_provider.ModelProvider base class, such as AnthropicProvider.

Custom Model Providers

For providers like Xinference that offer custom models, this step can be skipped. Just create an empty XinferenceProvider class and implement an empty validate_provider_credentials method. This method will not actually be used and is only to avoid abstract class instantiation errors.

from core.model_runtime.model_providers.__base.model_provider import ModelProvider


class XinferenceProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Intentionally a no-op: Xinference only offers customizable models,
        # so credentials are validated per model rather than per provider.
        pass

Predefined Model Providers

Providers need to inherit from the __base.model_provider.ModelProvider base class and implement the validate_provider_credentials method to validate the provider's unified credentials. You can refer to AnthropicProvider.

def validate_provider_credentials(self, credentials: dict) -> None:
    """
    Validate provider credentials.
    You can choose any validate_credentials method of a model type,
    or implement your own validation, for example by calling the
    provider's model-list API.

    If validation fails, raise an exception.

    :param credentials: provider credentials, form defined in `provider_credential_schema`.
    """

You can also leave validate_provider_credentials as a stub for now and reuse the model credential validation method directly once it has been implemented.
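
For reference, here is a minimal sketch of such an implementation, delegating to the LLM model's own credential check. It assumes the llm module described below is already implemented and uses claude-2.1 only because it is the example model in this guide; it follows the pattern of providers bundled with Dify rather than prescribing exact code:

from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.model_provider import ModelProvider


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        try:
            # Reuse the LLM model's validate_credentials: a lightweight call
            # with a known model name fails fast when the API key is invalid.
            model_instance = self.get_model_instance(ModelType.LLM)
            model_instance.validate_credentials(model="claude-2.1", credentials=credentials)
        except CredentialsValidateFailedError:
            raise
        except Exception as ex:
            raise CredentialsValidateFailedError(str(ex)) from ex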

Adding Models

Adding Predefined Models

For predefined models, we can connect them by simply defining a YAML file and implementing the calling code (see the skeleton after this section).

Adding Custom Models

For custom models, we only need to implement the calling code to connect them, but the parameters they handle may be more complex.

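To make "implementing the calling code" concrete, below is a heavily trimmed, hypothetical skeleton of llm/llm.py for the predefined-model case. The _invoke signature follows the model runtime's LargeLanguageModel interface; other required methods, such as get_num_tokens and the invoke-error mapping, are omitted for brevity:

from collections.abc import Generator
from typing import Optional, Union

from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel


class AnthropicLargeLanguageModel(LargeLanguageModel):
    def _invoke(self, model: str, credentials: dict,
                prompt_messages: list[PromptMessage], model_parameters: dict,
                tools: Optional[list[PromptMessageTool]] = None,
                stop: Optional[list[str]] = None, stream: bool = True,
                user: Optional[str] = None) -> Union[LLMResult, Generator]:
        # Call the provider's API here, then map the response to an LLMResult
        # (or yield LLMResultChunk objects when stream=True).
        raise NotImplementedError

    def validate_credentials(self, model: str, credentials: dict) -> None:
        # Issue a minimal request; raise CredentialsValidateFailedError on failure.
        raise NotImplementedError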

Testing

To ensure the availability of the connected provider/model, each implemented method needs corresponding integration test code in the tests directory.

Taking Anthropic as an example.

Before writing test code, you need to add the credential environment variables required for testing the provider to .env.example, for example ANTHROPIC_API_KEY.

Before running the tests, copy .env.example to .env and fill in the actual credential values.

Writing Test Code

Under the tests directory, create a module with the same name as the provider, anthropic, and within it create test_provider.py plus a test file for each model type, as shown below:

.
├── __init__.py
├── anthropic
│   ├── __init__.py
│   ├── test_llm.py       # LLM Test
│   └── test_provider.py  # Provider Test

Write test cases covering the various situations of the code implemented above, and submit the code once all tests pass.
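
For example, a test_provider.py for Anthropic might look roughly like this. This is a hypothetical sketch that assumes the pytest-based layout of Dify's existing integration tests and reads the key from the environment populated via .env:

import os

import pytest

from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.anthropic.anthropic import AnthropicProvider


def test_validate_provider_credentials():
    provider = AnthropicProvider()

    # Empty credentials must be rejected.
    with pytest.raises(CredentialsValidateFailedError):
        provider.validate_provider_credentials(credentials={})

    # Credentials supplied via the environment (see .env) must pass validation.
    provider.validate_provider_credentials(
        credentials={"anthropic_api_key": os.environ.get("ANTHROPIC_API_KEY")}
    )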

