Create Model Providers


The first step in creating a Model type plugin is to initialize the plugin project and create the model provider file; you then integrate the specific predefined or custom models.

Prerequisites

  • Dify plugin scaffolding tool

  • Python environment, version ≥ 3.12

For detailed instructions on preparing the plugin development scaffolding tool, please refer to Initialize Development Tools.

Create New Project

In the current directory, run the CLI scaffolding tool to create a new Dify plugin project:

./dify-plugin-darwin-arm64 plugin init

If you have renamed the binary file to dify and copied it to /usr/local/bin, you can run the following command instead:

dify plugin init

Choose Model Plugin Template

Plugins are divided into three types: tools, models, and extensions. All templates in the scaffolding tool provide complete code projects. This example will use an LLM type plugin.

Plugin type: llm

Configure Plugin Permissions

Configure the following permissions for this LLM plugin (a manifest sketch follows the list):

  • Models

  • LLM

  • Storage
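In the generated plugin manifest, these permissions typically map to the resource.permission section. A minimal sketch, assuming the standard plugin manifest schema (the exact field values shown here are illustrative):

resource:
  memory: 268435456  # maximum memory available to the plugin, in bytes
  permission:
    model:
      enabled: true
      llm: true      # allow the plugin to invoke LLM-type models
    storage:
      enabled: true
      size: 1048576  # persistent storage quota, in bytes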

Model Type Configuration

Model providers support three configuration methods:

  1. predefined-model: The provider offers a set of predefined models, and only unified provider credentials are needed to use them. For example, the OpenAI provider offers predefined models such as gpt-3.5-turbo-0125 and gpt-4o-2024-05-13. For detailed development instructions, refer to Integrating Predefined Models.

  2. customizable-model: You need to manually add credential configurations for each model. For example, Xinference supports both LLM and Text Embedding models, but each deployed model has a unique model_uid, so to integrate both you must configure a model_uid for each one. For detailed development instructions, refer to Integrating Custom Models.

  3. fetch-from-remote: Like predefined-model, this method only requires unified provider credentials; the available models are then fetched from the provider using those credentials. For example, OpenAI's fine-tuned models can be retrieved this way.

These configuration methods can coexist, meaning a provider can support combinations such as predefined-model + customizable-model or predefined-model + fetch-from-remote, as sketched below.
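For example, a provider that supports both predefined and custom models declares both methods together in its provider YAML. A minimal sketch of the configurate_methods field (shown in full in the Anthropic example later on this page):

configurate_methods:
  - predefined-model
  - customizable-model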

Adding a New Model Provider

Here are the main steps to add a new model provider:

  1. Create Model Provider Configuration YAML File

    Add a YAML file in the provider directory to describe the provider's basic information and parameter configuration. Write content according to ProviderSchema requirements to ensure consistency with system specifications.

  2. Write Model Provider Code

    Create provider class code, implementing a Python class that meets system interface requirements for connecting with the provider's API and implementing core functionality.


The following sections describe each step in detail.

1. Create Model Provider Configuration File

The manifest is a YAML file that declares the model provider's basic information, supported model types, configuration methods, and credential rules. The plugin project template automatically generates the configuration files under the /providers path.

Here's an example of the anthropic.yaml configuration file for Anthropic:

provider: anthropic
label:
  en_US: Anthropic
description:
  en_US: Anthropic's powerful models, such as Claude 3.
icon_small:
  en_US: icon_s_en.svg
icon_large:
  en_US: icon_l_en.svg
background: "#F0F0EB"
help:
  title:
    en_US: Get your API Key from Anthropic
  url:
    en_US: https://console.anthropic.com/account/keys
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your API Key
    - variable: anthropic_api_url
      label:
        en_US: API URL
      type: text-input
      required: false
      placeholder:
        en_US: Enter your API URL
models:
  llm:
    predefined:
      - "models/llm/*.yaml"
    position: "models/llm/_position.yaml"
extra:
  python:
    provider_source: provider/anthropic.py
    model_sources:
      - "models/llm/llm.py"

If the provider offers custom models, for example OpenAI's fine-tuned models, you need to add the model_credential_schema field.

The following is a sample configuration for the OpenAI family of models:

model_credential_schema:
  model:
    label:
      en_US: Model Name
    placeholder:
      en_US: Enter your model name
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your API Key
    - variable: openai_organization
      label:
        en_US: Organization
      type: text-input
      required: false
      placeholder:
        en_US: Enter your Organization ID
    - variable: openai_api_base
      label:
        en_US: API Base
      type: text-input
      required: false
      placeholder:
        en_US: Enter your API Base

2. Write Model Provider Code

Create a Python file with the same name as the provider, e.g. anthropic.py, in the /providers folder, and implement a class that inherits from the ModelProvider base class, e.g. AnthropicProvider. The following is the Anthropic sample code:

import logging
from dify_plugin.entities.model import ModelType
from dify_plugin.errors.model import CredentialsValidateFailedError
from dify_plugin import ModelProvider

logger = logging.getLogger(__name__)


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        """
        Validate provider credentials

        if validate failed, raise exception

        :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
        """
        try:
            model_instance = self.get_model_instance(ModelType.LLM)
            model_instance.validate_credentials(model="claude-3-opus-20240229", credentials=credentials)
        except CredentialsValidateFailedError as ex:
            raise ex
        except Exception as ex:
            logger.exception(f"{self.get_provider_schema().provider} credentials validate failed")
            raise ex

Providers must inherit from the ModelProvider base class and implement the validate_provider_credentials method for unified provider credential validation; see AnthropicProvider above.

def validate_provider_credentials(self, credentials: dict) -> None:
    """
    Validate provider credentials
    You can choose any validate_credentials method of model type or implement validate method by yourself,
    such as: get model list api

    if validate failed, raise exception

    :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
    """

You can also leave validate_provider_credentials as a stub at first and wire it up once the model-level credential validation method is implemented (see the sketch below). For other types of model providers, refer to the following configuration method.
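A minimal sketch of this stub-first approach, using a hypothetical MyProvider class (the comment marks where the model-level validation would later be wired in):

from dify_plugin import ModelProvider


class MyProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Stub for now; once the model-level validate_credentials
        # method is implemented, delegate to it here as the
        # AnthropicProvider example above does.
        pass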

Custom Model Providers

For custom model providers like Xinference, you can skip the full implementation step. Simply create an empty class called XinferenceProvider and implement an empty validate_provider_credentials method in it.

Detailed Explanation:

• XinferenceProvider is a placeholder class used to identify custom model providers.

• While the validate_provider_credentials method won't be actually called, it must exist because its parent class is abstract and requires all child classes to implement this method. By providing an empty implementation, we can avoid instantiation errors that would occur from not implementing the abstract method.

from dify_plugin import ModelProvider


class XinferenceProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Intentionally empty: for customizable-model providers,
        # credentials are validated per model rather than at the
        # provider level, but the abstract method must still exist.
        pass

After initializing the model provider, the next step is to integrate the specific LLM models offered by the provider. For detailed instructions, please refer to:

  • Integrate the Predefined Model

  • Integrate the Customizable Model

For the complete Model Provider YAML specification, see the Model Schema documentation.