Add New Provider
Providers support three configuration models:
Predefined Model (`predefined-model`)
This means that users only need to configure the unified provider credentials to use the predefined models offered by the provider.
Customizable Model (`customizable-model`)
Users need to add a credential configuration for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique `model_uid`. If you want to connect both, you need to configure a `model_uid` for each model.
Fetch from Remote (`fetch-from-remote`)
Similar to the `predefined-model` configuration method, users only need to configure the unified provider credentials, and the models are fetched from the provider using that credential information.
For instance, with OpenAI we can fine-tune multiple models on top of gpt-3.5-turbo, all under the same `api_key`. When configured as `fetch-from-remote`, developers only need to configure a unified `api_key`, and Dify Runtime will fetch all of the developer's fine-tuned models and connect them to Dify.
These three configuration methods can coexist, meaning a provider can support `predefined-model` + `customizable-model`, `predefined-model` + `fetch-from-remote`, and so on. In other words, with the unified provider credentials you can use the predefined models and the models fetched from remote, and any custom models you add on top of that can be used as well.
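As an illustration, a provider that supports both predefined and custom models could declare the two methods together in its YAML configuration. The field name below is an assumption based on Dify's provider schema and should be verified against the schema documentation:

```yaml
# Hypothetical excerpt from a provider YAML file: both configuration
# methods are declared, so they coexist for the same provider.
configurate_methods:
  - predefined-model
  - customizable-model
```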
Terminology
module: A module is a Python package or, more colloquially, a folder containing an `__init__.py` file together with other `.py` files.
Steps
Adding a new provider involves the following steps. Here is a brief outline to give you an overall picture; the details are covered below.
1. Create the provider YAML file and write it according to the provider schema.
2. Create the provider code and implement a provider `class`.
3. Create the corresponding model type `modules` under the provider `module`, such as `llm` or `text_embedding`.
4. Create code files with the same name under the corresponding model `module`, such as `llm.py`, and implement a model `class`.
5. If there are predefined models, create YAML files named after them under the model `module`, such as `claude-2.1.yaml`, and write them according to the model schema.
6. Write test code to make sure the functionality works.
To add a new provider, first determine the provider's English identifier, such as `anthropic`, and create a `module` named after it under `model_providers`.
Under this `module`, we first need to prepare the provider's YAML configuration.
Preparing Provider YAML
Taking `Anthropic` as an example, preset the basic information of the provider, the supported model types, the configuration methods, and the credential rules.
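A trimmed sketch of what `anthropic.yaml` might contain is shown below. The field names follow the general shape of Dify's provider schema, but the schema documentation remains the authoritative reference:

```yaml
provider: anthropic
label:
  en_US: Anthropic
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
```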
Implement Provider Code
We need to create a Python file with the same name under `model_providers`, such as `anthropic.py`, and implement a class that inherits from the `__base.provider.Provider` base class, for example `AnthropicProvider`.
Custom Model Providers
For providers like Xinference that offer custom models, this step can be skipped. Just create an empty `XinferenceProvider` class and implement an empty `validate_provider_credentials` method. This method is never actually called; it exists only to avoid abstract-class instantiation errors.
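For example, the placeholder implementation could be as small as the sketch below; the base-class import path is an assumption and should match your project:

```python
from core.model_runtime.model_providers.__base.model_provider import ModelProvider


class XinferenceProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Never called for custom-model-only providers; exists only to
        # satisfy the abstract base class.
        pass
```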
Predefined Model Providers
You can also defer the `validate_provider_credentials` implementation for now and simply reuse the model credential validation method once it has been implemented.
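A minimal sketch of that reuse pattern, assuming the `ModelProvider` base class and a `get_model_instance` helper from Dify's model runtime; the import paths and the model name are illustrative and should be checked against the actual codebase:

```python
import logging

from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.model_provider import ModelProvider

logger = logging.getLogger(__name__)


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Reuse the LLM model's credential validation to verify the
        # provider-level credentials.
        try:
            model_instance = self.get_model_instance(ModelType.LLM)
            model_instance.validate_credentials(model="claude-2.1", credentials=credentials)
        except CredentialsValidateFailedError:
            raise
        except Exception as ex:
            logger.exception("Anthropic credentials validation failed")
            raise ex
```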
Adding Models
For predefined models, we can connect them by simply defining a YAML file and implementing the calling code.
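A shortened sketch of what a predefined model file such as `claude-2.1.yaml` might look like; the field names follow the general shape of Dify's model schema, and the pricing values here are placeholders:

```yaml
model: claude-2.1
label:
  en_US: claude-2.1
model_type: llm
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    default: 4096
    max: 4096
pricing:
  input: '8.00'
  output: '24.00'
  unit: '0.000001'
  currency: USD
```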
For custom models, we only need to implement the calling code to connect them, but the parameters they handle may be more complex.
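Whether the model is predefined or custom, the calling code has the same general shape. The skeleton below assumes Dify's `LargeLanguageModel` base class and its method signatures; treat the import paths and signatures as approximations to verify against the base class definitions:

```python
from collections.abc import Generator
from typing import Optional, Union

from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel


class AnthropicLargeLanguageModel(LargeLanguageModel):
    def _invoke(self, model: str, credentials: dict,
                prompt_messages: list[PromptMessage], model_parameters: dict,
                tools: Optional[list[PromptMessageTool]] = None,
                stop: Optional[list[str]] = None, stream: bool = True,
                user: Optional[str] = None) -> Union[LLMResult, Generator]:
        # Call the provider's API and map the response to an LLMResult,
        # or yield chunks when stream=True.
        raise NotImplementedError

    def validate_credentials(self, model: str, credentials: dict) -> None:
        # Issue a lightweight request to confirm the credentials work
        # for this specific model.
        raise NotImplementedError

    def get_num_tokens(self, model: str, credentials: dict,
                       prompt_messages: list[PromptMessage],
                       tools: Optional[list[PromptMessageTool]] = None) -> int:
        # Return an estimated token count for the given prompt.
        raise NotImplementedError
```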
To ensure that the connected provider/model works, each implemented method needs corresponding integration test code in the `tests` directory.
Take `Anthropic` as an example.
Before writing test code, add the credential environment variables required for testing the provider to `.env.example`, such as `ANTHROPIC_API_KEY`.
Before running the tests, copy `.env.example` to `.env` and then execute them.
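For example (the value is a placeholder):

```
# .env
ANTHROPIC_API_KEY=your-anthropic-api-key
```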
Writing Test Code
Create a `module` with the same name as the provider, `anthropic`, under the `tests` directory, and then create `test_provider.py` along with test files for the corresponding model types in this module, as shown below:
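A possible layout (the parent path of the `tests` directory depends on the project structure):

```
tests/
└── anthropic/
    ├── __init__.py
    ├── test_provider.py
    └── test_llm.py
```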
Write test code covering the various cases of the code implemented above, and submit the code once the tests pass.
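As an illustration, a provider credential test might look like the sketch below; the import paths and the error class are assumptions that need to be aligned with the actual project:

```python
import os

import pytest

from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.anthropic.anthropic import AnthropicProvider


def test_validate_provider_credentials():
    provider = AnthropicProvider()

    # Empty credentials should be rejected.
    with pytest.raises(CredentialsValidateFailedError):
        provider.validate_provider_credentials(credentials={})

    # Valid credentials taken from the environment should pass.
    provider.validate_provider_credentials(
        credentials={"anthropic_api_key": os.environ.get("ANTHROPIC_API_KEY")}
    )
```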
If the connected provider offers customizable models, such as OpenAI, which provides fine-tuned models, custom model integration also needs to be added, with OpenAI serving as a reference example.
You can also refer to the YAML configuration files in the directories of other providers under the `model_providers` directory.
Providers need to inherit from the `__base.model_provider.ModelProvider` base class and implement the `validate_provider_credentials` method to validate the provider's unified credentials. You can refer to the `AnthropicProvider` example above.