Private Deployment of Ollama + DeepSeek + Dify: Build Your Own AI Assistant


Overview

DeepSeek is an innovative open-source large language model (LLM) that brings a revolutionary experience to AI-powered conversations with its advanced algorithmic architecture and reflective reasoning capabilities. By deploying it privately, you gain full control over data security and system configurations while maintaining flexibility in your deployment strategy.

Dify, an open-source AI application development platform, offers a complete private deployment solution. By seamlessly integrating a locally deployed DeepSeek model into the Dify platform, enterprises can build powerful AI applications within their own infrastructure while ensuring data privacy.

Advantages of Private Deployment:

  • Superior Performance: Delivers a conversational experience comparable to commercial models.

  • Isolated Environment: Runs entirely offline, eliminating data leakage risks.

  • Full Data Control: Retains complete ownership of data assets, ensuring compliance.


Prerequisites

Hardware Requirements:

  • CPU: ≥ 2 Cores

  • RAM/GPU Memory: ≥ 16 GiB (Recommended)

Software Requirements:

  • Docker

  • Docker Compose

  • Ollama

  • Dify Community Edition


Deployment Steps

1. Install Ollama

Ollama is a cross-platform LLM management client (macOS, Windows, Linux) that enables seamless deployment of large language models like DeepSeek, Llama, and Mistral. Ollama provides a one-click model deployment solution, ensuring that all data remains stored locally for complete security and privacy.

Visit Ollama's official website and follow the installation instructions for your platform. After installation, verify it by running the following command:

➜  ~ ollama -v
ollama version is 0.5.5

Select an appropriate DeepSeek model size based on your available hardware. A 7B model is recommended for initial installation.

Run the following command to install the DeepSeek R1 model:

ollama run deepseek-r1:7b
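
Once the model is pulled, you can optionally confirm it responds outside of Dify by calling Ollama's local REST API. A minimal check with curl, assuming Ollama is listening on its default port 11434:

curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:7b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

A JSON response containing a model message confirms that Ollama is serving deepseek-r1:7b correctly.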

2. Install Dify Community Edition

Clone the Dify GitHub repository and follow the installation process:

git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env
docker compose up -d  # Use `docker-compose up -d` if running Docker Compose V1

After running the command, you should see all containers running with proper port mappings. For detailed instructions, refer to Deploy with Docker Compose.

Dify Community Edition runs on port 80 by default. You can access your private Dify platform at: http://your_server_ip
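
To re-check the status at any later time, docker compose ps lists each service together with its state and port mappings (run it from the dify/docker directory):

docker compose ps

Every service should report an Up state before you proceed.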

3. Integrate DeepSeek with Dify

Go to Profile → Settings → Model Providers in the Dify platform. Select Ollama and click Add Model.

Note: The “DeepSeek” option in Model Providers refers to the online API service, whereas the Ollama option is used for a locally deployed DeepSeek model.

Configure the Model:

• Model Name: Enter the deployed model name, e.g., deepseek-r1:7b.

• Base URL: Set the Ollama client's local service URL, typically http://your_server_ip:11434. If you encounter connection issues, please refer to the FAQ below.

• Other settings: Keep the default values. According to the DeepSeek model specifications, the max token length is 32,768.
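
Before saving, it can help to confirm that the Base URL actually reaches a running Ollama instance. A quick sanity check, assuming the default port (if Dify runs in Docker, see the FAQ below, since localhost inside a container does not point at the host):

curl http://your_server_ip:11434/api/tags

The /api/tags endpoint lists the models Ollama has pulled; deepseek-r1:7b should appear in the response.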

Build AI Applications

DeepSeek AI Chatbot (Simple Application)

  1. On the Dify homepage, click Create Blank App, select Chatbot, and give it a name.

  2. Select the deepseek-r1:7b model under Ollama in the Model Provider section.

  3. Enter a message in the chat preview to verify the model’s response. If it replies correctly, the chatbot is online.

  4. Click the Publish button to obtain a shareable link or embed the chatbot into other websites; a sample API call is shown below.
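
A published app can also be called programmatically through Dify's Service API. A minimal sketch with curl (the API key below is a placeholder; create a real one on the app's API Access page):

curl -X POST 'http://your_server_ip/v1/chat-messages' \
  -H 'Authorization: Bearer app-xxxxxxxxxxxx' \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {},
    "query": "Hello, DeepSeek!",
    "response_mode": "blocking",
    "user": "demo-user"
  }'

The response_mode can be set to streaming instead for token-by-token output, which is usually preferable for chat UIs.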

DeepSeek AI Chatflow / Workflow (Advanced Application)

Chatflow / Workflow applications enable the creation of more complex AI solutions, such as document recognition, image processing, and speech recognition. For more details, please check the Workflow Documentation.

  1. Click Create Blank App, then select Chatflow or Workflow, and name the application.

  2. Add an LLM node, select the deepseek-r1:7b model under Ollama, and insert the {{#sys.query#}} variable into the system prompt to connect it to the initial node (a sample system prompt follows this list). If you encounter any API issues, you can handle them via Load Balancing or the Error Handling node.

  3. Add an End node to complete the configuration. Test the workflow by entering a query. If the response is correct, the setup is complete.
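
For reference, the LLM node's system prompt can be as simple as the following sketch; the exact wording is up to you, as long as the {{#sys.query#}} variable is included:

You are a helpful assistant. Answer the user's question:
{{#sys.query#}}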

FAQ

1. Connection Errors When Using Docker

If running Dify and Ollama inside Docker results in the following error:

httpconnectionpool(host=127.0.0.1, port=11434): max retries exceeded with url: /api/chat
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8562812c20>:
failed to establish a new connection: [Errno 111] Connection refused'))

Cause:

Ollama is not accessible from inside the Dify Docker containers, because localhost there refers to the container itself rather than the host machine running Ollama.

Solution:

Setting environment variables on Mac:

If Ollama is run as a macOS application, environment variables should be set using launchctl:

  1. For each environment variable, call launchctl setenv.

    launchctl setenv OLLAMA_HOST "0.0.0.0"
  2. Restart the Ollama application.

  3. If the above steps are ineffective, use the following method instead:

    The issue lies within Docker itself: to reach the Docker host from inside a container, you should connect to host.docker.internal. Replacing localhost with host.docker.internal in the Base URL will therefore make it work:

    http://host.docker.internal:11434

Setting environment variables on Linux:

If Ollama is run as a systemd service, environment variables should be set using systemctl:

  1. Edit the systemd service by calling systemctl edit ollama.service. This will open an editor.

  2. For each environment variable, add an Environment line under the [Service] section:

    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
  3. Save and exit.

  4. Reload systemd and restart Ollama:

    systemctl daemon-reload
    systemctl restart ollama
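
Note that host.docker.internal resolves automatically on Docker Desktop (macOS and Windows) but not on a default Linux Docker Engine. A common workaround, assuming you are editing Dify's docker-compose.yaml, is to map the name to the host gateway for the api service:

services:
  api:
    extra_hosts:
      - "host.docker.internal:host-gateway"

After adding the mapping, recreate the containers with docker compose up -d for the change to take effect.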

Setting environment variables on Windows:

On Windows, Ollama inherits your user and system environment variables.

  1. First, quit Ollama by clicking its icon in the taskbar.

  2. Edit system environment variables from the control panel.

  3. Edit or create new variable(s) for your user account, such as OLLAMA_HOST and OLLAMA_MODELS.

  4. Click OK/Apply to save.

  5. Run ollama from a new terminal window.

2. How to Modify the Address and Port of Ollama Service?

Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the OLLAMA_HOST environment variable.
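
For example, to make Ollama listen on all interfaces and a non-default port for a single run (the port 11435 here is purely illustrative):

OLLAMA_HOST=0.0.0.0:11435 ollama serve

For a persistent change, set OLLAMA_HOST using the platform-specific methods from the previous answer, and update the Base URL in Dify's model settings to match.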
