An Agent Strategy Plugin equips an LLM with a reasoning and decision-making strategy, including selecting and invoking tools and handling their results, so the system can solve problems more autonomously.
Below, you'll see how to develop a plugin that supports Function Calling to automatically fetch the current time.
Prerequisites
Dify plugin scaffolding tool
Python environment (version ≥ 3.12)
For details on preparing the plugin development tool, see the documentation on initializing the development tools.
Tip: Run dify version in your terminal to confirm that the scaffolding tool is installed.
1. Initializing the Plugin Template
Run the following command to create a development template for your Agent plugin:
dify plugin init
Follow the on-screen prompts and refer to the sample comments for guidance.
➜ ./dify-plugin-darwin-arm64 plugin init
Edit profile of the plugin
Plugin name (press Enter to next step): # Enter the plugin name
Author (press Enter to next step): # Enter the plugin author
Description (press Enter to next step): # Enter the plugin description
---
Select the language you want to use for plugin development, and press Enter to continue,
BTW, you need Python 3.12+ to develop the Plugin if you choose Python.
-> python # Select Python environment
go (not supported yet)
---
Based on the ability you want to extend, we have divided the Plugin into four types: Tool, Model, Extension, and Agent Strategy.
- Tool: It's a tool provider, but not only limited to tools, you can implement an endpoint there, for example, you need both Sending Message and Receiving Message if you are building a Discord Bot.
- Model: Just a model provider, extending others is not allowed.
- Extension: Other times, you may only need a simple http service to extend the functionalities, Extension is the right choice for you.
- Agent Strategy: Implement your own logics here, just by focusing on Agent itself
What's more, we have provided the template for you, you can choose one of them below:
tool
-> agent-strategy # Select Agent strategy template
llm
text-embedding
---
Configure the permissions of the plugin, use up and down to navigate, tab to select, after selection, press enter to finish
Backwards Invocation:
Tools:
Enabled: [✔] You can invoke tools inside Dify if it's enabled # Enabled by default
Models:
Enabled: [✔] You can invoke models inside Dify if it's enabled # Enabled by default
LLM: [✔] You can invoke LLM models inside Dify if it's enabled # Enabled by default
→ Text Embedding: [✘] You can invoke text embedding models inside Dify if it's enabled
Rerank: [✘] You can invoke rerank models inside Dify if it's enabled
TTS: [✘] You can invoke TTS models inside Dify if it's enabled
Speech2Text: [✘] You can invoke speech2text models inside Dify if it's enabled
Moderation: [✘] You can invoke moderation models inside Dify if it's enabled
Apps:
Enabled: [✘] Ability to invoke apps like BasicChat/ChatFlow/Agent/Workflow etc.
Resources:
Storage:
Enabled: [✘] Persistence storage for the plugin
Size: N/A The maximum size of the storage
Endpoints:
Enabled: [✘] Ability to register endpoints
After initialization, you'll get a folder containing all the resources needed for plugin development. Familiarizing yourself with the overall structure of an Agent Strategy Plugin will streamline the development process:
All key functionality for this plugin is in the strategies/ directory.
2. Developing the Plugin
Agent Strategy Plugin development revolves around two files:
Plugin Declaration: strategies/basic_agent.yaml
Plugin Implementation: strategies/basic_agent.py
2.1 Defining Parameters
To build an Agent plugin, start by specifying the necessary parameters in strategies/basic_agent.yaml. These parameters define the plugin's core features, such as calling an LLM or using tools.
We recommend including the following four parameters first:
model: The large language model to call (e.g., GPT-4, GPT-4o-mini).
tools: A list of tools that enhance your plugin's functionality.
query: The user input or prompt content sent to the model.
maximum_iterations: The maximum iteration count to prevent excessive computation.
Example Code:
identity:
  name: basic_agent # the name of the agent_strategy
  author: novice # the author of the agent_strategy
  label:
    en_US: BasicAgent # the English label of the agent_strategy
description:
  en_US: BasicAgent # the English description of the agent_strategy
parameters:
  - name: model # the name of the model parameter
    type: model-selector # model-type
    scope: tool-call&llm # the scope of the parameter
    required: true
    label:
      en_US: Model
      zh_Hans: 模型
      pt_BR: Model
  - name: tools # the name of the tools parameter
    type: array[tools] # the type of the tools parameter
    required: true
    label:
      en_US: Tools list
      zh_Hans: 工具列表
      pt_BR: Tools list
  - name: query # the name of the query parameter
    type: string # the type of the query parameter
    required: true
    label:
      en_US: Query
      zh_Hans: 查询
      pt_BR: Query
  - name: maximum_iterations
    type: number
    required: false
    default: 5
    label:
      en_US: Maximum Iterations
      zh_Hans: 最大迭代次数
      pt_BR: Maximum Iterations
    max: 50 # if you set both max and min values, the parameter is displayed as a slider
    min: 1
extra:
  python:
    source: strategies/basic_agent.py
Once you've configured these parameters, the plugin will automatically generate a user-friendly interface so you can easily manage them:
2.2 Retrieving Parameters and Execution
After users fill out these basic fields, your plugin needs to process the submitted parameters. In strategies/basic_agent.py, define a parameter class for the Agent, then retrieve and apply these parameters in your logic.
Verify incoming parameters:
from dify_plugin.entities.agent import AgentInvokeMessage
from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity
from pydantic import BaseModel
class BasicParams(BaseModel):
    maximum_iterations: int
    model: AgentModelConfig
    tools: list[ToolEntity]
    query: str
After parsing the parameters, the strategy executes its business logic inside the strategy class's _invoke method.
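A minimal sketch of that entry point (the class name and method signature mirror the fuller example in section 2.4; the ellipsis marks where the logic from the following sections goes):
from collections.abc import Generator
from typing import Any


class BasicAgentAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
        # Validate the raw parameter dict against the BasicParams model defined above.
        params = BasicParams(**parameters)
        # Model and tool invocation (see the following sections) go here.
        ...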
2.3 Invoking the Model
In an Agent Strategy Plugin, invoking the model is central to the workflow. You can invoke an LLM with session.model.llm.invoke() from the SDK to handle text generation, dialogue, and so on.
If you want the LLM to use tools, make sure it outputs structured parameters that match each tool's interface; in other words, based on the user's instructions, the LLM must produce input arguments the tool can accept.
For the complete implementation, refer to the example code for model invocation.
Putting it together: after the user submits a query, the Agent strategy plugin calls the LLM, builds the tool-call parameters from the model's output, and lets the model dispatch the integrated tools to complete complex tasks efficiently.
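As a reference, a minimal model call might look like the sketch below. It assumes the LLMModelConfig and UserPromptMessage entities from the dify_plugin SDK and the _convert_tool_to_prompt_message_tool helper that also appears in section 2.4's example; the full version, including history messages, is shown in section 2.4:
response = self.session.model.llm.invoke(
    model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
    prompt_messages=[UserPromptMessage(content=params.query)],
    tools=[
        # Expose each configured tool's schema so the LLM can emit matching arguments.
        self._convert_tool_to_prompt_message_tool(tool)
        for tool in params.tools
    ],
    stream=True,
)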
2.4 Adding Memory to the Model
Adding Memory to your Agent plugin allows the model to remember previous conversations, making interactions more natural and effective. With memory enabled, the model can maintain context and provide more relevant responses.
Steps:
Configure Memory Functionality
Add the history-messages feature to the Agent plugin's YAML declaration file strategies/basic_agent.yaml:
identity:
  name: basic_agent # Agent strategy name
  author: novice # Author
  label:
    en_US: BasicAgent # English label
description:
  en_US: BasicAgent # English description
features:
  - history-messages # Enable the history messages feature
...
Enable Memory Settings
After modifying the plugin configuration and restarting, you will see the Memory toggle. Click the toggle button on the right to enable memory.
Once enabled, you can adjust the memory window size using the slider, which determines how many previous conversation turns the model can “remember”.
Debug History Messages
Check the history messages your strategy receives, for example by logging params.model.history_prompt_messages (a sketch follows the example output below). On the first turn the history is empty; after one exchange it contains the previous turns:
history_messages: []
history_messages: [UserPromptMessage(role=<PromptMessageRole.USER: 'user'>, content='hello, my name is novice', name=None), AssistantPromptMessage(role=<PromptMessageRole.ASSISTANT: 'assistant'>, content='Hello, Novice! How can I assist you today?', name=None, tool_calls=[])]
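One way to produce output like the above is a temporary print inside _invoke (a rough sketch; in production you would use the SDK's log messages from section 2.6 instead):
# Print the history messages passed in by Dify to confirm they grow across turns.
print(f"history_messages: {params.model.history_prompt_messages}")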
Integrate History Messages into Model Calls
Update the model call to incorporate conversation history with the current query:
from collections.abc import Generator
from typing import Any

# Additional entities used below; the module paths follow the dify_plugin SDK
# and may differ slightly between SDK versions.
from dify_plugin.entities.model.llm import LLMModelConfig, LLMResult, LLMResultChunk
from dify_plugin.entities.model.message import UserPromptMessage


class BasicAgentAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
        params = BasicParams(**parameters)
        chunks: Generator[LLMResultChunk, None, None] | LLMResult = (
            self.session.model.llm.invoke(
                model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
                # Add history messages
                prompt_messages=params.model.history_prompt_messages
                + [UserPromptMessage(content=params.query)],
                tools=[
                    self._convert_tool_to_prompt_message_tool(tool)
                    for tool in params.tools
                ],
                stop=params.model.completion_params.get("stop", [])
                if params.model.completion_params
                else [],
                stream=True,
            )
        )
        ...
Check the Outcome
After implementing Memory, the model can respond based on the conversation history. In the example below, it successfully remembers the user's name mentioned in an earlier turn.
2.5 Handling a Tool
After specifying the tool parameters, the Agent Strategy Plugin must actually call these tools. Use session.tool.invoke() to make those requests.
If you'd like the LLM itself to generate the parameters needed for tool calls, you can do so by combining the model's output with your tool-calling code.
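The snippet that follows assumes a tool_calls list has already been collected from the model's streamed response. One way to build it, sketched here on the assumption that each LLMResultChunk exposes delta.message.tool_calls entries carrying an id, a function name, and JSON-encoded arguments:
import json

tool_calls: list[tuple[str, str, dict]] = []
for chunk in chunks:
    for tool_call in chunk.delta.message.tool_calls:
        # Arguments arrive as a JSON string; decode them into a dict for the tool.
        args = (
            json.loads(tool_call.function.arguments)
            if tool_call.function.arguments
            else {}
        )
        tool_calls.append((tool_call.id, tool_call.function.name, args))
With the (id, name, arguments) triples in hand, look up each tool and dispatch the call: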
# Look up each requested tool by name and invoke it with the arguments the LLM produced.
tool_instances = (
    {tool.identity.name: tool for tool in params.tools} if params.tools else {}
)
for tool_call_id, tool_call_name, tool_call_args in tool_calls:
    tool_instance = tool_instances[tool_call_name]
    self.session.tool.invoke(
        provider_type=ToolProviderType.BUILT_IN,
        provider=tool_instance.identity.provider,
        tool_name=tool_instance.identity.name,
        parameters={**tool_instance.runtime_parameters, **tool_call_args},
    )
With this in place, your Agent Strategy Plugin can automatically perform Function Calling—for instance, retrieving the current time.
2.6 Creating Logs
Complex tasks in an Agent Strategy Plugin often take multiple steps, so it's important to track each step's results, analyze the decision process, and optimize the strategy. Using create_log_message and finish_log_message from the SDK, you can log the state before and after each call in real time, which makes problems easier to diagnose.
For example:
Log a "starting model call" message before calling the model, clarifying the task's execution progress.
Log a "call succeeded" message once the model responds, ensuring the model's output can be traced end to end.
3. Debugging the Plugin
After finalizing the plugin's declaration file and implementation code, verify that it runs correctly. Dify offers remote debugging: go to the plugin management page in Dify to obtain your debug key and remote server address, fill them into the plugin's .env file, then run python -m main in the plugin directory to restart the plugin.
You'll see the plugin installed in your Workspace, and team members can also access it.
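A rough sketch of the .env used for remote debugging, assuming the variable names from the plugin template's .env.example (they may differ between SDK versions):
INSTALL_METHOD=remote
REMOTE_INSTALL_HOST=debug.dify.ai    # or the address of your self-hosted Dify server
REMOTE_INSTALL_PORT=5003
REMOTE_INSTALL_KEY=your-debug-key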
Packaging the Plugin (Optional)
Once everything works, you can package your plugin by running:
# Replace ./basic_agent/ with your actual plugin project path.
dify plugin package ./basic_agent/
A file named basic_agent.difypkg appears in your current folder; this is your final plugin package.
Congratulations! You've fully developed, tested, and packaged your Agent Strategy Plugin.
Publishing the Plugin (Optional)
You can now upload the package to the Dify plugin repository. Before doing so, make sure it meets the plugin publishing requirements. Once approved, your code merges into the main branch, and the plugin automatically goes live on the Dify Marketplace.
Further Exploration
Complex tasks often need multiple rounds of thinking and tool calls, typically repeating model invoke → tool use until the task is complete or the maximum iteration limit is reached. Managing prompts well is crucial in this process. Check out the Function Calling strategy for a standardized approach to letting models call external tools and handle their outputs.
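To extend the basic strategy into such a multi-round agent, the control flow looks roughly like the sketch below. It reuses the pieces from sections 2.3 to 2.5, relies on a hypothetical _extract_tool_calls helper for the parsing shown in section 2.5, assumes create_text_message as the SDK helper for emitting text output, and simplifies message bookkeeping by feeding tool results back as plain user messages (official strategies use dedicated tool message entities):
# Inside _invoke: sketch of the model-invoke → tool-use loop, capped by maximum_iterations.
prompt_messages = list(params.model.history_prompt_messages) + [
    UserPromptMessage(content=params.query)
]
tool_instances = {tool.identity.name: tool for tool in params.tools}

for _ in range(params.maximum_iterations):
    result = self.session.model.llm.invoke(
        model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
        prompt_messages=prompt_messages,
        tools=[self._convert_tool_to_prompt_message_tool(t) for t in params.tools],
        stream=False,
    )
    tool_calls = self._extract_tool_calls(result)  # hypothetical helper, see section 2.5
    if not tool_calls:
        # No tool requests left: the model has produced its final answer.
        yield self.create_text_message(result.message.content)
        break
    for tool_call_id, tool_call_name, tool_call_args in tool_calls:
        tool_instance = tool_instances[tool_call_name]
        tool_result = self.session.tool.invoke(
            provider_type=ToolProviderType.BUILT_IN,
            provider=tool_instance.identity.provider,
            tool_name=tool_instance.identity.name,
            parameters={**tool_instance.runtime_parameters, **tool_call_args},
        )
        # Feed the tool output back so the next round can reason over it
        # (simplified: depending on the SDK version the result may be streamed
        # messages that you would join into text first).
        prompt_messages.append(UserPromptMessage(content=str(tool_result)))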