An Agent Strategy Plugin helps an LLM carry out tasks like reasoning or decision-making, including choosing and calling tools, as well as handling results. This allows the system to address problems more autonomously.
Below, you'll see how to develop a plugin that supports Function Calling to automatically fetch the current time.
Tip: Run dify version in your terminal to confirm that the scaffolding tool is installed.
1. Initializing the Plugin Template
Run the following command to create a development template for your Agent plugin:
dify plugin init
Follow the on-screen prompts and refer to the sample comments for guidance.
```
➜ ./dify-plugin-darwin-arm64 plugin init
Edit profile of the plugin
Plugin name (press Enter to next step): # Enter the plugin name
Author (press Enter to next step): # Enter the plugin author
Description (press Enter to next step): # Enter the plugin description
---
Select the language you want to use for plugin development, and press Enter to continue.
BTW, you need Python 3.12+ to develop the Plugin if you choose Python.
-> python  # Select the Python environment
   go (not supported yet)
---
Based on the ability you want to extend, we have divided the Plugin into four types: Tool, Model, Extension, and Agent Strategy.
- Tool: It's a tool provider, but not only limited to tools, you can implement an endpoint there, for example, you need both Sending Message and Receiving Message if you are building a Discord Bot.
- Model: Just a model provider, extending others is not allowed.
- Extension: Other times, you may only need a simple http service to extend the functionalities, Extension is the right choice for you.
- Agent Strategy: Implement your own logics here, just by focusing on Agent itself.
What's more, we have provided the template for you, you can choose one of them below:
   tool
-> agent-strategy  # Select the Agent strategy template
   llm
   text-embedding
---
Configure the permissions of the plugin, use up and down to navigate, tab to select, after selection, press enter to finish.
Backwards Invocation:
  Tools:
    Enabled: [✔] You can invoke tools inside Dify if it's enabled   # Enabled by default
  Models:
    Enabled: [✔] You can invoke models inside Dify if it's enabled  # Enabled by default
    LLM: [✔] You can invoke LLM models inside Dify if it's enabled  # Enabled by default
    Text Embedding: [✘] You can invoke text embedding models inside Dify if it's enabled
    Rerank: [✘] You can invoke rerank models inside Dify if it's enabled
    TTS: [✘] You can invoke TTS models inside Dify if it's enabled
    Speech2Text: [✘] You can invoke speech2text models inside Dify if it's enabled
    Moderation: [✘] You can invoke moderation models inside Dify if it's enabled
  Apps:
    Enabled: [✘] Ability to invoke apps like BasicChat/ChatFlow/Agent/Workflow etc.
Resources:
  Storage:
    Enabled: [✘] Persistence storage for the plugin
    Size: N/A  # The maximum size of the storage
  Endpoints:
    Enabled: [✘] Ability to register endpoints
```
After initialization, you'll get a folder containing all the resources needed for plugin development. Familiarizing yourself with the overall structure of an Agent Strategy Plugin will streamline the development process:
All key functionality for this plugin is in the strategies/ directory.
2. Developing the Plugin
Agent Strategy Plugin development revolves around two files:
Plugin Declaration: strategies/basic_agent.yaml
Plugin Implementation: strategies/basic_agent.py
2.1 Defining Parameters
To build an Agent plugin, start by specifying the necessary parameters in strategies/basic_agent.yaml. These parameters define the plugin's core features, such as calling an LLM or using tools.
We recommend including the following four parameters first:
model: The large language model to call (e.g., GPT-4, GPT-4o-mini).
tools: A list of tools that enhance your plugin's functionality.
query: The user input or prompt content sent to the model.
maximum_iterations: The maximum iteration count to prevent excessive computation.
Example code: the full strategies/basic_agent.yaml declaration appears in the sample code at the end of this guide.
Once you've configured these parameters, the plugin will automatically generate a user-friendly interface so you can easily manage them:
Agent Strategy Plugin UI
2.2 Retrieving Parameters and Execution
After users fill out these basic fields, your plugin needs to process the submitted parameters. In strategies/basic_agent.py, define a parameter class for the Agent, then retrieve and apply these parameters in your logic.
Verify incoming parameters:
After getting the parameters, the specific business logic is executed:
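As a rough illustration of the validate-then-execute flow, here is a stdlib-only sketch; it uses a plain dataclass instead of the SDK's pydantic models, and every name in it is illustrative:

```python
from dataclasses import dataclass, field


@dataclass
class BasicParams:
    """Mirrors the four parameters declared in strategies/basic_agent.yaml."""
    model: dict
    query: str
    tools: list = field(default_factory=list)
    maximum_iterations: int = 5


def parse_params(raw: dict) -> BasicParams:
    """Validate user-submitted parameters before running the agent logic."""
    if not raw.get("query"):
        raise ValueError("query is required")
    iterations = int(raw.get("maximum_iterations", 5))
    if not 1 <= iterations <= 50:  # matches the min/max in the YAML declaration
        raise ValueError("maximum_iterations must be between 1 and 50")
    return BasicParams(
        model=raw.get("model", {}),
        query=raw["query"],
        tools=raw.get("tools", []),
        maximum_iterations=iterations,
    )
```

In the real plugin the pydantic parameter class performs this validation for you; the sketch only makes the checks explicit.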
2.3 Invoking the Model
In an Agent Strategy Plugin, invoking the model is central to the workflow. You can invoke an LLM efficiently using session.model.llm.invoke() from the SDK, handling text generation, dialogue, and so forth.
If you want the LLM to handle tools, ensure it outputs structured parameters that match a tool's interface. In other words, based on the user's instructions, the LLM must produce input arguments the tool can accept.
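For example, a function-calling LLM usually returns a tool name plus a JSON-encoded argument string, which the plugin must decode before the tool can be invoked. A minimal sketch, assuming an OpenAI-style tool-call shape (not the SDK's exact type):

```python
import json


def extract_tool_call(tool_call: dict) -> tuple[str, dict]:
    """Decode an OpenAI-style function call into (tool_name, arguments)."""
    name = tool_call["function"]["name"]
    raw_args = tool_call["function"]["arguments"]  # JSON string emitted by the model
    try:
        args = json.loads(raw_args) if raw_args else {}
    except json.JSONDecodeError:
        args = {}  # fall back to no arguments if the model emitted invalid JSON
    return name, args
```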
Construct the following parameters:
model
prompt_messages
tools
stop
stream
Example code for method definition:
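As a condensed sketch of how those five parameters come together — the session is passed in explicitly here so the shape can be exercised with a stub, and the message dicts are simplified stand-ins for the SDK's prompt-message types; the full SDK version appears in the sample code at the end of this guide:

```python
def invoke_llm(session, model_config, history, query, prompt_tools, stop=None):
    """Call the LLM with the five parameters listed above."""
    return session.model.llm.invoke(
        model_config=model_config,                    # model
        prompt_messages=history + [{"role": "user", "content": query}],
        tools=prompt_tools,                           # tools in prompt-message form
        stop=stop or [],                              # stop sequences, if any
        stream=True,                                  # stream chunks as they arrive
    )
```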
To view the complete functionality implementation, please refer to the Example Code for model invocation.
This code achieves the following functionality: after a user inputs a command, the Agent strategy plugin automatically calls the LLM, constructs the necessary parameters for tool invocation based on the generated results, and enables the model to flexibly dispatch integrated tools to efficiently complete complex tasks.
Request parameters for generating tools
2.4 Adding Memory to the Model
Adding Memory to your Agent plugin allows the model to remember previous conversations, making interactions more natural and effective. With memory enabled, the model can maintain context and provide more relevant responses.
Steps:
Configure Memory Functionality
Add the history-messages feature to the Agent plugin's YAML configuration file strategies/basic_agent.yaml:
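A sketch of the addition, assuming the features key used by Dify's agent-strategy templates:

```yaml
features:
  - history-messages
```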
Enable Memory Settings
After modifying the plugin configuration and restarting, you will see the Memory toggle. Click the toggle button on the right to enable memory.
Memory
Once enabled, you can adjust the memory window size using the slider, which determines how many previous conversation turns the model can “remember”.
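Conceptually, the window just caps how much history is replayed to the model. A stdlib-only sketch of the idea (the real trimming happens inside Dify; names here are illustrative):

```python
def trim_history(messages: list[dict], window_turns: int) -> list[dict]:
    """Keep only the last `window_turns` exchanges.

    Each turn is assumed to be one user message plus one assistant reply.
    """
    max_messages = window_turns * 2
    return messages[-max_messages:] if max_messages > 0 else []
```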
Debug History Messages
Add the following code to check the history messages:
History messages
The console will display an output similar to:
Integrate History Messages into Model Calls
Update the model call to incorporate conversation history with the current query:
Check the Outcome
After implementing Memory, the model can respond based on conversation history. In the example below, the model successfully remembers the user’s name mentioned in previous conversation.
Outcome
2.5 Handling a Tool
After specifying the tool parameters, the Agent Strategy Plugin must actually call these tools. Use session.tool.invoke() to make those requests.
Construct the following parameters:
provider
tool_name
parameters
Example code for method definition:
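A sketch of the call, written as a helper that takes the session explicitly so it can be exercised with a stub; the keyword names follow the provider/tool_name/parameters list above, and any further arguments the SDK accepts are omitted here:

```python
def call_tool(session, provider: str, tool_name: str, parameters: dict) -> list:
    """Invoke a Dify tool through the plugin session and collect its messages."""
    return list(
        session.tool.invoke(
            provider=provider,
            tool_name=tool_name,
            parameters=parameters,
        )
    )
```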
If you'd like the LLM itself to generate the parameters needed for tool calls, you can do so by combining the model's output with your tool-calling code.
With this in place, your Agent Strategy Plugin can automatically perform Function Calling—for instance, retrieving the current time.
Tool Invocation
2.6 Creating Logs
Often, multiple steps are necessary to complete a complex task in an Agent Strategy Plugin. It's crucial for developers to track each step's results, analyze the decision process, and optimize strategy. Using create_log_message and finish_log_message from the SDK, you can log real-time states before and after calls, aiding in quick problem diagnosis.
For example:
Log a "starting model call" message before calling the model, clarifying the task's execution progress.
Log a "call succeeded" message once the model responds, ensuring the model's output can be traced end to end.
When the setup is complete, the workflow log will output the execution results:
Agent Output execution results
If multiple rounds of logs occur, you can structure them hierarchically by setting a parent parameter in your log calls, making them easier to follow.
Reference method:
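A sketch of nested logging, written against a generic agent object and assuming the create_log_message / finish_log_message helpers accept roughly these keyword arguments (the SDK's exact signatures may differ):

```python
def run_round(agent, round_index: int):
    """Emit a parent log for the round, then a nested log for the model call."""
    round_log = agent.create_log_message(
        label=f"ROUND {round_index}",
        data={},
    )
    yield round_log
    model_log = agent.create_log_message(
        label="CALL llm",
        data={},
        parent=round_log,  # nest this entry under the round's log
    )
    yield model_log
    # ... invoke the model here ...
    yield agent.finish_log_message(log=model_log, data={"output": "done"})
```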
Sample code for agent-plugin functions
Invoke Model
The following code demonstrates how to give the Agent strategy plugin the ability to invoke the model:
Handle Tools
The following code shows how to implement model calls for the Agent strategy plugin and send canonicalized requests to the tool.
Example of a complete function code
A complete sample plugin that covers invoking a model, handling tools, and outputting multiple rounds of logs:
3. Debugging the Plugin
After finalizing the plugin's declaration file and implementation code, run python -m main in the plugin directory to restart it. Next, confirm the plugin runs correctly. Dify offers remote debugging—go to "Plugin Management" to obtain your debug key and remote server address.
Back in your plugin project, copy .env.example to .env and insert the relevant remote server and debug key info.
Then run python -m main again to start the plugin and establish the remote debugging connection.
You'll see the plugin installed in your Workspace, and team members can also access it.
Browser Plugins
Packaging the Plugin (Optional)
Once everything works, you can package your plugin with the dify plugin package command, pointing it at your project directory.
A file named google.difypkg (for example) appears in your current folder—this is your final plugin package.
Congratulations! You've fully developed, tested, and packaged your Agent Strategy Plugin.
Complex tasks often need multiple rounds of thinking and tool calls, typically repeating model invoke → tool use until the task ends or a maximum iteration limit is reached. Managing prompts effectively is crucial in this process. Check out the complete Function Calling implementation for a standardized approach to letting models call external tools and handle their outputs.
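Stripped of SDK details, the loop described above can be sketched in plain Python, with stub callables standing in for the model and tool calls (all names illustrative):

```python
def agent_loop(call_model, call_tool, query: str, maximum_iterations: int = 5):
    """Repeat model invoke -> tool use until the model stops requesting tools."""
    messages = [{"role": "user", "content": query}]
    for _ in range(maximum_iterations):
        reply = call_model(messages)
        messages.append(reply)
        tool_call = reply.get("tool_call")
        if tool_call is None:  # the model answered directly; we're done
            return reply["content"]
        result = call_tool(tool_call["name"], tool_call.get("arguments", {}))
        messages.append({"role": "tool", "content": result})
    return None  # iteration limit reached without a final answer
```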
strategies/basic_agent.yaml:

```yaml
identity:
  name: basic_agent # the name of the agent_strategy
  author: novice # the author of the agent_strategy
  label:
    en_US: BasicAgent # the English label of the agent_strategy
description:
  en_US: BasicAgent # the English description of the agent_strategy
parameters:
  - name: model # the name of the model parameter
    type: model-selector # the parameter type
    scope: tool-call&llm # the scope of the parameter
    required: true
    label:
      en_US: Model
      zh_Hans: 模型
      pt_BR: Model
  - name: tools # the name of the tools parameter
    type: array[tools] # the type of the tools parameter
    required: true
    label:
      en_US: Tools list
      zh_Hans: 工具列表
      pt_BR: Tools list
  - name: query # the name of the query parameter
    type: string # the type of the query parameter
    required: true
    label:
      en_US: Query
      zh_Hans: 查询
      pt_BR: Query
  - name: maximum_iterations
    type: number
    required: false
    default: 5
    label:
      en_US: Maximum Iterations
      zh_Hans: 最大迭代次数
      pt_BR: Maximum Iterations
    max: 50 # if you set max and min values, the parameter will be displayed as a slider
    min: 1
extra:
  python:
    source: strategies/basic_agent.py
```
strategies/basic_agent.py:

```python
from dify_plugin.entities.agent import AgentInvokeMessage
from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity
from pydantic import BaseModel


class BasicParams(BaseModel):
    maximum_iterations: int
    model: AgentModelConfig
    tools: list[ToolEntity]
    query: str
```
```
history_messages: []
history_messages: [UserPromptMessage(role=<PromptMessageRole.USER: 'user'>, content='hello, my name is novice', name=None), AssistantPromptMessage(role=<PromptMessageRole.ASSISTANT: 'assistant'>, content='Hello, Novice! How can I assist you today?', name=None, tool_calls=[])]
```
```python
# Continues from the imports and BasicParams definition above
from collections.abc import Generator
from typing import Any

from dify_plugin.entities.model.llm import LLMModelConfig, LLMResult, LLMResultChunk
from dify_plugin.entities.model.message import UserPromptMessage


class BasicAgentAgentStrategy(AgentStrategy):
    def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]:
        params = BasicParams(**parameters)
        chunks: Generator[LLMResultChunk, None, None] | LLMResult = (
            self.session.model.llm.invoke(
                model_config=LLMModelConfig(**params.model.model_dump(mode="json")),
                # Add history messages
                prompt_messages=params.model.history_prompt_messages
                + [UserPromptMessage(content=params.query)],
                tools=[
                    self._convert_tool_to_prompt_message_tool(tool)
                    for tool in params.tools
                ],
                stop=params.model.completion_params.get("stop", [])
                if params.model.completion_params
                else [],
                stream=True,
            )
        )
        ...
```