Model
Request LLM
Entry
self.session.model.llm
Endpoint
def invoke(
    self,
    model_config: LLMModelConfig,
    prompt_messages: list[PromptMessage],
    tools: list[PromptMessageTool] | None = None,
    stop: list[str] | None = None,
    stream: bool = True,
) -> Generator[LLMResultChunk, None, None] | LLMResult:
    pass

Example
Best Practices
Request Summary
Request Rerank
Entry
self.session.model.rerank
Endpoint
Request TTS
Entry
self.session.model.tts
Endpoint
Request Speech2Text
Request Moderation