```python
if hasattr(tool, "callable_function"):
    fn = tool.callable_function
    tool_def: dict[str, Any] = {
        "type": "function",
        "function": {
            "name": fn.name,
        },
    }
    if fn.description:
        tool_def["function"]["description"] = fn.description
    if fn.parameters:
        tool_def["function"]["parameters"] = fn.parameters
    tool_defs.append(tool_def)
else:
    # Generic tool - use id as name
    tool_defs.append({
        "type": "function",
        "function": {"name": tool.id},
    })
```
🔴 Tool serialization references non-existent callable_function attribute on Tool
In _build_tools_param, the code checks hasattr(tool, "callable_function") at llm.py:244, but no Tool subclass in the framework (FunctionTool, RawFunctionTool, ProviderTool) has a callable_function attribute. FunctionTool uses _info (a FunctionToolInfo with .name and .description) and _func. As a result, the if branch at line 244 is dead code: every tool falls through to the else at line 258, which only serializes {"type": "function", "function": {"name": tool.id}}, omitting the function's description and parameters. This makes function calling effectively non-functional since the LLM backend receives no schema information about what the tools do or what arguments they accept.
Prompt for agents
The _build_tools_param method in llm.py references a non-existent `callable_function` attribute on Tool objects. In the livekit agents framework, tool information is accessed differently depending on the tool type:
- FunctionTool (from livekit.agents.llm.tool_context) has:
- `.id` -> returns `self._info.name`
- `._info` -> a `FunctionToolInfo` with `.name` and `.description`
- The function's parameters can be obtained by inspecting the function signature or using the framework's built-in serialization utilities.
- RawFunctionTool has:
- `.id` -> returns `self._info.name`
- `._info` -> a `RawFunctionToolInfo` with `.name` and `.raw_schema` (dict containing name, description, parameters)
The code should check `isinstance(tool, FunctionTool)` or `isinstance(tool, RawFunctionTool)` and access the appropriate info attributes. Look at how other LLM plugins (e.g., livekit-plugins-anthropic, livekit-plugins-google) serialize tools using the `_provider_format` utilities in `livekit.agents.llm._provider_format` for the canonical approach.
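The isinstance-based dispatch described above can be sketched as follows. Note this is a minimal, self-contained sketch: the dataclasses below are hypothetical stand-ins that only mirror the attribute shapes named in this review (`._info.name`, `._info.description`, `._info.raw_schema`); the real `FunctionTool`/`RawFunctionTool` classes live in `livekit.agents.llm.tool_context`, and a real plugin should use the framework's `_provider_format` utilities rather than hand-rolling this.

```python
from dataclasses import dataclass, field
from typing import Any, Optional


# Hypothetical stand-ins mirroring the attribute shapes described in this
# review; the real classes live in livekit.agents.llm.tool_context.
@dataclass
class FunctionToolInfo:
    name: str
    description: Optional[str] = None


@dataclass
class FunctionTool:
    _info: FunctionToolInfo
    # Assumed to have been derived from the function signature upstream.
    parameters: dict[str, Any] = field(default_factory=dict)


@dataclass
class RawFunctionToolInfo:
    name: str
    # Full schema dict: name, description, parameters.
    raw_schema: dict[str, Any] = field(default_factory=dict)


@dataclass
class RawFunctionTool:
    _info: RawFunctionToolInfo


def build_tools_param(tools: list[Any]) -> list[dict[str, Any]]:
    """Serialize tools into OpenAI-style function definitions."""
    tool_defs: list[dict[str, Any]] = []
    for tool in tools:
        if isinstance(tool, RawFunctionTool):
            # raw_schema already carries name/description/parameters.
            tool_defs.append(
                {"type": "function", "function": dict(tool._info.raw_schema)}
            )
        elif isinstance(tool, FunctionTool):
            fn: dict[str, Any] = {"name": tool._info.name}
            if tool._info.description:
                fn["description"] = tool._info.description
            if tool.parameters:
                fn["parameters"] = tool.parameters
            tool_defs.append({"type": "function", "function": fn})
        else:
            # Unknown tool type: fall back to the bare id, as the current
            # else-branch does for everything.
            tool_defs.append(
                {"type": "function", "function": {"name": getattr(tool, "id", "unknown")}}
            )
    return tool_defs
```

The key difference from the patched code: dispatch on the concrete tool type instead of a `hasattr` probe for an attribute no subclass defines, so description and parameters actually reach the backend.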
```python
async with blaze._client.stream(
    "POST",
    url,
    json=messages if not tools_param else body,
```
🔴 Inconsistent JSON request body format between tool and non-tool LLM requests
At llm.py:323, the request body format changes depending on whether tools are present: json=messages if not tools_param else body. Without tools, the HTTP body is a bare JSON array ([{"role":"user","content":"..."}]). With tools, it's a JSON object ({"messages":[...],"tools":[...]}). These are structurally different: a REST API endpoint will expect one format or the other. If the API expects a bare array, the tools-enabled path will fail; if the API expects an object with a "messages" key, the non-tools path will fail.
Suggested change:

```diff
-    json=messages if not tools_param else body,
+    json=body,
```
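The suggested fix assumes `body` is built unconditionally as an object. A minimal sketch of that construction (the helper name `build_request_body` is hypothetical, and it assumes the endpoint expects a `{"messages": ...}` object, as the tools path already sends):

```python
from typing import Any, Optional


def build_request_body(
    messages: list[dict[str, Any]],
    tools_param: Optional[list[dict[str, Any]]] = None,
) -> dict[str, Any]:
    """Always send an object-shaped body; attach "tools" only when present."""
    body: dict[str, Any] = {"messages": messages}
    if tools_param:
        body["tools"] = tools_param
    return body
```

With this shape, both paths serialize identically and `json=body` is always safe to pass to the HTTP client.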