Function calling (also known as tool calling) allows LLMs to request information from external services and APIs during conversations. This extends your voice AI bot's capabilities beyond its training data to access real-time information and perform actions.
Pipeline Integration
Function calling works seamlessly within your existing pipeline structure. The LLM service handles function calls automatically when they're needed:
- User asks a question requiring external data
- LLM recognizes the need and calls appropriate function
- Your function handler executes and returns results
- LLM incorporates results into its response
- Response flows to TTS and user as normal
Understanding Function Calling
Function calling allows your bot to access real-time data and perform actions that aren't part of its training data. For example, you could give your bot the ability to:
- Check current weather conditions
- Look up stock prices
- Query a database
- Control smart home devices
- Schedule appointments
Here's how it works:
- You define functions the LLM can use and register them to the LLM service used in your pipeline
- When needed, the LLM requests a function call
- Your application executes any corresponding functions
- The result is sent back to the LLM
- The LLM uses this information in its response
Implementation
1. Define Functions
Pipecat provides a standardized FunctionSchema that works across all supported LLM providers. This makes it easy to define functions once and use them with any provider.
As a shorthand, you could also bypass specifying a function configuration at all and instead use “direct” functions. Under the hood, these are converted to FunctionSchemas.
Using the Standard Schema (Recommended)
Note that some providers expect the system prompt to be passed as system_instruction in the LLM service's Settings, not as a context message. The ToolsSchema will be automatically converted to the correct format for your LLM provider through adapters.
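For example, a weather-lookup tool might be defined like this (the function name and properties are illustrative):

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema

# Define a function using Pipecat's provider-agnostic schema.
weather_function = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather for a location",
    properties={
        "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA",
        },
        "format": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "The temperature unit to use",
        },
    },
    required=["location", "format"],
)

# Bundle the function(s) into a ToolsSchema to pass to your context.
tools = ToolsSchema(standard_tools=[weather_function])
```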
Using Direct Functions (Shorthand)
You can bypass specifying a function configuration as a FunctionSchema and instead pass the function directly to your ToolsSchema. Pipecat will auto-configure the function, gathering relevant metadata from its signature and docstring. Metadata includes:
- name
- description
- properties (including individual property descriptions)
- list of required properties
The function's first parameter must be FunctionCallParams, followed by any others necessary for the function.
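A minimal sketch of a direct function, reusing the weather example (the weather payload is a placeholder for a real API call):

```python
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.services.llm_service import FunctionCallParams

async def get_current_weather(params: FunctionCallParams, location: str, format: str):
    """Get the current weather for a location.

    Args:
        location: The city and state, e.g. "San Francisco, CA".
        format: The temperature unit to use, "celsius" or "fahrenheit".
    """
    # Placeholder result; call a real weather API here.
    weather = {"location": location, "conditions": "sunny", "temperature": 75}
    await params.result_callback(weather)

# Pass the function itself; Pipecat derives the name, description,
# and properties from the signature and docstring.
tools = ToolsSchema(standard_tools=[get_current_weather])
```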
Provider-Specific Custom Tools
LLMContext expects tools to be provided as a ToolsSchema. For normal function calling, prefer standard_tools with FunctionSchema or direct functions so Pipecat can convert them to each provider's native format.

When a provider has tools that don't fit Pipecat's standard function schema, add those provider-native definitions through ToolsSchema.custom_tools. These custom tools are passed only to the matching adapter and are appended to the converted standard tools.
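As a sketch (the Gemini search payload below is illustrative; check your provider's docs for the exact native tool definition):

```python
from pipecat.adapters.schemas.tools_schema import AdapterType, ToolsSchema

tools = ToolsSchema(
    # Portable definitions, converted per provider by Pipecat's adapters.
    standard_tools=[weather_function],
    # Provider-native tools, passed through only to the matching adapter
    # and appended to the converted standard tools.
    custom_tools={
        AdapterType.GEMINI: [{"google_search": {}}],
    },
)
```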
Raw provider-native tool lists are not the normal LLMContext path. Some lower-level adapter code still preserves non-ToolsSchema tools for legacy or direct provider-specific paths, but LLMContext(tools=...) validates tools as a ToolsSchema. Use custom_tools as the provider-specific escape hatch while staying in the universal context flow.

2. Register Function Handlers
Register handlers for your functions using one of these LLM service methods:
- register_function
- register_direct_function
Both methods accept the following optional parameters:
- cancel_on_interruption=True (default): Function call is cancelled if the user interrupts
- cancel_on_interruption=False: Function call continues as async; the LLM doesn't wait for the result before continuing
- timeout_secs=None (default): Optional per-tool timeout in seconds. Overrides the global function_call_timeout_secs for this specific function
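For example, assuming an LLM service `llm` and handlers like those sketched later on this page (fetch_weather_handler is a hypothetical name):

```python
# Standard registration: the handler is looked up by function name.
llm.register_function(
    "get_current_weather",
    fetch_weather_handler,
    cancel_on_interruption=True,  # default: cancel if the user interrupts
    timeout_secs=30,              # per-tool override of the global timeout
)

# Direct-function registration: name and schema come from the function itself.
llm.register_direct_function(get_current_weather)
```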
Use cancel_on_interruption=False for long-running operations or when you want the LLM to continue the conversation without waiting. When set to False, the function call is treated as asynchronous: the LLM continues the conversation immediately without waiting for the result. Once the result returns, it's injected back into the context as a developer message, triggering a new LLM inference at that point. This allows for truly non-blocking function calls where the conversation can proceed while the function executes in the background. Async function calls can also send intermediate updates before the final result.
Use cancel_on_interruption=True (the default) when the LLM should wait for the function result before responding. This ensures the LLM has the complete information before generating its next response.
Use timeout_secs to set a specific timeout for a function that differs from the global default. For example, you might want a longer timeout for database queries or shorter timeouts for quick lookups.
Async Function Call Cancellation
If you register async function calls with cancel_on_interruption=False, you can also enable model-directed cancellation:
When enable_async_tool_cancellation=True is set and at least one async function is registered, Pipecat automatically adds the built-in cancel_async_tool_call tool and supporting system instructions. The LLM can call that tool to cancel a stale in-progress async function call, for example when the user changes their request before a long-running lookup completes.
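Where enable_async_tool_cancellation is configured isn't spelled out above; as an assumption, this sketch passes it to the LLM service constructor, with long_running_lookup as a hypothetical async handler:

```python
import os

from pipecat.services.openai.llm import OpenAILLMService

# Assumption: enable_async_tool_cancellation is an LLM service option.
llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    enable_async_tool_cancellation=True,
)

# An async (non-blocking) function call, eligible for model-directed
# cancellation via the built-in cancel_async_tool_call tool.
llm.register_function(
    "long_running_lookup",
    long_running_lookup,
    cancel_on_interruption=False,
)
```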
3. Create the Pipeline
Include your LLM service in your pipeline with the registered functions:
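A typical pipeline sketch (transport, stt, tts, and context_aggregator are assumed to be created elsewhere, as in a standard Pipecat bot):

```python
from pipecat.pipeline.pipeline import Pipeline

pipeline = Pipeline([
    transport.input(),               # audio in from the user
    stt,                             # speech-to-text
    context_aggregator.user(),       # add user turns to the context
    llm,                             # LLM service with registered functions
    tts,                             # text-to-speech
    transport.output(),              # audio out to the user
    context_aggregator.assistant(),  # add bot turns to the context
])
```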
Function Handler Details
FunctionCallParams
Every function handler receives a FunctionCallParams object containing all the information needed for execution:
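As a rough sketch of the fields you'll typically use (field names reflect common Pipecat usage; consult the API reference for the authoritative list):

```python
async def my_handler(params: FunctionCallParams):
    params.function_name    # name of the function being called
    params.tool_call_id     # unique ID for this specific call
    params.arguments        # dict of arguments provided by the LLM
    params.llm              # the LLM service that issued the call
    params.context          # the current conversation context
    params.result_callback  # async callback used to return the result
    params.app_resources    # shared resources passed to the PipelineTask
```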
params.tool_resources is a deprecated alias for params.app_resources. Use app_resources in new code.

Handler Structure
Your function handler should:
- Receive necessary arguments, either:
  - From params.arguments
  - Directly from function arguments, if using direct functions
- Process data or call external services
- Return results via params.result_callback(result)
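Putting that together, a minimal handler sketch (the weather lookup is a placeholder for a real service call):

```python
from pipecat.services.llm_service import FunctionCallParams

async def fetch_weather_handler(params: FunctionCallParams):
    # 1. Receive arguments from the LLM.
    location = params.arguments.get("location")

    # 2. Process data or call an external service (placeholder here).
    weather = {"location": location, "conditions": "sunny", "temperature": 75}

    # 3. Return the result to the LLM.
    await params.result_callback(weather)
```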
Sharing Resources with app_resources
When function handlers need access to shared resources like database connections, API clients, or application state, you can pass them via app_resources when creating the PipelineTask. These resources are then accessible in every function handler via params.app_resources (see the sketch after the following list).
- Resources are passed by reference: the caller retains their handle and can read mutations after the task finishes
- The framework never copies or clears the app_resources object
- All function handlers in the pipeline share the same app_resources instance
- Useful for database connections, API clients, caches, or any shared state
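A sketch of the pattern (db_pool and its fetch_user method are hypothetical):

```python
from pipecat.pipeline.task import PipelineTask
from pipecat.services.llm_service import FunctionCallParams

# Pass shared resources by reference when creating the task.
task = PipelineTask(pipeline, app_resources={"db": db_pool})

async def lookup_user_handler(params: FunctionCallParams):
    # Every handler sees the same app_resources instance.
    db = params.app_resources["db"]
    user = await db.fetch_user(params.arguments["user_id"])  # hypothetical call
    await params.result_callback(user)
```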
PipelineTask(tool_resources=...) and FunctionCallParams.tool_resources are deprecated aliases retained for compatibility. Prefer PipelineTask(app_resources=...) and params.app_resources.

Controlling Function Call Behavior (Advanced)
When returning results from a function handler, you can control how the LLM processes those results using a FunctionCallResultProperties object passed to the result callback.
Properties
FunctionCallResultProperties provides fine-grained control over LLM execution:
- run_llm=True: Run LLM after function call (default behavior)
- run_llm=False: Don't run LLM after function call (useful for chained calls)
- on_context_updated: Async callback executed after the function result is added to context
- is_final=False: Treat this as an intermediate result for an async function call. Only use this for functions registered with cancel_on_interruption=False
Example Usage
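A sketch of run_llm and on_context_updated in use (the database query is a placeholder):

```python
from pipecat.frames.frames import FunctionCallResultProperties
from pipecat.services.llm_service import FunctionCallParams

async def query_database_handler(params: FunctionCallParams):
    results = {"rows": 42}  # placeholder query result

    async def on_update():
        # Runs after the result has been added to the context.
        print("Context updated with query results")

    # Skip the LLM run after this call, e.g. when a chained function
    # call should trigger the next inference instead.
    properties = FunctionCallResultProperties(
        run_llm=False,
        on_context_updated=on_update,
    )
    await params.result_callback(results, properties=properties)
```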
Intermediate Results for Async Functions
Async function calls can send progress updates before their final result. Register the function with cancel_on_interruption=False, then call params.result_callback(..., properties=FunctionCallResultProperties(is_final=False)) for each intermediate update. Finish with a normal params.result_callback(...).
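For example (long_running_lookup and its payloads are illustrative; registration is shown for completeness):

```python
from pipecat.frames.frames import FunctionCallResultProperties
from pipecat.services.llm_service import FunctionCallParams

async def long_running_lookup(params: FunctionCallParams):
    # Intermediate update: is_final=False keeps the call in progress.
    await params.result_callback(
        {"status": "searching", "progress": "50%"},
        properties=FunctionCallResultProperties(is_final=False),
    )

    final = {"status": "done", "matches": 3}  # placeholder final result

    # Normal callback: delivers the final result and triggers inference.
    await params.result_callback(final)

llm.register_function(
    "long_running_lookup",
    long_running_lookup,
    cancel_on_interruption=False,  # required for intermediate results
)
```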
Key Takeaways
- Function calling extends LLM capabilities beyond training data to real-time information
- Context integration is automatic - function calls and results are stored in conversation history
- Multiple definition approaches - use standard schema for portability, direct functions for simplicity
- Async function calls are opt-in - set cancel_on_interruption=False for deferred results, intermediate updates, and optional async-tool cancellation
- Pipeline integration is seamless - functions work within your existing voice AI architecture
- Advanced control available - fine-tune LLM execution and monitor function call lifecycle
What’s Next
Now that you understand function calling, let's explore how to configure text-to-speech services to convert your LLM's responses (including function call results) into natural-sounding speech.

Text to Speech
Learn how to configure speech synthesis in your voice AI pipeline