
Function calling (also known as tool calling) allows LLMs to request information from external services and APIs during conversations. This extends your voice AI bot’s capabilities beyond its training data to access real-time information and perform actions.

Pipeline Integration

Function calling works seamlessly within your existing pipeline structure. The LLM service handles function calls automatically when they’re needed:
pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),     # Collects user transcriptions
    llm,                          # Processes context, calls functions when needed
    tts,
    transport.output(),
    context_aggregator.assistant(), # Collects function results and responses
])
Function call flow:
  1. User asks a question requiring external data
  2. LLM recognizes the need and calls appropriate function
  3. Your function handler executes and returns results
  4. LLM incorporates results into its response
  5. Response flows to TTS and user as normal
Context integration: Function calls and their results are automatically stored in conversation context by the context aggregators, maintaining complete conversation history.
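For illustration, after a completed weather lookup the aggregated context contains entries along these lines (a sketch using OpenAI-style message shapes; the exact internal representation varies by provider and is maintained for you by the aggregators):
# Illustrative context entries after a function call (OpenAI-style shapes;
# the real representation is managed internally by the context aggregators).
[
    {"role": "user", "content": "What's the weather in Austin?"},
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_123",  # hypothetical call ID
            "type": "function",
            "function": {"name": "get_current_weather", "arguments": '{"location": "Austin, TX"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_123", "content": '{"conditions": "sunny", "temperature": "75"}'},
    {"role": "assistant", "content": "It's sunny and 75 degrees in Austin."},
]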

Understanding Function Calling

Function calling allows your bot to access real-time data and perform actions that aren’t part of its training data. For example, you could give your bot the ability to:
  • Check current weather conditions
  • Look up stock prices
  • Query a database
  • Control smart home devices
  • Schedule appointments
Here’s how it works:
  1. You define functions the LLM can use and register them to the LLM service used in your pipeline
  2. When needed, the LLM requests a function call
  3. Your application executes any corresponding functions
  4. The result is sent back to the LLM
  5. The LLM uses this information in its response

Implementation

1. Define Functions

Pipecat provides a standardized FunctionSchema that works across all supported LLM providers, so you can define functions once and use them with any provider. As a shorthand, you can also skip the explicit function configuration and use “direct” functions; under the hood, these are converted to FunctionSchemas.
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema

# Define a function using the standard schema
weather_function = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather in a location",
    properties={
        "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA",
        },
        "format": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "The temperature unit to use.",
        },
    },
    required=["location", "format"]
)

# Create a tools schema with your functions
tools = ToolsSchema(standard_tools=[weather_function])

# Pass this to your LLM context
context = LLMContext(tools=tools)
context_aggregator = LLMContextAggregatorPair(context)
The ToolsSchema will be automatically converted to the correct format for your LLM provider through adapters. The bot’s personality (e.g. “You are a helpful assistant”) is provided as a system message in the context, not through the tools schema.
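For reference, the adapter converts a FunctionSchema like the one above into the provider’s native tool format. For OpenAI-style APIs, the result looks roughly like this (an illustrative sketch; Pipecat produces this internally, so you never write it yourself):
# Approximate OpenAI-format tool definition produced by the adapter
# for weather_function above (illustrative; generated internally).
{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                "format": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use."},
            },
            "required": ["location", "format"],
        },
    },
}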

Using Direct Functions (Shorthand)

You can bypass specifying a function configuration as a FunctionSchema and instead pass the function directly to your ToolsSchema. Pipecat will auto-configure the function, gathering relevant metadata from its signature and docstring. Metadata includes:
  • name
  • description
  • properties (including individual property descriptions)
  • list of required properties
Note that the function signature differs when using direct functions: the first parameter is FunctionCallParams, followed by any parameters the function needs.
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.services.llm_service import FunctionCallParams

# Define a direct function
async def get_current_weather(params: FunctionCallParams, location: str, format: str):
    """Get the current weather.

    Args:
        location: The city and state, e.g. "San Francisco, CA".
        format: The temperature unit to use. Must be either "celsius" or "fahrenheit".
    """
    weather_data = {"conditions": "sunny", "temperature": "75"}
    await params.result_callback(weather_data)

# Create a tools schema, passing your function directly to it
tools = ToolsSchema(standard_tools=[get_current_weather])

# Pass this to your LLM context
context = LLMContext(tools=tools)
context_aggregator = LLMContextAggregatorPair(context)
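Direct functions can also take optional parameters. In this sketch, a parameter with a default value is assumed to be inferred as optional rather than required (verify this behavior against your Pipecat version):
from pipecat.services.llm_service import FunctionCallParams

# Hypothetical direct function with an optional parameter; "days" has a
# default, so it is assumed to be inferred as optional in the schema.
async def get_forecast(params: FunctionCallParams, location: str, days: int = 3):
    """Get a multi-day weather forecast.

    Args:
        location: The city and state, e.g. "Austin, TX".
        days: Number of days to forecast.
    """
    await params.result_callback({"location": location, "days": days, "outlook": "mild"})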

Provider-Specific Custom Tools

LLMContext expects tools to be provided as a ToolsSchema. For normal function calling, prefer standard_tools with FunctionSchema or direct functions so Pipecat can convert them to each provider’s native format. When a provider has tools that don’t fit Pipecat’s standard function schema, add those provider-native definitions through ToolsSchema.custom_tools. These custom tools are passed only to the matching adapter and are appended to the converted standard tools.
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import AdapterType, ToolsSchema

# Standard function converted by Pipecat
weather_function = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather",
    properties={"location": {"type": "string"}},
    required=["location"],
)

# Provider-native tool appended only for OpenAI-family adapters.
# This object must match the target OpenAI API you are using
# (e.g. the Responses API's built-in web search tool).
provider_tool = {"type": "web_search_preview"}

tools = ToolsSchema(
    standard_tools=[weather_function],
    custom_tools={AdapterType.OPENAI: [provider_tool]},
)
Raw provider-native tool lists are not the normal LLMContext path. Some lower-level adapter code still preserves non-ToolsSchema tools for legacy or direct provider-specific paths, but LLMContext(tools=...) validates tools as a ToolsSchema. Use custom_tools as the provider-specific escape hatch while staying in the universal context flow.
For normal callable functions, use FunctionSchema or direct functions instead of provider-native function definitions. Today, custom_tools is supported for OpenAI-family adapters and Gemini; for Anthropic, represent standard functions with FunctionSchema.
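For example, a Gemini-native tool such as search grounding can be passed the same way (a hedged sketch; the google_search payload must match the Gemini API version you target):
from pipecat.adapters.schemas.tools_schema import AdapterType, ToolsSchema

# Gemini-native search grounding tool, delivered only to the Gemini adapter.
# The payload shape must match the Gemini API version you are using.
gemini_tool = {"google_search": {}}

tools = ToolsSchema(
    standard_tools=[weather_function],
    custom_tools={AdapterType.GEMINI: [gemini_tool]},
)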

2. Register Function Handlers

Register handlers for your functions using one of these LLM service methods:
  • register_function
  • register_direct_function
Which one you use depends on whether your function is a “direct” function.
from pipecat.services.llm_service import FunctionCallParams

llm = OpenAILLMService(api_key="your-api-key")

# Main function handler - called to execute the function
async def fetch_weather_from_api(params: FunctionCallParams):
    # Fetch weather data from your API
    weather_data = {"conditions": "sunny", "temperature": "75"}
    await params.result_callback(weather_data)

# Register the function
llm.register_function(
    "get_current_weather",
    fetch_weather_from_api,
    cancel_on_interruption=True,  # Cancel if user interrupts (default: True)
    timeout_secs=30.0,  # Optional: Override global timeout for this function
)
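If your function is a direct function, register the function object itself with register_direct_function; Pipecat reads the tool name and schema from the function (a minimal sketch using the get_current_weather direct function defined earlier):
# Direct functions are registered by passing the function itself;
# the tool name and schema come from its signature and docstring.
llm.register_direct_function(get_current_weather)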
Key registration options:
  • cancel_on_interruption=True (default): Function call is cancelled if user interrupts
  • cancel_on_interruption=False: Function call continues as async; LLM doesn’t wait for result before continuing
  • timeout_secs=None (default): Optional per-tool timeout in seconds. Overrides the global function_call_timeout_secs for this specific function
Use cancel_on_interruption=False for long-running operations or when you want the LLM to continue the conversation without waiting. When set to False, the function call is treated as asynchronous: the LLM continues the conversation immediately, and once the result returns it is injected back into the context as a developer message, triggering a new LLM inference at that point. This allows truly non-blocking function calls where the conversation proceeds while the function executes in the background. Async function calls can also send intermediate updates before the final result.

Use cancel_on_interruption=True (the default) when the LLM should wait for the function result before responding, ensuring it has complete information before generating its next response.

Use timeout_secs to set a per-function timeout that differs from the global default. For example, you might want a longer timeout for database queries or a shorter one for quick lookups.

Async Function Call Cancellation

If you register async function calls with cancel_on_interruption=False, you can also enable model-directed cancellation:
llm = OpenAILLMService(
    api_key="your-api-key",
    enable_async_tool_cancellation=True,
)
When enable_async_tool_cancellation=True and at least one async function is registered, Pipecat automatically adds the built-in cancel_async_tool_call tool and supporting system instructions. The LLM can call that tool to cancel a stale in-progress async function call, for example when the user changes their request before a long-running lookup completes.

3. Create the Pipeline

Include your LLM service in your pipeline with the registered functions:
# Initialize the LLM context with your function schemas
context = LLMContext(tools=tools)

# Create the context aggregator pair to collect user and assistant context
context_aggregator = LLMContextAggregatorPair(context)

# Create the pipeline
pipeline = Pipeline([
    transport.input(),               # Input from the transport
    stt,                             # STT processing
    context_aggregator.user(),       # User context aggregation
    llm,                             # LLM processing
    tts,                             # TTS processing
    transport.output(),              # Output to the transport
    context_aggregator.assistant(),  # Assistant context aggregation
])

Function Handler Details

FunctionCallParams

Every function handler receives a FunctionCallParams object containing all the information needed for execution:
@dataclass
class FunctionCallParams:
    function_name: str                          # Name of the called function
    tool_call_id: str                           # Unique identifier for this call
    arguments: Mapping[str, Any]                # Arguments from the LLM
    llm: LLMService                             # Reference to the LLM service
    context: LLMContext                         # Current conversation context
    result_callback: FunctionCallResultCallback # Return results here
    app_resources: Any                          # Application-defined resources shared across tool calls
Using the parameters:
async def example_function_handler(params: FunctionCallParams):
    # Access function details
    print(f"Called function: {params.function_name}")
    print(f"Call ID: {params.tool_call_id}")

    # Extract arguments
    location = params.arguments["location"]

    # Access LLM context for conversation history
    messages = params.context.messages

    # Access shared resources (database, API clients, etc.)
    if params.app_resources:
        db = params.app_resources.database
        user_id = params.app_resources.current_user_id

    # Use LLM service for additional operations
    await params.llm.push_frame(TTSSpeakFrame("Looking up weather data..."))

    # Return results
    await params.result_callback({"conditions": "nice", "temperature": "75"})
See the API reference for complete details.
params.tool_resources is a deprecated alias for params.app_resources. Use app_resources in new code.

Handler Structure

Your function handler should:
  1. Receive necessary arguments, either:
    • From params.arguments
    • Directly from function arguments, if using direct functions
  2. Process data or call external services
  3. Return results via params.result_callback(result)
async def fetch_weather_from_api(params: FunctionCallParams):
    try:
        # Extract arguments
        location = params.arguments.get("location")
        format_type = params.arguments.get("format", "celsius")

        # Call external API
        api_result = await weather_api.get_weather(location, format_type)

        # Return formatted result
        await params.result_callback({
            "location": location,
            "temperature": api_result["temp"],
            "conditions": api_result["conditions"],
            "unit": format_type
        })
    except Exception as e:
        # Handle errors
        await params.result_callback({
            "error": f"Failed to get weather: {str(e)}"
        })

Sharing Resources with app_resources

When function handlers need access to shared resources like database connections, API clients, or application state, you can pass them via app_resources when creating the PipelineTask. These resources are then accessible in every function handler via params.app_resources.
from dataclasses import dataclass
from pipecat.pipeline.task import PipelineTask
from pipecat.services.llm_service import FunctionCallParams

# Define your application resources
@dataclass
class AppResources:
    database: DatabaseConnection
    api_client: WeatherAPIClient
    user_id: str

# Create your resources
resources = AppResources(
    database=db_connection,
    api_client=weather_client,
    user_id="user-123"
)

# Pass resources to the pipeline task
task = PipelineTask(
    pipeline,
    app_resources=resources
)

# Access resources in function handlers
async def query_user_preferences(params: FunctionCallParams):
    # Access shared resources
    db = params.app_resources.database
    user_id = params.app_resources.user_id

    # Query database with shared connection
    prefs = await db.query("SELECT * FROM preferences WHERE user_id = ?", user_id)

    await params.result_callback(prefs)
Key points:
  • Resources are passed by reference — the caller retains their handle and can read mutations after the task finishes
  • The framework never copies or clears the app_resources object
  • All function handlers in the pipeline share the same app_resources instance
  • Useful for database connections, API clients, caches, or any shared state
PipelineTask(tool_resources=...) and FunctionCallParams.tool_resources are deprecated aliases retained for compatibility. Prefer PipelineTask(app_resources=...) and params.app_resources.

Controlling Function Call Behavior (Advanced)

When returning results from a function handler, you can control how the LLM processes those results using a FunctionCallResultProperties object passed to the result callback.

Properties

FunctionCallResultProperties provides fine-grained control over LLM execution:
@dataclass
class FunctionCallResultProperties:
    run_llm: bool | None = None                 # Whether to run LLM after this result
    on_context_updated: Callable | None = None  # Callback when context is updated
    is_final: bool = True                       # Whether this is the final result
Property options:
  • run_llm=True: Run LLM after function call (default behavior)
  • run_llm=False: Don’t run LLM after function call (useful for chained calls)
  • on_context_updated: Async callback executed after the function result is added to context
  • is_final=False: Treat this as an intermediate result for an async function call. Only use this for functions registered with cancel_on_interruption=False
Skip LLM execution (run_llm=False) when you have back-to-back function calls. If you skip a completion, you must manually trigger one from the context aggregator.
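One way to trigger that completion later is to queue the aggregated context back through the pipeline (a hedged sketch; get_context_frame() is assumed from common Pipecat aggregator usage, so check your version’s API):
# Hedged sketch: after a run_llm=False result, trigger the next completion
# yourself by re-queueing the current context from the user-side aggregator.
await task.queue_frames([context_aggregator.user().get_context_frame()])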
See the API reference for complete details.

Example Usage

from pipecat.frames.frames import FunctionCallResultProperties
from pipecat.services.llm_service import FunctionCallParams

async def fetch_weather_from_api(params: FunctionCallParams):
    # Fetch weather data
    weather_data = {"conditions": "sunny", "temperature": "75"}

    # Don't run LLM after this function call
    properties = FunctionCallResultProperties(run_llm=False)

    await params.result_callback(weather_data, properties=properties)

async def query_database(params: FunctionCallParams):
    # Query database
    results = await db.query(params.arguments["query"])

    async def on_update():
        await notify_system("Database query complete")

    # Run LLM after function call and notify when context is updated
    properties = FunctionCallResultProperties(
        run_llm=True,
        on_context_updated=on_update
    )

    await params.result_callback(results, properties=properties)

Intermediate Results for Async Functions

Async function calls can send progress updates before their final result. Register the function with cancel_on_interruption=False, then call params.result_callback(..., properties=FunctionCallResultProperties(is_final=False)) for each intermediate update. Finish with a normal params.result_callback(...).
from pipecat.frames.frames import FunctionCallResultProperties
from pipecat.services.llm_service import FunctionCallParams

async def track_delivery(params: FunctionCallParams):
    await params.result_callback(
        {"status": "picked_up"},
        properties=FunctionCallResultProperties(is_final=False),
    )

    await params.result_callback(
        {"status": "nearby"},
        properties=FunctionCallResultProperties(is_final=False),
    )

    await params.result_callback({"status": "delivered"})

llm.register_function(
    "track_delivery",
    track_delivery,
    cancel_on_interruption=False,
)
Intermediate results are injected into the LLM context as async-tool developer messages. They do not close the function call; the call remains in progress until the final result is sent.

Key Takeaways

  • Function calling extends LLM capabilities beyond training data to real-time information
  • Context integration is automatic - function calls and results are stored in conversation history
  • Multiple definition approaches - use standard schema for portability, direct functions for simplicity
  • Async function calls are opt-in - set cancel_on_interruption=False for deferred results, intermediate updates, and optional async-tool cancellation
  • Pipeline integration is seamless - functions work within your existing voice AI architecture
  • Advanced control available - fine-tune LLM execution and monitor function call lifecycle

What’s Next

Now that you understand function calling, let’s explore how to configure text-to-speech services to convert your LLM’s responses (including function call results) into natural-sounding speech.

Text to Speech

Learn how to configure speech synthesis in your voice AI pipeline