This guide discusses calling custom functions as tools. Writer also offers prebuilt tools that models can execute remotely.
You need an API key to access the Writer API. Get an API key by following the steps in the API quickstart. We recommend setting the API key as an environment variable named `WRITER_API_KEY` in a `.env` file.

Overview
To use tool calling, follow these steps:

- Define your functions in code
- Pass the functions to the model in a chat completion request
- Append the assistant’s response (containing tool calls) to the message history
- Check to see which functions the model wants to invoke and run the corresponding functions
- Append the tool results to the message history
- Pass the updated messages back to the model to get the final response
- Append the final response to maintain complete conversation history
Tool calling overview

Example: Calculate the mean of a list of numbers

Example: Calculate the mean and standard deviation of a list of numbers

Define your custom functions
First, define the custom functions in your code. Typical use cases for tool calling include calling an API, performing mathematical calculations, or running complex business logic. You can define these functions in your code as you would any other function. Here’s an example of a function to calculate the mean of a list of numbers.

Describe functions as tools
After you’ve defined your functions, create a `tools` array to pass to the model.
The `tools` array describes your functions as tools available to the model. You describe tools in the form of a JSON schema. Each tool should include a `type` of `function` and a `function` object that includes a `name`, a `description`, and a dictionary of `parameters`.
Tool structure
The `tools` array contains an object with the following parameters:

| Parameter | Type | Description |
|---|---|---|
| `type` | string | The type of tool, which is `function` for a custom function |
| `function` | object | An object containing the tool’s description and parameter definitions |
| `function.name` | string | The name of the tool |
| `function.description` | string | A description of what the tool does and when the model should use it |
| `function.parameters` | object | An object containing the tool’s input parameters |
| `function.parameters.type` | string | The type of the parameters object, which is `object` for a JSON schema |
| `function.parameters.properties` | object | An object containing the tool’s parameters in the form of a JSON schema. See below for more details. |
| `function.parameters.required` | array | An array of the tool’s required parameters |
The `function.parameters.properties` object contains the tool’s parameter definitions as a JSON schema. The object’s keys should be the names of the parameters, and the values should be objects containing each parameter’s type and description.
When the model decides you should use the tool to answer the user’s question, it returns the parameters that you should use when calling the function you’ve defined.
Example tool array
Here’s an example of a `tools` array for the `calculate_mean` function:
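A sketch is below, assuming the function itself is implemented in Python; the parameter description text is illustrative.

```python
# Hypothetical implementation of calculate_mean, for illustration.
def calculate_mean(numbers: list[float]) -> float:
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

# A tools array describing calculate_mean as a JSON schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_mean",
            "description": (
                "A function that calculates the mean of a list of numbers. "
                "Any user request asking for the mean of a list of numbers "
                "should use this tool."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "The list of numbers to average",
                    }
                },
                "required": ["numbers"],
            },
        },
    }
]
```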
To help the model understand when to use the tool, follow these best practices for the `function.description` parameter:

- Indicate what the tool is, for example, a function that performs a calculation
- Specify the function’s purpose and capabilities
- Describe when the tool should be used

An example description for the `calculate_mean` function:

“A function that calculates the mean of a list of numbers. Any user request asking for the mean of a list of numbers should use this tool.”
Pass tools to the model
Once the `tools` array is complete, pass it to the chat completions endpoint along with the chat messages.

Tool choice control
The chat completions endpoint has a `tool_choice` parameter that controls how the model decides when to use the tools you’ve defined.
| Value | Description |
|---|---|
| `auto` | The model decides which tools to use, if any. |
| `none` | The model doesn’t use tools and only returns a generated response. |
| `required` | The model must use at least one of the tools you’ve defined. |
To force the model to use a specific tool, such as the `calculate_mean` tool, you can set `tool_choice` to `{"type": "function", "function": {"name": "calculate_mean"}}`.
In this example, `tool_choice` is `auto`, which means the model decides which tools to use, if any, based on the message and tool descriptions.
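As a sketch, assuming Writer's Python SDK (`writerai`, installed via `pip install writer-sdk`, an assumption) and an illustrative model name, the request might look like the following; the call itself is wrapped in a function rather than executed here.

```python
# Sketch: assumes Writer's Python SDK and a WRITER_API_KEY environment
# variable; the tools array is the one described earlier for calculate_mean.
messages = [
    {"role": "user", "content": "What is the mean of 2, 4, 6, and 8?"}
]

def request_with_tools(client, messages, tools):
    # client is assumed to be a writerai.Writer instance; chat.chat is the
    # chat completions method referenced throughout this guide.
    return client.chat.chat(
        model="palmyra-x-004",  # illustrative model name; substitute your own
        messages=messages,
        tools=tools,
        tool_choice="auto",  # let the model decide whether to call a tool
    )
```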
Process tool calls
When the model identifies a need to call a tool based on the user’s input, it indicates it in the response and includes the necessary parameters to pass when calling the tool. You then execute the tool’s function and return the result to the model. Proper conversation history management requires appending the assistant’s response, tool results, and final response to the message history.
The examples below demonstrate processing a single tool call. For handling multiple tool calls in one request, see the multiple tool calls section.
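A non-streaming sketch of these steps is below. It uses a stand-in response object in place of a live SDK response and assumes OpenAI-style message shapes (`role: "tool"`, `tool_call_id`); all names are illustrative.

```python
import json
from types import SimpleNamespace

def calculate_mean(numbers):
    return sum(numbers) / len(numbers)

# Stand-in for a chat completion response containing one tool call.
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(
        role="assistant",
        content=None,
        tool_calls=[SimpleNamespace(
            id="call_1",
            function=SimpleNamespace(
                name="calculate_mean",
                arguments=json.dumps({"numbers": [2, 4, 6, 8]}),
            ),
        )],
    ))]
)

messages = [{"role": "user", "content": "What is the mean of 2, 4, 6, and 8?"}]

assistant_message = response.choices[0].message
# Append the assistant's response (containing the tool calls).
messages.append({
    "role": "assistant",
    "content": assistant_message.content,
    "tool_calls": [
        {
            "id": tc.id,
            "type": "function",
            "function": {"name": tc.function.name,
                         "arguments": tc.function.arguments},
        }
        for tc in assistant_message.tool_calls
    ],
})

# Execute each requested function and append the tool result.
for tool_call in assistant_message.tool_calls:
    if tool_call.function.name == "calculate_mean":
        args = json.loads(tool_call.function.arguments)
        result = calculate_mean(args["numbers"])
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": str(result),
        })

# messages is now ready to send back to the model for the final response.
```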
Streaming
When using streaming, the tool calls come back in chunks inside the `delta` object of the `choices` array. To process the tool calls:

- Stream and collect tool calls from the response chunks
- Reconstruct the assistant’s response and append it to the message history
- Execute functions and append tool results to the message history
- Get the final response from the model and append it to the message history
Step 1: Stream and collect tool calls from the response chunks
First, stream and collect tool calls from the response chunks.

Step 2: Check finish reason and reconstruct the assistant’s response
Check the `finish_reason` to determine if tools were called, then reconstruct and append the assistant’s response to the conversation history.

Step 3: Execute functions and append tool results
Execute each function in the `function_calls` list and append the results to the messages array.

Step 4: Get and append the final response
After appending tool results, get the final response from the model and append it to maintain complete conversation history.

Complete streaming code example
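A self-contained sketch of the full streaming flow is below. It simulates streamed chunks with stand-in objects rather than a live API call, and assumes an OpenAI-style chunk shape (`delta.tool_calls`, `finish_reason`); field names beyond the steps above are illustrative.

```python
import json
from types import SimpleNamespace

def calculate_mean(numbers):
    return sum(numbers) / len(numbers)

# Simulated stream: the tool call's arguments arrive split across two chunks.
def fake_stream():
    yield SimpleNamespace(choices=[SimpleNamespace(
        delta=SimpleNamespace(tool_calls=[SimpleNamespace(
            index=0, id="call_1",
            function=SimpleNamespace(name="calculate_mean",
                                     arguments='{"numbers": '))]),
        finish_reason=None)])
    yield SimpleNamespace(choices=[SimpleNamespace(
        delta=SimpleNamespace(tool_calls=[SimpleNamespace(
            index=0, id=None,
            function=SimpleNamespace(name=None,
                                     arguments="[2, 4, 6, 8]}"))]),
        finish_reason=None)])
    yield SimpleNamespace(choices=[SimpleNamespace(
        delta=SimpleNamespace(tool_calls=None),
        finish_reason="tool_calls")])

# Step 1: stream and collect tool calls from the chunks.
function_calls = []
finish_reason = None
for chunk in fake_stream():
    choice = chunk.choices[0]
    if choice.delta.tool_calls:
        for tc in choice.delta.tool_calls:
            if tc.id:  # first chunk of a new tool call
                function_calls.append(
                    {"id": tc.id, "name": tc.function.name, "arguments": ""})
            if tc.function.arguments:
                function_calls[tc.index]["arguments"] += tc.function.arguments
    if choice.finish_reason:
        finish_reason = choice.finish_reason

messages = [{"role": "user", "content": "What is the mean of 2, 4, 6, and 8?"}]

if finish_reason == "tool_calls":
    # Step 2: reconstruct and append the assistant message.
    messages.append({
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {"id": fc["id"], "type": "function",
             "function": {"name": fc["name"], "arguments": fc["arguments"]}}
            for fc in function_calls
        ],
    })
    # Step 3: execute each function and append its result.
    for fc in function_calls:
        if fc["name"] == "calculate_mean":
            args = json.loads(fc["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": fc["id"],
                "content": str(calculate_mean(args["numbers"])),
            })
# Step 4 would send `messages` back to the model for the final response.
```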
Multiple tool calls
When the model uses multiple tools in a single request, you’ll receive multiple tool calls in the response. This section shows you how to handle multiple tool calls for both streaming and non-streaming approaches.

Non-streaming: The processing logic for multiple tool calls is identical to single tool calls: the same code handles both scenarios. The only difference is that the loop that checks the response for tool calls processes multiple tool calls instead of one.

Streaming: Multiple tool calls require different logic because tool calls stream in chunks. You need to collect and reconstruct multiple tool calls from the streaming chunks before processing them.
Define multiple functions
First, define the functions in your code.

Define tools array with multiple functions
Create a `tools` array that includes both functions. For more information about describing functions as tools, see the describe functions as tools section.
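As a sketch, assuming Python and the mean/standard-deviation example named earlier in this guide, the two functions and their `tools` array might look like this (descriptions are illustrative):

```python
import statistics

# Hypothetical implementations, for illustration.
def calculate_mean(numbers):
    return statistics.mean(numbers)

def calculate_std_dev(numbers):
    # Sample standard deviation from the standard library.
    return statistics.stdev(numbers)

tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_mean",
            "description": (
                "A function that calculates the mean of a list of numbers. "
                "Use for any request asking for the mean of a list of numbers."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "The list of numbers to average",
                    }
                },
                "required": ["numbers"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "calculate_std_dev",
            "description": (
                "A function that calculates the standard deviation of a list "
                "of numbers. Use for any request asking for the standard "
                "deviation of a list of numbers."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "The list of numbers",
                    }
                },
                "required": ["numbers"],
            },
        },
    },
]
```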
Process multiple tool calls
When the model identifies a need to call multiple tools based on the user’s input, you’ll receive multiple tool calls in the response. The processing differs between streaming and non-streaming approaches.

Streaming
When using streaming with multiple tool calls, the tool calls come back in chunks inside the `delta` object of the `choices` array. To process multiple tool calls:

- Stream and collect multiple tool calls from the response chunks
- Reconstruct the assistant’s response with all tool calls and append it to the message history
- Execute all functions and append tool results to the message history
- Get the final response from the model and append it to the message history
Step 1: Stream and collect multiple tool calls from the response chunks
First, stream and collect multiple tool calls from the response chunks. This is where multiple tool calls differ from single tool calls.

Step 2: Reconstruct and append assistant message with multiple tool calls
Reconstruct the assistant message with all collected tool calls.

Step 3: Execute all functions and append results
Execute each function and append the results to the conversation history.

Step 4: Get the final response
Get the final response from the model after all tool calls are processed.

Step 5: Append final response to conversation history
Append the final response to maintain complete conversation history.

Complete streaming code example
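A self-contained sketch of the multiple-tool-call streaming flow is below. It simulates streamed chunks with stand-in objects rather than a live API call, keys collected tool calls by their delta `index`, and assumes an OpenAI-style chunk shape; all names are illustrative.

```python
import json
import statistics
from types import SimpleNamespace

def calculate_mean(numbers):
    return sum(numbers) / len(numbers)

def calculate_std_dev(numbers):
    return statistics.stdev(numbers)

FUNCTION_MAP = {"calculate_mean": calculate_mean,
                "calculate_std_dev": calculate_std_dev}

def delta_chunk(index, call_id, name, arguments, finish_reason=None):
    # Build a stand-in streaming chunk with a single tool-call delta.
    return SimpleNamespace(choices=[SimpleNamespace(
        delta=SimpleNamespace(tool_calls=[SimpleNamespace(
            index=index, id=call_id,
            function=SimpleNamespace(name=name, arguments=arguments))]),
        finish_reason=finish_reason)])

# Simulated stream containing two tool calls.
chunks = [
    delta_chunk(0, "call_1", "calculate_mean", '{"numbers": [2, 4, 6]}'),
    delta_chunk(1, "call_2", "calculate_std_dev", '{"numbers": [2, 4, 6]}'),
    SimpleNamespace(choices=[SimpleNamespace(
        delta=SimpleNamespace(tool_calls=None), finish_reason="tool_calls")]),
]

# Step 1: collect every tool call, keyed by its index in the deltas.
function_calls = {}
for chunk in chunks:
    choice = chunk.choices[0]
    for tc in choice.delta.tool_calls or []:
        call = function_calls.setdefault(
            tc.index, {"id": tc.id, "name": tc.function.name, "arguments": ""})
        if tc.function.arguments:
            call["arguments"] += tc.function.arguments

messages = [{"role": "user",
             "content": "What are the mean and standard deviation of 2, 4, 6?"}]

# Step 2: append the assistant message with all collected tool calls.
messages.append({
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"id": c["id"], "type": "function",
         "function": {"name": c["name"], "arguments": c["arguments"]}}
        for c in function_calls.values()
    ],
})

# Step 3: execute every function before requesting the final response.
for c in function_calls.values():
    args = json.loads(c["arguments"])
    result = FUNCTION_MAP[c["name"]](args["numbers"])
    messages.append({"role": "tool", "tool_call_id": c["id"],
                     "content": str(result)})

# Steps 4 and 5 would send `messages` back to the model and append its final reply.
```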
When processing multiple tool calls, ensure that:
- All tool calls are executed before getting the final response
- Each tool result is properly appended to the conversation history
- Error handling is implemented for each tool call
- The final response includes all tool results for context
Example: External API call
The following example covers a common use case for tool calling: calling an external API. The code uses a publicly available dictionary API to return information about an English word’s phonetic pronunciation. This example uses non-streaming; for streaming, refer to the multiple tool calls streaming example to adjust the code.

Define function calling an API
First, define the function in your code. The examples below take in a word, call the dictionary API, and return the phonetic pronunciation of the word as a JSON-formatted string.

Define tools array
Next, define a `tools` array that describes the tool with a JSON schema.

Pass the tools to the model
Call the `chat.chat` method with the `tools` parameter set to the `tools` array and `tool_choice` set to `auto`.
Check response for tool calling
Loop through the `tool_calls` array to check for the invocation of the tool. Then, call the tool’s function with the arguments the model provided.
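Putting the pieces together, here is a sketch of this flow. The dictionary API URL (dictionaryapi.dev) and all helper names are assumptions for illustration, and the processing loop is exercised with a stand-in assistant message rather than a live model response.

```python
import json
import urllib.request
from types import SimpleNamespace

# Assumption: the free Dictionary API at dictionaryapi.dev; substitute
# whichever dictionary API you use. This function is defined but not called
# here, since it requires network access.
def get_phonetics(word: str) -> str:
    url = f"https://api.dictionaryapi.dev/api/v2/entries/en/{word}"
    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)
    phonetics = [p.get("text") for p in entries[0].get("phonetics", [])
                 if p.get("text")]
    return json.dumps({"word": word, "phonetics": phonetics})

tools = [{
    "type": "function",
    "function": {
        "name": "get_phonetics",
        "description": (
            "A function that returns the phonetic pronunciation of an English "
            "word. Use for any request asking how a word is pronounced."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "word": {"type": "string",
                         "description": "The English word to look up"}
            },
            "required": ["word"],
        },
    },
}]

def process_tool_calls(response_message, messages, functions):
    # Loop through tool_calls, run each matching function with the
    # model-provided arguments, and append the results as tool messages.
    for tool_call in response_message.tool_calls or []:
        func = functions.get(tool_call.function.name)
        if func is None:
            continue
        args = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": func(**args),
        })
    return messages

# Stand-in assistant message demonstrating the loop without a live model;
# "echo" is a hypothetical function used only for this demo.
demo_message = SimpleNamespace(tool_calls=[SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="echo", arguments='{"word": "hello"}'),
)])
demo_messages = process_tool_calls(demo_message, [], {"echo": lambda word: word})
```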