This guide helps you understand how to use tool calling, sometimes known as function calling. Tool calling allows you to extend the capabilities of AI chat applications by enabling direct interaction between models and functions you define.
Your custom functions can perform a wide range of tasks, such as querying databases, fetching real-time data from APIs, processing data, or executing business logic. You can then integrate the result of these tool calls back into the model’s output.
Tool calling is available for Palmyra-X-004.
This guide discusses calling custom functions as tools. Writer also offers prebuilt tools that models can execute remotely.
First, define the custom functions in your code. Typical use cases for tool calling include calling an API, performing mathematical calculations, or running complex business logic. You can define these functions in your code as you would any other function.
Here’s an example of a function to calculate the mean of a list of numbers.
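This is the same calculate_mean function used in the full examples later in this guide:

```python
def calculate_mean(numbers: list) -> float:
    # Sum the list and divide by its length to get the average
    return sum(numbers) / len(numbers)
```

For example, `calculate_mean([1, 3, 5, 7, 9])` returns `5.0`.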
After you’ve defined your functions, create a tools array to pass to the model.
The tools array describes your functions as tools available to the model. You describe tools in the form of a JSON schema. Each tool should include a type of function and a function object that includes a name, description, and a dictionary of parameters.
The function.parameters.properties object contains the tool’s parameter definitions as a JSON schema. The object’s keys should be the names of the parameters, and the values should be objects containing the parameter’s type and description.
When the model decides you should use the tool to answer the user’s question, it returns the parameters that you should use when calling the function you’ve defined.
Here’s an example of a tools array for the calculate_mean function:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_mean",
            "description": "Calculate the mean (average) of a list of numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "List of numbers"
                    }
                },
                "required": ["numbers"]
            }
        }
    }
]
```
To help the model understand when to use the tool, follow these best practices for the function.description parameter:

- Indicate what the function does
- Specify the function’s purpose and capabilities
- Describe when the tool should be used
An example description for a tool that invokes a function to calculate the mean of a list of numbers:
“A function that calculates the mean of a list of numbers. Any user request asking for the mean of a list of numbers should use this tool.”
Once the tools array is complete, you pass it to the chat completions endpoint along with the chat model and messages. Set tool_choice to auto to take full advantage of the model’s capabilities.
If you are unfamiliar with the chat completions endpoint, learn more in the chat completion guide.
You can use tool calling with stream set to either true or false.
```python
from writerai import Writer

# Initialize the Writer client. If you don't pass the `apiKey` parameter,
# the client looks for the `WRITER_API_KEY` environment variable.
client = Writer()

messages = [{"role": "user", "content": "what is the mean of [1,3,5,7,9]?"}]

response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)
```
When the model identifies a need to call a tool based on the user’s input, it invokes it in the response, passing along any necessary parameters. You then execute the tool’s function and return the result to the model.
The method for checking for tool calls and executing the tool’s function differs depending on whether you’re streaming the response or not.
Iterate through the response chunks to check for tool calls, concatenate the streaming tool call content, and handle non-tool-call content, such as content generated when the user asks a question not requiring a tool call.
```python
streaming_content = ""
function_calls = []

for chunk in response:
    choice = chunk.choices[0]
    if choice.delta:
        # Check for tool calls
        if choice.delta.tool_calls:
            for tool_call in choice.delta.tool_calls:
                if tool_call.id:
                    # Append an empty dictionary to the function_calls list with the tool call ID
                    function_calls.append(
                        {"name": "", "arguments": "", "call_id": tool_call.id}
                    )
                if tool_call.function:
                    # Append function name and arguments to the last dictionary in the function_calls list
                    function_calls[-1]["name"] += (
                        tool_call.function.name if tool_call.function.name else ""
                    )
                    function_calls[-1]["arguments"] += (
                        tool_call.function.arguments
                        if tool_call.function.arguments
                        else ""
                    )
        # Handle non-tool-call content
        elif choice.delta.content:
            streaming_content += choice.delta.content
```
Check for the finish reason and then call each function
Inside the loop, within the if-statement for choice.delta, check the finish_reason of the choice. If the finish_reason is stop, the model has finished generating the response without calling any tools. If the finish_reason is tool_calls, call each function in the function_calls list and append the result to the messages array. Be sure to convert the function response to a string before appending it to the messages array.
```python
# Inside of the loop and the if-statement for `choice.delta`

# A finish reason of stop means the model has finished generating the response
if choice.finish_reason == "stop":
    messages.append({"role": "assistant", "content": streaming_content})
# A finish reason of tool_calls means the model has finished deciding which tools to call
elif choice.finish_reason == "tool_calls":
    for function_call in function_calls:
        if function_call["name"] == "calculate_mean":
            arguments_dict = json.loads(function_call["arguments"])
            function_response = calculate_mean(arguments_dict["numbers"])
            messages.append(
                {
                    "role": "tool",
                    "content": str(function_response),
                    "tool_call_id": function_call["call_id"],
                    "name": function_call["name"],
                }
            )
```
After you’ve appended the tool call results to the messages array, you can pass the messages array back to the model to get the final response.
Note that this code block should be inside of the check for the finish_reason of tool_calls, after the loop that iterates through the function_calls list:
```python
# Inside of `elif choice.finish_reason == "tool_calls"`
final_response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    stream=True
)

final_streaming_content = ""
for chunk in final_response:
    choice = chunk.choices[0]
    if choice.delta and choice.delta.content:
        final_streaming_content += choice.delta.content

print(final_streaming_content)
# The mean is 5
```
Here is the full code example for streaming tool calling:
```python
import json

import dotenv
from writerai import Writer

dotenv.load_dotenv()
client = Writer()


def calculate_mean(numbers: list) -> float:
    return sum(numbers) / len(numbers)


tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_mean",
            "description": "Calculate the mean (average) of a list of numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "List of numbers"
                    }
                },
                "required": ["numbers"]
            }
        }
    }
]

messages = [{"role": "user", "content": "what is the mean of [1,3,5,7,9]?"}]

response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    stream=True
)

streaming_content = ""
function_calls = []

for chunk in response:
    choice = chunk.choices[0]
    if choice.delta:
        # Check for tool calls
        if choice.delta.tool_calls:
            for tool_call in choice.delta.tool_calls:
                if tool_call.id:
                    # Append an empty dictionary to the function_calls list with the tool call ID
                    function_calls.append(
                        {"name": "", "arguments": "", "call_id": tool_call.id}
                    )
                if tool_call.function:
                    # Append function name and arguments to the last dictionary in the function_calls list
                    function_calls[-1]["name"] += (
                        tool_call.function.name if tool_call.function.name else ""
                    )
                    function_calls[-1]["arguments"] += (
                        tool_call.function.arguments
                        if tool_call.function.arguments
                        else ""
                    )
        # Handle non-tool-call content
        elif choice.delta.content:
            streaming_content += choice.delta.content

        # A finish reason of stop means the model has finished generating the response
        if choice.finish_reason == "stop":
            messages.append({"role": "assistant", "content": streaming_content})
        # A finish reason of tool_calls means the model has finished deciding which tools to call
        elif choice.finish_reason == "tool_calls":
            for function_call in function_calls:
                if function_call["name"] == "calculate_mean":
                    arguments_dict = json.loads(function_call["arguments"])
                    function_response = calculate_mean(arguments_dict["numbers"])
                    messages.append(
                        {
                            "role": "tool",
                            "content": str(function_response),
                            "tool_call_id": function_call["call_id"],
                            "name": function_call["name"],
                        }
                    )

            final_response = client.chat.chat(
                model="palmyra-x-004",
                messages=messages,
                stream=True
            )

            final_streaming_content = ""
            for chunk in final_response:
                choice = chunk.choices[0]
                if choice.delta and choice.delta.content:
                    final_streaming_content += choice.delta.content

            print(final_streaming_content)
            # The mean is 5
```
In the non-streaming case, the tool calls appear in response.choices[0].message.tool_calls. After executing the function, pass the result back to the model by appending it to the messages array. Be sure to convert the function response to a string if necessary before appending it to the messages array.
```python
# Within the if statement for the tool call
messages.append(
    {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "name": function_name,
        "content": str(function_response),
    }
)
```
After you’ve appended the tool call results to the messages array, you can pass the messages array back to the model to get the final response.
```python
final_response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    stream=False
)

print(f"Final response: \n{final_response.choices[0].message.content}\n")
# Final response: "The mean is 5"
```
Here is the full code example for non-streaming tool calling:
```python
import json

import dotenv
from writerai import Writer

dotenv.load_dotenv()
client = Writer()


def calculate_mean(numbers: list) -> float:
    return sum(numbers) / len(numbers)


tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_mean",
            "description": "Calculate the mean (average) of a list of numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "List of numbers"
                    }
                },
                "required": ["numbers"]
            }
        }
    }
]

messages = [{"role": "user", "content": "what is the mean of [1,3,5,7,9]?"}]

response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    stream=False
)

response_message = response.choices[0].message
# Append the model's response, which contains the tool call, to the messages array
messages.append(response_message)
tool_calls = response_message.tool_calls

if tool_calls:
    tool_call = tool_calls[0]
    tool_call_id = tool_call.id
    function_name = tool_call.function.name
    function_args = json.loads(tool_call.function.arguments)
    if function_name == "calculate_mean":
        function_response = calculate_mean(function_args["numbers"])
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call_id,
                "name": function_name,
                "content": str(function_response),
            }
        )

final_response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    stream=False
)

print(f"Final response: \n{final_response.choices[0].message.content}\n")
# Final response: "The mean is 5"
```
First, define the function in your code. The examples below take in a word, call the dictionary API, and return the phonetic pronunciation of the word as a JSON-formatted string.
```python
import json

import requests


def get_word_pronunciation(word):
    url = f"https://api.dictionaryapi.dev/api/v2/entries/en/{word}"
    response = requests.get(url)
    if response.status_code == 200:
        return json.dumps(response.json()[0]["phonetics"])
    else:
        return f"Failed to retrieve word pronunciation. Status code: {response.status_code}"
```
Next, define a tools array that describes the tool with a JSON schema.
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_word_pronunciation",
            "description": "A function that will return JSON containing the phonetic pronunciation of an English word",
            "parameters": {
                "type": "object",
                "properties": {
                    "word": {
                        "type": "string",
                        "description": "The word to get the phonetic pronunciation for",
                    }
                },
                "required": ["word"],
            },
        },
    }
]
```
Call the chat.chat method with the tools parameter set to the tools array and tool_choice set to auto.
```python
from writerai import Writer

# Initialize the Writer client. If you don't pass the `apiKey` parameter,
# the client looks for the `WRITER_API_KEY` environment variable.
client = Writer()

messages = [
    {
        "role": "user",
        "content": "what is the phonetic pronunciation of the word 'epitome' in English?",
    }
]

response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    stream=False
)
```
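Before passing the result back, read the tool call from the response and run the matching function, as shown in the full example below. One way to sketch this step is with a small helper; the handle_tool_call name is ours for illustration, not part of the SDK:

```python
import json


def handle_tool_call(response_message, functions):
    """Run the first tool call in a chat response, if any.

    `functions` maps tool names to the Python callables you defined.
    Returns (tool_call_id, function_name, result), or None when the
    model answered directly without calling a tool.
    """
    tool_calls = response_message.tool_calls
    if not tool_calls:
        return None
    tool_call = tool_calls[0]
    # The arguments arrive as a JSON string, so parse them before calling
    args = json.loads(tool_call.function.arguments)
    result = functions[tool_call.function.name](**args)
    return tool_call.id, tool_call.function.name, result
```

For example, `handle_tool_call(response.choices[0].message, {"get_word_pronunciation": get_word_pronunciation})` returns the ID, name, and result needed for the tool message in the next step.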
Finally, pass the result back to the model by appending it to the messages array, and get the final response.
```python
messages.append(
    {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "name": function_name,
        "content": function_response,
    }
)

final_response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    stream=False
)

print(f"Final response: {final_response.choices[0].message.content}")
# Final response: The phonetic pronunciation of the word "epitome" in English is /əˈpɪt.ə.mi/...
```
Here is the full code example:
```python
import json

import requests
from writerai import Writer

# Initialize the Writer client. If you don't pass the `apiKey` parameter,
# the client looks for the `WRITER_API_KEY` environment variable.
client = Writer()


def get_word_pronunciation(word):
    url = f"https://api.dictionaryapi.dev/api/v2/entries/en/{word}"
    response = requests.get(url)
    if response.status_code == 200:
        return json.dumps(response.json()[0]["phonetics"])
    else:
        return f"Failed to retrieve word pronunciation. Status code: {response.status_code}"


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_word_pronunciation",
            "description": "A function that will return JSON containing the phonetic pronunciation of an English word",
            "parameters": {
                "type": "object",
                "properties": {
                    "word": {
                        "type": "string",
                        "description": "The word to get the phonetic pronunciation for",
                    }
                },
                "required": ["word"],
            },
        },
    }
]

messages = [
    {
        "role": "user",
        "content": "what is the phonetic pronunciation of the word 'epitome' in English?",
    }
]

response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    stream=False
)

response_message = response.choices[0].message
messages.append(response_message)
tool_calls = response_message.tool_calls

if tool_calls:
    tool_call = tool_calls[0]
    tool_call_id = tool_call.id
    function_name = tool_call.function.name
    function_args = json.loads(tool_call.function.arguments)
    if function_name == "get_word_pronunciation":
        function_response = get_word_pronunciation(function_args["word"])
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call_id,
                "name": function_name,
                "content": function_response,
            }
        )

final_response = client.chat.chat(
    model="palmyra-x-004",
    messages=messages,
    stream=False
)

print(f"Final response: {final_response.choices[0].message.content}")
```
By following this guide, you can incorporate tool calling into your application and augment the capabilities of a model with real-time data, math operations, business logic, and much more. For more examples, check out the tool calling cookbooks available on GitHub.