Writer AI module
This module leverages the Writer Python SDK to enable applications to interact with large language models (LLMs) in chat or text completion formats. It provides tools to manage conversation states and to dynamically interact with LLMs using both synchronous and asynchronous methods.
Getting your API key
To utilize the Writer AI module, you'll need to configure the WRITER_API_KEY environment variable with an API key obtained from AI Studio. Here is a detailed guide to setting up this key. You will need to select an API app under Developer tools.
Once you have your API key, set it as an environment variable on your system:
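For example, in a terminal on macOS or Linux (`your_api_key_here` is a placeholder):

```sh
export WRITER_API_KEY=your_api_key_here
```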
You can manage your environment variables with whichever method best suits your setup, for example with a tool like python-dotenv.
Furthermore, when deploying an application with writer deploy, the WRITER_API_KEY environment variable is automatically configured with the API key specified during the deployment process.
Chat completion with the Conversation class
The Conversation class manages LLM communications within a chat framework, storing the conversation history and handling the interactions.
Initializing a conversation
A Conversation can be initialized with either a system prompt or a list of previous messages. It can also accept a default configuration dictionary that sets parameters for all interactions.
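A minimal sketch of both initialization styles; the config key shown (temperature) is illustrative and depends on the parameters your model supports:

```python
import writer.ai

# Initialize with a system prompt
conversation = writer.ai.Conversation("You are a helpful assistant.")

# Or initialize with a list of previous messages and a default configuration
conversation = writer.ai.Conversation(
    [
        {"role": "user", "content": "What is a graph database?"},
        {"role": "assistant", "content": "A graph database stores data as nodes and edges..."},
    ],
    config={"temperature": 0.3},
)
```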
Adding messages to conversation
Messages can be added to a Conversation instance using the + operator or the add method.
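For example, assuming add accepts role and message arguments:

```python
# Append a message with the + operator
conversation += {"role": "user", "content": "Can you explain that more simply?"}

# Or use the add method (assumed signature)
conversation.add(role="user", message="Can you explain that more simply?")
```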
Completing and streaming Conversations
The complete and stream_complete methods facilitate interaction with the LLM based on the accumulated messages and configuration. These methods execute calls to generate responses and return them in the form of a message object, but do not alter the conversation's messages list, allowing you to validate or modify the output before deciding to add it to the history.
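A sketch of both methods, assuming the returned message object is a dict with role and content fields and that streamed chunks are partial messages of the same shape:

```python
# Generate a response; it is not added to the history automatically
response = conversation.complete()
print(response["content"])
conversation += response  # add it once you have validated it

# Stream a response chunk by chunk
for chunk in conversation.stream_complete():
    print(chunk.get("content", ""), end="")
```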
Instance-wide configuration parameters can be complemented or overridden at the level of an individual call, if a config dictionary is provided to the method:
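For example, lowering the temperature for a single call while keeping the instance-wide defaults for every other call:

```python
# This config applies to this call only
response = conversation.complete(config={"temperature": 0.1})
```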
Using Graphs with Conversation
A Graph is a collection of files meant to provide their contents to the LLM during conversations. Framework allows you to create, retrieve, update, and delete graphs, as well as manage the files within them.
Creating and Managing Graphs
To create and manipulate graphs, use the following methods:
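A sketch of the graph lifecycle. The helper names (create_graph, retrieve_graph, retrieve_graphs, delete_graph) and the Graph object's update method reflect the writer.ai graph API, but treat the exact signatures as assumptions and check the SDK reference:

```python
from writer.ai import create_graph, retrieve_graph, retrieve_graphs, delete_graph

# Create a graph
graph = create_graph(name="Financial Data", description="Quarterly reports")

# Retrieve a single graph by ID, or list all graphs
graph = retrieve_graph(graph.id)
graphs = retrieve_graphs()

# Update the graph's metadata
graph.update(name="Financial Data 2024", description="Quarterly and annual reports")

# Delete the graph
delete_graph(graph.id)
```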
Adding and Removing Files from Graphs
You can upload files, associate them with graphs, and download or remove them.
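A sketch, assuming upload_file takes raw bytes plus a MIME type and that Graph objects expose add_file and remove_file:

```python
from writer.ai import upload_file, delete_file

# Upload a file to your Writer account
file = upload_file(
    data=b"Revenue grew 12% quarter over quarter...",
    type="text/plain",
    name="q3-summary.txt",
)

# Associate the file with a graph, and detach it again later
graph.add_file(file)
graph.remove_file(file)

# Remove the file from the account entirely
delete_file(file.id)
```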
Applying Graphs to Conversation completion
You can utilize graphs within conversations. For instance, you may want to provide the LLM access to a collection of files during an ongoing conversation to query or analyze the file content. When passing a graph to the conversation, the LLM can query the graph to retrieve relevant data.
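For example, passing a Graph object directly as a tool:

```python
# The LLM can now query the graph's files while answering
response = conversation.complete(tools=graph)
```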
Alternatively, you can define a graph using JSON:
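A sketch of the JSON form; the type and graph_ids keys follow the tool-dictionary convention for JSON-defined tools:

```python
graph_tool = {
    "type": "graph",
    "graph_ids": ["your-graph-id-1", "your-graph-id-2"],
}

response = conversation.complete(tools=graph_tool)
```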
Using Function Calls with Conversations
Function tools are only available with the palmyra-x-004 model.
Framework allows you to register Python functions that can be called automatically during conversations. When the LLM determines a need for specific information or processing, it issues a request to use the local code (your function), and Framework handles that request automatically.
Defining Function Tools
Function tools are defined using either a Python class or a JSON configuration.
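A sketch using the FunctionTool class; get_temperature is a hypothetical helper, and the parameters schema follows the property list described below:

```python
from writer.ai import FunctionTool

def get_temperature(city: str) -> float:
    # Hypothetical lookup; replace with a real weather query
    return 21.5

tool = FunctionTool(
    name="get_temperature",
    callable=get_temperature,
    parameters={
        "city": {"type": "string", "description": "City to look up"},
    },
)

response = conversation.complete(tools=tool)
```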
Alternatively, you can define a function tool in JSON format, but the callable function must still be passed:
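A sketch of the JSON form; the type key marking the dictionary as a function tool is an assumption:

```python
tool = {
    "type": "function",
    "name": "get_temperature",
    "callable": get_temperature,  # the Python callable still has to be supplied
    "parameters": {
        "city": {"type": "string", "description": "City to look up"},
    },
}

response = conversation.complete(tools=tool)
```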
Function tools require the following properties:
name: str
: A string that defines how the function is referenced by the LLM. It should describe the function’s purpose.callable: Callable
: A Python function that will be called automatically when needed by the LLM.parameters: dict
: A dictionary that specifies what input the function expects. The keys should match the function’s parameter names, and each parameter should have atype
, and an optionaldescription
.
Supported types are:string
,number
,integer
,float
,boolean
,array
,object
andnull
.
Automated Function Calling
When a conversation involves a tool (either a graph or a function), Framework automatically handles the requests from the LLM to use the tools during interactions. If the tool needs multiple steps (for example, querying data and processing it), Framework will handle those steps recursively, calling functions as needed until the final result is returned.
By default, to prevent endless recursion, Framework will only handle 3 consecutive tool calls. You can raise this limit if it doesn't suit your use case: both complete() and stream_complete() accept a max_tool_depth parameter, which configures the maximum allowed recursion depth:
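For example, allowing deeper recursion for a workflow that chains several tool calls:

```python
# Allow up to 7 consecutive tool calls before Framework stops recursing
response = conversation.complete(tools=tool, max_tool_depth=7)
```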
Providing a Tool or a List of Tools
You can pass either a single tool or a list of tools to the complete() or stream_complete() methods. The tools can be a combination of FunctionTool instances, Graph objects, or JSON-defined tools.
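For example, combining the tools defined in the earlier examples in a single call:

```python
response = conversation.complete(
    tools=[
        tool,        # a FunctionTool
        graph,       # a Graph object
        graph_tool,  # a JSON-defined graph tool
    ]
)
```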
Text generation without a conversation state
The standalone complete and stream_complete functions are designed for one-off text generation without the need to manage a conversation state. They return the model's response as a string. Each function accepts a config dictionary allowing call-specific configurations.
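A sketch, assuming the standalone functions are importable from writer.ai and that stream_complete yields string chunks:

```python
from writer.ai import complete, stream_complete

# One-off completion: returns the response as a string
text = complete("Write a tagline for a coffee shop.", config={"temperature": 0.8})
print(text)

# Streamed one-off completion
for chunk in stream_complete("Tell me a two-sentence story."):
    print(chunk, end="")
```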