Set the `WRITER_API_KEY` environment variable with an API key obtained from AI Studio. Here is a detailed guide to setting up this key; you will need to select an API app under Developer tools.

Once you have your API key, set it as an environment variable on your system:
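A minimal sketch in Python, assuming you want the variable available to the current process before importing the AI module (most setups export it in the shell instead; the key value below is a placeholder):

```python
import os

# Placeholder value; use the API key generated in AI Studio.
os.environ["WRITER_API_KEY"] = "your-api-key"
```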
The `Conversation` class manages LLM communications within a chat framework, storing the conversation history and handling the interactions. A `Conversation` can be initialized with either a system prompt or a list of previous messages. It can also accept a default configuration dictionary that sets parameters for all interactions.
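A brief sketch of initialization, assuming the class is importable from `writer.ai`; the `temperature` and `model` keys shown in `config` are example configuration fields, not an exhaustive or guaranteed set:

```python
from writer.ai import Conversation

# Start from a system prompt, with defaults applied to every call.
conversation = Conversation(
    "You are a concise technical assistant.",
    config={"temperature": 0.7, "model": "palmyra-x5"},
)

# Alternatively, resume from a list of previous messages.
restored = Conversation([
    {"role": "user", "content": "What is a knowledge graph?"},
    {"role": "assistant", "content": "A graph of entities and relationships..."},
])
```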
Messages can be added to a `Conversation` instance using the `+` operator or the `add` method.
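For example (the keyword names accepted by `add` are an assumption based on the description above):

```python
# Append a message with the + operator...
conversation += {"role": "user", "content": "Give me three key takeaways."}

# ...or with the add method.
conversation.add(role="user", message="Give me three key takeaways.")
```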
The `complete` and `stream_complete` methods facilitate interaction with the LLM based on the accumulated messages and configuration. These methods execute calls to generate responses and return them in the form of a message object, but do not alter the conversation’s `messages` list, allowing you to validate or modify the output before deciding to add it to the history.
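A sketch of that flow, assuming the returned message exposes its text under a `content` key and can be appended back with `+=`:

```python
response = conversation.complete()

# Validate or post-process the output before committing it to the history.
if response["content"].strip():
    conversation += response
```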
Call-specific parameters can also be set if a `config` dictionary is provided to the method:
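For example, continuing with the conversation created above (the `temperature` and `max_tokens` keys are assumed configuration fields):

```python
# Override settings for this call only; the conversation's defaults are unchanged.
response = conversation.complete(config={"temperature": 0.2, "max_tokens": 256})
```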
A `Graph` is a collection of files meant to provide their contents to the LLM during conversations. The Framework allows you to create, retrieve, update, and delete graphs, as well as manage the files within them.
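A sketch of that lifecycle, under the assumption that module-level helpers named `create_graph`, `retrieve_graph`, and `delete_graph`, plus a `Graph.add_file` method, are available; treat these names as hypothetical and check the SDK reference for the exact API:

```python
import writer.ai

# Hypothetical helper names used for illustration.
graph = writer.ai.create_graph(name="product-docs", description="Product documentation")

# Attach an already-uploaded file to the graph by its ID.
graph.add_file("file-id")

# Retrieve the graph later, or delete it when it is no longer needed.
same_graph = writer.ai.retrieve_graph(graph.id)
writer.ai.delete_graph(graph.id)
```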
Tool calling is supported with the `palmyra-x4` and `palmyra-x5` models.

A `FunctionTool` requires the following:

• `name: str`: A string that defines how the function is referenced by the LLM. It should describe the function’s purpose.
• `callable: Callable`: A Python function that will be called automatically when needed by the LLM.
• `parameters: dict`: A dictionary that specifies what input the function expects. The keys should match the function’s parameter names, and each parameter should have a `type` and an optional `description`. Supported types are `string`, `number`, `integer`, `float`, `boolean`, `array`, `object`, and `null`.
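A minimal sketch, assuming a `FunctionTool` can be constructed from the fields described above (the example function and its parameters are hypothetical):

```python
import writer.ai

def convert_currency(amount: float, currency: str) -> str:
    # Hypothetical callable invoked automatically by the LLM when needed.
    return f"{amount} converted to {currency}"

tool = writer.ai.FunctionTool(
    name="convert_currency",
    callable=convert_currency,
    parameters={
        "amount": {"type": "float", "description": "Amount to convert"},
        "currency": {"type": "string", "description": "Target currency code"},
    },
)
```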
Both `complete()` and `stream_complete()` accept a `max_tool_depth` parameter, which configures the maximum allowed recursion depth:
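For example, reusing the `tool` defined above (the depth value is arbitrary):

```python
# Cap tool-calling recursion at three nested tool invocations.
response = conversation.complete(tools=tool, max_tool_depth=3)
```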
Tools are passed to the `complete()` or `stream_complete()` methods. The tools can be a combination of `FunctionTool`, `Graph`, or JSON-defined tools.
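A sketch combining the earlier `FunctionTool` with a `Graph` (the graph ID and the `retrieve_graph` helper are hypothetical):

```python
docs_graph = writer.ai.retrieve_graph("graph-id")

# Mix a function tool and a graph in a single call.
response = conversation.complete(tools=[tool, docs_graph])
```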
The `complete` and `stream_complete` functions are designed for one-off text generation without the need to manage a conversation state. They return the model’s response as a string. Each function accepts a `config` dictionary allowing call-specific configurations.
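A sketch, assuming the functions are exposed as `writer.ai.complete` and `writer.ai.stream_complete` and that the streaming variant yields text chunks:

```python
import writer.ai

# One-off completion; the response is returned as a plain string.
summary = writer.ai.complete(
    "Summarize the benefits of knowledge graphs.",
    config={"temperature": 0.3},
)

# Streaming variant, consumed chunk by chunk.
for chunk in writer.ai.stream_complete("Write a short product tagline."):
    print(chunk, end="")
```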
The `ask` and `stream_ask` methods allow you to query one or more graphs to generate responses from the information stored within them. There are two ways to use them:

• Method-level (`Graph.ask`, `Graph.stream_ask`): Used when working with a single graph instance. These methods are tied directly to the `Graph` object, encapsulating operations within that instance.
• Module-level (`writer.ai.ask`, `writer.ai.stream_ask`): Designed for querying multiple graphs simultaneously. These methods operate on a broader scale, allowing mixed inputs of graph objects and IDs.

Both forms accept the following parameters:

• `question: str`: The main query for the LLM.
• Optional `subqueries: bool` (default: `False`): Allows the LLM to generate additional questions during response preparation for more detailed answers. Enabling this might increase response time.
The module-level functions additionally require:

• `graphs_or_graph_ids: list[Graph | str]`: A list of graphs to use for the question. You can pass `Graph` objects directly into the list, use graph IDs in string form, or a mix of both.
The method-level functions, `Graph.ask` and `Graph.stream_ask`, are designed for interacting with a single graph. By calling these methods on a specific `Graph` instance, you can easily pose questions and retrieve answers tailored to that graph’s content.
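A sketch of the method-level form (the graph ID and the `retrieve_graph` helper are hypothetical; the answer is assumed to come back as text):

```python
graph = writer.ai.retrieve_graph("graph-id")

# Ask a single graph directly; subqueries is optional.
answer = graph.ask("What does the onboarding guide say about SSO?", subqueries=True)

# Streaming variant yields the answer incrementally.
for chunk in graph.stream_ask("Summarize the refund policy."):
    print(chunk, end="")
```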
The module-level functions, `writer.ai.ask` and `writer.ai.stream_ask`, are designed for querying multiple graphs simultaneously. They are useful when you need to aggregate or compare data across multiple graphs.
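A sketch of the module-level form, mixing a `Graph` object and a graph ID string (the IDs are placeholders):

```python
import writer.ai

answer = writer.ai.ask(
    question="Compare the security requirements across both product lines.",
    graphs_or_graph_ids=[graph, "another-graph-id"],
)

for chunk in writer.ai.stream_ask(
    question="Which documents mention data retention?",
    graphs_or_graph_ids=["graph-id-1", "graph-id-2"],
):
    print(chunk, end="")
```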
This section gives a brief overview of the `Tools` class in the Writer Framework; for more thorough documentation, check out [this guide](https://dev.writer.com/tools). The `writer.ai.tools` instance provides access to Writer SDK tools resources, such as text splitting, medical content comprehension, and PDF parsing. Below is a guide on how to use each method.
The `split` method divides text into chunks based on a selected strategy. It accepts:

• `content` (str): The text to be split.
• `strategy` (str): The splitting strategy (`llm_split`, `fast_split`, or `hybrid_split`).
The `comprehend_medical` method processes medical text and extracts relevant entities based on a specified response type. It accepts:

• `content` (str): The medical text to process.
• `response_type` (str): The type of medical response (`Entities`, `RxNorm`, `ICD-10-CM`, or `SNOMED CT`).
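For example (a sketch; the structure of the returned entities is assumed):

```python
entities = writer.ai.tools.comprehend_medical(
    content="Patient reports taking 20 mg of lisinopril daily for hypertension.",
    response_type="Entities",
)
```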
The `parse_pdf` method extracts text content from a PDF file. The file can be referenced by its ID, or provided as a `File` object. It accepts:

• `file_id_or_file` (str or File): The file to parse (by ID or as an object).
• `format` (str): The format of the extracted content (`text` or `markdown`).
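For example (a sketch; the file ID is a placeholder and the result is assumed to be the extracted text):

```python
parsed = writer.ai.tools.parse_pdf(
    file_id_or_file="file-id",  # a File object can be passed instead
    format="markdown",
)
print(parsed)
```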