POST /v1/chat

cURL
curl --location --request POST https://api.writer.com/v1/chat \
 --header "Authorization: Bearer <token>" \
 --header "Content-Type: application/json" \
--data-raw '{"model":"palmyra-x5","messages":[{"content":"Write a memo summarizing this earnings report.","role":"user"}]}'
{
  "id": "57e4f58f-f7b1-41d8-be17-a6279c073aad",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "content": "The earnings report shows...",
        "role": "assistant",
        "refusal": null,
        "tool_calls": [],
        "graph_data": {
          "sources": [],
          "status": "finished",
          "subqueries": []
        },
        "llm_data": {
          "prompt": "Write a memo summarizing this earnings report.",
          "model": "palmyra-x5"
        },
        "translation_data": null,
        "web_search_data": null
      }
    }
  ],
  "created": 1715361795,
  "model": "palmyra-x5",
  "usage": {
    "prompt_tokens": 40,
    "total_tokens": 340,
    "completion_tokens": 300,
    "prompt_token_details": {
      "cached_tokens": 0
    },
    "completion_token_details": {
      "reasoning_tokens": 0
    }
  },
  "system_fingerprint": "v1",
  "service_tier": "standard"
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your Writer API key.

Body

application/json
model
string
required

The ID of the model to use for creating the chat completion. Supports palmyra-x5, palmyra-x4, palmyra-fin, palmyra-med, palmyra-creative, and palmyra-x-003-instruct.

messages
chat_message · object[]
required

An array of message objects that form the conversation history or context for the model to respond to. The array must contain at least one message.

Minimum length: 1
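For example, a multi-turn request passes the prior exchange back in the array so the model has context; this sketch assumes the common system, user, and assistant roles, and the message contents are illustrative:

```json
{
  "model": "palmyra-x5",
  "messages": [
    { "role": "system", "content": "You are a concise financial analyst." },
    { "role": "user", "content": "Summarize the Q3 earnings report." },
    { "role": "assistant", "content": "Revenue grew 12% year over year..." },
    { "role": "user", "content": "Now draft a memo based on that summary." }
  ]
}
```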
max_tokens
integer

Defines the maximum number of tokens that the model can generate in the response. Adjust this to allow for longer or shorter responses as needed. The maximum value varies by model. See the models overview for more information about the maximum number of tokens for each model.

temperature
number
default:1

Controls the randomness or creativity of the model's responses. A higher temperature results in more varied and less predictable text, while a lower temperature produces more deterministic and conservative outputs.

top_p
number

Sets the threshold for "nucleus sampling," a technique that focuses the model's token generation on the most likely subset of tokens. The model samples only from the smallest set of tokens whose cumulative probability meets this threshold, controlling the trade-off between creativity and coherence.

n
integer

Specifies the number of completions (responses) to generate from the model in a single request. This parameter allows for generating multiple responses, offering a variety of potential replies from which to choose.

stop
string | string[]

A token or sequence of tokens that, when generated, will cause the model to stop producing further content. This can be a single token or an array of tokens, acting as a signal to end the output.
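For example, the request body can pass either a single sequence or an array of sequences (the sequences below are illustrative):

```json
{ "stop": ["###", "END OF MEMO"] }
```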

logprobs
boolean
default:false

Specifies whether to return log probabilities of the output tokens.

stream
boolean
default:false

Indicates whether the response should be streamed incrementally as it is generated or only returned once fully complete. Streaming can be useful for providing real-time feedback in interactive applications.
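For example, the request body below enables streaming for the same prompt as the cURL example above:

```json
{
  "model": "palmyra-x5",
  "messages": [
    { "role": "user", "content": "Write a memo summarizing this earnings report." }
  ],
  "stream": true
}
```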

tools
(Function tool · object | Graph tool · object | LLM tool · object | Translation tool · object | Vision tool · object | Web search tool · object)[]

An array containing tool definitions for tools that the model can use to generate responses. The tool definitions use JSON schema. You can define your own functions or use one of the built-in graph, llm, translation, vision, or web search tools. Note that you can only use one built-in tool type in the array. You can pass multiple custom tools of type function in the same request.
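A custom function tool definition might look like the following sketch; the get_current_weather function and its parameter schema are illustrative, echoing the function named in the tool_choice example below:

```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": { "type": "string", "description": "Name of the city" }
          },
          "required": ["city"]
        }
      }
    }
  ]
}
```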

tool_choice
object

Configures whether and how the model calls tools:

  • auto: allows the model to automatically choose the tool to use, or not call a tool
  • none: disables tool calling; the model will instead generate a message
  • required: requires the model to call one or more tools

You can also use a JSON object to force the model to call a specific tool. For example, {"type": "function", "function": {"name": "get_current_weather"}} requires the model to call the get_current_weather function, regardless of the prompt.

stream_options
object

Additional options for streaming.

response_format
object

The response format to use for the chat completion, available with palmyra-x4 and palmyra-x5.

text is the default response format. JSON Schema is supported for structured responses. If you specify json_schema, you must also provide a json_schema object.
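A structured-response request might look like the following sketch; the exact wrapper keys follow the common json_schema convention, and the schema name and fields here are illustrative assumptions:

```json
{
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "memo",
      "schema": {
        "type": "object",
        "properties": {
          "subject": { "type": "string" },
          "body": { "type": "string" }
        },
        "required": ["subject", "body"]
      }
    }
  }
}
```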

Response

Successful response

id
string<uuid>
required

A globally unique identifier (UUID) for the response generated by the API. This ID can be used to reference the specific operation or transaction within the system for tracking or debugging purposes.

object
enum<string>
required

The type of object returned, which is always chat.completion for chat responses.

Available options:
chat.completion
choices
object[]
required

An array of completion choices produced by the model for the given input. When the n request parameter is greater than 1, the array contains one entry per generated completion.

Minimum length: 1
created
integer
required

The Unix timestamp (in seconds) when the response was created. This timestamp can be used to verify the timing of the response relative to other events or operations.

model
string
required

Identifies the specific model used to generate the response.

usage
object

Usage information for the chat completion response. Please note that at this time Knowledge Graph tool usage is not included in this object.

system_fingerprint
string

A string representing the backend configuration that the model runs with.

service_tier
string

The service tier used for processing the request.