This guide shows you how to migrate from the deprecated medical comprehend API endpoint to chat completions that delegate medical analysis tasks to the palmyra-med model. After completing these steps, you can analyze medical text with palmyra-med from within a chat completion workflow.

Compare the APIs

The medical comprehend API and the LLM tool with the palmyra-med model both provide medical text analysis capabilities, but the LLM tool integrates medical analysis into conversational workflows and supports more flexible configuration and output options. The comparison below contrasts the two approaches.

Endpoint
  • Medical comprehend API: /v1/tools/comprehend/medical
  • LLM tool with palmyra-med model: /v1/chat, with the LLM tool specified in the tools array

Request structure
  • Medical comprehend API: pass the text and analysis parameters, such as content and response_type, directly in the request body.
  • LLM tool with palmyra-med model: provide the medical text as part of the conversation messages and specify the analysis tool and model in the tools array. Additional parameters (such as custom instructions or an output schema) are included in the tool configuration, not as direct request fields.

Response format
  • Medical comprehend API: structured JSON with a fixed schema of predefined medical entities and types.
  • LLM tool with palmyra-med model: the answer is returned in choices[0].message.content as either natural language or structured JSON (including custom schemas or user-defined formats).

Parameter control
  • Medical comprehend API: explicit API parameters for each entity and type supported by the API.
  • LLM tool with palmyra-med model: most analysis details are defined by the message prompt and tool configuration; you can specify formats, a schema, or conversational output as needed.

Choose your migration approach

Choose your migration approach based on the type of output you need:
  • Select the LLM tool with structured output if you require clearly defined, machine-readable results (for example, extracting predefined medical entities like RxNorm, ICD-10-CM, or SNOMED CT). This option closely matches the original medical comprehend API response format and is best for downstream systems that depend on standardized data.
  • Choose conversational analysis with natural language responses if your use case benefits from more flexible or descriptive outputs, or if you want to define your own entity or analysis formats that are not supported by the original API. This approach is ideal for interactive workflows or when you want the model to summarize, explain, or provide guidance in plain language.

Extract entities and generate structured output

If you need structured entity extraction similar to the medical comprehend API, use the structured output feature with the LLM tool to get JSON responses:
curl --location 'https://api.writer.com/v1/chat' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $WRITER_API_KEY" \
  --data '{
    "model": "palmyra-x5",
    "messages": [
      {
        "role": "user",
        "content": "Extract medical entities from this text: the symptoms are soreness, a temperature and cough"
      }
    ],
    "tool_choice": "auto",
    "tools": [
      {
        "type": "llm",
        "function": {
          "description": "A function that invokes the Palmyra Med model, specialized in analyzing medical text. Any user request for medical analysis should use this tool.",
          "model": "palmyra-med"
        }
      }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "medical_entities",
        "schema": {
          "type": "object",
          "properties": {
            "entities": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "text": {"type": "string"},
                  "category": {"type": "string"},
                  "score": {"type": "number"}
                },
                "required": ["text", "category", "score"],
                "additionalProperties": false
              }
            }
          },
          "required": ["entities"],
          "additionalProperties": false
        },
        "strict": true
      }
    }
  }'
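With response_format set as above, the model's answer arrives as a JSON string in choices[0].message.content. A minimal parsing sketch (the payload below is an illustrative sample, not a real API response):

```python
import json

# Illustrative content string shaped like the medical_entities schema above;
# in practice this comes from response.choices[0].message.content.
content = '''{
  "entities": [
    {"text": "soreness", "category": "MEDICAL_CONDITION", "score": 0.95},
    {"text": "cough", "category": "MEDICAL_CONDITION", "score": 0.98}
  ]
}'''

result = json.loads(content)
for entity in result["entities"]:
    print(f'{entity["category"]}: {entity["text"]} ({entity["score"]:.2f})')
```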

Analyze medical text with natural language responses

If you need conversational medical analysis and interpretation, use the LLM tool without structured output:
curl --location 'https://api.writer.com/v1/chat' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $WRITER_API_KEY" \
  --data '{
    "model": "palmyra-x5",
    "messages": [
      {
        "role": "user",
        "content": "Analyze the following medical text and identify key symptoms and conditions: the symptoms are soreness, a temperature and cough"
      }
    ],
    "tool_choice": "auto",
    "tools": [
      {
        "type": "llm",
        "function": {
          "description": "A function that invokes the Palmyra Med model, specialized in analyzing medical text and providing medical insights. Any user request for medical analysis should use this tool.",
          "model": "palmyra-med"
        }
      }
    ]
  }'

Migrate your code

The request below uses the medical comprehend API; the equivalent request using the LLM tool with the palmyra-med model is the structured output example earlier in this guide.

Before: Medical comprehend API

The medical comprehend API accepts medical text and a response type:
curl --location 'https://api.writer.com/v1/tools/comprehend/medical' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $WRITER_API_KEY" \
  --data '{
    "content": "the symptoms are soreness, a temperature and cough", 
    "response_type": "Entities"
  }'
Response:
{
  "entities": [
    {
      "category": "MEDICAL_CONDITION",
      "text": "soreness",
      "score": 0.95,
      "traits": []
    },
    {
      "category": "MEDICAL_CONDITION",
      "text": "temperature",
      "score": 0.92,
      "traits": []
    },
    {
      "category": "MEDICAL_CONDITION",
      "text": "cough",
      "score": 0.98,
      "traits": []
    }
  ]
}
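If downstream systems expect the original response shape, including the traits field, a small adapter can map entities from the new JSON schema output back to the old format. A sketch, assuming the field names shown in the examples above:

```python
def to_comprehend_shape(entities: list[dict]) -> dict:
    """Map entities from the new JSON-schema output back to the medical
    comprehend response format, adding the empty `traits` list the old
    API included. Hypothetical adapter for illustration only."""
    return {
        "entities": [
            {
                "category": e["category"],
                "text": e["text"],
                "score": e["score"],
                "traits": [],
            }
            for e in entities
        ]
    }
```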

Access LLM metadata

The LLM tool response includes metadata in the llm_data field:
from writerai import Writer

# Initialize the Writer client. If you don't pass the `api_key` parameter,
# the client looks for the `WRITER_API_KEY` environment variable.
client = Writer()

messages = [{"role": "user", "content": "Extract medical entities from this text: the symptoms are soreness, a temperature and cough"}]

tools = [{
  "type": "llm",
  "function": {
    "description": "A function that invokes the Palmyra Med model, specialized in analyzing medical text. Any user request for medical analysis should use this tool.",
    "model": "palmyra-med"
  }
}]

response = client.chat.chat(
  model="palmyra-x5", 
  messages=messages, 
  tools=tools,
  tool_choice="auto"
)

# Get the analysis result
analysis = response.choices[0].message.content

# Get LLM metadata
llm_metadata = response.choices[0].message.llm_data
prompt = llm_metadata.prompt
model_used = llm_metadata.model

print(f"Analysis from {model_used}: {analysis}")
Learn more about the LLM tool and related features: