
This guide shows you how to configure Microsoft Azure as an external model provider in AI Studio. After setting up this provider, you can use Azure OpenAI models or Foundry Models when building agents.

Prerequisites

Before adding Microsoft Azure models to AI Studio, you need:
  • An Azure subscription with access to Azure OpenAI or Foundry Models
  • At least one deployed model on your Azure resource (see setup options below)
  • The resource endpoint and API key for the deployment you plan to use
AI Studio supports text generation and embedding models from Microsoft Azure. Video, audio, and image generation models are not supported.
AI Studio currently supports the following Azure models, with more coming soon:
  • gpt-4.1
  • gpt-4o
  • gpt-5
  • mistral-large-3
To request a model, reach out to your account manager or support.

Choose a setup path

Choose one of the following setup paths depending on how you deploy models in Azure:
  • Azure OpenAI: Create an Azure OpenAI resource and deploy models through the Azure portal
  • Foundry Models: Deploy models through the Foundry portal

Option 1: Azure OpenAI

Create a resource

Create an Azure OpenAI resource in the Azure portal before deploying models.
  1. Sign in to the Azure portal
  2. Select Create a resource and search for Azure OpenAI
  3. Select Create
  4. On the Basics tab, provide the following:
    • Subscription: The Azure subscription to use for your Azure OpenAI resource
    • Resource group: The resource group to contain your Azure OpenAI resource. Create a new group or use an existing one
    • Region: The location for your resource. Different regions may introduce latency
    • Name: A descriptive name for the resource, such as writer-aistudio-openai
  5. Select Next
  6. On the Network tab, select All networks, including the internet, can access this resource (required for AI Studio to reach the endpoint)
  7. Select Next to open the Tags tab. Add tags if your organization requires them
  8. Select Next to reach Review + submit
  9. Confirm your settings and select Create
For more details, see Create and deploy an Azure OpenAI resource on Microsoft Learn.

Deploy a model

Before you can use a model in AI Studio, you need to deploy it on your Azure OpenAI resource.
  1. Sign in to Microsoft Foundry
  2. Navigate to the Azure OpenAI resource you created earlier
  3. Select Deployments from the left pane
  4. Select + Deploy model > Deploy base model
  5. Select the model you want to deploy and select Confirm
  6. Set the Deployment name. This is the name used in API calls and the value AI Studio maps to your model
  7. Select Deploy
  8. Wait until the Provisioning state changes to Succeeded
For more details on deploying models on Azure OpenAI resources, see Deploy a model.
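The deployment name you set above becomes part of the request path when a client calls the model. A minimal sketch of how the Azure OpenAI REST endpoint is assembled; the resource name, deployment name, and API version below are placeholder values:

```python
def chat_completions_url(api_base: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat completions URL for a deployment."""
    return (
        f"{api_base.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

# Placeholder values for illustration:
url = chat_completions_url(
    "https://writer-aistudio-openai.openai.azure.com/",
    "gpt-4o-deployment",  # the Deployment name you set above
    "2024-02-01",         # an example API version
)
print(url)
```

Because the deployment name appears in the URL, AI Studio can only reach the model if the name you enter matches the deployment exactly.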

Option 2: Foundry Models

Create a Foundry project

Foundry Models are deployed within a Foundry project. Create one if you don’t already have a project.
  1. Sign in to Microsoft Foundry. Ensure the New Foundry toggle is on
  2. Select + New project from the top navigation
  3. Enter a Project name
  4. Expand Advanced options to configure:
    • Foundry resource: The Foundry resource that manages this project
    • Region: The Azure region for the project (for example, East US 2)
    • Subscription: The Azure subscription to bill against
    • Resource group: The resource group to contain the project resources. Select an existing group or create a new one
  5. Select Create
For more details on creating Foundry projects, see Create a Foundry project.

Deploy a model

  1. From the Foundry portal homepage, select Discover in the upper navigation, then Models in the left pane
  2. Select a model and review its details
  3. Select Deploy > Custom settings to configure the deployment (or Default settings for quick setup)
  4. For partner and community models, read the terms of use and select Agree and Proceed to subscribe through Azure Marketplace
  5. Set the Deployment name. This is the name used in API calls and the value AI Studio maps to your model
  6. Select Deploy
  7. Wait until the deployment status shows Succeeded
For more details, see Deploy Foundry Models on Microsoft Learn.

Retrieve the target URI and API key

After deploying a model through either setup path, navigate to the deployment to copy your credentials.
  1. In the Foundry portal, navigate to the model you deployed
  2. Copy the Target URI and API Key from the deployment details to use in AI Studio
Store your API key securely. If a key is compromised, rotate it from the deployment detail page or the Azure portal.
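The copied Target URI often includes the full request path and an api-version query string, while AI Studio asks for the API Base and API Version as separate fields. A small sketch, assuming a Target URI of the general shape shown in the placeholder below, that splits one into the other:

```python
from urllib.parse import urlparse, parse_qs

def split_target_uri(target_uri):
    """Split a deployment Target URI into (api_base, api_version)."""
    parts = urlparse(target_uri)
    api_base = f"{parts.scheme}://{parts.netloc}/"
    api_version = parse_qs(parts.query).get("api-version", [None])[0]
    return api_base, api_version

# Placeholder Target URI for illustration:
base, version = split_target_uri(
    "https://your-resource.openai.azure.com/openai/deployments/"
    "gpt-4o-deployment/chat/completions?api-version=2024-02-01"
)
print(base)     # value for the API Base field
print(version)  # value for the API Version field
```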

Add Microsoft Azure models in AI Studio

After deploying a model and copying your endpoint and key from either setup path:
  1. Navigate to Models & Guardrails > Models in AI Studio
  2. Select + Add model
  3. Select Microsoft Azure as the provider
  4. Select your model from the Model dropdown
  5. Enter your credentials:
    • API Base: The Target URI from your deployment (for example, https://your-resource.openai.azure.com/)
    • API Version: The API version to use (for example, 2023-07-01-preview)
    • Azure API Key: The API key from your deployment
  6. Configure team access:
    • All teams: Anyone with builder access can use the model
    • Specific teams: Restrict to selected teams
  7. Select Save
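Before saving, it can help to sanity-check the three credential values against the most common failure modes covered in the troubleshooting section: a non-https API Base, stray whitespace in the key, or an empty field. A minimal, hypothetical preflight check:

```python
from urllib.parse import urlparse

def preflight(api_base, api_version, api_key):
    """Return a list of problems with the credential fields, if any."""
    problems = []
    if urlparse(api_base).scheme != "https":
        problems.append("API Base must start with https://")
    if not api_key:
        problems.append("API key is empty")
    elif api_key != api_key.strip():
        problems.append("API key has leading or trailing whitespace")
    if not api_version:
        problems.append("API version is empty")
    return problems

# Placeholder values for illustration:
print(preflight(
    "https://your-resource.openai.azure.com/",
    "2023-07-01-preview",
    "abc123",
))  # []
```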

Monitor costs

Azure bills usage directly to your Azure subscription based on tokens processed and deployment configuration. AI Studio also tracks usage and costs for external models, providing visibility into spending across all your models in one place. For information about monitoring model health and automatic recovery, see Monitor model health.

Troubleshoot Microsoft Azure configuration

Invalid credentials error

If you see an “Invalid credentials” or “Authentication failed” error:
  • Verify the API key is copied correctly without extra spaces
  • Check that the key is still active in the Azure portal
  • Ensure the key belongs to the same resource as the API Base URL

Model not available error

If a model doesn’t appear or returns an error:
  • Confirm the model deployment shows Succeeded in the Azure portal or Foundry portal
  • Verify the deployment name matches the model selected in AI Studio
  • Check that the API version is supported for your deployment

Connection failed error

If AI Studio cannot connect to your Azure resource:
  • Confirm the API Base URL uses https:// and matches the endpoint in your portal
  • Verify the resource network settings allow public access
  • Check Azure service health for regional incidents

Unhealthy model status

If a model shows as unhealthy in AI Studio:
  • AI Studio automatically retries unhealthy models after a cooldown period
  • For transient issues like temporary Azure outages, no action is needed
  • For persistent issues, check the troubleshooting items above

Next steps