Hello Rachid Arzouni,
You’re absolutely right that Azure OpenAI uses a different endpoint structure compared to OpenAI’s native API, and this is usually where the compatibility issue arises when trying to integrate with platforms like ElevenLabs.
The main difference is that Azure OpenAI requires a custom base URL that follows this format:
https://<your-resource-name>.openai.azure.com/openai/deployments/<deployment-id>/chat/completions?api-version=<api-version>
Whereas OpenAI’s native API typically uses:
https://api.openai.com/v1/chat/completions
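As a concrete illustration of that difference (the resource name, deployment ID, and API version below are placeholders, not real values), the two URLs can be built like this:

```python
# Placeholder values -- replace with your own Azure resource details.
resource_name = "my-resource"
deployment_id = "my-gpt4o"
api_version = "2024-06-01"

# Azure OpenAI: resource- and deployment-scoped, with an api-version query.
azure_url = (
    f"https://{resource_name}.openai.azure.com"
    f"/openai/deployments/{deployment_id}/chat/completions"
    f"?api-version={api_version}"
)

# Native OpenAI: one fixed host and path for everyone.
native_url = "https://api.openai.com/v1/chat/completions"

print(azure_url)
print(native_url)
```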
So, if the third‑party tool (in this case, ElevenLabs) is hard‑coded to expect OpenAI’s format (`/v1/chat/completions`), it unfortunately won’t be able to talk directly to Azure’s endpoints unless the tool lets you configure both the endpoint URL and the headers.
To make Azure OpenAI compatible, you typically need:
1. API Key → Passed in the header as `api-key` (instead of an `Authorization: Bearer` token like the native OpenAI API).
2. Endpoint URL → Your unique Azure endpoint as described above.
3. Deployment Name → Azure doesn’t expose models directly by name (like `gpt-4o`); instead, you deploy a model and call it by the deployment name you chose.
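To make those three differences concrete, here is a sketch of the same chat request shaped both ways (the keys, resource name, and deployment name are placeholders, and nothing is actually sent):

```python
# Placeholder credentials and names -- substitute your own.
AZURE_KEY = "your-azure-key"
OPENAI_KEY = "your-openai-key"

body = {"messages": [{"role": "user", "content": "Hello"}]}

# Native OpenAI: Bearer token, and the model is chosen per request.
openai_request = {
    "url": "https://api.openai.com/v1/chat/completions",
    "headers": {"Authorization": f"Bearer {OPENAI_KEY}"},
    "json": {**body, "model": "gpt-4o"},
}

# Azure OpenAI: api-key header, and the model is fixed by the
# deployment named in the URL, so no "model" field in the body.
azure_request = {
    "url": (
        "https://my-resource.openai.azure.com/openai/deployments/"
        "my-gpt4o/chat/completions?api-version=2024-06-01"
    ),
    "headers": {"api-key": AZURE_KEY},
    "json": body,
}
```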
Since ElevenLabs mentioned they only work if the API matches OpenAI’s format exactly, this means support for Azure isn’t guaranteed out-of-the-box. A potential workaround is creating a small proxy or adapter service that translates between ElevenLabs’ expected OpenAI API format and Azure OpenAI’s endpoint structure. This way, ElevenLabs calls your proxy as if it were OpenAI, and your proxy forwards/reshapes the request to Azure.
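As a rough sketch of that adapter idea (the resource name, deployment, and API version are assumptions, and a real proxy would also need to forward the response and handle streaming), the core translation is just rewriting the URL and the auth header:

```python
def to_azure_request(path, headers, body,
                     resource="my-resource",
                     deployment="my-gpt4o",
                     api_version="2024-06-01"):
    """Translate an OpenAI-style request into Azure OpenAI form.

    The incoming request targets /v1/chat/completions with an
    "Authorization: Bearer" header; Azure expects a deployment-scoped
    URL, an "api-key" header, and no "model" field in the body.
    """
    if path != "/v1/chat/completions":
        raise ValueError(f"unsupported path: {path}")

    token = headers["Authorization"].removeprefix("Bearer ").strip()
    azure_body = {k: v for k, v in body.items() if k != "model"}
    azure_url = (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )
    return azure_url, {"api-key": token}, azure_body
```

A tiny HTTP server (Flask, FastAPI, or similar) wrapped around this function would be enough for ElevenLabs to treat your proxy as if it were `api.openai.com`.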
So in summary:
• Direct use of Azure API keys with ElevenLabs likely won’t work unless they support custom endpoint + header format.
• The practical options right now are to check whether ElevenLabs exposes advanced endpoint configuration, or to set up a proxy to bridge the gap.
Best regards,
Jerald Felix