Not able to access the custom neural voice trained model through API

Vivek P N 0 Reputation points
2025-08-27T10:20:27.3033333+00:00

Hi,

I'm pretty new to the world of Azure, and I have been trying to understand the capabilities of Custom neural voice. After some trial and error, I was able to train a model and even deploy it, but I'm fully stuck trying to generate audio from text using the REST API.

I'm on the free tier, still within the first 30 days, so I have some credits left to test this flow. It would be helpful if I could get some help with the following error that I'm seeing after calling the REST API.

POST https://eastus.tts.speech.microsoft.com/cognitiveservices/v1

Request Body:

VivTrained is the name of the deployed model, and eastus is the Service region.

<speak version='1.0' xml:lang='en-US'>
  <voice name='eastus-VivTrained'>
    Hello, this is your custom neural voice speaking from Azure.
  </voice>
</speak>

Headers:

Ocp-Apim-Subscription-Key: <Resource key>
Content-Type: application/ssml+xml
X-Microsoft-OutputFormat: audio-16khz-32kbitrate-mono-mp3
User-Agent: curl

Response:

400 Unsupported voice eastus-VivTrained.
Azure AI Speech
An Azure service that integrates speech processing into apps and services.

1 answer

  1. Amira Bedhiafi 36,716 Reputation points Volunteer Moderator
    2025-08-29T17:18:38.33+00:00

    Hello Vivek!

    Thank you for posting on Microsoft Learn.

    You called the regional TTS endpoint and passed a voice name that endpoint doesn't recognize: eastus-VivTrained.

    Custom neural voice (CNV) models are accessed either via the custom endpoint URL created when you deploy the model, or by adding a deploymentId query parameter to the voice endpoint. You must also use the exact voice name or alias shown in your deployment (no region prefix).

    https://learn.microsoft.com/en-us/azure/ai-services/speech-service/rest-text-to-speech

    In Speech Studio, go to Custom voice > Deployments, open your deployment, and copy the Endpoint URL. It usually looks like:

    https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<YOUR-DEPLOYMENT-ID>
    

    Then call it like this:

    curl -X POST \
      "https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<YOUR-DEPLOYMENT-ID>" \
      -H "Ocp-Apim-Subscription-Key: <YOUR_SPEECH_RESOURCE_KEY>" \
      -H "Content-Type: application/ssml+xml" \
      -H "X-Microsoft-OutputFormat: audio-16khz-32kbitrate-mono-mp3" \
      -H "User-Agent: curl" \
      --data-binary @- <<'SSML'
    <speak version="1.0" xml:lang="en-US">
      <!-- Use the EXACT voice name/alias shown in the deployment details and do NOT prefix with region -->
      <voice name="VivTrained">
        Hello, this is your custom neural voice speaking from Azure.
      </voice>
    </speak>
    SSML
    

    Or, if you prefer to assemble the URL yourself, switch to the voice host (eastus.voice.speech.microsoft.com) and append the deployment ID:

    https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<YOUR-DEPLOYMENT-ID>
    

    Do not use eastus.tts.speech.microsoft.com for custom voices; that host only serves the prebuilt neural voices. https://learn.microsoft.com/en-us/azure/ai-services/speech-service/rest-text-to-speech
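
    If you'd rather call the endpoint from code than from curl, here is a minimal Python sketch of the same request using only the standard library. The endpoint URL, deployment ID placeholder, key (read from a SPEECH_KEY environment variable), and voice name "VivTrained" are all taken from the steps above; adjust them to match your own deployment.

```python
import os
import urllib.request


def build_ssml(voice_name: str, text: str) -> str:
    """Build the SSML body; the voice name must match the deployment exactly,
    with no region prefix."""
    return (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice name='{voice_name}'>{text}</voice>"
        "</speak>"
    )


def synthesize(endpoint: str, key: str, ssml: str) -> bytes:
    """POST the SSML to the custom voice endpoint and return the MP3 bytes."""
    req = urllib.request.Request(
        endpoint,
        data=ssml.encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "audio-16khz-32kbitrate-mono-mp3",
            "User-Agent": "python-urllib",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    # Placeholder: replace <YOUR-DEPLOYMENT-ID> with the ID from your deployment.
    endpoint = (
        "https://eastus.voice.speech.microsoft.com/cognitiveservices/v1"
        "?deploymentId=<YOUR-DEPLOYMENT-ID>"
    )
    ssml = build_ssml("VivTrained", "Hello, this is your custom neural voice speaking from Azure.")
    key = os.environ.get("SPEECH_KEY")
    if key:  # only hit the service when a key is actually configured
        audio = synthesize(endpoint, key, ssml)
        with open("output.mp3", "wb") as f:
            f.write(audio)
```

    This is only a sketch under the assumptions above, not official sample code; the Speech SDK is another option if you want retries and streaming handled for you.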

