Hello Abhay Tiwari,
Thank you for your patience and the detailed breakdown for validating the behavior against a direct REST call.
For Azure OpenAI image generation, the officially supported guidance is that requests are routed entirely via the deployment name in the URL path, in the form of
/openai/deployments/<deployment-name>/images/generations
The documented guidance below for GPT‑Image‑1 and GPT‑Image‑1.5 explicitly describes this deployment‑scoped REST pattern, and all working examples place the deployment name in the URL while the request body contains only image parameters such as `prompt`, `size`, `n`, and `quality`.
Per the same guidance, at the REST level Azure OpenAI does not require the deployment identifier to be repeated as a `model` field in the JSON body for image generation; omitting the `model` field entirely is a valid and supported request shape when the deployment is specified in the path. This matches the behavior observed with the known‑good curl request.
How to Use Image Generation Models from OpenAI - Microsoft Foundry | Microsoft Learn
With respect to Go SDKs, there is currently no official Microsoft‑published Go example that mirrors this exact REST shape using openai-go/v3 image helpers. The ImageService.Generate surface in openai-go is designed around OpenAI‑style semantics where:
- The request path is always images/generations
- The model field serves dual purposes for routing and validation.
This creates a mismatch for Azure image generation, where:
- Routing must be deployment‑based via the URL
- The `model` field is validated strictly against canonical image model names (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`) rather than deployment identifiers
As a result, there is currently no supported configuration in openai-go/v3 that simultaneously forces deployment‑scoped routing for images and omits or neutralizes the `model` field in the request body, as the documented Azure REST guidance requires.
The only fully supported and documented patterns available today for Azure GPT image deployments are:
- direct REST calls, as documented
- SDKs whose API surfaces expose deployment routing explicitly and independently of the request body
Please review these in the same documented reference below:
How to Use Image Generation Models from OpenAI - Microsoft Foundry | Microsoft Learn
The deployment itself is healthy and correctly configured. The divergence is at the SDK request‑shape level rather than a service or configuration issue.
Thank you