
OpenAI Responses API Shell tool with Azure Deployments

GS 400 Reputation points
2026-04-01T13:59:57.6733333+00:00

Hello,

I was trying to use the OpenAI shell tool with gpt-5.4 deployed in Azure via the Responses API, but I get this:

Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 if you keep seeing this error. (Please include the request ID [Request ID] in your email.)', 'type': 'server_error', 'param': None, 'code': 'server_error'}}

How can I use that tool?

https://developers.openai.com/api/docs/guides/tools-shell

Foundry Tools

Formerly known as Azure AI Services or Azure Cognitive Services, a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform.


2 answers

  1. SRILAKSHMI C 16,975 Reputation points Microsoft External Staff Moderator
    2026-04-06T14:43:24.0766667+00:00

    Hello GS,

    Thanks for sharing the details and the error message,

    The 500 (server_error) you’re encountering when using the shell tool with the Responses API on Azure OpenAI is not caused by an issue in your request.

    Instead, it indicates that the built-in shell tool is not currently supported in Azure OpenAI deployments, even though it is documented in the OpenAI public API.

    Clarification

    • The Tools - Shell capability shown in OpenAI documentation applies to the public OpenAI platform
    • In Azure OpenAI, the Responses API does not yet natively support the built-in shell tool

    Because of this, shell tool calls may return:

    • 400 errors (policy-related, e.g., network restrictions)
    • 500 server_error (the backend cannot provision an execution environment)

    So the behavior you're seeing is expected when attempting to use this feature in Azure today.
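
    To make this concrete, a caller can branch on the two failure modes above when inspecting an error payload. This is a minimal sketch; the function name and the returned labels are my own, not part of any SDK:

```python
def classify_shell_tool_error(status_code: int, body: dict) -> str:
    """Map an API error response to one of the failure modes described above."""
    code = body.get("error", {}).get("code", "")
    if status_code == 400:
        return "policy"    # policy-related, e.g. network restrictions
    if status_code >= 500 or code == "server_error":
        return "backend"   # backend cannot provision an execution environment
    return "other"
```

    For the error payload shown in the question (500 with code "server_error"), this returns "backend", which is the expected outcome for the shell tool on Azure today.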

    Why this happens

    The shell tool depends on:

    • Ephemeral container execution environments
    • Secure sandboxing + optional outbound network access

    In Azure, these capabilities are:

    • Feature-gated
    • Not enabled across all regions/subscriptions
    • Often restricted due to security and compliance controls

    There is currently no portal setting, no API flag, and no self-service way to enable it.

    Option 1: Implement your own “shell tool” using function calling

    You can replicate the same behavior by letting the model decide when to run a command, and executing it in your own environment.

    How it works:

    1. Define a function like run_shell
    2. Let the model generate a function call
    3. Execute the command in your backend (VM/container)
    4. Send the output back to the model

    Example (Python sketch; the deployment name and the run_shell schema are illustrative, not an official API):

    from openai import OpenAI  # use AzureOpenAI for an Azure endpoint
    import json, subprocess

    client = OpenAI()

    tools = [{
        "type": "function",
        "name": "run_shell",
        "description": "Run a shell command and return its output",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    }]

    response = client.responses.create(
        model="gpt-5.4",  # your deployment name
        input="List the files in the current directory",
        tools=tools,
    )

    # If the model decided to call run_shell, execute the command in your
    # own environment and send the output back to the model.
    for item in response.output:
        if item.type == "function_call" and item.name == "run_shell":
            args = json.loads(item.arguments)
            result = subprocess.run(args["command"], shell=True,
                                    capture_output=True, text=True, timeout=30)
            response = client.responses.create(
                model="gpt-5.4",
                previous_response_id=response.id,
                input=[{"type": "function_call_output",
                        "call_id": item.call_id,
                        "output": result.stdout + result.stderr}],
                tools=tools,
            )

    This is the recommended production pattern in Azure.

    Option 2: Use the public OpenAI platform

    If you specifically need the native shell tool experience:

    • You would need to use the public OpenAI API
    • The Tools-Shell capability is supported there (as per OpenAI docs)

    Can this be enabled in Azure?

    At the moment:

    • Shell tool support is not generally available in Azure OpenAI
    • You can raise a support request to:
      • Confirm availability for your subscription/region
      • Track feature rollout status

    Please refer to the Tools - Shell documentation linked in your question.

    I hope this helps. Do let me know if you have any further queries.

    Thank you!


  2. Q&A Assist
    2026-04-01T14:00:22.7566667+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    The 500 server_error indicates an internal service issue on the Azure side rather than a problem with the request syntax. For Azure OpenAI/Foundry models, the recommended handling is:

    1. Add/verify retry logic
      The SDKs and clients are designed to automatically retry transient 5xx errors with exponential backoff. Ensure the client used by the Shell tool (or any wrapper around it) is configured with retries enabled and, if possible, increase the retry count:
      • In Python, max_retries controls this behavior.
      • In JavaScript/TypeScript, use maxRetries.
      5xx errors are explicitly called out as retriable:
      • Python: connection errors, 408, 429, and >= 500 internal errors are retried twice by default.
      • JavaScript: connection errors, 408, 429, and >= 500 internal errors are retried twice by default.
    2. Adjust generation parameters to reduce 500 errors
      For some internal errors (for example, invalid Unicode output or unexpected special tokens), Azure guidance is to:
      • Set temperature to < 1.
      • Reset presence_penalty, frequency_penalty, and any logit_bias settings to their defaults.
      • Optionally set top_p to a lower, non-default value to avoid low‑probability tokens that can trigger these issues.
      These mitigations apply to errors such as:
      • InternalServerError - 500 - Failed to create completion as the model generated invalid Unicode output.
      • Server error (500): Unexpected special token.
      The same tuning can help when using tools like the Shell tool, since they still rely on the underlying chat/responses APIs.
    3. Confirm basic connectivity and configuration
      • Ensure the endpoint and deployment are correct and reachable.
      • If requests fail unexpectedly, test with a smaller/simple prompt and without tools to rule out payload or tool‑wiring issues.
    4. If 500 persists with retries and safe parameters
      • Collect the x-request-id (request ID) from the response headers or SDK object (for example, _request_id in Python) and the exact time and region.
      • Open an Azure support request and include the request ID, as the error message suggests. Persistent 500s with correct configuration typically require backend investigation.
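
    The retry behavior described in step 1 can also be hand-rolled around any call site. The following is an illustrative sketch only; in practice the SDKs do this internally via max_retries (Python) or maxRetries (JavaScript):

```python
import random
import time

# Status codes the SDKs treat as retriable, in addition to all >= 500 errors.
RETRIABLE = {408, 429}

def with_retries(call, max_retries=2, base_delay=0.5):
    """Invoke call() -> (status, result), retrying 408/429/5xx with backoff."""
    for attempt in range(max_retries + 1):
        status, result = call()
        retriable = status in RETRIABLE or status >= 500
        if not retriable or attempt == max_retries:
            return status, result
        # Exponential backoff with jitter before the next attempt.
        time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

    A wrapper like this is mainly useful when the shell-tool calls go through your own HTTP layer; when using the official SDKs, prefer their built-in retry configuration.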

    At present, using the OpenAI Shell tool against Azure deployments depends on the underlying Azure OpenAI Responses/Chat APIs. If the model deployment is healthy and the above mitigations are in place, the tool should work; otherwise, repeated 500s need to be escalated via Azure support with the provided request ID.
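
    Taken together, the parameter mitigations in step 2 amount to a settings block like the following (the specific values are illustrative choices, not mandated defaults):

```python
# Illustrative "safe" generation settings per the guidance above; pass them
# as keyword arguments, e.g. client.chat.completions.create(..., **safe_params).
safe_params = {
    "temperature": 0.7,        # keep temperature below 1
    "top_p": 0.9,              # optional: trim low-probability tokens
    "presence_penalty": 0.0,   # reset to default
    "frequency_penalty": 0.0,  # reset to default
}
```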

