Share via

Clarification needed: o3-deep-research + remote MCP server — GA status and SLA coverage

SatoruG 0 Reputation points
2026-03-09T02:13:05.69+00:00

Hello,

We are in the architecture decision phase for a production feature built on Azure OpenAI / Azure AI Foundry. The feature relies on "Deep Research" capabilities, and we have a strict policy of not shipping with Preview dependencies. Before we proceed, we need official confirmation on a few points.

We would appreciate a response from a Microsoft staff member rather than an AI-generated answer, as this is a production go/no-go decision.

Planned setup:

Endpoint: /openai/v1/responses

Model: o3-deep-research

Tool integration: remote MCP server only (implementing search & fetch)

We explicitly want to avoid using web_search_preview due to its Preview status


(1) Direct documentation contradiction regarding o3-deep-research availability

The following two current Microsoft Learn pages directly contradict each other:

Deep research with the Responses API shows o3-deep-research being called via /openai/v1/responses with code examples.

How to use the Deep Research tool explicitly states: "The o3-deep-research model is available for use only with the Deep Research tool. It is not available in the Azure OpenAI Chat Completions and Responses APIs."

These two statements cannot both be correct. Which page reflects the current, authoritative guidance? Is the Agent Service page outdated, or is Responses API support for o3-deep-research still in a transitional/preview state?

(2) MCP tool type — explicit GA confirmation needed

The previous AI-generated answer inferred that MCP tooling is GA because /openai/v1/responses is GA. However, the v1 API lifecycle article states that preview features within the v1 surface are gated by preview headers or preview paths. The absence of a preview gate does not necessarily confirm GA status with SLA coverage.

Is the MCP tool type ("type": "mcp") within the Responses API explicitly GA and covered by Azure OpenAI SLA? Or is it an undocumented/implicit preview feature?

(3) MCP-only configuration without web_search_preview — confirmed supported?

The Deep Research prerequisites state: "At least one data source configured in your request: web_search_preview, and/or A remote MCP server."

The "and/or" wording suggests MCP-only should work. However, we want to confirm:

Can o3-deep-research function with ONLY an MCP server and NO web_search_preview tool?

Does the model have any internal dependency on web search that would cause degraded behavior or failure when web_search_preview is omitted?


This is a production architecture decision, so we need definitive answers rather than inference from endpoint-level GA status. References to specific documentation or official statements would be greatly appreciated.

Thank you!

Azure OpenAI Service
Azure OpenAI Service

An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.


2 answers

Sort by: Most helpful
  1. Manas Mohanty 16,030 Reputation points Microsoft External Staff Moderator
    2026-04-07T02:09:05.14+00:00

    Hey SatoruG

    Good day.

    Here are my thoughts on few of the queries.

    Background=true is recommended for Deep search mode.

    MCP servers are not be covered in Azure OpenAI SLA (only deep search tool which is provided as tools along OpenAI)

    As per my experience. Answer would be

    This configuration is supported, but some part of it is still Preview and/or excluded from SLA"

    Deep search - GA

    MCP Server - There is no explicit mention of GA capability on external MCP server on Azure Side- If you have hosted them on Azure, SLA might be there on Webapp hosting it.

    I have posted internally for review. Shall update here once we hear from them.

    Sorry for the delay.

    Thank you.


  2. SRILAKSHMI C 16,625 Reputation points Microsoft External Staff Moderator
    2026-03-18T11:47:33.73+00:00

    Hello SatoruG,

    Welcome to Microsoft Q&A and Thank you for the questions.

    These are exactly the right considerations for a production go/no-go decision. Based on the latest public documentation and current GA definitions, here’s a consolidated and clarified response:

    (1) Documentation contradiction which is authoritative?

    You’re correct that the two documents are inconsistent.

    The “Deep research with the Responses API” page reflects the current and authoritative state, where o3-deep-research is used directly via /openai/v1/responses.

    The Agents/Tool samples page is out of sync and still reflects an earlier stage where o3-deep-research was positioned as a tool-only capability.

    As of the v1 API release, o3-deep-research is available through the Responses API surface. However, it is still a tool-dependent model, meaning it is intended to operate in conjunction with external data sources (e.g., MCP or web search), rather than as a standalone LLM.

    (2) MCP tool type — GA status and SLA coverage

    This is a key point for production readiness.

    According to the Azure OpenAI API lifecycle guidance, only features explicitly gated behind:

    preview API versions, or

      preview headers are considered Preview.
      
      The `"type": "mcp"` tool:
      
         Is available in `/openai/v1/responses`
         
            Has **no preview header or preview API gate**
            
    

    Conclusion:

    "type": "mcp" is considered a GA feature within the Responses API surface

    Requests using MCP tools are therefore covered under the standard Azure OpenAI SLA

    If you are using the Microsoft-hosted Azure MCP Server (Entra-backed MCP server), that server itself may still be in Preview.

    The API surface (Responses API + MCP tool invocation) is GA

    But the underlying MCP server implementation (if Microsoft-hosted) may have preview characteristics

    For custom/remote MCP servers (your own implementation), this distinction does not apply.

    (3) MCP-only configuration (no web_search_preview)

    Yes, this is fully supported.

    The requirement is: “At least one data source: web_search_preview and/or MCP”

    You can completely omit web_search_preview

    Behavior with MCP-only:

    The model will generate only mcp_tool_call outputs

    There is no hidden dependency on web search

    No degradation or failure is expected purely due to the absence of web_search_preview

    If issues occur, they are typically due to MCP implementation details:

    Ensure your MCP server correctly implements:

    • search
    • fetch

    Ensure require_approval is set appropriately (e.g., "never" for automated flows)

    o3-deep-research is supported via /openai/v1/responses (current GA surface)

    "type": "mcp" is GA within the Responses API and covered by SLA

    MCP-only configurations are fully supported with no dependency on web_search_preview

    Please refer this

    I Hope this helps. Do let me know if you have any further queries.


    If this answers your query, please do click Accept Answer and Yes for was this answer helpful.

    Thank you!


Your answer

Answers can be marked as 'Accepted' by the question author and 'Recommended' by moderators, which helps users know the answer solved the author's problem.