Designing secure AI chat with role-based access and controlled SQL queries using Azure services

Vangelis Kalomenidis 20 Reputation points
2026-03-18T10:55:19.22+00:00

Hello, I am designing a web-based AI chat application and would appreciate guidance on the recommended architecture using Microsoft technologies. The goal is to build a secure conversational interface where authenticated users can interact with enterprise data.

Key requirements:

- Users must authenticate (likely via Azure Entra ID)
- Each user has roles and permissions (RBAC)
- The AI should respond only within the user's allowed data scope
- The system should NOT allow direct, unrestricted database access from the AI

The application should support:

- Natural language responses
- Data visualizations (charts)
- File generation (Excel/CSV/PDF)

My current high-level architecture idea:

- Frontend: Web app (React)
- Backend: ASP.NET Core API or Azure Functions
- AI: Azure OpenAI (possibly with Azure AI Foundry / Agent Service)
- Database: Azure SQL
- Storage: Azure Blob Storage

The backend will act as a control layer:

- Validate user permissions
- Execute predefined queries or stored procedures
- Return structured data to the AI

Questions:

1. What is the recommended pattern for safely allowing the AI to trigger database queries? (e.g. function calling, a tool-based approach, or something else?)
2. Is Azure AI Foundry Agent Service the right choice for this scenario, or is a simpler Azure OpenAI setup sufficient?
3. What is the best way to enforce row-level or user-level data access in this architecture? Should this be handled in SQL (RLS), backend logic, or both?
4. For generating charts and files dynamically: is it better to handle this entirely in the backend, or are there recommended Azure-native approaches?

Any guidance, best practices, or reference architectures would be greatly appreciated. Thank you!

Azure AI Bot Service

An Azure service that provides an integrated environment for bot development.


Answer accepted by question author
  1. Q&A Assist
    2026-03-18T10:55:57.68+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.
    1. Pattern for safely allowing AI to trigger database queries

    Use a strict “backend-as-gateway” pattern where AI never talks to the database directly:

    • Keep all intelligence away from the client and block direct access to data stores. Route all data access through backend APIs that enforce authorization and propagate user/tenant context into retrieval and filtering.
    • Implement AI “tools” or “functions” as backend endpoints (ASP.NET Core APIs or Azure Functions) that:
      • Receive a structured request (parameters, user identity/roles).
      • Validate permissions and input.
      • Execute only predefined queries or stored procedures against Azure SQL.
      • Return sanitized, structured results (not raw connection access) to the AI orchestration layer.
    • Use function-calling/tool-based patterns only as a contract between the AI model and these backend APIs; the backend remains the enforcement point.

    This aligns with the guidance to:

    • Block direct access to data stores and route all data requests through an abstraction that enforces authorization and propagates user context.
    • Isolate behaviors and actions so the “knowledge” layer (SQL, storage) is separated from the “intelligence” layer (AI/agents) and each layer enforces its own policies.
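The backend-as-gateway contract above can be sketched as a small dispatcher. This is a minimal illustration, not an Azure SDK API: the tool name `get_sales_by_region`, the role `Sales.Reader`, and the stored procedure are all hypothetical placeholders.

```python
# Minimal sketch of a backend-as-gateway tool dispatcher.
# The model only ever selects a tool name plus structured arguments;
# the backend validates both before any predefined query runs.

ALLOWED_TOOLS = {
    "get_sales_by_region": {
        "required_role": "Sales.Reader",
        # Predefined stored procedure; the model never sees raw SQL access.
        "query": "EXEC dbo.GetSalesByRegion @Region = ?",
        "params": ["region"],
    },
}

def dispatch(tool_name: str, args: dict, user_roles: set) -> dict:
    """Validate a model-issued tool call before any SQL executes."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        return {"error": "unknown_tool"}       # model asked for something undefined
    if tool["required_role"] not in user_roles:
        return {"error": "forbidden"}          # RBAC enforced in the backend
    missing = [p for p in tool["params"] if p not in args]
    if missing:
        return {"error": "missing_params", "missing": missing}
    # A real backend would now execute tool["query"] with bound parameters
    # against Azure SQL and return the structured rows.
    return {"status": "ok", "tool": tool_name,
            "params": {p: args[p] for p in tool["params"]}}
```

Note that an unknown tool name or a missing role produces a structured denial rather than an exception, so the orchestration layer can return a clean "not permitted" answer to the user.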
    2. Azure AI Foundry Agent Service vs. simple Azure OpenAI

    A simpler Azure OpenAI setup can be sufficient if:

    • The application only needs straightforward chat + a small set of backend tools (APIs) for queries, charts, and file generation.
    • Orchestration, routing, and safety logic are implemented in the ASP.NET Core backend.

    Azure AI Foundry and agents become more compelling when:

    • Multiple tools, models, or data sources must be orchestrated.
    • There is a need for centralized governance, quotas, and access control across AI workloads.
    • Workload and agent identities, connections, and secrets should be centrally managed.

    Relevant guidance:

    • Use fit-for-purpose identity types (workload identities for apps, agent identities for AI agents).
    • Prefer Microsoft Entra ID–based authentication for connections; store non-Entra secrets in a dedicated Key Vault connection for Foundry.
    • Use Foundry Management Center to control access to AI resources, manage quotas, and enforce governance.

    For an initial implementation, a single Azure OpenAI resource with a backend-orchestrated tool pattern is typically sufficient. Foundry/agents can be introduced later for more complex multi-agent or multi-connection scenarios.

    3. Enforcing row-level / user-level data access

    Use a combination of:

    • Identity propagation and security-trimmed grounding:
      • Propagate Microsoft Entra ID user identity and group claims from the frontend to the backend.
      • Backend uses these claims to enforce authorization before any query is executed.
      • Ensure that any knowledge retrieval (including SQL, search, or other data tools) is security-aware and never retrieves unauthorized data.
    • Backend authorization logic:
      • Implement RBAC in the backend using Entra group claims and application roles.
      • Backend APIs validate that the caller is allowed to execute a given “tool” (e.g., “GetSalesByRegion”) and, if needed, constrain parameters based on the user’s scope.
      • Log and audit denials for insufficient permissions.
    • SQL-level controls (optional but recommended):
      • Use SQL Row-Level Security (RLS) or equivalent policies to enforce data-level restrictions as a defense-in-depth layer.
      • Backend passes user/tenant identifiers or role information as parameters or session context so RLS can apply.

    This matches the guidance to:

    • Pass user identity forward so that data-level security can be applied.
    • Implement group/ACL-based trimming and authorization enforcement in the knowledge layer.
    • Maintain audit trails and auditable denials.
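The claim-to-scope trimming described above can be sketched as follows. The claim names (`oid`, `groups`) follow Microsoft Entra ID token conventions; the group-to-region mapping is entirely hypothetical.

```python
# Sketch: constrain tool parameters to the caller's data scope before
# a query runs. The group IDs and region scopes are illustrative.

GROUP_SCOPES = {
    "group-emea-sales": {"EMEA"},
    "group-us-sales": {"US"},
}

def allowed_regions(claims: dict) -> set:
    """Union of regions the user's Entra group memberships grant."""
    scopes = set()
    for group in claims.get("groups", []):
        scopes |= GROUP_SCOPES.get(group, set())
    return scopes

def constrain(requested_region: str, claims: dict) -> str:
    """Reject requests outside the caller's scope; allow the rest."""
    if requested_region not in allowed_regions(claims):
        # A real backend would also log this denial for audit.
        raise PermissionError(
            f"user {claims.get('oid')} not scoped to {requested_region}")
    return requested_region
```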
    4. Generating charts and files (Excel/CSV/PDF)

    Handle chart and file generation in the backend, not in the model:

    • Backend responsibilities:
      • Receive a high-level intent from the AI (via a tool/function call) such as “generate bar chart of X by Y for user’s allowed scope” or “export current result set to Excel”.
      • Validate permissions and ensure the underlying data respects user scope.
      • Generate charts (e.g., using .NET charting libraries) and files (Excel/CSV/PDF) in the backend.
      • Store generated artifacts in Azure Blob Storage and return secure URLs or binary streams to the frontend.
    • Frontend responsibilities:
      • Render charts using JavaScript libraries (React-based charting) from structured data returned by the backend.
      • Download or display files via links or streams provided by the backend.

    This keeps intelligence and sensitive operations away from the client and ensures that all data and artifact generation is subject to backend authorization and logging.
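As a small sketch of the backend file-generation step: serializing an already security-trimmed result set to CSV bytes. Only standard-library modules are used here; in a real deployment the bytes would then be uploaded to Blob Storage and exposed via a secure link.

```python
# Sketch: backend-side CSV export of a security-trimmed result set.
# Only rows the authorization layer released are written.
import csv
import io

def export_csv(rows: list) -> bytes:
    """Serialize structured query results (list of dicts) to CSV bytes."""
    if not rows:
        return b""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    # utf-8-sig adds a BOM so Excel detects the encoding correctly.
    return buf.getvalue().encode("utf-8-sig")
```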

    5. Additional security and governance practices
    • Authentication and access:
      • Use Microsoft Entra ID for user authentication and Azure RBAC for controlling access to AI resources and backend APIs.
      • Apply least privilege access for all roles and review regularly.
      • Require Entra ID authentication for AI model endpoints; optionally front them with Azure API Management as an AI gateway to enforce policies and monitor usage.
    • Data security and boundaries:
      • Define data boundaries based on user access levels (internal, customer, public) and isolate datasets using separate storage accounts/databases.
      • Configure role-based data access controls with Azure RBAC and, where applicable, SQL permissions.
      • Use Microsoft Purview for data discovery, classification, and governance across AI data sources.
    • AI-specific safety:
      • Implement prompt filtering and injection prevention in the backend.
      • Enforce provider safety systems, input/output filtering, identity-bound rate limiting and quotas, and token/prompt caps.
      • Monitor agent behavior and tool usage; log all AI interactions and tool invocations for audit and compliance.

    Overall reference architecture pattern

    • Frontend (React):
      • Auth via Entra ID.
      • Calls backend APIs with user tokens.
    • Backend (ASP.NET Core / Azure Functions):
      • Validates tokens, extracts roles/claims.
      • Implements AI orchestration (tool definitions, function-calling schema).
      • Calls Azure OpenAI (or Foundry/agents) using managed identity.
      • Exposes tools that wrap predefined SQL queries/stored procedures and chart/file generation.
      • Enforces authorization and logs all operations.
    • Data and storage:
      • Azure SQL with optional RLS and strict RBAC.
      • Azure Blob Storage for generated files and possibly cached artifacts.
    • AI platform:
      • Azure OpenAI resource (optionally within Azure AI Foundry) with RBAC roles such as Cognitive Services OpenAI User/Contributor for appropriate personas.

    This pattern satisfies: authentication via Entra ID, RBAC-based access, security-trimmed data retrieval, no direct database access from AI, and backend-controlled charts and file generation.



1 additional answer

  1. SRILAKSHMI C 16,975 Reputation points Microsoft External Staff Moderator
    2026-03-20T10:51:49.54+00:00

    Hello Vangelis Kalomenidis

    I understand that you’re piecing together a secure, role-aware AI chat that only ever runs pre-approved queries against your Azure SQL data. Below is a pattern you can use and some guidance on the specific points you raised.

    1. Safely allowing the AI to trigger database queries

    The safest and most commonly used pattern is to treat the AI as a caller of predefined backend functions, not as something that can generate or execute SQL.

    In practice, this means you define a limited set of operations.

    These operations are exposed as functions/tools to the model.

    The model’s job is only to:

    • Understand the user’s intent
    • Select the right function
    • Pass structured parameters
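One such operation could be exposed to the model as a tool definition in the chat-completions function-calling format, as a rough sketch (the function name, description, and `enum` values are illustrative, not from the original question):

```python
# Sketch: one predefined operation described as a tool the model may
# select. The backend, not the model, executes the underlying query.
GET_SALES_TOOL = {
    "type": "function",
    "function": {
        "name": "get_sales_by_region",
        "description": "Return aggregated sales for one region the user may access.",
        "parameters": {
            "type": "object",
            "properties": {
                # Constraining values here gives the model guardrails,
                # but the backend must still validate the argument.
                "region": {"type": "string", "enum": ["EMEA", "US", "APAC"]},
            },
            "required": ["region"],
        },
    },
}
```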

    Your backend then does all the real work:

    • Validates the user
    • Checks roles/permissions
    • Validates input parameters (whitelisting, ranges, etc.)
    • Calls a stored procedure or parameterized query
    • Returns structured data (JSON)

    This gives you:

    • No direct DB access from the model
    • No dynamic SQL generation
    • Full control over what queries are possible

    Avoid patterns where:

    • The model generates SQL
    • You “sanitize” model-generated SQL and run it

    Those approaches are hard to secure and don’t scale well.

    2. Azure OpenAI vs Azure AI Foundry Agent Service

    For your scenario, you don’t need to start with a complex agent framework.

    Azure OpenAI (with function calling) is sufficient if:

    • You have a single chat flow
    • You have a limited number of tools (queries, reports, exports)
    • The backend is already handling orchestration

    Agent Service (Foundry) becomes useful when:

    • You have multiple tools interacting in sequence
    • You need built-in orchestration (planning, retries, state)
    • You expect long-running or multi-step workflows

    A practical approach:

    • Start with Azure OpenAI + your backend controlling everything
    • Introduce Agent Service later only if orchestration logic becomes complex

    3. Enforcing row-level / user-level access

    This should not be handled in just one place. The most reliable setup is layered enforcement.

    Database layer

    Use Row-Level Security (RLS) in Azure SQL.

    • Define predicates that filter data based on user identity or role
    • Pass user context from backend (e.g., SESSION_CONTEXT)
    • This ensures that even if something goes wrong upstream, the database still enforces access rules
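The backend-to-RLS handshake can be sketched as follows. The T-SQL uses standard SQL Server constructs (`sp_set_session_context`, `SESSION_CONTEXT`), but the predicate function, schema, and table names are illustrative only.

```python
# Sketch: the statements a backend would issue so Azure SQL RLS can
# filter rows by the calling user. No database connection is made here;
# this just shows the pairing of context-set and query.

SET_CONTEXT = (
    "EXEC sp_set_session_context "
    "@key = N'UserId', @value = ?, @read_only = 1;"
)

# An RLS security policy defined once in the database might bind a
# predicate function like this (illustrative schema):
RLS_PREDICATE = """
CREATE FUNCTION Security.fn_userAccess(@OwnerId nvarchar(64))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN SELECT 1 AS ok
WHERE @OwnerId = CAST(SESSION_CONTEXT(N'UserId') AS nvarchar(64));
"""

def statements_for(user_id: str, query: str) -> list:
    """Pair every query with a session-context set for this caller,
    so the RLS predicate always sees the right identity."""
    return [(SET_CONTEXT, (user_id,)), (query, ())]
```

With `@read_only = 1`, the session value cannot be overwritten later in the same session, which is why it is the usual choice for RLS identity context.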

    Backend layer

    Your API should:

    • Extract user identity from the token
    • Map roles/claims to allowed operations
    • Decide which functions the user is allowed to call
    • Pass only allowed parameters to SQL

    Why both? The backend protects what can be executed; SQL protects what data can be returned.

    Do not rely on prompt instructions, such as asking the model to “only show allowed data”. Those are not enforceable security boundaries.

    4. Charts and file generation

    Best practice is to keep this in your backend.

    Charts

    • Backend returns structured data (JSON)
    • Frontend renders charts using a library (Chart.js, Recharts, etc.)

    This keeps the UI flexible and avoids unnecessary load on the backend.

    File generation

    Handle this in backend services (ASP.NET or Azure Functions):

    • Generate Excel (EPPlus / ClosedXML)
    • Generate PDF (iText or similar)
    • Generate CSV directly

    Store files in Blob Storage and return a secure download link (SAS).

    This approach gives:

    • Better control over formatting
    • Easier auditing
    • Clear separation of responsibilities

    Optional Azure-native options

    • Power BI Embedded: if you need rich dashboards and export features
    • Logic Apps: if you want low-code automation (e.g., generate and email reports)

    Use these only if your use case requires them; otherwise backend generation is simpler and more predictable.

    Recommended architecture

    Frontend (React)

    • Auth via Entra ID (MSAL)
    • Sends user queries and renders responses/charts

    Backend (ASP.NET Core / Azure Functions)

    • Validates authentication and RBAC
    • Exposes controlled functions/tools to the AI
    • Executes stored procedures
    • Generates files
    • Returns structured responses

    AI Layer (Azure OpenAI)

    • Handles natural language understanding
    • Calls only approved functions
    • Formats final responses

    Database (Azure SQL)

    • Stored procedures / parameterized queries
    • Row-Level Security for data filtering

    Storage

    • Blob Storage for generated files

    Key principles to stick to

    • The AI should never have direct database access
    • All database interaction must go through your backend
    • Only predefined queries/functions should be allowed
    • Enforce permissions both in backend and in SQL
    • Treat everything coming from the model as untrusted input
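The last principle, treating model output as untrusted input, can be sketched as explicit whitelist and range validation on every argument the model supplies. The value sets and limits below are illustrative.

```python
# Sketch: validate model-produced arguments against an explicit
# whitelist and numeric bounds before they touch any query.

VALID_REGIONS = {"EMEA", "US", "APAC"}
MAX_ROWS = 1000

def validate_args(args: dict) -> dict:
    """Return a sanitized copy of the arguments, or raise."""
    region = args.get("region")
    if region not in VALID_REGIONS:
        # Anything outside the whitelist is rejected outright,
        # including injection attempts like "EMEA'; DROP TABLE ...".
        raise ValueError(f"rejected region: {region!r}")
    top = int(args.get("top", 100))   # default row cap
    if not 1 <= top <= MAX_ROWS:
        raise ValueError("row limit out of range")
    return {"region": region, "top": top}
```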

    Answers to your questions

    1. Safe query pattern → Function/tool calling with backend execution
    2. Foundry vs OpenAI → Start with OpenAI, use Foundry only if needed
    3. Access control → Use both RLS in SQL and RBAC in backend
    4. Charts/files → Backend-driven, frontend renders charts; optional Power BI/Logic Apps if needed

    Please refer to these resources:

    1. Role-Based Access Control for Azure OpenAI Service https://dotnet.territoriali.olinfo.it/azure/ai-services/openai/how-to/role-based-access-control
    2. Configure Azure OpenAI with Managed Identity https://dotnet.territoriali.olinfo.it/azure/cognitive-services/openai/how-to/managed-identity
    3. SQL Row-Level Security https://dotnet.territoriali.olinfo.it/sql/relational-databases/security/row-level-security
    4. Azure OpenAI Function Calling Docs https://dotnet.territoriali.olinfo.it/azure/ai-services/openai/concepts/function-calling
    5. Azure AI Foundry / Agent Service Overview https://dotnet.territoriali.olinfo.it/azure/ai-services/agent-service/overview
    6. Bot Framework Security Guidelines (encryption, privacy) https://dotnet.territoriali.olinfo.it/azure/bot-service/bot-builder-security-guidelines

    I hope this helps. Do let me know if you have any further queries.


    If this answers your query, please click Accept Answer and Yes for “Was this answer helpful”.

    Thank you!

