- Pattern for safely allowing AI to trigger database queries
Use a strict “backend-as-gateway” pattern where AI never talks to the database directly:
- Keep all intelligence away from the client and block direct access to data stores. Route all data access through backend APIs that enforce authorization and propagate user/tenant context into retrieval and filtering.
- Implement AI “tools” or “functions” as backend endpoints (ASP.NET Core APIs or Azure Functions) that:
  - Receive a structured request (parameters, user identity/roles).
  - Validate permissions and input.
  - Execute only predefined queries or stored procedures against Azure SQL.
  - Return sanitized, structured results (not raw connection access) to the AI orchestration layer.
- Use function-calling/tool-based patterns only as a contract between the AI model and these backend APIs; the backend remains the enforcement point.
This aligns with the guidance to:
- Block direct access to data stores and route all data requests through an abstraction that enforces authorization and propagates user context.
- Isolate behaviors and actions so the “knowledge” layer (SQL, storage) is separated from the “intelligence” layer (AI/agents) and each layer enforces its own policies.
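As a minimal sketch of the gateway pattern above (illustrative Python; the real enforcement point would be an ASP.NET Core API or Azure Function over Azure SQL, and the tool names, roles, and in-memory executor here are assumptions):

```python
# Backend-as-gateway sketch: the AI layer may only invoke named tools, and each
# tool maps to a predefined, parameterized query that the backend owns.
# ALLOWED_QUERIES, run_tool, and the fake executor are illustrative assumptions.

ALLOWED_QUERIES = {
    # tool name -> (required role, parameterized SQL owned by the backend)
    "GetSalesByRegion": (
        "Sales.Reader",
        "SELECT Region, SUM(Amount) AS Total FROM Sales WHERE Region = ? GROUP BY Region",
    ),
}

def execute_sql(sql: str, params: tuple) -> list[dict]:
    # Stand-in for a real Azure SQL call; returns canned rows for the sketch.
    return [{"Region": params[0], "Total": 1200}]

def run_tool(tool: str, params: tuple, user_roles: set[str]) -> list[dict]:
    """Single enforcement point: validate the tool and the caller, then run
    only the predefined query. The AI layer never sees a connection string."""
    if tool not in ALLOWED_QUERIES:
        raise ValueError(f"Unknown tool: {tool}")
    required_role, sql = ALLOWED_QUERIES[tool]
    if required_role not in user_roles:
        raise PermissionError(f"Caller lacks role {required_role}")
    # Return sanitized, structured results only.
    return execute_sql(sql, params)
```

Because the allow-list is data, adding a capability means adding a query the backend has reviewed, never granting the model broader SQL access.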
- Azure AI Foundry Agent Service vs simple Azure OpenAI
A simpler Azure OpenAI setup can be sufficient if:
- The application only needs straightforward chat + a small set of backend tools (APIs) for queries, charts, and file generation.
- Orchestration, routing, and safety logic are implemented in the ASP.NET Core backend.
Azure AI Foundry and agents become more compelling when:
- Multiple tools, models, or data sources must be orchestrated.
- There is a need for centralized governance, quotas, and access control across AI workloads.
- Workload and agent identities, connections, and secrets should be centrally managed.
Relevant guidance:
- Use fit-for-purpose identity types (workload identities for apps, agent identities for AI agents).
- Prefer Microsoft Entra ID–based authentication for connections; store non-Entra secrets in a dedicated Key Vault connection for Foundry.
- Use Foundry Management Center to control access to AI resources, manage quotas, and enforce governance.
For an initial implementation, a single Azure OpenAI resource with a backend-orchestrated tool pattern is typically sufficient. Foundry/agents can be introduced later for more complex multi-agent or multi-connection scenarios.
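A hedged sketch of that backend-orchestrated tool contract: the schema below follows the chat-completions `tools` shape used for function calling, while the handler name and payload are illustrative assumptions.

```python
import json

# Function-calling contract sketch: the schema is the only surface the model
# sees; the dispatcher maps a proposed call onto a backend handler.

TOOL_SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "GetSalesByRegion",
        "description": "Return aggregated sales for a region the user may access.",
        "parameters": {
            "type": "object",
            "properties": {"region": {"type": "string"}},
            "required": ["region"],
        },
    },
}]

def get_sales_by_region(region: str) -> dict:
    # Stand-in for the real backend API; enforcement lives here, not in the model.
    return {"region": region, "total": 1200}

HANDLERS = {"GetSalesByRegion": get_sales_by_region}

def dispatch(tool_call: dict) -> str:
    """Resolve a model-proposed tool call to a registered handler. The model
    only ever receives the JSON result, never database access."""
    name = tool_call["function"]["name"]
    if name not in HANDLERS:
        raise ValueError(f"Tool not registered: {name}")
    args = json.loads(tool_call["function"]["arguments"])
    return json.dumps(HANDLERS[name](**args))
```

The same contract works whether the caller is a plain Azure OpenAI deployment or a Foundry agent later, which is what makes the staged adoption path practical.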
- Enforcing row-level / user-level data access
Use a combination of:
- Identity propagation and security-trimmed grounding:
  - Propagate Microsoft Entra ID user identity and group claims from the frontend to the backend.
  - Backend uses these claims to enforce authorization before any query is executed.
  - Ensure that any knowledge retrieval (including SQL, search, or other data tools) is security-aware and never retrieves unauthorized data.
- Backend authorization logic:
  - Implement RBAC in the backend using Entra group claims and application roles.
  - Backend APIs validate that the caller is allowed to execute a given “tool” (e.g., “GetSalesByRegion”) and, if needed, constrain parameters based on the user’s scope.
  - Log and audit denials for insufficient permissions.
- SQL-level controls (optional but recommended):
  - Use SQL Row-Level Security (RLS) or equivalent policies to enforce data-level restrictions as a defense-in-depth layer.
  - Backend passes user/tenant identifiers or role information as parameters or session context so RLS can apply.
This matches the guidance to:
- Pass user identity forward so that data-level security can be applied.
- Implement group/ACL-based trimming and authorization enforcement in the knowledge layer.
- Maintain audit trails and auditable denials.
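A sketch of propagating identity into SQL-level RLS: `sp_set_session_context` is the real SQL Server/Azure SQL procedure an RLS predicate can read via `SESSION_CONTEXT(N'TenantId')`, while the claim names, helper, and audit list are illustrative assumptions.

```python
# Identity propagation into Row-Level Security: the backend binds the caller's
# tenant into SESSION_CONTEXT before the real query, so a server-side RLS
# predicate can trim rows even if backend filtering has a gap.

AUDIT_LOG: list[str] = []

def scoped_statements(claims: dict, sql: str, params: tuple) -> list[tuple]:
    """Return the (statement, params) pairs the backend would execute, in
    order, on a single connection: session context first, then the query."""
    tenant = claims.get("tid")  # Entra tenant claim; name is an assumption here
    if tenant is None:
        AUDIT_LOG.append("DENY: missing tenant claim")  # auditable denial
        raise PermissionError("No tenant context; refusing to query")
    return [
        ("EXEC sp_set_session_context @key = N'TenantId', @value = ?", (tenant,)),
        (sql, params),
    ]
```

Running both statements on the same connection matters: session context is per-connection, so pooled connections must set it on every request before querying.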
- Generating charts and files (Excel/CSV/PDF)
Handle chart and file generation in the backend, not in the model:
- Backend responsibilities:
  - Receive a high-level intent from the AI (via a tool/function call) such as “generate bar chart of X by Y for user’s allowed scope” or “export current result set to Excel”.
  - Validate permissions and ensure the underlying data respects user scope.
  - Generate charts (e.g., using .NET charting libraries) and files (Excel/CSV/PDF) in the backend.
  - Store generated artifacts in Azure Blob Storage and return secure URLs or binary streams to the frontend.
- Frontend responsibilities:
  - Render charts using JavaScript libraries (React-based charting) from structured data returned by the backend.
  - Download or display files via links or streams provided by the backend.
This keeps intelligence and sensitive operations away from the client and ensures that all data and artifact generation is subject to backend authorization and logging.
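A minimal sketch of the backend side of this split, using Python's standard `csv` module (the real backend would use .NET libraries and then upload the bytes to Azure Blob Storage, returning a short-lived secure URL; the function name is an assumption):

```python
import csv
import io

def export_csv(rows: list[dict]) -> bytes:
    """Render an authorized, already-scoped result set as CSV bytes. The model
    only expresses the intent ("export this result set"); the backend
    materializes the artifact from data it has already security-trimmed."""
    if not rows:
        return b""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue().encode("utf-8")
```

Because the artifact is built from the backend's own query results, the file can never contain rows the caller could not have seen through the chat path.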
- Additional security and governance practices
- Authentication and access:
  - Use Microsoft Entra ID for user authentication and Azure RBAC for controlling access to AI resources and backend APIs.
  - Apply least privilege access for all roles and review regularly.
  - Require Entra ID authentication for AI model endpoints; optionally front them with Azure API Management as an AI gateway to enforce policies and monitor usage.
- Data security and boundaries:
  - Define data boundaries based on user access levels (internal, customer, public) and isolate datasets using separate storage accounts/databases.
  - Configure role-based data access controls with Azure RBAC and, where applicable, SQL permissions.
  - Use Microsoft Purview for data discovery, classification, and governance across AI data sources.
- AI-specific safety:
  - Implement prompt filtering and injection prevention in the backend.
  - Enforce provider safety systems, input/output filtering, identity-bound rate limiting and quotas, and token/prompt caps.
  - Monitor agent behavior and tool usage; log all AI interactions and tool invocations for audit and compliance.
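Identity-bound rate limiting and token caps can be sketched as a small backend guard evaluated before any model call (a sliding-window limiter; the window size, limits, and class name are assumptions, and production systems would typically use a distributed store such as Redis rather than in-process state):

```python
import time

class UserRateLimiter:
    """Per-identity sliding-window call limit plus a per-request token cap,
    checked in the backend before the model endpoint is invoked."""

    def __init__(self, max_calls: int, window_s: float, max_prompt_tokens: int):
        self.max_calls = max_calls
        self.window_s = window_s
        self.max_prompt_tokens = max_prompt_tokens
        self.calls: dict[str, list[float]] = {}  # user id -> call timestamps

    def check(self, user_id: str, prompt_tokens: int, now: float = None) -> None:
        """Raise if this identity is over its call or token budget."""
        if prompt_tokens > self.max_prompt_tokens:
            raise ValueError("Prompt exceeds per-request token cap")
        now = time.monotonic() if now is None else now
        # Keep only calls still inside the window, then count this one.
        window = [t for t in self.calls.get(user_id, []) if now - t < self.window_s]
        if len(window) >= self.max_calls:
            raise PermissionError(f"Rate limit exceeded for {user_id}")
        window.append(now)
        self.calls[user_id] = window
```

Binding the limit to the authenticated user (not the shared app identity) is what prevents one user from exhausting the quota for everyone behind the same backend.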
- Overall reference architecture pattern
- Frontend (React):
  - Auth via Entra ID.
  - Calls backend APIs with user tokens.
- Backend (ASP.NET Core / Azure Functions):
  - Validates tokens, extracts roles/claims.
  - Implements AI orchestration (tool definitions, function-calling schema).
  - Calls Azure OpenAI (or Foundry/agents) using managed identity.
  - Exposes tools that wrap predefined SQL queries/stored procedures and chart/file generation.
  - Enforces authorization and logs all operations.
- Data and storage:
  - Azure SQL with optional RLS and strict RBAC.
  - Azure Blob Storage for generated files and possibly cached artifacts.
- AI platform:
  - Azure OpenAI resource (optionally within Azure AI Foundry) with RBAC roles such as Cognitive Services OpenAI User/Contributor for appropriate personas.
This pattern satisfies: authentication via Entra ID, RBAC-based access, security-trimmed data retrieval, no direct database access from AI, and backend-controlled charts and file generation.
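The orchestration step in the backend can be sketched as the loop that feeds tool results back to the model: the message shapes follow the chat-completions tool-calling format (assistant `tool_calls`, then one `tool` message per call), while the fake model reply and handler are assumptions and no network call is made.

```python
import json

def append_tool_results(messages: list[dict], assistant_msg: dict, run_tool) -> list[dict]:
    """After the model proposes tool calls, execute them server-side and append
    one 'tool' message per call so the next model turn can use the results.
    run_tool is the backend's authorizing dispatcher (name, args) -> result."""
    messages = messages + [assistant_msg]
    for call in assistant_msg.get("tool_calls", []):
        args = json.loads(call["function"]["arguments"])
        result = run_tool(call["function"]["name"], args)
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),  # model receives data, not access
        })
    return messages
```

Each iteration of this loop passes through the backend's authorization checks, so every model turn is grounded only in data the signed-in user was allowed to retrieve.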