Hello TI,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand that your SystemReservedJob-LibraryManagement job is stuck in the Queued state.
This is usually caused by an unhealthy Spark pool, a saturated job queue, a problematic dependency list, or a permissions issue. The steps below work through these root causes in order and should help resolve it:
- First, confirm your Spark pool is truly healthy (not just marked “Running”) by checking active nodes, executor availability, and autoscale behavior in Synapse Studio. If the pool reports 0 active nodes or nodes remain stuck provisioning, restart the pool as recommended in the Azure Synapse Analytics documentation: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-pool-configurations
- Next, inspect the Synapse Monitor hub for hidden queue saturation by identifying pending or stalled (“zombie”) jobs that may block system operations. If the queue is congested, cancel all running jobs and restart the pool, following guidance here: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/monitoring/how-to-monitor-spark-applications
- Then, validate your `requirements.txt` file carefully, as unsupported or complex dependencies often prevent library jobs from executing. Test locally using `pip install -r requirements.txt`, and follow the dependency guidelines from: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-manage-packages
- After that, isolate dependency issues by uploading a minimal package file such as `pandas==1.5.3`; if this succeeds, the failure lies within your original dependency list rather than the Spark environment. This aligns with best practices in: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-manage-python-packages
- Also verify that the Synapse workspace managed identity has proper access to the linked storage services, since permission failures can silently block internal jobs. Ensure role assignments are correct using: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions
- If issues persist, perform a full Spark pool reset by stopping it, waiting a few minutes, and starting it again to clear scheduler deadlocks and internal queue corruption. This reset approach is supported in operational guidance: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-troubleshoot
- Finally, if the job remains stuck after all checks, treat it as a backend orchestration issue and open a support request with workspace name, Spark pool name, and job ID. Use official escalation guidance from: https://dotnet.territoriali.olinfo.it/azure/azure-portal/supportability/how-to-create-azure-support-request
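For the pool-health check in the first step, the decision logic can be sketched in Python. The `provisioningState` value matches what the Synapse Big Data Pools management API reports; the `active_nodes` argument is a hypothetical stand-in for the node count shown in Synapse Studio, so adjust it to whatever your monitoring actually returns:

```python
# Sketch: decide whether a Spark pool's reported state is actually healthy.
# "provisioningState" values follow the Synapse Big Data Pools management API;
# "active_nodes" is a hypothetical field for the node count seen in Synapse
# Studio - this is an illustrative heuristic, not an official check.
def pool_needs_restart(provisioning_state, active_nodes):
    """A pool that is not fully provisioned, or that reports zero active
    nodes, cannot actually run library-management jobs."""
    if provisioning_state != "Succeeded":
        return True
    return active_nodes == 0

if __name__ == "__main__":
    # "Running" in the portal can still mean zero usable nodes.
    print(pool_needs_restart("Succeeded", 0))
    print(pool_needs_restart("Provisioning", 3))
```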
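The queue-saturation check in the second step amounts to filtering session records by state. Synapse Spark sessions surface Livy-style `id`/`state` fields; the sample records and the exact set of "blocking" states below are assumptions for illustration:

```python
# Sketch: flag Spark sessions whose state suggests they are occupying the
# queue and may block the library-management job. Assumes Livy-style session
# records with "id" and "state" fields; the sample data is illustrative.

# States that hold queue capacity without making progress.
BLOCKING_STATES = {"not_started", "starting", "busy", "recovering"}

def find_blocking_sessions(sessions):
    """Return the sessions worth cancelling before restarting the pool."""
    return [s for s in sessions if s.get("state") in BLOCKING_STATES]

if __name__ == "__main__":
    sample = [
        {"id": 1, "state": "idle"},
        {"id": 2, "state": "busy"},
        {"id": 3, "state": "not_started"},
        {"id": 4, "state": "dead"},
    ]
    for s in find_blocking_sessions(sample):
        print(f"session {s['id']} is {s['state']} - consider cancelling it")
```

In practice you would feed this the session list from the Monitor hub or the pool's sessions endpoint, then cancel the flagged sessions before restarting the pool.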
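For the `requirements.txt` validation step, a quick pre-flight check can catch the entries that most often break library jobs before you upload the file. The rules below (exact pins only, no URL/editable/local installs) are an illustrative sketch, not an official Synapse validator:

```python
# Sketch: pre-flight check of a requirements.txt before uploading it to the
# Spark pool. Flags entries that are not pinned to an exact version and
# entries that commonly fail in Synapse library jobs (URLs, editable
# installs, local paths). These rules are illustrative assumptions.
import re

EXACT_PIN = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9._]+$")

def check_requirements(lines):
    """Return a list of (entry, reason) pairs for suspicious entries."""
    problems = []
    for raw in lines:
        entry = raw.split("#", 1)[0].strip()  # drop comments and blanks
        if not entry:
            continue
        if entry.startswith(("-e", "git+", "http://", "https://", "./", "/")):
            problems.append((entry, "URL/editable/local install is not supported"))
        elif not EXACT_PIN.match(entry):
            problems.append((entry, "not pinned to an exact version (pkg==x.y.z)"))
    return problems

if __name__ == "__main__":
    reqs = ["pandas==1.5.3", "numpy>=1.20", "git+https://example.com/repo.git"]
    for entry, reason in check_requirements(reqs):
        print(f"{entry}: {reason}")
```

A file that passes this check but still fails on the pool points back at the minimal-package isolation step: upload just `pandas==1.5.3` and compare.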
I hope this is helpful! Do not hesitate to let me know if you have any other questions or clarifications.
Please don't forget to close out the thread by upvoting and accepting this as an answer if it was helpful.