SystemReservedJob-LibraryManagement stuck in queued state

TI 0 Reputation points
2026-03-27T17:21:58.9666667+00:00

SystemReservedJob-LibraryManagement doesn't leave the Queued state when trying to upload a "requirement.txt" file to a Spark pool

Azure Synapse Analytics

An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.


2 answers

  1. Sina Salam 28,361 Reputation points Volunteer Moderator
    2026-03-31T15:13:34.4+00:00

    Hello TI,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that the SystemReservedJob-LibraryManagement job is stuck in the Queued state.

    The following steps work through the most common root causes and should help resolve it:

    1. First, confirm your Spark pool is truly healthy, not just marked “Running”, by checking active nodes, executor availability, and autoscale behavior in Synapse Studio. If active nodes show 0 or the pool remains in a provisioning state, restart the pool as recommended in the Azure Synapse Analytics documentation: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-pool-configurations
    2. Next, inspect the Synapse Monitor hub for hidden queue saturation by identifying pending or stalled (“zombie”) jobs that may block system operations. If the queue is congested, cancel all running jobs and restart the pool, following guidance here: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/monitoring/how-to-monitor-spark-applications
    3. Then, validate your requirements.txt file carefully, as unsupported or complex dependencies often prevent library jobs from completing. Test it locally in a clean virtual environment with pip install -r requirements.txt, and follow the dependency guidelines at: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-manage-packages
    4. After that, isolate dependency issues by uploading a minimal requirements file containing only a single pinned package such as pandas==1.5.3; if this succeeds, the failure lies in your original dependency list rather than the Spark environment. This aligns with best practices in: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-manage-python-packages
    5. Also verify that the Synapse workspace managed identity has proper access to linked storage services, since permission failures can silently block internal jobs. Ensure role assignments are correct using: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions
    6. If issues persist, perform a full Spark pool reset by stopping it, waiting a few minutes, and starting it again to clear scheduler deadlocks and internal queue corruption. This reset approach is supported in operational guidance: https://dotnet.territoriali.olinfo.it/azure/synapse-analytics/spark/apache-spark-troubleshoot
    7. Finally, if the job remains stuck after all checks, treat it as a backend orchestration issue and open a support request with workspace name, Spark pool name, and job ID. Use official escalation guidance from: https://dotnet.territoriali.olinfo.it/azure/azure-portal/supportability/how-to-create-azure-support-request
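
    Steps 3 and 4 can be partially automated before uploading anything. Below is a minimal local sketch (the helper name and the pinned-entry regex are mine, not part of any Synapse tooling) that flags requirements.txt lines likely to stall the library-management job, such as unpinned versions or VCS installs:

```python
import re

# Entries pinned as "name==version" install most predictably on a Spark pool.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9._]+$")

def check_requirements(lines):
    """Split requirements.txt lines into (ok, suspect) lists.

    'suspect' collects entries that are unpinned, use local paths,
    or pull from VCS URLs -- common causes of stalled library jobs.
    """
    ok, suspect = [], []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if PINNED.match(line):
            ok.append(line)
        else:
            suspect.append(line)
    return ok, suspect

ok, suspect = check_requirements([
    "pandas==1.5.3",
    "numpy",                      # unpinned version
    "git+https://example.com/x",  # VCS install
])
print(ok)       # ['pandas==1.5.3']
print(suspect)  # ['numpy', 'git+https://example.com/x']
```

    Running this on your real requirements.txt before the upload narrows the minimal-file test in step 4 to the entries actually worth isolating.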

    I hope this is helpful! Do not hesitate to let me know if you have any other questions or clarifications.


    Please don't forget to close the thread by upvoting and accepting this as an answer if it is helpful.


  2. Pilladi Padma Sai Manisha 6,740 Reputation points Microsoft External Staff Moderator
    2026-03-27T18:03:16.1333333+00:00

    Hi TI,

    Thank you for reaching out to Microsoft Q&A!
    SystemReservedJob-LibraryManagement is an internal Spark job that runs when libraries (such as a requirements.txt, .jar, or .whl file) are uploaded to a Spark pool. If it remains in a Queued state, this typically indicates a problem with the Spark compute environment rather than any database component.

    This can occur if the Spark pool is paused, not fully provisioned, or experiencing resource constraints. It may also happen when there are other running or stuck jobs blocking execution, or due to issues in the workspace or managed resource group.

    To resolve this, ensure the Spark pool is in a Running state and try restarting it. Check for any long-running or stuck jobs and cancel them if needed, then retry the library upload. If the issue continues, recreating the Spark pool or raising a support request is recommended.
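
    Besides Synapse Studio, stuck sessions can also be inspected through the workspace's Livy-compatible REST endpoint. The sketch below only builds the URLs involved; the workspace and pool names are placeholders, the API version shown is an assumption to verify against your workspace, and real calls would additionally need an Azure AD bearer token:

```python
# Sketch: construct the Synapse Livy-style REST URLs used to list (GET)
# and cancel (DELETE) Spark sessions. API version is an assumption --
# check your workspace's documentation before relying on it.
API_VERSION = "2019-11-01-preview"

def sessions_url(workspace: str, pool: str) -> str:
    """URL that lists Spark sessions on the given pool (GET)."""
    return (f"https://{workspace}.dev.azuresynapse.net"
            f"/livyApi/versions/{API_VERSION}"
            f"/sparkPools/{pool}/sessions")

def cancel_url(workspace: str, pool: str, livy_id: int) -> str:
    """URL that cancels one Spark session by its Livy id (DELETE)."""
    return f"{sessions_url(workspace, pool)}/{livy_id}"

print(sessions_url("myworkspace", "mypool"))
print(cancel_url("myworkspace", "mypool", 42))
```

    Cancelling any long-queued sessions found this way, then retrying the library upload, matches the "cancel stuck jobs and retry" step above.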

    In short, this is a Spark runtime or compute-related issue within Synapse, which falls under Data & Analytics; it is not a database issue.

