Outage in Circle CI

Jobs stuck in running state

Resolved · Minor
December 03, 2025 - Started about 10 hours ago - Lasted about 1 hour
Official incident page

Incident Report

At 16:20 UTC, we began experiencing delays in job triggering and starts across all resource classes. Some workflows and jobs may appear stuck in a running state.

What's impacted: Job triggering is experiencing delays or is stuck. This affects all resource classes and executors.

What to expect: If you have workflows that appear stuck and haven't started, we recommend manually rerunning them.

We are actively investigating the root cause and working to restore normal processing speeds. Next update: We will provide an update within 30 minutes or earlier with our progress.
Components affected
Circle CI Pipelines & Workflows

Latest Updates (most recent first)
RESOLVED about 9 hours ago - at 12/03/2025 06:12PM

Between 16:20 and 16:32 UTC, job triggering and workflow starts experienced disruptions across all resource classes due to memory pressure on our internal job distributor systems. We identified the issue and scaled our infrastructure to handle the load. Services returned to normal operation at 16:32 UTC.

What was impacted: Job triggering and workflow starts were disrupted for 12 minutes. Some workflows and jobs appeared stuck in a running state during this window.

Resolution: Our systems are now operating normally with additional capacity in place to prevent similar disruptions. If you had workflows or jobs that were stuck during this window, please manually rerun them.
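If you have many stuck workflows, rerunning each one through the web UI can be tedious. CircleCI's v2 API exposes a workflow rerun endpoint (`POST /api/v2/workflow/{id}/rerun`), which can be scripted. A minimal sketch, assuming a personal API token; the workflow ID and token values are placeholders:

```python
# Hedged sketch: rerun a CircleCI workflow via the v2 API.
# Endpoint: POST https://circleci.com/api/v2/workflow/{workflow-id}/rerun
# The workflow ID and token passed in are placeholders, not real values.
import json
import urllib.request

API_BASE = "https://circleci.com/api/v2"

def build_rerun_request(workflow_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) the POST request for the rerun endpoint."""
    url = f"{API_BASE}/workflow/{workflow_id}/rerun"
    return urllib.request.Request(
        url,
        data=json.dumps({}).encode(),  # empty body reruns the whole workflow
        headers={"Circle-Token": token, "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_rerun_request("your-workflow-id", "your-api-token")
    print(req.full_url)
    # urllib.request.urlopen(req)  # uncomment to actually send the rerun
```

The workflow IDs themselves can be found in the workflow page URL or listed via the pipelines API; sending the request with an empty JSON body reruns the workflow from the beginning.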

The incident is now resolved, and we will be conducting a thorough review to understand what triggered the memory pressure and to identify any additional preventive measures.

MONITORING about 10 hours ago - at 12/03/2025 05:27PM

As of 16:32 UTC, job triggering and workflow starts have returned to normal operation across all resource classes. The impact was limited to a 12-minute window between 16:20 and 16:32 UTC.

What's impacted: All new jobs and workflows are now starting normally.

What to expect: If you have workflows or jobs that were stuck during the 16:20-16:32 UTC window, please manually rerun them.

We are continuing to investigate the root cause of this disruption and will provide an update within 30 minutes or once our investigation is complete.

INVESTIGATING about 10 hours ago - at 12/03/2025 04:53PM

At 16:20 UTC, we began experiencing delays in job triggering and starts across all resource classes. Some workflows and jobs may appear stuck in a running state.

What’s impacted: Job triggering is experiencing delays or is stuck. This affects all resource classes and executors.

What to expect: If you have workflows that appear stuck and haven’t started, we recommend manually rerunning them.

We are actively investigating the root cause and working to restore normal processing speeds. Next update: We will provide an update within 30 minutes or earlier with our progress.

Latest Circle CI outages

Jobs not starting - about 6 hours ago
Pipelines page not loading - about 13 hours ago
