1 incidents in the last 7 days
5 incidents in the last 30 days
Last check: 1 minute ago
Last known issue: 2 days ago
Resolved Customers may experience a delay starting macOS jobs
Customers may experience a delay starting macOS jobs. We have identified the issue and are monitoring the situation.
Resolved Delayed start and/or not running macOS jobs
Some customers are experiencing a delayed start in macOS jobs or jobs are not running at all. We are currently investigating the cause and will update by 5:10 PM UTC.
Resolved Docker jobs are delayed with some jobs failing
We are investigating an issue where Docker jobs are delayed and failing. We will share an update on the status in 20 minutes.
Resolved Browser crash when trying to add a new project
We are investigating an issue where adding a project causes the browser to crash, preventing users from adding the project to CircleCI. We will share an update on the status in 20 minutes.
Resolved Delay in builds
We are seeing a delay in jobs due to a delay in scheduling work.
Resolved Tags not triggering builds - 500 Errors
We are currently investigating an error where some tags are not successfully triggering builds, resulting in 500 errors for some customers and end users.
Resolved Jobs are being delayed
From 16:50 UTC we are observing a delay of approximately 30 seconds (down from one minute).
Resolved CircleCI UI unavailable, API still accessible
We are currently investigating an issue with the CircleCI UI being unavailable. The API remains accessible, and builds are still running.
Resolved Machine jobs slow to provision
We're currently seeing elevated error rates leading to long queue times for machine (Linux) jobs. If your builds are affected, they will encounter an "Infrastructure Fail" error and will be automatically retried. Please accept our apologies for the disruption.
Resolved Slow builds for some customers
Starting at 14:00 UTC, we have been experiencing intermittent degraded performance on approximately 10% of jobs, stemming from heavy levels of platform abuse. We have taken steps to resolve this; however, we are continuing to work on restoring performance to normal levels.