This incident has been resolved, but we will continue monitoring closely.
All systems are operational, but we will leave the systems category at Degraded Performance until we have fully cleared the backlog of background processing jobs.
We have implemented another fix and are monitoring the results.
We are continuing to monitor for any further issues.
A partial fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
While monitoring, we have discovered some additional planner anomalies that are slowing down queries associated with our various calculation jobs.
We are investigating those again and working to identify and implement a fix.
We will continue posting updates here.
All systems operational. We are carefully scaling resources and monitoring database performance to ensure stable recovery.
Some delays in build and coverage report processing may still be observed as we restore full capacity.
Thank you for your continued patience — we’ll share further updates as recovery progresses.
We have completed implementation of our fix. We are cautiously resuming background processing and will continue monitoring closely. If you notice any delays in build processing, rest assured they will be resolved shortly.
Thank you for your patience — more updates will follow as we return to full capacity.
We’re currently experiencing an outage due to unexpected query planner behavior following our recent upgrade to PostgreSQL 16.
Despite extensive preparation and testing, one of our core background queries began performing full table scans under the new version, causing a rapid increase in load and job backlog.
What we're doing:
- We’ve paused background job processing to stabilize the system.
- We tried the standard quick fixes, such as adjusting database parameters that influence planner choices, with no effect.
- We're now actively deploying a targeted database index to resolve the performance issue (a sketch of what that can look like follows after this list).
- We’ve identified a longer-term fix that will make the query safer and more efficient on the new version of PostgreSQL.
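For readers curious what deploying such an index can look like, here is a minimal, hypothetical sketch. The table and column names (`builds`, `repo_id`, `state`) and the connection string are assumptions for illustration, not the actual Coveralls schema or migration.

```python
# Hypothetical sketch only: schema names are illustrative, not Coveralls' own.
import psycopg2

conn = psycopg2.connect("dbname=coveralls")  # assumed connection string
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction

with conn.cursor() as cur:
    # Build the index without holding a long write lock on the busy table.
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_builds_repo_state "
        "ON builds (repo_id, state)"
    )
conn.close()
```

Using CONCURRENTLY lets the index build proceed while background jobs keep writing, at the cost of a slower build.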
Why this happened:
PostgreSQL 16 introduced changes to how certain types of queries are planned. A query that performed well in PostgreSQL 12 unexpectedly triggered a much more expensive plan in 16. We're correcting for that now.
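As an illustration of how a regression like this typically shows up (the query below is hypothetical, not the actual background query), comparing plans with EXPLAIN reveals the new version choosing a sequential scan where the old one used an index:

```python
# Hypothetical sketch: inspect the plan PostgreSQL chooses for a suspect query.
# The query and schema are illustrative only.
import psycopg2

QUERY = "SELECT id FROM builds WHERE repo_id = %s AND state = 'pending'"

conn = psycopg2.connect("dbname=coveralls")  # assumed connection string
with conn.cursor() as cur:
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + QUERY, (42,))
    for (plan_line,) in cur.fetchall():
        print(plan_line)
    # A "Seq Scan on builds" node here, where the previous version produced an
    # Index Scan, is the kind of planner regression described above.
conn.close()
```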
Estimated recovery:
Background job processing is expected to resume within 20–40 minutes, with full service restoration shortly thereafter.
We’ll continue to post updates here as we make progress. Thanks for your patience — we’re on it.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
We need to pause processing momentarily to clear a backlog of DB connections. We cut over to a new database version this weekend, and even after months of planning and preventative steps, planner regressions are still common during periods of elevated usage after such a change. We will identify the offending SQL statements, fix their planner issues, and restart work as soon as possible. Thanks for your patience as we work through this as quickly as possible.
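For context, a hedged sketch of how offending statements are commonly surfaced, assuming the pg_stat_statements extension is enabled (column names as of PostgreSQL 13+); this is illustrative, not Coveralls' actual tooling:

```python
# Hypothetical sketch: list the statements consuming the most total execution
# time, via the pg_stat_statements extension (assumed to be enabled).
import psycopg2

conn = psycopg2.connect("dbname=coveralls")  # assumed connection string
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT calls,
               round(total_exec_time::numeric, 1) AS total_ms,
               round(mean_exec_time::numeric, 1)  AS mean_ms,
               left(query, 80)                    AS query
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10
        """
    )
    for calls, total_ms, mean_ms, query in cur.fetchall():
        print(f"{calls:>10} calls  {total_ms:>12} ms  {mean_ms:>10} ms/call  {query}")
conn.close()
```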
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.