1 incident in the last 7 days
2 incidents in the last 30 days
Last check: 1 minute ago
Last known issue: 3 days ago
Resolved Slower performance for some projects
Performance slowed for some projects, mostly larger ones, as dedicated background job queues experienced greater-than-normal traffic and began backing up.
Resolved Performance slowdown in background jobs for some projects
Some projects are experiencing slower-than-normal performance in the background jobs that do two things:
1. Calculate overall coverage for builds
2. Draw the TREE view of the project's source files in the SOURCE FILES table
Projects with more than 1,000 source files are particularly affected. The background jobs are de-queuing at a normal rate and system response times are normal, so we're unsure what's causing the slowdown. There appears to have been a greater-than-normal number of jobs in the past 48-72 hours. We are currently investigating this issue and will post updates here.
Resolved Database migration
We are currently promoting a read replica to our primary. Downtime should only be a few minutes.
Resolved Performance slowdowns
For the last week to ten days, we've had reports of slow builds and lags in our data processing infrastructure that caused temporary inaccuracies in our reporting tools, sometimes taking hours to resolve. For affected projects, no data was lost, and both reports and notifications should have eventually caught up.

On initial investigation, we came to believe the effect was limited to larger projects, which use a dedicated segment of our infrastructure. While we failed to find a root cause, we started monitoring the issue to address any slowdowns and clear them as quickly as possible. In recent days, however, we've seen performance slowdowns affect general web use and appear in metrics related to RDS CPU. New highs in builds/day and new builds/min convince us that we're experiencing another wave of growth that requires us to increase our infrastructure resources.

We planned an infrastructure upgrade for this past weekend (with no downtime), but were unable to complete it due to an unexpected capacity limit on our instance class at Amazon RDS:

> Service: AmazonRDS;
> Status Code: 400;
> Error Code: InsufficientDBInstanceCapacity;
> Request ID: b5935b39-1cff-4f02-8dc0-c0a9b9cfe470

We have since resolved the issue with Amazon and are planning another upgrade overnight tonight, between 12-3 AM PST, again with no planned downtime for users.

Note: It's come to our attention during this time that the current SYSTEM METRICS reports on our status page, which measure general performance, are not sufficient for users with larger projects, whose performance may diverge greatly from the mean. Therefore, we're committed to adding additional reports for this class of project soon after we complete this planned upgrade.
Resolved Database maintenance
Some requests will fail during this brief maintenance period.
Resolved Redis server outage
This incident has been resolved.
Resolved Database upgrades in progress
We are currently investigating this issue.
Resolved Server outage
Currently undergoing database maintenance.
Resolved Database upgrade
We are currently upgrading our Postgres database to the latest version.
Resolved Bitbucket login and build outage
The issue has been identified and we're working on a fix.