Coveralls

Coveralls Status

Official source: Everything seems OK

Stats

1 incident in the last 7 days

2 incidents in the last 30 days

Automatic Checks

Last check: 1 minute ago

Last known issue: 3 days ago

Compare to Alternatives

Travis CI

0 incidents in the last 7 days

0 incidents in the last 30 days

Semaphore

2 incidents in the last 7 days

9 incidents in the last 30 days

Latest Incidents

Last 30 days (27/01 – 26/02)

Resolved: Slower performance for some projects

Performance slowed for some projects, mostly larger projects, as dedicated background job queues experienced greater than normal traffic and began backing up.

Resolved: Performance slowdown in background jobs for some projects

Some projects are experiencing slower-than-normal performance in the background jobs that do two things:

1. Calculate overall coverage for builds
2. Draw the TREE view of the project's source files in the SOURCE FILES table

Projects with more than 1,000 source files are particularly affected. The background jobs are de-queuing at a normal rate and system response times are normal, so we're unsure what's causing the slowdown. There appears to have been a greater-than-normal number of jobs in the past 48-72 hours. We are currently investigating this issue and will update here.

Resolved: Database migration

We are currently promoting a read replica to be our primary database. Downtime should only last a few minutes.

about 2 months ago Official incident report

Resolved: Performance slowdowns

For the last week to ten days, we've had reports of slow builds and lags in our data-processing infrastructure that caused temporary inaccuracies in our reporting tools, sometimes taking hours to resolve. For affected projects, no data was lost, and both reports and notifications should have eventually caught up.

On initial investigation, we came to believe the effect was limited to larger projects, which use a dedicated segment of our infrastructure. While we failed to find a root cause, we started monitoring the issue to address any slowdowns and clear them as quickly as possible. In recent days, however, we've seen performance slowdowns affect general web use and appear in metrics related to RDS CPU. New highs in builds/day and new builds/min convince us that we're experiencing another wave of growth that requires us to increase our infrastructure resources.

We planned an infrastructure upgrade for this past weekend (with no downtime), but were unable to complete it due to an unexpected capacity limit on our instance class at Amazon RDS:

> Service: AmazonRDS; Status Code: 400; Error Code: InsufficientDBInstanceCapacity; Request ID: b5935b39-1cff-4f02-8dc0-c0a9b9cfe470

We have since resolved the issue with Amazon and are planning another upgrade overnight tonight, between 12-3 AM PST, again with no planned downtime for users.

Note: It's come to our attention during this time that the current SYSTEM METRICS reports on our status page, which are a measure of general performance, are not sufficient for users with larger projects, whose performance may diverge greatly from the mean. Therefore, we're committed to adding additional reports for this class of project soon after we complete this planned upgrade.

Resolved: Database maintenance

Some requests will fail during this brief maintenance period.

over 1 year ago Official incident report

Resolved: Redis server outage

This incident has been resolved.

over 1 year ago Official incident report

Resolved: Database upgrades in progress

We are currently investigating this issue.

over 1 year ago Official incident report

Resolved: Server outage

Currently undergoing database maintenance.

over 1 year ago Official incident report

Resolved: Database upgrade

We are currently upgrading our Postgres database to the latest version.

over 1 year ago Official incident report

Resolved: BitBucket login and build outage

The issue has been identified and we're working on a fix.

over 1 year ago Official incident report

Don't miss another incident in Coveralls!

IsDown aggregates status pages from services so you don't have to. Follow Coveralls and hundreds of other services, and be the first to know when something goes wrong.

Get started now
No credit card required