Semaphore Status

Official source
Everything seems OK
Semaphore had 3 problems in the last month.
Stats

1 incident in the last 7 days

3 incidents in the last 30 days

Automatic Checks

Last check: 3 minutes ago

Last known issue: about 15 hours ago

Compare to Alternatives

Coveralls

0 incidents in the last 7 days

1 incident in the last 30 days

Travis CI

0 incidents in the last 7 days

0 incidents in the last 30 days

Latest Incidents

Last 30 days (13/04 – 13/05)

[Incident timeline chart]

Resolved Delays in pipeline processing

We are currently investigating this issue.

about 15 hours ago Official incident report

Resolved Network timeouts towards services relying on Fastly CDN

We are seeing sporadic timeouts when attempting to reach hex.pm. Our team is actively working with the upstream provider to pinpoint and resolve the issue. In the meantime, the following workarounds are available:

- Use a Hex mirror: `HEX_MIRROR=https://hexpm.upyun.com mix deps.get`
- Add a retry around `mix deps.get`, e.g.: `retry -t 10 mix deps.get`
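The second workaround assumes the `retry` utility is installed. As a minimal sketch for environments without it, a plain-shell helper can provide the same behavior; `retry_n` is a hypothetical name introduced here, and the mirror URL is the one quoted in the incident notes above.

```shell
# retry_n: run a command up to N times, pausing 1s between failed attempts.
# Returns 0 on the first success, 1 if every attempt fails.
retry_n() {
  attempts=$1; shift
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0   # success: stop retrying
    sleep 1            # brief pause before the next attempt
  done
  return 1
}

# Usage (assumption: a Mix project, flaky network towards hex.pm):
#   HEX_MIRROR=https://hexpm.upyun.com retry_n 10 mix deps.get
```

This mirrors what `retry -t 10 mix deps.get` does, without requiring an extra dependency on the build image.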

Resolved Delays in job processing on Semaphore 2.0 for a1-standard-4 machines, Xcode 12

We are currently investigating this issue.

Resolved Delays in job processing on Semaphore 2.0 for a1-standard-4 machines

We are currently investigating this issue.

about 1 month ago Official incident report

Resolved Network connectivity issues

We are currently investigating this issue.

about 2 months ago Official incident report

Resolved Delays in job processing on Semaphore 2.0

We are currently investigating this issue.

about 2 months ago Official incident report

Resolved Delays in job processing on Semaphore 2.0

We are currently investigating this issue.

Resolved Delays in job processing on Semaphore 2.0

We are currently investigating this issue.

Resolved Elevated error rates on Semaphore Classic

We are currently investigating this issue.

Resolved Sporadic slow Docker push towards AWS ECR

We are seeing sporadic packet loss between our build cluster and the AWS us-east-1 region. Our team is actively working with the upstream provider to pinpoint and resolve the issue.
