Know when your cloud services are down

IsDown monitors the status of AWS and more than 1,000 other services.
Monitor the services that impact your business, and get an alert when they go down.

AWS

AWS Status

AWS not working for you? Report it!

Stats

1 incident in the last 7 days

6 incidents in the last 30 days

Automatic Checks

Last check: 1 minute ago

Last known issue: 5 days ago

Compare to Alternatives

Google Cloud

0 incidents in the last 7 days

1 incident in the last 30 days

Heroku

1 incident in the last 7 days

8 incidents in the last 30 days

fortrabbit

0 incidents in the last 7 days

5 incidents in the last 30 days

Anchor Host

0 incidents in the last 7 days

0 incidents in the last 30 days

Netlify

0 incidents in the last 7 days

5 incidents in the last 30 days

Latest Incidents

Last 30 days


Resolved: Increased Error Rates

4:41 PM PDT We are investigating increased API error rates for the RunInstances API in the EU-SOUTH-1 Region.

5:04 PM PDT We can confirm increased API error rates for the RunInstances API in the EU-SOUTH-1 Region. This is also affecting services that depend on EC2, such as Auto Scaling, and launches of service instances that are built on EC2, such as RDS and ElastiCache. Instances that are already launched are operating normally. We have identified the root cause and are actively testing a mitigation plan. We expect to have an update on the success of this mitigation effort in the next 30 minutes.

5:19 PM PDT Between 3:59 PM and 5:07 PM PDT customers experienced increased error rates for the EC2 RunInstances API in the EU-SOUTH-1 Region. This also affected services that depend on EC2, such as Auto Scaling, and launches of service instances that are built on EC2, such as RDS and ElastiCache. The issue has been resolved and the RunInstances API is now operating normally. This issue only affected new instance launches; instances that were already running were not affected.

Resolved: Increased Error Rates and Latencies

9:39 AM PDT We are investigating increased error rates and latencies for CloudWatch Logs APIs in the AP-NORTHEAST-1 Region. We are actively working to resolve the issue.

10:04 AM PDT We can confirm elevated API error rates and some delayed log events in the AP-NORTHEAST-1 Region. We are actively working to resolve the issue.

10:39 AM PDT We have identified the root cause of the elevated API error rates and some delayed log events in the AP-NORTHEAST-1 Region. We are beginning to see signs of recovery and continue working towards resolution.

12:07 PM PDT We continue to work on investigating and resolving increased error rates and latencies for CloudWatch Logs APIs in the AP-NORTHEAST-1 Region.

2:28 PM PDT We have completed a mitigation strategy that we anticipated would resolve the increased error rates and latencies for CloudWatch Logs APIs in the AP-NORTHEAST-1 Region. While we have seen some recovery in error rates, we continue to see elevated levels of latency and continue working towards full resolution.

3:42 PM PDT We are deploying a second mitigation strategy to resolve elevated latency and the remaining level of errors for CloudWatch Logs APIs in the AP-NORTHEAST-1 Region. We continue working towards recovery.

5:05 PM PDT Between 7:01 AM and 4:20 PM PDT we experienced increased error rates and latencies for CloudWatch Logs APIs in the AP-NORTHEAST-1 Region. The issue has been resolved and the service is operating normally.

Resolved: Instance Connectivity

5:29 AM PDT We are investigating increased error rates and latencies for the EC2 APIs and connectivity issues for some instances in a single Availability Zone in the EU-CENTRAL-1 Region.

6:05 AM PDT We are seeing increased error rates and latencies for the RunInstances and CreateSnapshot APIs, and increased connectivity issues for some instances in a single Availability Zone (euc1-az3) in the EU-CENTRAL-1 Region. We have resolved the networking issues that affected the majority of instances within the affected Availability Zone, but continue to work on some instances that are experiencing degraded performance for some EBS volumes. Other Availability Zones are not affected by this issue. We recommend failing away from the affected Availability Zone until this issue has been resolved.

6:29 AM PDT We continue to make progress in resolving the connectivity issues affecting some instances in a single Availability Zone (euc1-az3) in the EU-CENTRAL-1 Region. The increased error rates and latencies for the RunInstances and CreateSnapshot APIs have been resolved, as well as the degraded performance for some EBS volumes within the affected Availability Zone. We continue to work on the remaining EC2 instances that are still impaired as a result of this event, some of which may have experienced a power cycle. While we do not expect any further impact at this stage, we recommend continuing to utilize other Availability Zones in the EU-CENTRAL-1 Region until this issue has been resolved.

7:24 AM PDT Starting at 5:07 AM PDT we experienced increased connectivity issues for some instances, degraded performance for some EBS volumes, and increased error rates and latencies for the EC2 APIs in a single Availability Zone (euc1-az3) in the EU-CENTRAL-1 Region. By 6:03 AM PDT, API error rates had returned to normal levels, but some Auto Scaling workflows continued to see delays until 6:35 AM PDT. By 6:10 AM PDT, the vast majority of EBS volumes with degraded performance had recovered as well, and by 7:05 AM PDT, the vast majority of affected instances had been recovered, some of which may have experienced a power cycle. A small number of remaining instances are hosted on hardware that was adversely affected by this event and require additional attention. We continue to work to recover all affected instances and have opened notifications for the remaining impacted customers via the Personal Health Dashboard. For immediate recovery, we recommend replacing any remaining affected instances if possible.

Resolved: Propagation Delays

1:58 PM PDT We are investigating delays in propagating changes to CloudFront distributions to our edge locations. This is related to the ACM issue in the US-EAST-1 Region that we have posted to the Service Health Dashboard. Existing distributions continue to operate normally and there is no impact to serving content from our edge locations.

4:08 PM PDT CloudFront distribution changes continue to be affected by the AWS Certificate Manager issue in US-EAST-1. ACM is continuing to make progress towards recovery, but change propagation for CloudFront distributions will continue to be affected until the ACM issue is fully mitigated. At that point, we will begin to process the backlog of distribution changes, which may take additional time to complete. Existing distributions continue to operate normally and there is no impact to serving content from our edge locations.

5:06 PM PDT We are starting to see some ACM API calls succeed for CloudFront and we are starting to propagate changes for CloudFront distributions to our edge locations. ACM is continuing to make progress towards recovery. Existing distributions continue to operate normally and there is no impact to serving content from our edge locations.

6:11 PM PDT We continue to process the backlog of distribution changes and propagate the updates to our edge locations. Existing distributions continue to operate normally and there is no impact to serving content from our edge locations.

7:38 PM PDT Between 11:45 AM and 7:25 PM PDT, we experienced delays in propagating changes to CloudFront distributions to our edge locations due to the ACM issue in US-EAST-1. This issue has been resolved and the service is operating normally. During this time, previously configured distributions continued to operate without any issues and there were no issues with serving content from our CloudFront edge locations.

Resolved: Increased API Error Rates and Latency

12:12 PM PDT We are investigating increased API latency and error rates in the US-EAST-1 Region.

12:36 PM PDT We can confirm increased API latency and increased API error rates for the ACM APIs in the US-EAST-1 Region. During this time, you may be unable to request new certificates, and may also observe errors when attempting to list and/or modify existing certificates. This issue impacts both the AWS Management Console and the ACM APIs. Additionally, you may also receive API errors when attempting to associate new resources. Existing associated resources are unaffected and continue to operate as normal. We have identified the root cause of the issue and are working toward mitigation and resolution. We will provide further updates as we have more information to share.

1:14 PM PDT We continue to work toward mitigating the affected subsystem responsible for the increase in API errors and latencies for the ACM APIs. Other AWS services (such as ClientVPN) that attempt to create or associate new certificates may also be impacted by this issue. Existing resources remain unaffected by this issue and continue to operate normally.

2:13 PM PDT We are continuing to drive to root cause and work toward mitigating the affected subsystem responsible for the increase in API errors and latencies for the ACM APIs. Other AWS services (such as ClientVPN) that attempt to create or associate new certificates may also be impacted by this issue. Existing resources remain unaffected by this issue and continue to operate normally.

2:56 PM PDT We have identified some workloads on the affected subsystem of the ACM API that may be causing the increase in API errors and latency, and we are reviewing and testing procedures to mitigate their impact. We do not have an ETA at this time. This issue does affect services like CloudFront and Elasticsearch that rely on ACM for their certificate needs. It would also impact CloudFormation workflows that either directly or indirectly need to manipulate ACM certificates. All workflows that depend on ACM certificates that are already created are not impacted by this event and continue to operate normally.

3:33 PM PDT We continue to work toward mitigating the increased latencies and error rates affecting the ACM APIs. Until this point, some requests and retries have been succeeding. At this time, we are temporarily not accepting additional API requests in order to help accelerate mitigation and recovery. Once we begin accepting new API requests, requests will be throttled. We will continue to provide updates as we progress.

4:49 PM PDT We are starting to see some ACM API calls succeed for CloudFront and ELB and we are starting to propagate changes for CloudFront distributions to our edge locations. Customer-facing APIs are still throttled. ACM is continuing to make progress towards recovery.

5:47 PM PDT We are seeing recovery for customers and throttling has been removed from most APIs. We are working through the final changes to unblock the following APIs: RequestCertificate, ListCertificates, and ImportCertificate, and expect to have those final changes in place shortly. We will update as we make progress towards full recovery.

7:05 PM PDT We are seeing recovery for customers and throttling has been removed from most APIs. We have unblocked RequestCertificate for most use cases and are working to have the final changes in place shortly. We will update as we make progress towards full recovery.

7:46 PM PDT Between 11:45 AM and 7:42 PM PDT, customers experienced increased ACM API errors and latency in the US-EAST-1 Region that impacted the ability to issue new certificates, import certificates, and retrieve information about certificates from ACM. Existing certificates that were already vended to services such as CloudFront and ELB continued to operate and were unaffected. This issue also impacted provisioning and scaling workflows for services that depend on ACM for certificate management needs, such as CloudFront and ELB, as well as CloudFormation operations that involve mutating ACM certificates. This issue was caused by a previously unknown limit in an ACM storage subsystem. We have identified the limit issue and have mitigated it. The issue has been fully resolved and all ACM API requests are being answered normally. During this time, all existing resources that had a configured ACM certificate (such as ELB load balancers and CloudFront distributions) continued to operate normally and were not impaired by this issue.
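The ACM incident above illustrates a common recovery pattern: during mitigation, the service deliberately throttles API requests, so well-behaved clients retry with exponential backoff rather than hammering the API. A minimal sketch of that idea in Python (a generic pattern, not AWS SDK code; the SDKs ship their own retry logic):

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry a throttled call with exponential backoff plus jitter.

    `fn` is any zero-argument callable; RuntimeError stands in for a
    throttling error here purely for illustration.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # Delay doubles each attempt; jitter avoids retry stampedes.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Demo: a call that is "throttled" twice before succeeding.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("Rate exceeded")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.05)
print(result)  # prints "ok" after two retries
```

The jitter term matters in practice: without it, many clients that were throttled at the same moment would all retry at the same moment too.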

How it works

  1. Step 1 Create an account

    Start with a free account that will allow you to monitor up to five services. Sign up via Google, Slack, or email.

  2. Step 2 Select your tools

    There are more than 1,100 services to choose from, and we're adding more every week.

  3. Step 3 Set up notifications

    You can get notifications via email or Slack, or use Zapier or webhooks to build your own workflows.

  4. Step 4 You're ready!

    You'll never be caught by surprise again. Every time there's a problem with one tool, you'll get a notification ASAP.

Unified tools status

Check the status of all your tools in one place

IsDown integrates with hundreds of services, saving you the hassle of checking each status page one by one. We also let you control how you receive notifications.
We monitor all problems and outages and keep you posted on their current status in near real time.

Notifications in real time

Get notifications in your favorite communication channel

You can easily get notifications in email or Slack, or use webhooks and Zapier to feed service status into your own workflows.

Email
Slack
Zapier
Webhooks
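If you choose the webhook channel, your receiver only needs to parse the incoming JSON and decide whether to act. A minimal sketch in Python (the payload fields below are illustrative assumptions for the example, not IsDown's documented webhook schema):

```python
import json

# Hypothetical webhook payload -- field names are assumptions,
# not IsDown's published schema.
EXAMPLE_PAYLOAD = json.dumps({
    "service": "AWS",
    "status": "down",
    "title": "Increased Error Rates",
    "started_at": "2021-07-26T16:41:00Z",
})

def should_page_oncall(raw_payload, watched_services):
    """Return True when a watched service reports a 'down' status."""
    event = json.loads(raw_payload)
    return event.get("status") == "down" and event.get("service") in watched_services

print(should_page_oncall(EXAMPLE_PAYLOAD, {"AWS", "Heroku"}))  # True
```

The same filter function can sit behind any small HTTP handler, or inside a Zapier code step, before forwarding the alert to your paging tool.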

React faster

Help your teams with more data

Engineering

You already monitor internal systems. Add an external dimension to your monitoring data so you can correlate incidents with third-party outages.

Customer Support

Know about outages before your clients tell you. Anticipate possible issues and make the necessary arrangements.

Marketing

Is one of your competitors down? It might be a good time to spread the word about your service.

Trusted by teams from all over the world

Services Available
1,100
Incidents
63,623
Ongoing Incidents
105

Start monitoring your tools today

Start today for FREE