AWS Status

Official source: Everything seems OK

AWS had 7 problems in the last month.

Get a notification when the status changes! Start your FREE account today!


Easily monitor 30 service providers that your business relies on for only $7/month.
Monitor up to 5 services for free.

Sources

AWS status page

Stats

3 incidents in the last 7 days

7 incidents in the last 30 days

Automatic Checks

Last check: 5 minutes ago

Last known issue: about 17 hours ago

Compare to Alternatives

Google Cloud

4 incidents in the last 7 days

10 incidents in the last 30 days

Heroku

3 incidents in the last 7 days

15 incidents in the last 30 days

Netlify

2 incidents in the last 7 days

10 incidents in the last 30 days

fortrabbit

0 incidents in the last 7 days

1 incident in the last 30 days

Anchor Host

0 incidents in the last 7 days

0 incidents in the last 30 days

Latest Incidents

Last 30 days

[Incident timeline chart: 15/04 to 15/05]

Resolved Increased Errors Connecting to WorkSpaces

1:23 PM PDT: We are investigating connectivity issues and WorkSpaces automatically rebooting in the US-EAST-1 Region.
2:05 PM PDT: We can confirm an issue causing a loss of connectivity and reboots for a subset of Windows WorkSpaces in the US-EAST-1 Region. We have identified the root cause and are rolling out a mitigation.
3:26 PM PDT: We are currently deploying a mitigation for the issue that is causing loss of connectivity and reboots for a subset of Windows WorkSpaces in the US-EAST-1 Region. WorkSpaces will recover as this mitigation is deployed.
5:01 PM PDT: We continue to see recovery for some Windows WorkSpaces as our mitigation is deployed in the US-EAST-1 Region.
6:10 PM PDT: Recently, we experienced an issue that caused loss of connectivity and reboots for a subset of Windows WorkSpaces in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

about 17 hours ago Official incident report

Resolved API Errors, Increased Provisioning Times / Registration Latencies

6:47 AM PDT: We are investigating increased API error rates and increased provisioning/registration latencies for ELBs in the US-EAST-1 Region. Connectivity to existing load balancers is not affected.
10:38 AM PDT: Starting at 5:05 AM PDT, we experienced periods of increased error rates and provisioning/registration latencies for ELB APIs in the US-EAST-1 Region. The periods of elevated API error rates were resolved at 7:55 AM PDT. The periods of increased provisioning/registration latencies were resolved for the vast majority of customers by 8:31 AM PDT, with full recovery at 9:11 AM PDT. Connectivity to existing load balancers was not affected. The issue has been resolved and the service is operating normally.

about 19 hours ago Official incident report

Resolved API Errors, Increased Provisioning Times / Registration Latencies

6:47 AM PDT We are investigating increased API error rates and increased provisioning/registration latencies for ELBs in the US-EAST-1 Region. Connectivity to existing load balancers is not affected.

about 23 hours ago Official incident report

Resolved Network Connectivity Issue

12:56 PM PDT: We are investigating connectivity issues for some instances in a single Availability Zone (sae1-az2) in the SA-EAST-1 Region.
1:19 PM PDT: Starting at 12:20 PM PDT, we experienced low levels of packet loss for Internet connectivity for some instances in a single Availability Zone (sae1-az2) in the SA-EAST-1 Region. Between 12:48 PM and 12:59 PM PDT, DNS resolution within the affected Availability Zone (sae1-az2) and connectivity between the affected Availability Zone (sae1-az2) and other Availability Zones using public IP addressing also experienced low levels of packet loss. At 1:12 PM PDT, all packet loss issues were resolved. The issue has been resolved and the service is operating normally.

Resolved Change Propagation Delays

11:16 AM PDT: We are investigating delays in propagation times for changes to CloudFront configurations. End-user requests for content from our edge locations are not affected by this issue and are being served normally.
11:41 AM PDT: We can confirm that actual invalidations are being propagated as usual, but the invalidation status confirmation through the console and API is delayed. This issue is not impacting propagation times for changes to CloudFront configurations as previously stated. End-user requests for content from our edge locations are not affected by this issue and are being served normally.
12:10 PM PDT: We have identified the root cause of delays in reporting status changes of CloudFront invalidations. We continue to work toward resolution. All CloudFront edge locations are consuming configuration changes and invalidations normally. End-user requests for content from our edge locations are not affected by this issue and are being served normally.
12:43 PM PDT: Between 10:04 AM PDT and 12:20 PM PDT, customers might have experienced delays in the reporting of status changes for CloudFront invalidations. During this time, all CloudFront edge locations were consuming configuration changes and invalidations normally but were not updating status changes in the console or via CloudFront APIs. The issue has been resolved and the service is operating normally.
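For readers who want to see how this kind of delay surfaces, here is a minimal sketch using boto3's get_invalidation call, which returns exactly the status field the incident describes as lagging. The distribution and invalidation IDs are placeholders, not values from the incident.

```python
# Minimal sketch: checking a CloudFront invalidation's reported status with boto3.
# During the incident above, this status could lag even though edge locations
# had already applied the invalidation. IDs below are placeholders.
import boto3

cloudfront = boto3.client("cloudfront")

def invalidation_status(distribution_id: str, invalidation_id: str) -> str:
    """Return 'InProgress' or 'Completed' as reported by the CloudFront API."""
    resp = cloudfront.get_invalidation(
        DistributionId=distribution_id,
        Id=invalidation_id,
    )
    return resp["Invalidation"]["Status"]

if __name__ == "__main__":
    print(invalidation_status("EDFDVBD6EXAMPLE", "I2J0I21PCUYOIK"))
```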

Resolved Increased Latencies and Error Rates

5:22 PM PDT: Between 4:14 PM and 4:22 PM PDT, the Lambda invoke API experienced increased latencies and error rates in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

Resolved Increased API Latencies

8:56 AM PDT: We are investigating increased API latencies in the US-EAST-1 Region.
9:36 AM PDT: We are starting to see recovery for increased API latencies and timeouts in the US-EAST-1 Region and continue to work towards resolution.
9:51 AM PDT: Between 4:41 AM and 9:00 AM PDT, we experienced increased API latencies and timeouts in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

Resolved Compute Environments going INVALID

6:45 PM PDT: We are investigating increased transitions to INVALID of some AWS Batch Compute Environments in the US-WEST-2 Region.
7:12 PM PDT: We can confirm increased transitions to INVALID of some AWS Batch Compute Environments in the US-WEST-2 Region.
8:00 PM PDT: Between 5:25 PM and 7:52 PM PDT, some AWS Batch Compute Environments transitioned to INVALID in the US-WEST-2 Region. The issue has been resolved and the service is working normally.

about 1 month ago Official incident report
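A quick way to spot environments affected by an event like this is to query their status directly. Below is a minimal sketch using boto3's describe_compute_environments; the region matches the incident, and everything else is illustrative.

```python
# Minimal sketch: flagging AWS Batch compute environments that have transitioned
# to INVALID, as in the incident above. Region matches the incident; nothing
# else here is specific to it.
import boto3

batch = boto3.client("batch", region_name="us-west-2")

def invalid_environments() -> list[tuple[str, str]]:
    """Return (name, statusReason) for every compute environment whose status is INVALID."""
    invalid = []
    paginator = batch.get_paginator("describe_compute_environments")
    for page in paginator.paginate():
        for env in page["computeEnvironments"]:
            if env.get("status") == "INVALID":
                invalid.append((env["computeEnvironmentName"], env.get("statusReason", "")))
    return invalid

if __name__ == "__main__":
    for name, reason in invalid_environments():
        print(f"{name}: {reason}")
```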

Resolved Increased Provisioning and Scaling Latencies

12:50 PM PDT: We can confirm increased provisioning and scaling latencies for Elastic Load Balancers in the US-EAST-1 Region.
1:38 PM PDT: We have determined the root cause of increased scaling latencies for Elastic Load Balancers in the US-EAST-1 Region and are working towards mitigation.
2:39 PM PDT: The team is closely monitoring the issue with EC2 APIs, and continues to see reductions in scaling and provisioning latencies. Other load balancing operations, including target registrations and traffic processing, are unaffected. Once the issue with the EC2 APIs is resolved, we will process the backlog of any pending operations before we observe full resolution.
3:44 PM PDT: Beginning at 11:13 AM PDT, we experienced increased provisioning and scaling latencies for Elastic Load Balancers in the US-EAST-1 Region. Recovery for scaling latency occurred at 1:15 PM PDT, and recovery for provisioning latency occurred at 3:35 PM PDT. The issue has been resolved and the service is operating normally.

about 1 month ago Official incident report

Resolved Increased API Errors

11:44 AM PDT: We are investigating increased API error rates in the US-EAST-1 Region.
12:17 PM PDT: We are working to resolve the issue resulting in increased error rates for the following EC2 APIs in the US-EAST-1 Region: RunInstances, *SecurityGroups, *NetworkInterfaces, *RouteTables, *AccountAttributes, and *NetworkAcls. These APIs affect the ability to launch new EC2 instances and make mutating changes to Virtual Private Cloud (VPC) network configurations. Existing instances and networks continue to work normally. We have identified the root cause and are working towards resolution.
12:56 PM PDT: We continue to work toward recovery for the issue resulting in increased API error rates for the EC2 APIs in the US-EAST-1 Region. We have identified the root cause and applied mitigations to reduce the impact while we continue to work towards full mitigation. Some APIs may return errors or "request limit exceeded" when calling an affected API or using the EC2 Management Console. In many cases, a retry of the request may succeed, as some requests are still succeeding. Other AWS services that use these affected APIs for their own workflows may also be experiencing impact. These services have posted impact via the Personal Health and/or Service Health Dashboards. We will provide an update in 30 minutes.
1:22 PM PDT: We continue to work towards full resolution for the issue resulting in increased error rates for the EC2 APIs in the US-EAST-1 Region. We have applied some request throttling for the affected APIs, which has reduced error rates, allowing several APIs to see early recovery. We are adjusting these throttles for some of the affected APIs, which is causing some additional API errors and elevated errors in the EC2 Management Console. We expect API error rates to continue to recover with the mitigation steps we have taken as we work towards full recovery.
2:13 PM PDT: We continue to work towards full recovery for the issue resulting in increased error rates for the EC2 APIs in the US-EAST-1 Region. We have adjusted request throttles to reduce error rates for the affected APIs. While this has worked for some of the affected APIs, such as RunInstances, some of the affected APIs are now returning "request limit exceeded". If this occurs, reduce your request rate for the affected API and retry. With the request throttling, some of the affected services are also beginning to see recovery. We continue to work on resolving the underlying root cause and expect to be fully recovered within the next hour.
3:01 PM PDT: We have further adjusted request throttles to reduce error rates for the affected APIs, so "request limit exceeded" errors should now be significantly reduced. We are now in the final stages of resolving the issue with the underlying data store. Once resolved, we will remove all API throttles and expect all API operations to return to normal levels.
3:55 PM PDT: We have resolved the issue resulting in the increased error rates for the EC2 APIs, and removed all API request throttling, in the US-EAST-1 Region. Beginning at 11:11 AM PDT, we experienced an increase in API error rates for RunInstances and networking-related EC2 APIs. At 12:57 PM PDT, request throttling was applied to several of the affected APIs, which helped to improve error rates for the RunInstances API. We continued to work towards full resolution, while removing request throttles, until 3:30 PM PDT, at which time all affected APIs returned to normal levels of operation. The issue has been resolved and the service is operating normally.

about 1 month ago Official incident report
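The guidance in this incident (reduce your request rate and retry) maps directly onto client-side retry configuration. Below is a minimal sketch using boto3: the adaptive retry mode rate-limits and retries throttled calls automatically, and the manual backoff loop shows the same idea explicitly. The parameters are illustrative, not a recommendation.

```python
# Minimal sketch of "reduce your request rate and retry" for throttled EC2 APIs.
# botocore's adaptive retry mode client-side rate-limits and retries calls such
# as RunInstances; the loop below adds an explicit exponential backoff on the
# RequestLimitExceeded error code. All parameters are illustrative.
import time

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

def run_with_backoff(max_tries: int = 5, **run_instances_kwargs):
    """Call RunInstances, backing off exponentially on RequestLimitExceeded."""
    for attempt in range(max_tries):
        try:
            return ec2.run_instances(**run_instances_kwargs)
        except ClientError as err:
            if err.response["Error"]["Code"] != "RequestLimitExceeded":
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ... between retries
    raise RuntimeError("RunInstances still throttled after retries")
```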

Incidents will happen.
You just need to be prepared for them.

We're monitoring 806 services and adding more every week.

All-in-one platform to check the status of your services

IsDown integrates with hundreds of services and saves you the hassle of visiting each status page and managing them one by one. We also help you control how you receive notifications.
We monitor all problems and outages and keep you posted on their current status in near real-time.
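As a rough illustration of what that monitoring involves, here is a minimal sketch that polls a provider's public status feed. The AWS feed URL is an assumption based on where the feed has historically lived, and feedparser is a third-party library; this is not IsDown's actual implementation.

```python
# Minimal sketch of the kind of polling a status aggregator performs: fetch a
# provider's public status feed and surface the latest entries. The feed URL
# is an assumption (it has historically lived at status.aws.amazon.com), and
# feedparser is a third-party dependency (pip install feedparser).
import feedparser

AWS_STATUS_FEED = "https://status.aws.amazon.com/rss/all.rss"  # assumed URL

def latest_incidents(limit: int = 5):
    """Yield (published, title) for the most recent entries in the feed."""
    feed = feedparser.parse(AWS_STATUS_FEED)
    for entry in feed.entries[:limit]:
        yield entry.get("published", "unknown"), entry.get("title", "")

if __name__ == "__main__":
    for published, title in latest_incidents():
        print(f"{published}  {title}")
```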

Get notifications in your favorite communication channel

You can get notifications via email or Slack, or use webhooks and Zapier to bring service status into your workflows (see the sketch after the channel list below).


Email
Slack
Zapier
Webhooks
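For the webhook option, a receiver can be as small as a single HTTP endpoint. The sketch below uses Flask, and the payload fields (service, status, title) are hypothetical; consult the actual webhook documentation for the real schema.

```python
# Minimal sketch of a webhook consumer for status notifications. The payload
# fields used here (service, status, title) are hypothetical, not IsDown's
# documented schema. Requires Flask (pip install flask).
from flask import Flask, request

app = Flask(__name__)

@app.route("/isdown-webhook", methods=["POST"])
def isdown_webhook():
    event = request.get_json(silent=True) or {}
    # Hypothetical fields; adapt to the real payload.
    service = event.get("service", "unknown")
    status = event.get("status", "unknown")
    title = event.get("title", "")
    print(f"[{service}] {status}: {title}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```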

Empower your teams with more data

IsDown collects status data from services to help you stay ahead of the game.

Engineering

You already monitor internal systems. Add another dimension (external systems) to your monitoring data and complement it with external factors.

Customer Support

Know before your clients tell you. Anticipate possible issues and make the necessary arrangements.

Marketing

Is one of your competitors down? It might be a good time to spread the word about your service.

Trusted by teams from all over the world

Services available: 806
Incidents registered: 47,831
Incidents currently monitored: 59

Ready to dive in? Start using IsDown today.

Sign up for free