AWS

OPERATIONAL

Last 30 days: 7 incidents

Incidents in the last 30 days

Possible Outage Indicated by User Reports

AWS
Started 1 day ago · Resolved in about 5 hours

Increased Error Rates and Latencies - N. Virginia

Earlier today, some EC2 instance launches within the use1-az2 Availability Zone (AZ) experienced increased latencies. We communicated with affected customers via the AWS Personal Health Dashboard shortly after the issue began. This issue has been resolved and EC2 instance launches are operating normally; however, some request throttles are currently in place for the use1-az2 AZ and are gradually being removed. Customers may experience “request limit exceeded” errors in this AZ while these throttles are in place; retries should resolve the issue.

Currently we are investigating elevated task launch failure rates for ECS tasks, on both EC2 and Fargate, for a subset of customers in the US-EAST-1 Region. Customers may also see their container instances disconnect from ECS, which can cause tasks to stop in some circumstances. ECS operates cells in the Region, and a small number of these cells are currently experiencing elevated error rates when launching new tasks; existing tasks may stop unexpectedly. When an ECS cluster is created to run tasks, it is assigned to a specific cell, so customers with a cluster in an impacted cell are seeing impact across all Availability Zones in the Region.

At this time, we recommend that customers who are able to do so create new clusters, ensuring the cluster is assigned to a healthy cell. Existing clusters in the remaining healthy cells are not affected. We have identified actions to restore the impacted cells to full service but do not have an estimated time of recovery. Customers who use EMR Serverless are also affected by this issue. We will provide an update by 4:15 PM PDT or as soon as more information becomes available.
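The two customer-side workarounds mentioned in the update (retrying EC2 launches throttled with "request limit exceeded", and creating a new ECS cluster so it is assigned to a healthy cell) could look roughly like the boto3 sketch below. This is an illustrative sketch, not an AWS-provided remediation script; the AMI ID, instance type, and cluster name are placeholder assumptions, not values from the status update.

# Sketch of the workarounds described above (assumed placeholder values).
import random
import time

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")
ecs = boto3.client("ecs", region_name="us-east-1")


def run_instance_with_backoff(max_attempts=5, **run_args):
    """Call RunInstances, retrying with exponential backoff on request throttling."""
    for attempt in range(1, max_attempts + 1):
        try:
            return ec2.run_instances(MinCount=1, MaxCount=1, **run_args)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code != "RequestLimitExceeded" or attempt == max_attempts:
                raise
            # Back off with jitter before retrying the throttled launch call.
            time.sleep(min(2 ** attempt, 30) + random.random())


# Placeholder launch parameters; adjust to your own environment.
run_instance_with_backoff(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
)

# ECS workaround: a cluster is assigned to a cell when it is created, so a new
# cluster can land on a healthy cell; point new tasks or services at it.
ecs.create_cluster(clusterName="fallback-cluster")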

AWS
Started 2 days ago · Resolved in about 7 hours

Possible Outage Indicated by User Reports

AWS
Started 9 days ago · Resolved in about 1 hour

Possible Outage Indicated by User Reports

AWS
Started 10 days ago · Resolved in about 1 hour

Increased Error Rates and Latencies - N. Virginia

We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region. We will provide another update in the next 30-45 minutes.

AWS
Started 11 days ago · Resolved in about 16 hours

AWS DynamoDB and CloudWatch outage

Users report errors accessing DynamoDB and CloudWatch, likely due to connection issues with AWS endpoints in the US East region.
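A quick client-side check of the reported failure mode (whether the DynamoDB and CloudWatch endpoints in the US East region respond at all) might look like the sketch below; the short timeouts, single-attempt retry setting, and choice of us-east-1 are assumptions for illustration.

# Probe DynamoDB and CloudWatch endpoints with short timeouts so a hung
# connection surfaces quickly instead of stalling.
import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

cfg = Config(connect_timeout=3, read_timeout=5, retries={"max_attempts": 1})

for service, probe in [
    ("dynamodb", lambda c: c.list_tables(Limit=1)),
    ("cloudwatch", lambda c: c.list_metrics()),
]:
    client = boto3.client(service, region_name="us-east-1", config=cfg)
    try:
        probe(client)
        print(f"{service}: endpoint reachable")
    except (BotoCoreError, ClientError) as err:
        print(f"{service}: request failed ({err})")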

AWS
Started 11 days ago · Resolved in about 19 hours

Possible Outage Indicated by User Reports

AWS
Started 24 days ago · Resolved in about 3 hours