Outage in AWS

Increased Error Rates and Latencies - N. Virginia

Status: Resolved (Minor)
October 20, 2025 - Started about 1 month ago - Lasted about 16 hours

Incident Report

We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region. We will provide another update in the next 30-45 minutes.
Components affected
Each component below was flagged in us-east-1 and/or globally (duplicate entries removed): Amazon API Gateway, Amazon AppFlow, Amazon AppStream 2.0, Amazon Athena, Amazon Chime, Amazon CloudFront, Amazon CloudWatch Application Insights, Amazon CloudWatch, Amazon Cognito, Amazon Comprehend, Amazon Connect, Amazon DocumentDB, Amazon DynamoDB, Amazon EC2, Amazon ECR, Amazon ECS, Amazon Elastic File System, Amazon EKS, Amazon ELB, Amazon EMR, Amazon ElastiCache, Amazon EventBridge, Amazon FSx, Amazon GuardDuty, Amazon Interactive Video Service, Amazon Kendra, Amazon Kinesis Data Streams, Amazon Kinesis Firehose, Amazon Kinesis Video Streams, Amazon Location Service, Amazon Managed Grafana, Amazon Managed Service for Prometheus, Amazon Managed Streaming for Apache Kafka, Amazon Managed Workflows for Apache Airflow, Amazon MQ, Amazon Neptune, Amazon OpenSearch Service, Amazon Pinpoint, Amazon Polly, Amazon Redshift, Amazon Rekognition, Amazon RDS, Amazon SageMaker, Amazon SES, Amazon SNS, Amazon SQS, Amazon S3, Amazon SWF, Amazon Textract, Amazon Timestream, Amazon Transcribe, Amazon Translate, Amazon WorkMail, Amazon WorkSpaces, AWS Account Management, AWS Amplify, AWS Application Migration Service, AWS AppSync, AWS Batch, AWS Billing Console, AWS Client VPN, AWS CloudFormation, AWS CloudHSM, AWS CloudTrail, AWS CodeBuild, AWS Config, AWS Control Tower, AWS Database Migration Service, AWS DataSync, AWS Direct Connect, AWS Directory Service, AWS EB, AWS Elastic Disaster Recovery, AWS Elemental, AWS Firewall Manager, AWS Global Accelerator, AWS Glue, AWS IAM, AWS IoT Analytics, AWS IoT Core, AWS IoT Device Management, AWS IoT Events, AWS IoT SiteWise, AWS Lake Formation, AWS Lambda, AWS License Manager, AWS NAT Gateway, AWS Network Firewall, AWS Organizations, AWS Outposts, AWS Resource Groups, AWS Secrets Manager, AWS Site-to-Site VPN, AWS Step Functions, AWS Storage Gateway, AWS Support Center, AWS Systems Manager, AWS Transit Gateway, AWS VPCE PrivateLink, EC2 Image Builder, Traffic Mirroring, AWS Cloud WAN, Amazon VPC IP Address Manager, AWS IoT FleetWise, AWS Payment Cryptography, AWS Systems Manager for SAP, AWS AppConfig, AWS B2B Data Interchange, Amazon Bedrock, Amazon DataZone, Amazon EC2 Instance Connect, Amazon EMR Serverless, Amazon EventBridge Scheduler, AWS IoT Greengrass, AWS HealthImaging, AWS HealthLake, AWS IAM Identity Center, AWS Launch Wizard, AWS HealthOmics, AWS Private Certificate Authority, Amazon Security Lake, AWS Transfer Family, Amazon VPC Lattice, AWS Verified Access, Amazon WorkSpaces Thin Client, and multiple other services.


Latest Updates (newest first)
UPDATE about 1 month ago - at 10/20/2025 09:48PM

We have restored EC2 instance launch throttles to pre-event levels and EC2 launch failures have recovered across all Availability Zones in the US-EAST-1 Region. AWS services which rely on EC2 instance launches, such as Redshift, are working through their backlog of EC2 instance launches successfully and we anticipate full recovery of the backlog over the next two hours. We can confirm that Connect is handling new voice and chat sessions normally. There is a backlog of analytics and reporting data that we must process, and we anticipate that we will have worked through that backlog over the next two hours. We will provide an update by 3:30 PM PDT.

UPDATE about 1 month ago - at 10/20/2025 08:52PM

We have continued to reduce throttles for EC2 instance launches in the US-EAST-1 Region and we continue to make progress toward pre-event levels in all Availability Zones (AZs). AWS services such as ECS and Glue, which rely on EC2 instance launches, will recover as the successful instance launch rate improves. We see full recovery for Lambda invocations and are working through the backlog of queued events, which we expect to be fully processed over approximately the next two hours. We will provide another update by 2:30 PM PDT.

UPDATE about 1 month ago - at 10/20/2025 08:03PM

Service recovery across all AWS services continues to improve. We continue to reduce throttles for new EC2 Instance launches in the US-EAST-1 Region that were put in place to help mitigate impact. Lambda invocation errors have fully recovered and function errors continue to improve. We have scaled up the rate of polling SQS queues via Lambda Event Source Mappings to pre-event levels. We will provide another update by 1:45 PM PDT.

UPDATE about 1 month ago - at 10/20/2025 07:15PM

We continue to observe recovery across all AWS services, and instance launches are succeeding across multiple Availability Zones in the US-EAST-1 Region. For Lambda, customers may face intermittent function errors for functions making network requests to other services or systems as we work to address residual network connectivity issues. To recover Lambda's invocation errors, we slowed down the rate of SQS polling via Lambda Event Source Mappings. We are now increasing the rate of SQS polling as we see more successful invocations and reduced function errors. We will provide another update by 1:00 PM PDT.

UPDATE about 1 month ago - at 10/20/2025 06:22PM

Our mitigations to resolve launch failures for new EC2 instances continue to progress, and we are seeing increased launches of new EC2 instances and decreasing network connectivity issues in the US-EAST-1 Region. We are also seeing significant improvement in Lambda invocation errors, especially when creating new execution environments (including for Lambda@Edge invocations). We will provide an update by 12:00 PM PDT.

UPDATE about 1 month ago - at 10/20/2025 05:38PM

Our mitigations to resolve launch failures for new EC2 instances are progressing and the internal subsystems of EC2 are now showing early signs of recovering in a few Availability Zones (AZs) in the US-EAST-1 Region. We are applying mitigations to the remaining AZs at which point we expect launch errors and network connectivity issues to subside. We will provide an update by 11:30 AM PDT.

UPDATE about 1 month ago - at 10/20/2025 05:03PM

We continue to apply mitigation steps for network load balancer health and recovering connectivity for most AWS services. Lambda is experiencing function invocation errors because an internal subsystem was impacted by the network load balancer health checks. We are taking steps to recover this internal Lambda system. For EC2 launch instance failures, we are in the process of validating a fix and will deploy to the first AZ as soon as we have confidence we can do so safely. We will provide an update by 10:45 AM PDT.

UPDATE about 1 month ago - at 10/20/2025 04:13PM

We have taken additional mitigation steps to aid the recovery of the underlying internal subsystem responsible for monitoring the health of our network load balancers and are now seeing connectivity and API recovery for AWS services. We have also identified and are applying next steps to mitigate throttling of new EC2 instance launches. We will provide an update by 10:00 AM PDT.

UPDATE about 1 month ago - at 10/20/2025 03:43PM

We have narrowed down the source of the network connectivity issues that impacted AWS Services. The root cause is an underlying internal subsystem responsible for monitoring the health of our network load balancers. We are throttling requests for new EC2 instance launches to aid recovery and actively working on mitigations.

UPDATE about 1 month ago - at 10/20/2025 03:04PM

We continue to investigate the root cause for the network connectivity issues that are impacting AWS services such as DynamoDB, SQS, and Amazon Connect in the US-EAST-1 Region. We have identified that the issue originated from within the EC2 internal network. We continue to investigate and identify mitigations.

UPDATE about 1 month ago - at 10/20/2025 02:29PM

We have confirmed multiple AWS services experienced network connectivity issues in the US-EAST-1 Region. We are seeing early signs of recovery for the connectivity issues and are continuing to investigate the root cause.

UPDATE about 1 month ago - at 10/20/2025 02:14PM

We can confirm significant API errors and connectivity issues across multiple services in the US-EAST-1 Region. We are investigating and will provide a further update in 30 minutes, or sooner if we have additional information.

UPDATE about 1 month ago - at 10/20/2025 01:42PM

We have applied multiple mitigations across multiple Availability Zones (AZs) in US-EAST-1 and are still experiencing elevated errors for new EC2 instance launches. We are rate limiting new instance launches to aid recovery. We will provide an update at 7:30 AM PDT or sooner if we have additional information.

UPDATE about 1 month ago - at 10/20/2025 12:48PM

We are making progress on resolving the issue with new EC2 instance launches in the US-EAST-1 Region and are now able to successfully launch new instances in some Availability Zones. We are applying similar mitigations to the remaining impacted Availability Zones to restore new instance launches. As we continue to make progress, customers will see an increasing number of successful new EC2 launches. We continue to recommend that customers launch new EC2 instances without targeting a specific Availability Zone (AZ), so that EC2 has flexibility in selecting the appropriate AZ.

We also wanted to share that we are continuing to successfully process the backlog of events for both EventBridge and CloudTrail. New events published to these services are being delivered normally and are not experiencing elevated delivery latencies.

We will provide an update by 6:30 AM PDT or sooner if we have additional information to share.

UPDATE about 1 month ago - at 10/20/2025 12:10PM

We confirm that we have now recovered processing of SQS queues via Lambda Event Source Mappings. We are now working through processing the backlog of SQS messages in Lambda queues.

UPDATE about 1 month ago - at 10/20/2025 11:48AM

We continue to work to fully restore new EC2 launches in US-EAST-1. We recommend launching EC2 instances without targeting a specific Availability Zone (AZ), so that EC2 has flexibility in selecting the appropriate AZ. The impairment in new EC2 launches also affects services such as RDS, ECS, and Glue. We also recommend configuring Auto Scaling Groups to use multiple AZs so that Auto Scaling can manage EC2 instance launches automatically.
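The multi-AZ Auto Scaling recommendation above can be sketched as a CloudFormation fragment. This is illustrative only: the resource names and subnet IDs are placeholders, and `AppLaunchTemplate` is a hypothetical launch template defined elsewhere in the stack.

```yaml
# Illustrative fragment: an Auto Scaling Group spanning subnets in three
# different AZs, so EC2 can place instances wherever capacity exists.
AppAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "2"
    MaxSize: "6"
    VPCZoneIdentifier:            # placeholder subnet IDs, one per AZ
      - subnet-aaaa1111
      - subnet-bbbb2222
      - subnet-cccc3333
    LaunchTemplate:
      LaunchTemplateId: !Ref AppLaunchTemplate   # hypothetical template
      Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
```

Listing subnets from several AZs in `VPCZoneIdentifier` is what lets Auto Scaling route around a single impaired zone during an event like this one.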

We are pursuing further mitigation steps to recover Lambda’s polling delays for Event Source Mappings for SQS. AWS features that depend on Lambda’s SQS polling capabilities such as Organization policy updates are also experiencing elevated processing times. We will provide an update by 5:30 AM PDT.

UPDATE about 1 month ago - at 10/20/2025 11:08AM

We are continuing to work towards full recovery for EC2 launch errors, which may manifest as an Insufficient Capacity Error. Additionally, we continue to work toward mitigation for elevated polling delays for Lambda, specifically for Lambda Event Source Mappings for SQS. We will provide an update by 5:00 AM PDT.

UPDATE about 1 month ago - at 10/20/2025 10:35AM

The underlying DNS issue has been fully mitigated, and most AWS service operations are succeeding normally now. Some requests may be throttled while we work toward full resolution. Additionally, some services, such as CloudTrail and Lambda, are continuing to work through a backlog of events. While most operations have recovered, requests to launch new EC2 instances (or services that launch EC2 instances, such as ECS) in the US-EAST-1 Region are still experiencing increased error rates. We continue to work toward full resolution. If you are still experiencing an issue resolving the DynamoDB service endpoints in US-EAST-1, we recommend flushing your DNS caches. We will provide an update by 4:15 AM, or sooner if we have additional information to share.
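After flushing DNS caches as advised above, a client can verify that an endpoint resolves again with a quick probe. A minimal sketch in Python, using only the standard library (the hostname shown in the comment is the regional DynamoDB endpoint mentioned in the update; pass whatever endpoint you actually use):

```python
import socket

def endpoint_resolves(host: str, port: int = 443) -> bool:
    """Return True if `host` currently resolves via the local DNS stack."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Example: after flushing DNS caches, check the regional DynamoDB endpoint.
# endpoint_resolves("dynamodb.us-east-1.amazonaws.com")
```

If this returns False after a cache flush, the failure is likely upstream of the client (resolver or authoritative DNS) rather than in a stale local cache.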

UPDATE about 1 month ago - at 10/20/2025 10:03AM

We continue to observe recovery across most of the affected AWS Services. We can confirm global services and features that rely on US-EAST-1 have also recovered. We continue to work towards full resolution and will provide updates as we have more information to share.

UPDATE about 1 month ago - at 10/20/2025 09:27AM

We are seeing significant signs of recovery. Most requests should now be succeeding. We continue to work through a backlog of queued requests. We will continue to provide additional information.

UPDATE about 1 month ago - at 10/20/2025 09:22AM

We have applied initial mitigations and we are observing early signs of recovery for some impacted AWS services. During this time, requests may continue to fail as we work toward full resolution. We recommend customers retry failed requests. While requests begin succeeding, there may be additional latency, and some services will have a backlog of work to process, which may take additional time to clear. We will continue to provide updates as we have more information to share, or by 3:15 AM.
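The "retry failed requests" guidance above is usually implemented as exponential backoff with jitter, so retries spread out instead of hammering a recovering service. A minimal sketch (the function and parameter names here are illustrative, not an AWS SDK API; the AWS SDKs have retry modes built in that do this for you):

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.5, max_delay=20.0):
    """Retry `call` on exception, sleeping up to base_delay * 2**attempt
    (capped at max_delay) with full jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter
```

The full-jitter sleep matters during a regional event: thousands of clients retrying on a fixed schedule would otherwise synchronize into retry storms against the backlogged services.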

UPDATE about 1 month ago - at 10/20/2025 09:01AM

We have identified a potential root cause for error rates for the DynamoDB APIs in the US-EAST-1 Region. Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1. We are working on multiple parallel paths to accelerate recovery. This issue also affects other AWS Services in the US-EAST-1 Region. Global services or features that rely on US-EAST-1 endpoints such as IAM updates and DynamoDB Global tables may also be experiencing issues. During this time, customers may be unable to create or update Support Cases. We recommend customers continue to retry any failed requests. We will continue to provide updates as we have more information to share, or by 2:45 AM.

UPDATE about 1 month ago - at 10/20/2025 08:26AM

We can confirm significant error rates for requests made to the DynamoDB endpoint in the US-EAST-1 Region. This issue also affects other AWS services in the US-EAST-1 Region. During this time, customers may be unable to create or update Support Cases. Engineers were immediately engaged and are actively working on both mitigating the issue and fully understanding the root cause. We will continue to provide updates as we have more information to share, or by 2:00 AM.

UPDATE about 1 month ago - at 10/20/2025 07:51AM

We can confirm increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region. This issue may also be affecting Case Creation through the AWS Support Center or the Support API. We are actively engaged and working to both mitigate the issue and understand root cause. We will provide an update in 45 minutes, or sooner if we have additional information to share.

UPDATE about 1 month ago - at 10/20/2025 07:11AM

We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region. We will provide another update in the next 30-45 minutes.

Source: IsDown — AWS DynamoDB and CloudWatch outage. Reports started 6 minutes before the official outage was reported.

Users report errors accessing DynamoDB and CloudWatch, likely due to connection issues with AWS endpoints in the US East region.
