
Amazon Web Services (AWS) Outage History

Every past AWS outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 389 AWS outages since May 2020. The 32 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Minor May 7, 2026

May 2026: Increased Error Rate and Latency - N. Virginia

Detected May 7, 2026 8:25 PM EDT · Resolved May 8, 2026 11:06 PM EDT · Duration about 27 hours

AWS experienced a thermal event in a single Availability Zone (use1-az4) of the US-EAST-1 region that caused power loss and impaired EC2 instances, EBS volumes, and multiple other services, including Elastic Load Balancing, ElastiCache, Redshift, and Managed Streaming for Apache Kafka. The incident lasted 26.7 hours as AWS worked to restore cooling capacity and safely bring affected hardware back online in phases. Recovery was gradual: some services, such as Redshift, recovered independently, while EC2 instances and EBS volumes required full restoration of the cooling systems before complete resolution.
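
For teams scoping exposure during a single-AZ event like this, EC2 system status checks are the most direct signal. Below is a minimal boto3 sketch, assuming default credentials; AZ IDs such as use1-az4 are stable across accounts while AZ names are not, so the sketch resolves the ID to this account's zone name first.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# AZ IDs ("use1-az4") are consistent across accounts; AZ names
# ("us-east-1a") are shuffled per account, so resolve the ID first.
zones = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-id", "Values": ["use1-az4"]}]
)
az_name = zones["AvailabilityZones"][0]["ZoneName"]

# System status checks surface host-level problems (power, network,
# hardware), which is the failure mode a thermal or power event produces.
paginator = ec2.get_paginator("describe_instance_status")
for page in paginator.paginate(
    IncludeAllInstances=True,
    Filters=[{"Name": "availability-zone", "Values": [az_name]}],
):
    for status in page["InstanceStatuses"]:
        if status["SystemStatus"]["Status"] != "ok":
            print(status["InstanceId"], status["SystemStatus"]["Status"])
```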

Minor April 27, 2026

April 2026: Increased Connectivity Issues - Paris

Detected Apr 27, 2026 7:27 AM EDT · Resolved Apr 27, 2026 8:09 AM EDT · Duration 42 minutes

AWS experienced instance connectivity issues in a single Availability Zone (euw3-az2) within the EU-WEST-3 (Paris) region for 42 minutes. The incident affected multiple AWS services in the region, including EC2, Lambda, API Gateway, S3, and various networking and container services. AWS classified the issue as minor and investigated the connectivity problems until they were resolved.
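
When an incident is scoped to an AZ ID like euw3-az2, a first triage step is finding which of your own resources live in that zone. A minimal boto3 sketch, assuming default credentials; listing subnets is one illustrative starting point, not the only one.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")

# Subnets can be filtered by AZ ID directly, so no name resolution
# is needed; anything launched into these subnets sits in the
# affected zone.
subnets = ec2.describe_subnets(
    Filters=[{"Name": "availability-zone-id", "Values": ["euw3-az2"]}]
)
for subnet in subnets["Subnets"]:
    print(subnet["SubnetId"], subnet["VpcId"], subnet["AvailabilityZone"])
```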

Minor March 7, 2026

March 2026: Increased Error Rates - Zurich

Detected Mar 7, 2026 2:53 PM EST · Resolved Mar 7, 2026 4:06 PM EST · Duration 1 hour 13 minutes

AWS experienced increased error rates in the EU-CENTRAL-2 (Zurich) region for 1.2 hours, primarily elevated error rates for PUT and GET requests to Amazon S3, traced to a subsystem responsible for assembling objects from bytes in storage. The incident affected numerous AWS services that depend on S3, including EC2, Lambda, and CloudWatch, though existing EC2 instances remained unaffected. After identifying the root cause, AWS engineers implemented mitigations and observed early signs of recovery.
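
Clients that retry with backoff typically absorb much of the impact of elevated S3 error rates. A minimal boto3 sketch of that pattern, assuming default credentials; the bucket and key names are hypothetical.

```python
import boto3
from botocore.config import Config

# "adaptive" retry mode layers client-side rate limiting on top of
# the standard exponential backoff, which helps during partial
# S3 brownouts like this one.
s3 = boto3.client(
    "s3",
    region_name="eu-central-2",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

# Retries are transparent per call: throttling and 5xx errors are
# retried with backoff until max_attempts is exhausted.
s3.put_object(Bucket="example-bucket", Key="example-key", Body=b"payload")
```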

Minor February 25, 2026

February 2026: Intermittent missing or delayed EC2 instance and status check metrics - N. Virginia

Detected Feb 25, 2026 1:14 PM EST · Resolved Feb 25, 2026 3:55 PM EST · Duration 2 hours 41 minutes

AWS experienced intermittent missing or delayed EC2 instance and status check metrics in the US-EAST-1 region for 2.7 hours, caused by issues in an underlying subsystem responsible for publishing EC2 metric data to CloudWatch. Affected alarms transitioned to the INSUFFICIENT_DATA state and may have triggered automated actions based on the missing metric data, though the underlying EC2 instances continued operating normally and the EC2 APIs remained unaffected. AWS resolved the issue through parallel mitigation efforts and worked to backfill the delayed monitoring data.
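
Whether a metric-delivery gap like this one flips your alarms is governed by CloudWatch's TreatMissingData setting. A minimal boto3 sketch with a hypothetical alarm, metric, and threshold; the trade-off of treating gaps as non-breaching is that a genuine outage that also stops metrics would be masked.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    # "notBreaching" evaluates missing data points as within threshold,
    # so a publishing gap does not move the alarm to INSUFFICIENT_DATA
    # or ALARM. The default, "missing", lets gaps change alarm state.
    TreatMissingData="notBreaching",
)
```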