Outage in AWS
Instance connectivity
Resolved
Minor
July 13, 2021
- Lasted approximately 2 hours (5:07 AM to 7:24 AM PDT, per the updates below)
Outage Details
5:29 AM PDT We are investigating increased error rates and latencies for the EC2 APIs and connectivity issues for some instances in a single Availability Zone in the EU-CENTRAL-1 Region.
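During a window of elevated API error rates and latencies like this, callers can compensate with client-side retries and backoff. Below is a minimal sketch using boto3; the attempt count and the "adaptive" retry mode are illustrative choices, not anything AWS prescribed in this notice.

```python
import boto3
from botocore.config import Config

# Illustrative retry settings: "adaptive" mode adds client-side rate
# limiting on top of exponential backoff for throttled and 5xx responses.
retry_config = Config(
    region_name="eu-central-1",
    retries={"max_attempts": 10, "mode": "adaptive"},
)

ec2 = boto3.client("ec2", config=retry_config)

# Calls such as RunInstances or CreateSnapshot made through this client
# are retried automatically on transient API errors.
response = ec2.describe_availability_zones()
print([az["ZoneName"] for az in response["AvailabilityZones"]])
```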
6:05 AM PDT We are seeing increased error rates and latencies for the RunInstances and CreateSnapshot APIs, and increased connectivity issues for some instances in a single Availability Zone (euc1-az3) in the EU-CENTRAL-1 Region. We have resolved the networking issues that affected the majority of instances within the affected Availability Zone, but continue to work on some instances that are experiencing degraded performance for some EBS volumes. Other Availability Zones are not affected by this issue. We would recommend failing away from the affected Availability Zone until this issue has been resolved.
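"Failing away" here means shifting capacity out of the impaired zone. For Auto Scaling groups, one way to do that is to drop the affected zone from the group's zone list. A minimal sketch, assuming a hypothetical group named my-asg; note that euc1-az3 is a zone ID, and its mapping to a zone name (eu-central-1a, -1b, -1c) differs per account, so the ID has to be resolved first.

```python
import boto3

REGION = "eu-central-1"
AFFECTED_ZONE_ID = "euc1-az3"
ASG_NAME = "my-asg"  # hypothetical Auto Scaling group name

ec2 = boto3.client("ec2", region_name=REGION)
autoscaling = boto3.client("autoscaling", region_name=REGION)

# Zone IDs map to different zone names in each account; resolve the
# affected zone ID to this account's zone name.
zones = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-id", "Values": [AFFECTED_ZONE_ID]}]
)
affected_name = zones["AvailabilityZones"][0]["ZoneName"]

# Drop the affected zone from the group's zone list so replacement
# capacity launches in the healthy zones.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
remaining = [z for z in group["AvailabilityZones"] if z != affected_name]

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    AvailabilityZones=remaining,
)
print(f"{ASG_NAME} now targets: {remaining}")
```

For groups launched into a VPC, the zone list is driven by the subnets in VPCZoneIdentifier, so the equivalent move is to update the group with only the subnets in healthy zones.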
6:29 AM PDT We continue to make progress in resolving the connectivity issues affecting some instances in a single Availability Zone (euc1-az3) in the EU-CENTRAL-1 Region. The increased error rates and latencies for the RunInstances and CreateSnapshot APIs have been resolved, as well as the degraded performance for some EBS volumes within the affected Availability Zone. We continue to work on the remaining EC2 instances that are still impaired as a result of this event, some of which may have experienced a power cycle. While we do not expect any further impact at this stage, we would recommend continuing to utilize other Availability Zones in the EU-CENTRAL-1 region until this issue has been resolved.
7:24 AM PDT Starting at 5:07 AM PDT we experienced increased connectivity issues for some instances, degraded performance for some EBS volumes and increased error rates and latencies for the EC2 APIs in a single Availability Zone (euc1-az3) in the EU-CENTRAL-1 Region. By 6:03 AM PDT, API error rates had returned to normal levels, but some Auto Scaling workflows continued to see delays until 6:35 AM PDT. By 6:10 AM PDT, the vast majority of EBS volumes with degraded performance had been resolved as well, and by 7:05 AM PDT, the vast majority of affected instances had been recovered, some of which may have experienced a power cycle. A small number of remaining instances are hosted on hardware which was adversely affected by this event and require additional attention. We continue to work to recover all affected instances and have opened notifications for the remaining impacted customers via the Personal Health Dashboard. For immediate recovery, we recommend replacing any remaining affected instances if possible.
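For the "replace affected instances" recommendation, one approach is to list instances whose status checks still report as impaired and, for EBS-backed instances, stop and start them so they land on different underlying hardware. A minimal sketch under those assumptions; a stop followed by a start (not a reboot) is what moves an instance to new hardware, and it interrupts the workload.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Find instances whose status checks still report "impaired".
statuses = ec2.describe_instance_status(
    IncludeAllInstances=True,
    Filters=[{"Name": "instance-status.status", "Values": ["impaired"]}],
)
impaired = [s["InstanceId"] for s in statuses["InstanceStatuses"]]
print("Impaired instances:", impaired)

# Stop/start each impaired instance to move it off the affected
# hardware. This interrupts the instance, so in real use gate it
# behind a manual decision per workload.
for instance_id in impaired:
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.start_instances(InstanceIds=[instance_id])
```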