AWS

AWS Status

Official source: Everything seems OK
Follow AWS status! Don't miss out on outages and problems. 500+ services available to follow.


Sources

AWS status page

Stats

1 incident in the last 7 days

8 incidents in the last 30 days

Automatic Checks

Last check: 3 minutes ago

Last known issue: 5 days ago

Compare to Alternatives

Google Cloud

3 incidents in the last 7 days

23 incidents in the last 30 days

fortrabbit

1 incident in the last 7 days

4 incidents in the last 30 days

Netlify

1 incident in the last 7 days

5 incidents in the last 30 days

Anchor Host

0 incidents in the last 7 days

0 incidents in the last 30 days

Heroku

1 incident in the last 7 days

5 incidents in the last 30 days
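The 30-day figures above can be compared directly. A minimal sketch in Python, using the counts as listed on this page (they are a snapshot in time and will drift as new incidents are recorded):

```python
# 30-day incident counts as shown on this page (snapshot; values change over time).
incidents_30d = {
    "AWS": 8,
    "Google Cloud": 23,
    "fortrabbit": 4,
    "Netlify": 5,
    "Anchor Host": 0,
    "Heroku": 5,
}

# Rank providers from fewest to most reported incidents.
ranked = sorted(incidents_30d.items(), key=lambda kv: kv[1])
for name, count in ranked:
    print(f"{name}: {count} incident(s) in the last 30 days")
```

Note that raw incident counts are only a rough proxy for reliability: providers differ in how many services they run and how aggressively they report.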

Latest Incidents

Last 30 days

(Incident timeline chart: 01/02 – 03/03)

Resolved Increased Domain Operation Error Rates

2:50 PM PST Beginning at 4:06 AM PST we began experiencing increased error rates impacting Create and Modify operations for Elasticsearch Domains in the US-EAST-1 Region. While many customers have been notified in their Personal Health Dashboard, we wanted to share more information about this event as the impact is ongoing. We have identified the component responsible for these errors and are actively working toward identifying the root cause and mitigating the issue.

3:08 PM PST We have identified the root cause of the increased error rates impacting Create and Modify operations for Elasticsearch Domains in the US-EAST-1 Region. We have begun mitigating the issue and are working towards recovery.

3:52 PM PST We are continuing to make progress in mitigating the issue. The rate of elevated latencies and errors for affected customers will begin declining; however, we expect it may take several hours for the issue to be resolved completely.

4:22 PM PST We continue to make progress in mitigating the issue and are more than halfway complete. The rate of elevated latencies and errors for affected customers is declining. However, we expect it may take 2-3 hours for the issue to be resolved completely.

5:37 PM PST From 4:06 AM to 5:15 PM PST, Create and Modify operations were experiencing increased error rates and latencies. The issue has been resolved and the service is now operating normally. We've contacted customers directly via their Personal Health Dashboard whose domains were created during the impact period and are still experiencing issues.
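Events like this one surface in the Personal Health Dashboard, and they can also be polled programmatically through the AWS Health API. A minimal sketch, assuming boto3 with credentials and Health API access (available on Business/Enterprise support plans); the service and region filter values are illustrative:

```python
def open_issues(events, service="ES", region="us-east-1"):
    """Keep only open 'issue'-category events for one service in one region.

    `events` is a list of dicts shaped like the AWS Health API
    `describe_events` response (keys: service, region, statusCode,
    eventTypeCategory, ...)."""
    return [
        e for e in events
        if e.get("service") == service
        and e.get("region") == region
        and e.get("statusCode") == "open"
        and e.get("eventTypeCategory") == "issue"
    ]

# Sample data shaped like `describe_events` output, for demonstration:
sample = [
    {"service": "ES", "region": "us-east-1",
     "statusCode": "open", "eventTypeCategory": "issue"},
    {"service": "EC2", "region": "us-east-1",
     "statusCode": "closed", "eventTypeCategory": "issue"},
]
print(len(open_issues(sample)))  # 1

USE_LIVE_API = False  # flip to True with AWS credentials and a Business/Enterprise plan
if USE_LIVE_API:
    import boto3
    health = boto3.client("health", region_name="us-east-1")
    events = health.describe_events(
        filter={"services": ["ES"], "regions": ["us-east-1"]}
    )["events"]
    for e in open_issues(events):
        print(e["arn"], e["statusCode"])
```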

Resolved API Error Rate

9:20 AM PST We are investigating increased error rates for ELB APIs in the AP-NORTHEAST-1 Region. Connectivity to existing load balancers is not affected.

9:27 AM PST Between 9:00 AM and 9:18 AM PST we experienced increased API error rates in the AP-NORTHEAST-1 Region. The issue has been resolved and the service is operating normally.

Resolved Instance impairments

7:09 AM PST We are investigating connectivity issues affecting instances in a single Availability Zone (apne1-az1) in the AP-NORTHEAST-1 Region.

7:58 AM PST We can confirm that a small area of a single Availability Zone (apne1-az1) in the AP-NORTHEAST-1 Region is experiencing an increase in ambient temperature. Some EC2 instances within the affected section of the Availability Zone have experienced connectivity issues or have powered down as a result of the increasing temperatures. Some EBS volumes are also experiencing degraded performance as a result of the event. We have identified the root cause of the issue and are working towards resolution. Other Availability Zones within the AP-NORTHEAST-1 Region are not affected by this event.

8:40 AM PST We continue to work on addressing the increase in ambient temperature affecting a small section of a single Availability Zone (apne1-az1) in the AP-NORTHEAST-1 Region. The increase in temperature is caused by a loss of power to the cooling systems within the affected section of the Availability Zone. We are working to restore power and have successfully brought one of the cooling systems back online. We continue to work on restoring temperatures to normal levels and then recovering affected EC2 instances and EBS volumes. Other systems, including the EC2 and EBS APIs, are operating normally within the affected Availability Zone. Customers with affected EC2 instances and EBS volumes can attempt to relaunch in the affected Availability Zone, or in another Availability Zone within the AP-NORTHEAST-1 Region.

9:43 AM PST We continue to work on addressing the increase in ambient temperature affecting a small section of a single Availability Zone (apne1-az1) in the AP-NORTHEAST-1 Region. The increase in temperature is caused by a loss of power to the cooling units within the affected section of the Availability Zone. We have now restored power to a number of the cooling units within this section of the Availability Zone and are starting to see temperatures decrease. We will continue to work through the remaining cooling units that are still offline, which will return temperatures to normal levels. Once temperatures have recovered, we expect affected EC2 instances and EBS volumes to begin to recover. Other systems, including the EC2 and EBS APIs, are operating normally within the affected Availability Zone. Customers with affected EC2 instances and EBS volumes can attempt to relaunch in the affected Availability Zone, or in another Availability Zone within the AP-NORTHEAST-1 Region.

10:42 AM PST We have now restored power to the majority of the cooling units within the affected section of the Availability Zone (apne1-az1) in the AP-NORTHEAST-1 Region. Temperatures are now close to normal levels and we have begun the process of restoring networking, EC2 instances, and EBS volumes. The network has been restored within the affected section of the Availability Zone and we are now working on EC2 instances and EBS volumes. As they begin to recover, customers may need to take action on their instances, as they will have experienced a reboot. For EBS volumes, degraded I/O performance will return to normal levels as volumes recover. Instances that are stuck "stopping" or "shutting-down" will return to the "stopped" or "terminated" state as recovery proceeds.

11:26 AM PST We have now restored power to the cooling subsystem within the affected section of the Availability Zone (apne1-az1) in the AP-NORTHEAST-1 Region. Temperatures are now operating at normal levels. We are seeing recovery for the majority of EC2 instances and EBS volumes and continue to work on the remaining instances and volumes.

12:09 PM PST Temperatures within the affected section of the Availability Zone (apne1-az1) remain stable and at normal levels. We have now recovered the vast majority of EC2 instances. The majority of EBS volumes have also recovered, but a few require engineering intervention that we are working on.

12:54 PM PST Starting at 6:01 AM PST, we experienced an increase in ambient temperatures within a section of a single Availability Zone within the AP-NORTHEAST-1 Region. Starting at 6:03 AM PST, some EC2 instances were impaired and some EBS volumes experienced degraded performance as a result of the increase in temperature. The root cause was a loss of power to the cooling system within a section of the affected Availability Zone, which engineers worked to restore. By 10:30 AM PST, power had been restored to the majority of the units within the cooling system and temperatures were returning to normal levels. By 11:00 AM PST, EC2 instances and EBS volumes had begun to recover and, by 12:30 PM PST, the vast majority of affected EC2 instances and EBS volumes were operating normally. A small number of remaining instances and volumes are hosted on hardware which was adversely affected by the event. We continue to work to recover all affected instances and volumes and have opened notifications for the remaining impacted customers via the Personal Health Dashboard. For immediate recovery, we recommend replacing any remaining affected instances or volumes, if possible.
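The updates above repeatedly recommend relaunching affected instances in another Availability Zone. In EC2 the target AZ is chosen via the subnet, so a relaunch amounts to a RunInstances call with a subnet in a healthy zone. A minimal sketch; the AMI, instance type, and subnet IDs are hypothetical placeholders:

```python
def relaunch_params(ami_id, instance_type, subnet_id):
    """Build EC2 RunInstances parameters for a replacement instance.

    The Availability Zone is selected implicitly by `subnet_id`:
    pick a subnet in an unaffected AZ (e.g. one mapping to apne1-az2
    rather than apne1-az1)."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,
    }

# Hypothetical IDs for illustration only:
params = relaunch_params("ami-0123456789abcdef0", "t3.micro", "subnet-0abc")
print(params["SubnetId"])

# With boto3 and credentials configured, this would be:
#   boto3.client("ec2", region_name="ap-northeast-1").run_instances(**params)
```

Note that AZ IDs (apne1-az1) map to different AZ names (ap-northeast-1a, ...) per account, so check the mapping before choosing the subnet.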

Resolved Increased Console Error Rates

2:54 PM PST We are investigating increased error rates when displaying Virtual Private Clouds (VPCs) in the VPC Management Console in the US-EAST-1 Region. This issue only affects the new VPC Management Console experience, so switching back to the previous VPC Management Console experience (via the toggle switch at the top left) will resolve the issue. The VPC Command Line Tools and APIs are not affected by this issue. We are working to resolve the issue.

3:10 PM PST We have resolved the issue causing increased error rates when displaying Virtual Private Clouds (VPCs) in the VPC Management Console in the US-EAST-1 Region. The issue only affected the new experience of the VPC Management Console. The issue has been resolved and the service is operating normally.

Resolved Connectivity Issues for Network Load Balancers

12:19 PM PST We are investigating increased connectivity issues for Network Load Balancers within the US-EAST-1 Region.

12:45 PM PST We continue to work to identify the root cause of the event resulting in connectivity issues for some Network Load Balancers within the US-EAST-1 Region. Some Network Load Balancers in USE1-AZ2, USE1-AZ4, and USE1-AZ5 are experiencing connection issues for TLS (SSL) requests. Non-TLS (SSL) requests are not affected by this issue. We are also experiencing some delays in the provisioning of new Network Load Balancers and registration of new targets behind existing Network Load Balancers. We have identified the underlying subsystems responsible for this issue and continue to work towards root cause and resolution.

1:12 PM PST We have identified the root cause and are now working to test a fix for the issue affecting connectivity for Network Load Balancers in the US-EAST-1 Region. Once we have confirmed that it resolves the issue, we will update all affected Network Load Balancers. The issue continues to affect TLS (SSL) connections for Network Load Balancers within the USE1-AZ2, USE1-AZ4, and USE1-AZ5 Availability Zones. The issue does not affect other Availability Zones in the Region and is not expected to affect any additional Network Load Balancers that are currently operating normally. Affected customers can create new Network Load Balancers (with or without TLS) in Availability Zones that are not affected by this issue for immediate mitigation.

1:37 PM PST We have confirmed that the fix resolves the issue as expected and are currently deploying it to the affected Network Load Balancers. We will deploy to one Availability Zone at a time, in the following order: USE1-AZ4, USE1-AZ2, and USE1-AZ5. We expect to see recovery as we progress through the affected Availability Zones.

2:03 PM PST We have made steady progress and are close to completing the rollout of the fix in the USE1-AZ4 Availability Zone. Most of the affected Network Load Balancers in that Availability Zone are now operating normally. We'll continue at this pace through the remaining Network Load Balancers in USE1-AZ4, followed by the Network Load Balancers in the USE1-AZ2 and USE1-AZ5 Availability Zones. Some customers have reported impact for PrivateLink endpoints, which is related to this issue and will see recovery as we continue to deploy the fix.

2:43 PM PST We continue to make progress in deploying the fix to Network Load Balancers in the USE1-AZ4 Availability Zone. While the vast majority of the affected Network Load Balancers in that Availability Zone have now recovered, we are making slower progress than expected on the final cell within the Availability Zone. We are working to resolve that issue before moving on to the remaining Availability Zones. We'll continue to keep you updated on our progress.

3:27 PM PST We have resolved the issue that affected our deployment of the fix to the final cell in the USE1-AZ4 Availability Zone, and continue to deploy to the USE1-AZ2 and USE1-AZ5 Availability Zones. We have taken steps to accelerate recovery, but out of an abundance of caution we are not applying the fix to more than one Availability Zone at a time. For customers that have seen their Network Load Balancer recover, we do not expect any further impact. For customers that are still waiting for recovery, we expect to see it in the coming hour or two as we work through the remaining Availability Zones. For immediate mitigation, customers can create a new Network Load Balancer in the unaffected Availability Zones and register the existing back-end targets with it. We'll continue to provide updates as we progress through the remaining Availability Zones.

4:03 PM PST We continue to make progress and are now seeing recovery for affected Network Load Balancers in the USE1-AZ2 Availability Zone. We will continue with the remainder of that Availability Zone before completing USE1-AZ5, the final Availability Zone. Again, for Network Load Balancers that have recovered, we do not expect further impact.

4:28 PM PST We have completed applying the fix in both the USE1-AZ4 and USE1-AZ2 Availability Zones. We have started the process of updating the final Availability Zone, USE1-AZ5. We expect to see full recovery within the next 30 minutes.

4:38 PM PST We have completed the update in all affected Availability Zones and all Network Load Balancers are now operating normally. We are working through a backlog of Network Load Balancer creations and back-end instance registrations, but at this stage all operations should be operating normally. We will continue to monitor recovery and post an update shortly.

5:07 PM PST Starting at 11:24 AM PST, some load balancers in the USE1-AZ2, USE1-AZ4, and USE1-AZ5 Availability Zones experienced connectivity issues when terminating TLS (SSL) connections in the US-EAST-1 Region. Engineers worked to identify the root cause of the event, ultimately deploying an update to recover USE1-AZ4 at 1:50 PM PST, USE1-AZ2 at 4:26 PM PST, and USE1-AZ5 at 4:31 PM PST. Network Load Balancers not terminating TLS (SSL) connections were not affected by this event. Some PrivateLink endpoints were also affected by the issue. The issue has been resolved and the service is operating normally.
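The mitigation described in the updates above, creating a fresh Network Load Balancer in unaffected Availability Zones and registering the existing back-end targets with it, maps onto two Elastic Load Balancing v2 API calls. A minimal sketch; the subnet and instance IDs are hypothetical, and a target group and listener (omitted here) would also be required in practice:

```python
USE_LIVE_API = False  # flip to True with AWS credentials configured

def mitigation_plan(unaffected_subnets, target_ids, port=443):
    """Describe the incident mitigation as data: a new network-type load
    balancer placed in unaffected AZs (via their subnets), plus the
    existing back-end targets to re-register."""
    return {
        "create": {
            "Name": "nlb-mitigation",       # hypothetical name
            "Type": "network",
            "Subnets": unaffected_subnets,  # subnets in unaffected AZs
        },
        "register": [{"Id": i, "Port": port} for i in target_ids],
    }

# Hypothetical IDs for illustration only:
plan = mitigation_plan(["subnet-aaa", "subnet-bbb"], ["i-111", "i-222"])

if USE_LIVE_API:
    import boto3
    elbv2 = boto3.client("elbv2", region_name="us-east-1")
    elbv2.create_load_balancer(**plan["create"])
    # Register the existing back-end targets with a target group attached
    # to the new load balancer (target group ARN would come from
    # create_target_group, omitted for brevity):
    # elbv2.register_targets(TargetGroupArn=tg_arn, Targets=plan["register"])
```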

Resolved Increased Stack Create, Delete and Update Times

3:12 AM PST We are investigating increased latencies when creating, updating, and deleting AWS CloudFormation stacks in the US-EAST-1 Region and are actively working to resolve the issue.

3:56 AM PST We can confirm increased latencies for creating and deleting stacks in the US-EAST-1 Region and continue to work towards resolution.

4:16 AM PST Between 2:03 AM and 4:03 AM PST we experienced increased latencies for stack creation and deletion in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

Resolved Increased API Error Rates and Latency

11:43 AM PST We are investigating increased error rates and latency for the Identity, Registry, and Rules Engine APIs in the US-EAST-1 Region. Device connectivity, messaging, and rules evaluations are unaffected.

11:59 AM PST Between 10:34 AM and 11:46 AM PST we experienced increased error rates and latency for the Identity, Registry, and Rules Engine APIs in the US-EAST-1 Region. Device connectivity, messaging, and rules evaluations remained unaffected during the event. The issue has been resolved and the service is operating normally.

Resolved Instance Connectivity

4:24 AM PST We are experiencing instance connectivity issues in a single Availability Zone (euw2-az3) in the EU-WEST-2 Region.

5:34 AM PST We're continuing to address connectivity issues to impacted instances in a single Availability Zone (euw2-az3) in the EU-WEST-2 Region. Services that use EC2 and EBS, including Elastic Load Balancing, EFS, Elasticache, ECS Fargate, and Sagemaker, are also impacted.

6:08 AM PST We're continuing to address connectivity issues to impacted instances in a single Availability Zone (euw2-az3) in the EU-WEST-2 Region. We are experiencing elevated error rates for new instance launches in the impacted Availability Zone. Services that use EC2 and EBS, including Connect, EFS, Elasticache, ECS Fargate, Managed Streaming for Apache Kafka, and Sagemaker, are also impacted.

6:50 AM PST We're continuing to address connectivity issues to impacted instances in a single Availability Zone (euw2-az3) in the EU-WEST-2 Region. We are starting to see recovery for new instance launches in the impacted Availability Zone. Services that use EC2 and EBS, including EFS, Elasticache, ECS Fargate, Managed Streaming for Apache Kafka, EMR, and Sagemaker, are also impacted. Between 3:02 AM and 5:20 AM PST, Amazon Connect customers experienced degraded call and chat connectivity and handling for agents in the EU-WEST-2 Region. The Amazon Connect issues have been resolved and that service is operating normally.

7:41 AM PST We're continuing to address connectivity issues to impacted instances in a single Availability Zone (euw2-az3) in the EU-WEST-2 Region. The elevated error rates that impacted some new instance launches in the affected Availability Zone have fully recovered.

8:29 AM PST We have resolved the connectivity issues for the vast majority of affected instances in a single Availability Zone (euw2-az3) in the EU-WEST-2 Region. We are also seeing recovery for the vast majority of EBS volumes experiencing degraded performance due to this event. Services that use EC2 and EBS, including RDS, EFS, Elasticache, ECS Fargate, Managed Streaming for Apache Kafka, EMR, and Sagemaker, are also seeing recovery. We continue to work towards full recovery for the remaining instances and volumes.

8:53 AM PST Starting at 2:57 AM PST we experienced power and network connectivity issues for some instances, and degraded performance for some EBS volumes, in the affected Availability Zone (euw2-az3). By 3:59 AM PST, power and network connectivity had been restored to the majority of affected instances and, by 7:32 AM PST, degraded performance for the majority of affected EBS volumes had been resolved. Since the beginning of the impact, we have been working to recover the remaining instances and volumes. A small number of remaining instances and volumes are hosted on hardware which was adversely affected by the event. We continue to work to recover all affected instances and volumes and have opened notifications for the remaining impacted customers via the Personal Health Dashboard. For immediate recovery, we recommend replacing any remaining affected instances or volumes if possible.
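For the EBS side of the recommendation above, "replacing" a volume typically means restoring its most recent snapshot into a healthy Availability Zone. A minimal sketch; the snapshot IDs and AZ name are hypothetical, and real `describe_snapshots` records carry `StartTime` as a datetime rather than a string (ISO-format strings sort the same way, which is what this demonstration relies on):

```python
def latest_snapshot(snapshots):
    """Pick the most recent snapshot from describe_snapshots-shaped records.

    Works for datetime StartTime values or ISO-8601 strings, since both
    compare chronologically."""
    return max(snapshots, key=lambda s: s["StartTime"])

# Hypothetical records for illustration only:
snaps = [
    {"SnapshotId": "snap-old", "StartTime": "2022-02-20T01:00:00Z"},
    {"SnapshotId": "snap-new", "StartTime": "2022-02-22T01:00:00Z"},
]
chosen = latest_snapshot(snaps)
print(chosen["SnapshotId"])  # snap-new

# With boto3 and credentials configured, restoring into a healthy AZ
# (pick an AZ name that does NOT map to euw2-az3 in your account):
#   ec2 = boto3.client("ec2", region_name="eu-west-2")
#   ec2.create_volume(SnapshotId=chosen["SnapshotId"],
#                     AvailabilityZone="eu-west-2a")
```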

Resolved Increased Error Rates and Latencies

1:27 PM PST We are investigating increased error rates and latencies when creating, updating, and deleting AWS CloudFormation stacks in the AP-NORTHEAST-1 Region.

1:48 PM PST We can confirm increased error rates and latencies when creating, updating, and deleting AWS CloudFormation stacks in the AP-NORTHEAST-1 Region and are actively working toward resolution.

2:02 PM PST We are beginning to see recovery and continue to work toward full resolution.

2:10 PM PST Between 11:25 AM and 1:55 PM PST we experienced increased error rates and latencies creating, updating, and deleting AWS CloudFormation stacks in the AP-NORTHEAST-1 Region. The issue has been resolved and the service is operating normally.

about 1 month ago Official incident report

Resolved Elevated Launch Failures

9:49 PM PST We are investigating elevated launch failures for EC2 instances and delayed network propagation times for VPC updates in a single Availability Zone (APNE2-AZ3) in the AP-NORTHEAST-2 Region.

9:55 PM PST We are seeing signs of recovery for the issue affecting new EC2 instance launches and network propagation times for VPC in a single Availability Zone (APNE2-AZ3) in the AP-NORTHEAST-2 Region. We continue to work on resolving the issues.

10:37 PM PST Between 8:13 PM and 8:33 PM PST, we experienced elevated launch failures for EC2 instances in a single Availability Zone (APNE2-AZ3) in the AP-NORTHEAST-2 Region. Although the launch failures recovered at 8:33 PM PST, we continued to experience delayed propagation times of VPC network configuration for newly launched EC2 instances within the affected Availability Zone. By 9:46 PM PST, VPC network configuration propagation had returned to normal levels. The issue has been resolved and the service is operating normally. Customers do not need to take additional action to restore the availability of this service. If you have any questions or are experiencing any operational issue with any of our services, please contact AWS Support via the AWS Support Center at https://console.aws.amazon.com/support

about 1 month ago Official incident report

Don't miss another incident in AWS!

IsDown aggregates status pages from services so you don't have to. Follow AWS and hundreds of other services and be the first to know when something is wrong.
