This chart shows user-reported problems for Google Cloud Google Kubernetes Engine over the last 12 hours, aggregated from user reports across different sources.
We continuously monitor the official Google Cloud Google Kubernetes Engine status page for updates on any ongoing outages. Check the stats for the last 30 days and a list of recent Google Cloud Google Kubernetes Engine outages that may have affected you.
Minor Resolved · 18 days ago · lasted 4 days
Horizontal Pod Autoscaler (HPA) doesn't work in newly created GKE Minor version 1.24+ clusters
Summary: Horizontal Pod Autoscaler (HPA) doesn't work in newly created GKE Minor version 1.24+ clusters Description: Horizontal pod autoscaler (HPA) does not work in newly created GKE minor version 1.24+ clusters. A rollback of the affected feature is currently underway by our engineering team. We do not have an ETA for mitigation at this point. We will provide more information by Friday, 2023-01-13 05:00 US/Pacific. Diagnosis: Horizontal pod autoscaler might not scale workloads within newly created clusters. Customers might observe "No recommendation" in the HPA object status. Workaround: None at this time.
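For readers who want to check whether a cluster showed this symptom, the sketch below lists HPAs whose status carries no active scaling recommendation. It is a minimal illustration, not part of Google's guidance; it assumes kubeconfig access to the cluster and a recent official kubernetes Python client (one that ships AutoscalingV2Api).

```python
# Minimal sketch (assumptions: kubeconfig access + a kubernetes client recent
# enough to provide AutoscalingV2Api). Flags HPAs with no active recommendation.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in-cluster
autoscaling = client.AutoscalingV2Api()

for hpa in autoscaling.list_horizontal_pod_autoscaler_for_all_namespaces().items:
    conditions = hpa.status.conditions or []
    # A healthy HPA reports a ScalingActive=True condition; affected clusters
    # may show it False or missing (the "No recommendation" symptom).
    active = next((c for c in conditions if c.type == "ScalingActive"), None)
    if active is None or active.status != "True":
        print(f"{hpa.metadata.namespace}/{hpa.metadata.name}: "
              f"ScalingActive is {active.status if active else 'missing'}")
```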
Minor Resolved · about 2 months ago · lasted about 21 hours
Calico Node pods failing to start and crash-looping
Summary: Calico Node pods failing to start and crash-looping Description: We believe the issue with Google Kubernetes Engine is partially resolved. Full resolution is expected to complete by Friday, 2022-12-23 17:00 US/Pacific. We will provide an update by Friday, 2022-12-16 17:00 US/Pacific with current details. Diagnosis: Customers may observe their calico node pods failing to start and crash-looping indefinitely on GKE 1.24.7+ and 1.25.3+. Workaround: Customers may disable nodelocaldns. Customers may also raise a support ticket with Google to patch the impacted clusters.
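A quick way to confirm the symptom described above is to look for calico-node pods in CrashLoopBackOff. The sketch below is illustrative only; it assumes kubeconfig access and that the pods carry the k8s-app=calico-node label in kube-system (the label is an assumption; adjust it for your cluster).

```python
# Illustrative check for crash-looping calico-node pods. Assumptions:
# kubeconfig access; pods labeled k8s-app=calico-node in kube-system.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod("kube-system",
                                label_selector="k8s-app=calico-node")
for pod in pods.items:
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting if cs.state else None
        if waiting and waiting.reason == "CrashLoopBackOff":
            print(f"{pod.metadata.name}: container {cs.name} in "
                  f"CrashLoopBackOff ({cs.restart_count} restarts)")
```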
Minor Resolved · about 2 months ago · lasted 6 minutes
Clusters in us-central1 are getting DEADLINE_EXCEEDED errors
Summary: Clusters in us-central1 are getting DEADLINE_EXCEEDED errors Description: We are experiencing an issue with Google Kubernetes Engine beginning at Thursday, 2022-12-08 03:42 US/Pacific. Our engineering team continues to investigate the issue. We will provide an update by Thursday, 2022-12-08 09:00 US/Pacific with current details. We apologize to all who are affected by the disruption. Diagnosis: None at this time. Workaround: None at this time.
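During incidents like this, DEADLINE_EXCEEDED errors from the GKE control plane surface in client code as exceptions. The sketch below shows one way to catch and retry them with backoff; it assumes the google-cloud-container library and application-default credentials, and PROJECT_ID is a placeholder.

```python
# Hedged sketch: retry GKE API calls that time out with DEADLINE_EXCEEDED.
# Assumptions: google-cloud-container installed, default credentials set up,
# PROJECT_ID replaced with a real project.
import time

from google.api_core.exceptions import DeadlineExceeded
from google.cloud import container_v1

gke = container_v1.ClusterManagerClient()
parent = "projects/PROJECT_ID/locations/us-central1"

for attempt in range(3):
    try:
        resp = gke.list_clusters(parent=parent, timeout=30)
        for cluster in resp.clusters:
            print(cluster.name, cluster.status.name)
        break
    except DeadlineExceeded:
        # The incident manifested as timeouts like this one; back off and retry.
        time.sleep(2 ** attempt)
```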
Minor Resolved · 3 months ago · lasted about 6 hours
Autopilot (cluster versions >= 1.23) may break some workloads
Summary: Autopilot (cluster versions >= 1.23) may break some workloads Description: We are experiencing an issue with Google Kubernetes Engine. Our engineering team continues to investigate the issue. We will provide an update by Tuesday, 2022-11-08 13:30 US/Pacific with current details. We apologize to all who are affected by the disruption. Diagnosis: 1. Customers are unable to apply partner workloads (Twistlock, Splunk Otel-Connector) 2. Customers may find that workloads admitted in 1.22 are unable to create/schedule new pods as part of a deployment after upgrading to 1.23+ Workaround: Action Required: Upgrades of Autopilot clusters to 1.23+ should be blocked to prevent more clusters from migrating from allowlistv1 to allowlistv2.
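To act on the workaround, it helps to know which Autopilot clusters are still below 1.23. The sketch below is one possible inventory pass, assuming the google-cloud-container library and default credentials; PROJECT_ID is a placeholder.

```python
# Hypothetical inventory of Autopilot clusters still below 1.23, i.e. the ones
# to hold back from upgrading while this incident was open. Assumptions:
# google-cloud-container installed, default credentials, PROJECT_ID placeholder.
from google.cloud import container_v1

gke = container_v1.ClusterManagerClient()
resp = gke.list_clusters(parent="projects/PROJECT_ID/locations/-")

for cluster in resp.clusters:
    if not cluster.autopilot.enabled:
        continue  # only Autopilot clusters are affected here
    major, minor = cluster.current_master_version.split(".")[:2]
    if (int(major), int(minor)) < (1, 23):
        print(f"{cluster.name}: {cluster.current_master_version} "
              f"(consider deferring the 1.23+ upgrade)")
```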
Minor Resolved · 4 months ago · lasted 2 days
Global: Calico enabled GKE clusters’ pods may get stuck Terminating or Pending after upgrading to 1.22+
Summary: Global: Calico enabled GKE clusters’ pods may get stuck Terminating or Pending after upgrading to 1.22+ Description: The following GKE versions are vulnerable to a race condition when using the Calico Network Policy, resulting in pods stuck Terminating or Pending: All 1.22 GKE versions All 1.23 GKE versions 1.24 versions before 1.24.4-gke.800 Only a small number of GKE clusters have actually experienced stuck pods. Use of cluster autoscaler can increase the chance of hitting the race condition. A fix is available in GKE v1.24.4-gke.800 or later. The fix is also being made available in v1.23 and v1.22, as part of the next release, which has now started. Once available, customers can manually upgrade to the fixed version. Alternatively, clusters on the RAPID, REGULAR, or STABLE release channels using 1.22 or 1.23 will upgrade automatically over the coming weeks. We will provide an update by Friday, 2022-09-30 15:00 US/Pacific with current details. The issue was introduced in the Calico compo...
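The observable symptom is pods lingering in Terminating or Pending. As a rough illustration (assuming kubeconfig access; the 15-minute threshold is arbitrary), the sketch below lists pods in either state:

```python
# Rough sketch: list pods stuck Terminating (deletion requested long ago)
# or long-Pending. Assumptions: kubeconfig access; 15 min is an arbitrary cutoff.
from datetime import datetime, timedelta, timezone

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
cutoff = datetime.now(timezone.utc) - timedelta(minutes=15)

for pod in core.list_pod_for_all_namespaces().items:
    ref = f"{pod.metadata.namespace}/{pod.metadata.name}"
    if pod.metadata.deletion_timestamp and pod.metadata.deletion_timestamp < cutoff:
        print(f"{ref}: Terminating since {pod.metadata.deletion_timestamp}")
    elif pod.status.phase == "Pending" and pod.metadata.creation_timestamp < cutoff:
        print(f"{ref}: Pending for more than 15 minutes")
```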
Minor Resolved · 5 months ago · lasted 7 days
Global: Calico enabled GKE clusters’ pods may get stuck Terminating or Pending after upgrading to 1.22+
Summary: Global: Calico enabled GKE clusters’ pods may get stuck terminating after upgrading to 1.22+ Description: GKE clusters running versions 1.22 or later and that use Calico Network Policy might experience issues with terminating Pods under some conditions. Our engineering team continues to investigate the issue and is qualifying a potential mitigation for release to the Rapid channel 1.24. After all the qualifications are done, we will expedite the backport of the fix to 1.22 as soon as possible. We will provide an update by Friday, 2022-09-16 15:00 US/Pacific with current details. We apologize to all who are affected by the disruption. Diagnosis: The Calico CNI plugin will show the following error for terminating Pods: “Warning FailedKillPod 36m (x389 over 121m) kubelet error killing pod: failed to "KillPodSandbox" for "af9ab8f9-d6d6-4828-9b8c-a58441dd1f86" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "myclient-pod-6474c769...
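The FailedKillPod warnings quoted in the diagnosis are ordinary Kubernetes events, so they can be surfaced with an event query. A minimal sketch, assuming kubeconfig access to the affected cluster:

```python
# Minimal sketch: surface FailedKillPod warnings like the one quoted above.
# Assumption: kubeconfig access to the affected cluster.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

events = core.list_event_for_all_namespaces(
    field_selector="reason=FailedKillPod")
for ev in events.items:
    obj = ev.involved_object
    print(f"{obj.namespace}/{obj.name}: {(ev.message or '')[:120]}")
```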
Minor Resolved · 5 months ago · lasted 11 minutes
Global: Calico enabled GKE clusters’ pods may get stuck terminating after upgrading to 1.22+
Summary: Global: Calico enabled GKE clusters’ pods may get stuck terminating after upgrading to 1.22+ Description: GKE clusters running versions 1.22 or later and that use Calico Network Policy might experience issues with terminating Pods under some conditions. Our engineering team continues to investigate the issue and is qualifying a potential mitigation for release to the Rapid channel 1.24. After all the qualifications are done, we will expedite the backport of the fix to 1.22 as soon as possible. We will provide an update by Friday, 2022-09-16 15:00 US/Pacific with current details. We apologize to all who are affected by the disruption. Diagnosis: The Calico CNI plugin will show the following error for terminating Pods: “Warning FailedKillPod 36m (x389 over 121m) kubelet error killing pod: failed to "KillPodSandbox" for "af9ab8f9-d6d6-4828-9b8c-a58441dd1f86" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "myclient-pod-6474c769...
IsDown is an uptime monitoring solution for your critical business dependencies. Keep tabs on your SaaS and cloud providers in real time and never miss another outage. Get instant alerts and stay informed when an incident impacts your operations.
Start free trial · No credit card required · Cancel anytime · 2346 services available
Quickly identify external outages that impact your business. We are monitoring more than 2000 services in real time.
A high-level view of the health of all your services
IsDown aggregates the information from the status pages of all your services, making it easy to monitor the health of all your services in one place. Say goodbye to managing each status page individually - our service simplifies the process.
Uptime monitoring in real time
Say goodbye to wasting time trying to diagnose issues with your services - our 24/7 monitoring service does the work for you. We'll notify you if there is an incident, so you can focus on other tasks.
Receive alerts in your preferred channels
Our outage monitoring keeps you informed, no matter where you are. Get instant notifications in your email, Slack, Teams, or Discord when an outage is detected, so you can take action quickly.
Easily integrate with your current tools and workflows
Enhance your processes with more information using our integrations with Zapier, Webhooks, PagerDuty, and Datadog. Stay notified and in control. Upgrade your operations today.
Avoid notification clutter
Maximize your control with customizable notifications from each service. Filter by components and severity to only receive the most important updates. Streamline your processes and stay informed with our advanced notification features.
Multiple dashboards, shareable with the world
Create one dashboard for each of your teams/clients/projects and monitor only the services that each uses. Have a dedicated dashboard with custom notification settings. Easily make your dashboard public and share it with the world.
Prepare for scheduled maintenance
Never again be caught off guard by unexpected maintenance from your services. A feed of upcoming scheduled maintenance is available.
Weekly Digest of the services' outages
Every Monday, you'll receive a weekly summary of what happened the previous week as well as the maintenance schedule for the following week.
The data and notifications you need, in the tools you already use.
DevOps & On-Call Teams
You already monitor your internal systems. What about the external services? Monitor the services your business depends on. Don't waste time looking elsewhere when external outages are the cause of issues.
IT Support Teams
Detect external outages before your clients tell you. Anticipate possible issues and make the necessary arrangements. Proactive communication builds trust with clients and reduces the flow of support tickets.
5-minute setup,
instant value for your team
Start with a trial account that lets you monitor up to 40 services for 14 days.
There are 2346 services to choose from, and we're adding more every week.
You can get notifications by email, Slack, and Discord. You can also use Zapier or Webhooks to build your workflows.
You'll start getting alerts when we detect outages in your external dependencies! No more wasting time looking in the wrong place!
Try it out! How much time will you save your team by having outage information close at hand?