
IONOS Cloud Outage History

Every past IONOS Cloud outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 246 IONOS Cloud outages since September 2020. The 71 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Minor April 22, 2026

April 2026: Support Telephone Routing Issue

Detected Apr 22, 2026 6:17 AM EDT · Resolved Apr 22, 2026 11:25 AM EDT · Duration about 5 hours

IONOS Cloud experienced a routing issue with their Cloud Support telephone system that caused customer calls to disconnect or be routed incorrectly. The incident affected customers' ability to reach support through the documented phone numbers, requiring them to use an alternative direct-dial number. The underlying cause was identified and fixed after 5.1 hours, restoring normal functionality to all documented support numbers.

Minor April 21, 2026

April 2026: S3: Increased error count in eu-central-1

Detected Apr 21, 2026 3:37 AM EDT · Resolved Apr 21, 2026 1:09 PM EDT · Duration about 10 hours

IONOS Cloud experienced increased error rates and latency issues with S3 Object Storage buckets in the eu-central-1 region (DE/FRA location) for 9.4 hours. The problem was caused by localized resource contention on several storage nodes, which affected some users' access to object storage services. The engineering team performed maintenance on the affected storage cluster to resolve the resource contention, and the incident was resolved once error rates returned to normal levels.
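
For transient S3 error spikes like this one, client-side retries with backoff usually bridge the gap until the provider resolves the contention. Below is a minimal sketch using boto3 against a generic S3-compatible endpoint; the endpoint URL, bucket, and key are placeholders, not values from the incident report.

```python
# Minimal sketch: retrying S3 reads during elevated error rates.
# Endpoint URL, credentials, bucket, and key are placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-endpoint.com",  # placeholder endpoint
    config=Config(
        retries={"max_attempts": 8, "mode": "adaptive"},  # built-in backoff with client-side rate limiting
        connect_timeout=5,
        read_timeout=30,
    ),
)

# boto3 transparently retries throttling and 5xx responses up to max_attempts,
# so short-lived node-level contention surfaces as extra latency rather than errors.
body = s3.get_object(Bucket="my-bucket", Key="reports/latest.json")["Body"].read()
```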

Major April 14, 2026

April 2026: Network Connectivity Issue in FRA

Detected Apr 14, 2026 5:40 PM EDT · Resolved Apr 14, 2026 6:29 PM EDT · Duration about 1 hour

IONOS Cloud experienced a network connectivity issue affecting one specific host in the FRA region for 50 minutes, while other hosts in the region remained operational. The problem was caused by database inconsistencies that occurred during a previous package update. The network team deployed a mitigation to restore connectivity and resolved the incident.

Minor April 9, 2026

April 2026: AI Modelhub: Performance Degradation

Detected Apr 9, 2026 2:28 AM EDT · Resolved Apr 9, 2026 11:48 AM EDT · Duration about 9 hours

IONOS Cloud's AI Modelhub experienced performance degradation for 9.3 hours due to increased traffic volumes causing capacity constraints, particularly affecting the GPT-OSS 120B model. Users encountered longer response times and intermittent timeouts when accessing affected models. The issue was resolved by scaling capacity after the team identified the source of increased load.
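
When a model endpoint degrades under load, bounded timeouts and a capped retry loop keep client applications from hanging on slow responses. Here is a sketch against a generic OpenAI-compatible chat API; the base URL, model name, and token are hypothetical and not taken from IONOS documentation.

```python
# Sketch: bounded timeout + capped exponential backoff for a degraded model endpoint.
# Base URL, model name, and API token are hypothetical placeholders.
import time
import requests

BASE_URL = "https://inference.example.com/v1"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def chat(prompt: str, attempts: int = 4, timeout: float = 30.0) -> str:
    for attempt in range(attempts):
        try:
            r = requests.post(
                f"{BASE_URL}/chat/completions",
                headers=HEADERS,
                json={"model": "example-120b", "messages": [{"role": "user", "content": prompt}]},
                timeout=timeout,  # fail fast instead of hanging on a slow backend
            )
            r.raise_for_status()
            return r.json()["choices"][0]["message"]["content"]
        except (requests.Timeout, requests.HTTPError):
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, 4s between retries
    return ""  # unreachable; keeps type checkers happy
```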

Minor April 8, 2026

April 2026: Performance Degradation Compute FRA

Detected Apr 8, 2026 2:48 AM EDT · Resolved Apr 8, 2026 3:51 PM EDT · Duration about 13 hours

IONOS Cloud experienced performance degradation in their FRA data center affecting a subset of Virtual Machines and Kubernetes Clusters for 13.1 hours. The issue was caused by increased CPU steal time and problematic CPU core affinity settings on affected hosts. The incident was resolved through multiple configuration update rollouts that improved CPU performance across the affected fleet.
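
CPU steal time, the share of cycles the hypervisor withholds from a guest vCPU, is visible from inside any affected Linux VM. The following sketch samples /proc/stat twice and reports the steal percentage over the interval; it uses only standard Linux accounting, with no IONOS-specific tooling assumed.

```python
# Sketch: measure CPU steal percentage on a Linux guest by sampling /proc/stat.
# The aggregate "cpu" line lists: user nice system idle iowait irq softirq steal ...
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()  # first line is the aggregate "cpu" row
    values = list(map(int, fields[1:]))
    return sum(values), values[7]  # (total jiffies, steal jiffies)

total1, steal1 = cpu_times()
time.sleep(5)
total2, steal2 = cpu_times()

steal_pct = 100.0 * (steal2 - steal1) / (total2 - total1)
print(f"steal over 5s: {steal_pct:.2f}%")  # sustained values above a few percent suggest host contention
```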

Minor March 31, 2026

March 2026: Managed Kubernetes: Service Degradation for Control Planes in FRA

Detected Mar 31, 2026 8:22 AM EDT · Resolved Apr 1, 2026 4:18 PM EDT · Duration about 32 hours

IONOS Cloud experienced service degradation affecting the Control Planes of some Managed Kubernetes clusters in the FRA data center for 32 hours. The issue was caused by a transient latency problem in a redundant CoreDNS pod that prevented kube-apiservers from discovering etcd instances in time. All affected Control Planes recovered and the incident was resolved; IONOS is developing better mitigations for the excessive NXDOMAIN request volume observed during the incident.
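
Because the trigger was DNS lookup latency inside the cluster, a lightweight resolution probe can surface this class of problem early. The sketch below times repeated lookups of the standard in-cluster API server name; the threshold is illustrative rather than taken from the incident report.

```python
# Sketch: time in-cluster DNS lookups to detect resolver latency like the
# CoreDNS slowdown described above. The threshold is illustrative.
import socket
import time

HOSTNAME = "kubernetes.default.svc.cluster.local"  # standard in-cluster apiserver name
THRESHOLD_MS = 200

for _ in range(10):
    start = time.monotonic()
    try:
        socket.getaddrinfo(HOSTNAME, 443)
        elapsed_ms = (time.monotonic() - start) * 1000
        status = "SLOW" if elapsed_ms > THRESHOLD_MS else "ok"
        print(f"{status}: {elapsed_ms:.1f} ms")
    except socket.gaierror as exc:
        print(f"resolution failure: {exc}")
    time.sleep(1)
```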

Minor March 24, 2026

March 2026: Availability of API and DCD limited

Detected Mar 24, 2026 8:43 PM EDT · Resolved Mar 25, 2026 7:11 AM EDT · Duration about 10 hours

IONOS Cloud experienced limited availability of their Cloud API, Data Center Designer (DCD), Billing API, and Reseller API for 10.5 hours. The incident affected users' ability to access and manage cloud resources through these critical interfaces. A fix was implemented and monitored before the service was fully restored.
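
Management-plane outages like this one are easiest to catch with an external availability probe. The sketch below polls an API base URL and logs state transitions; the URL is a placeholder, not a documented IONOS health endpoint.

```python
# Sketch: poll a management-API endpoint and log availability transitions.
# The URL is a placeholder, not a documented IONOS health endpoint.
import time
import requests

URL = "https://api.example.com/cloudapi/v6/"  # placeholder
last_ok = None

while True:  # run until interrupted
    try:
        # any HTTP response below 500 counts as "reachable" (4xx means auth, not outage)
        ok = requests.get(URL, timeout=10).status_code < 500
    except requests.RequestException:
        ok = False
    if ok != last_ok:  # log only on state changes
        print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} API {'up' if ok else 'DOWN'}")
        last_ok = ok
    time.sleep(60)
```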

Minor March 16, 2026

March 2026: AI Model Hub - Increased Error Rate in Embeddings

Detected Mar 16, 2026 4:28 AM EDT · Resolved Mar 16, 2026 7:10 AM EDT · Duration about 3 hours

IONOS Cloud's AI Model Hub experienced increased error rates in its Embeddings functionality for 2.7 hours. The root cause was traced to an ongoing Kubernetes incident affecting the service, so the response was folded into the broader Kubernetes incident, with both teams working together to restore normal service operations.

Major March 16, 2026

March 2026: MK8s: Partial Connectivity Degradation to Control Planes

Detected Mar 16, 2026 4:24 AM EDT · Resolved Mar 17, 2026 1:07 PM EDT · Duration about 33 hours

IONOS Cloud experienced a 32.7-hour incident in which customers encountered problems connecting to Managed Kubernetes control planes and degraded functionality, with the impact expanding to the Container Registry, DBaaS, and AI Model Hub services. The root cause was identified as resource constraints within the etcd database, which caused recurring load spikes that impacted system stability. The incident was resolved by applying fixes and changes to all affected clusters, with all control planes returning to a stable state.
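
From the customer side, a managed control plane can only be observed externally, for example by probing the kube-apiserver's standard /readyz endpoint. In the sketch below, the server URL and bearer token are placeholders for cluster-specific values.

```python
# Sketch: probe a Managed Kubernetes control plane's /readyz endpoint.
# Server URL and bearer token are placeholders for your cluster's values.
import requests

SERVER = "https://<cluster-id>.k8s.example.com"  # placeholder apiserver URL
TOKEN = "<service-account-token>"                # placeholder credential

try:
    r = requests.get(
        f"{SERVER}/readyz?verbose",        # standard kube-apiserver readiness endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
        verify=False,  # sketch only; pin the cluster CA in real use
    )
    print(r.status_code, r.text.splitlines()[-1] if r.text else "")
except requests.RequestException as exc:
    print(f"control plane unreachable: {exc}")
```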

Minor March 11, 2026

March 2026: Network Connectivity Issues in TXL

Detected Mar 11, 2026 2:13 PM EDT · Resolved Mar 11, 2026 5:21 PM EDT · Duration about 3 hours

IONOS Cloud experienced network connectivity issues in their TXL data center for 3.1 hours. The network team deployed changes to stabilize the affected cluster and restore connectivity. The incident was resolved after monitoring confirmed the network had returned to normal operation.