
IONOS Cloud Outage History

Every past IONOS Cloud outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 253 IONOS Cloud outages since September 2020. The 76 outages from the past 12 months are summarized below, with incident details, durations, and resolution information.

Minor May 12, 2026

May 2026: Suspected Issues with USt- / VAT ID verification

Detected May 12, 2026 4:50 AM EDT · Resolved May 13, 2026 2:58 AM EDT · Duration about 22 hours

IONOS Cloud's automated accounting system erroneously emailed multiple customers claiming that their VAT (Umsatzsteuer) ID numbers could not be verified. The incident turned out to be a false alarm: there were no actual verification issues and no customer action was required. All affected customers were contacted directly via email once the roughly 22-hour incident was resolved.

Minor May 11, 2026

May 2026: Partial Connectivity Degradation to Control Plane Affecting Kubernetes Operations

Detected May 11, 2026 1:53 PM EDT · Resolved May 11, 2026 5:19 PM EDT · Duration about 3 hours

IONOS Cloud experienced partial connectivity issues with their Managed Kubernetes control plane, degrading functionality for some customers. Kubernetes operations were affected for approximately 3.4 hours, and the issue was resolved after a fix was implemented and monitored.
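
In incidents like this, workloads typically keep running while API access degrades, so a lightweight probe against the API server is the quickest way to tell whether your own cluster is affected. The sketch below uses the official Kubernetes Python client; the 30-second interval and three-failure threshold are illustrative assumptions, not IONOS tooling.

```python
# Minimal control-plane reachability probe, assuming the official `kubernetes`
# Python client and a kubeconfig for the affected cluster.
import time
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default
version_api = client.VersionApi()

consecutive_failures = 0
while True:
    try:
        info = version_api.get_code()  # GET /version: touches only the control plane
        print(f"control plane reachable, server {info.git_version}")
        consecutive_failures = 0
    except Exception as exc:
        consecutive_failures += 1
        print(f"control plane check failed ({consecutive_failures} in a row): {exc}")
        if consecutive_failures >= 3:
            print("sustained unavailability; check the provider status page")
    time.sleep(30)
```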

Major May 8, 2026

May 2026: K8s Control Planes not available

Detected May 8, 2026 11:15 AM EDT · Resolved May 11, 2026 4:33 AM EDT · Duration 3 days

IONOS Cloud experienced a major incident where customers encountered connection problems to Kubernetes control planes and degraded functionality in their Managed Kubernetes service. The issue was caused by a defect in a configuration filtering mechanism that led to inadvertent configuration changes on affected resources. The incident lasted 65.3 hours and was resolved through a rollback of the identified changes, followed by monitoring to check for residual inconsistencies.

Major May 5, 2026

May 2026: Managed Kubernetes - Partial Connectivity Degradation to Control Planes

Detected May 5, 2026 12:06 PM EDT · Resolved May 6, 2026 2:01 AM EDT · Duration about 14 hours

IONOS Cloud experienced a major outage affecting a high number of Managed Kubernetes control planes, primarily in the Frankfurt region, preventing customers from making workload changes to their K8s clusters. The incident lasted 13.9 hours before teams identified the root cause and applied a fix to restore all control planes. All services returned to normal operation with no further issues reported during the monitoring period.

Minor May 5, 2026

May 2026: Object Storage Service Restrictions: New keys cannot be created

Detected May 5, 2026 3:47 AM EDT · Resolved May 6, 2026 2:01 AM EDT · Duration about 22 hours

IONOS Cloud's S3 Object Storage service experienced a 22.2-hour incident where all new access key creation failed, preventing users from generating new authentication credentials. Existing access keys continued to work normally throughout the incident, so users with previously created keys maintained access to their storage resources. The key creation functionality was restored and monitored before being fully resolved.
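
A failure pattern like this, where existing credentials keep working while new ones cannot be issued, is straightforward to verify from the client side. Below is a minimal sketch using boto3 against IONOS's S3-compatible API; the endpoint URL and credential placeholders are assumptions to replace with your contract's values.

```python
# Minimal check that an existing access key is still accepted. boto3 works with
# any S3-compatible endpoint; the URL below is an assumed placeholder.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.ionoscloud.com",  # assumed; use your region's endpoint
    aws_access_key_id="EXISTING_KEY_ID",          # placeholder credentials
    aws_secret_access_key="EXISTING_SECRET_KEY",
)

try:
    buckets = s3.list_buckets()["Buckets"]
    print(f"existing key accepted; {len(buckets)} bucket(s) visible")
except ClientError as exc:
    print(f"existing key rejected: {exc.response['Error']['Code']}")
```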

Major May 3, 2026

May 2026: Limited access to provisioning services, DCD, Managed Kubernetes

Detected May 3, 2026 2:53 PM EDT · Resolved May 6, 2026 1:57 AM EDT · Duration 2 days

IONOS Cloud experienced a major incident affecting provisioning services, the Data Center Designer, the Cloud API, and Managed Kubernetes, causing increased processing times, timeouts, and connection interruptions, and temporarily placing Managed Kubernetes in read-only mode. The incident lasted 59.1 hours and required temporarily disabling provisioning services so fixes could be implemented. Provisioning functionality was restored to normal operation, and existing virtual data center resources remained unaffected throughout the incident.

Minor April 28, 2026

April 2026: Network: Increased latency in TXL

Detected Apr 28, 2026 4:17 PM EDT · Resolved Apr 29, 2026 12:33 PM EDT · Duration about 20 hours

IONOS Cloud experienced network latency irregularities in their TXL datacenter that affected network services and caused increased error rates on the AI Modelhub. The network team identified the likely cause and deployed a mitigation after approximately 20 hours. The incident was resolved once no further anomalies were observed during peak usage times.
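
Latency irregularities like these are easiest to confirm from your own vantage point, by sampling round-trip times to something you host in the affected data center. The sketch below does this with the `requests` library; the target URL, sample count, and interval are hypothetical.

```python
# Simple latency sampler. Point TARGET at an endpoint you run in the affected
# location; failed probes are recorded as NaN and excluded from the stats.
import statistics
import time
import requests

TARGET = "https://your-service.example/health"  # hypothetical endpoint
samples = []
for _ in range(20):
    start = time.perf_counter()
    try:
        requests.get(TARGET, timeout=5)
        samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    except requests.RequestException:
        samples.append(float("nan"))
    time.sleep(1)

ok = [s for s in samples if s == s]  # NaN != NaN, so this drops failed probes
if ok:
    print(f"median {statistics.median(ok):.1f} ms, max {max(ok):.1f} ms, "
          f"failures {len(samples) - len(ok)}/{len(samples)}")
```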

Minor April 22, 2026

April 2026: Support Telephone Routing Issue

Detected Apr 22, 2026 6:17 AM EDT · Resolved Apr 22, 2026 11:25 AM EDT · Duration about 5 hours

IONOS Cloud experienced a routing issue with their Cloud Support telephone system that caused customer calls to disconnect or route incorrectly. The incident affected customers' ability to reach support through documented phone numbers, requiring them to use an alternative direct dial number. The underlying cause was identified and fixed after 5.1 hours, restoring normal functionality to all documented support numbers.

Minor April 21, 2026

April 2026: S3: Increased error count in eu-central-1

Detected Apr 21, 2026 3:37 AM EDT · Resolved Apr 21, 2026 1:09 PM EDT · Duration about 10 hours

IONOS Cloud experienced increased error rates and latency on S3 Object Storage buckets in the eu-central-1 region (DE/FRA location) for roughly 9.5 hours. The problem was caused by localized resource contention on several storage nodes, which affected some users' access to object storage services. The engineering team performed maintenance on the affected storage cluster to clear the contention, and the incident was resolved once error rates returned to normal levels.
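
Until an operator clears contention like this, client-side retries usually bridge the elevated error rates. The sketch below enables botocore's built-in adaptive retry mode, which backs off and retries throttling and server errors automatically; the endpoint is the same assumed placeholder as in the earlier sketch, and credentials are expected to come from the environment.

```python
# Configure automatic retries for an S3-compatible client during an incident
# with elevated error rates. The retry settings are botocore's standard knobs.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.ionoscloud.com",  # assumed endpoint
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

# Every call through this client now retries transient 5xx/throttling errors.
resp = s3.list_buckets()
print(f"succeeded after {resp['ResponseMetadata'].get('RetryAttempts', 0)} retry attempt(s)")
```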

Major April 14, 2026

April 2026: Network Connectivity Issue in FRA

Detected Apr 14, 2026 5:40 PM EDT · Resolved Apr 14, 2026 6:29 PM EDT · Duration about 1 hour

IONOS Cloud experienced a network connectivity issue affecting one specific host in the FRA region for 50 minutes, while other hosts in the region remained operational. The problem was caused by database inconsistencies that occurred during a previous package update. The network team deployed a mitigation to restore connectivity and resolved the incident.