
OVHcloud Outage History

Every past OVHcloud outage tracked by IsDown, with detection times, duration, and resolution details.

IsDown has tracked 105 OVHcloud outages since October 2025. The outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Major March 31, 2026

March 2026: [BHS1/3][Compute - Instance] - Some instances incident notification

Detected Mar 31, 2026 10:24 AM EDT · Resolved Mar 31, 2026 11:07 AM EDT · Duration 44 minutes

OVHcloud experienced a major incident affecting compute instances in the BHS1 and BHS3 regions, caused by an unexpected underlying infrastructure malfunction. Some customers were temporarily unable to access and use their instances in these regions for approximately 1 hour and 2 minutes. The incident was identified and resolved by OVHcloud's teams, with full service restoration completed.

Minor March 30, 2026

March 2026: [GRA11][Compute - Instance] - Some instances incident notification

Detected Mar 30, 2026 4:50 AM EDT · Resolved Mar 30, 2026 8:09 AM EDT · Duration about 3 hours

OVHcloud experienced a compute instance outage in the GRA11 region from 07:22 to 12:00 UTC on March 30, 2026, caused by an unexpected underlying infrastructure malfunction. Some instances became suddenly unreachable, preventing affected customers from accessing and using their instances in the region. The incident was resolved once teams identified the root cause and restored service.

Major March 26, 2026

March 2026: [GRA11][Compute Instance] - Public Cloud instances Incident Notification

Detected Mar 26, 2026 10:57 AM EDT · Resolved Mar 26, 2026 2:22 PM EDT · Duration about 3 hours

An unexpected infrastructure malfunction in OVHcloud's GRA11 region caused some compute hosts to become unreachable, preventing customers from accessing their Public Cloud instances. The incident lasted approximately 8 hours, from 10:25 to 18:15 UTC on March 26, 2026, and was fully resolved.

Major March 25, 2026

March 2026: [GRA/SBG][Storage] - Swift Object Storage incident notification

Detected Mar 25, 2026 2:17 PM EDT · Resolved Mar 25, 2026 2:35 PM EDT · Duration 18 minutes

OVHcloud's Swift Object Storage service experienced a major incident from 17:24 to 18:00 UTC on March 25, 2026, affecting the GRA and SBG regions. A software configuration issue caused customers to encounter 498 errors when attempting to upload files via PUT requests to their PCS services. The incident was resolved after 36 minutes.
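Transient object-storage errors like these are usually retryable once service recovers. As a minimal sketch (not OVHcloud's official guidance), assuming a standard OpenStack Swift endpoint and a pre-issued auth token — the endpoint URL, token, container, and object names below are all placeholders — an upload helper can back off and retry rather than fail on the first unexpected status code:

```python
import time
import requests

# Placeholder values: substitute your project's Swift endpoint and a valid token.
SWIFT_URL = "https://storage.<region>.cloud.ovh.net/v1/AUTH_<project_id>"  # assumed endpoint format
TOKEN = "<keystone-token>"

def upload_object(container: str, name: str, data: bytes, retries: int = 5) -> None:
    """PUT an object to Swift, retrying with exponential backoff on
    unexpected status codes (such as the 498s seen during this incident)."""
    url = f"{SWIFT_URL}/{container}/{name}"
    for attempt in range(retries):
        resp = requests.put(url, data=data,
                            headers={"X-Auth-Token": TOKEN}, timeout=30)
        if resp.status_code == 201:  # Swift returns 201 Created on success
            return
        if attempt < retries - 1:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise RuntimeError(f"upload failed after {retries} attempts "
                       f"(last status {resp.status_code})")
```

In practice you would cap total retry time and surface repeated failures to monitoring rather than retrying indefinitely during a sustained outage.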

Minor March 24, 2026

March 2026: [BHS1/3][Compute - Instance] - Some instances incident notification

Detected Mar 24, 2026 5:22 PM EDT · Resolved Mar 24, 2026 5:38 PM EDT · Duration 16 minutes

OVHcloud experienced a service disruption affecting some Public Cloud instances in the BHS1 and BHS3 regions from 21:07 to 21:15 UTC on March 24, 2026, caused by an unexpected underlying infrastructure malfunction. Customers were temporarily unable to access and use their instances in these regions during the 8-minute outage. The service was fully restored and monitoring continued to ensure stability.

Major March 22, 2026

March 2026: [GLOBAL][Compute - Instance] - IAM Login incident notification

Detected Mar 22, 2026 12:55 PM EDT · Resolved Mar 22, 2026 5:14 PM EDT · Duration about 4 hours

OVHcloud experienced a major incident affecting IAM (Identity and Access Management) login functionality for Public Cloud instances across multiple global regions for 4.3 hours on March 22, 2026. Customers were temporarily unable to log in to their compute instances due to an unexpected underlying infrastructure malfunction. The incident was resolved after teams implemented a fix and completed service recovery operations.
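During login failures like this, it helps to distinguish an identity-plane outage from a problem with the instances themselves. As a rough sketch — assuming the affected login path is OpenStack Keystone authentication, which is an assumption here, and using a placeholder endpoint and hypothetical credentials — a quick token request shows whether authentication itself is failing:

```python
import requests

# Assumed endpoint: check your project's identity URL in the OVHcloud console.
KEYSTONE_URL = "https://auth.cloud.ovh.net/v3/auth/tokens"

def can_authenticate(user: str, password: str, project_id: str) -> bool:
    """Request a Keystone v3 token; False suggests the identity plane is down."""
    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"id": "default"},  # assumed domain
                        "password": password,
                    }
                },
            },
            "scope": {"project": {"id": project_id}},
        }
    }
    resp = requests.post(KEYSTONE_URL, json=payload, timeout=10)
    return resp.status_code == 201  # Keystone returns 201 plus an X-Subject-Token header
```

If token issuance fails while the instances themselves still respond on their data plane, the problem is the identity service rather than your workloads.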

Major March 20, 2026

March 2026: [GRA5/7/9/11][Compute - Instance] - Some instances incident notification

Detected Mar 20, 2026 4:28 AM EDT · Resolved Mar 20, 2026 5:04 AM EDT · Duration 37 minutes

OVHcloud experienced a major incident affecting compute instances in the GRA5, GRA7, GRA9, and GRA11 regions, making some instances unreachable due to an unexpected underlying infrastructure malfunction. Customers were temporarily unable to access and use their affected instances during the outage. The incident window ran from 08:06 to 08:56 UTC on March 20, 2026, and service was fully restored once teams resolved the issue.

Major March 19, 2026

March 2026: [BHS1][Compute - Instance] - Some instances incident notification

Detected Mar 19, 2026 2:49 PM EDT · Resolved Mar 19, 2026 3:27 PM EDT · Duration 38 minutes

OVHcloud experienced a major incident affecting compute instances in the BHS1 region from 18:16 to 19:21 UTC on March 19, 2026, caused by an unexpected underlying infrastructure malfunction. Some instances became unreachable, preventing customers from accessing and using their instances in that region. The incident was resolved after 65 minutes of downtime.

Major March 18, 2026

March 2026: [GRA11][Containers & Orchestration] - MKS Incident Notification

Detected Mar 18, 2026 12:02 PM EDT · Resolved Mar 18, 2026 4:35 PM EDT · Duration about 5 hours

OVHcloud's MKS (Managed Kubernetes Service) in the GRA11 region experienced an incident that caused delays in creating new Octavia load balancers through the platform for 4.5 hours on March 18, 2026. Existing load balancers remained unaffected and continued to operate normally throughout the incident. The service was fully restored after teams identified and resolved the underlying issue.

Major March 18, 2026

March 2026: [GRA11][Network] - Load Balancer Incident Notification

Detected Mar 18, 2026 10:54 AM EDT · Resolved Mar 18, 2026 4:14 PM EDT · Duration about 5 hours

OVHcloud's Load Balancer Octavia service in the GRA11 region experienced a software-related incident that caused delays in load balancer management operations for 5.3 hours on March 18, 2026. While the data plane remained unaffected, customers experienced slowdowns when performing load balancer administrative tasks. The incident was resolved after teams identified and fixed the underlying software issue.
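Management-plane slowdowns like these two March 18 incidents mean load balancer create and update calls can sit in pending states far longer than usual even though traffic keeps flowing. As a minimal sketch — assuming the standard Octavia v2 API path, with the endpoint, token, and timeout values below as placeholders — a poll loop with a generous deadline avoids treating a slow provision as a hard failure:

```python
import time
import requests

# Placeholder values: use your region's load balancer endpoint and a valid token.
OCTAVIA_URL = "https://<region-endpoint>/v2/lbaas/loadbalancers"
TOKEN = "<keystone-token>"

def wait_until_active(lb_id: str, timeout_s: int = 1800, interval_s: int = 15) -> str:
    """Poll a load balancer's provisioning_status until ACTIVE, ERROR, or timeout.

    During management-plane incidents, provisioning can take far longer than
    normal, so the deadline here is deliberately generous."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{OCTAVIA_URL}/{lb_id}",
                            headers={"X-Auth-Token": TOKEN}, timeout=10)
        resp.raise_for_status()
        status = resp.json()["loadbalancer"]["provisioning_status"]
        if status in ("ACTIVE", "ERROR"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"load balancer {lb_id} still provisioning after {timeout_s}s")
```

Since the data plane stayed healthy in both incidents, automation that only alerts on failed health checks would not have fired; a provisioning deadline like the one above is what catches a stalled control plane.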