
SolarWinds Observability Outage History

Every past SolarWinds Observability outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 143 SolarWinds Observability outages since October 2022. The 36 outages from the past 12 months are summarized below, with incident details, durations, and resolution information.

Major April 28, 2026

April 2026: Degraded Alert Notifications [NA-01, NA-02, EU-01, AP-01]

Detected Apr 28, 2026 3:22 AM EDT · Resolved Apr 28, 2026 5:12 AM EDT · Duration about 2 hours

SolarWinds Observability experienced degraded email alert notifications across multiple regions (NA-01, NA-02, EU-01, AP-01) for about 1.8 hours, with some alerts missed entirely. The team identified the root cause, implemented a fix, and monitored the service until notifications were confirmed to be working properly, at which point the incident was fully resolved.

Minor March 26, 2026

March 2026: Delayed Data Ingestion [NA-01]

Detected Mar 26, 2026 6:08 AM EDT · Resolved Mar 26, 2026 6:28 AM EDT · Duration 20 minutes

SolarWinds Observability experienced a 20-minute delay in Logs data ingestion specifically affecting customers using port-based destinations. During the incident, new log data did not appear in the UI, alerts were delayed, and dashboards displayed missing or outdated information. The issue was identified, fixed, and fully resolved with normal data processing restored.

Minor March 23, 2026

March 2026: Degraded Web Interface [EU-01]

Detected Mar 23, 2026 6:14 AM EDT · Resolved Mar 23, 2026 6:41 AM EDT · Duration 27 minutes

SolarWinds Observability experienced a degradation of the EU-01 web interface: customers encountered intermittent UI errors, and monitoring data failed to load on the home page. The engineering team implemented a fix and monitored the results before fully resolving the incident after 27 minutes.

Minor March 13, 2026

March 2026: Degraded Data Ingestion [NA-01]

Detected Mar 13, 2026 5:04 PM EDT · Resolved Mar 13, 2026 6:57 PM EDT · Duration about 2 hours

SolarWinds Observability experienced degraded data ingestion for logs, metrics, and traces for about 1.9 hours, causing gaps in telemetry data and missed alerts across all observability components, including APM, Infrastructure, Network, Kubernetes, Database, and Alerting services. The root cause was identified and a fix was implemented, with systems returning to stable operation.

Minor March 9, 2026

March 2026: Degraded Digital Experience Monitoring [Washington DC AWS]

Detected Mar 9, 2026 3:43 PM EDT · Resolved Mar 9, 2026 4:07 PM EDT · Duration 24 minutes

SolarWinds Observability experienced degraded Digital Experience Monitoring in the Washington DC AWS region for 24 minutes, causing synthetic checks and alerts to fail or perform poorly. The issue was identified and a fix was implemented, with affected probes returning to stable operation. Synthetic checks that were missed during the incident will not be re-run and may appear as missing data in customer dashboards.

Major March 2, 2026

March 2026: Degraded Data Ingestion [DC-01]

Detected Mar 2, 2026 7:38 PM EST · Resolved Mar 2, 2026 11:38 PM EST · Duration about 4 hours

SolarWinds Observability experienced a 4-hour data ingestion issue affecting logs, metrics, and traces across multiple components including Alerting, APM, DEM, RUM, and Infrastructure Observability. Users experienced gaps in telemetry data, delayed or missing alerts, and dashboards showing outdated values as new data failed to appear in the UI. The root cause was identified and resolved, with systems returning to stable operation.

Major March 2, 2026

March 2026: Degraded Digital Experience Monitoring [Middle East]

Detected Mar 2, 2026 3:22 AM EST · Resolved Mar 5, 2026 7:02 PM EST · Duration about 4 days

SolarWinds Observability experienced a major outage lasting 87.7 hours that affected Digital Experience Monitoring in the Middle East region, specifically the Dubai probes. The incident was caused by ongoing regional conflict and the resulting disruption to AWS datacenter infrastructure, which prevented synthetic checks and alerts from executing properly in that location. The incident was closed with a recommendation that customers reconfigure their monitoring checks to use alternative testing regions while the affected infrastructure remained unavailable.