Dynatrace Outage History

Every past Dynatrace outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 146 Dynatrace outages since February 2024. The 83 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Minor May 7, 2026

May 2026: AWS operational issue may impact some US deployments

Detected May 7, 2026 9:03 PM EDT · Resolved May 8, 2026 3:05 PM EDT · Duration about 18 hours

An AWS operational issue in the US-EAST-1 region caused degraded data ingestion for Dynatrace US deployments, resulting in missing or delayed data, intermittent API errors, and processing delays across metrics, logs, events, and user sessions. The incident lasted about 18 hours; recovery actions progressively restored data ingestion, though some high-volume customers continued to see delayed span processing, and a limited subset still faced ingestion issues for certain data types. Full resolution came through capacity adjustments and system recovery measures that allowed the platform to catch up on previously delayed data.
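
Intermittent API errors during an ingestion incident like this are usually best absorbed client-side with retries. Below is a minimal sketch of that idea for pushing a custom metric, assuming a hypothetical tenant URL and token; the `/api/v2/metrics/ingest` endpoint, line protocol, and `Api-Token` header follow Dynatrace's documented Metrics API v2, but verify the details against the docs for your environment.

```python
import time
import requests

# Hypothetical values: substitute your own tenant URL and ingest token.
TENANT = "https://abc12345.live.dynatrace.com"
API_TOKEN = "dt0c01.sample-token"

def push_metric(line: str, max_retries: int = 5) -> None:
    """Send one metric line to the ingest endpoint, retrying transient
    failures (5xx, 429, network errors) with exponential backoff."""
    url = f"{TENANT}/api/v2/metrics/ingest"
    headers = {
        "Authorization": f"Api-Token {API_TOKEN}",
        "Content-Type": "text/plain; charset=utf-8",
    }
    for attempt in range(max_retries):
        try:
            resp = requests.post(url, data=line, headers=headers, timeout=10)
        except requests.RequestException:
            resp = None                      # network error: retry below
        if resp is not None:
            if resp.status_code == 202:      # ingest API accepted the line
                return
            if resp.status_code != 429 and resp.status_code < 500:
                resp.raise_for_status()      # permanent client error: give up
        time.sleep(2 ** attempt)             # back off: 1s, 2s, 4s, 8s, 16s
    raise RuntimeError(f"metric not accepted after {max_retries} attempts")

push_metric("custom.app.queue_depth,region=us-east 42")
```

Backoff with a retry cap keeps delayed data flowing once the platform recovers without hammering an already degraded ingest pipeline.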

Minor May 7, 2026

May 2026: Multiple Google Synthetic Locations are not executing tests

Detected May 7, 2026 11:40 AM EDT · Resolved May 7, 2026 1:21 PM EDT · Duration about 2 hours

Dynatrace experienced an outage in which multiple Google Cloud synthetic monitoring locations stopped executing tests for about 1.7 hours. Engineers identified the root cause and applied a mitigation to restore normal synthetic test execution. The incident was classified as minor severity.
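
When a subset of public synthetic locations stops executing tests, a quick inventory of locations and their reported state helps scope the impact. A minimal sketch, assuming a hypothetical tenant and token; the `/api/v1/synthetic/locations` listing endpoint is part of Dynatrace's Environment API, but the exact response fields (such as `status`) are assumptions to verify against the API docs.

```python
import requests

TENANT = "https://abc12345.live.dynatrace.com"   # hypothetical tenant
API_TOKEN = "dt0c01.sample-token"                # hypothetical token

resp = requests.get(
    f"{TENANT}/api/v1/synthetic/locations",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    timeout=15,
)
resp.raise_for_status()
for loc in resp.json().get("locations", []):
    # Print each location with its reported status so degraded
    # locations (here, the Google Cloud ones) stand out.
    print(loc.get("entityId"), loc.get("name"), loc.get("status"))
```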

Minor April 21, 2026

April 2026: Real User Monitoring on Grail sessions processing in OpenPipeline are impacted

Detected Apr 21, 2026 10:37 AM EDT · Resolved Apr 21, 2026 11:57 AM EDT · Duration about 1 hour

Dynatrace experienced an issue with Real User Monitoring sessions processed through OpenPipeline: data ingestion continued normally, but post-processing was impacted. The incident affected user session data analysis and monitoring capabilities for about 1.3 hours. The root cause was identified and mitigation steps were implemented to resolve the processing issue.

Minor April 15, 2026

April 2026: SSO Authentication Issues Impacting Access to Dynatrace

Detected Apr 15, 2026 1:19 PM EDT · Resolved Apr 15, 2026 2:55 PM EDT · Duration about 2 hours

Dynatrace experienced SSO authentication issues that prevented users from logging into the platform, though existing active sessions remained functional for up to one hour and the core product continued operating normally. The incident lasted 1.6 hours before a mitigation was applied and access was restored. Teams confirmed recovery through customer feedback and continued monitoring while working toward a complete resolution.
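
During an SSO outage like this, token-authenticated API access does not go through the interactive login flow, so a simple probe can confirm whether only browser logins are affected. A minimal sketch, assuming a hypothetical tenant and token; `/api/v2/metrics` is a documented token-authenticated listing endpoint, but treat the specifics as an assumption for your environment.

```python
import requests

TENANT = "https://abc12345.live.dynatrace.com"   # hypothetical tenant
API_TOKEN = "dt0c01.sample-token"                # hypothetical token

def api_still_reachable() -> bool:
    """Probe a token-authenticated endpoint. Token auth bypasses the
    interactive SSO flow, so a 200 here suggests the outage is limited
    to browser logins."""
    try:
        resp = requests.get(
            f"{TENANT}/api/v2/metrics",
            params={"pageSize": 1},
            headers={"Authorization": f"Api-Token {API_TOKEN}"},
            timeout=10,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False

print("API reachable:", api_still_reachable())
```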

Minor April 15, 2026

April 2026: Potential delay in log ingestion in Azure Europe (Switzerland)

Detected Apr 15, 2026 5:38 AM EDT · Resolved Apr 15, 2026 6:11 AM EDT · Duration 33 minutes

Dynatrace experienced increased load on the data processing pipeline in its Azure Europe (Switzerland) deployment, causing delays in log ingestion for some customers. The incident was classified as minor and lasted 33 minutes. Teams analyzed the situation and took steps to stabilize the deployment, resolving the ingestion delays.
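
One way log senders can ride out a short ingestion delay is to buffer events locally and stamp them at creation time, so late delivery does not skew log timestamps. A minimal sketch under those assumptions, using a hypothetical tenant and token; `/api/v2/logs/ingest` with a JSON array of `content`/`timestamp` events follows Dynatrace's documented Log Monitoring ingest API, but check the payload shape against the docs.

```python
import json
import time
from collections import deque

import requests

TENANT = "https://abc12345.live.dynatrace.com"   # hypothetical tenant
API_TOKEN = "dt0c01.sample-token"                # hypothetical token

buffer: deque = deque()                          # events awaiting delivery

def queue_log(content: str) -> None:
    # Stamp events at creation time so delayed delivery keeps ordering.
    buffer.append({"content": content, "timestamp": int(time.time() * 1000)})

def flush_logs() -> None:
    """Try to deliver all buffered events; on failure keep them for later."""
    if not buffer:
        return
    batch = list(buffer)
    try:
        resp = requests.post(
            f"{TENANT}/api/v2/logs/ingest",
            data=json.dumps(batch),
            headers={
                "Authorization": f"Api-Token {API_TOKEN}",
                "Content-Type": "application/json",
            },
            timeout=15,
        )
        if resp.ok:
            buffer.clear()                       # delivered; drop the batch
    except requests.RequestException:
        pass                                     # pipeline degraded; retry later

queue_log("order service started")
flush_logs()
```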

Minor April 2, 2026

April 2026: Support ticket access (Zendesk)

Detected Apr 2, 2026 1:47 AM EDT · Resolved Apr 2, 2026 7:28 AM EDT · Duration about 6 hours

Dynatrace's support ticket system (Zendesk) experienced an outage lasting about 5.7 hours, preventing users from logging in to access support services. The company worked with its provider to restore the system while directing users to call +1-844-900-3962 for immediate support needs. Login functionality was restored after the provider resolved the underlying issue.

Major March 26, 2026

March 2026: Degraded Dynatrace SaaS platform performance affecting AWS London & Azure US East deployments

Detected Mar 26, 2026 9:23 AM EDT · Resolved Mar 26, 2026 2:44 PM EDT · Duration about 5 hours

Dynatrace experienced elevated load on backend services in its AWS London and Azure US East deployments, causing slower UI responsiveness and delays in alert processing. The issue was identified as backend service overload, and teams implemented mitigation steps to restore normal service levels. The incident lasted 5.4 hours and affected SaaS platform performance across these two regional deployments.

Major March 25, 2026

March 2026: Problem generation impacted on one Azure SaaS deployment in the US East region

Detected Mar 25, 2026 12:32 PM EDT · Resolved Mar 25, 2026 1:04 PM EDT · Duration 32 minutes

Dynatrace experienced a major incident in which problem generation was impacted for customer tenants on one Azure SaaS deployment in the US East region. The service disruption lasted 32 minutes while the company investigated the reports. The incident was resolved, though specific resolution details were not provided in the available updates.

Major March 24, 2026

March 2026: Synthetic tests degraded in AWS Bahrain location

Detected Mar 24, 2026 5:46 AM EDT · Resolved Mar 30, 2026 9:26 AM EDT · Duration 6 days

Dynatrace synthetic tests in the AWS Bahrain location were degraded for roughly six days (147.7 hours) due to an ongoing AWS regional availability issue that prevented test execution and resulted in no data being collected. Because the infrastructure problems persisted without a clear resolution timeline, Dynatrace disabled the Bahrain location for new synthetic test configurations and planned to disable all existing tests in that location. Users were advised to execute tests from alternate locations as a workaround during the extended outage; a sketch of that kind of reassignment follows.
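
Moving tests off a degraded location can be scripted rather than done monitor by monitor. A minimal sketch of that workaround, assuming a hypothetical tenant, token, and placeholder location IDs; the v1 synthetic monitors endpoints exist in Dynatrace's Environment API, but the exact monitor payload shape (including the `locations` array) is an assumption to verify against the API docs before running anything like this.

```python
import requests

TENANT = "https://abc12345.live.dynatrace.com"          # hypothetical tenant
API_TOKEN = "dt0c01.sample-token"                       # hypothetical token
HEADERS = {"Authorization": f"Api-Token {API_TOKEN}"}

BAD_LOCATION = "SYNTHETIC_LOCATION-0000000000000001"    # placeholder: Bahrain
FALLBACK = "SYNTHETIC_LOCATION-0000000000000002"        # placeholder: alternate

def move_monitors_off_location() -> None:
    """Replace a degraded location with a fallback on every monitor using it."""
    listing = requests.get(f"{TENANT}/api/v1/synthetic/monitors",
                           headers=HEADERS, timeout=15)
    listing.raise_for_status()
    for stub in listing.json().get("monitors", []):
        mon_url = f"{TENANT}/api/v1/synthetic/monitors/{stub['entityId']}"
        config = requests.get(mon_url, headers=HEADERS, timeout=15).json()
        locations = config.get("locations", [])
        if BAD_LOCATION in locations:
            # Drop the degraded location and add the fallback once.
            config["locations"] = [l for l in locations if l != BAD_LOCATION]
            if FALLBACK not in config["locations"]:
                config["locations"].append(FALLBACK)
            requests.put(mon_url, json=config, headers=HEADERS,
                         timeout=15).raise_for_status()

move_monitors_off_location()
```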

Minor March 2, 2026

March 2026: Dynatrace Platform Tenant UI Access Issues in a Single GCP Deployment

Detected Mar 2, 2026 3:33 PM EST · Resolved Mar 2, 2026 5:45 PM EST · Duration about 2 hours

Multiple customers experienced HTTP 500 errors when accessing their Dynatrace Platform Tenant UI in the US-East GCP deployment, with additional impacts to data ingestion and system accessibility. The incident lasted 2.2 hours; teams identified the root cause and applied mitigations that restored functionality, though customers saw intermittent service availability while the fix rolled out.