
Elastic Cloud Outage History

Every past Elastic Cloud outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 347 Elastic Cloud outages since February 2021. The 74 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Major May 8, 2026

May 2026: Issues with Email Alerts

Detected May 8, 2026 12:10 PM CST · Resolved May 8, 2026 7:03 PM CST · Duration about 7 hours

Elastic Cloud experienced a major incident in which some customers faced issues with email alerts for 6.9 hours. The disruption affected email alert delivery across the platform. The issue was resolved after a fix restored email delivery, with monitoring confirming successful operation before full resolution.

Major April 30, 2026

April 2026: Delayed AutoOps Metrics in AWS us-east-1

Detected Apr 30, 2026 3:46 PM CST · Resolved Apr 30, 2026 4:37 PM CST · Duration about 1 hour

Elastic Cloud experienced delayed AutoOps metrics for customers in the AWS us-east-1 region. The delay affected metric delivery and monitoring capabilities for users in that region, and the issue was resolved after 52 minutes of mitigation work.

Major April 28, 2026

April 2026: Missing Metrics in Cloud Console — US East (us-east-1)

Detected Apr 28, 2026 8:46 AM CST · Resolved Apr 28, 2026 12:40 PM CST · Duration about 4 hours

A disruption in metric data ingestion affected Elastic Cloud deployments in the us-east-1 region, causing missing metrics visualizations in the Cloud console and preventing recent metric data from appearing in charts or monitoring views. The incident lasted 3.9 hours, with the system showing gradual signs of stabilization before full recovery was achieved.

Major April 27, 2026

April 2026: Elevated Error Rates and Latency — EU West Region

Detected Apr 27, 2026 5:49 AM CST · Resolved Apr 27, 2026 6:42 AM CST · Duration about 1 hour

Elastic Cloud experienced connectivity issues in the AWS EU West 3 (Paris) region for 53 minutes, causing elevated error rates and latency for a subset of customers. The incident was caused by an underlying cloud infrastructure provider issue that affected the region's ingress layer traffic. The problem was resolved through remediation with the cloud provider, with traffic returning to normal levels and all monitoring alerts clearing.

Major April 20, 2026

April 2026: EIS elevated 5xx error rates for model google-gemini-embedding-001

Detected Apr 20, 2026 12:37 AM CST · Resolved Apr 20, 2026 4:42 AM CST · Duration about 4 hours

Elastic Cloud experienced elevated 5xx error rates for the Google Gemini Embedding v1 model (.google-gemini-embedding-001) in their Elastic Inference Service (EIS) for 4.1 hours. The errors originated from Google's provider API, and the incident was escalated to Google for resolution. The upstream service provider issue was resolved, restoring normal access to the embedding model.
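Upstream 5xx bursts like this one are transient by nature, and callers commonly absorb them with bounded retries and exponential backoff rather than failing on the first error. A minimal sketch, assuming a caller-supplied function that raises on a 5xx response (the names here are illustrative, not part of the Elastic or Google APIs):

```python
import time

class TransientServerError(Exception):
    """Stands in for a 5xx response from an upstream provider."""

def call_with_retries(fn, max_attempts=4, base_delay=0.5):
    """Call fn(), retrying on TransientServerError with exponential backoff.

    Waits base_delay, then 2x, 4x, ... between attempts, and re-raises
    the last error once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TransientServerError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

For a multi-hour provider outage like this incident, retries alone are not enough; production callers typically pair them with a circuit breaker or a fallback model so requests fail fast once the error rate stays elevated.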

Major April 9, 2026

April 2026: PrivateLink hostnames reported by API are incorrect

Detected Apr 9, 2026 2:46 PM CST · Resolved Apr 10, 2026 1:55 PM CST · Duration about 23 hours

A recent change to Elastic Cloud's PrivateLink implementation caused the deployment API to report incorrect URLs, resulting in connectivity issues for customers relying on those URLs. The incident lasted 23.2 hours and affected customers' ability to connect to their deployments through PrivateLink. The issue was resolved by deploying a fix to the User Console that corrected the hostname reporting.
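Incidents like this are a reminder that API-reported endpoint hostnames are worth sanity-checking before clients are pointed at them. A minimal pre-flight sketch (not part of any Elastic API; the function name is illustrative, and resolution alone does not prove the hostname belongs to the right deployment):

```python
import socket

def hostname_resolves(hostname: str) -> bool:
    """Return True if `hostname` resolves via DNS or the hosts file.

    A cheap pre-flight check before wiring an API-reported endpoint
    into client configuration. It only confirms the name resolves;
    it does not verify the host is the intended deployment.
    """
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False
```

A client could log and refuse to switch endpoints when the newly reported hostname fails this check, which would have surfaced the incorrect URLs immediately rather than as opaque connection failures.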

Major April 9, 2026

April 2026: Connection issue to Kibana via the Cloud UI SSO

Detected Apr 9, 2026 9:37 AM CST · Resolved Apr 10, 2026 5:53 AM CST · Duration about 20 hours

Elastic Cloud experienced a major 20.3-hour incident in which users could not connect to Kibana from the Cloud UI via SAML SSO. The issue affected PrivateLink customers only; other hosted deployments were unaffected. The engineering team identified the root cause, deployed a fix, and confirmed full resolution with no further customer impact.

Major April 6, 2026

April 2026: Synthetics service may not run on schedule (us-east-4)

Detected Apr 6, 2026 11:58 AM CST · Resolved Apr 6, 2026 2:45 PM CST · Duration about 3 hours

The Elastic Cloud Synthetics service in the us-east-4 region experienced a major incident where customer monitor jobs failed to run on their expected schedules. The issue affected monitoring capabilities for customers using the Synthetics service in that region for approximately 2.8 hours. The problem was identified and resolved, restoring normal scheduling functionality for all affected monitor jobs.

Major March 30, 2026

March 2026: Elevated error rates for Claude Sonnet 4.5 EIS inference endpoints

Detected Mar 30, 2026 12:46 PM CST · Resolved Mar 30, 2026 2:00 PM CST · Duration about 1 hour

Elastic Cloud experienced elevated 5xx error rates affecting chat-completion and completion inference endpoints for the Anthropic Claude Sonnet 4.5 and GP-LLM-v2 models. The incident lasted 1.2 hours before all endpoints returned to normal operation.

Major March 26, 2026

March 2026: AutoOps deployments marked as Inactive in AWS us-east-1 region

Detected Mar 26, 2026 10:33 PM CST · Resolved Mar 27, 2026 1:24 AM CST · Duration about 3 hours

Elastic Cloud experienced a major AutoOps outage in the AWS us-east-1 region lasting 2.9 hours, causing customer deployments to be incorrectly marked as inactive and preventing access to recent metrics. The engineering team identified the root cause, applied mitigations, and restored full AutoOps functionality. The incident was fully resolved after a monitoring period to ensure system stability.