
Confluent Outage History

Every past Confluent outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 140 Confluent outages since March 2024. The 53 outages from the last 12 months are summarized below, with incident details, duration, and resolution information.

Major March 4, 2026

March 2026: Metrics API was experiencing failures from 03:23 UTC to 03:58 UTC

Detected Mar 4, 2026 11:40 PM EST · Resolved Mar 5, 2026 2:26 AM EST · Duration about 3 hours

The Confluent Cloud Metrics API experienced a complete outage from 03:23 UTC to 03:58 UTC on March 5, 2026, making metrics data unavailable to users. The service was restored after 35 minutes, followed by a monitoring period to ensure stability. The incident was fully resolved at 07:22 UTC with all systems returning to normal operation.
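
For readers who poll this service programmatically, the sketch below shows how a caller of the Metrics API might ride out a window like this one by retrying with backoff. It is a minimal, hypothetical example, not Confluent tooling; the endpoint, metric name, and payload shape follow the publicly documented query API but should be treated as assumptions, and the credentials and cluster ID are placeholders.

    # Minimal sketch: query the Confluent Cloud Metrics API and back off while it is unavailable.
    # Endpoint, metric name, and payload shape are assumptions; credentials are placeholders.
    import time
    import requests

    METRICS_URL = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query"

    def query_received_bytes(api_key, api_secret, cluster_id, retries=5):
        payload = {
            "aggregations": [{"metric": "io.confluent.kafka.server/received_bytes"}],
            "filter": {"field": "resource.kafka.id", "op": "EQ", "value": cluster_id},
            "granularity": "PT1M",
            "intervals": ["2026-03-05T03:00:00Z/2026-03-05T04:00:00Z"],
        }
        for attempt in range(retries):
            resp = requests.post(METRICS_URL, json=payload,
                                 auth=(api_key, api_secret), timeout=10)
            if resp.ok:
                return resp.json()
            # During an outage window like this one, treat errors as retryable and back off.
            time.sleep(2 ** attempt)
        raise RuntimeError("Metrics API still unavailable after retries")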

Major March 2, 2026

March 2026: Elevated error rates in AWS me-south-1 and me-central-1 regions

Detected Mar 2, 2026 2:19 AM EST · Resolved Mar 19, 2026 8:55 PM EDT · Duration 18 days

Confluent Cloud services in AWS me-south-1 and me-central-1 regions experienced elevated error rates and outages due to underlying AWS infrastructure failures in multiple availability zones. The me-south-1 region was restored within days, while me-central-1 experienced a complete outage that lasted significantly longer due to extended AWS regional recovery issues. The incident was fully resolved after approximately 17 days, with customers advised to consider regional failover options during the extended outage period.
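
The failover advice above amounts to pointing clients at a cluster in an unaffected region. Below is a minimal sketch of that idea, assuming a pre-provisioned standby cluster; the bootstrap endpoints, credentials, and health check are placeholders for illustration, not a Confluent-provided mechanism.

    # Minimal failover sketch: try the primary region, fall back to a standby cluster.
    # Bootstrap servers, credentials, and the reachability check are placeholders/assumptions.
    from confluent_kafka import Producer, KafkaException

    CLUSTERS = [
        {"bootstrap.servers": "pkc-primary.me-central-1.aws.confluent.cloud:9092",  # affected region
         "sasl.username": "PRIMARY_KEY", "sasl.password": "PRIMARY_SECRET"},
        {"bootstrap.servers": "pkc-standby.eu-west-1.aws.confluent.cloud:9092",     # standby region
         "sasl.username": "STANDBY_KEY", "sasl.password": "STANDBY_SECRET"},
    ]

    def producer_with_failover():
        for conf in CLUSTERS:
            conf.update({"security.protocol": "SASL_SSL", "sasl.mechanisms": "PLAIN"})
            p = Producer(conf)
            try:
                # list_topics() doubles as a cheap reachability check.
                p.list_topics(timeout=5)
                return p
            except KafkaException:
                continue  # this region is unreachable, try the next cluster
        raise RuntimeError("No reachable cluster in any region")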

Major March 1, 2026

March 2026: Elevated Error Rates in AWS me-central-1 region

Detected Mar 1, 2026 1:08 PM EST · Resolved Mar 1, 2026 5:08 PM EST · Duration about 4 hours

Confluent Cloud services in the AWS me-central-1 region experienced elevated error rates for 4 hours starting at 12:51 UTC, caused by a disruption in AWS availability zone mec1-az2. The team identified the root cause as the AWS infrastructure issue and implemented mitigation steps to restore service health. All Confluent Cloud services in the region were fully restored and healthy by 21:15 UTC.

Major February 26, 2026

February 2026: All new dedicated Kafka cluster provisioning and expansion in Azure westus3, southcentralus, and eastus2 are failing

Detected Feb 26, 2026 2:56 PM EST · Resolved Feb 26, 2026 8:57 PM EST · Duration about 6 hours

Confluent Cloud experienced a major outage where all new dedicated Kafka cluster provisioning and expansion operations failed in Azure regions westus3, southcentralus, and eastus2. The issue was caused by an underlying Azure infrastructure problem that prevented new cluster deployments and scaling operations. The incident was resolved after 6 hours once Azure identified and implemented a fix for the underlying infrastructure issue.

Major February 26, 2026

February 2026: Network provisioning service in Azure East US region is degraded

Detected Feb 26, 2026 8:59 AM EST · Resolved Feb 26, 2026 5:25 PM EST · Duration about 8 hours

Confluent's network provisioning service in the Azure East US region experienced degradation for 8.4 hours, causing potential failures when provisioning new networks. The issue affected Confluent Cloud services in that specific region. The incident was resolved after investigation and remediation efforts.

Major February 25, 2026

February 2026: Some Confluent Cloud customers accessing services in GCP us-south1 region might experience elevated error rates

Detected Feb 25, 2026 12:01 AM EST · Resolved Feb 27, 2026 1:17 PM EST · Duration 3 days

Confluent Cloud customers in the GCP us-south1 region experienced elevated error rates for 61.3 hours, with single zone clusters in Zone-a being specifically impacted. The issue was identified as originating from GCP's infrastructure, and GCP worked to apply mitigation measures. The incident was resolved after implementing a fix and monitoring the results.

Minor February 24, 2026

February 2026: Elevated Kafka REST API Errors in AWS us-west-2

Detected Feb 24, 2026 4:15 PM EST · Resolved Feb 24, 2026 8:15 PM EST · Duration about 4 hours

Confluent Cloud experienced elevated Kafka REST API errors with error code 429 in AWS us-west-2, impacting Kafka eSKU clusters in the region. The issue caused rate limiting errors for users attempting to access the Kafka REST API. The incident was resolved after 3.9 hours once the fix was identified and deployed.
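
HTTP 429 from the REST API signals rate limiting, so the usual client-side mitigation is to back off and retry, honoring a Retry-After header when one is present. The rough sketch below illustrates that pattern against the Kafka REST Produce endpoint; the host, cluster ID, topic, record format, and credentials are placeholders, not values from this incident.

    # Sketch: retry REST Produce calls on HTTP 429 with exponential backoff.
    # Host, cluster ID, topic, and credentials are placeholders.
    import time
    import requests

    URL = ("https://pkc-xxxxx.us-west-2.aws.confluent.cloud:443"
           "/kafka/v3/clusters/lkc-xxxxx/topics/orders/records")

    def produce_with_backoff(record, auth, max_attempts=6):
        for attempt in range(max_attempts):
            resp = requests.post(URL, json=record, auth=auth, timeout=10)
            if resp.status_code != 429:
                resp.raise_for_status()
                return resp.json()
            # Respect Retry-After if the server sends it, otherwise double the wait.
            wait = float(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
        raise RuntimeError("Still rate limited after retries")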

Minor February 23, 2026

February 2026: Elevated Kafka Latency in GCP asia-southeast1

Detected Feb 23, 2026 5:12 PM EST · Resolved Feb 24, 2026 5:01 AM EST · Duration about 12 hours

Confluent Cloud experienced elevated Kafka latency in the GCP asia-southeast1 region, affecting less than 0.1% of produce and fetch requests for some customers while median latencies remained unimpacted. The issue was caused by an underlying GCP problem that required collaboration between Confluent and Google Cloud Platform to identify and resolve. The incident lasted 11.8 hours and was fully resolved after GCP implemented a fix and monitoring confirmed all systems were working normally.
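
Because only a small tail of requests was affected while median latencies stayed flat, per-request latency is the signal worth watching on the client side. The small sketch below measures produce latency from delivery callbacks with confluent-kafka so tail regressions show up even when the median looks fine; the bootstrap server, credentials, and topic are placeholders.

    # Sketch: measure per-message produce latency via delivery callbacks.
    # Bootstrap server, credentials, and topic are placeholders.
    import time
    from confluent_kafka import Producer

    conf = {
        "bootstrap.servers": "pkc-xxxxx.asia-southeast1.gcp.confluent.cloud:9092",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "KEY",
        "sasl.password": "SECRET",
    }
    producer = Producer(conf)
    latencies = []

    def track(send_time):
        def on_delivery(err, msg):
            if err is None:
                latencies.append(time.time() - send_time)
        return on_delivery

    for i in range(1000):
        producer.produce("probe-topic", value=f"ping-{i}".encode(),
                         on_delivery=track(time.time()))
        producer.poll(0)  # serve delivery callbacks as they arrive

    producer.flush()
    latencies.sort()
    print(f"p50={latencies[len(latencies) // 2]:.3f}s  "
          f"p99={latencies[int(len(latencies) * 0.99)]:.3f}s")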