Outage in Honeycomb

Delays in SLO, Service Maps processing

Resolved (Major)
October 20, 2025 - Lasted 1 day
Official incident page

Incident Report

Due to constrained EC2 instance capacity in the AWS us-east-1 region, we are choosing to allocate the capacity we have to incoming events and telemetry. As such, customers may see over 5 minute delays in our processing of:
- SLOs
- Service Maps
We do not expect a degradation of our core ingest -> query flow, and we do not expect triggers to be impacted.


Latest Updates (sorted most recent first)
RESOLVED - 10/21/2025 03:41PM

This incident has been resolved. At no point did we lose any customer data that hit our load balancers. Querying has been stable for hours, and all features that were degraded are functional.

MONITORING - 10/21/2025 12:11AM

Honeycomb core functionality is operational. Service maps are up and running, and errors have resolved. Querying, triggers, and SLOs are continuing to recover; customers may see delays in the meantime. This is our final update for the night unless the situation degrades. We will continue to monitor the situation.

MONITORING - 10/20/2025 11:08PM

Service maps are up and running, and errors have resolved. Querying, triggers, and SLOs are continuing to recover; customers may see delays in the meantime. We are continuing to monitor the situation.

MONITORING - 10/20/2025 08:06PM

AWS is starting to show signs of recovery. Querying remains partially impacted and may take longer to return results. We are continuing to monitor the situation.

MONITORING - 10/20/2025 05:39PM

We’re currently observing signs of recovery in querying. We are continuing to monitor the situation.

MONITORING - 10/20/2025 03:30PM

The networking issues in us-east-1 are affecting our query engine; customers may see errors or delays when running queries as a result.

MONITORING - 10/20/2025 03:14PM

SLO processing has recovered. Service Maps remains degraded, but historical data can be queried.

MONITORING - 10/20/2025 02:16PM

Due to constrained EC2 instance capacity in the AWS us-east-1 region, we are choosing to allocate the capacity we have to incoming events and telemetry. As such, customers may see over 5 minute delays in our processing of:
- SLOs
- Service Maps
We do not expect a degradation of our core ingest -> query flow, and we do not expect triggers to be impacted.
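Teams that want to detect incidents like this programmatically can poll a status page's machine-readable feed. Many hosted status pages (Atlassian Statuspage among them) expose a `/api/v2/status.json` endpoint whose page-level `indicator` is `none`, `minor`, `major`, or `critical`. This is a minimal sketch under that assumption; the `STATUS_URL` below is illustrative and should be replaced with the vendor's actual status endpoint.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; substitute the vendor's real status-page URL.
STATUS_URL = "https://status.example.com/api/v2/status.json"

def is_degraded(payload: dict) -> bool:
    """True if the page-level indicator reports any ongoing impact."""
    return payload.get("status", {}).get("indicator", "none") != "none"

def fetch_status(url: str = STATUS_URL) -> dict:
    """Fetch and decode the status JSON (Statuspage-style schema assumed)."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Sample payload shaped like the Statuspage v2 response during this outage:
sample = {"status": {"indicator": "major",
                     "description": "Partial System Outage"}}
print(is_degraded(sample))  # True
```

Polling this endpoint on a schedule and alerting when `is_degraded` flips is the same pattern aggregators use, minus the multi-vendor dashboard.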
