Outage in Honeycomb

Delays in SLO and Service Maps processing

Major
October 20, 2025 - Started about 17 hours ago
Official incident page

Incident Report

Due to constrained EC2 instance capacity in the AWS us-east-1 region, we are choosing to allocate the capacity we have to incoming events and telemetry. As such, customers may see delays of over 5 minutes in our processing of:
- SLOs
- Service Maps
We do not expect degradation of our core ingest -> query flow, and we do not expect triggers to be impacted.
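As a purely illustrative sketch (not Honeycomb's actual implementation), the capacity allocation described above can be pictured as a two-queue scheduler: ingest work is always drained first, and deferred work such as SLO evaluation or Service Map processing only runs when spare capacity exists, which is why the latter can lag by minutes while ingest stays current.

```python
# Illustrative only: prioritize ingest over deferred processing under constrained capacity.
import queue
from typing import Optional

ingest_queue: "queue.Queue[str]" = queue.Queue()    # incoming events and telemetry
deferred_queue: "queue.Queue[str]" = queue.Queue()  # SLO evaluation, Service Map processing

def next_task() -> Optional[str]:
    """Always serve ingest work first; deferred work only runs when ingest is empty."""
    try:
        return ingest_queue.get_nowait()
    except queue.Empty:
        pass
    try:
        return deferred_queue.get_nowait()
    except queue.Empty:
        return None  # no work at all right now
```

Under sustained load, ingest never waits on deferred work, so SLO and Service Map results fall behind rather than incoming telemetry being dropped.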

Latest Updates (sorted most recent first)
MONITORING about 7 hours ago - at 10/21/2025 12:11AM

Honeycomb core functionality is operational. Service maps are up and running and errors have resolved. Querying, triggers, and SLOs are continuing to improve, and customers may see delays while they recover. This is our final update for the night unless the situation degrades. We will continue to monitor the situation.

MONITORING about 8 hours ago - at 10/20/2025 11:08PM

Service maps are up and running and errors have resolved. Querying, triggers, and SLOs are continuing to improve, and customers may see delays while they recover. We are continuing to monitor the situation.

MONITORING about 11 hours ago - at 10/20/2025 08:06PM

AWS is starting to show signs of recovery. Querying remains partially impacted and may take longer to return results. We are continuing to monitor the situation.

MONITORING about 13 hours ago - at 10/20/2025 05:39PM

We are currently observing signs of recovery in querying. We are continuing to monitor the situation.

MONITORING about 16 hours ago - at 10/20/2025 03:30PM

The networking issues in us-east-1 are affecting our query engine; customers may see errors or delays when running queries as a result.

MONITORING about 16 hours ago - at 10/20/2025 03:14PM

SLO processing has recovered. Service Maps continues to be degraded, but historical data can be queried.

MONITORING about 17 hours ago - at 10/20/2025 02:16PM

Due to constrained EC2 instance capacity in the AWS us-east-1 region, we are choosing to allocate the capacity we have to incoming events and telemetry. As such, customers may see delays of over 5 minutes in our processing of:
- SLOs
- Service Maps
We do not expect degradation of our core ingest -> query flow, and we do not expect triggers to be impacted.
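
For anyone who wants to follow incident updates like these programmatically rather than refreshing the page, here is a minimal polling sketch. It assumes Honeycomb's public status page (status.honeycomb.io) is hosted on Atlassian Statuspage and therefore exposes the standard public endpoint /api/v2/status.json; the URL and the 5-minute interval are assumptions for illustration, not details from the incident report.

```python
# Minimal status poller (assumes a Statuspage-style /api/v2/status.json endpoint).
import json
import time
import urllib.request

STATUS_URL = "https://status.honeycomb.io/api/v2/status.json"  # assumed Statuspage-hosted

def fetch_status(url: str = STATUS_URL) -> dict:
    """Return the page's overall status object: {"indicator": ..., "description": ...}."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["status"]

def poll(interval_seconds: int = 300) -> None:
    """Print a line whenever the overall indicator changes (none/minor/major/critical)."""
    last_indicator = None
    while True:
        status = fetch_status()
        if status["indicator"] != last_indicator:
            print(f"{status['indicator']}: {status['description']}")
            last_indicator = status["indicator"]
        time.sleep(interval_seconds)

if __name__ == "__main__":
    poll()
```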
