
ElevenLabs Outage History

Every past ElevenLabs outage tracked by IsDown, with detection times, duration, and resolution details.

IsDown has tracked 90 ElevenLabs outages since May 2025. The outages from the last 12 months are summarized below with incident details, duration, and resolution information.

Minor March 8, 2026

March 2026: SIP Trunking Failures

Detected Mar 8, 2026 4:58 PM UTC · Resolved Mar 8, 2026 5:15 PM UTC · Duration 16 minutes

Starting at 15:50 UTC, a portion of SIP trunks imported into ElevenLabs experienced failures when SIP calls were initiated. Mitigation measures were implemented at 16:45 UTC to resolve the issue. The service is now under continued monitoring to ensure full resolution.

Minor March 6, 2026

March 2026: Voice Not Found Generation Failures

Detected Mar 6, 2026 10:28 PM UTC · Resolved Mar 7, 2026 12:16 AM UTC · Duration about 2 hours

ElevenLabs experienced voice generation failures affecting the "George" voice (ID: JBFqnCBsd6RMkjVDRZzb) starting at 19:30 UTC, with similar issues reported for legacy voices. As a temporary workaround, ElevenLabs directed users to an alternative voice ID (6WwXjDDEMyNmFG95zycZ) from the library and recommended switching from legacy voices to non-legacy alternatives. The incident was resolved after 1.8 hours, with the cause identified and a fix implemented.
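The workaround amounts to swapping the voice ID in text-to-speech requests. A minimal sketch against the public ElevenLabs REST endpoint is shown below; the `synthesize` helper and its fallback logic are illustrative, not part of the incident report or the official SDK:

```python
import json
import urllib.error
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1"

# Voice IDs taken from the incident report: the failing "George" voice and
# the suggested library alternative.
GEORGE_VOICE_ID = "JBFqnCBsd6RMkjVDRZzb"
FALLBACK_VOICE_ID = "6WwXjDDEMyNmFG95zycZ"


def tts_url(voice_id: str) -> str:
    """Build the text-to-speech endpoint URL for a given voice ID."""
    return f"{API_BASE}/text-to-speech/{voice_id}"


def synthesize(text: str, api_key: str, voice_id: str = GEORGE_VOICE_ID) -> bytes:
    """Request speech for `text`, retrying with the alternative voice ID
    if the primary voice fails (e.g. with a voice-not-found error)."""
    last_error = None
    for vid in (voice_id, FALLBACK_VOICE_ID):
        req = urllib.request.Request(
            tts_url(vid),
            data=json.dumps({"text": text}).encode(),
            headers={"xi-api-key": api_key, "Content-Type": "application/json"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()  # raw audio bytes
        except urllib.error.HTTPError as err:
            last_error = err  # try the fallback voice next
    raise last_error
```

During an incident like this one, pinning the fallback voice ID in configuration rather than code makes it easier to revert once the original voice recovers.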

Minor March 4, 2026

March 2026: Small number of calls being stuck in "in progress" state

Detected Mar 4, 2026 7:31 AM UTC · Resolved Mar 4, 2026 12:00 PM UTC · Duration about 4 hours

ElevenLabs experienced an issue with their Conversational v3 Model (Alpha) where some calls became stuck in an "in progress" state, making conversation recordings, transcripts, and analytics unavailable for those calls. The incident lasted 4.5 hours, with the root cause identified and a fix deployed. Affected calls were automatically transitioned to "failed" status within 48 hours.

Minor February 27, 2026

February 2026: Conversation latency

Detected Feb 27, 2026 1:50 PM UTC · Resolved Feb 27, 2026 3:08 PM UTC · Duration about 1 hour

ElevenLabs experienced increased conversation latency affecting Gemini models for 1.3 hours, with conversations seeing elevated response times but not failing, thanks to fallback mechanisms. The issue was identified as originating from their cloud provider, and ElevenLabs worked with the provider to resolve it. Error rates decreased, and the incident was closed after the cloud provider confirmed the fix.

Minor February 26, 2026

February 2026: Elevated 429 “system_busy” errors from STT

Detected Feb 26, 2026 11:25 PM UTC · Resolved Feb 26, 2026 11:32 PM UTC · Duration 7 minutes

ElevenLabs experienced elevated 429 "system_busy" errors on their Speech-to-Text API: users encountered increased rate limiting when attempting to use the STT service. The incident was resolved within 7 minutes.
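A 429 "system_busy" response is retryable, so clients can ride out brief spikes like this one with exponential backoff. A generic sketch follows; the names (`with_backoff`, `RateLimitedError`) are illustrative and not part of any ElevenLabs SDK:

```python
import random
import time


class RateLimitedError(Exception):
    """Stand-in for an HTTP 429 'system_busy' response from the API."""


def with_backoff(call, max_attempts=5, base_delay=0.5):
    """Invoke `call`; on RateLimitedError, sleep with exponential backoff
    plus a little jitter and retry, up to max_attempts total tries."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitedError:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

For a 7-minute incident, five attempts with a 0.5 s base delay would not have bridged the whole outage, but backoff keeps short rate-limit bursts from surfacing as user-visible failures.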

Minor February 25, 2026

February 2026: Website Slowness and API Request Latency

Detected Feb 25, 2026 2:36 PM UTC · Resolved Feb 25, 2026 7:47 PM UTC · Duration about 5 hours

ElevenLabs experienced website slowness and API request latency for 5.2 hours due to an underlying cloud provider issue. The incident primarily affected the Agents Platform with degraded conversation initiation performance and caused intermittent website slowness, while TTS API, STT API, and Dubbing API continued working normally. The engineering team implemented changes to affected services that resulted in recovery and improved performance, with continued monitoring to confirm full stability.

Minor February 23, 2026

February 2026: Agents - v3 conversational model

Detected Feb 23, 2026 10:53 AM UTC · Resolved Feb 23, 2026 1:01 PM UTC · Duration about 2 hours

ElevenLabs experienced an issue with their v3 conversational text-to-speech model on the Agents platform for 2.1 hours. The company investigated the problem and deployed a fix that restored normal error rates. The service is now operating normally with ongoing monitoring.