Outage in DigitalOcean

Serverless Inference - High error rates for open-source models (Qwen 3 32B)

Resolved · Minor
April 07, 2026 - Started about 13 hours ago - Lasted about 3 hours
Official incident page

Incident Report

Summary (AI generated)

DigitalOcean's Serverless Inference service experienced high error rates and elevated latency for the Qwen 3 32B model in the tor1 region for 3 hours starting at 10:46 UTC. The issue was caused by higher-than-expected request volume without sufficient resources to scale, resulting in capacity constraints and multiple workers in a pending state. The service was restored by expanding the node pool size to improve available capacity, along with implementing stability improvements to prevent similar issues.
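During the incident window, clients calling the affected model saw elevated error rates and latency. A standard client-side mitigation for transient server-side failures like this is to retry with exponential backoff and jitter. The sketch below is illustrative only: `TransientError` and the callable being retried are hypothetical stand-ins, not DigitalOcean's API.

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a transient failure (e.g. HTTP 5xx / timeout) from an inference endpoint."""


def call_with_backoff(call, max_retries=5, base_delay=0.5, max_delay=8.0):
    """Retry a callable on transient errors with capped exponential backoff and full jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except TransientError:
            if attempt == max_retries:
                raise  # out of retries; surface the error to the caller
            # Delay doubles each attempt, capped at max_delay; jitter spreads
            # retries out so many clients don't hammer the service in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Full jitter (sleeping a random amount up to the backoff cap) matters here: when many clients see errors simultaneously, synchronized retries can prolong exactly the kind of capacity crunch this incident describes.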


Latest Updates (most recent first)
RESOLVED about 10 hours ago - at 04/07/2026 03:50PM

Service has been fully restored, and the model is now operating normally. We have implemented improvements to enhance stability and reduce the likelihood of similar issues in the future.

IDENTIFIED about 13 hours ago - at 04/07/2026 12:55PM

We are currently investigating reports of elevated latency affecting requests to this model when using Serverless Inference and Agents.

Earlier observations indicated increased error rates for the open-source Qwen 3 32B model. The Ray dashboard also showed multiple workers in a pending state, suggesting capacity constraints.

Our analysis determined that the model was experiencing higher-than-expected request volume without sufficient resources to scale accordingly. To address this, the node pool size has been increased to improve available capacity. However, there are still insufficient nodes to fully support the desired number of model replicas.
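The capacity constraint described above comes down to simple arithmetic: if each model replica must be scheduled onto an available node, any replicas beyond the pool's capacity sit pending. The toy model below illustrates why expanding the node pool reduces, but may not fully eliminate, pending workers; the replica and node counts are illustrative assumptions, not DigitalOcean's actual figures.

```python
def pending_workers(desired_replicas: int, node_pool_size: int,
                    replicas_per_node: int = 1) -> int:
    """Number of replicas that cannot be scheduled on the current node pool."""
    schedulable = node_pool_size * replicas_per_node
    return max(0, desired_replicas - schedulable)


# Hypothetical figures: demand called for 8 replicas, pool held 5 nodes.
before = pending_workers(desired_replicas=8, node_pool_size=5)  # 3 pending
# After expanding the pool to 7 nodes, fewer replicas remain unschedulable,
# matching the update's note that capacity improved but was still short.
after = pending_workers(desired_replicas=8, node_pool_size=7)   # 1 pending
```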

Following the node pool expansion, a new pod-related error has been identified. Our Engineering team is actively working to resolve this issue and restore full service performance.

INVESTIGATING about 13 hours ago - at 04/07/2026 12:49PM

Serverless inference for alibaba-qwen3-32b (Qwen 3 32B) in tor1 is experiencing high error rates starting at 10:46 UTC.
