DigitalOcean's Serverless Inference service experienced elevated error rates affecting the open-source Llama 3.3 70B model. The incident lasted 5.6 hours, during which requests to this model frequently failed. The engineering team implemented a fix and monitored the results before declaring the issue resolved.
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Our Engineering team is investigating an issue with Serverless inference.
At this time, users may experience high error rates for open source models (Llama 3.3 70B).
We apologize for the inconvenience and will share an update once we have more information.