DigitalOcean's Llama 3.3-70B model experienced intermittent errors for 6.4 hours, affecting users making serverless inference requests via APIs and Agents. The issue was traced to a small number of problematic requests to the model and was resolved after a fix was deployed and the affected resources were monitored.
Issue resolved.
Cause: A small number of problematic requests to the Llama 3.3-70B model triggered the errors.
Impact: Intermittent errors when interacting with the model through serverless inference or through agents created using this model.
Contact support if issues persist.
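While an incident like this is open, callers can shield themselves from intermittent failures by retrying transient errors with exponential backoff. A minimal sketch of such a wrapper is below; the helper name and parameters are illustrative, not part of DigitalOcean's SDK, and the inference call you wrap it around would be your own client code:

```python
import time

def call_with_retries(request_fn, max_attempts=4, base_delay=0.5):
    """Call request_fn(); on failure, retry with exponential backoff.

    request_fn is any zero-argument callable that performs the inference
    request (e.g. a chat-completion call) and raises on a transient error.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts:
                # Out of retries: surface the error to the caller.
                raise
            # Back off 0.5s, 1s, 2s, ... before the next attempt.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Wrapping requests this way smooths over brief intermittent errors; persistent failures still surface after the final attempt, at which point contacting support is the right next step.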
Fix deployed. We are monitoring resources related to the Llama 3.3-70B model.
Users should no longer experience intermittent errors when making serverless inference requests via APIs and Agents. Awaiting confirmation before closure.
We are currently investigating an issue affecting the Llama 3.3-70B model.
Symptoms: Users may encounter intermittent errors when making serverless inference requests via APIs and Agents.
Current Status: Our engineering team is actively investigating the issue to determine the root cause.