We have observed a recurrence of Gemini-2.5-Flash failures and are continuing to monitor the issue.
LLM failure rates have returned to baseline, and conversation latency when using Gemini-2.5-Flash is back to expected levels. As a future mitigation, we plan to improve our fallback methods to better handle reduced cloud provider availability.
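The improved fallback mentioned above could follow a pattern like the minimal sketch below: try the primary model a few times with backoff, then switch to an alternative backend when the provider is degraded. This is illustrative only, not the production implementation; the backend callables and model names are hypothetical placeholders.

```python
# Illustrative sketch only: a generic model-fallback wrapper, not the
# provider's actual implementation. Backend names and callables are
# hypothetical placeholders.
import time
from typing import Callable, Sequence


def generate_with_fallback(
    prompt: str,
    backends: Sequence[tuple[str, Callable[[str], str]]],
    retries_per_backend: int = 2,
    backoff_seconds: float = 0.5,
) -> str:
    """Try each (name, generate) backend in order, retrying briefly,
    and fall back to the next backend if one keeps failing."""
    last_error: Exception | None = None
    for name, generate in backends:
        for attempt in range(retries_per_backend):
            try:
                return generate(prompt)
            except Exception as exc:  # e.g. timeout or provider 5xx
                last_error = exc
                # Exponential backoff before retrying this backend.
                time.sleep(backoff_seconds * (2 ** attempt))
    raise RuntimeError("All LLM backends failed") from last_error
```

In practice, backends would be ordered by preference (e.g. the primary model first, then a secondary model or region), so that reduced availability of one provider degrades latency gracefully instead of failing conversations outright.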
We have observed a significant decrease in errors and are continuing to monitor the availability of the resources serving Gemini-2.5-Flash.
We have identified that the issue is isolated to the Gemini-2.5-Flash model. We are working with our cloud provider to resolve it.
Currently, some conversations are affected by increased latency due to elevated LLM generation failures. We are investigating the root cause and working to mitigate this.