After investigation, we believe the root cause is an undersized LB in front of the impacted part of the infrastructure. During the time ranges in which users receive 5xx errors, we observed unusually high CPU usage that may have caused congestion, making it difficult to forward requests to our infrastructure.
The fix applied this morning at 8:45 UTC increased the capacity of the impacted LB, and we expect it to stabilize the situation.
We are still investigating with the LB team.
From what we have seen so far, the time ranges in which users receive 5xx errors coincide with higher load and a larger number of open connections on our gateways.
We deployed a fix to increase the capacity of our gateways at 8:45 UTC, and we are closely monitoring the situation.
We sincerely apologize for any inconvenience.
Erratum:
- So far, the first occurrences of this issue appeared on 2026-01-08. We are still gathering metrics to determine whether it also happened earlier.
- Impacted users might see all of their requests return 5xx errors during specific periods of time (from 10 to 30 minutes).
A subset of users on nl-ams (~35%) may experience sporadic 502 or 504 errors when accessing their Serverless Functions and Serverless Containers. These errors are intermittent, so we expect only a small number of requests to fail.
We observed the first occurrences on 2026-01-20.
We are investigating and apologize for any inconvenience.