We're currently investigating a connectivity issue affecting a subset of traffic in the AWS us-east region. Affected requests may experience elevated latency or intermittent delays. Our initial analysis indicates network-level packet loss on the path between our load balancer infrastructure and the public internet. At this stage, we do not believe the issue is related to application code, recent changes, or backend service saturation. We're continuing to investigate at the infrastructure and network layer, including host-level networking, NIC behavior, and potential AWS network or availability-zone conditions, and we're evaluating mitigation options to reduce impact while we identify the root cause. We'll share updates as soon as we have more information.
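For context on the kind of measurement behind this diagnosis, below is a minimal sketch of how packet loss on a network path can be sampled from a single host. It is illustrative only, not our internal tooling: it assumes a Linux or macOS host with the system `ping` utility on the PATH, and `lb.example.com` is a placeholder for the endpoint under test.

```python
import re
import subprocess


def packet_loss_pct(host: str, count: int = 20) -> float:
    """Sample percent packet loss to `host` using the system ping tool."""
    # -c sets the number of probes on Linux/macOS ping.
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True,
        text=True,
        check=False,
    )
    # ping summarizes loss as e.g. "20 packets transmitted, ... 5% packet loss".
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    # Treat an unreachable host (no summary line) as total loss.
    return float(match.group(1)) if match else 100.0


if __name__ == "__main__":
    # Placeholder hostname; substitute the load balancer endpoint being tested.
    print(f"loss: {packet_loss_pct('lb.example.com'):.1f}%")
```

Sampling like this from multiple vantage points (inside the VPC, from another AZ, from the public internet) is one common way to narrow loss to a specific network segment.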
A component of our ingestion infrastructure is degraded, and ingestion is delayed in the affected region.
We are currently investigating an issue affecting our job queue. Some jobs may be delayed or may not process as expected. Our team is actively working to identify the root cause. We apologize for the inconvenience.
In response to the security incident affecting Vercel, we have initiated a rotation of the credentials stored in our environment variables. Services that depend on these credentials may be temporarily disrupted while the rotation completes. We do not yet have an ETA for resolution; updates to follow.
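As background on the rotation mechanics, the sketch below shows one way a credential stored as a Vercel environment variable could be rotated by driving the Vercel CLI from Python. This is an illustration under assumptions, not our runbook: `DATABASE_URL` is a hypothetical variable name, the secret generator is a placeholder, and exact CLI flags may vary by Vercel CLI version.

```python
import secrets
import subprocess


def rotate_vercel_env(name: str, environment: str = "production") -> str:
    """Replace one Vercel environment variable with a freshly generated secret.

    Assumes the Vercel CLI is installed and authenticated. The `env rm` and
    `env add` subcommands exist in current CLI releases, but flags such as
    --yes may differ by version.
    """
    new_value = secrets.token_urlsafe(32)  # placeholder secret generator
    # Remove the old value; --yes skips the interactive confirmation prompt.
    subprocess.run(
        ["vercel", "env", "rm", name, environment, "--yes"],
        check=True,
    )
    # `vercel env add` reads the new value from stdin.
    subprocess.run(
        ["vercel", "env", "add", name, environment],
        input=new_value,
        text=True,
        check=True,
    )
    return new_value


if __name__ == "__main__":
    # Hypothetical variable name; a redeploy is required before the new
    # value is picked up by running deployments.
    rotate_vercel_env("DATABASE_URL")
```

Note that rotating an environment variable on Vercel does not update already-running deployments; dependent services must be redeployed, which is one source of the temporary disruption mentioned above.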