Resolved Partial degradation of Uploads and API
We experienced issues with our Uploads and API systems from 07:51 to 07:56 UTC. We have identified the source of the problems, implemented a fix, and are monitoring the situation.
Resolved Partial degradation of image processing and file delivery
We're experiencing issues with image processing and file delivery.
Resolved Video Processing System Maintenance
Our video processing vendor is performing database maintenance between 06:00 and 11:00 UTC and is working to minimize the impact on their service.
Resolved Website partial outage
Due to vendor issues, the website is partially down.
Resolved Higher latency and increased error rate on CDN and upload API
From 07:25 to 07:45 UTC we experienced higher latency and increased error rates on AWS S3, resulting in partial degradation of our content delivery service and Upload API. We are monitoring the situation.
Resolved REST API partial outage
Starting at 09:20 UTC we experienced a partial outage of our REST API. We have identified the source of the problems, deployed fixes, and are monitoring the situation.
Resolved Upload API downtime
There was 8 minutes of downtime starting at 18:21 UTC. What happened: we are working to mitigate Roscomnadzor's carpet bombing of our Russian clients and end users. As one of those measures we changed our load balancing settings, but missed one critical config setting, which caused the downtime during our regular code deployment. We have fixed the setting and are monitoring the situation.
Resolved Increased CDN error rates
Our CDN origin fleet is experiencing increased error rates.
Resolved Increased REST API error rates
We've encountered increased error rates on our REST API endpoints. This resulted in reduced reported uptime; although uptime did suffer, the real impact wasn't as bad as the reported numbers suggest.
What happened:
- From February 25, 22:00 UTC to February 26, 05:50 UTC, error rates on REST API endpoints were elevated.
Why it happened:
- One of the machines in the REST API fleet ran out of memory.
- Due to the OOM condition, the machine was unable to handle any incoming requests.
- A misconfigured health check prevented the load balancer from removing the failing machine from rotation.
- A portion of all requests, including those from Pingdom (which reports our uptime), was sent to that failing machine.
What we've done:
- Tracked down and terminated the failing machine.
- Fixed the health check configuration.
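As an illustration of the kind of fix described above (not part of the original report): a minimal sketch of tightening a load balancer health check so that a machine that stops serving requests is taken out of rotation quickly. It assumes an AWS Application Load Balancer target group managed via boto3; the incident does not say which load balancer Uploadcare uses, and the target group ARN, path, and thresholds below are hypothetical.

```python
# Illustrative sketch only: assumes an AWS ALB target group; the ARN, path,
# and thresholds are hypothetical, not Uploadcare's actual configuration.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Point the health check at an endpoint the application itself must serve,
# so a machine that cannot handle requests (e.g. after an OOM) fails the
# check and gets removed from rotation by the load balancer.
elbv2.modify_target_group(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/rest-api/0123456789abcdef"  # hypothetical ARN
    ),
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",       # must exercise the app, not just the port
    HealthCheckIntervalSeconds=10,
    HealthCheckTimeoutSeconds=5,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,       # mark unhealthy after 2 failed checks
    Matcher={"HttpCode": "200"},     # any other response counts as a failure
)
```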
Resolved Increased error rate for Upload API
We are currently investigating this issue.