GitLab Outage History

Every past GitLab outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 357 GitLab outages since March 2022. The 120 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Minor May 15, 2026

May 2026: Degraded performance affecting Duo features

Detected May 15, 2026 11:39 AM EDT · Resolved May 15, 2026 2:16 PM EDT · Duration about 3 hours

GitLab experienced degraded performance affecting Duo Agent Platform workflows across GitLab.com and GitLab Dedicated for 2.6 hours, causing code review and custom flows to fail with "Session failed" errors. The team identified the root cause and deployed a revert, after which error rates returned to normal levels.

Minor May 13, 2026

May 2026: Slowed pipelines and CI/CD jobs

Detected May 13, 2026 12:26 PM EDT · Resolved May 13, 2026 1:35 PM EDT · Duration about 1 hour

GitLab experienced a high backlog of jobs that caused pipelines and CI/CD jobs to run slower than normal. The disruption lasted approximately 1.2 hours and was classified as a minor incident. Once the backlog cleared, pipeline and CI/CD job processing returned to normal speeds.
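During a backlog like this, it can be useful to tell a platform-wide queue apart from a failure in your own project. The GitLab REST API exposes pipeline status directly; here is a minimal sketch using the requests library, where the project ID and token are placeholders you would replace with your own:

```python
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"    # placeholder: your numeric project ID
TOKEN = "glpat-..."     # placeholder: a personal access token with read_api scope

def pending_pipelines():
    """List pipelines still waiting to run; a growing list suggests a backlog."""
    resp = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/pipelines",
        params={"status": "pending", "per_page": 100},
        headers={"PRIVATE-TOKEN": TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(f"{len(pending_pipelines())} pipeline(s) pending")
```

If pipelines sit in the pending state without failing, the slowdown is on the runner side, as in this incident, rather than in your configuration.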

Minor May 12, 2026

May 2026: Elevated 5xx errors on GitLab.com

Detected May 12, 2026 5:08 PM EDT · Resolved May 12, 2026 5:36 PM EDT · Duration 28 minutes

GitLab.com experienced elevated 5xx errors for 28 minutes, affecting the website and Git operations. The issue was traced to a frozen Gitaly node, which GitLab worked to recover; users saw intermittent failures across the affected services until it was corrected.
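Brief 5xx windows like this one are usually best absorbed client-side with retries. The sketch below shows exponential backoff for scripts calling the GitLab API during such an incident; it is a generic mitigation, not GitLab's fix:

```python
import time
import requests

def get_with_backoff(url, attempts=5, base_delay=1.0, **kwargs):
    """Retry transient 5xx responses with exponentially increasing delays."""
    for attempt in range(attempts):
        resp = requests.get(url, timeout=10, **kwargs)
        if resp.status_code < 500:
            return resp                      # success, or a client error not worth retrying
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, 8s, ...
    resp.raise_for_status()                  # give up: surface the last 5xx
    return resp

# Example: fetch a public project even while GitLab.com returns intermittent 5xx errors.
# resp = get_with_backoff("https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab")
```

In practice, adding random jitter to each delay helps avoid synchronized retry storms when many clients back off on the same schedule.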

Minor May 12, 2026

May 2026: Elevated error rates on GitLab.com

Detected May 12, 2026 1:40 PM EDT · Resolved May 12, 2026 5:12 PM EDT · Duration about 4 hours

GitLab.com experienced elevated error rates for 3.5 hours due to a major Redis Sidekiq cluster outage, causing intermittent failures and degraded performance across the website, API, Git operations, CI/CD pipelines, container registry, and authentication services. Users experienced delayed or non-running pipelines and authentication failures. The issue was resolved through targeted restarts and forced resets of the Redis cluster, with error rates returning to normal levels.

Minor May 11, 2026

May 2026: 500 errors accessing repositories

Detected May 11, 2026 4:44 AM EDT · Resolved May 11, 2026 7:20 AM EDT · Duration about 3 hours

GitLab experienced 500 errors when accessing repositories and Gitaly errors during push operations, affecting both the website and Git operations for 2.6 hours. The incident impacted users' ability to access and push code to repositories. The service was restored by restarting the Gitaly process.

Minor May 7, 2026

May 2026: Background processing delays

Detected May 7, 2026 11:48 AM EDT · Resolved May 7, 2026 1:57 PM EDT · Duration about 2 hours

GitLab experienced background job processing delays for 2.2 hours, affecting the Background Processing component. The engineering team identified the root cause and implemented mitigations to clear the delays.

Minor May 6, 2026

May 2026: Intermittent errors observed on GitLab Next

Detected May 6, 2026 3:05 PM EDT · Resolved May 6, 2026 4:50 PM EDT · Duration about 2 hours

A configuration update caused intermittent error messages on GitLab Next (Canary) for 1.7 hours. Users were advised to use GitLab Current as a workaround while the fix was implemented. The team identified the root cause and worked on resolving the configuration issue.

Minor April 30, 2026

April 2026: Delays on small GitLab Hosted Runners

Detected Apr 30, 2026 11:52 AM EDT · Resolved Apr 30, 2026 1:00 PM EDT · Duration about 1 hour

GitLab experienced resource issues with its small hosted runners (saas-linux-small-amd64) that caused delays for CI/CD jobs tagged for these runners. The incident affected GitLab SaaS Shared Runners, creating a backlog of pending jobs that users experienced as processing delays. The team implemented configuration changes to improve resource capacity and resolved the issue after 1.1 hours.
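Jobs tagged for a saturated runner fleet sit in the pending state until capacity recovers. Along the same lines as the pipeline check above, this sketch counts a project's pending jobs and prints their runner tags via the REST API; the project ID and token are placeholders:

```python
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"    # placeholder
TOKEN = "glpat-..."     # placeholder

resp = requests.get(
    f"{GITLAB_API}/projects/{PROJECT_ID}/jobs",
    params={"scope[]": "pending", "per_page": 100},
    headers={"PRIVATE-TOKEN": TOKEN},
    timeout=10,
)
resp.raise_for_status()
jobs = resp.json()

print(f"{len(jobs)} job(s) pending")
for job in jobs:
    # During this incident, the stuck jobs were those tagged saas-linux-small-amd64.
    print(job["id"], job["name"], job.get("tag_list"))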

Minor April 28, 2026

April 2026: Parsing error in the vulnerability reports

Detected Apr 28, 2026 7:06 AM EDT · Resolved Apr 28, 2026 9:18 AM EDT · Duration about 2 hours

GitLab experienced a parsing error in vulnerability reports that affected dependency scanning functionality on the website for 2.2 hours. The team identified the root cause and resolved the issue, which was classified as minor severity.

Minor April 23, 2026

April 2026: Degraded performance affecting Duo features

Detected Apr 23, 2026 4:14 AM EDT · Resolved Apr 23, 2026 8:51 AM EDT · Duration about 5 hours

GitLab experienced degraded performance in its AI-assisted services and AI gateway, causing users to encounter higher latency and errors when using Duo features. The incident lasted 4.6 hours and was caused by request saturation and potential segfaults affecting the AI services. GitLab resolved the issue by increasing maximum concurrency limits to reduce latency.
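A concurrency limit of the kind mentioned here caps how many requests a service handles at once, so saturation turns into queuing rather than errors or crashes. The following is a minimal illustrative sketch using asyncio's Semaphore; the cap value is arbitrary, and this is not GitLab's AI gateway code:

```python
import asyncio

MAX_CONCURRENCY = 8   # illustrative cap, analogous to a gateway's concurrency limit

async def handle_request(i: int, semaphore: asyncio.Semaphore) -> str:
    async with semaphore:            # excess requests wait here instead of saturating workers
        await asyncio.sleep(0.1)     # stand-in for model inference work
        return f"response {i}"

async def main() -> None:
    semaphore = asyncio.Semaphore(MAX_CONCURRENCY)
    # 100 concurrent requests arrive, but at most MAX_CONCURRENCY execute at a time.
    results = await asyncio.gather(*(handle_request(i, semaphore) for i in range(100)))
    print(len(results), "requests served")

asyncio.run(main())
```

Raising such a cap, as GitLab did here, admits more requests in parallel at the cost of higher per-request resource use, trading queue latency against saturation.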