
Technolutions Outage History

Every past Technolutions outage tracked by IsDown, with detection times, durations, and resolution details.

There have been 30 Technolutions outages since December 2019. The 4 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Major May 4, 2026

May 2026: Issue affecting availability of certain databases

Detected May 4, 2026 1:14 AM EDT · Resolved May 4, 2026 2:56 AM EDT · Duration about 2 hours

Technolutions experienced a 1.7-hour outage affecting the availability of certain databases used by its Slate service. The interruption was caused by a loss of quorum in the failover cluster, linked to periodic network issues that followed recent database infrastructure and operating system upgrades. The team brought the databases back online to resolve the immediate issue, then hardened the cluster configuration to prevent future unintended failovers.
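For context on why a quorum loss takes databases offline: a failover cluster only keeps resources running while a strict majority of node votes are reachable, so flapping network links can repeatedly drop it below that threshold. The sketch below is a minimal illustration of the majority-vote rule; the node names and vote counts are hypothetical and do not reflect Technolutions' actual cluster topology.

```python
# Minimal sketch of the quorum rule that governs a failover cluster.
# Node names and vote counts are illustrative assumptions.

def has_quorum(node_votes: dict[str, int], reachable: set[str]) -> bool:
    """A cluster retains quorum while the reachable nodes hold a
    strict majority of the total configured votes."""
    total = sum(node_votes.values())
    live = sum(v for node, v in node_votes.items() if node in reachable)
    return live * 2 > total

votes = {"db1": 1, "db2": 1, "witness": 1}

# All nodes reachable: 3 of 3 votes, quorum held.
assert has_quorum(votes, {"db1", "db2", "witness"})

# Intermittent network issues isolate two nodes: 1 of 3 votes,
# quorum lost, and the cluster takes databases offline rather
# than risk a split-brain scenario.
assert not has_quorum(votes, {"db1"})
```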

Minor April 27, 2026

April 2026: Intermittent availability of certain databases on LUNA

Detected Apr 27, 2026 1:43 PM EDT · Resolved Apr 27, 2026 5:18 PM EDT · Duration about 4 hours

Databases on LUNA were intermittently available for 3.6 hours after a high-availability service initiated an unexpected failover and became stuck in a failover loop, potentially related to weekend database server upgrades. The Slate component was affected during this incident. The team resolved the issue by temporarily disabling automatic failover while investigating the root cause, performing manual failovers as a protective measure in the interim.
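Disabling automatic failover in this situation is a common circuit-breaker pattern: once failovers start looping, each automated switch makes things worse, so control is handed back to operators. Below is a minimal sketch of that idea, assuming a hypothetical HA controller; none of this is Technolutions' actual tooling.

```python
# Hedged sketch of the mitigation described above: detect a failover
# loop and fall back to manual failover. The guard itself is
# hypothetical, not part of any real HA product.

import time
from collections import deque

class FailoverGuard:
    """Suppress automatic failover once it starts looping."""

    def __init__(self, max_failovers: int = 3, window_s: float = 600.0):
        self.max_failovers = max_failovers
        self.window_s = window_s
        self.history: deque = deque()  # timestamps of recent failovers
        self.automatic = True          # False: manual failovers only

    def record_failover(self) -> None:
        now = time.monotonic()
        self.history.append(now)
        # Discard failovers that fell out of the sliding window.
        while self.history and now - self.history[0] > self.window_s:
            self.history.popleft()
        # Repeated failovers in a short window suggest a loop:
        # stop failing over automatically and page an operator.
        if self.automatic and len(self.history) >= self.max_failovers:
            self.automatic = False

guard = FailoverGuard()
for _ in range(3):                 # three failovers in quick succession
    guard.record_failover()
assert guard.automatic is False    # automatic failover now disabled
```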

Minor April 25, 2026

April 2026: Intermittent availability of certain databases

Detected Apr 25, 2026 9:13 PM EDT · Resolved Apr 26, 2026 1:47 AM EDT · Duration about 5 hours

Technolutions experienced intermittent availability of certain databases in the US region for 4.6 hours, primarily affecting databases on the LIMA cluster and the Slate component. A routine third-party database engine update introduced a behavioral change that exhausted the engine's worker threads, triggering an automated failover to secondary infrastructure, where some databases were slow to recover because of similar thread exhaustion. The incident was resolved by disabling the problematic functionality, as advised by the vendor, and by raising the worker thread ceiling to prevent recurrence.
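Worker thread exhaustion of this kind is easy to reproduce in miniature: a database engine runs queries on a fixed-size thread pool, and once every worker is blocked, new work either queues indefinitely or is rejected. The sketch below illustrates the failure mode and the ceiling increase used as a fix; the pool size and function names are illustrative assumptions, not details of the vendor's engine.

```python
# Illustrative sketch of a worker thread ceiling and its exhaustion.
# MAX_WORKERS and submit_query are hypothetical stand-ins.

from concurrent.futures import ThreadPoolExecutor
import threading

MAX_WORKERS = 8  # the "worker thread ceiling"; raising it was part of the fix
_slots = threading.BoundedSemaphore(MAX_WORKERS)
_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)

def submit_query(run_query) -> bool:
    """Run a query on the pool, rejecting work once all workers are busy.

    If a behavioral change makes queries stall, every worker ends up
    blocked, and new submissions fail here: the exhaustion described
    in the incident."""
    if not _slots.acquire(blocking=False):
        return False  # pool exhausted: no worker thread available
    def task():
        try:
            run_query()
        finally:
            _slots.release()  # free the slot when the query finishes
    _pool.submit(task)
    return True
```

Rejecting work at the ceiling, as sketched here, at least fails fast; the harder problem in the real incident was that the stalled workers also slowed recovery on the secondary infrastructure, which is why the vendor-advised fix combined disabling the new behavior with a higher ceiling.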