
Fivetran Outage History

Every past Fivetran outage tracked by IsDown, with detection times, duration, and resolution details.

There have been 1,115 Fivetran outages since December 2019. The 251 outages from the last 12 months are summarized below, with incident details, durations, and resolution information.

Minor March 19, 2026

March 2026: 3rd Party: Some Amazon S3 Connections Intermittently Failing with HTTP Connection Timeout Error

Detected Mar 19, 2026 7:30 PM EDT · Resolved Mar 19, 2026 9:22 PM EDT · Duration about 2 hours

Some Amazon S3 connections in Fivetran experienced intermittent sync failures with HTTP connection timeout errors due to AWS-side connectivity issues. The outage affected General Services for 1.9 hours on March 19-20, 2026. The issue was automatically resolved once AWS connectivity was restored, with all affected connectors returning to normal sync operations.

Minor March 18, 2026

March 2026: 3rd Party: Snapchat Ads connections are failing with error "500 Internal Server Error"

Detected Mar 18, 2026 1:50 AM EDT · Resolved Mar 18, 2026 2:52 AM EDT · Duration about 1 hour

Fivetran's Snapchat Ads connections experienced failures with "500 Internal Server Error" messages for approximately 1 hour on March 18, 2026, due to an intermittent issue with a third-party API service. The issue was automatically resolved on the third-party side without requiring any changes from Fivetran, and sync success rates returned to normal levels.

Minor March 15, 2026

March 2026: Fivetran: Multiple connection syncs and transformations are delayed

Detected Mar 15, 2026 3:35 AM EDT · Resolved Mar 15, 2026 8:14 AM EDT · Duration about 5 hours

Fivetran experienced a 4.7-hour incident in which connection syncs and transformations were delayed due to a recent infrastructure change. In AWS and Azure regions, new syncs were delayed while ongoing syncs continued normally; in GCP regions, ongoing syncs failed and new syncs were also delayed. The issue was resolved by implementing a fix to address the infrastructure problem, after which sync success rates returned to normal and affected connectors resumed syncing successfully.

Minor March 14, 2026

March 2026: 3rd Party: A few Qualtrics connections were failing with HTTP Error Code 429

Detected Mar 14, 2026 6:55 PM EDT · Resolved Mar 15, 2026 1:02 AM EDT · Duration about 6 hours

Fivetran's Qualtrics connections experienced sync failures with HTTP 429 (rate limiting) errors, caused by third-party rate limits on the Distribution History endpoint that began spiking around March 10th at 10 PM UTC. The issue was resolved by adding retries with exponential backoff between API calls to reduce server load; affected connectors returned to normal sync operations after the fix was deployed.
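The fix described above is the standard retry-with-exponential-backoff pattern for HTTP 429 responses. The sketch below illustrates the general idea only; the function and exception names are hypothetical and do not reflect Fivetran's actual connector code:

```python
import random
import time


class RateLimitError(Exception):
    """Hypothetical stand-in for an HTTP 429 response from the source API."""


def fetch_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate limiting, waiting exponentially longer each time.

    `call` is any zero-argument function that raises RateLimitError
    when the API returns 429. Delays grow as base_delay * 2^attempt,
    with random jitter added so concurrent clients don't retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the failure
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Spacing retries this way reduces load on the throttled endpoint, which is why it resolves 429 storms rather than amplifying them.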

Minor March 14, 2026

March 2026: 3rd Party: Some HubSpot connection syncs are failing with an HTTP 477 RefreshCredentials error.

Detected Mar 14, 2026 5:00 AM EDT · Resolved Mar 14, 2026 8:12 AM EDT · Duration about 3 hours

HubSpot connector syncs failed with HTTP 477 RefreshCredentials errors due to a recent migration of the Fivetran public app by HubSpot. The issue affected HubSpot connections for 3.2 hours on March 14, 2026, preventing successful data synchronization. Fivetran deployed a hotfix to resolve the authentication issue and confirmed syncs were completing successfully after implementation.

Minor March 12, 2026

March 2026: Some connections may experience a dashboard sync bar delay in showing data

Detected Mar 12, 2026 3:35 PM EDT · Resolved Mar 12, 2026 5:32 PM EDT · Duration about 2 hours

Fivetran experienced a 2-hour incident where the Connection Status page dashboard displayed sync bar delays and sync events failed to appear correctly in the event log. A recent code change allowed malformed messages to be accepted by the backend service, and those messages blocked processing, creating a backlog of properly formatted messages behind them. The incident was resolved by deploying a hotfix to handle the malformed messages, allowing the backlog to clear and restoring normal dashboard functionality.
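The failure mode above, where a few malformed messages stall an entire queue, is commonly avoided by validating each message and diverting bad ones to a dead-letter sink instead of letting them block processing. A minimal sketch of that pattern follows; all names and the two-field validation rule are hypothetical, since Fivetran's backend message format is not public:

```python
import json


def process_queue(raw_messages, handle, dead_letter):
    """Process messages in order, diverting malformed ones so the
    rest of the backlog keeps draining.

    `handle` consumes a parsed message dict; `dead_letter` receives
    raw messages that fail validation (both are hypothetical sinks).
    Returns the number of messages successfully handled.
    """
    processed = 0
    for raw in raw_messages:
        try:
            msg = json.loads(raw)
            # Hypothetical required fields for a sync event message.
            if "event" not in msg or "connector_id" not in msg:
                raise ValueError("missing required fields")
        except ValueError:  # json.JSONDecodeError is a ValueError subclass
            dead_letter(raw)  # divert instead of blocking the queue
            continue
        handle(msg)
        processed += 1
    return processed
```

With this structure a bad message costs one dead-letter write rather than a growing backlog of valid events behind it.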

Minor March 11, 2026

March 2026: Postgres: Expanding column in schema tab throws "Fetch source columns failed"

Detected Mar 11, 2026 5:25 PM EDT · Resolved Mar 11, 2026 11:31 PM EDT · Duration about 6 hours

Fivetran experienced a minor issue where expanding a column in the PostgreSQL schema tab returned a "Fetch source columns failed" error, preventing users from viewing schema column details. The issue was resolved after 6.1 hours, following a fix deployment and a monitoring phase.

Minor March 11, 2026

March 2026: MySQL connectors using Teleport are failing with the error: “Some tables failed to sync.”

Detected Mar 11, 2026 1:15 AM EDT · Resolved Mar 11, 2026 5:43 AM EDT · Duration about 4 hours

Fivetran's MySQL connectors using Teleport experienced sync failures with the error "Some tables failed to sync" for 4.5 hours, affecting multiple MySQL database services including Amazon Aurora, Azure Database, Google Cloud SQL, and various MySQL RDS configurations. The issue was caused by a recent code change to the connector, which engineering resolved by reverting the changes and deploying a hotfix to restore normal operation.

Minor March 10, 2026

March 2026: Some Criteo Connections are failing with 'Failed to Upsert Additional Attributes for Creative' Error

Detected Mar 10, 2026 2:25 PM EDT · Resolved Mar 10, 2026 6:01 PM EDT · Duration about 4 hours

Fivetran's Criteo connections experienced sync failures for 3.6 hours due to "unknown" attribute types being returned by the source API, causing the error "Failed to upsert additional attributes for creative." The issue was resolved by implementing a fix that skips the problematic attribute types, allowing connectors to sync successfully again.

Minor March 10, 2026

March 2026: Fivetran: Setup tests for the destinations are failing across regions.

Detected Mar 10, 2026 7:15 AM EDT · Resolved Mar 10, 2026 3:17 PM EDT · Duration about 8 hours

Fivetran experienced an 8-hour outage where destination setup tests were failing across all regions due to a faulty pull request that changed the token registration process for setup test jobs. Engineering identified the root cause, reverted the problematic changes, and deployed a hotfix to restore the original token registration model, after which destination setup tests resumed normal operation.