Outage in Fivetran

3rd Party: Okta connectors failing with HTTP 400 Error

Resolved (Minor)
March 18, 2024 - Started 8 months ago - Lasted 3 days


Outage Details

The issue has been identified and we are working to resolve it.
Components affected
Fivetran Okta
Latest Updates (sorted most recent first)
RESOLVED 8 months ago - at 03/21/2024 09:08AM

This incident has been resolved. We have observed that error rates have returned to normal levels and affected connectors are syncing successfully.

MONITORING 8 months ago - at 03/20/2024 09:13PM

We have stopped and restarted most of the connector syncs; however, some connectors have been running continuously since the workaround was applied.

If your connector is still running, please pause and unpause the connector for the full fix to take effect.

We are continuing to monitor to ensure the success of the fix for all affected connectors.

MONITORING 8 months ago - at 03/20/2024 05:20PM

We have identified that some connectors are still running syncs that started while the workaround was in place. These syncs are taking longer than average because the workaround reduces the use of pagination, and as a result these connectors have not yet picked up the full fix.

To ensure the full fix is applied, we will stop these connector syncs and start them again. Once restarted, they will run with the full fix and return to a normal syncing state.

MONITORING 8 months ago - at 03/20/2024 03:26PM

The full fix for this issue has been released.

The fix resolves the incorrect values in the API request that caused the connector failures. It also removes the interim workaround, which in some cases consumed excessive API calls and increased sync times due to limited pagination.

If your connector has been running since the workaround was applied, Pause and Resume the connector for the full fix to take effect.
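For connectors managed programmatically, the same pause-and-resume step can be scripted. The snippet below is a minimal sketch only, assuming Fivetran's REST API "Modify a Connector" endpoint (PATCH /v1/connectors/{connector_id}) with HTTP basic auth; the API key, secret, and connector ID are hypothetical placeholders, not values from this incident.

    import requests

    API_KEY = "..."       # hypothetical Fivetran API key
    API_SECRET = "..."    # hypothetical Fivetran API secret
    CONNECTOR_ID = "..."  # hypothetical connector ID

    def set_paused(paused: bool):
        """Pause or resume a connector by patching its 'paused' flag."""
        resp = requests.patch(
            f"https://api.fivetran.com/v1/connectors/{CONNECTOR_ID}",
            auth=(API_KEY, API_SECRET),  # Fivetran's API uses HTTP basic auth
            json={"paused": paused},
        )
        resp.raise_for_status()
        return resp.json()

    # Pause, then resume, so the next sync starts with the released fix.
    set_paused(True)
    set_paused(False)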

We are continuing to monitor to ensure the success of the fix.

MONITORING 8 months ago - at 03/20/2024 07:48AM

We are continuing to monitor the connectors to guarantee the effectiveness of the interim solution.

MONITORING 8 months ago - at 03/19/2024 06:03PM

We are monitoring connectors to ensure the success of the workaround fix.

IDENTIFIED 8 months ago - at 03/19/2024 06:03PM

We have identified that the issue is caused by the processing and encoding of values received from the Okta API. We are investigating further to identify a long-term solution.

We are monitoring the connectors following the deployment of the workaround fix. Connectors that were failing are now syncing successfully.

IDENTIFIED 8 months ago - at 03/19/2024 09:00AM

We have deployed a workaround to fix the issue.

The issue occurs when the Okta API returns an incorrect value for use in pagination. When the connector sends the next request, it uses this incorrect value, which results in an error. The workaround changes how data is extracted from the API to limit the need for pagination.
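To make the failure mode concrete: Okta's list endpoints use cursor-based pagination, where the next-page URL (carrying an 'after' cursor) is returned in the Link response header. The sketch below is an illustration of that pattern only, not Fivetran's implementation; the Okta domain and API token are hypothetical placeholders.

    import requests

    OKTA_DOMAIN = "https://example.okta.com"  # hypothetical Okta org
    API_TOKEN = "..."                         # hypothetical API token

    def fetch_all_users():
        """Follow Okta's cursor-based pagination via the Link header."""
        url = f"{OKTA_DOMAIN}/api/v1/users"
        headers = {"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"}
        params = {"limit": 200}
        users = []
        while url:
            resp = requests.get(url, headers=headers, params=params)
            resp.raise_for_status()
            users.extend(resp.json())
            # The next-page URL (including the 'after' cursor) comes back in the
            # Link header. If that value is malformed, reusing it verbatim in the
            # next request is the kind of step that can return an HTTP 400.
            url = resp.links.get("next", {}).get("url")
            params = None  # subsequent URLs already carry their own query string
        return users

Limiting pagination, as the workaround did, sidesteps the bad cursor value but, as later updates note, can increase API usage and sync times.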

We are continuing to work toward a long-term solution and are engaged with the Okta Support team as needed to ensure the expected values are returned from the Okta API.

We will continue to monitor the syncs to ensure the workaround solution resolves the error.

IDENTIFIED 8 months ago - at 03/18/2024 11:41PM

We are working on a workaround while continuing to work on a fix for this issue.

IDENTIFIED 8 months ago - at 03/18/2024 03:02PM

We are continuing to work on a fix for this issue.

IDENTIFIED 8 months ago - at 03/18/2024 12:18PM

We are continuing to investigate this issue.

As an optional interim solution to allow the connector to sync successfully, de-select the affected tables (USERS, GROUPS, and DEVICE) in the connector's Schema configuration, then re-select them once the issue is resolved.

Note that this optional step causes no data loss: once the tables are re-selected, they will resume syncing from where they left off.

IDENTIFIED 8 months ago - at 03/18/2024 11:50AM

We have identified an issue where Okta connectors are failing with an exception like the following:

We couldn't sync 2 out of the 17 primary endpoint(s). We were unable to sync data for the following tables: [GROUP_LOGO_LINK, USER_CREDENTIALS_EMAIL, GROUPS, GROUP_MEMBER, GROUP_ROLE, USERS, USER_ROLE]. Error details: {[id:23] URL:/users=Server is not responding with correct response after specific retries and giving status code [400] and status message.....

The connectors are failing with a 400 response code at the USERS and GROUPS endpoints. These endpoints use incremental sync and pass the 'lastUpdated' field via the 'search' query parameter. The connectors fail while decoding the API request. Our internal team is currently working with Okta Support to resolve this issue.
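For reference, an incremental sync of this kind sends a 'search' expression with the timestamp quoted inside it, and the whole expression must be URL-encoded exactly once; a mis-encoded value is one way to receive a 400 response. The sketch below only illustrates that request shape, not Fivetran's code; the Okta domain and token are hypothetical placeholders.

    import requests

    OKTA_DOMAIN = "https://example.okta.com"  # hypothetical Okta org
    API_TOKEN = "..."                         # hypothetical API token

    def incremental_users(last_updated_iso: str):
        """Fetch users changed since the last sync using Okta's 'search' filter."""
        # Okta expects the timestamp quoted inside the search expression, e.g.
        #   lastUpdated gt "2024-03-18T00:00:00.000Z"
        # Passing it via 'params' lets requests URL-encode it exactly once.
        search = f'lastUpdated gt "{last_updated_iso}"'
        resp = requests.get(
            f"{OKTA_DOMAIN}/api/v1/users",
            headers={"Authorization": f"SSWS {API_TOKEN}"},
            params={"search": search, "limit": 200},
        )
        resp.raise_for_status()
        return resp.json()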

IDENTIFIED 8 months ago - at 03/18/2024 11:43AM

The issue has been identified and we are working to resolve it.
