Outage in Optimism

Goerli Full Sync Issues

Resolved - Minor
July 28, 2023 - Lasted about 2 hours

Latest Updates (sorted most recent first)
MONITORING - 07/28/2023 09:34 PM

We have identified the root cause of this issue.

The L1 Goerli testnet experienced a large reorg of approximately 18 blocks, and a significant number of L1 blocks were missed around the same time. Together, the reorg and the missed blocks caused a large (>256 block) reorg on OP Goerli. Currently, users operating full nodes on OP Goerli will experience a node halt: these nodes will not sync new blocks from the OP Goerli network and may return inconsistent data over RPC. **Users operating archival nodes on OP Goerli are not affected, and we do not expect this issue to occur in any capacity on OP Mainnet.**
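
A quick way to confirm whether a given full node is affected is to check whether its head is still advancing. The sketch below polls the standard eth_blockNumber JSON-RPC method twice and reports whether the head moved; the endpoint URL and the 30-second wait are assumptions about a typical local setup, not part of the incident report.

```python
# Minimal sketch: check whether a local OP Goerli full node's head is still
# advancing. RPC_URL is an assumption -- substitute your node's endpoint.
import json
import time
import urllib.request

RPC_URL = "http://localhost:8545"  # assumed local node endpoint

def rpc_call(method, params=None):
    """Send a single JSON-RPC request and return its 'result' field."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params or [],
    }).encode()
    req = urllib.request.Request(
        RPC_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

head_before = int(rpc_call("eth_blockNumber"), 16)
time.sleep(30)  # OP Goerli targets ~2s blocks, so a healthy head should move well within 30s
head_after = int(rpc_call("eth_blockNumber"), 16)

if head_after == head_before:
    print(f"Head stuck at block {head_before}: node appears halted.")
else:
    print(f"Head advanced from {head_before} to {head_after}: node is syncing.")
```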

To resolve this issue, **full node operators** need to perform one of these actions:

1. Roll the full node back to block number 12572500 using the debug_setHead RPC method (a sketch of this call follows the list).
2. Resync the full node from a backup taken prior to block 12572500, or resync from genesis.
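
For option 1, here is a minimal sketch of the debug_setHead call, assuming a geth-style node (such as op-geth) that exposes the debug namespace over HTTP at http://localhost:8545; the endpoint and the enabled debug namespace are assumptions about your setup, not something the incident report specifies.

```python
# Minimal sketch of option 1: roll the node head back to block 12572500
# (0xbfd754) with debug_setHead. Assumes the debug API namespace is enabled
# on the node's RPC endpoint; the URL below is an assumption.
import json
import urllib.request

RPC_URL = "http://localhost:8545"  # assumed local node endpoint

payload = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "debug_setHead",
    "params": [hex(12572500)],  # block number as a hex string: "0xbfd754"
}).encode()

req = urllib.request.Request(
    RPC_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```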

We apologize for the inconvenience and will share a full post-mortem when ready.

INVESTIGATING - 07/28/2023 09:00 PM

Our team is investigating reports of issues with full node sync. At this time, users syncing full nodes may experience degraded performance.

We apologize for the inconvenience and will share an update once we have more information.
