Outage in Bentley Systems

CrowdStrike Global Outage Impacting Bentley Systems Network

Resolved - Major
July 19, 2024 - Lasted 1 day
Official incident page

Outage Details

Our team is currently investigating an issue with Bentley machines being down. Some users may have trouble accessing certain sites, systems may be restarting automatically, and some features may be unavailable. We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more. In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.
Latest Updates (sorted most recent first)
RESOLVED 3 months ago - at 07/20/2024 10:39AM

Our dedicated teams are actively working to restore all Bentley systems and services to full functionality following the effects of the global update issued by the third-party application CrowdStrike. We are committed to providing you with continuous updates. We value your patience and cooperation while we work to resolve this matter.

While our teams continue to work to restore all Bentley systems and services, we have the following update from Microsoft:
Awareness - Virtual Machines
We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises machines and various cloud platforms (Azure, AWS, and Google Cloud).
It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.
CrowdStrike has released a public statement (Windows Sensor Update - crowdstrike.com) addressing the matter, and it includes recommended steps for a workaround.

IDENTIFIED 3 months ago - at 07/20/2024 06:46AM

While our teams continue to work to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises machines and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement (Windows Sensor Update - crowdstrike.com) addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Update: We estimate the impact started as early as 19 July 2024 at 04:09 UTC, when this update began rolling out.

Update as of 02:30 UTC on 20 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure portal - attempt 'Restart' on affected VMs
Using the Azure CLI or Azure Cloud Shell (https://shell.azure.com) - see https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

Customers report that several reboots may be required, but the overall feedback is that reboots are an effective troubleshooting step at this stage.
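For teams handling many VMs, the repeated-restart step can be sketched as a small loop over `az vm restart`. This is a sketch, not part of Microsoft's guidance: the function name, retry count, and sleep interval are ours, and RGNAME/VMNAME are placeholders.

```shell
#!/usr/bin/env bash
# Sketch only: retry 'az vm restart' several times, since multiple reboots
# may be needed before an affected VM recovers. Assumes the Azure CLI ('az')
# is installed and you are logged in; RGNAME/VMNAME are placeholders.
restart_with_retries() {
  local rg="$1" vm="$2" max_attempts="${3:-10}"
  local i
  for ((i = 1; i <= max_attempts; i++)); do
    echo "Restart attempt $i of $max_attempts for $vm"
    # A zero exit status means the restart command was accepted,
    # not necessarily that the VM booted cleanly.
    if az vm restart -g "$rg" -n "$vm"; then
      echo "Restart succeeded on attempt $i"
      return 0
    fi
    sleep 5  # brief pause between attempts
  done
  echo "VM $vm still failing after $max_attempts restart attempts" >&2
  return 1
}

# Usage (placeholders): restart_with_retries RGNAME VMNAME 5
```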

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably one taken before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and re-attach the disk.

Open the Azure CLI and run the following steps:

1. Create the rescue VM:

// Creates a rescue VM of the same size as the original VM, in the same region. Asks for a username and password.

// Makes a copy of the OS disk of the problem VM

// Attaches the OS disk copy as a data disk to the rescue VM

az vm repair create -g {your-resource-group} -n {vm-name} --verbose

Example: az vm repair create -g RGNAME -n VMNAME --verbose

NOTE: For an encrypted VM, run the following command instead:

az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose



2. Then run:

// Runs the mitigation script on the rescue VM, which fixes the problem on the OS-disk copy attached as a data disk

az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

Example: az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose



3. Finally, run:

// Removes the fixed OS-disk copy from the rescue VM

// Stops the problem VM (it is not deallocated)

// Attaches the fixed OS disk to the original VM

// Starts the original VM

// Prompts to delete the repair VM

az vm repair restore -g {your-resource-group} -n {vm-name} --verbose

Example: az vm repair restore -g RGNAME -n BROKENVMNAME --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.
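Taken together, the three steps above can be wrapped in one guarded shell function. This is a sketch under the assumption that the Azure CLI and its `vm repair` extension are installed; the function name is ours, and the arguments remain placeholders.

```shell
#!/usr/bin/env bash
# Sketch: the three 'az vm repair' steps above as one function that stops
# at the first failure. Assumes the Azure CLI 'vm repair' extension.
repair_crowdstrike_vm() {
  local rg="$1" vm="$2"

  # 1. Create a rescue VM with a copy of the broken OS disk attached.
  az vm repair create -g "$rg" -n "$vm" --verbose || return 1

  # 2. Run Microsoft's mitigation script against the attached disk copy.
  az vm repair run -g "$rg" -n "$vm" \
    --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose || return 1

  # 3. Swap the fixed disk back onto the original VM and start it.
  az vm repair restore -g "$rg" -n "$vm" --verbose || return 1
}

# Usage (placeholders): repair_crowdstrike_vm RGNAME BROKENVMNAME
```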

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.
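The manual cleanup in Option 3 amounts to removing the faulty channel file from the attached disk. As a sketch, assuming the broken OS disk is already attached and visible at a known root path (the mount path and function name below are hypothetical):

```shell
#!/usr/bin/env bash
# Sketch: delete the faulty CrowdStrike channel file(s) from an OS disk that
# has been attached to a repair machine. The mount-root argument is
# hypothetical; adjust it to wherever the disk actually appears.
remove_crowdstrike_channel_file() {
  local mount_root="$1"  # e.g. /mnt/brokendisk (illustrative)
  local dir="$mount_root/Windows/System32/Drivers/CrowdStrike"
  # Remove only the C-00000291*.sys channel file(s); leave the rest of the
  # sensor installation untouched.
  rm -fv "$dir"/C-00000291*.sys
}

# Usage (hypothetical mount point): remove_crowdstrike_channel_file /mnt/brokendisk
```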

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 02:34 UTC on 20 July 2024

IDENTIFIED 3 months ago - at 07/20/2024 02:51AM

This update repeated the Microsoft guidance shown in full above; the message was last updated at 02:34 UTC on 20 July 2024.

IDENTIFIED 3 months ago - at 07/19/2024 11:08PM

An earlier posting of the same Microsoft guidance shown in full above (then headed "Update as of 10:30 UTC on 19 July 2024"); the message was last updated at 22:24 UTC on 19 July 2024.

IDENTIFIED 3 months ago - at 07/19/2024 09:16PM

An earlier posting of the same Microsoft guidance shown in full above; the message was last updated at 20:23 UTC on 19 July 2024.

IDENTIFIED 3 months ago - at 07/19/2024 08:13PM

An earlier, shorter version of the Microsoft guidance shown in full above, without the rescue-VM ('az vm repair') steps. It noted that as many as 15 reboots had been reported before recovery; the message was last updated at 19:10 UTC on 19 July 2024.

IDENTIFIED 3 months ago - at 07/19/2024 06:11PM

The earliest posting of the Microsoft guidance shown in full above, also without the rescue-VM ('az vm repair') steps. It initially dated the start of the impact to 18 July (later corrected to 19 July 2024 at 04:09 UTC); the message was last updated at 17:43 UTC on 19 July 2024.

IDENTIFIED 4 months ago - at 07/19/2024 11:25AM

While our teams continue to work to restore all Bentley systems and services, CrowdStrike has released the following statement:

CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted. This is not a security incident or cyberattack.

The issue has been identified, isolated and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website.

We further recommend organizations ensure they’re communicating with CrowdStrike representatives through official channels.

Our team is fully mobilized to ensure the security and stability of CrowdStrike customers.

IDENTIFIED 4 months ago - at 07/19/2024 08:58AM

We are continuing to work on a fix for this issue.

IDENTIFIED 4 months ago - at 07/19/2024 08:44AM

CrowdStrike Engineering has identified a content deployment related to this issue and reverted those changes.

INVESTIGATING 4 months ago - at 07/19/2024 08:20AM

We are continuing to investigate this issue.

INVESTIGATING 4 months ago - at 07/19/2024 08:14AM

We are continuing to investigate this issue.

INVESTIGATING 4 months ago - at 07/19/2024 08:02AM

Our team is currently investigating an issue with Bentley machines being down. Some users may have trouble accessing certain sites, systems may be restarting automatically, and some features may be unavailable.
We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

INVESTIGATING 4 months ago - at 07/19/2024 07:53AM

This initial update carried the same message as the 08:02AM update above.
