
Critical Vulnerability Discovered by Orca Security in Microsoft Azure Puts Fortune 500 Companies at Risk

Orca Security researchers have discovered AutoWarp, a critical vulnerability in the Azure Automation service. This serious flaw could have allowed an attacker unauthorized access to other customers' accounts, and potentially full control over the resources and data belonging to those accounts, putting multiple Fortune 500 companies and billions of dollars at risk. Working closely with Microsoft Azure, Orca Security helped fix the issue and notify impacted customers.

We spoke with Yanir Tsarimi from the Orca Security research team about this discovery.

How did you discover this vulnerability?

I’m part of the Orca Security research team, and our mission is to do high-quality cloud research. I personally do vulnerability research on the biggest cloud vendors.

One day, I started looking at the Azure Automation service as a new research target. I understood that customers’ automation scripts run in a shared virtual machine, but under some sandbox isolation. While looking through the filesystem in that shared environment, however, I found a suspicious log file indicating that a web endpoint was hosted on a high, seemingly random local port.

That randomly chosen port raised my suspicions. A quick investigation revealed that the port was random because each customer on the machine was assigned a different local port, and those ports were also accessible over the network.

Realizing that this endpoint actually serves authentication tokens, I tried requesting tokens from the local ports on the machine, one by one, and was able to retrieve authentication tokens belonging to other customers.
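To illustrate the kind of probing described here, the sketch below walks a range of localhost ports and records which ones have an HTTP service listening. This is a minimal, generic illustration only: the port range, the `/oauth2/token` path, and the `probe_local_ports` helper are assumptions for the example, not details of the actual AutoWarp endpoint.

```python
import urllib.error
import urllib.request


def probe_local_ports(start, end, path="/oauth2/token", timeout=0.2):
    """Probe localhost ports [start, end) for an HTTP endpoint at `path`.

    Returns the list of ports where *something* answered the request.
    """
    responding = []
    for port in range(start, end):
        url = f"http://127.0.0.1:{port}{path}"
        try:
            urllib.request.urlopen(url, timeout=timeout)
            responding.append(port)
        except urllib.error.HTTPError:
            # The server answered, even if with an error status,
            # so a service is listening on this port.
            responding.append(port)
        except (urllib.error.URLError, OSError):
            # Connection refused or timed out: nothing listening here.
            pass
    return responding


# Usage: on a shared machine like the one described, any port that
# answers in this range would be worth inspecting manually.
hits = probe_local_ports(40000, 40010)
```

In the scenario Yanir describes, each responding port belonged to a different customer's sandbox, which is why iterating over them one by one surfaced other tenants' tokens.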

What makes it so dangerous?

The tokens obtained through this attack could be used to access any resource or data that the Automation service itself could. So if you were using the service to perform operations on your virtual machines, an attacker could steal that token and abuse that access. This effectively breaks tenant isolation, a core principle of cloud security.

How was working with Microsoft during the disclosure process?

The people from the Microsoft Security Response Center were very friendly, professional, and responsive.

Microsoft appreciated our input and they were very open to working with us. They seem to truly care about the security of their customers. I can definitely recommend working with them.

Who would or could be impacted? What can organizations do to mitigate any impact?

All customers who were using the Azure Automation service and had a managed identity enabled (which is the default) could have been impacted.

It’s important to mention that Microsoft has completed their investigation and stated that they did not detect any misuse of tokens. If you want to investigate further, I would start by reviewing what permissions your managed identity had, and then check the logs for any suspicious actions taken with those permissions before the date of the fix (December 10th, 2021).


