Lessons Learned from the Rackspace Attack

Over the last month, Rackspace has been performing incident response and investigation into the December 2, 2022 attack that took email services for large portions of their customer base offline. 

Rackspace recently released details from that ongoing investigation, giving us insight into the cause and impact of the attack. As reported by DarkReading.com, the attack appears to have exploited a known Exchange Server vulnerability, CVE-2022-41080, to gain unauthorized privileges via a ProxyNotShell-style attack; a patch for this vulnerability had been available for about a month before the attack took place.

Why were the systems not patched, and what can we learn from the situation?

Problematic Patching 

Rackspace was aware of the vulnerability and of the existence of a patch for it but chose not to apply the patch to their hosted Exchange platform.

The primary reason, per their statements, was the potential for authentication failures in patched versions of the Exchange platform, errors which could have impacted a large number of their end customers.

The reality of the situation is that many patches do indeed create unforeseen issues of their own when they are applied. For example, a Windows Server patch distributed at about the same time as the patch for CVE-2022-41080 did indeed break Kerberos, the primary authentication protocol used in Active Directory. Rackspace erred on the side of caution with this patch, and as a result their hosted Exchange systems remained vulnerable to the ProxyNotShell attack that the organization suffered.

Why Not Patch Anyway? 

Interestingly, the decision not to patch against this vulnerability was quite valid. The patch itself risked a service outage, and threat actors had already been experimenting with ways to bypass it, so Rackspace was stuck in a “Morton’s Fork”: a situation where either choice results in an unwanted outcome. The potential for attack was judged to be lower than the known risk of an outage, and so the patch was delayed.

Multiple Defensive Paths are Required 

While applying this patch would have created the potential for an outage, that doesn’t mean an organization should simply stop there.

A “good, better, best” approach can assist in dealing with difficult patches by leveraging multiple defensive layers to minimize impact. Of course, the best option is to patch whenever possible, but as we have seen, that’s not always a viable path.

When vulnerabilities with difficult patching arise, bringing other defensive tools to bear is the next step. Behavior-based anti-malware detection (while controversial on Exchange servers) is often a good line of defense against these more complex attacks.

Workarounds and fix methodologies also come into play here, such as limiting Remote PowerShell access and disabling specific IIS functions to make a ProxyNotShell attempt much more difficult to pull off.
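As an illustration, Microsoft’s published ProxyNotShell mitigation took the form of an IIS URL Rewrite rule blocking request URIs that combine the autodiscover endpoint with a PowerShell target. A minimal Python sketch of that matching logic (the pattern here is illustrative, not the exact production rule) might look like:

```python
import re

# Illustrative pattern, similar in spirit to Microsoft's published
# URL Rewrite mitigation for ProxyNotShell: block request URIs that
# combine the autodiscover.json endpoint, an "@", and "powershell".
BLOCK_PATTERN = re.compile(r".*autodiscover\.json.*@.*powershell.*", re.IGNORECASE)

def should_block(request_uri: str) -> bool:
    """Return True if the (decoded) request URI matches the pattern."""
    return BLOCK_PATTERN.match(request_uri) is not None
```

In the real mitigation, the equivalent pattern is applied by IIS against the decoded request URI before the request reaches Exchange.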

Finally, if those two paths are not workable, then detection and containment are the “good” option. Setting up SIEM correlation rules and strict segmentation reduces both the time before an attacker is discovered and how far they can propagate through connected systems. Notably, all of these methodologies share one common component: simulation testing.
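For instance, a simple correlation rule could flag source addresses that repeatedly probe with ProxyNotShell-style URIs. A sketch in Python, assuming events arrive as hypothetical (source IP, request URI) tuples, might be:

```python
from collections import Counter

# Markers whose co-occurrence in a request URI suggests a
# ProxyNotShell-style probe (illustrative, not a full signature).
SUSPICIOUS_MARKERS = ("autodiscover.json", "@", "powershell")

def correlate(events, threshold=3):
    """Given (source_ip, request_uri) events, return the set of
    source IPs with at least `threshold` suspicious requests."""
    hits = Counter()
    for src_ip, uri in events:
        lowered = uri.lower()
        if all(marker in lowered for marker in SUSPICIOUS_MARKERS):
            hits[src_ip] += 1
    return {ip for ip, count in hits.items() if count >= threshold}
```

A production SIEM rule would work the same way in principle: match an indicator, aggregate by source, and alert past a threshold.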

By leveraging breach and attack simulation systems, each remediation technique can be tested to ensure it is having the desired effect. Patching and workarounds can be challenged with simulated attack techniques to confirm that the attack no longer works. Running these simulations also lets the organization verify that incident response is triggered as expected and that propagation is stopped by segmentation.
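The idea can be sketched as a tiny harness: replay harmless probe requests against a control and check that each is handled as expected. The control and probe URIs below are hypothetical stand-ins, not a real product’s API:

```python
# Minimal breach-and-attack-simulation harness (illustrative).
# `control` is any function that returns True when it would block
# the given request URI; probes are harmless stand-ins for attack
# traffic, replayed to confirm the control behaves as expected.

def run_simulation(control, probes):
    """Replay each probe against the control and record whether
    it was blocked."""
    return [(probe, control(probe)) for probe in probes]

def example_control(uri: str) -> bool:
    # Stand-in for a real control (e.g. a URL Rewrite rule).
    lowered = uri.lower()
    return "autodiscover.json" in lowered and "powershell" in lowered

probes = [
    "/autodiscover/autodiscover.json?x@test/powershell/",  # should be blocked
    "/owa/auth/logon.aspx",                                # should pass through
]
results = run_simulation(example_control, probes)
```

Comparing the recorded outcomes against expectations is what turns a one-time fix into an ongoing, verifiable control.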

So no matter which of the “good, better, best” choices is put into practice, testing becomes a key part of the overall strategy.

Summing It Up 

While we still don’t have all the details around this incident – and are likely to never get all of them – what we do know shows a path to effective security. 

First, apply all the patches you can. Not every patch can be applied immediately, but those which can be applied quickly and safely should be.

Second, if a patch cannot be applied then the organization must find other combinations of workarounds and layers of controls which will defend the systems in question until such a time as the patch (or an updated version of it) can be safely applied.

Finally, no matter which path is best for a particular patch, regular and consistently updated security validation and testing must be implemented and performed. In combination, these methods help ensure that threats are effectively blocked and, where that isn’t possible, that attackers are restricted and damage is minimized.

