Red teaming and penetration testing (pen testing) are two essential cybersecurity practices that help organizations identify vulnerabilities and strengthen their security posture. Both techniques involve simulating attacks on an organization's IT infrastructure, but they differ in scope and objectives. Implementing a comprehensive cybersecurity strategy that includes red teaming and pen testing can significantly improve an organization's resilience against cyber threats.
We sat down with Andrew Chiles, Director of Technical Services and Research at SpecterOps, to discuss the importance of red teaming and pen testing, and how organizations can overcome some of the common challenges they face when incorporating these critical programs into their overall cyber strategies.

What is the importance of red teaming and pentesting?
Penetration testing and red teaming have much in common and ultimately support the same goal of strengthening an organization's security posture, but they are inherently different. In a nutshell, penetration testing assesses an organization's protective controls without much focus on the security operations teams that triage and respond to security alerts. In contrast, red teaming is intended to exercise the combination of people, process, and technology that represents the protective controls, detection capabilities, and incident response practices employed in defense of an organization.
More specifically, penetration testing identifies and demonstrates the exploitation of vulnerabilities or misconfigurations in the technology stack of targeted systems to allow the organization to fix them before real-world attackers can exploit them.
On the other hand, red teaming is often conducted in the form of an exercise that simulates a threat actor actively trying to attack the organization. While red teaming may involve identifying and exploiting technical vulnerabilities, as penetration testing does, it also targets people and processes. It allows the organization to assess its capacity to identify, contain, and remove threat actors from its environment.
An effective security program should employ both pentesting and red teaming methodologies.
What are some of the common security holes that organizations miss that red teaming and pentesting can help bring to light?
Excessive permissions for normal business users - In both penetration tests and red teams, we too often identify attack paths that allow normal users, like Jane from marketing, to gain control of business-critical systems. These paths often involve passwords for privileged accounts stored or used where they shouldn't be, or excessive access granted to low-privileged users, such as allowing Bob from accounting to reset the password of accounts with Domain Admin access.
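This attack-path idea can be sketched as a graph search: accounts and systems are nodes, and an edge means "this principal can take control of that one". The account names and control relationships below are purely hypothetical, invented to mirror the Jane/Bob examples above; real tooling builds such graphs from directory data.

```python
from collections import deque

# Hypothetical control graph: an edge (A -> B) means "A can take control of B".
# All names and relationships here are illustrative, not from any real environment.
EDGES = {
    "jane.marketing": ["WKSTN-042"],       # Jane has a session on her workstation
    "WKSTN-042":      ["helpdesk.svc"],    # a service account's credentials are cached there
    "helpdesk.svc":   ["bob.accounting"],  # the helpdesk account can reset Bob's password
    "bob.accounting": ["admin.da"],        # Bob can reset a Domain Admin's password
    "admin.da":       ["DC-01"],           # a Domain Admin controls the domain controller
}

def find_attack_path(start, target):
    """Breadth-first search for the shortest chain of control from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of control exists

if __name__ == "__main__":
    path = find_attack_path("jane.marketing", "DC-01")
    print(" -> ".join(path))
```

The point of modeling it this way is that each individual edge often looks harmless in isolation; it is the full chain, surfaced by the traversal, that turns "Jane from marketing" into domain compromise.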
Lack of network segmentation - Testing often reveals that compromise of a normal end-user workstation can lead to complete compromise of the most sensitive systems in an organization due to an absence of network controls or segmentation that would otherwise hinder lateral movement.
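A minimal sketch of what segmentation buys you: given an allow-list of subnet-to-subnet firewall rules, a compromised workstation should have no direct path to the most sensitive tier. The subnets and rules below are illustrative assumptions, not a recommended policy.

```python
import ipaddress

# Hypothetical segmented policy: each rule allows traffic from a source
# subnet to a destination subnet. Everything not listed is denied.
ALLOW_RULES = [
    ("10.10.0.0/16", "10.20.0.0/16"),  # workstations -> application servers
    ("10.20.0.0/16", "10.30.0.0/16"),  # application servers -> database tier
]

def can_reach(src_ip, dst_ip, rules):
    """Return True if any rule permits direct traffic from src_ip to dst_ip."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    return any(
        src in ipaddress.ip_network(s) and dst in ipaddress.ip_network(d)
        for s, d in rules
    )

workstation, app_server, db_server = "10.10.5.23", "10.20.1.1", "10.30.1.10"
print(can_reach(workstation, app_server, ALLOW_RULES))  # one permitted hop
print(can_reach(workstation, db_server, ALLOW_RULES))   # no direct path to the database tier
```

In a flat network every pair would be reachable, so one compromised workstation can talk to the database tier directly; with segmentation, lateral movement is forced through intermediate tiers where it can be restricted and detected.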
Brittle detection rules - In red team exercises, we often see that organizations rely on brittle detection rules that are too specific to the behavior of publicly available attack tooling, which allows attackers to evade existing detections with only slight modifications to well-known offensive tradecraft.
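The brittleness can be illustrated with two toy detection rules over a process command line. This is a simplified assumption-laden sketch, not a real EDR signature: the "brittle" rule keys on a public tool's default binary name, which trivial renaming evades, while the second rule keys on the behavior being invoked.

```python
import re

def brittle_rule(cmdline: str) -> bool:
    # Fires only on the default binary name of a well-known public
    # credential-dumping tool; renaming the binary evades it entirely.
    return "mimikatz.exe" in cmdline.lower()

def behavior_rule(cmdline: str) -> bool:
    # Fires on the credential-dumping action itself (here, the module
    # argument), regardless of what the binary is called. Simplified:
    # real behavioral detections key on API calls, not command strings.
    return bool(re.search(r"sekurlsa::logonpasswords", cmdline, re.IGNORECASE))

original = 'mimikatz.exe "sekurlsa::logonpasswords"'
renamed  = 'updater.exe "sekurlsa::logonpasswords"'   # same tradecraft, new name

print(brittle_rule(original), brittle_rule(renamed))    # True False
print(behavior_rule(original), behavior_rule(renamed))  # True True
```

The slight modification that defeats the first rule costs the attacker seconds; rules anchored to the underlying technique rather than one tool's default artifacts are far harder to sidestep.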
Over-reliance on a single tool or process - Another common issue is that organizations rely on a single security tool, such as EDR, to identify and contain a breach. If a red team can neutralize or evade that tool, the defensive team needs to improvise, and their effectiveness suffers.
In general, there's often a gap between what "is" and what "should be" – Frequently, these tests uncover vulnerabilities, misconfigurations, or gaps that an organization will say "shouldn't be the case" or "were already fixed" or "that system shouldn't be there anymore." The testing reveals the differences between the organization’s assumptions about the state of their environment and the reality.
What are some of the challenges with red teaming and pentesting?
A recurring theme is lack of focus, which can impact the outcome when the exercise is bound by time and resources. Organizations need to answer the "why" in terms of the reasoning and goals behind doing the testing in the first place.
Organizations often want to simulate the full attack chain and have few strategies to prioritize which tactics, techniques and procedures are most relevant to their environments. A "full" attack chain can include a long list of disparate steps, from gaining initial access through social engineering or cracking the perimeter to gaining privileged access to critical systems and sensitive information. Similarly, organizations often have a long list of objectives they want the red team to target, from cloud infrastructure to code repositories. Casting such a wide net can sometimes spread the red team too thin and impact the red team's ability to spend the appropriate time and resources to simulate a threat profile that meets the organization's expectations.
The root cause is that the red team is often not involved in planning the exercise and does not provide input on what is feasible, realistic, or valuable.

How can organizations overcome those challenges?
Get the red team involved in the conversation early on and let them help plan the approach, scope, and objectives of a given assessment.
For example, the red team might suggest exercising an "assumed breach" scenario, in which the red team does not spend valuable time manipulating users into clicking phishing emails and potentially losing the element of surprise.
Another example is setting objectives that will directly challenge the blue team, such as installing various types of persistence to hinder the organization's efforts to contain and remove the red team from the environment. A good red team should be more than willing to modify the tradecraft they employ to make their actions more or less “stealthy” depending on an organization’s goals.
Ultimately, red teaming is about challenging assumptions and applying an unbiased and adversarial mindset to assess the robustness and efficacy of a particular plan, program, etc. Organizations can apply this approach to any facet of their operations and not just technology.