How many times have you heard the popular information security joke: “It’s always DNS”? It means that whenever there’s a problem you can’t figure out, you dig until you reach the conclusion that the root cause is, once again, DNS. But DNS is also where many issues can be caught early, and it should be leveraged more than ever, especially by organizations on their zero trust journeys. DNS can be part of better threat detection. Let’s see how that works.
What Do DNS and Zero Trust Have to Do With Each Other?
Let’s unpack this for a minute. DNS is the internet’s phone book: it translates human-readable domain names into the IP addresses computers use to route traffic. More formally, “the Domain Name System is the hierarchical and decentralized naming system used to identify computers, services, and other resources reachable through the internet or other internet protocol networks.” As such, DNS is also one of the few application protocols allowed to cross organizational network perimeters.
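As a quick illustration of that translation step, here is a minimal Python sketch that asks the operating system’s stub resolver to look up a name. The hostname used is just an example:

```python
# A minimal sketch of the "phone book" step: asking DNS to translate a
# hostname into the IP addresses used for routing traffic.
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses the resolver reports for a hostname."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry's sockaddr tuple starts with the address string.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("localhost"))  # typically includes 127.0.0.1 and/or ::1
```

Every connection an application makes normally starts with a lookup like this, which is why DNS logs see so much of what happens on a network.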
Zero trust is a framework that assumes a complex network’s security is always at risk from external and internal threats. It helps organize and strategize a thorough approach to counter those threats.
Where do these two meet?
Zero trust is about doing continuous risk assessments and verifications, a principle that also requires examining traffic that comes into and out of organizational networks. You might agree that pretty much everything happening on connected devices leaves a trace somewhere in DNS traffic. That’s especially true because DNS can reach everywhere on the network, which is exactly where attackers want to go.
Unfortunately, many security professionals share a common misconception that DNS is just a domain blocklist, and they do not consider its power as a detection tool or a data source to analyze as part of zero trust architectures. But they should. DNS is where security teams can find forensic markers, automatic domain categorization data, suspicious behavior patterns, and potential or confirmed maliciousness.
DNS security fits zero trust perfectly for two reasons. First, DNS is fundamental to any network infrastructure, making it an excellent policy enforcement point for all zero trust architectures, no matter what other controls are in play. Since almost every network connection has a corresponding DNS request, we can leverage this advantage in risk assessments.
Second, any new or unknown domain that shows up in secure environments can trigger a validation process because DNS security, like zero trust, also assumes breach. This plays right into the state of continuous verification that zero trust aims to achieve.
Look Beyond the Basics
If it’s so great, why are so many organizations not using DNS to their advantage?
DNS traffic sent over UDP used to be plaintext and thus transparent to security admins. To keep DNS queries private, however, that data is now encrypted with DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). As a result, admins no longer see the same data from queries and have lost the visibility they used to have on the network. From a security perspective, admins can at least do some blocking in DoT’s case, but DoH blends in with the rest of HTTPS traffic, making it nearly impossible to block without wider implications. That said, DNS should not be abandoned as a place to detect malicious activity. Attackers are definitely using it to their advantage at every turn with DNS tunneling attacks that conceal covert communications and exfiltrated data.
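Tunnels encode data inside the query name itself, which tends to produce unusually long names and labels. A defender scanning query logs can flag those outliers. The sketch below is a rough heuristic, and the thresholds are illustrative assumptions rather than vetted detection rules:

```python
# Rough heuristic for possible DNS tunneling: data smuggled in subdomain
# labels inflates name and label length well past typical browsing traffic.
# Thresholds are illustrative; tune them against your own query logs.

def suspicious_tunnel_query(qname: str,
                            max_name_len: int = 100,
                            max_label_len: int = 40) -> bool:
    """Flag query names whose length suggests encoded payload data."""
    name = qname.rstrip(".")
    if len(name) > max_name_len:
        return True
    return any(len(label) > max_label_len for label in name.split("."))

queries = [
    "www.example.com",
    "aGVsbG8gd29ybGQgdGhpcyBpcyBlbmNvZGVkIGRhdGEgaW4gYSBsYWJlbA.t.example.com",
]
flags = [suspicious_tunnel_query(q) for q in queries]  # [False, True]
```

Real tunneling detection also weighs query rate, record types (e.g., TXT and NULL), and entropy, but even this simple length check surfaces obvious outliers.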
While visibility has changed, one can still detect connections that lack corresponding DNS requests and use that signal to spot unauthorized encrypted DNS services. No one is going to blindly block never-before-seen domains just because they are considered riskier. But evaluating them with more context can provide an additional factor within zero trust risk assessments.
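One way to sketch that detection: compare the destination IPs of outbound connections against the set of IPs seen in DNS answers on the network. A destination no answer ever pointed to may indicate a hard-coded IP or an unsanctioned DoH/DoT resolver. The log structures and addresses below are hypothetical:

```python
# Sketch: flag outbound connections to IPs that never appeared in an
# observed DNS answer -- a hint that a host may be bypassing the
# sanctioned resolver. Data shapes here are hypothetical examples.

def connections_without_dns(dns_answers: dict[str, set[str]],
                            destinations: list[str]) -> list[str]:
    """Return destination IPs with no corresponding DNS answer."""
    resolved_ips: set[str] = set().union(*dns_answers.values())
    return [ip for ip in destinations if ip not in resolved_ips]

observed_answers = {"example.com": {"93.184.216.34"}}
outbound = ["93.184.216.34", "8.8.4.4"]
unexplained = connections_without_dns(observed_answers, outbound)
# "8.8.4.4" had no matching DNS answer, so it deserves a closer look
```

In production this correlation would run over flow logs and resolver logs with time windows and allowlists, but the principle is the same: every legitimate connection should be explainable by a prior DNS answer.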
To begin, correctly determining how unique a domain is constitutes a critical step in assessing its risk. Only broad visibility into global DNS traffic can validate that analytic effectively. For example, the visibility IBM Security teams get from Quad9 can tell us whether a given domain is unique to one enterprise or unique globally.
Then, aside from blocking, how can we treat newly observed domains? The answer ties back again to continuous verification. There are various DNS analytics we can rely on to analyze new domains and their risk potential. Think of domain names generated by domain generation algorithms (DGAs), typosquatting, fast-flux networks, and DNS tunneling. Analytics that can provide that sort of context are a powerful way to reveal the true intentions of those who registered the domains and help security admins trigger the right mitigations in time.
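DGA-generated names, for example, tend to look random, and that randomness can be measured. The sketch below scores the leftmost label of a domain by its Shannon entropy; the threshold is an illustrative assumption, and real classifiers combine many more features:

```python
# Sketch of one DGA analytic: algorithmically generated labels tend to
# have a near-uniform character distribution, i.e. high Shannon entropy.
# The threshold is an illustrative assumption, not a production rule.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag domains whose leftmost label looks randomly generated."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(looks_generated("google.com"))            # False: low entropy
print(looks_generated("xjw3q8zk1vby7f2m.com"))  # True: near-random label
```

Entropy alone produces false positives (CDN hostnames, hashes in legitimate subdomains), which is exactly why such analytics serve as risk context within continuous verification rather than as automatic block decisions.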
DNS security helps support better cyber hygiene in your environment, and it enables continuous risk assessment and validation. Without DNS security, it becomes more difficult to gain early visibility into potential threats even as one works within zero trust principles. It also means that security admins would need to spend more effort on data collection and policy enforcement. Therefore, DNS security is not only essential but also a low-hanging fruit in any zero trust architecture.
Learn more about DNS analytics in this post from IBM Security.
IBM Security X-Force recommends that every enterprise start using DNS providers with built-in security. For example, Quad9 reduces the complexity of security operations at no cost.
Quad9 is also a trustworthy DNS provider that supports encryption, and malware and botnets avoid using Quad9 for good reason. Furthermore, through a partnership with IBM X-Force, Quad9 analyzes every newly observed domain to help its users stay ahead of threats.
Join X-Force Exchange threat intelligence sharing by visiting: exchange.xforce.ibmcloud.com
To read emerging threat intelligence blogs from X-Force, visit: securityintelligence.com/category/x-force
Chenta Lee is a contributor for SecurityIntelligence.