February 24, 2015 By Douglas Bonderud 3 min read

How much of the World Wide Web is readily available to search engines? Chief information security officers and IT security professionals know the amount of data hidden from plain sight is substantial — something akin to an iceberg carrying nine-tenths of its weight below the surface.

However, according to a new Global Commission on Internet Governance report, the hidden portion is far larger: standard search engines reach a mere 0.03 percent of the Web, leaving the vast Deep Web unindexed, while the even deeper Dark Web is deliberately hidden and inaccessible through standard browsers.

Such a massive piece of virtual real estate that is essentially unmonitored by Internet oversight agencies raises the question: Is there any hope for cybersecurity in the dark?

Deep Web Versus Dark Web

To understand the effect of Dark Web data, it is first important to separate the Deep Web from its shadowy counterpart. The report, “The Impact of the Dark Web on Internet Governance and Security,” defines the Deep Web as “a class of content on the Internet that, for various reasons, is not indexed by search engines.” For most major engines, this lack of indexing is tied to profit. While the information is readily available to those who look, so few are interested that actively crawling for this content provides little to no return on investment.

The Dark Web, meanwhile, is “a part of the Deep Web that has been intentionally hidden and is inaccessible through standard Web browsers.” Powered by networks such as TOR and I2P, this hidden Web lets users remain entirely anonymous. In some cases, that anonymity simply protects free speech or keeps government agencies’ top-secret data under wraps, but there is another side to this darker corner of the Web, one filled with cybercrime, the transfer of illegal goods and even terrorism. Are Internet governance and cybersecurity even possible in this environment?

Here Comes the Cavalry

According to a recent Naked Security article, it’s only a matter of time before law enforcement and other agencies gain some measure of control over the Dark Web. The article likens the hidden Web to the Wild West: though it was once larger than the settled territories of the United States, even that lawless land eventually found itself bound by law and order.

According to the Global Commission on Internet Governance report, the following six key monitoring areas are essential to the success of any governance effort:

  1. Mapping the Hidden Services Directory: Both TOR and I2P use a distributed hash table system to hide database information. Strategically deployed nodes could monitor and map this network.
  2. Customer Data Monitoring: Rather than monitoring consumers themselves, agencies would track destination requests to identify top-level rogue domains.
  3. Social Site Monitoring: This includes watching over popular sites such as Pastebin to find hidden services.
  4. Hidden Service Monitoring: Agencies must “snapshot” new services and sites as they appear for later analysis, since they disappear quickly.
  5. Semantic Analysis: A shared database of hidden site activities and history should be built.
  6. Marketplace Profiling: Sellers, buyers and intermediary agents committing illegal acts should be tracked.
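To make the social site monitoring step above concrete: such monitoring often starts with something as simple as scanning public paste and forum text for hidden-service addresses. The sketch below is illustrative only — the base32 address formats (16 characters for legacy v2 onions, 56 for v3) follow real Tor conventions, but the sample text and the function name are hypothetical.

```python
import re

# Tor hidden-service hostnames use base32 characters ([a-z2-7]):
# 16 of them for legacy v2 addresses, 56 for v3 addresses.
ONION_RE = re.compile(r"\b([a-z2-7]{56}|[a-z2-7]{16})\.onion\b")

def find_onion_addresses(text: str) -> list[str]:
    """Return every .onion hostname found in a block of text, in order."""
    return [m.group(0) for m in ONION_RE.finditer(text)]

# Hypothetical paste content; both addresses below are made up.
sample = """
New market mirror: abcdefghij234567.onion
Backup (v3): a2b3c4d5a2b3c4d5a2b3c4d5a2b3c4d5a2b3c4d5a2b3c4d5a2b3c4d5.onion
Contact admin@example.com for invites.
"""

print(find_onion_addresses(sample))
```

A real monitoring pipeline would feed matches like these into the “snapshot” and semantic analysis steps, since hidden services frequently vanish before investigators can revisit them.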

The bottom line for businesses? While the Dark Web does not pose any immediate or obvious threat, it exists nonetheless and operates as a catchall for both users seeking anonymity and those looking to operate outside the law. Monitoring this hidden corner of the Web is by no means impossible; it comes down to the choices nation-states and private companies are willing to make. How much light must be thrown at the Dark Web to make it safe while still respecting the right to Internet anonymity? Is a known darkness better than none at all?

