February 8, 2018 By Douglas Bonderud 2 min read

According to recent research, the security of the internet at large is shaky. Menlo Security reported that 42 percent of the top 100,000 websites as ranked by Alexa are potentially compromised and risky for users. To make matters worse, the measures typically used to weed out bad actors, such as site reputation scoring and category-based filtering, make little difference to overall security.

Neighborhood Watch

Digital citizens have established trusted neighborhoods — clusters of reputable sites that handle data responsibly, leverage cutting-edge internet security measures and stay up to date with threat developments. Typically, these sites have hard-won online reputations to back up these claims.

As noted by SC Magazine, however, cybercriminals are using public and corporate perceptions of trust to launch background-site, phishing and typosquatting attacks. As a result, more than 40 percent of trusted sites are considered risky because they run vulnerable software, have been used to distribute or launch malware, or have suffered a security breach in the last 12 months.

One particular area of concern is the number of background sites leveraged by trusted domains for content such as video or online advertisements. According to Infosecurity Magazine, the average website uses 25 background connections to deliver this content, but most enterprise administrators don’t have the monitoring solutions in place to determine whether these connections exhibit risky or criminal behavior.
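To make that visibility gap concrete, here is a minimal sketch (Python 3 standard library only; the target URL is a placeholder) that lists the third-party hosts a page pulls content from by parsing its statically declared script, iframe, image and link tags.

```python
# Minimal sketch: enumerate the external ("background") hosts a page references.
# Standard library only; the URL in the example below is illustrative.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class ExternalResourceParser(HTMLParser):
    """Collects the hosts behind script, iframe, img and link tags."""

    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.external_hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe", "img", "link"):
            return
        for name, value in attrs:
            # Only absolute URLs are considered; relative paths stay first-party.
            if name in ("src", "href") and value and value.startswith("http"):
                host = urlparse(value).netloc
                if host and host != self.page_host:
                    self.external_hosts.add(host)


def list_background_hosts(url):
    """Return the set of third-party hosts referenced by the page at `url`."""
    page_host = urlparse(url).netloc
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = ExternalResourceParser(page_host)
    parser.feed(html)
    return parser.external_hosts


if __name__ == "__main__":
    # Hypothetical target; any public page works the same way.
    for host in sorted(list_background_hosts("https://example.com")):
        print(host)
```

Note that this only surfaces resources declared in the HTML itself. Connections opened dynamically by JavaScript, such as ad or video players chaining out to further domains, are exactly what makes dedicated monitoring tools necessary.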

User trust is also exploited through typosquatting. According to the Menlo Security data, 19 percent of typosquatting attacks — in which fraudsters claim domain names that are almost identical to those of familiar sites but with small typos — occurred in trusted website categories. Phishers also used the cover of legitimate domains to obfuscate their intentions and convince users to click on malicious links or download infected attachments.
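One defensive way to reason about typosquatting is to measure how close a candidate domain sits to the domains you actually trust. The sketch below is illustrative only: the trusted list and lookalike domains are hypothetical, and real-world detection would also need to handle homoglyphs, added hyphens and swapped top-level domains.

```python
# Minimal sketch: flag candidate domains that sit within a small edit distance
# of a trusted domain -- the pattern typosquatters rely on. The trusted list
# and candidates below are made up for illustration.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


def flag_lookalikes(candidates, trusted, max_distance=2):
    """Return (candidate, trusted, distance) tuples that are close but not identical."""
    hits = []
    for candidate in candidates:
        for known in trusted:
            d = edit_distance(candidate, known)
            if 0 < d <= max_distance:
                hits.append((candidate, known, d))
    return hits


if __name__ == "__main__":
    trusted = ["paypal.com", "microsoft.com"]
    candidates = ["paypa1.com", "rnicrosoft.com", "paypal.com"]
    for candidate, known, d in flag_lookalikes(candidates, trusted):
        print(f"{candidate} looks like {known} (distance {d})")
```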

Filling Internet Security Gaps

According to Menlo Security CEO Amir Ben-Efraim, the company’s recent study “confirms what most CISOs already know: that a false sense of security is a dangerous thing when using the web.” But what’s driving this overconfidence in a technology landscape filled with emerging threats?

A lack of visibility is part of the problem. Most enterprises don’t have a clear picture of the risks posed by background sites and the content they deliver. Companies also grow complacent once they reach a position of consumer trust, especially if they’ve successfully avoided recent internet security threats. In other words, there’s a sense that current firewalls and antivirus tools are enough to keep sites safe.

But as the Menlo data demonstrated, the opposite is true: Trusted sites are among the riskiest. Companies can’t afford to ignore background content simply because it has never caused problems before; cybercriminals will exploit anything and everything connected to their intended targets.

Employee education is equally crucial. Attacks that exploit the human element, such as failing to notice a typosquatted domain or falling for a phishing email, make up the lion’s share of successful trust-hacking. Educating employees cuts these attacks off at the source and improves overall security hygiene.

Despite appearances, internet security for top sites is spotty at best. Organizations need to figure out how to track exactly what’s coming, going and happening on their networks.
