September 19, 2017 By Douglas Bonderud 2 min read

The web lacks a standardized way to report security vulnerabilities. Security researcher Ed Foudil recently submitted a draft to the Internet Engineering Task Force (IETF) suggesting a standardized security.txt file on every website that details the site’s security policy and provides contact information for reporting new vulnerabilities.

The Dangers of Delayed Response

Right now, many companies fail to respond in a timely manner when researchers report vulnerabilities. As noted by IT Wire, security firm Embedi discovered multiple flaws in D-Link routers and reported them all to the company. Despite months of back and forth, Embedi said only one of the vulnerabilities was patched.

The firm also contacted the Computer Emergency Response Team (CERT) and was told to use D-Link’s official reporting channels. In August, the security firm released exploit code for all three vulnerabilities after seeing no evidence of further patch progress.

This isn’t the route security professionals and security firms prefer. They would rather work with developers to create a patch before any exploit code is released into the wild, especially since cybercriminals start working on new attack variations within hours of a weakness being made public.

Lacking standardization, white-hat hackers are forced to wait on corporate responses. Often there is no viable way to contact an organization directly, leaving researchers with a difficult choice: keep quiet and hope no one else notices the issue, or speak up to compel change and risk exploitation by malicious actors.

A New Standard for Reporting Vulnerabilities

As noted by Bleeping Computer, Foudil got the idea after attending DEF CON this year. He modeled security.txt on robots.txt, the file web search spiders consult when they index sites. Standardizing robots.txt made that indexing far more efficient, and Foudil envisioned something similar for security.txt. In his version, however, the file sits at the top level of a company’s web server and is read by human beings rather than machines.
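As an illustration, under the draft the two files would live side by side at the web root (example.com is a placeholder):

    https://example.com/robots.txt     <- read by search engine crawlers
    https://example.com/security.txt   <- read by people reporting vulnerabilities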

The current draft supports four directives: contact, encryption, disclosure and acknowledgment. Together, these would give security researchers the information they need to make contact and start the process of remediating code flaws. So far, security.txt has been met with support from HackerOne, Bugcrowd and Google. If widely adopted, the file could also be expanded with directives such as rate-limit, platform, reward, donate and disallow, providing even more specific direction to security firms and researchers.
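For illustration, a minimal security.txt under the current draft might look like the following; all values are placeholders, and the field names or allowed values could still change before the draft is finalized:

    # security.txt -- placeholder values for illustration only
    Contact: security@example.com
    Encryption: https://example.com/pgp-key.txt
    Disclosure: Full
    Acknowledgment: https://example.com/hall-of-fame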

While it’s still in the draft stage, there’s a growing need for security.txt and similar standards. Haphazard communication, feedback and application patching won’t work in a tech landscape where cybercriminals can pounce on vulnerabilities within hours and companies are willing to discount potential defense disasters until it’s too late. By creating a simple, standardized format, corporations can provide a direct line for feedback, fraudsters are left out of the loop and security experts can do what they do best: discover and report critical code vulnerabilities.
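A standardized location and syntax would also make the file easy to discover programmatically. Here is a minimal sketch in Python, assuming the draft's top-level location and its simple "Name: value" line format (example.com and the function name are placeholders for illustration):

    import urllib.request

    def find_security_contacts(domain):
        """Fetch a site's security.txt from the draft's top-level
        location and return any Contact directive values found."""
        url = "https://{}/security.txt".format(domain)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except OSError:  # covers URLError, timeouts and TLS failures
            return []
        # Each directive is a "Name: value" line; "#" starts a comment.
        return [line.split(":", 1)[1].strip()
                for line in body.splitlines()
                if line.lower().startswith("contact:")]

    print(find_security_contacts("example.com"))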
