August 2, 2016 By Douglas Bonderud 2 min read

Who hacks the hackers? As it turns out, just about anyone.

According to CSO Online, the official app for this year’s Black Hat conference contained a number of serious flaws in its social features — worrisome enough that organizers stripped out the affected functions before the app went live.

Thankfully, the nearly two-decade-old event, which bills itself as “the most technical and relevant global information security event in the world,” had the foresight to put the app up for testing before a full public rollout. Here’s a look at where this Black Hat app went off the rails.

Of Lies and Logins

After some hands-on time with the Black Hat app, researchers from Lookout had some serious concerns about its social functionality.

It all started with the sign-up process, which allowed users to build a profile, browse sessions and send messages to other attendees. The problem: With no verification for email addresses, users could either create entirely fake profiles or sign up using the name of someone else at the conference.
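To see what that missing control looks like in practice, here is a minimal sketch of email verification at sign-up. Every name in it is hypothetical — this is an illustration of the general technique, not code from the Black Hat app.

```python
# Hypothetical sketch: require email verification before an account goes live.
import secrets

# In-memory stand-ins for a real user store and mail service.
PENDING = {}      # verification token -> email address
VERIFIED = set()  # activated accounts


def register(email: str) -> str:
    """Create an unverified account and return the token that would be emailed."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = email
    # In a real system, this token is sent only to `email`,
    # so only the mailbox owner can complete registration.
    return token


def confirm(token: str) -> bool:
    """Activate the account only if the token matches a pending registration."""
    email = PENDING.pop(token, None)
    if email is None:
        return False
    VERIFIED.add(email)
    return True


if __name__ == "__main__":
    t = register("attendee@example.com")
    assert confirm("guessed-token") is False  # impersonator can't activate the account
    assert confirm(t) is True                 # the real mailbox owner can
```

With a check like this in place, signing up under someone else’s corporate address buys an attacker nothing unless they also control that inbox.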

Black Hat App: A Troll’s Playground

For those interested in simply trolling the event, it was possible to enter a nonsense email address and create a fake profile with the photo and corporate details of their choice. Since corporate email addresses often follow a set pattern, there was also potential for impersonation: People could sign up as an attendee who works for a competitor, use that person’s real email address and then send messages to other users or make offensive comments on posts in a conference-wide activity feed.

It gets worse. If users discovered someone else had registered with their name and email address, they could request a password reset. The problem: The reset didn’t end the sessions of other users already logged in to the account, meaning that as long as an impostor didn’t manually sign out, they retained access to all the features and data available to the legitimate account owner — without that owner’s knowledge.
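The fix is straightforward in principle: a password reset should invalidate every outstanding session for the account. A minimal sketch of that behavior, again with hypothetical names rather than anything from the app itself:

```python
# Hypothetical sketch: revoke all active sessions when a password is reset.
import secrets

SESSIONS = {}   # session token -> email address
PASSWORDS = {}  # email address -> password (plain string here for brevity)


def log_in(email: str) -> str:
    """Issue a session token for the account."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = email
    return token


def reset_password(email: str, new_password: str) -> None:
    """Change the password and drop every session tied to the account."""
    PASSWORDS[email] = new_password
    # The step the Black Hat app reportedly skipped: terminate existing
    # sessions so an impostor who is already logged in loses access immediately.
    for token, owner in list(SESSIONS.items()):
        if owner == email:
            del SESSIONS[token]


if __name__ == "__main__":
    impostor_session = log_in("victim@example.com")
    reset_password("victim@example.com", "new-secret")
    assert impostor_session not in SESSIONS  # impostor is logged out
```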

As a result of the disclosure, the vulnerable social features were pulled; better to release a truncated piece of software than a significant security risk at a conference designed to address exactly these kinds of issues.

Hats Off to Black Hat

Black Hat continues to do good work in the security community, especially when it comes to taking the pulse of emerging issues.

As noted by The Wall Street Journal, the conference received 50 proposals this year for talks related to the Internet of Things (IoT). While it only had space for 13, the trend is obvious: A bigger attack surface makes for a more appealing target.

Black Hat has been right before. In 1997, attacking Windows was a key conference focus; a decade later, cracking iPhones was the big draw. This year, there’s talk about proof-of-concept attacks on network-connected vehicles moving at significant speed, unlike last year’s 5 mph maximum.

Nothing Is Safe

But here’s the takeaway, and it’s inherent in the Black Hat ethos itself: Nothing is safe. No device, no app and no data is immune from potential misuse or compromise. Even an application specifically designed for a high-level security conference contained a number of glaring and potentially devastating flaws. Thankfully, organizers practiced what they preach and used critical feedback to pull the plug on the app’s risky social features.

Heading to Black Hat this year? Enjoy Vegas and learn more about advanced threats — but for the sake of corporate safety, maybe give the official app a pass.
