March 15, 2018 By Brad Harris 4 min read

This is the first installment in a two-part series about generative adversarial networks (GANs). For the full story, be sure to also read part two.

GANs are one of the latest ideas in artificial intelligence (AI) to advance the state of the art. But before we dive into this topic, let’s examine the meaning of the word “adversarial.” In its original AI usage, the word refers to an input example deliberately crafted to fool an evaluating neural network or other machine learning model. As machine learning is used more and more in security applications, these adversarial examples have become very important.

Consider document formats that mark where the meaningful content ends, either with terminating tags (as in HTML) or with a declared document length in the header (as in rich text format (.rtf) or .doc files). Because arbitrary bytes can be appended to the end of such files without changing how they are parsed, there is slack space in the file that could be used to craft adversarial examples.

Right now, the state of the art focuses on images, but the idea applies to other file formats as well. In theory, those formats may be even more vulnerable: an image can only be perturbed slightly before the change becomes noticeable to humans, while other formats appear identical even with extra content appended at the end. This gives rise to several different attacks (and defenses) against adversarial examples, which are described in more detail in a paper by researchers from the University of Virginia, “Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks.”

What Are Generative Adversarial Networks?

According to O’Reilly Media, generative adversarial networks are “neural networks that learn to create synthetic data similar to some known input data.” These networks use a slightly different definition of “adversarial” than the one described above. In this case, the term refers to two neural networks — a generator and a discriminator — competing against each other to succeed in a game. The object of the game is for the generator to fool the discriminator with examples that look similar to the training set. This idea was first proposed in a research paper, “Generative Adversarial Nets,” by Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville and Yoshua Bengio.
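To make the two roles concrete, here is a minimal sketch of a generator and discriminator pair in PyTorch. The layer sizes, the flattened 28x28 image shape and the latent dimension are assumptions chosen purely for illustration; they are not taken from the paper.

```python
# A minimal GAN skeleton (illustrative only; all sizes are assumptions).
import torch
import torch.nn as nn

LATENT_DIM = 64      # dimension of the random noise vector (assumed)
IMAGE_DIM = 28 * 28  # flattened image size (assumed)

# The generator turns random noise into a synthetic example.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMAGE_DIM),
    nn.Tanh(),  # outputs scaled to [-1, 1]
)

# The discriminator is a binary classifier: real (1) vs. generated (0).
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),  # probability that the input came from the real training set
)

# The generator's only input is noise drawn from a simple prior distribution.
noise = torch.randn(16, LATENT_DIM)
fake_images = generator(noise)
scores = discriminator(fake_images)  # one real-vs-fake score per generated example
```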

When the discriminator rejects an example produced by the generator, the generator learns a little more about what a good example looks like. Note that the generator must start from some probability distribution; this is often just the normal distribution, which makes a GAN practical and easy to initialize. As the generator learns more about the real examples, it can choose a better probability distribution. Typically, the discriminator acts as a binary classifier, meaning it simply says “yes” or “no” to an example. Having only two options for the discriminator to choose from simplifies the architecture and helps make GANs practical.

How does the generator get closer to the real examples? With each attempt, the discriminator sends a signal back to the generator to tell it how close it came to an actual example. Technically, this is a gradient, but you can think of it as an indicator of both proximity/quality and direction. In other words, the discriminator leaks information about just how close the generator was and how it should adjust to get closer. Ideally, the generator eventually produces examples so realistic that the discriminator can no longer reliably distinguish them from the real ones.
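In code, that signal is nothing more than backpropagation through the discriminator: the generator's loss is computed from the discriminator's verdict on the generated batch, so the gradients that reach the generator's weights tell it how to change its output to look more real. Below is a hedged sketch of a single generator update, reusing the hypothetical `generator`, `discriminator` and `LATENT_DIM` from the sketch above.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
g_optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

# One generator step: try to make the discriminator answer "real" (1)
# for a batch of generated examples.
noise = torch.randn(16, LATENT_DIM)
fake_images = generator(noise)
verdict = discriminator(fake_images)

# The generator is rewarded when the discriminator is fooled.
g_loss = bce(verdict, torch.ones_like(verdict))

g_optimizer.zero_grad()
g_loss.backward()   # the gradient flows back through the discriminator into the generator
g_optimizer.step()  # the generator nudges its weights toward more convincing examples
```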

Semisupervised Learning

The discriminator is given samples from both the training set and the generator. During training, it labels the training-set inputs as 1 (typically with a smoothing factor, so the positive labels are values slightly below 1) and labels the generator's outputs as 0. This is how the discriminator bootstraps itself: it assumes that any example coming from the generator is fake, which is how it creates its binary training set.
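For illustration, here is what that labeling might look like in code, under the same assumptions as the earlier sketches. The smoothed "real" target of 0.9 and the random stand-in batch are assumptions for the example, not values from the article.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(16, IMAGE_DIM)  # stand-in for a batch of real training examples
fake_images = generator(torch.randn(16, LATENT_DIM)).detach()  # generated batch, generator frozen

real_labels = torch.full((16, 1), 0.9)  # smoothed label: close to 1, not exactly 1
fake_labels = torch.zeros(16, 1)        # anything from the generator is labeled fake

# The discriminator learns to score real examples high and generated ones low.
d_loss = (bce(discriminator(real_images), real_labels)
          + bce(discriminator(fake_images), fake_labels))

d_optimizer.zero_grad()
d_loss.backward()
d_optimizer.step()
```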

In a practical sense, each half of the network trains at the same time, meaning that each half initializes with no knowledge at all. However, the discriminator has access to the knowledge buried in the training set while the generator can only adjust based on the initially flawed indicator returned by the discriminator. This works because, in the beginning, the generator creates what can be called noise — examples that are so fake that they don’t resemble the real examples at all. Therefore, the discriminator can safely say that any example it receives from the generator is fake.

This is technically called semisupervised learning. In semisupervised learning, the algorithm (discriminator) has one set of examples labeled as truth and one set that is not. In this case, the discriminator knows that the training set contains real examples, but it cannot know for sure that the initial examples sent by the generator are not very close to the real ones. It can only assume that the output is noise because the generator has very little knowledge of what the real examples should look like.

Given an extremely accurate probability distribution, it’s possible for the generator to quickly create convincingly realistic examples. However, this defeats the purpose of GANs because if one already knows the detailed probability distribution, there are much simpler and more direct methods available to derive realistic examples.

As time goes by, the discriminator learns from the training set and sends more and more meaningful signals back to the generator. As this occurs, the generator gets closer and closer to learning what the examples from the training set look like. Once again, the only inputs the generator has are an initial probability distribution (often the normal distribution) and the indicator it gets back from the discriminator. It never sees any real examples.

Stay Tuned to Learn More

This process may seem impractical in the real world, but there are many scenarios in which GANs can help solve very practical problems. In the second part of this series, we will explore how this emerging development in AI can be applied to cybersecurity to perform fundamental processes, such as password cracking, and complex tasks, such as spotting information hidden in generated images.
