Generative Adversarial Network

GAN
Deep Learning

Two networks trained against each other — a generator that creates fake samples and a discriminator that tries to tell them from real ones.


In one line

Train a generator and a discriminator in a minimax game — the generator learns to fool the discriminator, the discriminator learns to catch the generator.

What it actually means

The generator maps noise z to a sample G(z). The discriminator D takes a sample and outputs the probability it's real. You train D to maximize log D(real) + log(1 - D(G(z))) and G to minimize the second term, i.e. to fool D; in practice G is usually trained to minimize -log D(G(z)) instead (the non-saturating loss), which gives stronger gradients early on, when D easily rejects the fakes. At equilibrium the generator produces samples indistinguishable from the training distribution. In practice training is finicky: mode collapse (the generator covers only a few modes of the data and keeps emitting near-identical samples), vanishing discriminator gradients, and sensitivity to architecture and learning rate are all common.
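The alternating update can be sketched end to end on a toy 1-D problem. This is a hand-rolled illustration with manual gradients, not how you'd build a real GAN (you'd use an autodiff framework and neural networks); the data distribution, the linear generator/discriminator, and the learning rates are all made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Toy 1-D GAN. Real data: N(2.0, 0.5). Generator G(z) = a*z + b.
# Discriminator D(x) = sigmoid(w*x + c). All parameters are scalars.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr_d, lr_g = 0.1, 0.02
batch = 128

for step in range(3000):
    x_real = rng.normal(2.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Discriminator step: ASCEND on log D(real) + log(1 - D(fake)).
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((1 - p_real) * x_real) - np.mean(p_fake * x_fake)
    grad_c = np.mean(1 - p_real) - np.mean(p_fake)
    w += lr_d * grad_w
    c += lr_d * grad_c

    # Generator step: DESCEND on the non-saturating loss -log D(G(z)).
    p_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - p_fake) * w * z)
    grad_b = np.mean(-(1 - p_fake) * w)
    a -= lr_g * grad_a
    b -= lr_g * grad_b

# The generator's output mean (= b, since E[z] = 0) drifts toward the
# real mean of 2.0 as the two players push against each other.
print(f"generator offset b = {b:.2f}")
```

Note the two-step rhythm: D is updated on a fresh batch, then G is updated against the current D. Once the fake distribution overlaps the real one, D can no longer separate them, its weights shrink, and both gradients fade.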

Why it matters

From 2014 to about 2021, GANs were the dominant approach for photorealistic image generation (StyleGAN, BigGAN, Pix2Pix, CycleGAN). Diffusion models have largely taken over for general image synthesis, but GANs are still the right call when you need fast single-step generation — you run the generator once, no iterative denoising. They also remain strong on face editing, super-resolution, and domain translation.

Example

D_loss = -log D(x_real) - log(1 - D(G(z)))   # push D(real) toward 1, D(fake) toward 0
G_loss = -log D(G(z))                        # non-saturating: push D(fake) toward 1
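A minimal numeric sketch of these two losses, assuming a discriminator that outputs P(real); the score values below are made up for illustration:

```python
import numpy as np

# Hypothetical discriminator outputs D(x) = P(x is real) on a small batch.
d_real = np.array([0.9, 0.8, 0.95])   # scores on real samples
d_fake = np.array([0.1, 0.3, 0.2])    # scores on generated samples G(z)

# Discriminator loss: low when D(x_real) -> 1 and D(G(z)) -> 0.
D_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# Non-saturating generator loss: low when D(G(z)) -> 1, i.e. D is fooled.
G_loss = -np.mean(np.log(d_fake))

print(f"D_loss = {D_loss:.3f}, G_loss = {G_loss:.3f}")

# If the generator improves and D's fake scores rise, G_loss drops:
assert -np.mean(np.log([0.6, 0.7, 0.65])) < G_loss
```

The tension is visible in the shared term: raising D(G(z)) lowers G_loss but raises D_loss, which is exactly the minimax game.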

You’ll hear it when

  • Discussing image generation history.
  • Working with StyleGAN for faces or CycleGAN for unpaired translation.
  • Debugging mode collapse or training instability.
  • Comparing GAN vs diffusion for a generation task.

Related terms

See also