Softmax GAN is a novel variant of the Generative Adversarial Network (GAN). The key idea of Softmax GAN is to replace the classification loss in the original GAN with a softmax cross-entropy loss computed over the sample space of a single batch.
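As a rough illustration of that loss (a sketch, not the authors' reference code), the snippet below assumes a discriminator that outputs one scalar score per sample; the softmax is taken jointly over the real and generated scores of one batch, with the discriminator target placing all mass uniformly on the real samples and the generator target spreading it over every sample.

```python
import torch
import torch.nn.functional as F

def softmax_gan_losses(d_real, d_fake):
    """Illustrative softmax cross-entropy GAN losses over a single batch.

    d_real, d_fake: discriminator scores for real and generated samples,
    each of shape (N,). The softmax is taken over the whole joint batch.
    """
    scores = torch.cat([d_real, d_fake])      # scores of the joint batch
    log_p = F.log_softmax(scores, dim=0)      # batch-level softmax

    n_real, n_fake = d_real.numel(), d_fake.numel()

    # Discriminator target: probability mass spread uniformly over the
    # real samples only; generator target: mass spread over all samples.
    d_loss = -log_p[:n_real].sum() / n_real
    g_loss = -log_p.sum() / (n_real + n_fake)
    return d_loss, g_loss
```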
This paper introduces Diffusion-GAN, which employs a Gaussian mixture distribution, defined over all the diffusion steps of a forward diffusion chain, to inject instance noise. A random sample from this mixture, obtained by diffusing an observed or generated example, is fed as the input to the discriminator.
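A minimal sketch of that noise-injection step, assuming a standard variance-preserving forward diffusion kernel; `alphas_bar` and `t_probs` are hypothetical names for the cumulative noise schedule and the distribution over diffusion steps, and the full method additionally adapts the diffusion intensity during training, which is omitted here.

```python
import torch

def diffuse_for_discriminator(x, alphas_bar, t_probs):
    """Illustrative forward-diffusion instance noise (Diffusion-GAN style).

    x          : batch of real or generated samples, shape (N, ...).
    alphas_bar : cumulative products of the noise schedule, shape (T,), so
                 q(x_t | x_0) = N(sqrt(alphas_bar[t]) * x_0, (1 - alphas_bar[t]) * I).
    t_probs    : probabilities over the T diffusion steps; sampling t from
                 this distribution makes the noisy input a Gaussian mixture.
    """
    n = x.shape[0]
    t = torch.multinomial(t_probs, n, replacement=True)     # one step per sample
    a = alphas_bar[t].view(n, *([1] * (x.dim() - 1)))        # broadcast over dims
    noise = torch.randn_like(x)
    x_t = a.sqrt() * x + (1.0 - a).sqrt() * noise            # diffused input
    return x_t, t   # the discriminator sees x_t (and typically t as well)
```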
GAN Lab is a novel interactive visualization tool for anyone to learn and experiment with Generative Adversarial Networks (GANs), a popular class of complex deep learning models. With GAN Lab, you can interactively train GAN models for 2D data distributions and visualize their inner workings ...
The Progressive GAN code repository contains a command-line tool for recreating bit-exact replicas of the datasets that we used in the paper. The tool also provides various utilities for operating on the datasets:
Generative adversarial networks (GANs) are a class of generative machine learning frameworks. A GAN consists of two competing neural networks, often termed the Discriminator network and the Generator network. GANs have been shown to be powerful generative models and are able to successfully generate ...
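As a toy illustration of the two competing networks (not tied to any of the repositories listed here), the following sketch pairs a small generator and discriminator on 2D data and alternates the usual non-saturating updates; the network sizes and optimizer settings are arbitrary.

```python
import torch
import torch.nn as nn

# Hypothetical toy networks for 2D data; any generator/discriminator pair works.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    """One adversarial round: D learns to separate real from fake,
    G learns to fool D (non-saturating GAN loss)."""
    n = real.shape[0]
    z = torch.randn(n, 8)
    fake = G(z)

    # Discriminator update: push real samples toward 1, generated toward 0.
    d_loss = bce(D(real), torch.ones(n, 1)) + bce(D(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make D label the generated samples as real.
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```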
Each type of GAN is contained in its own folder and has a make_GAN_TYPE function. For example, make_bigbigan creates a BigBiGAN with the format of the GeneratorWrapper above. The weights of all GANs except those in PyTorch-StudioGAN are downloaded automatically. To download the PyTorch-StudioGAN weights, use the download.sh scripts in the corresponding folders (see the file structure below).
Official TP-GAN Tensorflow implementation for the ICCV17 paper "Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis" by Rui Huang, Shu Zhang, Tianyu Li, and Ran He. The goal is to recover a frontal face image of the same person from a single face image under any pose.
Generative denoising diffusion models typically assume that the denoising distribution can be modeled by a Gaussian distribution. This assumption holds only for small denoising steps, which in practice translates to thousands of denoising steps in the synthesis process. In our denoising diffusion ...
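For context, a single step of the standard Gaussian denoising update looks roughly as follows (a sketch under the usual noise-prediction parameterization, not the method of the paper above); because the Gaussian approximation is only accurate when beta_t is small, samplers built on it typically need on the order of a thousand such steps.

```python
import torch

def gaussian_reverse_step(x_t, pred_noise, beta_t, alpha_t, alpha_bar_t):
    """One illustrative Gaussian denoising step (DDPM-style update).

    x_t        : current noisy sample.
    pred_noise : the network's estimate of the noise added to x_0.
    beta_t, alpha_t, alpha_bar_t : noise-schedule quantities at step t.
    The reverse distribution p(x_{t-1} | x_t) is modeled as a Gaussian whose
    mean is derived from the predicted noise.
    """
    mean = (x_t - beta_t / (1.0 - alpha_bar_t) ** 0.5 * pred_noise) / alpha_t ** 0.5
    return mean + beta_t ** 0.5 * torch.randn_like(x_t)   # sigma_t^2 = beta_t choice
```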