Discrete Wasserstein GANs (Generative Adversarial Networks)

Generating complex discrete distributions remains one of the challenging problems in machine learning. Existing techniques for generating complex distributions with high degrees of freedom rely on standard generative models such as Generative Adversarial Networks (GANs), Wasserstein GANs, and their variants, all of which optimize a distance between two continuous distributions. We introduce a Discrete Wasserstein GAN (DWGAN) model based on a dual formulation of the Wasserstein distance between two discrete distributions. We derive a novel training algorithm and corresponding network architecture from this formulation. Experimental results are provided for both synthetic discrete data and real discretized data from the MNIST handwritten digits dataset.
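To make the central object of the abstract concrete, the sketch below computes the Wasserstein distance between two discrete distributions via its Kantorovich dual, solved as a linear program. This is an illustrative example only, not the paper's DWGAN training algorithm; the function name, the SciPy-based solver choice, and the toy cost matrix are all assumptions introduced here.

```python
# Minimal sketch (illustrative, not the paper's method): the Wasserstein
# distance between discrete distributions p and q on a finite support,
# computed from the Kantorovich dual as a linear program.
import numpy as np
from scipy.optimize import linprog

def discrete_wasserstein_dual(p, q, cost):
    """Solve  max_f  f @ (p - q)  s.t.  f_i - f_j <= cost[i, j]  for all i, j.

    By Kantorovich duality, the optimal value equals the Wasserstein
    distance between p and q under the ground cost matrix `cost`.
    """
    n = len(p)
    # One inequality row per ordered pair (i, j): f_i - f_j <= cost[i, j].
    rows, ub = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            row = np.zeros(n)
            row[i], row[j] = 1.0, -1.0
            rows.append(row)
            ub.append(cost[i, j])
    # linprog minimizes, so negate the objective; f is unbounded in sign.
    res = linprog(c=-(p - q), A_ub=np.array(rows), b_ub=np.array(ub),
                  bounds=[(None, None)] * n, method="highs")
    return -res.fun

# Toy usage: two distributions on 4 points with |i - j| as the ground cost.
p = np.array([0.7, 0.1, 0.1, 0.1])
q = np.array([0.1, 0.1, 0.1, 0.7])
cost = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)
print(discrete_wasserstein_dual(p, q, cost))  # 1.8: move 0.6 mass a distance of 3
```

In WGAN terms, the dual variable f plays the role of the critic, and the pairwise constraints are the discrete analogue of the 1-Lipschitz condition; the paper's contribution is to exploit this discrete dual directly rather than approximating a continuous one.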