Published in ICML 2018 (co-first author)
Abstract
Generative adversarial networks (GANs) aim to generate realistic data from some prior distribution (e.g., Gaussian noise). However, such a prior distribution is often independent of the real data and thus may lose semantic information (e.g., the geometric structure or content of images) of the data. In practice, the semantic information might be represented by some latent distribution learned from the data, which, however, is hard to use for sampling in GANs. In this paper, rather than sampling from a pre-defined prior distribution, we propose a Local Coordinate Coding (LCC) based sampling method to improve GANs. We derive a generalization bound for LCC-based GANs and prove that a low-dimensional input is sufficient to achieve good generalization. Extensive experiments on various real-world datasets demonstrate the effectiveness of the proposed method.
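To make the idea of LCC-based sampling concrete, below is a minimal, illustrative sketch (not the paper's implementation): latent vectors are drawn as local convex combinations of anchor codes assumed to be learned from data (e.g., by an autoencoder), and these vectors would replace the usual Gaussian noise fed to the generator. All names and parameters here are hypothetical.

```python
# Illustrative sketch of LCC-style latent sampling for a GAN (assumptions, not the paper's code).
import numpy as np

def lcc_sample(anchors, num_samples, num_neighbors=5, seed=None):
    """Draw latent vectors as convex combinations of nearby anchor codes."""
    rng = np.random.default_rng(seed)
    d = anchors.shape[1]

    # Pick a random anchor as the local center for each sample.
    centers = anchors[rng.integers(0, len(anchors), size=num_samples)]

    samples = np.empty((num_samples, d))
    for i, c in enumerate(centers):
        # Nearest anchors to the chosen center form its local coordinate system.
        dists = np.linalg.norm(anchors - c, axis=1)
        nearest = anchors[np.argsort(dists)[:num_neighbors]]

        # Random convex weights (sum to 1) keep the sample on the local
        # patch spanned by the nearby anchors.
        w = rng.dirichlet(np.ones(num_neighbors))
        samples[i] = w @ nearest

    return samples

# Usage (hypothetical): z = lcc_sample(learned_codes, batch_size)
# is fed to the generator in place of z ~ N(0, I).
```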