PhD Candidate, Princeton University
PairCycleGAN: Style Transfer using Generative Adversarial Model
We consider the image style transfer problem, where an input image is transformed into an output image with an exemplar style. We assume all styles belong to the same domain, such as painting, manga, or even makeup for faces. One typical way to learn such style transfer is to minimize a perceptual loss in feature space [Gatys et al. 2015, Liao et al. 2017], but this normally takes a few minutes for a single static image. Recent work proposes feed-forward networks [Johnson et al. 2016] and generative models [Zhu et al. 2017] that speed up transforming an input image into a desired style. Their primary downside, however, is that they require training a separate model for each exemplar style. In this paper, we present an approach for learning to transform an image from a source domain to a target subdomain given an exemplar in that subdomain. Specifically, we treat style transfer as an unbalanced problem, in which the source domain is degraded relative to the target domain. We exploit supervision at the level of domain sets, so that all exemplar styles within the same domain set can be transferred onto input images without training a model for each one separately. We assess the method on several tasks, including manga style transfer, makeup transfer, and scribble-to-object transfer.
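The perceptual loss mentioned above compares images in a feature space rather than in pixel space: two images are close if a feature extractor (typically a pretrained network such as VGG) assigns them similar activations. A minimal sketch, using a random linear projection as a hypothetical stand-in for pretrained network features (the function names and shapes here are illustrative, not the paper's implementation):

```python
import numpy as np

def extract_features(image, weights):
    # Toy stand-in for a pretrained feature extractor (e.g. VGG activations):
    # a single linear projection of the flattened image. Illustrative only.
    return weights @ image.ravel()

def perceptual_loss(content, stylized, weights):
    # Mean squared distance between the two feature representations.
    f_c = extract_features(content, weights)
    f_s = extract_features(stylized, weights)
    return float(np.mean((f_c - f_s) ** 2))

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16 * 16 * 3))  # random projection standing in for learned features
img_a = rng.random((16, 16, 3))
img_b = rng.random((16, 16, 3))

print(perceptual_loss(img_a, img_a, W))  # zero: identical images share all features
print(perceptual_loss(img_a, img_b, W))  # positive: differing images diverge in feature space
```

Optimization-based methods [Gatys et al. 2015] minimize such a loss per image by gradient descent, which is why they take minutes per image; feed-forward approaches instead train a network once so that a single forward pass produces the stylized output.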
Huiwen is a fourth-year PhD student at Princeton University, advised by Adam Finkelstein. She received a B.S. in computer science from the IIIS (Yao Class) at Tsinghua University. Her interests focus on practical problems in photo processing. She interned at Adobe Seattle in 2015 and was a recipient of a Microsoft fellowship in 2016.