
EmotiGAN: Emoji Art using Generative Adversarial Networks
Marcel Puyat

Abstract— We investigate a Generative Adversarial Network (GAN) approach to generating emojis from text. We focus on two interesting research areas related to GANs: training stability and mode collapse. In doing so, we explore a novel GAN training approach that involves training the generator on batches of different images that share the same conditional label, in order to produce more varied kinds of images with Conditional GANs.

I. INTRODUCTION

In this work, we are interested in translating text in the form of a phrase or "search query" directly into emoji image pixels. One might imagine an application for this wherein a user inputs a short search query and is given a unique emoji relating to their search terms.

Similar recent works[1] have tackled the task of translating text into images by solving two sub-problems: first, learn a text feature representation, or embedding, that captures the important visual details of the domain; and second, use these features as input into a generative model to generate realistic images that a human might mistake for real.

EmotiGAN solves this problem by building on work from two different areas. First, we reuse most of the architecture of Deep Convolutional GANs[2] (DCGAN), which has been shown to do quite well on a broad range of domains related to image generation. We then use a pre-trained word2vec[3] model to convert text into an embedding that we can use as input into our GAN. Section III covers more details on the design and implementation of these two components.

Along the way, we make some alterations to the DCGAN architecture in order to stabilize training and produce more varied generated images given the same text label.

Fig. 1: Examples of EmotiGAN-generated emojis from phrases. The phrase above each emoji was converted into an embedding using word2vec and then fed as input into a GAN.

II. RELATED WORK

A. Generative Adversarial Networks

A Generative Adversarial Network[4] consists of two neural networks, a generator and a discriminator, whose cost functions set up a minimax game wherein the discriminator learns to classify images as real or fake, and the generator attempts to produce realistic synthetic images in order to fool the discriminator.

More formally, the generator and discriminator play the following minimax game with value function V(D, G):

min_G max_D V(D, G) = log(D(x)) + log(1 − D(G(z)))

where x is a real image, z is a noise variable drawn from some prior, D aims to output 1 for a real image and 0 for a fake one, and G attempts to produce a realistic image given the noise variable z.
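As a minimal sketch (not the paper's implementation), this value function can be written directly in TensorFlow, assuming d_real and d_fake hold the discriminator's probability outputs for a batch of real and generated images:

```python
import tensorflow as tf

def value_function(d_real, d_fake):
    # V(D, G) = log(D(x)) + log(1 - D(G(z))), averaged over the batch.
    # d_real = D(x) and d_fake = D(G(z)) are probabilities in (0, 1).
    return tf.reduce_mean(tf.math.log(d_real) + tf.math.log(1.0 - d_fake))

# The discriminator takes gradient steps to maximize V (i.e., minimize -V),
# while the generator takes gradient steps to minimize V.
```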


Fig. 2: EmotiGAN high-level architecture


GANs are notoriously difficult to train to convergence: the discriminator and generator must learn at a similar pace, with neither overpowering the other, or the weaker network suffers exploding or vanishing gradients. Many heuristics and formal techniques[5] have been explored to improve GAN training stability, and we apply several of them to stabilize EmotiGAN. Another GAN failure mode that has not been fully solved is mode collapse, where the generator outputs images with little or no variety despite being trained on a dataset with a high variance of image types.

There is no standard evaluation metric in the field of unsupervised image generation. Salimans et al.[5] suggest an inception score metric that we explore in this work in order to provide quantitative results in addition to the qualitative analysis done by simply examining the synthetic images.

B. Conditional GAN

Mirza et al.[6] explored adding an additional input variable to both the discriminator and generator to allow for more control over the generated images. In EmotiGAN, for example, this additional conditional input is the vector representation of the text input.
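A minimal sketch of how such conditioning is commonly wired up (an assumption about the mechanics rather than EmotiGAN's exact layers): the generator receives the condition concatenated with the noise, and the discriminator sees the condition tiled alongside an intermediate feature map.

```python
import tensorflow as tf

def generator_input(z, c):
    # The generator conditions on c by concatenating it with the noise vector z.
    return tf.concat([z, c], axis=-1)

def condition_feature_map(features, c):
    # One common way to condition the discriminator: tile the embedding c
    # spatially and concatenate it with an intermediate ConvNet feature map.
    # Assumes `features` has a static (height, width).
    h, w = features.shape[1], features.shape[2]
    c_map = tf.tile(c[:, None, None, :], [1, h, w, 1])
    return tf.concat([features, c_map], axis=-1)
```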

The issue of mode collapse with Conditional GANs (wherein, given the same conditional input c, the generator outputs the exact same image) is mostly unsolved, as many have found that the generator learns to ignore the noise variable z and conditions only on the conditional input c.

C. Text to Image Synthesis

Zhang et al.[1] were able to train a text-to-image GAN on a dataset of bird images. Their application differs from this work in that they were interested in translating full sentences into images. In order to do so, they trained a word embedding model intended to capture the visual semantics behind a given sentence related to birds. Our work simply uses a pre-trained word2vec model[3], but one can imagine possible extensions to this work wherein a word embedding model is trained especially for emojis.
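As an illustration, a phrase can be mapped to a conditioning vector with the pre-trained Google News word2vec model via gensim. The paper does not spell out how multi-word phrases are combined; averaging the word vectors is an assumption made here, and the model filename is simply the standard download name.

```python
import numpy as np
from gensim.models import KeyedVectors

# Pre-trained Google News vectors (300 dimensions), loaded once.
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def embed_phrase(phrase: str) -> np.ndarray:
    # Average the word2vec vectors of the in-vocabulary words in the phrase.
    vectors = [w2v[w] for w in phrase.lower().split() if w in w2v]
    if not vectors:
        return np.zeros(w2v.vector_size, dtype=np.float32)
    return np.mean(vectors, axis=0)

c = embed_phrase("smiling face with halo")  # 300-d conditioning vector
```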

III. EMOTIGAN

EmotiGAN draws inspiration from the DCGAN architecture, using a similar approach with an upsampling ConvNet for the generator and a downsampling ConvNet for the discriminator. Figure 2 gives a high-level view of the overall architecture, and a code sketch of the two networks follows below. The remaining subsections lay out the unique aspects of EmotiGAN.
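A minimal DCGAN-style sketch of the two ConvNets in TensorFlow/Keras; the layer sizes, 64x64 output resolution, and 300-d text embedding are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(z_dim=100, c_dim=300):
    z = layers.Input(shape=(z_dim,))
    c = layers.Input(shape=(c_dim,))
    h = layers.Concatenate()([z, c])                 # condition on the text embedding
    h = layers.Dense(8 * 8 * 256)(h)
    h = layers.Reshape((8, 8, 256))(h)
    for filters in (128, 64):                        # upsample 8 -> 16 -> 32
        h = layers.Conv2DTranspose(filters, 5, strides=2, padding="same")(h)
        h = layers.BatchNormalization()(h)
        h = layers.ReLU()(h)
    img = layers.Conv2DTranspose(3, 5, strides=2,    # 32 -> 64, pixels in [-1, 1]
                                 padding="same", activation="tanh")(h)
    return tf.keras.Model([z, c], img, name="generator")

def build_discriminator(c_dim=300):
    img = layers.Input(shape=(64, 64, 3))
    c = layers.Input(shape=(c_dim,))
    h = img
    for filters in (64, 128, 256):                   # downsample 64 -> 8
        h = layers.Conv2D(filters, 5, strides=2, padding="same")(h)
        h = layers.LeakyReLU(0.2)(h)
    c_map = layers.RepeatVector(8 * 8)(c)            # tile the condition over
    c_map = layers.Reshape((8, 8, c_dim))(c_map)     # the 8x8 feature grid
    h = layers.Concatenate()([h, c_map])
    h = layers.Flatten()(h)
    score = layers.Dense(1, activation="sigmoid")(h) # probability the image is real
    return tf.keras.Model([img, c], score, name="discriminator")
```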

A. Heuristic Cost Function

The generator's cost function in the original GAN paper[4] directly minimizes the negative of the discriminator's cost function:


Fig. 3: Images generated when using text input "tongue". Left: images with our mode collapse training mechanism. Right: images without our mode collapse training mechanism.

J(G) = log(1 − D(G(z)))

However, when training with this objective, we found that some training attempts resulted in early vanishing gradients for the generator from which it would never recover. If the discriminator happens to get off to an "early lead" and starts confidently rejecting the generator's outputs, the generator's objective quickly saturates near zero and its gradients vanish.

Salimans et al.[5] describe a heuristic objective function wherein the generator instead tries to maximize the number of synthetic images classified as real:

J(G) = −log(D(G(z)))

We employed this heuristic objective function and found a significant reduction in the number of training attempts that failed early. Notice that with this objective function, the generator's loss increases as the discriminator rejects more of its outputs, leading to bigger gradients when it is doing poorly. In later work, Fedus et al.[7] explored the possible downsides of this heuristic cost function and found that it did not significantly affect the long-term convergence of GAN training.
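A minimal sketch of the two generator objectives side by side, assuming d_fake holds the discriminator's probabilities D(G(z)) for a batch of generated images:

```python
import tensorflow as tf

def generator_loss_minimax(d_fake):
    # J(G) = log(1 - D(G(z))): saturates (near-zero gradients) once the
    # discriminator confidently rejects the generator's outputs.
    return tf.reduce_mean(tf.math.log(1.0 - d_fake))

def generator_loss_heuristic(d_fake):
    # J(G) = -log(D(G(z))): the loss, and its gradient, grow as the discriminator
    # rejects more outputs, so a struggling generator keeps receiving signal.
    return -tf.reduce_mean(tf.math.log(d_fake))
```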

B. Addressing Mode Collapse

When attempting a first pass at training emoji image generation without conditioning on text labels, we found that, as is common in GAN training, the generator had very little variance in its outputs. Salimans et al.[5] suggest a technique called minibatch discrimination, wherein the discriminator examines all images in a given batch and measures similarity between images. This forces the generator to achieve variance levels similar to those of real batches of images.

However, as mentioned in our Related Work section, Conditional GANs suffer from a slightly different form of mode collapse that is not rectifiable with vanilla minibatch discrimination: a given batch of images with different conditional labels might look different by virtue of the discriminator requiring an image to resemble its conditional input, but images with the same conditional input tend to look the same.

We employ a novel training mechanism to address this issue. Instead of simply running the usual training iterations, wherein the conditional inputs are drawn in random batches, we alternate random batches with batches in which every example shares the same label. We then combine this with minibatch discrimination, and the result is that the generator is forced to use its input noise variable z to vary its outputs for the same conditional label or be rejected by the discriminator.
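A minimal sketch of this batch construction, assuming images and labels are aligned NumPy arrays; the even/odd alternation and the sampling details are illustrative assumptions.

```python
import numpy as np

def sample_batch(images, labels, batch_size, step, rng=np.random):
    if step % 2 == 0:
        # Usual iteration: conditional inputs drawn at random.
        idx = rng.choice(len(images), size=batch_size, replace=False)
    else:
        # Alternate iteration: every example in the batch shares one label, so
        # minibatch discrimination pushes the generator to vary outputs via z.
        label = labels[rng.randint(len(labels))]
        same = np.flatnonzero(labels == label)
        idx = rng.choice(same, size=batch_size, replace=True)
    return images[idx], labels[idx]
```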

IV. DATASET

We trained on a dataset of 2,000 emoji images scraped from unicode.org[8], using TensorFlow on Paperspace Cloud GPUs (NVIDIA Quadro P6000). Examples of dataset emojis are included in the appendix. One important step we took to stabilize training early in the GAN minimax game was to add instance noise to the discriminator's input. Sonderby et al.[9] found that adding noise to the discriminator's input ensures that, even in the early phases of training when the generator is outputting nonsensical images, there will be some overlap between the noised real images and the noised generator outputs. This overlap is crucial to stabilizing GAN training, which implicitly relies on the divergence between the generated distribution and the real data distribution being well-defined. When there is zero overlap between the real data and the generator output, the KL divergence is undefined, and this leads to irrecoverable training issues.
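A minimal sketch of instance noise, assuming a Gaussian perturbation whose scale is annealed linearly over training; the initial scale and the schedule are assumptions.

```python
import tensorflow as tf

def add_instance_noise(images, step, total_steps, initial_sigma=0.1):
    # Anneal the noise standard deviation linearly toward zero over training.
    sigma = initial_sigma * max(0.0, 1.0 - step / total_steps)
    return images + tf.random.normal(tf.shape(images), stddev=sigma)

# Both real and generated images are perturbed before reaching the discriminator:
#   d_real = discriminator(add_instance_noise(real_images, step, total_steps))
#   d_fake = discriminator(add_instance_noise(fake_images, step, total_steps))
```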

V. EXPERIMENTAL RESULTS

Figures 1 and 3 show some of our qualitative results. Unsupervised generative models do not have a standard quantitative evaluation metric. Nonetheless, we use a modified version of the inception score metric of Salimans et al.[5] mentioned earlier, in which a modified version of our trained discriminator (with a softmax layer at the end) provides the class-label probabilities needed to compute the score. The proposed score is:

exp(E_x[KL(p(y|x) || p(y))])

This exponentiates the average KL divergence between p(y|x), the probability that a classifier (an Inception model trained on the dataset) assigns the correct label to a generated image, which should be high for a good GAN, and the marginal class distribution p(y), which should have high entropy because we desire varied outputs. Because we do not have a pretrained Inception model for our emoji dataset, we instead converted our discriminator into a CNN classifier with a softmax layer at the end. This allowed us to compute p(y|x) and, by marginalizing over our generated images, p(y). We found that this modified inception score is notably higher when training with our proposed method of addressing mode collapse, due to the greater variance in the generator's outputs. Our scores with and without the proposed same-label-batch training mechanism are shown in the table at the end of this section.
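A minimal sketch of this computation, assuming classifier is the repurposed discriminator with a softmax output and generated is a batch of generated emoji images:

```python
import numpy as np

def modified_inception_score(classifier, generated, eps=1e-12):
    # p(y|x) for each generated image, from the discriminator-turned-classifier.
    p_y_given_x = classifier.predict(generated)        # shape (N, num_classes)
    # p(y): marginal label distribution over the generated images.
    p_y = p_y_given_x.mean(axis=0, keepdims=True)
    # Exponentiate the average KL(p(y|x) || p(y)) over images.
    kl = p_y_given_x * (np.log(p_y_given_x + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))
```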

We took several steps to avoid overfitting our training set, such as applying dropout in both the discriminator and generator models, adding noise to the inputs of both models, and adding noise to the labels by having the discriminator not always measure its cost against exactly 1 for real images or exactly 0 for fake images. Even so, we believe some overfitting occurred in how the generator uses the conditional input. More specifically, when we provided the generator with words that deviated substantially from the words in the training set, many of the generated emojis were nonsensical. We have included some of these in the appendix and believe this can be alleviated with: A) a larger, more varied dataset, and B) a word embedding space that makes more sense to use for emojis.
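The label-noise step mentioned above can be sketched as follows; the idea is often called label smoothing, and the specific ranges below are illustrative assumptions.

```python
import tensorflow as tf

def noisy_targets(batch_size, real=True):
    if real:
        # Real images: targets drawn near 1 rather than exactly 1.
        return tf.random.uniform([batch_size, 1], minval=0.8, maxval=1.0)
    # Fake images: targets drawn near 0 rather than exactly 0.
    return tf.random.uniform([batch_size, 1], minval=0.0, maxval=0.2)
```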

Inception Score
  Same labels in batches: 8.4
  Without same labels in batches: 4.9

VI. CONCLUSION

In conclusion, we were able to train a model to generate emojis similar to those in our training set, while developing a novel training mechanism that helps a Conditional GAN generate more varied outputs. This remains an open research problem, and our hope is that this work contributes to it. Future work should consider scaling our training mechanism to larger datasets (possibly by replicating only N training examples with the same label in a given batch), using a word embedding better suited to the domain of emojis than Google News word2vec, and training on a larger dataset in order to generalize to a wider variety of words when generating emojis from text.

REFERENCES

[1] Zhang, H., Xu, T., Li, H., Zhang, S., Huang, X., Wang, X., and Metaxas, D. (2016). StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242.

[2] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

[3] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013. https://code.google.com/archive/p/word2vec/

[4] Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. NIPS, 2014.

[5] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.


[6] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

[7] William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, and Ian Goodfellow. Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step.

[8] https://unicode.org/emoji/charts/full-emoji-list.html
[9] C. K. Sonderby, J. Caballero, L. Theis, W. Shi, and F. Huszar. Amortised MAP Inference for Image Super-resolution.

VII. APPENDIX

A. Training data examples

Fig. 4: "Smiling face with halo"

Fig. 5: "Smiling Devil Face"

Fig. 6: "Pile of poo"

B. More generated emojis

Fig. 7: Various different labels

Fig. 8: Various different labels, 2


Fig. 9: Input text: "glasses"

Fig. 10: Input text: "glasses", without our output-varying training mechanism

