FaceApp, the app that automatically generates highly realistic transformations of your face, is the new trend right now. But what goes on behind such an app technically? Let's find out in detail! If you've been browsing the internet recently, you must have come across pictures of people who look many years older than their real age. Or a different version: the same person in a different gender, with facial hair, or sporting the classic Walter White look from the TV show Breaking Bad. And they look quite realistic, right?
It might seem mysterious how FaceApp manages to do this, but it's just a classic application of neural networks and generative models. No problem if you don't know these terms; we'll explain them in this blog.
Simply put, FaceApp takes features from one face and applies them to another. It has a database containing a huge number of face pictures; it extracts features from your face and applies changes that render your face different, yet the distinctive features remain, so you can still recognize yourself. So now you can change the gender in a photograph, age yourself by decades, and much more, with striking realism.
If you are tech-savvy, you must have heard the terms artificial intelligence, machine learning, and deep learning. Machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning. Deep learning is built on neural network models, and FaceApp's technology belongs to this field. Specifically, it uses a Generative Adversarial Network (GAN), a type of neural network construct that offers a lot of potential in the world of artificial intelligence. A generative adversarial network is composed of two neural networks: a generative network and a discriminative network. These work together to produce highly realistic synthetic data.
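To make the generator/discriminator idea concrete, here is a minimal sketch in NumPy. The network sizes, weights, and loss expressions are illustrative assumptions, not FaceApp's actual model: the generator turns random noise into a fake sample, the discriminator scores how "real" that sample looks, and each network's loss pushes the score in the opposite direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen just for illustration
NOISE_DIM, IMG_DIM = 8, 16

# Generator: maps a random noise vector to a fake "image" vector
W_g = rng.normal(scale=0.1, size=(NOISE_DIM, IMG_DIM))
def generator(z):
    return np.tanh(z @ W_g)  # fake sample with values in [-1, 1]

# Discriminator: outputs a probability that its input is a real sample
W_d = rng.normal(scale=0.1, size=(IMG_DIM,))
def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ W_d)))  # sigmoid score in (0, 1)

z = rng.normal(size=NOISE_DIM)
fake = generator(z)
score = discriminator(fake)

# Adversarial objectives: the discriminator wants the score on fakes
# to go to 0, while the generator wants it to go to 1.
d_loss = -np.log(1.0 - score)  # discriminator's loss on this fake
g_loss = -np.log(score)        # generator's loss on the same fake
```

Training alternates gradient updates on these two losses, so the two networks improve against each other until the fakes become hard to tell from real data.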
FaceApp's core task is facial attribute editing. Many GAN models are currently available for facial attribute editing, such as STGAN, FACEGAN, and AttGAN, but AttGAN is one of the best suited for implementation. In this blog, we talk about AttGAN. First, let's see how AttGAN works (see the figure below).
AttGAN is built on an encoder-decoder architecture and develops an effective method for high-quality facial attribute editing. With the encoder-decoder architecture, facial attribute editing is achieved by decoding the latent representation from the encoder, conditioned on the expected attributes. Under such a framework, the key issue of facial attribute editing is how to model the relation between the attributes and the face latent representation. One approach represents each attribute as a vector, defined as the difference between the mean latent representations of the faces with and without that attribute. Then, by adding one or more attribute vectors to a face's latent representation, the face image decoded from the modified representation is expected to possess those attributes. However, such an attribute vector contains highly correlated attributes, inevitably leading to unexpected changes in other attributes; e.g., adding blond hair tends to make a male face become female, because most blond-haired subjects in the training set are female.
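The attribute-vector idea described above can be sketched in a few lines of NumPy. The random arrays below are stand-ins for encoder outputs (in a real system they would come from encoding actual face images), and the dimensions are arbitrary assumptions: we compute the attribute vector as the difference of mean latents with and without the attribute, then add it to a new face's latent code before decoding.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 32  # assumed size of the encoder's latent code

# Stand-ins for encoder outputs: latent codes of training faces,
# split by whether they carry the attribute (e.g. "blond hair").
latents_with_attr = rng.normal(loc=0.5, size=(100, LATENT_DIM))
latents_without_attr = rng.normal(loc=-0.5, size=(100, LATENT_DIM))

# Attribute vector: difference between the mean latent representations
# of faces with and without the attribute.
attr_vec = latents_with_attr.mean(axis=0) - latents_without_attr.mean(axis=0)

# Edit a face: add the attribute vector to its latent code; a real
# model would then decode the modified latent back into an image.
face_latent = rng.normal(size=LATENT_DIM)
edited_latent = face_latent + attr_vec
```

Because every latent dimension is shifted at once, any attribute correlated with the target one (like gender with blond hair) gets shifted too, which is exactly the entanglement problem AttGAN is designed to address.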