People are now aware of the technological changes driven by artificial intelligence (AI). Until recently, humans could easily tell the difference between real images and fake images created by AI, since the fake ones looked noticeably odd to the human eye. However, as AI technology advances, it is becoming harder and harder to spot the subtle differences between these two sources of images.
Recently, a team of researchers from DeepMind and Heriot-Watt University in the UK created a machine learning model, BigGAN, based on the generative adversarial network (GAN) architecture. BigGAN is reported to substantially improve the quality of AI-generated images.
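For readers unfamiliar with GANs: a GAN trains two networks in competition, a generator that turns random noise into images and a discriminator that tries to tell real images from generated ones. Below is a minimal PyTorch sketch of that adversarial training loop; the tiny MLP architectures and hyperparameters are illustrative assumptions and bear no resemblance to BigGAN's actual configuration.

```python
import torch
import torch.nn as nn

# Toy generator/discriminator pair (illustrative MLPs, not BigGAN's architecture).
latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial update. real_images: (batch, img_dim), scaled to [-1, 1]."""
    batch = real_images.size(0)
    # Discriminator: label real images 1, generated images 0.
    z = torch.randn(batch, latent_dim)
    fake_images = G(z).detach()  # detach so this step does not update G
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make D classify its samples as real.
    z = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

A call such as `train_step(torch.rand(32, img_dim) * 2 - 1)` runs one update on a batch of (here random) "real" images; over many iterations the generator's outputs become progressively harder for the discriminator to reject.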
During training, BigGAN relies on ImageNet, an image dataset containing millions of labeled images of different objects, maintained by researchers at Stanford and Princeton. Trained on 128 of Google's Tensor Processing Units (TPUs), the model required one to two days to finish training. The results, measured by Inception Score (IS), showed that the model worked remarkably well, pushing the score from 52.52 to 166.3. BigGAN's success is largely attributed to scale: it employs a larger GAN architecture, bigger batch sizes, and far more parameters than previous models.
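For reference, the Inception Score feeds generated images through a pretrained Inception classifier and rewards samples whose individual class predictions are confident while the predictions across the whole set remain diverse. Here is a minimal sketch of the computation, assuming the classifier's softmax outputs have already been collected:

```python
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    """probs: (N, num_classes) softmax outputs for N generated images.

    IS = exp( mean_x KL( p(y|x) || p(y) ) ), where p(y) is the marginal
    distribution over all generated images. Higher is better.
    """
    p_y = probs.mean(axis=0, keepdims=True)                  # marginal p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))   # per-image KL terms
    return float(np.exp(kl.sum(axis=1).mean()))

# Toy check: confident, diverse predictions score high; uniform ones score ~1.
confident = np.eye(10)[np.random.randint(0, 10, size=1000)] * 0.99 + 0.001
print(inception_score(confident))                 # high, near the class count (10)
print(inception_score(np.full((1000, 10), 0.1)))  # ~1.0, the worst case
```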
One might wonder why this application of AI matters. Fake images can cause many problems involving privacy, morality, legal issues, and so on; however, image generators are also very useful for model training, especially when only limited training data is available. Under normal circumstances, models perform better with more diverse training data. Therefore, if these image generators can produce varied yet realistic images, they can alleviate the lack of training data. Moreover, to avoid misappropriation of GAN technology for political or unethical purposes, the research team focused on general object images rather than images of faces.
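To make the augmentation idea concrete, here is a hedged sketch that mixes generator samples into a small real dataset before training a downstream model. The `G` and `latent_dim` names reuse the hypothetical toy definitions from the earlier sketch; a class-conditional model like BigGAN would instead generate each synthetic batch for a specific class label.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def augment_with_gan(real_x, real_y, G, latent_dim, n_fake, fake_label):
    """Append n_fake generator samples (all given one label) to a real dataset."""
    with torch.no_grad():                      # inference only; G is not updated
        fake_x = G(torch.randn(n_fake, latent_dim))
    fake_y = torch.full((n_fake,), fake_label, dtype=real_y.dtype)
    x = torch.cat([real_x, fake_x])
    y = torch.cat([real_y, fake_y])
    return DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)
```

The downstream classifier then trains on the combined loader as if every sample were real; the hope, per the article, is that varied but realistic synthetic images behave like extra training data.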
Sources:
https://www.theregister.co.uk/2018/10/01/biggan_fake_images/