Images generated by AI make it hard to differentiate what’s real and fake

9 October 2018


People are now aware of the technological changes brought about by artificial intelligence (AI). So far, humans have been able to easily tell the difference between real images and fake images created by AI, since these fake images look strange to the human eye. However, as AI technology advances, it is getting harder and harder to spot the slight differences between the two.

Recently, a team of researchers from DeepMind and Heriot-Watt University in the UK created a machine learning model, BigGAN, based on the generative adversarial network (GAN) architecture. BigGAN is reported to substantially improve the quality of images produced by AI image generators.

During the training process, BigGAN relies on ImageNet, an image dataset that contains numerous images of different objects and is maintained by Stanford and Princeton. Trained on 128 of Google’s Tensor Processing Units (TPUs), the model required one to two days to finish training. The results, measured by inception score (IS), showed that the model worked well, pushing the IS from 52.52 to 166.3. It is believed that BigGAN succeeds because it employs a larger GAN, uses bigger batch sizes, and involves more parameters.
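To make the inception score concrete: IS rewards a generator whose individual images get confident class predictions (sharpness) while the predictions across all images cover many classes (diversity). In practice the class probabilities come from a pretrained Inception-v3 classifier evaluated on thousands of generated samples; the sketch below is a simplified, pure-Python version that assumes the per-image class probabilities are already given.

```python
import math

def inception_score(probs):
    """Simplified inception score from per-image class probabilities.

    probs: list of distributions p(y|x), one per generated image.
    IS = exp( mean_x KL( p(y|x) || p(y) ) ), where p(y) is the
    marginal distribution averaged over all generated images.
    """
    n = len(probs)
    k = len(probs[0])
    # Marginal p(y): average of the per-image conditionals.
    marginal = [sum(p[c] for p in probs) / n for c in range(k)]
    # Mean KL divergence between each conditional and the marginal.
    kl_sum = 0.0
    for p in probs:
        kl_sum += sum(pc * math.log(pc / mc)
                      for pc, mc in zip(p, marginal) if pc > 0)
    return math.exp(kl_sum / n)

# Sharp AND diverse predictions score high (here: 3 classes -> IS = 3.0);
# uniform, uninformative predictions score the minimum of 1.0.
confident = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
uniform = [[1/3, 1/3, 1/3]] * 3
```

With perfectly confident and evenly spread predictions the score equals the number of classes, which is why IS scales with how much of ImageNet's class variety a generator can cover.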

One might be curious about why this application of AI is important. Fake images can cause many problems, touching on privacy, morality, legal issues, and so on; however, image generators are very useful for model training, especially when only limited training data is available. Under normal circumstances, models perform better with more diversified training data. Therefore, if these image generators can provide varied but realistic images, they can alleviate the lack of training data. Moreover, to avoid misuse of GAN technology for political or unethical purposes, the research team focuses on general object images rather than images of faces.

 

Sources:

https://venturebeat.com/2018/10/02/deepmind-ai-can-generate-convincing-photos-of-burgers-dogs-and-butterflies/

https://www.theregister.co.uk/2018/10/01/biggan_fake_images/


2 thoughts on “Images generated by AI make it hard to differentiate what’s real and fake”

  1. Hi Shu-Yu,

    Thank you for your interesting post.

    Nowadays, a lot of news updates involving AI are about image recognition rather than image generation. The AI implementation that you describe is new to me.

    On one hand, I can clearly see the potential positives of image generation. Besides the use for training purposes that you mention, I can imagine that the technology can be useful in many other ways. An example is the architectural design of a to-be-built house or office building. The design can be depicted in a much clearer and more realistic way, narrowing the gap between expectations and the final result.

    On the other hand, I definitely foresee some serious problems involving this AI implementation. First, an average picture will lose its value. The credibility of an average picture will go down as editing or producing a picture becomes too easy and too realistic. This makes it (almost) impossible for a human to tell the difference between what is real and what is fake. Second, I agree with you on the potentially problematic ethical side of image generation. An example that I can think of is children (and also adults) being bullied in the near future with fake pictures created by image generation, especially because it will become incredibly hard to tell the difference between real-life and artificially produced images.

    I recommend that governments think very deeply about the potential hazards of image generation. I believe that the potential positives should be exploited and the potential problems should be taken care of in a proactive manner.

    Dennis

  2. In the past few weeks I’ve had a couple of good laughs at fantastic composites of real-world characters and (fake) surreal dialogue. The accuracy of face mapping in videos is stunning. For now it’s all fun and games, but it won’t be long before the veracity of every element in a video will have to be questioned by default.
