Recently, a TikTok trend has gone viral in which people generate an AI image of a homeless man inside their house. They then send this picture to their parents or housemates and claim that a stranger has entered their home and refuses to leave. Although it can be funny for the sender, you can imagine the reaction of frightened parents worried that their belongings are at risk, or worse.
I tried this myself with Google Gemini and showed the result to my housemate, who is a law student. At first, we were laughing at how realistic it looked. The man's face, his clothing, the way he blends into the picture: it all looked so convincing that it was hard to tell it was AI-generated.
My housemate then pointed out how this kind of technology could also be quite scary. Give these AI generators two more years, and the ease with which people can fabricate content will be staggering. One field affected by this, he noted, will be the legal system. If anyone can fabricate "evidence" in a matter of minutes through an easily accessible online tool, then what happens in courtrooms, where visual evidence has traditionally been treated as robust and weighty material? Such evidence used to feel genuinely objective, but with the rise of deepfakes, a generated photo of a "suspect" or a fake screenshot could completely change a case.
What scares me most is that we are only at the beginning of this revolution. It is just a matter of time before AI-generated content can fool judges, juries, insurance companies, and even journalists. My housemate and I joked that society is heading into a digital Wild West, where nothing you see can be trusted. It certainly sounds like a Black Mirror episode. The greatest challenge will lie in verifying content, and if a couple of kids can already create fake people who look real, what's coming next?