Ethical concerns of local AI

17 October 2023

Images generated through a 'style-copy' using Dreambooth

Several years ago, before ChatGPT catapulted AI into public view as a helpful tool that many of us now use daily, tools with similar use cases already existed. They were quite a bit more primitive, but they still managed to make the news a couple of times, often in a negative context. One of these tools was an app called ‘DeepNude’, which rightfully garnered a lot of criticism because it allowed you to turn a picture of someone into a nude picture (Cole, 2019). After the backlash the app was quickly taken offline, but it offered a snapshot of some of the negative effects of generative AI.

DeepNude app taken offline after backlash (Collective Shout, n.d.)


Compared to 2019, the playing field has changed quite a lot. For image generation, the most well-known tools are currently Midjourney and DALL-E. But another famous one, which differs strongly from the previous two, is Stable Diffusion. The main difference is that with Midjourney and DALL-E you make use of an online service provided by these companies to generate images, whereas Stable Diffusion can be run completely offline and locally on one’s own computer. This gives Stable Diffusion users much greater freedom, which ties us back to ‘DeepNude’. While that app was swiftly taken offline after the backlash, and while users of services such as Midjourney are required to adhere to certain rules, users of SD can do as they please. This of course means that they can generate images of other people, and with the Dreambooth extension for SD, for which many guides have been posted online, this is very much possible. Running SD locally really does take only a few lines of code, as the sketch below illustrates.
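A minimal sketch of such a local, offline text-to-image run, assuming the Hugging Face diffusers library and a consumer GPU; the checkpoint name and prompt are only examples:

```python
# Minimal local Stable Diffusion run using the Hugging Face `diffusers` library.
# Assumes the checkpoint has been downloaded once; after that, no online
# service is involved in generating images.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # runs on a local consumer GPU

image = pipe("a watercolour painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```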


This brings me to a question: should SD be regulated, and if so, how? It’s something I’ve thought about while using this tool. The potential for deepfakes in general is quite high, but is it even feasible to ban something such as SD? After all, how would you be able to enforce such a ban? This is not to mention the fact that SD is not inherently a ‘bad’ tool; rather, it is something bad actors can use for nefarious purposes. At the same time, the increased freedom of SD compared to other generative image AI allows legitimate artists to use it as part of their workflow, to speed up the process or perhaps to increase the creativity of their art (Edwards, 2023). Many artists have, however, complained about such AI, raising ethical concerns such as copyright breaches and theft, and many are worried about their financial prospects if large companies start replacing them, which adds another complexity to any discussion of AI regulation. Of course, SD may also serve as a simple tool of entertainment for many, allowing them to quickly generate interesting images through its text-to-image functionality.


With all this in mind, perhaps a more reasonable approach would be to ban the creation and sharing of all deepfakes, and to punish those who do so with jail time and/or fines. Tracing whoever shared a harmful deepfake is much more feasible, and such a ban would not discourage the positive uses of the program. What are your thoughts on the matter? Should such offline AI perhaps be regulated much more severely, and if so, how?

References:
Cole, S. (2019, June 26). This Horrifying App Undresses a Photo of Any Woman With a Single Click. Vice. https://www.vice.com/en/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman
Collective Shout. (n.d.). DeepNude app taken offline after backlash. https://www.collectiveshout.org/deepnude_app_taken_offline
Edwards, P. (2023, May 2). Why this AI art took 17 hours to make. Vox. https://www.vox.com/videos/2023/5/2/23708076/ai-artist-stelfie-process-workflow


2 thoughts on “Ethical concerns of local AI”

  1. Very interesting and relevant topic. As we discussed in class, there are no laws concerning the ethics behind genAI. Therefore, as you mentioned, such text-to-image tools are not necessarily bad, but people with bad intentions can make use of them too. I think the DeepNude case shows perfectly that we as a society may not be ready for such (offline) tools, as there is no strict regulation.

    I agree with you that there has to be some sort of regulation. My view on this topic is that strict worldwide genAI guidelines/laws have to be introduced. Also, making digital watermarking obligatory could be an effective method to counter harmful deepfakes.

    Great blog and thank you for sharing your insights!

    1. Yes, I agree. Perhaps some governing body, or some of the larger companies such as Microsoft and Google, should collaborate to create such guidelines and rules. The digital watermarking is also a good idea, as it allows creators the freedom to make what they please (for the most part), while also making sure harm is strongly reduced. A rough sketch of how such an invisible watermark could be embedded is shown below.
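As an illustration of that watermarking idea, here is a minimal sketch using the invisible-watermark package (the library used by the original Stable Diffusion reference scripts to tag their outputs); the file names and payload are just examples:

```python
# Sketch: embedding and reading back an invisible watermark with the
# `invisible-watermark` package; file names and payload are illustrative.
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

payload = b"AI-generated"                      # 12 bytes = 96 bits

# Embed the watermark into a generated image.
bgr = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload)
bgr_marked = encoder.encode(bgr, "dwtDct")     # frequency-domain embedding
cv2.imwrite("generated_marked.png", bgr_marked)

# Later, anyone can check whether an image carries the marker.
decoder = WatermarkDecoder("bytes", len(payload) * 8)
recovered = decoder.decode(cv2.imread("generated_marked.png"), "dwtDct")
print(recovered.decode("utf-8", errors="replace"))
```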
