Advanced AI and Manipulation: A Call for Caution or Innovation?

4 October 2023


What if advanced AI systems could be used to manipulate information and sway political outcomes?

The open letter from the Future of Life Institute raises this provocative question and calls for a six-month pause on training AI systems more powerful than GPT-4, citing concerns about potential risks (Future of Life Institute, 2023).

One of the key claims made in the open letter is that AI systems like GPT-3 and its successors have the potential to create convincing disinformation (Future of Life Institute, 2023). One example of the potential dangers is the creation of deepfake videos. As AI grows more sophisticated, it has become far easier to create convincing videos of people doing and saying things they never did (Thompson, 2023). Such videos could be used to spread disinformation or manipulate public opinion, potentially influencing political outcomes (Byman et al., 2023).

Another concern is the perpetuation of bias and discrimination by AI systems. If these systems are trained on biased or incomplete data, they may perpetuate and even amplify existing societal inequalities (Best, 2021). For example, facial recognition technology has been found to have markedly higher error rates for people with darker skin tones, which can lead to harmful consequences such as wrongful arrests (Buolamwini & Gebru, 2018).
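
To make the idea of a disaggregated audit concrete, the sketch below computes per-group error rates over a handful of predictions. The group labels, predictions, and numbers are entirely synthetic stand-ins for illustration; they do not reproduce Buolamwini and Gebru's actual benchmark or methodology.

```python
# Minimal sketch of a per-group error-rate audit, in the spirit of
# disaggregated evaluations such as Buolamwini & Gebru (2018).
# All data below is synthetic and purely illustrative.
from collections import defaultdict

# (predicted_label, true_label, group) triples -- invented stand-ins.
results = [
    (1, 1, "lighter"), (0, 0, "lighter"), (1, 1, "lighter"), (1, 0, "lighter"),
    (0, 1, "darker"),  (1, 0, "darker"),  (0, 1, "darker"),  (1, 1, "darker"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for pred, true, group in results:
    totals[group] += 1
    errors[group] += int(pred != true)

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

A large gap between groups (here 25% versus 75% on the toy data) is exactly the kind of disparity such an audit is designed to surface before a system is deployed.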

Proponents of a pause argue that these risks are too great to ignore and that a break is needed to reassess the potential consequences and develop appropriate safeguards (Future of Life Institute, 2023). To that end, the signatories of the letter call for the development of a global regulatory framework to govern AI systems (Future of Life Institute, 2023).

On the other hand, ongoing research can produce novel algorithms and techniques that address fairness and accountability in AI systems (Madras et al., 2018). For instance, researchers have developed methods that can explain the decision-making processes of AI systems, increasing transparency and accountability (Doshi-Velez & Kim, 2017). They have also developed techniques for detecting and mitigating algorithmic bias, such as adversarial training, as sketched below (Madras et al., 2018).
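
As a rough illustration of the adversarial approach, the sketch below trains a small encoder and task predictor while an adversary tries to recover a sensitive attribute from the learned representation; the encoder is then penalized whenever the adversary succeeds. This is a simplified toy in the spirit of Madras et al. (2018), not their exact LAFTR objective or architecture, and all data, network sizes, and the trade-off weight are invented for the example.

```python
# Simplified sketch of adversarially fair representation learning,
# loosely inspired by Madras et al. (2018). Synthetic data throughout.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 8 features, binary task label y, binary sensitive attribute s.
n = 1024
x = torch.randn(n, 8)
s = (torch.rand(n) < 0.5).float()                              # sensitive attribute
y = ((x[:, 0] + 0.5 * s + 0.1 * torch.randn(n)) > 0).float()   # label correlated with s

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU())   # learns representation z
predictor = nn.Linear(16, 1)                           # predicts y from z
adversary = nn.Linear(16, 1)                           # tries to recover s from z

opt_main = torch.optim.Adam([*encoder.parameters(), *predictor.parameters()], lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # 1) Train the adversary to recover s from the current (frozen) representation.
    z = encoder(x).detach()
    adv_loss = bce(adversary(z).squeeze(1), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train encoder + predictor to predict y while *fooling* the adversary,
    #    i.e. maximizing the adversary's loss on s.
    z = encoder(x)
    task_loss = bce(predictor(z).squeeze(1), y)
    fool_loss = -bce(adversary(z).squeeze(1), s)   # negated: encoder opposes adversary
    loss = task_loss + 0.5 * fool_loss             # 0.5 = fairness/accuracy trade-off
    opt_main.zero_grad(); loss.backward(); opt_main.step()
```

If training succeeds, the adversary's accuracy at predicting the sensitive attribute should fall toward chance while task accuracy stays high, meaning the representation carries little usable information about group membership.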

Moreover, research pauses in other fields have had unintended consequences, such as the stagnation of innovation (National Library of Medicine, n.d.). One example is the mid-1970s moratorium on recombinant DNA (rDNA) research, a response to concerns over the safety and ethical implications of manipulating genetic material. While the pause did allow for the development of biosafety guidelines and regulations, it also delayed work in genetic engineering, potentially hindering progress in fields such as biotechnology and medicine (National Library of Medicine, n.d.).

Therefore, proponents of continued research argue that a pause could impede economic progress and slow the development of ethical AI systems. Instead, they argue, the focus should be on developing effective regulation and governance mechanisms that mitigate potential risks while harnessing the benefits of advanced AI systems.

Reference List

Best, M. (2021, June 29). AI bias is personal for me. It should be for you, too. PwC. https://www.pwc.com/us/en/tech-effect/ai-analytics/artificial-intelligence-bias.html

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Byman, D., Meserole, C., & Subrahmanian, V. (2023, February 23). The deepfake dangers ahead. The Wall Street Journal. https://www.wsj.com/articles/the-deepfake-dangers-ahead-b08e4ecf

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://arxiv.org/abs/1702.08608

Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2018). Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309. https://arxiv.org/abs/1802.06309

National Library of Medicine. (n.d.). Risk, Regulation, and Scientific Citizenship: The Controversy over Recombinant DNA Research. Maxine Singer – Profiles in Science. https://profiles.nlm.nih.gov/spotlight/dj/feature/regulation

Thompson, S. A. (2023, March 12). Making Deepfakes Gets Cheaper and Easier Thanks to A.I. The New York Times. https://www.nytimes.com/2023/03/12/technology/deepfakes-cheapfakes-videos-ai.html
