Can GenAI Manage My Day? A One-Day Personal Assistant Experiment

6 October 2024


With the growing range of generative AI applications, I became curious whether it could help me organise my life, so I used it as a personal assistant for an entire day. My goal was to improve my productivity, reduce my procrastination, and make sense of my schedule. For this experiment, I used ChatGPT-4o, Monica AI, and Taqtiq to assist me throughout the day.

To create a starting structure, I asked GPT-4o to prepare a daily schedule that set aside time for my work, studies, and sports. Alongside this, I requested a workout plan and some meal recipes. The result was a comprehensive, personalised outline that made my day well-structured and easy to follow.
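For anyone who would rather script this step than type it into the chat interface, here is a minimal sketch using the OpenAI Python SDK. The prompt wording is illustrative rather than my exact request, and it assumes an API key is exported as OPENAI_API_KEY.

```python
# Minimal sketch: ask GPT-4o for a structured daily schedule.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable; the prompt below
# is illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Draft an hour-by-hour schedule for my day. I work 9:00-17:00, "
    "study for two hours in the evening, and want to fit in one workout. "
    "Also suggest a short workout plan and simple recipes for each meal."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# The reply is plain text that can be pasted into a calendar app.
print(response.choices[0].message.content)
```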

For my work, I used the Monica AI extension as my all-in-one assistant. Powered by LLMs such as GPT-4, Claude, and Gemini, it provides an in-screen assistant with several functions: an AI mind map, a writing agent, a search engine, summary options for both text and video, a translator, and an image generator. Of these, I mainly used the writing agent to help me write emails in the right tone. This significantly reduced the time I usually spend answering emails and improved the clarity of my communication.

For my work meetings, the Taqtiq extension was of great use. An AI transcriber, it processes conversations in real time and condenses discussions into concise pieces of text, summarising the important content. This made my meetings more effective, as I didn’t have to worry about missing important points.

For my studies, searching with GPT-4o for explanations of various economic concepts was very helpful: its feedback made my study materials easier to understand and helped me memorise theoretical frameworks.

By the end of the day, these tools had reduced my procrastination and made me feel more in control of my schedule. I completed tasks more quickly and had extra time for projects that had previously been on the back burner.

My experiment reflects the broader expansion of AI into personal work. As Murgia (2024) highlights in the Financial Times, major companies like Google, OpenAI, and Apple are racing to develop advanced AI-powered personal assistants. “Multimodal” AI tools, which interpret voice, video, images, and code within a single interface, represent a major advance in understanding and executing complex tasks. As such systems support our daily planning, they strongly enhance our interaction with the digital world.

Despite the benefits, there are shortcomings to consider, particularly regarding privacy and data security. Relying heavily on AI assistants means sharing personal and potentially sensitive information, which raises concerns about how this data is stored and used. For example, I couldn’t use Monica AI for certain email responses because the emails contained personal information from clients. ChatGPT is already vague about its data-storage policies, and the extensions built on it are vaguer still. The same applied to meetings: I had asked for permission to record beforehand, but it is entirely possible to record without consent, potentially violating my colleagues’ privacy.

Currently, interactions with AI assistants are still mostly text-based, but I believe the future holds real-life AI assistants that we can speak to directly and that respond without delay. My experience using AI tools as a personal assistant was largely positive; they significantly boosted my productivity and helped me stay organised. Due to privacy concerns, however, it’s not something I will rely on extensively just yet.

As AI continues to advance, the possibilities seem endless. But would you be comfortable using an AI assistant in your daily life given the current privacy risks? And what features would make an AI truly indispensable to you?


References:

Murgia, M. (2024, May 17). The race for an AI-powered personal assistant. Financial Times. https://www.ft.com/content/8772d32b-99df-497f-9bd7-4244f38d0439


Targeted by Something Relentless: The Ethical Implications of AI in Modern Warfare

26 September 2024


[Illustration by Yoshi Sodeoka]

“The advantage will go ‘to those who no longer see the world like humans. We can now be targeted by something relentless, a thing that does not sleep.’”
Army research officials Thom Hawkins and Alexander Kott, 2022 (as cited in Manson, 2024)


In the past decade, the use of artificial intelligence (AI) has increased significantly, with now well-known applications such as facial recognition software, self-driving vehicles, search engines, and translation software. Alongside these ‘peaceful’ uses, however, AI has an increasing presence in modern warfare. From an estimated US$9.2 billion in 2023, investments are projected to reach US$38.8 billion in 2028, a CAGR of over 33% (MarketsandMarkets, 2024). The integration of AI into military operations is revolutionising warfare, in effect replacing the threat of nuclear weapons with automated weapon systems, and raising profound ethical concerns. As nations such as the U.S., Russia, China, and also India rapidly develop AI capabilities for their militaries, questions about moral restraint, accountability, and the need for international governance become increasingly urgent, especially since no international legal framework yet exists to address the use of AI in the context of conflict.
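As a quick check of that growth figure, assuming the investment compounds annually over the five years from 2023 to 2028:

\[
\text{CAGR} = \left(\frac{38.8}{9.2}\right)^{1/5} - 1 \approx 0.333,
\]

that is, just over 33% per year, consistent with the MarketsandMarkets projection.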

A major concern with AI applications in warfare is the erosion of human moral judgment in combat decisions. Renic and Schwarz (2023), building upon Kelman’s (1973) insights into the dehumanisation of warfare, argue that AI-powered targeting systems diminish meaningful human oversight. Targeting becomes a routinised process in which the operators’ sense of responsibility is dulled and innocent human beings are falsely identified as combatants. When machines are made to select and engage targets, the human element of ethical deliberation that previously existed is forgone. Removing this restraint on the use of military force leads to inhumane outcomes. As warfare becomes depersonalised, the threshold for initiating conflict drops, encouraging the use of AI-powered weapons, as we see today in the war between Russia and Ukraine.

Accountability is another pressing concern. Michel (2023) points out the difficulty of determining responsibility when AI systems are involved in choices that can mean the difference between life and death. Who is at fault if an autonomous weapon misidentifies a target: the machine, the commanding officer, or the programmer? This ambiguity risks creating a “moral crumple zone” in which accountability is diminished and justice becomes unattainable.

Kluth (2024) states that while AI can enhance military effectiveness, the real ethical threat comes from fully autonomous weapons lacking human oversight. To maintain ethical standards and prevent unwanted or unintended outcomes, the human element must remain paramount. AI should serve as a tool to aid human decision-making, never to replace it.

Given these challenges, the call for international governance is critical. Because such governance is largely absent today, heightened risks to international peace and security exist. The transformative implications of AI in all areas, not only warfare, need to be managed and regulated.
Csernatoni (2024) specifically underscores this absence of global frameworks regulating military AI. Establishing international norms and agreements is crucial to ensure responsible deployment and to mitigate the security risks of unchecked technological advancement. The call for an AI oversight body is therefore growing stronger and is (slowly) being addressed in guidance and regulations from supervisory bodies.

So, is it ever ethical to delegate life-and-death decisions to AI systems in warfare, even with human oversight? Could AI’s efficiency make ethical shortcomings acceptable? It is important that we start this discussion, as the decisions we make today will have an immense impact on the future of warfare.

References

Csernatoni, R. (2024, July 17). Governing military AI amid a geopolitical minefield. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en&center=europe

Kelman, H. C. (1973). Violence without moral restraint: Reflections on the dehumanization of victims and victimizers. Journal of Social Issues, 29(4), 25–61.

Kluth, A. (2024, March 12). Don’t fear AI in war, fear autonomous weapons. Bloomberg.com. https://www.bloomberg.com/opinion/articles/2024-03-12/don-t-fear-ai-in-war-fear-autonomous-weapons

Manson, K. (2024, February 29). AI warfare becomes real for US military with Project Maven. Bloomberg.com. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/

MarketsandMarkets. (2024, January 4). Artificial intelligence (AI) in military market size, share, industry growth, trends, and analysis 2028. https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-military-market-41793495.html

Michel, A. H. (2023, October 18). Inside the messy ethics of making war with machines. MIT Technology Review. https://www.technologyreview.com/2023/08/16/1077386/war-machines/

Renic, N. C., & Schwarz, E. (2023, December 19). Inhuman-in-the-loop: AI-targeting and the erosion of moral restraint. Opinio Juris. https://opiniojuris.org/2023/12/19/inhuman-in-the-loop-ai-targeting-and-the-erosion-of-moral-restraint/
