How AI Helped me Overcome the Challenges of my Learning Disorder

14 October 2024


I have a form of dyslexia known as dysorthography, and AI has made my life a whole lot easier. Dysorthography is a learning difference that interferes with my ability to correctly apply spelling and grammar rules, which poses significant difficulties for someone like me who is involved in academic writing all the time. It leads to frequent spelling and grammar mistakes and makes it hard to write cohesive sentences, which can make writing emails, motivation letters, and academic assignments very difficult.
 
However, AI tools have eased my writing difficulties considerably. For instance, when writing a motivation letter or assignment, I outline my thoughts in bullet points or incomplete sentences and then use AI to develop these into a coherent, structured story. I then use this draft as inspiration for writing the content myself, making sure the final work reflects my own ideas and writing style. This approach not only improves my writing but also keeps it authentically mine. After writing the content, I use AI to perform a final check for spelling and grammar. This way I can produce work that meets academic standards while making sure the final result reflects my own work and ideas.
 
In conclusion, AI has become an invaluable tool for overcoming the challenges posed by dysorthography. It allows me to turn my ideas into structured, cohesive writing while maintaining my personal voice and creativity. By transforming my writing process, it has helped me work around my learning disorder: my writing no longer comes across as amateurish due to spelling and grammar mistakes, and I can focus on conveying my thoughts and arguments effectively. While dysorthography still presents difficulties, AI has made my life a lot easier by helping me overcome them.


Bridging the Gap Between AR, AI and the Real World: A Glimpse Into the Future of Smart Technology

12 September 2024


Apple’s recent keynote showcased new products, including the iPhone’s groundbreaking AI integration. When you break it down, however, what Apple has really done is combine several existing technologies and integrate them seamlessly, presenting the result as revolutionary. This sparked my imagination about what could already be possible with today’s technology and what our future might look like.

Apple introduced advanced visual intelligence, allowing users to take a picture of a restaurant, shop, or even a dog, and instantly access a wealth of information. Whether it’s reviews, operating hours, event details, or identifying objects like vehicles or pets, this technology uses AI to analyze visual data and provide real-time insights, bridging the gap between the physical and digital worlds. Tools like Google Image Search and ChatGPT have been available for some time, but Apple has taken these capabilities and seamlessly integrated them into its ecosystem, making them easily accessible and more user-friendly [1]. The Apple Vision Pro merges AR and VR, controlled by moving your eyes and pinching your fingers [2]. I’ve tried it myself, and it was incredibly easy to navigate, with digital content perfectly overlaying the physical world. Now imagine the possibilities if Apple integrated the iPhone’s visual intelligence into the Vision Pro. This headset wouldn’t just be for entertainment or increasing work productivity; it could become an everyday wearable, a powerful tool for real-time interaction with your surroundings.

Picture walking through a city wearing the Vision Pro. By simply looking at a restaurant and pinching your fingers, you could instantly pull up reviews, check the menu, or even make a reservation. Or, if you see someone wearing a piece of clothing you like, you could instantly check online where to buy it, without needing to stop. With these capabilities, the Vision Pro could bring the physical and digital worlds closer together than ever before, allowing users to interact with their environment in ways we’re only beginning to imagine.

Do you think existing technologies can already do this? Is this what you think the future will look like? I’m curious to hear your thoughts.

Sources:

[0] All images generated by DALL-E, a GPT within ChatGPT.

[1] https://www.youtube.com/watch?v=uarNiSl_uh4&t=1744s

[2] https://www.apple.com/newsroom/2024/01/apple-vision-pro-available-in-the-us-on-february-2/
