Enhancing the 3D modeling space using GenAI.

18 October 2024

Creating a detailed, technical 3D model simply by describing it sounds like the far-off future, but by integrating generative AI (GenAI) into advanced modeling software like SketchUp, it can be achieved with current technology. Whether you want to design a house or a complex component for your 3D printer, all you have to do is describe it.

SketchUp is currently one of the most popular 3D modeling tools used by a broad range of people, from students and non-expert users to professional designers. Its intuitive interface has made it a great choice for many different professions and use cases. That’s why incorporating GenAI into this software can make 3D modeling even faster, easier, and more accessible, regardless of how skilled you are at modeling.

Currently, 3D modeling comes with a steep learning curve: if you want to realize an idea, you need to build it from (close to) scratch or adapt a model from SketchUp's 3D Warehouse, its online model repository. GenAI, however, will allow even beginners who don't know how to use professional tools to explore creative ideas and tweak them to fit their needs.

For existing professional users, GenAI can greatly improve the current workflow. It can be used to quickly generate prototypes or to automate the repetitive parts of a project. If professionals want to test different layouts or quickly generate multiple variations, the GenAI tool can take over the manual work. This way, they can focus on applying their technical expertise to perfect and enhance the models, and let their creativity flow.

We would love to hear your feedback on this idea. Would you use a tool like this, or would GenAI take away the creativity of 3D modeling? Either way, we think that tools like this will shape the future of the digital modeling industry, and we are excited to see where these new technologies take it.

From Words to Beats: My Journey with Generative AI

11 October 2024

Since their release, I have mostly used ChatGPT and Gemini as my go-to AI tools. At first, my experiences were challenging; I had to adapt to how these tools work and which prompts would work best. It wasn't always smooth: sometimes they gave me strange outputs or irrelevant answers (Floridi & Chiriatti, 2020). However, as I gained more experience, I learned how to interact with them, especially for text-based tasks like brainstorming, rewriting, and planning. Now I get more useful results and can easily use AI to structure my thoughts.

I have also tried using AI to create visual art. I was surprised at how specific and creative the images AI can generate from just a text prompt are. I used these kinds of tools to create cover pages and PowerPoint slides, but also just for fun. I did run into some issues: the tool sometimes put strange symbols, or text in a non-existent language, into the images. Despite these flaws, visual generation is a very useful way for anyone to create images or other forms of art, although it still needs improvement.

Another interesting use of AI for me has been music creation. Together with friends, I have used AI tools to generate songs based on prompts describing the style, the mood, and, most importantly, the lyrics. These tools are very easy to use: you can refine the output by giving the tool new input, so it produces a result tailored to your preferences (Dhariwal et al., 2020).

Overall, I have enjoyed using AI for many creative and text-based tasks, but I feel there is room for improvement in how I utilize these tools. Right now, I mostly rely on general-purpose AIs like ChatGPT and the other tools I mentioned. I have not yet explored specialized AI tools for specific tasks, like creating charts or more advanced image and sound generation. If I dive deeper into specialized tools, I think they could significantly improve my work with AI.

References 
Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., & Sutskever, I. (2020). Jukebox: A generative model for music. arXiv. https://arxiv.org/abs/2005.00341 

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681-694. https://doi.org/10.1007/s11023-020-09548-1 

Navigating the Future: Augmented Reality is Rewriting the Map

20 September 2024

Augmented Reality (AR) is reshaping the way we learn, shop, and even experience entertainment. By blending digital content with our physical surroundings, AR creates a new way of interacting with the world around us, from virtual fitting rooms in the fashion industry to new learning tools that are revolutionizing education (Bacca et al., 2014). AR is also transforming the way we navigate: whether you are walking through a crowded city or driving on unfamiliar roads, AR-based navigation tools make it easier, and, even more importantly, safer to move through different environments.

A perfect example of AR in navigation is Google Maps' AR walking directions. The feature overlays arrows and street names on the real-world view captured by your phone's camera. It simplifies navigation by providing visual cues that guide users through busy urban spaces, reducing the confusion often associated with traditional map interfaces (Google, 2023). Indoors, AR can change the way we navigate as well. Blippar's AR navigation helps users find their way through complex indoor spaces like malls or airports by overlaying digital directions on their smartphone screens (Blippar, 2023). This technology is particularly useful in environments where traditional GPS signals are weak.

In the automotive world, the company WayRay builds AR head-up displays (HUDs). These systems project navigational information directly onto a vehicle's windshield, allowing drivers to receive directions without diverting their attention from the road. WayRay's displays use holographic optical elements (HOEs), which can handle complex optical tasks while staying thin, clear, and flexible enough to fit curved surfaces like windshields (WayRay, 2023). By reducing distractions, this innovation improves safety and makes AR navigation more integrated and intuitive.

As AR technology advances, we can expect even more immersive, hands-free navigation experiences, such as AR glasses that guide users in real time (Billinghurst et al., 2015). While challenges remain, such as data accuracy and user adoption, AR will undoubtedly lead the next evolution of navigation.

References 

Bacca, J., Baldiris, S., Fabregat, R., Graf, S., & Kinshuk. (2014). Augmented reality trends in education: A systematic review of research and applications. Educational Technology & Society, 17(4), 133-149. https://www.jstor.org/stable/jeductechsoci.17.4.133 

Billinghurst, M., Clark, A., & Lee, G. (2015). A survey of augmented reality. Foundations and Trends in Human-Computer Interaction, 8(2-3), 73-272. https://doi.org/10.1561/1100000049 

Blippar. (2023). AR indoor navigation. Retrieved from https://www.blippar.com 

Google. (2023). Google Maps AR walking directions. Retrieved from https://www.google.com/maps 

Huang, B. C., Hsu, J., Chu, E. T. H., & Wu, H. M. (2020). Arbin: Augmented reality based indoor navigation system. Sensors, 20(20), 5890. 

Narzt, W., Pomberger, G., Ferscha, A., Kolb, D., Müller, R., Wieghardt, J., … & Lindinger, C. (2006). Augmented reality navigation systems. Universal Access in the Information Society, 4, 177-187. 

WayRay. (2023). AR head-up displays for cars. Retrieved from https://www.wayray.com 
