Is the Stock market way overallocated on a gigantic AI bet?

19 September 2025


Whether I am watching the news, reading the newspaper, or listening to a podcast, there always seems to be a story about a company spending a ridiculous amount of money on AI investments. Whenever I come across one of these stories, I can't help but wonder whether such a large investment is worth it.

Take Meta, for example: it reportedly offered Andrew Tulloch, the co-founder of Thinking Machines Lab, as much as 1.5 billion dollars over at least six years (Jin, B. and Hagey, K., 2025), and reportedly offered Matt Deitke around 200 million dollars (The New York Times, 2025). All this money for a single person; how could that ever be profitable?
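
Just to put the headline number in perspective, here is a quick back-of-envelope calculation. Spreading the package evenly over six years is my own simplifying assumption, not a figure from the cited reports:

```python
# Back-of-envelope sketch: annualizing the reported Tulloch offer to get a
# feel for the scale of the bet on a single hire. The even split over six
# years is my own assumption, not a figure from the cited reports.
offer_total_usd = 1.5e9   # reported total package
years = 6                 # "at least six years" per the report

per_year_usd = offer_total_usd / years
print(f"Roughly ${per_year_usd / 1e6:.0f} million per year for one person")
```

That works out to roughly 250 million dollars per year for a single hire, before counting any of the compute or product investment needed to make that hire pay off.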

But it is not just Meta throwing large amounts of money at AI experts: NVIDIA spent over 900 million dollars to hire personnel from the AI company Enfabrica (Kolodny et al., 2025), and Google reportedly paid 2.4 billion dollars in a deal that brought in Varun Mohan, the co-founder of Windsurf.

This strategy of spending hundreds of millions of dollars on the top people from AI startups feels like betting everything on black at roulette because it happens to be your favourite colour. The idea that a few people who held leadership positions at AI companies are such geniuses that they are the key ingredient in revolutionizing the AI industry seems simplistic. In reality, progress is not driven by the talent of a few; it depends on whole workforces, infrastructure, and market readiness. The arms race for top talent only inflates costs without guaranteeing returns.

References:


Is real-time ray tracing worth it for consumers in 2022?

15 October 2022

NVIDIA RTX 2080 Ti graphics card (https://www.nvidia.com/nl-nl/geforce/20-series/)

Computer-generated graphics have been steadily improving in quality over the past few decades. We have advanced from crudely rendered 8-bit graphics to photorealistic images due to improvements in computational hardware. One of the critical elements of photorealism is the behaviour of light.

In the real world, light behaves like a ray: it reflects (bounces) and refracts (bends) as it interacts with various surfaces, depending on physical properties such as colour, emissivity, and refractive index. To create an accurate rendition of a scene on a computer, we would have to program all these properties into virtual objects, cast numerous light rays from all light sources, and trace the paths of these rays as they interact with the virtual objects. This process is known as ray tracing.
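
To make this concrete, here is a minimal, illustrative sketch of the most basic operation a ray tracer performs: casting one ray and testing whether it hits a sphere. The scene and numbers are invented for illustration; a real renderer traces millions of such rays per frame and also handles reflection, refraction, and shading.

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.

    origin, direction, center are 3-tuples; direction is assumed normalized.
    Solves the quadratic |origin + t*direction - center|^2 = radius^2.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4.0 * c   # a == 1 for a normalized direction
    if discriminant < 0:
        return None                  # the ray misses the sphere entirely
    t = (-b - math.sqrt(discriminant)) / 2.0
    return t if t > 0 else None      # the hit must lie in front of the origin

# A single primary ray from a pinhole camera toward a unit sphere 5 units away.
hit = ray_sphere_intersect(origin=(0, 0, 0), direction=(0, 0, 1),
                           center=(0, 0, 5), radius=1.0)
print("hit distance:", hit)          # -> 4.0 (the sphere's near surface)
```

A full ray tracer repeats this intersection test for every object in the scene, then spawns new rays at each hit point to account for shadows, reflections, and refractions, which is where the enormous computational cost comes from.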

Ray tracing on computer hardware has been attempted as far back as 1982, with the LINKS-1 computer graphics system in Osaka, Japan (http://museum.ipsj.or.jp/en/computer/other/0013.html). However, due to the heavy computational requirements, it was reserved for pre-rendered scenes. In 1995, Pixar released the animated movie “Toy Story”, which was rendered on 117 computers; a single frame took between 45 minutes and 30 hours to render (https://www.insider.com/pixars-animation-evolved-toy-story-2019-6). To perceive smooth motion, we need to see at least 24 image frames per second, and computers simply could not render that many frames in real time with ray tracing enabled. Hence, most real-time computer graphics on consumer hardware were rendered using a method called “rasterization”, which only approximates how light behaves and can produce glaring visual flaws that reduce immersion.
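
To see just how far offline rendering was from real time, compare the frame budget at 24 frames per second with the Toy Story render times quoted above (a rough calculation using only the figures in this paragraph):

```python
# Rough comparison of the real-time frame budget with offline render times,
# using the figures quoted above (45 min to 30 h per frame for Toy Story).
realtime_budget_s = 1 / 24                 # ~0.042 s per frame at 24 fps
offline_fastest_s = 45 * 60                # 45 minutes in seconds
offline_slowest_s = 30 * 3600              # 30 hours in seconds

print(f"Frame budget at 24 fps: {realtime_budget_s * 1000:.1f} ms")
print(f"Offline rendering was {offline_fastest_s / realtime_budget_s:,.0f}x "
      f"to {offline_slowest_s / realtime_budget_s:,.0f}x too slow for real time")
```

Even the fastest reported frame was tens of thousands of times too slow for real-time playback, which is why rasterization remained the only practical option for decades.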

In short, the main factors which have prevented ray tracing from becoming mainstream are as follows:

  • Cost: The amount and type of hardware needed to render images with ray tracing would be prohibitively expensive, costing tens of thousands of dollars.
  • Speed: Even after spending a lot of money, the speed of rendering would be too slow for real-time applications, taking several minutes to even hours to render a single frame.
  • Electricity consumption and heat production: A large amount of hardware would consume a lot of electricity and thus produce a lot of heat, making it impractical for average consumers.

This changed in 2018 when NVIDIA launched the RTX 2000 series of graphics processing units, which include specialized hardware that drastically speeds up the mathematical operations required for ray tracing. A computer costing less than $1,000 and consuming only 300-400 watts of electricity could now render ray-traced images in real time at a reasonable speed. Since then, graphics processing units have kept improving in performance roughly in line with Moore's law, and it is now possible to render games with real-time ray tracing enabled at high frame rates. The most recent RTX 4090 graphics card achieves 60 frames per second or more at a resolution of 3840 by 2160 pixels with ray tracing enabled in numerous games from AAA studios (https://www.eurogamer.net/digitalfoundry-2022-nvidia-geforce-rtx-4090-review-extreme-performance?page=5), something once thought impossible. The number of games supporting ray tracing has also grown rapidly, reaching 141 as of October 15, 2022 (https://www.pcgamingwiki.com/wiki/List_of_games_that_support_ray_tracing).
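
For a sense of the workload that implies, here is a quick lower-bound estimate. The one-ray-per-pixel figure is my own simplifying assumption; real games cast several rays per pixel and rely on denoising, so the true number is higher.

```python
# Lower-bound estimate of the ray-tracing workload at 4K / 60 fps, assuming
# just one primary ray per pixel (my own assumption; games cast several).
width, height, fps = 3840, 2160, 60

pixels_per_frame = width * height          # ~8.3 million pixels per frame
rays_per_second = pixels_per_frame * fps   # >= ~498 million rays per second

print(f"{pixels_per_frame:,} pixels per frame")
print(f"at least {rays_per_second / 1e6:.0f} million rays per second")
```

Sustaining roughly half a billion traced rays every second on a consumer card is exactly the kind of throughput that required dedicated ray-tracing hardware.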

Given the visual benefits of ray tracing, the growing support from games, and the fact that the primary constraints of cost, speed, and power consumption have largely been mitigated, the answer to whether ray tracing is worth it for consumers in 2022 is a resounding YES! As prices for ray-tracing hardware fall, real-time ray tracing could also extend to the metaverse and other digital content consumed by the general public.
