The "Android Moment" for AI: Why Modular Raised $250M to Break GPU Lock-In

While everyone debates which AI model is smartest, a quiet infrastructure revolution is happening underneath. Modular just raised $250M to solve what Tim Davis calls the "hardware lock-in" problem: AI developers are trapped in expensive, vendor-specific ecosystems. In this episode, Tim Davis (Co-Founder & President of Modular, ex-Google Brain) explains why his company is building the "hypervisor for AI": a unified compute layer that lets you write code once and run it on any GPU.

We cover:

  • Why NVIDIA's CUDA creates vendor lock-in (and why even NVIDIA partners with Modular)

  • How the Mojo programming language solves the "two language problem"

  • Real results: how InWorld achieved a 4x performance gain and a 70% cost reduction

  • The controversial take: Are autoregressive LLMs actually the path to superintelligence?

  • Why we're deploying AI at scale without understanding how it works

  • The vision: What happens if we make hardware diversity possible again

Reimagine how AI gets built and deployed.
