Interviews: Tim Davis

Fund/Build/Scale Podcast Interview

Leaving a high-paying role at Google to take on NVIDIA, Intel, and AMD is not for the faint of heart, but that’s exactly what Tim Davis, co-founder and president of Modular, did.

I had a wonderful conversation with Walter Thompson about Modular some time ago; we covered AI compute, building a new AI software stack, the ups and downs of startups, and scaling AI into the future.

Introduction

In this episode of Fund/Build/Scale, Tim explains why Modular has raised $130M to reimagine AI compute infrastructure, and what he’s learned trying to build a platform that competes with some of the biggest names in tech.

We talked about:

  • 🚀 Why Modular believes AI workloads need a hardware-agnostic execution platform

  • 💡 How Tim and co-founder Chris Lattner decided to “start from the hardest part of the stack”

  • 💰 The trade-offs of raising VC for infrastructure-heavy startups

  • 🛠 Why Modular focuses on talent density and how they’ve recruited top engineers from Google, NVIDIA, and beyond

  • 🌍 What it takes to break into the AI space when your competitors are trillion-dollar companies

Tim takes a thoughtful, deep-dive approach to this conversation—unpacking the complexities of AI infrastructure and what it takes to build in one of tech’s most competitive spaces. There’s valuable insight here for founders navigating technical markets or aiming to disrupt entrenched players.

Episode Breakdown

(1:26) “We are building a new accelerated execution platform for compute.”

(6:41) “It will exist all over the place and it already does, but AI will be everywhere that compute is.”

(11:18) “You only have so much time in a week. What is the thing that you're best at?”

(15:13) “We have decided to start from the hardest part of the software stack.”

(22:44) “For the most talented people in the world, the risk is actually not as great as what you think.”

(30:24) “Growing up in Australia, my view of the United States was very much driven from the media and from Hollywood.”

(33:26) “I sat in a room for six weeks and just met everyone that I could. And that really was the beginning of a journey to the United States.”

(37:48) “I still think there's a special place in the Bay Area, and in the United States, there is a different risk appetite.”

(40:41) The one question Tim would have to ask the CEO before he’d take a job at someone else’s early-stage startup.

The Aussie conquering Silicon Valley

Meet the little-known young Australian in Silicon Valley at the forefront of the artificial intelligence revolution, who heads one of the hottest start-ups in America that has the audacious goal of fixing AI infrastructure for the world’s software and hardware developers.

By JOHN STENSHOLT (The Australian Business Review)

After years of toughing it out and watching his Silicon Valley dream almost die, a little-known former Melbourne NAB analyst now leads one of the hottest start-ups in America.

Tim Davis, 40, turned up in California on a whim a little over a decade ago, survived a stint in a wild house known as the “Hacker Fortress”, started an early version of an online food delivery service before the rise of UberEats and others, was poached by Google only to leave the technology giant in 2022 to co-found Modular, an AI infrastructure start-up recently valued at about US$600 million (A$927 million).

And he says he is only getting started.

In his first Australian media interview, Davis says the goals for Modular are clear – and big.
“We started Modular to improve AI infrastructure for the world. Changing the world is never easy – but we are incredibly determined to do so,” he said.

“AI is so important to the future of humanity, and we feel a great purpose to try to improve AI’s usability, scalability, portability and accessibility for developers and enterprises around the world.”

Modular’s vision is to allow AI technology to be used by anyone, anywhere and it is creating a developer infrastructure platform that enables more developers around the world to deploy AI faster, and across more hardware.

Davis and his American co-founder, Chris Lattner, have helped build and scale much of the AI software infrastructure that powers workloads at some of the world’s largest tech companies – including Google – but they argue this software has many shortcomings and was designed for research, not for scaling AI across the vast number of new uses and hardware the world is demanding now and into the future.

They are aiming to rebuild and unify AI software infrastructure, solving fragmentation issues that make it difficult for developers who work outside the world’s largest companies to build, deploy and scale AI, and ultimately make AI more accessible to everyone.

“We thought we could build something unique that could actually empower the world to move faster with AI, while equally making it more accessible to developers, make it easier to program in and better from a cost standpoint because you can scale it to different types of hardware.”

Modular claims to have built the world’s fastest AI inference engine – software that enables AI programs to run and scale to millions of people – and its own programming language, Mojo, a superset of Python (the world’s most popular programming language) which enables developers to deploy their AI programs tens of thousands of times faster, reduce costs and make it simpler to deploy AI around the world.

After raising US$30 million from investors last year, Modular recently raised another US$100 million in a funding round led by private equity firm General Catalyst and including Google Ventures, and Silicon Valley-based venture funds SV Angel, Greylock Partners and Factory.

Modular says more than 120,000 developers are now using its products – such as its inference engine and Mojo, which launched in early May – and that “leading tech companies” are already using its infrastructure, with 35,000 on the waitlist.

The US$100 million raised will be used on product expansion, hardware support and the expansion of Mojo, as well as building the sales and commercialisation aspects of the business.

Early years

All of which is a far cry from when Davis flew to Silicon Valley in September 2012 with dreams of becoming an entrepreneur, following stints working as a financial analyst at National Australia Bank and then several internships at law firms like Allens and Minters after completing law, business and commerce degrees at Melbourne’s Monash University.

Davis had started his own company called CrowdSend in 2011, which had a software system for identifying objects in images and pictures and matching them to retailers, but said he found it “difficult” as “investors in Australia, if you wanted to raise some seed capital, they would take a very large percentage of the business.”

“So it was my wife that said if you want to do this (become an entrepreneur) why don’t you go do the place where technology is. We’d actually just gotten married and then off I went to America.”

Davis used Airbnb to find a place to stay in Silicon Valley, gave himself six weeks to make a success of things and found what was called the Hacker Fortress in the Los Altos Hills.

“It was very much a place with a whole bunch of misfits who had come to America, particularly Silicon Valley, to do a start-up. This was a very big house, it had 15 rooms, and my dream at the time was how to make CrowdSend successful. There were a lot of really talented people there,” he says.

“(But) it turns out when you come to Silicon Valley, and you come with a preconceived notion of what you want to do, well then there’s the reality of what you end up doing.”

There were plenty of issues in the Hacker Fortress, which was not as clean as advertised on Airbnb. At one stage, a housemate died in his room. Davis would also have the roof fall in on his room one evening.

The house, despite being advertised as being only 10 minutes from the likes of Google and Apple, was actually quite a way from shops and restaurants.

“We basically had no ability to get food easily and so we had this idea that maybe we could build this large scalable distribution model,” Davis explains. “At the time the likes of GrubHub and these other businesses were going to restaurants and trying to get commission deals. So instead we reverse engineered the Starbucks and Chipotle menus and built an app and threw it out as an idea to people in the house.”

Before UberEats

The idea for what would become Davis’s next business, Fluc (Food Lovers United Co), was born – all because he and his housemates were a little lazy and really didn’t want to cook.

It was 2013, a year before UberEats was launched, and while other food delivery service apps were dealing directly with restaurants, Davis and his housemates took a different approach.

“We just thought, why don’t we just grab every restaurant menu, we just put it on our website, inflate the prices, and start selling food? Overnight, we went from five restaurants to 160 restaurants. And it just exploded. It was unprecedented. And what we tapped into really was that selection mattered. And that’s what consumers had not had, at least in the American market. And so then we went on this journey of raising capital.”

Fluc would mainly service the local Bay Area market of about 8 million people, though it tried to move to Los Angeles at one stage, and raised US$4 million from local angel investors.

Davis says after two and a half years of working seven days a week it became obvious “that we wouldn’t be able to raise the amount of capital that we needed to scale that business” at a time when the likes of DoorDash were raising hundreds of millions of dollars annually from investors.

Davis and his team then started talking to other companies about potentially being acquired, and eventually Google expressed interest – not in the company, but in the people Fluc had assembled.

That led to Davis joining Google, where he worked as a product manager, first in the ads division building machine learning systems, and then in the Google Brain research division dedicated to AI – where he met Lattner.

What followed, Davis says, was six years of building AI systems that stood them in good stead when they decided to leave Google and form Modular.

“We were basically there for all the major components of AI as it is now … and what was interesting is that through that experience we could see the infrastructure was designed by researchers – a bunch of people who wanted to train large machine learning models. But taking those models from a research environment and actually scaling them into very large production systems is very hard.”

He says Modular’s only goal is to solve that problem and commercialise the solution, and that the opportunity “we have in this market is astronomically huge.”

What’s next

“We work with everyone from high performance racing teams, to autonomous car companies, to very large machine learning recommendations to generative AI. You name it, whether it’s video generation, image generation, text generation, we’re there.

“You could go down the NASDAQ list of companies (to find those) who want to use our infrastructure to scale AI inside their organisations.”

As for widespread concerns about AI, Davis says the fact that AI systems take an “incredible amount of time to build” means that the public should be “realistic” about the perception that AI could be a threat to humanity.

“We definitely have better recommendation systems, we have better chat bots, we have translation across multiple languages, we can take better photos, we are now all better copywriters, and we have voice assistants.

“But none of this is remotely close to AGI or artificial general intelligence – or anywhere near ASI: artificial super intelligence. We have a long way to go.

“AI shouldn’t be seen as a replacement for human intelligence, it’s a way to augment human life – a way to improve our world, to leave it better than we found it.”

Unite AI – Interview Series

Tim Davis is the Co-Founder & President of Modular, which offers an integrated, composable suite of tools that simplifies your AI infrastructure so your team can develop, deploy, and innovate faster. Modular is best known for developing Mojo, a new programming language that bridges the gap between research and production by combining the best of Python with systems and metaprogramming.

By Antoine Tardif (Unite AI)

Repeat entrepreneur and product leader, Tim helped build and scale large parts of Google's AI infrastructure at Google Brain and Core Systems, across APIs (TensorFlow), compilers (XLA & MLIR), runtimes for server (CPU/GPU/TPU) and TF Lite (Mobile/Micro/Web), Android ML & NNAPI, and large model infrastructure & OSS for billions of users and devices. He loves running, and building and scaling products to help people and the world.

When did you initially discover coding, and what attracted you to it?

As a kid growing up in Australia, my dad brought home a Commodore 64C and gaming was what got me hooked – Boulder Dash, Maniac Mansion, Double Dragon – what a time to be alive. That computer introduced me to BASIC, and hacking around with that was my first real introduction to programming. Things got more intense through high school and university, where I used more traditional static languages for engineering courses, and over time I even dabbled all the way up to JavaScript and VBA, before settling on Python for the vast majority of my programming, as the language of data science and AI. I wrote a bunch of code in my earlier startups but these days, of course, I utilize Mojo and the toolchain we have created around it.

For over 5 years you worked at Google as Senior Product Manager and Group Product Leader, where you helped to scale large parts of Google's AI infrastructure at Google Brain. What did you learn from this experience?

People are what build world-changing technologies and products, and it is a devoted group of people bound by a larger vision that brings them to the world. Google is an incredible company, with amazing people, and I was fortunate to meet and work with many of the brightest minds in AI years ago when I moved to join the Brain team. The greatest lessons I learnt were to always focus on the user and progressively disclose complexity, to empower users to tell their unique stories to the world like fixing the Great Barrier Reef or helping people like Jason the Drummer, and to attract and assemble a diverse mix of people to drive towards a common goal. In a massive company of very smart and talented people, this is much harder than you can imagine. Reflecting on my time there, it’s always the people you worked with that are truly memorable. I will always look back fondly and appreciate that many people took risks on me, and I’m enormously thankful they did, as many of those risks encouraged me to be a better leader and person, to dive deep and truly understand AI systems. It truly made me realize the profound power AI has to impact the world, and this was the very reason I had the inspiration and courage to leave and co-found Modular.

Can you share the genesis story behind Modular?

Chris and I met at Google and shipped many influential technologies that have significantly impacted the world of AI today. However, we felt AI was being held back by overly complex and fragmented infrastructure that we witnessed first hand deploying large workloads to billions of users. We were motivated by a desire to accelerate the impact of AI on the world by lifting the industry towards production-quality AI software so we, as a global society, can have a greater impact on how we live. One can’t help but wonder how many problems AI can help solve, how many illnesses cured, how much more productive we can become as a species, to further our existence for future generations, by increasing the penetration of this incredible technology.

Having worked together for years on large scale critical AI infrastructure – we saw the enormous developer pain first hand – “why can’t things just work”? For the world to adopt and discover the enormous transformative nature of AI, we need software and developer infrastructure that scales from research to production, and is highly accessible. This will enable us to unlock the next wave of scientific discoveries – of which AI will be critical – and is a grand engineering challenge. With this motivating background, we developed an intrinsic belief that we could set out to build a new approach for AI infrastructure, and empower developers everywhere to use AI to help make the world a better place. We are also very fortunate to have many people join us on this journey, and we have the world's best AI infrastructure team as a result.

Can you discuss how the Mojo programming language was initially built for your own team?

Modular’s vision is to enable AI to be used by anyone, anywhere. Everything we do at Modular is focused on that goal, and we walk backwards from that in the way we build out our products and our technology. In this light, our own developer velocity is what matters to us firstly, and having built so much of the existing AI infrastructure for the world – we needed to carefully consider what would enable our team to move faster. We have lived through the two-world language problem in AI – where researchers live in Python, and production and hardware engineers live in C++ – and we had no choice but to either barrel down that road, or rethink the approach entirely. We chose the latter. There was a clear need to solve this problem, but many different ways to solve it – we approached it with our strong belief of meeting the ecosystem where it is today, and enabling a simpler lift into the future. Our team bears the scars of software migration at large scale, and we didn’t want a repeat of that. We also realized that there is no language today, in our opinion, that can solve all the challenges we are attempting to solve for AI and so we undertook a first principles approach, and Mojo was born.

How does Mojo enable seamless scaling and portability across many types of hardware?

Chris, I and our team at Google (many now at Modular) helped bring MLIR into the world years ago – with the goal to help the global community solve real challenges by enabling AI models to be consistently represented and executed on any type of hardware. MLIR is a new type of open-source compiler infrastructure that has been adopted at scale, and is rapidly becoming the new standard for building compilers through LLVM. Given our team's history in creating this infrastructure, it's natural that we utilize it heavily at Modular, and it underpins our state of the art approach in developing new AI infrastructure for the world. Critically, while MLIR is now being rapidly adopted, Mojo is the first language that really takes the power of MLIR and exposes it to developers in a unique and accessible way. This means it scales from Python developers who are writing applications, to performance engineers who are deploying high performance code, to hardware engineers who are writing very low level system code for their unique hardware.

References to Mojo claim that it’s basically Python++, with the accessibility of Python and the high performance of C. Is this a gross oversimplification? How would you describe it?

Mojo should feel very familiar to any Python programmer, as it shares Python’s syntax. But there are a few important differences you’ll notice as you port a simple Python program to Mojo, even though it will just work out of the box. One of our core goals for Mojo is to provide a superset of Python – that is, to make Mojo compatible with existing Python programs – and to embrace the CPython implementation for long-tail ecosystem support. You can then slowly augment your code, replacing non-performing parts with Mojo’s lower-level features to explicitly manage memory, add types, utilize autotuning and many other aspects to get the performance of C or better! We feel Mojo gives you the best of both worlds, and you don’t have to write, and rewrite, your algorithms in multiple languages. We appreciate Python++ is an enormous goal, and will be a multi-year endeavor, but we are committed to making it a reality and enabling our legendary community of more than 140K developers to help us build the future together.

In a recent keynote it was showcased that Mojo is 35,000x faster than Python. How was this speedup calculated?

It’s actually 68,000x now! But let’s recognize that it’s just a single program – Mandelbrot – and we have published a series of three blog posts on how we achieved it. Of course, we’ve been doing this a long time and we know that performance games aren’t what drive language adoption (despite them being fun!) – it’s developer velocity, language usability, high quality toolchains and documentation, and a community utilizing the infrastructure to invent and build in ways we can’t even imagine. We are tool builders, and our goal is to empower the world to use our tools, to create amazing products and solve important problems. If we focus on our larger goal, it's actually to create a language that meets you where you are today and then lifts you easily to a better world. Mojo enables you to have a highly performant, usable, statically typed and portable language that seamlessly integrates with your existing Python code – giving you the best of both worlds. It enables you to realize the true power of the hardware with multithreading and parallelization in ways that raw Python today cannot – unlocking the global developer community to have a single language that scales from top to bottom.
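For context on the benchmark: the Mandelbrot set is computed by iterating z = z*z + c for each pixel until the value escapes. As a rough illustration (plain Python, not Modular's actual benchmark code), the kernel being accelerated looks like this; the reported Mojo speedups come from adding static types, SIMD vectorization and parallelism to this same loop rather than from changing the algorithm.

```python
# Pure-Python Mandelbrot kernel: the kind of scalar, interpreted loop
# that Mojo accelerates with static types, SIMD, and parallelism.
# (Illustrative sketch only, not Modular's benchmark code.)

def mandelbrot_kernel(c: complex, max_iters: int = 200) -> int:
    """Return the iteration count before z = z*z + c escapes |z| > 2."""
    z = 0j
    for i in range(max_iters):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iters  # point is (numerically) in the set

def mandelbrot_grid(width: int, height: int, max_iters: int = 200) -> list[list[int]]:
    """Compute escape counts over the region [-2.0, 0.6] x [-1.5, 1.5]."""
    xs = [-2.0 + 2.6 * i / (width - 1) for i in range(width)]
    ys = [-1.5 + 3.0 * j / (height - 1) for j in range(height)]
    return [[mandelbrot_kernel(complex(x, y), max_iters) for x in xs] for y in ys]
```

Because every pixel is independent, the grid loop parallelizes trivially, which is why a typed, compiled implementation can claim such large multiples over the interpreted version.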

Mojo’s magic is its ability to unify programming languages with one set of tools. Why is this so important?

Languages always succeed by the power of their ecosystems and the communities that form around them. We’ve been working with open source communities for a long time, and we are incredibly thoughtful towards engaging in the right way and ensuring that we do right by the community. We’re working incredibly hard to ship our infrastructure, but need time to scale out our team – so we won’t have all the answers immediately, but we’ll get there. Stepping back, our goal is to lift the Python ecosystem by embracing the whole existing ecosystem, and we aren’t seeking to fracture it like so many other projects. Interoperability just makes it easier for the community to try our infrastructure, without having to rewrite all their code, and that matters a lot for AI.

Also, we have learnt so much from the development of AI infrastructure and tools over the last ten years. The existing monolithic systems are not easily extensible or generalizable outside of their initial domain target and the consequence is a hugely fragmented AI deployment industry with dozens of toolchains that carry different tradeoffs and limitations. These design patterns have slowed the pace of innovation by being less usable, less portable, and harder to scale.

The next-generation AI system needs to be production-quality and meet developers where they are. It must not require an expensive rewrite, re-architecting, or re-basing of user code. It must be natively multi-framework, multi-cloud, and multi-hardware. It needs to combine the best performance and efficiency with the best usability. This is the only way to reduce fragmentation and unlock the next generation of hardware, data, and algorithmic innovations.

Modular recently announced raising $100 million in new funding, led by General Catalyst and joined by existing investors GV (Google Ventures), SV Angel, Greylock, and Factory. What should we expect next?

This new capital will primarily be used to grow our team, hiring the best people in AI infrastructure, and continuing to meet the enormous commercial demand that we are seeing for our platform. Modverse, our community of well over 130K developers and tens of thousands of enterprises, is seeking our infrastructure – so we want to make sure we keep scaling and working hard to develop it for them, and deliver it to them. We hold ourselves to an incredibly high standard, and the products we ship are a reflection of who we are as a team, and who we become as a company. If you know anyone who is driven, who loves the boundary of software and hardware, and who wants to help see AI penetrate the world in a meaningful and positive way – send them our way.

What is your vision for the future of programming?

Programming should be a skill that everyone in society can develop and utilize. For many, the “idea” of programming instantly conjures a picture of a developer writing out complex low level code that requires heavy math and logic – but it doesn’t have to be perceived that way. Technology has always been a great productivity enabler for society, and by making programming more accessible and usable, we can empower more people to embrace it. Empowering people to automate repetitive processes and make their lives simpler is a powerful way to give people more time back.

And in Python, we already have a wonderful language that has stood the test of time – it's the world's most popular language, with an incredible community – but it also has limitations. I believe we have a huge opportunity to make it even more powerful, and to encourage more of the world to embrace its beauty and simplicity. As I said earlier, it's about building products that have progressive disclosure of complexity – enabling high level abstractions, but scaling to incredibly low level ones as well. We are already witnessing a significant leap with AI models enabling progressive text-to-code translations – and these will only become more personalized over time – but behind this magical innovation is still a developer authoring and deploying code to power it. We’ve written about this in the past – AI will continue to unlock creativity and productivity across many programming languages, but I also believe Mojo will open the ecosystem aperture even further, empowering more accessibility, scalability and hardware portability to many more developers across the world.

To finish, AI will penetrate our lives in untold ways, and it will exist everywhere – so I hope Mojo catalyzes developers to go and solve the most important problems for humanity faster – no matter where they live in our world. I think that’s a future worth fighting for.

Data Exchange Interview

Interview with Tim Davis, Co-Founder of Modular.

Ben (Host): Welcome to the Data Exchange Podcast. Today we’re joined by Tim Davis, co-founder and Chief Product Officer at Modular. Their tagline says it all: The future of AI development starts here. Tim, great to have you on the show.
Tim Davis: Great to be here, Ben—thanks for having me.

Introducing Mojo: Python, Reimagined

Ben: Let’s dive right in. What is Mojo, and what can developers use today?
Tim: Mojo is a new programming language—a superset of Python, or “Python++,” if you will. Right now, anyone can sign up at modular.com/mojo to access our cloud-hosted notebook environment, play with the language, and run unmodified Python code alongside Mojo’s advanced features.

“All your Python code will execute out of the box—you can then take performance-critical parts and rewrite them in Mojo to unlock 5–10× speedups.”

That uplift comes from our state-of-the-art compiler and runtime stack, built on MLIR and LLVM foundations.

Solving the Two-Language Problem

Many ML frameworks hide C++/CUDA complexity behind Python APIs, but that split still causes friction. Mojo bridges the gap:

  • Prototype in Python

  • Optimize in Mojo (same codebase)

“Researchers no longer need to drop into C++ for speed; they stay in one language from research to production.”

This unified model dramatically accelerates the path from idea to deployment.
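The prototype-then-optimize workflow described above can be sketched as follows. The runnable part is plain Python; the Mojo port is shown only as an indicative comment (a hypothetical rendering, not exact Mojo code), since the point is that only the hot loop changes while the surrounding code stays put.

```python
# Prototype stage: a dynamically typed Python kernel. In the workflow
# described above, only this hot loop is later ported to Mojo by adding
# explicit types; callers and the rest of the pipeline are unchanged.
# (Illustrative sketch, not Modular's actual API.)

def squared_distance(a: list[float], b: list[float]) -> float:
    """Sum of squared elementwise differences: a typical ML hot loop."""
    total = 0.0
    for x, y in zip(a, b):
        diff = x - y
        total += diff * diff
    return total

# A Mojo port keeps the same algorithm but declares types so the compiler
# can generate native code. Roughly (hypothetical, not exact syntax):
#
#   fn squared_distance(a: List[Float64], b: List[Float64]) -> Float64:
#       var total: Float64 = 0.0
#       for i in range(len(a)):
#           var diff = a[i] - b[i]
#           total += diff * diff
#       return total
```

The design point is that both versions live in one codebase and one language family, so a researcher never has to hand the kernel off to a separate C++/CUDA implementation to ship it.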

Who is Mojo For?

Ben: Frameworks like TensorFlow and PyTorch already tackle performance. Who’s Mojo’s target audience?
Tim: Initially, it’s us—Modular’s own infrastructure team. But our real audience spans:

  1. Systems-level ML engineers who need granular control and performance.

  2. GPU researchers wanting a seamless path to production without rewriting code.

By meeting developers where they are, Mojo helps unify fragmented ML stacks and simplifies pipelines.

Under the Hood: Hardware-Agnostic Design

Mojo’s architecture is built for broad hardware support:

  • MLIR (Multi-Level IR): Provides a common representation across hardware.

  • LLVM Optimizations: Powers high-performance codegen.

  • Multi-Hardware Portability: CPUs, GPUs, TPUs, edge devices, and beyond.

“We want access to all hardware types. Today’s programming model is constrained—Mojo opens up choice.”

This means you’re not locked into CUDA or any single accelerator vendor.

Beyond the Language: Unified AI Inference Engine

Modular also offers a drop-in inference engine:

  • Integrates with Triton, TF-Serving, TorchServe

  • CPUs first (batch workloads), GPUs coming soon

  • Orders-of-magnitude performance gains

“Simply swap your backend and get massive efficiency improvements—no changes to your serving layer.”

Enterprises benefit from predictable scaling and hardware flexibility, whether on Intel, AMD, ARM-based servers, or custom ASICs.

Roadmap: Community, Open Source & Enterprise

Next 6–12 Months:

  • Expand Mojo’s language features (classes, ownership, lifetimes).

  • Enable GPU execution (beyond the cloud playground).

  • Extend the inference engine to training, dynamic workloads, and full pipeline optimizations (pre-/post-processing).

“We released early to learn from real users—80,000 sign-ups across 230+ countries. Their feedback drives our roadmap.”

Why a New Language Matters

Mojo’s core value prop can be summed up in three words:

  1. Usable: Drop-in Python compatibility; gentle learning curve.

  2. Performant: Advanced compiler + runtime yields 5–10× speedups out of the box.

  3. Portable: Write once, run anywhere—from cloud GPUs to mobile CPUs.

Together, these unlock faster innovation, lower costs, and broader hardware choice.

Democratizing AI Development

In Tim’s own words:

“Our mission is to make AI development accessible to anyone, anywhere. By rethinking the entire stack, we’re unlocking a new wave of innovation and putting compute power in more hands.”

With its unified language and inference engine, Modular is ushering in a future where AI development truly starts here—for researchers, engineers, and enterprises alike.
