MLIR offers new infrastructure and a design philosophy that enables machine learning models to be consistently represented and executed on any type of hardware.

MLIR is a new compiler stack that I have the privilege of serving as Product Manager for at Google. It takes a fundamentally different approach from existing compiler technologies: a multi-level intermediate representation that combines high-level optimizations with low-level code generation in a single framework, something that hasn't been done before. I have had the opportunity to work with Chris Lattner and a talented team of many other folks in building out this technology, and I look forward to the enormous impact it will have on machine learning in the years ahead. Given that Google is an AI-first company, even Sundar was happy about the news. To give a concrete sense of what "multi-level" means in practice, a short sketch follows; after that is a repost of the original announcement from the main Google blog.
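The snippet below is my own minimal sketch, not code from the MLIR repository, and the tf.Mul op name is purely illustrative. It shows the same element-wise multiply at two levels of abstraction in MLIR's textual IR: first as a single high-level tensor op (written in MLIR's generic quoted-op form), then lowered to explicit loops over buffers using the affine dialect.

```mlir
// High level: one tensor op, close to what a framework front end emits.
// "tf.Mul" here is illustrative; any dialect's op fits the same generic form.
func @multiply(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Mul"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}

// Lower level: the same computation after lowering to explicit loops
// over memory buffers, using ops from the affine dialect.
func @multiply_lowered(%a: memref<4xf32>, %b: memref<4xf32>, %out: memref<4xf32>) {
  affine.for %i = 0 to 4 {
    %x = affine.load %a[%i] : memref<4xf32>
    %y = affine.load %b[%i] : memref<4xf32>
    %p = mulf %x, %y : f32
    affine.store %p, %out[%i] : memref<4xf32>
  }
  return
}
```

Because both forms live in one IR, a lowering pass can rewrite the first function into the second step by step, and hardware-specific backends can plug in at whichever level suits them. That shared infrastructure is the core idea.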

Machine learning now runs on everything from cloud infrastructure containing GPUs and TPUs, to mobile phones, to even the smallest hardware like microcontrollers that power smart devices. The combination of advancements in hardware and open-source software frameworks like TensorFlow is making all of the incredible AI applications we’re seeing today possible, whether it’s predicting extreme weather, helping people with speech impairments communicate better, or assisting farmers to detect plant diseases.

But with all this progress happening so quickly, the industry is struggling to make its many machine learning software frameworks work with a diverse and growing set of hardware. The machine learning ecosystem is dependent on many different technologies with varying levels of complexity that often don't work well together. The burden of managing this complexity falls on researchers, enterprises and developers. By slowing the pace at which new machine learning-driven products can go from research to reality, this complexity ultimately affects our ability to solve challenging, real-world problems.

Earlier this year we announced MLIR, open-source machine learning compiler infrastructure that addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications. It offers new infrastructure and a design philosophy that enables machine learning models to be consistently represented and executed on any type of hardware. And today we’re announcing that we’re contributing MLIR to the nonprofit LLVM Foundation. This will enable even faster adoption of MLIR by the industry as a whole.

MLIR aims to be the new standard in ML infrastructure and comes with strong support from global hardware and software partners including AMD, ARM, Cerebras, Graphcore, Habana, IBM, Intel, Mediatek, NVIDIA, Qualcomm Technologies, Inc., SambaNova Systems, Samsung, Xiaomi, and Xilinx, which together account for more than 95 percent of the world’s data-center accelerator hardware, more than 4 billion mobile phones, and countless IoT devices. At Google, MLIR is being incorporated and used across all our server and mobile hardware efforts.

Machine learning has come a long way, but it's still incredibly early. With MLIR, AI will advance faster by empowering researchers to train and deploy models at larger scale, with more consistency, velocity and simplicity on different hardware. These innovations can then quickly make their way into products that you use every day and run smoothly on all the devices you have—ultimately leading to AI being more helpful and more useful to everyone on the planet.