Posted December 8, 2025

GPUs are the backbone of the AI industry. They run the majority of training and inference workloads today, and advances in GPU technology consistently drive improvements in frontier model capabilities. In this sense they are powering the pursuit (and arguably achievement!) of machine intelligence.

These incredible results, however, come at the cost of growing computational intensity. Frontier model training runs typically require hundreds of thousands of GPUs. Inference clusters are often similar in size or larger, and there is no obvious upper bound on their growth. New data center buildouts exceeding 1 gigawatt, once considered outlandish, are now routine.

This scaling will likely continue for the foreseeable future, to the benefit of the entire industry. But for intelligence to become ubiquitous in the long run, we believe new, more efficient points in the hardware design space also need to be explored. We’re thrilled to announce today that we’re co-leading the $475m seed round for Unconventional AI, to help them do exactly that.

Unconventional’s core observation is that AI models are probabilistic, but the chips used to train and run them are not.

Let me explain. We’re not talking here about randomness introduced to AI models through implementation details (e.g. floating point error) or added at inference time (i.e. sampling). Rather, the best methods we know for training AI models – the only ones proven to work at scale – are intrinsically statistical: they fit the model to the distribution of its training data. The whole training task, in some sense, is to approximate a probability distribution. This is a feature of modern AI, not a bug.
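To make that concrete, here is a minimal illustrative sketch (not Unconventional’s method, and deliberately toy-sized): minimizing cross-entropy on observed data – the standard training objective – drives the model’s distribution toward the empirical distribution of the data. All names here (`true_p`, `logits`, etc.) are our own, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": draws from a hidden 4-outcome distribution.
true_p = np.array([0.50, 0.25, 0.15, 0.10])
data = rng.choice(4, size=10_000, p=true_p)

# Empirical distribution of the data.
counts = np.bincount(data, minlength=4)
empirical = counts / counts.sum()

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# "Model": unnormalized logits over the 4 outcomes.
logits = np.zeros(4)

# Gradient descent on average cross-entropy.
# The gradient w.r.t. the logits is exactly softmax(logits) - empirical,
# so the fixed point is model distribution == empirical distribution.
for _ in range(500):
    logits -= 0.5 * (softmax(logits) - empirical)

model_p = softmax(logits)
# model_p now closely matches the empirical frequencies of the data.
```

The point of the toy: the optimum of the training objective is literally a probability distribution, which is what makes the workload “intrinsically statistical.”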

GPUs, of course, are not probabilistic. To a GPU (or any digital processor), a probability distribution looks like an array of floating point numbers. The latest chips have been brilliantly optimized to operate on very large arrays of numbers – with more memory, higher I/O bandwidth, and direct networking between chips. But at a basic level, this is still a very sophisticated (and expensive) abstraction.
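Here is what that abstraction looks like in practice – a hedged, illustrative sketch of our own, not any particular chip’s API. To a digital processor, the “distribution” is just a float array, normalization is the programmer’s bookkeeping, and even drawing one sample is arithmetic over the whole array:

```python
import numpy as np

# The "distribution" is just an array of floats; the hardware has no notion
# that these values should sum to 1 -- that invariant is ours to maintain.
probs = np.array([0.1, 0.2, 0.3, 0.4])
assert np.isclose(probs.sum(), 1.0)

# Drawing a single sample means numerical work over the array:
# build the cumulative distribution, then invert it with a search.
cdf = np.cumsum(probs)
u = np.random.default_rng(1).random()    # uniform draw in [0, 1)
sample = int(np.searchsorted(cdf, u))    # index of the sampled outcome
```

Every step – storing, normalizing, sampling – is an explicit numerical computation, which is the gap Unconventional wants to close by making the distribution a property of the physical substrate itself.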

So, Unconventional’s goal is to bridge this gap. They are designing new chips specifically for probabilistic workloads like AI. This means pursuing analog and mixed-signal designs that store probability distributions directly in the underlying physical substrate, rather than as numerical approximations. These types of chips could, in theory, operate at roughly 1,000x lower power than digital processors, and could make it possible to train new types of AI models.

This is an incredibly ambitious bet. Analog computers are not the dominant paradigm today and have faced scaling challenges in the past. But the team has several promising, theoretically sound directions to pursue, including oscillator-based computing, thermodynamic computing, and spiking neurons. And we think the right time to make a serious attempt is now, when AI is creating new markets and driving change through the entire computing stack. Critically, we also believe this kind of step-change performance improvement is necessary to carve out space alongside Nvidia’s powerful hardware and software ecosystem.

What we know for sure is that Unconventional has one of the top teams on the planet to pursue this project. Unconventional’s CEO Naveen Rao has two prior successful exits (Nervana to Intel, Mosaic to Databricks) and is one of the few people who deeply understand both the hardware and software sides of AI. Cofounders Mike Carbin and Sara Achour have encyclopedic knowledge of novel computing methods and bring a deeply curious but practical eye to selecting projects. And MeeLan Lee is an incredibly talented engineering leader who has the tall task of making mixed-signal chip designs a reality.

AI is the defining technology of our time. We’re investing in Unconventional because we believe real innovation at the hardware layer is necessary to achieve intelligence at scale. We can’t wait to see what they build.