Posted January 22, 2026

The AI industry has historically been bottlenecked by training.

Early LLM and diffusion models were powerful but unpredictable. As a developer, you couldn’t really integrate them into program control flow, so what you could build was relatively limited. If you had a cool new idea for an AI app, it might work – or you might just have to wait for the next model release.

We’re rapidly approaching a second phase that’s bottlenecked by inference. In fact, we might already be there.

In releases over the past year or so, the major AI labs have dramatically improved model reliability. Key advances include reasoning/test-time scaling, longer effective context windows, and more comprehensive training on code and other technical domains. Models still can’t really take over program control, but they can do much longer and deeper work on their own – taking over more and more functionality from non-AI code paths.

The design space of “apps that work” is therefore much larger and more diverse today than just a year ago. It includes agentic coding workflows in Cursor, deep research in ChatGPT, and vertical AI apps like Decagon, Harvey, etc. Of course, the models are not perfect. But waiting for a new release is often not the blocker anymore. For a large and growing class of applications, the models now really are good enough.

This is great news, because it means the AI application ecosystem can flourish and grow without a hard dependency on the research labs’ roadmaps. But if you play the movie forward, it also means the demand for inference will grow massively – probably super-linearly, as we see growth in both steps per task (agents run longer) and tokens generated per step (test-time compute). And inference workloads will become increasingly diverse.
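A back-of-the-envelope way to see the super-linear part: total demand scales roughly with tasks × steps per task × tokens per step, so growth in the last two factors compounds. The numbers in the sketch below are purely illustrative, not forecasts or measurements.

```python
# Purely illustrative arithmetic (hypothetical numbers, not data):
# inference demand scales with the *product* of steps per task and
# tokens per step, so growth in both compounds super-linearly.
tasks_per_day = 1_000_000

year_1 = {"steps_per_task": 5,  "tokens_per_step": 1_000}
year_2 = {"steps_per_task": 20, "tokens_per_step": 4_000}  # agents run longer and think more per step

def daily_tokens(cfg: dict) -> int:
    return tasks_per_day * cfg["steps_per_task"] * cfg["tokens_per_step"]

print(daily_tokens(year_1))                         # 5,000,000,000 tokens/day
print(daily_tokens(year_2))                         # 80,000,000,000 tokens/day
print(daily_tokens(year_2) / daily_tokens(year_1))  # 16x, even though each factor grew only 4x
```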

Inference turns out to be a technically challenging problem. It’s an m×n problem, in the sense that a wide range of models (m) needs to run on a diverse set of hardware platforms (n). And the dynamics of the problem change at scale. While responding to a single request is straightforward, serving thousands of concurrent requests is highly inefficient without carefully managing batching, cache policies, and the low-level details of how each model operator runs on each chip. That’s the layer of the stack that inference engines were created to solve.
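To make the batching point concrete, here is a deliberately simplified Python sketch of continuous batching, the scheduling idea that engines like vLLM are built around. Everything here (Request, decode_step, MAX_BATCH) is hypothetical toy code, not vLLM internals; the point is only that admitting and retiring requests at every decode step keeps the hardware busy instead of serving one request at a time.

```python
# Toy sketch of continuous batching (NOT vLLM's implementation): the engine
# admits waiting requests into the running batch at each decode step and
# retires finished ones, instead of processing requests one at a time.
from collections import deque
from dataclasses import dataclass, field

MAX_BATCH = 8  # how many sequences we decode together per step (hypothetical)

@dataclass
class Request:
    prompt: str
    max_new_tokens: int
    generated: list[str] = field(default_factory=list)

def decode_step(batch: list[Request]) -> None:
    """Stand-in for one forward pass that appends one token to each sequence."""
    for req in batch:
        req.generated.append("<tok>")

def serve(waiting: deque[Request]) -> list[Request]:
    running: list[Request] = []
    finished: list[Request] = []
    while waiting or running:
        # Admit new requests whenever the batch has spare capacity.
        while waiting and len(running) < MAX_BATCH:
            running.append(waiting.popleft())
        decode_step(running)
        # Retire sequences that hit their token budget, freeing slots for
        # waiting requests without stalling the rest of the batch.
        still_running = []
        for req in running:
            if len(req.generated) >= req.max_new_tokens:
                finished.append(req)
            else:
                still_running.append(req)
        running = still_running
    return finished

if __name__ == "__main__":
    queue = deque(Request(f"prompt {i}", max_new_tokens=4 + i % 3) for i in range(20))
    print(f"served {len(serve(queue))} requests")
```

A real engine layers much more on top of this (KV-cache management, prefill vs. decode scheduling, per-chip kernel choices), but the batching loop is where the efficiency at scale comes from.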

So, we’re super excited to announce today that we’re leading the seed round for Inferact. Inferact is a new startup led by the maintainers of the vLLM project, including Simon Mo, Woosuk Kwon, Kaichao You, and Roger Wang. vLLM is the leading open source inference engine and one of the biggest open source projects of any kind. At any given moment, vLLM is running on 400k+ GPUs concurrently around the world (that we know of); it has over 2,000 contributors and a highly dedicated team of 50+ core devs; and it’s used in production by companies like Meta, Google, Character.ai, and many others. Many of the top open source AI labs and hardware companies even contribute to vLLM directly to ensure compatibility on day 1.
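For readers who have not tried it, vLLM exposes a simple Python API alongside its OpenAI-compatible server; the sketch below shows roughly what offline batch generation looks like. The model name and sampling settings are placeholders, not a recommendation.

```python
# Minimal vLLM offline-inference sketch; the model identifier and sampling
# settings are placeholders chosen for illustration.
from vllm import LLM, SamplingParams

prompts = [
    "Explain what an inference engine does in one sentence.",
    "Write a haiku about GPUs.",
]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# vLLM handles batching, KV-cache management, and scheduling internally.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```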

The goal for Inferact as a company is twofold. First, to support the vLLM open source project through dedicated financial and developer resources. This is a real challenge because the project needs to scale along three dimensions that are all growing quickly: new model architectures, new hardware targets, and bigger models that require more sophisticated multi-node deployments. This is explicitly the main goal of the company for the foreseeable future.

Second, the Inferact team will build what they see as the next-generation commercial inference engine. The leading inference services today are fantastic: they are highly performant and hide a lot of underlying complexity from end users. Nearly all of them use vLLM under the hood. We believe it’s important for a company like Inferact to exist, focused narrowly on improving the software stack and building what they call the “universal inference layer.” This means working with existing providers, not competing against them.

For a16z infra, investing in the vLLM community is an explicit bet that the future will bring incredible diversity of AI apps, agents, and workloads running on a variety of hardware platforms. And that vLLM can uniquely enable this growth and give developers even more choices for open, low-cost inference to power AI adoption. Tremendous advances in infrastructure can still happen the old-fashioned way: with amazing founders, working in small teams, creating a movement among the larger infrastructure community to build the new world.

Finally, this investment is especially close to our hearts because we’ve been small-scale supporters of the vLLM project since 2023. The first vLLM meetup was hosted in our office, and the first a16z open source AI grant was made to the vLLM team. So, we’d like to officially welcome Simon, Woosuk, Zhuohan, Kaichao, Roger, and the rest of the vLLM community to the a16z family.