Posted September 30, 2025

Science is Critical…

Everyone has been amazed by the remarkable progress frontier AI models have made on increasingly complex challenges in math, code, and logic. This should make us hopeful about the future of scientific discovery through ever-smarter models: we all want AI smart enough to cure diseases and bring humanity to the stars. But for all their strengths, today's AI models are still terrible at physics, chemistry, and the many applied fields of the physical sciences, the disciplines behind the medicines, devices, and transportation we use every day.

But AI Still Fails at Science…

This became clear to me earlier this year, when I was a visiting scientist at Stanford's Department of Applied Physics and we began measuring the ability of frontier AI models to analyze scientific experiment data. Our research focused on condensed matter physics, a field that underpins countless industries, from semiconductors to advanced manufacturing. We quickly reached an uncomfortable conclusion: today's frontier models are not just poor at scientific analysis in absolute terms, they also lag well behind human investigators.

As it happens, a few miles away in San Francisco, two physicists turned AI researchers had come to a similar conclusion. Liam Fedus, who led post-training at OpenAI and co-created ChatGPT, and Dogus Cubuk, whose team at Google DeepMind pioneered the GNoME approach to scalable compound discovery with AI, had both realized that today's frontier models were missing an ingredient critical to scientific reasoning: real-world data.

After mutual friends introduced us, it quickly became clear that we shared a conviction about what was holding AI back from making groundbreaking scientific discoveries. The internet has been exhausted: the best models have already been trained on roughly 10 trillion tokens of text. But training alone isn't enough. You can read and re-read the textbook, but eventually you need to run the experiment. You need to close the loop between hypothesis and reality. That's what science is.

Why Models Fail at Science

Frontier models fail at physics because science is iterative by nature, and the data in the literature is fundamentally insufficient. Formation enthalpy labels, for instance, are so noisy that training on them does not produce usefully predictive models. Negative results rarely get published. And the epistemic uncertainty that matters most cannot be collapsed without running an experiment.
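To make the label-noise point concrete, here is a toy sketch in Python. The numbers are invented for illustration (they are not our Stanford measurements or anyone's real dataset): if what you care about hinges on enthalpy differences of a few tens of meV/atom while the labels carry roughly 0.1 eV/atom of noise, the labels cannot even rank candidates reliably, so no model trained on them can.

```python
# Toy illustration with invented numbers: noisy formation-enthalpy labels
# put a floor on what any model trained on them can achieve.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_H = rng.uniform(-0.025, 0.025, size=n)        # true enthalpies, clustered within ~25 meV/atom
labels = true_H + rng.normal(scale=0.10, size=n)   # literature-style labels with ~0.1 eV/atom noise

# Even a perfect model, scored against the noisy labels, looks no better than the noise:
oracle_rmse = np.sqrt(np.mean((true_H - labels) ** 2))
print(f"oracle RMSE vs. labels: {oracle_rmse:.3f} eV/atom (~= label noise)")

# Worse, the labels mis-rank candidates: how often does the labeled ordering
# of two random candidates disagree with the true ordering?
i, j = rng.integers(0, n, size=(2, 10_000))
flips = np.mean((labels[i] > labels[j]) != (true_H[i] > true_H[j]))
print(f"pairwise rankings flipped by noise: {flips:.1%}")   # close to a coin flip
```

Training longer on such labels cannot recover information the measurements never contained; only a new, cleaner experiment can.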

This is the gap Periodic is built to close.

What Periodic is Building

Periodic Labs is building AI scientists and the autonomous laboratories for them to control. Not models trained on scientific text. Not simulated environments. Real, physical labs that synthesize materials, characterize properties, and generate gigabytes of experimental data that exists nowhere else. Their approach starts at the quantum mechanical energy scale, where chemistry, materials, and solid-state physics operate: they are building powder synthesis labs where robots mix precursors and heat them to discover new superconductors, magnets, and heat shields. These are simple methods, but they generate rich physical data.

The key insight: nature becomes the reinforcement learning environment. When you predict a material’s properties and synthesize it, you know definitively whether you were right. The models will read literature, run quantum mechanical simulations, take action in the lab, and get feedback from nature itself.
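In code terms, the loop might look like the minimal sketch below. Every class and name here is hypothetical, invented for illustration rather than taken from Periodic's system, and the "lab" is a random stub standing in for real synthesis and measurement.

```python
# Hypothetical sketch of "nature as the RL environment" (all names invented).
# An agent proposes a material and a predicted property; the lab synthesizes
# and measures it; the measurement itself provides the reward.
import random
from dataclasses import dataclass

@dataclass
class Result:
    candidate: str
    predicted_tc: float  # model's predicted critical temperature (K)
    measured_tc: float   # what the instrument actually reports (K)

class Agent:
    def propose(self) -> tuple[str, float]:
        # Placeholder policy: in reality, literature and quantum simulations drive this.
        return f"compound-{random.randrange(1000)}", random.uniform(1.0, 40.0)

    def update(self, result: Result, reward: float) -> None:
        pass  # an RL update against real measurements would go here

class Lab:
    def synthesize_and_measure(self, candidate: str) -> float:
        # Stand-in for robots mixing precursors, heating, and characterizing.
        return random.uniform(1.0, 40.0)

agent, lab = Agent(), Lab()
for _ in range(3):  # each iteration closes one hypothesis-to-reality loop
    candidate, predicted = agent.propose()
    measured = lab.synthesize_and_measure(candidate)
    r = -abs(predicted - measured)  # reward: nature scores the prediction
    agent.update(Result(candidate, predicted, measured), r)
    print(f"{candidate}: predicted={predicted:.1f} K, measured={measured:.1f} K, reward={r:.1f}")
```

The design point is that the reward comes from an instrument reading, not from a static dataset or a learned proxy, so the model cannot overfit its way around reality.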

Why Now

The technology to do this has only emerged in the last couple of years. Liam and Dogus have built what we believe is an n-of-one team: physicists, chemists, simulation experts, and some of the best ML researchers in the world. The team runs weekly teaching sessions in which the physicists work on teaching LLMs to reason about quantum mechanics and the ML researchers absorb the physics and its intuitions.

Periodic is already working with customers in space, defense, and semiconductors, sectors representing trillions in R&D spend. They're helping semiconductor manufacturers solve heat-dissipation problems, training agents to automate simulations, and building systems that encode deep domain knowledge through mid-training and reinforcement learning. The strategy is to land and expand at the frontier: solve critical problems with clear evaluations, show what's possible when you optimize against physical reality rather than internet text, and then scale.

Why We Led

The industries that Periodic will impact – advanced manufacturing, materials science, semiconductors, energy, aerospace – represent roughly $15 trillion of global GDP. These are the sectors AI has barely touched because you can’t transform them with models trained only on text.

If Moore’s Law is slowing, this is how we restart it. The bottleneck has been the iteration speed of human-led experimentation. Periodic removes that constraint, and we are thrilled to lead their $300M founding round.

Periodic is hiring ML researchers, experimentalists, and simulation experts. If you’re world-class in your domain and want to accelerate scientific progress – not in a decade, but now – you should talk to them.

Learn more and apply here: https://jobs.ashbyhq.com/periodic-labs