AI and the Promise of Hardware Iteration at Software Speed

Millen Anand

Software simulations are central to engineering development in every hardware industry. In aerospace, we simulate the vibrations and forces of a rocket launch and the on-orbit thermal balances; in aviation, lift and drag over wings; in medicine, drug delivery through the bloodstream.

Today, these simulations are based on fundamental physics equations, mathematically derived centuries ago by people like Newton and Bernoulli. These equations are often nonlinear partial differential equations (PDEs) with no closed-form analytical solutions. As a result, simulations rely on iterative numerical approximations that are incredibly time- and compute-intensive — and often inaccurate.
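To make "nonlinear PDE" concrete, consider the incompressible Navier-Stokes equations that reappear below, written here in LaTeX with velocity field \mathbf{u}, pressure p, density \rho, dynamic viscosity \mu, and body forces \mathbf{f}:

    % Incompressible Navier-Stokes: momentum balance and mass conservation
    \rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
        = -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f},
    \qquad \nabla \cdot \mathbf{u} = 0

The convective term (\mathbf{u} \cdot \nabla)\mathbf{u} is the nonlinearity that defeats closed-form solution and forces engineers to fall back on numerical approximation.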

This is all about to change. Machine learning methods applied to physics will cut simulation times by orders of magnitude, have the potential to improve accuracy, and will revolutionize the engineering development process. Next-generation physics ML simulations are the first step toward enabling hardware iteration at software speeds.

The traditional engineering process

Hardware development today follows an iterative process: design, then analysis (simulation), then prototype and test. A single iteration cycle can take weeks to months. And although there are improvements to be made across the entire product development flow, simulation software may be the first frontier, as there are promising early signals in the world of physics machine learning.

Current computational physics simulations, for example in fluid dynamics, work by discretizing the geometry of interest (say, an airplane wing) into a mesh of small elements and nodes. A numerical solver then combines the fundamental equations describing fluid flow (the Navier-Stokes equations) with initial and boundary conditions and iteratively approximates the solution at every node. Models often have millions (or even billions) of nodes over which to evaluate solutions, and simulations can easily take hours (and sometimes days) to complete.
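To illustrate why this gets expensive, here is a deliberately minimal sketch of the iterative pattern (our own toy example, not a production CFD solver): it solves steady-state 1D heat conduction with Jacobi iteration, sweeping every node repeatedly until the solution converges. Even this trivial 100-node problem takes tens of thousands of sweeps; real solvers apply the same loop structure to the nonlinear Navier-Stokes equations over millions of 3D cells.

    import numpy as np

    # Toy example: steady-state 1D heat conduction (Laplace's equation),
    # solved by Jacobi iteration. Real CFD solvers follow the same
    # pattern -- sweep every node, repeat until converged -- but over
    # millions of 3D cells and far more complex physics.

    n_nodes = 100
    T = np.zeros(n_nodes)
    T[0], T[-1] = 100.0, 0.0  # boundary conditions: fixed end temperatures

    tolerance = 1e-6
    for sweep in range(500_000):
        T_new = T.copy()
        # Each interior node relaxes toward the average of its neighbors.
        T_new[1:-1] = 0.5 * (T[:-2] + T[2:])
        if np.max(np.abs(T_new - T)) < tolerance:
            break
        T = T_new

    print(f"Converged after {sweep} sweeps")  # tens of thousands, even in 1D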

For engineering teams, this iteration process is incredibly time-consuming, as each design modification results in days to weeks of simulation time. As a result, only a limited number of iterations can take place before engineering teams are forced to accept their working design and move on to prototype development.

In part, this is because the market for simulation software is dominated by incumbent juggernauts with multibillion-dollar market caps, such as Ansys and Siemens, whose legacy products have historically prioritized stability and incremental improvements over innovation. Many of these tools are written in ancient programming languages such as Fortran, lack intuitive UX, and fail to take full advantage of modern hardware like GPUs. The field has been stagnant for decades and is ripe for disruption.

But if simulations took seconds instead of days, engineers would be unshackled. They’d have the freedom to get more creative, explore design spaces, optimize in software, and, overall, carry out many more iterations before delivering a product.

ML-based approaches to simulation

Machine learning approaches to physics simulation work in broadly the same way as computational physics approaches: creating a 3D embedding of the model geometry (with point clouds, graphs, or similar techniques); encoding initial and boundary conditions; and outputting results as pressure and velocity fields or stress plots. The key difference is that machine learning models are pre-trained to learn a transfer function from initial conditions to outputs based on a trove of data, generally from computational physics simulations. At runtime, this means a machine learning simulation computes only a single forward pass through a neural network, rather than a huge number of iterative computations at every node in the mesh.
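As a highly simplified sketch of that runtime story (our own illustration; real systems use graph neural networks or neural operators over meshes and point clouds, not a plain MLP, and the dimensions here are placeholders), the key property is that the expensive work happens once during training, and evaluating a new design is a single forward pass:

    import torch
    import torch.nn as nn

    # Hypothetical, highly simplified surrogate model: maps an encoded
    # design (geometry embedding plus initial/boundary conditions)
    # directly to a predicted flow field.
    design_dim = 256   # illustrative: encoded geometry + conditions
    field_dim = 4096   # illustrative: flattened output field (e.g., pressure)

    surrogate = nn.Sequential(
        nn.Linear(design_dim, 1024), nn.ReLU(),
        nn.Linear(1024, 1024), nn.ReLU(),
        nn.Linear(1024, field_dim),
    )

    # Training happens once, offline, against a trove of precomputed
    # simulation results: (encoded design, solved field) pairs.
    # ... standard supervised training loop omitted ...

    # At design time, each new candidate costs one forward pass,
    # versus hours of per-node iteration in a classical solver.
    encoded_design = torch.randn(1, design_dim)  # placeholder encoding
    with torch.no_grad():
        predicted_field = surrogate(encoded_design)
    print(predicted_field.shape)  # torch.Size([1, 4096])

The training cost is amortized across every subsequent design evaluation, which is what makes the per-iteration economics so different.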

This can speed things up by many orders of magnitude. Simulations that previously took days can be solved in seconds. And simulations that previously were constrained to lower resolutions to enable reasonable compute time can be run at far greater resolutions, resulting in greater accuracy.

There is some precedent here. For decades, weather forecasting was performed in much the same way as engineering simulations — by discretizing the globe into small sections and computing fundamental, nonlinear partial differential equations of momentum, surface pressure, temperature, and more in a process that required some of the most advanced supercomputers in the world. In 2023, AI weather models separately released by Huawei and Google DeepMind, trained on decades of weather observations and simulations, cut compute time by five orders of magnitude while achieving accuracy comparable to the best existing models.

There are early signs of momentum toward disruption in this space, with a number of startups emerging to take on the incumbents. Approaches differ in ML architecture, and can broadly be segmented along two axes: architectures that rely more on fundamental physics versus those that correlate to simulation or experimental data, and approaches that focus on a single domain versus those that provide general multiphysics simulation capabilities. NVIDIA has also released Modulus, an open-source physics ML platform that allows users to select ML architectures and train models.

Progress, adoption, and challenges

Many physics ML startups with very impressive technology exist today. However, these approaches are still nascent and see little utilization among engineers. One reason is that startups tend to underinvest in what we view as one of the key challenges: overcoming barriers to adoption. Thus far, for example, most improvements in simulation have come from transitioning compute from dedicated desktops to the cloud, rather than from novel improvements in software capabilities or ML models. Notably, existing analysis tools can't benefit from the boom in GPU technology, as their solvers are generally incompatible and would need to be rewritten.

In addition to building great new technologies, we also hope to see more startups laser-focused on the small things that will help customers fully utilize their products.

Here are some insights from dozens of conversations with potential customers of this technology. Our advice to founders is to do everything in their power — on the product front, as well as from an educational perspective — to help potential customers overcome the hurdles and biases below:

Awareness and training

Hardware engineers tend to have a limited understanding of the current capabilities of different ML tools, and of where those tools might outperform traditional simulation. Engineers also generally lack the training and skill set to use ML physics simulations in their current form, as nearly all have been trained only on traditional tools. As a result, the ROI on AI is unclear to the engineering leaders who make software purchasing decisions.

Evaluation and trust

Without good benchmarks, ML physics simulation tools are tough for engineering leaders to evaluate, and these leaders often lack the time or bandwidth to invest in determining which tool is right. Engineers can also be skeptical of “black box” ML tools that don’t derive results directly from physics and first principles in a fully predictable manner. The reluctance to trust new tools is even greater when the design being simulated carries humans or a lot of kinetic energy.

UX and speed

New platforms, while enabling significantly faster simulation times, are often unintuitive and present new users with a steep learning curve. Anyone without solid programming skills or a working knowledge of ML techniques can struggle to see the benefits early on and risks abandoning a new system prematurely.

Go-to-market strategies

We’re already seeing startups experiment with a suite of go-to-market strategies to overcome some of these obstacles. Here are the strategies we believe to be most promising:  

  • Offering white-glove / full-service models initially (even going so far as embedding engineers) before transitioning to self-service. This gives companies real-world feedback that can help them improve UX and address other issues before they scale their sales motion.
  • Getting software into the hands of every college rocket and racecar team for free, and letting them bring it to industry when they graduate. Creating evangelists who are appalled at the speed of legacy industry tools helps accelerate adoption from the bottom up, much in the way that Benchling uses a free tier for academic researchers to drive product-led growth in the biotech industry. Additionally, startups that can find their way into university class syllabi (where engineers currently learn legacy tools) will likely be very effective in the long run.
  • Marketing the product as an early-stage design tool that helps engineers rapidly assess initial designs, evaluate trade spaces, and respond quickly to shifting requirements. Once established in an enterprise, startups can continue to introduce features that address ever-greater parts of the product-development process. This will also allow startups a chance to gain trust with engineers around the accuracy of the simulations, building toward a long-term goal of being an end-to-end simulation tool.
  • Focusing on the industries and groups within enterprises most likely to adopt. For example, the automotive sector tends to be more risk-tolerant than the aerospace sector. R&D groups within larger organizations tend to be less tied to strict processes and could be more open to new technology that helps them progress through conceptual designs. After adopting, these groups could also help champion new tools to the broader organization.

Hardware iteration at software speeds

While simulations are a core part of engineering development, decades without innovation from incumbents have left product iteration cycles long, tedious, and stagnant. Emerging physics machine learning simulation technology has the potential to revolutionize hardware iteration, and to short-circuit the time it takes for products to go from design to production.

We are certainly a long way away from hardware iteration at software speeds, and there are other pieces of the puzzle left to solve, including design and rapid mass-manufacturing. Yet progress is happening quickly, and this future is beginning to look like an inevitability. We can’t wait to live in a world where ideas become reality almost as quickly as we can imagine them.

If you are a hardware or simulation engineer, or a founder building in this space, please reach out.
