I’ve historically avoided investing in hardware. Hardware is hard – expensive, slow, risky. (I should know.) If anything, the problems with building and selling consumer hardware are the same today as they were a decade ago: everyone expects Apple-level design, the margins are thin, there’s manufacturing lag to contend with, and you’re at the mercy of supply chains. You run the risk of placing an order for a million units, then the world moves on, and suddenly you’re stuck with warehouses of unsold products.
And yet.
This moment feels different in a way that’s causing me to reluctantly reevaluate my own beliefs.
With AI, we built something close to a digital god. Then we crammed it into a web app and typed at it. Yes, we’ve added audio, image, even video – but still, we mostly just talk to it through a box. If this tech is as powerful as we know it is, shouldn’t we be able to interact with it in more meaningful ways? Shouldn’t it come along with us on the journey?
In the last two weeks alone, I’ve met with dozens of hardware founders. They’re all saying some version of: AI needs a body. I’ve heard everything from glasses to pins to Labubu-style toys. Everyone’s experimenting. This new wave of smart, young founders who are building in hardware is undeniably compelling. They feel the same way: this AI ghost needs a shell (iykyk). And typing into a laptop ain’t it.
Think back to the mobile shift. At first, it just seemed like the same behavior on a smaller screen. But then entirely new behaviors emerged – DoorDash, Uber, Snap – products that weren’t possible on desktop. That happened because the hardware changed. With that pocket-sized form factor, you suddenly had location, cameras, payment capabilities, and more, and you were on the move.
Dare I say – AI hasn’t had that moment yet. Not for everyday consumers. The models are still too big for mobile AI to really work. So though I’m loath to admit it, I’m starting to think: maybe hardware is the missing link.
I believe the biggest consumer breakthrough in AI over the past six months has been the open-ended memory layer. It’s what makes AI functionally impossible to turn off, and infinitely more useful. The more context AI gets, the more valuable it becomes. Look at Oura, a $5 billion+ company: it works because tracking 180+ biomarkers gives AI context.
There are two ways of gathering that all-important context: active input and passive input. Active input is what we do with ChatGPT. But, come on: people get lazy; transcribing your day into ChatGPT each night is extreme early adopter behavior. Most people will opt for passive input. That’s why all these early productivity tools are Chrome extensions that integrate with your email and calendar. They’re trying to passively collect context.
But this can’t stay on the laptop. It’s too magical for that. It needs to move with us. It needs to be ambient.
If that’s the case, the smart person who’s followed tech for longer than five minutes would ask: why aren’t we all walking around wearing AI glasses or pins or rings then? To me, it seems obvious: the social cues around these products just haven’t settled yet.
Social acceptability is the bottleneck. For better or worse, a Google Glass-style device still feels sci-fi and creepy to most. Nobody wants to look like they’re recording everything. For hardware to break through, it needs to be invisible, desirable, and – the real kicker – it needs a use case everyone agrees is worth it.
This is key. The form factor determines what kind of data you can collect, which determines what the AI can do. My prediction is we’ll start somewhere lighter than always-on cameras: sound. I expect we’ll see a form that can house mics without being intrusive. We don’t know what that looks like yet. But we do know this: AI becomes dramatically more useful when it has context, and hardware might be the only way to collect that context passively.
I’m not interested in debating the fashionability of a particular brand of glasses or accessory. It’s possible this elusive hardware ends up tethered to the iPhone, which already has billions of units moving through the world – or something phone-adjacent, like AirPods. More likely, though, it’ll be built by someone less encumbered by privacy constraints, someone who can gather the kind of passive context AI needs to be undeniably valuable. Which leads us to…
I met a founder recently who’s building cheap AI glasses for factory workers, not for consumers. He’s solving a very specific problem: how do you collect hand-motion data to train robots? For that, he doesn’t need long battery life or social appeal. He just needs repeatable, structured input, and a factory is a great place to get it. It’s not a mainstream use case, but it is precise.
That’s the “what” I’m looking for: precise, valuable, socially acceptable use cases. That’s how the hardware wedge will happen.
We’ve built a technology that’s too powerful to live in a text box. Increasingly, people want it in their lives. They just need a purpose for it that feels right.
Lovable’s cofounder famously claims to be building the last piece of software. Let’s assume, for a moment, that he’s right. Now the question becomes: what brings that software into our lives in a real way?
If I’m to believe the dozens of smart, convincing young founders I’ve met this month, maybe, just maybe, it’s hardware.