What Comes After Mobile? Meta’s Andrew Bosworth on AI and Consumer Tech

David George and Andrew Bosworth

In our conversation series AI Revolution, we ask industry leaders how they’re harnessing the power of generative AI and steering their companies through the next platform shift. Find more content from our AI Revolution series on www.a16z.com/AIRevolution.

Drawing on everything from his early days building Facebook’s News Feed to his work today on smart glasses and AR headsets, Meta CTO Andrew Bosworth joins a16z Growth General Partner David George to share how he’s translated emerging technologies into products people use and love. He also explores how breakthroughs in AI and hardware could turn the existing app model on its head, introduce entirely new competitive dynamics and business models for startups, and usher in a post-mobile phone era. If we get it right, Bosworth says, the next wave of consumer tech won’t run on taps and swipes—it’ll run on intent.

[00:00:00] The future of content consumption

[00:02:18] Building great products on the backs of tech trends

[00:08:24] The evolution of Reality Labs and imagining a post-mobile world

[00:11:36] Building a bridge to the post-mobile era

[00:15:00] Does AI turn the app model on its head?

[00:21:28] Competing on performance and price in the AI era

[00:24:27] Open source, Llama, and commoditizing your complements

[00:27:49] Risks to the post-mobile vision

[00:32:41] Committing to the vision

Note: this transcript has been lightly edited for clarity. 

The future of content consumption 

David George: I want to jump right in. How are we all going to be consuming content five years from now—and 10 years from now?

Andrew Bosworth: Ten years out, I feel confident that we will have a lot more ways to bring content into our viewshed than just taking out our phone. I think augmented reality glasses are a real possibility. I’m also hoping that we can do better for engaging in immersive things. Right now you have to travel to something like the Sphere, which is great, but there’s one of them, it’s in Vegas, and it’s a trip.

Are there better ways we can access immersive and social experiences? Say I want to watch the game with my dad and feel like we’re courtside. Sure, we can go and pay a lot for tickets, but is there a better way? I think there is. So 10 years out, I feel good about alternative content delivery vehicles.

Five years is trickier. I think the smart glasses, the AI glasses, the display glasses that we’ll have in five years will be good. Some will be super high-end and exceptional. Some will be lower resolution but always available and on your face. I wouldn’t be doing work there, but if I’m grabbing simple content between moments, they’ll be good for that.

We’re seeing a spectrum: super high-end but expensive experiences that won’t be evenly distributed, and a broader set of experiences that aren’t rich enough to replace current devices. Hopefully, more people will have access to experiences that couldn’t happen any other way. That’s what’s exciting about mixed and virtual reality.

Building great products on the backs of tech trends 

David George: You’ve been uniquely good at piecing together big tech shifts into new product experiences. At Facebook, you helped create News Feed. That combined social experience, mobile, and old-school AI.

Andrew Bosworth: Let me say two things about this. First, what my cohorts at Meta and I were good at was immersing ourselves in the problem. What were people trying to do? What did they want?

When you start there, you reach for whatever tool is available. That helps you be honest about what tools work and lets you see trends. If you’re too focused on the technology, you can get caught in a wave and not admit when it’s over or embrace what’s next.

David George: And then you’re building technology for technology’s sake, instead of solving a product problem.

Andrew Bosworth: Exactly. If you’re focused on real problems, even if they’re not profound, you make better things. That’s why this AI revolution feels different. It solves real problems—even though it creates new ones. It feels like a substantial, new, broadly applicable capability.

While today’s AI has downsides—factuality, compute, cost—those are solvable. What’s unusual is how many domains it applies to. Most breakthroughs are very domain specific: this gets faster, that gets cheaper. This feels like everything gets better.

Every interface I interact with, every problem I try to solve, is being improved by this technology. That’s rare. Mark and I believed this revolution was coming—we just thought it would take 10 more years.

What we expected to happen first was the interface revolution. Around 2015, the mobile phone form factor already felt saturated. It’s the greatest computing device we’ve ever used, but once you get past that, the next step has to be more natural—getting information through your eyes and ears, and expressing intentions without a keyboard or touchscreen.

Okay, we need to be on the face, because we need access to eyes and ears. And we need neural interfaces or something else that helps the user express intention to the machine.

That’s the vision we’ve had for 10 years. But we grew up with an entire generation of engineers for whom the system—app model, interaction design—was fixed. Yes, we went from mouse to touchscreen, but it’s still direct manipulation, same as in the 1960s.

We haven’t changed modalities. And there’s a cost to that, because we’ve learned to use current tools really well.

So the challenge was: build this hardware that does amazing things, is attractive, light, affordable—and none of it existed before. But that’s only half the problem. The other half is: how do I use it? How do I make it feel natural? I’m so good with my phone—it’s like an extension of my body and my intention. How do we make it even easier?

We were wrestling with those challenges, and then, what a blessing: AI arrived two years ago, much sooner than we expected. It’s a tremendous opportunity to make this even easier. The AI we have today has a greater ability to understand what my intentions are. I can give vague references, and it can work through the corpus of information to produce specific outcomes.

There’s still a lot of work to do to adapt it, and it’s not yet a control interface. I can’t reliably operate my machine with it. But we know what those things are. So now we’re in a more exciting place.

Before, we had a big hill to climb on hardware and interaction design—but now we’ve got this tailwind. On the interaction design side, there’s the potential to have a much more intelligent agent that knows what you’re seeing, hearing, what’s going on around you, and can make intelligent inferences.

The evolution of Reality Labs and imagining a post-mobile world

David George: Let’s talk about Reality Labs and the suite of products. So, you’ve got Quest headsets, smart glasses, and then, on the far end, Orion and the stuff I demoed. Talk about how these efforts evolved, what markets they target, and how they converge or diverge over time.

Andrew Bosworth: When we started the Ray-Ban Meta project, they were going to be smart glasses. They were entirely built and six months from production when Llama 3 hit. The team said, “No, we’ve got to do this.” So we added AI.

Now they’re AI glasses. The form factor was already right. We had compute, we had audio. Now you have glasses you can ask questions to.

In December, through our early access program, we launched what we call “Live AI.” You can start a live session with your Ray-Ban Meta glasses and for 30 minutes—until the battery runs out—it’s seeing what you’re seeing.

On paper, Ray-Ban Meta looks like an incremental improvement over Ray-Ban Stories. But the story I’m trying to tell is that while the hardware isn’t that different between the two, the interactions we enable are so much richer.

When you use Orion, the full AR glasses, you can imagine a post-phone world. If the device were attractive, light, had good battery life—I could wear it all day. Everything I need would be right there.

You combine that with what AI is capable of—like the demo you saw with breakfast ingredients?

David George: Yeah, I did. I looked at them and said, “Hey Meta, what are some recipes I could make?” It was awesome.

Andrew Bosworth: That’s the vision for Orion. Initially, it didn’t have AI—it was modeled on a traditional app framework, like the phone. Of course, you want to do calls, email, texting, games, Instagram reels.

But now, we’re excited about layering on an assistant that understands not just your device inputs but also the physical world around you. It connects what you need in the moment with what’s happening.

These concepts flip the model: what if the entire app paradigm is upside down? What if it’s not, “I want to open Instagram,” but instead, “I have a free moment—should I catch up on highlights from my favorite team?” That becomes possible.

That said, hardware problems are hard—and real. Cost problems are hard—and real. You can’t come for the king unless you’re ready. The phone is the centerpiece of our lives.

It’s how I operate my home, my car, my work. It’s everywhere. The world has adapted to the phone. My fridge has a phone app. It’s excessive—but it exists.

The 10-year view is clearer: these devices will be widely available, increasingly adopted. The five-year view is harder. Even if we knock it out of the park, replacing the phone in five years is hard. It’s almost unthinkable.

Building a bridge to the post-mobile era

David George: Right. It’s hard to envision life without the OS we’re all used to. So what about the interim period? Maybe the hardware becomes capable and market-accessible. Do you tether it to the phone? Do you have a strong view that you’ll never do that? How do you think about that?

Andrew Bosworth: Phones have a huge advantage: they’re already central to our lives. They have a huge developer ecosystem, so they’re a great anchor device.

We found that apps want to be different when they’re not controlled via touchscreen. That’s not a novel insight. A lot of people failed early in mobile—including us—by just porting web stuff onto phones.

We said, “Let’s just put the web on mobile.” But it wasn’t native to the phone’s interaction model—its design, layout, feel. And because of that, we failed. Even with one of the most popular products in web history.

David George: This is like the skeuomorphic versus native design debate.

Andrew Bosworth: Exactly. Having the developers is valuable. Having all this functionality is valuable. But once you reproject it into 3D space and try to manipulate it with your fingers instead of a touchscreen, you have less precision. There’s no native voice interface. There’s no tooling or design for it.

So having a phone platform feels like a strong base on the hardware side—but a drag on the software side.

We’re not opposed to partnerships. As the hardware matures, it’ll be interesting to see how partners feel. And I hope they continue supporting users who buy $1,200 phones and want to pair other hardware with them.

Does AI turn the app model on its head?

Andrew Bosworth: The biggest question I have is whether the entire app model gets turned on its head. We were imagining a very phone-like app model for these devices—just with different inputs.

But AI could flip that. Right now, if I want to play music, the first thing I think is, “Which provider do I use—Spotify or Tidal?” That’s not what I want. What I want is to play music. I want to say, “Play this song,” and have the AI figure out how to do it.

It should know I’m already using a service—or that one has better latency or better audio quality—or say, “That song isn’t available, but here’s another service that does have it. Want to sign up?”

I don’t want to orchestrate which app to open. We’ve had to do that because that’s how computing evolved: you had to pick the application first.
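To make that inversion concrete, here’s a minimal sketch of what intent-based routing could look like. It is purely illustrative: the `MusicProvider` adapter, the provider names, and the ranking order (existing subscription, then latency, then audio quality) are all assumptions invented for the sketch, not anything Meta has described shipping.

```python
from dataclasses import dataclass

@dataclass
class MusicProvider:
    """A hypothetical adapter for one streaming service."""
    name: str
    user_subscribed: bool   # is the user already paying for this service?
    latency_ms: int         # typical time to start playback
    audio_quality: int      # 1 (low) to 5 (lossless)
    catalog: frozenset = frozenset()

    def has_track(self, track: str) -> bool:
        # Stand-in for a real catalog lookup.
        return track in self.catalog

def route_play_intent(track: str, providers: list[MusicProvider]) -> str:
    """Resolve 'play this song' to a provider, so the user never picks an app."""
    candidates = [p for p in providers if p.has_track(track)]
    if not candidates:
        # The dead-end case: surface an alternative instead of failing silently.
        return f"No linked service has {track!r}. Want to sign up with one that does?"
    # Prefer services the user already uses, then lower latency, then quality.
    best = max(candidates, key=lambda p: (p.user_subscribed, -p.latency_ms, p.audio_quality))
    return f"Playing {track!r} on {best.name}."

providers = [
    MusicProvider("ServiceA", user_subscribed=True, latency_ms=120, audio_quality=3,
                  catalog=frozenset({"Song X"})),
    MusicProvider("ServiceB", user_subscribed=False, latency_ms=80, audio_quality=5,
                  catalog=frozenset({"Song X", "Song Y"})),
]
print(route_play_intent("Song X", providers))  # -> Playing 'Song X' on ServiceA.
```

The point of the sketch is the flip Bosworth describes: the user states an intent, and choosing a provider becomes a ranking problem the assistant solves on their behalf.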

David George: That’s a hot take. 

Andrew Bosworth: That’s not just about wearables—that’s even at the phone level. If you were building a phone today, would you build an app store the way it’s historically been built?

I don’t think so. Today, I have to solve a problem by first deciding which provider I want to use. That’s limiting.

The stronger we get at agentic reasoning and capabilities, the more I can rely on my AI to do things for me—even when I’m not actively using it. At first, that’ll be knowledge work.

But once we get real consumer usage, people are going to hit dead ends. They’ll ask, “Can you do this?” and the AI will say, “No.”

That’s the gold mine. You take that to developers and say, “A hundred thousand people a day are asking for this. They don’t know they’re trying to use your app, but here’s the query stream. Build a bridge. If you do, you’ll unlock real demand.”

The AI can even say, “This costs money,” and route them to a service—even if that’s a human plumber. It could be anything. There’s a marketplace that will emerge here. Not from a closed room with someone designing an app platform—but from real demand, real failures, and developers filling those gaps.
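As a rough illustration of that gold mine, here’s a minimal sketch under the assumption that the platform simply tallies refused requests; the function names and example intents are invented for the sketch.

```python
from collections import Counter

# Hypothetical tally of intents the assistant had to refuse.
unmet_intents: Counter = Counter()

def record_dead_end(intent: str) -> None:
    """Log each request that no integration could serve."""
    unmet_intents[intent.strip().lower()] += 1

def top_gaps(n: int = 5) -> list[tuple[str, int]]:
    """The report a platform could hand developers: the n most-requested
    capabilities that currently dead-end."""
    return unmet_intents.most_common(n)

# Thousands of people asking for something no app yet handles:
for _ in range(3):
    record_dead_end("Book a plumber for Saturday")
record_dead_end("Translate this menu")
print(top_gaps())
# [('book a plumber for saturday', 3), ('translate this menu', 1)]
```

Each entry in that report is the demand signal Bosworth describes: measured, specific, and waiting for a developer to build the bridge.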

David George: That’s a very alluring end state, especially as a consumer. If I don’t have to pay the brand tax, that’s great.

Andrew Bosworth: It’s messy. It creates new marketplaces where performance matters most. It also abstracts away brand names. That’ll be hard for companies that rely on brand affinity.

Consumers won’t care if the music comes from Service A or B. They’ll just want it to play well. Brands want you to care. They want attachment.

David George: Some things you may care about—but in many cases, you won’t.

Andrew Bosworth: In the world where apps compete for your eyeballs, brand matters. In a world where I just want a good song played well, different things matter. That’s net positive for consumers, because price-per-performance becomes more important. But companies won’t love that.

It puts a lot more pressure on trusting the AI—or whoever distributes the AI. If I’m floating between different companies providing AI, I have to trust that they’re not bought and paid for on the backend. That they’re not giving me the experience that makes them the most money, but the one that’s best for me.

David George: That’s nothing like the experience today, right? It’s a very different world.

Andrew Bosworth: It is. But you can already see the beginning of it. Some companies are working with the new AI providers to enable agentic task completion. But then they say, “Wait, I don’t want bots just executing this—I want the humans coming to me.” They feel it’s existential to have a direct relationship with demand.

That’s potentially messy. But for the consumer, it’s a bright future—especially if we can avoid paying the brand tax.

Competing on performance and price in the AI era

David George: It’s going to be messy, but probably unavoidable. Once people get into those tight loops where more of their interactions are mediated by AI, companies won’t have a choice. That’s where their customers will be. There’ll probably be groups that try to move fast into that world, competing with branded incumbents by saying, “I’ll win on performance and price.” Where do you think that could happen first?

Andrew Bosworth: Good question. It’ll probably mirror where the query volume is. We already have a model for this—in the web era when Google became dominant.

Before that, the web was all index-based—Yahoo, link structures. Getting traffic sources to link to you was the game. Then Google took over in a few years, and suddenly all that mattered was SEO—where you were in the query stream.

That dictated which businesses won.

David George: Right. Travel sites, for example.

Andrew Bosworth: Travel got disrupted fast. Travel agents went from being the norm to being gone in just a few years.

David George: And the competition was based on deal execution—best prices, fast conversions.

Andrew Bosworth: I think SEO has now gotten to a point where it’s kind of a bummer. It’s made things worse. Everyone’s gotten so good at it—especially now with AI.

David George: Totally. It’s just gaming the system.

Andrew Bosworth: Right. We had a great flattening. Now it’s starting to spike again—with paid placements dominating. That’s probably the cautionary tale for how AI could play out too.

I do think there will be a golden era. The query stream will reveal which problems people are trying to solve, and developers will follow that. Each vertical will tip. We’ll get a lot of progress quickly—better solutions for consumers.

But once it hits steady state, gamesmanship starts to creep in.

David George: That’s the decaying era. That’ll be the test of the AI layer—whether it can avoid falling into that trap.

Open source, Llama, and commoditizing your complements

David George: Let’s shift gears. You’ve been leading from the front on open source. Talk about your efforts there and what kind of market structure you’d ideally want for AI models.

Andrew Bosworth: Llama came out of FAIR—our Fundamental AI Research group—which has always been open source. That’s allowed us to attract great researchers who believe we’ll make faster progress by collaborating across labs.

It’s not just us. The transformer paper came out of Google. Self-supervised learning was a big contribution from us. Everyone is contributing.

Up until we released Llama, every major model had been open source. The norm was: if your model was worth something, you open sourced it so people could use and validate it. Then everything else went closed.

Llama 2 was a key decision point. That’s when something else kicked in—a belief I strongly advocated for, and that Mark believes in too. He’s written about it.

These models should be open. First, because we’ll make more progress. The biggest contributions aren’t going to come from big labs—they’ll come from small ones.

Look at DeepSeek in China. They were under pressure, and they innovated in memory architectures and other areas. Incredible results.

Second, this is a classic case of commoditizing your complements. Our products are made better with AI. That’s why we’ve been investing for so long. Whether it’s recommendation systems, who to show at the top of your DMs, or new surfaces like semantic search in WhatsApp—it all gets better.

But even if everyone has access to the same model, they still can’t build our products. The asymmetry works in our favor.

So for us, commoditizing the foundational models is just good business. And having a lot of competitively priced—or nearly free—models helps the whole ecosystem. It helps startups, academics, and it helps us as a platform provider.

David George: So the business model alignment and societal progress are moving in the same direction?

Andrew Bosworth: Yes. The belief in open research and the alignment with our business model are totally in sync. There’s no conflict.

Risks to the post-mobile vision

David George: I want to shift gears to the impediments to progress—what risks this vision. Obviously, hardware. AI capabilities. Vision systems, screens, resolution. We talked about the developer ecosystem and native products. Which of these do you see as linear and which are risky?

Andrew Bosworth: We have real invention risk. There’s a chance that what we want to build can’t yet be done—not by us, not by anyone.

That said, I think we have windows into it. You’ve seen Orion. It can be done. The challenge now is reducing cost and improving materials. So yes, there’s invention risk—but even bigger is adoption risk.

Will people accept it socially? Will they learn a new modality? We all learned to type. Phones feel native now. Are people willing to learn something new? Is it worth it to them?

Then there’s ecosystem risk. Say we build the device, but it only does email and Reels—that’s not enough. You need the full suite of software that we use in daily life to show up.

I’ll say: we feel good about where we’re going with hardware. And on acceptability—we think we can get there. That wasn’t always clear, but with the Ray-Ban Meta glasses, we’re feeling confident. People will accept this form of tech.

Within that, though, there are subtle regulatory and social questions. You now have an always-on machine with superhuman sensing. Your vision is better. Your hearing is better. Your memory is better.

So when I see you two years from now, and I haven’t seen you online in between, and I think, “Oh, I remember—we did a podcast together. What’s his name again?”—can I ask that question?

Am I allowed to use a tool to supplement my memory? Because it’s your face. If I had a better memory, I’d just know. So what’s the line there?

There are deep privacy and social acceptability issues embedded in this. And they could derail everything.

David George: Absolutely. You could overstep and get your hands slapped.

Andrew Bosworth: Exactly. If people overstep socially, entire technologies can be derailed. Nuclear power got derailed for decades—for reasons we now know weren’t that strong. They just played it wrong. They ignored the public.

So yeah, we feel good on the invention side. Acceptability is better than it’s been, but there’s a long way to go. I used to think ecosystem was the biggest risk—but now, AI may be our silver bullet there.

If AI becomes the main interface, it comes with the device. And I’ll say, even with just Ray-Ban Meta—not even Orion—the interest from companies wanting to build on it has been stronger than expected.

It’s not even a platform yet. There’s so little compute—barely any space. But we partnered with Be My Eyes, which helps people who are blind or visually impaired navigate the world. It’s spectacular.

That’s a window into what’s possible. The response has been more positive than I expected.

So yeah, everything right now—tailwinds abound. And after eight or nine years of headwinds, having a year of tailwinds? I’ll take it.

David George: No victory laps yet, but it’s good.

Andrew Bosworth: It’s all hard. And it could all fail at any point.

David George: I like that you start with invention risk. There are many ways this just won’t work—even if it does work technically, it might not take.

Committing to the vision

Andrew Bosworth: Totally. I’ll say two things. First, Mark deserves a lot of credit here. We’re true believers. We have real conviction. He believes this is the next thing—and it doesn’t happen for free.

We can be the ones to make it happen. Our chief scientist, Michael Abrash, talks about the myth of technological eventualism. People say, “Oh, it’ll eventually happen.” No, it won’t. That’s not how it works.

You have to stop what you’re doing and commit the time, money, and effort to build it. And that’s what we’re doing.

That’s the difference between us and everyone else. We believe this stuff to our core. This is the most important work I’ll ever do.

This is Xerox PARC–level stuff. This is rethinking how humans interact with computers. This is Licklider and human-in-the-loop computing. We’re seeing that now with AI.

It’s a rare moment. It doesn’t happen every generation. Maybe every second or third generation. You don’t get many shots at this. So we’re not missing it. We’re going to do it. We might fail. But we won’t fail for lack of belief or effort.

David George: Great. Thanks a ton, Boz.

Andrew Bosworth: Cheers.