In this episode, a16z partner Kimberly Tan sits down with Michael Chime, the CEO and co-founder of AI emergency response platform Prepared, for a powerful conversation about how AI is transforming public safety — starting with 911 call centers.
They discuss how Prepared went from an idea in a Yale dorm room to a fast-growing platform augmenting emergency response nationwide. Michael shares insights into the company’s go-to-market journey, the challenges of integrating voice AI into highly sensitive environments, and what it takes to build trust with government agencies.
Together, they also dive into the real-world impact of AI agents in emergency services, what “human-in-the-loop” really means in crisis moments, and why AI’s future in government is only just beginning.
Kimberly: Maybe just to start for people, what does Prepared do? What is Prepared?
Michael: We are an AI assistant for 911 calls. We really do that in two ways. We have a set of tools that offloads mundane work from a 911 call taker entirely: things like noise complaints, parking tickets, non-emergency traffic, or backend processes like quality assurance. We will handle that completely.
And then on emergency calls, we are a co-pilot, so transcription, translation, handling of CAD entries, summarizing that entire interaction. We’ll sit next to the call taker and make sure no critical detail is missed and that everything gets out to the responder.
Kimberly: And so this is obviously transformative and life-changing technology.
It’s also an area that I think a lot of people haven’t thought of applying tech to. So we’d love to understand the founding story. How did you guys decide to actually come and build for public safety and for 911 centers?
Michael: So I started working on it about five years ago. I was an undergrad at Yale, of all things. So people are like, 'Why public safety out of school? That's a unique life choice.' So I'll start there: it was personal for me.
I grew up in a town right outside of where there was an active shooter event in 2012, I was 13 and my town was small and blue collar. And so you just saw how it rippled and that was the catalyst for me. I went to college thinking about those types of problems, but I didn’t know it’d be a business. I met two classmates that turned out to be co-founders later, and we built this simple app for schools and we would go to schools and we’d say, ‘Hey, if you have an emergency, as opposed to using a walkie talkie or a PA system that’s built for the announcements, why not use your phone?’ It’s a much better, faster means of communication and we had a couple hundred schools that did it and took us up on it.
And it was only through doing that work that we kind of stumbled upon 911. ‘Cause we would have schools that would have a situation and would collect information like a picture and then wanna share that to 911 and couldn’t. And then we started to talk to 911 centers and we’d find out that the systems that they’re using everyday were built on the assumption that calls are landlines, that they were that outdated. Like, I could communicate better with you right now over text than I could to 911. And it felt so backwards. And so we started there solving problems and then it became this whole thing of: how could we rethink 911.
Kimberly: Yeah, 'cause I think a lot of the technology 911 centers use today is from like the 1960s or something like that. And it's really not built for the way in which people actually communicate today.
Michael: Yeah, that’s right.
Kimberly: You know, one of the other things that's really interesting about Prepared is, I don't think people historically thought that selling into state and local government was an area that venture-backed technology companies could go after. It'd be great to hear the initial story of the go-to-market path that Prepared went on, 'cause it's quite a unique one, I think, for a lot of reasons.
Michael: To totally zoom out again: when we were first starting, when I first walked through a center, it was clear that technology could get better. It was unclear why it hadn't. It wasn't because there weren't better tools; there were actually some call handling systems that were better, or CAD systems that were better. And so we're like: 'Okay, well, what actually is the constraint here?' And our perspective was that nobody had hacked, or really solved, go-to-market.
I was really inspired by companies like Slack or Yammer. It used to be true that the large enterprise was not territory for a startup. It was, you know, 'that's Oracle, that's Salesforce'; only they could punch in that way, because IT would block everything else. The innovative idea was: well, what if we just got to the end users and proved to them that it was valuable? We'd still have to go through IT eventually, but we'd have the users at our back. And it worked.
And it was crazy at the time. People were like, 'No, you just can't go to them; they're employees in an enterprise.' And our belief was, if we could just get to the end users and show them better technology, then this was a technology fight, and we'd be well prepared to win that one. So the simple answer was: we launched a free tier of our product and said, 'Hey, use it.' We felt that if our technology sat side by side with the incumbent technology, over time we'd be able to build more powerful things, and that's borne out.
And so we started that way, and we got from zero to 1,000 centers across the country, about a sixth of the US, on our platform really, really quickly. And we built a lot of goodwill, 'cause people were like, 'Oh, actually I can do better. I'm seeing it. It's actually working, and it doesn't go down, right? So it's actually credible.' And so then, when we came back with additional products, people took us up on that.
We’ve always liked to think of ourselves as just technology partners trying to solve problems. And so it’s not about any underlying technology necessarily. It is just like ‘What is the problem we’re trying to solve?’ Or ‘What are the problems our users are facing and how do we go about solving this?’ And now we think we should rally the business around AI ’cause it can solve a ton of problems that were unsolvable two, three years ago.
The macro problems for a 911 center are two. One is that call volume's going up, but that doesn't necessarily mean emergencies are going up; a lot of that call volume is non-emergent. So I'd say there's two buckets of products we offer. One is full automation: we will take over mundane things that just add burden to the call taker. This is non-emergency calls. Often cities will have a 10-digit line or 311, et cetera, where these calls route to, and especially in large cities you'll have long hold times on them. There's big agencies out in California where the average hold time on non-emergency calls is 40 minutes; nobody's answering those calls. We reduce those hold times to zero.
And then, when you're in an emergency, there are a lot of limitations or gaps that people have that create challenges. I'll give you an example. We had a call a few weeks ago where someone calls in and they're panicked, and there's weapons on scene, and they mention in a roundabout way that there is a person on site that has autism. The call taker didn't hear that, missed it. Our transcription caught it, but it didn't get typed into the notes, and they only found out later. So the responder there was responding to weapons. And when they're knocking on the door and nobody's responding, not giving any verbal cues back, they're like, okay, I need to go in and figure this out. Small details like that really have an impact.
And then there's things like: if you call and you don't speak English, you get a dramatically different level of service than if you do. A language like Vietnamese could have a hold time of seven minutes for a human interpreter, whereas we come in and just translate it. Instantly. So there's a ton of opportunities for AI to just improve call processing times.
You can also think of us as a co-pilot on the emergency calls. You can imagine some 911 centers where call takers are given exactly what to say. But the question is: do folks actually say it? Centers really can't answer that question. Today it's all manual. You only have people in the back that'll pull out calls. They'll listen to them live, they'll have a pen and paper, and they'll check the boxes: Did they verify the location? Did they verify the incident? And they'll get to like 2% of the calls. Now with AI, it's not that hard of a technical problem: here's what was said, here's what ought to have been said, and we match 'em up. Now we get to 100% of calls, quality assured. And then you start to see trends in what we're doing well and what we're not doing so well, and you can coach on those same day.
Kimberly: When you first realized this opportunity and you went to your customers, how did they receive the idea of using AI?
Michael: Yeah, it's all been iterative. So the very first product was just transcription and translation. And I think that was a very on-fire problem. Like I said, non-English calls on average have one-to-seven-minute hold times, depending on the language. Some are much faster, Spanish is much faster, but you have hold times...
Kimberly: Because these are human translators you have to wait for.
Michael: Exactly. They’re hitting a button. It transfers to a human interpreter, and if there’s not one, they wait until that person’s there. And so that was just a very acute need. So we started there and we thought that might just be the product, right? Like, uh, you know, this is right when ChatGPT came out.
We thought the most thoughtful application of AI in an emergency setting was as a co-pilot, but you learn next to customers over time. We're now on track to process over 20 million calls this year, with some of the biggest cities in the country. You just start to look at the transcripts side by side with customers and see that most of this traffic is mundane. You see parking tickets, you see noise complaints, and you're like, oh, we might be able to fully handle that and reduce the burden. And so you start to introduce voice, and people really liked it.
The way we found quality assurance: a director in one of our largest cities had put keyword alerts on CPR calls. Every time a CPR call came up, he'd get a notification and could go and watch that call. And I was like, 'Why are you watching CPR calls? Why is that useful?' And he's like, 'Oh, I'm just trying to do QA and chip in. That's one that we really want to get right.' I was like, 'How do you do QA today?' And he took me in the back and showed me this team of people with pen and paper. They were listening to calls, and he was pounding his chest about getting to 3% of calls a year. He thought that was amazing. And I was like, 'Dude, just give us the form.' Now we have the form, and we get to a hundred percent. So it started as just transcription and translation, because the burden on a call taker is so high: can we just assist? And then naturally you start stumbling upon all these problems. And I still think it's just the beginning.
Kimberly: Yeah. I think one of the interesting things there is: on the one hand, you are replacing the work that people were doing for QA, but on the other hand, there was so much more work they could have done that they simply didn't have the manpower to do. So can you talk about that a little bit: the balance between augmenting some of the work and, you know, taking over some of it end to end?
Michael: Yeah, we get this question all the time in centers. I use two silly examples, and I think they're useful as a frame. I don't think the AI applications are that different from past technology applications. Like, where are we using technology today? There's, again, two categories: we automate work we don't want to do, and we fill in gaps of things we cannot do.
So, my two silly examples; I'll walk into centers and have this conversation. I probably could wash my clothes by hand. You know, I could do the whole thing, hang 'em up with clothespins. I don't know if I wanna spend my time doing that. And so I use a washer and I use a dryer, and it makes a lot of sense. I could do it, but I don't do it. That's non-emergency calls to me. Like, I know you could do it. You might even be able to do it better than we could. But should you be doing that, or should you be focused on the cardiac arrest call? Should you be focused on the real emergency? And everybody unanimously is like, yes, I should be focused on the emergency. So that's one class, and we're using technology for that today.
The second silly example: I wish I could run 60 miles an hour. That'd be amazing, you know, I'd be in the Olympics, but I can't. So I jump in a car, and man and machine together get to the destination way quicker, because I'm using technology to do that. I'd never be able to do that no matter how much I trained. And so my example to call takers: I'll go into the room and ask, 'How many of us speak 30 languages?' I haven't seen a single hand, and I joke that if I do see a hand, you should pay them double; they're like the best call taker ever. But nobody can do that job unaided. We're not taking anyone's position in that world. We are allowing them to do the job better, and now they don't feel panicked while they're waiting on the line for an interpreter, able to tell the caller is urgent but having no idea what they're saying.
I can give you an example. We had a call in a suburb of a really big city, and they had just deployed the tool, probably two weeks in, so they're just getting used to it, starting to remember how to use it. A call comes in, it's in Mandarin, and they hit the human interpreter button like they normally would and just wait, 'cause they forget they have Prepared. And then they look up and they're like, 'Okay, I have the transcription.' They could tell the guy was really urgent, and he was saying, 'My daughter was shot, my daughter was shot.' And they catch it: oh my goodness, okay, I can dispatch that. So they send law enforcement, they send EMS, and by the time responders got on scene, the interpreter had just gotten on the line. And they saved the girl. Their perspective is that the girl would not have made it if they had waited that entire time. So you just put yourself in that scenario, as the call taker there, and imagine how helpless you would feel just sitting there waiting for the human interpreter. There's millions of examples like that where, yeah, you're just empowering them to do better.
Kimberly: And how do you manage the data effects? Obviously this is sensitive data too, and I’m sure your customers care a lot that the data stays confidential, stays within their system or your system. So how do you guys manage that data sensitivity?
Michael: One is, everything's in the US. You would think that is fairly common, but it is unique in the space, so that's one thing we stick by. The second is, there are data security policies, like CJIS for criminal justice information security, that we adhere to, and then your typical SOC 2 compliance, et cetera. But we don't train on the data, and the follow-up question is: 'Why?' It's because of exactly this: it's all localized.
Kimberly: Got it. So I guess having the dataset helps you from a product perspective in that you can better understand what you need to build and such, but you don’t actually use the data in the process of building a product.
Michael: Yeah, exactly. It’s more that when we see something happen in the product, those trends you know quickly are real.
Kimberly: Well, one of the product things that I do wanna latch onto is this voice AI product, because I think for a long time you guys were a little bit more of a co-pilot/assistant when it came to crisis calls. But the idea of launching a voice AI product that can actually handle end-to-end a lot of the non-emergency calls coming in, I think, is very innovative in this space. And I'd love to hear: how did you guys initially come up with this idea? How does that product work in practice? I imagine there's a lot of systems you might need to integrate into, et cetera.
Michael: The same way we found QA or any of these other products: there was just a ton of pull. There was pull in the problem. I don't think folks knew what the solution was, but it was clear that a large part of the traffic was mundane, and you could ask yourself the question: does a person need to spend time on this? As we started to work through solutions, we threw out the idea of: how would you feel about an agent jumping in and handling the call? And there was real appetite for it. I think the reason there was appetite is, we went to: 'Where does technology make sense?' This was oftentimes not a 'people vs. bot' thing. Again, you go to some large agencies in California and the hold times are 40 minutes, so this is 'bot vs. nothing.' And we could do much better than that.
And then also, even if you had zero hold times, you were getting to that outcome by spending a ton. You know, forced overtime in cities is a very, very common thing where you’re pulling people in on holidays, et cetera, to just try to answer the call volume. And so I think the pain was just so clear, right? And the call volume’s only going up and the people are only going down. And so that general problem I think people really resonated with.
And then when we started launching it early, it really worked. It's just good at the job. We've started to get to a point in some cities where callers actually ask for our system, which is exciting. I think they just know they'll get someone instantly. Practically, it'll answer your question; it'll take the call. Some cities have chosen to have a person monitoring, you know, 10 to 20 calls at once. There is a UX there: they can watch the transcript, see what's happening, and barge in if they want to. So there is a human in the loop; even though they're not interacting directly with the caller, they can jump in if they want to. Normally, though, it'll just handle it, all the way through to the action. That action could be a call for service, which means integrating directly into the CAD system; a responder would see it, but it's us punching in the summary. Or say there's a parking ticket you're contesting, saying, 'No, I didn't do that,' and you wanna track the progress: we'll send you a link so you can track it, but we'll fill it out in the system. So we'll handle it end-to-end.
Kimberly: Do customers have appetite to allow AI voice agents to handle 911 calls, or do you think today that’s a bridge too far?
Michael: I think what humans are really, really good at is being empathetic to the caller, and I think the only reason we're automating anything is because we believe call takers are tied up in situations where empathy's not that important. Like, again, a noise complaint or a parking ticket. They really should be focusing their time and attention on the cardiac arrest call. That's the only reason we're automating. And so I think AI makes sense as a co-pilot in those situations: freeing up the human to do what they do best, or just offloading things so that they can spend time on that.
Kimberly: One other thing that I think is really interesting: when people think about vertical AI applications, and Prepared is such an interesting example of one, a lot of times there's a lot of education that has to be involved. Like, once you deploy the product on site, it takes a while for people to get used to it. Can you talk a little bit about that? How does the onboarding or implementation process for a customer actually work?
Michael: Some of our products do take a lot of time; with others, you'd be surprised how quickly people get used to them. Again, with translation, you're just sitting there while you wait for the human interpreter, so in that scenario there's not much training needed. They normally put up a separate monitor for us, and the second that happens, they just look up. We've had centers that have literally signed and the next day are deployed and using it. So that's the quickest: an average of 24 hours. Then there's other products, like quality assurance, that take slightly longer, often because you have defined workflows and processes that you're using today. It's really more a matter of getting them familiar with the UX and how they can submit feedback. We also will tell them, if we check a box and say someone did this versus didn't do it, we'll give them a 'why' behind the AI's decision. And so getting used to 'okay, should I agree with it here, or go against it here?' is just a small thing that takes time, and just change management. But often you see people getting going with it relatively quickly.
Kimberly: And those folks, or just people in your go-to-market or implementation org in general: how often are they from the industry versus maybe from tech writ large?
Michael: Very often they're from the industry. You can think of us culturally as: half of the business is technologists, people that come from the Valley or from fast-growing startups, in product, engineering, et cetera. The other half comes from 911 and public safety. It's quite the group.
Kimberly: And you bring them together.
Michael: Yeah. And it's really cool, because I think we can just solve problems so quickly, 'cause a lot of our customers are now on the team. And again, we already have a large dataset, but now you can just go to somebody and be like, 'Hey, does this make sense?'
Kimberly: Got it. And how do you manage that culture? Because it’s a little bit unusual also to have folks who are from an industry that a lot of people in tech aren’t familiar with. And then a lot of people who are really cutting edge technologists.
Michael: I think it's the only way to do it, but it is not easy. To work at Prepared, you have to have deep empathy for our customers. So one of the things we do is, within the first 90 days of onboarding, somebody has to go visit one of our customers. We started doing that really early, and it was just so powerful. You can only really understand what it's like to take a non-English call by seeing it happen, seeing somebody struggle through that. Bringing that into the company has created a culture of empathy for our centers. And so I don't know if you can do it another way.
Kimberly: What do you think is the role of AI in government and public services?
Michael: I think, like most industries, you will see the mundane stuff get automated. So we've talked about 311; I go to a lot of cities, and that's like the storefront of the city, and we don't do it so well today. I guess the zoomed-out point is: I think the goal of government at the end of the day is, somebody needs help, help them. Right? And you can trace any big or small thing that we do in government back to that idea. Like, I need to get a license, I go to the DMV. I want a package, they bring it to me, et cetera. And 311, or even 911, is the epitome of that. In 911, I really need help. Please help me. And we don't do it great today from a technology perspective, right? So my hope is that this is the first place we apply this technology.
But you use the same frame that you'd use for anything else: what is keeping us from doing our best work? Let's take that off; AI could do that entirely. And then, for the things I'm really good at, where do I have gaps? I think you'll see AI jump in there.
So what does that mean for society? It means, okay, now every city's gonna have a line that's manned 24/7 for any request that you have, and that has downstream impacts for 911. It means anytime you have a real emergency, you know somebody's gonna be there, and that they're gonna be ready, 'cause hopefully they've had a break and they're not handling non-emergency calls. So I think now every city will be able to do what New York City has done. People don't know this, but New York City, and this is a 2009 stat, so maybe it's even gone up, spends almost $50 million a year on 311. That's how expensive it is to, you know, staff a center like that. And that's the extreme; most cities can't even think about staffing that. Now I think that's attainable. Now I think that's possible.
Kimberly: So give us a sense of the scale Prepared is at now. You were founded a couple years ago, and I think you mentioned that very early on you hit over 1,000 centers. What's the rough scale now, and what's the forward-looking trajectory?
Michael: Yep. There's about 6,000 911 centers in the US that dispatch on behalf of about 72,000 agencies: police, fire, EMS. We cover about a sixth of those 911 centers, and so a sixth of those responding agencies can get data from us too. In terms of our AI products, we're on track to process over 20 million calls this year.
So some of the largest cities in the US are now on the platform. We're rapidly approaching 150 people, which is great. And we just closed our Series C, which we were excited about; we've raised roughly $130 million in funding.
Kimberly: And so if you look at like, let’s say five to ten years and you say Prepared was a success or continues to be a success, what does that look like to you?
Michael: I think you're seeing today, in small pockets, what will be distributed widely. In the future, 911 operators will become more like air traffic controllers: they'll be able to focus on what they're really good at, just being an empathetic ear to the caller, while everything in the background just gets done and the entire non-emergency call volume is offloaded. I think every city will have a 311 line, and again, you'll never have this gap if you call in and don't speak English. You won't have any critical detail being missed. I think you'll also see the technology, these eight screens, condensed to one, hopefully just one intelligent system.
Kimberly: That shows Prepared.
Michael: Yes, that'd be great. We could call it that. This is a little bit of a tangent, but I think one of the things that could be dangerous in our space is that we have kind of a fork in the road right now: AI could either be the contributing factor that takes you to 20 screens, or double that, or it could bring it down to one. So why are we trying to build all of these component parts into a single screen? For two reasons. One, in the future it won't just be a matter of convenience. Like, eight screens is inconvenient; I can't think of a better word. But two, I think it could be dangerous in the future if you don't bring them together, 'cause AI's only as good as the data you give it. If you silo all these pieces of information into eight different systems, it's not gonna have the full context.
Kimberly: What are you most excited about for the rest of the year?
Michael: I think the thing that makes it easy for us to get excited is we recognize that right now, this moment, is when you'll do your best work. You don't often realize it, but when the internet came out, I bet people looked back and were like, 'Oh, that's when I had the most impact.' And that's happening now with AI. So what I really push our team, and any new employee we'd be chatting with, to think about is: be really deliberate about what you work on right now. 'Cause again, this tech shift is totally outside of you, but it's wind in your sails; you can just make the most impact now. And the cool thing for us is we kind of stumbled into that inflection while working in a mission-critical, lifesaving world. We think that's just such a privilege. And so I think I'll do my most impactful work in the direction of saving lives, and I'm excited about that.
Kimberly Tan is an investing partner at Andreessen Horowitz, where she focuses on SaaS and AI investments.
Michael Chime is the CEO and co-founder of AI emergency response platform Prepared.