For decades, companies have poured billions of dollars into market research to better understand their customers, only to be constrained by slow surveys, biased panels, and lagging insights. Despite the $140 billion spent each year on market research, software accounts for little more than a rounding error of that spend. Case in point: Traditional human-driven consulting firms Gartner and McKinsey are each valued at $40 billion, while software platforms Qualtrics and Medallia are worth $12.5 billion and $6.4 billion, respectively. And that’s just accounting for external spend.
With AI, we’re seeing yet another case of a market ready to shift labor spend into software. Early AI players are already leveraging speech-to-text and text-to-speech models to build AI-native survey platforms that conduct autonomous video interviews with people, then use LLMs to analyze results and create presentations. Those early movers are growing quickly, signing large deals, and co-opting budget that traditionally went to market research and consulting firms.
In doing so, these AI-enabled startups are reshaping how organizations derive insights from customers, make decisions, and execute at scale. However, most of these startups still rely on panel providers to source humans for surveys.
Now we’re seeing a crop of AI research companies replace the expensive human survey and analysis process entirely. Instead of recruiting a panel of people and asking them what they think, these companies can go as far as simulating entire societies of generative AI agents that can be queried, observed, and experimented with, modeling real human behavior. This turns market research from a lagging, one-time input into a continuous, dynamic advantage.
The field of customer research has slowly incorporated software over time. In the 1990s, research was primarily conducted manually, with pen and paper data collection and analysis. Qualtrics and Medallia, among others, introduced online surveys in the early 2000s, followed by real-time analytics and mobile-based survey collection. Both companies used surveys to build deeper experience management tools around customers and employees. In parallel, the rise of bottom-up, self-serve tools like SurveyMonkey enabled individual teams to run quick, lightweight surveys — broadening access to research, but often resulting in fragmented efforts, inconsistent methodologies, and limited organizational visibility. These tools lacked the governance, scale, and integration required to support enterprise-wide research operations.
Consulting firms, McKinsey included, built entire divisions dedicated to deploying software-based research tools for customer segmentation and consumer insights at scale. These engagements often took months, cost millions, and relied on expensive and biased panels. The research process itself often takes weeks: recruiting a panel of participants, running the survey, analyzing the results, then creating the report. The results are then usually delivered to the buyer in packaged form, with little opportunity to revisit the process or dive deeper into the findings.
Most enterprises still rely on quarterly research to guide major launches, but that doesn’t provide the ongoing insights needed for fast, everyday decisions. Because traditional research is expensive, small bets and early ideas often go untested. Even companies eager to modernize find themselves stuck with outdated tools and slow processes.
In the late 2010s, a new wave of UX research tools emerged, built directly for product teams rather than consultants or survey ops. Instead of outsourcing user research, companies began embedding it into their development loops. Through unmoderated usability tests, in-product surveys, and prototype feedback, tools like Sprig, Maze, and Dovetail enabled faster, customer-informed decisions. These research tools demonstrated just how important integrated research is in modern businesses. But while such tools provided real-time value for software-driven teams, they were less oriented toward non-software companies and were primarily optimized for team-level use, rather than cross-functional use. AI-native research companies build on the advances of UX research: insights are immediate and applicable across teams, products, and industries, whether software-native or not.
AI has already increased the pace and decreased the cost of surveying. AI makes it easy to generate surveys quickly and adapt questions in real time based on how people respond. Analysis that once took weeks now happens in hours. Insight libraries learn over time, spotting patterns across projects and extrapolating early signals. This shift doesn’t just make research accessible to smaller companies; it also expands the set of decisions that can be informed by data, from early product concepts to nuanced positioning questions that were previously too expensive to investigate. AI-powered research tools are now used far more broadly across a company’s marketing, product, sales, and customer success teams, as well as by leadership.
These improvements matter. But even AI-powered surveys are still limited by the variability and accessibility of human panels and often depend on third-party recruiting to access respondents, limiting pricing control and differentiation.
Enter generative agents, a concept originally introduced in the landmark paper Generative Agents: Interactive Simulacra of Human Behavior. The researchers demonstrated how simulated characters powered by large language models can exhibit increasingly human-like behavior, driven by memory, reflection, and planning. While the idea initially drew interest for its potential in building lifelike, simulated societies, its implications go beyond academic curiosity. One of its most promising commercial applications? Market research.
If this sounds abstract, here’s an example of how it might play out: Ahead of a new skincare launch in France, a beauty company could simulate 10,000 agents modeled on gen Z and millennial French beauty consumers. Each agent would be seeded with data from customer reviews, CRM histories, social listening insights (e.g. TikTok trends around “skincare routines”), and past purchase behavior. These agents could interact with each other, view simulated influencer content, shop virtual store shelves, and post product opinions in AI-generated social feeds, evolving over time as they absorb new information and reflect on past experiences.
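A scenario like this might be seeded and polled roughly as follows. This is a toy sketch under stated assumptions: the `ConsumerAgent` class, its persona fields, and the rule-based `respond` method are hypothetical stand-ins for model-driven agents seeded from real review, CRM, and social-listening data.

```python
# Sketch: seed simulated consumer agents from persona data, then poll them.
from dataclasses import dataclass, field


@dataclass
class ConsumerAgent:
    name: str
    segment: str                                  # e.g. "Gen Z", "millennial"
    traits: dict = field(default_factory=dict)    # seeded from reviews, CRM, social listening
    memory: list = field(default_factory=list)    # grows as the agent absorbs new information

    def observe(self, event: str) -> None:
        """Record a simulated experience (influencer content, shelf visit, etc.)."""
        self.memory.append(event)

    def respond(self, question: str) -> str:
        """Stand-in for an LLM prompt built from this agent's traits and memory."""
        if self.traits.get("price_sensitive") and "buy" in question.lower():
            return "only if it's under my usual budget"
        return "I'd try it if my feed is talking about it"


# Seed a tiny panel; a real simulation might run thousands of such agents.
panel = [
    ConsumerAgent("a1", "Gen Z", {"price_sensitive": True}),
    ConsumerAgent("a2", "millennial", {"price_sensitive": False}),
]
for agent in panel:
    agent.observe("saw influencer post about the new serum")

answers = [a.respond("Would you buy the new serum?") for a in panel]
```

Because the panel is just data, it can be re-queried at any time with new questions or new stimuli, which is what turns research from a one-time input into something continuous.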
What makes these simulations possible isn’t just off-the-shelf LLMs, but a growing stack of sophisticated techniques. Agents are now anchored in persistent memory architectures, often grounded in rich qualitative data like interviews or behavioral histories, enabling them to evolve over time through accumulated experiences and contextual feedback. In-context prompting supplies them with behavioral histories, environmental cues, and prior decisions, creating more nuanced, lifelike responses. Under the hood, methods like Retrieval-Augmented Generation (RAG) and agent chaining support complex, multi-step decision-making, resulting in simulations that mirror real-world customer journeys. Fine-tuned, multimodal models — trained across text, visuals, and interactions on domain-specific tasks — push agent behavior beyond the limits of text.
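The memory-and-retrieval piece of that stack can be illustrated with a toy agent: memories are stored in order, retrieval scores them by keyword overlap and recency (a crude stand-in for the embedding-based similarity search used in RAG), and a "reflection" step collapses recent memories into a higher-level observation. The class name, scoring scheme, and sample memories are all illustrative assumptions, not any published system's implementation.

```python
# Toy generative-agent memory: store observations, retrieve by
# relevance + recency, and periodically "reflect" into a summary memory.

class AgentMemory:
    def __init__(self):
        self.records = []  # list of (sequence_number, text)
        self.clock = 0     # deterministic stand-in for timestamps

    def store(self, text: str) -> None:
        self.clock += 1
        self.records.append((self.clock, text))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Rank memories by keyword overlap, breaking ties by recency.

        Production systems would use embedding similarity here (the
        retrieval step of RAG) rather than word overlap.
        """
        q_words = set(query.lower().split())

        def score(item):
            seq, text = item
            overlap = len(q_words & set(text.lower().split()))
            return (overlap, seq)  # prefer relevant, then recent

        ranked = sorted(self.records, key=score, reverse=True)
        return [text for _, text in ranked[:k]]

    def reflect(self) -> str:
        """Collapse the most recent memories into one higher-level memory."""
        recent = [text for _, text in self.records[-3:]]
        summary = "reflection: " + "; ".join(recent)
        self.store(summary)
        return summary


mem = AgentMemory()
mem.store("tried the vitamin C serum, liked the texture")
mem.store("saw a TikTok saying the serum is overpriced")
mem.store("friend asked about skincare routines")

top = mem.retrieve("opinions on the serum")
summary = mem.reflect()
```

The key idea this sketch captures is that retrieval is selective: only the memories most relevant to the current query are surfaced into the agent's context, which is what lets a long-lived agent stay coherent without re-reading its entire history.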
Early platforms are already leveraging these approaches. AI-powered simulation startups such as Simile and Aaru (which just announced a partnership with Accenture) hint at what’s coming: dynamic, always-on populations that act like real customers, ready to be queried, observed, and experimented with.
Agentic simulation doesn’t just accelerate workflows that once took weeks; it fundamentally reinvents how research and decision-making happen. It also overcomes many traditional research limitations by creating a research tool that can live inside a workflow. This leap is not just in efficiency. It’s in fidelity.
If history is any guide, the companies that dominate this AI wave won’t just have the best technology, they’ll master distribution and adoption. Qualtrics and Medallia, for example, won early by prioritizing adoption, familiarity, and loyalty, embedding themselves deeply into universities and key industries.
Accuracy obviously matters — particularly as teams measure AI tools against traditional, human-led research. But in this category, there are no established benchmarks or evaluation frameworks, which makes it difficult to objectively assess how “good” a given model is. Companies experimenting with agent simulation technology often have to define their own metrics.
Crucially, success doesn’t mean achieving 100% accuracy. It’s about hitting a threshold that’s “good enough” for your use case. Many CMOs we’ve spoken with are comfortable with outputs that are at least 70% as accurate as those from traditional consulting firms, especially since the data is cheaper, faster, and updated in real time. In the absence of standardized expectations, this creates a window for startups to move quickly, validate through real-world usage, and become embedded in workflows early. That said, startups must continue to refine the product: benchmarks will emerge, and the more you charge, the more customers will demand.
At this stage, the risk lies less in imperfect outputs than in over-engineering for theoretical accuracy. Startups that prioritize speed, integration, and distribution can define the emerging standard. Those that delay for perfect fidelity may find themselves stuck in endless pilots while others move to production.
AI-native research companies are fundamentally better positioned than traditional firms to redefine expectations for market research. While legacy market research firms may have deep panel data, their business models and workflows are not built for automation. In contrast, AI-native players have already developed purpose-built tooling for AI-moderated research and are structurally incentivized to push the frontier, not protect the past. They’re primed to own both the data layer and the simulation layer. The widely cited Generative Agent Simulations of 1,000 People paper illustrates this convergence: its coauthors relied on real interviews conducted by AI to seed agentic profiles — the same type of pipeline AI-native companies are already running at scale.
To drive impact, insights must be applicable beyond UX and marketing teams to product, strategy, and operations. The challenge: offering just enough service support without recreating the heavy overhead of traditional agencies.
The long era of lagging research is ending. AI-driven market research is transforming how we understand customers, whether through simulation, analysis, or insight generation. The companies that adopt AI-powered research tools early will gain faster insights, make better decisions, and unlock a new competitive edge. As shipping products becomes faster and easier, the real advantage lies in knowing what to build.