Last year, Kristina Shen and I expressed our anticipation for a “Wave 2” of B2B AI applications focused on synthesizing information, which we referred to as “SynthAI.” If the first wave of generative AI applications was about creating new content — like emails, lists, or marketing copy — this second wave focuses on condensing information in a manner that saves users time. The crux of this is to own the workflow by getting users to do more of their work in your application. This is important because it leads to stickiness with customers, and facilitates expanding to additional use cases over time.

A year later, we are indeed seeing the shift toward Wave 2, with more startups focused on using AI capabilities to capture these end-to-end workflows. I wanted to follow up to share some observations and thoughts on how startups can more deliberately target workflows.

Owning the workflow

What is a workflow, anyway? A workflow is a sequence of steps that someone performs to complete a task or do a part of their job. In knowledge work, the person needs to gather some information, apply context, and process it into a desired output – perhaps an insight or a decision. Application software strives to help us perform these workflows faster.

One way it saves us time is by capturing, storing, and representing information in a way that’s easy to work with. The other is performing the work for us. The ultimate promise of software and automation is that it could complete the entire workflow for us. We don’t actually want to do the job ourselves; we’d prefer to just click a button, and then we’re done!

How do we think about this in the context of AI and large language models (LLMs)? The prompting mechanism that has popularized LLMs is anchored in the modality of input→output. For example, we refer to categories of foundation models as “text to speech” or “text to video” or “image to video,” etc. — literally describing what input we’re providing and the output we expect. This paradigm maps to workflows quite well. We are essentially trying to go from an input of context and information to an output of action/insights/decision. The workflow is the process in between.

However, a key challenge with these prompting mechanisms in the context of B2B applications is that B2B applications have already been designed with a workflow in mind. This is where the chat UX / prompting mechanism breaks down. If the user needs to “chat” with the AI system to maneuver or manipulate the context and information, it creates an alternative set of work that disrupts an otherwise native workflow.  

So the question becomes: how do we build this input→output natively into a product? Arguably, the ideal outcome is for the workflow process to become a click of a button. This is how AI can really “own” the workflow: by converting it into a feature or capability within the product. This is exactly the potential we believe SynthAI holds.

Indeed, we are already seeing products that encapsulate this process. 

Example: FigJam

Consider a team brainstorming exercise where everyone’s ideas are being written on sticky notes. Typically, the output is to identify major themes, as well as some specific takeaways under each. The workflow is:

  1. Group similar or repetitive sticky notes together.
  2. Identify and name what each cluster represents.
  3. Summarize themes and takeaways within the clusters in a succinct document.

Traditionally, each of these steps needs to be done manually because the context around the exercise and what’s written on the sticky notes is idiosyncratic to the exercise. Thus, a rote algorithm or script won’t work. 

However, it turns out these are all steps that LLMs are particularly good at. They are all different varieties of synthesis (ergo, SynthAI) that, theoretically, could be accomplished with the click of a button. And that’s exactly what FigJam — Figma’s online whiteboard for teams to brainstorm, meet, and work together — has done. 
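
To make the pattern concrete, here is a minimal sketch of that three-step synthesis, written in Python against a hypothetical call_llm(prompt) helper standing in for whichever LLM API you use. It illustrates the general input→output pattern, not how FigJam actually implements its feature.

```python
# A minimal sketch of the sticky-note synthesis workflow, assuming a
# hypothetical call_llm(prompt) helper that returns the model's text output.
# This illustrates the general pattern, not FigJam's implementation.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to any LLM API and return its reply."""
    raise NotImplementedError("Wire up your LLM provider of choice here.")

def synthesize_brainstorm(sticky_notes: list[str]) -> str:
    notes = "\n".join(f"- {note}" for note in sticky_notes)

    # Steps 1 and 2: group similar notes and name each cluster.
    clusters = call_llm(
        "Group these brainstorm sticky notes into clusters of similar ideas "
        "and give each cluster a short theme name:\n" + notes
    )

    # Step 3: condense the clusters into a succinct takeaways document.
    return call_llm(
        "Write a brief summary of the major themes and key takeaways "
        "from these clustered sticky notes:\n" + clusters
    )
```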

A product manager or researcher could have easily spent up to an hour summarizing the results of the brainstorming exercise, and now they can do it with a few clicks in FigJam.

Example: Macro

Another example is an editing process where there are multiple parties suggesting edits and adding comments to the same doc. For much of knowledge work — especially in high-stakes negotiations — this is still done on Word docs, meaning each person is submitting their changes asynchronously. Typically, the desired output is a summary of all the changes, incorporated into a single document. Someone needs to coalesce all the different versions, and their workflow is like this:

  1. For each version, identify the changes.
  2. If there are multiple changes on the same portion of the document, identify how the changes differ from each other.
  3. Summarize the impact of the changes, as well as how changes from different versions might conflict with each other.

Again, traditionally, each of these steps must be done manually because the context and information are idiosyncratic not only to the original document, but also to the different parties submitting new versions. And, again, it turns out these are all steps that LLMs are particularly good at. Enter Macro, a next-gen document editor with built-in AI and redlining tools. Macro has built an “AI Compare” feature that automates all these steps.
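
Here is a similarly minimal sketch of that comparison workflow, again assuming a hypothetical call_llm(prompt) helper and using Python’s standard difflib module to extract the raw changes. It illustrates the pattern, not Macro’s actual “AI Compare” implementation.

```python
# A minimal sketch of the multi-version comparison workflow, assuming a
# hypothetical call_llm(prompt) helper. difflib (standard library) extracts
# the raw changes; the LLM explains and summarizes them.
import difflib

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to any LLM API and return its reply."""
    raise NotImplementedError("Wire up your LLM provider of choice here.")

def compare_versions(original: str, versions: dict[str, str]) -> str:
    # Step 1: for each version, identify the changes against the original.
    diffs = []
    for author, text in versions.items():
        delta = "\n".join(
            difflib.unified_diff(
                original.splitlines(), text.splitlines(),
                fromfile="original", tofile=author, lineterm=""
            )
        )
        diffs.append(f"Changes from {author}:\n{delta}")

    # Steps 2 and 3: explain overlapping edits and summarize overall impact.
    return call_llm(
        "Here are edits from several parties to the same document. "
        "Identify where edits touch the same passage, explain how they "
        "differ, and summarize the overall impact and any conflicts:\n\n"
        + "\n\n".join(diffs)
    )
```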

This series of comparison tasks could take a lawyer hours to complete. Now they can do it with a few clicks in Macro.

Example: Claygent

A common task in sales (and many professions) is researching specific attributes about a company or lead. For example: Who are their competitors? How do they price their product? What do they use as their POS provider? The output can be as simple as a table with a list of leads and a column tracking the attribute. The workflow is less simple, however, as someone needs to navigate the website and be able to identify when they’ve found what they’re looking for:

  1. Navigate to the company website.
  2. Look through headers and/or the sitemap to see if there’s a page that might have what you’re looking for. Navigate to it.
  3. If the page doesn’t have it, repeat step 2.
  4. Write down the attribute in the table.
  5. In the case of a lead list, repeat steps 1-4 for each lead.

This type of information scavenger hunt was traditionally difficult for a rote algorithm or script because the ways in which the information appears on company websites can be highly idiosyncratic. It could be further complicated by the company not having the information on their website, but perhaps in a third-party article that one would find with a Google search. That said, LLMs are particularly good at being guided to find information — SynthAI is essentially tracking down the output we’re looking for.
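
A minimal sketch of that guided research loop might look like the following, assuming the same hypothetical call_llm(prompt) helper and the requests library for fetching pages. It illustrates LLM-guided navigation in general, not how Claygent actually works.

```python
# A minimal sketch of an LLM-guided research loop: fetch a page, ask the
# model whether it answers the question, and if not, which link to follow.
# call_llm(prompt) is a hypothetical helper for whichever LLM API you use.
import requests

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to any LLM API and return its reply."""
    raise NotImplementedError("Wire up your LLM provider of choice here.")

def find_attribute(start_url: str, attribute: str, max_hops: int = 5) -> str:
    url = start_url
    for _ in range(max_hops):
        # Steps 1 and 2: fetch the current page and let the model inspect it.
        page = requests.get(url, timeout=10).text
        answer = call_llm(
            f"You are researching '{attribute}' for a company. Here is the "
            f"page at {url}:\n{page[:8000]}\n\n"
            "If the page answers the question, reply 'FOUND: <answer>'. "
            "Otherwise reply 'NEXT: <full URL of the most promising link>'."
        )
        if answer.startswith("FOUND:"):
            # Step 4: record the attribute for this lead.
            return answer.removeprefix("FOUND:").strip()
        # Step 3: the page didn't have it; follow the model's suggestion.
        url = answer.removeprefix("NEXT:").strip()
    return "not found"

# Step 5: repeat across an entire lead list, e.g.
# results = {lead: find_attribute(url, "pricing model") for lead, url in leads.items()}
```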

Claygent is Clay’s AI-powered web scraper that automates this type of research. In its current form, users provide guidance on the task and the desired output format. Over time, one could easily see this upfront process becoming simpler. For example, for common attributes (e.g., “pricing model” or “competitors”), Claygent can simply provide them as field options, already trained on how to retrieve this information. Further, Claygent can learn common question types, so that even if a user provides poor guidance, the product can still provide the optimal results.

It’s not uncommon to have lead lists that are thousands of companies in length. The time savings from being able to automate this with a few configurations in Clay are enormous.

Where will the trend continue?

It’s evident that AI will only get better at automating workflows, and companies will build more AI-enabled features and capabilities natively in their products. We see two natural evolutions of this trend:

  1. AI automations will execute workflows more proactively.
  2. AI automations will reimagine user experiences.

More proactive

As suggested at the beginning of this piece, the ideal outcome is for the workflow process to be turned into a click of a button. Let’s push this idea further. If we trust the AI solution to perform the workflow accurately, and the system can recognize when the workflow is needed, then the AI solution can proactively perform the workflow without needing any user action. We may still want to know that the AI solution performed the workflow on its own, and in that case, delivering a notification is sufficient.

But what if we are OK with not even knowing? For example, if a customer mentions a specific objection during a sales call, the AI agent could automatically ask someone on the solutions engineering team to reach out; the account executive doesn’t need to be involved in the workflow at all.
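
A minimal sketch of such a proactive trigger, assuming a hypothetical call_llm(prompt) helper and a hypothetical notify_solutions_engineering() hook into your messaging or ticketing tool, might look like this:

```python
# A minimal sketch of a proactive workflow trigger: the system recognizes
# when the workflow is needed and performs it without a user prompt.
# Both helpers below are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to any LLM API and return its reply."""
    raise NotImplementedError("Wire up your LLM provider of choice here.")

def notify_solutions_engineering(summary: str) -> None:
    """Hypothetical hook: open a ticket or message the SE team."""
    raise NotImplementedError("Wire up your messaging or ticketing tool here.")

def handle_call_transcript(transcript: str) -> None:
    # Recognize when the workflow is needed: did the customer raise a
    # technical objection that solutions engineering should address?
    verdict = call_llm(
        "Does this sales call transcript contain a technical objection that "
        "a solutions engineer should follow up on? Reply 'YES: <one-line "
        "summary>' or 'NO'.\n\n" + transcript
    )
    if verdict.startswith("YES:"):
        # Perform the workflow proactively; the account executive
        # does not need to be involved at all.
        notify_solutions_engineering(verdict.removeprefix("YES:").strip())
```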

This idea naturally allows the scope of the AI-automated workflows to expand. Since the AI solution is trusted with smaller steps, it can take on an extended workflow or a more complex input→output scenario before a human needs a notification. One example is that an AI solution could proactively surface relevant work being done by another team in the company, based on the early progress you make on a project, and propose ways for the different teams to collaborate. Another example is an AI avatar that could actively participate in a meeting alongside a teammate; instead of the account executive chatting with a copilot on the side, the AI avatar would join the call and proactively answer questions when it knows its human counterpart might not know the answer.

New user experiences

Taking this even a step further, having reliable, proactive AI capabilities should change how we interact with products at a more fundamental level. One of our favorite thought experiments on the team is to imagine how an AI-powered CRM would manifest. In the most extreme case, an AI CRM would not at all resemble a CRM as we know it today.

Today, we imagine accounts in a relational database, with static objects and fields housing information, and deals moving through pre-defined stages. However, the emerging design of AI-native apps is to ingest all the contextual data (e.g., all sales activities taking place across all SaaS apps) and represent these relationships in embeddings. This allows AI systems to capture nuance and context that’s hard to represent in a tabular or linear mapping. AI features optimized for a relational database, such as automatically populating fields in a traditional CRM structure, may quickly become outdated.
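
As a rough illustration of that idea, here is a minimal sketch that stores sales activities as embedding vectors and retrieves context by similarity rather than by predefined fields, assuming a hypothetical embed(text) helper for whichever embedding model you use. It sketches the representational shift, not any particular vendor’s design.

```python
# A minimal sketch of an embedding-based activity store: every piece of
# contextual data becomes a vector, and relevant context is retrieved by
# similarity instead of by querying fixed fields and stages.
# embed(text) is a hypothetical helper for any embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical helper: return an embedding vector for the text."""
    raise NotImplementedError("Wire up your embedding model of choice here.")

class ActivityStore:
    def __init__(self) -> None:
        self.activities: list[str] = []
        self.vectors: list[np.ndarray] = []

    def ingest(self, activity: str) -> None:
        # Every email, call note, or support ticket becomes a vector,
        # regardless of which SaaS app it came from.
        self.activities.append(activity)
        self.vectors.append(embed(activity))

    def relevant_context(self, question: str, k: int = 5) -> list[str]:
        # Retrieve the activities most related to the question by cosine
        # similarity, rather than reading predefined fields.
        q = embed(question)
        scores = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self.vectors
        ]
        top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self.activities[i] for i in top]
```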

An AI CRM will continually improve its understanding of the company’s relationships with each customer by constantly ingesting the most up-to-date data and context. The objective would be for the AI CRM to have an opinion on the unique state of any prospect or customer. It should encourage the right action from an account executive or account manager (or perform it proactively), and surface relevant information to leadership at the right time. In such a paradigm, information doesn’t need to be viewed by the user in account- or stage-oriented views. Instead, the UX might look like a combination of summary dashboards and notifications. A relational table might merely be a way to summarize information for easier digestion by a human, or a translation layer for moving data into other applications that are not AI-native.

Another reason to believe we’ll see workflows we can’t even imagine today is that AI solutions will recognize more patterns as workflows. When we design software, we can only do so around what we, as humans, recognize as workflows. In reality, we could be performing repetitive actions in our jobs, via patterns we don’t even realize. AI solutions with more complete context could identify such patterns and define workflows around them (think about, for example, how earlier generations of AI systems mastered complex games such as Go or Dota 2). 

This will be especially true as AI systems become multi-modal by default, and we improve the techniques for these systems to seamlessly tie together context across different modes of information, whereas humans may be limited to seeing patterns within a single medium. In this world, AI would be designing automations that the user previously couldn’t fathom.

Conclusion

We’re still in the early innings of B2B AI applications. Wave 2 is certainly playing out, though, with companies embracing a clear ethos of owning the workflow. In turn, products will increasingly feature capabilities that can drive proactive automation. These nearer-term innovations will ultimately become the building blocks that allow products to layer on more complexity and perform broader automations. We’re excited to see what’s to come!

We also love meeting startups who are building products to own their respective workflows. If you’re working on one, feel free to reach out to zyang at a16z dot com. 

Thanks to Kristina Shen for being a thought partner during the early drafting for this piece. Many thanks to Conor Woods and Shannon Toliver at Figma; Jacob Beckerman at Macro; and Kareem Amin and Adam Eldefrawy at Clay for your engagement and feedback on your respective examples.
