Building & Managing Human+Agent Hybrid Teams with Karen Ng, Head of Product at HubSpot

Richie and Karen explore the evolving role of AI agents in sales, marketing, and support, the distinction between chatbots, co-pilots, and autonomous agents, the importance of data quality and context, the concept of hybrid teams, the future of AI-driven business processes, and much more.
Sep 3, 2025

Karen Ng's photo
Guest
Karen Ng
LinkedIn

Karen Ng is the executive vice president of product at HubSpot, where she leads product strategy, design, and partnerships with the mission of helping millions of organizations grow better. Since joining in 2022, she has driven innovation across Smart CRM, Operations Hub, Breeze Intelligence, and the developer ecosystem, with a focus on unifying structured and unstructured data to make AI truly useful for businesses. Known for leading with clarity and “AI speed,” she pushes HubSpot to stay ahead of disruption and empower customers to thrive.

Previously, Karen held senior product leadership roles at Common Room, Google, and Microsoft. At Common Room, she built the product and data science teams from the ground up, while at Google she directed Android’s product frameworks like Jetpack and Jetpack Compose. During more than a decade at Microsoft, she helped shape the company’s .NET strategy and launched the Roslyn compiler platform. Recognized as a Product 50 Winner and recipient of the PM Award for Technical Strategist, she also advises and invests in high-growth technology companies.


Richie Cotton's photo
Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

I always believe that we will be in a world where humans lead and AI accelerates, and the magic of humans, creativity and breakthroughs, is still going to be present.

A lot of the ways that we work together as a team are going to be really important in our agent world. We believe a lot in hybrid teams, and I believe that's the way that things are going to grow. Hybrid teams are both humans, supercharged with AIs, and agents that need to work together with humans and other agents.

Key Takeaways

1

Utilize a hybrid team model where AI agents and humans collaborate, ensuring that agents are trained with company-specific data to maximize their effectiveness and relevance in business processes.

2

Adopt a structured approach to process re-engineering by conducting proof of concepts with AI agents, setting clear hypotheses, and evaluating outcomes within a defined timeframe to identify successful implementations.

3

Enhance your marketing strategy by shifting from traditional SEO to AI Engine Optimization (AEO), focusing on creating content that is discoverable by LLMs and aligns with current search behaviors.

Links From The Show

HubSpot Breeze Agents

Transcript

Richie Cotton: Hi Karen. Welcome to the show, 

Karen NG: Richie. It's so great to be here. Thanks for having me. 

Richie Cotton: Yeah, I'm looking forward to this conversation. So I guess to begin with, I always wanna hear about cool uses of agents. So what's the coolest AI agent you've seen across sales, marketing, and support? 

Karen NG: Oh gosh. It's interesting to see how agents have progressed and do more work for us across those.

I feel like the most interesting one I've seen in all spaces has been a customer agent that resolves support tickets. It's the agents that give value immediately and fast, and things like customer agent. Being able to expand that across the whole customer journey, from catching leads to resolving support tickets to understanding renewals, is a really interesting way to think about how agents are doing work for us now.

Richie Cotton: Okay. Yeah. I think the customer support thing is perhaps where everyone starts. It's such an obvious idea: answering the same questions that your customers have over and over again. Boring for humans, and if agents can do it faster, then that's gonna be an easy win. Now, we've got chatbots, we've got agents, we've got co-pilots. There's all these sort of different formats. Do you wanna talk me through when you want each of them?

Karen NG: Chatbots are still in the world we are coming from. So when you put chatbots on your site, a lot of it is still very deterministic. So chatbots can answer questions that they know about; they can address a whole bunch of different ones.

I almost see the chatbots that are now AI-powered as being able to resolve those questions. They're smarter, they're sometimes non-deterministic. For co-pilots, I think of those as assistants to humans. So you use a copilot when you're human and you wanna augment or supercharge your work.

Like I'm gonna be the one kicking off questions. I wanna be the one that kind of asks for summaries and insights. And so that copilot is really a human assistant. Then with agents, that's when you start getting into autonomous work. Today, I believe deeply that even agents will still have humans in the loop.

Or you'll have these hybrid teams that are humans superpowered by copilots, and then agents that help you do work, but the agents get into autonomous work. So I mentioned customer agent; that was the first thing everyone started with. Getting into a couple of those new agents that I'm sure we'll talk about, one of them is a data agent.

Now you start getting into things like an operations agent that works 24/7, and it's making your data clean. It's making sure you have the most accurate data, and it goes out and finds external data that makes your CRM data even more powerful.

Richie Cotton: Okay. So tons of different use cases. And I like the idea that we've had chatbots for, like, forever, at least a decade now. And yeah, those started off being fairly deterministic; now you've got a bit more flexibility. And then there's a progression right through copilots, where you've got human and AI working together, to agents, which are, at least in theory, mostly autonomous.

You mentioned a couple of use cases, and I know with your Breeze platform you've got tons of different Breeze agents for different commercial use cases. So you mentioned data agents, you mentioned the customer support agents. Do you wanna walk me through how you think about categorizing these things? What are the different use cases there?

Karen NG: We deeply think about: if you're forming your go-to-market team, if you're running your company, what are the kinds of roles and jobs you'd want done, and what would you wanna hire for? And so that kind of gives us the basis of the really key agents that will become essential.

In every go-to-market team, we think everyone will have a customer agent, 'cause you'll have support tickets, you'll have resolutions. They'll very likely need a data agent for data cleanliness; your AI is really only as good as the data that powers it. And then I'll say, since this is the data podcast, that data matters so much around context.

And having complete context, and sometimes that complete context comes from the external world. For instance, there's a set of companies, or customers, I care a lot about, and they get a brand new funding round. I wanna know about that instantly. And so what the data agent does is help you create those custom properties that you might be interested in, like funding rounds or major executive team leadership changes.

And it helps you build that context layer. Another kind of agent that's coming is a closing agent. So a lot of times when you look at what happens in a deal lifecycle, it might be that deals get stalled. 86% of deals actually stall at a certain phase, and the customer wants to buy, you wanna sell. So what's happening that makes it so slow? The closing agent helps you get from kind of cash to close all the way through by making sure it's really easy to create quotes, custom line items, et cetera. But the way we think about agents is: these are the essential agents you'll need for your team.

Then we think of it also as: we need to create a platform. We are launching a Breeze Marketplace and Breeze Studio that help you customize and train these agents, and also craft your own if you wanted to.

Richie Cotton: Okay. That's cool. Certainly the data version is very important to remind people about: everyone gets excited about AI, but actually it doesn't do any good unless you've got good data to go in there.

So having a bit of assistance to improve the data quality is gonna be incredibly important, I think, for the success of all your other agents as well. And the closing agent: I feel like every quarter, the entire sales team in basically every company is sweating, 'cause okay, we've got a week to close all these deals, we've gotta hit our targets, and having a bit of assistance there maybe smooths things out. I don't entirely believe it, 'cause it happens at every company; the sales team is always trying to push to hit a deadline at the last minute. So maybe technology will finally help.

Karen NG: Yeah, probably. The end of every quarter seems very stressful.

Richie Cotton: Yeah. Yeah, I'm sure. Now, because you've got all these different use cases, one thing that seems to be very difficult with AI agents is determining how hard it's gonna be to make one of these things. And since you've built a whole view of them, do you have a sense of what makes an easy AI agent, what the difficult use cases are, what sorts are proving stubborn to build? How do you judge difficulty?

Karen NG: Yeah. Part of the reason for creating almost an agent platform is to make it a lot easier. 'Cause the way us as humans would think about creating an agent is, it shouldn't require a lot of code, right? Nowadays. So the way we think about building agents is: give it a set of instructions, but the instructions are the same way that you would train a team member. You can give that in natural language, which is, I want you to be a customer health agent, I want you to flag when things are at risk. And give it constraints.

So I wanna know whenever anything drops below, like, a 50% resolution rate, or if you see some of these risks, and I'll define the risks for you. So that's how easy it is now to build an agent once you have the platform. The things that become really hard with agents are multi-agent orchestration.

So what happens when agents talk to other agents? I mentioned the customer service agent; that one resolves support tickets, but as support tickets get resolved, it's still escalating to humans for the most complex questions. We have another agent called knowledge base agent that watches which escalations are happening, and then helps recommend knowledge base articles. So that interaction of two agents working together, we want them to be like human people, and that multi-agent orchestration is very tricky. The other thing that's tricky is when you have multiple tools in place. So we have agents that work across the HubSpot CRM, but, for example, the closing agent needs to generate a quote from my payments or billing system and needs to send that quote back to my invoicing system.

When you have multiple tools in play, we're seeing agent platforms still develop, but the ability to choose and rationalize which tool to use and which one to pick right now still has limitations on the number that you can have. With all AI, things are gonna get better and better, but those are a couple of things that become really tricky: multi-agent orchestration, how to think about which tools to use, what the reasoning is as you start choosing between different things. Those get complicated. Those are the hard problems. But being able to create and express an agent, we want to make easier and easier.
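Karen's "instructions plus constraints" recipe for defining an agent could be sketched roughly like this. Everything here is hypothetical for illustration (the `Agent` class, the `resolution_rate` metric); a real platform such as Breeze Studio would interpret the instructions with an LLM rather than a plain dataclass:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal agent definition: natural-language instructions plus hard constraints."""
    name: str
    instructions: str  # what you'd tell a new team member, in plain language
    constraints: dict = field(default_factory=dict)  # metric name -> minimum acceptable value

    def check(self, metrics: dict) -> list:
        """Return an alert for each constrained metric that falls below its floor."""
        return [
            f"{self.name}: {metric} at {metrics.get(metric, 0.0):.0%}, below floor {floor:.0%}"
            for metric, floor in self.constraints.items()
            if metrics.get(metric, 0.0) < floor
        ]

health_agent = Agent(
    name="customer-health",
    instructions="Flag at-risk accounts and escalate the complex ones to a human owner.",
    constraints={"resolution_rate": 0.50},  # the "drops below 50%" rule from the conversation
)

alerts = health_agent.check({"resolution_rate": 0.42})  # 42% is below the 50% floor
```

The point of the sketch is that the hard-coded part is tiny: the instructions stay in natural language, and only the constraints become explicit thresholds the platform can enforce deterministically.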

Richie Cotton: I like the idea that customizing agents is relatively simple. So if you start with a standard agent, tweaking is easier than building something from scratch. That sounds very useful, 'cause I guess you often just want something that's a little bit different. We'll come back to that in a moment. On the multi-agent orchestration: I know when you have to work with colleagues, working with someone in your own team is fine, but you work with someone in a different team and it's like they have stupid ideas and they want to do different things. They have different goals to you. You end up with these interoffice fights. Is that what happens with agents? Are we gonna see office politics for agents as well?

Karen NG: It's funny, we might see office politics. What will be interesting is to see how polite people are to their agents, whether they're using the same kind of politeness and niceties, or if you can just tell your agent, that was a bad idea, please try again. I think we'll have some of that as they work together. The interesting part about inter-office dynamics will be which agent takes precedence. So what happens when they do conflict? How would you resolve something if you have two pretty equally weighted priorities? How do you describe constraints for your prioritization structure? So I do think a lot of the ways that we work together as a team are gonna be really important in our agent world. We believe a lot in hybrid teams, and I believe that's the way that things are gonna grow. And hybrid teams are both humans, supercharged with AI, and agents that need to work together with humans and other agents.

Richie Cotton: Okay. Go on, tell me what this looks like. What does this hybrid team consist of? Is it gonna be humans in charge and then agents supporting them? What does it consist of? What's the structure?

Karen NG: The blueprint for a hybrid team. If people are thinking about how to create them, you mentioned this too: AI is only as good as the data that powers it. So I think the first part of the blueprint for any hybrid team is, how do you create shared context? You run into all kinds of shared context issues: context and data are spread between different tools.

So you have this siloed data today. A lot of that data is also trapped in things like emails or calls or transcripts. We call that unstructured data and trapped data. And then there's also data that is potentially really bad; there's a lot of bad data out in the world. So the first part of the blueprint for hybrid teams is, for sure, create that unified context, connect your data.

The second part is enable your human teams to do what they do best. I believe really deeply that we're gonna see taste and creativity still be a really key element in the AI world, where that tastemaker, that creative marketer, is the brilliant marketer that might be breaking through.

And so we wanna enable these human teams to focus on what they do best, which might be building customer relationships, understanding renewals, reading the room. So that's the second part of the blueprint. And then the last part is, now you have to build your actual team, which is probably going to be a bit of both: the supercharged human with a Breeze assistant or a co-pilot, and then which agents do you bring into the framework, and how do you get 'em all context? So you need shared context across humans and agents, or humans and AI. You need tools that empower your people to do what they do best; get rid of the manual stuff. And then the last one is combine the supercharged humans plus agents.

Richie Cotton: Okay. I like that there are different steps, almost a workflow for how you build these teams. So you said start with shared context. Now, I was chatting to one of our account executives, Joe, because at DataCamp we're trying to build out all these agents ourselves.

And so I was asking him, what are the problems you have with building agents for sales? He said some of our data's in Salesforce, some's in all these other tools, some's in BigQuery, and just dealing with these data silos, getting that shared context, is really difficult. So data silos are a really old problem, but still a challenge. Do you have any advice on how you deal with all these different data sources and pull the shared context together?

Karen NG: Yeah, definitely understanding and using tools that help you bring together that context.

We are launching Data Hub, which is two things. It's how you connect data that is across these siloed systems, like a BigQuery or Snowflake or your analytics data warehouse. But also there's a ton of data that's in spreadsheets, probably just collected across Google Sheets, CSVs, other kinds of things.

And so how can you easily bring those together? What Data Hub does is give you a home where bringing data together is as easy as adding a column in a spreadsheet, and that's what Data Hub and Data Studio allow you to do. Today it's still fairly complicated to bring together multiple data sources; you almost need a data ops team. But equip people with tools so that, if I know I want product analytics and it comes from BigQuery, and I know I want these fields, could I add a column that just lets me pull out those fields, like in a spreadsheet, the same way I'd create a pivot table?

Data Studio is allowing us to do that. It's one way that we're helping to solve siloed data.

Richie Cotton: Okay. So the idea is really, it's a point-and-click interface, just to say, these are the bits of data I want, let's bring things together, and hopefully you're solving these sort of connectivity issues.

Karen NG: Right. Because once you have siloed data, or once you're trying to bring together data, you almost always have an outcome in mind. So while the data itself gives you the shared context, it's not that useful unless you use it for something. I'll give you an example from the sales and marketing world. Oftentimes, if you find a set of customers that are high-value customers, with a higher propensity to buy, they often have characteristics like, maybe an executive entered the conversation at a certain period of time.

I almost wish I could find lookalike audiences. And so that's what we're trying to enable with siloed data: if you can bring the data together, you can start seeing patterns across it, and then you could do things like create lookalike audiences and then run a campaign if you want. It makes it a lot easier to reach people that might value your product the most.
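One minimal way to sketch the lookalike-audience idea: represent each account as a feature vector, average the known high-value accounts into a seed profile, and rank prospects by similarity to that seed. The features and account names below are invented for illustration; a production system would use far richer signals and a trained model rather than raw cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (0 when either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Feature vectors per account: [exec_in_conversation, support_tickets, seats, recent_funding]
high_value = [
    [1.0, 3.0, 50.0, 1.0],
    [1.0, 5.0, 40.0, 1.0],
]
# The centroid of known high-value customers acts as the "seed" profile.
seed = [sum(col) / len(high_value) for col in zip(*high_value)]

prospects = {
    "acme": [1.0, 4.0, 45.0, 1.0],    # looks a lot like the high-value accounts
    "globex": [0.0, 1.0, 5.0, 0.0],   # does not
}
# Rank prospects by similarity to the seed: the top of the list is your lookalike audience.
lookalikes = sorted(prospects, key=lambda k: cosine(prospects[k], seed), reverse=True)
```

The ranking, not the absolute scores, is what matters here: you'd take the top slice of `lookalikes` as the audience for a campaign.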

Richie Cotton: Okay. So it sounds like there might be some sort of data engineering involved, just in the setup of this, just making sure that everything is ready so that other people can use it. Is that correct?

Karen NG: There will be both. Our philosophy is we've always tried to build tools that are easy and intuitive to use, and then have fast time to value. So if you wanted to create this lookalike audience, it shouldn't be a number of steps that you have to train up an entire team for. Ideally, there's ways that we can help you do it. So I say that because, yes, we'll have data engineers and data engineering teams, but like all functions, AI will disrupt a lot of that manual work.

And so what we're doing with Data Studio is, even if you don't have a full data engineering team, you could start bringing this data together in a very intuitive way, as simple as adding a column. It might look like advanced joins underneath, but if you had four columns across your BigQuery, your HubSpot CRM, your QuickBooks invoices, you could start adding these as columns and then mixing them. That is essentially an advanced, complex join.

Richie Cotton: Okay, so no having to learn SQL for this. Joins in SQL: it's like anyone could do them, but in general, salespeople and marketing people don't want to have to do them on a regular basis.

Karen NG: Exactly. So giving them kind of intuitive tools that match what they're trying to do.

Richie Cotton: Okay, that makes a lot of sense: have the right format for your audience. Actually, I guess this leads to a more general point. Often your sales and marketing people don't necessarily converse well with your engineers. A lot of the time, when you try to build out these agents, it can feel like you need technical people and you need people with domain knowledge, and they have to work together. They've gotta collaborate, and it's hard work. Do you have any advice for making those collaborations easier?

Karen NG: Finding shared language, to be honest. A lot of times it's really about solving for the customer. There might be different ways that we think about solving for the customer, but at the end of the day, that's the joint goal; that's what both sales and marketing and engineering are trying to do. So find some of those joint terms. We use them a lot to describe what we think customer value is and how we measure it. That helps us. I'll build an example: I talked a little bit about these lookalike audiences. The goal of lookalike audiences, or personalized email or campaigns, is really to understand and reach customers and to generate more leads. More leads lead to higher revenue and growing your business. So it's just using shared language in how you're trying to do this.

Richie Cotton: that makes a lot of sense. It's if you're both talking the same bit of jargon, then there'll be a lot less confusion of that and hopefully you're gonna have some shared goals as well.

So you were talking about the idea of a hybrid team. So it's gonna be humans and agents working together. So do you message agents. Being ephemeral things. 'cause if you think of them as a colleague, maybe they ought to have some kind of permanence. Whereas often it's this agent ran for five minutes and then it no longer exists.

So I'm wondering, does this change in the future? Are we gonna have these sort of more permanent agents as part of our team? 

Karen NG: I believe that there will be hybrid teams where you might have agents that take over certain parts of jobs. What is interesting, and I think is happening across the world now, is some agents are what people will call ambient, and they're triggered off of something. An ambient agent just means it's triggered by an event, not by a human. A human might say, please go research this company for me, and that kicks off the job. Or it might be, hey, an email comes in, and every time an email comes in, I want this agent to do certain things. So I do think you're gonna see a mix of both, where sometimes there will be ephemeral tasks that get done. But if you think about which ones you bring into a recurring workflow, it will be a bit of both.

Humans will drive some of these workflows that have ephemeral tasks. And then the way I think about agents, too, is agents themselves can have many different skills. Some of those skills will be kicked off as ephemeral skills, and some will be more permanent. Rather than hiring the five new customer support folks on my team, if I have a customer agent that resolves this amount of tickets, I now know what I'll need is the more complex escalations, and I might look for different skill sets there.

Richie Cotton: Ah, this is fascinating. Suppose you're a manager.

How do you change what skill sets you're looking for when you're hiring? 

Karen NG: That one depends. I look at some of the companies today that have been really successful in AI-native tools; you see Harvey in legal, you see a couple of these that are very specific in verticals.

What they've done is they've taken jobs that are more mundane and repetitive; those will go first. That's why I mentioned customer agent: it's probably not the most unique, but it's the most valuable right now. And it's because you used to be able to calculate: if I added X amount of people, I would likely get X amount of support tickets resolved. What's possible now is, when you look across that team, I can give almost an equation of: if I have a customer support agent running, I reduce my ticket volume by this much. But I would expect that, as humans, you just get asked the harder questions. So then maybe the skill sets change: the more complex questions may often span, like, we have a big product portfolio, it might go across the product portfolio.

So people who can navigate that ambiguity, or navigate a product portfolio, those might be ways that it could change. Yeah, it'll be an interesting world for managers as we think about how to create a hybrid team. And then, the same way you would onboard a human or a new person on your team, you'll have to onboard that agent. Customer agents are only great if you train them on your company material; otherwise they're answering generic questions, which is not as useful.

Richie Cotton: Yeah, this is interesting, because when you hire a new human, okay, they read some documentation, they're gonna have to watch some videos on what you're doing, then maybe do some shadowing of another human. So yeah, go on, talk me through agent onboarding. Is this just a case of fine-tuning an existing agent using company data? What does it look like?

Karen NG: It would probably be less fine-tuning and more giving it context. So I think that's why the context layer, and actually why data, is so critical.

Data is probably the currency that AI is gonna run on, and that's why it's so important that people start thinking, or are thinking, about it. Training an agent today, you might have a place to go. So I'll take customer agent as an example: giving it PDFs or your knowledge base or your company website. The same way that you might have a getting-started guide for a human, you probably have a getting-started guide with a bunch of different links. Today it looks like just giving the agent those links. It's almost like creating a project in one of the LLMs, where, as long as you can feed it different example types and good content, it starts learning. And that's probably what it's gonna look like, even as we go into more complex places.

Richie Cotton: Okay. I feel like we've got the components of a plan here. So if you want a really good customer service agent, you start off with a generic agent, and then you hook it up to all your data sources, and then you discover that those are out of date and in a mess. And then you have to use the data agent to fix your data sources. And then, once you've done that, you can onboard the customer service agent properly, and in theory, things work.

Karen NG: We should make it easier and easier for people to train their agents. So today we have a Smart CRM that is a key part of building context. CRMs really used to be this manual system of record that people had to keep up to date, but now with AI you don't have to do that.

So our Smart CRM now self-generates data. It monitors things like emails or call transcripts or the support tickets coming in. It can tell, hey, Richie got a new job; I'll update the content in my CRM with your new job and your new role. If you're out of office and you sent that to me in email, I can see that.

So I would know, hey, don't email you when you're on vacation; email you just right after, when you've finally gotten through the amount of email you have, for instance. That's Smart CRM helping you build self-generating context, which means I don't have to train you on everything, because I already have unified context in my CRM.

Richie Cotton: Okay. So unified context sounds very powerful, but I guess there are gonna be some challenges around data privacy, about who's allowed to know what. You need pretty strong data governance there. Can you talk me through how that works, what you might need to worry about?

Karen NG: Yeah. Data governance is gonna be incredibly important, because you want AI that you can trust. If you can't trust your AI and you can't trust your data, that's a very bad thing. Governance will mean a couple of things, so user permissions, and even agent permissions, are something that we're very thoughtful about today.

There are roles that you might have on teams. Someone on a CS team may want default views that look like this. For someone on a sales team, oftentimes deals are split across regions or across different owners, and so you don't really want everybody seeing everyone's deals, unless you are the sales leader across that team and wanna see what the forecast looks like across the board. So user permissions are incredibly important. Same thing with where that data is coming from, how that data is being used, how that data would be trained on. I think of how you access data as a core component of solving the data challenges here.

Agents will be the same: for agents that act on behalf of a user, they inherit that user's permissions. If an agent acts on its own, are we gonna enter a world where agents need to have licenses and seats the same way that humans do? That will be really interesting.
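The inheritance rule Karen describes (an agent acting on behalf of a user gets that user's permissions, while an autonomous agent falls back to its own grants) can be sketched in a few lines. All the scope names and principals here are invented for illustration:

```python
from typing import Optional

# Scopes granted to each principal, human or agent (all names hypothetical).
PERMISSIONS = {
    "joe@company.com": {"deals:read:emea", "deals:write:emea"},
    "closing-agent": {"quotes:create"},  # the agent's own "seat" for autonomous runs
}

def effective_scopes(agent: str, on_behalf_of: Optional[str] = None) -> set:
    """An agent acting for a user inherits that user's scopes;
    acting on its own, it falls back to its own grants."""
    principal = on_behalf_of if on_behalf_of is not None else agent
    return PERMISSIONS.get(principal, set())

def allowed(agent: str, scope: str, on_behalf_of: Optional[str] = None) -> bool:
    """True if the agent may perform an action requiring the given scope."""
    return scope in effective_scopes(agent, on_behalf_of)
```

So the same closing agent can read EMEA deals while working for Joe, but not when running on its own schedule, which is the behavior that makes "agent seats" a meaningful licensing concept.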

Richie Cotton: Yeah. There's probably a whole separate discussion on how you go about pricing your own software if it's gonna be used by agents as well as by humans. It's maybe a whole separate episode. I don't know whether you've thought about this at all.

Karen NG: Yeah. The concept of hybrid teams gives you a mix of both: you have people and agents that may have seats or core seats. And so I think the monetization strategy will be hybrid, so it will potentially be core seats.

But as AI does more work, you will need fewer seats. So the other half of the hybrid monetization is seats plus credits, and that shifts work from paying for software to paying for outcomes and work. You've seen a lot of different pricing models happening in the market today, but that concept of a hybrid monetization model, where you still protect permissions, you have seats, and then you monetize on outcomes and credits, I think we'll see more of that combo going forward.

Richie Cotton: Okay. Yeah, because I guess the dominant thing over the last decade or more has been the SaaS model, where you pay your monthly subscription per person. 'Cause generative AI stuff, especially agents, can be more expensive, or more valuable relative to the costs, I think usage pricing is gonna be a lot more popular there.

Karen NG: Yeah, I think it'll be a bit of both. Because permissions are gonna be so important, you might have subscription pricing per month, which is access to feature sets or capabilities, and then ideally you want your pricing tied to value and outcomes.

Richie Cotton: Okay. So outcome pricing, would that be like if you've got a customer service bot, then it's charged by the number of tickets it closes or something like that? 

Karen NG: Yeah, that might be it, but I think there's a lot of different pricing models and thinking right now, and again, for me and for us, it really is about perceived customer value and what people are willing to pay. That is the easiest version for customer agent, which is support tickets resolved. But there could be many different reasons why your support tickets are that high; maybe it's an incident or something else. So those are all factors, I think, that need to be considered when people start thinking about outcome- or value-based pricing.

Richie Cotton: Okay. Alright. Since you mentioned outcomes, that gives me an idea. So we were talking about agents as being team members. Would you have some kind of performance review for your agents? If you're monitoring and tracking outcomes, then I guess there's some way of saying this is how well the agent's performing.

Talk me through how do you do that? How do you evaluate the performance of your agents? 

Karen Ng: Yeah, so I think one of the important tools emerging around AI right now is evals. Even on my team, we think a lot about how you create an eval system for an agent. And the answer, especially in the non-deterministic world, is that you're not exactly sure what they're gonna answer.

There's gonna be an answer that's roughly the same shape. But how do you build an eval system that says, okay, you've got it, you've nailed it? Maybe the first bar is pure accuracy: it gives you answers back that it knows are true and relevant, and never makes anything up.

So that's the easiest bar, and then the harder bar is, if it's non-deterministic, how do you determine that the range of this answer was roughly in the right spot? I think evals and AI systems are becoming increasingly important, and I think you'll see almost every product have some way of evaluating:

Hey agent, did you do your job the way you were meant to do it? Do I need to give you more context? Did you make stuff up, or did you actually help a customer resolve what they were looking for? 
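
The two bars Karen describes, strict accuracy versus "roughly the right spot", can be sketched in a few lines. This is a hypothetical illustration rather than HubSpot's eval system, and the word-overlap similarity here is just a stand-in for whatever scoring a real eval harness would use:

```python
# Minimal sketch of a two-bar agent eval, as an illustration only.
# Bar 1 (strict): the answer must state every known-true fact and must
# not contain any claim we know to be made up.
# Bar 2 (fuzzy): a non-deterministic answer only needs to land close
# enough to a reference; Jaccard word overlap is a toy similarity metric.

def strict_eval(answer, required_facts, forbidden_claims):
    """Bar 1: answer contains every required fact and invents nothing."""
    a = answer.lower()
    return (all(f.lower() in a for f in required_facts)
            and not any(c.lower() in a for c in forbidden_claims))

def fuzzy_eval(answer, reference, threshold=0.5):
    """Bar 2: answer is 'roughly in the right spot' versus a reference."""
    a, r = set(answer.lower().split()), set(reference.lower().split())
    similarity = len(a & r) / len(a | r)  # Jaccard overlap of word sets
    return similarity >= threshold

print(strict_eval("Your ticket was refunded on May 2.",
                  required_facts=["refunded"],
                  forbidden_claims=["replacement shipped"]))  # True
```

An eval agent of the kind discussed below is essentially this loop run by another agent: it samples its teammates' outputs and applies checks like these automatically.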

Richie Cotton: Yeah. So are you seeing humans having to do the evaluation of agents, or is there a way for other agents to evaluate agents? What's the current best practice?

Karen Ng: Yeah, I'm seeing both. I definitely see humans creating eval systems, and then, even today, internally for us, you can write an eval agent that looks across its other teammates and kinda holds them to a high bar on whether they're doing their job. So you'll see a mix of both, for sure.

Right now I see a lot of human in the loop, both in terms of how we interact with AI, iterating on something that you've asked about and then going deeper (I use AI myself that way), and then human in the loop with evals as we teach agents how to eval other agents.

Richie Cotton: Okay, nice. I guess eventually you want it to be mostly automated, with other agents doing it, but you'll probably need a human in the short term. Just thinking about it: a human messes up a performance review, worst that's gonna happen is you get fired. An agent messes up a performance review, it's getting turned off permanently.

So it's an existential risk for the agent. 

Karen Ng: Which one's worse? It's better for an agent to have the existential risk. 

Richie Cotton: Yeah, I think so. I assume we won't end up in a situation where humans are getting turned off after their performance review. Okay. Alright. I'm curious as to how far you can push this.

So do you have a sense of what fraction of work, long term, is gonna be done by humans and what fraction gets done by agents? How much can you realistically outsource to AI?

Karen Ng: I always believe that we will be in a world where humans lead and AI accelerates, and the magic of humans, the creativity and breakthroughs, is still gonna be there.

So I don't know what the exact percentage will end up being, but I think higher-order work will happen. A lot of the time today, humans do repetitive and manual work, and I think all of that will be gone. So as agents get better and more autonomous, they will drive humans to do even more.

And I'm excited to see what kind of breakthroughs humans will make. So I definitely believe there's gonna be a human-led, AI-accelerated world versus a purely autonomous one. 

Richie Cotton: Yeah, I guess it's good that there is gonna be something left for humans in the end. There's room for us.

Suppose you want to get started with this: you want to really push hard, create some agents, and get them to do real work. Where do you begin? What's step one?

Karen Ng: Step one is still the blueprint. You're a data guy: it's unite your data, unite your context. Some of it will start with a source of truth, like a CRM. As CRMs get smarter, they're gonna do it for you, but you'll still always have siloed tools, so connecting your data should get easier and easier. Then there's cleaning up your data, if you're coming from legacy systems, or actually any system. We coined the term inbound marketing, where once you put your value online, you get form submissions.

How many of those forms are half filled? How many of those names are still messed up? How many duplicates do you have? Getting your data clean, that's step one. Step two is deciding which agents you want on your team, or what kind of jobs. I've seen both sides: are there jobs you want to get done, or are there human roles that you want to hire for and that you are looking to have an agent for? Determine where you want that use case, and then train your agent. So again, it's giving your agent context and then putting it into production. But a lot of people will test out an agent first and see how it is.

The data agent that I mentioned to you is really good at keeping your custom data current. So if, for every company in my CRM, I care about their top products, who their top competitors are, and every funding round, I can just tell the agent, please watch for those three things. And then every time new information comes in from the web or from earnings, it will populate your CRM.

So it's making sure that you have a great system around you, and then deploying those agents. 

Richie Cotton: Okay, on your point about hiring for something: you said that might be a case where you want to incorporate an agent. So is that the idea? You're like, okay, we're a bit resource constrained.

We think we might need to hire for this particular role, therefore we start trying to create an agent to outsource some of the work to AI before we start hiring. Is that your suggested flow?

Karen Ng: That is one suggested flow, but I think what I'm seeing with people adopting agents today is that usually there's a thing imminent in front of you, so it's just picking a single business process that you might want to augment or change with AI. For instance, we've really been seeing the marketing funnel change. Essentially, the old playbook just doesn't work anymore. Search is coming from different mechanisms, not just search engines.

And so that might be an imminent thing that people want. So how do you, one, create more personalized or tailored outreach? Or how do you find search traffic? My advice would be to pick one process that is not working the way that you want, and think about how AI can augment it.

That might be a place where you'll find an opportunity for an agent or a co-pilot.

Richie Cotton: Okay. So yeah, I'd love to talk about how you go about re-engineering these processes, but you just terrified me slightly with the idea that the marketing funnel is broken. So talk me through: how do you fix your marketing funnel?

Karen Ng: We coined inbound marketing in the cloud disruption, and it was really about putting value out there. I think we're seeing a new funnel, or a new way to market. So what we're introducing for an AI era is a different kind of marketing playbook. That playbook is called the Loop, where you have four phases, it's very dynamic, and it needs to think about how you would use AI in each of those phases.

But the first one is around whether you can express very clearly who you're trying to reach. One of the reasons I mentioned lookalike audiences is that it is now possible to find cohorts in similar areas. So can you express what you're looking for in your core ICP? Let's say I'm looking for leaders in the education segment, but I also sell to the healthcare segment.

I want to be able to find those folks and then personalize for both segments. Express is the first part of that loop. Then you might wanna tailor it, saying, I wanna reach specific people, and what would that look like? And then I want to amplify it, which is, not only do I wanna reach these people, but I wanna reach them wherever they are.

A lot of search results in that funnel are coming from LLMs today. So how can I create content that can be discovered in LLMs? From a lot of what we've learned, LLMs prioritize things like community-influenced content, like Reddit, and certain other types of content.

And if I could craft my content the same way I did in inbound, which was on a website, I could do that for an LLM. And that is something we're calling, and that you're seeing across the industry, AEO, which is AI engine optimization. A little bit different than SEO. So while the old SEO funnel is broken, I believe there's a new funnel.

And that new funnel probably looks more like AEO. 

Richie Cotton: Okay. That's fascinating. Yeah, SEO is very well understood at this point. Also, again, a little bit scary that all the dumb comments I've made on Reddit are now being associated with my name and going straight into ChatGPT and Claude and all the rest.

Yeah. That's terrifying, but good to know. Good to be aware of. We were talking before about process re-engineering. So how do you go about changing your process to incorporate these AI agents? I'm sure it's not as simple as just, oh, let's switch out a human for AI here.

Karen Ng: Yeah, no, it's not. I think people have to try it, and so it's nice to pick targeted problems. And then we do a lot of what we call proof of concepts. A proof of concept might be: okay, we know the funnel's changed, we know the source of that traffic is coming from LLMs, we know that content is changing, and there isn't an established playbook yet the way there has been for SEO in the cloud era, but I expect it to come.

So what are the things that you would try? Once you try a set of things and AI works, then I would start replacing it. Or, for instance, the other thing I've seen people do is give people the impossible task, which is: could you do this? If there's a new idea for a project, can the project be done at a faster speed, with fewer resources, if you used AI?

Simply giving that as a constraint, which is almost an impossible task, it's been amazing to see what teams can do and in what timeframe. That's the other route I would suggest: inspire others with the impossible task, and people break out of the box with it, which is super fascinating.

Richie Cotton: Yeah, I do like the idea of just trying stuff. And maybe it is a case of doing something completely different, solving an impossible task, and then sharing what you've done with everyone else, because I think everyone's struggling with this problem at the moment. So I guess communication is an incredibly important aspect of this.

Do you have any advice on how you share your wins across your organization?

Karen Ng: We do it in a structured way. So we might say, hey, there are four hypotheses we're kicking off, and all of them deserve a proof of concept. We're gonna have a proof of concept team, which might be a virtual team across different disciplines, and they get six weeks.

And then in six weeks we look across all the proof of concepts: what did you set out to do, and did it work? For instance, we use our customer agent to resolve support tickets, and it is dramatic how many support tickets it was able to resolve. But that uncovered the next level, which is: oh wow, we're getting more complex requests now.

And so how do we think about the team that way? So I think of it as: set a hypothesis of what you wanna achieve, give them an impossible task, do a proof of concept, give it a six-week timeframe, because people can do amazing things in limited amounts of time, and then see which ones worked and which ones didn't.

For the ones that worked, scale them out. So that's how we're doing it, and we're doing that on repeat. 

Richie Cotton: Okay. I like that. And yeah, I think the time boxing is incredibly important. Otherwise, this thing can go on forever. 

Karen Ng: It takes forever, and there's no way to check in on the learnings.

And the learnings are what you're really looking for in the proof of concepts.

Richie Cotton: Okay. Yeah, your employees learn how to do something new, and maybe that's even more important than the thing they built. 

Karen Ng: Yeah. We also leave time, because sometimes it can feel super overwhelming with the number of AI tools coming out.

And so we leave employees time. We have something we call Grow Day, where we might say, okay, we're gonna train everybody, and then you try it in your own craft and share different wins. We have Slack channels called AI Fluency, or Grow Fridays, but those are fun ways to share things and then give people time to try and learn.

Once you get the aha moment in something that you're doing yourself, it's dramatic. You know how powerful it could be. 

Richie Cotton: I do love that idea of just giving people time to learn and experiment and try new things. Suppose it all goes well. You push hard on AI agents.

It works. What does success look like to you?

Karen Ng: Success looks like the outcomes, so it depends. Some of the agents are as easy to understand as the support ticket one. Some of the agents I even run in my day-to-day job; I run agents over my inbox, and for me it's like an SLA: how many emails from my executive team am I taking a really long time to reply to?

Those are small things across the board. Another agent would be the prospecting agent, for instance. We have about a 17,000-person waitlist for the prospecting agent, but the prospecting agent is trying to find cohorts of customers that are very interesting. So that might mean we have something we call research intent.

Because we have had inbound marketing, we have all these websites, and people all over the internet are doing things on them. And when you shop for a product, you're starting to shop for a problem before you decide which product to buy. So that's a research intent.

For instance, I see the plants behind you; say you're in the market for new plants that don't take a lot of water. As you start looking for that in search engines, websites, and LLMs, to us, that's research intent. So that research intent is built into the prospecting agent. It's built into the CRM by default, and you can start finding these signals now.

And so the prospecting agent's measures of success are things like the number of meetings booked, the number of customers it reaches, the high-value customers it reaches. So what kind of outcome are you looking for when you start bringing an agent into play? 

Richie Cotton: No, that's very interesting. The idea that as soon as someone starts searching for something, that's research intent: they might be about to buy something in the future, and therefore they're already feeding into all of your systems.

Yeah. 

Karen Ng: And the prospecting agent can monitor the internet better than you can. It tells you that these sets of people are starting to look for,

Richie Cotton: new plants. Wonderful. Alright, as we close off, what are you most excited about in the world of data and AI at the moment?

Karen Ng: Oh, I am really excited about hybrid teams, because I don't believe in a world where humans are just automated away. It really will be about the kinds of things that make us more creative. I love the craft of creating things; I'm a product person, so I think I'm most excited about what AI enables us to do.

Things that weren't possible before. Even in that research intent example, that would be really hard to build. But it makes me better. It would make me a better salesperson; it would make me a better product person, just to understand that the majority of people are now starting to look for this type of thing.

So I'm excited about what other opportunities AI unlocks. 

Richie Cotton: Absolutely. Yeah. So hopefully we'll see a few more human-and-agent hybrid teams in the future, and some great products to support that. And finally, I'm always looking for new people to follow. Whose work are you most interested in at the moment?

Karen Ng: I love Lenny Rachitsky's stuff. There's a series that he's just started called How I AI, and I think that's a really interesting podcast to learn about how you can practically apply AI in your daily life. 

Richie Cotton: Okay. Yeah, certainly practically applying stuff is very useful,

even more so than the theory, oftentimes. 

Wonderful. Thank you so much for your time. 

Karen Ng: Awesome. Thank you so much. Great meeting you, Richie.
