Benchmarking that drives RevOps decisions

Are your benchmarks actually useful, or are they just satisfying your curiosity? In this session, Josh sits down with Katherine from OPEX Engine (an independent subsidiary of Bain and Company) to break down how RevOps and finance leaders can use benchmarking to build better plans, get stakeholder alignment, and make smarter decisions, without falling into the most common traps.
Discover how benchmarks can drive better decision-making for RevOps teams with Katherine Zhang, CEO & GM of OPEXEngine.

What You’ll Learn:

Benchmarking for RevOps: How to Plan, Align, and Drive Decisions

  • Why benchmarks and case studies are fundamentally different tools, and when to use each
  • How to choose the right comparison cohort based on growth stage, size, and margin, not just industry
  • Why benchmarks are swim lanes, not targets, and how to use them without over-anchoring

Using Benchmarks to Solve the Planning Problem:

  • How benchmarks eliminate the “blank page” problem when building annual or multi-year plans
  • Why linear projections fail fast-growing tech companies and what to use instead
  • How to triangulate headcount and resource decisions using both historical data and benchmark ranges

Getting Stakeholder Alignment:

  • How to use benchmarks to convince non-revenue stakeholders your plan is credible
  • Why having one data point is only partially helpful, and how to use the next level down
  • How to frame a benchmark divergence as a strategic conversation, not a performance problem

Choosing the Right Metrics to Benchmark:

  • Why the metrics that matter most are the ones tied to actual business decisions
  • How to distinguish curiosity questions from actionable benchmarks
  • Why internal data quality and consistent definitions matter as much as the benchmarks themselves

AI and the Future of Benchmarking:

  • How the efficiency vs. growth shift is changing what RevOps teams measure
  • Why the SaaS P&L fundamentals haven’t changed, but tagging and attribution have
  • Why annual planning cadences are too slow and what leading teams are doing instead

Key Takeaways:

  • Benchmarks are most powerful when paired with a clear understanding of your strategic priorities
  • The right cohort is defined by growth stage and margin profile, not your end customer
  • Data quality and consistent metric definitions are the foundation of any benchmarking effort
  • AI hasn’t replaced the fundamentals; it’s added a layer that requires more conscious KPI review

Perfect for RevOps leaders building or stress-testing their annual plans, finance and operations professionals looking to add external context to internal data, and anyone navigating stakeholder alignment with benchmark-backed decision making.

Featured Guest: Katherine Zhang, CEO of OPEXEngine (an independent subsidiary of Bain and Company)

Full Transcript:

But yes, as folks start to trickle in here, what we’ll be doing here is very typical for most of our webinars that we are now doing weekly. We’re gonna be sharing this out with everybody that attends. So don’t worry if you miss anything, we are recording it. You should be getting that out hopefully tomorrow. You can dive in. There’s a Q&A section at the bottom, and if you have any questions as we’re getting things going, feel free to weigh in, throw them in there, and we’ll try to cover as many of them as we can. I’ve actually already gotten some questions from folks before we even kicked off. So I think this is gonna be a really, really fun one.

So Katherine, why don’t we go ahead and just dive right in. Why don’t you start with a little bit of your background, because I think that of all the folks that I know that have a RevOps background, I think yours is a little bit unique and I’d say aspirational for a lot of the folks that we at least chat with, as they think about trying to get to the C-Suite. So I’ll let you take us away.

Yes. So good to be here today. Very, very excited to be talking about benchmarking. But in terms of myself, so I lead OPEX Engine. We’re an independent subsidiary of Bain and Company, the consulting firm. And what we do is we provide tech companies and tech investors with performance benchmarks to drive their decisions.

And how did it come to that? Well, early in my career, which is how I met Josh and team, I was a revenue operations leader and also a growth strategy leader at various SaaS companies. And so of course, I think for those of you in these roles, you know you rely very heavily on benchmarks to drive performance and also just make smarter decisions. And now at OPEX Engine, I get to take that firsthand experience and I’m now sitting on their side. So I get to help operators access the data and the insights that I once depended on. And my goal really is to make sure that the data we have isn’t just data. The data should be giving operators practical benchmarks that they can actually use to make decisions. That’s a very important thing for me to make sure that data is going to drive action at the end of the day. So that’s a little brief of kind of how I went from revenue operations to now running a company.

Amazing. I’m sure we’re gonna dive a little bit deeper into the career side here as we get going. But let’s spend a little bit of time talking about benchmarking. When I was working in finance, it came up a lot. The idea of benchmarking on the operation side came up a little bit less. When companies say that they want to benchmark, what are they often misunderstanding about what benchmarks actually are?

Yeah. So it’s interesting because I think one of the things I would generally say, from what I’ve learned being on the other side and running a benchmarking company, is that it’s a lot more nuanced and complicated than even I thought back when I was the one using benchmarks. And I think one of the nuances that companies often misunderstand is that there’s benchmarks and there’s case studies, and they’re actually different.

A case study is when you say, I want to know everything there is to know about how my one or two direct competitors do things. Their go-to-market motion, how many partners they have, what’s their strategy, what are their priorities, what’s their ICP, how is it different from mine? You go really, really deep into one or two companies for a specific reason. Benchmarks are different from that. Benchmarks are a broader average, so to speak. It’s not always the mean, but a broader average of performance metrics across a set of comparable companies. So you’re using the comparability and the size of the set of companies as a way to say, here’s what the companies in our industry are doing, which is different than the case study.

And in terms of when you use them, with benchmarks you can only get so granular because you want more than two companies in your benchmark. The average of two companies is not that interesting. So if you really need to know something super granular, like how many channel AEs do I have to hire to cover these three specific large partners, and you’re a tech company serving real estate agencies, that’s not something you’re gonna benchmark for because you are probably one of two or three companies in the world that do this. And so that’s where you want to use a case study.

Yeah. Slightly different use cases.

Yes. It’s a different use case.

So like I mentioned, when I was in ops, I joined a series A business. We didn’t have benchmarking at the time, at least nothing proactively. When is the right time for somebody at a company today to start thinking about benchmarking?

So there’s a couple of use cases that we often see for the customers that we work with. One of the things I’d say, following on from the “what is a benchmark” discussion, is these are all cases where you want guidelines. They’re not for exact target setting. You never want to say, the benchmark says 23%, and that is now the target for the year. The benchmarks, because they’re on average, are meant to be guidelines. They’re swim lanes, but if you want to be slightly on the left side of the lane, on the right side, or in a different lane entirely, that’s fine. It’s not a target, it’s just where the lane lines are.

Because of that, the main use cases we see for it, especially with RevOps folks, is building or checking your plans. I always think of this as avoiding the blank page problem. I think we’ve all had to plan for the year. I literally just did this for OPEX Engine six months ago, where I’m like, all right, here’s how this year is ending up. I have some historical information. What do I put as the numbers for next year? Of course, I could just roll them forward. It’s grown 5% per year on this cost, I’ll just keep going. And that works if you are in a super stable industry and your company is super stable. That almost never happens, especially if you’re talking about tech.

So you’re faced with this. You’re like, all right, I want to grow 20% next year. And I’m suddenly going to be a $120 million company instead of $100 million. And I’ve never seen what that looks like. And it’s even more difficult if you’re told, now we want the three or five year plan. At that point you’re a $300 million company, and that’s really, really far from what you know. And so benchmarks are really good for that. You can look up and say, $300 million companies that grow 20% and are profitable, what does their resource allocation structure look like? And it just gives you a nice starting point for your planning.

The extra bonus use case I would say, or the expert use case, is that after building the plan, benchmarks are also useful for getting alignment on the plan. Being in RevOps, a lot of RevOps leaders have probably had this issue where you have the data, you go to the stakeholder, someone often not in one of the revenue orgs you work with all the time and not in your reporting line, and you have to convince them that whatever you’ve put as a plan is the right number. Having this external view of saying, hey, we need to grow 20 AEs next year. Well, how do you know 20? Based on our historical growth, but also a company of $200 million should have this many AEs. We’re triangulating those and now that’s how we get 20. That really helps drive alignment across the board.
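The triangulation Katherine describes, blending historical growth with a benchmark-implied figure, can be sketched as a quick back-of-the-envelope calculation. This is an illustrative helper with made-up numbers, not a real OPEXEngine benchmark:

```python
def triangulate_headcount(current_aes, historical_growth_rate,
                          projected_revenue_m, benchmark_aes_per_10m):
    """Average a historical projection with a benchmark-implied figure.

    All inputs are hypothetical; a real benchmark gives a range, not a point.
    """
    # View 1: roll the current team forward at its historical growth rate.
    historical_estimate = current_aes * (1 + historical_growth_rate)
    # View 2: what the benchmark ratio implies at the projected revenue.
    benchmark_estimate = projected_revenue_m / 10 * benchmark_aes_per_10m
    # Triangulate: average the two views (a real plan would also sanity-check
    # that the result stays inside the benchmark swim lane).
    return round((historical_estimate + benchmark_estimate) / 2)

# e.g. 16 AEs today, 25% historical growth, $200M projected revenue,
# and an assumed ratio of 1 AE per $10M: both views land near 20,
# echoing the "how do we get to 20 AEs?" framing above.
```

When the two views disagree sharply, that gap itself is the useful output: it tells you which assumption to interrogate before taking a number to stakeholders.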

I think driving alignment is one of the biggest challenges RevOps teams face day in and day out. Having that quantitative backdrop to just point folks to is really helpful. So let’s fast forward a little bit and say, okay, we recognize that benchmarking is going to be helpful for both planning and operating in the business. If I’m in RevOps, how should I think about getting started? What’s the first step in benchmarking?

Point-in-time benchmarking is definitely useful if you just need something quick. In one of my past roles, I had to put together a RevOps team plan and didn’t really have a RevOps team figured out. For things like that, just getting one general data point from our investors, like you basically have one RevOps person to 11 AEs, okay, that’s a nice general guideline. Sometimes all you need is just a little hint of something.

But if you’re looking at something broader where you are planning something more holistic, or doing something cross-functional, or definitely something several years out, there are a couple of steps that we guide a lot of our customers through.

The first one is make sure you choose the right comparison cohort for you. This is where the case study versus benchmarking thing really matters. We get a lot of people saying, we are a tech company that serves real estate agencies, I want a cohort of other tech companies that serve real estate agencies. And that totally makes sense on the surface until you dig into it and you realize, okay, we are a $50 million company growing 100% a year serving real estate agencies. The other two tech companies are at $1 billion and $3 billion, and they’re growing 2% a year. They’re going to be making very different decisions. They’re going to have a very different resource allocation. Those are not the benchmarks to go with.

So make sure you choose the right comparison. We’ve done a statistical analysis that says the things that drive resource allocation metrics the most are your growth stage, what size you are, how fast you’re growing, and your margin. Are you growing 100% and fine being at negative 20% margins? Are you growing 10% and you really need 40% margins? Those are going to change your resource allocation a lot.
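The cohort selection described above, filtering on size, growth rate, and margin rather than industry, might look something like this sketch. The band widths and company records are hypothetical:

```python
# Hypothetical company records; the fields mirror the three drivers named
# above (size, growth rate, margin), deliberately ignoring industry.
companies = [
    {"name": "A", "revenue_m": 60,   "growth": 0.90, "margin": -0.15},
    {"name": "B", "revenue_m": 1000, "growth": 0.02, "margin": 0.30},
    {"name": "C", "revenue_m": 45,   "growth": 1.10, "margin": -0.25},
]

def comparable_cohort(companies, revenue_m, growth, margin,
                      rev_band=0.5, growth_band=0.25, margin_band=0.15):
    """Keep companies within bands of your size, growth, and margin.

    Band widths are arbitrary illustrations, not a statistical result.
    """
    return [c for c in companies
            if abs(c["revenue_m"] - revenue_m) <= revenue_m * rev_band
            and abs(c["growth"] - growth) <= growth_band
            and abs(c["margin"] - margin) <= margin_band]

# A $50M company growing 100% at -20% margin matches A and C,
# while the $1B, 2%-growth company B drops out despite sharing an industry.
cohort = comparable_cohort(companies, revenue_m=50, growth=1.0, margin=-0.20)
```

The point of the sketch is the shape of the filter: nothing in it looks at who the end customer is.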

That makes a ton of sense. Once you’ve figured out your peer group and you’ve split the world between your case studies and your benchmark cohort, how do you decide which metrics you should be tracking and looking at, and does that differ by stage or size of business?

They definitely change a little bit. The larger you get, the more metrics you have, and the more granular you are able to get with your data. We tend not to work with companies at $1 or $2 million because you don’t have your own data to compare against. What are you benchmarking? You’re probably VC funded and they don’t care about how much you’re spending, so just keep adding to the top line and you’re good to go.

In terms of the right metrics, you’ve got to have the right internal data to compare. Whether you already have it or you need to tweak your P&L reporting so you can benchmark later, that’s very important. And then when you’re looking past that, I get back to driving action and driving decisions. You want to benchmark the things that actually drive your business decisions. Things like FTE numbers, costs, et cetera.

We’ll occasionally get questions where, like I just had one this morning asking about the average contract value. And that might drive a business decision for a specific company, but in this case it’s kind of just something they want to know. What are you going to do with it? Does it matter? Is it going to change how you do your contracts or how many sales people you hire? Probably not. What was more useful for them was a sales org type of look.

I feel like that’s something I struggled with in the past when thinking about benchmarking. There’s the curiosity questions, which are things you’d love to know but wouldn’t necessarily act on. But obviously there’s a lot of value in applying benchmarking throughout the business to make it useful and actionable. How should leaders be thinking about applying benchmarks throughout the business?

I think in terms of applying it, the big thing is you have these swim lanes, and your strategic priorities will dictate where you are in or outside of those swim lanes. This is why I say it’s not a target. If you look at the benchmarks and it says for your growth stage and margin you should be spending 23% of your revenue on salespeople, and you look at that and you know your industry is very relationship-heavy, so you actually need to spend 30%, that’s a very specific business thing that only you know. And actually it’s something you can get from case studies too. This is where the two kind of combine.

I’ll say, one nice thing about SaaS, and it’s why OPEX Engine exists, is that for people on this call who’ve been at multiple SaaS companies, a lot of the fundamental metrics are very, very similar. Not just the metrics themselves, but actually the value of the metrics. Like what percentage of your revenue should be spent on sales, there’s a general range that everyone should be in, regardless of whether you’re selling to real estate companies or manufacturing companies or whatever. It’s a good thing for being able to use broader comparisons.

In a former life when I was doing a lot more public comparables, we would just group a lot of the SaaS businesses together regardless of industry. These are SaaS metrics, they’re going to be largely the same.

Maybe using your example, I’m going to ask you to go back in time to when you were leading a RevOps team. We have this metric that says 20% of our spend should be on sales and marketing. In reality, let’s say it’s at 40%, or you think it should be 40% based on what you’re seeing in the business. How do you communicate that to leadership when it’s so divergent from what the benchmark shows?

I think it really gets to unpacking why there’s that 20 percentage point difference. This is where having the one data point that says 20% is only partially helpful. Having the things below that that say, companies that are doing 20%, this is how many AEs they have, this is how much they spend on marketing, maybe they spend a lot more on marketing and that’s what’s happening. Having that next level down, you have to understand the path from what the number says to where you are today.

And then I always go back to strategic priorities. If you are at 40% and you know the company is a grow-at-all-cost strategic priority for the next three years, that might not be something worth spending a lot of your time pushing. You just want to make sure the top line number goes up. If it’s 40% and you know there is a margin or cost target coming up, that is definitely something worth pushing. Tying to that, even though they might not be looking at sales spending, you can say: hey, I know that we are trying to get our margins better. I found there’s this discrepancy, and it seems to show up in X, Y, and Z.

That makes total sense. AI is one of the more fun topics that keeps coming up. It feels like we’ve had more change from a business model and margin perspective in the past two years than in the past 10. Knowing the benchmarks from the past 10 years, I can imagine where getting those refreshed and updated would be really helpful. But right now it’s changing so fast that keeping up with what the latest are is really difficult for most teams. How have you seen benchmarking change in the past couple of years with the rise of AI? Has there been a change or is it really continuing to stick to the fundamentals?

A bit of both. I think the fundamentals still matter. You’re not suddenly benchmarking some completely new metric. As a general thing, we have seen a lot more focus on efficiency versus growth. When I was a RevOps leader, there was a lot more conversation about what’s the sales capacity to get to this level of growth. It was a straight line: this is the productivity reps used to have, take that, divide it into the growth number, and that’s the capacity you need. Now it’s, well, let’s look at that productivity number first and see if we can get it up and actually do more with less. So I think that’s changed.
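The straight-line capacity math contrasted here with the efficiency-first view fits in a few lines. All figures are made up for illustration:

```python
def reps_needed(new_arr_target, arr_per_rep):
    """Straight-line capacity model: reps = new ARR target / ARR per rep."""
    return new_arr_target / arr_per_rep

# Old straight-line approach: take historical productivity as a given.
naive = reps_needed(20_000_000, 1_000_000)            # hypothetical numbers

# Efficiency-first approach: first assume productivity can be lifted 25%,
# then size the team against the improved number.
with_uplift = reps_needed(20_000_000, 1_000_000 * 1.25)
```

With these illustrative inputs the straight line calls for 20 reps, while the efficiency-first version calls for 16, which is exactly the "do more with less" shift in the conversation.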

We’re actually, in a couple of days, coming out with an article about how you measure the impact of AI in a SaaS business. The key point is, yes, there are ways you need to change what you’re measuring. You need to make sure you are carving out your revenue in different ways and carving your costs. But at the end of the day, once you have those inputs correct in your CRM or whatever you’re using for rev rec, you’re still measuring the same metrics. So if you carve out your AI costs and your revenue from your AI products, great. You’re still going to do a revenue divided by costs productivity calculation. It’s just got a little AI tag on it.
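The tagging point can be sketched as follows: once AI revenue and costs are carved out with a tag, the productivity metric itself is unchanged, just filtered. The P&L lines and tag names here are hypothetical:

```python
# Hypothetical tagged P&L lines; the tag is the only new thing.
pnl = [
    {"tag": "core", "revenue": 80_000_000, "cost": 20_000_000},
    {"tag": "ai",   "revenue": 20_000_000, "cost": 10_000_000},
]

def productivity(lines, tag=None):
    """Revenue divided by cost, optionally filtered to one tag.

    The formula is the same whether or not a tag is applied; tagging only
    changes which rows feed it.
    """
    rows = [l for l in lines if tag is None or l["tag"] == tag]
    return sum(l["revenue"] for l in rows) / sum(l["cost"] for l in rows)

overall = productivity(pnl)         # whole-company productivity
ai_only = productivity(pnl, "ai")   # same calculation with the AI tag
```

The same filter-then-compute pattern applies to any KPI once the inputs are tagged consistently in the CRM or rev-rec system.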

So you’re saying the P&L itself hasn’t actually changed, maybe some of the numbers and where lines are has slightly, but the fundamentals are still the fundamentals.

The fundamentals are still the fundamentals. You just have to make sure you are tagging things in a conscious way. And I think one thing we added kind of last minute to the article, which has always been true but is even more true with AI, is that once you decide on the KPIs for your company, you have to make sure that they are reviewed consciously at a pretty frequent clip, especially now. You don’t want a KPI to stick around just because it’s how it’s always been done. Because suddenly you’ll find yourself a year or two down the path and you’re like, I actually don’t know what happened with AI.

Totally. That’s something we see a lot right now. When we first started with accounting teams three years ago, a lot of folks were thinking about annual planning. What we’re seeing now is that an annual planning pace is actually too slow for a lot of these companies. We’re seeing folks plan much more frequently, even quarterly in some cases; reviewing benchmarks and KPIs on that cadence has become pretty table stakes. It’s particularly the case for software businesses where gross margins may no longer be in the high eighties once you factor in the cost of AI. We’re starting to see quite a bit more of that on our side.

Well Katherine, this has been fantastic. I think we can talk about benchmarking and RevOps for hours. Maybe as we think about where to leave things, any major pitfalls to avoid if someone’s looking to do benchmarking in RevOps for the first time? We talked about the case study versus benchmarking distinction, but anything else you see as common missteps?

So I think just emphasizing what I said about making sure you have the right comparison for your growth stage. Don’t anchor too heavily on your exact end-user model. And the other thing I would say is, to the extent you can, be picky about knowing what the source of the data is. If you just want the one general number, like 20% of your revenue, there’s a lot of stuff out there for that. But if you really want to use it holistically for planning, know where that data is coming from.

I love that. The quality of data matters tremendously. That was one of the biggest hangups when I was doing this in banking. We were always trying to get to the highest quality data possible because it just swings things so wildly.

And that extends to the definitions of the metrics. So the way OPEXEngine works is we get the data directly from the companies that work with us. And once we get the data in, it always takes us at least a week to go back and forth with whoever submitted the data, usually the finance team, and figure out, well, when you say sales expense, what’s in it? It turns out, oh, actually CS is also in there. And things like that. You’ve got to make sure of that, and knowing you have that confidence in the data is great. But sometimes when you find these general numbers out there, you’re like, okay, 20%, but what does that actually include?

Getting to apples-to-apples is obviously critical here and is something we see overlooked a lot of the time. So Katherine, this is awesome. I really appreciate you taking the time out and sharing more on benchmarking and OPEX Engine. If folks have questions about benchmarking generally or maybe even your career path, what’s the best place to find you?

LinkedIn. And my email is pretty easy: katherine@opexengine.com. Happy to answer questions about any of this, about career stuff. Please reach out.

Amazing. Well Katherine, really appreciate it. As always with these, thank you so much. We’ll be sharing the recording with everybody in case you want to go back and review or share it with anyone else. Let’s do it again here soon. This has been so fun.

Has been very fun. Thanks, Josh.

Amazing. All right. Take care everybody.