Our Manifesto
The Quest for B2B Revenue Attribution and Intelligence
May 6, 2024
Once upon a time, “growth at any cost” was an acceptable strategy for B2B companies. Money was cheap, so teams could drive new revenue simply by spending their way to success.
Those days are gone. Now, it is the age of efficient growth, and B2B go-to-market leaders need to know what is working so they don’t waste their limited resources.
We believe that the foundation of efficient growth is trustworthy, reliable data. This foundation is transformational for a go-to-market organization, because it enables data-driven decisions and answers questions like “where should I spend an extra $100k next quarter?” or “how can I increase sales pipeline without a bigger budget?” across every dollar spent and every person hired. It’s a transformation we have experienced first-hand: at our previous company, Branch, the holistic attribution model we built in-house was such a game-changer that it quickly became the cornerstone for most of our key decisions and optimizations while scaling to over $100m in revenue.
It was only later, when industry peers began to tell us that the insights we were getting sounded better than anything else they’d ever heard of, that we began to realize the model we built at Branch might be something special. With Upside, our mission is to help go-to-market leaders make better decisions by bringing the same game-changing, data-driven perspective to every B2B team in the world.
In the article below, we share many of the best practices we wish we had known when we first started wrestling with these problems, and describe some of the biggest measurement traps we often see teams falling into, including:
Believing that the best way to measure is by identifying a single “source” for each deal.
Relying on the sequence of events to allocate credit across many touchpoints.
Looking at the contributions of each go-to-market team separately.
Using attribution data as a weapon, rather than a feedback loop.
Whether you’ve been struggling to solve your company’s revenue measurement challenge for years, or you have a solution that is already working well, or even if attribution is just something you have strong opinions about, we are passionate about this topic and we would love to hear from you. Please drop us a note!
Why is B2B revenue measurement such a tangled mess?
Branch, the company where we first took on these challenges together, was founded in 2014. The product was deep linking and attribution for mobile apps, and the target customers were large consumer brands. Eight years later, we had scaled the company to over $100 million in revenue (and a valuation of $4 billion), primarily through large, complex deals with long cycles and multiple buyers.
But partway through that growth curve, we began to miss our quarterly sales pipeline targets. During a post-mortem meeting to investigate why, the conversation went something like this:
You might be smiling ruefully in commiseration, perhaps remembering similar discussions of your own (and if you’re thinking that we could have solved this pipeline duplication problem by tracking opportunity source, we’d tried that already — we’ll explain why we believe it isn’t the answer in the next section).
As we began trying to understand what had caused us to miss our targets, we discovered many layers of complexity that make measurement challenging. These layers are present in almost every B2B sales process, but they compound to become increasingly misleading in large enterprise deals with multiple decision makers.
Complexity layer 1: there is never just one source for a deal
One of the most widely-adopted concepts in B2B marketing is the Demand Waterfall, from SiriusDecisions. It has become so deeply embedded across the industry that most readers can likely recite these stages in their sleep:
Today, almost every system for B2B pipeline measurement is designed to identify the one single action that drove each lead to raise their hand (submit a demo request, take a call, etc.). That single action is then reported as “the source of the deal.”
If only it were so easy.
When we talk to go-to-market leaders, they know that their one-source attribution is not correct, but often their hunch is that it’s close enough. The idea of parsing through the mess of everything else that they know helps both source and close a deal is daunting, and it’s hard to find a frame of reference to decide whether it would even be worth the effort.
We’ve found that these one-source models become especially problematic as deals grow bigger and more complicated. At Branch, by the time an opportunity turned into qualified pipeline, we usually saw 5-8 people engaging with us, reading content, attending webinars, replying to outbound emails, receiving a referral from a partner, and so on. Some of these activities clearly mattered more than others, but the idea that any single touchpoint, with any single person, had somehow sourced the entire deal was obviously wrong.
To illustrate the issue with one-source attribution, here is a simplistic example showing touchpoints from the early stages of a deal with just three people involved:
In this scenario, here’s what is going on behind the scenes to generate these touchpoints:
An SDR is outbounding multiple people.
The decision maker eventually replies to one of those emails, and is invited to attend a dinner. Afterwards, she asks her analyst to research the solution.
The analyst has been ignoring the SDR, but now he reads some content and attends a webinar.
The analyst asks an engineer on another team to take a look too, and they agree a demo would be helpful.
All three people attend the demo, during which the decision maker expresses interest. This results in an opportunity getting created.
What’s the correct way to attribute this deal?
A typical lead-source model would probably give credit to the webinar, because that was what the analyst did right before submitting the demo request.
The SDR team would argue that this is a successful outbound deal, because the SDR sent emails before anything else.
We believe neither of these answers is valid, because with all the data laid out, it’s clear that no single touchpoint got this deal moving — in fact, it was all of the engagement together that eventually led to results.
If you hope to get useful insights about complex deals like this one, you need an attribution model that is sophisticated enough to look beyond one-source reporting.
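To make the contrast concrete, here is a rough Python sketch of how one-source and multi-touch credit allocation differ for a deal like the one above. The touchpoint names and the equal-split rule are illustrative assumptions, not a prescribed model:

```python
# Hypothetical touchpoints from the three-person deal above, in order.
touchpoints = [
    ("decision_maker", "outbound_email_reply"),
    ("decision_maker", "vip_dinner"),
    ("analyst", "content_read"),
    ("analyst", "webinar"),
    ("engineer", "demo"),
    ("analyst", "demo"),
    ("decision_maker", "demo"),
]

def one_source(tps):
    """Last-touch: 100% of the credit goes to the final touchpoint."""
    return {tps[-1][1]: 1.0}

def multi_touch(tps):
    """Naive multi-touch: every touchpoint gets an equal share of credit."""
    share = 1.0 / len(tps)
    credit = {}
    for _, channel in tps:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

print(one_source(touchpoints))   # all credit collapses onto the demo
print(multi_touch(touchpoints))  # credit is spread across every activity
```

Even this naive equal split is closer to reality than one-source reporting; the impact-weighted approach described later in this article refines the split further.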
Complexity layer 2: time matters, but touchpoint sequence (usually) doesn’t
Every go-to-market team can benefit from attribution, but experienced marketers are usually the ones who have spent the most time thinking about it. We hear CMOs mention trying first-touch, last-touch, U-shape, W-shape, J-shape, reverse J-shape, full-path…and they keep going.
The problem with this alphabet soup of models? None of them are designed to acknowledge that some types of touchpoint inherently have more impact on a deal than others. Instead, position-based models like these try to figure out which touchpoints deserve more credit solely based on the sequence of events.
To illustrate this, let’s look at the touchpoint activity from the early stages of another deal. We’ll use a single person this time, just to keep the example a little simpler:
In this scenario, here’s what is happening to generate these touchpoints:
The lead gets a cold email and initially replies “not interested” or “let’s talk later.”
But then they attend a conference talk about the product, during which they learn more about the value it could bring them.
Next, they do some preliminary research on the website, and submit a demo request.
After a short demo meeting with the sales team, they do some additional research on the website, and then email the salesperson to say they would like to move forward.
How would this activity typically be attributed?
A first-touch model would credit the outbound emails, even though the most they accomplished was some brand awareness.
Last-touch would credit the demo request form (or maybe either the Google search or the product page, if the system is slightly more sophisticated).
But in reality, the conference talk was where this person spent the most time learning about the solution…shouldn’t that show up somewhere? And why should we give credit to the demo request at all? Demo requests reflect the intent built up over all the previous touchpoints, but they aren’t a meaningful touchpoint of their own!
Our opinion: time matters, because something that happened a year ago is less relevant than what happened yesterday. But all of the touchpoints in this example helped move the deal forward, and the sequence of events indicates very little about which ones were more incremental.
This is why advanced teams have moved beyond position-based multi-touch, into impact-weighted attribution and econometric models. Embracing a non-chronological approach also comes with two wonderful side benefits:
You no longer need to worry about debugging misattribution caused by minor errors in how your systems captured the exact order of events.
“Dark funnel” activities (podcasts, Slack communities, word-of-mouth) are far less disruptive to your measurement. This is because you still see the relative contributions of all the other things that also moved the deal forward, and you can more easily incorporate the answers to self-reported attribution questions like “how did you hear about us?” that don’t come with an exact timestamp.
Complexity layer 3: deals are always influenced by more than one team
When teams are siloed and don’t share a common language of data that helps them work together, there is often a lot of double counting, political finger-pointing, or both. It’s hard to know what worked, and where to invest.
Let’s use a simplified example involving just marketing and sales: historically, B2B attribution has been built on the assumption that there is one buyer going through the journey, and this buyer follows a stage-based waterfall model. That’s why marketing organizations measure everything by counting leads — which is a metric they can fully control — but it’s also why there is typically a strong divide between marketing and sales:
Marketing “captures the leads” and sends the qualified ones to sales. The failure mode for this is sales always asking for more leads, but then complaining that the leads they’re getting are not good.
Sales “works the leads” to generate opportunities. The failure mode for this is marketing thinking that sales simply doesn’t follow through on all the excellent leads they’ve been sent.
The end result is a lot of blame shifted around when pipeline is not up to par.
But in today’s world, almost every large deal is brought together through the efforts of both marketing and sales, at minimum, and often also with the assistance of teams such as partnerships and customer support. And what’s missing from this picture is a holistic understanding of how all of the teams involved are contributing to the thing that really matters: revenue.
For example, sales and marketing might see a deal like this, in separate silos:
But the full story, as experienced by the customer, is more like this:
At Branch, one of our top sources of new deals was VIP dinners that the marketing team organized. But the people at those dinners were identified and invited by our sales teams. There was simply no way to attribute the resulting opportunities to only sales or marketing — the dinner was important, but so was the 1:1 invitation. Any measurement system that tries to evaluate just a single team in a silo is doomed to produce misleading data from the start.
Complexity layer 4: attribution is just one piece of the puzzle
Finally, one of the biggest issues with most attribution systems today isn’t about the data itself; it’s with how the data is used. Time and time again, we have seen examples of attribution being used to say “look what a good job I did!” instead of “what should I do next?” or “what changes should I make to improve?”
In other words, attribution is being misused as a weapon by leaders to assert their worth, rather than as data to inform a feedback loop that helps the business succeed.
Better attribution models and insights can go a long way, but attribution data must be viewed through the lens of what it can do to drive revenue, ROI, and efficiency metrics like the company’s magic number.
What does it take to untangle the mess of B2B revenue measurement?
As the statistician George Box observed, all models are wrong, but some are useful. This gets at one of the fundamental challenges for any measurement solution: the real world has so many variables that it’s impossible to identify true cause and effect.
Some B2B thought leaders have begun talking about why attribution has become a fool’s errand, due to the growth of unattributable “dark funnel” activity (podcasts, Slack communities, social media interactions, and so on). In their view, the real solution is to give up on attribution models and instead simply ask customers “how did you hear about us?”
We don’t agree that attribution is dead. Data from asking customers (also known as self-reported attribution) is absolutely something that should be incorporated into a model, but it shouldn’t be the only source of data because it’s incomplete; customers don’t remember everything that influenced them either, so there is huge value to measuring the recipe that led to a successful outcome.
In other words, imperfect data is far better than none at all, and every step up the attribution maturity curve will improve the answers to questions like “where should I spend an extra $100k next quarter?” or “how can I increase sales pipeline without a bigger budget?” This requires balancing two objectives that are often in opposition to each other:
Include as many inputs as possible. For an attribution model to generate meaningful results, it must capture a high percentage of the raw touchpoints across every team and every channel.
Produce simple outputs. For a measurement system to have a lasting impact, it must not be a black box or a massive, arcane dashboard that only a few specially-trained analysts can make sense of.
After chipping away at the problem of B2B revenue measurement for many years, here are some of the best practices we’ve learned along the way:
Consider everyone involved in a deal, not just the individual leads.
Bring together all customer touchpoints, from every go-to-market team.
Start from an objective baseline, like “minutes of engagement,” to compare different types of touchpoints.
Apply a time-decay to give less weight to touchpoints that happened long before the conversion.
Use personas as a multiplier to allocate more weight to decision makers.
Look at attribution data as a diagnostic, not a target.
Be OK with using different models to answer different questions.
Don’t stop at attribution, because ROI is much better.
Make data actionable for everyone on the team.
In the section below, we’ll explore each of these areas in more detail.
Consider everyone involved in a deal, not just the individual leads
When your customers’ buying committees contain multiple people, you won’t get a clear picture of what makes your deals progress by running attribution on the activity done by just one of them. You need to consider the touchpoints of everyone involved.
At the level of practical implementation, there are two ways we’ve seen teams do this, each with pros and cons:
Opportunity contact roles. In theory, contact role relationships set in the CRM should be the simplest approach to figure out who is important on a deal. But this often fails in practice, because it has an unavoidable dependency on AEs adding contact roles for everyone they interact with. We’ve found maintaining that level of CRM hygiene to be a losing battle.
Account-level attribution. Unlike contact roles, contact-to-account mapping is often automated and quite reliable. Using the activity of every contact mapped to an account solves the CRM hygiene challenge, but comes with the risk of too much noise, especially when accounts are so big that there are multiple business units with distinct deal cycles active at the same time. And in CRMs that maintain a distinction between “leads” and “contacts” (like Salesforce), there is a risk of missing relevant touchpoints from people who have not yet been converted.
We think both of these methods lead to unreliable data. Instead, we recommend an approach that gives the best of both worlds: use contact roles, and detect additional contact roles automatically (such as whenever new people attend meetings or join email threads about an in-progress deal).
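As a sketch of what this hybrid approach could look like in practice (with assumed data shapes and hypothetical field names), the idea is to start from explicit CRM contact roles and then union in anyone detected in meetings or email threads tied to the deal:

```python
# Hypothetical data shapes: a set of CRM contact-role emails, plus meeting
# and email-thread records captured by your activity pipeline.
def deal_participants(contact_roles, meetings, email_threads):
    """Union explicit contact roles with auto-detected participants."""
    people = set(contact_roles)
    for meeting in meetings:
        people.update(meeting["attendees"])
    for thread in email_threads:
        people.update(thread["participants"])
    return people

roles = {"dana@acme.com"}  # the one contact the AE remembered to add
meetings = [{"attendees": {"dana@acme.com", "raj@acme.com"}}]
threads = [{"participants": {"lee@acme.com"}}]
print(sorted(deal_participants(roles, meetings, threads)))
```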
Bring together all customer touchpoints, from every go-to-market team
By breaking down the data silos between teams like marketing, sales, partnerships, SDRs, and customer success, you can make sure that you are looking at your whole go-to-market presence holistically. After we did this at Branch, we noticed our entire organization working together far more collaboratively — in fact, we eventually did an analysis that showed deal sizes went up when more teams were involved.
Data centralization is essentially just a problem of extract, transform, and load (ETL). We’ve seen some companies use a dedicated vendor, and others that simply pipe everything back into Salesforce. How you do it is less important, though vendor consolidation is worth considering since this is usually commoditized functionality.
Find an objective baseline and use it to create a touchpoint weighting schema
Bringing data together across teams is just the first step. You still need to DO something with it. That is where we’ve seen many teams get stuck, because turning raw data into attribution insights requires addressing something that is always contentious: you’ll need to create a common language across these touchpoints so they can be compared.
Some customer interactions clearly do more to move a deal forward than others. The challenge is agreeing which ones, and by how much — without some sort of ground truth, this is a political fight just waiting to happen. With position-based models (first-touch, last-touch, W-shape, and so on) off the table, there are three main options:
Impact-weighted. This involves defining a relative “size” for each type of touchpoint, and then using those scores as the baseline for allocating credit.
Econometrics. Incrementality studies and media mix models have been around for decades, and can give a statistical answer about what is driving results. The challenge with econometric techniques is they’re typically slow (weeks or months for each report), and difficult to use for tactical decisions because of how abstract their outputs are.
Data-driven. So-called “data-driven attribution” models are based on machine learning algorithms. They’re more tactically useful than econometric analyses, but they’re best suited to specific channels with large quantities of similar-looking activities (for example, digital marketing for consumer brands). Data-driven models are also deeply opaque black boxes, and we have yet to meet a leader who fully understands or trusts their output.
For B2B revenue attribution, we believe impact-weighted models are the current state of the art. A simplified example might look something like this:
Then, use these weighted scores to generate metrics like “weighted pipeline” and “weighted revenue” based on the full story of the deal across everyone involved:
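As a rough Python sketch of the mechanics (the weight values below are invented for illustration, not the scores Branch used), an impact-weighted model allocates a deal’s pipeline dollars in proportion to each touchpoint’s weight:

```python
from collections import defaultdict

# Invented impact weights ("sizes") per touchpoint type.
IMPACT_WEIGHTS = {
    "outbound_email_reply": 2,
    "content_read": 3,
    "webinar": 10,
    "demo": 15,
    "vip_dinner": 25,
}

def weighted_pipeline(touchpoints, deal_amount):
    """Allocate a deal's pipeline dollars to touchpoint types in
    proportion to their impact weights."""
    total = sum(IMPACT_WEIGHTS[t] for t in touchpoints)
    credit = defaultdict(float)
    for t in touchpoints:
        credit[t] += deal_amount * IMPACT_WEIGHTS[t] / total
    return dict(credit)

deal = ["outbound_email_reply", "vip_dinner", "content_read", "webinar", "demo"]
print(weighted_pipeline(deal, 100_000))
# The VIP dinner earns 25/55 of the $100k; the email reply only 2/55.
```

Summing these allocations across every open deal gives a “weighted pipeline” figure per touchpoint type; doing the same over closed-won deals gives “weighted revenue.”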
Yes, agreeing on the “size” scores for each touchpoint type in an impact-weighted attribution model is hard. It’s probably the most contentious step in the whole process, because everyone will come with a different opinion. The impact-weighted model we used at Branch was anchored to the average amount of time a prospect spent with each activity, which provided an objective starting point, and we had a cross-functional group of leaders we called “the weighting council” that was tasked with iterating and refining these scores as we learned more and tried new things.
Fortunately, there are several approaches on the horizon to make impact-weighted scores more accurate and less subjective:
Deep assessment using AI. With the proliferation of LLMs, it will soon be possible to make an even more nuanced assessment of each touchpoint’s impact. For example, scores can be refined based on how someone replied to an email, or how engaged they were in a Zoom meeting or webinar.
Using an econometric model for calibration. Even if a media mix model is challenging to use for everyday decisions, it can provide a very useful checkpoint for weighting refinements. For example, you might get a recommendation that sounds something like this: “based on a statistical analysis of your deals last quarter, it appears your current weightings schema might be under-estimating the impact of your webinars, but slightly over-crediting conferences.”
Incorporating cross-company benchmarks. All of this can be made more accurate when models are trained on a larger dataset.
Apply a time decay
Something that just happened yesterday holds more weight in someone’s mind than something that happened months ago, and should therefore receive more attribution credit in an active deal cycle, even if both touchpoints are otherwise equivalent.
In our opinion, this is important but it isn’t worth over-thinking. We recommend starting with a fixed time window prior to opportunity creation (probably in the range of 12-18 months, depending on the average length of your deal cycles), and then applying a linear time decay on top of all the touchpoints within that window.
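Here is a minimal sketch of that recommendation in Python, assuming a 12-month window for illustration:

```python
from datetime import date, timedelta

WINDOW_DAYS = 365  # assumed 12-month lookback before opportunity creation

def decay_factor(touch_date, opp_date, window_days=WINDOW_DAYS):
    """Linear decay: 1.0 on the day the opportunity is created, falling
    to 0.0 at the edge of the window; anything older gets no credit."""
    age = (opp_date - touch_date).days
    if age < 0 or age > window_days:
        return 0.0
    return 1.0 - age / window_days

opp = date(2024, 5, 6)
print(decay_factor(opp - timedelta(days=30), opp))   # recent: close to 1.0
print(decay_factor(opp - timedelta(days=400), opp))  # outside window: 0.0
```

The decay factor simply multiplies each touchpoint’s impact weight before credit is allocated.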
Use personas as a multiplier
This one is easy: not every customer contact involved in an opportunity has equal influence on the outcome. An executive decision maker has more sway on a deal than a junior analyst doing research, which means your attribution model should allow different weighting multipliers for each persona on the deal.
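A minimal sketch of how a persona multiplier might layer on top of a base touchpoint score (the multiplier values here are purely illustrative):

```python
# Purely illustrative multipliers; yours should come out of the same
# weighting process described above.
PERSONA_MULTIPLIERS = {
    "decision_maker": 2.0,
    "analyst": 1.0,
    "engineer": 0.75,
}

def persona_weighted_score(base_score, persona):
    """Scale a touchpoint's base impact score by who performed it."""
    return base_score * PERSONA_MULTIPLIERS.get(persona, 1.0)

print(persona_weighted_score(10, "decision_maker"))  # 20.0
print(persona_weighted_score(10, "engineer"))        # 7.5
```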
Look at attribution data as a diagnostic, not a target
Goodhart’s law says “when a measure becomes a target, it ceases to be a good measure.” This definitely applies to revenue attribution.
One thing we learned early on: don’t tie individual contributor compensation to cross-team attribution metrics. This may seem like a seductively logical path, but keep in mind that as a leader, your incentives are aligned with what is best for the business as a whole. Cross-functional initiatives (for example, SDRs inviting prospects to marketing-hosted VIP dinners) are often some of the most effective programs you can run, but not everyone on your team has the same level of context that you do.
In other words, take care to avoid creating a dynamic where ICs are hesitant to collaborate with others because their own numbers could go down.
Be OK with using different models to answer different questions
While we said this once already, it’s worth repeating: all models are wrong, but some are useful. In our opinion, a cross-team, impact-weighted multi-touch model is the best option for answering questions like “what do we need to do to get more revenue?” This is a question most boards and investors care about, so it’s a great one to ask.
But sometimes you need answers to more tactical questions like “what is most effective at booking us initial meetings?” In that situation, a “breakthrough touchpoint report” (an improved version of the infamous “original source” model) might still be your best bet.
Ultimately, there is no single attribution model that can answer every question. You’ll make better decisions if you get comfortable with using the right model for the right task.
Don’t stop at attribution, because ROI is much better
In general, most sales and marketing activities “work” at some level. The question is, do they work well enough to continue investing? The way to answer this question is with an ROI report.
At Branch, our ROI calculations included both direct campaign costs and estimated people costs, which involved estimating the amount of time spent to execute each activity.
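A simplified sketch of that calculation, with an assumed blended hourly rate standing in for people costs:

```python
def activity_roi(attributed_revenue, direct_cost, hours_spent, hourly_rate=100.0):
    """ROI = attributed revenue / total cost, where total cost combines
    direct campaign spend and estimated people cost (hours * rate).
    The $100/hour blended rate is a placeholder assumption."""
    total_cost = direct_cost + hours_spent * hourly_rate
    return attributed_revenue / total_cost

# A hypothetical VIP dinner: $15k direct spend, ~40 hours of team time,
# and $120k of weighted revenue attributed to it.
print(round(activity_roi(120_000, 15_000, 40), 2))  # 6.32
```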
We found that the more difficult challenge was training everyone on the team to view results through the lens of ROI, rather than raw outcome metrics. This is a cultural change that has to start from the top with go-to-market leadership, and must prevail through both good times and bad.
Make data actionable for everyone on the team
B2B attribution is valuable to help a go-to-market organization understand what worked, but the real power comes from going beyond passive reporting.
If you know what has helped close deals in the past, you can equip every person on the team with insight about the best step to take next for each deal. For example, once you know the types of people who are positively influenced by webinars, you can automatically create lists for your SDR team to invite to the next one.
The power of generating recommendations based on cross-team insights is that you can make higher-level decisions, and allocate budget and resources even more intelligently to get the best possible outcome.
Takeaway: B2B attribution IS achievable, and worth the effort!
B2B revenue measurement is a giant mess, but we don’t agree with those in the industry who argue that attribution is no longer worth the effort. With the end of “growth at any cost,” and increasing scrutiny on efficient go-to-market motions, game-changing measurement is possible. Using reliable data to get the full picture of your growth engine is one of the best things you can do to scale your business.
Here are just a few of the decisions our holistic attribution model helped us make at Branch:
Replacing conferences with VIP dinners. Our model supported what the marketing team had suspected for a long time: most of the conferences we were sponsoring were never worth it for us. However, small group dinners with a mixture of prospects and existing customers were hugely successful. And despite appearing expensive on a per-event basis, the ROI analysis convinced our finance team to get on board with scaling that program up.
Cutting paid promotion. We eliminated almost all spending on paid acquisition, including content syndication. Even though we saw a moderate volume of net new leads through these channels, the cost was extremely high and the leads we captured were low-quality and rarely led to opportunities.
Executive events. We hosted an all-expenses-paid, multi-day event in Hawaii for customers and prospects. Despite rave reviews from attendees and from our own teams, our data showed that single-day events in local cities were a more cost-effective way to get the same results. And the model also helped the team responsible for these events decide who to invite, without needing to constantly bother sales reps for nominations.
How many SDRs to hire, and which ones were consistently underperforming. While we warned earlier about using cross-team metrics to determine compensation, they can be extremely helpful as an input when comparing the results of people in equivalent roles. Our model helped us understand when we needed to hire additional SDRs in a given region, and which individuals were not a strong fit for their roles.
Identifying high performing outbound messaging. By comparing the weighted pipeline per outbound email sent by the SDR team, we were able to surface creative strategies that opened doors at target companies, then subsequently roll them out to the whole team.
Which BD partnerships to invest in. We received a constant stream of invitations to participate in co-marketing and other activities with our partners. We always wanted to participate, but needed help prioritizing limited resources. Our model allowed us to understand which partnerships performed best.
Doubling down on direct mail campaigns. When our teams suggested direct mail as something to test, we were skeptical — would sending someone something in the mail in return for a meeting really work? Sure, it would get a lead and maybe an opportunity, but would it contribute to qualified pipeline and won opportunities? Our model showed that it did, and that the ROI of these campaigns was off the charts.
After seeing the huge impact revenue measurement has when working well, and experiencing first-hand how difficult and painful it was to create our own solution from scratch at Branch, we have founded Upside to help go-to-market leaders make better decisions by bringing these insights to every B2B team in the world.
If you’ve been trying to solve this problem for years and somehow never managed to make progress, we still have a few spaces in our design partner program and we would love to hear from you. Just reach out to us and we’ll be in touch!
Once upon a time, “growth at any cost” was an acceptable strategy for B2B companies. Money was cheap, so teams could drive new revenue simply by spending their way to success.
Those days are gone. Now, it is the age of efficient growth, and B2B go-to-market leaders need to know what is working so they don’t waste their limited resources.
We believe that the foundation of efficient growth is trustworthy, reliable data. This foundation is transformational for a go-to-market organization, because it enables data-driven decisions and answers questions like “where should I spend an extra $100k next quarter?” or “how can I increase sales pipeline without a bigger budget?” across every dollar spent and every person hired. It’s a transformation we have experienced first-hand: at our previous company, Branch, the holistic attribution model we built in-house was such a game-changer that it quickly became the cornerstone for most of our key decisions and optimizations while scaling to over $100m in revenue.
It was only later, when industry peers began to tell us that the insights we were getting sounded better than anything else they’d ever heard of, that we began to realize the model we built at Branch might be something special. With Upside, our mission is to help go-to-market leaders make better decisions by bringing the same game-changing, data-driven perspective to every B2B team in the world.
In the article below, we share many of the best practices we wish we had known when we first started wrestling with these problems, and describe some of the biggest measurement traps we often see teams falling into, including:
Believing that the best way to measure is by identifying a single “source” for each deal.
Relying on the sequence of events to allocate credit across many touchpoints.
Looking at the contributions of each go-to-market team separately.
Using attribution data as a weapon, rather than a feedback loop.
Whether you’ve been struggling to solve your company’s revenue measurement challenge for years, or you have a solution that is already working well, or even if attribution is just something you have strong opinions about, we are passionate about this topic and we would love to hear from you. Please drop us a note!
Why is B2B revenue measurement such a tangled mess?
Branch, the company where we first took on these challenges together, was founded in 2014. The product was deep linking and attribution for mobile apps, and the target customers were large consumer brands. Eight years later, we had scaled the company to over $100 million in revenue (and a valuation of $4 billion), primarily through large, complex deals with long cycles and multiple buyers.
But partway through that growth curve, we began to miss our quarterly sales pipeline targets. During a post-mortem meeting to investigate why, the conversation went something like this:
You might be smiling ruefully in commiseration, perhaps remembering similar discussions of your own (and if you’re thinking that we could have solved this pipeline duplication problem by tracking opportunity source, we’d tried that already — we’ll explain why we believe it isn’t the answer in the next section).
As we began trying to understand what had caused us to miss our targets, we discovered many layers of complexity that make measurement challenging. These layers are present in almost every B2B sales process, but they compound to become increasingly misleading in large enterprise deals with multiple decision makers.
Complexity layer 1: there is never just one source for a deal
One of the most widely-adopted concepts in B2B marketing is the Demand Waterfall, from SiriusDecisions. It has become so deeply embedded across the industry that most readers can likely recite these stages in their sleep:
Today, almost every system for B2B pipeline measurement is designed to identify the one single action that drove each lead to raise their hand (submit a demo request, take a call, etc.). That single action is then reported as “the source of the deal.”
If only it were so easy.
When we talk to go-to-market leaders, they know that their one-source attribution is not correct, but often their hunch is that it’s close enough. The idea of untangling everything else that they know helps both source and close a deal is daunting, and it’s hard to find a frame of reference for deciding whether the effort would even be worth it.
We’ve found that these one-source models become especially problematic as deals grow bigger and more complicated. At Branch, by the time an opportunity turned into qualified pipeline, we usually saw 5-8 people engaging with us, reading content, attending webinars, replying to outbound emails, receiving a referral from a partner, and so on. Some of these activities clearly mattered more than others, but the idea that any single touchpoint, with any single person, had somehow sourced the entire deal was obviously wrong.
To illustrate the issue with one-source attribution, here is a simplistic example showing touchpoints from the early stages of a deal with just three people involved:
In this scenario, here’s what is going on behind the scenes to generate these touchpoints:
An SDR is outbounding multiple people.
The decision maker eventually replies to one of those emails, and is invited to attend a dinner. Afterwards, she asks her analyst to research the solution.
The analyst has been ignoring the SDR, but now he reads some content and attends a webinar.
The analyst asks an engineer on another team to take a look too, and they agree a demo would be helpful.
All three people attend the demo, during which the decision maker expresses interest. This results in an opportunity getting created.
What’s the correct way to attribute this deal?
A typical lead-source model would probably give credit to the webinar, because that was what the analyst did right before submitting the demo request.
The SDR team would argue that this is a successful outbound deal, because the SDR sent emails before anything else.
We believe neither of these answers is valid, because with all the data laid out, it’s clear that no single touchpoint got this deal moving — in fact, it was all of the engagement together that eventually led to results.
If you hope to get useful insights about complex deals like this one, you need an attribution model that is sophisticated enough to look beyond one-source reporting.
Complexity layer 2: time matters, but touchpoint sequence (usually) doesn’t
Every go-to-market team can benefit from attribution, but experienced marketers are usually the ones who have spent the most time thinking about it. We hear CMOs mention trying first-touch, last-touch, U-shape, W-shape, J-shape, reverse J-shape, full-path…and they keep going.
The problem with this alphabet soup of models? None of them are designed to acknowledge that some types of touchpoint inherently have more impact on a deal than others. Instead, position-based models like these try to figure out which touchpoints deserve more credit solely based on the sequence of events.
To illustrate this, let’s look at the touchpoint activity from the early stages of another deal. We’ll use a single person this time, just to keep the example a little simpler:
In this scenario, here’s what is happening to generate these touchpoints:
The lead gets a cold email and initially replies “not interested” or “let’s talk later.”
But then they attend a conference talk about the product, during which they learn more about the value it could bring them.
Next, they do some preliminary research on the website, and submit a demo request.
After a short demo meeting with the sales team, they do some additional research on the website, and then email the salesperson to say they would like to move forward.
How would this activity typically be attributed?
A first-touch model would credit the outbound emails, even though the most they accomplished was some brand awareness.
Last-touch would credit the demo request form (or maybe either the Google search or the product page, if the system is slightly more sophisticated).
But in reality, the conference talk was where this person spent the most time learning about the solution…shouldn’t that show up somewhere? And why should we give credit to the demo request at all? Demo requests reflect the intent built up over all the previous touchpoints, but they aren’t a meaningful touchpoint of their own!
Our opinion: time matters, because something that happened a year ago is less relevant than what happened yesterday. But all of the touchpoints in this example helped move the deal forward, and the sequence of events indicates very little about which ones were more incremental.
This is why advanced teams have moved beyond position-based multi-touch, into impact-weighted attribution and econometric models. Embracing a non-chronological approach also comes with two wonderful side benefits:
You no longer need to worry about debugging misattribution caused by minor errors in how your systems captured the exact order of events.
“Dark funnel” activities (podcasts, Slack communities, word-of-mouth) are far less disruptive to your measurement. This is because you still see the relative contributions of all the other things that also moved the deal forward, and you can more easily incorporate the answers to self-reported attribution questions like “how did you hear about us?” that don’t come with an exact timestamp.
Complexity layer 3: deals are always influenced by more than one team
When teams are siloed and don’t share a common language of data that helps them work together, there is often a lot of double counting, political finger-pointing, or both. It’s hard to know what worked, and where to invest.
Let’s use a simplified example involving just marketing and sales: historically, B2B attribution has been built on the assumption that there is one buyer going through the journey, and this buyer follows a stage-based waterfall model. That’s why marketing organizations measure everything by counting leads — which is a metric they can fully control — but it’s also why there is typically a strong divide between marketing and sales:
Marketing “captures the leads” and sends the qualified ones to sales. The failure mode for this is sales always asking for more leads, but then complaining that the leads they’re getting are not good.
Sales “works the leads” to generate opportunities. The failure mode for this is marketing thinking that sales simply doesn’t follow through on all the excellent leads they’ve been sent.
The end result is a lot of blame shifted around when pipeline is not up to par.
But in today’s world, almost every large deal is brought together through the efforts of both marketing and sales, at minimum, and often also with the assistance of teams such as partnerships and customer support. And what’s missing from this picture is a holistic understanding of how all of the teams involved are contributing to the thing that really matters: revenue.
For example, sales and marketing might see a deal like this, in separate silos:
But the full story, as experienced by the customer, is more like this:
At Branch, one of our top sources of new deals was VIP dinners that the marketing team organized. But the people at those dinners were identified and invited by our sales teams. There was simply no way to attribute the resulting opportunities to only sales or marketing — the dinner was important, but so was the 1:1 invitation. Any measurement system that tries to evaluate just a single team in a silo is doomed to produce misleading data from the start.
Complexity layer 4: attribution is just one piece of the puzzle
Finally, one of the biggest issues with most attribution systems today isn’t about the data itself; it’s with how the data is used. Time and time again, we have seen examples of attribution being used to say “look what a good job I did!” instead of “what should I do next?” or “what changes should I make to improve?”
In other words, attribution is being misused as a weapon by leaders to assert their worth, rather than as data to inform a feedback loop that helps the business succeed.
Better attribution models and insights can go a long way, but attribution data must be viewed through the lens of what it can do to drive revenue, ROI, and efficiency metrics like the company’s magic number.
What does it take to untangle the mess of B2B revenue measurement?
As the statistician George Box put it: all models are wrong, but some are useful. This gets at one of the fundamental challenges for any measurement solution: the real world has so many variables that it’s impossible to identify true cause and effect.
Some B2B thought leaders have begun talking about why attribution has become a fool’s errand, due to the growth of unattributable “dark funnel” activity (podcasts, Slack communities, social media interactions, and so on). In their view, the real solution is to give up on attribution models and instead simply ask customers “how did you hear about us?”
We don’t agree that attribution is dead. Data from asking customers (also known as self-reported attribution) is absolutely something that should be incorporated into a model, but it shouldn’t be the only source of data because it’s incomplete; customers don’t remember everything that influenced them either, so there is huge value to measuring the recipe that led to a successful outcome.
In other words, imperfect data is far better than none at all, and every step up the attribution maturity curve will improve the answers to questions like “where should I spend an extra $100k next quarter?” or “how can I increase sales pipeline without a bigger budget?”. This requires balancing two objectives that are often in opposition to each other:
Include as many inputs as possible. For an attribution model to generate meaningful results, it must capture a high percentage of the raw touchpoints across every team and every channel.
Produce simple outputs. For a measurement system to have a lasting impact, it must not be a black box or a massive, arcane dashboard that only a few specially-trained analysts can make sense of.
After chipping away at the problem of B2B revenue measurement for many years, here are some of the best practices we’ve learned along the way:
Consider everyone involved in a deal, not just the individual leads.
Bring together all customer touchpoints, from every go-to-market team.
Start from an objective baseline, like “minutes of engagement,” to compare different types of touchpoints.
Apply a time-decay to give less weight to touchpoints that happened long before the conversion.
Use personas as a multiplier to allocate more weight to decision makers.
Look at attribution data as a diagnostic, not a target.
Be OK with using different models to answer different questions.
Don’t stop at attribution, because ROI is much better.
Make data actionable for everyone on the team.
In the section below, we’ll explore each of these areas in more detail.
Consider everyone involved in a deal, not just the individual leads
When your customers’ buying committees contain multiple people, you won’t get a clear picture of what makes your deals progress by running attribution on the activity done by just one of them. You need to consider the touchpoints of everyone involved.
At the level of practical implementation, there are two ways we’ve seen teams do this, each with pros and cons:
Opportunity contact roles. In theory, contact role relationships set in the CRM should be the simplest approach to figure out who is important on a deal. But this often fails in practice, because it has an unavoidable dependency on AEs adding contact roles for everyone they interact with. We’ve found maintaining that level of CRM hygiene to be a losing battle.
Account-level attribution. Unlike contact roles, contact-to-account mapping is often automated and quite reliable. Using the activity of every contact mapped to an account solves the CRM hygiene challenge, but comes with the risk of too much noise, especially when accounts are so big that there are multiple business units with distinct deal cycles active at the same time. And in CRMs that maintain a distinction between “leads” and “contacts” (like Salesforce), there is a risk of missing relevant touchpoints from people who have not yet been converted.
We think both of these methods lead to unreliable data. Instead, we recommend an approach that gives the best of both worlds: use contact roles, and detect additional contact roles automatically (such as whenever new people attend meetings or join email threads about an in-progress deal).
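To make the mechanics concrete, here is a minimal sketch of that hybrid approach. The data shapes and field names are hypothetical; in a real system these would come from your CRM and activity-capture tooling.

```python
def effective_contact_roles(crm_contact_roles, meetings, email_threads):
    """Start from the explicitly-set CRM contact roles, then add anyone
    detected attending meetings or joining email threads about the deal,
    so attribution doesn't depend on perfect AE hygiene."""
    people = set(crm_contact_roles)
    for meeting in meetings:
        people.update(meeting["attendees"])
    for thread in email_threads:
        people.update(thread["participants"])
    return people

roles = effective_contact_roles(
    crm_contact_roles={"dana@acme.example"},
    meetings=[{"attendees": {"dana@acme.example", "raj@acme.example"}}],
    email_threads=[{"participants": {"lee@acme.example"}}],
)
# roles now includes the people the AE never logged as contact roles
```

The key design choice is that detection only ever adds people; it never removes a contact role an AE set deliberately.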
Bring together all customer touchpoints, from every go-to-market team
By breaking down the data silos between teams like marketing, sales, partnerships, SDRs, and customer success, you can make sure that you are looking at your whole go-to-market presence holistically. After we did this at Branch, we noticed our entire organization working together far more collaboratively — in fact, we eventually did an analysis that showed deal sizes went up when more teams were involved.
Data centralization is essentially just a problem of extract, transform, and load (ETL). We’ve seen some companies use a dedicated vendor, and others that simply pipe everything back into Salesforce. How you do it is less important, though vendor consolidation is worth considering since this is usually commoditized functionality.
Find an objective baseline and use it to create a touchpoint weighting schema
Bringing data together across teams is just the first step. You still need to DO something with it. That is where we’ve seen many teams get stuck, because turning raw data into attribution insights requires addressing something that is always contentious: you’ll need to create a common language across these touchpoints so they can be compared.
Some customer interactions clearly do more to move a deal forward than others. The challenge is agreeing which ones, and by how much — without some sort of ground truth, this is a political fight just waiting to happen. With position-based models (first-touch, last-touch, W-shape, and so on) off the table, there are three main options:
Impact-weighted. This involves defining a relative “size” for each type of touchpoint, and then using those scores as the baseline for allocating credit.
Econometrics. Incrementality studies and media mix models have been around for decades, and can give a statistical answer about what is driving results. The challenge with econometric techniques is they’re typically slow (weeks or months for each report), and difficult to use for tactical decisions because of how abstract their outputs are.
Data-driven. So-called “data-driven attribution” models are based on machine learning algorithms. They’re more tactically useful than econometric analyses, but they’re best suited to specific channels with large quantities of similar-looking activities (for example, digital marketing for consumer brands). Data-driven models are also notoriously opaque black boxes, and we have yet to meet a leader who fully understands or trusts their output.
For B2B revenue attribution, we believe impact-weighted models are the current state of the art. A simplified example might look something like this:
Then, use these weighted scores to generate metrics like “weighted pipeline” and “weighted revenue” based on the full story of the deal across everyone involved:
Yes, agreeing on the “size” scores for each touchpoint type in an impact-weighted attribution model is hard. It’s probably the most contentious step in the whole process, because everyone will come with a different opinion. The impact-weighted model we used at Branch was anchored to the average amount of time a prospect spent with each activity, which provided an objective starting point, and we had a cross-functional group of leaders we called “the weighting council” that was tasked with iterating and refining these scores as we learned more and tried new things.
Fortunately, there are several approaches on the horizon to make impact-weighted scores more accurate and less subjective:
Deep assessment using AI. With the proliferation of LLMs, it will soon be possible to make an even more nuanced assessment of each touchpoint’s impact. For example, scores can be refined based on how someone replied to an email, or how engaged they were in a Zoom meeting or webinar.
Using an econometric model for calibration. Even if a media mix model is challenging to use for everyday decisions, it can provide a very useful checkpoint for weighting refinements. For example, you might get a recommendation that sounds something like this: “based on a statistical analysis of your deals last quarter, it appears your current weightings schema might be under-estimating the impact of your webinars, but slightly over-crediting conferences.”
Incorporating cross-company benchmarks. All of this can be made more accurate when models are trained on a larger dataset.
Apply a time decay
Something that just happened yesterday holds more weight in someone’s mind than something that happened months ago, and should therefore receive more attribution credit in an active deal cycle, even if both touchpoints are otherwise equivalent.
In our opinion, this is important but it isn’t worth over-thinking. We recommend starting with a fixed time window prior to opportunity creation (probably in the range of 12-18 months, depending on the average length of your deal cycles), and then applying a linear time decay on top of all the touchpoints within that window.
Use personas as a multiplier
This one is easy: not every customer contact involved in an opportunity has equal influence on the outcome. An executive decision maker has more sway on a deal than a junior analyst doing research, which means your attribution model should allow different weighting multipliers for each persona on the deal.
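One simple way to express this is a per-persona multiplier applied on top of the touchpoint’s base score. The persona labels and multiplier values here are hypothetical, for illustration only:

```python
# Hypothetical persona multipliers: the same touchpoint counts for more
# when it comes from someone with more influence over the outcome.
PERSONA_MULTIPLIERS = {
    "decision_maker": 2.0,
    "champion": 1.5,
    "analyst": 1.0,
}

def persona_adjusted(base_score, persona):
    """Scale a touchpoint's base impact score by the persona of the
    person involved; unknown personas get a conservative 0.5."""
    return base_score * PERSONA_MULTIPLIERS.get(persona, 0.5)

persona_adjusted(45, "decision_maker")  # a webinar attended by an exec
persona_adjusted(45, "analyst")         # the same webinar by an analyst
```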
Look at attribution data as a diagnostic, not a target
Goodhart’s law says “when a measure becomes a target, it ceases to be a good measure.” This definitely applies to revenue attribution.
One thing we learned early on: don’t tie individual contributor compensation to cross-team attribution metrics. This will seem like a seductively logical path, but you must keep in mind that as a leader, you have a strong incentive to do what is best for the business. Cross-functional initiatives (for example, SDRs inviting prospects to marketing-hosted VIP dinners) are often some of the most effective programs you can run, but not everyone on your team has the same level of context that you do.
In other words, take care to avoid creating a dynamic where ICs are hesitant to collaborate with others because their own numbers could go down.
Be OK with using different models to answer different questions
While we said this once already, it’s worth repeating: all models are wrong, but some are useful. In our opinion, a cross-team, impact-weighted multi-touch model is the best option for answering questions like “what do we need to do to get more revenue?” This is a question most boards and investors care about, so it’s a great one to ask.
But sometimes you need answers to more tactical questions like “what is most effective at booking us initial meetings?”. In that situation, a “breakthrough touchpoint report” (an improved version of the infamous “original source” model) might still be your best bet.
Ultimately, there is no single attribution model that can answer every question. You’ll make better decisions if you get comfortable with using the right model for the right task.
Don’t stop at attribution, because ROI is much better
In general, most sales and marketing activities “work” at some level. The question is, do they work well enough to continue investing? The way to answer this question is with an ROI report.
At Branch, our ROI calculations included both direct campaign costs and estimated people costs, which involved estimating the amount of time spent to execute each activity.
We found that the more difficult challenge was training everyone on the team to view results through the lens of ROI, rather than raw outcome metrics. This is a cultural change that has to start from the top with go-to-market leadership, and must prevail through both good times and bad.
Make data actionable for everyone on the team
B2B attribution is valuable to help a go-to-market organization understand what worked, but the real power comes from going beyond passive reporting.
If you know what has helped close deals in the past, you can equip every person on the team with insight about the best step to take next for each deal. For example, once you know the types of people who are positively influenced by webinars, you can automatically create lists for your SDR team to invite to the next one.
The power of generating recommendations based on cross-team insights is that you can make higher-level decisions, and allocate budget and resources even more intelligently to get the best possible outcome.
Takeaway: B2B attribution IS achievable, and worth the effort!
B2B revenue measurement is a giant mess, but we don’t agree with those in the industry who argue that attribution is no longer worth the effort. With the end of “growth at any cost,” and increasing scrutiny on efficient go-to-market motions, game-changing measurement is possible. Using reliable data to get the full picture of your growth engine is one of the best things you can do to scale your business.
Here are just a few of the decisions our holistic attribution model helped us make at Branch:
Replacing conferences with VIP dinners. Our model supported what the marketing team had suspected for a long time: most of the conferences we were sponsoring were never worth it for us. However, small group dinners with a mixture of prospects and existing customers were hugely successful. And despite appearing expensive on a per-event basis, the ROI analysis convinced our finance team to get on board with scaling that program up.
Cutting paid promotion. We eliminated almost all spending on paid acquisition, including content syndication. Even though we saw a moderate volume of net new leads through these channels, the cost was extremely high and the leads we captured were low-quality and rarely led to opportunities.
Executive events. We hosted an all-expenses-paid, multi-day event in Hawaii for customers and prospects. Despite rave reviews from attendees and from our own teams, our data showed that single-day events in local cities were a more cost-effective way to get the same results. And the model also helped the team responsible for these events decide who to invite, without needing to constantly bother sales reps for nominations.
How many SDRs to hire, and which ones were consistently underperforming. While we warned earlier about using cross-team metrics to determine compensation, they can be extremely helpful as an input when comparing the results of people in equivalent roles. Our model helped us understand when we needed to hire additional SDRs in a given region, and which individuals were not a strong fit for their roles.
Identifying high performing outbound messaging. By comparing the weighted pipeline per outbound email sent by the SDR team, we were able to surface creative strategies that opened doors at target companies, then subsequently roll them out to the whole team.
Which BD partnerships to invest in. We received a constant stream of invitations to participate in co-marketing and other activities with our partners. We always wanted to participate, but needed help prioritizing limited resources. Our model allowed us to understand which partnerships performed best.
Doubling down on direct mail campaigns. When our teams suggested direct mail as something to test, we were skeptical — would sending someone something in the mail in return for a meeting really work? Sure it would get a lead and maybe an opportunity, but would it contribute to qualified pipeline and won opportunities? Our model showed that it did, and that the ROI of these campaigns was off the charts.
After seeing the huge impact revenue measurement has when working well, and experiencing first-hand how difficult and painful it was to create our own solution from scratch at Branch, we have founded Upside to help go-to-market leaders make better decisions by bringing these insights to every B2B team in the world.
If you’ve been trying to solve this problem for years and somehow never managed to make progress, we still have a few spaces in our design partner program and we would love to hear from you. Just reach out to us and we’ll be in touch!
Take control of your revenue journey
Analyze every touchpoint, predict impactful investments, and optimize resources. Transform your B2B strategy with data-driven insights and maximize ROI.
Book a demo
Upside is revenue intelligence for B2B companies. Our platform helps teams figure out what influenced each deal and what they should do next to win it as efficiently as possible.
Resources
© Dragonsight Labs, Inc 2024. All Rights Reserved