In the triumvirate of right customer, right time, right message, propensity to buy may only be number two. But ask any salesperson what they want in a prospective customer and the answer is remarkably consistent: prospects with the highest likelihood to buy. That notion of likelihood to buy is critical, and it's the one we'll be digging into here.
So why focus on propensity to buy, aka prioritisation?
For me it’s a critical part of GTM in 2022. Despite this, it’s one of the most under-developed, unsophisticated parts of account-based or outbound GTM at many early-stage companies. Most companies have 1,000+ companies they could sell to, but only a sales team of 5 BDRs, or an ad budget of XXX p/m. Even if you’re PLG, targeting your content or ads at the companies most likely to convert will drive greater performance.
So why aren’t we spending more time trying to maximise the ROI on our team’s time?
The Impact is colossal
Even the most basic maths shows the impact basic prioritisation can have. Hypothetically, say you’re a BDR working two sets of accounts: an unprioritised list that converts at 4%, and a scored, prioritised list that converts at 5%.
Well, that’s not much, I hear you say: 4% to 5%. Maybe, but it results in 25% more meetings booked (or whatever the next stage in your funnel is) and 25% more pipeline generated. In that regard, the impact is huge.
Marginal improvements at the top of the funnel, can lead to dramatically improved results.
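The arithmetic above is quick to sanity-check. A minimal sketch, using the hypothetical 4% vs. 5% conversion rates from the example:

```python
# Back-of-envelope impact of prioritisation (hypothetical numbers):
# the same 1,000 accounts get worked either way; only the conversion
# rate changes because better-fit accounts are worked first.
accounts_worked = 1000

baseline_rate = 0.04       # unprioritised list converts at 4%
prioritised_rate = 0.05    # scored, prioritised list converts at 5%

baseline_meetings = accounts_worked * baseline_rate        # 40 meetings
prioritised_meetings = accounts_worked * prioritised_rate  # 50 meetings

uplift = (prioritised_meetings - baseline_meetings) / baseline_meetings
print(f"{uplift:.0%} more meetings from the same effort")  # 25%
```

A one-point improvement at the top of the funnel compounds into a quarter more pipeline further down.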
What do we mean by scoring?
People try to score a lot of things. Fit, Intent, Recency, Engagement - the FIRE model - is something I’ve implemented before and found quite interesting. There is some recommended reading here if you want to learn more.
What I’m talking about here is a little simpler, though arguably could be an aggregation of those things depending on how you set it up. What we are focused on is scoring an account’s propensity to buy.
Basically, we want a rep to hit the CRM and see an ordered view of the accounts that look most likely to buy their product, so they can focus their attention accordingly.
In short, we score a company or account’s likelihood to buy and we prioritise based off of that score.
Types of scoring
People tend to implement one of two types of scoring: Account Tiering and Cumulative Scores.
Account Tiering: Pros and Cons.
Tiering is the more basic of the two options and one most commonly applied.
Typically you divide your market up into some basic segments.
Pros? This is really easy to do and, most importantly, easy for reps to understand. After all, they are our stakeholders and need to believe in our prioritisation in order to use it. I have learned that being involved in, and understanding, the process that arrived at the scoring plays a key part in adoption.
Cons? It’s a super blunt instrument that allows for no grey area. Using account tiering, you divide your market into segments based on attributes, and tier them accordingly.
For example: All accounts in A, B, C industries, with over 100 employees, using technology Z = Tier 1.
Here every attribute needs to be true for an account to be Tier 1. So if a company is in industry A but only has 80 employees, despite its proximity to being a Tier 1 company, it would be bucketed into Tier 2 and potentially missed.
I have seen this approach implemented in a tonne of ways. Most export accounts from the CRM into Excel sheets, set up some basic filters, and re-upload the labelled data.
You can also use SFDC lists or HubSpot dynamic lists … to show every account for which certain filters are true.
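To make the “blunt instrument” point concrete, here is a hypothetical sketch of the Tier 1 rule above. The attribute names and thresholds are illustrative, not from any real CRM schema:

```python
# Hypothetical rule-based account tiering. Industries and thresholds
# mirror the example: industries A/B/C, >100 employees, technology Z.
TIER_1_INDUSTRIES = {"A", "B", "C"}

def tier_account(account: dict) -> str:
    """Every condition must hold to reach Tier 1 - the blunt-instrument
    problem: an 80-employee company in industry A drops straight to Tier 2."""
    if (account.get("industry") in TIER_1_INDUSTRIES
            and account.get("employees", 0) > 100
            and account.get("uses_technology_z", False)):
        return "Tier 1"
    if account.get("industry") in TIER_1_INDUSTRIES:
        return "Tier 2"
    return "Tier 3"

close_fit = {"industry": "A", "employees": 80, "uses_technology_z": True}
print(tier_account(close_fit))  # "Tier 2", despite being nearly Tier 1
```

Notice there is no partial credit: the 80-employee company lands in the same bucket as a 5-employee one in the same industry.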
Cumulative Scoring - Pros and Cons.
The second option is cumulative scores, in which you determine a set of attributes that contribute to a score, positively or negatively. You assign each attribute a certain number of points and the accounts with the highest number of points are the priorities.
Simple in concept.
Pros? This can get really nuanced, assigning points to tonnes of attributes. For example: Industry X = +5 points, >100 employees = +10 points, Technology X in use = +5 points. At Notion we’ve implemented similar setups with the likes of TestGorilla and Cledara.
Cons? This can be harder to implement (depending on your CRM) and slightly more difficult for reps to get their heads around: “Should the numbers add up to anything in particular?” or “What is a good score vs. a bad score?”
Both are reasonable points. If the highest number is 34 … without any context, that means nothing. However, this is easily addressed: convert the score to a percentage (points earned out of the total possible), or convert scores into names: Good, Better, Best.
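Pulling the pieces together, here is a minimal sketch of cumulative scoring plus the “name the score” fix. The rules and point values come from the example above; the label thresholds (75% and 50%) are my own illustrative choices, and in practice this logic lives in HubSpot or Salesforce score fields rather than code:

```python
# Hypothetical cumulative scoring rules, mirroring the example:
# Industry X = +5, >100 employees = +10, Technology X in use = +5.
SCORING_RULES = [
    ("industry_x", lambda a: a.get("industry") == "X", 5),
    ("over_100_employees", lambda a: a.get("employees", 0) > 100, 10),
    ("uses_technology_x", lambda a: a.get("uses_technology_x", False), 5),
]
MAX_SCORE = sum(points for _, _, points in SCORING_RULES)  # 20

def score_account(account: dict) -> int:
    """Add up the points for every rule the account satisfies."""
    return sum(points for _, test, points in SCORING_RULES if test(account))

def score_label(score: int) -> str:
    """Translate a raw score into something reps can read at a glance.
    Thresholds are illustrative: top quarter = Best, top half = Better."""
    pct = score / MAX_SCORE
    if pct >= 0.75:
        return "Best"
    if pct >= 0.5:
        return "Better"
    return "Good"

acct = {"industry": "X", "employees": 250, "uses_technology_x": False}
print(score_account(acct), score_label(score_account(acct)))  # 15 Best
```

A raw “15” means nothing to a rep; “15/20” or “Best” does, which is the whole point of the conversion.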
This is slightly more complex to implement in your CRM, but can be done in Hubspot & Salesforce.
How to implement cumulative scoring w/ HubSpot
How to implement cumulative scoring w/ SFDC
Determining & Validating Inputs
When building out a score, folks often wonder how complex they should go to begin with, what should be included, etc. That’s the big question. To determine your inputs, I’d encourage you to think quantitatively as well as qualitatively.
Start by looking at historic performance to find the attributes that correlate with accounts that reply, become opportunities, and are won. And look at the opposite too: don’t just look for positive correlations but negative ones as well. This only really works if you’ve got a statistically significant amount of historic data to look back on.
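As a sketch of that quantitative pass, you can compare win rates across the values of an attribute using historic closed data. The field names (`industry`, `closed_won`) are hypothetical stand-ins for whatever your CRM export contains:

```python
# Sketch: win rate per attribute value from historic closed accounts.
# Segments well below the overall win rate are negative signal and can
# earn negative points, not just zero.
from collections import defaultdict

def win_rate_by(attribute: str, closed_accounts: list) -> dict:
    """Return {attribute value: win rate} over historic closed accounts."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for account in closed_accounts:
        value = account.get(attribute)
        totals[value] += 1
        if account.get("closed_won"):
            wins[value] += 1
    return {value: wins[value] / totals[value] for value in totals}

history = [
    {"industry": "SaaS", "closed_won": True},
    {"industry": "SaaS", "closed_won": False},
    {"industry": "Retail", "closed_won": False},
]
print(win_rate_by("industry", history))  # {'SaaS': 0.5, 'Retail': 0.0}
```

The same function works for replies or opportunity creation by swapping the outcome field; with real volumes you’d also want a minimum segment size before trusting a rate.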
More often than not, who you’ve won historically won’t be who you’re targeting in the future.
At Paddle — we started the business targeting independent Mac software developers. Today our performance with those folks is very strong indeed, and we see a positive correlation between the attributes typical of an indie Mac software developer and closed-won business.
However — we know that strategically as a business, we’re trying to move away from desktop software folks. Our investors want to see SaaS apps using Paddle and so we’ve built an entire product roadmap targeted towards recurring billing SaaS.
So you can see how this qualitative, strategic lens can be really important layered on top of the quantitative. In this case, amping down the importance of the Mac software developer attributes, and amping up those of recurring-billing SaaS.
Last but not least, the users of the data are critical throughout this entire process.
Work with some of your most respected reps … or your longest tenured. Get their feedback on the quantitative analysis you did, and their qualitative opinion on what makes one account more likely to buy than another. Ultimately this is a tool for them, so you need to create understanding and buy-in for it to be a success.
Once you’ve developed your first version of the score, score a sub-set of accounts and sanity-check the results before rolling out to the entire team.
REMEMBER: We’re not finding a silver bullet or 100% wins here… we’re trying to prioritise our time towards accounts that look a marginally better fit than others, to drive those marginal funnel gains. You aren’t guaranteed to win customers 1 through 10, but they should look more likely to buy than customers 500 through 550.
Distribution & Recycling
In theory at this point…
We’ve mapped the market of every account we could sell to. Don’t underestimate the importance of this in prioritisation: you need comprehensive coverage. If our market is 10k companies but we only have visibility into (and are scoring) 1k of them, chances are some of the 9k we can’t see have a higher propensity to buy than the 1k we’re measuring.
We have scoring set up so every single account is scored in the CRM, and ideally the scoring inputs are being kept up to date — we shouldn’t be scoring on stale data.
Once we’ve reached that point, we need to actually get to work and GTM.
There are two different approaches here…
Named Account (Account assignment)
Some companies want to distribute a set of accounts to their reps. This is most common in larger, more complex sales organisations, perhaps within parameters you’re forced to operate under, e.g. region, industry, size.
Meanwhile, perhaps you know each rep needs 500 named accounts to hit target. In this case, equitably distribute the accounts based on their score, taking those filters into account across the team.
If you’re distributing based on region, avoid assigning the entire region to a rep.
1) It becomes problematic as your team grows (reps don't like accounts being removed from their names)
2) A specific number of accounts is a great way to set expectations with your team
I.e the reason we’ve delivered you XXX accounts, is because we expect you to engage with YYY, create XX opportunities, with a win rate of XXX = targets hit.
Use the specific number to drive activity.
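One way to make “equitably distribute based on score” concrete is a snake draft: rank accounts by score, then alternate the pick order each round so no rep gets only the top (or only the tail) of the list. This is a hypothetical sketch of that idea, not a prescribed process:

```python
# Hypothetical equitable named-account distribution: sort by score,
# then snake-draft (reverse the pick order each round) so every rep's
# book has a similar score profile.
def distribute(accounts: list, reps: list, per_rep: int) -> dict:
    ranked = sorted(accounts, key=lambda a: a["score"], reverse=True)
    books = {rep: [] for rep in reps}
    for round_num in range(per_rep):
        # Even rounds pick in order, odd rounds in reverse: a snake draft.
        order = reps if round_num % 2 == 0 else list(reversed(reps))
        for rep in order:
            if ranked:
                books[rep].append(ranked.pop(0))
    return books

accounts = [{"name": f"acct{i}", "score": i} for i in range(6)]
books = distribute(accounts, ["alice", "bob"], per_rep=3)
```

Any region or industry filters from the section above would be applied before ranking, so each rep drafts only from their eligible pool.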
Don't forget to recycle accounts. This is most commonly done quarterly: remove accounts that are not engaging or have been lost, and top up named-account lists with the accounts carrying the highest scores (remember, scores change!).
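The quarterly recycle can be sketched in a few lines. The `engaged` and `lost` fields are hypothetical placeholders for whatever your CRM tracks:

```python
# Hypothetical quarterly recycling: drop lost or unengaged accounts,
# then top up from the scored pool so each rep keeps a full book.
# Scores change, so the pool is re-ranked every cycle.
def recycle(book: list, pool: list, target_size: int) -> list:
    kept = [a for a in book if a["engaged"] and not a["lost"]]
    fresh = sorted(pool, key=lambda a: a["score"], reverse=True)
    return kept + fresh[: target_size - len(kept)]

book = [
    {"name": "a", "engaged": True, "lost": False, "score": 1},
    {"name": "b", "engaged": False, "lost": False, "score": 9},  # dropped
]
pool = [
    {"name": "c", "engaged": False, "lost": False, "score": 5},
    {"name": "d", "engaged": False, "lost": False, "score": 3},
]
new_book = recycle(book, pool, target_size=3)  # keeps a, adds c and d
```

In practice the dropped accounts go back into the pool so another rep (or a later cycle) can pick them up once their score justifies it.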
The alternative is to let reps select accounts themselves. In this case, I would create a view in the CRM that lists every qualified account, ordered by score. At Paddle we called this, at one point, the “Account Queue”.
This tends to be most common with very small teams, where less coordination is needed, e.g. two founders and a single sales rep. As this scales, you’ll want to implement validation rules — for example, limiting the number of accounts a rep can put in their name — and look out for reps trying to bag all the “good” accounts.
While much simpler than building processes around account distribution and recycling — reps just do it themselves — this approach doesn’t tend to scale.
In summary, we want to map our market, and score accounts based on their propensity to buy, using a cumulative scoring approach where possible. Be sure to validate the scoring inputs with the team, validate the output and be prepared to iterate and improve. Remember sales people are your users — you want them to understand and adopt.
Distribute & recycle accounts based on dynamic scores, refreshing the data on a regular basis.
Review your scoring twice per year as your strategy will change and your performance will evolve!