For many sales leaders, adding support headcount is like walking a tightrope: too little support may not budge the top line, but too much support could hurt the bottom line. Striking a balance is so tricky because support resources have only an indirect impact on revenue. If one could tell for sure that, say, adding X support headcount would increase revenue by Y dollars, then adding support headcount (or not) would be a snap. As it turns out, it’s possible to do just that by leveraging the right sales analytics techniques in three steps…
First, construct two sample groups. Divide your top-performing reps into an experimental group and a control group. Each group should be large (30+ reps) and similar in demographics and average rep performance; random assignment is the easiest way to accomplish this. The experimental group receives the level of sales support you’re considering making available to the entire sales force; the control group continues with the current support level, unchanged.
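The grouping step above can be sketched in a few lines of Python. The rep IDs and the fixed seed here are purely illustrative; in practice you would feed in your actual roster and might also check that the resulting groups balance on demographics and past performance before proceeding.

```python
import random

def assign_groups(rep_ids, seed=42):
    """Randomly split reps into an experimental and a control group.

    Random assignment makes the two groups similar, on average, in
    demographics and past performance, so later differences can be
    attributed to the change in sales support rather than selection.
    """
    reps = list(rep_ids)
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(reps)
    half = len(reps) // 2
    return reps[:half], reps[half:]  # (experimental, control)

# Hypothetical roster of 60 top-performing reps, IDs 1..60.
experimental, control = assign_groups(range(1, 61))
```

A 60-rep roster yields two groups of 30, meeting the 30+ size guideline above.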
Second, collect data. After a short period has passed, gather information on every rep across both groups: quota attainment, revenue, pipeline, sales activities and time allocation.
Quota, revenue and pipeline figures should be easy to obtain, since sales departments track them regularly. Sales activities and time allocations can be captured with a survey.
Third, analyze the data. Compare the groups’ results using two statistical methods: hypothesis testing and regression analysis. Hypothesis testing tells you whether the experimental group’s performance is significantly different from the control group’s; regression analysis goes further, mapping out the causal relationships among the factors at play (the most interesting variable here, of course, is whether a rep has full sales support). For a more thorough test, include other variables such as education level and tenure in your regression analysis.
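Both analyses can be sketched in pure Python. The revenue figures below are invented for illustration, and the hand-rolled formulas stand in for what a statistics library (e.g., scipy or statsmodels) would compute along with p-values. Note that with a single 0/1 support dummy as the regressor, the regression slope is exactly the difference in group means, which is why the two methods agree here.

```python
import statistics as st

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: how many standard errors
    separate the two groups' mean revenues?"""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = st.mean(sample_a), st.mean(sample_b)
    va, vb = st.variance(sample_a), st.variance(sample_b)  # sample variances
    return (ma - mb) / (va / na + vb / nb) ** 0.5

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x.

    With x as a 0/1 'full sales support' dummy, the slope is the
    estimated revenue lift from support."""
    mx, my = st.mean(x), st.mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Illustrative per-rep revenue ($K); not real client data.
experimental = [520, 610, 480, 590, 560, 600]  # with full support
control      = [450, 470, 430, 500, 460, 440]  # current support

support = [1] * len(experimental) + [0] * len(control)
revenue = experimental + control

t_stat = welch_t(experimental, control)      # large |t| => significant gap
lift   = ols_slope(support, revenue)         # estimated revenue lift ($K)
```

In a full analysis you would add further regressors (education, tenure, territory) via multiple regression rather than this single-variable form.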
Of course, these are just general steps, but eventually you’ll wind up with an equation that predicts the effect of each variable on overall rep performance. Then you can judge whether adding sales support headcount makes sense for your organization.
Alexander Group recently ran these analyses to evaluate the effectiveness of a clinical specialist program for a large medical device company. The client had a traditional “lone wolf” sales model: sales reps spent much of their time in operating rooms, limiting their ability to field other sales opportunities. In considering whether to offload OR activities to less expensive clinical specialists, the client brought in Alexander Group to assist with the transition. We advised the client on a two-phase implementation: first, involve only a small subset of sales reps; then, if (and only if) those reps’ performance improved dramatically, proceed to a large-scale deployment. Initial results were encouraging. Reps with clinical specialist support brought in 28% more revenue than those without, more than enough to cover the added costs. But the revenue increase alone didn’t prove that the difference was a direct result of the clinical specialists; it may have been due to other factors that set the experimental group apart from the control group.
Once the client had rolled out the program to the entire sales force, it settled on a roughly 2:1 support ratio, i.e., two sales reps for every clinical specialist. At this point, Alexander Group stepped in to conduct its second measure of performance. We found that, while the results of the general deployment were still very positive, the average increase in rep productivity had fallen from our initial measure of $230K (during the test phase) to $180K (sales force-wide implementation). The law of diminishing returns was at work. So, should the client stop investing now? Or press ahead full steam, bringing the CS-to-rep ratio all the way to 1:1?
The answer lay in a time-series analysis of individual rep performance. We constructed a panel data set with two records for every rep: one detailing revenue performance during the experimental period six months prior, and one detailing revenue performance after the full-scale deployment. Using time-series regression, we calculated that the effect of a clinical specialist on revenue was between $150K and $500K, a wide range indeed. Exactly where in that range the true number lay would determine whether to push on with the program.
By differencing the two records for each rep, we created a new set of variables with which to run another regression. The result showed that CS support should increase a given rep’s revenue by an average of $154K. We therefore advised the client to cease further investment in clinical specialists, as the theoretical benefit of a clinical specialist was fast approaching the actual cost of one.
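The differencing approach above can be sketched as follows. All numbers are hypothetical, and the simple slope formula stands in for a full panel regression package. The key idea is that subtracting each rep’s before-record from their after-record removes rep-specific factors (territory quality, tenure, skill) that don’t change between the two periods, isolating the effect of gaining CS support.

```python
# Hypothetical panel: one tuple per rep, with revenue ($K) and a 0/1
# clinical-specialist flag for the test period and the post-deployment period.
#          (rev_before, rev_after, cs_before, cs_after)
panel = [
    (430, 600, 0, 1),   # gained CS support at full deployment
    (480, 640, 0, 1),
    (455, 600, 0, 1),
    (500, 520, 1, 1),   # already had CS support during the test phase
    (530, 545, 1, 1),
]

# First differences within each rep cancel out fixed rep-level effects.
dy = [after - before for before, after, *_ in panel]            # revenue change
dx = [cs_after - cs_before for _, _, cs_before, cs_after in panel]  # support change

def slope(x, y):
    """Least-squares slope of y on x (here: revenue change on support change)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

cs_effect = slope(dx, dy)  # estimated revenue lift ($K) per clinical specialist
```

With this toy data the estimate is simply the average revenue change of reps who gained a specialist minus that of reps whose support was unchanged; the real analysis arrived at the $154K figure cited above.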
This is just one example of how to gauge when to stop investing in sales resources. Other methods, e.g., regression on log-transformed data or maximum-likelihood estimation, would have given us the same answer. But when simpler calculations can get the job done, what’s the point in showboating advanced statistical knowledge? Achieving the stated goal by the most efficient method: that’s the commitment of Alexander Group consultants.
When it comes to investment decisions, timing is everything. In theory, deciding when to start investing and when to stop should be an informed process. In practice, however, investment decisions rely heavily on intuition and trial and error. The key to making such decisions more precisely and scientifically is thoughtful, rigorous data analysis.
Learn more about Alexander Group’s sales analytics insights.