Humans and chimpanzees share about 96% of their DNA. That’s not a lot of difference between you and me in our automobiles, sipping Starbucks lattes while chatting on our cell phones, and our Pan troglodytes relatives in the rain forests of central Africa.
And that 4% is about the difference between dramatic marketing success and dramatic marketing failure.
How can you avoid being a marketing chimpanzee? Just Focus on the Four.
In my experience, failures in direct response marketing efforts tend not to come from missing the big things. Although big misses do happen from time to time, they are usually easily identified and rectified through improving the people or process.
It’s the problems in the 4% of your efforts (sometimes wrongly thought of as “on the margins” or “blocking and tackling”) that conspire to sap your marketing efforts of their ability to dramatically improve results, grow your business, and keep ahead of your competition.
While you can–and should–think about the 4% in all areas of your direct response marketing efforts, there are two areas I’d advise looking at: Testing and Process. And focusing on the 4% in those two areas means looking at them in opposite ways.
In Testing, you have to focus on creating wide variances of results (-50% to +100%). The biggest mistake in testing is fear of failure, resulting in lots of flat results that offer no help in setting direction for future tests.
In Process, you have to focus on being able to create replicable and stable results from both your rollout and testing efforts. The biggest mistake in process is turning the “routine” or “automatic” parts of the campaign over to less-experienced staff (with little or no training!) or outsourcing them to vendors (who, in turn, hand that work to their own less-experienced staff). The result is a high degree of variance in your campaign, test and financial results.
Let’s look at each in more detail.
96% or so of your tests will yield nothing or be basically flat. (I’m including in this category the +/- 5-10% results that will regress to the mean in rollout.) Flat tests, while a natural part of what we do, drive me a little crazy. They don’t immediately show a path to improved profit and they don’t immediately show paths to avoid.
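That regression to the mean is worth seeing concretely. Here’s a minimal, purely illustrative simulation (the 6% noise level and the number of tests are assumptions, not figures from any real campaign): tests whose true lift is zero will still throw off apparent +5–10% “winners,” and those winners drift back toward flat when re-read in rollout.

```python
import random

random.seed(42)

def simulate_test(true_lift=0.0, noise=0.06):
    """One test read: true lift plus sampling noise (an illustrative assumption)."""
    return true_lift + random.gauss(0, noise)

# Run 1,000 truly flat tests (true lift of zero).
tests = [simulate_test() for _ in range(1000)]

# Keep only the apparent small winners: +5% to +10% observed lift...
apparent_winners = [t for t in tests if 0.05 <= t <= 0.10]

# ...then "roll out" each one: a fresh, independent read of the same zero lift.
rollouts = [simulate_test() for _ in apparent_winners]

avg_test = sum(apparent_winners) / len(apparent_winners)
avg_rollout = sum(rollouts) / len(rollouts)
print(f"Average observed lift in test: {avg_test:+.1%}")
print(f"Average lift after rollout:    {avg_rollout:+.1%}")  # drifts back toward flat
```

The point of the sketch: a +7% read on a flat test is noise, and rollout will say so. That’s why the +/- 5-10% bucket belongs with the flat tests, not the winners.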
You will never succeed in driving your business unless you are willing to have big winners and big losers, and you’re willing to analyze both the winners and the losers. The reasons for big winners are obvious. You find a new offer or channel that can drive 50% more volume or reduce CPA by 30% or whatever and you roll it out. Further, that test shows a fruitful path that leads to more tests that can build on your success.
The big losers are even more useful.
The losing tests can help you modify your hypotheses about what might work in the future and how those hypotheses apply to your customers. However, you have to make sure that your culture makes it safe to bring those big losers to the fore and properly debrief against your findings. The test that depresses response by 70% teaches a lot, but you have to be willing to look at the message it gives you and not shoot the messenger.
An example from a past life: testing “straightforward” copy in direct mail solicitations for a continuity product I used to sell.
We used a lot of typical direct response techniques–strong offer, multi-level game/gimmick, double/triple guarantees, longish benefit-oriented copy in the letter, product sample and so on. For years, we heard from both our customers and creative agencies that they wanted “no B.S.” or “just the facts” in the promotion. Time and again, we tested simplified copy, offer presentment and so on. And time and again, the tests depressed both front-end response and back-end take, massively depressing ROI.
Because we were getting such a big swing in results, though, I was convinced there was something missing that could move the needle to the other side and generate a big positive result.
The breakthrough hit our creative agency one night at about 1 a.m. after a long, frustrating day of testing concepts in dyad interviews. What happened? It turned out we weren’t being simple and clear enough in our creative and copy. We had always been tempted to insert just a bit more benefits or to clarify the offer just a bit more.
Getting rid of the temptation to add “just a bit more” was the key to uncorking a new creative concept that swung the needle 180 degrees and lifted both response and back-end take. The result was a long-term control that worked on a number of products, in a number of channels and that produced a series of additional strong tests.
- Look to drive big winners and big losers.
- Prioritize tests that are going to be likely to have big impact and test them cleanly.
- Make sure you spend time with the big losers. There can be gold in them thar hills.
Unlike testing, where you need to open your mind and be aggressive to drive big results in the 4% of non-flat tests, process is an entirely different matter. Here you need to focus on the details and not drive big variation.
It’s the final 4% that you don’t cover that will kill your campaign, your tests, your product and possibly your career.
Some examples of process problems that can bite you:
Who’s developing the merge/purge instructions and analyzing the results of the merge? If the answer is “I don’t know,” or the lowest-paid or least-tenured person on your team or, worse, the merge/purge vendor or nobody, stop reading this article and get on that immediately. The details of your merge and name-allocation process can seem mundane and routine, but the output of your selection and allocation process is 40% of the campaign, as the old DR saw goes.
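To make “analyzing the results of the merge” concrete, here’s a deliberately crude sketch of the core merge/purge step: purge rented names that duplicate the house file or each other, and report what each list contributed. The field names, match key, and list names are all hypothetical; real merge/purge matching is far fuzzier than name-plus-zip.

```python
def match_key(record):
    """Crude match key: lowercase name + zip (a stand-in for real fuzzy matching)."""
    return (record["name"].strip().lower(), record["zip"])

# Hypothetical house file and rented lists.
house_file = [{"name": "Ann Smith", "zip": "10001"}]
rented_lists = {
    "List A": [{"name": "ann smith", "zip": "10001"}, {"name": "Bob Jones", "zip": "60601"}],
    "List B": [{"name": "Bob Jones", "zip": "60601"}, {"name": "Cara Diaz", "zip": "94103"}],
}

seen = {match_key(r) for r in house_file}
mailable = []
for list_name, records in rented_lists.items():
    kept = 0
    for r in records:
        key = match_key(r)
        if key not in seen:  # purge house-file matches and inter-list dupes
            seen.add(key)
            mailable.append(r)
            kept += 1
    # The report is the point: if nobody reads these numbers, nobody owns the merge.
    print(f"{list_name}: kept {kept} of {len(records)}")

print(f"Total mailable: {len(mailable)}")
```

Note that the “kept X of Y” lines are exactly the kind of output a senior person should be sanity-checking: a list whose keep rate suddenly halves is telling you something about your allocation.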
Panel splits. Are you sure your test and controls comprise nths of the exact same population? And is that test population the same as the rollout population? I’ve seen cases where people were doing extraordinary things in their merge, with very sophisticated testing to ensure proper allocation of names across multiple products, offers and with various timing. However, the offer, creative and step tests produced inconsistent (sometimes disastrous) rollout results because something as seemingly basic as the panel splits was done incorrectly.
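For readers who haven’t built one, a correct nth split is simple in principle: shuffle the whole pool once, then deal names round-robin so every panel is a random nth of the same population. A minimal sketch (the population and panel count are illustrative):

```python
import random

def nth_split(population, n_panels, seed=2024):
    """Split a pool into n statistically equivalent panels: one shuffle,
    then deal records round-robin across the panels."""
    pool = list(population)
    random.Random(seed).shuffle(pool)  # one shuffle, shared by all panels
    return [pool[i::n_panels] for i in range(n_panels)]

# Illustrative population of 10,000 customer IDs.
population = list(range(10_000))
control, test_a, test_b = nth_split(population, 3)

print(sorted(len(p) for p in (control, test_a, test_b)))  # sizes differ by at most one
```

The properties worth verifying every time: the panels cover the whole pool, they don’t overlap, and they come from one shuffle of one population. Skip any of those checks and you’re reading offer tests against mismatched audiences.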
I’ve always followed the 40/40/20 rule when allocating time and resources to direct response campaigns and planning. 40% of my time was working all aspects of the offer, 40% was on the lists (i.e. the target audience and recipients of the promotion) and 20% for everything else.
Far too often we give the “glamorous” work, such as the creative (which is only a piece of the 20%), to the most seasoned members of our team and leave the “grunt” work, such as the list ordering and list testing (which is solidly behind 40% of your results), to unseasoned and under-trained staff. You could flip the responsibilities around and probably improve your results immediately!
- Follow up on the details. At the very least, know who’s responsible for that 4% in the corners.
- Make sure the responsible person knows they’ve got the ball.
- Look at your process periodically. What worked in a channel last year may no longer apply or there may be an easier and less error-prone way of doing that process now.
Seek to drive more variance in the 4% of your testing that won’t be flat. Don’t worry about looking bad with the losers; as long as your hypotheses are solid, you’ll learn from them.
Drive more consistency in the final 4% of your campaign process that has the potential to hurt both your overall campaign performance and your testing. Make sure you understand it and that your team knows there’s nothing “routine” about it.
Focus on the Four, and you won’t be left eating bananas. Unless, of course, you like them!