If you’re a direct marketer, then you know about panel testing. However, there is something that you are likely doing wrong, and it could be hurting your bottom line.
The statisticians' standard recommendations for test panel sizes indicate, for example, that an outgoing test panel of 50,000 names is required to hold test result variability to plus or minus 5% (at a 95% confidence level).
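If you want to see where a figure like 50,000 comes from, here is a minimal sketch of the standard sample-size calculation. The 3% response rate is my assumption for illustration only, not a number from the article; plug in your own expected rate.

```python
import math

def required_panel_size(response_rate, relative_error=0.05, z=1.96):
    """Classic sample-size formula: n = z^2 * p * (1 - p) / e^2,
    where e is the allowable error in absolute terms
    (here, relative_error * response_rate)."""
    e = relative_error * response_rate
    return math.ceil(z**2 * response_rate * (1 - response_rate) / e**2)

# Assuming a roughly 3% response rate (hypothetical), the formula lands
# near the 50,000 figure cited above:
print(required_panel_size(0.03))  # about 49,700 names
```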
This is absolutely correct from a statistical theory point of view. But does it hold true when you roll out, even when you do everything right, with randomly selected names and so on? The answer is no.
It turns out that panel sizes based on statistical theory alone understate the variability you will actually see. Following is an example that explains how I know this to be true.
During my time at Publishers Clearing House, testing was our lifeblood. Every week for 30 years I scrutinized test results. I did this as a young marketing analyst when I started in 1973, and I was still doing it as an SVP and officer of the company when I retired a few years ago.
And we tested everything. We performed about 125 tests every year, from copy changes to format changes to product offers to pricing to contest techniques. You name it, we tested it.
We were using the traditional formulas for test panel sizes. However, we realized that acting on a wrong test result could be very costly.
So what did we do? We tested it. We tested the validity of test panel size and variability by setting up several control panels under typical random selection (along with several other techniques to maintain randomness) and mailing them. We repeated this several times at varying outgoing mail volumes.
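To see what that kind of validation looks like in principle, here is a rough sketch. It simulates several identical control panels under pure binomial sampling; the panel size and response rate are hypothetical, not PCH's actual numbers. In practice you would replace the simulated draws with real panel results and compare the observed spread against what theory predicts.

```python
import random
import statistics

def simulate_control_panels(panel_size, true_response_rate, n_panels=5, seed=1):
    """Simulate identical control panels under pure binomial sampling.
    Real mailings showed more panel-to-panel spread than this ideal case."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_panels):
        responders = sum(rng.random() < true_response_rate for _ in range(panel_size))
        rates.append(responders / panel_size)
    return rates

# Hypothetical numbers: five 50,000-name control panels at a 3% response rate.
rates = simulate_control_panels(panel_size=50_000, true_response_rate=0.03)
mean_rate = statistics.mean(rates)
spread = (max(rates) - min(rates)) / mean_rate
print("panel response rates:", [f"{r:.4%}" for r in rates])
print(f"relative spread across panels: {spread:.1%}")
```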
And after all this testing, what did we find out? The bottom line about panel testing is that, to have genuinely high confidence in your findings, you need to increase outgoing test panel sizes by about 50% over what statistical theory indicates.
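In practical terms, that finding amounts to multiplying the theoretical sample size by roughly 1.5. Continuing the hypothetical numbers from the earlier sketch:

```python
theoretical_size = 50_000              # from the standard formula (assumed ~3% response rate)
adjusted_size = round(theoretical_size * 1.5)
print(adjusted_size)                   # 75,000 names per outgoing test panel
```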