My recent post "The Right Way To Sample" generated this reply, which I'm delighted to receive and share with you:
Thanks for the opportunity for “equal time” (no, we won’t go there) to the comments in your April Fool’s Day blog about Arbitron’s sampling.
First of all, thanks for the kudos on the cell phone only sampling that’s now underway in 151 markets with the rest of the markets (except Puerto Rico) on the way this Fall. We’ve worked very hard to bring this positive enhancement to fruition after much testing. Oh, and by the way, your readers should know that the market in question that supposedly had double the in-tab for African-Americans was actually only 50% over for one phase and is presently 25% over for two phases. As we always say, it’s a twelve-week survey…let’s see how the full twelve weeks finish. We do like to set the record straight.
Now, let’s go on to the issue about single versus multiple person per household sampling and Ted Bolton’s piece from 1995. The concept is known as “probability of selection”.
Let’s start with the factual errors because these often become “urban myths” (or perhaps with your blog, they could be “country myths”). Arbitron does not select individuals from a list. Arbitron uses a random digit dial (RDD) telephone sample of residential landline phone numbers. Landline phones, with a few exceptions, are tied to households. While not perfect, you can generally assume one landline phone number per household.
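To make the distinction concrete, here is a toy sketch of what random digit dialing looks like in principle: numbers are generated at random within working exchanges rather than pulled from a list of names, so unlisted landlines have the same chance of selection as listed ones. The prefixes below are invented for illustration and have nothing to do with Arbitron's actual sample frame.

```python
# Toy illustration of random digit dialing (RDD). The working-block
# prefixes here are hypothetical, chosen only for the example.
import random

random.seed(7)

KNOWN_PREFIXES = ["410-312", "410-715", "443-259"]  # hypothetical exchanges

def rdd_number():
    """Pick a working prefix at random, then append four random digits."""
    prefix = random.choice(KNOWN_PREFIXES)
    suffix = random.randint(0, 9999)
    return f"{prefix}-{suffix:04d}"

sample = [rdd_number() for _ in range(5)]
print(sample)
```

Because every number in a working block is equally likely, the household behind each landline has a known, roughly equal chance of being dialed, which is what makes the sample probability-based rather than quota-based.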
Ted spent much of his piece writing about lists and quotas. Those of you who still have budget for a perceptual are probably using lists of names and setting quotas for different demos (or more accurately, your research company is doing it). While quota samples tend to be the norm for custom research, that doesn’t make it right, just expedient. You can’t use quota samples to tabulate radio ratings that are projectable. We can’t call a house and say “We’re looking only for a Male 18-24 year old”. To do valid radio ratings, you must use a probability-based sample.
But on to the main point. In simple terms, Ted was wrong. Multiple person per household (MPPH) sampling has long been an accepted methodology in media surveys, government surveys, and many other kinds of surveys. And in terms of “probability of selection”, an MPPH frame equalizes the probability of selection across demos while a single person per household (SPPH) sample distorts the probability of selection.
Here’s an example of what I mean:
In an SPPH frame, the larger the household, the smaller the chance that any one individual in the household will be selected.
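A quick back-of-the-envelope sketch of this point (my own illustration, not an Arbitron calculation): if a household has a given chance of being drawn, then under SPPH one person in it is picked at random, so an individual's chance of selection shrinks as the household grows; under MPPH everyone in a drawn household participates, so the individual's chance stays equal to the household's.

```python
# Illustrative only: compare a specific person's selection probability
# under SPPH vs. MPPH, assuming every household is equally likely to be
# drawn with probability p_household.

def selection_probability(p_household, household_size, method):
    """Probability that one specific person in a household is selected."""
    if method == "SPPH":
        # One person is chosen at random within the selected household.
        return p_household / household_size
    if method == "MPPH":
        # Everyone in the selected household participates.
        return p_household
    raise ValueError("method must be 'SPPH' or 'MPPH'")

p = 0.01  # hypothetical chance that any given household is sampled
for size in (1, 2, 4):
    spph = selection_probability(p, size, "SPPH")
    mpph = selection_probability(p, size, "MPPH")
    print(f"household of {size}: SPPH={spph:.4f}, MPPH={mpph:.4f}")
```

With a household of four, the SPPH probability is a quarter of the MPPH probability, which is exactly the distortion the letter describes.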
Single person per household sampling is used in many surveys, and one reason is that it makes sense for studies conducted by phone. Consider the logistics of talking to one person in a household for an extended period (perhaps 15 to 20 minutes for a perceptual or callout) and then asking to speak to another person. Those of us who design phone surveys do our best to dissuade anyone who would ask us to survey more than one person in a household by phone; it just doesn’t work. Having worked at Birch (which used single person per household sampling, for those of you who remember Birch ratings), I can tell you it would have been an operational nightmare to measure more than one person per household.
In Arbitron’s case, nearly everyone is eligible to be in the survey (one exception is most of this blog’s readers who are in the media business). We sample at the household level. Even our new address-based method of finding cell phone only households is designed for the household level. And each household has a relatively equal chance of being selected assuming they have a landline phone. With the addition of the cell phone only frame, that opportunity extends to nearly all households.
Let’s take the discussion one step further. One could weight for this difference in probability of selection. Some people in our business hate the idea of weighting and others would have us weight for almost every variable (are you right handed or left handed?). In statistical terms, weighting reduces bias and increases variance. In simpler terms, weighting makes up for the potential that a group that has different radio listening habits is not represented at their level of the population (bias), but increases bounce in the estimates (variance). It’s a tradeoff and our view is that the large amount of variance (bounce) that would be created by weighting for household size would be a major negative for your ratings.
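The bias/variance tradeoff described above can be seen in a small simulation. This is a hypothetical illustration of my own construction, not Arbitron data: I invent a toy population where people in larger households listen somewhat more, draw repeated SPPH-style samples, and compare the unweighted estimate (biased, because large households are under-represented) with one weighted by household size (roughly unbiased, but with more bounce from survey to survey).

```python
# Hypothetical simulation of the weighting tradeoff: weighting an
# SPPH-style sample by household size reduces bias but increases
# variance ("bounce") in the estimates. All numbers are invented.
import random
import statistics

random.seed(42)

# Toy population: household sizes 1-4; assume people in larger
# households listen a bit more, so SPPH under-representation biases
# the unweighted estimate.
population = []
for _ in range(20000):
    size = random.choice([1, 2, 3, 4])
    hours = random.gauss(10 + size, 2)  # weekly listening hours
    population.append((size, hours))

true_mean = statistics.fmean(h for _, h in population)

unweighted, weighted = [], []
for _ in range(500):  # repeat the survey to observe estimate spread
    # SPPH-style draw: a person's chance of selection is proportional
    # to 1/household_size, approximated here by rejection sampling.
    sample = [p for p in random.sample(population, 4000)
              if random.random() < 1.0 / p[0]]
    unweighted.append(statistics.fmean(h for _, h in sample))
    # Weight each respondent by household size to undo the distortion.
    total_w = sum(s for s, _ in sample)
    weighted.append(sum(s * h for s, h in sample) / total_w)

print(f"true mean : {true_mean:.2f}")
print(f"unweighted: mean={statistics.fmean(unweighted):.2f}, "
      f"sd={statistics.stdev(unweighted):.3f}")
print(f"weighted  : mean={statistics.fmean(weighted):.2f}, "
      f"sd={statistics.stdev(weighted):.3f}")
```

In runs of this sketch, the weighted estimates land near the true mean while the unweighted ones sit below it, and the weighted estimates show a visibly larger standard deviation across repeated surveys, which is the tradeoff the letter is weighing.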
We’re quite comfortable with measuring everyone above a certain age (6+ for PPM, 12+ for diary) in the household and the need to defend the sampling method has long passed. And thanks again, Jaye, for the opportunity to be part of your blog.
-- Ed Cohen, Vice President-Research Policy and Communication, Arbitron, Columbia, MD (410-312-8592, Ed.Cohen@arbitron.com)
OK, who else wants to add a few cents' worth? Add a comment below or drop me an email.