
Challenges in Sampling in Market and Social Research

It’s a truism that even large samples can be misleading if they are biased or, as market and social researchers say, if they are not representative of the universe. Therefore, it is essential to know whether a sample is representative or not. A paper by Raimund Wildner.
There are two fundamentally different methods for drawing a representative sample:
The first is quota sampling, which is a kind of stratified sample in which the samples within the strata are not drawn at random. In textbooks about sampling, quota sampling is either not mentioned at all (e.g. Thompson 2012, Chaudhuri / Stenger 2005) or mentioned only by citing some test results, without any theory behind them (e.g. Cochran 1977, p. 135f). The simple reason for that: there is nothing like a theory of quota sampling, not even a glimpse of one.


The second method is random sampling, for which we have multiple options such as simple random sampling, stratified random sampling, cluster sampling and two-stage sampling, to mention just the most important. There is a full and elaborate theory for all these random sampling designs. There are formulas for the estimation of the sample mean and its variance. But this theory and these formulas are based on the assumption that the probability of every element of the universe becoming part of the sample is greater than zero and is known before the sample is drawn. The probabilities may be equal, as in simple random sampling or proportional stratified sampling, or unequal, as in disproportional stratified sampling, but they must be known for every element in the universe. All theory and all computations of sample means and their variances rest on this assumption.
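A minimal sketch may make this concrete. Under the assumption that every inclusion probability is known in advance, the standard design-based approach (the Horvitz-Thompson estimator) weights each sampled value by the inverse of its inclusion probability. The universe, values and probabilities below are purely illustrative and not taken from the article:

```python
import numpy as np

# Sketch: Horvitz-Thompson estimation of a population mean from a sample
# with known (possibly unequal) inclusion probabilities. All figures here
# are illustrative assumptions.

rng = np.random.default_rng(42)

N = 10_000                       # universe size
y = rng.normal(50, 10, size=N)   # variable of interest for every element

# Known but unequal inclusion probabilities, e.g. disproportional stratification:
pi = np.full(N, 0.01)
pi[:2_000] = 0.05                # one stratum is deliberately oversampled

# Draw the sample: each element enters independently with probability pi[i]
in_sample = rng.random(N) < pi
y_s, pi_s = y[in_sample], pi[in_sample]

# Weight each observation by 1/pi to estimate the population total and mean
total_ht = np.sum(y_s / pi_s)
mean_ht = total_ht / N

print(f"true mean: {y.mean():.2f}, HT estimate: {mean_ht:.2f}")
```

The whole construction collapses as soon as the probabilities `pi` used for weighting no longer correspond to the probabilities with which people actually end up in the data, which is exactly the problem discussed next.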

At this point we must define what is meant by “becoming part of the sample”. It is not enough to be asked to take part in a survey; that probability can be estimated quite well, provided a person can be reached at all. In addition, the data of the selected person must actually be collected, i.e. the person must take part in the survey. Non-response has the consequence that all the drawing probabilities we attributed to the elements of the universe are wrong.


A very simple example may demonstrate this. Let’s assume we want to sample 1,000 people from a universe of 10,000,000 with simple random sampling. We attribute to every person in the universe the probability 1,000/10,000,000, i.e. 1/10,000. Let’s further assume the response rate is 10%, which is quite common in telephone surveys nowadays. So effectively there are 1,000,000 people who can be sampled. For the 9,000,000 people who refuse to take part, the probability of getting into the sample is not 1/10,000 but 0. For the other 1,000,000 people who are willing to answer our questions, this probability is not 1/10,000 but 1/1,000, because we must reach our target sample size of 1,000. In the end, for every person in the universe the effective probability of being part of the sample is not what we attributed to them in advance, but something far away from it.
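The arithmetic can be written down in a few lines; the figures are exactly those of the example above:

```python
# Nominal vs. effective inclusion probabilities under a 10% response rate.

universe = 10_000_000
target_n = 1_000
response_rate = 0.10

nominal_p = target_n / universe              # 1/10,000 attributed to everyone
respondents = int(universe * response_rate)  # 1,000,000 people willing to answer

p_refusers = 0.0                             # 9,000,000 refusers can never be sampled
p_respondents = target_n / respondents       # 1/1,000 for the willing

print(f"nominal:                 {nominal_p:.6f}")
print(f"effective (refusers):    {p_refusers}")
print(f"effective (respondents): {p_respondents:.6f}")
```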

Non-response is a problem as old as market and social research itself. Decades ago, when response rates of 70% and more were feasible, researchers could assume that the assumption was approximately fulfilled, and statisticians have ample experience with assumptions that are only approximately valid. But nowadays, with response rates in many countries of about one third at best, and often 10% or less, this assumption is not approximately valid; it is simply and outright wrong.

In consequence we must realize: random sampling as described in textbooks is something like the unicorn of market research. It is very nice, everybody knows what it looks like, but nobody has ever seen one.

Of course, there is some theory about non-response. There are good solutions for item non-response, i.e. when some questions were not answered but others were; those answers, as well as the answers of other people, can be used to fill the gaps by multiple imputation (see Rubin 1987). But there are no practical solutions for unit non-response, i.e. when a person refuses to take part in an interview at all. We know that unit non-response does no harm if it is “missing at random”, which means that the data collected are independent of the probability of non-response. But we also know that this assumption is in general not true. There is some work on a method that requires that a random sub-sample of the non-responders can be interviewed by increasing the effort (e.g. Thripati et al. 1997). But this is not realistic and consequently is normally not applied in practice.
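A small simulation, under purely illustrative assumptions, shows why the “missing at random” condition matters: if the willingness to respond depends on the very variable we want to measure, the respondent mean stays biased no matter how large the sample is:

```python
import numpy as np

# Sketch: unit non-response that is NOT missing at random.
# The response propensity rises with the measured variable y, so the
# respondents systematically overstate the population mean.

rng = np.random.default_rng(0)

N = 1_000_000
y = rng.normal(50, 10, size=N)           # e.g. hours of TV viewing per month

# People with higher y are more willing to answer; overall rate around 10-20%.
p_respond = 0.2 / (1 + np.exp(-(y - 50) / 5))
responds = rng.random(N) < p_respond

print(f"true mean:       {y.mean():.2f}")
print(f"respondent mean: {y[responds].mean():.2f}")  # systematically too high
print(f"response rate:   {responds.mean():.1%}")
```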

We must state that we have a big problem that has not yet been solved by science.

Does this mean that the time of representative sampling is over, as a marketing manager put it at an ESOMAR conference 10 years ago? Are representative samples made obsolete by big data? The answer is a clear no. If a problem in society or in marketing is to be evaluated and it must be estimated how much money is needed to address it, you need reliable figures about how many people are affected.

Does this mean that anything goes in sampling? Not at all. On the contrary: the fact that theory is lacking makes it much more difficult to arrive at representative samples. A lot of experience is needed for that. Very often, as many random procedures as possible are applied, and for this science is still helpful. But from a certain point onwards you very often have to apply carefully selected non-random procedures to get to a balanced sample.

This is not the place to show in detail and for different situations how to do that. But one example may demonstrate how this can work.

In TV audience measurement, technical equipment must be installed in the households, so response rates are, of course, very low. One possible way is to run short screening interviews with a large number of people, collecting their important structural variables such as household size, how they receive their TV signal, etc. At the end of the interview, the interviewees are told how TV audience measurement works and are asked whether they would be willing to take part in such a system. If a household leaves the panel and must be replaced, a random draw is made from those interviewed households that have the same characteristics and that declared themselves willing to take part. Of course, this is not a true random sample of the universe of all TV households, but it has been shown to deliver good results.
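A rough sketch of that replacement step, using hypothetical household records and field names (not an actual panel system), could look like this:

```python
import random

# Sketch: when a panel household drops out, draw a replacement at random
# from pre-screened households that share its structural characteristics
# and that declared themselves willing to join. Fields are illustrative.

screened = [
    {"id": 1, "hh_size": 2, "signal": "cable",     "willing": True},
    {"id": 2, "hh_size": 2, "signal": "cable",     "willing": False},
    {"id": 3, "hh_size": 2, "signal": "cable",     "willing": True},
    {"id": 4, "hh_size": 4, "signal": "satellite", "willing": True},
]

def draw_replacement(dropout, pool):
    """Randomly pick a willing screened household matching the dropout's profile."""
    candidates = [h for h in pool
                  if h["willing"]
                  and h["hh_size"] == dropout["hh_size"]
                  and h["signal"] == dropout["signal"]]
    return random.choice(candidates) if candidates else None

dropout = {"hh_size": 2, "signal": "cable"}
print(draw_replacement(dropout, screened))   # household 1 or 3, chosen at random
```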

What else can be done? All the efforts to raise response rates: training the interviewers, making questionnaires more interesting, and so on. And finally, there is weighting of the results, which can diminish the bias but normally cannot eliminate it, and can sometimes even make it worse.
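As an illustration of why weighting only goes so far: a simple post-stratification weight rescales respondents so that known cell shares are matched, but it cannot repair bias within a cell. The population and sample shares below are assumed for the example:

```python
# Sketch: post-stratification weights by age group (illustrative figures).
# Respondents in under-represented cells are weighted up, over-represented
# cells are weighted down; bias inside a cell remains untouched.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # known from statistics
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}  # achieved sample

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

print(weights)   # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}
```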

What is also needed is an open discussion between practitioners and scientists about new ways of sampling. For this, practitioners need to make their procedures transparent, and scientists need to accept that traditional random sampling, where you know in advance for every element in the universe the probability of becoming part of the sample, is no longer feasible. Let’s start such a discussion! And of course we also have to discuss the problems of drawing representative samples from online panels, which were not covered in this article.