As the saying goes, there are lies, damn lies, and statistics. We won’t go too far down the rabbit hole on this topic since one could teach a whole class on the logic and mathematics of statistical reasoning. All I’ll do here is provide a simple account of what statistical reasoning is and then highlight a few common errors in statistical reasoning.
When generalizing using statistical methods, we always have some finite set of data and we're trying to get from it to a claim about the whole population. If you randomly sample one million human beings, you'll probably end up with roughly 50/50 men and women, with non-binary folks making up a small fraction as well. You might, if you think your sample is a good one, conclude that a bit less than 50% of humans in general are men.
What makes a good sample? Two things: it must be random and it must be representative. If you want to know the attitudes of Americans about abortion rights, then sampling in Alabama isn't going to tell you much. It'll just tell you how Alabamians feel about abortion rights. That sample would be neither representative nor random. A sample is random when the method that put individuals into it wasn't biased in favor of any particular sub-group. If I randomly select a state on the Eastern Seaboard and then use a random number generator to select names from its phone book, then I will have selected my sample randomly, but that sample won't represent the whole United States, because it will necessarily consist of people from the East Coast.
If I specifically choose people I can think of who represent every group of Americans I take to be politically relevant, then I may get a perfectly representative sample, but it will be biased toward people I can think of, which means people I know or famous people. The selection won't be random.
So we need a way of selecting members of our sample that is random, but that also ensures some measure of representativeness. Of course, if you choose 2 people, the sample can't be representative of the whole US. Even if you choose 1,000 people through a perfectly randomized selection process, you can still end up with an unrepresentative sample by sheer chance. Large samples are the antidote here: the bigger the random sample, the smaller the chance that, despite choosing randomly, your sample fails to represent the population.
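To make the point concrete, here is a small Python sketch (the population and its 52/48 opinion split are invented) showing how estimates from small random samples bounce around while large ones settle near the truth:

```python
import random

# Hypothetical population (invented numbers): 1,000,000 people,
# 52% of whom hold some opinion.
random.seed(0)
population = [1] * 520_000 + [0] * 480_000

def sample_proportion(n):
    """Estimate the population proportion from a simple random sample of size n."""
    return sum(random.sample(population, n)) / n

# Small samples swing widely; large samples cluster near the true 52%.
for n in (10, 100, 10_000):
    estimates = [round(sample_proportion(n), 2) for _ in range(5)]
    print(n, estimates)
```

Running this, the size-10 estimates scatter widely, while the size-10,000 estimates all land close to 0.52: random selection doesn't guarantee representativeness, but large numbers make failures of representativeness unlikely.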
How can statistical generalization go wrong? Here’s one way: the way that data is collected can bias the outcome. So even if you have a perfectly random and representative sample, if you then go on to ask them whether they are in favor of “the liberal money grab”, then you will likely get a result that is biased against the policy in question. Even if you are super careful, there’s always the possibility of framing a question in a way that biases your subjects in favor of one answer or another.
A push poll, for instance, presents one position and sometimes even an argument in favor of that position and then asks subjects whether they agree or disagree. This is no good because it biases subjects in favor of agreeing—after all, you just gave them an argument for why they should agree.
Poll questions can also be loaded. Do you agree that we should support our troops and the wars they are fighting overseas? Even a pacifist might respond “yes” because they have nothing against the troops, they just don’t like the wars. How much do you give to charity each year? Even if the true answer is “zero”, few people want to admit this to a pollster. Do you agree with the unjust detainment of unaccompanied minors at the border? Who would say yes? It says “unjust” right there in the question.
Finally, one should always be wary of statistics simply because they can be manipulated so easily. Choosing one pair of factors to compare can deliver one result, while choosing a different pair will deliver a different result. You can imagine the difference if we compared lethal encounters between police and black people vs. lethal encounters between police and young black men. Or if we compared abortion rates among teenagers vs abortion rates among impoverished teenagers. Or if we compared prayer in private schools vs. prayer in schools period. Or if we compared gun violence in homes in Bel Air vs. gun violence in homes in the greater Los Angeles area. Choosing the categories to compare will bias the results in one direction or another.
One thing that all too often gets in the way of good causal reasoning and good statistical generalization is a phenomenon called “selection bias.”
Do married men really live longer? Actually, yeah, it turns out that they do. Is this because marriage causes their longevity, or is it because the type of man who gets married is more likely to be the type of man who would have lived longer anyway?
Let’s break it down. There are two possibilities. When you read the headline “Married Men Live Longer, Study says” you are likely to think that it is saying something like “marriage makes men live longer through the effect that having a spouse and/or children has on one’s likelihood to engage in risky activities.” Something like that. There’s a mechanism that makes men tend to live longer when they are married, and that mechanism is part of marriage itself: the responsibility that comes with being married, or the fact that someone else is looking out for your health, or maybe there’s a gender dynamic in heterosexual marriages in particular that involves a connection between living with a woman and living longer. Each of these possibilities suggests new routes of exploration and new kinds of evidence we would want to have.
But that’s not the only possibility. Here’s another one: there are only certain men who are likely to get married in the first place. The die-hard party machine who will never settle down is also likely to die young of liver disease. The men who die tragic deaths in their early 20s won’t get married either. The men who never quite get their lives together enough to feel like they could be in a long-term committed relationship might also be more likely to die of heart disease due to a sedentary lifestyle. These are mostly just suppositions. What is important here is recognizing the possibility that being a man who gets married may already put you in the category of men who will live longer whether they get married or not.
Selection bias happens when the sample we generalize from isn’t representative of the total population in some important respect. Some factor makes it so that we aren’t generalizing from a truly random sample or makes it so that what appears to be a causal relationship is instead just a selection relationship (like marriage selects rather than causes long-lived men).
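Here is a toy simulation (every number invented) in which marriage has no causal effect on lifespan at all, yet married men still come out living longer on average, simply because healthier men are modeled as more likely to marry:

```python
import random

random.seed(1)

# Toy model: each man has an underlying "health" score that alone determines
# lifespan. Marriage has NO causal effect here, but healthier men are more
# likely to marry, so marriage *selects* for long-lived men.
men = []
for _ in range(100_000):
    health = random.gauss(0, 1)
    lifespan = 75 + 5 * health                       # lifespan depends only on health
    married = random.random() < 0.5 + 0.2 * health   # marriage odds track health
    men.append((married, lifespan))

def avg(xs):
    return sum(xs) / len(xs)

married_avg = avg([l for m, l in men if m])
single_avg = avg([l for m, l in men if not m])
print(round(married_avg, 1), round(single_avg, 1))
```

The married group ends up several years longer-lived than the single group, even though nothing in the model lets marriage affect lifespan. Seeing that gap in real data therefore can't, by itself, tell you whether you're looking at causation or selection.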
Suppose you are trying to do a national survey of political opinions. Here’s a method: choose addresses at random and then go to those addresses in the middle of the day and talk to whoever opens the door. Any problems you see with this strategy?
Well, you may get some stay-at-home parents or retirees. You sure won’t get many single parents or homes with two working parents. Do you see how your selection strategy isn’t truly random? Sure, you chose addresses at random, but then you ignored anyone who wasn’t home when you showed up. That selects for certain people, and so you’ll have a biased sample.
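A quick sketch of the door-knocking problem (all numbers invented): suppose support for some policy differs between people who work during the day and people who don't. Polling only those who are home mid-day then skews the estimate:

```python
import random

random.seed(2)

# Toy electorate (invented numbers): 60% of people work during the day.
# Day workers support the policy 40% of the time; everyone else, 60%.
voters = []
for _ in range(100_000):
    works_days = random.random() < 0.6
    supports = random.random() < (0.4 if works_days else 0.6)
    voters.append((works_days, supports))

true_support = sum(s for _, s in voters) / len(voters)

# Door-knocking at mid-day only reaches people who are home.
reached = [s for works, s in voters if not works]
polled_support = sum(reached) / len(reached)

print(round(true_support, 2), round(polled_support, 2))
```

The mid-day poll reports roughly 60% support while true support is closer to 48%: the addresses were chosen randomly, but who answered the door was not.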
Self-selection is also a form of selection bias. Fox News polls will deliver overwhelmingly conservative results. Yelp reviews are going to be biased in favor of loyal customers and angry customers. Those in the middle aren’t likely to post reviews. American Idol is not a good poll of what Americans think because the only people who call in are those who already watch the show and care enough to make an attempt to vote.
When you hear a causal claim or statistical claim like “people who x are more likely to y” on the news, or read it online or in the paper, it is always important to ask yourself: could this be selection bias? Are women more likely to get osteoporosis, or are the people who get tested for osteoporosis already more likely to be women? For instance, women have a higher life expectancy, so perhaps there are simply more octogenarian women around to be diagnosed. I suspect researchers have accounted for this in their studies already (or perhaps these claims are simply false; what do I know?), but a dose of skepticism is healthy.
Understanding selection bias is a tool to put in your toolkit. Developing the habit of checking for the possibility of selection bias makes you a better thinker.
Finally, we should discuss the most common tactic used in the political sphere: reporting the same data in different ways to achieve different rhetorical goals. You might say “feeding the homeless in every state in the union would cost $50 million a year.” Or you might say “feeding the homeless in every state in the union would cost .0001% of the Pentagon’s yearly budget.” Talking in comparative terms often gives us a deeper grasp of what these large numbers mean.
“90% of people who take our medicine recover from their colds in under a week” vs. “90% of people recover from their colds in under a week.”

“Only 0.05% of Russian immigrants voted in the last election” vs. “98% of Russian immigrants who have US citizenship voted in the last election.”

“Selected test subjects showed a 200% increase in efficacy” vs. “2 test subjects out of the 10,000 tested showed a 200% increase in efficacy.”
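The first pair above turns on a missing baseline. A toy calculation (numbers invented) makes the trick explicit: if 90% of everyone recovers within a week regardless of treatment, then “90% of people who take our medicine recover” is true but tells you nothing about the medicine:

```python
# Invented numbers: colds resolve within a week for 90% of people,
# treated or not.
treated_recovered, treated_total = 900, 1_000
untreated_recovered, untreated_total = 9_000, 10_000

treated_rate = treated_recovered / treated_total        # 0.9
baseline_rate = untreated_recovered / untreated_total   # 0.9

# The advertised claim is literally true, but the medicine adds nothing
# once you compare against the baseline.
print(treated_rate, baseline_rate, treated_rate - baseline_rate)
```

Whenever a statistic quotes only one side of a comparison, ask what the other side would look like.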