Common Statistical Sense

The whole population was studied on Election Day, in the best way available

By David Stolinsky and Dave Kopel

December 19, 2000, 9:55 a.m., National Review Online. Also by Kopel on the 2000 election recount: "Illegal Ballots? Hooey. Read the statute," National Review Online, Nov. 10, 2000; "The Palm Beach Legal Precedent. No cause for Dem squawking," National Review Online, Nov. 9, 2000.

The Florida election mess will eventually fade, but it is symptomatic of a larger, enduring flaw in the American public dialogue. Anyone who ever took a course in statistics was taught its standard method: select a representative sample and study it, in order to draw valid conclusions about the whole population. But lately this process has been reversed: samples are being used to second-guess data from the whole population, and for the most unscientific of reasons, political agendas.

For example, a professor recently "studied" a small sample of Florida's population, then "concluded" that if all the votes were counted, the state would show more votes for Gore than for Bush. This "study" will be cited ad infinitum by those who want to deny President Bush's legitimacy. But the whole population had already been studied on Election Day, in the best way available: an election was held in which everyone who wished to vote did so, and all readable votes were counted. If the professor's sample gave a different result, that proves only that the sample was not representative.

Another example: Dr. John Lott, in his book More Guns, Less Crime, studied all 3,054 counties in the U.S. and found that violent crime dropped in counties where law-abiding citizens were licensed to carry guns. His book was reviewed in the New England Journal of Medicine, which chose a gun-control advocate as the reviewer. The reviewer selected a sample of counties in which, he claimed, violent crime rose after concealed-carry licensing was enacted. Again, this proved only that the sample was not representative of the whole population.

In both cases, a sample was selected not at random, but with the object of "proving" that the authors' biases were correct. And in both cases, the whole population had already been studied, so studying a sample was not merely biased but unnecessary.

From these examples, we can conclude that inferences drawn from a sample are most unlikely to be more valid than data drawn from the whole population. Regrettably, we can also conclude that if you torture the numbers long enough, they will tell you anything you want to hear. This is, of course, the precise opposite of science, in which we reach conclusions by studying the outside world. In these examples, the analysts tried to make the outside world conform to their preconceptions. And when it didn't, they distorted the numbers to pretend that it did.
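The arithmetic behind this point can be shown with a short simulation. This is a hypothetical sketch: the ballot counts, precinct leanings, and sample size below are invented for illustration and are not Florida data.

```python
import random

random.seed(1)

# Hypothetical electorate: 60,000 ballots from precincts leaning 55% toward
# candidate A, and 40,000 ballots from precincts leaning 55% toward candidate B.
population = []
for _ in range(60_000):
    population.append("A" if random.random() < 0.55 else "B")
for _ in range(40_000):
    population.append("B" if random.random() < 0.55 else "A")

# The election itself: count every readable ballot in the whole population.
print("Full count: A =", population.count("A"), " B =", population.count("B"))

# A later "study": sample only from the B-leaning precincts (the last 40,000
# ballots), i.e., a sample chosen with an eye toward the desired conclusion.
biased_sample = random.sample(population[60_000:], 1_000)
print("Biased sample: A =", biased_sample.count("A"), " B =", biased_sample.count("B"))
```

The full count and the biased sample disagree only because the sample was drawn from one side of the electorate; the sample says nothing about the complete count it contradicts.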

What, after all, is the difference between a correction and a fudge factor? A correction is something we do to make a measurement of the outside world more accurate. For instance, we compare our thermometer with a standard thermometer of known accuracy. Or we check our speedometer against a police radar gun. That is, we try to make the measurement more accurate without knowing which way the correction will move the result.

A fudge factor, on the other hand, is a dishonest effort to move the result in the direction we already wanted it to go. This is exemplified by a quotation from the biography of a Nobel Prize winner in physics, Genius: The Life and Science of Richard Feynman, by James Gleick:

If a Caltech experimenter told Feynman about a result reached after a complex process of correcting data, Feynman was sure to ask how the experimenter had decided when to stop correcting, and whether that decision had been made before the experimenter could see what effect it would have on the outcome. It was all too easy to fall into the trap of correcting until the answer looked right. To avoid it required an intimate acquaintanceship with the rules of the scientist's game. It also required not just honesty, but a sense that honesty required exertion.
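Feynman's rule can be made concrete with a small simulation. This is a hypothetical sketch with invented numbers, not a description of any real experiment: one procedure applies a correction decided before seeing the result; the other keeps "correcting" until the result crosses the value the experimenter wanted.

```python
import random

random.seed(2)

TRUE_VALUE = 100.0      # what an ideal instrument would report
DESIRED_VALUE = 105.0   # the answer the experimenter hopes to see

def raw_measurement():
    # A noisy instrument with a known calibration bias of -2.0 units.
    return TRUE_VALUE - 2.0 + random.gauss(0, 1.0)

def honest_correction(reading):
    # A correction fixed in advance by calibration: add back the known bias,
    # without looking at which way it moves the final result.
    return reading + 2.0

def fudge_until_it_looks_right(reading):
    # Keep applying small "corrections" and stop as soon as the answer looks right.
    while reading < DESIRED_VALUE:
        reading += 0.5
    return reading

honest = [honest_correction(raw_measurement()) for _ in range(1_000)]
fudged = [fudge_until_it_looks_right(raw_measurement()) for _ in range(1_000)]

print("Average after honest correction:", round(sum(honest) / len(honest), 2))  # near 100
print("Average after fudging:", round(sum(fudged) / len(fudged), 2))            # near 105
```

The honest correction moves every reading by the same pre-committed amount, wherever the result lands; the fudge stops only when the number reaches the answer the experimenter wanted all along.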

Because so much of the population is innumerate, it cannot distinguish valid social-science research from junk. As a result, many in the public retreat to the know-nothing conclusion that statistics don't really prove anything. This allows people to ignore overwhelming and undisputed social-science evidence in order to cling to notions they strongly want to believe.

For example, as an article of faith, gun prohibitionists believe that people who don't work for the government are incapable of using guns to save lives. They insist that a defensive gun user is more likely to have the gun taken away by the criminal than to use it successfully. Yet every single study of defensive gun use finds that "take-aways" are very rare, and no social scientist has offered a serious critique of these findings.

Before the "Million" Mom March, Professor Lott appeared on a television show debating an official from MMM. The take-away issue came up, and Lott explained the data. The MMM spokeswoman snootily retorted that her own view of statistics was different. Not that she actually knew any statistics, or had read a single study on the subject. Her personal "view" was enough for her to ignore all contrary, rational evidence.

The "Million" Mom March itself was another exercise in the self-indulgence of self-deception. Photos show a crowd on the Washington Mall numbering in the tens of thousands. Yet the march's political consultant created an instant "recount" by claiming that the crowd was 850,000, and the media, substituting it for reality, began touting this figure, and other figures claiming a crowd of a quarter million or more. The November elections, however, provided a useful reality check — even the Washington Post reported that the gun control issue was hurting Democrats. This is not a result to be expected if there actually were an anti-gun citizen movement which could mobilize a crowd of close to a million.

Scientists are not the only ones who need to remember that honesty requires exertion. All citizens do. We must make gun policy based on facts, not counter-factual emotions. We must count all readable votes, then stop, regardless of whether our candidate won or lost. We must not persist with repeated recounts until we finally get the result we wanted all along, then declare that we have found the "correct" result. Surely we can exert ourselves sufficiently to refrain from using fudge factors and pretending that they are corrections. If we cannot, we deserve not to be citizens, but to be mere subjects. If we care so little that we allow election results and our personal safety decisions to be distorted to suit those in power, we deserve to be ruled, not governed. The choice is ours.

