The liberal intelligentsia decided years ago that it knew what was best for John Q. Public.
On March 17, 2006, the Associated Press published an authoritative-sounding news article (“Girls warned not to ‘go wild’ on Spring Break”) about an American Medical Association (AMA) survey which “all but [confirmed] what goes on in those Girls Gone Wild videos.”
According to the article, “83 percent of college women and graduates surveyed by the AMA said spring break involves heavier-than-usual drinking, and 74 percent said the break results in increased sexual activity.” Also, “about 30 percent of women surveyed said spring break trips with sun and alcohol are an essential part of college life.”
It’s a good thing we have a nonpartisan, dispassionate scientific group to conduct this type of potentially controversial research. We can’t have special-interest groups conducting shoddy research in an effort to call attention to underage drinking among women.
Oops, wait. The American Medical Association performed this study “to call attention to underage drinking among women,” according to Dr. J. Edward Hill, president of the AMA (as reported by AP).
As a matter of fact, the AMA’s spring-break survey is a pretty good summary of what not to do when performing statistical research.
1) Misunderstand the definition of a “random sample.” According to AP, “the online survey queried a nationwide random sample of 644 college women or graduates ages 17 to 35 last week.” Because online surveys are opt-in, they are never random; they are biased toward the people who choose to respond. It doesn’t matter whether you sample 30, 644, or 187,998 women; the sample mean converges to the mean of the self-selected group, not of the population. For this reason, scientists do not use opt-in Internet surveys, but rather probability-based samples designed to represent the population.
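The point about opt-in bias can be sketched numerically. In this toy simulation (all numbers hypothetical, not from the AMA survey), 20 percent of the population engages in the behavior being measured, but people who engage in it are assumed to be five times as likely to opt in to the poll. No matter how large the sample grows, the estimate settles near 56 percent, not 20:

```python
import random

random.seed(42)

TRUE_RATE = 0.20       # hypothetical true share of the behavior in the population
RESPONSE_BOOST = 5     # hypothetical: people with the behavior opt in 5x as often

def opt_in_poll(n):
    """Collect n opt-in responses; those with the behavior respond 5x as often."""
    hits = 0
    collected = 0
    while collected < n:
        has_behavior = random.random() < TRUE_RATE
        # acceptance probability: 1.0 if has_behavior, 0.2 otherwise
        accept_prob = 1.0 if has_behavior else 1.0 / RESPONSE_BOOST
        if random.random() < accept_prob:
            hits += has_behavior
            collected += 1
    return hits / n

for n in (30, 644, 187_998):
    print(f"n = {n:>7}: estimated rate = {opt_in_poll(n):.3f}")
```

The estimate converges, but to the wrong number (here, 0.20 / 0.36 ≈ 0.556, the behavior’s share among responders rather than among the population). A bigger opt-in sample only makes you more confidently wrong.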
2) Quote people who are not experts: “I think a lot of students wouldn’t really pay that much attention to [an anti-binge-drinking campaign],” said Kathleen Fitzgerald, a 21-year-old junior at Illinois State University. “They would just be like, ‘Duh, that’s why we do it.’”
3) Write unquantifiable, intentionally vague questions and decide ex post facto what the respondents meant. “Seventy-four percent said women use spring break drinking as an excuse for ‘outrageous’ behavior that the AMA said could include public nudity and dancing on tables.”
4) Try to find the most biased subsample you can, then generalize from there. “Of the 27 percent who said they had attended a college spring break trip…More than half said they regretted getting sick from drinking on the trip…About 40 percent said they regretted passing out or not remembering what they did…13 percent said they had sexual activity with more than one partner…10 percent said they regretted engaging in public or group sexual activity” [italics mine].
5) Whenever possible, rely on respondents’ reports of “the experiences of friends and acquaintances.”
6) Use technical statistical terms in wildly inappropriate ways. The original AMA press release said, “The American Medical Association commissioned the survey. Fako & Associates, Inc., of Lemont, Illinois, a national public opinion research firm, conducted the survey online. A nationwide random sample of 644 women age 17-35 who currently attend college, graduated from college or attended, but did not graduate from college within the United States were surveyed. The survey has a margin of error of +/- 4.00 percent at the 95 percent level of confidence” [italics mine]. Random sample, margin of error, and level of confidence are all statistical terms with specific, technical meanings. Because the “survey” was not a random sample but rather an Internet poll, the “margin of error” and “level of confidence” have no meaning. A revised press release no longer boasts a “margin of error” or a “95 percent level of confidence,” and an editor’s note explains the age breakdown of the respondents.
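For the record, the “+/- 4.00 percent at the 95 percent level of confidence” figure is exactly what the textbook worst-case formula produces for a true random sample of 644, which is presumably where the press release got it. A quick check:

```python
import math

n = 644    # the AMA's reported sample size
z = 1.96   # z-score for a 95 percent confidence level
p = 0.5    # worst-case proportion, which maximizes the margin

margin = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {margin:.2%}")  # prints "margin of error: +/- 3.86%"
```

The arithmetic works out (about 3.86 percent, rounded up to the advertised 4.00), but the formula’s premise is a random sample. Applied to an opt-in Internet poll, the number is decoration, which is precisely why the revised press release dropped it.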
Cliff Zukin, president of the American Association for Public Opinion Research, who insists on antiquated and outmoded statistical conventions like “random sampling,” investigated the methodology of AMA’s online poll. A highly disturbing e-mail conversation between Zukin and Janet Williams, deputy director of the AMA’s Office of Alcohol, Tobacco and Other Drug Abuse, can be found on mysterypollster.com.
Williams said that “the poll was conducted in the industry standard for Internet polls—this was not academic research—it was a public opinion poll that is standard for policy development and used by politicians and nonprofits.”
So the American Medical Association is now learning poll-taking methods from politicians. That might be amusing, except that not even a politician would try to publish the results of an Internet poll.
Williams continued, “I have been involved in the development of public policy research for more than 15 years using this company and several others…I ask why did you not have a problem with the other two public opinion surveys I have conducted…As far as the methodology, it is the standard in the industry and does generalize for the population…this is a standard media advocacy tool that is regularly used by the American Lung Association, American Heart Association, American Cancer Society and others.”
It’s becoming increasingly dangerous to be innumerate in a world where the American Medical Association, American Lung Association, American Heart Association, American Cancer Society, “and others” conspire to publish pretend science for “media advocacy” purposes.