There is a need for self-critical review within the statistical community.

Consumers (well, at least some of them) are always chasing the results of new studies: individual nutrients, phytonutrients, antioxidants, fish oils, and so on. Some have value; others are later found to have little or none, or even to be harmful.

This is where science meets promotion and hucksterism, and it's big business at local supermarkets and warehouse stores. These studies can also sway policy makers and shift large amounts of money. At any rate, John Timmer's article "We're so good at medical studies that most of them are wrong" describes further potential problems with medical test data, even when the subjects are properly chosen:

The problem now is that we're rapidly expanding our ability to do tests. Various speakers pointed to data sources as diverse as gene expression chips and the Sloan Digital Sky Survey, which provide tens of thousands of individual data points to analyze. At the same time, the growth of computing power has meant that we can ask many questions of these large data sets at once, and each one of these tests increases the prospects that an error will occur in a study; as Shaffer put it, "every decision increases your error prospects." She pointed out that dividing data into subgroups, which can often identify susceptible subpopulations, is also a decision, and increases the chances of a spurious error. Smaller populations are also more prone to random associations. In the end, Young noted, by the time you reach 61 tests, there's a 95 percent chance that you'll get a significant result at random. And, let's face it—researchers want to see a significant result, so there's a strong, unintentional bias towards trying different tests until something pops out. Young went on to describe a study, published in JAMA, that was a multiple testing train wreck: exposures to 275 chemicals were considered, 32 health outcomes were tracked, and 10 demographic variables were used as controls. That was about 8,800 different tests, and as many as 9 million ways of looking at the data once the demographics were considered.
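Young's 61-test figure can be checked directly: assuming independent tests, each run at the conventional 0.05 significance threshold, the chance of at least one spurious "significant" result is 1 − 0.95^n. A minimal sketch in Python (the function name is mine, and the interpretation of the "9 million" figure as the 8,800 tests times the 2^10 possible subsets of demographic controls is an assumption, not something the article spells out):

```python
def familywise_error(n_tests, alpha=0.05):
    """Probability of at least one false positive across
    n independent tests, each at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

# Young's point: 61 tests at alpha = 0.05 is near-certain
# to produce at least one spurious hit.
print(round(familywise_error(61), 3))   # ~0.956, i.e. about 95%

# The JAMA example: 275 chemicals x 32 outcomes
print(275 * 32)                         # 8800 tests

# Assumed reading of the "9 million" figure: each of the 10
# demographic variables may be included or excluded as a control,
# giving 2**10 = 1024 variants of each of the 8,800 tests.
print(275 * 32 * 2**10)                 # 9,011,200 -- roughly 9 million
```

The multiplicative blow-up is the point: no single test is unreasonable, but the family of tests taken together makes a chance finding all but guaranteed.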


It's pretty obvious that these factors create a host of potential problems, but Young provided the best measure of where the field stands: in a survey of the recent literature, he found that 95 percent of the results of observational studies on human health failed replication when tested in a rigorous, double-blind trial. So, how do we fix this?




