Wednesday, April 20, 2016

Scientific Sleight-Of-Hand

The Public Good


Masters of Illusion: Both science and religion will attract charlatans, who will use their considerable powers to make people believe that something false is true. Moreover, both are prone to the use of reward and punishment, both have persons within their ranks who are arrogant and patronizing to outsiders, and both have persons who hold unwavering faith in their ideas. This is to say that scientists and monks might have more in common than many would first think, a proposition put forth by William A. Wilson for First Things: “Which brings us to the odd moment in which we live. At the same time as an ever more bloated scientific bureaucracy churns out masses of research results, the majority of which are likely outright false, scientists themselves are lauded as heroes and science is upheld as the only legitimate basis for policy-making. There’s reason to believe that these phenomena are linked. When a formerly ascetic discipline suddenly attains a measure of influence, it is bound to be flooded by opportunists and charlatans, whether it’s the National Academy of Science or the monastery of Cluny.”
Image Credit: Akiyoshi Kitaoka; Ritsumeikan University

An article by William A. Wilson in First Things points out a problem that plagues science today. While science and the long-standing, accepted method that undergirds it have the potential to do good and advance our understanding of the universe, much of science today does neither: it adds nothing to our knowledge, it squanders public funds on ill-conceived ideas, and it does nothing for the public good.

In “Scientific Regress” (May 2016), Wilson gives concrete examples of science’s very real failures:
The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.
Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid.
And so the argument goes. If you have been following science as I have, you will find much truth in it, particularly in how statistical analysis can be artfully and selectively applied to “prove” almost anything. (The results are in the data.) There is also the incentive to publish, even if the results are questionable, misleading, or plainly wrong, which helps explain scientific fraud and why it might be on the rise. This desire to cheat is nothing short of scientific sleight of hand: like the stage magician’s, its purpose is to deceive, though the magician deceives in the service of entertainment, which is not something serious scientists seek to, or ought to, emulate. Most scientists, however, have until recently dismissed such genuine concerns, often for the very reasons Wilson describes, or for other reasons that speak of a fear of loss.
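To see how selective analysis can manufacture such “proofs,” consider the short sketch below. It runs many simulated experiments on pure noise and reports only the ones that clear the conventional p < 0.05 threshold. The variable names, the choice of 100 trials, and the normal-approximation t-test are my own illustrative assumptions, not anything drawn from Wilson’s article or the studies he cites.

```python
import math
import random
import statistics

def welch_t_p_value(a, b):
    """Approximate two-sided p-value for a two-sample test, using a normal
    approximation to the t distribution (adequate for illustration at n = 30)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    z = (mean_a - mean_b) / se
    # Two-sided tail probability under the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 100          # number of independent "experiments" (illustrative)
n_per_group = 30      # subjects per group in each experiment
false_positives = 0

for i in range(trials):
    # Both groups are drawn from the SAME distribution: any "effect" is noise.
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treatment = [random.gauss(0, 1) for _ in range(n_per_group)]
    if welch_t_p_value(control, treatment) < 0.05:
        false_positives += 1
        print(f"Experiment {i}: 'significant' difference found (noise only)")

print(f"{false_positives} of {trials} null experiments looked 'significant'")
```

At that threshold, roughly five of the hundred null experiments will look “significant” by chance alone; publish only those, quietly discard the rest, and the literature fills with findings that cannot be replicated.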

Yet most non-scientists among the public can understand the public-policy implications of fraudulent science, including how it affects us all, notably since much of science today is funded from the public purse. This status quo is precisely what will keep science from progressing and from finding the answers that humanity seeks. This is not to say that good, sound science does not take place; it does, and we are all better for it. But bad, corrupt science often gets more attention than it deserves, which is none. Bad science does a disservice to the public, which is whom science ultimately serves. Science serves not itself or the cohort of scientists within its ranks, but humanity.

Too many scientists have either forgotten or dismissed this idea—a noble one—in their quest for fame and fortune. A dose of humility might be the necessary antidote.

*****************
For more, go to [FirstThings]