Monday, 3 January 2011

Why the "decline effect" shows the scientific method works

Part of the struggle between science and ideology is the recurring theme that science too can be wrong. Note how often we have seen scientists retreat from their initial position to a more nuanced one. This shift to less pronounced statements leads the anti-science crowd to claim it proves science is not as reliable as we think. David Gorski discusses this phenomenon while responding to:
an article in The New Yorker by Jonah Lehrer entitled The Truth Wears Off: Is There Something Wrong With the Scientific Method?
The above concept is called the "decline effect", which, according to Gorski, refers to:
a phenomenon in which initial results from experiments or studies of a scientific question are highly impressive, but, over time, become less so as the same investigators and other investigators try to replicate the results, usually as a means of building on them. In fact, Googling “the decline effect” brought up an entry from The Skeptic’s Dictionary, in which the decline effect is described thusly:
The decline effect is the notion that psychics lose their powers under continued investigation. This idea is based on the observation that subjects who do significantly better than chance in early trials tend to do worse in later trials.
To him this observation is neither shocking nor new:
In medicine, in particular, early reports tend to be smaller trials and experiments that, because of their size, tend to be more prone to false positive results. Such false positive results (or, perhaps, exaggerated results that appear more positive than they really are) generate enthusiasm, and more investigators pile on. There’s often a tendency to want to publish confirmatory papers early on (the “bandwagon effect”), which might further skew the literature too far towards the positive. Ultimately, larger, more rigorous studies are done, and these studies result in a “regression to the mean” of sorts, in which the newer studies fail to replicate the large effects seen in earlier results. This is nothing more than what we’ve been writing right here on SBM ever since its inception, namely that the normal course of clinical research is to start out with observations from smaller studies, which are inherently less reliable because they are small and thus more prone to false positives or exaggerated effect sizes
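Gorski's mechanism, small noisy studies combined with a publication filter that favours significant results, is easy to demonstrate. Below is a minimal Python sketch; all of the numbers in it (the true effect size, the per-arm sample sizes, the p < 0.05 filter) are illustrative assumptions of mine, not figures from any of the cited articles. It "publishes" only trials that reach significance, and the significant small trials systematically exaggerate the true effect while larger replications drift back toward it.

# A minimal, hypothetical simulation of the decline effect as a
# statistical artifact: small significant studies exaggerate the
# true effect, larger replications regress back toward it.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2           # assumed true standardized effect size
N_SMALL, N_LARGE = 20, 200  # per-arm sample sizes (illustrative)
N_STUDIES = 5000            # trials simulated at each size

def published_effects(n):
    """Simulate N_STUDIES two-arm trials of size n per arm and return
    the observed effects of those reaching p < 0.05 (i.e. the ones
    that get published and generate enthusiasm)."""
    published = []
    for _ in range(N_STUDIES):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(TRUE_EFFECT, 1.0, n)
        _, p = ttest_ind(treated, control)
        if p < 0.05:
            published.append(treated.mean() - control.mean())
    return np.array(published)

small = published_effects(N_SMALL)
large = published_effects(N_LARGE)

print(f"true effect: {TRUE_EFFECT}")
print(f"significant small studies (n={N_SMALL}): mean effect {small.mean():.2f}")
print(f"significant large studies (n={N_LARGE}): mean effect {large.mean():.2f}")

With these assumptions the significant small studies typically report an effect around three times the true one, while the large studies land close to it: the apparent "decline" appears without any truth wearing off.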
His conclusion:
Although Lehrer makes some good points, where he stumbles, from my perspective, is when he appears to conflate “truth” with science or, more properly, accept the idea that there are scientific “truths,” even going so far as to use the word in the title of his article. That is a profound misrepresentation of the nature of science, in which all “truths” are provisional and all “truths” are subject to revision based on evidence and experimentation. The decline effect–or, as Lehrer describes it in the title of his article, the “truth wearing off”–is nothing more than science doing what science does: Correcting itself.
Commenting on this article, Steven Novella writes that the term "decline effect":
was first applied to the parapsychological literature, and was in fact proposed as a real phenomena of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.  Lehrer, however, applies this idea to all of science, not just parapsychology.
Just like Gorski, he notes:
Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
His explanation for this phenomenon is the same as Gorski's. The article was also noticed by Pharyngula, who stated:
I read it. I was unimpressed with the overselling of the flaws in the science, but actually quite impressed with the article as an example of psychological manipulation.
And he then remarked:
Early in any scientific career, one should learn a couple of general rules: science is never about absolute certainty, and the absence of black & white binary results is not evidence against it; you don't get to choose what you want to believe, but instead only accept provisionally a result; and when you've got a positive result, the proper response is not to claim that you've proved something, but instead to focus more tightly, scrutinize more strictly, and test, test, test ever more deeply. It's unfortunate that Lehrer has tainted his story with all that unwarranted breast-beating, because as a summary of why science can be hard to do, and of the institutional flaws in doing science, it's quite good.
In short, the article erroneously presents the inherent characteristics of the scientific method as "proof" that science is just another opinion. While provocative, it identifies a weakness but fails to recognise that this self-correction is actually science's strong point.

As an aside, this flowchart on how to debate the anti-science brigade (creationists, actually) is brilliant:
(h/t Pharyngula)

Update: Following the "decline effect" there is now the Lehrer effect.

1 comment:

  1. The 2 examples of basic principles of reasoning are nutty ... you cannot get to "true" ... you might get highly likely and reasonable to assume ... but "true"?

    eg everyone assumes cigarettes cause cancer ... and yes many contract cancer from smoking ... or so it appears ... but since many who smoke do not ... one cannot say the statement is true ... However modify the claim to: Cigarettes are (reasonably likely ... or some legit % average) to cause cancer and that would be a bit more accurate ... thing is one can never prove that someone who smoked would not have contracted cancer had they not smoked ... there's so much polluted air we breathe .. who's to say what is causal ... Better still to say Cigarettes increase your risk for cancer ... I look at it as playing a kind of russian roulette .. where you don't know whether you hold a fully loaded gun or an empty one ... why shoot yourself in the head to find out?
