The importance of science in paranormal research
By D.A. Lascelles
An article in The Guardian caught my eye today and I felt it had relevance to this blog for a number of reasons. First of all, the title, Precognition studies and the curse of the failed replications, definitely hinted at a link to the spirit world and the accompanying photograph of a spiritualist gazing into a crystal ball only added to that appearance. Secondly, the main meat of the article touched on something very close to my heart with regards to parapsychology and any research into the spirit world – the importance of good science.
To give some background (in case you don’t want to read the whole article): Professor French and his team attempted to replicate a precognition study by Professor Daryl Bem, after Bem himself invited any and all comers to try to replicate his results, which he claimed demonstrated that precognition exists. French and a group of other psychologists (including the moderately famous Professor Richard Wiseman) conducted the experiments exactly as Bem had and produced their own analysis, which showed no precognitive effect. A number of top journals refused to publish it or even send it out to reviewers. One journal that did send it to review had Bem himself as a reviewer, and he was the one who rejected it (the other reviewer loved it).
On reading Professor French’s article, I was not surprised at the problems he had getting his study published. Having worked in science for a number of years (different field, same politics), I know all about the reluctance of journals to consider negative results, or papers which repeat previously published studies, whether they support or contradict the claims made by the original paper. As French rightly points out, this is an issue which potentially skews the data available to the general public, especially when you consider that what the public largely reads are newspaper articles filtered by journalists from the most interesting of the papers that do get published. In other words, what the average man or woman on the street actually reads is a summary (sometimes with the wrong emphasis) of a study a journalist found interesting, selected from the few studies the reviewers found interesting, selected in turn from the even fewer studies the editor found interesting, which only reached the editor because the researchers thought the results stood a chance of being published. That’s a lot of filtering steps to get through before your hard-won research data appears in a reputable journal.
And this was only one of the issues that French pointed out. According to him:
“As would be expected given the controversial nature of Bem’s claims, a number of critics have gone through the original paper with a fine-toothed comb and highlighted evidence of flawed methodology and inappropriate statistical analyses.”
Parapsychology as a scientific discipline has major problems being taken seriously. Too often it is confused with the table-rappers and conmen of the early twentieth century and dismissed as a joke discipline populated by kooks and cranks. Any study into the paranormal showing positive results is therefore going to be pulled apart by sceptics and critics eager to prove the cranks wrong. This often means rigorous and vicious attacks on the methodology and, in particular, on the statistical methods used to analyse the data. This happens in any field of science, to be honest. The choice of significance test (chi-squared, t-test, paired t-test, one-tailed, two-tailed and so on – there are a lot of them, each appropriate for different situations) is an easy target, and quite often you can pick any scientific paper and spot at least one statistical flaw in the argument – usually failing to set an appropriately stringent P value* (which makes positive results more likely) or using the wrong test for the data collected (which makes the analysis irrelevant, or at least doubtful). In fact, one of my professors at university once said that he looked at the list of authors on a paper and, if there wasn’t at least one statistician among them, made sure to check the analysis closely for errors. The difference between many of these papers and more controversial studies (such as most in parapsychology) is that they are not challenging anyone’s paradigm in a significant way. People – reviewers, editors, critics – are therefore less likely to go hunting for an error that will damn the analysis, and so the status quo of their opinions is maintained.
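The point about lax P value thresholds is easy to demonstrate for yourself. The sketch below (pure Python standard library; the numbers and function names are my own illustration, not anything from French’s or Bem’s studies) simulates many experiments in which there is no real effect at all, then counts how often a simple two-sample test still comes out “significant” at P&lt;0.05 – by definition, roughly one time in twenty.

```python
# A minimal sketch of why a lax significance threshold produces
# false positives: simulate many experiments with NO real effect,
# and count how often a two-sample z-test (normal approximation)
# still reports p < 0.05 purely by chance.
import math
import random

def two_sample_p(a, b):
    """Two-tailed p value from a normal-approximation z-test."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) under the null

random.seed(42)
trials = 2000
false_positives = sum(
    1
    for _ in range(trials)
    # both groups drawn from the SAME distribution: any "effect" is chance
    if two_sample_p(
        [random.gauss(0, 1) for _ in range(50)],
        [random.gauss(0, 1) for _ in range(50)],
    ) < 0.05
)
rate = false_positives / trials
print(f"false positive rate at p<0.05: {rate:.3f}")  # close to 0.05
```

Run enough underpowered studies at P&lt;0.05 and a handful of spurious “positive” results are guaranteed – which is exactly why a controversial field cannot afford a lenient threshold.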
For this reason, parapsychologists need to be whiter than white. You have to expect scrutiny from the critics and the sceptics and be prepared for any and all attacks on methodology and statistical analysis. This is especially true because most of the data collected in this field is variable and open to differing interpretations. I always tell my A level Biology students that if you want to be good at statistical analysis, it pays to take some psychology modules at university. Psychologists, because much of their data is subject to the whims of human emotion, have to be shit-hot at their stats in order to produce any positive results, while other disciplines can sometimes show a convincing result without needing stats to back it up. Parapsychologists have this problem in spades, and so have to be even better at it than the psychologists.
After all, if you want to demonstrate irrefutable proof that the supernatural and paranormal exist, hiding behind sloppy methods and obfuscatingly opaque statistics is not going to get you there. All it will do is add further support to the sceptics’ view that these things are no more than coincidence.
*For the uninitiated: the P value of a statistical test is the probability of obtaining results at least as extreme as those observed if the null hypothesis is true – that is, if the two data sets really do come from the same underlying population and their means are not genuinely different. The threshold is usually set at P=0.05, meaning a result is only called significant if data this extreme would arise by chance less than 5% of the time under the null hypothesis. The lower the P value, the stronger the evidence that the difference between the two data sets is real; the higher the P value, the more plausible it is that the results are down to random chance.
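One concrete way to see what a P value measures is a permutation test, where the “chance would do this” probability is estimated directly rather than looked up in a table. The sketch below uses made-up hit rates for two hypothetical groups in a guessing task (my own illustrative numbers, nothing from the studies discussed above): if the group labels were meaningless, how often would shuffling them produce a difference in means at least as extreme as the one observed?

```python
# A minimal sketch of what a p value measures, via a permutation test:
# the fraction of random relabellings of the data that produce a
# difference in group means at least as extreme as the observed one.
import random

def permutation_p(a, b, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # pretend the group labels mean nothing
        pa, pb = pooled[: len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            extreme += 1
    return extreme / n_perm

# hypothetical "hit rates" for two small groups in a guessing task
group_a = [0.52, 0.55, 0.49, 0.61, 0.53, 0.58]
group_b = [0.50, 0.47, 0.51, 0.49, 0.52, 0.48]
p = permutation_p(group_a, group_b)
print(f"p = {p:.4f}")  # small p: data this extreme is rare under the null
```

Put this way, the standard P=0.05 cut-off just says: call the result significant only if fewer than 5% of the shuffled, label-free versions of the data look as extreme as the real thing.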