The Science of Wind Turbine Syndrome: Part 1
Jul 24, 2013
The real gold standard of science is not “peer review”; it’s something called “reproducibility.”
—Curt Devlin (Fairhaven, MA), 7/1/13
Science has become a rarefied business these days. It is conducted far outside the bounds of the average person’s experience. As a result, it’s easy to mislead people about how science actually works and what counts as good science or bad science. Most people know that evolution is considered to be good science and creationism is considered to be bad science, but they still would be largely at a loss to explain why.
When told that the “gold standard” of science is peer review, most people tend to accept this as gospel. If science has become so sophisticated that only experts in the field can understand it, then surely it makes sense to have any scientific conclusions evaluated by other experts in that field. Right?
Unfortunately, the idea that peer review is the gold standard of science is absolutely false.
To the extent that peer review is based on authority or expert opinion, it is completely contrary to the true spirit of science. Peer review is not a bad practice, but its true purpose is to improve the work and decide if it is worthy of being published. You could say that peer review is the gold standard of publication—nothing more and nothing less.
At its heart, the real core of science is a handful of simple ideas called the scientific method. It involves careful observation, precise measurement, and accurate reporting of both the conditions under which measurements were made, and the measured results themselves. The whole point of the scientific method is to eliminate human authority, opinion, or bias of any kind from consideration.
The real gold standard of science is something called reproducibility. Simply put, this means that if you do the same experiment under the same conditions and same measurement precision, you get the same results.
You could say the mantra of science is “see for yourself.”
If we apply the standard of reproducibility to the findings reported by Dr. Nina Pierpont in her book, “Wind Turbine Syndrome” (WTS), her conclusions hold up remarkably well because her work is based on very careful observation and measurement. Her reporting of experimental conditions and measurements is extremely detailed and meticulous—even to the point of publishing all the raw data in her book. (No one who is interested in selling a book puts raw data in it.) Presumably, Pierpont did this to ensure that serious defects would be obvious. This type of transparency and disclosure is a signature of scientific integrity.
Predictability is also a very important element of good science. The findings of a study should support specific predictions about outcomes under certain conditions. As a resident of Fairhaven, Massachusetts, where two 1.5 MW industrial wind turbines were sited in a dense neighborhood two years ago, I can attest that WTS has proven to be an excellent predictor of the adverse health effects that have occurred since then. I have absolutely no doubt that the results of Pierpont’s study could be reproduced in Fairhaven tomorrow.
Pierpont’s critics within the wind industry could easily fund an independent study to determine whether her findings can be reproduced, but they never have and never will. Perhaps they already know too well that such an attempt would only confirm her findings. Pierpont’s findings are simple enough. When people live near wind turbines, they experience nausea, dizziness, sleeplessness, and stress-induced illnesses. When they get away from them, they begin to feel much better and may recover completely.
Just the other day, I heard a science editor on public radio, Heather Goldstone, leveling criticism that WTS is not peer reviewed and parroting the claim that peer review is the gold standard of science. Anyone who has read WTS (as I have) knows that Goldstone is factually incorrect. Pierpont sought review, advice, and criticism from her peers throughout her research and publication. This group included highly regarded clinicians, acoustic experts, researchers in neurology and public health, among others. The referee reports (aka peer reviews) are all included in the book, for those who trouble themselves to actually read it.
Goldstone also claimed that Pierpont’s study was flawed because she studied subjects only in one small location. This is also factually incorrect, proving only that our “science” editor, Goldstone, had never bothered to actually read WTS herself. (Maybe she got this information about the book from “good authority,” perhaps a friend in the wind industry?)
As Pierpont explains in the book, she went to great lengths to identify subjects from other geographies and countries to avoid this limitation in her case study. That is why she had to restrict the participants to those who spoke English, to ensure that she could clearly understand their reports.
By contrast, the process of peer review does not stand up so well under close scrutiny. Several studies have shown that the peer review process can be fraught with petty professional jealousy, personal grudges, and other conflicts of interest created by ambition, academic competition, and so forth. Some studies have shown that this problem is even worse in blind and double-blind peer reviews, because reviewers can hide behind anonymity and offer reviews that they would not stake their professional reputation on.
So much for peer review as the ultimate standard of scholarly publication.
Advocates of wind energy would have you believe that anything that is not peer-reviewed should be discredited and disregarded. Let’s see how this idea holds up.
In 1904, if you had argued that apples fall from the tree to the ground because the immense mass of the Earth causes space to warp, you would have been treated to some strange looks. If you had claimed that time slows down as things speed up, or that matter and energy were really two forms of the same thing, you probably would have been diagnosed with dementia praecox (that’s what schizophrenia was called in those days). Truly, such ideas simply defy common sense. (Note: Good science often does.)
You would have been subjected to raucous laughter if you had mentioned that you learned all this from a third-class clerk at the Swiss patent office in Bern.
And yet, as improbable as all this sounds, this is more or less what Albert Einstein did tell us in an article he published in the German journal Annalen der Physik in 1905. It established one of the very pillars of modern physics for the next century.
Amazingly, Einstein’s work was not peer-reviewed at all. It was read by Max Planck, the pre-eminent physicist of the day, who gave it a wink and a nod. Then it was published. Since then, Einstein’s theories have been experimented with, scrutinized, and tested as much as any in history. Science must accept or reject them based on evidence alone, not a “peer reviewer’s” authority or opinion.
Einstein’s ideas—most, at least—have been confirmed over and over again.
Based on the “gold standard” of peer review, however, we are presumably expected to discard the theory of relativity until it has been properly peer reviewed.
In 1953, two Cambridge biologists, James Watson and Francis Crick, published a paper in the journal Nature claiming that the chemical structure of DNA, the code for all life on Earth (and probably the universe), is a double helix, like a spiral staircase, which in fact gave them the idea.
Again, they did so without a single peer review. It would seem that we must disregard the foundations of modern biology and genetics, too. The “gold standard” of peer review demands it, correct?
When given fairly and honestly, peer review can be a powerful ally of science. Often, peer review can provide an invaluable exchange of ideas between researchers. Sometimes it can be the beginning of fruitful collaboration between scientists, each of whom is holding a different piece of the same puzzle. But the idea that peer review is the final arbiter of science is absurd. If there is such a standard, it is, and must be, reproducibility. Replicability.
Darwin’s “Origin of Species” was not “peer-reviewed” before publication. But it was “replicated”—and it revolutionized biology.
Let’s face it. The chant of peer review coming from religious devotees of wind is becoming nothing more than lip service by those who have been turned into intellectual zombies by the incessant propaganda of a wind industry that places profits above health, politics before science, and opinion over genuine knowledge.
In the case of WTS, this chant has been used as a weapon of mass delusion, a device to dismiss a superb piece of science and a pioneering contribution to our knowledge about the impact of wind turbines on human health and wellbeing. This has been done because legitimate criticism and ground-level research only serve to strengthen the conclusions arrived at in this book.
If you are interested in some of the most cogent and legitimate criticisms of WTS that I have read, consider these:
» The study was done by interview and limited to available medical records.
» Participant memory limitations or distortions.
» Possible minimization or exaggeration effects.
» The study was limited to English-speaking subjects.
» Small case series sample.
» Limited duration of follow-up.
The details of these specific criticisms and limitations of the Pierpont report can be found on pages 124-125 of WTS. Pierpont herself wrote them to alert her peers and fellow clinicians, and to identify the limitations of her own work. She undoubtedly realized that the study should be repeated on a much larger scale to address them, a task she did not have sufficient resources to undertake herself.
Calling attention to the defects or limitations of your own study does not invalidate it. On the contrary, it is one of the hallmarks of good science and an invitation to further study by other scientists who may be in a position to eliminate those limitations and either confirm or reject its conclusions on the basis of the evidence alone.
At the end of the day, you must ask yourself why a study of such profound social importance has not been repeated on a large scale. Could it be that those with the most to lose are afraid of what they will find?
“Wind Turbine Syndrome” is good science. The devotees of wind turbines, who would challenge the results reported in its pages, must do so on the basis of more good science—or not at all. Either they must exercise the Principle of Reproducibility or accept Ludwig Wittgenstein’s famous caution, “Whereof one cannot speak, thereof one must be silent.”
Editor’s note: Notice Google’s home page image today, celebrating Rosalind Franklin on her 93rd birthday.