
Our Positivist Bias

Daniel Sarewitz’s recent Nature column, “Beware the creeping cracks of bias”, makes for unsettling reading. The co-director of Arizona State University’s Consortium for Science, Policy and Outcomes warns that science’s positive bias may eventually erode public trust, as it becomes increasingly clear that science cannot deliver on many of its promises.

Positive publication bias first came to light in the 1990s, when pharmaceutical companies were found to be designing biased studies and publishing only those favouring their own products, whilst consigning negative results to laboratory drawers. The financial motivation seemed obvious (and reprehensible) at the time, but it appears that pharmaceutical companies were not the only ones at it.

A preference for publishing positive results is now widely recognised in many areas of science, but most notably in biomedical research, where early findings can be readily tested against treatment response. A Nature comment article from March discussed attempts to replicate 53 exciting findings in cancer research; only six were successful.

Many blame the rewards scientists receive, as it is easier to publish positive results in good journals, but Sarewitz goes beyond the practicalities of career progression:

 “…a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally…and the public desire for a better world is answered”.

An all-pervading belief that science equals progress may have led to a constrained working environment for scientists. Sarewitz speaks of the damage positive bias may cause to public trust; however, the public rarely accesses scientific journals directly. What, then, are the implications for those communicating science to the wider public?

When the Science Media Centre’s Fiona Fox appeared at the Leveson Inquiry, she discussed the difficulties of reporting novel findings from small samples:

 “Most studies are preliminary and provisional.  The vast majority will not be replicated, and indeed will be overturned because they’re small. They’re very important scientifically, but they’re not important to the public at that stage…we are not saying that we don’t want the media to report on these…we want all these studies to be reported, we’re delighted to see them but we want them on the inside pages.  They should not be on the front.”

Fox spoke about the pressure on journalists to report surprising new findings, and to sensationalise risks. She also criticised the tendency for sub-editors to write dramatic, eye-catching headlines that do not accurately reflect article content. To tackle these issues, and avoid the erosion of public trust that unfulfilled scientific claims may cause, Fox suggested scientists and journalists work together to write a new code of practice for science journalism.

Many of the problems Fox highlighted may simply be due to established journalistic norms, and covering an entire scientific study in a short news item is bound to limit the space devoted to caveats. However, in his narrative analysis of a scientific magazine, Ron Curtis felt the bias in science reporting went further than this. He found that science articles often contained a strong narrative element, pitching scientists as heroes triumphing over adversity, or as detectives solving a mystery. Readers’ familiarity with this type of story meant that even when qualifications and limitations were stated, the expectation of a victory for the hero, and of an ‘ending’ to the tale, rendered them almost meaningless.

So it seems there is also a positive – or perhaps positivist – bias of sorts working in the popular science press, influencing the very structure of science communication. Painting scientists as detectives solving nature’s mysteries may seem natural, but it holds many ideological implications, an important one being that the mysteries described have been solved and the case closed; science was successful and has therefore progressed. A code of practice may go some way to alleviating the practical pressures faced by journalists, but the same belief system that Sarewitz sees affecting scientists may also be constraining popular science journalism and the public’s expectations of it.

Challenging these seemingly deep-set societal beliefs will require an enormous cultural shift. Nonetheless, Sarewitz warns that if the issue is not addressed, the public will lose faith in a system that seems to be forever making ‘breakthroughs’ that rarely amount to anything. “The first step is to face up to the problem — before the cracks undermine the very foundations of science”.