Op-Ed: Who’s to blame when fake science gets published?
The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn’t going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.
The study in question was the brainchild of Michael LaCour, a graduate student at UCLA, along with Donald Green, a professor of political science at Columbia University. Using surveys, they showed that a 20-minute conversation with a gay person would soften the hearts of opponents of same-sex marriage. The simple act of putting a face on an issue could begin to dissolve abstract ideology and entrenched hostility. When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.
But earlier this month, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating “irregularities” — statistical quirks and troubling patterns — that strongly implied that the whole LaCour/Green study was based upon made-up data.
The data hit the fan last week, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he’d really done was help LaCour write up the findings. What’s more, Green said that he initially had doubts about the results, which were “so astonishing” that they “would only be credible if the study were replicated.” After LaCour “replicated” the result, Green was satisfied, apparently without ever looking at the original survey responses.
Science magazine didn’t shoulder any blame, either. In a statement, Editor in Chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: “No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud.”
This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his advisor’s eyes — or vice versa. And if close collaborators aren’t going to catch the problem, it’s no surprise that outside reviewers dragooned into critiquing the research for a journal won’t catch it either. A modern science article rests on a foundation of trust.
Which is a sign that something is very amiss.
Sure, it’s an act of bad faith when a grad student fools his advisor with a fake survey, but it’s also a predictable consequence of the scientific community’s winking at the practice of senior scientists putting their names on junior researchers’ work without getting elbow-deep in the guts of the research themselves.
It’s all too common for a scientific fraud — last year’s Japanese stem-cell meltdown, a 2011 chemistry scandal at Columbia University, the famous materials-science fiasco involving Bell Laboratories’ Jan Hendrik Schön — to feature a young protégé and a well-established scientist. The protégé delivers great results; the stunningly incurious mentor asks no questions.
And, sure, it’s an act of bad faith when a scientist submits false data to a journal; but the scientific publishing industry encourages such behavior through lax standards.
You don’t have to look far to find dramatic failures. Recently, two big-name scientific publishing houses had to withdraw dozens upon dozens of nonsense papers — computer-generated gobbledygook that somehow passed the peer review process. If the process can’t catch such obvious fraud — a hoax the perpetrators probably thought wouldn’t work — it’s no wonder that so many scientists feel emboldened to sneak a plagiarized passage or two past the gatekeepers.
There’s a deeper structural issue: Major peer-reviewed journals tend to accept big, surprising, headline-grabbing results, yet those are precisely the ones most likely to be wrong. Replications (and failed replications) and less-than-spectacular results are incredibly difficult to get published, even though they constitute the true spine of the scientific endeavor. When scientists are rewarded for producing flashy publications at a rapid pace, we shouldn’t be surprised that fraud is occasionally the consequence.
Despite the artful passing of the buck by LaCour’s senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists, even when they take no active part in deception, must recognize that they are ultimately responsible for the culture that produces the steady drip-drip-drip of falsification, exaggeration and outright fabrication now eroding the discipline they serve.
Charles Seife is a professor of journalism at NYU. His most recent book, “Virtual Unreality,” is about deception in the digital world.