There are a few things we all know: people eat more if their food is served on larger plates, or if they’re watching something exciting like an action movie. If you go grocery shopping while hungry, you’ll end up with more calories in your cart than if you’d shopped on a full stomach. Children can be encouraged to make better food choices if healthier foods are made more appealing, perhaps by giving them catchy names like “power peas” or decorating them with stickers. There’s just one problem with all these “facts”: they may not be true. All of these conclusions were drawn from research conducted by Professor Brian Wansink of the Food and Brand Laboratory at Cornell University. In what the website Retraction Watch called one of the “Top Ten Retractions of 2018”, Dr. Wansink had 17 of his papers retracted from academic journals in 2018, including 6 in a single day from the Journal of the American Medical Association (JAMA) Network. Why were these and other papers authored by Dr. Wansink withdrawn from publication? Because after investigations by Cornell or by the publishing journals, the validity of the research underlying their conclusions could not be confirmed.
Dr. Wansink’s research began to draw scrutiny in November 2016, when he published a blog post describing the work done by two young researchers in his lab. One post-doctoral fellow declined to analyze two data sets and left the lab after authoring only a single publication; a second, unpaid graduate student analyzed a “failed study with null results” and used it to publish four papers within a year. Readers quickly pointed out that Wansink had encouraged the graduate student to keep running analyses until she found some statistically significant results.

This practice, called “p-hacking”, inverts the scientific method. Instead of developing a hypothesis, conducting an experiment, and analyzing the resulting data, Wansink suggested that the student start with whatever results appeared important or interesting – say, that people sitting closer to an all-you-can-eat buffet ate significantly more than people sitting farther away – and write a paper around them. The problem with p-hacking is that every dataset, if analyzed in enough ways, will contain some “significant” results by chance alone. Starting with these results and working backwards to find an explanation for them creates a false narrative of how they came to be – and a false impression that they have any meaning at all.

After the blog post was published, other researchers began reviewing Wansink’s work; they found that many of his publications contained numeric and statistical errors, inconsistent sample sizes, and self-plagiarism (in which text from a previous publication was recycled in a later one without attribution). After Cornell conducted its own investigation and concluded that Wansink had committed academic misconduct, he announced that he would resign at the end of the 2018-2019 school year. Some commentators have written that the Wansink saga illustrates the “publish or perish” pressures of academia, particularly the need to produce results that can easily “go viral” with a broader, non-scientific audience.
Others felt it was a warning to be more skeptical of research that confirms our own biases. Still others felt that criticism of his work focused too narrowly on p-hacking, and not enough on the importance of solid theory, data collection, and measurement in conducting research. Personally, I think there should be more discussion of the pressures placed on students and young researchers in academia, and whether the graduate student in Wansink’s blog post – who was visiting from another country – felt that she could say no when he asked her to analyze the “null” data. Perversely, Wansink’s academic work may still be useful, if only to illustrate how much we still don’t know about how to produce the best research on food, eating, and behavior.
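The statistical trap behind p-hacking is easy to demonstrate for yourself. The sketch below is purely illustrative and has nothing to do with Wansink’s actual data: it generates two groups of pure noise (no real effect exists), then tests twenty made-up “outcomes” between them. With a significance threshold of 0.05, roughly one in twenty null comparisons will come out “significant” by chance – so a researcher who tests enough outcomes and reports only the hits will almost always find something to write up.

```python
# Illustrative simulation of p-hacking (invented numbers, no real data):
# with NO true effect, testing many outcomes yields false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 20   # hypothetical diners per group
n_outcomes = 20    # hypothetical outcomes measured per diner

# Pure noise: both groups are drawn from the same distribution,
# so no outcome genuinely differs between them.
group_a = rng.normal(size=(n_per_group, n_outcomes))
group_b = rng.normal(size=(n_per_group, n_outcomes))

# Run an independent-samples t-test for each outcome.
p_values = [
    stats.ttest_ind(group_a[:, k], group_b[:, k]).pvalue
    for k in range(n_outcomes)
]

significant = [p for p in p_values if p < 0.05]
print(f"{len(significant)} of {n_outcomes} null comparisons were 'significant'")
# At alpha = 0.05 we expect about 0.05 * 20 = 1 false positive on average.
```

Reporting only the comparisons that cleared the threshold, while staying silent about the other nineteen, is exactly the backwards workflow described above.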