Parsing the Facebook Study's Authorship and Review
The Laboratorium 2014-07-01
I have been thinking a lot about the mechanics of how the Facebook emotional manipulation study was conducted, reviewed, and accepted for publication. I have found it helpful to gather in one place the various claims about who did what and about what forms of review the study received. I have bolded the relevant language.
What did the authors do?
PNAS authorship policy:
Authorship must be limited to those who have **contributed substantially** to the work. …
**All collaborators share some degree of responsibility for any paper they coauthor.** Some coauthors have responsibility for the entire paper as an accurate, verifiable report of the research. These include coauthors who are accountable for the integrity of the data reported in the paper, carry out the analysis, write the manuscript, present major findings at conferences, or provide scientific leadership to junior colleagues. Coauthors who make specific, limited contributions to a paper are responsible for their contributions but may have only limited responsibility for other results. While not all coauthors may be familiar with all aspects of the research presented in their paper, all collaborators should have in place an appropriate process for reviewing the accuracy of the reported results. **Authors must indicate their specific contributions to the published work.** This information will be published as a footnote to the paper. Examples of designations include:
- Designed research
- Performed research
- Contributed new reagents or analytic tools
- Analyzed data
- Wrote the paper
An author may list more than one contribution, and more than one author may have contributed to the same aspect of the work.
From the paper:
Author contributions: A.D.I.K., J.E.G., and J.T.H. designed research; **A.D.I.K. performed research; A.D.I.K. analyzed data**; and A.D.I.K., J.E.G., and J.T.H. wrote the paper.
Cornell press release:
… According to a new study by social scientists at Cornell, the University of California, San Francisco (UCSF), and Facebook, emotions can spread among users of online social networks.
**The researchers reduced** the amount of either positive or negative stories that appeared in the news feed of 689,003 randomly selected Facebook users, and found that the so-called “emotional contagion” effect worked both ways.
“People who had positive content experimentally reduced on their Facebook news feed, for one week, used more negative words in their status updates,” reports Jeff Hancock, professor of communication at Cornell’s College of Agriculture and Life Sciences and co-director of its Social Media Lab. …
Cornell's statement:
Cornell University Professor of Communication and Information Science Jeffrey Hancock and Jamie Guillory, a Cornell doctoral student at the time (now at University of California San Francisco) **analyzed results from previously conducted research by Facebook** into emotional contagion among its users. **Professor Hancock and Dr. Guillory did not participate in data collection and did not have access to user data.** Their work was limited to initial discussions, analyzing the research results and working with colleagues from Facebook to prepare the peer-reviewed paper “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks,” published online June 2 in Proceedings of the National Academy of Science-Social Science.
Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any data at any time – Cornell University’s Institutional Review Board concluded that **he was not directly engaged in human research** and that **no review by the Cornell Human Research Protection Program was required**.
Adam Kramer’s statement for Facebook:
OK so. A lot of people have asked me about my and Jamie and Jeff’s recent study published in PNAS, and I wanted to give a brief public explanation. …
Regarding methodology, **our research** sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). … And we found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.
What did the IRB do?
PNAS IRB review policy:
Research involving Human and Animal Participants and Clinical Trials **must have been approved by the author’s institutional review board**. … Authors must include in the Methods section a brief statement identifying the institutional and/or licensing committee approving the experiments. For experiments involving human participants, authors must also include **a statement confirming that informed consent was obtained from all participants**. All experiments must have been conducted according to the principles expressed in the Declaration of Helsinki.
Susan Fiske’s email to Matt Pearce:
I was concerned about this ethical issue as well, but the authors indicated that **their university IRB had approved the study**, on the grounds that Facebook filters user news feeds all the time, per the user agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook’s systematic interventions. **The Cornell IRB considered it a pre-existing dataset** because Facebook continually creates these interventions, as allowed by the user agreement.
Having chaired an IRB for a decade and having written on human subjects research ethics, I judged that **PNAS should not second-guess the relevant IRB**.
I regret not insisting that the authors insert their IRB approval in the body of the paper, but **we did check that they had it**.
Fiske’s email to Adrienne LaFrance:
Their revision letter said they had **Cornell IRB approval as a “pre-existing dataset”** presumably from FB, who seems to have reviewed it as well in some unspecified way. (I know University regulations for human subjects, but not FB’s.) So maybe both are true.
Cornell’s statement (again):
Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any data at any time – Cornell University’s Institutional Review Board concluded that **he was not directly engaged in human research** and that **no review by the Cornell Human Research Protection Program was required**.
Kramer’s statement (again):
While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on **improving our internal review practices**. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.