“A Post Mortem on the Gino Case”: “Committing fraud is, right now, a viable career strategy that can propel you at the top of the academic world.”

Statistical Modeling, Causal Inference, and Social Science 2025-03-08

Zoe Ziani, a psychology researcher who several years ago, as a Ph.D. student, had the misfortune to be tasked with following up on some unreliable published psychology research, tells the story of how her department pulled the chair out from under her:

I [Ziani] started having doubts about . . . (Casciaro, Gino, and Kouchaki, “The Contaminating Effect of Building Instrumental Ties: How Networking Can Make Us Feel Dirty”, ASQ, 2014; hereafter abbreviated as “CGK 2014”) during my PhD. At the time, I was working on the topic of networking behaviors, and this paper is a cornerstone of the literature.

I formed the opinion that I shouldn’t use this paper as a building block in my research. Indeed, the idea that people would feel “physically dirty” when networking did not seem very plausible, and I knew that many results in Management and Psychology published around this time had been obtained through researchers’ degrees of freedom. However, my advisor had a different view: The paper had been published in a top management journal by three prominent scholars . . . To her, it was inconceivable to simply disregard this paper. . . .

We’ve called this the research incumbency rule: once a paper has been published, people act like it’s correct. There’s a high threshold for post-publication criticism.

Ziani continues:

At the end of my third year into the program . . . I finally decided to openly share with her my concerns about the paper. . . . Her reaction was to vehemently dismiss my concerns, and to imply that I was making very serious accusations.

This sort of thing annoys me a lot: the personalization of scientific discourse. I don’t like the “scientist as hero” narrative, and I also don’t like the assumption that, just because someone does bad science, they’re a cheater (or, equivalently, the assumption that, just because someone doesn’t cheat in science, their work is solid). Remember, honesty and transparency are not enuf.

By implying that Ziani “was making very serious accusations,” the adviser was doing a classic judo move, taking a technical, scientific criticism and turning it into something personal. It’s a sort of reverse ad-hominem argument that I think has no place in scientific discussion. It’s the last refuge of the scientific scoundrel.

Ziani continues:

It was at this point that I started suspecting that part of the evidence presented in CGK 2014 was not just p-hacked but based on fabricated data. At the time, I wasn’t clear how warranted these suspicions were, or about the best way to share them . . .

What I knew, however, was that I had accumulated enough theoretical and empirical arguments to seriously question the conclusions of CGK 2014, and that these arguments might be of interest to the scientific community. Indeed, CGK 2014 is an unavoidable building block for anyone studying networking behavior: It is authored by influential scholars, published in a prestigious journal, received the Outstanding Publication Award in OB at the 2015 Academy of Management annual meeting for its “significant contribution to the advancement of the field of organizational behavior”.

Ding ding ding! Harvard! Awards! Prestige! Next step, TED and NPR.

Ziani continues:

I, a (very) early-career researcher, took a deep dive into a famous paper and discovered inconsistencies. . . . The three members of my committee (who oversaw the content of my dissertation) were very upset by this criticism. They never engaged with the content: Instead, they repeatedly suggested that a scientific criticism of a published paper had no place in a dissertation. . . . After the defense, two members of the committee made it clear they would not sign off on my dissertation until I removed all traces of my criticism of CGK 2014. Neither commented on the content of my criticism. Instead, one committee member implied that a criticism is fundamentally incompatible with the professional norms of academic research. . . . adopting what he called a “self-righteous posture” that was “not appropriate.”

This rings a few bells. Opposition to criticism, tone policing, and a go-along-get-along attitude that encourages cynical complacency and discourages “self-righteousness.”

Ziani tells more of the story:

I ran a replication of Study 1 of CGK 2014 using the authors’ original materials. Not only did I fail to replicate the original result, but I also found serious anomalies when comparing the data of my replication to the data of the original.

This often happens, including with legit studies. Look into almost any report carefully and you’ll find some data-related errors, sometimes fairly minor and sometimes fatal to the main conclusions being made. Again, though, it’s not just the extreme Wansink or LaCour-level examples. Even solid studies typically have enough data rattling around that mistakes creep in, and the more carefully you look, the more mistakes you’ll typically find. This is not to excuse any misconduct in CGK 2014; it’s just a more general statement that data problems are to be expected.

And now on to Ziani’s conclusions about science:

From a truth-finding perspective, p-hacking is as damaging as fraud. . . . In a world in which ridiculous effects can be shown to “exist” thanks to p-hacking . . . how does one identify fraudulent findings? P-hacked effects also provide the implausible theoretical foundations on which fraudulent findings are built. . . .

Think about all the people who try to replicate, extend, or build upon these false positives. [I wouldn’t use the word “false positives,” as I don’t buy into the framework that effects are real or not, but I agree with the general point here. — ed.] Any resource spent trying to extend or replicate fake research is a resource that isn’t spent discovering real findings. . . . When a subset of scientists can reliably produce incredible effects (because they cut corners), and publish hundreds of papers, they set a bar that serious, careful researchers can never hope to meet.

I agree; this is similar to points we’ve made about Clarke’s and Gresham’s Laws as applied to science.

Finally, Ziani has some specific comments regarding business schools:

The incentives for fraud in business academia are significant. If you can meet the standards for hiring, promotion, and tenure at an R1 university (something that is much easier once you fabricate your data), you will get:

– A 6-figure salary with full benefits until you retire
– Complete job security
– A flexible work environment (no boss, remote work…)
– The social status and reputational benefits that go with the “Professor/Dr.” title
– Opportunities to do book deals, TED talks, to teach in executive education, to conduct corporate workshops…

The benefits of fraud must be balanced with the risks, of course. Are the risks of being caught for faking data high enough? I [Ziani] don’t think so:

The peer review process, as it exists today, makes it extremely difficult to catch fraud. . . .

The bar to accuse someone of fraud is extremely high. Failing to replicate the effect? Not enough. Nonsensical effect sizes? Not enough. Anomalies in data? Not enough. Unless you can invest the resources to identify anomalous patterns of fraud across multiple papers, THEN drum up enough support from journals or universities to consider your suspicions, THEN hold their feet to the fire when they are unwilling to act… the probability that the person will never face consequences for fabricating data is very high.

The incentives to investigate and call out fraud are non-existent. In fact, the opposite is true: If you find something fishy in a paper, your mentor, colleagues, and friends will most likely suggest that you keep quiet and move on (or, as I have learned the hard way, they might even try to bully you into silence). If you are crazy enough to ignore this advice, you are facing a Sisyphean task: Emailing authors to share data (which they do not want to), begging universities to investigate (which they do not want to), convincing journals to retract (which they do not want to), waiting months or years for them to share their findings with the public (if it ever happens)…

In summary:

Business academia needs to reckon with this inconvenient truth: Committing fraud is, right now, a viable career strategy that can propel you at the top of the academic world.

Not just business, and not just academia.