Samelson (July 1980) illustrates once again the shaky empirical foundation of psychology and reaffirms the need for critical analysis and replication. What should worry every psychologist is that the Watson and Burt episodes may not be isolated cases. Samelson warns, “To shrug them off as unfortunate accidents of the past seems too easy and ignores their impact on the world beyond the scientific discipline” (p. 623).

To provide some basis for the above concern, I must make the painful disclosure that in the past few years, in numerous experiments, I have failed to replicate some of the widely accepted findings enshrined in psychology textbooks and have also learned of several other investigators with similar experiences. This is not the place to report all my failures in replication, but I do believe that it is just a matter of time before more episodes similar to those of Watson and Burt come to light, further corroding the empirical foundation of psychology.

I have repeatedly asked myself these perplexing and distressing questions: Why is it so difficult to replicate findings in psychology? Who is responsible for the failure of internal criticism and replication? What can be done to safeguard the integrity of psychology as an empirical science? Perhaps the first step toward answering these questions is to examine critically the social process of the psychological research enterprise, as Samelson suggests.

Since editors of major journals play a key role in the social process of psychology, their explicit as well as implicit editorial policies may have contributed to the present crisis of integrity that psychology faces. Samelson implicates a lack of internal criticism and editorial policies that discourage negative criticism. It could be argued, however, that an overly critical and negative editorial policy also contributes to this crisis. An intricate set of social factors may predispose an editor to adopt either a very critical or a less than critical editorial policy. Since both of these extremes are seldom explicitly stated, their detrimental effects on psychology typically go undetected; this is especially true of stringent internal criticism. Here, I present the case that an overly critical editorial policy does not serve psychology well. I also attempt to identify some of the factors that may be responsible for both kinds of harmful implicit editorial policies.

Editors of major journals explicitly demand a high standard for publication. This is a laudable policy. Unfortunately, a high standard is often implicitly interpreted as perfection. An editor’s demand for perfect and flawless research may bias him or her to gloss over the strengths of a manuscript and focus on faultfinding. Unlike the physical sciences, many areas of psychological research do not have widely accepted standard paradigms and procedures. Since perfection is in the eyes of the beholder, it is not difficult to find fault with procedural details. A manuscript that deals with an important issue and reports significant findings may be rejected purely on procedural grounds. A person with a highly critical and skeptical eye may even “find bones in a piece of bean cake,” as the Chinese saying goes. If the editor can find or concoct a long enough list of “flaws” in the procedure to cast doubt on the findings, he or she can easily reach a decision of rejection.

In order not to run the risk of being pinned down on certain specifics of procedures, researchers soon learn that it is far safer to make the description of procedure brief and cryptic. A more questionable reporting practice is the omission of procedural details that have been unfairly criticized. If one does not want to see a few years of solid research effort go down the drain simply because of a few minor procedural details, it becomes very tempting to omit the points of contention in resubmission to another journal. Both reporting practices necessarily contribute to the difficulty of replication by other investigators.

The unreasonable demand for perfection often also means that inconsistent or negative findings are looked upon with disfavor. To increase the likelihood of publication, the best strategy is to report selectively only those portions of the data protocol that are consistent and significant. Such “data fudging” may be more commonplace than is suspected. Field research could establish how widespread these questionable reporting practices are and whether they are motivated by a perception of editorial policy as demanding perfection and focusing on faultfinding.

The tendency to be unduly critical can vary from individual to individual. However, there are certain social and attitudinal factors that may predispose a person to insist on perfection with a vengeance. For example, one is likely to be unduly critical when one does not approve of certain areas or certain paradigms of research. This negative attitude may be determined by one’s own biases or by one’s subjective reading of the tides of the times. Further, one is likely to be unreasonably critical when a manuscript questions one’s own theory or findings. For instance, Samelson writes that “part of the irony of this issue is that our two protagonists (and their followers) were very insistent on the objectivity and solidity of their science, as against the muddleheadedness of some of their opponents” (p. 623). Such ego-defensiveness, reinforced by an editorial policy that is generally biased against the publication of negative findings, has contributed to the perpetuation of the “half-truths” and “untruths” that have found their way into the literature.

It is my plea that editors and reviewers soften their insistence on perfection in paradigms or procedures. Their obsession with faultfinding may not only have discouraged many talented investigators from further research but also encouraged various questionable practices in reporting. I also urge editors and reviewers to guard against those unscholarly factors that may predispose them to adopt an unreasonably critical stance in reviewing manuscripts.

I now turn to Samelson’s concern with the failure of internal criticism. Curiously, the same editor I have just described, under a different set of conditions, may be guilty of not being critical enough. This double standard is closely linked to the social process of the research enterprise. Here, I discuss only the problem of faddism, which is best described in the APA Publication Manual: “The field of psychology is occasionally marked by surges of enthusiasm for a new scale or technique. Publication begets publication until, inexplicably, enthusiasm wanes and the tidal wave subsides” (APA, 1974, p. 22).

The “explosion” of publication that follows the discovery of a new technique or phenomenon may be due to the widespread interest it generates among investigators. However, it is also likely that in the tidal wave of enthusiasm, editors and reviewers alike may forget to ask hard questions and look only for merits, to the neglect of flaws, in reviewing manuscripts that support the new discovery. Easy and frequent publication undoubtedly encourages more people to join the “hit parade” and begets more publication. Critical voices raised by a few individuals are either muffled or ignored. The psychological community regains its critical faculty only when the initial wave of excitement has tapered off. The connection between faddism and the failure of internal criticism could be and should be thoroughly documented by historians of psychology. Samelson’s article has already provided some rather convincing evidence.

Zeitgeist, the social network of researchers, funding priorities, and the inherent appeal of a discovery all contribute to faddism. As a social movement, faddism may even determine the powers that be. When the leaders of the movement are in control of the major journals, the safeguard of internal criticism is in jeopardy. Failure of this important safeguard encourages slipshod or even fraudulent research. But there are no easy solutions to the problem of faddism. Exposing its potential dangers, as Samelson attempts to do, may help to cure psychology of its incessant infatuation with fads and fashions.

I have described two kinds of implicit editorial policies and their potentially damaging effects on psychology. As an empiricist, I suggest that given the same manuscript, one could demonstrate that the policy of focusing on faultfinding increases the likelihood of its rejection, whereas the policy of focusing on merits increases the likelihood of its acceptance. It might also be possible to establish empirically how prevalent these undesirable practices are and whether the adoption of either editorial policy is influenced by the various social factors described earlier.

Since these implicit editorial policies are enmeshed in the social process of psychology, we should heed Samelson’s call for greater understanding of this process. The future of psychology may depend on it.

References

  1. American Psychological Association. (1974). Publication manual (2nd ed.). Washington, DC: Author.
  2. Samelson, F. (1980). J. B. Watson’s little Albert, Cyril Burt’s twins, and the need for a critical science. American Psychologist, 35, 619-625.


Published as Wong, P. T. (1981). Implicit editorial policies and the integrity of psychology as an empirical science. American Psychologist, 36(6), 690-691.