It has felt like death by a thousand cuts, but I have survived the brutal and biased peer review process. This is one of the areas in my life where prayer has not spared me from many unfair rejections. This is also one of the areas where my heroic efforts to bring about reform have failed miserably. In spite of all these defeats, I still believe that my efforts were not wasted; eventually, they will contribute to the development of a fairer and more responsible peer review process.
When I started writing this chapter, my mind was flooded with painful memories of being rejected without any justifiable reason. I could have given up on a research career had I not had confidence in my research ability and a strong conviction of the worthiness of my research. Eventually, I was able to get the rejected papers published elsewhere, but the process of getting back on my feet and revising my papers was painful.
In the final analysis, I feel grateful for this painful experience for two reasons: Each revision makes the paper stronger, and each experience of unreasonable abuse from reviewers teaches me what not to do in peer review. These lessons make me a better reviewer.
In the 80s, I even took the trouble of writing to several APA Presidents and Chairpersons of APA's Publication Committee, urging them to start systematic research on the review process in the APA journals in order to improve their flawed review system. I even presented a summary of research evidence on publication biases (e.g., Smith, 2006; Wikipedia) and recommended some concrete steps to make the system fairer.
But such efforts did not yield any results. The most I ever received was a brief, polite reply stating that the APA was improving its peer review process. In fact, the only change introduced was the blind review system, but such a system is not blind to the editor, who is not immune to the human foible of bias.
I hope that my story may be a source of encouragement to all those who have been wounded by the flawed peer review process. We need to continue the good fight to replace “the great wall” that protects entrenched special interest groups with a transparent wall that protects scientific integrity.
Ebbs and Flows in Psychology
Timing is everything in getting things published. If you do the wrong kind of research at the wrong time, you can have a hard time getting published. On the other hand, if you do fashionable research and belong to the right "club," you can get published without much difficulty. Fashions and fads in psychology are probably the single most biasing factor, regardless of the quality of the research.
When I started my graduate training, learning theory still reigned supreme, but it was on its last legs, as the rising tide of the cognitive revolution would soon take over the psychological landscape. In the late 60s, however, most journal editors were in some way connected with Clark Hull or Kenneth Spence. Abraham Amsel, one of the most prominent students of Spence, was still very influential, and research papers coming out of his lab rarely got rejected. For example, my first publication with Amsel (Amsel, Wong, & Traupmann, 1971) was accepted by the Journal of Experimental Psychology (JEP), a prestigious APA journal, without any changes, even though it was just part of a larger study I did with Ken Traupmann, a postdoc with Amsel.
Our fortunes began to nosedive when Allan Wagner, another prominent student of Spence, took over the editorship of the JEP. As a new editorial policy, he would not publish instrumental learning studies, favouring classical conditioning instead. As a result, those from Amsel's lab, including Amsel himself, could no longer publish in the JEP. Even my paper reporting a revolutionary approach to instrumental learning, "A Behavioral Field Approach to Instrumental Learning" (Wong, 1977), was rejected; it was later published as a leading article in Animal Learning & Behavior. Several years later, Amsel informed me that Allan Wagner had written him, confessing his regret over the mistake of not publishing anything from Amsel's lab.
When attribution and social cognition took off as a new field, it was relatively easy to get research in this area published. After a flourish of publications, however, journals became saturated with attribution studies, and it then became increasingly difficult to publish in the area.
At present, positive psychology still dominates the field, even though it is under increasing scrutiny (Brown, Lomas, & Eiroa-Orosa, 2017; Wong & Roy, 2017). I hope that the days of second wave positive psychology (PP 2.0) will soon arrive; this integrative, big-tent approach will benefit all researchers. To some extent, one's publication success depends on catching the wave at the right time. My predicament is that I have been swimming against the tide by choice, believing that one day the tide will turn my way. I don't recommend this risky and costly path for young researchers.
Arbitrary Editorial Decisions
More than once, I have had a paper rejected by an editor even when all the reviewers recommended conditional acceptance or re-submission with revision. My most cited paper, "When People Ask 'Why' Questions and the Heuristic of Attributional Search" (Wong & Weiner, 1981), was rejected by the action editor for the simple reason that my studies were based on self-reports to hypothetical "scenarios" rather than behavioral manipulations, even though at that time almost all experimental attribution studies were based on the manipulation of scenarios. I had to add a behavioral study and submit it as a new paper, along with a complaint to the editor-in-chief. It was unusual but gratifying that the action editor later wrote to me expressing regret for his error of judgment.
Sometimes, such editorial decisions can work in your favour. For example, my other high-impact paper, on reminiscence (Wong & Watt, 1991), was rejected because all three reviewers asked for a validity test of my classification. I wrote a long rebuttal, arguing that my quantitative results and the existential categories in the literature provided the validity test. I was pleased that the editor of this APA journal was convinced by my rebuttal and published the paper, because it settled a decade-long debate over whether reminiscence was beneficial to the elderly.
In another article, I outlined a number of implicit editorial policies that may threaten the integrity of the publication process (Wong, 1981). The most common editorial bias is to focus on the positives for the "in group" and on the negatives for the "out group," which results in rejecting good research and publishing bad research.
Institutional Bias Against Small Universities
During my five years of serving on the NIMH Panel of Biological and Behavioral Sciences, I observed first-hand how the peer review system favors grant applications from elite universities and big names, even when the same lab submits multiple applications written by post-docs. Time and time again, I found myself having to make the case that some of the applications from big names had serious flaws and that some of the applications from small universities deserved support.
For me personally, the most blatant example of institutional bias came from Canada's Social Sciences and Humanities Research Council (SSHRC) and its aid to academic conferences. Time and time again, I applied for conference support for the INPM's Biennial International Meaning Conference; each time, I received a rejection letter with exactly the same reasons: (a) the conference lacked academic significance, and (b) it did not have enough student involvement.
In my appeal, I documented that for each Meaning Conference, the themes were widely recognized in psychology as academically and socially significant, and all the keynote speakers were the most cited experts in their research areas; in addition, I provided statistics on extensive student involvement, ranging from conference planning to presentation opportunities, scholarships, and student volunteering. In contrast, the major universities received almost all the conference grants, even for esoteric themes such as a conference on the street names of Paris.
I appealed to the top person responsible at the SSHRC; his response was, "It is a matter of your opinion against the reviewers' opinions. I have no reason to believe in your opinion." I strongly suspect that this reply was written by a staff member, because a senior administrator should know the difference between personal opinion and objective evidence. I am glad that the conference support program has recently been cancelled, probably because of too many complaints.
It is gratifying that, over the years, I have been asked to serve as a reviewer by most of the major granting agencies around the world and most of the major psychology journals. As a result, I have gained a great deal of experience as a reviewer. I am pleased that in the last three years, Publons has recognized me as one of its top reviewers in terms of the number of reviews conducted. Recently, I have also had the privilege of mentoring young researchers to be good reviewers, both personally and through the Publons Academy.
It is my belief that all researchers should have some training in peer review so that they can avoid common mistakes both in their own writing and in reviewing other people's work. I am grateful that Amsel trained me to review journal submissions when I was a graduate student. This training sharpened my critical thinking and improved my discernment of good research. My biggest mistake as a reviewer in those graduate school days was that I was too harsh, especially taking delight in finding fault with "big names."
I am still fighting my battle against bad peer review, but my focus has shifted to training others to be fair and competent referees. I hope that one day most reviews will be done by professional reviewers who have the necessary expertise but no vested interest in promoting one particular area or point of view. I also hope that there will be an ombudsman to whom researchers can complain about unfair editorial decisions. These ideals may never be fulfilled, but at least I can contribute to the improvement of the referee process by training young reviewers.
Peer review is at the heart of research funding, research dissemination, and progress in psychology. A deficient peer review system will contribute to the replication crisis and perpetuation of “untruth.” A peer review system based on the “old boys’ network” will only serve to protect the status quo and entrenched special interest groups. An improved peer review system will not only protect the integrity of scientific research but also contribute to the development of new talents and creative ideas.
What is a “Wow!” to one reviewer may be a “So what?” to another. Completely fair evaluation may remain an ideal for a long time, but at least we should tackle the problem of peer review biases head on. Furthermore, if such biases are unavoidable, let us err in favor of the budding researchers, minorities, and trail-blazers rather than the privileged elite; my guide on “How to Write a Good Manuscript Review” was written with this objective in mind.
For all of its shortcomings, peer review remains the best mechanism available to evaluate scientific papers. The recent practice of post-publication peer review provides additional opportunities for peers to evaluate the quality of already published papers and to correct publication errors.
Whenever a school of psychology is dominant, be it learning theory or positive psychology, there will be an invisible and impenetrable “great wall” to protect its power and privilege through the biased use of peer review.
I believe that the self-correcting process of scientific research will continue, and that scientific revolutions (Kuhn, 1996) will march on in spite of such biases. However, even though scientific research is inherently robust and self-correcting, it is still desirable to reduce review biases in order to make the system fairer for beginners and minorities and to facilitate scientific progress and diversity in creative ideas.
As a constantly defeated warrior, I have to admit defeat in my lifelong struggle for justice and fairness in peer review. But I remain optimistic that, in an era of open science, the peer review system will eventually open up to scrutiny by the scientific community.
- Amsel, A., Wong, P. T. P., & Traupmann, K. L. (1971). Short-term and long-term factors in extinction and durable persistence. Journal of Experimental Psychology, 90(1), 90-95.
- Brown, N. J. L., Lomas, T., & Eiroa-Orosa, F. J. (Eds.). (2017). The Routledge international handbook of critical positive psychology. New York, NY: Routledge.
- Gelman, A. (2016, September 22). Why is the scientific replication crisis centered on psychology? Statistical Modeling, Causal Inference, and Social Science. Retrieved from http://andrewgelman.com/2016/09/22/why-is-the-scientific-replication-crisis-centered-on-psychology/
- Kuhn, T. S. (1996). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
- Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the Association for Information Science and Technology, 64(1), 2-17.
- Murray, D. L., Morris, D., Lavoie, C., Leavitt, P. R., MacIsaac, H., Masson, M. E., & Villard, M. A. (2016). Bias in research grant evaluation has dire consequences for small universities. PLOS ONE, 11(6), e0155876. Retrieved from http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0155876
- Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178-182. doi:10.1258/jrsm.99.4.178
- Viner, N., Powell, P., & Green, R. (2004). Institutionalized biases in the award of research grants: A preliminary analysis revisiting the principle of accumulative advantage. Research Policy, 33(3), 443-454.
- Wong, P. T. P. (1977). A behavioral field approach to instrumental learning in the rat: I. Partial reinforcement effects and sex differences. Animal Learning & Behavior, 5(1), 5-13. Retrieved from https://link.springer.com/content/pdf/10.3758/BF03209123.pdf
- Wong, P. T. P. (1981). Implicit editorial policies and the integrity of psychology as an empirical science. American Psychologist, 36(6), 690-691.
- Wong, P. T. P., & Roy, S. (2017). Critique of positive psychology and positive interventions. In N. J. L. Brown, T. Lomas, & F. J. Eiroa-Orosa (Eds.), The Routledge international handbook of critical positive psychology. London, UK: Routledge.
- Wong, P. T. P., & Watt, L. (1991). What types of reminiscence are associated with successful aging? Psychology and Aging, 6(2), 272-279.
- Wong, P. T. P., & Weiner, B. (1981). When people ask “Why” questions and the heuristic of attributional search. Journal of Personality and Social Psychology, 40(4), 650-663.