

Why Authors Believe That Reviewers Stress Limiting Aspects of Manuscripts: The SLAM Effect in Peer Review¹

PAUL A. M. VAN LANGE²
Department of Social Psychology
Free University, Amsterdam, The Netherlands

This manuscript describes a preliminary study examining judgments of authors and reviewers regarding manuscripts that have been either accepted or rejected for publication. Consistent with hypotheses, results reveal that participants believe that their own manuscripts are superior to others' manuscripts in terms of general, theoretical, and methodological quality. Relevant to the presumed tendency among reviewers to stress limiting aspects of manuscripts (SLAM), reviewers exhibited greater agreement with editorial decisions favoring rejection, relative to those favoring acceptance. These findings suggest that authors' beliefs in reviewers' tendencies to SLAM can be partially understood in terms of authors' unrealistically favorable and optimistic beliefs regarding their manuscripts and in reviewers' actual tendencies to be quite critical, at least more critical than editors.

Virtually all sciences rely on the peer-review system, a practice that has been discussed by various scientists. While there is a good deal of agreement among scientists of different disciplines regarding the overall utility of the peer-review system, few (if any) believe that the peer-review system is without limitations (e.g., Laband & Piette, 1994; Peters & Ceci, 1985).

Recently, Epstein (1995) fostered further debate regarding the peer-review system, noting that reviewers tend to place particular emphasis on (often seemingly correctable) imperfections and limitations, rather than on the innovative aspects of a manuscript. He summarized a discussion with fellow scientists (several of whom were associate editors who had many publication credits in APA

¹This research was supported in part by a grant from the Netherlands Organization for Scientific Research (NWO; R-57-178). I thank the organizers of two conferences, Chuck Stangor and Wolfgang Wagner, for their help in conducting this research; Wilma Otten and two anonymous reviewers for their constructive comments on an earlier draft of this manuscript; and all of the social psychologists who generously participated in this research.

²Correspondence concerning this article should be addressed to Paul Van Lange, Department of Social Psychology, Free University at Amsterdam, Van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands. E-mail: pam.van.lange@psy.vu.nl

Journal of Applied Social Psychology, 1999, 29, 12, pp. 2550-2566. Copyright © 1999 by V. H. Winston & Son, Inc. All rights reserved.


journals and who did a considerable amount of reviewing themselves) by noting that "I was impressed by the widespread discontent with the journal review process in this select group. Some of the terms people used to describe the reviews were arbitrary, biased, self-serving, irresponsible, and arrogant" (p. 884).

Epstein (1995) further illustrated such biases among reviewers by suggesting that reviewers frequently exhibit an "I gotcha" mentality, a frame of mind in which reviewers are tempted to recommend rejection on the basis of minor limitations, even in the case of an otherwise excellent manuscript (for similar observations, see Rabinovich, 1996). Are reviewers really that bad? Does this belief in what I refer to as reviewers' tendencies to SLAM (an acronym for stressing limiting aspects of manuscripts) reflect an unbiased, accurate judgment? Or might this belief be colored by authors' tendencies to interpret reviews (and reviewers' inclinations) in ways so as to maintain a favorable view of one's own manuscript?³

While the belief in reviewers' tendencies to SLAM seems to be shared by many researchers, Levenson (1996) pointed out that it is common for authors to engage in rationalization after reading reviews of their manuscripts, particularly those reviews that favor rejection rather than acceptance. He referred to this phenomenon as the sour-grapes hypothesis, stressing the notion that reviewers' tendencies to SLAM are, to some degree, constructed beliefs. Indeed, it is unlikely that reviewers themselves describe their reviews in terms of SLAM. It is even less likely that reviewers themselves describe their reviews as "arbitrary, biased, self-serving, irresponsible, and arrogant" (Epstein, 1995, p. 884), as Epstein's fellow scientists characterized reviews. Instead, it is more likely that reviewers believe that their (negative) appraisals are entirely justified, in light of what are believed to be serious limitations underlying the work they reviewed (e.g., "The theory was poorly developed," "The studies were so poorly designed"; cf. Levenson, 1996). Thus, it is possible that the belief in reviewers' tendencies to SLAM represents an objective reality, a constructed reality, or both.

A Preliminary Study of Author and Reviewer Judgments

In light of the preceding discussion, and given that the accumulation of

knowledge relies to a significant degree on the peer-review system, it becomes important to examine the experiences of authors and reviewers.

³Unfortunately, the acronym SLAM may have some specific connotations that I do not intend to convey. For example, one connotation might be that reviewers are merely interested in providing authors with negative feedback. Of course, I do not use this acronym because of such harsh connotations, which in my view provide a very poor and inaccurate description of reviewers' motivations. Nevertheless, I believe that this acronym captures the global meaning that I wish to convey: (authors believing that) reviewers stress limitations more than they do strengths. I thank an anonymous reviewer for suggesting this acronym.


The present manuscript describes a preliminary study, designed to enhance our understanding of (a) how authors judge the quality of their own manuscripts that have been either accepted or rejected for publication, (b) how such judgments relate to judgments of others' manuscripts that have been either accepted or rejected for publication, and (c) the extent to which authors and reviewers agree or disagree with editorial decisions favoring acceptance and rejection. Specifically, this research asks several social psychologists to think of the most recent manuscript that they submitted (vs. reviewed) and that has been accepted (vs. rejected) for publication. Thereafter, they are asked to judge the general quality, theoretical quality, and methodological quality of the manuscript and to indicate how much they agree with the editorial decision.

Our framework for understanding experiences of authors and reviewers regarding the peer-review system proposes that evaluations of own and others' work to some extent are socially defined and colored by relatively stable beliefs of superiority, assuming that one's own work is better than and not as bad as others' work. This proposition may be derived from classic and contemporary assumptions underlying theories of social comparison and self-other judgment, which emphasize the social and self-enhancing nature of judgments, particularly in contexts in which perfectly objective standards for judgment are not available (cf. Festinger, 1954; Suls & Wills, 1991; Taylor & Brown, 1988). Accordingly, the perceived quality of own manuscripts may be partially affected by the good and bad features believed to be associated with others' manuscripts (e.g., "This paper is actually quite good, when I look at what is being published these days"). Similarly, the perceived quality of others' manuscripts may be partially affected by the good and bad features believed to be associated with own manuscripts (e.g., "Compared to my own work, this work does not seem to be all that novel").

Congruent with these theoretical approaches, there is considerable evidence that judgments of others are affected by beliefs about the self (and vice versa), and that individuals tend to regard themselves as being better than average on several competence-related attributes (Suls & Wills, 1991; Taylor & Brown, 1988; see also Allison, Messick, & Goethals, 1989; Van Lange & Rusbult, 1995). This evidence suggests that people have developed relatively stable views of the self, frequently referred to as positive self-schemas (Taylor & Brown, 1988), which may be used to interpret, filter, and color new self-relevant information. In part, such information processing is guided by self-enhancement, the motivation to elevate the positivity of one's self-conceptions and to protect one's self-concepts from negative evaluation. Also, such information processing might be guided by self-verification, the motivation to maintain consistency between self-conceptions and new self-relevant information (Swann, 1983, 1990; see also Sedikides & Strube, 1997). Indeed, if people already hold beliefs of self-other superiority, self-enhancement and self-verification are highly complementary mechanisms.


Importantly, given that beliefs of superiority may to some degree serve as an anchor for evaluating others' work, and given that it is not easy to compete with such standards, we as reviewers are likely to take a fairly critical approach to the work of others, thereby stressing the limitations rather than the strengths of the manuscripts we review. For example, consciously or unconsciously, we as reviewers may sometimes be tempted to recommend rejection on the basis of the (implicit but often disputable) belief that the limitations of the work we review are (far) more serious than the limitations that others have stressed regarding our own work.⁴ It is also interesting to note that by communicating experiences regarding they as reviewers and we as reviewers, we are likely to confirm our beliefs of superiority.

Most of our colleagues can probably relate to the feeling that reviewers of our work (particularly work that has been rejected) can be somewhat biased or shortsighted, and to the feeling that we as reviewers every now and then devote time to very poor manuscripts. Accordingly, the belief in reviewers' tendencies to SLAM is likely to represent an objective reality (i.e., on average, reviewers do place particular emphasis on limitations) and a constructed reality (i.e., authors do hold unrealistically favorable beliefs about their own research).

Hypotheses: Beliefs of Superiority and Beliefs in Reviewers' Tendencies to SLAM

On the basis of the preceding lines of reasoning, three hypotheses are

advanced. First, it is predicted that social psychologists will evaluate their own manuscripts more favorably than those of others in terms of general quality, theoretical quality, as well as methodological quality. Second, perceptions of superiority may also to some extent affect the degree to which authors and reviewers agree or disagree with editorial decisions. Accordingly, it is predicted that for manuscripts being accepted, levels of agreement with the editorial decision will be greater for own manuscripts than for others' manuscripts. Conversely, for manuscripts being rejected, levels of agreement with the editorial decision should be lower for own manuscripts than for others' manuscripts.

⁴However, evaluations need not always be socially defined or colored by beliefs of superiority. For example, sometimes it is not difficult to evaluate a manuscript, in that its quality is unequivocally excellent or unequivocally poor. Under such circumstances, our evaluations do not tend to be socially defined, although such review experiences may activate further reasoning that can be understood in terms of beliefs of superiority. For example, an unequivocally poor manuscript may help researchers to maintain a relatively favorable view of their own work (e.g., "Even my worst manuscript is better"; cf. downward comparison, Wills, 1991). And, an unequivocally excellent manuscript may instigate somewhat exaggerated beliefs about the author of this manuscript ("This researcher must be a genius"; cf. Alicke, LoSchiavo, Zerbst, & Zhang, 1997), so that reviewers are still able to believe that their own work is quite good. Thus, evaluations may be socially defined and affected by beliefs of superiority, especially in the important gray area, ranging from not particularly good to quite good manuscripts.


Finally, this research seeks to provide evidence relevant to the notion that the belief in reviewers' tendencies to SLAM to some degree reflects an objective reality. However, it is not easy to provide direct evidence in support of this notion. How can one judge that reviewers are actually too critical? How can one judge that reviewers "overstress" limitations? This issue is approached by examining reviewers' levels of agreement regarding accept versus reject decisions taken by the editor, thus employing the editorial decision as a benchmark for assessing reviewers' tendencies to favor rejection rather than acceptance. Granted, this benchmark is indirect, in that the relationship between perceptions of limitations and recommendations for rejection (while pronounced) is unlikely to be perfect. At the same time, this benchmark does take account of the low acceptance rates, a standard which is likely to inspire some tendency to SLAM among reviewers and editors (cf. Reis & Stiller, 1992). Thus, based on the assumption that the belief in SLAM among reviewers to some degree reflects an objective reality, it is predicted that reviewers will tend to agree more with editorial decisions favoring rejection than with editorial decisions favoring acceptance.

    Method

Participants were recruited at two major conferences of social psychology, including a conference (in Washington, DC, in September 1995) jointly organized by the Society of Experimental Social Psychology (SESP) and the European Association of Experimental Social Psychology (EAESP), and a conference held in Gmunden (Austria, in July 1996) organized by the EAESP. At both conferences, questionnaires were distributed, which participants could complete either during the conference (and drop in a designated box) or after the conference (and mail to the university). Three weeks after each conference, a total of 54 questionnaires was returned (26 questionnaires for the SESP/EAESP joint meeting; 28 questionnaires for the EAESP conference).

In light of the large number of social psychologists who attended these conferences (i.e., both conferences were attended by at least 300 social psychologists), the response rate is, of course, not ideal (even when one takes into account the fact that a fair number attended both conferences). However, an impressive response rate was not anticipated, for two reasons. First, participants of conferences receive a fair amount of material and tend to be quite busy during conferences. Second, many participants indicated that they wanted to participate, but could not do so because they had not yet served as reviewers for a peer-reviewed journal or had not yet published papers. Also, given that participants from two different conferences were recruited, I was able to see whether there might be


any substantial differences as a result of sample or selection differences. Examination of possible differences between the two samples for both principal dependent measures (i.e., quality judgments and level of agreement with the editorial decision) reveals no evidence for any significant main or interaction effects involving type of conference. Thus, there is some (albeit indirect) support for the generality of our findings.

Of the 54 questionnaires returned, 6 could not be used. That is, 1 questionnaire contained several missing values, 2 questionnaires indicated that these participants had not yet submitted papers that were accepted for publication, and 3 questionnaires indicated that these participants had not yet served as reviewers for a peer-reviewed journal. The remaining sample of 48 participants (35 men, 12 women, 1 failed to indicate gender; M age = 42 years) consisted of social psychologists working in various countries, including Austria, Australia, Belgium, Canada, England, France, Germany, Greece, Italy, Israel, the Netherlands, Sweden, Portugal, the United States, New Zealand, and Wales. The sample consisted of 42 professors (17 full professors, 16 associate professors, 9 assistant professors), 3 participants were post-doctoral research fellows, and 3 were graduate students. All of the participants had submitted manuscripts to social psychological or related journals and had served as reviewers for peer-reviewed journals.

Experimental Design

This study employs a 2 x 2 (Author: Own Manuscript vs. Others' Manuscript x Editorial Decision: Accept vs. Reject) between-participants design. The dependent measures include (a) judgments of general quality, theoretical quality, and methodological quality of a submitted paper; and (b) level of agreement with the editorial decision.

Procedure

The first page of the questionnaire, entitled "The Social Psychology of Social Psychologists," describes (a) the purpose of the research in very general terms (i.e., examining experiences of social psychologists), (b) anonymity of participants' responses, and (c) debriefing procedures. To ensure anonymity and to provide the possibility for debriefing, I asked participants to return stickers with their names and addresses in a different box at the registration desk. At the SESP/EAESP joint meeting, the questionnaire consisted of different parts, examining issues such as descriptions of interaction situations at the conference, and evaluations and judgments of "person talk" (e.g., gossip). Because these topics were not central to the current research, they will not be discussed further. At the EAESP conference, the questionnaire examined only judgments of manuscripts.



As noted earlier, there were four conditions, based on the factorial crossing of author (own vs. others' manuscript) and editorial decision (accept vs. reject). In the own-manuscript-accept condition, the instructions read, "For the questions below we would like to ask you to think about the empirical paper that you most recently submitted as primary author to a peer-reviewed journal and that has been accepted for publication." In the others'-manuscript-accept condition, the phrase submitted as primary author to was replaced with reviewed for. In the rejection conditions (i.e., own-manuscript-reject and others'-manuscript-reject conditions), the word accepted was replaced with rejected.

In total, eight questions were asked. These questions focused on evaluations of a paper in its first submitted version so as to yield comparable judgment situations for the two author conditions (own vs. others' manuscript). That is, reviewers typically do not have access to information relevant to the quality of a subsequent revision, whereas authors do. A first submitted version tends to differ in a number of aspects from a revision of this first draft. For example, relative to the first submitted paper, a revision of that paper is associated with improved quality (or at least the perception thereof) and a stronger awareness of the strengths and limitations of the study or studies described. Thus, to avoid such asymmetries in evaluations of own and others' manuscripts, I examined evaluations of the first submitted version.

Five questions focused on judgments of quality. First, this research assessed ratings of general quality ("How would you rate the overall quality of the paper in its first submitted version?") on a 7-point scale ranging from 1 (very poor) to 7 (excellent). Second, two questions assessed ratings of theoretical quality of the paper ("How would you rate the theoretical aspects of the paper in its first submitted version?" was rated on the same 7-point scale as the one for general quality; "To what extent were there theoretical ambiguities in the paper [e.g., logic was not entirely clear; a theoretically relevant link was missing]?" was rated on a 7-point scale ranging from 1 [there were many ambiguities] to 7 [there were no ambiguities]). The correlation between these two ratings was fairly high, r(48) = .69, so the average of these two ratings was used in subsequent analyses. Third, using the same wording and scales as for the assessment of theoretical quality, this research assessed two ratings of the methodological quality of the paper (the illustration for methodological ambiguities was "incomplete information regarding procedure or analyses"). The correlation between these two ratings was fairly high, r(48) = .67, so the average of these two ratings was used in subsequent analyses. A final question assessed degree of agreement with the editorial decision by asking participants to what extent they agreed or disagreed with the editorial decision on a 7-point scale ranging from 1 (I disagree completely) to 7 (I agree completely).
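To make the composite scoring concrete, here is a minimal sketch (simulated ratings rather than the study's data; NumPy and the variable names are illustrative assumptions, not part of the original procedure) of how two correlated 7-point items can be checked and then averaged into a single quality score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings for 48 respondents on the two theoretical-quality items
# (overall theoretical rating, and the ambiguity item scored so that higher
# values mean fewer ambiguities), both on 1-7 scales.
item_rating = rng.integers(1, 8, size=48).astype(float)
item_ambiguity = np.clip(np.round(item_rating + rng.normal(0, 1.2, size=48)), 1, 7)

# Check that the two items hang together before combining them.
r = np.corrcoef(item_rating, item_ambiguity)[0, 1]
print(f"inter-item correlation r = {r:.2f}")  # the paper reports r(48) = .69

# Average the two items into a single theoretical-quality composite.
theoretical_quality = (item_rating + item_ambiguity) / 2
print(theoretical_quality[:5])
```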

I also asked each participant whether he or she would be willing to share the name of the journal that had accepted or rejected the manuscript. The most


frequently listed journals were Journal of Personality and Social Psychology (JPSP; 14 times) and Personality and Social Psychology Bulletin (PSPB; 16 times). In a highly qualitative manner, I explored whether these two titles were about equally distributed over the four conditions. It appeared that JPSP was listed at least twice in each condition, and PSPB was listed at least once in all four conditions. Although this evidence is clearly impressionistic, it did not seem to be the case that the quality of the journals varied substantially across the four conditions.

Results

The analyses proceeded in two stages. To begin with, the hypotheses regarding quality judgments and level of agreement with the editorial decision were

tested using a series of 2 x 2 (Author x Editorial Decision) ANOVAs. However, as the reader will note, the assumption of homogeneity of variance was violated (i.e., standard deviations varied significantly across differing cells). Even when the data are transformed (i.e., square-root transformation or log transformation), the standard deviations among cells were significantly different. Although parametric tests such as ANOVA tend to be fairly robust to violations of the assumption of homogeneity, nonparametric tests are arguably more appropriate. The results of the ANOVA will nevertheless be reported because I regard the reality comprising differences in means and standard deviations to be substantially meaningful. Moreover, ANOVAs focusing on one of the two groups of participants (i.e., joint SESP/EAESP meeting participants and EAESP conference participants) yielded identical findings, but only occasionally revealed a significant violation of the assumption of homogeneity. After describing the results of the ANOVAs, I report the results of a series of loglinear analyses, for which this assumption is irrelevant, thus complementing the ANOVA with statistically more appropriate tests of my hypotheses.
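As an illustration of this analysis strategy, the following sketch uses simulated, hypothetical data (pandas, SciPy, and statsmodels are my choices here, not tools named by the author) to run Levene's test for homogeneity of variance across the four cells, followed by a 2 x 2 between-participants ANOVA with the Author x Editorial Decision interaction.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate 12 quality ratings per cell of the 2 x 2 design; spreads differ
# across cells to mimic the heterogeneity of variance noted in the text.
rows = []
for author in ("own", "others"):
    for decision in ("accept", "reject"):
        sd = 0.8 if (author, decision) in {("own", "accept"), ("others", "reject")} else 1.7
        mean = 5.5 if author == "own" else 4.0
        for q in np.clip(rng.normal(mean, sd, size=12), 1, 7):
            rows.append({"author": author, "decision": decision, "quality": q})
df = pd.DataFrame(rows)

# Levene's test: a significant result flags unequal variances across cells.
cells = [g["quality"].to_numpy() for _, g in df.groupby(["author", "decision"])]
print(stats.levene(*cells))

# 2 x 2 between-participants ANOVA, including the interaction term.
model = smf.ols("quality ~ C(author) * C(decision)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```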

Are Own Manuscripts Perceived to Be Superior to the Manuscripts We Review?

I submitted the three ratings of quality (general, theoretical, and methodological) to a 2 x 2 (Author: Own vs. Others' Manuscripts x Editorial Decision: Accept vs. Reject) MANOVA. This analysis reveals a significant multivariate main effect for author, F(3, 42) = 5.26,


Figure 1. Judgments of quality regarding own and others' manuscripts. (The figure compares own and others' manuscripts on general, theoretical, and methodological quality.)

(Ms = 3.60 vs. 4.96, respective SDs = 1.63 and 0.85), and methodological quality (Ms = 3.90 vs. 5.27, respective SDs = 1.73 and 0.78).

Not surprisingly, the analysis also reveals a significant multivariate main effect for editorial decision, F(3, 42) = 2.95, p


quality, F(3, 2903) = 4.48, p < .01; theoretical quality, F(3, 2903) = 2.83, p < .05; and methodological quality, F(3, 2903) = 4.72, p


Figure 2. Levels of agreement with editorial decisions favoring acceptance or rejection of own and others' manuscripts. (The figure plots agreement with accepted and rejected decisions separately for own and others' manuscripts.)

analysis reveals an interaction of author and editorial decision, F(1, 44) = 30.84, p


when others' manuscripts were rejected (both SDs < 1, as noted earlier) than when own manuscripts were rejected or when others' manuscripts were accepted (both SDs > 1.69). Thus, for own accepted manuscripts and for others' rejected manuscripts, almost all of the participants exhibited very high levels of agreement with the editorial decision.

As in the analyses of quality judgments, the association among variables was analyzed using a 2 x 2 x 2 (Agreement: High vs. Low x Author x Editorial Decision) loglinear analysis. Given that the overall levels of agreement were fairly high, I compared ratings of 5 or lower (i.e., low agreement) with ratings that exceeded 5 (i.e., high agreement), resulting in a more equal distribution than the one determined by the midpoint of the scale (even though 65% exhibited ratings greater than 5). Paralleling our earlier interaction of author and editorial decision, this analysis reveals a significant three-way interaction of agreement, author, and editorial decision, χ²(1, N = 48) = 19.82, p < .001. For own manuscripts, a large majority (16 of 17 participants, 94%) indicated high agreement with acceptance, whereas for others' manuscripts, a large majority (10 of 12 participants, 83%) indicated high agreement with rejection. Indeed, a specific follow-up analysis focusing on manuscripts that the participants had reviewed (i.e., others' manuscripts) reveals a significant association between agreement (high vs. low) and editorial decision, χ²(1, N = 20) = 4.43, p
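A hypothetical sketch of the dichotomization-and-association logic described above (the counts are invented for illustration, not the reported data, and SciPy's chi-square test of independence stands in for the loglinear analysis): agreement ratings are split at greater than 5 versus 5 or lower, and the association with the editorial decision is tested for reviewed manuscripts.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented agreement ratings (1-7) for 20 reviewed manuscripts: 8 accepted
# and 12 rejected, roughly matching the follow-up sample size noted above.
reviewed = pd.DataFrame({
    "decision": ["accept"] * 8 + ["reject"] * 12,
    "agreement": [4, 5, 6, 3, 5, 4, 7, 5,
                  7, 6, 7, 5, 7, 6, 7, 6, 7, 7, 4, 6],
})

# Dichotomize at the split described in the text: ratings above 5 = high agreement.
reviewed["high_agreement"] = reviewed["agreement"] > 5

# Test the association between agreement (high vs. low) and editorial decision.
table = pd.crosstab(reviewed["decision"], reviewed["high_agreement"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```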


ascribing greater quality (i.e., general quality, theoretical quality, and methodological quality) to the manuscripts that they have submitted than to the manuscripts they have reviewed. Moreover, levels of agreement with an editorial decision were greater for own (vs. others') manuscripts that have been accepted for publication; conversely, levels of agreement with an editorial decision were greater for others' (vs. own) manuscripts that have been rejected for publication. And in the role of reviewers, participants exhibited greater agreement with editorial decisions favoring rejection rather than acceptance, suggesting that reviewers are more critical than are editors.

Generally, these findings are consistent with the assumption that judgments of own and others' work to some extent are socially defined and colored by relatively stable beliefs of superiority: processes that can be understood in terms of self-enhancement and self-verification. When submitting manuscripts, many of us would seem to be somewhat unrealistically optimistic, believing that the odds of acceptance are (much) greater than the base rate (cf. Weinstein, 1980). We may continue to hold favorable beliefs about the quality of own manuscripts that have been rejected by filtering specific positive information (i.e., typical editorial letters and reviews do contain at least some positive comments) and (to some extent) downplaying the quality of reviews. Also, in evaluating own and others' work, we may tend to assign greater attention and weight to specific domains or criteria that are of special importance to us or that we ourselves excel in (cf. Dunning, Meyerowitz, & Holzberg, 1989). At the same time, the present research cannot rule out one potentially important alternative interpretation. It may be that reviewers submit manuscripts that are actually of greater quality than are the manuscripts that they review. Indeed, not all authors serve as reviewers for peer-reviewed journals, whereas all participants of this study had reviewer experience. As such, the present findings cannot unambiguously be attributed to self-enhancement or self-verification mechanisms.

The major thrust of this study was not its theoretical relevance, but its relevance to understanding judgments of own and others' work in the context of the current peer-review system, an issue that has not yet been truly explored. Indeed, one major question was whether the belief in reviewers' tendencies to SLAM represents an objective reality, a constructed reality, or both. Part of the answer is located in our tendencies to hold unrealistically favorable beliefs regarding our own work, thereby supporting the constructed reality underlying the belief in reviewers' tendencies to SLAM (as well as Levenson's, 1996, sour-grapes hypothesis). At the same time, the current findings suggest (albeit indirectly) that reviewers do tend to recommend rejection on the basis of limitations that would seem to be somewhat less serious in the eyes of editors. Reviewers' appraisals tend to be more negative than those of editors, in that reviewers exhibited greater levels of agreement with editorial decisions favoring rejection than with editorial decisions favoring acceptance. Thus, another reason for why authors sometimes


might be upset is that reviewers factually place particular emphasis on the limitations of a manuscript, thereby supporting the objective reality underlying the belief in reviewers' tendencies to SLAM.

Why exactly might reviewers be overly critical, or at least more critical than editors? What are important differences between reviewers and editors? To begin with, reviewers typically tend to have greater expertise regarding the issues addressed in a manuscript than do editors. However, greater expertise and knowledge do not necessarily imply a greater emphasis on, or recognition of, limitations, in that such additional knowledge could also be associated with a greater ability to detect strengths and innovative aspects of a manuscript.

A second explanation is that, given that the base rate for acceptance tends to be quite low within the field of social psychology, reviewers may become inclined to recommend rejection rather than acceptance. For example, in light of this base rate, reviewers may believe that their job is to detect limitations and only occasionally to recommend acceptance. Also, scientists may wish to avoid writing reviews that are expected to deviate significantly from other reviews (cf. Asch, 1955). Given the low base rates for acceptance, the likelihood of being deviant would be greater if one recommends acceptance than if one recommends rejection. Also, it may be psychologically more aversive to deviate by recommending acceptance than by recommending rejection, particularly if reviewers see their role as one of pointing out limitations rather than strengths.

A final explanation may be derived from the fact that in the current peer-review system, editors (unlike reviewers) are accountable. Editors are not anonymous, and editors are the ones whose (negative) decisions might be challenged by the author. Thus, it would seem to be particularly important for editors to carefully attend to both limitations and strengths of a manuscript, whereas reviewers are in a position where they can almost freely focus on limitations, rather than strengths.

Turning back to Epstein (1995), it is interesting that one of his major suggestions for improving the review system focuses on increasing reviewer accountability by providing authors with standard forms for evaluating the reviews that they receive. This suggestion is considered to be useful by Fine (1996), who also argues that it would be essential that such forms be completed after a final decision is made on the revised manuscript in order to enhance honest evaluations of the reviews. As pointed out by Fine, Epstein's suggested procedure may (a) elevate review quality; (b) strengthen helpfulness among reviewers; and (c) help authors to feel less dependent on incompetent, biased, or inexperienced reviewers. Moreover, if successful, this procedure may (d) help editors to select capable, constructive, and balanced reviewers; and (e) give rise to norms for assessing quality on the basis of limitations and strengths.

Thus, there is good reason to believe that Epstein's (1995) suggested procedure is a useful one (even at the expense of some practical costs). However, this


procedure raises at least two questions. First, it is important to keep in mind that editors need reviewers' honest evaluations and judgments to make a balanced editorial decision. Although enhanced reviewer accountability is likely to result in more balanced reviews, it is not clear whether accountability strengthens or diminishes such honesty. Given that this research was anonymous and not relevant to editorial decisions, there are no strong reasons to question the participants' honesty regarding their evaluations of manuscripts that they had reviewed (evaluations which actually were quite critical). Thus, although Epstein's procedure is likely to encourage balanced reviews, the potential risk is that the reviewers tend to be not only less critical but also less honest than they might be when they are less accountable, as in the current peer-review system.

A second issue raised by Epstein's (1995) suggested procedure is whether authors will provide reviewers with fair and useful comments. Although it is possible that such comments are quite fruitful, as suggested by Fine (1996), it is also possible that authors are somewhat unable (or unwilling) to provide their reviewers (particularly negative reviewers) with fair and useful feedback. The current findings indicate that authors continue to hold very favorable beliefs about their manuscripts, even after the work has been rejected for publication (i.e., beliefs that were at least as favorable as beliefs about others' work that has been accepted). This finding is understandable, in that (a) it is common for researchers to have considerable faith in their contributions (i.e., based on own logic and own observations); (b) researchers are likely to submit their work to a particular journal, anticipating that this work will be fairly well received (i.e., typically, authors do not submit their work to journals from which they expect a rejection); (c) the base rates for acceptance are low; and, last but not least, (d) it is not easy for many of us to dissociate the quality of our work from our sense of self-esteem. Thus, in light of the finding that authors continue to believe that their rejected manuscripts are quite good, it is, at least to some degree, questionable whether authors are able or willing to provide the reviewer with fair and useful feedback. Similarly, given that reviewers tend to be quite critical of others' work, it is somewhat questionable whether reviewers will consider authors' comments (particularly somewhat critical comments) to be fair and useful.

Clearly, the current research is not without limitations. The sample relied on the generosity and interest of the participants, and included only social psychologists with review experience. These unfortunate (yet to some degree inevitable) features limit the scope of this research and make it impossible to rule out alternative interpretations that may be derived from selection bias. Despite these limitations, the findings suggest that, for both good and bad reasons, we as authors may, every now and then, be disturbed about some detailed questions and doubts expressed by reviewers; whereas we as reviewers may often truly believe that such questions and doubts are accurate, appropriate, and useful.


References

Alicke, M. D., LoSchiavo, F. M., Zerbst, J. I., & Zhang, S. (1997). The person who outperforms me is a genius: Esteem maintenance in upward social comparison. Journal of Personality and Social Psychology, 73, 781-789.

Allison, S. T., Messick, D. M., & Goethals, G. R. (1989). On being better but not smarter than others. Social Cognition, 7, 275-296.

Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193, 31-35.

Dunning, D., Meyerowitz, J. A., & Holzberg, A. D. (1989). Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in self-serving assessments of ability. Journal of Personality and Social Psychology, 57, 1082-1090.

Epstein, S. (1995). What can be done to improve the journal review process? American Psychologist, 50, 883-885.

Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7, 117-140.

Fine, M. A. (1996). Reflections on enhancing accountability in the peer review process. American Psychologist, 51, 1190-1191.

Laband, D. N., & Piette, M. J. (1994). A citation analysis of the impact of blinded peer review. Journal of the American Medical Association, 272, 147-149.

Levenson, R. L., Jr. (1996). Enhance the journals, not the review process. American Psychologist, 51, 1191-1193.

Peters, D., & Ceci, S. J. (1985). Peer review: Beauty is in the eye of the beholder. Behavioral and Brain Sciences, 8, 747-750.

Rabinovich, B. A. (1996). A perspective on the journal review process. American Psychologist, 51, 1190.

Reis, H. T., & Stiller, J. (1992). Publication trends in JPSP: A three-decade review. Personality and Social Psychology Bulletin, 18, 465-472.

Sedikides, C., & Strube, M. J. (1997). Self-evaluation: To thine own self be good, to thine own self be sure, to thine own self be true, and to thine own self be better. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 29, pp. 209-269). New York, NY: Academic Press.

Suls, J., & Wills, T. A. (Eds.). (1991). Social comparison: Contemporary theory and research. Hillsdale, NJ: Lawrence Erlbaum.

Swann, W. B., Jr. (1983). Self-verification: Bringing social reality into harmony with the self. In J. Suls & A. G. Greenwald (Eds.), Psychological perspectives on the self (Vol. 2, pp. 33-66). Hillsdale, NJ: Lawrence Erlbaum.

Swann, W. B., Jr. (1990). To be adored or to be known: The interplay of self-enhancement and self-verification. In E. T. Higgins & R. M. Sorrentino (Eds.), Handbook of motivation and cognition: Foundations of social behavior (Vol. 2, pp. 404-448). New York, NY: Guilford Press.



Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193-210.

Van Lange, P. A. M., & Rusbult, C. E. (1995). My relationship is better than (and not as bad as) yours is: The perception of superiority in close relationships. Personality and Social Psychology Bulletin, 21, 32-44.

Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39, 806-820.

Wills, T. A. (1991). Similarity and self-esteem in downward comparison. In J. Suls & T. A. Wills (Eds.), Social comparison: Contemporary theory and research (pp. 51-78). Hillsdale, NJ: Lawrence Erlbaum.