
EDUCATIONAL ADVANCES

Characterization of the Council of Emergency Medicine Residency Directors’ Standardized Letter of Recommendation in 2011–2012
Jeffrey N. Love, MD, Nicole M. DeIorio, MD, Sarah Ronan-Bentle, MD, John M. Howell, MD, Christopher I. Doty, MD, David R. Lane, MD, and Cullen Hegarty, MD, for the SLOR Task Force

Abstract

Objectives: The Council of Emergency Medicine Residency Directors (CORD) introduced the standardized letter of recommendation (SLOR) in 1997, and it has become a critical tool for assessing candidates for emergency medicine (EM) training. It has not itself been evaluated since the initial studies associated with its introduction. This study characterizes current SLOR use to evaluate whether it serves its intended purpose of being standardized, concise, and discriminating.

Methods: This retrospective, multi-institutional study evaluated letters of recommendation from U.S. allopathic applicants to three EM training programs during the 2011–2012 Electronic Residency Application Service (ERAS) application cycle. Distributions of responses to each question on the SLOR were calculated, and the free-text responses were analyzed. Two pilots, each performed on five applicants, assisted in developing a strategy for maximizing interrater reliability.

Results: Each of the three geographically diverse programs provided a complete list of U.S. allopathic applicants to their program. Upon randomization, each program received a list of coded applicants, unique to their program, randomly selected for data collection. The number of applicants was selected to reach a goal of approximately 200 SLORs per site (n = 602). Among this group, comprising 278 of 1,498 applicants (18.6%) from U.S. allopathic schools, a total of 1,037 letters of recommendation were written, with 724 (69.8%) written by emergency physicians. SLORs represented 57.9% (602/1,037) of all LORs (by any kind of author) and 83.1% (602/724) of letters written by emergency physicians. Three hundred ninety-two of 602 SLORs had a single author (65.1%). For the question on “global assessment,” students were scored in the top 10% in 234 of 583 applications (40.1%; the question was not answered in some), and 485 of 583 (83.2%) of the applicants were ranked above the level of their peers. Similarly, >95% of all applicants were ranked in the top third compared to peers for all but one section under “qualifications for emergency medicine.” In 405 of 602 SLORs (67.2%), one or more questions were left unanswered, while 76 of all SLORs (12.6%) were “customized” or changed from the standard template. Finally, in 291 of 599 SLORs (48.6%), the word count was greater than the recommended maximum of 200 words.

Conclusions: Grade inflation is marked throughout the SLOR, limiting its ability to be discriminating. Furthermore, template customization and skipped questions work against the intention to standardize the SLOR. Finally, it is not uncommon for comments to be longer than guideline recommendations. As an assessment tool, the SLOR could be more discerning, concise, and standardized to serve its intended purpose.

ACADEMIC EMERGENCY MEDICINE 2013; 20:926–932 © 2013 by the Society for Academic Emergency Medicine

For many years emergency medicine (EM) program directors (PDs) struggled with the limitations inherent in traditional narrative letters of recommendation. In 1995, in response to growing concerns, the Council of Emergency Medicine Residency Directors (CORD) established a standardized letter of recommendation (SLOR) task force whose goal was to create “a method of standardization for letters of recommendation.”1

From the Medstar Georgetown University Hospital/Medstar Washington Hospital Center (JNL, DRL), Washington, DC; Oregon Health & Science University (NMD), Portland, OR; the University of Cincinnati (SRB), Cincinnati, OH; INOVA Fairfax Hospital (JMH), Fairfax, VA; the University of Kentucky (CID), Lexington, KY; and Regions Hospital (CH), St. Paul, MN.
Received January 24, 2013; revision received March 16, 2013; accepted March 29, 2013.
The authors have no relevant financial information or potential conflicts of interest to disclose.
Dr. DeIorio, an Associate Editor for this journal, had no role in the peer review or publication decision process for this paper.
Supervising Editor: John Burton, MD.
Address for correspondence and reprints: Jeffrey N. Love, MD; e-mail: [email protected].

ISSN 1069-6563; doi: 10.1111/acem.12214; © 2013 by the Society for Academic Emergency Medicine

After a 2-year development process, the SLOR was introduced in 1997. The goal of the task force was to develop an assessment instrument with three basic tenets: 1) standardization—the same essential questions are asked and answered for every candidate; 2) time-efficient to review—a clear and concise synopsis for efficient evaluation; and 3) discriminating—a template to convey comparative performance data on candidates in specific areas important to clinical practice.1 Of paramount importance, the SLOR’s format was designed to limit “clerkship grade and adjective inflation,” which was believed to be “rampant” at the time.

In the first few years after the adoption of the SLOR, there were a number of commentaries and studies related to its use.2–6 In general, it appeared that the SLOR accomplished many of its intended goals. Since that time, the SLOR has been universally adopted and has become an expected component of any application for EM training. The SLOR template used today contains minor changes from the original format released in 1997.

Approximately 15 years after its introduction, the SLOR task force was reestablished to evaluate whether or not the SLOR, as it is used currently, adheres to its original tenets. Secondary goals included characterization of SLOR use and identification of opportunities for improving this assessment tool. This article reports these results.

METHODS

Study Design

This was a retrospective, multi-institutional study based on U.S. allopathic applications to EM from the 2011–2012 Electronic Residency Application Service (ERAS) application cycle. Institutional review board (IRB) approval was obtained from each of three participating institutions prior to the collection of data. Due to the anonymous nature of this study, each affiliated IRB approved the study as exempt from informed consent.

Study Setting and Population

Data were collected from applicants to three geographically diverse EM training programs: Medstar Georgetown University Hospital/Washington Hospital Center, Washington, DC; Oregon Health & Science University, Portland, Oregon; and the University of Cincinnati, Cincinnati, Ohio. Two of these programs have 3-year formats and the third is a 4-year program.

Study Protocol

To first determine interrater reliability, the SLORs of five applicants were randomly selected using a computerized random-number generator and then checked for duplicate entries by the American Association of Medical Colleges (AAMC) identifier. Each of the three primary study authors independently reviewed the letters of recommendation, and the investigators’ coding of the data was compared. There were 11 SLORs in this pilot, with 597 data points. All three authors agreed in 85.9% of outcomes. Disagreements were resolved by consensus, and the coding protocol was amended to avoid future discrepancies. A second pilot of five additional candidates applying to all three programs was randomly selected using the same method. Once again, all three authors independently evaluated the letters of recommendation. In this cohort there were 13 SLORs with 705 data points for comparison. All three authors agreed in 94.2% of outcomes. A final meeting focused on remaining issues and resulted in a consensus plan for dealing with ongoing areas of disagreement. For the study itself, we targeted a minimum sample size of 600 SLORs. We projected that with 600 letters, we would have power of 0.8 to detect differences in proportions of 11% to 14%, depending on the number of subgroups compared (i.e., two to four projected subgroups).
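The pilot agreement figures above (85.9%, then 94.2%) are simple percentage agreement: the share of data points on which all three reviewers assigned the same code. A minimal sketch of that calculation in Python follows; the coded values are hypothetical, not the study's data, and note that percentage agreement does not correct for chance agreement as a statistic such as Fleiss' kappa would.

```python
def three_rater_agreement(coded_points):
    """Fraction of data points on which all three raters assigned the same code."""
    agree = sum(1 for a, b, c in coded_points if a == b == c)
    return agree / len(coded_points)

# Hypothetical coding of four SLOR data points by three independent reviewers
coded = [
    ("top 10%", "top 10%", "top 10%"),
    ("top third", "top 10%", "top third"),  # disagreement -> resolved by consensus
    ("honors", "honors", "honors"),
    ("minimal", "minimal", "minimal"),
]
print(three_rater_agreement(coded))  # 0.75
```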

Data Analysis

An initial, informal review of 20 random applicants yielded an average of two SLORs per applicant on two separate occasions. This estimate was used when randomly generating 100 unique applicants from each of the three participating programs with Microsoft Excel for Macintosh, v. 14.2.3 (Microsoft Corp., Redmond, WA) to obtain approximately 200 SLORs from each site. Descriptive statistics, including means, proportions, and 95% confidence intervals (CIs), were calculated using IBM SPSS Statistics for Macintosh, v. 20 (IBM Corporation, Armonk, NY).
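The 95% CIs for proportions reported throughout the Results were computed in SPSS; the standard normal-approximation (Wald) interval they correspond to can be sketched as below. This reimplementation is illustrative, not the study's code, and SPSS may apply continuity corrections that shift the bounds slightly.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Example: 234 of 583 candidates ranked "outstanding/top 10%" (Table 3)
p, lo, hi = proportion_ci(234, 583)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 40.1% (95% CI 36.2% to 44.1%)
```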

RESULTS

To reach the goal of approximately 600 SLORs, the ERAS applications of 287 allopathic applicants were reviewed. This represents 18.6% of the 1,498 U.S. allopathic seniors applying to EM (287 applicants minus nine interns) in 2011–2012.7 There were 1,037 letters of recommendation. Emergency physicians authored 724 (70%) of these letters, 602 of which were SLORs. Of the 602 SLORs, 198 (32.9%) were from the University of Cincinnati, 203 (33.7%) were from Medstar Georgetown University Hospital/Washington Hospital Center, and 201 (33.4%) were from Oregon Health & Science University.

The SLORs were based on EM rotations 98.3% (592 of 602) of the time, with only 1.7% (10 of 602) based on related EM electives (e.g., research, ultrasound). On occasion, a SLOR is written for an intern rotating through the emergency department (ED). Fourteen of 588 SLORs (nine applicants) were written for interns or residents, of which only four were completely based on rotations in the ED as a resident. The others were based on rotations during medical school the year before, four of which included additional information since that rotation. The nature of contact was “extended direct clinical contact” in 73.2% (431 of 589) of all SLORs, in 76.3% (95% CI = 71.8% to 80.5%) of single-author SLORs, and in 67.0% (95% CI = 60.0% to 73.5%) of group SLORs. The majority of authors (57.9%, 319 of 551) indicated that they knew the applicants for longer than 1 month.

Single-author SLORs made up 65.1% (392 of 602) of all SLORs. The most common authors of this type of SLOR were clerkship directors (CDs; 36.2%, 142 of 392), EM faculty (29.6%, 116 of 392), assistant/associate PDs or assistant clerkship directors (APDs/ACDs; 11.5%, 45 of 392), PDs (10.2%, 40 of 392), and emergency physicians not affiliated with the training program (5.9%, 23 of 392). Group SLORs made up 34.9% of all SLORs, with the most common contributing authors being CDs (80%, 168 of 210), PDs (79%, 166 of 210), and APDs (47.6%, 100 of 210).

The vast majority of SLORs reported some variation on the honors/pass/fail grading system (95.8%, 566 of 591), with 4.2% of the SLORs coming from rotations that grade on a pass/fail system. Table 1 provides a breakdown of EM clerkship grading, as well as a reported “historic” grading breakdown from the previous academic year. Clerkship data indicated that 29.9% (160 of 535) of the SLORs were from institutions with 50 or fewer participants in the EM rotation per year, 39.1% (209 of 535) from institutions with between 51 and 99, and 31.0% (166 of 535) from institutions with 100 or more students participating annually.

Section B of the SLOR template pertains to “qualifications for EM.” The results for SLOR questions B1–B4b can be found in Table 2. Question B5a asks, “How much guidance do you predict this applicant will need during residency?” Authors of the SLOR answered “almost none” 46.1% (276 of 599) of the time and “minimal” in 49.8% (298 of 599) of instances, and the remaining 4.2% (25 of 599) indicated that “moderate” guidance would be required.

The results of “global assessment,” or Section C of the SLOR, are summarized in Table 3. In addition, SLORs were evaluated to determine whether the author’s formal position was related to the percentage of times a global ranking of “outstanding/top 10%” was provided. Table 4 summarizes these results.

A review of the written comments focused on the total length by number of words. In 48.6% (291 of 599) of SLORs, the word count was greater than the recommended 200-word limit. This was true in 47.9% (187 of 390) of single-author SLORs and 50.0% (105 of 210) of group SLORs. A list of faculty comments from the rotation was included in 22.0% (64 of 227) of those longer than 200 words and in 7.8% (24 of 283) of those that were shorter. General comments describing the basis for grading, the clinical environment, or clerkship logistics were included in 35.4% (103 of 188) of those longer than 200 words and in 8.1% (25 of 282) of those that were shorter. In 2% of cases (12 of 602), the written comments were handwritten.

Table 5 reviews the percentage of times each question was not answered. Question C1b, which documents the author’s profile of global rankings awarded last year, was skipped or did not provide the entire profile requested 80.8% (458/567) of the time. The SLOR template was “customized” or changed in 12.6% (76/602) of instances. Six percent of the time (36/602), the final box, indicating that the applicant waived the right to review the SLOR, was not checked.

DISCUSSION

The goal of the original SLOR Task Force was to create a letter that was standardized and concise while providing comparative, discerning data. Shortly after the SLOR was adopted, Crane and Ferraro8 surveyed PDs and found that the most important factors in resident selection were the core clinical clerkship grades, EM rotation grade, and letters of recommendation. This finding was echoed by Wagoner and Suriano,9 who concluded that for EM PDs, the EM rotation grade and core clinical grades were “near critical” for the successful applicant. At that time, the SLOR Task Force argued that one of the strengths of the SLOR was that it contained two of the three major factors, namely, a letter of recommendation and the grade on an EM rotation.1 EM rotations often take place during the fourth year of

Table 1
SLOR Profile of 2011–2012 EM Rotation Grades and the Reported Grade Distribution the Previous Year

Grade            Current Applicants, % (n/N)    Last Academic Year, % (95% CI)
Honors           56.7 (315/556)                 27.1 (25.9–28.3)
High pass        34.0 (189/556)                 40.0 (38.3–41.8)
Pass             8.1 (45/556)                   32.0 (29.9–34.0)
Not available    1.1 (6/556)

Data exclude the 4.2% of SLORs from pass/fail rotations.

Table 2
SLOR Profile 2011–2012 Reporting the Percentage of Candidates Receiving One of the Top Two Rankings and the Total in “Qualifications for EM”

Qualification                      Top 10%     Top Third         Total % “Above Level of Peers”
Commitment                        53.3        41.3              94.6
Work ethic                        63.2        32.5              95.7
Differential diagnosis            38.9        47.7              86.6

                                  Superior    Excellent/Good
Ability to interact with others   64.6        31.9              96.5
Communicates caring nature        62.6        35.9              98.5


medical school and, as a result, these grades are not always available on the “Dean’s letter” or Medical Student Performance Evaluation. In 2000, a study of applicants to EM training from a single program documented that 22% of all letters of recommendation were SLORs.4 Since 2000, the SLOR has become integral to successful applications for training in the specialty, comprising 58.1% of all letters written in support of residency applications and 83.1% of the letters written by emergency physicians.

The basis for nearly all SLORs is a one-month clinical rotation in the ED. One reason the SLOR has such potential value is that in most instances, authors base their conclusions on “extended direct contact” working clinically with the applicant. More often than not, the author has known the applicant for longer than 1 month, providing the potential for additional observations and experiences that could further enrich the SLOR. The ERAS application is rife with information regarding an applicant’s cognitive abilities, including grades, USMLE scores, AOA status, etc. The SLOR contributes to an understanding of cognitive abilities as they pertain to the ED through assessments such as the ED rotation grade, differential diagnosis, and decision-making relative to peers. The SLOR author is also in an ideal position to reflect on the qualities and traits (e.g., noncognitive domains such as dedication, recognition of limits, altruism, and ability to work with a team) so important to success in EM. These factors can otherwise be difficult to discern from the ERAS application.

Arguably, the most important tenet of the SLOR is its ability to compare the performance of applicants to one another in specific areas. However, our data suggest that grade inflation is an issue in the SLOR. For many of the questions throughout the SLOR, comparative information appears to be facilitated through well-anchored criteria relative to peers. Rankings are based on adjectives linked to quantifying anchors. For instance, outstanding represents the “top 10%,” excellent the “top third,” very good the “middle third,” and good the “lower third,” in comparison to other EM applicants. A review of “qualifications for EM” (Section B) demonstrates that in the areas of “commitment to EM,” “work ethic,” “differential diagnosis,” and “personality,” the vast majority of applicants received outstanding or excellent rankings (Table 2). Fewer than 5% received very good (at the level of peers) or good ratings. Under global assessment, 40.1% of candidates were ranked as “outstanding/top 10%” relative to peers.

Table 4
Global Ranking of Outstanding/Top 10% Relative to the Total for Each Group for Single-author SLORs Based on Leadership Position, Followed by Totals for Single-author and Group SLORs

Position               Top 10%    Middle Third/All Rankings but Top 10%    Total    % of Total    95% CI
PD                     13         5/27                                     40       32.5          18.6–49.1
CD                     56         14/85                                    141      39.7          31.6–48.3
Other leadership       18         4/25                                     43       41.9          27.0–58.1
Faculty                62         14/48                                    110      56.4          46.6–65.8
EP (no affiliation)    18         0/3                                      21       85.7          63.7–97.0
Single-author SLORs    176        37/203                                   379      46.4          41.3–51.6
Group SLORs            58         52/146                                   204      28.3          22.2–35.0

Other leadership includes APDs and ACDs. ACDs = assistant clerkship directors; APDs = associate/assistant program directors; CD = clerkship director; PD = program director; SLOR = standardized letter of recommendation.

Table 3
SLOR Profile of 2011–2012 Candidate Global Rankings and the Author’s Self-reported History of Rankings the Previous Year

Global Assessment         Candidates’ Global Ranking, % (n/N)    Profile of Global Ranking Last Year, %
Outstanding/top 10%       40.1 (234/583)                         29
Excellent/top third       43.1 (251/583)                         45
Very good/middle third    15.3 (89/583)                          22
Good/lower third          1.5 (9/583)                            4

Table 5
Percentage of Times Individual Questions Were Skipped on the SLOR (if >2.0%)

Question (Question Number)                            Percentage
“Key comment”? (A3b)                                  36.4
Total number of students rotating last year? (A5b)    11.1
Which EM rotation was this? (A4a)                     10.3
How long have you known the applicant? (A1)           8.2
Dates of rotation? (A4b)                              6.1
How highly will candidate rank on your list? (C2)     4.5
Global ranking profile from last year? (C1b)          3.7
Grades on rotation? (A5a)                             3.7
Global assessment ranking of this candidate? (C1a)    3.2
Nature of contact with applicant? (A2)                3.0


These findings are nearly identical to those of the 2000 report.4

Unpublished data from the task force survey of SLOR authors reveal that rarely, if ever, do SLOR authors feel they inflate grades. In addition, SLOR authors report that in some instances they use their “gestalt” to select the adjective of choice without reference to the associated quantifying anchor. This may explain why nearly all applicants are rated as either outstanding or excellent. Rarely does the SLOR indicate that an individual is “at the level of their peers” or lower.

It is likely that most applicants select faculty to author their single-author SLORs based on the likelihood of those faculty members writing favorable letters. The results summarized in Table 4 demonstrate that the author’s job position plays a role in how discerning he or she is when assigning a “global assessment” of “outstanding/top 10%.” The data suggest that as educators become more experienced in writing SLORs, they become more discriminating in grading. Not surprisingly, PDs are among the most discriminating of the SLOR authors. An understanding of this difference based on author type is important for those evaluating SLORs. It is also worth noting that the most discriminating authors rank 32.5% of candidates as globally functioning in the top 10% of applicants.

The AAMC has encountered very similar challenges with grade inflation regarding Dean’s letters.10 As a result, it has instituted a number of changes, including adopting the new name “Medical Student Performance Evaluation,” reflecting the intention of being an evaluation and not a recommendation.11 The SLOR has always been intended to be an assessment tool. Referring to it as a “recommendation” creates a conundrum for the author, who is likely conflicted about whether he or she is assessing or recommending the applicant.

Although not necessarily the result of grade inflation, 56.7% of those applying were given grades of “honors” or the equivalent for their EM rotations. In contrast, SLOR authors reported that 27.1% of students received grades of “honors” the previous year (Table 1). This discordance may be explained, in part, by the fact that EM is a required clerkship at some schools and not others. Consequently, last year’s grades may include the performance of all students and not just those applying to EM residency programs. As an instrument designed to assess performance relative to EM-bound peers, the SLOR report of last year’s grades creates the misperception that clerkship grades are more discriminating among applicants than they actually are.

A second tenet of the SLOR is to provide a concise synopsis of the applicant. Toward this end, instructions associated with the SLOR have traditionally requested that the narrative written comments be kept to a maximum of 150–200 words.1 Early on, work by Girzadas et al.5 demonstrated that the average time to interpret a SLOR was 16 seconds, compared to 90 seconds for a narrative letter. In addition, the SLOR offered better interrater reliability than traditional narratives, regardless of the level of experience of the interpreter. These advantages are particularly important at a time when many programs receive over 1,000 applications annually. The current study found that the written comments of approximately half of all SLORs (48.6%) were over the 200-word limit. In our experience, there has been a trend in recent years to provide additional information in this section, which may include a list of faculty comments from the EM rotation and additional information about grading, the rotation’s clinical environment, or both. The data from this study reveal that the inclusion of random faculty comments from the rotation is three times more common in written comments greater than 200 words. Although staff comments may highlight aspects of behavior that emphasize an important trait, a list of comments without context falls short of providing the concise synthesis the written comments were intended to represent. Likewise, an explanation of grading or the clinical environment at that institution is three to four times more common in written comments longer than the 200-word limit. Although this information may frame the applicant’s educational performance, it risks diluting the precise focus on the candidate that SLORs strive to achieve.

A thoughtful written narrative puts the overall SLOR and an applicant’s candidacy into perspective. Essential to this conversation is a discussion of noncognitive attributes, such as conscientiousness, intellectual curiosity, compassion, professional maturity, and leadership, that are important in predicting future success as a resident and physician.12–16 Additionally, this narrative should clarify any questions or issues in the SLOR. Although not quantified in this study, it has been common in our experience for faculty to express concern that low rankings (when provided) are seldom explained in the written narrative.

Finally, the SLOR was developed as a standardized instrument to limit the variability in style and terminology found in narrative letters, as well as to provide comparative data regarding key performance parameters for every applicant. Several issues were encountered that appear to have an effect on standardization. Modification of the SLOR template is one such factor. Although these changes are likely well-intentioned, template modifications increase the difficulty of comparing SLORs among applicants. When modifications are warranted, they should be based on consensus opinion, so that changes are adopted by all SLOR authors rather than reflecting individual preference. The majority of SLORs have one or more questions left unanswered, which also negatively affects standardization. There are several possible reasons why questions may have been skipped: 1) the author could have mistakenly missed an item; 2) the question may be vague, as appears to be the case with “One Key Comment,” which was skipped 36.4% of the time; and 3) the instructions may be unclear about what is being requested. For instance, the “global assessment profile last year” did not provide the complete profile 80.1% of the time. Understanding which questions prove problematic for authors may help improve compliance through changes to specific questions or instructions. Educating authors as to the intention and importance of each question may also improve compliance.

The group SLOR is a relatively new format that constitutes approximately one-third of all SLORs. Although there is some variation, group SLORs are generally written by several members of the program leadership, typically the PD and CD, who come to consensus regarding an applicant. In comparison to single-author SLORs, group SLORs are less likely to base their evaluations on “extended direct clinical contact.” Consequently, group SLORs either use feedback from other faculty or are based on more constrained interactions. Another difference noted was that group SLORs appear to be more discriminating in providing global assessments. For example, group SLORs provided significantly fewer top 10% rankings than any single-author SLOR category with the exception of those from PDs (Table 4). In addition, a ranking of very good/middle third made up 25.5% of group SLOR evaluations. By comparison, very good or middle-third rankings made up 9.8% of all single-author SLORs. One possible explanation for this outcome is that applicants inject bias into the single-author SLOR process by choosing the authors themselves. Group SLORs, on the other hand, often provide the entire grading profile for EM-bound applicants, whether a letter was requested or not.

Multiple factors appear to be responsible for the issues encountered with the SLOR. Two related studies are under way to evaluate the perspectives of SLOR authors and of the PDs who evaluate SLORs, with the intention of gaining a global perspective on issues related to this instrument. Questions such as "Just how valuable is the SLOR?", "To what degree is it limited by the issues raised in the current study?", and "What specific changes need to be made?" are best addressed by the upcoming PD survey. It is clear that the template could be improved, as could education and ongoing mentorship on how to complete a SLOR. Consideration should also be given to renaming the instrument the "standardized letter of evaluation" (SLOE), a more appropriate and less confusing name. Changes to the name and template, along with additional education regarding completion, all have the potential to improve compliance with the three basic tenets of the SLOR. Yet grade inflation is the most pervasive issue. Our sense is that a cultural change in how we view our roles as educators, and the assessments we provide, will be required to significantly affect this issue.

The great majority of applicants to EM have the potential to succeed in our specialty. In addition, different programs value different attributes in a candidate. The ideal SLOR (SLOE) author would 1) acknowledge that every applicant has unique strengths and challenges; 2) take the time to provide honest, insightful assessments and written comments; and 3) understand that true potential is reflected in qualities and traits from noncognitive domains, not simply in grades or rankings.

LIMITATIONS

It is possible that there were errors in data entry, as the data were not rechecked after entry into the spreadsheet. Although every effort was made to ensure interrater reliability, it is possible our agreement degraded during the study. The study is also limited in its ability to discern why SLOR authors acted as they did in completing the letters, although the aforementioned concurrent studies of SLOR authors and PDs should better answer this question. Finally, all three of the training programs participating in this study were university based; consequently, our results may have less relevance to community-based programs.

CONCLUSIONS

The results of this study demonstrate that the standardized letter of recommendation, in its current use, is limited in its ability to be discriminating. There is also a question of whether recent changes in its use have affected its ability to be concise and standardized. Programs should be aware of these findings as they attempt to gauge how exemplary an applicant truly is.


ACADEMIC EMERGENCY MEDICINE • September 2013, Vol. 20, No. 9 • www.aemj.org 931

