consort-spi 2018 explanation and elaboration: guidance for ......background: the consort...

18
METHODOLOGY Open Access CONSORT-SPI 2018 Explanation and Elaboration: guidance for reporting social and psychological intervention trials Sean Grant 1* , Evan Mayo-Wilson 2 , Paul Montgomery 3 , Geraldine Macdonald 4 , Susan Michie 5 , Sally Hopewell 6 , David Moher 7 , on behalf of the CONSORT-SPI Group Abstract Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers report randomised controlled trials (RCTs) transparently. We have developed an extension to the CONSORT 2010 Statement for social and psychological interventions (CONSORT-SPI 2018) to help behavioural and social scientists report these studies transparently. Methods: Following a systematic review of existing reporting guidelines, we conducted an online Delphi process to prioritise the list of potential items for the CONSORT-SPI 2018 checklist identified from the systematic review. Of 384 international participants, 321 (84%) participated in both rating rounds. We then held a consensus meeting of 31 scientists, journal editors, and research funders (March 2014) to finalise the content of the CONSORT-SPI 2018 checklist and flow diagram. Results: CONSORT-SPI 2018 extends 9 items (14 including sub-items) from the CONSORT 2010 checklist, adds a new item (with 3 sub-items) related to stakeholder involvement in trials, and modifies the CONSORT 2010 flow diagram. This Explanation and Elaboration (E&E) document is a user manual to enhance understanding of CONSORT-SPI 2018. It discusses the meaning and rationale for each checklist item and provides examples of complete and transparent reporting. Conclusions: The CONSORT-SPI 2018 Extension, this E&E document, and the CONSORT website (www.consort- statement.org) are helpful resources for improving the reporting of social and psychological intervention RCTs. Keywords: CONSORT, Randomised controlled trial, Reporting guideline, Reporting standards, Transparency Background CONSORT-SPI 2018 explanation and elaboration The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help authors report randomised controlled trials (RCTs) [1]. It has improved the quality of reports in medicine [25], and has been officially endorsed by over 600 journals and prominent editorial groups [6]. A smaller number of journals have implemented CONSORTparticularly its extension state- mentsas a requirement for the manuscript submission, peer-review, and editorial decision-making process [6, 7]. There are extensions of the CONSORT Statement (http:// www.consort-statement.org/extensions) for specific trial designs [811], types of data (e.g. patient-reported outcomes, harms, and information in abstracts) [1214], and interventions [1517]. Several reviews have shown that RCTs of social and psychological interventions are often not reported with sufficient accuracy, comprehensiveness, and transparency to replicate these studies, assess their quality, and under- stand for whom and under what circumstances the evalu- ated intervention should be delivered [1822]. Moreover, behavioural and social scientists may be prevented from re- producing or synthesising previous studies because trial protocols, outcome data, and the materials required to im- plement social and psychological interventions are often * Correspondence: [email protected] 1 Behavioral & Policy Sciences, RAND Corporation, 1776 Main Street, Santa Monica, CA 90407-2138, USA Full list of author information is available at the end of the article © The Author(s). 
2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Grant et al. Trials (2018) 19:406 https://doi.org/10.1186/s13063-018-2735-z

Upload: others

Post on 26-Apr-2020

9 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

METHODOLOGY Open Access

CONSORT-SPI 2018 Explanation andElaboration: guidance for reporting socialand psychological intervention trialsSean Grant1* , Evan Mayo-Wilson2, Paul Montgomery3, Geraldine Macdonald4, Susan Michie5, Sally Hopewell6,David Moher7, on behalf of the CONSORT-SPI Group

Abstract

Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to helpbiomedical researchers report randomised controlled trials (RCTs) transparently. We have developed an extension tothe CONSORT 2010 Statement for social and psychological interventions (CONSORT-SPI 2018) to help behaviouraland social scientists report these studies transparently.

Methods: Following a systematic review of existing reporting guidelines, we conducted an online Delphi processto prioritise the list of potential items for the CONSORT-SPI 2018 checklist identified from the systematic review. Of384 international participants, 321 (84%) participated in both rating rounds. We then held a consensus meeting of31 scientists, journal editors, and research funders (March 2014) to finalise the content of the CONSORT-SPI 2018checklist and flow diagram.

Results: CONSORT-SPI 2018 extends 9 items (14 including sub-items) from the CONSORT 2010 checklist, adds anew item (with 3 sub-items) related to stakeholder involvement in trials, and modifies the CONSORT 2010 flowdiagram. This Explanation and Elaboration (E&E) document is a user manual to enhance understanding ofCONSORT-SPI 2018. It discusses the meaning and rationale for each checklist item and provides examples ofcomplete and transparent reporting.

Conclusions: The CONSORT-SPI 2018 Extension, this E&E document, and the CONSORT website (www.consort-statement.org) are helpful resources for improving the reporting of social and psychological intervention RCTs.

Keywords: CONSORT, Randomised controlled trial, Reporting guideline, Reporting standards, Transparency

BackgroundCONSORT-SPI 2018 explanation and elaborationThe CONSORT (Consolidated Standards of ReportingTrials) Statement was developed to help authors reportrandomised controlled trials (RCTs) [1]. It has improvedthe quality of reports in medicine [2–5], and has beenofficially endorsed by over 600 journals and prominenteditorial groups [6]. A smaller number of journals haveimplemented CONSORT—particularly its extension state-ments—as a requirement for the manuscript submission,peer-review, and editorial decision-making process [6, 7].

There are extensions of the CONSORT Statement (http://www.consort-statement.org/extensions) for specific trialdesigns [8–11], types of data (e.g. patient-reportedoutcomes, harms, and information in abstracts) [12–14],and interventions [15–17].Several reviews have shown that RCTs of social and

psychological interventions are often not reported withsufficient accuracy, comprehensiveness, and transparencyto replicate these studies, assess their quality, and under-stand for whom and under what circumstances the evalu-ated intervention should be delivered [18–22]. Moreover,behavioural and social scientists may be prevented from re-producing or synthesising previous studies because trialprotocols, outcome data, and the materials required to im-plement social and psychological interventions are often

* Correspondence: [email protected] & Policy Sciences, RAND Corporation, 1776 Main Street, SantaMonica, CA 90407-2138, USAFull list of author information is available at the end of the article

© The Author(s). 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, andreproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link tothe Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Grant et al. Trials (2018) 19:406 https://doi.org/10.1186/s13063-018-2735-z

Page 2: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

not shared [23–28]. These inefficiencies contribute to sub-optimal dissemination of effective interventions [29, 30],overestimation of intervention efficacy [31], and researchwaste [29]. Transparent and detailed reporting of social andpsychological intervention RCTs is needed to minimisereporting biases and to maximise the credibility and utilityof this research evidence [32, 33].We developed an extension of the CONSORT 2010

Statement for Social and Psychological Interventions(CONSORT-SPI 2018) [34]. To delineate the scope ofCONSORT-SPI 2018, we defined interventions by theirmechanisms of action [35, 36]. We define social and psy-chological interventions as actions intended to modifyprocesses and systems that are social and psychologicalin nature (such as cognitions, emotions, behaviours,norms, relationships, and environments) and arehypothesised to influence outcomes of interest [37, 38].Social and psychological interventions may be offered toindividuals who request them or as a result of policy,may operate at different levels (e.g., individual, group,place), and are usually “complex” [19]. CONSORT-SPI2018 is designed primarily for reports of RCTs, thoughsome parts of this guidance may be useful for re-searchers conducting other types of clinical trials [39] orwho are interested in developing and evaluating complexinterventions [40]. In addition, although terms in this re-port are most appropriate for parallel-group trials, theguidance is designed to apply to other designs (e.g.stepped wedge and N of 1).

MethodsWe previously reported the methods used to developCONSORT-SPI 2018 [41]. In summary, we first con-ducted systematic reviews of existing guidance and qual-ity of trial reporting [18]. Second, we conducted aninternational online Delphi process between September2013 and February 2014 to prioritise the list of potentialitems for the CONSORT-SPI 2018 checklist and flowdiagram that were identified in the systematic review.Survey items can be accessed at the project’s ReSharesite: 10.5255/UKDA-SN-851981. Third, we held a con-sensus meeting to finalise the content of the checklistand flow diagram. The meeting was held in March 2014and comprised 31 scientists, journal editors, and re-search funders. A writing group drafted CONSORT-SPI2018, and consensus group participants provided feed-back and agreed to the final manuscript for theCONSORT-SPI 2018 Extension Statement [42], and thisExplanation and Elaboration (E&E) document(Additional file 1: Table S1).The CONSORT-SPI 2018 checklist extends 9 of the 25

items (incorporating 14 sub-items) found in CONSORT2010 (Table 1; new items are in the ‘CONSORT-SPI 2018’column) and includes a modified flow diagram. Participants

also voted to add a new item about stakeholder involve-ment, and they recommended modifications to existingCONSORT 2010 checklist items (Table 2). This E&E docu-ment briefly summarises the content from the CONSORT2010 E&E document for each CONSORT 2010 item [43],tailored to a behavioural and social science audience, and itprovides an explanation and elaboration for the new itemsin CONSORT-SPI 2018. Specifically, for each item fromCONSORT 2010 and CONSORT-SPI 2018, this E&E pro-vides the rationale for the checklist item and examples ofreporting for a behavioural and social science audience(Additional file 2: Table S2). Throughout this article, we usethe term ‘participants’ to mean ‘participating units’ targetedby interventions, which might be individuals, groups, orplaces (i.e. settings or locations) [44]. While we include abrief statement about each item in the CONSORT 2010checklist, readers can find additional information aboutthese items on the CONSORT website (www.consort-state-ment.org) and the CONSORT 2010 E&E [43].

Results and DiscussionExplanation and elaboration of the CONSORT-SPITitle and abstract

Item 1a: identification as a randomised trial in thetitle Placing the word ‘randomised’ in the title increasesthe likelihood that an article will be indexed correctly inbibliographic databases and retrieved in electronicsearches [45]. Authors should consider providing infor-mation in the title to assist interested readers, such asthe name of the intervention and the problem that thetrial addresses [46]. We advise authors to avoid unin-formative titles (e.g. catchy phrases or allusions) becausethey reduce space that could be used to help readersidentify relevant manuscripts [45, 46].

Item 1b: structured summary of trial design, methods,results, and conclusions (for specific guidance, seeCONSORT for Abstracts) [13, 47] Abstracts are themost widely read section of manuscripts [46], and they areused for indexing reports in electronic databases [45].Authors should follow the CONSORT Extension forAbstracts, which provides detailed advice for structuredjournal article abstracts and for conference abstracts.We have tailored the CONSORT Extension for Abstractsfor social and psychological intervention trials with rele-vant items for the objective, trial design, participants, andinterventions from the CONSORT-SPI 2018 checklist(Table 3 and Additional file 3) [47].

Introduction

Item 2a: scientific background and explanation ofrationale A structured introduction should describe the

Grant et al. Trials (2018) 19:406 Page 2 of 18

Page 3: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

rationale for the trial and how the trial contributes towhat is known [48]. In particular, the introductionshould describe the targeted problem or issue [49] andwhat is already know about the intervention, ideally byreferencing systematic reviews [46].

Item 2b: specific objectives or hypotheses The objec-tives summarise the research questions, including any hy-potheses about the expected magnitude and direction ofintervention effects [48, 50]. For social and psychologicalinterventions that have multiple units of intervention andmultiple outcome assessments (e.g. individuals, groups,places), authors should specify to whom or to what eachobjective and hypothesis applies.

CONSORT-SPI 2018 item 2b: if pre-specified, how theintervention was hypothesised to workDescribing how the interventions in all groups (i.e. allexperimental and comparator groups) were expected to

affect outcomes provides important information aboutthe theory underlying the interventions [46]. For eachintervention evaluated, authors should describe the'mechanism of action' [51], also known as the 'theory ofchange' [52], 'programme theory' [53], or 'causal path-way' [54]. Authors should state how interventions werethought to affect outcomes prior to the trial, andwhether the hypothesised mechanisms of action werespecified a priori, ideally with reference to the trial regis-tration and protocol [55]. Specifically, authors shouldreport: how the components of each intervention wereexpected to influence modifiable psychological andsocial processes, how influencing these processes wasthought to affect the outcomes of interest, the role ofcontext, facilitators of and barriers to interventionimplementation, and potential adverse events or unin-tended consequences [48, 56]. Graphical depictions—such as a logic model or analytic framework—may beuseful [19].

Fig. 1 The CONSORT-SPI 2018 flow diagram

Grant et al. Trials (2018) 19:406 Page 3 of 18

Page 4: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

Table 1 The CONSORT-SPI 2018 checklist

Section Item # CONSORT 2010 CONSORT-SPI 2018

Title and abstract 1a Identification as a randomised trial in the title§

1b Structured summary of trial design, methods, results, andconclusions (for specific guidance, see CONSORT forAbstracts)§

Refer to CONSORT extension for social andpsychological intervention trial abstracts

Introduction

Background andobjectives

2a Scientific background and explanation of rationale§

2b Specific objectives or hypotheses§ If pre-specified, how the intervention washypothesised to work

Methods

Trial design 3a Description of trial design (such as parallel, factorial)including allocation ratio§

If the unit of random assignment is not theindividual, please refer to CONSORT forCluster Randomised Trials [8]

3b Important changes to methods after trial commencement(such as eligibility criteria), with reasons

Participants 4a Eligibility criteria for participants§ When applicable, eligibility criteria forsettings and those delivering theinterventions

4b Settings and locations where the data were collected

Interventions 5 The interventions for each group with sufficient details toallow replication, including how and when they wereactually administered§

5a Extent to which interventions were actuallydelivered by providers and taken up byparticipants as planned

5b Where other informational materials aboutdelivering the intervention can be accessed

5c When applicable, how intervention providerswere assigned to each group

Outcomes 6a Completely defined pre-specified outcomes, including howand when they were assessed§

6b Any changes to trial outcomes after the trial commenced,with reasons

Sample size 7a How sample size was determined§

7b When applicable, explanation of any interim analyses andstopping guidelines

Randomisation

Sequence generation 8a Method used to generate the random allocation sequence

8b Type of randomisation and details of any restriction (such asblocking and block size)§

Allocation concealmentmechanism

9 Mechanism used to implement the random allocationsequence, describing any steps taken to conceal thesequence until interventions were assigned§

Implementation 10 Who generated the random allocation sequence, who enrolledparticipants, and who assigned participants to interventions§

Awareness of assignment 11a Who was aware of intervention assignment after allocation (forexample, participants, providers, those assessing outcomes),and how any masking was done

11b If relevant, description of the similarity of interventions

Analytical methods 12a Statistical methods used to compare group outcomes§ How missing data were handled, with detailsof any imputation method

12b Methods for additional analyses, such as subgroup analyses,adjusted analyses, and process evaluations

Grant et al. Trials (2018) 19:406 Page 4 of 18

Page 5: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

Methods: trial design

Item 3a: description of trial design (such as parallel,factorial) including allocation ratio Unambiguous de-tails about trial design help readers assess the suitabilityof trial methods for addressing trial objectives [46, 57],and clear and transparent reporting of all design features

of a trial facilitates reproducibility and replication [58].Authors should explain their choice of design (especiallyif it is not an individually randomised, two-group paralleltrial) [9]; state the allocation ratio and its rationale; andindicate whether the trial was designed to assess thesuperiority, equivalence, or noninferiority of the inter-ventions [10, 59, 60].

Table 1 The CONSORT-SPI 2018 checklist (Continued)

Section Item # CONSORT 2010 CONSORT-SPI 2018

Results

Participant flow (a diagramis strongly recommended)

13a For each group, the numbers randomlyassigned, receiving the intended intervention,and analysed for the outcomes§

Where possible, the number approached,screened, and eligible prior to randomassignment, with reasons for non-enrolment

13b For each group, losses and exclusions after randomisation,together with reasons§

Recruitment 14a Dates defining the periods of recruitment and follow-up

14b Why the trial ended or was stopped

Baseline data 15 A table showing baseline characteristics for each group§ Include socioeconomic variables whereapplicable

Numbers analysed 16 For each group, number included in each analysisand whether the analysis was by original assigned groups§

Outcomes and estimation 17a For each outcome, results for each group, and theestimated effect size and its precision (such as 95% confidenceinterval)§

Indicate availability of trial data

17b For binary outcomes, presentation of both absolute andrelative effect sizes is recommended

Ancillary analyses 18 Results of any other analyses performed, including subgroupanalyses, adjusted analyses, and process evaluations,distinguishing pre-specified from exploratory

Harms 19 All important harms or unintended effects in each group (forspecific guidance, see CONSORT for Harms)

Discussion

Limitations 20 Trial limitations, addressing sources of potential bias,imprecision, and, if relevant, multiplicity of analyses

Generalisability 21 Generalisability (external validity, applicability) of the trialfindings§

Interpretation 22 Interpretation consistent with results, balancing benefitsand harms, and considering other relevant evidence

Important information

Registration 23 Registration number and name of trial registry

Protocol 24 Where the full trial protocol can be accessed, if available

Declaration of interests 25 Sources of funding and other support, role of funders Declaration of any other potential interests

Stakeholder involvement 26a Any involvement of the interventiondeveloper in the design, conduct, analysis, orreporting of the trial

26b Other stakeholder involvement in trial design,conduct, or analyses

26c Incentives offered as part of the trial

This table lists items from the CONSORT 2010 checklist (with some modifications for social and psychological intervention trials as described in Table 2) andadditional items in the CONSORT-SPI 2018 extension. Empty rows in the ‘CONSORT-SPI 2018’ column indicate that there is no extension to the CONSORT2010 itemWe strongly recommended that the CONSORT-SPI 2018 Explanation and Elaboration (E&E) document be reviewed when using the CONSORT-SPI 2018 checklistfor important clarifications of each item§An extension item for cluster trials exists for this CONSORT 2010 item

Grant et al. Trials (2018) 19:406 Page 5 of 18

Page 6: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

CONSORT-SPI 2018 item 3a: if the unit of randomassignment is not the individual, please refer to CONSORTfor Cluster Randomised TrialsRandomising at the cluster level (e.g. schools) hasimportant implications for trial design, analysis, infer-ence, and reporting. For cluster randomised trials,authors should follow the CONSORT Extension toCluster Randomised Trials. Because many social andpsychological interventions are cluster randomised, weprovide the extended items from this checklist inTables 4 and 5 [8]. Authors should also report the

unit of randomisation, which might be social units(e.g. families), organisations (e.g. schools, prisons), orplaces (e.g. neighbourhoods), and specify the unit ofeach analysis, especially when the unit of analysis isnot the unit of randomisation (e.g. randomising at thecluster level and analysing outcomes assessed at theindividual level).

Item 3b: important changes to methods after trialcommencement (such as eligibility criteria), with rea-sons Deviations from planned trial methods are com-mon, and not necessarily associated with flawed orbiased research. Changes from the planned methods areimportant for understanding and interpreting trial re-sults. A trial report should refer to a trial registration(Item 23) [61–63] and protocol (Item 24) developed inadvance of assigning the first participant [55], and to apre-specified statistical analysis plan [64]. The reportshould summarise all amendments to the protocol andstatistical analysis plan, when they were made, and therationale for each amendment. Because selective out-come reporting is pervasive [65], authors should stateany changes to the outcome definitions during the trial.

Methods: participants

Item 4a: eligibility criteria for participants Eligibilitycriteria should describe how participants (i.e. individuals,groups, or places) were recruited. Readers need thisinformation to understand who could have enteredthe trial and the generalisability of findings. Authorsshould describe all inclusion and exclusion criteriaused to determine eligibility, as well as the methodsused to screen and assess participants to determinetheir eligibility [46, 48].

CONSORT-SPI 2018 item 4a: when applicable, eligibilitycriteria for settings and those delivering the interventionsIn addition to the eligibility criteria that apply to individ-uals, social and psychological intervention trials oftenhave eligibility criteria for the settings where participantswill be recruited and interventions delivered, as well asintervention providers [44]. Authors should describethese criteria to help readers compare the trial contextwith other contexts in which interventions might beused [48, 66, 67].

Item 4b: settings and locations of intervention deliv-ery and where the data were collected Informationabout settings and locations of intervention delivery anddata collection are essential for understanding trial con-text. Important details might include the geographic lo-cation, day and time of trial activities, space required,and features of the inner setting (e.g. implementing

Table 2 Noteworthy changes to CONSORT 2010 items in theCONSORT-SPI 2018 checklist

• Item 6a. The distinction between ‘primary’ versus ‘secondary’outcomes has been removed.

• Item 11. ‘Blinding’ has been changed to ‘Awareness of assignment’and ‘masking’ in the section heading and item wording, respectively.These changes address concerns about the use of the term‘blinding’ as well as the need to emphasise the issue of awarenessof assignment by providers and participants in social andpsychological intervention trials.

• Item 12. The section heading ‘Statistical methods’ has beenchanged to ‘Analytical methods’ because some methods may bequalitative in social and psychological intervention RCTs.

• Item 12a. The distinction between ‘primary’ versus ‘secondary’outcomes has been removed.

• Item 12b. Process evaluations are specifically highlighted.• Item 13a. The distinction between ‘primary’ versus ‘secondary’outcomes has been removed.

• Items 13a and 16. The wording ‘number of participants’ has beenchanged to ‘number’ because the term ‘participants’ is notappropriate for RCTs in which the unit of intervention is ageographic area. While social and psychological interventions maytarget individual participants or groups of individuals such asfamilies or schools, they may also involve place-based techniquesthat target geographic units and examine area-level effects.However, for convenience and consistency with the CONSORT2010 guidance [43], the CONSORT-SPI 2018 checklist and E&E willrefer to the unit targeted by the intervention as ‘participants’,though ‘participants’ throughout this guidance is meant to standfor ‘participating units’, or the unit being targeted by the interven-tion [44], which may include geographic units.

• Item 15. The words ‘clinical and demographic’ have been removedbecause this checklist targets interventions that may not bemedical in nature or have health outcomes, and thus to emphasisethe need to report important baseline characteristics irrespective oftheir nature.

• Item 16. The parenthetical ‘(denominator)’ has been removed. Theterm implied the use of dichotomous outcomes, whereascontinuous outcomes are extremely prevalent in social andpsychological intervention RCTs.

• Item 17a. The distinction between ‘primary’ versus ‘secondary’outcomes has been removed.

• Items 23–25. The section ‘Other Information’ has been changed to‘Important Information’ because consensus meeting participantshad concerns that ‘Other’ makes the requested information appearto be of secondary importance to previous sections.

• Item 25. The phrase ‘such as supply of drugs’ has been removedbecause drug trials are not in the purview of this extension bydefinition.

• Item 26: New item. A new sub-section in ‘Important Information’called ‘Stakeholder Involvement’ has been added because consensusmeeting participants thought such a sub-section would best fit thethree sub-items currently allocated to it

Grant et al. Trials (2018) 19:406 Page 6 of 18

Page 7: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

organisation) and outer setting (e.g. external context andenvironment) that might influence implementation [68].Authors should refer to the mechanism of action whendeciding what information about setting and location toreport.

Methods: interventions

Item 5: the interventions for each group with suffi-cient details to allow replication, including how andwhen they were actually administered Complete andtransparent information about the content and deliveryof all interventions in all groups (experimental and com-parator) [44] is vital for understanding, replicating, andsynthesising intervention effects [54, 69]. Essential infor-mation includes: naming the interventions, what was ac-tually delivered (e.g. materials and procedures), whoprovided the interventions, how, where, when, and howmuch [70]. Details about providers should include their

professional qualifications and education, expertise orcompetence with the interventions or area in general,and training and supervision for delivering the interven-tions [71]. Tables or diagrams showing the sequence ofintervention activities, such as the participant timelinerecommended in the Standard Protocol Items: Recom-mendations for Interventional Trials (SPIRIT) 2013Statement [55], are often useful [72]. Authors shouldavoid the sole use of labels such as ‘treatment as usual’or ‘standard care’ because they are not uniform acrosstime and place [48].

CONSORT-SPI 2018 item 5a: extent to which interventionswere actually delivered by providers and taken up byparticipants as plannedFrequently, interventions are not implemented as planned.Authors should describe the actual delivery by providersand uptake by participants of interventions for all groups,including methods used to ensure or assess whether the

Table 3 Items to report in journal or conference abstracts for social and psychological intervention trials [13]

Section CONSORT Abstract item Relevant CONSORT-SPI item

Title Identification of the study as randomised

Authors Contact details for the corresponding author

Trial design Description of the trial design (e.g. parallel, cluster,noninferiority)

If the unit of random assignment is not the individual, referto CONSORT for Cluster Randomised Trials and report theitems included in its extension for abstracts [8]

Methods

Participants Eligibility criteria for participants and the settings wherethe data were collected

When applicable, eligibility criteria for the setting ofintervention delivery and the eligibility criteria for thepersons who delivered the interventions

Interventions Interventions intended for each group

Objective Specific objective or hypothesis If pre-specified, how the intervention was hypothesisedto work

Outcomes Clearly defined primary outcome for this report

Randomisation How participants were allocated to interventions

Awareness ofassignment

Who was aware of intervention assignment afterallocation (for example, participants, providers,those assessing outcomes), and how any maskingwas done

Results

Number randomlyassigned

Number randomised to each group

Recruitment Trial status

Interventions Extent to which interventions were actually delivered byproviders and taken up by participants as planned

Number analysed Number analysed in each group

Outcomes For the primary outcome, a result for each groupand the estimated effect size and its precision

Harms Important adverse events or side effects

Conclusions General interpretation of the results

Trial registration Registration number and name of trial register

Funding Source of funding

Grant et al. Trials (2018) 19:406 Page 7 of 18

Page 8: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

interventions were delivered by providers and taken up byparticipants as intended [69]. Quantitative or qualitativeprocess evaluations [51] may be used to assess what pro-viders actually did (e.g. recording and coding sessions),the amount of an intervention that participants received(e.g. recording the number of sessions attended), and con-tamination across intervention groups [73, 74]. Authorsshould distinguish planned systematic adaptations (e.g.tailoring) from modifications that were not anticipated inthe trial protocol. When this information cannot be in-cluded in a single manuscript, authors should use onlinesupplements, additional reports, and data repositories toprovide this information.

CONSORT-SPI 2018 item 5b: where other informationalmaterials about delivering the interventions can beaccessedAuthors should indicate where readers can find suffi-cient information to replicate the interventions, such asintervention protocols [75], training manuals [48], orother materials (e.g. worksheets and websites) [54]. Forexample, new online platforms such as the Open Science

Framework allow researchers to share some or all oftheir study materials freely (https://osf.io).

CONSORT-SPI 2018 item 5c: when applicable, howintervention providers were assigned to each groupSome trials assign specific providers to different conditionsto prevent expertise and allegiance from confounding theresults. Authors should report whether the same peopledelivered the experimental and comparator interventions,whether providers were nested within intervention groups,and the number of participants assigned to each provider.

Methods: outcomes

Item 6a: completely define pre-specified outcomes,including how and when they were assessed All out-comes should be defined in sufficient detail for others toreproduce the results using the trial data [50, 76]. Anoutcome definition includes: (1) the domain (e.g. depres-sion), (2) the measure (e.g. the Beck Depression Inven-tory II Cognitive subscale), (3) the specific metric (e.g. avalue at a time point, a change from baseline), (4) the

Table 4 Items to report in the abstract for cluster randomised social and psychological intervention trials [8]

Section CONSORT Abstract item Relevant CONSORT Cluster extension item

Title Identification of the study as randomised Identification of study as cluster randomised

Authors Contact details for the corresponding author

Trial design Description of the trial design (e.g. parallel, cluster,noninferiority)

Methods

Participants Eligibility criteria for participants and the settings wherethe data were collected

Eligibility criteria for clusters

Interventions Interventions intended for each group

Objective Specific objective or hypothesis Whether objective or hypothesis pertains to thecluster level, the individual participant level, or both

Outcomes Clearly defined primary outcome for this report Whether the primary outcome pertains to the cluster level,the individual participant level, or both

Randomisation How participants were allocated to interventions How clusters were allocated to interventions

Awareness of assignmentWho was aware of intervention assignment after allocation(for example, participants, providers, those assessingoutcomes), and how any masking was done

Results

Number randomlyassigned

Number of participants randomised to each group Number of clusters randomised to each group

Recruitment Trial status

Number analysed Number of participants analysed in each group Number of clusters analysed in each group

Outcomes For the primary outcome, a result for each group and theestimated effect size and its precision

Results at the cluster or individual level as applicable foreach primary outcome

Harms Important adverse events or side effects

Conclusions General interpretation of the results

Trial registration Registration number and name of trial register

Funding Source of funding

Grant et al. Trials (2018) 19:406 Page 8 of 18

Page 9: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

method of aggregation (e.g. mean, proportion), and (5)the time point (e.g. 3 months post-intervention) [50]. Inaddition, authors should report the methods and personsused to collect outcome data, properties of measures orreferences to previous reports with this information,methods used to enhance measurement quality (e.g.training of outcome assessors), and any differences inoutcome assessment between trial groups [46]. Authorsalso should indicate where readers can access materialsused to measure outcomes [24]. When a trial includes ameasure (e.g. a questionnaire) that is not available pub-licly, authors should provide a copy (e.g. through an on-line repository or as an online supplement).

Item 6b: any changes to trial outcomes after the trialcommenced, with reasons All outcomes assessed

should be reported. If the reported outcomes differfrom those in the trial registration (Item 23) orprotocol (Item 24), authors should state which out-comes were added and which were removed. To allowreaders to assess the risk of bias from outcomeswitching, authors should also identify any changes tolevel of importance (e.g. primary or secondary) [77].Authors should provide the rationale for any changesmade and state whether these were done before orafter collecting the data.

Methods: sample size

Item 7a: how sample size was determined Authorsshould indicate the intended sample size for the trial andhow it was determined, including whether the sample size

Table 5 Items to report in the main text for cluster randomised social and psychological intervention trials [8]

Section Item # Cluster extension item

Title 1a Identification as a cluster randomised trial in the title

Abstract 1b See Table 4

Introduction

Background and objectives 2a Rationale for using a cluster design

2b Whether objectives pertain to the cluster level, the individual participant level, or both

Methods

Trial design 3a Definition of cluster and description of how the design features apply to the clusters

Participants 4a Eligibility criteria for clusters

Interventions 5 Whether interventions pertain to the cluster level, the individual participant level, or both

Outcomes 6a Whether outcome measures pertain to the cluster level, the individual participant level, or both

Sample size 7a Method of calculation, number of clusters(s) (and whether equal or unequal cluster sizes are assumed),cluster size, a coefficient of intracluster correlation (ICC or k), and an indication of its uncertainty

Randomisation

Sequence generation 8b Details of stratification or matching if used

Allocation concealmentmechanism

9 Specification that allocation was based on clusters rather than individuals and whether allocation concealment(if any) was at the cluster level, the individual participant level, or both

Implementation 10a Who generated the random allocation sequence, who enrolled clusters, and who assigned clusters tointerventions

10b Mechanism by which individual participants were included in clusters for the trial (such as completeenumeration, random sampling)

10c From whom consent was sought (representatives of the cluster, individual cluster members, or both) andwhether consent was sought before or after randomisation

Analytical methods 12a How clustering was taken into account

Results

Participant flow (a diagramis strongly recommended)

13a For each group, the numbers of clusters that were randomly assigned, received the intended treatment, andwere analysed for the primary outcome

13b For each group, losses and exclusions for both clusters and individual cluster members

Baseline data 15 Baseline characteristics for the individual and cluster levels as applicable for each group

Numbers analysed 16 For each group, number of clusters included in each analysis

Outcomes and estimation 17a Results at the individual or cluster level as applicable and a coefficient of intracluster correlation (ICC or k) foreach primary outcome

Generalisability 21 Generalisability to clusters and/or individual participants (as relevant)

Grant et al. Trials (2018) 19:406 Page 9 of 18

Page 10: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

was determined a priori using a sample size calculation ordue to practical constraints. If an a priori sample size cal-culation was conducted, authors should report the effectestimate used for the sample size calculation and why itwas chosen (e.g. the smallest effect size of interest, from ameta-analysis of previous trials). If an a priori sample sizecalculation was not performed, authors should notpresent a post hoc calculation, but rather the genuinereason for the sample size (e.g. limitations in time orfunding) and the actual power to detect an effect foreach result (Item 17).

Item 7b: when applicable, an explanation of any in-terim analyses and stopping guidelines Multiple stat-istical analyses can lead to false-positive results,especially when using stopping guidelines based onstatistical significance. Any interim analyses should bedescribed, including which analyses were conducted(i.e. the outcomes and methods of analysis), whenthey were conducted, and why (particularly whetherthey were pre-specified [78]). Authors should also de-scribe the reasons for stopping the trial, including anyprocedures used to determine whether the trial wouldbe stopped early (e.g. regular meetings of a datasafety monitoring board) [79].

Methods: randomisation—sequence generation

Item 8a: method used to generate the random alloca-tion sequence In a randomised trial, participants areassigned to groups by chance using processes designedto be unpredictable. Authors should describe themethod used to generate the allocation sequence (e.g. acomputer-generated random number sequence), so thatreaders may assess whether the process was truly ran-dom. Authors should not use the term ‘random’ to de-scribe sequences that are deterministic (e.g. alternation,order of recruitment, date of birth).

Item 8b: type of randomisation; details of any restric-tion (such as blocking and block size) Some trials re-strict randomisation to balance groups in size orimportant characteristics. Blocking restricts randomisationby grouping participants into 'blocks' and by assigningparticipants using a random sequence within each block[46]. When blocking is used, authors should describe howthe blocks were generated, the size of the blocks, whetherand how block size varied, and if trial staff became awareof the block size. Stratification restricts randomisation bycreating multiple random allocation sequences based onsite or characteristics thought to modify intervention ef-fects [46]. When stratification is used, authors should re-port why it was used and describe the variables used forstratification, including cut-off values for categories within

each stratum. When minimisation is used, authors shouldreport the variables used for minimisation and include thestatistical code. When there are no restrictions on ran-domisation, authors should state that they used ‘simplerandomisation’ [43].

Methods: randomisation—allocation concealmentmechanism

Item 9: mechanism used to implement the randomallocation sequence, describing any steps taken toconceal the sequence until interventions wereassigned In addition to generating a truly random se-quence (Item 8a), researchers should conceal the sequenceto prevent foreknowledge of the intervention assignmentby persons enrolling and assigning participants. Other-wise, recruitment and allocation could be affected byknowledge of the next assignment. Authors should reportwhether and how allocation was concealed [80, 81]. Whenallocation was concealed, authors should describe themechanism and how this mechanism was monitored toavoid tampering or subversion (e.g. centralised or 'third--party' assignment, automated assignment system, sequen-tially numbered identical containers, sealed opaqueenvelopes). While masking (blinding) is not always pos-sible, allocation concealment is always possible.

Methods: randomisation—implementation

Item 10: who generated the random allocation se-quence, who enrolled participants, and who assignedparticipants to interventions In many individually ran-domised trials, staff who generate and conceal the randomsequence are different from the staff involved in imple-menting the sequence. This can prevent tampering or sub-version [48]. Other procedures may be used to ensure truerandomisation in trials in which participants (e.g. groups,places) are recruited and then randomised at the sametime. Authors should indicate who carried out each pro-cedure (i.e. generating the random sequence, enrollingparticipants, and assigning participants to interventions)and the methods used to protect the sequence.

Methods: awareness of assignment

Item 11a: who was aware after assignment to inter-ventions (for example, participants, providers, thoseassessing outcomes), and how any masking was doneMasking (blinding) refers to withholding informationabout assigned interventions post-randomisation fromthose involved in the trial [46]. Masking can reducethreats to internal validity arising from an awareness ofthe intervention assignment by those who could be in-fluenced by this knowledge. Authors should state

Grant et al. Trials (2018) 19:406 Page 10 of 18

Page 11: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

whether and how (a) participants, (b) providers, (c)data collectors, and (d) data analysts were kept un-aware of intervention assignment. If masking was notdone (e.g. because it was not possible), authors shoulddescribe the methods, if any, used to assess performanceand expectancy biases (e.g. masking trial hypotheses,measuring participant expectations) [82]. Althoughmasking of providers and participants is often notpossible, masking outcome assessors is usually possible,even for outcomes assessed through interviews or obser-vations. If examined, authors should report the extent towhich outcome assessors remained masked to partici-pants’ intervention status.

Item 11b: if relevant, description of the similarity ofinterventions Particularly because masking providersand participants is impossible in many social and psy-chological intervention trials, authors should describeany differences between interventions delivered to eachgroup that could lead to differences in the performanceand expectations of providers and participants. Import-ant details include differences in intervention compo-nents and acceptability, co-interventions (or adjunctiveinterventions) that might be available to some groupsand not others, and contextual differences betweengroups (e.g. differences in place of delivery).

Methods: analytical methods

Item 12a: statistical methods used to compare groupoutcomes Complete statistical reporting allows the readerto understand the results and to reproduce analyses [48].For each outcome, authors should describe the methods ofanalysis, including transformations and adjustment for co-variates, and whether the methods of analysis were chosena priori or decided after data were collected. In the UnitedStates, trials funded by the National Institutes of Healthmust deposit a statistical analysis plan on www.ClinicalTrials.gov with their results [62, 63]. Authors with otherfunding sources should ascertain whether there are similarrequirements. For cluster randomised trials, authors shouldstate whether the unit analysed differs from the unit of as-signment, and if applicable, the analytical methods used toaccount for differences between the unit of assignment,level of intervention, and the unit of analysis [8, 44, 46]. Au-thors should also note any procedures and rationale for anytransformations to the data [46]. To facilitate full reprodu-cibility, authors should report software used to run analysesand provide the exact statistical code [24].

Extended CONSORT-SPI item 12a: how missing data werehandled, with details of any imputation methodMissing data are common in trials of social andpsychological interventions for many reasons, such as

participant discontinuation, missed visits, and participantfailure to complete all items or measures (even forparticipants who have not discontinued the trial) [46].Authors should report the amount of missing data,evidence regarding the reasons for missingness, andassumptions underlying judgements about missingness(e.g. missing at random) [83]. For each outcome, au-thors should describe the analysis population (i.e. par-ticipants who were eligible to be included in theanalysis) and the methods for handling missing data,including procedures to account for missing partici-pants (i.e. participants who withdrew from the trial,did not complete an assessment, or otherwise did notprovide data) and procedures to account for missingdata items (i.e. questions that were not completed ona questionnaire) [76]. Imputation methods, which aimto estimate missing data based on other data in thedataset, can influence trial results [84]. When imput-ation is used, authors should describe the variablesused for imputation, the number of imputations per-formed, the software procedures for executing the im-putations, and the results of any sensitivity analysesconducted to test assumptions about missing data[84]. For example, it is often helpful to report resultswithout imputation to help readers evaluate the con-sequences of imputing data.

Item 12b: methods for additional analyses, such assubgroup analyses, adjusted analyses, and processevaluations In addition to analysing impacts on primaryand secondary outcomes, trials often include additionalanalyses, such as subgroup analyses and mediation ana-lyses to investigate processes of change [51, 85]. All ana-lyses should be reported at the same level of detail.Authors should indicate which subgroup analyses werespecified a priori in the trial registration or protocol(Items 23 and 24), how subgroups were constructed, anddistinguish confirmatory analyses from exploratory ana-lyses. For adjusted analyses, authors should report thestatistical procedures and covariates used and the ration-ale for these.Additionally, qualitative analyses may be used to inves-

tigate processes of change, implementation processes,contextual influences, and unanticipated outcomes [51].Authors should indicate whether such analyses wereundertaken or are planned (and where they are or willbe reported if so). Authors should report methods andresults of qualitative analyses according to reportingstandards for primary qualitative research [86].

Results: participant flowItem 13a: for each group, the numbers randomlyassigned, receiving intended treatment, and analysedfor the outcomes

Grant et al. Trials (2018) 19:406 Page 11 of 18

Page 12: CONSORT-SPI 2018 Explanation and Elaboration: guidance for ......Background: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers

CONSORT-SPI item 13a: where possible, the numberapproached, screened, and eligible prior to randomassignment, with reasons for non-enrolmentAttrition after randomisation can affect internal validity(i.e. by introducing selection bias), and attrition beforeor after randomisation can affect generalisability [46].Authors should report available information about thetotal number of participants at each stage of the trial,with reasons for non-enrolment (i.e. before randomisa-tion) or discontinuation (i.e. after randomisation). Keystages typically include: approaching participants,screening for potential eligibility, assessment to confirmeligibility, random assignment, intervention receipt, andoutcome assessment. As there may be delays betweeneach stage (e.g. between randomisation and initiation ofthe intervention) [87], authors should include a flow dia-gram to describe trial attrition in relation to each ofthese key stages (Fig. 1; Additional file 4)

Item 13b: for each group, losses and exclusions afterrandomisation, together with reasons Authors shouldreport participant attrition and data exclusion by the re-search team for each randomised group at each follow-uppoint [48]. Authors should distinguish between the numberof participants who deviate from the intervention protocolbut continue to receive an intervention, discontinue anintervention but continue to provide outcome data,discontinue the trial altogether, and were excluded by theinvestigators. Authors should provide reasons for each loss(e.g. lost contact, died) and exclusion (e.g. excluded by theinvestigators because of poor adherence to interventionprotocol), and indicate the number of persons who discon-tinued for unknown reasons.

Results: recruitment

Item 14a: dates defining the periods of recruitmentand follow-up The dates of a trial and its activities pro-vide readers some information about the historical contextof the trial [48]. The SPIRIT 2013 Statement includes atable that authors can use to provide a complete scheduleof trial activities, including recruitment practices,pre-randomisation assessments, periods of interventiondelivery, a schedule of post-randomisation assessments,and when the trial was stopped [55]. In the description,authors should define baseline assessment and follow-uptimes relative to randomisation. For example, by itself,‘4-week follow-up’ is unclear and could mean differentthings if meant after randomisation or after the end of anintervention.

Item 14b: why the trial ended or was stopped Authorsshould state why the trial was stopped. Trials might bestopped for reasons decided a priori (e.g. sample size

reached and predetermined follow-up period completed)or in response to the results. For trials stopped early inresponse to interim analyses (Item 7b), authors shouldstate the reason for stopping (e.g. for safety or futility)and whether the stopping rule was decided a priori. Ifapplicable, authors should describe other reasons forstopping, such as implementation challenges (e.g. couldnot recruit enough participants) or extrinsic factors (e.g.a natural disaster). Authors should indicate whetherthere are plans to continue collecting outcome data (e.g.long-term follow-up).

Results: baseline data

Item 15: a table showing baseline characteristics foreach group

CONSORT-SPI item 15: include socioeconomic variableswhere applicableAuthors should provide a table summarising all data col-lected at baseline, with descriptive statistics for each ran-domised group. This table should include all importantcharacteristics measured at baseline, includingpre-intervention data on trial outcomes, and potentialprognostic variables. Authors should pay particular at-tention to topic-specific information related to socio-economic and other inequalities [88–90]. Forcontinuous variables, authors should report the aver-age value and its variance (e.g. mean and standarddeviation). For categorical variables, authors shouldreport the numerator and denominator for each cat-egory. Authors should not use standard errors andconfidence intervals for baseline data because theseare inferential (rather than descriptive): inferential sta-tistics assess the probability that observed differencesoccurred by chance, and all baseline differences inrandomised trials occur by chance [91].

Results: numbers analysed

Item 16: for each group, number included in each analysis and whether the analysis was by original assigned groups While a flow diagram is helpful for indicating the number of participants at each trial stage, the number of participants included in each analysis often differs across outcomes and analyses [44]. Authors should report the number of participants per intervention group for each analysis, so readers can interpret the results and perform secondary analyses of the data. For each outcome, authors should also identify the analysis population and the method used for handling missing data (Item 12a) [76].


Results: outcomes and estimation

Item 17a: for each outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval) For each outcome in a trial, authors should report summary results for all analyses, including results for each trial group and the contrast between groups, the estimated magnitude of the difference (effect size), the precision or uncertainty of the estimate (e.g. 95% confidence interval or CI), and the number of people included in the analysis in each group. The p value does not describe the precision of an effect estimate, and authors should report precision even if the difference between groups is not statistically significant [46]. For categorical outcomes, summary results for each analysis should include the number of participants with the event of interest. The effect size can be expressed as the risk ratio, odds ratio, or risk difference and its precision (e.g. 95% CI). For continuous outcomes, summary results for each analysis should include the average value and its variance (e.g. mean and standard error). The effect size is usually expressed as the mean difference and its precision (e.g. 95% CI). Summary results are often more clearly presented in a table rather than narratively in text.
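
By way of illustration only (not drawn from any trial; the data and numbers are hypothetical), the summary results recommended above for a continuous outcome, namely per-group means, the mean difference, and a 95% confidence interval, could be computed as in the following Python sketch, which uses a simple normal approximation.

    # Hypothetical sketch: effect estimate and precision for a continuous outcome.
    # A normal approximation (1.96 * SE) is used purely for illustration.
    import math
    from statistics import mean, stdev

    intervention = [12.1, 9.8, 14.3, 11.0, 13.5, 10.2]
    comparator = [9.4, 8.1, 10.9, 7.6, 11.2, 9.0]

    m1, m0 = mean(intervention), mean(comparator)
    se = math.sqrt(stdev(intervention) ** 2 / len(intervention)
                   + stdev(comparator) ** 2 / len(comparator))
    diff = m1 - m0
    ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

    print(f"Intervention mean {m1:.2f}, comparator mean {m0:.2f}")
    print(f"Mean difference {diff:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}); "
          f"n = {len(intervention)} vs {len(comparator)}")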

CONSORT-SPI item 17a: indicate availability of trial data
As part of the growing open-science movement, triallists are increasingly expected to maintain their datasets, linked via trial registrations and posted in trusted online repositories (see http://www.bitss.org/resource-tag/data-repository/), to facilitate reproducibility of reported analyses and future secondary data analyses. Data sharing is also associated with higher citation rates [92]. Authors should indicate whether and how to obtain trial datasets, including any metadata and analytic code needed to replicate the reported analyses [24, 93]. Any legal or ethical restrictions on making the trial data available should be described [93].

Item 17b: for binary outcomes, presentation of both absolute and relative effect sizes is recommended By themselves, neither relative measures nor absolute measures provide comprehensive information about intervention effects. Authors should report relative effect sizes (e.g. risk ratios) to express the strength of effects and absolute effect sizes (e.g. risk differences) to indicate actual differences in events between interventions [94].
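
As a worked illustration only (the event counts are hypothetical), the relative and absolute effect sizes described above can be derived from the same counts, as in the following Python sketch; reporting both gives readers the strength of the effect and the actual difference in events.

    # Hypothetical sketch: relative and absolute effect sizes for a binary outcome.
    # The event counts are illustrative only.
    events_intervention, n_intervention = 30, 200  # 15% risk of the event
    events_comparator, n_comparator = 60, 200      # 30% risk of the event

    risk_i = events_intervention / n_intervention
    risk_c = events_comparator / n_comparator

    risk_ratio = risk_i / risk_c       # relative effect size
    risk_difference = risk_i - risk_c  # absolute effect size

    print(f"Risk ratio {risk_ratio:.2f}; risk difference {risk_difference:.2f} "
          f"({abs(risk_difference) * 100:.0f} fewer events per 100 participants)")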

Results: ancillary analyses

Item 18: results of any other analyses performed, including subgroup analyses, adjusted analyses, and process evaluations, distinguishing pre-specified from exploratory Authors should report the results for each additional analysis described in the methods (Item 12b), indicating the number of analyses performed for each outcome, which analyses were pre-specified, and which were not. When evaluating effects for subgroups, authors should report interaction effects or other appropriate tests for heterogeneity between groups, including the estimated difference in the intervention effect between subgroups with confidence intervals. Comparing tests for the significance of change within subgroups is not an appropriate basis for evaluating differences between subgroups. If reporting adjusted analyses, authors should also provide unadjusted results. Authors reporting any results from qualitative data analyses should follow reporting standards for qualitative research [86], though adequately reporting these findings will likely require more than one journal article [51].
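
For illustration only (the effect estimates and standard errors are hypothetical), the subgroup comparison described above, reporting the estimated difference in intervention effect between subgroups with a confidence interval rather than comparing within-subgroup significance tests, might look like the following Python sketch.

    # Hypothetical sketch: comparing an intervention effect between two subgroups.
    # The effect estimates and standard errors are illustrative only.
    import math

    effect_a, se_a = 4.0, 1.2  # estimated intervention effect and SE in subgroup A
    effect_b, se_b = 1.5, 1.4  # estimated intervention effect and SE in subgroup B

    # Report the estimated difference in effects (interaction) and its 95% CI,
    # not whether each subgroup's effect is separately 'significant'.
    interaction = effect_a - effect_b
    se_interaction = math.sqrt(se_a ** 2 + se_b ** 2)
    ci_low = interaction - 1.96 * se_interaction
    ci_high = interaction + 1.96 * se_interaction

    print(f"Difference in intervention effect between subgroups: {interaction:.2f} "
          f"(95% CI {ci_low:.2f} to {ci_high:.2f})")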

Results: harms

Item 19: all important harms or unintended effects in each group (for specific guidance, see CONSORT for Harms) [14] Social and psychological interventions have the potential to produce unintended effects, both harmful and beneficial [95]. These may be identified in the protocol and relate to the theory of how the interventions are hypothesised to work (Item 2b) [56], or they may be unexpected events that were not pre-specified for assessment. Harms may include indirect effects, such as increased inequalities at the level of groups or places that result from the intervention [89]. When reporting quantitative data on unintended effects, authors should indicate how they were defined and measured, and the frequency of each event per trial group. Authors should report all results from qualitative investigations that identify possible unintended effects because this information may help readers make informed decisions about using interventions in future research and practice.

Discussion: limitations

Item 20: trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses Authors should provide a balanced discussion of the strengths and limitations of the trial and its results. Authors should consider issues related to risks of bias, precision of effect estimates, the use of multiple outcomes and analyses, and whether the intervention was delivered and taken up as planned.

Discussion: generalisability

Item 21: generalisability (external validity, applicability) of the trial findings Authors should address generalisability, or the extent to which the authors believe that trial results can be expected in other situations [66]. Authors should explain how statements about generalisability relate to the trial design and execution. Key factors to consider discussing include: recruitment practices, eligibility criteria, sample characteristics, facilitators and barriers to intervention implementation, the choice of comparator, what outcomes were assessed and how, length of follow-up, and setting characteristics [46, 66].

Discussion: interpretation

Item 22: interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence Authors should provide a brief interpretation of findings in light of the trial's objectives or hypotheses [46]. Authors may wish to discuss plausible alternative explanations for results other than differences in effects between interventions [44]. Authors should contextualise results and identify the additional knowledge gained by discussing how the trial adds to the results of other relevant literature [96], including references to previous trials and systematic reviews. If theory was used to inform intervention development or evaluation (Item 2b), authors should discuss how the results of the trial compare with previous theories about how the interventions would work [97, 98]. Authors should consider describing the practical significance of findings; the potential implications of findings for theory, practice, and policy; and specific areas of future research to address gaps in current knowledge [49]. Authors should avoid distorted presentation or 'spin' when discussing trial findings [99, 100].

Important information: registration

Item 23: registration number and name of trial registry Trial registration is the posting of a minimum information set in a public database, including: eligibility criteria, all outcomes, intervention protocols, and planned analyses [101]. Trial registration aids systematic reviews and meta-analyses, and responds to decades-long calls to prevent reporting biases [102–104]. Trial registration is now required for all trials published by journals that endorse the International Committee of Medical Journal Editors guidelines and for all trials funded by the National Institutes of Health in the United States as well as the Medical Research Council and National Institute for Health Research in the UK [62, 63, 105, 106]. Trials should be registered prospectively, before beginning enrolment, normally in a publicly accessible website managed by a registry conforming to established standards [101, 105]. Authors should report the name of the trial registry, the unique identification number for the trial provided by that registry, and the stage at which the trial was registered. If authors did not register their trial, they should report this and the reason for not registering. Registries used in clinical medicine (e.g. www.ClinicalTrials.gov) are suitable for social and psychological intervention trials with health outcomes [107, 108], and several registries exist specifically for social and psychological interventions (http://www.bitss.org/resource-tag/registry/).

Important information: protocol

Item 24: where the full trial protocol can be accessed, if available Details about trial design should be described in a publicly accessible protocol (e.g. published manuscript, report in a repository) that includes a record of all amendments made after the trial began. Authors should report where the trial protocol can be accessed. Guidance on developing and reporting protocols has recently been published [55]. Authors of social and psychological intervention trials who face difficulty finding a journal that publishes trial protocols could search for journals supporting the Registered Reports format (https://cos.io/rr/) or upload their trial protocols to relevant preprint servers such as PsyArXiv (https://psyarxiv.com/) and SocArXiv (https://osf.io/preprints/socarxiv).

Important information: funding

Item 25a: sources of funding and other support, role of funders Information about trial funding and support is important in helping readers to identify potential conflicts of interest. Authors should identify and describe all sources of monetary or material support for the trial, including salary support for trial investigators and resources provided or donated for any phase of the trial (e.g. space, intervention materials, assessment tools). Authors should report the name of the persons or entities supported, the name of the funder, and the award number. They should also specifically state if these sources had any role in the design, conduct, analysis, and reporting of the trial, and the nature of any involvement or influence. If funders had no involvement or influence, authors should specifically report this.

CONSORT-SPI item 25b: declaration of any other potential interests
In addition to financial interests, it is important that authors declare any other potential interests that may be perceived to influence the design, conduct, analysis, or reporting of the trial following established criteria [109]. Examples include allegiance to or professional training in evaluated interventions. Authors should err on the side of caution in declaring potential interests. If authors do not have any financial, professional, personal, or other potential interests to declare, they should declare this explicitly.

Important information: stakeholder involvement

CONSORT-SPI new item – Item 26a: any involvement of the intervention developer in the design, conduct, analysis, or reporting of the trial
Intervention developers are often authors of trial reports. Because involvement of intervention developers in trials may be associated with effect sizes [110, 111], authors should report whether intervention developers were involved in designing the trial, delivering the intervention, assessing the outcomes, or interpreting the data. Authors should also disclose close collaborations with the intervention developers (e.g. being a former student of the developer, serving on an advisory or consultancy board related to the intervention), and any legal or intellectual rights related to the interventions, especially if these could lead to future financial interests.

CONSORT-SPI new item – Item 26b: other stakeholder involvement in trial design, conduct, or analyses
Researchers are increasingly called to consult or collaborate with those who have a direct interest in the results of trials, such as providers, clients, and payers [112]. Stakeholders may be involved in designing trials (e.g. choosing outcomes) [76], delivering interventions, or interpreting trial results. Stakeholder involvement may help to better ensure the acceptability, implementability, and sustainability of interventions as they move from research to real-world settings [113]. When applicable, authors should describe which stakeholders were involved, how they were recruited, and how they were involved in various stages of the trial [114]. Authors may find reporting standards on public involvement in research useful [115].

CONSORT-SPI new item – Item 26c: incentives offered as part of the trial
Incentives offered to participants, providers, organisations, and others involved in a trial can influence recruitment, engagement with the interventions, and quality of intervention delivery [116]. Incentives include monetary compensation, gifts (e.g. meals, transportation, access to services), academic credit, and coercion (e.g. prison diversion) [48]. When incentives are used, authors should make clear at what trial stage and for what purpose incentives are offered, and what these incentives entail. Authors should also state whether incentives differ by trial group, such as compensation for participants receiving the experimental rather than the comparator interventions.

Conclusions
The results of RCTs are of optimal use when authors report their methods and results accurately, completely, and transparently. The CONSORT-SPI 2018 checklist can help researchers to design and report future trials, and provide guidance to peer reviewers and editors for evaluating manuscripts, to funders in setting reporting criteria for grant applications, and to educators in teaching trial methods. Each item should be addressed before or within the main trial paper (e.g. in the text, as an online supplement, or by reference to a previous report). The level of detail required for some checklist items will depend on the nature of the intervention being evaluated [70], the trial phase [117], and whether the trial involves an evaluation of process or implementation [74]. CONSORT-SPI 2018, like all CONSORT guidance, is an evolving document with a continuous and iterative process of assessment and refinement over time. The authors welcome feedback about the checklist and this E&E document, particularly as new evidence in this area and greater experience with this guidance develop. For instance, interested readers can provide feedback on whether some of the new items in CONSORT-SPI 2018 that are applicable to other types of trials (e.g. handling missing data and availability of trial data) should be incorporated into the next update of the main CONSORT Statement. We also encourage journals in the behavioural and social sciences to join the hundreds of medical journals that endorse CONSORT guidelines, and to inform us of such endorsement. The ultimate benefit of this collective effort should be better practices leading to better health and quality of life.

Additional files

Additional file 1: Table S1. The CONSORT-SPI group. (DOCX 68 kb)

Additional file 2: Table S2. Examples of information to include when reporting randomised trials of social and psychological interventions. (DOCX 132 kb)

Additional file 3: Table S3. Examples of abstracts adherent to the CONSORT-SPI Extension for Abstracts. (DOCX 73 kb)

Additional file 4: Figure S1. Example of a participant flow diagram. (DOCX 111 kb)

Abbreviations
CI: Confidence interval; CONSORT: Consolidated Standards of Reporting Trials; CONSORT-SPI: Consolidated Standards of Reporting Trials for Social and Psychological Interventions; E&E: Explanation and Elaboration; RCT: Randomised controlled trial; SPIRIT: Standard Protocol Items: Recommendations for Interventional Trials; UK: United Kingdom; US: United States

Acknowledgements
We thank all of the stakeholders who have provided their input on the project.
CONSORT-SPI Group: J. Lawrence Aber (Willner Family Professor of Psychology and Public Policy and University Professor, Steinhardt School of Culture, Education, and Human Development, New York University), Doug Altman (Professor of Statistics in Medicine and Director of the UK EQUATOR Centre, Centre for Statistics in Medicine, University of Oxford), Kamaldeep Bhui (Professor of Cultural Psychiatry & Epidemiology, Wolfson Institute of Preventive Medicine, Queen Mary University of London), Andrew Booth (Reader in Evidence Based Information Practice and Director of Information, Information Resources, University of Sheffield), David Clark (Professor and Chair of Experimental Psychology, Experimental Psychology, University of Oxford), Peter Craig (Senior Research Fellow, MRC/CSO Social and Public Health Sciences Unit, Institute of Health & Wellbeing, University of Glasgow), Manuel Eisner (Wolfson Professor of Criminology, Institute of Criminology, University of Cambridge), Mark W. Fraser (John A. Tate Distinguished Professor for Children in Need, School of Social Work, The University of North Carolina at Chapel Hill), Frances Gardner (Professor of Child and Family Psychology, Department of Social Policy and Intervention, University of Oxford), Sean Grant (Behavioural & Social Scientist, Behavioural & Policy Sciences, RAND Corporation), Larry Hedges (Board of Trustees Professor of Statistics and Education and Social Policy, Institute for Policy Research, Northwestern University), Steve Hollon (Gertrude Conaway Vanderbilt Professor of Psychology, Psychological Sciences, Vanderbilt University), Sally Hopewell (Associate Professor, Oxford Clinical Trials Research Unit, University of Oxford), Robert Kaplan (Professor Emeritus, Department of Health Policy and Management, Fielding School of Public Health, University of California, Los Angeles), Peter Kaufmann (Leader, Behavioural Medicine and Prevention Research Group, National Heart, Lung, & Blood Institute, National Institutes of Health), Spyros Konstantopoulos (Professor, Department of Counselling, Educational Psychology, and Special Education, Michigan State University), Geraldine Macdonald (Professor of Social Work, School for Policy Studies, University of Bristol), Evan Mayo-Wilson (Assistant Scientist, Division of Clinical Trials and Evidence Synthesis, Department of Epidemiology, Johns Hopkins University), Kenneth McLeroy (Regents & Distinguished Professor, Health Promotion and Community Health Sciences, Texas A&M University), Susan Michie (Professor of Health Psychology and Director of the Centre for Behaviour Change, University College London), Brian Mittman (Research Scientist, Department of Research and Evaluation, Division of Health Services Research and Implementation Science, Kaiser Permanente), David Moher (Senior Scientist, Clinical Epidemiology Program, Ottawa Hospital Research Institute), Paul Montgomery (Professor of Social Intervention, Department of Social Work and Social Care, University of Birmingham), Arthur Nezu (Distinguished University Professor of Psychology, Department of Psychology, Drexel University), Lawrence Sherman (Director of the Jerry Lee Centre for Experimental Criminology and Chair of the Cambridge Police Executive Programme, Institute of Criminology, University of Cambridge), Edmund Sonuga-Barke (Professor of Developmental Psychology, Psychiatry, & Neuroscience, Institute of Psychiatry, Kings College London), James Thomas (Professor of Social Research and Policy, UCL Institute of Education, University College London), Gary VandenBos (Executive Director, Office of Publications and Databases, American Psychological Association), Elizabeth Waters (Jack Brockhoff Chair of Child Public Health, McCaughey VicHealth Centre for Community Wellbeing, Melbourne School of Population & Global Health, University of Melbourne), Robert West (Professor of Health Psychology and Director of Tobacco Studies, Health Behaviour Research Centre, Department of Epidemiology and Public Health) and Joanne Yaffe (Professor, College of Social Work, University of Utah).

Funding
This project is funded by the UK Economic and Social Research Council (ES/K00087X/1).

Availability of data and materials
Methodological protocols, data collection materials, and data from the Delphi process and consensus meeting can be requested from ReShare, the UK Data Service's online data repository (doi: https://doi.org/10.5255/UKDA-SN-851981).

Authors' contributions
SG, EMW, and PM conceived of the idea for the project and contributed equally to this manuscript. SG, EMW, PM, GM, SM, SH, and DM led the project and wrote the first draft of the manuscript. All authors have contributed to and approved the final manuscript.

Ethics approval and consent to participate
Ethics approval was obtained from the Department Research Ethics Committee for the Department of Social Policy and Intervention, University of Oxford (reference 2011-12_83). Informed consent was obtained from all participants.

Consent for publication
Not applicable.

Competing interests
SG's spouse is a salaried employee of Eli Lilly and Company, and owns stock. SG has accompanied his spouse on company-sponsored travel. SH and DM are members of the CONSORT Group. All other authors declare no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details
1 Behavioral & Policy Sciences, RAND Corporation, 1776 Main Street, Santa Monica, CA 90407-2138, USA. 2 Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, 615 North Wolfe Street, E6036, Baltimore, MD 21205, USA. 3 School of Social Policy, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. 4 School for Policy Studies, 8 Priory Road, Bristol BS8 1TZ, UK. 5 Department of Clinical, Educational and Health Psychology, Centre for Behaviour Change, University College London, London WC1E 7HB, UK. 6 Oxford Clinical Trials Research Unit, Nuffield Department of Orthopaedics, Rheumatology, and Musculoskeletal Sciences, University of Oxford, Botnar Research Centre, Windmill Road, Oxford OX3 7LD, UK. 7 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON K1H 8L6, Canada.

Received: 17 January 2018 Accepted: 8 June 2018

References
1. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276:637–9.
2. Moher D, Jones A, Lepage L, for the CONSORT Group. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285:1992–5.
3. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263–7.
4. Turner L, Shamseer L, Altman DG, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012;11:MR000030.
5. Devereaux PJ, Manns BJ, Ghali WA, Quan H, Guyatt GH. The reporting of methodological factors in randomized controlled trials and the association with a journal policy to promote adherence to the consolidated standards of reporting trials (CONSORT) checklist. Control Clin Trials. 2002;23:380–8.
6. Shamseer L, Hopewell S, Altman DG, Moher D, Schulz KF. Update on the endorsement of CONSORT by high impact factor journals: a survey of journal ‘instructions to authors’ in 2014. Trials. 2016;17(1):301.
7. Hopewell S, Altman DG, Moher D, Schulz KF. Endorsement of the CONSORT statement by high impact factor medical journals: a survey of journal editors and journal ‘Instructions to Authors’. Trials. 2008;9:20.
8. Campbell MK, Piaggio G, Elbourne DR, Altman DG. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661.
9. Piaggio G, Elbourne DR, Pocock SJ, Evans SJW, Altman DG, for the CONSORT Group. Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA. 2012;308(24):2594–604.
10. Zwarenstein M, Treweek S, Gagnier JJ, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
11. Vohra S, Shamseer L, Sampson M, et al. CONSORT extension for reporting N-of-1 trials (CENT) 2015 statement. BMJ. 2015;350:h1738.


12. Calvert M, Blazeby J, Altman DG, et al. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA. 2013;309(8):814–22.
13. Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet. 2008;371:281–3.
14. Ioannidis JP, Evans SJ, Gotzsche PC, et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med. 2004;141:781–8.
15. Boutron I, Moher D, Altman DG, Schulz K, Ravaud P, for the CONSORT group. Methods and processes of the CONSORT group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med. 2008;148(4):W60–6.
16. Gagnier JJ, Boon H, Rochon P, Moher D, Barnes J, Bombardier C. Reporting randomized, controlled trials of herbal interventions: an elaborated CONSORT statement. Ann Intern Med. 2006;144:364–7.
17. MacPherson H, Altman DG, Hammerschlag R, et al. Revised STandards for reporting interventions in clinical trials of acupuncture (STRICTA): extending the CONSORT statement. PLoS Med. 2010;7(6):e1000261.
18. Grant SP, Mayo-Wilson E, Melendez-Torres GJ, Montgomery P. Reporting quality of social and psychological intervention trials: a systematic review of reporting guidelines and trial publications. PLoS One. 2013;8(5):e65442.
19. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.
20. Michie S, Fixsen D, Grimshaw J, Eccles M. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:40.
21. Mayo-Wilson E. Reporting implementation in randomized trials: proposed additions to the consolidated standards of reporting trials statement. Am J Public Health. 2007;97(4):630–3.
22. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ. 2008;336(7659):1472–4.
23. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716.
24. Nosek BA, Alter G, Banks GC, et al. Promoting an open research culture. Science. 2015;348:1422–5.
25. McNutt M. Reproducibility. Science. 2014;343(6168):229.
26. Open Science Collaboration. An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspect Psychol Sci. 2012;7(6):657–60.
27. Tajika A, Ogawa Y, Takeshima N, Hayasaka Y, Furukawa TA. Replication and contradiction of highly cited research papers in psychiatry: 10-year follow-up. Br J Psychiatry. 2015; Available online July 2015: https://doi.org/10.1192/bjp.bp.1113.143701.
28. Michie S, Wood CE, Johnston M, Abraham C, Francis JJ, Hardeman W. Behaviour change techniques: the development and evaluation of a taxonomic method for reporting and describing behaviour change interventions (a suite of five studies involving consensus methods, randomised controlled trials and analysis of qualitative data). Health Technol Assess. 2015;19(99):1–187.
29. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76.
30. Moher D, Glasziou P, Chalmers I, et al. Increasing value and reducing waste in biomedical research: Who’s listening? Lancet. 2015; Available online 27 September 2015: doi: 10.1016/S0140-6736(1015)00307-00304.
31. Driessen E, Hollon SD, Bockting CLH, Cuijpers P, Turner EH. Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials. PLoS One. 2015;10(9):e0137864.
32. Ioannidis JP, Munafo MR, Fusar-Poli P, Nosek BA, David SP. Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention. Trends Cogn Sci. 2014;18(5):235–41.
33. Cybulski L, Mayo-Wilson E, Grant S. Improving transparency and reproducibility through registration: the status of intervention trials published in clinical psychology journals. J Consult Clin Psychol. 2016;84(9):753.
34. Mayo-Wilson E, Grant S, Hopewell S, Macdonald G, Moher D, Montgomery P. Developing a reporting guideline for social and psychological intervention trials. Trials. 2013;14(1):242.
35. Fraser MW, Galinsky MJ, Richman JM, Day SH. Intervention research: developing social programs. New York: Oxford University Press; 2009.
36. Medical Research Council. Developing and evaluating complex interventions: new guidance. London: Medical Research Council; 2008. www.mrc.ac.uk/documents/pdf/complex-interventions-guidance.
37. Grant S. Development of a CONSORT extension for social and psychological interventions. Oxford: Social Policy & Intervention, University of Oxford; 2014.
38. Institute of Medicine. Psychosocial interventions for mental and substance use disorders: a framework for establishing evidence-based standards. Washington, DC: The National Academies Press; 2015.
39. Riley W. New NIH Clinical Trials Policies: Implications for Behavioral and Social Science Researchers. 2016; https://obssr.od.nih.gov/new-nih-clinical-trials-policies-implications-for-behavioral-and-social-science-researchers/.
40. Möhler R, Köpke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16:204.
41. Montgomery P, Grant S, Hopewell S, et al. Protocol for CONSORT-SPI: an extension for social and psychological interventions. Implement Sci. 2013;8(1):99.

42. Montgomery P, Grant S, Mayo-Wilson E, et al. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2017 Extension. TRLS-D-18-00052.
43. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.
44. Des Jarlais DC, Lyles C, Crepaz N, the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94(3):361–6.
45. Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.
46. Cooper H. Reporting research in psychology: how to meet journal article reporting standards. Washington, DC: APA; 2011.
47. Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med. 2008;5(1):e20.
48. Davidson KW, Goldstein M, Kaplan RM, et al. Evidence-based behavioral medicine: what is it and how do we achieve it? Ann Behav Med. 2003;26(3):161–71.
49. Moberg-Mogren E, Nelson DL. Evaluating the quality of reporting occupational therapy randomized controlled trials by expanding the CONSORT criteria. Am J Occup Ther. 2006;60:226–35.
50. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database—update and key issues. N Engl J Med. 2011;364(9):852–60.
51. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.
52. Breuer E, Lee L, De Silva M, Lund C. Using theory of change to design and evaluate public health interventions: a systematic review. Implement Sci. 2016;11(1):63.
53. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.
54. Abraham C, Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol. 2007;27:379–87.
55. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–7.
56. Bonell C, Jamal F, Melendez-Torres GJ, Cummins S. ‘Dark logic’: theorising the harmful consequences of public health interventions. J Epidemiol Community Health. 2015;69(1):95–8.
57. American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educ Res. 2006;35(6):33–40.
58. Goodman SN, Fanelli D, Ioannidis JPA. What does research reproducibility mean? Sci Transl Med. 2016;8(341):341.
59. Piaggio G, Elbourne DR, Altman DG, Pocock SJ, Evans SJW, for the CONSORT Group. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA. 2006;294(10):1152–60.
60. Shamseer L, Sampson M, Bukutu C, et al. CONSORT extension for N-of-1 trials (CENT) guidelines. BMC Complement Altern Med. 2012;12(Suppl 1):P140.
61. Zarin DA, Tse T, Williams RJ, Carr S. Trial reporting in ClinicalTrials.gov—the final rule. N Engl J Med. 2016;375(20):1998–2004.
62. Hudson KL, Lauer MS, Collins FS. Toward a new era of trust and transparency in clinical trials. JAMA. 2016;316(13):1353–4.
63. National Institutes of Health. NIH policy on dissemination of NIH-funded clinical trial information. Fed Regist. 2016;81:183.
64. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–66.
65. Scott A, Rucklidge JJ, Mulder RT. Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One. 2015;10(8):e0133718.

66. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Belmont: Wadsworth; 2002.
67. Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95.
68. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
69. Montgomery P, Underhill K, Gardner F, Operario D, Mayo-Wilson E. The Oxford implementation index: a new tool for incorporating implementation data into systematic reviews and meta-analyses. J Clin Epidemiol. 2013;66(8):874–82.
70. Hoffmann TC, Glasziou P, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.
71. Appelbaum M, Cooper H, Kline RB, Mayo-Wilson E, Nezu AM, Rao SM. Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am Psychol. 2018;73(1):3.
72. Perera R, Heneghan C, Yudkin PA. Graphical method for depicting randomised trials of complex interventions. BMJ. 2007;334:127–9.
73. Torgerson DJ. Contamination in trials: is cluster randomisation the answer? BMJ. 2001;322:355–7.
74. Moore G, Audrey S, Barker M, et al. Process evaluation in complex public health intervention studies: the need for guidance. J Epidemiol Community Health. 2013;68(2):101–2.
75. West R. Providing full manuals and intervention descriptions: addiction policy. Addiction. 2008;103:1411.
76. Mayo-Wilson E, Fusco N, Li T, et al. Multiple outcomes and analyses in clinical trials create challenges for interpretation and research synthesis. J Clin Epidemiol. 2017;86:39–50.
77. Altman DG, Moher D, Schulz KF. Harms of outcome switching in reports of randomised trials: CONSORT perspective. BMJ. 2017;356:j396.
78. Chalder T, Deary V, Husain K, Walwyn R. Family-focused cognitive behaviour therapy versus psycho-education for chronic fatigue syndrome in 11- to 18-year-olds: a randomized controlled treatment trial. Psychol Med. 2010;40:1269–79.
79. Slutsky AS. Data safety and monitoring boards. N Engl J Med. 2004;350(11):1143–7.
80. Schulz KF, Grimes DA. Allocation concealment in randomised trials: defending against deciphering. Lancet. 2002;359(9306):614–8.
81. Cuijpers P, Cristea IA. How to prove that your therapy is effective, even when it is not: a guideline. Epidemiol Psychiatr Sci. 2016;25(5):428–35.
82. Wilson DB. Comment on ‘developing a reporting guideline for social and psychological intervention trials’. J Exp Criminol. 2013;9:375–7.
83. Little RJ, Rubin DB. Statistical analysis with missing data. 2nd ed. Hoboken: Wiley; 2002.
84. Li T, Hutfless S, Scharfstein DO, et al. Standards should be applied in the prevention and handling of missing data for patient-centered outcomes research: a systematic review and expert consensus. J Clin Epidemiol. 2014;67(1):15–32.
85. Gardner F, Hutchings J, Bywater T, Whitaker C. Who benefits and how does it work? Moderators and mediators of outcome in an effectiveness trial of a parenting intervention. J Clin Child Adolesc Psychol. 2010;39(4):568–80.
86. Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12(1):181.
87. Boutron I, Altman D, Moher D, Schulz KF, Ravaud P. CONSORT statement for randomized trials of nonpharmacologic treatments: a 2017 update and a CONSORT extension for nonpharmacologic trial abstracts. Ann Intern Med. 2017;167(1):40–7.
88. Welch V, Jull J, Petkovic J, et al. Protocol for the development of a CONSORT-equity guideline to improve reporting of health equity in randomized trials. Implement Sci. 2015;10(1):146.
89. Welch V, Petticrew M, Tugwell P, et al. PRISMA-equity 2012 extension: reporting guidelines for systematic reviews with a focus on health equity. PLoS Med. 2012;9(10):e1001333.
90. O'Neill J, Tabish H, Welch V, et al. Applying an equity lens to interventions: using PROGRESS ensures consideration of socially stratifying factors to illuminate inequities in health. J Clin Epidemiol. 2014;67(1):56–64.
91. Knol MJ, Groenwold R, Grobbee DE. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19:231–2.
92. Piwowar HA, Day RS, Fridsma DB. Sharing detailed research data is associated with increased citation rate. PLoS One. 2007;2:e308.
93. Taichman DB, Sahni P, Pinborg A, et al. Data sharing statements for clinical trials: a requirement of the International Committee of Medical Journal Editors. JAMA. 2017;317(24):2491–2.
94. Lipsey MW, Wilson DB. Practical meta-analysis. Thousand Oaks: Sage; 2001.
95. McCord J. Cures that harm: unanticipated outcomes of crime prevention programs. Ann Am Acad Polit Soc Sci. 2003;587:16–30.
96. Chalmers I. Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations. Ann Am Acad Polit Soc Sci. 2003;589(22):22–40.

97. Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation. 2016;22(3):286–303.
98. Moore GF, Evans RE. What theory, for whom and in which context? Reflections on the application of theory in the development and evaluation of complex population health interventions. SSM-Popul Health. 2017;3:132–5.
99. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010;303(20):2058–64.
100. Chiu K, Grundy Q, Bero L. ‘Spin’ in published biomedical literature: a methodological systematic review. PLoS Biol. 2017;15(9):e2002173.
101. World Health Organization. WHO Trial Registration Data Set (Version 12.1). 2017; http://www.who.int/ictrp/network/trds/en/. Accessed 26 Feb 2017.
102. Meinert CL. Toward prospective registration of clinical trials. Control Clin Trials. 1988;9(1):1–5.
103. Simes R. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986;4(10):1529–41.
104. Dickersin K. Report from the panel on the case for registers of clinical trials at the eighth annual meeting of the Society for Clinical Trials. Control Clin Trials. 1988;9(1):76–81.
105. De Angelis C, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351(12):1250–1.
106. International Clinical Trials Registry Platform (ICTRP). Joint statement on public disclosure of results from clinical trials. 2017.
107. Harrison BA, Mayo-Wilson E. Trial registration: understanding and preventing reporting bias in social work research. Res Soc Work Pract. 2014;24(3):372–6.
108. Cybulski L, Mayo-Wilson E, Grant S. Improving transparency and reproducibility through registration: the status of intervention trials published in clinical psychology journals. J Consult Clin Psychol. 2016;84(9):753–67.
109. Huth EJ. Guidelines on authorship of medical papers. Ann Intern Med. 1986;104(2):269–74.
110. Petrosino A, Soydan H. The impact of program developers as evaluators on criminal recidivism: results from meta-analyses of experimental and quasi-experimental research. J Exp Criminol. 2005;1(4):435–50.
111. Eisner M. No effects in independent prevention trials: can we reject the cynical view? J Exp Criminol. 2009;5(2):163–83.
112. Concannon TW, Meissner P, Grunbaum JA, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27(8):985–91.
113. Keown K, Van Eerd D, Irvin E. Stakeholder engagement opportunities in systematic reviews: knowledge transfer for policy and practice. J Contin Educ Health Prof. 2008;28(2):67–72.
114. Concannon TW, Fuster M, Saunders T, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692–701.
115. Staniszewska S, Brett J, Simera I, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ. 2017;358:j3453.
116. Bower P, Brueton V, Gamble C, et al. Interventions to improve recruitment and retention in clinical trials: a survey and workshop to assess current practice and future priorities. Trials. 2014;15(1):399.
117. Gottfredson D, Cook T, Gardner F, et al. Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: next generation. Prev Sci. 2015;16(7):893–926.
