National Certification Corporation (NCC) Obstetric and Neonatal Quality and Safety Sub-Specialty Certification Examination Practice Analysis Study Tina Freilicher, Ph.D., Shoreline Psychometric Services, LLC.


National Certification Corporation (NCC)

NCC Quality and Safety Practice Analysis Study December 2018

Obstetric and Neonatal Quality and Safety Sub-Specialty Certification Examination Practice Analysis Study

Table of Contents

Introduction
Methodology
    Rationale for the Methodology
    Role of the Practice Analysis Committee
    Development of the Practice Analysis
    Pilot Testing the Survey
    Survey Sampling Plan and Dissemination of the Survey
Results
    Survey Response Rate
    Respondents’ Background and Demographic Information
    Analysis of Practice Analysis Data
    Finalization of the Content Outline
    Finalization of the Test Weights
    Survey Respondents’ Opinion about Eligibility and Continuing Education Requirements
    Interest in the Credential
Summary and Recommendations

Index of Tables

Table 1. Practice Analysis Committee Members’ Demographic and Background Information
Table 2. Validity Rating Scales for Major Domains of Practice
Table 3. Validity Rating Scales for the Task Statements
Table 4. Validity Rating Scale for the Knowledge Statements
Table 5. Survey Response Rate
Table 6. Survey Respondents’ Multiple Credentials
Table 7. Survey respondents’ credentials
Table 8. Nurse leadership title
Table 9. Primary Role
Table 10. Primary Role: Summary of “Other Please Specify” Responses
Table 11. Practice Setting
Table 12. Types of APRN as primary role
Table 13. OB Designation
Table 14. NICU designation
Table 15. Highest level of education
Table 16. Hours worked per week
Table 17. Number of years in current role
Table 18. Areas of current practice
Table 19. State/Country of Residence
Table 20. Place of residence “Other (please specify)”
Table 21. Task Statements Importance and Frequency Ratings – Descriptive Statistics
Table 22. Domains – Importance Ratings and Time Spent
Table 23. Knowledge Statements – Criticality Ratings
Table 24. Three Sets of Preliminary Test Specifications
Table 25. Survey Respondents' Mean Percentage of Test Questions Per Domain
Table 26. Final test specifications based on job (practice) analysis data
Table 27. Survey respondents’ opinion about eligibility requirements
Table 28. Survey respondents’ opinion about continuing education requirements
Table 29. Interest in seeking the credential
Table 30. Benefits of the credential
Table 31. Task Statements Importance Validity Scale – Counts & Percentages
Table 32. Task Statements – Frequency Validity Rating Scale – Counts & Percentages
Table 33. Survey respondents’ comments about the task statements
Table 34. Survey respondents' comments about the major domains of practice
Table 35. Knowledge Statements – Criticality Rating Scale – Counts & Percentages
Table 36. Survey respondents' comments about the knowledge statements
Table 37. Survey respondents’ comments about eligibility requirements
Table 38. Survey respondents’ comments about continuing education requirements
Table 39. Likelihood in seeking the credential – responses to open-ended question
Table 40. Benefits of the credential – responses to “Other (please specify)”

Appendix

Appendix A. Psychometrician's qualifications
Appendix B. Agenda for job analysis meeting
Appendix C. Demographic/background questions: verbatim responses to “Other, please specify” option
Appendix D. Task Statements Importance and Frequency Validity Scales – Counts & Percentages
Appendix E. Survey respondents' comments about the task statements
Appendix F. Survey respondents’ comments about the major domains of practice
Appendix G. Knowledge statements: criticality validity scale – counts & percentages
Appendix H. Survey respondents' comments about the knowledge statements
Appendix I. Agenda for review of job analysis results
Appendix J. Final content outline
Appendix K. Survey respondents’ comments about eligibility requirements
Appendix L. Survey respondents’ comments about continuing education requirements
Appendix M. Interest in the credential and perceived benefits – responses to open-ended questions


Introduction

In 2018, the National Certification Corporation (NCC) conducted a practice analysis study for the development and implementation of a practice-based Obstetric and Neonatal Quality and Safety Sub-Specialty Certification Examination. This is a new sub-specialty certification examination that will be offered by the NCC. The study resulted in the identification of the major domains of practice, the tasks associated with the domains, and the knowledge applied in the performance of the tasks. This information was used to develop a content outline and test specifications for the certification examination.

In January 2018, the NCC retained the services of Tina Freilicher, Ph.D., of Shoreline Psychometric Services, LLC (referred to as the “consultant” in this report) for assistance in conducting the practice analysis study. Dr. Freilicher, a psychometrician, facilitated the meetings, analyzed the data, and prepared this report. See Appendix A for a description of Dr. Freilicher’s qualifications.

The study was conducted to develop a content outline and test specifications for an examination reflecting current best practice in obstetric and neonatal quality and safety. This report describes the study’s methodology, summarizes the data, and presents the results of the study, i.e., a draft content outline and preliminary test specifications presented to the NCC for review and finalization.

Methodology

Rationale for the Methodology

Both empirical and logical practice analysis methodologies were considered. Although the two methodologies differ in many ways, a critical difference is that empirical practice analyses are essentially survey-based and depend on a broad sampling of practitioners, whereas logical job analyses are focus group-based and depend on the pooled judgments of a representative committee of subject-matter experts (SMEs). The logical approach is, in essence, a “brainstorming” process that proceeds until consensus is reached on each point under investigation or discussion. One of the more formal names for the brainstorming employed in the context of job analysis is “role delineation.” The purpose of role delineation, as a job task analysis technique, is to develop practice-based test specifications for the certification-level professional. It achieves this goal by identifying the major and specific work activities (tasks) that define the profession, along with the knowledge required of the certification-level candidate.

A combination of the logical and empirical practice analysis methodologies was chosen for the following reasons:

1) The availability of sufficient qualified SMEs from which to select a focus group committee.

2) Using a combination of these data collection methodologies allows each approach to complement the other: the logical approach contributes to the development of a content outline that is then validated by a larger sample of the population.

The procedure involved the following steps:

1) Identification, development and review of the major domains of practice.

2) Identification, development and review of the task statements associated with the domains.

3) Identification, development and review of the knowledge statements.

4) Development of domain weights.

5) Preparation of final test specifications.

The following is a brief outline of each of the steps as employed in this practice analysis study:

1. The major domains of practice were developed as they are the principal areas of responsibility or activity that comprise the sub-specialty practice of obstetric and neonatal quality and safety. They are the major headings in the outline format of the test specifications document.

2. After the development of the domains, the next step was to identify and develop the tasks associated with each domain. A task is defined as a specific, goal-directed activity or set of activities having a common objective or type of output. The set of tasks for each domain was delineated in such a manner as to be exhaustive and mutually exclusive and cover all aspects of the profession relevant to the objectives of the job task analysis (i.e., the development of practice-based test specifications).

3. The committee finalized the domains and tasks, ensuring clarity of meaning and comprehensiveness.

4. The committee developed the knowledge statements associated with the performance of each task.

5. The committee reviewed the knowledge statements to ensure clarity of meaning and comprehensiveness, and aligned the knowledge statements to the task statements.

6. A survey based on the content outline was then disseminated to the population of professionals in the area of obstetric and neonatal care. Survey respondents rated the importance of the major domains of practice, indicated the percentage of time spent performing tasks in each domain, and rated the importance and frequency of the task statements. The respondents’ ratings were used to derive a set of weights for the draft test specifications.
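The weighting step described in the final step above can be sketched in code. This is a minimal illustration, not the NCC’s actual procedure: the mean ratings and time-spent percentages below are hypothetical, and averaging the two normalized signals is an assumed blending rule (the report does not specify a formula).

```python
# Hypothetical mean ratings per domain (0-3 importance scale) and mean
# percentage-of-time responses; real values would come from the survey.
mean_importance = {
    "Domain 1": 2.6, "Domain 2": 2.3, "Domain 3": 2.7,
    "Domain 4": 2.4, "Domain 5": 2.5,
}
mean_time_spent = {
    "Domain 1": 25.0, "Domain 2": 15.0, "Domain 3": 30.0,
    "Domain 4": 20.0, "Domain 5": 10.0,
}

def blended_weights(importance, time_spent):
    """Blend two normalized signals into percentage test weights by
    averaging each domain's share of the importance total with its
    share of the time-spent total."""
    imp_total = sum(importance.values())
    time_total = sum(time_spent.values())
    return {
        d: round(100 * (importance[d] / imp_total
                        + time_spent[d] / time_total) / 2, 1)
        for d in importance
    }

print(blended_weights(mean_importance, mean_time_spent))
```

Under this assumed rule, a domain rated highly important but occupying little of respondents’ time still receives moderate weight; in this study the final weights were set by the NCC using the survey data (see the “Finalization of the Test Weights” section).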

Role of the Practice Analysis Committee

As the first step, the NCC convened a representative committee of subject matter experts (i.e., the practice analysis committee) in the sub-specialty practice of obstetric and neonatal quality and safety to develop the practice analysis. Their role in the study was to develop the content outline describing the major domains of practice, the tasks associated with the domains, and the knowledge necessary to perform the tasks. The committee also developed questions for the survey, reviewed the survey instrument, and used the data resulting from the study to finalize the content outline and test specifications. See Table 1 below for the roster of the committee of experts.

Table 1. Practice Analysis Committee Members’ Demographic and Background Information

Name | Position, Title | Specialty | Region
Gautham Suresh, MD, DM, MS, FAAP | Section Head and Service Chief, Neonatology, Baylor College of Medicine, Texas Children’s Hospital | Neonatology | Houston, TX (Region 3)
Tami Wallace, DNP, APRN, NNP-BC | Neonatal Nurse Practitioner; Neonatal Allied Nationwide Children’s Hospital | Neonatal | Columbus, OH (Region 2)
David McLean, MD, MPH, FACOG, C-EFM | Medical Director of Maternal Fetal Center at Valley Children’s Hospital; Vice President, NCC | Maternal-Fetal Medicine, Obstetrics | Madera, CA (Region 4)
Danielle Felty, MSN, RN, NEA-BC, RNC-OB, C-EFM | Director of Maternal/Child, Liberty Hospital | Maternal Child and Inpatient Obstetrics | Liberty, MO (Region 2)
Andrew Combs, MD, PhD | Director of Quality for Maternal-Fetal Medicine, MEDNAX Obstetrix Medical Group | Maternal-Fetal Medicine, Obstetrics | Campbell, CA (Region 4)
Catherine Ivory, PhD, RNC-OB, RN-BC | Indiana University Health | Inpatient Obstetrics and Perinatal | Indiana (Region 2)
Kathleen Simpson, PhD, RNC-OB, CSN-BC, FAAN | Perinatal Clinical Nurse Specialist, Mercy Hospital | Perinatal Care, Q&S, EFM | St. Louis, MO (Region 2)
Patricia Scott, DNP, APRN, NNP-BC, C-NPT | Pediatrix Medical Group of TN, Advanced Practitioner Coordinator, NNP, Coordinator of Neonatal Transport Service; Infant Quality Improvement Specialist, TIPQC | Neonatology, Transport | Nashville, TN (Region 3)
David Annibale, MD | Medical University of South Carolina, Director of Neonatology; Professor of Pediatrics; Associate Member of the College of Nursing Faculty; NeoReviews Plus Editorial Staff, AAP | Neonatology | Charleston, SC (Region 3)
Katharine E. Donaldson, MSN, WHNP-BC, RNC-OB, CPLS, C-EFM | Perinatal Clinical Nurse Specialist, Capital Health System, Pennington and Trenton, NJ | Obstetrics, Women’s Health | Holland, PA (Region 1)
Denise Zayack, RN, MS | Division of Quality Improvement and Education, Vermont Oxford Network | Neonatology | Vermont (Region 1)
Suzanne Staebler, DNP, APRN, NNP-BC, FAANP | Clinical Professor, Nell Hodgson Woodruff School of Nursing, Emory University; President, NCC | Neonatal | Atlanta, GA (Region 3)
Sue Kendig, JD, MSN, WHNP-BC | Health Policy Advantage, LLC | Women’s Health | –
Kim L. Armour, PhD, NP-BC, RDMS, NEA-BC | Director of Operations, Women’s Obstetric & Neonatal Services, Prentice Women’s Hospital, Northwestern Memorial Healthcare System | Obstetrics, Women’s Health | Chicago, IL (Region 2)
Peter Bernstein, MD, MPH | Director, Division of Maternal Fetal Medicine; Professor of Clinical Obstetrics & Gynecology and Women’s Health, Albert Einstein College of Medicine, Montefiore Medical Center | Maternal-Fetal Medicine, Obstetrics | Bronx, NY (Region 1)
Megan Cunningham, MSN, RN, CPHQ | Safety Quality Specialist, Children’s Hospital of Philadelphia | Cardiac and Special Delivery | Philadelphia, PA (Region 1)

Development of the Practice Analysis

In preparation for an in-person meeting with the practice analysis committee, the consultant prepared a draft content outline using information about the sub-specialty practice provided by the NCC. The information provided included objectives and a preliminary content outline for the sub-specialty exam program, and a preliminary listing of topics associated with the sub-specialty. This information was used to draft the major domains of practice, the tasks associated with the major domains, and the knowledge that may be used to perform the tasks. The consultant also drafted demographic/background questions, validity rating scales, and questions addressing eligibility and continuing education requirements for the practice analysis survey instrument.

The practice analysis committee met on February 8-9, 2018 to develop the content outline describing the major domains of practice, task and knowledge statements. They were provided an orientation to the process of a practice analysis and then worked as a full group to identify the major domains of practice. To assist them in this effort, they were provided with the draft content outline. After the committee identified the major domains of practice, they worked in small groups to develop task and knowledge statements for the small group’s assigned domain(s). They later reconvened as a full group to review the entire document. (See Appendix B for the meeting agenda.)

The committee developed a content outline that consisted of the following five major domains of practice:

Domain 1: Systematically perform ongoing and comprehensive quality and safety assessment and gap analyses.

Domain 2: Promote the integration of quality and safety practices within the organization at the governance and leadership levels.

Domain 3: Develop and implement quality and safety initiatives in obstetric and neonatal practice.

Domain 4: Evaluate and measure the effectiveness of quality and safety practices in obstetric and neonatal care.

Domain 5: Professionalism and ethical practice.

A total of 21 task statements were identified and were distributed among the major domains of practice as follows: Domain 1: three (3) task statements, Domain 2: five (5) task statements, Domain 3: five (5) task statements, Domain 4: three (3) task statements, and Domain 5: five (5) task statements. Sixty-eight (68) knowledge statements were identified and aligned to individual task statements.

The following validity rating scales were reviewed and used to validate the major domains of practice (see Table 2), the task statements (see Table 3), and the knowledge statements (see Table 4). The scales are designed to collect data on the importance of the domains of practice in the sub-specialty of obstetric and neonatal quality and safety, the importance of the quality and safety tasks performed in professional practice, and the criticality of the knowledge for work in the area of quality and safety. In addition to the importance validity ratings for the domains, survey respondents were asked to indicate the percentage of time spent performing tasks in each major domain of practice. For the task statements, in addition to applying an importance validity rating, survey respondents were asked to use a frequency validity rating scale to indicate how frequently they perform each task in their professional practice of quality and safety.

Table 2. Validity Rating Scales for Major Domains of Practice

How important is this major domain in your practice of quality and safety?

0 Not important

1 Somewhat important

2 Important

3 Very important

What is the approximate percentage of time spent performing tasks in each of the following major domains?

Table 3. Validity Rating Scales for the Task Statements

How important is this quality and safety task in your professional practice?

0 Not important

1 Somewhat important

2 Important

3 Very Important

How frequently do you perform this quality and safety task in your professional practice?

0 Never - I do not perform this task

1 Rarely - Once or twice per year

2 Occasionally – Every 2 to 6 months

3 Routinely – At least once a month

4 Frequently – At least once a week

Table 4. Validity Rating Scale for the Knowledge Statements

How critical is this knowledge for your work in quality and safety practice?

0 Not critical

1 Somewhat critical

2 Critical

3 Very critical

Following the February 8-9, 2018 meeting, the content outline was sent to the practice analysis committee in advance of a web conference held on April 6, 2018. The purpose of the web conference was to conduct a final review of the content outline before the development of the survey instrument. In preparation for the web conference, the committee was provided with the following set of instructions for reviewing the document:

INSTRUCTIONS FOR REVIEWING THE CONTENT OUTLINE:

The intent of the review is to ensure that the content outline is comprehensive and accurate. Please consider the following as you review the content outline:

Major Domains:

· Are the major domain statements accurate? If not, please edit to make them accurate.

Task Statements:

· Are the task statements accurate? If not, please edit to make them accurate.

· Are there duplicate or similar task statements that can be removed/consolidated? If so, please indicate the task statements that should be removed and for those that can be consolidated, please consolidate the statements.

· Are any task statements missing from the outline? If so, please add them to the appropriate domain.

· When reviewing task statements, please keep in mind, to the extent possible, the task statement should address the following: what task is being performed, how is it being performed, and why is it being performed? The statement should begin with an action verb.

Knowledge Statements:

· Please see instructions for how to review the knowledge statements on page 9 of this document.

INSTRUCTIONS FOR REVIEWING THE KNOWLEDGE STATEMENTS:

The knowledge statements below are listed in two ways. The knowledge statements presented in the first list are in the same sequence as they appear in the content outline. The second list presents the knowledge statements in alphabetical order. (The second list is provided to help in identifying any duplicate or similar knowledge statements.) Please review the knowledge statements for the following:

1. Are the knowledge statements accurate? If not, please revise the statements to clarify.

2. Are any knowledge statements missing? If so, please add them to the content outline following the task(s) that requires the knowledge. (Do not add them to the following lists.)

3. Are any of the knowledge statements very similar? It is recommended that “similar” statements be consolidated, or the best one be selected and the other statements be deleted. Some similar statements are highlighted in yellow on the list that presents the knowledge statements in alphabetical order.

4. In the case of duplicate statements, please keep in mind that a knowledge statement may be applicable to more than one task statement within a domain or applicable to tasks in other domains. Therefore, there may be a few instances in which a knowledge statement is listed more than once, and that’s okay. Those statements will be renumbered and only one of the duplicate statements will appear on the survey.

When editing and/or consolidating knowledge statements, you may make your edits using track changes, or highlight your edits. If you do consolidate knowledge statements, please indicate the knowledge statements that should be deleted by either using track changes (i.e., strikeout), or adding, “Delete” following the statements.

The committee was asked to provide their feedback and suggested revisions in advance of the web conference scheduled for April 6, 2018. The committee members’ feedback and comments were added to a “master” document (i.e., the content outline showing all committee members’ comments and suggested revisions), which was shared with the committee during the web conference. The web conference was facilitated by the consultant and the NCC, and resulted in additional revisions to the content outline to clarify statements and remove redundancies.

Pilot Testing the Survey

The consultant provided the NCC with a mock-up of the survey instrument that the NCC would use to develop the survey in their survey software. The NCC was responsible for disseminating the survey. On May 18, 2018, the NCC piloted the survey by disseminating it to the ONQS Job Analysis Team and the INPT, NIC, LRN, and EFM content teams. There were 48 participants. The results of the pilot test did not lead to any changes to the practice analysis survey.

The survey was structured so that the first section of the survey included the majority of the demographic/background questions, followed by the sections that presented the components of the content outline for validation, questions to collect data on the survey respondents’ opinions concerning eligibility and continuing education requirements, and interest in the credential, and the perceived value of the credential. The last section of the survey presented the remaining demographic/background questions.

Survey Sampling Plan and Dissemination of the Survey

Since the sub-specialty certification examination in quality and safety for obstetric and neonatal care units and facilities is a new program, the NCC decided to disseminate the survey as broadly as possible with the intent of obtaining a representative sample of survey respondents from a variety of geographic locations, roles in obstetric and neonatal care units and facilities, and types of units and facilities.

On July 18, 2018, the NCC disseminated the survey to 108,245 individuals for review and comment (i.e., validation). Of these, 88,800 held NCC certifications and 19,445 held certifications with three (3) other certifying bodies. Links to the survey were also posted by the American Association for Respiratory Care and the Association of Women’s Health, Obstetric and Neonatal Nurses. See the next section for more information about the survey respondents.

The survey recipients were asked to complete the survey by August 20, 2018. The NCC emailed reminders to non-respondents on July 27 and 30 and August 4 and 7, 2018. The survey was closed on August 20, 2018. The NCC exported the survey data to an Excel file and provided it to Shoreline Psychometric Services, LLC for analysis; the results can be found below. The NCC considered these results in finalizing the content outline and test specifications, and made the final determination of any changes to the practice analysis (i.e., the content outline and test specifications).

Results

This section of the report presents the results of the survey. It presents the results of the background and demographic questions, and the results of the validation ratings of the major domains, tasks, and knowledge statements.

Survey Response Rate

The survey was emailed to 88,800 individuals who hold NCC credentials via the survey software, and a link to the survey was emailed to an additional 19,445 individuals who hold credentials with the following certifying bodies: ACNM, AMA, and Perinatal QI.

A total of 13,625 individuals responded to the survey. To identify professionals who perform work in obstetrical and neonatal care units or facilities, the following screening question was presented at the onset of the survey.

“Do you provide clinical coverage, education, training, research or oversight in Obstetrical or Neonatal Care units or facilities?”

Of the 13,625 survey respondents, 10,676 (78.36%) responded “Yes” to the screening question. Those who responded “No” to the question (2,949, or 21.64%) were exited from the survey, as they did not meet the criteria for being identified as working in obstetrical and neonatal care units or facilities.
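The screening arithmetic above can be checked directly; a small sketch using the counts reported in this section:

```python
# Screening-question results reported above.
total_respondents = 13_625
screened_in = 10_676           # answered "Yes" to the screening question
screened_out = total_respondents - screened_in  # exited from the survey

assert screened_out == 2_949
assert round(100 * screened_in / total_respondents, 2) == 78.36
assert round(100 * screened_out / total_respondents, 2) == 21.64
```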

Of the 10,676 survey respondents who met the criteria to take the survey, an assessment was made of their completion rate. Some of the respondents did not reply to all questions. The number of respondents (N) for each question is indicated in the tables described in this section of the report.

Table 5. Survey Response Rate

Survey Response Rate by Group

Group | Sent Invites: N (%) | Unopened or Bounced: N (%) | Received Invites: N (%) | Respondents: N (%) | Response Rate | Included in Analysis: N (%)
NCC Certified INP group 1 | 9,997 (11.26%) | 3,758 (8.02%) | 6,239 (14.87%) | 1,950 (20.19%) | 31.26% | –
NCC Certified INP group 2 | 9,998 (11.26%) | 3,648 (7.79%) | 6,350 (15.14%) | 1,646 (17.04%) | 25.92% | –
NCC Certified MNN 1 and LRN | 9,985 (11.24%) | 3,798 (8.11%) | 6,187 (14.75%) | 1,564 (16.19%) | 25.28% | –
NCC Certified NIC group 1 | 9,979 (11.24%) | 3,895 (8.31%) | 6,084 (14.50%) | 1,466 (15.18%) | 24.10% | –
NCC Certified NIC group 2 & NNP | 9,971 (11.23%) | 3,576 (7.63%) | 6,395 (15.24%) | 1,418 (14.68%) | 22.17% | –
NCC Certified C-EFM & C-NPT | 9,668 (10.89%) | 4,188 (8.94%) | 5,480 (13.06%) | 1,129 (11.69%) | 20.60% | –
Mixed Phys -Neo-MF-OB | 9,233 (10.40%) | 5,906 (12.61%) | 3,327 (7.93%) | 357 (3.70%) | 10.73% | –
OB Phys group 1 | 9,980 (11.24%) | 9,056 (19.33%) | 924 (2.20%) | 55 (0.57%) | 5.95% | –
OB Phys group 2 | 9,989 (11.25%) | 9,024 (19.26%) | 965 (2.30%) | 74 (0.77%) | 7.67% | –
Total | 88,800 (100%) | 46,849 (100%) | 41,951 (100%) | 9,659 (100%) | 23.02% | 4,206 (44%)
Confidence Interval (CI) at 95% confidence level: 0.87, 1.43

Emailed Weblinks
ACNM | 6,528 (33.57%) | N/A | 6,528 (33.57%) | 25 (49.33%) | 3.95% | –
AMA | 6,917 (35.57%) | N/A | 6,917 (35.57%) | 7 (1.34%) | 0.10% | –
Perinatal QI | 6,000 (30.86%) | N/A | 6,000 (30.86%) | 258 (49.33%) | 4.30% | –
Total | 19,445 (100%) | N/A | 19,445 (100%) | 523 (100%) | 2.69% | –
Confidence Interval (CI) at 95% confidence level: 4.23

Posted Weblinks
RRTs | 1 | N/A | 1 | 3,388 (98.46%) | N/A | –
AWHONN | 1 | N/A | 1 | 53 (1.54%) | N/A | –
Total | 2 | N/A | 2 | 3,441 (100%) | – | 1,998 (58%)

Grand Total | 108,245 | N/A | 61,396 | 13,623 (100%) | 22.19% | 6,204 (46%)
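The confidence-interval figures in Table 5 can be approximated with the standard normal-approximation margin of error for a proportion. The sketch below is an assumption about the method: the report does not state the exact formula or inputs it used, so the finite population correction and the choice of the overall NCC-group response rate (9,659 of 88,800) are illustrative only.

```python
import math

def margin_of_error(p, n, N, z=1.96):
    """95% margin of error for a proportion p observed in a sample of
    size n, with a finite population correction for sampling without
    replacement from a population of size N."""
    se = math.sqrt(p * (1 - p) / n)          # standard error of p
    fpc = math.sqrt((N - n) / (N - 1))       # finite population correction
    return z * se * fpc

# Overall NCC-group response rate from Table 5 (assumed inputs).
moe = margin_of_error(p=0.2302, n=9_659, N=88_800)
print(f"±{100 * moe:.2f} percentage points")
```

With these inputs the margin of error comes out a little under one percentage point, in the same range as the CI values shown in Table 5; the report’s exact computation may differ.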

Table 6. Survey Respondents’ Multiple Credentials

Columns: all survey respondents who responded to the screening question (N, %); all survey respondents who proceeded to complete the survey (N, %).

Credentials | N | % | N | %
C-BF, NNP-BC | 1 | 0.01% | 1 | 0.02%
C-EFM | 735 | 5.39% | 297 | 4.79%
C-EFM, RNC-HROB | 1 | 0.01% | – | 0.00%
C-EFM, RNC-LRN | 1 | 0.01% | – | 0.00%
C-EFM, RNC-MNN | 29 | 0.21% | 14 | 0.23%
C-EFM, RNC-OB | 493 | 3.62% | 224 | 3.61%
C-EFM, RNC-OB, RNC-LRN | 1 | 0.01% | – | 0.00%
C-EFM, RNC-OB, RNC-LRN, RNC-MNN | 4 | 0.03% | 1 | 0.02%
C-EFM, RNC-OB, RNC-MNN | 18 | 0.13% | 8 | 0.13%
C-EFM, RNC-OB, RNC-NIC | 1 | 0.01% | – | 0.00%
C-EFM, RNC-OB, WHNP-BC | 1 | 0.01% | – | 0.00%
C-EFM, RNC-TNP | 1 | 0.01% | – | 0.00%
C-NPT | 55 | 0.40% | 20 | 0.32%
CNM, C-EFM | 13 | 0.10% | 6 | 0.10%
DO, C-EFM | 2 | 0.01% | – | 0.00%
MD, C-EFM | 44 | 0.32% | 24 | 0.39%
MD, C-NPT | 1 | 0.01% | – | 0.00%
NM, RNC-MNN | 1 | 0.01% | – | 0.00%
NNP-BC | 843 | 6.19% | 434 | 7.00%
NNP-BC, RNC-NIC | 21 | 0.15% | 5 | 0.08%
NNP-BC, RNC-OB | 1 | 0.01% | – | 0.00%
NNP-BC, RNC-OB, RNC-MNN, C-EFM | 1 | 0.01% | – | 0.00%
PA, C-EFM | 2 | 0.01% | 1 | 0.02%
PCM, RNC-MNN | 1 | 0.01% | – | 0.00%
RN | 1 | 0.01% | – | 0.00%
RN, C-EFM | 229 | 1.68% | 90 | 1.45%
RN, C-NPT | 20 | 0.15% | 9 | 0.15%
RNC-HROB, C-EFM | 6 | 0.04% | 2 | 0.03%
RNC-HROB, RNC-OB | 1 | 0.01% | – | 0.00%
RNC-HROB, RNC-OB, C-EFM | 3 | 0.02% | 3 | 0.05%
RNC-LRN | 419 | 3.08% | 191 | 3.08%
RNC-LRN, C-BF | 2 | 0.01% | 1 | 0.02%
RNC-LRN, C-EFM | 1 | 0.01% | – | 0.00%
RNC-LRN, NNP-BC | 1 | 0.01% | 1 | 0.02%
RNC-LRN, RNC-MNN | 4 | 0.03% | 1 | 0.02%
RNC-LRN, RNC-NIC | 16 | 0.12% | 9 | 0.15%
RNC-LRN, RNC-NIC, NNP-BC | 1 | 0.01% | 1 | 0.02%
RNC-MNN | 1,087 | 7.98% | 400 | 6.45%
RNC-MNN, C-NPT | 1 | 0.01% | – | 0.00%
RNC-MNN, RNC-OB | 5 | 0.04% | 3 | 0.05%
RNC-MNN, RNC-OB, C-EFM | 2 | 0.01% | 1 | 0.02%
RNC-NIC | 1,953 | 14.33% | 882 | 14.22%
RNC-NIC, NNP-BC | 59 | 0.43% | 30 | 0.48%
RNC-NIC, RNC-LRN | 1 | 0.01% | – | 0.00%
RNC-NIC, RNC-OB | 1 | 0.01% | 1 | 0.02%
RNC-NIC, RNC-TNP | 1 | 0.01% | – | 0.00%
RNC-NIC, WHNP-BC | 2 | 0.01% | 1 | 0.02%
RNC-OB | 2,553 | 18.74% | 1,053 | 16.97%
RNC-OB, C-EFM | 391 | 2.87% | 201 | 3.24%
RNC-OB, C-EFM, RNC-MNN | 1 | 0.01% | 1 | 0.02%
RNC-OB, RNC-LRN | 2 | 0.01% | 1 | 0.02%
RNC-OB, RNC-MNN | 11 | 0.08% | 4 | 0.06%
RNC-OB, RNC-MNN, WHNP-BC | 1 | 0.01% | – | 0.00%
RNC-OB, RNC-NIC | 2 | 0.01% | 2 | 0.03%
RNC-OB, RNC-TNP | 2 | 0.01% | – | 0.00%
RNC-OB, WHNP-BC | 37 | 0.27% | 16 | 0.26%
RNC, C-EFM | 1 | 0.01% | 1 | 0.02%
RRT, C-NPT | 1 | 0.01% | – | 0.00%
RT, C-NPT | 13 | 0.10% | 6 | 0.10%
WHNP-BC | 1 | 0.01% | – | 0.00%
WHNP-BC, RNC-OB | 25 | 0.18% | 9 | 0.15%
WHNP-BC, RNC-OB, C-EFM | 17 | 0.12% | 9 | 0.15%
Total | 9,145 | 67.11% | 3,964 | 63.89%
Blank | 4,481 | 32.89% | 2,240 | 36.11%
Grand Total | 13,626 | 100.00% | 6,204 | 100.00%

Table 7. Survey respondents’ credentials

Columns: survey respondents who met the criteria to take the survey (N, %); survey respondents who proceeded to take the survey (N, %).

Credential | N | % | N | %
C-BF | 3 | 0.03% | 2 | 0.04%
C-EFM | 1,998 | 18.67% | 883 | 18.88%
C-NPT | 91 | 0.85% | 35 | 0.75%
CNM | 13 | 0.12% | 6 | 0.13%
DO | 2 | 0.02% | 0 | 0.00%
MD | 45 | 0.42% | 24 | 0.51%
NM | 1 | 0.01% | 0 | 0.00%
NNP-BC | 928 | 8.67% | 472 | 10.09%
PA | 2 | 0.02% | 1 | 0.02%
PCM | 1 | 0.01% | 0 | 0.00%
RN | 250 | 2.34% | 99 | 2.12%
RNC | 1 | 0.01% | 1 | 0.02%
RNC-HROB | 11 | 0.10% | 5 | 0.11%
RNC-LRN | 453 | 4.23% | 206 | 4.41%
RNC-MNN | 1,166 | 10.90% | 433 | 9.26%
RNC-NIC | 2,058 | 19.24% | 931 | 19.91%
RNC-OB | 3,574 | 33.40% | 1,537 | 32.87%
RNC-TNP | 4 | 0.04% | 0 | 0.00%
RRT | 1 | 0.01% | 0 | 0.00%
RT | 13 | 0.12% | 6 | 0.13%
WHNP-BC | 84 | 0.79% | 35 | 0.75%
Total | 10,699 | 100.00% | 4,676 | 100.00%

Respondents’ Background and Demographic Information

The background and demographic questions were designed to collect data about the survey respondents’ credentials (i.e., certifications/licenses), job title, primary job role, practice setting, years of experience, gender, age range, geographical location, highest level of education, and number of hours worked per week, among other items. Tables 8–20 present the results of the background and demographic questions. See Appendix C for responses to “Other (please specify).”

Table 8. Nurse leadership title

Title | N | %
Chief nursing officer | 7 | 0.90%
Director | 202 | 25.93%
Manager | 539 | 69.19%
Nurse Executive | 30 | 3.85%
Response | 1 | 0.13%
Total | 779 | 100.00%

Table 9. Primary Role

What is your primary role (i.e., where you spend the majority of your time)?

Role | N | %
Administrative | 622 | 10.03%
Clinical practice | 4,625 | 74.55%
Educator/Instructor/Professor | 645 | 10.40%
Researcher | 35 | 0.56%
Social Work/Outreach | 4 | 0.06%
Other (please specify) | 273 | 4.40%
Total | 6,204 | 100.00%

Table 10. Primary Role: Summary of “Other Please Specify” Responses

What is your primary role (i.e., where you spend the majority of your time)? OTHER Roles*

Response | N | %
Bedside Nurse | 15 | 5%
Charge Nurse | 16 | 6%
Clinical Informaticist | 3 | 1%
Clinical Supervisor | 3 | 1%
CNS Role | 3 | 1%
Floor RN | 4 | 1%
Quality (see comments at the end of the comment table) | 22 | 8%
Single Letters (e.g., I, J, K, O, P, S) | 9 | 3%
Staff Nurse | 11 | 4%
Team Leader | 2 | 1%
Unit Coordinator | 2 | 1%
Other roles* | 183 | 67%
Total | 273 | 100%

*See Appendix C for verbatim responses.

Table 11. Practice Setting

Which of the following best describes your practice setting?

Practice Setting | N | %
Academic setting | 298 | 4.80%
Birthing center | 56 | 0.90%
Community clinic | 77 | 1.24%
Hospital | 5494 | 88.56%
Private practice | 154 | 2.48%
Research Center | 3 | 0.05%
Other (please specify)* | 122 | 1.97%
Total | 6204 | 100.00%

*See Appendix C for verbatim responses to “Other (please specify).”

Table 12. Types of APRN as primary role

Which of the following types of APRN is your primary role?

APRN | N | %
Certified Nurse-Midwife | 240 | 16.82%
Clinical Nurse Specialist | 186 | 13.03%
Neonatal Nurse Practitioner | 652 | 45.69%
Pediatric Nurse Practitioner | 5 | 0.35%
Women’s Health Nurse Practitioner | 44 | 3.08%
Other (please specify)* | 300 | 21.02%
Total | 1427 | 100.00%

*See Appendix C for verbatim responses to “Other (please specify).”

Table 13. OB Designation

If you work in OB, which of the following describes its designation? (Choose the highest level you cover and spend the majority of time in or oversee).

Designation | N | %
Birth Center (Peripartum care of low-risk women with uncomplicated singleton term pregnancies with a vertex presentation who expect an uncomplicated birth) | 76 | 1.23%
Do not know | 76 | 1.23%
Level I (Basic Care: uncomplicated pregnancies, with ability to detect, stabilize and initiate management of unanticipated maternal-fetal or neonatal problems and transfer as needed) | 616 | 9.93%
Level II (Specialty Care: Level I facility plus care of appropriate high-risk antepartum, intrapartum or postpartum conditions, both directly admitted or transferred in) | 1146 | 18.47%
Level III (Subspecialty Care: Level II facility plus care of more complex maternal conditions, obstetric complications and fetal conditions; advanced imaging) | 1108 | 17.86%
Level IV (Regional Health Care Centers: Level III facility plus on-site medical and surgical care of the most complex maternal conditions and critically ill pregnant women and fetuses throughout antepartum, intrapartum and postpartum; onsite ICU care, maternal referral and transport, outreach) | 1265 | 20.39%
Non-applicable | 1917 | 30.90%
Total | 6204 | 100.00%

Table 14. NICU designation

If you work in an NICU, which of the following describes its designation?

Designation | N | %
Do not know | 20 | 0.32%
Level I (Basic care) | 171 | 2.76%
Level II (Specialty care for newborns at 32 weeks’ gestation or more, weighing 1500 g or more, with problems expected to resolve rapidly or who are convalescing from higher level care) | 597 | 9.62%
Level III (Subspecialty care for high-risk newborns needing continuous life support and comprehensive care for critical illness; includes infants weighing less than 1500 g or less than 32 weeks’ gestation at birth) | 1549 | 24.97%
Level IV (Includes level III care as well as on-site pediatric medical and surgical subspecialties to care for infants with complex congenital or acquired conditions, coordinate transport systems and outreach education) | 1111 | 17.91%
Non-applicable | 2756 | 44.42%
Total | 6204 | 100.00%

Table 15. Highest level of education

What is your highest level of education?

Education | N | %
Associate | 679 | 10.94%
Baccalaureate | 2538 | 40.91%
Doctorate - PhD | 76 | 1.23%
Doctorate - DNP | 218 | 3.51%
High School Diploma | 4 | 0.06%
Masters | 1883 | 30.35%
MD/DO | 378 | 6.09%
Post Masters | 169 | 2.72%
Other (please specify)* | 259 | 4.17%
Total | 6204 | 100.00%

*See Appendix C for verbatim responses to “Other (please specify).”

Table 16. Hours worked per week

How many hours a week do you work (clinical practice, education, research and administration combined)?

Hours | N | %
Less than 20 hours | 197 | 3.18%
20-34 hours | 1097 | 17.68%
35 hours or more | 4873 | 78.55%
Not employed at this time | 14 | 0.23%
Retired | 23 | 0.37%
Total | 6204 | 100.00%

Table 17. Number of years in current role

How long have you been in your current role?

Years | N | %
Less than a year | 247 | 3.98%
1-5 years | 1312 | 21.15%
6-10 years | 996 | 16.05%
11-15 years | 887 | 14.30%
16-20 years | 737 | 11.88%
More than 20 years | 2025 | 32.64%
Total | 6204 | 100.00%

Table 18. Areas of current practice

What areas does your current practice incorporate? (Check all that apply)

Area | N
Antepartum | 3225
Postpartum | 3367
Newborn | 3786
Neonatal | 3429
Education | 2492
Labor and Delivery | 3698
Management/Administration | 1350
Private practice obstetrics | 345
Research | 632
Other (please specify)* | 307

*See Appendix C for verbatim responses.

Table 19. State/Country of Residence

In what state do you live?

State | N | %
Alabama | 47 | 0.76%
Alaska | 20 | 0.32%
American Samoa | 1 | 0.02%
Arizona | 147 | 2.37%
Arkansas | 53 | 0.85%
California | 567 | 9.14%
Colorado | 158 | 2.55%
Connecticut | 94 | 1.52%
Delaware | 26 | 0.42%
District of Columbia (DC) | 5 | 0.08%
Florida | 310 | 5.00%
Georgia | 182 | 2.93%
Guam | 5 | 0.08%
Hawaii | 27 | 0.44%
Idaho | 40 | 0.64%
Illinois | 281 | 4.53%
Indiana | 114 | 1.84%
Iowa | 58 | 0.93%
Kansas | 45 | 0.73%
Kentucky | 65 | 1.05%
Louisiana | 101 | 1.63%
Maine | 31 | 0.50%
Maryland | 170 | 2.74%
Massachusetts | 81 | 1.31%
Michigan | 183 | 2.95%
Minnesota | 80 | 1.29%
Mississippi | 31 | 0.50%
Missouri | 103 | 1.66%
Montana | 22 | 0.35%
Nebraska | 64 | 1.03%
Nevada | 35 | 0.56%
New Hampshire | 31 | 0.50%
New Jersey | 228 | 3.68%
New Mexico | 36 | 0.58%
New York | 415 | 6.69%
North Carolina | 199 | 3.21%
North Dakota | 9 | 0.15%
Ohio | 292 | 4.71%
Oklahoma | 70 | 1.13%
Oregon | 91 | 1.47%
Pennsylvania | 265 | 4.27%
Puerto Rico | 1 | 0.02%
Rhode Island | 13 | 0.21%
South Carolina | 100 | 1.61%
South Dakota | 17 | 0.27%
Tennessee | 84 | 1.35%
Texas | 554 | 8.93%
Utah | 89 | 1.43%
Vermont | 22 | 0.35%
Virgin Islands | 1 | 0.02%
Virginia | 197 | 3.18%
Washington | 157 | 2.53%
West Virginia | 32 | 0.52%
Wisconsin | 112 | 1.81%
Wyoming | 10 | 0.16%
Other (please specify) | 33 | 0.53%
Total | 6204 | 100.00%

Table 20. Place of residence “Other (please specify)”

In what state do you live? Other (please specify)

1. Active Duty Military in Spain
2. Active duty military living overseas
3. Alberta Canada, N = 3
4. Armed Forces Europe
5. Brazil
6. Canada, N = 3
7. Canada- Manitoba
8. Canada, Saskatchewan
9. Chad, Africa
10. Doha Qatar
11. Indonesian
12. Italy (active duty spouse stationed oversees)
13. Live in Canada work in Michigan
14. Manitoba Canada
15. Military - Ramstein air base
16. Military overseas
17. Ontario
18. ontario canada
19. Ontario, Canada, N = 5
20. Quebec, Canada
21. Saskatchewan, Canada, N = 2
22. U.P. ,India
23. UK, N = 2

Analysis of Practice Analysis Data

The survey respondents were asked to rate the relative importance of the 21 tasks associated with the five (5) major practice domains and the frequency with which the tasks are performed. The average rating (M), standard deviation (SD), lowest or minimum rating (Min), highest or maximum rating (Max), and most common rating (Mode) for the importance and frequency ratings are shown in Table 21 below. For each task, the number (frequency) and percentage of respondents selecting each point of the importance and frequency rating scales are shown in Tables 31 and 32 in Appendix D, respectively.

For the importance ratings of the task statements, the average ratings ranged from 2.17 for Task 4.1 to 2.78 for Task 5.1. The SD is a statistic that indicates the range or dispersion of raw scores around the mean. For this data set, the SD ranged from 0.46 to 0.81. All tasks were rated as important or very important (i.e., an average rating of 2.17 or higher).

For the frequency ratings of the task statements, the average rating ranged from 1.60 for Tasks 4.1 and 4.2 to 3.10 for Task 5.1. For this data set, the SD ranged from 0.92 to 1.44. No task received a mean frequency rating of “never” performed. The majority of tasks were rated as frequently performed.

These mean importance and frequency ratings are more than sufficient to justify retention of all of the tasks in the draft test specifications. The importance and frequency rating scale data are presented in Table 21.
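The per-task descriptive statistics reported in Table 21 can be computed directly from raw rating vectors. A minimal sketch in Python, using hypothetical ratings on the 0–3 importance scale (the report does not state whether a population or sample SD was used, so population SD is assumed here):

```python
from statistics import mean, pstdev, mode

def describe(ratings):
    """Per-item descriptive statistics as laid out in Table 21:
    mean (M), standard deviation (SD), minimum, maximum, and mode."""
    return {
        "M": round(mean(ratings), 2),
        "SD": round(pstdev(ratings), 2),  # population SD (assumption; the report does not specify)
        "Min": min(ratings),
        "Max": max(ratings),
        "Mode": mode(ratings),
    }

# Hypothetical importance ratings for one task
sample = [3, 3, 2, 3, 1, 2, 3, 0, 3, 2]
print(describe(sample))  # M = 2.2, SD = 0.98, Min = 0, Max = 3, Mode = 3
```

The same routine applies unchanged to the 0–4 frequency scale.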

After completing the importance and frequency ratings, survey respondents were asked whether they had any comments about the comprehensiveness and/or accuracy of the task statements, or believed a task statement was missing. See Table 33 in Appendix E for comments about the task statements.

Table 21. Task Statements Importance and Frequency Ratings – Descriptive Statistics

Task Statements – Importance and Frequency Ratings (Importance: N = 6204; Frequency: N = 6204). For every task, the importance ratings spanned the full scale (Min = 0, Max = 3) and the frequency ratings spanned the full scale (Min = 0, Max = 4).

Domain 1: Systematically perform ongoing and comprehensive quality and safety assessment and gap analyses

TASK 1.1 Systematically assess the organization’s institutional and environmental culture, patient experience and outcomes, leadership and teamwork by using a variety of methods (e.g., surveys, direct observation and/or environmental scans, adverse events, system errors, and near misses) to identify gaps in quality and safety.
Importance: M = 2.65, SD = 0.61, Mode = 3 | Frequency: M = 2.76, SD = 1.31, Mode = 4

TASK 1.2 Maintain current knowledge of national quality and safety standards and clinical guidelines from regulatory, accreditation, and specialty organizations, to promote ongoing change in practice to meet quality and safety indicators.
Importance: M = 2.76, SD = 0.48, Mode = 3 | Frequency: M = 2.95, SD = 1.06, Mode = 3

TASK 1.3 Evaluate quality and safety metrics by analyzing baseline and ongoing data to determine current state of performance, identify gaps, and identify opportunities for improvement.
Importance: M = 2.56, SD = 0.63, Mode = 3 | Frequency: M = 2.43, SD = 1.35, Mode = 3

Domain 2: Promote the integration of quality and safety practices within the organization at the governance and leadership levels.

TASK 2.1 Incorporate quality and safety aims, tools, checklists and communication strategies into evidence-based projects to improve obstetric and neonatal care.
Importance: M = 2.67, SD = 0.56, Mode = 3 | Frequency: M = 2.75, SD = 1.23, Mode = 4

TASK 2.2 Foster team function by integrating leadership and teamwork skills that empower members of the clinical team and improve communication to achieve a climate of safety.
Importance: M = 2.72, SD = 0.52, Mode = 3 | Frequency: M = 3.00, SD = 1.15, Mode = 4

TASK 2.3 Educate and train obstetric and/or neonatal teams on quality and safety practices by conducting and debriefing team training exercises and implementing education using effective learning principles to improve task knowledge and optimize team functioning (e.g., mock codes, simulations).
Importance: M = 2.69, SD = 0.54, Mode = 3 | Frequency: M = 2.30, SD = 1.18, Mode = 3

TASK 2.4 Advocate for ongoing resource needs by serving as a liaison for quality and safety matters between clinicians and administrators (e.g., participating in meetings, serving on committees and through risk assessment activities) to improve care and outcomes.
Importance: M = 2.54, SD = 0.63, Mode = 3 | Frequency: M = 2.36, SD = 1.29, Mode = 3

TASK 2.5 Inform patients, colleagues, employers and the public about quality and safety initiatives/outcomes by disseminating outcome data, participating in benchmarking and publishing reports to maintain transparency.
Importance: M = 2.38, SD = 0.72, Mode = 3 | Frequency: M = 1.93, SD = 1.40, Mode = 3

Domain 3: Develop and implement quality and safety initiatives in obstetric and neonatal practice.

TASK 3.1 Select and monitor key quality metrics that assess a balanced set of quality and safety domains indicative of organizational culture and benchmarking.
Importance: M = 2.36, SD = 0.73, Mode = 3 | Frequency: M = 1.91, SD = 1.44, Mode = 3

TASK 3.2 Apply recognized methods to improve quality and safety (e.g., model for improvement).
Importance: M = 2.54, SD = 0.60, Mode = 3 | Frequency: M = 2.50, SD = 1.23, Mode = 3

TASK 3.3 Design quality and safety initiatives in collaboration with necessary stakeholders to identify the target population, measures (e.g., structure, process, outcomes) and data collection approaches to address identified opportunities.
Importance: M = 2.33, SD = 0.74, Mode = 3 | Frequency: M = 1.75, SD = 1.38, Mode = 0

TASK 3.4 Promote quality and safety practices by using error prevention strategies and appropriate technology to facilitate improvement initiatives.
Importance: M = 2.59, SD = 0.60, Mode = 3 | Frequency: M = 2.62, SD = 1.32, Mode = 4

TASK 3.5 Integrate effective interventions into daily clinical workflow, using principles of high reliability, to guide practice and improve outcomes.
Importance: M = 2.62, SD = 0.57, Mode = 3 | Frequency: M = 2.96, SD = 1.20, Mode = 4

Domain 4: Evaluate and measure the effectiveness of quality and safety practices in obstetric and neonatal care

TASK 4.1 Evaluate the implementation of quality improvement initiatives using relevant tools (e.g., fishbone, flow chart, run charts and control charts) to measure effectiveness of processes and outcomes.
Importance: M = 2.17, SD = 0.81, Mode = 2 | Frequency: M = 1.60, SD = 1.40, Mode = 0

TASK 4.2 Articulate the value of specific obstetric and neonatal quality initiatives by evaluating the balance between quality, outcome and cost, including the perspectives of all stakeholders (e.g., healthcare team, patients, and families).
Importance: M = 2.25, SD = 0.77, Mode = 3 | Frequency: M = 1.60, SD = 1.42, Mode = 0

TASK 4.3 Identify strategies of moving quality improvement initiatives into sustainment in order to maintain positive change in an overall obstetric and neonatal quality and safety program.
Importance: M = 2.42, SD = 0.68, Mode = 3 | Frequency: M = 1.87, SD = 1.37, Mode = 3

Domain 5: Professionalism and ethical practice

TASK 5.1 Demonstrate an ongoing commitment to lifelong learning and continued competence by keeping abreast of evidence-based practices related to quality and safety in order to optimize outcomes, improve system function, and reduce potential for harm.
Importance: M = 2.78, SD = 0.46, Mode = 3 | Frequency: M = 3.10, SD = 0.92, Mode = 3

TASK 5.2 Establish mechanisms to incorporate existing, updated, and new regulations related to quality and safety into obstetric and neonatal care to facilitate compliance with regulatory standards.
Importance: M = 2.55, SD = 0.61, Mode = 3 | Frequency: M = 2.27, SD = 1.28, Mode = 3

TASK 5.3 Ensure optimal disclosure of adverse events by developing and implementing standardized processes to promote transparency, patient trust, and risk mitigation.
Importance: M = 2.59, SD = 0.61, Mode = 3 | Frequency: M = 2.18, SD = 1.41, Mode = 3

TASK 5.4 Demonstrate professionalism by applying ethical principles (e.g., fairness, truthfulness, justice, beneficence, nonmaleficence, autonomy) relevant to patient safety and quality activities (e.g., RCA, disclosure) to maintain personal and institutional integrity and to foster a culture of safety and organizational excellence.
Importance: M = 2.73, SD = 0.50, Mode = 3 | Frequency: M = 3.08, SD = 1.16, Mode = 4

TASK 5.5 Provide support to patients, families and staff following an adverse event by implementing appropriate and standardized processes to minimize negative psychosocial consequences and mitigate risk.
Importance: M = 2.69, SD = 0.55, Mode = 3 | Frequency: M = 2.42, SD = 1.30, Mode = 3

As shown in Table 22, survey respondents were asked to indicate the relative importance of each major domain in their practice of quality and safety. They were also asked to indicate the percentage of time spent performing tasks in each major domain. The average importance ratings for the major domains ranged from 2.51 for Domain 1 to 2.82 for Domain 5, and the SD ranged from 0.43 to 0.65. The number and percentage of respondents selecting each point of the importance rating scale indicate that a majority (from 59% to 83%) selected “Very Important” for each major domain.

As shown in Table 22, survey respondents were asked to indicate the percentage of time spent performing tasks in each of the 5 major domains of practice in their practice of quality and safety. The average (Mean) percentage of time ranged from 16.37% for Domain 1 to 31.54% for Domain 5.

These mean importance ratings and average percentages of time spent are more than sufficient to justify retention of all of the domains in the draft test specifications.
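The per-scale-point counts and percentages shown in Table 22 follow from a simple tally of each respondent's domain rating. A small sketch, using hypothetical ratings on the 0–3 importance scale:

```python
from collections import Counter

def scale_point_breakdown(ratings, scale=(0, 1, 2, 3)):
    """Count (N) and whole-number percentage of respondents selecting
    each point of the importance scale, as laid out in Table 22."""
    counts = Counter(ratings)
    total = len(ratings)
    return {
        point: (counts.get(point, 0), round(100 * counts.get(point, 0) / total))
        for point in scale
    }

# Hypothetical importance ratings for one domain
ratings = [3, 3, 3, 2, 1, 0, 3, 2, 2, 3]
print(scale_point_breakdown(ratings))
```

Note that because the reported percentages are rounded, row percentages (and the sum of mean time-spent percentages across domains) need not total exactly 100%.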

After completing the importance ratings and indicating the percentage of time spent performing tasks in each major domain of practice, survey respondents were asked whether they had any comments about the comprehensiveness and/or accuracy of the major domains of practice, or believed a major domain of practice was missing. See Table 34 in Appendix F for comments about the major domains of practice.

Table 22. Domains - Importance Ratings and Time Spent

For each domain: count and percentage of respondents selecting each importance scale point (0–3), number of raters (N), mean (M) and SD of the importance ratings, and mean, SD and N for the percentage of time spent.

Domain 1 | Importance: 0 = 59 (1%), 1 = 295 (5%), 2 = 1892 (35%), 3 = 3205 (59%); N = 5451; M = 2.51; SD = 0.65 | Time spent: M = 16.37%, SD = 12.17, N = 5360
Domain 2 | Importance: 0 = 56 (1%), 1 = 257 (5%), 2 = 1749 (32%), 3 = 3389 (62%); N = 5451; M = 2.55; SD = 0.63 | Time spent: M = 17.13%, SD = 11.24, N = 5384
Domain 3 | Importance: 0 = 31 (1%), 1 = 104 (2%), 2 = 1230 (23%), 3 = 4086 (75%); N = 5451; M = 2.72; SD = 0.52 | Time spent: M = 18.95%, SD = 12.06, N = 5384
Domain 4 | Importance: 0 = 37 (1%), 1 = 136 (2%), 2 = 1454 (27%), 3 = 3824 (70%); N = 5451; M = 2.66; SD = 0.56 | Time spent: M = 17.06%, SD = 10.52, N = 5383
Domain 5 | Importance: 0 = 11 (0%), 1 = 52 (1%), 2 = 869 (16%), 3 = 4519 (83%); N = 5451; M = 2.82; SD = 0.43 | Time spent: M = 31.54%, SD = 23.96, N = 5430

Sum of mean time-spent percentages across the five domains: 101.05%.

Survey respondents were asked to indicate the criticality of the knowledge statements by applying the criticality rating scale (see Table 4 for the rating scale). Table 23 presents the number of respondents (N), the mean (M) rating, and the SD for each knowledge statement. (See Table 35 in Appendix G for the count (N) and percentage (%) of respondents selecting each point of the rating scale.) The average ratings ranged from 1.42 for K-13 to 2.63 for K-11, and the SD ranged from 0.58 to 0.95. This range of average ratings indicates that respondents judged every knowledge statement to be, at a minimum, somewhat critical. Therefore, these mean ratings are more than sufficient to justify retention of all of the knowledge statements as part of the content outline.

After completing the criticality ratings of the knowledge areas, survey respondents were asked whether they had any comments about the comprehensiveness and/or accuracy of the knowledge statements, or believed a knowledge statement was missing. See Table 36 in Appendix H for comments about the knowledge statements.

Table 23. Knowledge Statements - Criticality Ratings

Knowledge Statements – Descriptive Statistics. For every knowledge statement, ratings spanned the full scale (Min = 0, Max = 3).

K-1 Goals for health care quality improvement (e.g., STEEEP, IHI Triple aim) | N = 4870, M = 1.83, SD = 0.92, Mode = 2
K-2 Adverse events and event reporting (e.g., incident reports, near misses, root cause analyses) | N = 4882, M = 2.44, SD = 0.69, Mode = 3
K-3 Institutional quality and safety processes and priorities (e.g., peer review, Just Culture, credentialing, goals) | N = 4865, M = 2.21, SD = 0.77, Mode = 2
K-4 Design assessment strategies (e.g., defining the population, assembling teams, literature reviews, measure identification, patient/family perspective) | N = 4880, M = 1.88, SD = 0.86, Mode = 2
K-5 Assess and maintain appropriate organizational culture and safety assessment (i.e., systems focus, level of Just culture) | N = 4876, M = 2.12, SD = 0.80, Mode = 2
K-6 Terms and definitions/common critical language (e.g., quality, patient safety concepts, quality improvement, organizational culture/unit culture, patient experience of care/satisfaction/participation/co-production, systems thinking) | N = 4876, M = 2.10, SD = 0.81, Mode = 2
K-7 General quality and safety principles | N = 4878, M = 2.46, SD = 0.65, Mode = 3
K-8 Awareness of legal/statutory and regulatory requirements, national quality and safety standards and clinical practice guidelines | N = 4876, M = 2.28, SD = 0.73, Mode = 3
K-9 Defining and understanding quality terms and concepts, data and quality metrics, identifying gaps in quality and safety, including the use of benchmarking and risk adjustment | N = 4877, M = 1.99, SD = 0.81, Mode = 2
K-10 Obstetric and neonatal resources for benchmarking best practice and outcomes | N = 4877, M = 2.37, SD = 0.71, Mode = 3
K-11 Current professional standards and guidelines applicable to obstetric and neonatal care | N = 4809, M = 2.63, SD = 0.58, Mode = 3
K-12 Current national, state, and regulatory standards and guidelines applicable to obstetric and neonatal care | N = 4803, M = 2.55, SD = 0.63, Mode = 3
K-13 Methodologies of data display (e.g., run, control charts, pareto charts) | N = 4796, M = 1.42, SD = 0.93, Mode = 1
K-14 The concept of value as a function of quality and cost | N = 4794, M = 1.65, SD = 0.87, Mode = 2
K-15 How to implement and evaluate data collection strategies (e.g., checklists, process tools, huddle tools) | N = 4803, M = 1.80, SD = 0.89, Mode = 2
K-16 Human factors engineering (i.e., design of systems and processes) | N = 4806, M = 1.76, SD = 0.91, Mode = 2
K-17 Human psychology and cognition (e.g., situational awareness, violations of process/protocols, risk-taking, fear of repercussions, cognitive biases, attention and distractions, stress, burnout and fatigue) | N = 4804, M = 2.19, SD = 0.78, Mode = 2
K-18 Safety climate, including safety briefings for staff, family involvement councils, quality and safety committees | N = 4805, M = 2.33, SD = 0.73, Mode = 3
K-19 Collaboration and effective communication strategies (i.e., handoffs, SBAR, CUSS, debriefing, etc.) | N = 4803, M = 2.49, SD = 0.67, Mode = 3
K-20 Understands next steps in advocating for system change when structured communications tools fail (e.g., chain of command; clinical escalation processes) | N = 4798, M = 2.28, SD = 0.73, Mode = 3
K-21 Leadership skills (i.e., self-awareness/management; mentoring/sustainability/succession and transition planning; communication and conflict management) | N = 4752, M = 2.34, SD = 0.73, Mode = 3
K-22 Teamwork concepts (i.e., team development, structure and function, diversity and inclusivity; collaboration, mutual respect, information diffusion; team meetings; code of conduct) | N = 4747, M = 2.54, SD = 0.65, Mode = 3
K-23 Principles of teamwork and behaviors of effective teams | N = 4746, M = 2.49, SD = 0.67, Mode = 3
K-24 Effective learning/teaching principles | N = 4736, M = 2.37, SD = 0.69, Mode = 3
K-25 Use and principles of simulation, including unit drills involving simulated emergencies | N = 4742, M = 2.35, SD = 0.73, Mode = 3
K-26 Methods for determining human resource needs (e.g., hours per patient day, work hours per unit of service, work hours per birth, clinician to patient ratio, standards for staffing) | N = 4748, M = 2.01, SD = 0.90, Mode = 2
K-27 Process for escalating concerns about resource needs | N = 4733, M = 2.16, SD = 0.79, Mode = 2
K-28 Issues that impact the work environment (e.g., the electronic medical record, medical devices, alarm fatigue, distractions, interruptions, overcrowding, noise, lighting, ergonomics of procedures, patient census and acuity) | N = 4746, M = 2.32, SD = 0.74, Mode = 3
K-29 Dangers of workarounds | N = 4740, M = 2.30, SD = 0.77, Mode = 3
K-30 Relevant aspects of structural design standards | N = 4737, M = 1.75, SD = 0.90, Mode = 2
K-31 Understand next step in advocating for system change when structured communications tools fail (e.g., chain of command, clinical escalation processes) | N = 4689, M = 2.16, SD = 0.76, Mode = 2
K-32 Various methods for disseminating quality and safety data to various stakeholders (e.g., annual reports, presentations, publications, public reporting, including websites, social media, and other media) | N = 4688, M = 1.65, SD = 0.92, Mode = 2
K-33 Share data on key quality indicators with colleagues/organizations to improve transparency | N = 4686, M = 1.88, SD = 0.86, Mode = 2
K-34 Prioritizing the importance of individual opportunities for improvement | N = 4688, M = 2.04, SD = 0.80, Mode = 2
K-35 The importance of balancing measures | N = 4678, M = 1.79, SD = 0.86, Mode = 2
K-36 The difference between structural, process, and outcome measures | N = 4674, M = 1.67, SD = 0.89, Mode = 2
K-37 Process to develop goal statements for the metrics chosen | N = 4663, M = 1.62, SD = 0.91, Mode = 2
K-38 Similarities and differences between quality and safety improvement methods (e.g., PDSA/PDCA, Improve, Six Sigma, Lean, CUSP) | N = 4680, M = 1.51, SD = 0.95, Mode = 1
K-39 Team formation and dynamics (patient/family perspective, influencer model) | N = 4691, M = 1.98, SD = 0.87, Mode = 2
K-40 Evaluate and review various types of evidence related to the quality and safety initiative and application to the population and setting | N = 4679, M = 1.99, SD = 0.83, Mode = 2
K-41 Improvement process design (setting goals, benchmarks, thresholds and implementation plans) | N = 4630, M = 1.89, SD = 0.85, Mode = 2
K-42 Select, collect, track, and monitor appropriate measures/indicators (e.g., analysis, data definitions, visualization and interpretation) with consideration of reliability, validity and bias | N = 4625, M = 1.81, SD = 0.89, Mode = 2
K-43 Medication, human milk, blood products, and nutritional safety (e.g., barcodes, e-prescribing, five rights of medication, EHR/CPOE alarms and alerts) | N = 4629, M = 2.48, SD = 0.70, Mode = 3
K-44 Risk reduction strategies (e.g., bundles, clinical pathways, quality guidelines) | N = 4625, M = 2.32, SD = 0.75, Mode = 3
K-45 Error prevention strategies (e.g., bundles, clinical pathways, quality guidelines) | N = 4622, M = 2.43, SD = 0.70, Mode = 3
K-46 Auditing practices (e.g., feedback, surveillance) | N = 4621, M = 1.95, SD = 0.81, Mode = 2
K-47 How to display and interpret data (e.g., run charts, control charts, score cards) | N = 4624, M = 1.59, SD = 0.93, Mode = 1
K-48 Evaluation of outcomes and performance improvement | N = 4625, M = 2.08, SD = 0.80, Mode = 2
K-49 The role of technology in quality improvement | N = 4623, M = 1.95, SD = 0.82, Mode = 2
K-50 Define the value proposition in healthcare | N = 4613, M = 1.63, SD = 0.94, Mode = 2
K-51 Evaluate patient/family experience | N = 4618, M = 2.29, SD = 0.78, Mode = 3
K-52 Distinguish between cost and value in healthcare | N = 4616, M = 1.81, SD = 0.87, Mode = 2
K-53 Identification of waste in healthcare | N = 4606, M = 1.92, SD = 0.85, Mode = 2
K-54 Cost (monetary and non-monetary) | N = 4605, M = 1.83, SD = 0.87, Mode = 2
K-55 Change theory | N = 4600, M = 1.70, SD = 0.91, Mode = 2
K-56 How to implement and maintain a communication strategy that involves all stakeholders | N = 4604, M = 2.02, SD = 0.86, Mode = 2
K-57 Recognition of threats to implementation and sustainability (e.g., fatigue, knowledge degradation, lack of upper level support/commitment, lack of team integrity, lack of personnel, competing priorities, lack of resources, disruptive behaviors, hierarchical professional behaviors) | N = 4613, M = 2.28, SD = 0.77, Mode = 3
K-58 Steps in project sustainment (celebration of success, modification of data collection and review) | N = 4604, M = 1.89, SD = 0.87, Mode = 2
K-59 Discern the relative strength of the design, source and methodology of new evidence and critically appraise the findings for use in practice (e.g., randomized trials, meta-analysis, expert opinion, observational studies, consensus documents) | N = 4606, M = 1.74, SD = 0.90, Mode = 2
K-60 Evaluate changes in key Federal statutes and regulations governing patient safety and quality that impact practice and guidelines | N = 4598, M = 1.89, SD = 0.87, Mode = 2
K-61 The types of patient and provider protections that are regulated by respective states’ statutes and regulations | N = 4580, M = 1.82, SD = 0.88, Mode = 2
K-62 How variations in state law can have an impact on quality and safety activities | N = 4573, M = 1.83, SD = 0.88, Mode = 2
K-63 Identify the elements of effective disclosure (e.g., disclosure of all harmful errors, explanation as to why error occurred, how effects will be minimized, steps to prevent recurrences, apology, acknowledgement of responsibility) | N = 4577, M = 2.05, SD = 0.82, Mode = 2
K-64 Distinguish between system error and human error, identifying at-risk and reckless behavior, and respond differently/appropriately to each, balancing no blame with accountability | N = 4574, M = 2.23, SD = 0.77, Mode = 2
K-65 Awareness of the differences between quality improvement projects and research | N = 4572, M = 1.76, SD = 0.90, Mode = 2
K-66 Human subject protections related to quality and safety | N = 4572, M = 1.92, SD = 0.91, Mode = 2
K-67 Understand psychological harm experienced by the patient and second victims | N = 4570, M = 2.06, SD = 0.86, Mode = 2
K-68 Understand the concept of the second victim | N = 4564, M = 1.92, SD = 0.89, Mode = 2

Finalization of the Content Outline

On October 15, 2018, a meeting was held with the practice analysis committee to review the results of the practice analysis survey and to finalize the content outline and test specifications based on those results. See Appendix I for the meeting agenda. At the meeting, the committee was provided with:

1. An overview of the purpose of the meeting and a brief overview of the work performed since the onset of practice analysis study.

2. A presentation of the survey response and completion rates.

3. A presentation of the demographic and background data.

4. An overview on how to use the validity rating data and the qualitative data (i.e., survey respondents’ comments) from the study to finalize the content outline and test specifications.

5. Guidance on how to identify decision rules concerning the validity rating data that they may use to finalize the content outline and test specifications.

After reviewing the demographic and background questions (i.e., Tables 8 through 20 of this report), it was the consensus of the committee that the survey respondent group appeared to be representative of the population.

Based on a review of the summary data for the importance and frequency ratings of the task statements (Appendix D), the committee initially decided to retain all task statements with a mean importance rating of 2.0 or higher, indicating that the task was rated "Important" (2) or higher on the importance rating scale, and a mean frequency rating of 1.0 or higher, indicating that the task was performed at least annually. The committee reached the same conclusion about the major domains, as the domains were rated important or very important, and practitioners spent time performing tasks associated with all five domains.

The committee initially decided to retain all knowledge statements that had a mean criticality rating of 1 or higher as that indicated that the knowledge statement was at a minimum “somewhat critical.” All knowledge statements were retained as a result of applying this decision rule.
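The committee's retention rules amount to simple threshold checks against the mean ratings. A sketch, using the thresholds stated above (the function names are illustrative, not from the report):

```python
def retain_task(mean_importance, mean_frequency):
    """Committee rule: keep a task rated at least 'Important' (2.0)
    on average and performed at least annually (mean frequency >= 1.0)."""
    return mean_importance >= 2.0 and mean_frequency >= 1.0

def retain_knowledge(mean_criticality):
    """Committee rule: keep a knowledge statement rated at least
    'somewhat critical' (1.0) on average."""
    return mean_criticality >= 1.0

# The lowest-rated task (Task 4.1: importance 2.17, frequency 1.60)
# and the lowest-rated knowledge statement (K-13: criticality 1.42)
# both clear the thresholds, so every statement was retained.
print(retain_task(2.17, 1.60), retain_knowledge(1.42))
```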

After reviewing the survey respondents’ comments about the comprehensiveness of the practice analysis, the committee made the following changes to the content outline, most of which addressed redundant task and knowledge statements:

Domain 2 was revised because the committee indicated that the promotion of integration of quality and safety practices should not be limited to governance and leadership levels. This change was made after reviewing survey respondents’ comments. Below is the original domain followed by the revised version of the domain:

Domain 2: Promote the integration of quality and safety practices within the organization at the governance and leadership levels

Revised: Domain 2: Promote the integration of quality and safety practices within the organization

Task 3.2 was removed and consolidated with Task 3.3, as it was redundant to Task 3.3. The committee added examples of stakeholders to clarify what was meant by stakeholders. Below are the original tasks. Following the original tasks is the revised version of Task 3.3.

Task 3.2 Apply recognized methods to improve quality and safety (e.g., model for improvement).

Task 3.3. Design quality and safety initiatives in collaboration with necessary stakeholders to identify the target population, measures (e.g., structure, process, outcomes) and data collection approaches to address identified opportunities.

Revised: Task 3.3 Apply recognized methods to design quality and safety initiatives in collaboration with necessary stakeholders (e.g., healthcare team, patients, and families) to identify the target population, measures (e.g., structure, process, outcomes) and data collection approaches to address identified opportunities.

Task 3.4 was removed and consolidated with Task 3.5 because it was redundant to Task 3.5. Below are the original tasks. Following the original tasks is the revised version of Task 3.5.

Task 3.4 Promote quality and safety practices by using error prevention strategies and appropriate technology to facilitate improvement initiatives.

Task 3.5 Integrate effective interventions into daily clinical workflow, using principles of high reliability, to guide practice and improve outcomes.

Revised: Task 3.5 Integrate quality and safety practices into daily clinical workflow by using error prevention strategies, appropriate technology, and principles of high reliability to guide practice and improve outcomes.

Task 5.2 was removed from Domain 5 because it was redundant with Task 1.2. Both original tasks are shown below.

Task 1.2 Maintain current knowledge of national quality and safety standards and clinical guidelines from regulatory, accreditation, and specialty organizations, to promote ongoing change in practice to meet quality and safety indicators.

Task 5.2 Establish mechanisms to incorporate existing, updated, and new regulations related to quality and safety into obstetric and neonatal care to facilitate compliance with regulatory standards.

Knowledge Statements:

The following knowledge statements were deleted because they were redundant with K-8, K-11, and K-12:

K-60 Evaluate changes in key Federal statutes and regulations governing patient safety and quality that impact practice and guidelines.

K-61 The types of patient and provider protections that are regulated by respective states’ statutes and regulations.

K-62 How variations in state law can have an impact on quality and safety activities.

Knowledge statement K-68 (“Understand the concept of the second victim”) was removed because it was redundant with K-67 (“Understand the psychological harm experienced by the patient and second victims”).

The following knowledge statement was added:

Knowledge of ethical principles as they apply to patients, families, providers, and organizations.

The committee added this knowledge statement because ethical practice is reflected in the domains and tasks, and they determined that a corresponding knowledge statement was needed.

See Appendix J for the final content outline.

Finalization of the Test Weights

To derive preliminary test weights, a multiplicative model was used to combine the data collected from the study. Three different sets of preliminary test weights were produced for review and consideration by the committee. As shown in Table 24, the first set (i.e., #1) was based on the importance and percentage of time estimates for the major domains, the second set (#2) was based on the respondents’ percentage weights for the domains, and the third set (#3) was based on the combination of sets #1 and #2. All weights for the tasks were based on the importance and frequency ratings.

For set #1, the importance and percentage-of-time estimates were combined (i.e., multiplied) to produce a weight for each domain. These weights were summed across domains, and each domain’s weight was divided by the total to derive its percentage weight. The preliminary number of test questions per domain was obtained by multiplying the total number of test questions (i.e., 100) by the domain weight.

To derive the preliminary number of test questions per task statement within a domain, the importance and frequency ratings were combined (i.e., multiplied) to produce a weight for each task statement. The task weights were summed within the domain; each task weight was divided by that sum and then multiplied by the domain weight. The preliminary number of test questions per task statement was obtained by multiplying the total number of test questions on the examination (i.e., 100) by the task statement weight.

For set #2, to derive the preliminary test weights, the survey respondents’ mean percentages for the domains (see Table 25 for the survey respondents’ mean weights) were used as the weights for the domains. To derive the preliminary number of test questions by domain, the total number of test questions on the examination (i.e., 100) was multiplied by the domain weight. The weights for the task statements were derived using the same procedure used to derive weights for set #1.

For set #3, the domain weights were derived by averaging the domain weights of sets #1 and #2. The weights for the task statements were derived using the same procedure used for set #1.
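The arithmetic behind the three sets can be sketched as follows. All rating products and percentages in the sketch are hypothetical; only the procedure (multiply ratings, normalize by the sum, scale to 100 questions, average sets #1 and #2) follows the study’s description.

```python
# Sketch of the multiplicative weighting model; all numeric inputs are
# hypothetical -- only the procedure follows the study's description.

TOTAL_ITEMS = 100  # total number of test questions on the examination

# Set #1: per-domain products of importance x percentage-of-time estimates.
domain_products = {"Domain 1": 15.2, "Domain 2": 16.2, "Domain 3": 19.0}
total = sum(domain_products.values())
# Each domain's weight is its product divided by the grand total.
set1_weights = {d: p / total for d, p in domain_products.items()}
set1_items = {d: round(TOTAL_ITEMS * w) for d, w in set1_weights.items()}

# Task weights within a domain: importance x frequency products, normalized
# within the domain and then scaled by the domain weight.
task_products = {"Task 1.1": 7.31, "Task 1.2": 8.13, "Task 1.3": 6.22}
task_sum = sum(task_products.values())
d1_weight = set1_weights["Domain 1"]
task_weights = {t: (p / task_sum) * d1_weight for t, p in task_products.items()}
task_items = {t: round(TOTAL_ITEMS * w) for t, w in task_weights.items()}

# Set #2 uses the survey respondents' mean domain percentages directly.
set2_weights = {"Domain 1": 0.30, "Domain 2": 0.32, "Domain 3": 0.38}

# Set #3 averages the domain weights of sets #1 and #2.
set3_weights = {d: (set1_weights[d] + set2_weights[d]) / 2 for d in set1_weights}
```

Note that rounding each domain’s item count independently does not always sum to exactly 100, which is one reason a committee review of the preliminary counts is still needed.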

Table 24. Three Sets of Preliminary Test Specifications

#1: Preliminary Test Specifications Using Validity Rating Data
#2: Preliminary Test Specifications Using Survey Respondents’ Domain Percentages & Task Validity Ratings
#3: Preliminary Test Specifications Based on an Average of #1 and #2 Domain Weights

| Domains/Tasks | #1 Weight | #1 % of Items | #1 # of Items | #2 Weight | #2 % of Items | #2 # of Items | #3 Weight | #3 % of Items | #3 # of Items |
|---|---|---|---|---|---|---|---|---|---|
| Domain 1 | 15.20 | 15.20% | 15 | 17.66 | 17.66% | 18 | 16.43 | 16.43% | 17 |
| Task 1.1 | 7.31 | 5.13% | 5 | 7.31 | 6.0% | 6 | 7.31 | 5.54% | 6 |
| Task 1.2 | 8.13 | 5.70% | 6 | 8.13 | 6.6% | 7 | 8.13 | 6.17% | 6 |
| Task 1.3 | 6.22 | 4.36% | 4 | 6.22 | 5.1% | 5 | 6.22 | 4.72% | 5 |
| Domain 2 | 16.16 | 16.16% | 16 | 17.56 | 17.56% | 18 | 16.86 | 16.86% | 16 |
| Task 2.1 | 7.35 | 3.68% | 4 | 7.35 | 3.99% | 4 | 7.35 | 3.83% | 4 |
| Task 2.2 | 8.17 | 4.08% | 4 | 8.17 | 4.44% | 4 | 8.17 | 4.26% | 4 |
| Task 2.3 | 6.20 | 3.10% | 3 | 6.20 | 3.37% | 3 | 6.20 | 3.23% | 3 |
| Task 2.4 | 6.01 | 3.00% | 3 | 6.01 | 3.26% | 3 | 6.01 | 3.13% | 3 |
| Task 2.5 | 4.61 | 2.30% | 2 | 4.61 | 2.50% | 3 | 4.61 | 2.40% | 2 |
| Domain 3 | 19.04 | 19.04% | 19 | 20.96 | 20.96% | 21 | 20.00 | 20.00% | 20 |
| Task 3.1 | 4.50 | 2.91% | 3 | 4.50 | 3.20% | 3 | 4.50 | 3.05% | 3 |
| Task 3.2 | 6.36 | 4.11% | 4 | 6.36 | 4.52% | 4 | 6.36 | 4.32% | 4 |
| Task 3.3 | 4.08 | 2.64% | 3 | 4.08 | 2.90% | 3 | 4.08 | 2.77% | 3 |
| Task 3.4 | 6.79 | 4.38% | 4 | 6.79 | 4.83% | 5 | 6.79 | 4.61% | 5 |
| Task 3.5 | 7.76 | 5.01% | 5 | 7.76 | 5.51% | 6 | 7.76 | 5.26% | 5 |
| Domain 4 | 16.78 | 16.78% | 17 | 19.47 | 19.47% | 19 | 18.13 | 18.13% | 18 |
| Task 4.1 | 3.47 | 5.02% | 5 | 3.47 | 5.83% | 5 | 3.47 | 5.42% | 5 |
| Task 4.2 | 3.61 | 5.23% | 5 | 3.61 | 6.06% | 6 | 3.61 | 5.65% | 6 |
| Task 4.3 | 4.52 | 6.53% | 7 | 4.52 | 7.58% | 8 | 4.52 | 7.06% | 7 |
| Domain 5 | 32.81 | 32.81% | 33 | 24.86 | 24.86% | 25 | 28.84 | 28.84% | 29 |
| Task 5.1 | 8.61 | 8.08% | 8 | 8.61 | 6.12% | 6 | 8.61 | 7.10% | 7 |
| Task 5.2 | 5.78 | 5.43% | 6 | 5.78 | 4.11% | 4 | 5.78 | 4.77% | 5 |
| Task 5.3 | 5.65 | 5.30% | 5 | 5.65 | 4.01% | 4 | 5.65 | 4.66% | 5 |
| Task 5.4 | 8.42 | 7.90% | 8 | 8.42 | 5.99% | 6 | 8.42 | 6.95% | 7 |
| Task 5.5 | 6.50 | 6.10% | 6 | 6.50 | 4.63% | 5 | 6.50 | 5.36% | 5 |
| Total | | 100.00% | 100 | | 100.00% | 100 | | 100.00% | 100 |

Table 25. Survey Respondents' Mean Percentage of Test Questions Per Domain

| Domain | Weights from Survey Respondents |
|---|---|
| Domain 1: Systematically perform ongoing and comprehensive quality and safety assessment and gap analyses | 17.66% |
| Domain 2: Promote the integration of quality and safety practices within the organization | 17.56% |
| Domain 3: Develop and implement quality and safety initiatives in obstetric and neonatal practice | 20.96% |
| Domain 4: Evaluate and measure the effectiveness of quality and safety practices in obstetric and neonatal care | 19.47% |
| Domain 5: Professionalism and ethical practice | 24.86% |
| Total | 100.51% |

The committee reviewed the three (3) sets of preliminary test weights and decided to use set #2 as the starting point for finalizing the test weights. They selected set #2 because it had the lowest weight for Domain 5, which they had decided to lower for the reasons described below.

When finalizing the test specifications, in addition to using the results of the job analysis study, the committee was advised to take into consideration the practical aspects of test development: the scope and depth of the subject matter should inform the final weights. For example, a task rated high in importance and frequency may nonetheless be limited in scope; conversely, a task rated high in importance but performed infrequently may have greater breadth and depth of subject matter than a task that is both important and frequently performed. The committee could therefore adjust the weights in light of these considerations.

Based on a review of the data, the scope and depth of the content, and the revisions to some of the tasks (i.e., removal of redundant statements) described in the previous section, the committee adjusted the weights, beginning with Domain 5 – Professionalism and Ethical Practice, as follows:

Domain 5: Although Domain 5 – Professionalism and Ethical Practice and its associated tasks are important for the practice of quality and safety, having a weight of approximately 25% (i.e., 24.86%) of the total exam would present a challenge for item and test development given that some of the tasks in Domain 5 are related or redundant to tasks in other domains. Therefore, they adjusted the weight of Domain 5 from 24.86% to 10.02%. As described previously, Task 5.2 was removed from Domain 5 because it is redundant to Task 1.2. In addition, the committee decided that Task 5.1 (“Demonstrate an ongoing commitment to lifelong learning and continued competence by keeping abreast of evidence-based practices related to quality and safety in order to optimize outcomes, improve system function, and reduce potential for harm.”) would not be included as part of the test specifications for the written examination because this task will be addressed as a recertification requirement. Therefore, the weights associated with Task 5.1 (6.12%) and Task 5.2 (4.11%) were redistributed among other tasks in other domains as described below. Of the remaining 3 tasks associated with Dom