


TEN "NOT SO SIMPLE" RULES FOR CREDIBLE PRACTICE OF MODELING & SIMULATION IN HEALTHCARE
A Multidisciplinary Committee Perspective

Ahmet Erdemir, Computational Biomodeling (CoBi) Core & Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
Lealem Mulugeta, Universities Space Research Association, Division of Space Life Sciences, Houston, TX, USA
William W. Lytton, SUNY Downstate Medical Center, Kings County Hospital Center, Brooklyn, NY, USA

with contributions from members of the Committee on Credible Practice of Modeling & Simulation in Healthcare

Project Site: https://simtk.org/home/cpms
Wiki Site: http://wiki.simtk.org/cpms
e-mail: [email protected]

Committee's activities are facilitated by the Interagency Modeling and Analysis Group and Multiscale Modeling Consortium:

2015 BMES/FDA Frontiers in Medical Devices
May 18-20, 2015, Washington, DC

MOTIVATION

GOALS

COMMITTEE TASK TEAMS

INITIAL SET OF RULES

TASK TEAM RANKINGS
COMMITTEE RANKING

CONCLUSIONS

FUTURE

• use version control
• use credible solvers
• explicitly list your limitations
• define the context the model is intended to be used for
• define your evaluation metrics in advance
• use appropriate data (input, validation, verification)
• attempt validation within context
• attempt verification within context
• attempt uncertainty (error) estimation
• perform appropriate level of sensitivity analysis within context of use
• disseminate whenever possible (source code, test suite, data, etc.)
• report appropriately
• use consistent terminology or define your terminology
• get it reviewed by independent users/developers/members
• learn from discipline-independent examples
• be a discipline-independent/specific example
• follow discipline-specific guidelines
• conform to discipline-specific standards
• document your code
• develop with the end-user in mind
• provide user instructions whenever possible and applicable
• practice what you preach
• make sure your results are reproducible
• provide examples of use
• use traceable data that can be traced back to the origin
• use competition of multiple implementations to check and balance each other

Consolidation

Rule 1 – Define context clearly.

Rule 2 – Use appropriate data.

Rule 3 – Evaluate within context.

Rule 4 – List limitations explicitly.

Rule 5 – Use version control.

Rule 6 – Document adequately.

Rule 7 – Disseminate broadly.

Rule 8 – Get independent reviews.

Rule 9 – Test competing implementations.

Rule 10 – Conform to standards.

Plan and develop the M&S activity with a clear definition of the intended purpose or context, accommodating end-users' needs.

Use data relevant to the M&S activity, which can ideally be traced back to the source.
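A minimal sketch of how "traceable data" can be made concrete in practice: record a cryptographic digest of each input dataset alongside its stated origin, so any later copy can be checked against the source. This is an illustrative Python example, not part of the Committee's materials; the file path and origin URL are hypothetical.

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of a data file, computed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical provenance record kept next to the model inputs,
# so a later copy of the dataset can be verified against the origin.
def provenance_record(path: str, origin: str) -> dict:
    return {"file": path, "origin": origin, "sha256": file_digest(path)}
```

Re-running `file_digest` on a received copy and comparing against the recorded digest detects silent corruption or substitution of the data.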

Evaluate the M&S activity through verification & validation, uncertainty quantification, and sensitivity analysis faithful to the context/purpose/scope of the M&S efforts, with clear, a priori definition of evaluation metrics and including test cases.
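The ingredients of "evaluate within context" (a priori metrics, validation against data, sensitivity analysis) can be illustrated with a deliberately small Python sketch. The model, validation data, and tolerance below are hypothetical stand-ins, chosen only to show the workflow: fix the metric and acceptance criterion before the comparison, then probe parameter sensitivity one factor at a time.

```python
import math

# Toy model (hypothetical): exponential decay y(t) = y0 * exp(-k * t).
def model(t, y0=1.0, k=0.5):
    return y0 * math.exp(-k * t)

# Evaluation metric and acceptance threshold fixed *before* comparison.
def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

TOLERANCE = 0.05  # a priori acceptance criterion (hypothetical)

# Hypothetical validation data within the intended context of use.
times = [0.0, 1.0, 2.0, 3.0]
observed = [1.00, 0.61, 0.37, 0.22]

error = rmse([model(t) for t in times], observed)
valid = error <= TOLERANCE

# One-at-a-time sensitivity: relative output change at t = 2.0
# for a +10% perturbation of a single parameter.
def sensitivity(param, base, delta=0.10):
    lo = model(2.0, **{param: base})
    hi = model(2.0, **{param: base * (1 + delta)})
    return (hi - lo) / lo

s_y0 = sensitivity("y0", 1.0)  # output scales linearly with y0
s_k = sensitivity("k", 0.5)    # output falls as the decay rate grows
```

Real M&S evaluations use richer metrics and global sensitivity methods, but the ordering is the same: criterion first, comparison second.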

Provide an explicit disclaimer on the limitations of the M&S to indicate under what conditions or applications the M&S may or may not be relied on.

Implement a version control system to trace the time history of the M&S activities, including delineation of contributors' efforts.

Document all M&S activities, including simulation code, model markup, scope and intended use of M&S activities, users' and developers' guides.

Disseminate appropriate components of M&S activities, including simulation software, models, simulation scenarios and results.

Have the M&S activity reviewed by independent third-party users and developers, essentially by any interested member of the community.

Use competition of multiple implementations to check the conclusions of different implementations of the M&S processes against each other.
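Rule 9 can be shown with a toy cross-check, sketched here in Python under a hypothetical model: two implementations built on different principles (a closed-form solution and a numerical integrator) are run on the same problem and compared. Agreement within a stated tolerance is evidence, not proof, that both were coded correctly; disagreement localizes a defect in at least one of them.

```python
import math

# Toy model (hypothetical): dy/dt = -k * y, solved two independent ways.
def analytic(t, y0=1.0, k=0.5):
    """Closed-form solution y(t) = y0 * exp(-k * t)."""
    return y0 * math.exp(-k * t)

def euler(t, y0=1.0, k=0.5, dt=1e-4):
    """Explicit Euler integration of the same equation."""
    y = y0
    for _ in range(int(round(t / dt))):
        y += dt * (-k * y)
    return y

t_end = 2.0
disagreement = abs(analytic(t_end) - euler(t_end))
```

In practice the competing implementations would come from independent teams or codebases, which is what gives the check its force.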

Adopt and promote generally applicable and discipline-specific operating procedures, guidelines, and standards accepted as best practices.

• commonality of a rule between task team lists
• relative ranking of a rule in task team lists
• grouping of rules based on similarities
• grouping of rules that enhance each other

Mathematics & Computation Team
interest in development of models and simulation approaches
7 participants: 3 - Executive Committee, 4 - Advisory Council
scientists, engineers
academia, industry, government, independent

Users Team
interest in application of models for clinical research and practice
7 participants: 3 - Executive Committee, 4 - Advisory Council
clinicians, scientists, engineers
academia, industry, government, healthcare organizations

Standards & Guidelines Team
interest in evaluation of models and simulation approaches
7 participants: 3 - Executive Committee, 4 - Advisory Council
scientists, engineers, directors, managers
industry, government

initial set of rules → discussions within task teams → individual task team rankings

Task teams decided upon their own approach to identify top ten rules, with or without ranking.
Task teams interpreted the rules in their own way and combined rules or added new ones.

Mathematics & Computation Team:
1. Use competition of multiple implementations to check and balance each other.
2. Document your code and make your code readable.
3. Explicitly identify experimental scenarios illustrating when, why, and how the model is false (alternative: Explicitly list your limitations).
4. Make it easy for anyone to repeat and/or falsify your results (alternative: Make sure your results are reproducible).
5. Use traceable data that can be traced back to the origin.
6. Use version control.
7. Define your evaluation metrics in advance.
8. Practice verification / validation / uncertainty quantification (alternative: Attempt verification within context).
9. Define the use context for which the model is intended.
10. Develop with the end user in mind.

Users Team:
1. Define the context the model is intended to be used for.
2. Disseminate whenever possible (source code, test suite, data, etc.).
3. Use appropriate data (input, validation, verification).
4. Provide examples of use.
5. Get it reviewed by independent users/developers/members.
6. Use version control.
7. Attempt uncertainty (error) estimation.
8. Explicitly list your limitations.
9. Perform appropriate level of sensitivity analysis within context of use.
10. Attempt verification within context.

Standards & Guidelines Team:
1. Plan and develop the M&S with the intended purpose/context, as well as the end-user, in mind.
2. Use appropriate data (input, validation, verification).
3. Test the M&S appropriately within context (verification & validation, uncertainty quantification, sensitivity analysis, test cases).
4. Document important elements of the M&S (domain of validity/invalidity, intended use, users' guide, code documentation, etc.).
5. Explicitly list your limitations.
6. Have the M&S reviewed by independent users/developers/members.
7. Use version control.
8. Use appropriate discipline-specific guidelines and standards.
9. Use consistent terminology or define terminologies.
10. Disseminate the M&S.

Mathematics & Computation Team: 4 responsive members. Forum discussion of alternatives, clusters, and additions. Individual member ranking: listing of 3 most important rules. Group ranking: non-linearly weighted scoring. First 10 shown.

Users Team: 4 responsive members. E-mail response by individuals. Individual member response: clustering of rules in relation to each other; listing ten important rules without ranking. Group ranking: frequency of rules emerging in individual lists (separately or within a group). First 10 shown.

Standards & Guidelines Team: 7 responsive members. Forum and e-mail discussions; no explicit individual member ranking. Group ranking: synthesis into 10 general themes by consensus, weighted by majority feedback.

integrating multiple disciplines: mathematics, life sciences, engineering, computing, medicine
engaging diverse organizations: government, academia, hospitals, industry, ...
leveraging various procedures: reporting, standards, validation, verification, repeatability, ...
applications in many areas: diagnosis, clinical research, disease pathway, drug development, therapeutics design, ...

multiple disciplines, across organizations, variable procedures, broad applications

EXECUTIVE COMMITTEE (EXECUTE & CHARGE): 2 co-chairs; 8 current members including co-chairs; 2 past members.
ADVISORY COUNCIL (REVIEW & ADVISE): 12 current members.
COMMUNICATION / ACCOUNTABILITY between the two bodies.

Committee roster is targeted to establish a balanced representation of the interests and perspectives of the different stakeholders, and to bridge synergistic activities in simulation-based medicine throughout various M&S communities.

To identify broadly applicable common rules for establishing credibility in M&S in healthcare by synthesizing discussions within a multidisciplinary Committee, the Committee on Credible Practice of M&S in Healthcare.

None of us are experts in everything. We need to learn from each other.

Credible practice of modeling and simulation in healthcare requires ongoing inclusive communications to establish adaptive workflows that can be utilized broadly.

Computational modeling and simulation (M&S) plays a growing role in the development and delivery of healthcare.

Government agencies and industry are making substantial investments in simulation-based medicine and notable discoveries are being made.

There is an emerging need to ensure credible decision-making with output from M&S, accommodating broad and multidisciplinary perspectives in healthcare.

Common practice guidelines, to ease communication among stakeholders and to allow a unified and appropriate approach to establish credibility of M&S in a multidisciplinary environment, do not exist.

Gauging the broader community's opinion: public online survey, December 2014 - April 2015, 195 participants.

Preparation of a guidance document, relying on the Committee's perspective and informed by community insight.

Development of a model certification process to facilitate systematic evaluation of credibility.

Support by establishing consistent terminology, demonstration of workflows, and outreach.

It is possible to establish broadly applicable overlapping themes for credible practice of M&S in healthcare.

Nonetheless, these rules are considered "not so simple," as their implied meanings may vary, indicating the need for clear and detailed descriptions during their application.

Perception and relative importance of these rules are likely to be influenced by disciplinary, organizational, and context of use (purpose) factors.

26 rules to start with