

Eindhoven University of Technology

MASTER

Towards a configurable model for outpatient clinics

development of a parameterized discrete event simulation model for outpatient clinics

Janssen, C.W.

Award date: 2014

Disclaimer
This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.

Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Download date: 23. Jun. 2018


Eindhoven, September 2014

       

         

By: Camiel W. Janssen

Bachelor of Arts, University College Maastricht
Student Identity Number: 815516

in partial fulfillment of the requirements for the degree of:

Master of Science in Operations Management and Logistics (OML)

     

Supervisors:
dr. P.M.E. Van Gorp, TU/e, IS
dr. ir. N.P. Dellaert, TU/e, OPAC
ir. J. Beerens, Vreelandgroep Organisatie-adviseurs
ir. H. van Kaam, Vreelandgroep Organisatie-adviseurs

Towards a Configurable Model for Outpatient Clinics

Development of a parameterized discrete event simulation model for outpatient clinics


TU/e - School of Industrial Engineering
Series Master Theses Operations Management and Logistics

Keywords: process simulation, object-oriented, process mining, health-care, discrete simulation, reference model


                 

An idea is always a generalization, and generalization is a property of thinking. To generalize means to think.

Georg Wilhelm Friedrich Hegel, 1833


 

Abstract
In the last decades the landscape for hospitals has changed drastically. As a result of budget cuts and market dynamics, hospitals have had to become more aware of their operations and efficiency. Simulation models can be used to evaluate what-if situations, but such models are often developed for one specific situation. Due to their specific nature, these models can hardly be reused or maintained. In this thesis a methodology is presented in which four available models are combined into a configurable process model (C-EPC). This C-EPC constitutes the basis for a configurable discrete event simulation tool. The tool incorporates two main features: the construction and the running of the model. Using a range of test methods, it was found that the configurable tool produces models with the required configurations. A questionnaire amongst a small number of consultants showed that the tool was significantly (p<0.1) better in terms of perceived usefulness, reusability, maintenance, time-intensity and needed amount of prior knowledge.


People
Graduation Candidate: Camiel W. Janssen, Bachelor of Arts
Camiel started his academic career at University College Maastricht. He specialized in quantitative economics, supply chain management and econometrics. After his bachelor he did a traineeship at Harsco Infrastructure, where his attention was drawn to more technical and mathematical studies. He started the MSc program Operations Management and Logistics at Eindhoven University of Technology in September 2012.

First Supervisor: dr. Pieter M.E. Van Gorp, TU/e
Pieter Van Gorp has been investigating and extending the applicability of graph transformation technology to software modeling and health information system challenges since 2002. Since 2008, he has been an assistant professor in the School of Industrial Engineering at Eindhoven University of Technology.

Second Supervisor: dr. ir. Nico P. Dellaert, TU/e
Nico Dellaert is an associate professor at Eindhoven University of Technology. His field of expertise comprises quantitative analysis of stochastic models for logistic designs.

External Supervisor: ir. Herre van Kaam, Vreelandgroep
Herre van Kaam studied at Delft University of Technology and is currently partner and director of the Vreelandgroep Organisatie-adviseurs, Bilthoven. His field of expertise is the simulation of processes in hospitals.

Second External Supervisor: ir. Jaap Beerens, Vreelandgroep
Jaap Beerens studied at Delft University of Technology and is currently partner and director of the Vreelandgroep Organisatie-adviseurs, Bilthoven. Jaap specializes in quantitative analysis of hospitals' operational performance.


Foreword
It is with great pleasure that I present to you my Master Thesis Project. After six months of conducting research, improving soft skills, getting to know the industry and moving to another city, the final product is there. It has been a journey in which I was frequently introduced to my 'self'. Sometimes this was pleasant; at other times it was confronting.

I started this research from an idealistic point of view: I wanted to improve healthcare operations in the Netherlands and possibly beyond, in the belief that it would improve the lives of patients and hospital staff. After six months I must conclude that it is not all that easy. With this thesis I hope to have contributed a small piece to a system whose complexity is beyond a simple grasp. The final product(s) must not be seen as solutions, but as a step towards more efficient tools to improve healthcare operations. Improving the reusability and maintainability of models leads to better and more accessible analyses for hospitals, which can be translated into better care and cure.

First of all I would like to thank my TU/e supervisor, dr. Pieter Van Gorp, who has been very empathic during the chaotic and difficult last 1.5 years, as well as for his sharp and sincere feedback during the research. I would like to thank my girlfriend, drs. Kelly Adams, for her love and support over the course of the last six months. I would also like to address a word to my dad, dr. ir. Lucas Janssen, who implicitly stimulated me to pursue a technical MSc and provided me with coaching, feedback and inspirational talks during difficult periods.

Lastly I would like to thank the Vreelandgroep, in particular ir. Jaap Beerens and ir. Herre van Kaam, for the opportunity to use their work, for helping me with my thesis, teaching me to program, facilitating the development of my soft skills, and being optimistic about me and my work.

Camiel Janssen


Executive Summary
In the last decade the market for healthcare in the Netherlands has changed drastically. Due to the introduction of elements of market dynamics and budget cuts, hospitals have had to become more aware of their operational efficiency. To evaluate the operational efficiency of what-if situations, hospitals can use a range of tools, one of which is simulation. Simulation allows one to benchmark and to evaluate performance indicators in different organizational scenarios (Clague et al., 1997).

This thesis specifically deals with the outpatient departments of hospitals. Compared to other departments, relatively little research has been dedicated to this functional area of the hospital. The outpatient department, or clinic, is a clinic for non-hospitalized patients. Patients that visit the outpatient clinic are consulted, possibly examined or treated, and released on the same day. Of the simulation studies that could be identified, only a few were generic. The simulation studies are typically tailored to one specific outpatient clinic, with certain physical and organizational features. Given that these models are developed independently, it is assumed that existing information (i.e., code, process models, etc.) is not reused optimally or not reused at all. Hence the question arises whether it is possible to construct a generic simulation model, based on available models, that improves reusability, maintainability and flexibility but has the same power as the individual models.

In this thesis a generic parameterized simulation tool is constructed.
To this end, four available simulation models were evaluated in terms of typical process, assumptions, parameters, key performance indicators (KPIs), metrics, data input and output. Various techniques, such as process mining, were used to evaluate the models. By adding code to the available models, tracefiles were constructed that allowed for analyses in Disco, a process mining package. The mined process models, together with the descriptions of the models, were translated to process models in the Event-driven Process Chain (EPC) language. Using the merging algorithm from configurable models theory, a configurable model (C-EPC) was constructed (La Rosa, Dumas, Uba, & Dijkman, 2012). The C-EPC allows for the different process variations present in the individual EPCs of the available models. Other configurations can be set as well; these appear logical, but have not been validated yet.

To identify critical KPIs for the simulation of outpatient clinics, both literature research and the available models were used. Based on these two sources a list was compiled from which a selection of KPIs was chosen. Given the C-EPC, KPIs and other components described above, a discrete event simulation tool was developed. Because the tool is object-oriented, the user must both construct and run the simulation model.
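As an illustration of the tracefile step: the sketch below shows how simulation events could be written as a CSV event log with the case ID, activity and timestamp columns that Disco expects. All names and timings are hypothetical; this is not the thesis code.

```python
# Illustrative sketch (hypothetical names): emitting a Disco-compatible
# event log. Disco needs at least a case ID, an activity name, and a
# timestamp per row; one row per activity execution.
import csv
import io
from datetime import datetime, timedelta

def write_tracefile(events, out):
    """Write simulation events as CSV rows (case_id, activity, timestamp)."""
    writer = csv.writer(out)
    writer.writerow(["case_id", "activity", "timestamp"])
    for case_id, activity, ts in events:
        writer.writerow([case_id, activity, ts.isoformat()])

# Two invented patient traces through a simplified outpatient process.
start = datetime(2014, 9, 1, 8, 0)
events = [
    ("patient_163", "Arrival",      start),
    ("patient_163", "Consultation", start + timedelta(minutes=12)),
    ("patient_163", "Departure",    start + timedelta(minutes=27)),
    ("patient_164", "Arrival",      start + timedelta(minutes=5)),
    ("patient_164", "Consultation", start + timedelta(minutes=20)),
    ("patient_164", "Departure",    start + timedelta(minutes=33)),
]
buf = io.StringIO()
write_tracefile(events, buf)
print(buf.getvalue())
```

Disco reconstructs the control flow from the ordering of rows per case, so appending one such row at every activity in the simulation code is enough to enable mining.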
An object-oriented approach was chosen because it allows object-oriented KPIs such as capacity utilization to be measured easily, objects to be replaced with more recent and better versions, and objects to be omitted to increase the compactness of the model. Objects have configurable features, which are controlled from a dialog. So instead of having two types of consultation rooms, one with and one without the possibility for external examination, only one object is needed which can do both. The model is run using a dialog which follows the same logic as Quaestio (La Rosa, Lux, Seidel, Dumas, & ter Hofstede, 2006). By entering values in the dialog, the process model can be altered and therefore easily compared to other process models. This approach to object-oriented simulation modeling seems unique, as it could not be identified in existing studies.

To evaluate the correctness and quality of the tool, a number of tests were conducted. First, the tool was tested in small-scale experiments to make sure the right process models were produced for different settings. Second, one of the four models that was used to construct the tool was rebuilt; KPIs, process charts and development time were compared (Sargent, 2000). Last, a questionnaire was issued to a small number of consultants to evaluate usefulness, usability and intention to use (Moody, 2003).

The small-scale experiments have shown the correctness of the different process models. By altering the values of the settings in the dialog, different values for the KPIs were identified. Using the 'fixed value' test as explained by Sargent (2000), the results of the deterministic experiment could be validated by manual calculation. As anticipated, a strong difference in KPIs was found between models with and without a central waiting system (RICOH, n.d.).
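The configurable-object idea described above (one consultation room that can be switched between "with" and "without" external examination, rather than two fixed room types) can be sketched as follows. All names and numbers are illustrative and do not come from the actual Plant Simulation implementation.

```python
# Minimal sketch (assumed names) of a configurable simulation object:
# one ConsultationRoom type covers both room variants via a flag.
from dataclasses import dataclass

@dataclass
class ConsultationRoom:
    name: str
    allows_external_examination: bool = False  # configurable feature, set via dialog
    busy_minutes: float = 0.0
    open_minutes: float = 480.0  # one 8-hour day

    def serve(self, duration, needs_examination=False):
        """Serve a patient; refuse examinations the room is not configured for."""
        if needs_examination and not self.allows_external_examination:
            return False
        self.busy_minutes += duration
        return True

    @property
    def utilization(self):
        # Object-oriented KPI: capacity utilization per room.
        return self.busy_minutes / self.open_minutes

room = ConsultationRoom("Room 1", allows_external_examination=True)
room.serve(15)
room.serve(30, needs_examination=True)
print(f"{room.utilization:.3f}")  # 45 busy minutes out of 480 -> 0.094
```

Because the KPI lives on the object itself, replacing or removing a room does not require touching any global measurement code, which is the reusability argument the thesis makes for the object-oriented design.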
Furthermore, one of the original models was rebuilt using the tool, allowing its mined process model and KPIs to be compared with the original. The mined process models of the original and the rebuilt version corresponded, and a conformance precision of 0.8966 was found for the C-EPC. Out of four KPIs, two were significantly different (p<0.05). Small differences in modeling approach, which cannot be derived from process models or model descriptions, are the foundation of the differences in the KPIs. Given that these nuances were overlooked, this can be considered a weakness of the methodology. The original development time of the model was compared to the development time with the tool. It was found that the tool decreases the development time by approximately a factor of five. However, one should note that the researcher operated the tool and the data was almost ready to use (small data transformations only). To make a fair comparison in this regard, a controlled experiment is recommended, for which a possible set-up is suggested in this thesis.

To assess the usability and likelihood of adoption of a development approach using the tool, a questionnaire was issued to a small number of consultants (n=8) with simulation expertise. The questionnaire was based on the original survey by Moody (2003). In this questionnaire the respondents were asked to evaluate two methods to develop a simulation model. The first method comprises solely the reuse of available models, while the second method comprises use of the tool.
The first method was presented as a combination of screenshots of different simulation models with their respective flowcharts. The accompanying text guided the subject to think in the context of copying code and objects. The second method comprised a step-wise guide of how the tool, developed in this thesis, should be operated. The methods were evaluated using statements on a 7-point Likert scale. The output of the questionnaire was originally intended to map onto the constructs defined by Moody (2003); however, using factor analysis, only one valid construct could be derived. Perceived usefulness was significantly (p<0.1) better for the tool than for the original method. Given that not all constructs could be evaluated, the individual questions were compared. It was found that the respondents rated the approach in which the tool was used significantly (p<0.1) better in terms of reusability, maintenance, time-intensity and needed amount of prior knowledge. Overall the respondents indicated that the tool was faster, simpler and more sustainable. Although not all constructs were statistically valid and therefore could not be compared, the results from the individual questions give some clear implications in favor of the tool.

This research has contributed a pragmatic and practical approach to take more advantage of existing models. The generic models that were identified in the literature research do not allow for KPI analyses in as many different settings as the tool. However, despite the thesis' relevance and innovativeness, there are also limitations. The tool assumes the outpatient clinic to be an autonomously operating entity in the hospital. Clearly, the model may oversimplify certain situations, e.g. situations in which patients have multiple appointments in one day, possibly at different outpatient clinics. Another limitation is that only elective consults are conducted. The schedule of appointments is more or less fixed before the actual simulation starts.
In real life, extra cases might be added to the schedule during the course of the day and doctors may have to leave for emergency cases. Therefore, conclusions based on the results of simulations using the configurable tool should be made with these limitations in mind.
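For the questionnaire comparison of the two development methods, the computation behind a paired-samples t-test on Likert items can be sketched as follows. The scores below are invented for illustration; they are not the actual questionnaire data.

```python
# Sketch (invented data) of the paired-samples t-test used to compare
# two methods rated by the same respondents on a 7-point Likert scale.
import math
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Return the paired-samples t statistic and degrees of freedom."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical scores from n=8 respondents for the two methods.
tool_method  = [6, 5, 6, 7, 5, 6, 6, 5]
reuse_method = [4, 4, 5, 5, 3, 5, 4, 4]
t, df = paired_t(tool_method, reuse_method)
print(f"t({df}) = {t:.2f}")
```

The resulting t statistic is compared against the t distribution with n-1 degrees of freedom to obtain the p-value (p<0.1 in the thesis results).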


Table of Contents

Introduction .......... 1
1  Literature Review .......... 4
  1.1  Simulation for Outpatient Clinics and Available Models .......... 4
  1.2  Key Performance Indicators .......... 7
  1.3  Modeling Methods .......... 9
  1.4  Configurable Models .......... 10

2  Methodology .......... 13
  2.1  Overview Methodology .......... 13
  2.2  Available Simulation Models .......... 14
  2.3  Research Stage 1: Analysis of the Models .......... 16
  2.4  Research Stage 2: Configurable Model and Tool .......... 17
  2.5  Research Stage 3: Model Evaluation .......... 18

3  Available Models .......... 19
  3.1  Process Models and Process Mining .......... 19
  3.2  Assumptions .......... 22
  3.3  Static and Parameterized Components .......... 23
  3.4  Key Performance Indicators .......... 24
  3.5  Data .......... 25
  3.6  Model Metrics .......... 25

4  Reference Model and Tool .......... 27
  4.1  Merging the Existing Process Models .......... 27
  4.2  Merging Assumptions, Settings and Other Factors .......... 29
  4.3  From Configurable Model to Plant Tecnomatix Simulation Model .......... 31
    Model Construction .......... 32
    Running the Model .......... 33
    Model Output .......... 34

5  Model Testing .......... 35
  5.1  Small-Scale Example .......... 35
  5.2  Small-Scale Model Validation and Verification .......... 36
  5.3  Rebuilding ISA2 (ISA MDL) .......... 38
    The Modeling Steps .......... 39
    Results .......... 39

6  Model Evaluation .......... 42
  6.1  The Method Evaluation Model .......... 43
  6.2  Sample and Results .......... 46


7  Discussion .......... 48
8  Conclusion and Final Remarks .......... 51
  8.1  Main Findings .......... 51
  8.2  Recommendations for Future Work .......... 51
  8.3  Final Remarks .......... 52

References .......... 53
Appendix A – Process Models .......... 57
  Model Descriptions .......... 57
  Model ISA1 .......... 58
  Model ISA2 .......... 59
  Model RDG .......... 60
  Weerawat .......... 61

Appendix B – Parameters .......... 62
Appendix C – The Tool .......... 63
  Model Components .......... 63
  Attributes MU Patient .......... 64
  Attributes MU Doctor .......... 64
  Model Constructor (Example Waiting Area) .......... 65
  Automatic Connections .......... 66
  Appointment Data .......... 66
  Settings for Simulation .......... 67

Appendix D – Testing .......... 68
  Experiment 1 .......... 68
    Experimental Settings .......... 68
    Flowchart Experiment 1 .......... 69
  Experiment 2 .......... 70
    Experimental Settings .......... 70
    Distribution Throughput Times .......... 70
    Flowchart Experiment 2 .......... 71
  Experiment 3 .......... 72
    Experimental Settings .......... 72
    Flowchart Experiment 3 .......... 73
  Experiment – 5 Days .......... 74
  ISA Rebuilt .......... 76
    Mined Process ISA2 Rebuild .......... 77
    Distribution for Patients in Waiting Area .......... 78

Appendix E – Questionnaire .......... 79
  Survey Questions .......... 79
  Constructs .......... 80
  Original Moody Survey .......... 80


Appendix F – Statistical Results .......... 81
  Paired-Samples T-Test .......... 81
  Correlation Matrix .......... 82
  Cronbach's α When Deleted (Perceived Usefulness) .......... 82
  Paired Samples Test Perceived Use .......... 83
  Overarching Questions .......... 83

 


Table of Figures
Figure 1.1: The ENT Clinic (Harper & Gamlin, 2003, p.215) .......... 5
Figure 1.2: A Configurable Process Model (La Rosa et al., 2011, p. 3) .......... 11
Figure 2.1: Methodology Overview .......... 15
Figure 3.1: Compiling a Tracefile .......... 20
Figure 3.2: Trace Patient: 163 .......... 21
Figure 4.1: From Instances to Configurable Model .......... 28
Figure 4.2: Backbone Model Process Outpatient Clinic .......... 29
Figure 4.3: Table of Model Components Input File .......... 32
Figure 4.4: Matrix Walking Paths .......... 33
Figure 5.1: Layout of the Outpatient Clinic .......... 35
Figure 5.2: ETConformance (Muñoz-Gama & Carmona, 2010, p. 6) .......... 40
Figure 5.3: Sankey Diagrams Output Plant Tecnomatix (left: original, right: rebuilt) .......... 42
Figure 6.1: Effectiveness vs Efficiency (Moody, 2003, p.3) .......... 43
Figure 6.2: Method Evaluation Model (Moody, 2003) .......... 43
Figure 6.3: Drivers for Data Model Quality (Moody & Shanks, 1994, p.101) .......... 45

Table of Tables
Table 1.1: Selection of Existing Modeling Studies for Outpatient Clinics .......... 6
Table 1.2: Common Themes Across Stakeholders (Martin et al., 2003, p.69) .......... 7
Table 1.3: Perspectives of Elements in Process Models .......... 11
Table 2.1: Aspects involved in the comparison of the simulation models .......... 17
Table 3.1: List of Abbreviations .......... 19
Table 3.2: Assumptions Categorized Per Model .......... 23
Table 3.3: Components, Entities and Logic in the VG models .......... 24
Table 3.4: Overview Key Performance Indicators .......... 25
Table 3.5: Overview Model Metrics .......... 25
Table 4.1: KPIs from analysis and literature review .......... 30
Table 5.1: KPIs deterministic model .......... 36
Table 5.2: Average Throughput Times per Run .......... 37
Table 5.3: KPIs for Experiment 2 .......... 38
Table 5.4: KPIs for Experiment 3 .......... 38
Table 5.5: Comparison KPIs Original ISA2 model and Rebuilt .......... 41
Table 6.1: Characteristics Sample .......... 46
Table 6.2: Constructs Reliability .......... 47


Introduction
The Dutch healthcare system has changed rapidly over the last decades, and will continue to do so in the near future. The Minister of Health announced that within the next decades the government will reduce the number of hospitals from the existing 120 to approximately 70 (Bos, Koevoets, & Oosterwaal, 2011). In addition, the Dutch government decided in 2006 that elements of market dynamics would be beneficial for the health system, creating competitive pressure on hospitals (Schut, 2009). The idea of market dynamics in healthcare led to the introduction of the so-called zorgverzekeringswet (Van Kleef, Schut, & Van de Ven, 2014). The zorgverzekeringswet implied that health-insurance companies had to procure services of hospitals based on their clients' needs. The basic idea is that market dynamics would force health institutions and insurance corporations to become more efficient, and thereby less expensive. Ultimately, this development ought to reduce costs for society as a whole. Combined with the fact that many hospital buildings are depreciated, this means that the Dutch healthcare sector, and hospitals in particular, finds itself in heavy weather (Karlas & Van Den Haak, 2013; Schut, 2009; Van Kleef et al., 2014).

This thesis deals with one department of the hospital: outpatient clinics. Compared to, for example, Operating Room oriented research, relatively little research is dedicated to outpatient clinics: Operating Room oriented research yields approximately four times more articles than outpatient clinic research (Google Scholar). This is not surprising, as it is only a recent trend that hospitals want to shift service production from inpatient to outpatient settings (Vitikainen, Linna, & Street, 2010). Given that outpatient clinics are becoming more important within the hospital, more research should be dedicated to this subject.

Literally, outpatient clinic means 'clinic for outside patients'. This implies that there are also clinics for inside patients (Côté, 1999); this care is also known as hospitalized care. Hospitalized (inpatient) treatments are conducted within the walls of the hospital, where the patient stays for the full duration of the treatment. Outpatients, by contrast, are consulted and released on the same day. It is also possible that a patient is hospitalized, unplanned, after a consult at the outpatient clinic. Furthermore, patients may have multiple appointments in (different) outpatient clinics on one day. In this thesis the outpatient clinic is considered an autonomous entity; interactions with other outpatient clinics and regular wards are not taken into consideration. Within the domain of outpatient departments there are different specialisms; one can distinguish between surgery, dermatology, neurology and many more.

Typically, outpatient clinics deal with diagnostic procedures or treatments that cannot be performed by the General Practitioner (GP) and hence require specialized medical staff. In the Netherlands a distinction is made between two main types of outpatient facilities: diagnostic clinics and surgical clinics. The difference between the two is rather straightforward. In a diagnostic clinic the medical staff performs diagnostics, i.e. the identification and evaluation of the patient's disease. In a surgical outpatient clinic, the patient receives surgery.


With the ever-rising financial and competitive pressure, hospitals need to be cautious with their resources. In order to maintain and improve their position in a competitive market, hospitals must gain insights and quantitative information about their performance and potential market. In this context, performance typically translates to efficiency. In the domain of outpatient clinics, one of the possible ways to evaluate such performance is by using simulation models. These simulation models allow one to evaluate how the outpatient clinic performs under different physical and organizational settings. Hence, simulation is well suited to evaluating 'what-if' scenarios and should not be confused with optimization. Optimization in this context may refer to the construction of optimal schedules which minimize, for instance, patient and doctor waiting times as well as the needed clinic size. There, the simulation model is only a tool, or black box, used to process different scheduling strategies (Clague et al., 1997).
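As a minimal illustration of such a 'what-if' evaluation, the sketch below (plain Python, not the actual simulation tool of this thesis; the arrival and service figures are invented) varies the number of doctors for one fixed arrival stream and reports the average patient waiting time:

```python
import heapq

def simulate_clinic(arrivals, service_time, n_doctors):
    """Toy discrete-event sketch: patients are served first-come-first-served
    by whichever doctor becomes free first; returns the average waiting time."""
    free_at = [0.0] * n_doctors          # time at which each doctor is next free
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrival in arrivals:             # arrivals assumed sorted, in minutes
        doctor_free = heapq.heappop(free_at)
        start = max(arrival, doctor_free)
        total_wait += start - arrival
        heapq.heappush(free_at, start + service_time)
    return total_wait / len(arrivals)

# 'What-if' scenario: same demand, one versus two doctors
arrivals = [0, 5, 10, 15, 20, 25]
for n in (1, 2):
    print(f"{n} doctor(s): avg wait {simulate_clinic(arrivals, 15.0, n)} min")
```

In this toy instance a second doctor cuts the average wait from 25 to 5 minutes; a real model would of course use stochastic arrivals and service times over many replications.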

Hospitals typically do not have the expertise to develop or use such simulation models and therefore hire professionals (consultants) to build them. The models developed by these professionals are often rather specific, meaning that a model can only be used for one specific outpatient clinic. After the project, the model loses its relevance, whilst these specific models contain a considerable number of reusable components. Because I reckon this phenomenon is a waste of resources, I find it important to gather existing knowledge and models for outpatient clinics. These sources must be recycled and combined to construct a model that can easily be configured to represent a specific outpatient clinic without starting from scratch. In other words, the ultimate goal of this MSc project is to construct a generic, parameterized simulation model for outpatient clinics based on a bottom-up approach from available models. The great advantage of such a model is that it is relatively easy to maintain and easy to adapt to different situations. In order to develop such a model, the following research question was formulated: Is it possible to construct a generic, parameterized simulation model for outpatient clinics, which has similar power as a set of simulation models but is considerably smaller in size? The research question comes with the following sub-research questions:

1. What typical processes, assumptions, parameters, KPIs, and data are used in the available models and literature?
2. How can one construct a configurable reference model in a bottom-up approach?
3. Is the reference simulation model capable of simulating the same behavior as the individual simulation models?
4. What is the usefulness and usability of such a model?

The research objective follows the same line of reasoning as the sub-research questions. First, the existing available models are analyzed in detail. The assumptions, typical traces, flexible parameters and used KPIs are summarized per model. Once this is done for the chosen pool of models, the overlap between the models is identified. This overlap can be considered the backbone of the reference model to be constructed. Eventually, the backbone model is extended in a parameterized fashion in order to become a generic, parameterized reference model for outpatient clinics. This model is presented as a configurable model. The configurable model forms the basis for a tool in Siemens Plant Tecnomatix in which


the actual simulation experiments can be conducted. This tool should be simple, easy to use and easy to maintain. Maintenance in this context means adapting the working model to different settings.
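The idea of a configurable model driven by a parameter file, rather than a model rebuilt from scratch per clinic, can be sketched as follows. This is a hypothetical illustration only: the station names, parameters and validation rule are mine, not those of the actual Tecnomatix tool or its input file.

```python
# Hypothetical sketch: a clinic configuration as plain data, analogous to an
# input file feeding a configurable simulation model.  All names are invented.
clinic_config = {
    "stations": ["reception", "waiting_area", "consultation"],  # active components
    "n_doctors": 3,
    "n_consult_rooms": 4,
    "walking_times": {                # minutes between areas (cf. a walking-path matrix)
        ("reception", "waiting_area"): 1.0,
        ("waiting_area", "consultation"): 2.0,
    },
}

def validate_config(cfg):
    """Check that every walking path refers to an activated station, so a
    configuration cannot route patients through components that are switched off."""
    stations = set(cfg["stations"])
    for src, dst in cfg["walking_times"]:
        if src not in stations or dst not in stations:
            raise ValueError(f"path {src}->{dst} uses an inactive station")
    return True

print(validate_config(clinic_config))
```

Adapting the model to another clinic then means editing this data, not the model logic, which is exactly what makes such a tool cheap to maintain.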

It is hypothesized that it is possible to construct a generic model that can mirror the same behavior as the individual models. In other words: the generic model can be adapted in such detail that it can mirror at least the models from which it originates, but also other situations. Given that only three available, working simulation models could be used, the genericity of the tool is limited. The configurable model, as briefly discussed in the previous paragraph, sets the boundaries and limitations for the possible configurations of the simulation tool. The tool ought to perform at least as well as the original models with regard to the required time investment for development, usability and KPI accuracy.

This thesis is structured as follows. First, the reader is provided with a short overview of the existing literature in the field of outpatient clinic modeling, parameterized models and key performance indicators. This part provides an overview and a framework in which the later sections can be placed. Then, the actual methodology used to construct the generic parameterized reference model is explained. This methodology is subdivided into three stages, which are discussed separately. Next, a structural analysis of the existing and available models is presented. This analysis forms one of the most essential components of this research: it determines which paths, components, and behavior should be present in the model. Once an understanding of the differences between the available models is established, the comparison results in a list of requirements for the actual model to be constructed. Once this model is constructed, its performance is compared to the available models using different methods. Evaluating the performance relates to the fourth research question, which elaborates on usability and usefulness. The usability and usefulness are evaluated using two methods. The first method comprises rebuilding one of the available models in order to evaluate the building steps and the potential profit in terms of time investment. The second method concerns the application of Moody's framework: a survey is issued in which professionals are asked to evaluate different approaches, one of which is the new model (Moody, 2003).

  Finally,   the   reader   is   provided   with   a   discussion   of   the   various   results   as   well   as   a  conclusion   and   recommendations   for   future   research.   This   includes   an   analysis   of   potential  shortcomings  of  the  model.        


1 Literature Review
The goal of this literature review is to give basic background information concerning the field of outpatient clinic simulation. The sub-research questions govern the content presented here. The review is structured as follows. First, the reader is provided with an overview of existing literature specifically focused on outpatient clinic simulation. Whereas this section does not directly answer a research question, it provides scope, context and an overview of the developments. Second, the review comprises an analysis of the Key Performance Indicators (KPIs) that are typically used in outpatient clinic simulation studies. This sub-section only partially answers the first sub-research question, because these KPIs are derived from literature and not from models obtained in the industry. Next, Section 1.3 is dedicated to the various modeling approaches and methods in simulation studies. Finally, a short introduction to configurable models is given; configurable models constitute the background for the development of the parameterized reference model.

For the literature review of this thesis, two main methods were used. First, the so-called Boolean strategy was used, comprising standard and delineation keywords. Various Boolean search queries were fed to search engines, resulting in a starting point for the literature review. Second, the snowball method was used: one follows the bibliographies of good, valuable and relevant papers in order to find more articles of that kind. Though this method is rather time-consuming, it yielded a considerable number of usable, good articles that would not have been found using the Boolean strategy alone.

1.1 Simulation for Outpatient Clinics and Available Models
Several articles review discrete event modeling in healthcare. The number of publications on healthcare simulation has increased rapidly over the last decades (Brailsford, Harper, Patel, & Pitt, 2009). Günal & Pidd (2010) elaborate specifically on outpatient clinics. They find that there are many different types of outpatient clinics, such as audiology, ophthalmology, and orthopedics. Simulation literature typically focuses on one specific specialty per article. An example of such a study is by Harper & Gamlin (2003), in which the researchers develop a simulation model for an Ear, Nose and Throat (ENT) clinic. This model is highly specific and only usable for ENT purposes. A screenshot of the model can be found in Figure 1.1.


 Figure  1.1:  The  ENT  Clinic  (Harper  &  Gamlin,  2003,  p.215)  

There is a wide variety of models such as the one depicted in Figure 1.1 (Günal & Pidd, 2010). Not only are these models tailored to one specific specialty; they are also made for one specific hospital. This leads to the notion that similar work is done repeatedly by different scholars, researchers, consultants and companies. The aphorism 'reinventing the wheel' applies well in this context, as these highly specific simulation models are hard to maintain and are not designed with reusability in mind. In order to make the models more flexible and reusable, generic models are needed which can easily be adapted to represent a certain situation.

As explained above, the search strategy comprised both a Boolean search strategy and the snowball method. Despite the fact that these methods are quite rigorous, there is always a chance that articles have been overlooked. Therefore the list of articles found should not be perceived as exhaustive, although an effort was made to identify all relevant articles. As can be seen in Table 1.1, the majority of the articles found concern clinic-specific models. Only three out of ten articles are generic, in the sense that the title and abstract of the article claim the model to be generic. However, after more careful analysis, the genericity of these models is disputable. For instance, the article by Weerawat (2013) goes by the title "A Generic Discrete-Event Simulation Model for Outpatient Clinics in a Large Public Hospital", whereas the model actually deals with the orthopedic outpatient department (OPD).

A 'real' generic model of this kind is CLINSIM (Paul & Kuljis, 1995). Attention was drawn to this model via the snowball method; it appeared that this simulation model was cited three times in the articles already used for this literature review. CLINSIM was developed by Paul & Kuljis (1995). The model was abstracted in various ways in order to keep it simple. This means, for example, that it is only possible to have between one and six doctors, and that the queues are all first-in-first-out (FIFO) with different patient types in the queue. Whereas the queuing model may sound trivial, it actually is a rather important feature. It implies that patients are not consulted according to their planned appointment time, but rather according


to their arrival time at the clinic. Different doctors consult different patients: less senior doctors are more likely to consult new patients, whereas senior doctors consult re-attenders. This simplification makes the model rather difficult to compare to, for example, Dutch hospitals. The model was tested in 20 separate clinics in the UK and gradually improved to become more generic.
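The practical consequence of CLINSIM's FIFO assumption can be made concrete with a small sketch (the patient data are invented): ordering the queue by arrival time lets early arrivals jump ahead of their scheduled slot, whereas ordering by appointment time preserves the schedule.

```python
# Hypothetical patients; times are in fractional hours, data invented
patients = [
    {"name": "A", "appointment": 9.00, "arrival": 9.10},   # slightly late
    {"name": "B", "appointment": 9.15, "arrival": 8.55},   # arrives early
    {"name": "C", "appointment": 9.30, "arrival": 9.05},   # arrives early
]

# CLINSIM-style FIFO: consultation order follows arrival at the clinic
fifo_order = [p["name"] for p in sorted(patients, key=lambda p: p["arrival"])]

# Appointment-driven queue: consultation order follows the planned schedule
appt_order = [p["name"] for p in sorted(patients, key=lambda p: p["appointment"])]

print(fifo_order)   # ['B', 'C', 'A']
print(appt_order)   # ['A', 'B', 'C']
```

Under FIFO, patient B is seen first despite having the second slot, which is exactly the behavior that makes the model hard to compare with appointment-driven clinics.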

One of the most interesting models for this research is the one developed by Swisher, Jacobson, Jun, & Balci (2001), which simulates a physician clinic. The model was developed in the US and therefore based solely on information provided by a network of US clinics. It was made in Orca's Visual Simulation Environment and is almost fully parameterized. As the name of the software package reveals, the model was designed to visualize. The purpose of this visual approach is to make medical decision-makers work side-by-side with operations research (OR) people (Swisher et al., 2001). This is the ultimate goal of simulation in healthcare: to help medical decision-makers make the best decisions based on certain KPIs. The authors discuss the simulated performance of the clinics only briefly; in terms of KPIs, the model is not that extensive. This brings us to a fundamental issue for simulation in healthcare, namely the purpose of the simulation model. For a model, it should be clear which group is targeted and what information is to be displayed (KPIs, visualization, or others). In the model by Swisher et al. (2001), the focus was purely on visualization and not so much on KPIs. Another example of a model shows other possibilities.

Weerawat (2013) demonstrates a simulation model that was developed to evaluate the performance of the Orthopedic Outpatient Department (OPD). The research elaborates on the doctor-to-patient ratio and other KPIs of the clinic. The model is highly detailed with regard to activities. Although this model was not parameterized, the research paper provides insights regarding the practical relevance and actual testing of the model, especially with regard to practical KPIs.

Simulation Study | Model For
(Harper & Gamlin, 2003) | Ear, Nose and Throat (ENT) outpatient department
(Paul & Kuljis, 1995) | Generic
(Weerawat, 2013) | Generic -> focused on orthopedics
(Côté, 1999) | Large Health Maintenance Organization (HMO), family practice clinic
(Weng & Houshmand, 1999) | Local Clinic
(Clague et al., 1997) | Mersey Regional Health Authority
(Rohleder, Lewkonia, Bischak, Duffy, & Hendijani, 2011) | Outpatient Orthopedic Clinic
(Huschka, Denton, Narr, & Thompson, 2008) | Outpatient Procedure Center (OPC), interventional procedures for Pain Medicine
(Swisher et al., 2001) | Generic Physician Clinic, Configurable
(Coelli, Ferreira, Almeida, & Pereira, 2007) | Unit 3 of the National Cancer Institute of Brazil (INCa), Rio de Janeiro, RJ, Brazil

Table 1.1: Selection of Existing Modeling Studies for Outpatient Clinics


1.2 Key Performance Indicators
As explained in the introduction, Dutch hospitals are in need of optimizing their operations. In order to do so they, amongst other measures, simulate outpatient clinics to evaluate to what extent they use their resources (e.g. capacities). In other words: hospitals want to achieve certain goals with regard to their performance. In order to quantify those goals, benchmarks and thresholds need to be set; these are typically represented as key performance indicators (KPIs). If a hospital chooses to use simulation models, it must be clear which KPIs are to be evaluated. The choice of KPIs can influence design and construction decisions during the modeling process. Chapter 3 provides more in-depth technical information concerning possible KPIs. In this section the focus is on familiarizing the reader with the main themes and problems concerning KPIs.

The concept of performance is a container definition: it varies for different people and different settings. Hence, one could argue that performance has a highly problematic contextual nature (del-Río-Ortega, Resinas, & Ruiz-Cortés, 2009; Martin, Balding, & Sohal, 2003). It has been found that the perception of performance typically differs across the hierarchies within an organization. Whereas the most downstream employees have their individual work environment high on the agenda, a partner or manager will have a stronger focus on the (financial) performance of a certain unit. Martin et al. (2003) conducted a study in which the perspectives of the three main stakeholders of the hospital in an outpatient clinic setting were evaluated, based on interviews and a literature review. Although that research originates in services performance, the KPIs identified are relevant for simulation studies in the context of outpatient clinics. Table 1.2 gives an overview of different performance themes and how the three stakeholder groups value them.

Clinicians | Management | Patients
Waiting time till 1st appointment | Waiting time till 1st appointment | Waiting time till 1st appointment
Patient Discharge | Patient Discharge | Waiting time in Clinic
Adherence to Guidelines | FTA rates | Being kept informed whilst waiting
Patient Satisfaction | Patient Satisfaction | Choice of Appointment
Clinical Outcomes | Waiting time in Clinic |
Clinician Workload | Staff Satisfaction |
Teaching | Ratio of new to review |
 | VACS targets/actual |

Table 1.2: Common Themes Across Stakeholders (Martin et al., 2003, p.69)

Among the themes mentioned in Table 1.2 there is clear overlap between the three stakeholders: Waiting time till first appointment, Patient Discharge, Clinic Waiting Time and Patient Throughput are all mentioned more than once. Furthermore, one can deduce that the stakeholders clearly focus on the themes closest to them: management is more focused on efficiency, patients focus on getting the fastest appointments, and clinicians focus on their own workload. The common themes are discussed below, together with supporting references.

Waiting Time in Clinic or Clinic Waiting Time. For patients it is convenient to have a waiting time in the clinic that is not too long. This basically means that the patient wants to have his or her


appointment at the time that was communicated (McCarthy, McGee, & O'Boyle, 2000; Weerawat, 2013). Martin et al. (2003) add that delays in clinics can be partially compensated for by informing the patient whenever there are delays. This performance indicator is typically relevant for the management, the clinicians and the patients. Obviously this measure (i.e. waiting time in clinic) correlates with overall patient satisfaction (McCarthy et al., 2000). It should be noted that this measure may lead to wrong conclusions, as it was found that 70% of all patients arrive early and therefore wait longer in the clinic (Zhu, Heng, & Teow, 2012; Hulshof et al., 2012).
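The bias introduced by early arrivals can be made explicit by choosing the reference point of the measurement. The sketch below (an illustrative helper, not taken from the cited studies) measures waiting time either from arrival or from the later of arrival and appointment time; the latter does not penalize the clinic for patients who show up early.

```python
def clinic_wait(arrival, appointment, consult_start, from_appointment=True):
    """Waiting time in clinic, all times in minutes from a common reference.
    Measured from arrival, the figure is inflated for early arrivers;
    measured from max(arrival, appointment), that bias disappears."""
    reference = max(arrival, appointment) if from_appointment else arrival
    return max(0.0, consult_start - reference)

# Patient arrives 20 min early; the consult starts 5 min after the slot
print(clinic_wait(arrival=40, appointment=60, consult_start=65))   # 5
print(clinic_wait(40, 60, 65, from_appointment=False))             # 25
```

The same visit thus yields a 5-minute wait under the appointment-based definition but a 25-minute wait under the arrival-based one, which is why the definition must be stated before the KPI is reported.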

Waiting Time Till First Appointment. Martin et al. (2003) also find that the waiting time until the first appointment is critical in assessing the performance of an outpatient clinic. The importance of this waiting time depends on the medical specialism: in oncology it is typically much more essential than for most cases in orthopedics.

Throughput. The throughput of the clinic may not be directly relevant for patients. It is, however, rather important for clinicians and management. As can be logically derived, this indicator makes the differences between the interests of patients and hospital staff (i.e. clinicians and management) clear (Klazinga, Stronks, Delnoij, & Verhoeff, 2001; Martin et al., 2003). With regard to the scope of throughput there are many derivatives; Clinician Workload, Clinical Outcomes and Target versus Actual Business Outcomes are some of the indicators one can think of. Clinician Workload can be described as the absolute working time, or the idle versus operating time. Throughput is also a measure of production: the managers of outpatient facilities make agreements with higher management to meet certain production targets. Deviation from the target does not go unpunished in terms of financial consequences for the hospital, and is therefore also an important performance indicator for outpatient facilities.
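A minimal computation of the Clinician Workload derivative (operating versus idle time within a shift) could look as follows; the consult durations and shift length are invented numbers.

```python
def clinician_workload(consult_durations, shift_length):
    """Return (operating, idle, utilization) for one clinician's shift,
    all times in minutes; an illustrative KPI computation."""
    operating = sum(consult_durations)
    idle = shift_length - operating
    return operating, idle, operating / shift_length

op, idle, util = clinician_workload([15, 20, 10, 25, 30], shift_length=240)
print(op, idle, round(util, 2))   # 100 140 0.42
```

In a simulation study these figures would be collected per replication and averaged, since a single shift says little about structural workload.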

Despite the relevance of the measures discussed above, there is still a lack of performance indicators with regard to capacity. With regard to the capacity of outpatient clinics, Côté (1999) made some important discoveries. He derived a simulation model for a family practice clinic. It was hypothesized that higher examining room utilization would lead to longer room queue lengths, a higher probability of operating at full capacity, and longer patient flow times at higher arrival rates. On the contrary, it was concluded that reducing examining room capacity only affects its utilization and the probability of full capacity. Hence the number of examining rooms did not affect queue lengths or patient flow times, and thereby individual patient delays. It should be noted that this holds provided the medical staff keeps the same working routine.

Aside from the relevant findings in this research by Côté (1999), the research also allows one to derive important performance indicators with regard to capacity. First, Room Queue Lengths can be subdivided into two queues in the clinic, namely (1) the number of patients waiting for initial consultation and (2) the number of patients waiting for return consultation (i.e. a second appointment in the clinic on the same day) (Côté, 1999; McCarthy et al., 2000). Second, Patient Flow Time is the total time that a patient spends at the outpatient facility during the visit. Third, Examining Room Utilization is the number of occupied examining rooms per shift. Côté (1999) explains the examining room utilization as "a weighted average of the ratio of the length of time where zero, one, two or three rooms are


occupied to the total lengths of time required to complete a shift" (p. 238). This same technique may be used for monitoring the utilization of waiting rooms in the outpatient clinic.
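Côté's weighted-average definition can be illustrated with a small calculation. The sketch below is an illustrative interpretation of that definition, not code from any of the models discussed; the function name and interval data are hypothetical.

```python
def weighted_room_utilization(occupancy_intervals, n_rooms, shift_length):
    """Weighted average of room occupancy: each interval contributes the
    fraction of rooms occupied, weighted by its share of the shift."""
    # occupancy_intervals: list of (duration, rooms_occupied) tuples
    occupied_room_time = sum(d * k for d, k in occupancy_intervals)
    return occupied_room_time / (n_rooms * shift_length)

# Hypothetical 8-hour shift with three examining rooms:
intervals = [(2.0, 0), (3.0, 2), (3.0, 3)]  # (hours, rooms occupied)
print(weighted_room_utilization(intervals, n_rooms=3, shift_length=8.0))
```

The same computation applies to waiting rooms by substituting waiting-room occupancy intervals.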

From the paragraphs above, we can deduce that KPIs are subdivided into two main types. The first type comprises KPIs on a 'person' level; in other words, KPIs related to the behavior and paths of patients, entourage, doctors, and medical staff – these KPIs are typically time related. The second type deals with capacity utilization, crowdedness and queue lengths of different objects such as consultation rooms and waiting areas.

1.3 Modeling Methods
There are essentially five modeling methods that can be considered in constructing a generic simulation model: Markov Models, Monte Carlo Simulation, Agent-Based Modeling, Continuous Modeling and Discrete Event Simulation (DES) (Sobolev, Sanchez, & Kuramoto, 2012). Markov Models and Monte Carlo Simulations are both not the favorable modeling technique for a generic, parameterized simulation model. Markov Models assume a memoryless system, which is typically not the case in outpatient clinics. They are suitable to evaluate the state of the individual patient within the outpatient clinic, but not of the clinic as a whole (Sobolev et al., 2012). Monte Carlo Simulation is not an appropriate choice either, because it is a static modeling technique. In other words: there is input and output, but no in-between states. To evaluate the performance of the clinic based on the earlier discussed KPIs, this modeling technique falls short.

Agent-based modeling makes use of agents which follow a set of predefined rules (Maidstone, 2012). Although agent-based modeling could have been a feasible method for the development of a reference model, precedents in literature and guidelines are scarce (Siebers, Macal, Garnett, Buxton, & Pidd, 2010). Agent-based modeling starts from the principle that a system exists in which autonomous decision-making entities (i.e. agents) operate (Sobolev et al., 2012). The great advantage of agent-based models is that agents interact: they may exchange information, coordinate activities and synchronize. Also, agents have autonomy, which allows for better and more accurate representations of reality; because agents are autonomous, the model developer is not constrained to stochastic distributions – the method is well suited for modeling individual behavior. A final advantage of agent-based models is that agents are able to learn, for instance through artificial neural networks (ANNs), evolutionary algorithms or other learning techniques (Sobolev et al., 2012). Despite the fact that agent-based modeling could be useful in the context of outpatient simulation models, it is a very time-consuming method as each agent has to be modeled independently. Given that more than 100 patients visit an outpatient clinic per day, this adds significantly to development time. However, there are indications that certain features of agent-based modeling can be used within other modeling methods (Dubiel & Tsimhoni, 2005).
For instance, a hybrid method combining DES and agent-based modeling allows one to simulate the movement of entities through a discrete event model. As can be seen later in the attributes of the entities of the simulation model, entities carry methods which trigger certain behavior of the entity itself or its entourage.


Continuous modeling, or System Dynamics, is not the favorable option either, as it assumes constant change within the input variables. System Dynamics assumes that the model consists of stocks and flows combined with feedback mechanisms. This means there are 'entities' going through the model, but it is not possible to individualize the behavior of these entities. For example, patients cannot have appointments; stocks are either FIFO or LIFO. Given the KPIs discussed above, this method would clearly fail in evaluating them. Although this method is often used in healthcare, it is mainly for the development of models of biological processes and in epidemiological studies (Lowery, 1998). An example of such an application is calculating the effect of epidemics on hospital utilization.

Discrete Event Simulation (DES) is a useful technique in the context of building a reference model for outpatient clinics (Weerawat, 2013). It allows one to enable stochastic features in the processes that the model captures. DES assumes a series of discrete events, which means that the entities (of which there may be many) move between different states as time passes (Maidstone, 2012). Typically one thinks of DES as a system of queues and servers, or processes. With DES, for instance, it is possible to add a certain randomness to walking patterns or walking speeds of the agents in the model. Given that the aim is to build a reference model with which hospitals can easily evaluate what certain redesigns mean for capacity utilization, DES is an appropriate method. Within the range of DES packages there is a wide variety of software. Selecting the appropriate package, however, falls beyond the scope of this project, as the industrial partner provides the software. As can be seen from the paragraphs above, agent-based simulation and DES are the most appropriate simulation methods for outpatient clinics. DES will be the main method, whereas small agent-based features may be incorporated.
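To make the queue-and-server view of DES concrete, the toy sketch below simulates a single-clinician clinic with an event list ordered by time: the clock jumps from event to event, and states only change when time advances. All names, distributions and parameter values are illustrative assumptions, not part of any of the models discussed in this thesis.

```python
import heapq
import random

def simulate_clinic(n_patients=50, mean_interarrival=5.0, mean_consult=4.0, seed=1):
    """Toy DES: patients queue FIFO for a single clinician.
    Events are (time, tiebreak, kind, patient_id) tuples popped in time order."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for pid in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, 0, "arrival", pid))  # tiebreak 0: arrivals first
    queue, waits, arrived_at = [], [], {}
    busy = False
    while events:
        now, _, kind, pid = heapq.heappop(events)
        if kind == "arrival":
            arrived_at[pid] = now
            queue.append(pid)
        else:  # a departure frees the clinician
            busy = False
        if queue and not busy:
            nxt = queue.pop(0)
            waits.append(now - arrived_at[nxt])  # waiting time in clinic (a KPI)
            busy = True
            service = rng.expovariate(1.0 / mean_consult)
            heapq.heappush(events, (now + service, 1, "departure", nxt))
    return sum(waits) / len(waits)

print(round(simulate_clinic(), 2))
```

Fixing the seed makes the run reproducible, which is exactly what the validation techniques discussed later (traces, internal validity) rely on.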

   

1.4 Configurable Models
"A configurable process model provides a consolidated view of a family of business processes. It promotes the reuse of proven practices by providing analysts with a generic modeling artifact from which to derive individual process models." (La Rosa, Dumas, Ter Hofstede, & Mendling, 2011, p. 313) In other words: configurable models constitute a generic basis for configured (i.e. configurations of the configurable) models. The main goal of using configurable models to constitute a generic model is "Design by Reuse" (van der Aalst, Dreiling, Gottschalk, Rosemann, & Jansen-Vullers, 2006). This makes sense, as configuration is also described as the inverse of inheritance (Gottschalk, van der Aalst, & Jansen-Vullers, 2007): one goes from specific to generic. The best way to explain the idea of configurable models is by an example. In Figure 1.2 one can find an example comprising film and tape shooting.


 

Figure 1.2: A Configurable Process Model (La Rosa et al., 2011, p. 3)

As can be seen from Figure 1.2, there are two process models. The first is the process model for tape shooting, whereas the second describes the film-shooting process. The first five steps in either process model are similar; the last two steps, on the other hand, are different and hence cause variability. In the configurable model a 'configurable OR connector' is introduced. The model above is represented in the so-called C-EPC (Configurable Event-Driven Process Chains) language (La Rosa et al., 2011).

Traditionally, the elements in process models are divided into four major perspectives (La Rosa et al., 2011). These four perspectives are shortly summarized in Table 1.3.

Perspective                  Explanation
Control-Flow Perspective     Occurrence and temporal ordering of activities.
Data or Object Perspective   Objects that are consumed and produced by the process and its activities, and how these data objects are used in decision points.
Resource Perspective         Organizational structure that supports the business process in the form of resources, roles and groups.
Operational Perspective      Elementary actions required to complete each activity, and how these activities map onto underlying applications.
Table 1.3: Perspectives of Elements in Process Models
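The idea of deriving a configured model from a configurable one can be sketched as shared data plus a selection rule resolving the variation point. The step names below are hypothetical stand-ins for the film/tape example of Figure 1.2, not the actual activities from La Rosa et al. (2011).

```python
# Shared backbone: the five steps common to both process variants.
backbone = ["receive order", "check equipment", "book crew",
            "prepare set", "shoot scenes"]

# Variation point resolved by the configurable OR connector:
variants = {
    "tape": ["digitise tape", "archive tape"],
    "film": ["develop film", "edit negatives"],
}

def configure(medium):
    """Derive an individual (configured) process model from the configurable one."""
    return backbone + variants[medium]

print(configure("film"))
```

Design by Reuse appears here directly: the backbone is written once and every configuration inherits it unchanged.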

It has been found that in the field of configurable models the focus is on the control-flow perspective (La Rosa et al., 2011). The other three perspectives are typically not taken into consideration, which limits the applicability of the configurable models. In order to overcome this limitation, a new language was invented called C-iEPC, where the i stands for integrated (La Rosa et al., 2011). Not much research has been done on this C-iEPC language; Google Scholar identifies only about two dozen articles in which the term "C-iEPC" is mentioned, showing the language's novelty. Furthermore, it has been found that tools to actually simulate these C-EPCs are still in the development phase (www.processconfiguration.com) and do not easily


allow for 'practical business' application. Müller (2012) presents a study in which four software packages (i.e. Eclipse, bFlow, TH Wildau and DesmoJ) are needed to go from C-EPC to simulation model.

Another interesting case study, on Dutch municipalities, operationalized yet another modeling language: YAWL, which stands for Yet Another Workflow Language (van der Aalst, Rosa, Gottschalk, Wagemakers, & Jansen-Vullers, 2009). In order to make the model easier to use, the tool uses a questionnaire developed in Quaestio, which is then translated with a Mapping Table into an integrated YAWL model. An integrated YAWL model has the same function as a configured model. The procedure can therefore be seen as rather complex. It was found that the creation of these models is costly (in time and money), and that identifying the variation between the models is difficult (van der Aalst et al., 2009). These YAWL models typically focus on the control-flow perspective.

 


2 Methodology
This section is organized as follows. First, the reader is given a short overview of the research methodology, comprising the basic idea and approach, which will govern the road to answering the research question. Second, an overview of the models that will be used is given. Third, the separate parts of the research as depicted in Figure 2.1 are explained in chronological order.

2.1 Overview Methodology
In order to answer the research question, the preferred method is a bottom-up approach. The reason is that the ultimate research objective is to develop a model that can be parameterized to represent a certain outpatient clinic without having to start from scratch, or having to copy code from many different old models. The model must be developed using the available work as well as possible. Adopting a bottom-up approach makes the generic model easier to test and, last but not least, makes the first couple of stages of the design cycle redundant: the models that are used for the bottom-up approach have already been validated and verified in real-life settings.

The research methodology is constructed as depicted in Figure 2.1 and consists of three stages. The research starts with four working simulation models for outpatient clinics, which are evaluated in an extensive and technical fashion. Of these models a summary of characteristics is needed; these characteristics are explained in Section 2.3. From these evaluations, the shared denominator is identified in terms of KPIs, traces and other aspects. The shared part is labeled the backbone model and hence constitutes the part without variability. After defining the backbone model, parameterized features are added so that different configurations of the model can exist. The configurable model is then translated into an object-oriented simulation tool, which can be used in Siemens Plant Tecnomatix. The tool allows one to construct an outpatient clinic relatively easily, with minimum input, whilst allowing for the different configurations described in the configurable model.

Given the research objectives, the model must be evaluated in terms of validity, usefulness and usability. Validity is tested using various validation techniques as described by Sargent (2001) and conformance analysis (Rozinat & Aalst, 2006). Then, the model must be assessed in terms of usefulness and usability. In order to do so, two approaches are applied.

First, the researcher will reconstruct one of the available models. The existing model that was chosen differs most from the tool developed in Stage 2. The reader will be taken through the different steps. The output is then compared to the output of the existing model, after which a comparison is made concerning the time investment in both methods (i.e. how much time did it take to make the original model, and how much time did it take to reconstruct the model using the reference model and tool). Second, a survey is issued in which the opinion of experts (i.e. other consultants and scholars) is evaluated, based on Moody's framework (Moody & Shanks, 1994). The experts are given the choice between two approaches (i.e. the traditional one and the tool), which they will have to score on different criteria constituting the constructs developed by Moody.


2.2 Available Simulation Models
As posed in the Literature Review, there are quite a number of models around that were made to simulate outpatient clinics, see Table 1.1 (van Sambeek, Cornelissen, Bakker, & Krabbendam, 2010). These models are often clinic specific and thus not generic. Gaining access to these available models has proven to be no easy task: scholars are reserved when the subject matter entails the sharing of models. Thanks to the alliance with the Vreelandgroep (VG) and some perseverance in the communication with scholars, full access was gained to the following four models. Despite the fact that a bigger pool would have been preferable, the VG models capture the typical dynamics of the outpatient clinic in Dutch hospitals; this has been validated and verified by the Vreelandgroep in three assignments in two different hospitals. The model developed by Weerawat (2013) captures a Thai outpatient clinic.

• ISALA Klinieken – Centralized Waiting Areas.
The VG developed a model to assess the operational performance of all their outpatient clinics. This model was specifically developed to assess the impact of centralized waiting areas on the waiting time of patients and the crowdedness of the different waiting areas (centralized and decentralized). The model contains over 15 specialties. Access is granted to the full model, the historical dataset, the generated dataset and the underlying assumptions.
Software: Plant Simulation

• ISALA Klinieken "gastrointestinal and liver diseases" (MDL).
The VG developed this model in order to assess the capacity utilization and crowdedness of various components of the outpatient clinic, comprising three different specialties.
Software: Plant Simulation

• Reinier de Graaf Ziekenhuis.
The VG made this model in order to gain more insight into Zone 4 of the Reinier de Graaf Ziekenhuis. The model comprises five outpatient clinics: oncology, breast surgery, radiation therapy, gastroenterology and liver, gastrointestinal and stomach surgery. Unfortunately there is no usable historical data yet; however, the full working model is accessible and runs on dummy data, similar to ISALA Klinieken.
Software: Plant Simulation

• Teaching Hospital Bangkok.
This model was developed by Weerawat (2013). It comprises the following specialties: orthopedics, dermatology, otolaryngology, surgery, and medicine. Because of the congestion in these clinics, a model was developed to plan capacity for a new location. There is full access to the scientific article, which comprises an evaluation of some of the assumptions, the KPIs used and the motivation.
Software: Arena

   

 


Figure 2.1: Methodology Overview
[Flow chart. Stage 1: Start Research; Analyse Existing Models (inputs: Available Models; Available Data, transformed to a compatible format); Summary of Characteristics of Existing Models; Find Overlap and Differences. Stage 2: Merge Process Models into Configurable Model (C-EPC); from C-EPC to Tool in Tecnomatix; determine: 1. Required Data, 2. Components, 3. Parameters; Merge Assumptions; Test, Verify and Validate Model; Improve; Improve Tool. Stage 3: Rebuild Existing Model and Test; Moody's Survey; Analyze, Compare, Discuss.]


2.3 Research Stage 1: Analysis of the Models
The analysis of the four models is done in a predefined, structured way. In the case of the VG models there is access to the actual developers, which makes it easier to identify and check the assumptions of the models. It has been found throughout this research that the assumptions show some association with the degree of staticness and/or the parameters in the model. The evaluation of a model consists of the following consecutive steps, which have been derived from Sobolev et al. (2012), where the authors introduce an approach to reporting on simulation in healthcare. The authors argue that the reporting of findings in simulation studies should adhere to an extensive list of requirements. This list of requirements (Table 10.1 in Sobolev et al. (2012)) states that the simulation model should be described in terms of: simulation approach, typical processes, activities, assumptions, configurations of the model, input parameters and output parameters. Based on those aspects, the following steps for the evaluation were defined.

The first step in the comparison of the models concerns the models' typical processes (Sobolev et al., 2012). These processes can best be described by flow charts, for which there are various approaches. One way to identify a model's typical processes is to critically assess the model descriptions: the available models are provided with an informal description of the processes, or an actual research paper. A possible weakness of this strategy is the fact that (1) the model description was made by a person and (2) the researcher's analysis is done by a person. In terms of human bias this could pose a threat, and thus a more formal approach is needed. Another, formal, approach towards analyzing the typical paths through the model is by using traces (Sargent, 2000). A trace corresponds to the path of an entity through the model (e.g. A→B→C→G). These traces can be analyzed through process mining techniques and transformed into charts and statistics (van der Aalst, 2012). Various platforms exist in which process mining is possible: Disco, ProM, but also various Java-based applications. For this research Disco and ProM were used. Disco allows one to do automated process discovery, evaluate detailed statistics, quantify traces and more. Disco uses an upgraded and optimized version of the Fuzzy Miner ("Disco User's Guide," 2012). Other process mining techniques proved too exact for real-life data, leading to complicated, unreadable models; the Fuzzy Miner avoids this by allowing the user to put emphasis on, aggregate, customize and abstract the process model that is obtained (Günther & Aalst, 2007).
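The core of quantifying traces can be sketched as counting trace variants in an event log. The toy log below is hypothetical, and real tools such as Disco do considerably more (frequencies, durations, filtering, abstraction).

```python
from collections import Counter

# Each trace is the ordered path of one entity (e.g. a patient) through the model.
event_log = [
    ["A", "B", "C", "G"],
    ["A", "B", "C", "G"],
    ["A", "B", "D", "G"],
]

# Group identical paths into variants and count their frequency.
variants = Counter("→".join(trace) for trace in event_log)
for path, count in variants.most_common():
    print(f"{count}x {path}")
```

Sorting variants by frequency immediately highlights the dominant flow through the clinic and exposes rare, possibly erroneous, paths.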

As a second aspect, the assumptions of the models are checked. These assumptions have implications for the construction of the model as well as for its predictive power. In practice, assumptions often correspond to simplifications of the model.

Third, the static and parameterized components of the model are summarized. This means that the model is evaluated concerning its configurability: it must be clear what is fixed, and what can be changed in the model to alter its behavior. This degree of 'configurability' should not be confused with the configurable model as described in Figure 2.1.

Fourth, the KPIs of the model (i.e. the output of the model) are listed and evaluated. These KPIs say something about the purpose of the simulation model, but also reveal the emphasis of the model developers. It must also be known how these parameters are technically monitored.


Fifth, and finally, a short analysis of the data format of the model is given. It must be known what the entities and their respective attributes are in order to see through the complexity and logic of the models. These correspond to the input as stated by Sobolev et al. (2012).

Once an understanding of all these components is established and evaluated per model, we can compare the models in terms of the five aspects listed before. This comparison leads to an overview comprising the main similarities and the main differences. The similarities lead to the requirements for the backbone model, whereas the differences constitute the notion of parameterized aspects within the model. The structure of the analysis is provided in Table 2.1. In addition to the aspects in the table, the size of the models was evaluated.

Aspect                   Crucial Question                                    Method
Typical process          What is the flowchart?                              Process Mining, Description analysis, Animations, Reports
Assumptions              What assumptions were made?                         Code Analysis, Description, Report
Static / Parameterized   Which components are static and parameterized?      Model/Code Analysis
KPIs                     What KPIs are evaluated?                            Reports, Presentations, Descriptions, Documentation
Data                     What data (format) serves as input for the model?   Model/Code Analysis
Table 2.1: Aspects involved in the comparison of the simulation models

2.4 Research Stage 2: Configurable Model and Tool
Figure 2.1 implies that I want to apply the theory of configurable models (van der Aalst, Dreiling, Gottschalk, Rosemann, & Jansen-Vullers, 2006). Using the information obtained in Stage 1, the process models can be combined into an overarching configurable model. This configurable process model, defined in C-EPC, is the basis for an object-oriented tool that is built in a DES package (Plant Tecnomatix).

The development of such a tool is a creative and time-consuming process. Iterative and incremental steps are essential to make a successful model (Booch et al., 2008). As claimed by Booch et al. (2008), one does not only need an iterative process, but also a strong architectural approach. This means that the researcher, or model designer, has to have a clear understanding of the various components and layers in the model, as well as of its main goals. Good software architectures have the following attributes in common. First, the model should have well-defined layers of abstraction. Second, there should be a clear separation of concerns between the interface and implementation of each layer. Third, the architecture ought to be simple, meaning that "common behavior is achieved through common abstractions and common mechanisms" (Booch et al., 2008, p. 249). In addition, one needs clear assumptions to set the boundaries of what the model is able to represent in conceptual terms. This very strategy by Booch et al. (2008) is followed. The main goal is to construct a model (in this case a tool) that is (1) easy to maintain, (2) simple, and (3) enables a high degree of reusability.

During the development of the tool, attention must also be given to the required input data. To make sure the model is flexible and usable in different health care institutions, one


strives to use as little and as simple input data as possible. As claimed earlier, constant testing and improvement is the crux of successful model (or tool) development. To this end, Sargent (2000) proposes the following methods, of which a selection is used in Chapter 5.

• Comparison to Other Models. Compare various results (e.g. output) of validated models to the model.
• Fixed Values. Models are given certain fixed input. The output of the model is checked using manual calculations (for those processes in the model).
• Animation. Display the behavior of the model graphically whilst moving the model through time.
• Internal Validity. Do multiple runs within the model and assess the amount of internal variability.
• Parameter Variability-Sensitivity Analysis. Evaluate the change in the model's behavior when parameters are changed. The higher the amplitude of change in the output, the more sensitive a parameter is. Those sensitive parameters need extra attention and should be checked critically.
• Traces. Follow certain entities throughout the model and check whether their paths are coherent with practice and theory. Check the correctness of the model's logic.
• Extreme Conditions Tests. Evaluate how the model functions under extreme (i.e. extreme input or extreme parameter) settings.
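The parameter variability-sensitivity analysis listed above can be sketched as a one-at-a-time perturbation loop. The stand-in model, function names and parameter names below are hypothetical, not part of Sargent's methods or the thesis models.

```python
def sensitivity(model, base_params, param, deltas):
    """Re-run the model with one parameter perturbed by each relative delta
    and record the output, keeping all other parameters fixed."""
    results = {}
    for d in deltas:
        params = dict(base_params)
        params[param] = params[param] * (1 + d)
        results[d] = model(**params)
    return results

# Stand-in for a simulation run: mean wait grows with consultation length.
def toy_model(mean_consult, n_clinicians):
    return mean_consult / n_clinicians

print(sensitivity(toy_model, {"mean_consult": 10.0, "n_clinicians": 2},
                  "mean_consult", [-0.2, 0.0, 0.2]))
```

A large spread between the outputs flags a sensitive parameter that deserves extra scrutiny, per Sargent's advice above.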

2.5 Research Stage 3: Model Evaluation
In the final part of this research we want to evaluate the usefulness and potential success of the generic parameterized reference model – the tool. In order to do so, two methods are used. First, one of the available models is rebuilt. Using the traces of the old, existing model, conformance can be tested. When this is satisfactory, the KPIs as defined in research Stage 1 can be compared; for the model to be successful, the KPIs must show high similarity. The time needed to rebuild the model is also evaluated, to assess whether a reduction in the time invested in model development is achieved. This would imply improved efficiency with regard to model development.

Second, a questionnaire is issued to other consultants, in which the tool and the configurable model are evaluated according to Moody's framework (Moody, 2003). Moody's framework describes a model that can be used to evaluate the model based on adoption in practice, perceived efficacy and actual efficacy. In the questionnaire, the subjects are presented with two approaches, of which one is the developed tool and the other comprises a more traditional approach. Using statistical analysis, the data is evaluated and conclusions are drawn.

     

 


3 Available Models

As explained in the methodology, this section describes the models in detail according to the criteria defined earlier. The descriptions follow a structured logic, which makes it easier to assess the differences and similarities between the models. Finally, a conclusion is drawn, comprising a list of requirements for the generic parameterized model.

All four models that are explained and analyzed below fall within the category of Discrete Event Simulation (DES) models; see Section 1.3. The model developed by Weerawat (2013) uses a flow-based orientation, whereas the models developed by the VG use an object-oriented approach. DES means that the model has a component where 'time' is generated (see Section 1.3). The entities in the model change state based on the chronological time proxy. This means that if time does not change, the states of the entities in the model do not change and thus no simulation is conducted.
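The time-driven state-change principle described above can be sketched as a minimal event loop; this is an illustration in Python, not the actual Plant Simulation implementation.

```python
# Minimal sketch of the DES principle: the clock jumps from event to event,
# and entity states only change when an event fires.
import heapq

def run(events):
    """events: list of (time, entity, new_state) tuples. Returns the history."""
    heapq.heapify(events)            # events are processed in time order
    clock, history = 0.0, []
    while events:
        time, entity, state = heapq.heappop(events)
        clock = max(clock, time)     # time only moves forward
        history.append((clock, entity, state))
    return history

schedule = [(5.0, "patient_1", "in_consult"), (0.0, "patient_1", "in_WA"),
            (9.0, "patient_1", "left_hospital")]
print(run(schedule))
```
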

This section is structured as described in the methodology (Section 2.3). First, the typical processes of the models are evaluated. This is done both in terms of model description and using data mining techniques. Second, a section is dedicated to the assumptions of all four models, after which a discussion about the degree of staticness of the models is presented. Fourth, the different types of KPIs evaluated in the models are analyzed and compared. Fifth, the data structures and formats used in the different models are evaluated.

Subsequently, a section is dedicated to model metrics, as we need to establish an understanding of the size of these models, followed by a conclusion comprising a short overview of the variety between the available models.

3.1 Process Models and Process Mining

In this section the various processes within the 'available models pool' are evaluated. In order to enhance readability, a short list of abbreviations is provided in Table 3.1.

Model names:
- ISA1 — Model for Isala Klinieken, focus on CWA
- ISA2 — Model for Isala Klinieken, focus on MDL
- RDG — Model for Reinier de Graaf
- WW — Model for the Bangkok outpatient clinic (Weerawat, 2013)

Commonly used terms:
- CWA — Central Waiting Area, typically in the lobby of the hospital
- WA — Waiting Area at the outpatient clinic

Table 3.1: List of Abbreviations

As explained in the methodology, there are two options to compare the typical processes in a simulation model. The first is to thoroughly analyze the model description; the second comprises process mining. In the paragraphs below, each model (i.e. ISA1, ISA2, RDG, WW) is evaluated in terms of its model description and the process model obtained with Disco. The descriptions of the processes can be found in Appendix A.


The typical problem for process mining is to obtain the proper data for the mining activities. As already posed in an earlier section, it was not possible to run the WW model; hence the remainder of this section deals with ISA1, ISA2 and RDG.

The models developed by the VG are not tailored for process mining. The models are object-oriented and only evaluate those KPIs that are specified by the customer. The models by the VG were made in Siemens Tecnomatix Plant Simulation: an extensive commercial DES package. Whereas Plant Simulation offers some built-in solutions for constructing a tracefile, these options are limited in terms of usefulness (Bangsow, 2010). The most logical choice among the built-in options is the 'Out' option, which keeps track of entities entering and leaving objects. This 'Out' option was tested on the models and found not very useful for all of them, as it led to incomplete tracefiles and a lack of real activities. For the ISA2 model, the automated tracefile based on 'Out' events was the only appropriate solution, as the methods in this model are relatively poorly defined. In other words, it was not possible to add code in order to create a tracefile with the correct activities.

In order to overcome this problem and construct a useful tracefile for ISA1 and RDG, the following approach was needed. For Disco to mine a process, it needs cases (with an ID), activities and time. As the models by the VG contain over 100 methods, it does not make sense to instrument them all.
All methods in which an 'activity' takes place were given the following line of code:

~.Registratie.RegistreerTrace(patient, self.name);

This single line of code triggers a method that puts the PatientID, Time, Run number and activity in a tracefile. The activity is given the name of the calling method. This means that no distinction is made between different WAs or consultation rooms. The method compiling the tracefile can be found in Figure 3.1.

As shortly touched upon in the introduction, Disco mines the model using an adapted version of the Fuzzy miner. The models mined have been set to show all activities and all possible paths. In Appendix A, one can find the mined process models of ISA1, ISA2 and RDG.

 

Figure  3.1:  Compiling  a  Tracefile  
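The VG models are written in Plant Simulation's SimTalk; a hedged Python sketch of what a trace-registration method such as RegistreerTrace does — appending one row of case ID, timestamp, run number and activity to a file Disco can import — might look as follows. Field names and values are illustrative assumptions.

```python
# Hedged sketch of a trace-registration method: every instrumented method
# appends one row (case id, timestamp, run number, activity) to a CSV file
# that Disco can import. Column names are assumptions.
import csv
import io

def registreer_trace(writer, patient_id, sim_time, run_no, activity):
    # one row per event; the activity is the calling method's name
    # (self.name in SimTalk)
    writer.writerow({"CaseID": patient_id, "Time": sim_time,
                     "Run": run_no, "Activity": activity})

buffer = io.StringIO()  # stands in for the tracefile on disk
writer = csv.DictWriter(buffer, fieldnames=["CaseID", "Time", "Run", "Activity"])
writer.writeheader()
registreer_trace(writer, 163, "08:40:00", 1, "ArrivalCWA")
registreer_trace(writer, 163, "09:00:00", 1, "StartConsult")
print(buffer.getvalue())
```
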


When comparing the different models of the VG, there are some clear visual differences. First of all, the ISA2 model shows a rather different structure than ISA1 and RDG. What can be seen is that there is external examination, and people go for second consults. This means that patients may have more than one appointment on a single day. Another feature present is that patients may make an appointment after the consult before they leave the hospital.

The RDG model shows a rather linear process in which patients follow consecutive steps without much variation, i.e. none. The model is pretty straightforward. A patient enters the hospital, after which he takes a seat in the waiting area (i.e. there is no central waiting area for arriving patients). Then the patient is called to consult, after which the consult starts. It is possible that the patient receives additional examination during the consult; this, however, happens in the same room. The time delay caused by this examination is embedded in code and can therefore not be seen in the model. After consultation, all patients make a new appointment at the desk. Then patients walk through the CWA and leave the hospital.

The ISA1 model is slightly different from ISA2 and RDG: it can be seen that patients may have a second appointment at the clinic. Patients may either await the next appointment at a policluster or go back to the CWA to wait for their next appointment. This corresponds to the description given by the VG. After the consult there are four possibilities: (1) the patient has another appointment at the same clinic within 30 minutes after the consult; the patient waits at the WA of the clinic; (2) the patient has another appointment at another clinic; the patient and entourage wait for a certain amount of time at the WA of the current clinic (the nablijftijd, or 'stay-behind time'), after which they start walking to the other clinic; (3) the patient has another appointment more than 30 minutes after the current consult; the patient stays a certain amount of time at the clinic and then walks to the CWA; and (4) the patient stays shortly at the clinic, walks to the CWA and leaves the hospital. In Figure 3.2 one can find a table showing the behavior of Patient 163. The patient has another consult within 30 minutes (i.e. 20 minutes) after his first consult. Therefore the patient returns to the policluster instead of waiting in the CWA.

 

Figure 3.2: Trace of Patient 163 (Arrival at hospital, Wait in CWA, Depart to Policluster, Arrival Policluster, Start Consult, End Consult, Arrival Policluster, Start Consult, End Consult, Back to CWA, Arrival CWA, Go Home)
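The four post-consult possibilities of ISA1 can be sketched as a routing function. The 30-minute threshold comes from the description; the function name, arguments and return strings are hypothetical, not the model's actual code.

```python
# Hedged sketch of ISA1's post-consult routing rules.
from typing import Optional

def route_after_consult(next_appt_min: Optional[float], same_clinic: bool) -> str:
    if next_appt_min is None:
        return "stay shortly, walk to CWA, leave hospital"      # case 4
    if same_clinic and next_appt_min <= 30:
        return "wait at WA of current clinic"                   # case 1
    if not same_clinic:
        return "wait nablijftijd at WA, walk to other clinic"   # case 2
    return "stay shortly at clinic, walk to CWA and wait"       # case 3

# Patient 163: next consult 20 minutes later at the same clinic -> case 1
print(route_after_consult(20, same_clinic=True))
```
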

It is not possible to mine the WW model as the model does not run. As an alternative, the flowchart from the original research paper (Weerawat, 2013) is presented in Appendix A. The process is slightly more complex than the processes described in the first three models. There are more choices and therefore there is a wider range of possible paths that a patient can follow. First it is determined whether a patient has an appointment or not; if not, the patient is assigned to a


certain time slot. Then it is determined whether the patient needs extra diagnostics such as X-rays or blood testing. Weight and vital signs are always checked. Afterwards, the patient sees a doctor. The doctor determines whether the patient needs extra examination and/or surgery. Depending on the findings of the doctor, the patient goes to the pharmacy, makes an appointment for surgery or simply departs.

3.2 Assumptions

The models are based on certain assumptions. Especially ISA1, ISA2 and RDG have similar assumptions concerning the model. The assumptions can roughly be divided into four categories. The first category comprises assumptions on the patient's behavior. The second category covers information about the patient handling and the actual appointment. The third category is concerned with scheduling, after which a small category is dedicated to 'others'. The main findings are summarized in Table 3.2. Please note that these assumptions are derived from the actual models and the model descriptions/research paper. The best effort has been made to identify the most important assumptions; it cannot, however, be claimed that this list is exhaustive.

As can be seen from the table, there are quite a few similarities between ISA1, ISA2 and RDG. The far right-hand side of the table is rather empty. This is due to the fact that, although the actual model by Weerawat (2013) was available, the model could not be run. The assumptions that are explicitly mentioned in the research papers or assignment description typically consist of practical assumptions such as time of arrival before the appointment, walking speeds, duration of the consult and the size of the patient's entourage. Another important assumption comprises the probability that a patient needs 'extra examination'; these probabilities are determined by empirical analysis and fitted to distributions.
The assumption that goes along with these aspects is the use of stochastic distributions, which ensure that there is randomness and variability in the model for the moving entities.

The reason why the explicit assumptions may be questioned is the fact that they do not encompass all assumptions present in the model. In other words, there are quite some implicit assumptions throughout the various models. The models assume that people follow the logic and rules of the hospital. This means that there are no patients who do not adhere to the rules, such as walking to the wrong outpatient clinic or the wrong waiting area. Adding to this specific implicit assumption, there are various simplifications, such as the notion that neither patients nor their entourage use toilets and/or other facilities. For the KPIs that the models measure, this will not be of significant amplitude to make a real difference in the outcomes; however, these implicit assumptions should be made explicit in order to make sure none is overlooked. Furthermore, it is implicitly assumed that a patient's entourage stays with the patient at all times (except during the consult). This is taken to the extreme, such that the entourage is basically glued to the patient. This assumption, or simplification, will impact e.g. WA utilization. Example: a patient brings two people as entourage, and this entourage will not wait together with the patient in the WA but rather in another part of the hospital. If the capacity of the WA is 15 seats, two persons more or less influence the capacity utilization by 13%, which is a significant difference based on the relative numbers presented above.
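The worked example above as a one-line computation: an entourage of two occupying (or not occupying) seats in a 15-seat WA shifts the measured capacity utilization by roughly 13%.

```python
# Worked version of the entourage example: 2 seats out of a 15-seat WA.
entourage, wa_capacity = 2, 15
impact = entourage / wa_capacity
print(f"{impact:.0%}")  # -> 13%
```
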


Patient Behavior
- ISA2: patient goes directly to the WA. ISA1: patient goes to the CWA first. WW: patients arrive hours before consultation hours.
- ISA2: walking speeds between 0.7 and 1.2 times the average set speed. ISA1: walking speeds between 0.8 and 1.7 m/s (uniform).
- ISA2: patient entourage is set to an empirical distribution. ISA1: patient entourage is set to a triangular distribution (0, 3, 1.75).
- ISA2: patient arrives 10 minutes before the appointment. ISA1: patient arrives early, on average 30 minutes (sigma = 5).
- Patient forms one entity with the entourage (they do not get detached).

Patient Handling and Appointment
- ISA2: the duration of the consult is the planned time plus or minus 3 minutes (binomial). ISA1: the consult takes exactly the planned time.
- ISA2: the doctor prepares for on average 20 seconds. ISA1: the patient goes to the doctor (the doctor awaits the patient).
- ISA2: the probability that additional examination is needed is 50%. RDG: a probability that additional examination is needed.
- ISA2: the probability of a follow-up appointment after the consult is 75%. RDG: a probability that the patient has to make an appointment after examination.
- ISA2 and RDG: staff at the desk is 2.

Scheduling
- No double appointments in the schedule; no scheduled appointments.
- No no-shows; no empty slots in the planning.

Other
- Patients adhere to the rules; no capacity constraints.

Table 3.2: Assumptions Categorized per Model

3.3 Static and Parameterized Components

The various models exhibit differing degrees of static and parameterized components. For ISA1, one could argue that the model is partially static and partially parameterized. The parameterization stems from the notion that settings such as walking speeds and arrival times can be changed. A great feature of this model is the possibility to alter the walking distances from the CWA to the WA by means of changing a table.


For the ISA2 model, one could argue that the model is more 'static' than the ISA1 model. The reason for this staticness is the fact that the model is fitted to an actual map of the hospital. This makes it more complex to add e.g. WAs, consultation rooms, etc. It also prevents one from easily changing the walking distances between certain locations. One of the main reasons for this phenomenon could be the fact that the choices in the development of this model were partially animation-driven. Similarly to ISA1, it is possible to alter minor parameters such as walking speeds. The RDG model is also fitted to a map of the hospital, which makes it rather static. Only the minor settings, such as explained above, can be easily changed.

A rather different approach can be found in the WW model. As this model is not object-oriented but flow-oriented (see Section 3.1), another approach applies. Given the fact that it is flow-oriented, it is easier to skip certain processes/locations and thereby create other processes (configurations) within the model. In that regard the model scores higher in terms of parameterized components. On the other hand, it is more difficult to assess the clinic's performance when one needs to add another consultation room. In object-oriented models, one can add an actual consultation room. In flow-based modeling, this is not possible. Within the flow-based approach one could only adapt the resources for a certain process or change time delays.

In the table below one can find an overview of the components, entities, settings and logic present in the VG models. As the flow-oriented model developed by Weerawat (2013) does not use objects, these are not presented in the table below.

Components: CWA (ISA1, RDG); WA (ISA2, RDG); Consult Room (ISA2, RDG); Policluster (ISA1); Examination Room (ISA2, RDG); Walking Paths (VG*); Desks (ISA2, RDG).
Entities: Patient (VG); Entourage (VG); Front Desk Employee (ISA2, RDG); Doctor (ISA2, RDG); Patient Mover (VG); TransportCluster (VG).
Settings and logic: Patient Generator (VG); Settings (VG); CW System (VG); Registration (VG); Agenda (VG).

*VG means that all models by the Vreelandgroep contain this part.

Table  3.3:  Components,  Entities  and  Logic  in  the  VG  models  

3.4 Key Performance Indicators

The KPIs measured in the four different models are rather similar. Whereas the models developed by the VG focus on the absolute number of entities in a place at a moment in time, the model by Weerawat (2013) emphasizes the utilization rate of various components in the model. These two are practically the same, except that the utilization rate is the absolute number divided by the actual capacity. Furthermore, the models log the waiting times of the patients. In other words, for every patient a record is kept from which it can be derived whether the appointment took place on time (or within a certain interval) and how long the patient stayed in the clinic. Other KPIs used are the walking distances of the medical staff and the crowdedness of the walking paths used by the patients. A schematic overview of the KPIs is presented in Table 3.4.


- Number of patients in WA: VG
- Number of patients in CWA: VG
- Number of mismatches: ISA1, RDG
- Number of patients waiting at the desk: ISA2, RDG
- Crowdedness of walking paths: ISA2
- Walking distances of staff: RDG
- Capacity utilization of consulting and examination rooms: RDG
- Total patient time in hospital: VG, WW
- Working time of doctors (deterministic at the VG): WW
- Capacity utilization of medical staff: WW, VG
- Patient service levels (do not apply to the Netherlands): WW

Table  3.4:  Overview  Key  Performance  Indicators  
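The relation between the VG's absolute counts and WW's utilization rates is a single division by capacity; a trivial sketch with illustrative numbers:

```python
# Sketch: converting the VG-style absolute count into a WW-style utilization
# rate by dividing by the capacity of the place. Numbers are illustrative.

def utilization(absolute_count: int, capacity: int) -> float:
    return absolute_count / capacity

# 12 patients in a waiting area with 15 seats
print(utilization(12, 15))  # -> 0.8
```
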

3.5 Data

The data fed to the models differs. Some models only require a list with appointment data. The model constructs a workable agenda, which links the patient to a doctor for a specified time slot at the predefined outpatient clinic. The rest of the data is provided by variables in the model, which can be manually changed by entering different values. In some other models, one also needs to specify the capacities of certain objects. The difference is embedded in the structure of the working objects. For instance, all the models make use of a patient generator; however, the data that is fed to these generators is rather different. In the ISA2 model the appointment data is loaded from a consultation-hour overview, whereas the RDG uses an appointment list (one or more lines/appointments per patient).
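A hedged sketch of how an appointment list (one line per appointment, as in the RDG input) could be turned into a workable agenda keyed by doctor and time slot. The field names and values are assumptions, not the actual model's data format.

```python
# Hedged sketch: build an agenda (doctor -> time slot -> patient) from a
# flat appointment list. Field names are illustrative.
from collections import defaultdict

appointments = [
    {"patient": 163, "doctor": "D1", "slot": "09:00", "clinic": "MDL"},
    {"patient": 164, "doctor": "D1", "slot": "09:15", "clinic": "MDL"},
]

agenda = defaultdict(dict)
for appt in appointments:
    agenda[appt["doctor"]][appt["slot"]] = appt["patient"]

print(dict(agenda))  # -> {'D1': {'09:00': 163, '09:15': 164}}
```
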

3.6 Model Metrics

As the aim of this research is to construct a parameterized reference model for outpatient clinics that is easy to use and maintain, the ultimate metric is time-investment. For all models developed by the VG, the relative time-investment is depicted in Table 3.5. Due to commercial considerations, the development time has been indexed.

- ISA1: development time 1.5x; 10 patient activities; 13 case variants; 635 call cycles; 17 policlusters.
- ISA2: development time x; 8 patient activities; 12 case variants; 414 call cycles (≈800 when extrapolated to a full day); 12 consult rooms; 7 examination rooms; 1 waiting area; 1 desk.
- RDG: development time 1.3x; 11 patient activities; 3 case variants; 1193 call cycles; 29 consult rooms; 0 examination rooms; 2 waiting areas; 3 desks.

Table  3.5:  Overview  Model  Metrics  

Typically one would use model metrics such as the MOOD metrics, which incorporate encapsulation, inheritance, coupling and polymorphism (Harrison, Counsell, & Nithi, 1998). Encapsulation comprises the MHF (method hiding factor) and the AHF (attribute hiding factor). Inheritance is divided into the method inheritance factor (MIF) and the attribute inheritance factor (AIF). Coupling consists of the coupling between classes that cannot be ascribed to inheritance characteristics. Lastly,


polymorphism defines the number of methods that redefine inherited methods, divided by the maximum number of possible distinct polymorphic situations. Because these measures must be evaluated using the simulation software packages, one is dependent on the possibilities and standards within these packages. As Tecnomatix Plant Simulation does not allow for the static metrics described above, the number of call cycles was used. A call cycle consists of a method being called, together with the subsequent methods called by that same method. As can be seen from Table 3.5, the RDG model has the highest number of call cycles. The number of call cycles for ISA2 is not accurate, as the model only runs for half a day; assuming a uniform-like distribution of appointments over the day, the number must be multiplied by two. In addition, another path was taken, comprising dynamic analysis using data mining and process mining techniques. Data mining and process mining techniques allow the researcher to extensively evaluate the trace of the simulation model (van der Aalst, 2012).

One of the indicators, or model metrics, that can be derived using dynamic analysis is the number of activities. It was chosen to analyze the model using process-mining techniques. Using the traces of the model, the number of activities triggered by the entity "patient" was determined. The procedure by which these traces were made was described earlier in this section. Furthermore, Table 3.5 indicates the absolute size of the model. In the link listed below one can find a description of how to process-mine a trace using Disco.
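The dynamic-analysis metric 'number of activities' amounts to counting the distinct activity names occurring in the trace; a small sketch with illustrative trace rows:

```python
# Sketch of the dynamic-analysis metric: count the distinct activities
# triggered by the entity "patient" in a tracefile. Rows are illustrative.
from collections import Counter

trace = [
    (163, "08:40", "ArrivalCWA"), (163, "09:00", "StartConsult"),
    (163, "09:15", "EndConsult"), (164, "08:50", "ArrivalCWA"),
]

activity_counts = Counter(activity for _, _, activity in trace)
print(len(activity_counts), activity_counts["ArrivalCWA"])  # 3 distinct, 2 arrivals
```
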
In the previous paragraphs one has been made familiar with the differences and similarities between the various models. This is the basis for developing the configurable model and the simulation tool. In the next section one will be familiarized with the method for configurable models as well as the final results.

Link Process Mining with Disco: http://vimeo.com/97652216


4 Reference Model and Tool

As explained in the methodology section, the development of the configurable model is driven by the process models obtained earlier (process mining). First, all models are translated to the EPC language based on their mined process models and descriptions, and merged into a configurable model. This means that the overlapping parts of the models analyzed above must be inherently present, complemented by the extensions that capture the variability between the models. The second phase comprises the development of a DES tool that allows one to make actual simulation models that can be set to all possible configurations.

4.1 Merging the existing process models

As can be seen in Appendix A, the mined process models differ in terms of abstraction and terminology. To be fair, for ISA1 and ISA2 it is rather difficult to derive a coherent and sound model description based on the mined models. Hence, it may be concluded that the mined models alone do not, in this context, provide sufficient information to construct a configurable model.

In the existing literature on the development of configurable models, the configured models (or available models, in this context) always possess activities with the same names (Buijs, Van Dongen, & Van Der Aalst, 2013; Gottschalk, 2009; Van der Aalst et al., 2006). Even more striking is the fact that configurable models are often discussed in the context of activities ∈ {A, B, C, …}. Having a configurable model with a compatible language, or even single characters for activities, allows one to use merging algorithms. These merging algorithms can automatically transform a pool of (compatible) process models or traces into a configurable model. One such algorithm is the ETM (Evolutionary Tree Miner) algorithm (Buijs et al., 2013). This tool allows one to make configurable models and to optimize those models by means of fitness indicators. However, the ETM can only be used with compatible traces at the same level of abstraction.

Because the process models mined earlier lack compatibility and differ in level of abstraction, it has been chosen to manually apply the merging algorithm of La Rosa et al. (2012), where its formal definition can be found.
In order to be able to apply the merging algorithm, a process model for ISA1, ISA2 and RDG is derived using the EPC (Event-driven Process Chain) language. The basis for the EPCs is the mined model, the description of the model and the animations. Animation basically means that one follows entities through the model and uses those 'traces' as the basis for the process description (Sargent, 2000). The reason why these three components have to be combined is the fact that the traces (and thus the mined models) are not perfect and lack states of entities. In other words, most activities are present but the 'states' of the entities are missing.

Because we only merge three models, and the models are relatively simple, it is feasible to merge the models by hand. On the next page one can find an overview of the models and the final, configurable model.
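A much-simplified sketch of the merging idea, not the full La Rosa et al. (2012) algorithm: take the union of the individual models' edges, and mark nodes with more than one outgoing edge as candidates for an XOR connector. Activity names are illustrative.

```python
# Hedged sketch of the manual merging step: the merged (configurable) model
# is the union of the individual models' edges; a node whose successors come
# from different source models becomes an XOR-split. A simplification of the
# La Rosa et al. (2012) algorithm, for illustration only.

def merge(models: dict) -> dict:
    merged = {}
    for name, edges in models.items():
        for src, dst in edges:
            merged.setdefault(src, set()).add(dst)
    return merged

rdg  = [("Consult ends", "Make appointment"), ("Make appointment", "Leave")]
isa1 = [("Consult ends", "Leave"), ("Consult ends", "Wait in CWA")]
merged = merge({"RDG": rdg, "ISA1": isa1})
xor_splits = {n for n, succ in merged.items() if len(succ) > 1}
print(xor_splits)  # nodes that require an XOR connector
```
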


 

Figure  4.1:  From  EPC  Instances  to  C-­‐EPC  


As can be seen from the configurable model on the far right, only XOR connectors are used. There is a clear difference between XOR and OR connectors. The XOR-split implies that, after the connector, a choice has to be made about which single path to continue on (La Rosa et al., 2012). The OR-split, on the other hand, allows one to continue on multiple paths. What cannot be derived from the configurable process model is the fact that the patient may either be picked up by the doctor, or walk to the consultation room by him- or herself. It was found that the distinction between these two organizational policies can heavily influence the efficiency of the outpatient clinic (Hulshof et al., 2012). The doctor-to-patient (DtP) method is superior to the patient-to-doctor (PtD) method under the condition that the doctor's travel time is less than the patient's preparation time.

From the configurable model we can derive that the backbone model consists of the simple model depicted below in Figure 4.2. The patient enters the outpatient clinic, waits in the waiting area, is called into the consultation room and finally leaves the hospital. As can be seen from the figure, all parts where XOR-splits were present in the configurable model have been removed. The backbone model shows high similarity with Swisher et al. (2001), where seven activities were identified: registration, check-in, pre-examination, examination, post-examination, exit interview and check-out.

The C-EPC as depicted in Figure 4.1 also allows for configurations that are not present in the three available models. These configurations must be tested on their practical validity, preferably in empirical research. This is a general point of attention: if the configurable model is extended with more processes, one should always verify and check all the possible configurations (Gottschalk et al., 2007).
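The behavioral difference between the two connector types can be sketched in a few lines of illustrative Python (the simulation tool itself is built in Tecnomatix; the function names and paths below are hypothetical):

```python
import random

def xor_split(paths, weights):
    """XOR-split: exactly one outgoing path is chosen."""
    return random.choices(paths, weights=weights, k=1)

def or_split(paths, probabilities):
    """OR-split: each outgoing path is independently enabled,
    so zero, one, or several paths may be taken."""
    return [p for p, prob in zip(paths, probabilities) if random.random() < prob]

taken = xor_split(["examination", "desk", "exit"], [0.2, 0.3, 0.5])
assert len(taken) == 1  # an XOR-split always yields a single path
```

Because only XOR connectors appear in the configurable model, every patient trace is a single path through the states; OR-splits would allow concurrent branches.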

 

Figure  4.2:  Backbone  Model  Process  Outpatient  Clinic  

4.2 Merging Assumptions, Settings and Other Factors

Now that a configurable process model is established, assumptions, settings and other factors must be merged as well. First of all, it follows from the previous chapter that the following components should be adaptable:

• Walking speeds of entities (stochastic)
• Number of patient's entourage (stochastic)
• Time of arrival before appointment (stochastic or deterministic)
• Duration of the consult (deterministic or stochastic)

[Figure 4.2 diagram: Patient enters hospital → Patient walks to WA → Patient is idle for consult → Patient is in consultation room → Patient is in consult → Consult is done → Patient ready to leave → Patient leaves.]


• Deviation from walk-advice system
• Duration of stay after consult
• Reading time for consult (doctor)

The following implicit assumptions are also embedded in the model. First, the patient adheres to the rules set by the system. This means that the patient's behavior falls within the scope, or boundaries, set by the model. This is, of course, a major simplification, as the patient may only do what he or she is supposed to do. It is, for instance, not possible for a patient to go to the desk before the appointment, e.g. for information. Second, the agenda is followed as given; no alterations and/or double appointments are planned throughout the simulation.

A set of KPIs was derived from both the literature review and the analysis of the available models. The full list can be found below in Table 4.1.

Inc | Key Performance Indicator | Present in Model | Present in Research
+ | Number of patients in waiting areas | VG | Côté, 1999; Weerawat, 2013; Zhu et al., 2012
+ | Capacity utilization of consult / examination rooms | VG | Côté, 1999
+ | Number of patients in queue at desk | ISA2, RDG | Côté, 1999; McCarthy et al., 2000
+ | Total patient time in hospital / patient flow time | | McCarthy et al., 2000; Weerawat, 2013
+ | Total waiting time | | Martin et al., 2003; McCarthy et al., 2000; Weerawat, 2013
+ | Working time doctors | | Hulshof et al., 2012; Weerawat, 2013
+ | Capacity utilization medical staff | VG | Weerawat, 2013
+ | Patient throughput (at VG deterministic) | | Klazinga et al., 2001; Martin et al., 2003
+ | Traffic intensity walking paths | ISA2 |
+ | Number of mismatches | ISA1, RDG |
  | Waiting time till first appointment | | Martin et al., 2003
  | Walking distances staff | RDG |
  | Patient service levels | | Hulshof et al., 2012; Zhu et al., 2012

Table 4.1: KPIs from analysis and literature review

In Table 4.1, all KPIs that were identified from the literature and the available models are summarized. It has been decided to incorporate the KPIs marked with a "+" in the first column. Referring back to the literature research, we find that KPIs from all stakeholder groups are included (Martin et al., 2003). Unfortunately, not all KPIs could be included, due to the scope of this thesis and the qualitative nature of some KPIs. The last three KPIs are not included in the model for the following reasons. Waiting time till first appointment cannot be measured in the original models: the VG models assume a certain appointment agenda, which is then adhered to, whereas in real life extra appointments are scheduled in between and patients make appointments for future visits; moreover, the runtime of the models typically varies between 1 day and 1 week. The walking distances of the medical staff are not included either, because it is rather time-consuming to extract this measure from the RDG model and, based on the position of the consultation room and waiting area, a fair estimate of the walking distances can be made. Patient service levels are generally determined by the aggregation of certain other KPIs; as this measure is only a derivative of the other KPIs, it will not be considered in further analysis.

4.3 From Configurable Model to Plant Tecnomatix Simulation Model

From the configurable model described in Section 4.1, all possible configurations can be derived. The possible paths, or traces, are illustrated and facilitated by XOR-splits. Now that the configurable process model is established, a tool must be developed that can facilitate, and thus simulate, these configured process models.

In order to allow for object-oriented DES, a tool was developed in Siemens Plant Tecnomatix. Siemens Plant Tecnomatix is the same software as used for most of the available models, and hence was a logical choice as it allows for reuse of code. The tool built in Siemens Tecnomatix enables one to simulate all possible processes corresponding to all configurations of the configurable model. The tool not only allows simulating different configured processes; it also allows one to implement other configurations such as PtD or DtP, and all other parameters as specified in Section 4.2.

The tool starts from a basic set of components and entities. Using a model constructor, a model can be built which represents a certain outpatient clinic. It does not matter how big or small that outpatient clinic is, as long as the processes in the outpatient clinic match a certain configuration of the configurable model. The tool can be divided into three parts. The first part is concerned with the construction of the model, or the initialization.
The reason why this construction must be present in the tool is the fact that we use an object-oriented approach; in other words, the number and types of objects are dependent on the 'to be simulated' outpatient clinic. In traditional flow-oriented simulation models, the sequence of activities is guided by the layout of the flowchart. In object-oriented simulation, the flow of the entities (or sequence of activities) is guided by the methods in the objects (Sobolev et al., 2012).

The second part is concerned with the actual simulation of the model. A similar approach is used as with configurable process models, for which the tool Quaestio served as inspiration for a simulation dialog. This dialog serves the same purpose as Quaestio ("Process Configuration - Tools," n.d.). Instead of providing the designer with a questionnaire, the designer is presented with a dialog in which some questions have to be answered. The answers to these questions ensure that the correct process model is used, or in other words: that the entities follow the correct paths. The settings that are entered into the dialog are used by various methods in the different objects, typically in if…then…else statements.


The third part is concerned with evaluating the KPIs in the model: which data can be deduced from the model, and how it can be translated into meaningful KPIs, statistics and graphs. The remainder of this section is dedicated to the three parts described above.

Model Construction

Because an object-oriented approach is used, each outpatient clinic has a different set of objects. In order for the model to run with patient-appointment data, these objects must have the correct names. Hence, before a model can be run with the correct settings, a model must be constructed. Because a reference model should be as plug-and-play as possible, as little input data as necessary is used. The tool starts from a seemingly 'empty' frame. However, the objects that will form the basis of the model are stored in a library, and basic building blocks such as a Patient Generator and a Settings Dialog are already present. The objects hold a number of methods that enable the correct working of the model. A list of the objects and their respective workings can be found in Appendix C. The library also stores different Moving Units (MUs): patients, entourage and doctors. These MUs have many attributes, which allows one to monitor the individual walking speeds, time of appointment, etc. These attributes are used by methods in the objects to e.g. give directions to a new location. The attributes of the patients, entourage and doctors can also be found in Appendix C.

In order to construct the model, only one file is needed as input. From this input file, the model can be made and tailored to the specifics of the outpatient clinic. The main file needed for the construction is the ModelComponents file. In this table, the user must indicate which objects are present in the outpatient clinic. A small example is depicted in Figure 4.3. As can be seen, one must specify the type of object (e.g. waiting area), its respective capacity, place (XPos, YPos), name, IconAngle and CW advice. The CW advice indicates how long before the appointment the patient should start walking from the CW to the WA. The script for placing the objects, with the right attributes in the correct place in the model, can be found in Appendix C. Of course, if one desires, the architectural drawing of the outpatient clinic can be set as background. As can be seen from the script, the creation of an object also entails the creation of a short walking path with a recognizable name: namely that of the object it is linked to. One of the most time-consuming and error-sensitive activities of model building is laying and connecting the walking paths. By adding a small piece of walking path to each object, one is sure that the object is well connected.

 

Figure  4.3:  Table  of  Model  Components  Input  File  
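Reading the ModelComponents file can be sketched as follows (illustrative Python; the actual tool uses a Tecnomatix script, see Appendix C, and the column names here are assumptions based on the description above):

```python
import csv
from dataclasses import dataclass

@dataclass
class ClinicObject:
    obj_type: str     # e.g. "WaitingArea", "ConsultRoom", "Desk"
    capacity: int
    x_pos: float      # placement coordinates in the frame
    y_pos: float
    name: str
    icon_angle: float
    cw_advice: int    # minutes before the appointment to leave the CW

def build_model(path):
    """Read the ModelComponents table and instantiate one object per row,
    each with a short walking-path stub named after the object itself,
    which guarantees the object can be connected later."""
    objects, path_stubs = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            obj = ClinicObject(row["Object"], int(row["Capacity"]),
                               float(row["XPos"]), float(row["YPos"]),
                               row["Name"], float(row["IconAngle"]),
                               int(row["CWAdvice"]))
            objects.append(obj)
            path_stubs.append("Path_" + obj.name)
    return objects, path_stubs
```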

After all components are placed in the model, a script makes sure all 'small pieces' of walking paths are placed in a matrix (see Figure 4.4). In this matrix, one can indicate with a binary value which walking paths should be connected to each other. Based on this matrix, connections are laid automatically (for the script, see Appendix C). Afterwards, the only task that remains is setting the right distances for the paths in the model where necessary.


 

 

Figure  4.4:  Matrix  Walking  Paths    
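The automatic connection step can be sketched as follows (illustrative Python; the real script is the Tecnomatix one in Appendix C, and the path names are hypothetical):

```python
def connect_paths(names, matrix):
    """'matrix' is a square 0/1 matrix over the path stubs: a 1 in
    cell (i, j) means path i must be connected to path j, so one
    connector is laid automatically for each 1 in the matrix."""
    connections = []
    for i, row in enumerate(matrix):
        for j, flag in enumerate(row):
            if flag:
                connections.append((names[i], names[j]))
    return connections

links = connect_paths(["Path_WA1", "Path_CR1", "Path_Desk1"],
                      [[0, 1, 0],
                       [0, 0, 1],
                       [1, 0, 0]])
# each tuple represents one automatically laid connector between path stubs
```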

The major benefit of using this tool to simulate the configurable process model is that one can update the Object Libraries and therefore always work with the latest (or best) objects and entities. For instance, if another feature is needed and the model is tweaked, the tweaked entity can be loaded into the tool and be used in the future. It is also possible to replace objects in a configured model, which allows for even greater applicability to real-life cases. An example of such applicability is that a consultant may have to change certain objects during a project; this can then be done without rebuilding the entire model.

Running the Model

When the model is constructed, the model can be run. In order for the model to function, only one more data file is needed, namely the collection of patient appointments. The objects specified in the appointments should equal the names of the objects defined in the construction part; otherwise the user is prompted or the simulation does not work properly. The appointment data may be either historic or dummy, as long as the names of the objects match the objects in the model. The typical string compiling a patient's appointment(s) is: {Patient_ID, Doctor, ConsultRoom, Specialism, Waiting Area, Desk, Examination Room, DateTimeAppoinment, AppointmentDuration, DayNumber}; a short explanation of these attributes can be found in Appendix C. If one wants to compute a dummy appointment set, the information as depicted in Figure 4.5 is needed. An external file can create the dataset according to the attributes described above. The generation of this file, however, falls beyond the scope of this research and is developed by the VG.

Once the correct appointment data is in place, the model can be initialized; a script makes sure that all objects match the objects in the appointment data, and if not, the user is prompted. By adapting various parameters, organizational settings can be changed, which leads to alterations in the flow model of the patient or doctor. A dialog was made in which most of the settings can be altered; this dialog can be found in Appendix C. The dialog is inspired by the tool recommended by processconfiguration.com, a global research platform for configurable models (La Rosa et al., 2006). By answering questions, dependencies and paths in the model can be formed. In the case of object-oriented models, this is not as straightforward as with flow-oriented models. Instead of altering the flow architecture, methods in objects read settings from the main settings page and thereby evaluate e.g. the next destination of the patients. Logically, these methods comprise if…then…else statements in which the settings comprise the logical inputs. As can be seen from the dialog, there are four major organizational settings that may be changed. First, PtD or DtP can be selected; changing this parameter does not directly result in different patient behavior, but definitely alters the doctors' behavior. Second,


one can indicate whether a Central Waiting System is operated. Third, one may indicate the probability of additional examination in an external examination room, and fourth, one can set the probability that patients must make an appointment at the desk.

Let us take the example of the desk, to understand how a setting translates into the model. The patient has an attribute, ProbabilityDesk; this is a Boolean variable and thus can only be set to true or false. When patients are created in the patient generator, the attribute is set according to the probability entered in the dialog. Using a uniform distribution on [0,100], a random value is generated for each patient upon a method call; when the random number is lower than the specified probability, the patient's attribute is set to TRUE, and vice versa. When the patient enters a consultation room and the consult starts, the attribute is written to a variable within the consultation-room frame. When the patient leaves for its next destination, the direction is determined based on (amongst others) this very variable. Other (non-organizational) settings can also be changed in the dialog; these are subdivided into two classes: 1) simulation settings and 2) entity settings. Simulation settings comprise settings concerning how often the model should run and whether it should use animations or not. The entity settings comprise, amongst others: walking speeds, deviation from advice, how long the patient stays after his/her appointment, preparation time for the doctors, and the number of entourage. In Appendix B one can find the default settings.
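The ProbabilityDesk mechanism described above can be sketched as follows (illustrative Python; the function names are hypothetical, as the real implementation consists of methods on the Tecnomatix objects):

```python
import random

def draw_probability_desk(probability_desk_setting):
    """Mimics the described mechanism: a uniform draw on [0, 100];
    below the configured probability, the Boolean attribute becomes True."""
    return random.uniform(0, 100) < probability_desk_setting

def next_destination(patient_needs_desk, cwa_enabled):
    # Hypothetical routing method: the real objects evaluate the dialog
    # settings in if...then...else statements to pick the next destination.
    if patient_needs_desk:
        return "Desk"
    elif cwa_enabled:
        return "CentralWaitingArea"
    else:
        return "Exit"
```

With a setting of 0 the attribute is always False, with 100 it is (practically) always True; intermediate values yield the configured share of patients visiting the desk.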

Model Output

In terms of model output there is a variety of options, based on the specific KPI that one wants to measure. There are basically four ways in which KPIs can be measured. First, one can use the in-model charts. These are charts that capture e.g. the number of patients in a store, which can be a waiting area or a buffer for a specific place or task. These charts are typically not very readable, so they may serve as fast checks only. Second, one can use Sankey diagrams, which show the major streams in the model. Given the fact that this is also not a very exact practice, two more precise and accurate methods can be used. In the third approach, the model creates its own data outputs. The data that is automatically generated by the model consists of: a patient log, a measure of crowdedness for waiting areas/queues and a print of the used settings. The activities of doctors can also be monitored in order to evaluate e.g. idle time. The .txt files compiled by the model can be analyzed in any statistical program (e.g. RStudio, Excel, SPSS), depending on the preference of the user. In the fourth approach, one uses the tracefile; the objects and methods in the tool have been instrumented with lines of code that make sure a correct tracefile is compiled. This tracefile can be loaded into Disco without the need for any adjustments. In Disco, one can analyze the flowchart of the simulation model, as well as throughput times, activity durations and frequencies. Disco also allows one to apply simple filters to make the output more visually attractive, or to exclude cases from the analyses.
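The fourth approach amounts to writing an event log. A minimal sketch of such a tracefile writer (illustrative Python; the column names are assumptions, since Disco lets the user map columns to case ID, activity and timestamp at import time):

```python
import csv
from datetime import datetime, timedelta

def write_tracefile(events, path):
    """Write (case id, activity, timestamp) triples as a plain CSV
    that a process-mining tool such as Disco can import."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["CaseID", "Activity", "Timestamp"])
        for patient_id, activity, ts in events:
            writer.writerow([patient_id, activity, ts.isoformat(sep=" ")])

# One hypothetical patient trace through the backbone model.
start = datetime(2014, 1, 6, 8, 30)
events = [("P001", "Patient enters hospital", start),
          ("P001", "Consult start", start + timedelta(minutes=18)),
          ("P001", "Patient leaves hospital", start + timedelta(minutes=40))]
```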
In the next chapter, the reader is familiarized with the tool through a small-scale model. Afterwards, one of the available models (the ISA2 model) is rebuilt in order to test the accuracy and required building time of the configurable tool. The ISA2 model is especially interesting as it was modeled differently from the RDG and ISA1 models. Contrary to RDG and ISA1, this model does not use a patient agenda, there is no CWA, objects are structured differently and the objects have almost no overlapping methods with the objects in the tool.


5 Model Testing

In this chapter, the model will be tested. Before rebuilding one of the 'sample' models, a small-scale example is briefly evaluated to make sure that the model actually works. When rebuilding one of the pool models, all the major steps in using the tool are described. The model output will be compared to the output of the existing model.

5.1 Small-Scale Example

For the sake of illustration and a working example, a small-scale example of a hypothetical outpatient clinic was constructed. The outpatient clinic has been given the following characteristics: 1 central waiting area, 2 waiting areas, 8 consultation rooms, 4 examination rooms and 2 desks. The Small-Scale Model assumes that there are three specializations, namely Orthopaedics, Anaesthesia and Surgery. In Figure 5.1, a representation of the small-scale outpatient clinic is depicted. It can be seen that at the entrance of the outpatient clinic, there are two waiting areas (red-lined squares) and two desks (blue bar with two desk employees). The central waiting area is situated just to the right-hand side of the part depicted in Figure 5.1. The grey squares with desks represent the various consultation rooms, in which the mint-colored person depicts the doctor. The red squares with a grey rectangle are the examination rooms.

After the model has run for a full day, a dataset is compiled comprising (1) a patient log, (2) mismatches, (3) crowdedness of walking paths, (4) crowdedness of waiting areas, (5) the settings that have been given to the model and (6) a tracefile. For (5), this practically means that a list is produced in which one can find the entered values for the parameters.

 

Figure  5.1:  Layout  of  the  Outpatient  Clinic  


5.2 Small-Scale Model Validation and Verification

To test the model, various experiments have been conducted. The most important results are presented below. The approach to the experiments is to run the small-scale model for three different configurations. The first experiment is a deterministic run, to make sure the model performs logically. This experiment can be compared to the fixed-values input as described by Sargent (2000). The second and third experiments use two different configurations of the tool, with the same patient agenda. The KPIs are then discussed in terms of distributions, confidence intervals and average values (Banks, 2010). The runtime is 1 day; for each experiment, ten runs were made. No warm-up or cool-down period was needed, as the model uses a patient agenda in which a real-life schedule is used. In 10 runs, 2760 patients, plus their entourage, go through the model. In Appendix D one can also find the results of a 5-day simulation.

Experiment 1: Simple Deterministic

For this experiment, the exact settings can be found in Appendix D. The settings correspond to the settings dialog as explained earlier. The model was run deterministically, without external examination and with a 100% probability of making an appointment at the desk. As can be seen from the flowchart in the Appendix, the flow model for this simulation is linear: there is only 1 variant, and all cases are the same. The average throughput time in all 5 runs is 39.5 minutes (median 39.7). The average waiting time is 14.9 minutes at the central waiting area, and 4.6 minutes at the local waiting area. The reason why the total average throughput time is above the average consult duration (13.6) plus arrival time before the appointment (15.00) is that people have to bridge walking distances and a triangular distribution is fitted to the processing time at the desk (triangular 2,1,8), which leads to an average 'desk' time of 4.2 minutes. Given these numbers, and adding the walking distances, the result is not surprising.

Because the stochastic distributions are disabled, the crowdedness of all areas is exactly the same for each run. We find the KPIs as depicted in Table 5.1. To calculate the capacity utilization of the consultation rooms, we can apply a filter in Disco using the logfile comprising the doctor or consultation-room activities (because a doctor is linked to a room, these are the same). By setting the right time frame, we find that the doctor spends 20% of the day 'waiting', either for patients to arrive or during the lunch break. The capacity utilization of the Examination Room cannot be calculated, as external examination was disabled in this experiment.

KPI | Value
Average number of persons in WA1 | 4.61
Average number of persons in WA2 | 2
Average number of persons in CWA | 15.74
Number of mismatches | 7
Average number of persons waiting at desk1 | 1.23
Average capacity utilization (consultation room/doctor) | 80%

Table  5.1:  KPIs  deterministic  model  

Experiment 2: Configuration Without CW, With PtD

In this experiment, the model is run 10 times, but not deterministically; this time the stochastic distributions are used. The settings of the simulation can be found in Appendix D. The throughput times differ significantly from Experiment 1. The mean case duration cannot easily be derived from Disco: because in each run different patients receive external examination, the data must be analyzed per run and then averaged. A mean case duration of 40.44 minutes was found (median 39.9); see Table 5.2. The throughput times are approximately normally distributed, as can be seen in Appendix D. Using the formula by Banks (2010, p. 444), we can calculate the confidence interval.

Formula confidence interval: Ȳ ± t_{α/2, n−1} · S · √(1 + 1/n)

A 95% confidence interval with 9 degrees of freedom (n = 10), an average of 40.44 and a standard deviation of 0.66 leads to a prediction-interval term of 1.57. Hence the 95% confidence interval is [38.87; 42.01] minutes, meaning that we can be 95% sure that the actual average throughput time falls in this interval.

The average waiting time at the waiting area is now 21.4 minutes. This is significantly higher than the 4.6 minutes in Experiment 1, and is the logical consequence of having no central waiting area: all patients are directly forwarded to the waiting area. The sample standard deviation is 0.95 minutes. In total there are 10 replications of the full-day cycle. Therefore the confidence interval becomes 21.4 ± 1.399. Hence we can be 95% sure that the long-term average of the waiting time falls between 20.00 and 22.79 minutes.
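The throughput-time interval can be reproduced from the per-run averages in Table 5.2 (a sketch in Python; the t-quantile t_{0.025,9} ≈ 2.262 is taken from a t-table rather than computed):

```python
import math
import statistics

# Per-run average throughput times (minutes), from Table 5.2.
run_averages = [40.6, 40.9, 40.5, 39.9, 41.1, 40.5, 39.4, 41.5, 40.4, 39.6]

n = len(run_averages)                      # 10 replications
y_bar = statistics.mean(run_averages)      # 40.44
s = statistics.stdev(run_averages)         # sample standard deviation, ~0.66
t = 2.262                                  # t_{0.025, 9} from a t-table

# Prediction-interval half-width per Banks (2010): t * S * sqrt(1 + 1/n).
half_width = t * s * math.sqrt(1 + 1 / n)  # ~1.56 (1.57 when s is rounded to 0.66)
interval = (y_bar - half_width, y_bar + half_width)  # roughly [38.9; 42.0] minutes
```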

Run | Average | Median
1 | 40.6 | 39.7
2 | 40.9 | 40.9
3 | 40.5 | 38.8
4 | 39.9 | 39.0
5 | 41.1 | 41.1
6 | 40.5 | 39.1
7 | 39.4 | 39.2
8 | 41.5 | 41.0
9 | 40.4 | 39.3
10 | 39.6 | 40.1
Average | 40.44 | 39.82

Table 5.2: Average Throughput Times per Run

For the crowdedness and capacity utilization we can also calculate confidence intervals, using the same method as applied above. The results are depicted in Table 5.3 below. For the capacity utilization of the consultation room (or doctor) and the examination room no confidence intervals could be calculated, as the appropriate data was not available: all runs were identical. Hence the average over the 10 runs was calculated. It can be seen that, despite the fact that the central waiting system was disabled, an average of 1 person can be found in the CWA. This is because each patient enters and leaves the clinic via the CWA. Capacity utilization is considered as time used divided by time available. For the utilization of the consultation room and the examination room, 87.2% and 9.4% respectively are found.
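Assuming utilization is the fraction of the available (opening) time that a room is in use, the computation is a one-liner over the busy intervals. The intervals below are made up for illustration, not taken from the simulation output.

```python
# Opening hours of the clinic: 08:00-20:00, i.e. 12 hours = 720 minutes
AVAILABLE_MIN = 12 * 60

# Hypothetical busy intervals of a room, as (start, end) in minutes from opening
busy_intervals = [(0, 300), (330, 600), (620, 700)]

def utilization(intervals, available):
    """Capacity utilization = time used / time available."""
    used = sum(end - start for start, end in intervals)
    return used / available

u = utilization(busy_intervals, AVAILABLE_MIN)
print(f"utilization = {u:.1%}")
```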


KPI                                              95% Confidence Interval
Average number of persons in WA1                 [15.74; 18.99]
Average number of persons in WA2                 [6.78; 9.15]
Number of persons in CWA                         [1.12; 2.23]
Number of mismatches                             7 (same for each run)
Number of persons waiting at desk1               [0.86; 1.17]
Capacity utilization (consultation room/doctor)  87.2%
Capacity utilization examination room            9.4%

Table 5.3: KPIs for Experiment 2

Experiment 3: Configuration with DtP, CWA

For the full table with settings one is referred to Appendix D. The main difference between Experiments 2 and 3 is that in Experiment 3 the doctor picks up the patient and the central waiting system is enabled. Furthermore, there is a 100% probability of external examination, and patients stay for a specified amount of time at the central waiting area before they leave. The 95% confidence interval for the average throughput time (minutes) is [50.83; 53.22]. The reasons why this is much higher than the throughput of Experiment 2 are that (1) patients stay in the central waiting area after their consult for an average of 4 minutes, (2) patients receive an external examination, which adds extra walking distance during the consult, and (3) doctors pick up and bring back patients from the waiting area and desk. The 95% confidence interval for waiting time at the local waiting area is [11.13; 13.79]. From Table 5.4 it can be seen that the number of mismatches is higher than in Experiment 1. This is the result of the DtP setting, which may lead to a doctor wanting to pick up a patient who is not present yet.

KPI                                              95% Confidence Interval
Average number of persons in WA1                 [7.77; 10.27]
Average number of persons in WA2                 [5.72; 7.13]
Number of persons in CWA                         [20.18; 23.45]
Number of mismatches                             minimum 16, maximum 20, average 17.5
Number of persons waiting at desk1               [1.05; 1.34]
Capacity utilization (consultation room/doctor)  85.9%
Capacity utilization examination room            18.8%

Table 5.4: KPIs for Experiment 3

The reason why the capacity utilization of the external examination room is not higher than 18.8% is that it uses a deterministic examination time of 3:00 minutes. If the experiment is run again with a normal distribution (𝜇 = 5, 𝜎 = 2), a higher utilization of 32.5% is found.

5.3 Rebuilding ISA2 (ISA MDL)

In this section the reader is guided through rebuilding the ISA2 model. The ISA2 model was chosen because its process map includes external examinations while the model itself is relatively small (a larger model would only lead to repetitive work). Furthermore, the original ISA2 model makes use


of different objects than the reference model; for example, patients are created differently, and the consultation rooms and examination rooms have a rather different layout of methods. The steps that have been taken are evaluated in the next section together with their respective time investment. Afterwards, testing and validation are applied to the rebuilt model.

The Modeling Steps

Step 1: Collecting the Right Data (1 hour)
As explained earlier, the correct data was needed. In this case this implied that we needed a map of the hospital layout as well as patient appointment data. From the patient appointment data a list of objects (consult rooms, examination rooms, desks, WA) can be derived. The patient data was collected from the original model, as no patient agenda was present.

Step 2a: Building the Model: Objects (1 hour)
The model is built using the model constructor. This constructor works from a table of the objects with their respective attributes and places each object in the frame. Each object is already connected to a two-lane track, to make sure the object is well connected to the network of walking paths. Afterwards, one fills in a matrix to indicate which objects are connected to which other objects; the connections are then made automatically. The user has to indicate the length of the walking paths manually: this is the most time-consuming step.
Note: the time needed for this step is linearly dependent on the number of objects in the model.

Step 2b: Setting the Parameters (30 minutes)
As explained in Section 4.3, the settings can be altered by adapting variables or by using the dialog. These parameters are based upon the configured version of the configurable process model.

Step 3: Quick Test of the Model (30 minutes)
Once the model is ready to run, it is briefly tested for 'debug' events. The objects in the model contain various debug codes to make sure there are no endless loops and/or deadlocks.
Step 4: Running the Model & Extracting Data (1 hour)
Now that the model runs properly, data can be extracted. This data typically contains information about the KPIs that were described earlier.

In total it took four hours to build the model using the tool. Four hours is only a small fraction of the total time invested in the original model. However, one must add the nuance that no time was lost exploring the patient traces and acquiring data from the hospital; these components were available and ready to use. Also, the tool was operated by its own developer, which hinders a fair comparison.
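The constructor logic of Step 2a can be sketched as follows. The actual tool is built in Tecnomatix Plant Simulation; this Python sketch only mirrors the idea of placing objects from a table and wiring them via a connection matrix with manually supplied path lengths. All names and fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ClinicObject:
    name: str   # e.g. "ConsultRoom1"
    kind: str   # "consult", "exam", "desk", "wa", "cwa"
    x: int      # position in the frame
    y: int

# Step 2a input: a table of objects with their attributes
object_table = [
    ClinicObject("Desk1",        "desk",    10, 10),
    ClinicObject("WA1",          "wa",      20, 10),
    ClinicObject("ConsultRoom1", "consult", 30, 10),
    ClinicObject("ExamRoom1",    "exam",    40, 10),
]

# Connection matrix entries: which objects are linked by a walking path.
# The path length (metres) is the part the user must fill in manually.
lengths = {
    ("Desk1", "WA1"): 8,
    ("WA1", "ConsultRoom1"): 12,
    ("ConsultRoom1", "ExamRoom1"): 15,
}

def build_network(table, lengths):
    """Create one two-lane track per connection-matrix entry."""
    names = {o.name for o in table}
    paths = []
    for (a, b), metres in lengths.items():
        assert a in names and b in names, "unknown object in connection matrix"
        paths.append((a, b, metres))
    return paths

network = build_network(object_table, lengths)
print(f"{len(object_table)} objects, {len(network)} walking paths")
```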

Results

First of all we want to verify whether the correct process is present in the model. In order to do so, we use the same strategy as in the discovery phase of the available models. A simulation of one day was run; the trace file was then analyzed in Disco. The process map shows the exact behavior the model was expected to show. For the full map, I refer to Appendix D. As can be seen from the process maps, the activities carry different names and another level of abstraction.


The overall process, though, is the same: the patient enters the outpatient clinic → takes a place in the waiting area → receives a consult with a probability of external examination → leaves the clinic.

The best method to check whether the newly built model is capable of doing the same as the existing model is Conformance Analysis (Rozinat & Aalst, 2006). The challenge with conformance analysis is that it assumes perfect alignment of activity names. Therefore the trace file of the existing model (which was much smaller in terms of activities) was translated to the same activities where possible. After re-mining the trace in Disco, the logs were exported in the .XES format, which is fit for the ProM UITopia Java tool, in which the conformance check was used. As can be seen from Figure 4.1, one would expect rather high conformance. Using the ETConformance tool in ProM, a precision of 0.8966 was found for the ISA2 model (Mu & Carmona, 2010). The algorithm uses the Petri net (of the C-EPC) and a log of the original model. Based on the escaping edges (behavior that is allowed by the model but not observed in the log), the precision can be calculated. A schematic overview of the method is provided in Figure 5.2. The reason why the conformance is not 100% may be that not all steps of the patients can be recorded in the original ISA2 model; the trace file is based on another hierarchy. Certain combinations of missing activities and states will be perceived as escaping edges. Still, a conformance precision of ≈ 0.9 can be considered high.

   

  Figure  5.2:  ETConformance  (Mu  &  Carmona,  2010,  p.  6)    
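The escaping-edges intuition can be sketched in a few lines. This is a simplified toy version, not the actual ETConformance algorithm: for every observed trace prefix we compare the activities the model allows next against the activities the log actually continues with, and penalize model behavior that 'escapes' the log. The process names are illustrative.

```python
# Toy precision check: the model is given as a map from trace prefix to the
# set of activities it allows next; the log is a list of traces.

def prefixes(trace):
    for i in range(len(trace)):
        yield tuple(trace[:i]), trace[i]

def etc_precision(log, allowed_next):
    """1 - (escaping edges / allowed edges), summed over all log prefixes."""
    observed = {}
    for trace in log:
        for prefix, nxt in prefixes(trace):
            observed.setdefault(prefix, set()).add(nxt)
    escaping = allowed = 0
    for trace in log:
        for prefix, _ in prefixes(trace):
            model_next = allowed_next.get(prefix, set())
            allowed += len(model_next)
            # edges the model allows here that the log never takes
            escaping += len(model_next - observed[prefix])
    return 1 - escaping / allowed if allowed else 1.0

# Hypothetical model: after entering, the model also allows leaving directly,
# which the log never does, so precision drops below 1.
allowed_next = {
    (): {"enter"},
    ("enter",): {"wait", "leave"},
    ("enter", "wait"): {"consult"},
    ("enter", "wait", "consult"): {"leave"},
}
log = [["enter", "wait", "consult", "leave"]] * 5

print(etc_precision(log, allowed_next))
```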

Secondly, we can use the KPIs to check whether the rebuild of ISA2 is correct. By analyzing the output for the different KPIs we can see whether the same results are produced. The KPIs and corresponding outcomes are depicted in Table 5.5. For the simulation, an experiment was run using the settings depicted in Appendix D. The model was run 10 times, and the absolute simulation time is a full day (08:00:00 – 20:00:00). The main problem faced in the comparison of KPIs is that the tool only registers when a mutation occurs to, for instance, the number of people in the waiting area, whereas the original model logs the number of people in each area every minute. Therefore an additional table was added to the tool, which can be used to register the same information as the original model. In order to assess whether the confidence intervals are significantly different, a two-sample t-test for comparing means is used. The null hypothesis is that both means are equal; if the t-statistic is greater than the critical value of 2.26 (n = 10, two-tailed, 95%), the null hypothesis can be rejected. The formula to calculate the t-statistic is (Mann,

2010):

𝑡 = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)
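A minimal implementation of this two-sample t-statistic is shown below. The per-run samples are illustrative, not the thesis output; the critical value 2.262 is the table value used in the text.

```python
import math
import statistics

def t_statistic(x1, x2):
    """Unpooled two-sample t-statistic for comparing two means."""
    m1, m2 = statistics.mean(x1), statistics.mean(x2)
    v1, v2 = statistics.variance(x1), statistics.variance(x2)
    return (m1 - m2) / math.sqrt(v1 / len(x1) + v2 / len(x2))

# Illustrative per-run KPI values for the original and rebuilt model
original = [40.6, 40.9, 40.5, 39.9, 41.1, 40.5, 39.4, 41.5, 40.4, 39.6]
rebuilt = [v + 0.3 for v in original]

t = t_statistic(original, rebuilt)
T_CRIT = 2.262  # two-tailed, 95%
print(f"t = {t:.2f}, reject H0: {abs(t) > T_CRIT}")
```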


As can be seen from Table 5.5, the average numbers of waiting people seem similar; the confidence intervals overlap and are relatively small. However, the corresponding t-statistic is above the threshold of 2.26. Hence the difference between the two samples is considered significant, and the null hypothesis must be rejected for this KPI. Because the KPI for the average number of people in the waiting area did not match, the maximum number of people in the waiting area was also compared. For this KPI the t-statistic was found to be below the threshold of 2.26, so the outcomes can be considered the same. It is remarkable that the average number of people in the waiting area is significantly different given that the process models of the original and rebuilt model are the same. After checking different parts of the code, a logical explanation could be identified. The rebuilt model runs from an appointment schedule adapted from the original model. The original model, however, uses a different approach in generating the appointments: it assigns different appointment schedules in each run; although the schedule is fixed, the duration of the consult is generated separately for each run. This way it could be that e.g. Patient 10 has consults of {9:00, 15:00, 12:00, 8:00, …} across the runs. The rebuilt model, on the other hand, applies a stochastic distribution to the variation of the 'predetermined' duration of the consult.
In other words, let us assume that the appointment schedule for Patient 10 in the rebuilt model assigns 12:00 minutes for the consult duration; the uniform distribution for the 'actual consult duration' then draws a random number between 12:00 minus 10% and 12:00 plus 10%, i.e. between 10:48 and 13:12. This is a difference in modeling approach, and can be seen as a simplification.

The difference in the queue at the desk is also large, leading to a significant difference between the original and the rebuilt model (t = 11.59). The reason for this discrepancy is that the rebuilt model operates a simple normal distribution for desk time, whereas the original model uses code to assign different desk times to various types of patients. Given the essentially unbounded number of possible 'types of patients', this feature was simplified. The t-statistic for average throughput time is below 2.26: the throughput times can be considered 'the same'.
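The rebuilt model's duration sampling can be sketched as follows; this is a plain-Python stand-in for the tool's distribution, with the 10% band taken from the description above.

```python
import random

def actual_duration(scheduled_s, band=0.10):
    """Draw the actual consult duration uniformly within +/- band of the schedule."""
    return random.uniform(scheduled_s * (1 - band), scheduled_s * (1 + band))

def mmss(seconds):
    """Format seconds as m:ss, matching the 12:00-style notation in the text."""
    return f"{int(seconds) // 60}:{int(seconds) % 60:02d}"

scheduled = 12 * 60  # Patient 10: a scheduled consult of 12:00 minutes
lo, hi = scheduled * 0.9, scheduled * 1.1
print(mmss(lo), mmss(hi))  # bounds 10:48 and 13:12

sample = actual_duration(scheduled)
assert lo <= sample <= hi
```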

KPI                                            Original            Rebuilt             Hypothesis Test (H0: 𝜇original = 𝜇rebuilt)
# Patients                                     250 (per run)       250 (per run)
Call Cycles per Run                            442                 302
Average number of persons in waiting area      [7.31; 7.70]        [7.58; 8.95]        t-statistic = 7.62
Average max number of people in waiting area   35 [32.76; 37.23]   34 [30.13; 37.86]   t-statistic = 1.66*
Average number of persons waiting for desk     [5.48; 13.98]       [1.90; 2.53]        t-statistic = 11.59
Average throughput time patient (mins)         [30.24; 37.56]      [33.33; 35.75]      t-statistic = 1.18*

Table 5.5: Comparison of KPIs, Original ISA2 Model and Rebuilt

Another option to check the similarity between the models is to use their respective Sankey diagrams. A Sankey diagram shows the intensity of the streams through the model; it is therefore also a convenient way to see whether the model is well connected. The two diagrams can be


found in Figure 5.3 and show high similarity. Not all KPIs discussed in Section 4.2 are summarized in Table 5.5. The original model does not allow one to assess the capacity utilization of rooms or doctors, nor does it allow evaluating the number of mismatches.

 

Figure 5.3: Sankey Diagrams Output Plant Tecnomatix (left: original, right: rebuilt)

 


6 Model Evaluation

In order to evaluate the generic parameterized reference model, Moody's framework was chosen (Moody, 2003). Moody's framework expresses the improved performance of the (simulation) model. "The objective of validation should not be to demonstrate that the method is "correct" but that it is rational practice to adopt the method based on its pragmatic success" (Moody, 2003, p. 3). Given this statement, one needs a clear definition of pragmatic success, which can be explained as "the efficiency and effectiveness with which a method achieves its objectives (O)" (Moody, 2003, p. 3). Hence all methods are designed to improve the performance of a task. The performance of a task can then be divided into efficiency and effectiveness: efficiency improvement translates into the notion that one wants to reduce the effort to complete the task, whereas effectiveness concerns improving the quality of the actual result. This difference between efficiency and effectiveness is depicted in Figure 6.1.

 

Figure 6.1: Effectiveness vs Efficiency (Moody, 2003, p. 3)

6.1 The Method Evaluation Model

Based on existing methodologies, Moody (2003) developed the Method Evaluation Model (MEM), incorporating only the successful elements of earlier methods. A schematic overview of MEM can be found in Figure 6.2.

 

Figure  6.2:  Method  Evaluation  Model  (Moody,  2003)  

MEM distinguishes six constructs, which are listed below (Moody, 2003):


1. Actual Efficiency: the effort required to apply a method.
2. Actual Effectiveness: the degree to which a method achieves its objectives (O).
3. Perceived Ease of Use: the degree to which a person believes that using a particular method would be free of effort.
4. Perceived Usefulness: the degree to which a person believes that a particular method will be effective in achieving its intended objectives.
5. Intention to Use: the extent to which a person intends to use a particular method.
6. Actual Usage: the extent to which a method is used in practice.

For constructs 1, 2 and 6 one needs real-life analysis and experiments; these cannot be derived from surveys or interviews. To evaluate these constructs one needs to perform controlled experiments. Especially for constructs 1 and 2, actual efficiency and actual effectiveness, a controlled experiment would be the optimal test case. For this research it was not feasible, due to limited resources (time, money, …), to set up these experiments. However, the textbox below gives a short summary of the proposed experimental set-up.

|| EXPERIMENTAL SET-UP ||

The controlled experiment is done with two groups of consultants, preferably consultants who have no, or minimal, experience with the models that have established the basis for the configurable model and corresponding tool. Before the actual experiment starts, all participants are given a briefing in which the 'problem' or challenge is explained. In this briefing the participants are familiarized with the relevant KPIs and the physical and organizational layout of the outpatient department. Each participant also receives an agenda for the patients and is asked to make a simulation model of Clinic_Test in which the KPIs can be evaluated. Up to this point, all participants have the exact same information. Then there are two 'treatments', X and Y, to which each participant is randomly assigned.

Treatment X comprises a method in which the outpatient-clinic-specific available models are available to the participants. In this case this would resemble ISA1, ISA2 and RDG. Other models that are configured 'instances' of the configurable model may be added to this pool so that the participant has more information. The participant is asked to build Clinic_Test based on the information provided by the available models. The participant may copy, tweak, adapt and write new code.

Treatment Y comprises an explanation of the steps that need to be taken in order to construct the model using the C-EPC and corresponding tool.
Hence, the participant is not given any working model of an outpatient clinic but solely the tool and the instructions that come with it. After completing the assignment, the participant is presented with a survey comprising questions based on Moody's framework: perceived ease of use, perceived usefulness and intention to use.

Once the participants have finished the 'assignment' of constructing Clinic_Test, a researcher has to analyze the quality of each model by mining the event logs and checking whether the appropriate behavior is present. Furthermore, it is tested whether the outcomes of the KPIs correspond (within a confidence interval) to the desired outcome. The animation of the model can also be checked in order to be sure there is no 'beaming' or other deviant behavior. Lastly


the time that was needed to build the model is evaluated in order to establish an understanding of the time saved when using the tool.

For the other three constructs, on the other hand, a survey study was found fit for evaluation (Moody, 2003). The survey is based on the original survey by Moody and is meant to evaluate the perceived ease of use, perceived usefulness and intention to use. As the model will not actually be used by those surveyed, the intention to use is especially interesting. The original survey developed by Moody can be found in Appendix D.

As Moody & Shanks (1994) point out, it is important to include all the stakeholders in the process. In their view these comprise the end user, management, the data administrator and the application developer. All these stakeholders have a different view on what is important for the model and what is less important. Typical features are: simplicity, completeness, flexibility, integration, understandability and implementability. These components can be found systematically in Figure 6.3. In the survey I evaluate the reference model based on intention to use, flexibility, simplicity and understandability. These metrics were selected because Moody and Shanks (1994) found that they are typically the most important for the Expert Data Modeler, Operational User and Industry Expert; these three functional areas closely resemble the position of the consultant.

The survey is set up so that the subject has to place him- or herself in the shoes of a consultant.
A case is given to make the context more concrete. Afterwards, the subject is exposed to two approaches for designing the DES model. The first approach comprises a set of four models from which the new model may be constructed or tweaked. The second approach presents the subject with the reference model and the consecutive steps that need to be taken in order to use this model. The subject is asked to score certain statements on a 7-point Likert scale (Hair, Black, Babin, Anderson, & Tatham, 2009). The line of questioning is based on the original survey by Moody. All the questions of the survey can be found in Appendix E. The full survey (in .pdf) can be found at the attached URL.

 

Figure  6.3:  Drivers  for  Data  Model  Quality  (Moody  &  Shanks,  1994,  p.101)  


6.2 Sample and Results

The potential pool of professionals using the parameterized reference model is rather small. In the Netherlands there are only a few firms which specialize in simulation models for outpatient clinics in hospitals. In addition, there are only a few hospitals which actively engage in simulation experiments to improve their performance or awareness, and even these hospitals do not have the expertise to use the simulation models themselves; external parties are needed.

The sample consists of 8 participants, whose demographics can be found in Table 6.1. The data shows that the participants have a mean experience of 11.5 years, with a maximum of 34 and a minimum of 0.5 years. Furthermore, all participants have an MSc/MBA, of which one also holds a PhD. Six participants use object-oriented modeling in their daily work; the other two are experienced in flow-oriented simulation modeling.

Characteristic        Distribution
Country of Residence  3 USA, 1 German, 4 Dutch
Gender                3 Female, 5 Male
Type of Work          7 Consultant, 1 Model Developer

Table 6.1: Characteristics Sample

Before starting any type of analysis on the data, the negatively phrased questions were transposed to a positive meaning [8 − Score_original], as a 7-point Likert scale was used. Afterwards, an analysis was conducted in order to make sure there are no multivariate outliers. Multivariate outliers can be identified through the Mahalanobis distance (Hair et al., 2009). After computing the Mahalanobis distance, the probability is calculated using a chi-square distribution; all p < 0.001 are considered multivariate outliers. From this analysis no outliers could be identified; however, from visual inspection one outlier was found, namely the Model Developer. There is no variability in his/her answers (everything is ranked 7), and hence this survey is not considered in the analysis. Tests for normality are not appropriate for a sample of n = 7 and are therefore not conducted. In further statistical procedures such as factor analysis and t-tests, the assumption of normality is taken for granted (Mann, 2010). Given the small n, the conventional 0.95 significance level is relaxed, and a more lenient significance level of 0.90 is adopted instead (Hair et al., 2009).

The responses to the evaluation questions of the questionnaire were compared using paired-samples t-tests (Appendix F). It was found that out of 10 variables, only 4 showed a significant difference (p < 0.1) between approaches 1 and 2: Reusability, Maintenance, Time Intensity and Prior Knowledge. All of the significant variables differed in favor of approach 2 (the configurable model tool).
The variables are linked to the constructs according to the original survey by Moody; a schematic overview can be found in Appendix E. Checking the constructs using the standard EFA procedure was not possible due to the small sample; the assumptions of the KMO and Bartlett's tests could not be evaluated. Starting from an a-priori number of factors, the constructs were evaluated in terms of reliability. There are in total 6 constructs, as can be seen in Table 6.2. From Table 6.2 one can conclude that there are four constructs with a negative 𝛼. This means there is negative average covariance amongst the items; given that these constructs only consist of two variables, this cannot be fixed by simple deletion. A Cronbach's 𝛼 < 0.7 is


problematic in most cases; however, for exploratory research a Cronbach's 𝛼 > 0.6 can be accepted (Hair et al., 2009). One could delete "Effective", leading to a higher 𝛼 for both methods. The difference between the perceived usefulness of the two methods is found to be significant (p = 0.035), even at the 95% level. For the statistical test results I refer to Appendix F.

Construct              Cronbach's 𝜶 (Method 1)            Cronbach's 𝜶 (Method 2)
                       Before Deletion  After Deletion    Before Deletion  After Deletion
Intention to Use       0.766            n/a               −3.08*           n/a
Perceived Ease of Use  −45.0*           n/a               −7.05*           n/a
Perceived Usefulness   0.601            0.610             0.864            0.911

Table 6.2: Constructs Reliability
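The reverse coding and the reliability computation used above can be reproduced in a few lines; Cronbach's 𝛼 = k/(k−1) · (1 − Σ item variances / total variance). The item scores below are made up for illustration, not the actual survey responses.

```python
import statistics

def reverse_code(score, points=7):
    """Transpose a negatively phrased Likert item: 8 - score on a 7-point scale."""
    return (points + 1) - score

def cronbach_alpha(items):
    """items: list of columns, one per question, same respondents in each column."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Hypothetical two-item construct scored by five respondents
q1 = [6, 5, 7, 6, 4]
q2_negative = [2, 3, 1, 2, 4]                 # negatively phrased item
q2 = [reverse_code(s) for s in q2_negative]   # becomes [6, 5, 7, 6, 4]

print(cronbach_alpha([q1, q2]))
```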

Regarding the last 5 questions, in which the participants indicate which method they prefer, there is a clear consensus. In Appendix F one can find the statistical output, from which it can be concluded that all participants think approach 2 is faster (𝜇 = 6.28, 𝜎 = 0.23). The participants generally agree that approach 2 is simpler (𝜇 = 6.14, 𝜎 = 0.69) and more sustainable (𝜇 = 6.00, 𝜎 = 1.00). However, from the general questions it cannot be derived whether the participants think that approach 2 leads to better reuse of the available models. This is interesting, because for the earlier question of the questionnaire ("This method is the best way to reuse available models") a significant (p = 0.07) difference between methods 1 and 2 was found.


 

7 Discussion

The research started from the idea of combining a sample of existing models for outpatient clinics. Despite the fact that only three useful, working models were identified, the methodology as presented has been found successful; a larger number of simulation models would nevertheless have been preferable. Theory on C-EPCs has formed the basis for a methodology enabling the aggregation and generalization of these three models (Gottschalk et al., 2007; Gottschalk, 2009; Van der Aalst et al., 2006). The articles concerning C-EPCs seldom discuss features other than the control-flow of the model. Although the control-flow perspective has formed the basis for the end result, it does not incorporate the resource, object or data-flow perspectives. A solution to this shortcoming was found in La Rosa et al. (2011), who introduced Configurable integrated EPCs (C-iEPCs), which incorporate the resource-flow perspective. Although C-iEPCs could have been a good language to construct a generic model for outpatient clinics, an object-oriented approach was chosen: C-iEPCs are not particularly fit for actual simulation, but are rather made for communication purposes or to form the basis for a simulation model (Müller, 2012). In addition, the object-oriented approach allows for meaningful visual animations (on architectural drawings) and makes it easy to measure object-oriented KPIs.

Using the configurable process merging algorithm (La Rosa et al., 2012), process models mined from the existing simulation models were combined into a C-EPC.
This  C-­‐EPC  has  formed  the  basis  for  a  discrete  event  simulation  tool,  which  can  be  set  to  run  according  to  a  particular  configuration.   The   tool   incorporates   two  major   features,   the   construction   of   a  model   and   the  running  of  the  model.  The  construction  of  the  model  is  necessary  as  each  outpatient  clinic  has  a  different   physical   set-­‐up   of   consultation   rooms,   examination   rooms,   desks,  waiting   areas   and  central   waiting   area(s).   Given   the   fact   that   the   model   is   constructed   from   a   library,   the  simulation  model  does  not  carry  excess  weight  in  terms  of  redundant  objects.  Objects  have  been  constructed   to   allow   for   different   configurations;   so   instead   of   having   e.g.   two   types   of  consulting  rooms  only  one  suffices.  Also,  the  library  can  be  easily  updated  with  more  recent  and  better  versions  of  the  objects.  Running  and  configuring  the  model  can  be  done  through  a  dialog  which   follows  the  same   logic  as  Quaestio,  a  questionnaire  based  configuration  tool   for  C-­‐EPCs  (La  Rosa  et  al.,  2011).     The  KPIs   incorporated   in   the  model  descent   from  both  the  available   three  models  and  the  KPIs  identified  in  the  literature  review.  Despite  the  fact  that  not  all  KPIs  from  the  literature  could  be  implemented,  for  example  due  to  their  qualitative  character,  average  throughput  time,  clinic  waiting  time,  absolute  throughput  and  the  working  time  for  doctors  were  added  (Martin  et  al.,  2003;  McCarthy  et  al.,  2000;  Weerawat,  2013).  Because  the  choice  was  made  to  construct  a   tool  which  works  with   appointment   data,   no  KPIs   could   be  measured   like   ‘waiting   time   till  first   appointment’   (Martin   et   al.,   2003).   
In other words, the appointment schedule for the runtime of the simulation experiment is fixed beforehand; it is recommended that more flexible schedules be adopted in the future. The model simplifies the real world in the sense that no extra appointments are scheduled in between the existing agenda, and doctors do not leave for


emergency cases. Furthermore, the simulation model implicitly assumes that the outpatient clinic is an autonomous entity; dynamics between other (outpatient) clinics are not captured, whereas these could severely influence the KPIs, especially if patients have multiple appointments on one day.

Three basic experiments were conducted, using the testing methods presented by Sargent (2000). The first experiment was deterministic, meaning that almost all stochastic distributions were disabled. The results of this experiment made it possible to manually verify the correctness of certain KPIs, such as throughput time. In the latter two experiments, confidence intervals were computed (for the KPIs that allowed it), and process models were mined in Disco. The mined process models made it possible to evaluate the effect of the settings and the correctness of the model. For instance, the probability for 'making appointment at desk' was changed between the experiments; the process models clearly show different frequencies for the various paths, and correspond to the settings fed to the dialog. In terms of KPIs there was a clear difference between the simulation with and without CWA. As one would expect, the CWA was more crowded with the central waiting system (RICOH, n.d.); for the waiting area the opposite was true.

As a fourth experiment, one of the original models was rebuilt. The conformance precision of the C-EPC was found to be 0.8966. Although theoretically one would expect a conformance of 1, the result is satisfactory. The trace file of the original model was computed from a different hierarchy than the C-EPC.
Hence, many activities and states are not present in the original trace file. The plug-in in ProM that computes the conformance precision considers some of these cases escaping edges, which leads to a lower precision. Checking the KPIs led to surprising results given the high conformance precision: of the four confidence intervals compared, only two were not significantly different. Because the average crowdedness of the waiting area was significantly different (p<0.05), the average maximum crowdedness was computed; it was found to be not significantly different from the original model, which indicates similarity. The average throughput time was statistically similar (i.e. the confidence intervals were not significantly different). The average queue length, however, was statistically different as a consequence of a simplification. In terms of methodology and research objective, the configurable tool has failed to compute the exact same outcomes as its three original models. In taking the process-oriented approach for analyzing and merging the models, not all code was extensively analyzed, and nuances were therefore overlooked.

The time needed to construct the model using the tool was found to be four hours, only a fraction of the time that was needed to build the original model. Some nuances must be added to this finding. First, it cannot be determined how much of the original development time was used for 'pure' construction of the model. Second, all the information necessary for the construction of the rebuilt model (i.e.
objects and patient agenda) was present: only small data transformations had to be made. Third, it was the developer who operated the simulation tool. To make this finding solid, the controlled experiment suggested in Chapter 6 should be conducted.

To assess the usability and likelihood of adoption of a method using the tool, a questionnaire based on Moody's framework (Moody, 2003) was issued to a small number of consultants. In this questionnaire the respondents were asked to


evaluate two methods of developing a simulation model. The first method comprises reuse of available models, while the second comprises use of the tool. The respondents were given screenshots of available models and screenshots of the configurable tool, along with a written explanation of each method. Referring back to the constructs defined by Moody (2003), only perceived usefulness could be compared between the methods; the other constructs were found to be non-significant in their respective factor analyses. Perceived usefulness was significantly better (p<0.1) for the tool than for the 'classical' method. Because the other constructs could not be evaluated, components of these elements were compared. All respondents found the approach in which the tool was used significantly (p<0.1) better in terms of reusability, maintenance, time-intensity and the amount of prior knowledge needed; the other variables did not indicate significant differences. Overall, the respondents indicated that the tool was faster, simpler and more sustainable, which aligns with the hypotheses. It should be noted that the questionnaire was issued to eight consultants, which is a rather low sample size, and that the selection of the subjects was non-random. The survey has indicated a tendency in favor of the configurable tool, but must be extended in scale.

In the literature review, three studies were found which claimed to have generic features. Although the model developed by Swisher et al.
(2001) allows for some configuration of capacities, number of examination rooms and staffing, the model does not allow one to configure the process of the patient: each patient follows the exact same process model. The model in the study by Weerawat (2013) takes the perspective of a flow-based model (in Arena). Instead of using an agenda with appointments, this model assumes a batch of people arriving in the morning, who are rotated through the model. Probability distribution functions are fitted to choice nodes, which then decide the process model of the patient. That simulation model does not allow one to analyze streams through the model, and is not fit to evaluate resource utilization of different rooms (consultation, examination) in the outpatient clinic. The model by Paul & Kuljis (1995) is rather limited in its capabilities to simulate different settings for outpatient clinics; for instance, only one to six doctors can be modeled and the number of rooms cannot be altered. The tool developed in this research fills some of the shortcomings of the models discussed above. This, however, does not mean there is no room for improvement: in order for the simulation tool to become more generic and widely applicable, more configurations should be enabled, more KPIs could be implemented and field verification should take place.

With this research a basic foundation for merging object-oriented simulation models is given. Translating available models to process models using data mining techniques is an established academic field (van der Aalst, 2012).
Translating these configurable models to an object-oriented simulation tool is new. The crux stems from the fact that there is no 'natural' flow in object-oriented models; the entity must always be given a certain direction or task in order to change its 'state' (DES). The methodology as adapted in this study has worked for this small pool of models, and is therefore a good basis for future work. On the practical side, there is a major impact on the efficiency of developing simulation models for outpatient clinics. Firms that are willing to develop and operate tools as described in this thesis will have a competitive edge in the market.
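As an illustration of the interval-based KPI comparisons used in the experiments, the sketch below computes a 95% confidence interval for a KPI over independent replications and treats two KPIs as 'not significantly different' when their intervals overlap. This is a minimal sketch: the sample values and function names are hypothetical, and the actual experiments were of course run in the simulation tool itself.

```python
import math

def confidence_interval(samples, z=1.96):
    """95% CI for the mean of independent replications (normal approximation)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return (mean - half, mean + half)

def intervals_overlap(a, b):
    """Conservative check: overlapping CIs count as 'not significantly different'."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical throughput times (minutes) from eight replications of two variants
original = [42.1, 39.8, 41.5, 40.2, 43.0, 38.9, 41.1, 40.7]
rebuilt  = [41.0, 40.5, 42.2, 39.9, 41.8, 40.1, 42.5, 41.3]

ci_orig = confidence_interval(original)
ci_rebuilt = confidence_interval(rebuilt)
print(intervals_overlap(ci_orig, ci_rebuilt))  # → True
```

Note that CI overlap is a stricter criterion than a formal two-sample test; it is shown here only because the thesis compares KPIs by checking whether their confidence intervals differ.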


8 Conclusion and Final Remarks

8.1 Main Findings

The main goal of this thesis was to evaluate whether it would be possible to build a parameterized, generic simulation model for outpatient clinics which is smaller in size than the sum of the existing models. To answer that question, three models were analyzed, leading to a configurable process model, a C-EPC. This configurable process model has formed the basis for an object-oriented DES tool. The tool allows the user to construct and run a model according to his or her preferences and the characteristics of the outpatient clinic in question. The tool is easily maintained, due to the object-oriented approach in which the object library can be updated and replaced by superior or extended versions. Because the tool starts from a library of objects instead of a static, standing model, the tool is smaller in size than the evaluated original models: the objects can fulfill their roles in different configurations, hence fewer objects are needed. The number of call cycles for the rebuilt model was 25% lower than for the original; although this does not prove that the tool will always yield a model with fewer call cycles, the result can be seen as an indication. The rebuilt model showed a conformance precision of 0.8966 on the C-EPC. One of the research objectives was to compute the same KPIs with the configurable tool as in the original models. The differences in KPIs between the rebuilt model and the original model were in some cases significant (p<0.05). In taking the process-oriented approach for analyzing and merging the models, not all code was extensively analyzed.
Simplifications, e.g. for the time needed at the front office, were not accurate and thereby decreased the validity of the configurable tool.

Using a questionnaire, eight consultants were asked to evaluate two approaches to model development and use. The tool made in this thesis was found to be significantly (p<0.1) better in terms of reusability, maintenance, time-intensity and the amount of prior knowledge needed. Moody's construct perceived usefulness was also scored significantly (p<0.1) higher for the tool than for the 'classical' approach (Moody, 2003).

8.2 Recommendations for Future Work

As discussed before, this research fills a gap in the literature. At the same time, it also has its limitations. First of all, only three available models were used to construct the configurable simulation tool. This implies that the behavior of the tool is limited to the C-EPC depicted in Figure 4.1. If this methodology is adopted in another study, more models should be used to construct the C-EPC and thereby the simulation tool. Secondly, the tool has only been tested in four experiments, of which three were hypothetical. The results were more or less in line with the expectations. However, to test the real power of the tool, fieldwork should be conducted in which simulation models for 'unknown' outpatient clinics are constructed. In such empirical studies one could use, for example, GPS or transponder (RFID) tracking devices to gain more insight into the actual behavior of patients and their entourage. Also, the configurations of the C-EPC that do not appear in the original models must be validated.

Third, the model was evaluated using an online questionnaire, completed by a non-random sample of only eight consultants. Despite the fact that the results


were significant, a greater sample should be used. Preferably, the questionnaire is combined with a controlled experiment as posed in Section 6.1. Constructs in the 'actual efficacy' domain can thereby be tested and compared across the different approaches. In designing a study of actual efficacy, other measures of complexity and ease of use can also be considered; an example of such a measure is the number of mouse clicks needed to construct and run a model.

Fourth, as this research was conducted at a consultancy firm, the choice of modeling software was predetermined and explicitly left out of scope. There are many packages available for DES, of which some are specially tailored for healthcare. Using the same methodology, but developing a tool in a healthcare-oriented DES package, could lead to other, and maybe better, results.

Fifth, the tool as presented in this thesis features applications for what-if analysis. Extending the tool with optimization features could improve the real value and applicability of such a tool (Clague et al., 1997). A strong recommendation in that sense is to extend the tool with flexible scheduling of patients (i.e. no appointment schedule fixed in advance) during the course of the simulation run. This means that appointments made during the simulation are actually executed in that same simulation; the runtime of the tool must then be increased to weeks or months. This allows one to measure KPIs such as waiting time till first appointment (Martin et al., 2003).

Last, a certain level of simplification was adopted in the developed tool.
Adding more complex features, such as an interface from the outpatient clinic to inpatient clinics with non-elective consults and/or examinations, would increase the real-world value of the tool. Features could be added such as doctor cancellations due to emergency cases at other departments, or multiple appointments per day at different (outpatient) clinics leading to more complex waiting-time KPIs. However, if the methodology followed in this thesis is applied to a greater pool of models, this will happen automatically.

8.3 Final Remarks

In this thesis I have tested a new approach to the development of simulation models for outpatient clinics. Using the theory of configurable models, a tool was made which can construct and run a model of an outpatient clinic in different organizational settings. Using tools as described in this thesis could give consultants (and perhaps hospitals) a competitive edge. Instead of tweaking code and copy-pasting from old models, such a tool allows for constructive development: constructive in the sense that every time the tool must be adapted to simulate a certain outpatient clinic, the tool improves in terms of generalizability and applicability. I sincerely hope that the results of this thesis will be used as a basis, or as elements, for future research, so that reinventing the wheel is avoided and people can be more efficient in their work. To facilitate such an efficiency improvement in overall model development, a platform should be developed in which people can make use of each other's work while freeriding is prevented. A collaborative community with equal contributions makes it attractive for industry and academics to share, merge and compare tools as presented in this thesis.


References

Bangsow, S. (2010). Manufacturing Simulation with Plant Simulation and SimTalk: Usage and Programming with Examples and Solutions. Springer (p. 300). doi:10.1007/978-3-642-05074-9

Banks, J. (2010). Discrete-event System Simulation. Prentice Hall.

Booch, G., Maksimchuk, R. A., Engel, M. W., Young, B. J., Conallen, J., & Houston, K. A.

(2008).   Object-­‐oriented   analysis   and   design   with   applications   (Vol.   3).   Addison-­‐Wesley.  

Bos,   W.   J.,   Koevoets,   H.   P.   J.,   &   Oosterwaal,   A.   (2011).   Ziekenhuislandschap   20/20:  Niemandsland  of  Droomland?  Raad  voor  de  Volksgezondheid  &  Zorg.  Den  Haag.  

Brailsford,   S.   C.,  Harper,  P.  R.,   Patel,  B.,  &  Pitt,  M.   (2009).  An  analysis  of   the  academic  literature  on   simulation   and  modelling   in  health   care.   Journal  of  Simulation,  3(3),  130–140.  doi:10.1057/jos.2009.10  

Buijs, J. C. A. M., Van Dongen, B. F., & Van der Aalst, W. M. P. (2013). Mining configurable process models from collections of event logs. In Lecture Notes in Computer Science (Vol. 8094 LNCS, pp. 33–48). doi:10.1007/978-3-642-40176-3_5

Clague,   J.   E.,   Reed,   P.   G.,   Barlow,   J.,   Rada,   R.,   Clarke,   M.,   &   Edwards,   R.   H.   T.   (1997).  Improving   outpatient   clinic   efficiency   using   computer   simulation.   International  Journal  of  Health  Care  Quality  Assurance,  10,  197–201.  

Coelli, F. C., Ferreira, R. B., Almeida, R. M. V. R., & Pereira, W. C. A. (2007). Computer simulation and discrete-event models in the analysis of a mammography clinic patient flow. Computer Methods and Programs in Biomedicine, 87, 201–207. doi:10.1016/j.cmpb.2007.05.006


Côté, M. J. (1999). Patient flow and resource utilization in an outpatient clinic. Socio-Economic Planning Sciences, 33, 231–245. doi:10.1016/S0038-0121(99)00007-5

del-Rio-Ortega, A., Resinas, M., & Ruiz-Cortés, A. (2009). Towards Modelling and Tracing Key Performance Indicators in Business Processes. In Actas de los Talleres de las Jornadas de Ingeniería del Software y Bases de Datos (Vol. 3, pp. 57–67).

Disco User's Guide. (2012). Retrieved July 01, 2014, from http://fluxicon.com/disco/files/Disco-User-Guide.pdf

Dubiel,  B.,  &  Tsimhoni,  O.  (2005).  Integrating  agent  based  modeling  into  a  discrete  event  simulation.   In   Proceedings   of   the   2005  Winter   Simulation   Conference   (pp.   1029–1037).  

Gottschalk,   F.   (2009).   Configurable   process   models:   Experiences   from   a   municipality  case  study.  Advanced  Information  ….  

Gottschalk, F., van der Aalst, W. M. P., & Jansen-Vullers, M. (2007). Configurable Process Models – A Foundational Approach. In Reference Modeling (pp. 59–77). doi:10.1007/978-3-7908-1966-3_3

Günal,  M.  M.,  &  Pidd,  M.  (2010).  Discrete  event  simulation  for  performance  modelling  in  health   care:   a   review   of   the   literature.   Journal   of   Simulation,   4,   42–51.  doi:10.1057/jos.2009.25  

Günther, C. W., & van der Aalst, W. M. P. (2007). Fuzzy Mining – Adaptive Process Simplification Based on Multi-perspective Metrics. Business Process Management – Lecture Notes in Computer Science, 4714, 328–343. doi:10.1007/978-3-540-75183-0

Hair,  J.  F.,  Black,  W.  C.,  Babin,  B.  J.,  Anderson,  R.  E.,  &  Tatham,  R.  L.  (2009).  Multivariate  Data  Analysis.  Prentice  Hall  (p.  816).  

Harper, P. R., & Gamlin, H. M. (2003). Reduced outpatient waiting times with improved appointment scheduling: a simulation modelling approach. OR Spectrum, 25(2), 207–222. doi:10.1007/s00291-003-0122-x

Harrison, R., Counsell, S. J., & Nithi, R. V. (1998). An evaluation of the MOOD set of object-oriented software metrics. IEEE Transactions on Software Engineering, 24. doi:10.1109/32.689404

Hulshof, P. J. H., Vanberkel, P. T., Boucherie, R. J., Hans, E. W., Houdenhoven, M., & Ommeren, J.-K. C. W. (2012). Analytical models to determine room requirements in outpatient clinics. OR Spectrum, 34(2), 391–405. doi:10.1007/s00291-012-0287-2

Huschka,  T.  R.,  Denton,  B.  T.,  Narr,  B.   J.,  &  Thompson,  A.  C.  (2008).  Using  simulation  in  the   implementation   of   an   Outpatient   Procedure   Center.   2008  Winter   Simulation  Conference.  doi:10.1109/WSC.2008.4736236  

Karlas, R., & Van Den Haak, C. (2013). BDO-Benchmark Ziekenhuizen 2013. Retrieved August 12, 2014, from http://www.bdo.nl/nl/publicaties/documents/bdo-benchmark-ziekenhuizen-2013.pdf

Klazinga,  N.,   Stronks,  K.,  Delnoij,  D.,  &  Verhoeff,  A.   (2001).   Indicators  without  a   cause.  Reflections  on  the  development  and  use  of  indicators  in  health  care  from  a  public  health   perspective.   International   Journal   for  Quality   in  Health  Care :   Journal  of   the  International  Society  for  Quality  in  Health  Care  /  ISQua,  13,  433–438.  

La Rosa, M., Dumas, M., ter Hofstede, A. H. M., & Mendling, J. (2011). Configurable multi-perspective business process models. Information Systems, 36, 313–340. doi:10.1016/j.is.2010.07.001

La   Rosa,   M.,   Dumas,   M.,   Uba,   R.,   &   Dijkman,   R.   M.   (2012).   Business   Process   Model  Merging:   An   Approach   to   Business   Process   Consolidation.   ACM   Transactions   on  Software  Engineering  and  Methodology  (TOSEM).  

La Rosa, M., Lux, J., Seidel, S., Dumas, M., & ter Hofstede, A. H. M. (2006). Questionnaire-driven Configuration of Reference Process Models. In Advanced Information Systems Engineering: 19th International Conference, CAiSE 2007, Trondheim, Norway, June 11-15, 2007. Proceedings (pp. 424–438). doi:10.1007/978-3-540-72988-4_30

Lowery,  J.  C.  (1998).  Getting  started  in  simulation  in  healthcare.  1998  Winter  Simulation  Conference.  Proceedings  (Cat.  No.98CH36274),  1.  doi:10.1109/WSC.1998.744895  

Maidstone,   R.   (2012).   Discrete   Event   Simulation,   System   Dynamics   and   Agent   Based  Simulation:  Discussion  and  Comparison.  System,  1–6.  

Mann, P. S. (2010). Introductory Statistics. Wiley.

Martin, K., Balding, C., & Sohal, A. (2003). Stakeholder perspectives on outpatient

services   performance:   what   patients,   clinicians   and   managers   want   to   know.  Australian  Health  Review :  A  Publication  of   the  Australian  Hospital  Association,  26,  63–72.  doi:10.1071/AH030063  

McCarthy, K., McGee, H. M., & O'Boyle, C. A. (2000). Outpatient clinic waiting times and non-attendance as indicators of quality. Psychology, Health & Medicine, 5, 287–293. doi:10.1080/713690194

Moody,  D.  L.  (2003).  The  Method  Evaluation  Model :  A  Theoretical  Model  for  Validating  Information  Systems  Design  Methods.  Information  Systems  Journal,  1327–1336.  


Moody, D. L., & Shanks, G. G. (1994). What Makes a Good Data Model? Evaluating the Quality of Entity Relationship Models. In Proceedings of the 13th International Conference on the Entity-Relationship Approach (ER'94) (pp. 94–110). doi:10.1007/3-540-58786-1_75

Muñoz-Gama, J., & Carmona, J. (2010). A Fresh Look at Precision in Process Conformance. Business Process Management, 211–226.

Müller,   C.   (2012).   Generation   of   EPC   based   simulation   models.   In   Proceedings   26th  European  Conference  on  Modelling  and  Simulation.  

Paul,   R.   J.,   &   Kuljis,   J.   (1995).   A   generic   simulation   package   for   organising   outpatient  clinics.   Winter   Simulation   Conference   Proceedings,   1995.  doi:10.1109/WSC.1995.478897  

Process   Configuration   -­‐   Tools.   (n.d.).   Retrieved   August   15,   2014,   from  http://www.processconfiguration.com/tools.html  

RICOH. (n.d.). Centraal verblijven, decentraal wachten verhoogt efficiency in ziekenhuizen. Retrieved from http://www.ricoh.nl/Images/Case Gelre Ziekenhuis_t_76-26451.pdf

Rohleder, T. R., Lewkonia, P., Bischak, D. P., Duffy, P., & Hendijani, R. (2011). Using simulation modeling to improve patient flow at an outpatient orthopedic clinic. Health Care Management Science, 14, 135–145. doi:10.1007/s10729-010-9145-4

Rozinat,   A.,   &   Aalst,   W.   van   der.   (2006).   Conformance   testing:   Measuring   the   fit   and  appropriateness   of   event   logs   and   process   models.   Business   Process  Management    ….  

Sargent,   R.   G.   (2000).   Verification,   validation   and   accreditation   of   simulation  models.  2000   Winter   Simulation   Conference   Proceedings   (Cat.   No.00CH37165),   1.  doi:10.1109/WSC.2000.899697  

Schut,  F.  T.  (Erik).  (2009).  Is  de  marktwerking  in  de  zorg  doorgeschoten?  Socialisme  En  Democratie,  66(7-­‐8),  68–80.  

Siebers, P. O., Macal, C. M., Garnett, J., Buxton, D., & Pidd, M. (2010). Discrete-event simulation is dead, long live agent-based simulation! Journal of Simulation, 4, 204–210. doi:10.1057/jos.2010.14

Sobolev, B., Sanchez, V., & Kuramoto, L. (2012). Health Care Evaluation Using Computer Simulation: Concepts, Methods, and Applications. Springer.

Swisher, J. R., Jacobson, S. H., Jun, J. B., & Balci, O. (2001). Modeling and analyzing a physician clinic environment using discrete-event (visual) simulation. Computers & Operations Research, 28, 105–125. doi:10.1016/S0305-0548(99)00093-3


Van   der   Aalst,   W.   M.   P.   (2012).   Process   mining:   Overview   and   opportunities.   ACM  Transactions   on   Management   Information   Systems   (TMIS),   3,   1–17.  doi:10.1145/2229156.2229157  

Van der Aalst, W. M. P., Dreiling, A., Gottschalk, F., Rosemann, M., & Jansen-Vullers, M. H. (2006). Configurable process models as a basis for reference modeling. In Business Process Management Workshops (pp. 512–518). doi:10.1007/11678564_47

Van Kleef, R., Schut, E., & Van de Ven, W. (2014). Evaluatie Zorgstelsel en Risicoverevening. Acht jaar na invoering Zorgverzekeringswet: succes verzekerd? Erasmus Universiteit Rotterdam. Retrieved August 12, 2014, from http://www.bmg.eur.nl/fileadmin/ASSETS/bmg/Onderzoek/Onderzoeksrapporten___Working_Papers/2014/Eindrapportage_Evaluatie_Zorgstelsel_en_Risicoverevening_Amcham-Clingendael_project_maart2014.pdf

Van Sambeek, J. R. C., Cornelissen, F. A., Bakker, P. J. M., & Krabbendam, J. J. (2010). Models as instruments for optimizing hospital processes: a systematic review. International Journal of Health Care Quality Assurance, 23, 356–377. doi:10.1108/09526861011037434

Vitikainen, K., Linna, M., & Street, A. (2010). Substituting inpatient for outpatient care: what is the impact on hospital costs and efficiency? The European Journal of Health Economics, 11, 395–404. doi:10.1007/s10198-009-0211-0

Weerawat,  W.  (2013).  A  Generic  Discrete-­‐Event  Simulation  Model  for  Outpatient  Clinics  in  a  Large  Public  Hospital.  Journal  of  Healthcare  Engineering,  4(2),  285–305.  

Weng,  M.  L.,  &  Houshmand,  A.  A.  (1999).  Healthcare  simulation:  a  case  study  at  a  local  clinic.   WSC’99.   1999   Winter   Simulation   Conference   Proceedings.   “Simulation   -­‐   A  Bridge  to  the  Future”  (Cat.  No.99CH37038),  2.  doi:10.1109/WSC.1999.816896  

Zhu, Z., Heng, B. H., & Teow, K. L. (2012). Analysis of Factors Causing Long Patient Waiting Time and Clinic Overtime in Outpatient Clinics. Journal of Medical Systems, 36, 707–713. doi:10.1007/s10916-010-9538-4



Appendix A – Process Models

Model Descriptions

ISA1

The patient is generated by the model and takes a seat in the Central Waiting Area (CWA). The patient receives an advice on when to start walking to the correct outpatient clinic; this advice is followed with a certain stochastic probability. The patient walks, together with his or her entourage, from the CWA to the clinic. Once there, the patient takes a seat in the Waiting Area (WA) of the clinic. The patient is then called into the examination room and the consult starts; if the patient is not yet present in the WA, the system registers a mismatch. After the consult there are four possibilities: (1) the patient has another appointment at the same clinic within 30 minutes after the consult; the patient waits in the WA of the clinic, (2) the patient has an appointment at another outpatient clinic; the patient and entourage wait a certain amount of time in the WA of the current clinic (the stay-behind time, 'nablijftijd') after which they start walking to the other clinic, (3) the patient has another appointment more than 30 minutes after the current consult; the patient stays a certain amount of time at the clinic and then walks to the CWA, and (4) the patient stays shortly at the clinic, walks to the CWA and leaves the hospital.
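The four-way routing after a consult in ISA1 can be sketched as below. The 30-minute threshold and the four outcomes follow the description above; the data structure and function names are hypothetical simplifications of the actual object-oriented model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Appointment:
    clinic: str
    minutes_until_start: float  # gap between the current consult and the next appointment

def route_after_consult(current_clinic: str,
                        next_appt: Optional[Appointment]) -> str:
    """Return the patient's next location according to the ISA1 rules."""
    if next_appt is None:
        # (4) no further appointments: short stay, then via the CWA out of the hospital
        return "CWA -> exit"
    if next_appt.clinic == current_clinic and next_appt.minutes_until_start <= 30:
        # (1) same clinic within 30 minutes: wait in the clinic's own WA
        return "WA of current clinic"
    if next_appt.clinic != current_clinic:
        # (2) appointment at another clinic: wait the stay-behind time, then walk there
        return f"WA of {next_appt.clinic}"
    # (3) same clinic but more than 30 minutes away: wait in the CWA
    return "CWA"

print(route_after_consult("dermatology", None))             # → "CWA -> exit"
print(route_after_consult("dermatology",
                          Appointment("dermatology", 20)))  # → "WA of current clinic"
```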

ISA2
The patient enters the clinic and takes a seat in the WA. The patient is then called into the consultation room. With a stochastic probability, the patient needs an additional examination in an external examination room. Once the consult is finished, the patient either leaves the clinic directly or first queues at the desk to make an appointment with one of the desk staff, after which the patient leaves the clinic.

RDG
Patients arrive 10 minutes before their planned appointment. When they arrive at the clinic, the patients and their entourage take a seat in the WA. If there is not enough space in the WA, the patients and entourage move to the closest WA of another clinic. The patient is called into the consultation room. Sometimes the doctor needs an external examination room to examine the patient; if so, the patient moves to the external room and receives minor treatment or diagnostics. After the consult and/or examination the patient either leaves or queues at the desk to make another appointment.

WW
The model description can be found in the original research paper: see References.


Model  ISA1  


 

Model  ISA2  


 

Model  RDG    


Weerawat    


Appendix B – Parameters

| Parameter (Dutch) | Parameter (English) | Standard Distribution | Standard Parameter | Unit |
| Centraal wachtsysteem | Centralized Waiting System | Boolean: True/False | - | - |
| Arts haalt patiënt op | Doctor picks up patient | Boolean: True/False | - | - |
| Kans Extern Onderzoek | Probability External Examination | Probability (e.g. 50%) | - | - |
| Aankomst voor 1st afspraak | Arrival before consult | Normal | Mean=30, Sig=5 | Min |
| Loopsnelheid patiënt | Walking speed patient | Uniform | Lower=0.8, Upper=1.7 | m/s |
| Loopsnelheid arts | Walking speed doctor | Uniform | Lower=0.8, Upper=1.7 | m/s |
| Afwijking CW Advies | Deviation from Central Waiting advice | Normal | Mean=0, Sig=5 | Min |
| Aantal begeleiders | Number of entourage | Triangle (rounded) | Mode=1, Lower=0*Mode, Upper=3*Mode | - |
| Gemiddelde nablijftijd Wachtkamer | Stay-time at outpatient waiting area | Triangle | Mode=0, Lower=0.5*Mode, Upper=1.5*Mode | Min |
| Gemiddelde nablijftijd centrale hal | Stay-time central waiting area | Triangle | Mode=0, Lower=0.5*Mode, Upper=1.5*Mode | Min |
| Uitloop percentage | Ramp rate | Uniform | Lower=90, Upper=110 | % |
| Inleestijd | Reading time before appointment | Deterministic | 30 | Sec |
| Max voorlopen spreekuur | Max running ahead of schedule | Deterministic | 10 | Min |

Note: In another method the external-examination probability (e.g. 50%) is used in a normal z distribution.
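The standard distributions in the table above can be sampled with Python's standard library; the sketch below is illustrative, and the `sample` helper and its parameter names are assumptions rather than the names used in the actual tool.

```python
import random

# Illustrative sampler for the distribution families in the parameter table.
# Names and calling convention are assumptions, not the tool's own API.

def sample(dist, **p):
    if dist == "deterministic":
        return p["value"]
    if dist == "normal":
        return random.gauss(p["mean"], p["sigma"])
    if dist == "uniform":
        return random.uniform(p["lower"], p["upper"])
    if dist == "triangle":
        # random.triangular takes (low, high, mode)
        return random.triangular(p["lower"], p["upper"], p["mode"])
    raise ValueError(f"unknown distribution: {dist}")

# Examples matching rows of the table:
walking_speed = sample("uniform", lower=0.8, upper=1.7)           # m/s
arrival_early = sample("normal", mean=30, sigma=5)                # min
entourage = round(sample("triangle", mode=1, lower=0, upper=3))   # persons
reading_time = sample("deterministic", value=30)                  # sec
```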


Appendix C – The Tool

Model  Components  

Patient  Generator  

The model generates patients that run through the model based on both the appointment data provided by the hospital and the parameter settings concerning walking speeds, number of entourage, time before arrival and other behavior. The Patient Generator makes sure each patient is 'generated' at the right time and inserted into the CWA of the model.
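The generator idea above can be sketched as an event list: each appointment record yields a patient-creation event some time before the appointment. This is a minimal, hypothetical Python sketch; the record fields and the fixed early-arrival offset are assumptions based on the parameter table in Appendix B.

```python
import heapq

# Minimal sketch of the patient-generator idea: create each patient at the
# right moment, derived from the appointment data. Field names are invented.

def generate_arrivals(appointments, early_min=30):
    """Yield (creation_time, appointment) in chronological order.

    appointments -- iterable of dicts with 'patient_id' and 'appt_time'
                    (minutes since simulation start)
    early_min    -- how long before the first appointment a patient arrives
    """
    events = []
    for a in appointments:
        creation = max(0, a["appt_time"] - early_min)
        heapq.heappush(events, (creation, a["patient_id"], a))
    while events:
        t, _, a = heapq.heappop(events)
        yield t, a

appts = [{"patient_id": 2, "appt_time": 45}, {"patient_id": 1, "appt_time": 20}]
for t, a in generate_arrivals(appts):
    print(t, a["patient_id"])   # prints: 0 1, then 15 2
```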

Central Waiting Area
All patients enter the hospital via the Central Waiting Area; depending on the settings, the patient may actually wait at the CWA, or walk directly to the WA that is assigned to his/her appointment.

Waiting  Areas  

In the Waiting Area, patients and their entourage wait for their appointment. An outpatient clinic typically has one or more waiting areas. Depending on the settings (PtD, DtP), a patient is either called to the consultation room or picked up from the waiting area by the doctor.

Walking Paths
Patients have to cover distances between the different places in the hospital. This means that the patient covers a certain distance over a walking path. The walking paths are two-lane tracks.
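Traversal of a walking path follows directly from its length and the sampled walking speed (see Appendix B). A trivial sketch, with illustrative numbers:

```python
# Time to cover a walking path; the 105 m distance below is illustrative,
# and 1.25 m/s is the mean of the Uniform(0.8, 1.7) patient speed.

def walk_time(distance_m, speed_ms):
    """Time (in seconds) to cover a path of distance_m metres at speed_ms m/s."""
    return distance_m / speed_ms

print(walk_time(105, 1.25))  # 84.0 seconds
```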

Consultation  Room  

This is the basic and most important room when a patient visits the hospital for an appointment at the outpatient clinic. The consultation room is not bound to a specific specialty. Depending on the settings, the patient may have an external examination during his or her time in the consultation room. After the consult the patient either (1) has another appointment and goes back to the (C)WA, (2) makes an appointment at the desk, or (3) leaves.

Examination  Room  

A room specially equipped for whatever extra procedure is needed; this may be additional diagnostics or a small treatment. As the exact procedure is not important, the examination room is multi-purpose.

Desk
Depending on the settings, a patient may be required to make an appointment at the desk after his/her consult.

 


Attributes MU Patient

| Attribute (Dutch) | Attribute (English) | Type | Explanation |
| AankomstMinutenVoor1steAfspraak | Arrival Before Appointment | Time | |
| AfwijkingCwAdvies | Deviation from CW Advice | Time | |
| BalieKans | Make New Appointment | Boolean | |
| Begeleider1 | Name of Entourage Entity (MU) | Object | |
| BegeleiderAantal | Number of Entourage | Integer | |
| Dag | Day | Integer | |
| LichamelijkOnderzoek | External Exam | Boolean | |
| LichamelijkOnderzoekVoltooid | External Exam Done | Boolean | |
| Loopsnelheid | Walking Speed | Speed | |
| NablijftijdCentraleHal | Stay-Time after Consult (CWA) | Time | |
| NablijftijdWachtkamer | Stay-Time after Consult (WA) | Time | |
| NeemBegeleidersMee | Take Entourage | Method | Method that makes sure that all the patient's entourage follow to the next destination. |
| OnderzoeksKamer | Examination Room | Object | The designated examination room, if an examination is necessary. |
| Patient_ID | Patient ID | Integer | |
| Specialisme | Specialism | String | The specialism of the appointment. |
| TijdsduurAfspraak | Duration Appointment | Time | |
| Zorgpaden | Path | Table | |
| Zorgstap | Path-step | Integer | |

 

Attributes MU Doctor

| Attribute (Dutch) | Attribute (English) | Type | Explanation |
| Spreekkamer | Consultation Room | Object | Linked consultation room of the doctor |
| Ophalen Patient Aankomst | Pick-Up Patient Arrival | Time | |
| Ophalen Patient Vetrek | Pick-Up Patient Departure | Time | |
| Op te halen Patient | Patient to Pick Up | Object | Patient of the current consult |
| Loopsnelheid | Walking Speed | Speed | |


Model Constructor (Example Waiting Area)
Note: In the ModelComponenten file, all the objects in the model are summarized. These may be consultation rooms, waiting areas, desks or examination rooms. In the script below one can see how a waiting room is placed in the main frame according to the pre-specified attributes. The logic for other objects is similar, but uses other attributes.
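The constructor script itself is reproduced as a figure in the thesis. As a language-neutral illustration of the idea (read a component table, place each object in the main frame with its pre-specified attributes), a Python sketch with invented names and attributes could look like this:

```python
# Hypothetical stand-in for the constructor script: iterate over the rows of
# a component table and 'place' each object in the main frame. All names,
# positions and capacities below are illustrative.

components = [
    {"name": "WA1",   "class": "WaitingArea",      "x": 120, "y": 40, "capacity": 25},
    {"name": "CONS1", "class": "ConsultationRoom", "x": 200, "y": 40, "capacity": 1},
]

def build_model(table):
    frame = {}
    for row in table:
        # Instantiate the object with its pre-specified attributes.
        frame[row["name"]] = {
            "class": row["class"],
            "position": (row["x"], row["y"]),
            "capacity": row["capacity"],
        }
    return frame

model = build_model(components)
print(sorted(model))  # ['CONS1', 'WA1']
```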

 


Automatic Connections
Note: After all objects are placed in the main frame, all the walking paths are collected in a temporary table (TempTable) in matrix form. The walking paths have been given recognizable names such as CONS1Path (Consultation Room 1 Walking Path). In the matrix one can indicate in binary form whether two paths should be connected. Using the script below, the right walking paths are connected to each other.
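The connection step above (the actual script is shown as a figure) can be sketched as iterating over a binary matrix and connecting path i to path j wherever the matrix holds a 1. Path names and the stand-in `connect_paths` function are illustrative.

```python
# Sketch of the automatic-connection step: a binary matrix over the walking
# paths says which ordered pairs must be connected. Names are illustrative.

paths = ["CWAPath", "WA1Path", "CONS1Path"]

# matrix[i][j] == 1 means path i connects to path j.
matrix = [
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
]

def connect_paths(paths, matrix):
    connections = []
    for i, row in enumerate(matrix):
        for j, flag in enumerate(row):
            if flag:  # connect these two walking paths
                connections.append((paths[i], paths[j]))
    return connections

print(connect_paths(paths, matrix))
# [('CWAPath', 'WA1Path'), ('WA1Path', 'CONS1Path')]
```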

 

Appointment  Data  

| Variable (Dutch) | Variable (English) | Explanation |
| Patient_ID | Patient_ID | A unique patient ID (integer) |
| Arts | Doctor | The name of the doctor for the appointment |
| Spreekkamer | Consult Room | The name of the consultation room |
| Specialisme | Specialism | The type of specialty (e.g. Cardiology) |
| Wachtkamer | Waiting Area | The name of the waiting area that should be used for the appointment |
| Balie | Desk | The name of the desk where the patient must make a follow-up or new appointment after the consult |
| Onderzoekskamer | Examination Room | The name of the examination room, if an examination room may be necessary |
| DatumTijdAfspraak | DateTime Appointment | The date and time of the appointment |
| Afspraakduur | Duration Appointment | The duration of the appointment |
| Dagdeel | DayPart | The part of the day of the appointment: morning, afternoon, evening, etc. |


Settings  for  Simulation    

     


Appendix D - Testing

Experiment  1  

Experimental  Settings  

Behavioral Settings
| Arrival before 1st appointment | 15 min |
| Walking Speed (patient) | 1.5 m/s |
| Walking Speed (doctor) | 1.5 m/s |
| Deviation from Advice | 0.0000 |
| Staytime after Appointment (WA) | 0.0000 |
| Staytime after Appointment (CWA) | 0.0000 |
| Number of Entourage | 1 |
| Ramp Time | 100% |
| Reading Prep Time | 20 sec |
| Reservation Call Patient | 15 sec |

Organizational Settings
| PtD | FALSE |
| Probability External Exam | 0% |
| Central Waiting System | TRUE |
| Probability Desk | 100% |

Simulation Settings
| Runs | 5 |
| Maximum ahead time schedule | 3 min |
| Deterministic | TRUE |


Flowchart  Experiment  1  

 


Experiment  2  

Experimental  Settings  

Behavioral Settings
| Arrival before 1st appointment | 15 min |
| Walking Speed (patient) | 1.5 m/s |
| Walking Speed (doctor) | 1.5 m/s |
| Deviation from Advice | 0.0000 |
| Staytime after Appointment (WA) | 0.0000 |
| Staytime after Appointment (CWA) | 0.0000 |
| Number of Entourage | 1 |
| Ramp Time | 100% |
| Reading Prep Time | 20 sec |
| Reservation Call Patient | 15 sec |

Organizational Settings
| PtD | FALSE |
| Probability External Exam | 50% |
| Central Waiting System | FALSE |
| Probability Desk | 50% |

Simulation Settings
| Runs | 5 |
| Maximum ahead time schedule | 3 min |
| Deterministic | FALSE |

Distribution  Throughput  Times  

 


Flowchart  Experiment  2  

   


Experiment  3  

Experimental  Settings  

Behavioral Settings
| Arrival before 1st appointment | 15 min |
| Walking Speed (patient) | 1.5 m/s |
| Walking Speed (doctor) | 1.5 m/s |
| Deviation from Advice | 0.0000 |
| Staytime after Appointment (WA) | 0.0000 |
| Staytime after Appointment (CWA) | 4:00 |
| Number of Entourage | 1 |
| Ramp Time | 100% |
| Reading Prep Time | 20 sec |
| Reservation Call Patient | 15 sec |

Organizational Settings
| PtD | TRUE |
| Probability External Exam | 100% |
| Central Waiting System | TRUE |
| Probability Desk | 75% |

Simulation Settings
| Runs | 5 |
| Maximum ahead time schedule | 3 min |
| Deterministic | FALSE |


Flowchart  Experiment  3  


 

Experiment – 5 Days

Behavioral Settings
| Arrival before 1st appointment | 15 min |
| Walking Speed (patient) | 1.5 m/s |
| Walking Speed (doctor) | 1.5 m/s |
| Deviation from Advice | 0.0000 |
| Staytime after Appointment (WA) | 0.0000 |
| Staytime after Appointment (CWA) | 0.0000 |
| Number of Entourage | 1 |
| Ramp Time | 100% |
| Reading Prep Time | 20 sec |
| Reservation Call Patient | 15 sec |

Organizational Settings
| PtD | FALSE |
| Probability External Exam | 50% |
| Central Waiting System | TRUE |
| Probability Desk | 50% |

Simulation Settings
| Runs | 5 |
| Maximum ahead time schedule | 3 min |
| Deterministic | FALSE |

The settings and flowchart for this experiment can be found above. This configuration assumes that patients leave directly after their appointment, and that there is no Central Waiting System. Using the data from all 5 runs in Disco, a median throughput time of 37.7 minutes and an average throughput time of 39.2 minutes were found. The figure below shows the distribution of the throughput times of all patients. An average waiting time at the WA of 20 minutes with a median of 18.2 minutes was found.

   

 Figure:  Distribution  Throughput  Times  

The crowdedness of the waiting area (WA1) was plotted for the five different runs and can be found in Figure 5.3. The average crowdedness was, respectively, {17.06, 16.40, 16.33, 16.63, 16.36}, with a population standard deviation of 7.35. Because the number of people in the waiting area is only logged when a change occurs, the observations do not fall on the same time points across runs, which makes it impossible to construct a confidence interval.

 Figure:  Crowdedness  Waiting  Area  1  

The queue at the desk was analyzed in the same way as the waiting area. The graph of the queue length shows high similarity between the runs (see below). The mean queue lengths of all five runs fall within [1.87; 1.99]. In all five runs combined, only 3 mismatches occurred.

 Figure:  Queue  Desk  1  

 


ISA  Rebuilt      

Behavioral Settings
| Arrival before 1st appointment | 10 min |
| Walking Speed (patient) | 1.4 m/s |
| Walking Speed (doctor) | 1.5 m/s |
| Deviation from Advice | 0.0000 |
| Staytime after Appointment (WA) | 0.0000 |
| Staytime after Appointment (CWA) | 0.0000 |
| Number of Entourage | 1 |
| Ramp Time | 100% |
| Reading Prep Time | 20 sec |
| Reservation Call Patient | 15 sec |

Organizational Settings
| PtD | FALSE |
| Probability External Exam | 50% |
| Central Waiting System | FALSE |
| Probability Desk | 75% |

Simulation Settings
| Runs | 5 |
| Maximum ahead time schedule | 3 min |
| Deterministic | FALSE |


 

Mined Process ISA2 Rebuilt

[Process map mined with Disco; figure. Activity frequencies: StroomInZiekenhuis 104, VertrekNaarWachtkamer 104, AankomenWachtkamer 104, VertrekDoorOproepArts 104, BufferBeforeConsult 139, PatientInConsult 139, EindeConsult 103, PatientInQueueForAppointment 103, PatientMakesAppointment 103, AankomenCentraleHal_OpTerugWeg 102, PatientInExternalExamaniation 35, EindeOnderzoekRouting 35.]


Distribution  For  Patients  in  Waiting  Area  

 


Appendix E - Questionnaire

Survey  Questions    

URL full survey PDF (10 pages): https://www.dropbox.com/s/rwfzt6ac6kgxyex/Questionnaire%20Outpatient%20Department%20Simulation.pdf

Demographics (both open and multiple-choice answers)
What is your age?
What is your gender?
What is your nationality?
What is your highest education?
Which option below describes your profession best?
How long (yrs) have you been working in your current profession?
When I use simulation for assignments, I most often use:
In terms of discrete event simulation I often use:

Evaluation of the two approaches (same questions both times), 7-point Likert scale
I am confident this approach would allow me to develop a model for any outpatient clinic.
I think the procedure will be time-intensive.
I think that this is the best way to reuse simulation models.
Overall, I think that this approach allows for great flexibility.
I would definitely not use this approach.
Using this approach increases the understandability for you as a designer.
Using this approach would make it more difficult to maintain large simulation models.
This method makes it easy to verify whether the simulation model is valid.
Overall, I think this approach does not provide an effective solution to the task.
I think that this approach could be used without prior knowledge about the model(s).

General questions, 7-point Likert scale
Overall, I would prefer approach 1.
I think that approach 2 is faster than approach 1.
Overall, I think that approach 2 is simpler than approach 1.
In the long term, approach 2 is more sustainable than approach 1.
Approach 1 leads to better reuse of the models.


Constructs

| Question | Construct |
| I am confident this approach would allow me to develop a model for any outpatient clinic. | Ease of Use |
| I think the procedure will be time-intensive. | Perceived Usefulness |
| I think that this is the best way to reuse simulation models. | Intention to Use |
| Overall, I think that this approach allows for great flexibility. | -- |
| I would definitely not use this approach. | Intention to Use |
| Using this approach increases the understandability for you as a designer. | Perceived Usefulness |
| Using this approach would make it more difficult to maintain large simulation models. | Perceived Usefulness |
| This method makes it easy to verify whether the simulation model is valid. | Perceived Usefulness |
| Overall, I think this approach does not provide an effective solution to the task. | Perceived Usefulness |
| I think that this approach could be used without prior knowledge about the model(s). | Ease of Use |

Original Moody Survey

Perceived Ease of Use
Q1. I found the procedure for applying the method complex and difficult to follow (PEOU1)
Q4. Overall, I found the method difficult to use (PEOU2)
Q6. I found the method easy to learn (PEOU3)
Q9. I found it difficult to apply the method to the example data model (PEOU4)
Q11. I found the rules of the method clear and easy to understand (PEOU5)
Q14. I am not confident that I am now competent to apply this method in practice (PEOU6)

Perceived Usefulness
Q2. I believe that this method would reduce the effort required to document large data models (PU1)
Q3. Large data models represented using this method would be more difficult for users to understand (PU4)
Q5. This method would make it easier for users to verify whether data models are correct (PU4)
Q7. Overall, I found the method to be useful (PU4)
Q8. Using this method would make it more difficult to maintain large data models (PU5)
Q12. Overall, I think this method does not provide an effective solution to the problem of representing large data models (PU6)
Q15. Overall, I think this method is an improvement to the standard Entity Relationship Model (PU4)
Q13. Using this method would make it easier to communicate large data models to end users (PU5)

Intention to Use
Q10. I would definitely not use this method to document large Entity Relationship models (ITU1)
Q16. I intend to use this method in preference to the standard (ITU2)


Appendix F – Statistical Results

Paired-Samples T-Test

Paired Samples Test
| Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed) |
| 1: Best_Reuse1 - Best_Reuse2 | -1.42857 | 1.71825 | 0.64944 | -3.01769 | 0.16054 | -2.200 | 6 | 0.070 |
| 2: Effective1_Inv - Effective2_Inv | -1.00000 | 1.73205 | 0.65465 | -2.60188 | 0.60188 | -1.528 | 6 | 0.177 |
| 3: Maintanance1_Inv - Maintanance2_Inv | -1.71429 | 1.38013 | 0.52164 | -2.99069 | -0.43788 | -3.286 | 6 | 0.017 |
| 4: NotUse1_Inv - NotUse2_Inv | -0.85714 | 1.46385 | 0.55328 | -2.21098 | 0.49669 | -1.549 | 6 | 0.172 |
| 5: Conf_1 - Conf2 | 1.00000 | 2.23607 | 0.84515 | -1.06802 | 3.06802 | 1.183 | 6 | 0.281 |
| 6: Flexibility1 - Flexibility2 | -0.71429 | 1.38013 | 0.52164 | -1.99069 | 0.56212 | -1.369 | 6 | 0.220 |
| 7: Time_Int1 - Time_Int2 | 2.57143 | 1.90238 | 0.71903 | 0.81202 | 4.33084 | 3.576 | 6 | 0.012 |
| 8: Valid1 - Valid2 | -0.42857 | 0.97590 | 0.36886 | -1.33113 | 0.47399 | -1.162 | 6 | 0.289 |
| 9: Underst1 - Underst2 | -1.00000 | 2.64575 | 1.00000 | -3.44691 | 1.44691 | -1.000 | 6 | 0.356 |
| 10: PriorKnowledge1 - Priorknowledge2 | -2.57143 | 2.22539 | 0.84112 | -4.62958 | -0.51328 | -3.057 | 6 | 0.022 |
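The columns of a paired-samples t-test are internally consistent: the standard error equals SD/sqrt(n) and t equals the mean difference divided by the standard error, with df = n - 1 for the n = 7 respondents. As a sanity check (not part of the original analysis), Pair 1 can be reproduced with a few lines of standard-library Python:

```python
import math

# Reproduce the SE and t columns of Pair 1 (Best_Reuse1 - Best_Reuse2)
# from its mean difference and standard deviation, with n = 7 respondents.
n = 7
mean_diff, sd = -1.42857, 1.71825

se = sd / math.sqrt(n)   # standard error of the mean difference
t = mean_diff / se       # t statistic with df = n - 1 = 6

print(round(se, 5), round(t, 3))  # 0.64944 -2.2
```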


Correlation  Matrix  

Cronbach's α if Item Deleted (Perceived Usefulness)

Item-Total Statistics (method 1)
| Item | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Cronbach's Alpha if Item Deleted |
| Underst1 | 14.8571 | 20.143 | 0.529 | 0.470 |
| Time_Inv1 | 16.0000 | 24.000 | 0.630 | 0.521 |
| Valid1 | 14.7143 | 16.905 | 0.399 | 0.530 |
| Effective1_Inv | 13.7143 | 21.238 | 0.246 | 0.610 |
| Maintanance1_Inv | 15.5714 | 20.619 | 0.271 | 0.597 |

Item-Total Statistics (method 2)
| Item | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Cronbach's Alpha if Item Deleted |
| Underst2 | 20.5714 | 20.286 | 0.924 | 0.764 |
| Valid2 | 21.0000 | 18.000 | 0.968 | 0.751 |
| Maintanance2_Inv | 20.5714 | 22.286 | 0.978 | 0.755 |
| Effective2_Inv | 19.4286 | 37.286 | 0.236 | 0.911 |
| Time_Inv2 | 20.1429 | 32.143 | 0.392 | 0.895 |


Paired Samples Test Perceived Use

| Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed) |
| Perceived_Use1 - Perceived_Use2 | -1.42857 | 1.38980 | 0.52530 | -2.71392 | -0.14322 | -2.720 | 6 | 0.035 |

Overarching Questions

Statistics
| Statistic | PreferApproach1 | Approach2Faster | Approach2Simpler | Approach2Sustainable | Approach1Reuse |
| N (Valid) | 7 | 7 | 7 | 7 | 7 |
| N (Missing) | 0 | 0 | 0 | 0 | 0 |
| Mean | 2.8571 | 6.2857 | 6.1429 | 6.0000 | 3.7143 |
| Std. Deviation | 1.67616 | 0.48795 | 0.69007 | 1.00000 | 1.79947 |
| Variance | 2.810 | 0.238 | 0.476 | 1.000 | 3.238 |
| Percentile 25 | 2.0000 | 6.0000 | 6.0000 | 6.0000 | 3.0000 |
| Percentile 50 | 2.0000 | 6.0000 | 6.0000 | 6.0000 | 3.0000 |
| Percentile 75 | 4.0000 | 7.0000 | 7.0000 | 7.0000 | 6.0000 |