Procedia Computer Science 63 (2015) 459 - 466

The 5th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH 2015)

Developing Strategic Health Care Key Performance Indicators: A Case Study on a Tertiary Care Hospital

Mohamed Khalifa, MD * and Parwaiz Khalid, PhD

King Faisal Specialist Hospital & Research Center, Jeddah 21499, Saudi Arabia

Abstract

OBJECTIVE: The main objective is to describe in detail the complete process of developing a group of strategic key performance indicators (KPIs) to monitor and improve the performance of a tertiary care hospital, including its different services. The project aimed at centralizing and standardizing KPIs to provide higher management with information and to support evidence-based strategic decision making. METHODS: The researchers used qualitative survey methods, conducting semi-structured interviews with higher management officers as well as hospital department heads and performance professionals. Suggested KPIs were validated against published research, then categorized and sorted into ten groups of indicators, and finally approved by higher management. RESULTS: Fifty-eight KPIs were identified and sorted into ten categories: Patient Access Indicators, Inpatient Utilization, Outpatient Utilization, OR Utilization, ER Utilization, Generic Utilization, Patient Safety, Infection Control, Documentation Compliance, and Patient Satisfaction Indicators. DISCUSSION: Each of these KPIs, and each of the ten categories, has a specific value; some reflect the effectiveness or efficiency of healthcare provision, such as readmission rate and average length of stay, some reflect timeliness, such as waiting time for admission, for an outpatient appointment or in the emergency room, and some reflect safety and patient centeredness, such as infection rates and mortality rates. CONCLUSION AND RECOMMENDATIONS: Before these KPIs can be considered reliable and comprehensive, they have to be validated against other sources of data, alert triggers should be set up, and future expansions should be planned to include more related indicators or to give users the ability to drill down to a lower level of detail.

© 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license

(http://creativecommons.org/licenses/by-nc-nd/4.0/).

Peer-review under responsibility of the Program Chairs

Keywords: Strategic Key Performance Indicators, Evidence Based Healthcare, Scorecards, Hospitals.

* Consultant, Medical & Clinical Informatics. Tel.: +966564520916; fax: +966126677777. E-mail address: khalifa@kfshrc.edu.sa


doi:10.1016/j.procs.2015.08.368

1. Background and Significance

Data and facts are not simply, easily or passively picked up and collected. They can only be understood and measured through an underlying conceptual framework, which defines the relevant facts and distinguishes them from background noise. Many healthcare organizations have been developing key performance indicators (KPIs) for monitoring, measuring, and managing the performance of their healthcare systems to ensure effectiveness, efficiency, equity and quality. Healthcare systems are expected to achieve and manage results in line with their established objectives and quality standards1. Healthcare managers are aware of the effect of using measures in monitoring and improving performance, yet they rarely use measurement as an essential part of their strategies and tactics. Some healthcare managers have the experience and skill to introduce new strategies and innovate new operating processes to achieve breakthrough performance, but they continue to use the same old or short-term indicators they have used for years. It is therefore essential to develop strategic key performance indicators that reflect the actual performance of healthcare organizations2.

Key performance indicators (KPIs) are used by hospitals to monitor and evaluate performance against benchmark values or standards. KPIs show trends and explain how improvements are being made over time. KPIs also help to compare results with approved standards or against other similar, comparable organizations; this helps hospitals and healthcare organizations to improve the services they provide by identifying whether performance is at the desired level and where improvements are required3. Examples of KPIs used in hospitals are the waiting time of patients in the emergency room, the number of patients in the ER waiting area and the number of ER patients waiting to be admitted to an inpatient department4.

According to the three levels of performance management, key performance indicators and their dashboards and scorecards can be classified into operational, tactical and strategic indicators. Each category has its own objectives, methods of measurement and expected outcomes5. According to the Donabedian conceptual model, which provides a framework for evaluating healthcare services and quality of care, key performance indicators can also be classified by relating them to the three components of the healthcare system: structures, processes and outcomes. Structure describes the context in which healthcare is delivered, including hospital buildings, staff, financing, and equipment, while processes include all transactions between patients and providers throughout the delivery of healthcare, and outcomes refer to the effects of healthcare on the health status of patients and populations6. Finally, according to studies in the science of healthcare performance measurement and improvement, and to the Institute of Medicine definition of goals for high-quality healthcare delivery systems, performance indicators can also be classified according to the different dimensions of measurement into six main elements: safety, effectiveness, efficiency, timeliness, patient centeredness and equity7, 8.
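To make these three classification axes concrete, the following minimal Python sketch represents a single KPI tagged by management level, Donabedian component and quality dimension. The class and field names are the authors' classification terms expressed as an assumed, illustrative data structure, not part of the hospital's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):            # levels of performance management
    OPERATIONAL = "operational"
    TACTICAL = "tactical"
    STRATEGIC = "strategic"

class Component(Enum):        # Donabedian model components
    STRUCTURE = "structure"
    PROCESS = "process"
    OUTCOME = "outcome"

class Dimension(Enum):        # Institute of Medicine quality dimensions
    SAFETY = "safety"
    EFFECTIVENESS = "effectiveness"
    EFFICIENCY = "efficiency"
    TIMELINESS = "timeliness"
    PATIENT_CENTEREDNESS = "patient centeredness"
    EQUITY = "equity"

@dataclass
class KPI:
    name: str
    level: Level
    component: Component
    dimension: Dimension

# Hypothetical example: ER door-to-doctor waiting time as a strategic, process, timeliness indicator
er_waiting_time = KPI("ER Waiting Time (Door to Doctor)",
                      Level.STRATEGIC, Component.PROCESS, Dimension.TIMELINESS)
```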

Safety indicators should measure the degree to which any healthcare intervention or procedure is safe or harmful to the patient and/or staff, including sentinel events and infection control9. Effectiveness indicators should measure the capability of the healthcare service to produce the desired results and achieve the intended goals and objectives, while efficiency indicators should measure the extent to which healthcare resources such as time, effort or money are well utilized for the intended tasks or purposes10.

Timeliness indicators should measure the degree to which healthcare is provided to the individual at the most beneficial or necessary time, or in accordance with the patient's perception of promptness. Patient centeredness indicators should measure satisfaction with the healthcare services and the degree to which systems succeed or fail in meeting patient needs, including patient respect, providing accurate information, relief from unnecessary pain and discomfort, and emotional support11. Equity indicators should track the reduction of disparities among patient subgroups and ensure that the healthcare system treats all individuals fairly and delivers high-quality care regardless of personal characteristics, such as age, gender, race, ethnicity, education, disability, sexual orientation, income, or location of residence12. A model classifying performance indicators into levels, dimensions and system components is suggested by the researchers and illustrated in Figure 1.

Fig. 1. Healthcare Key Performance Indicators: What Can They Measure? (Suggested by the Researchers)

2. Study Objectives

At King Faisal Specialist Hospital and Research Center, Saudi Arabia, the Medical and Clinical Affairs (MCA) decided to develop and utilize a group of strategic KPIs to monitor, measure and improve the performance of the hospital, including its different departments and services. The main objective of this project was to centralize and standardize KPIs and to make regular measurement data and information available to hospital higher management as well as departmental and service middle managers, so that strategic, tactical and operational decisions could be made based on available evidence in order to highlight performance gaps and improve deficiencies. The challenge was to survey all the available KPIs from the different departments and services and then standardize them at a hospital-wide level by developing definitions and documenting methods of calculation and reporting.

3. Methods

The researchers used qualitative survey methods, conducting semi-structured interviews with higher management officers, such as the chief executive officer, the director of medical and clinical affairs and the director of quality management, as well as hospital department heads and performance designees, who are specialist healthcare professionals assigned by each department and service to communicate the department's performance vision and objectives to higher management. The sample included all professionals who belong to this group of executive and leadership functions. About forty interview meetings, each lasting sixty to ninety minutes, were conducted over the last six months of 2014; three for each of the twelve main departments and services in the hospital, and some with higher management executives. The first meeting with each department was dedicated to orienting the staff about the importance of KPIs and performance monitoring and improvement. The second meeting was conducted with the department head to discuss and approve the suggested KPIs and departmental priorities.

The third meeting was conducted with the performance designee to verify and confirm the final list of suggested indicators. Interviews with participants focused on three main domains: verifying old KPIs already used by departments, suggesting new KPIs by staff and departments, and providing a justification and definition for each KPI. The study used a qualitative descriptive approach and thematic analysis of participants' feedback. All collected KPIs were then standardized, in terms of indicator name, definition and calculation formula, and validated against published research and internationally recognized benchmarks, such as the healthcare quality indicators of the Agency for Healthcare Research and Quality (AHRQ) and the Organisation for Economic Co-operation and Development (OECD) Health Care Quality Indicators Project, then categorized and sorted into ten groups of indicators, and finally approved by the higher management of the hospital as the core set of strategic KPIs.

4. Results

Fifty-eight KPIs were identified, standardized and validated against published research and through comparisons with internationally recognized hospitals. These indicators were sorted into ten categories, as shown in Table 1. Each category reflects specific performance objectives and helps in improving certain aspects of the performance of hospital departments and services. The detailed KPIs are shown in Table 2.

Table 1. The Ten Categories of Suggested Key Performance Indicators.

A. Patient Access Indicators: reflect healthcare services accessibility

B. Inpatient Utilization Indicators: reflect inpatient performance

C. Outpatient Utilization Indicators: reflect outpatient performance

D. OR Utilization Indicators: reflect operating room utilization and performance

E. ER Utilization Indicators: reflect emergency department performance

F. Generic Utilization Indicators: reflect the performance of some major services

G. Patient Safety Indicators: reflect the safety of diagnosis, treatment and procedures

H. Infection Control Indicators: reflect quality of care

I. Documentation Compliance Indicators: reflect compliance with documentation policies

J. Patient Satisfaction Indicators: reflect patient centeredness

Table 2. Detailed Selected KPIs Sorted into the Ten Categories (all indicators are reported on a monthly basis).

A. Patient Access Indicators
1. Number of Patients Referred
2. Number of Patients Accepted
3. Percentage of Patients Accepted
4. Number of Patients on Waiting List for Admission

B. Inpatient Utilization Indicators
1. Number of Beds
2. Number of Admissions
3. Number of Discharges
4. Average Daily Census
5. Total Inpatient Days
6. Average Length of Stay
7. Average Bed Occupancy Rate
8. Bed Turnover Rate
9. Number of Patients with LOS > 30 Days
10. Number of Patients with LOS > 60 Days
11. Number of Patients with LOS > 90 Days
12. Number of ICU Beds
13. Average ICU Bed Occupancy Rate
14. Average ICU Length of Stay
15. Number of Patients Transferred to HHC (Home Health Care)
16. Number of Deaths
17. Mortality Rate

C. Outpatient Utilization Indicators
1. Total Number of Outpatient Clinic Visits
2. Average First Available Slots > 30 Days for New Patients
3. Patients Seen - New Patients
4. Patients Seen - Follow Up
5. Patients Seen - New Follow Up
6. Number of No Show Patients
7. Percentage of No Show Patients

D. OR Utilization Indicators
1. Number of OR Cases Booked
2. Number of OR Cases Performed
3. Number of OR Cases Cancelled
4. Percentage of OR Cancellation Rate
5. Number of OR Cases Done in Day Procedure Unit (DPU) & Endoscopy
6. Percentage of OR Cases Done in DPU & Endoscopy
7. OR Utilization Rate
8. Number of Cardiac Surgeries
9. Number of Renal Transplants
10. Number of BMT Cases - Adults
11. Number of BMT Cases - Pediatrics

E. ER Utilization Indicators
1. Total Number of ER Visits
2. ER Waiting Time (Door to Doctor)
3. ER Treatment Time (Doctor to Disposition)
4. ER Admission Waiting Time (Boarding Time)
5. Percentage of Patients LWBS (Left Without Being Seen)

F. Generic Utilization Indicators
1. Total Radiological Procedures
2. Total Prescriptions
3. Total Lab Investigations

G. Patient Safety Indicators
1. Unplanned Readmission within 30 Days of Discharge
2. Unplanned Transfer to Any Critical Unit/OR
3. Cardiac or Respiratory Arrest
4. Bleeding Requiring Transfusion/Exploration

H. Infection Control Indicators
1. Blood Stream Infection
2. Catheter Related Infection
3. Wound Infection within 30 Days of Surgery

I. Documentation Compliance Indicators
1. Number of Deficient Records (less than 30 days)
2. Number of Delinquent Records (more than 30 days)

J. Patient Satisfaction Indicators
1. Inpatient Satisfaction Rate
2. Outpatient Satisfaction Rate

5. Discussion

As a tertiary care hospital, patients' accessibility to services is a very important indicator of performance. The number of patients referred to the hospital from other secondary and primary healthcare systems reflects the increasing workload. The number of patients accepted reflects performance capacity, and the percentage of patients accepted reflects the capability of the hospital to adapt to the increasing number of referred patients. The number of patients on the waiting list for admission reflects the timeliness of healthcare provision13. Inpatient utilization indicators reflect work effectiveness and efficiency inside the hospital. Many of these indicators are interrelated and mutually influential. The number of hospital beds, the number of admissions and discharges, the average daily census and total inpatient days, the average length of stay and bed occupancy rates, in addition to the bed turnover rate, are all interrelated and affect each other14.
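As an illustration of how these inpatient indicators are interrelated, the commonly used definitions below show that length of stay, occupancy and turnover can all be derived from the same few counts. These are standard hospital-statistics formulas given for illustration; the paper does not quote the hospital's exact calculation rules.

```latex
\begin{align*}
\text{Average Length of Stay (ALOS)} &= \frac{\text{Total Inpatient Days}}{\text{Number of Discharges}}\\[4pt]
\text{Average Bed Occupancy Rate} &= \frac{\text{Total Inpatient Days}}{\text{Number of Beds}\times\text{Days in Period}}\times 100\%\\[4pt]
\text{Bed Turnover Rate} &= \frac{\text{Number of Discharges}}{\text{Number of Beds}}
\end{align*}
```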

Hospital bed occupancy rates have been proposed as a measure that reflects the ability of a hospital to properly care for patients. This measure is useful in guiding the planning and operational management of hospital beds in a way that improves how well patients are treated. Many studies discuss the effects of bed occupancy rates and average length of stay on patient outcomes15. It is very important to monitor the number of chronic patients, especially in a tertiary care hospital, such as the number of patients with a length of stay exceeding 30, 60 or 90 days. Chronic patients might need to be transferred to another long-term care facility or to a home healthcare program16. The number of ICU beds reflects the capability of a hospital to care for severely ill patients, while the average ICU bed occupancy rate and the average ICU length of stay might reflect complications such as nosocomial infections, which are more prevalent in the ICU, especially ventilator-associated pneumonia17. It is also very important to monitor the number of deaths in the hospital as well as the mortality rates, which directly reflect the effectiveness of early disease detection, proper diagnosis and appropriate treatment18.

Most outpatient utilization indicators, such as the total number of patients seen, reflect the efficiency of work in the outpatient department. Some indicators, such as the average waiting time for new patients to be seen, reflect the accessibility of care in addition to the efficiency of care provision19. Many studies correlate patients' no-show behavior for outpatient appointments with factors such as the patient's age and race, psychosocial problems and the percentage of previous non-cancelled appointments that were kept. Some studies discuss the effect of patient satisfaction and the patient-physician relationship on appointment keeping20.

Operating room (OR) utilization indicators reflect the efficiency and effectiveness of care. They might vary widely according to the level of care provided, whether the facility provides secondary or tertiary care. The number of OR cases booked, the number of OR procedures performed and the number of procedures cancelled all reflect the capacity of work, and both cancellation rates and reasons should be investigated. Generally accepted OR cancellation rates range from 15 to 20%, while cancellations are mainly due to unavailability of the theater because of over-run of a previous surgery, no postoperative bed, cancellation by the patient and changes in the patient's clinical status. Some procedural reasons are also documented, such as patient not ready, no surgeon, administrative causes, and communication failure. Studies confirm that 60% of cancellations of elective procedures are potentially avoidable21, 22. Operative productivity indicators are also important, such as the OR utilization rate and the numbers of cardiac surgeries, organ transplants and bone marrow transplants.
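For the two OR rate indicators, the formulas below give the definitions that are commonly used in the operating-room management literature; they are assumed here for illustration rather than quoted from the hospital's own specifications.

```latex
\begin{align*}
\text{OR Cancellation Rate} &= \frac{\text{Number of OR Cases Cancelled}}{\text{Number of OR Cases Booked}}\times 100\%\\[4pt]
\text{OR Utilization Rate} &= \frac{\text{Total In-Room (Used) OR Minutes}}{\text{Total Available (Staffed) OR Minutes}}\times 100\%
\end{align*}
```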

Emergency room (ER) crowding has become a major barrier to receiving timely emergency care all over the world. Patients who present to ERs often face long waiting times to be treated and, for those who require admission, even longer waits for an inpatient hospital bed. It is very important to monitor the number of ER visits, the ER waiting time (door to doctor), the ER treatment time (doctor to disposition), the ER admission waiting time (boarding time) as well as the percentage of patients who left without being seen (LWBS)23. Because ER crowding is a reflection of larger supply and demand mismatches in the healthcare system, the problem cannot be solved by examining the ER in isolation. To find solutions, we must examine ER crowding in the context of the entire delivery system by using reliable methods to understand, measure, and monitor system capacity. Many studies have investigated the association between increased hospital occupancy rates and increased ER crowding and prolonged ER length of stay24, 25.
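The three ER time indicators are interval measures derived from event timestamps. A minimal Python sketch follows, assuming a hypothetical ER visit record with invented field names (arrival, first physician contact, disposition decision, departure from the ER); the timestamps are illustrative, not hospital data.

```python
from datetime import datetime

def minutes_between(start: datetime, end: datetime) -> float:
    """Return the interval between two timestamps in minutes."""
    return (end - start).total_seconds() / 60.0

# Hypothetical ER visit record (field names and times are assumptions for illustration)
visit = {
    "arrival":     datetime(2015, 3, 1, 10, 5),
    "seen_by_md":  datetime(2015, 3, 1, 11, 20),
    "disposition": datetime(2015, 3, 1, 14, 0),
    "left_er":     datetime(2015, 3, 1, 17, 45),  # admitted to an inpatient bed
}

waiting_time   = minutes_between(visit["arrival"], visit["seen_by_md"])      # door to doctor
treatment_time = minutes_between(visit["seen_by_md"], visit["disposition"])  # doctor to disposition
boarding_time  = minutes_between(visit["disposition"], visit["left_er"])     # admission waiting (boarding)
print(waiting_time, treatment_time, boarding_time)  # 75.0 160.0 225.0
```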

Patient safety indicators and infection control rates are very important parameters that reflect quality of care and patient centeredness. Safety indicators include unplanned readmission within 30 days of discharge, which directly reflects treatment effectiveness, unplanned transfer to critical care units or to the operating room, the number of cardiac or respiratory arrests, and incidents of bleeding requiring transfusion or exploration. Infection control indicators include blood stream infections, catheter-related infections and wound infection within 30 days of surgery26. Some generic utilization indicators might reflect overall investigation capacity, such as total imaging procedures, total prescriptions dispensed (both inpatient and outpatient) and total lab investigations27.

Among generic performance indicators, documentation compliance is a very crucial indicator, measured as the number of deficient medical records (incomplete records for patients discharged less than 30 days ago) and the number of delinquent medical records (for patients discharged more than 30 days ago). The first reflects compliance but is highly influenced by the number of discharged patients; discharging more patients produces more deficient records. The second reflects the real status of healthcare professionals' documentation commitment, where proper documentation improves patient safety and reduces medical errors and liability risks28. Finally, the patient satisfaction rate, for both inpatient and outpatient services, has gained widespread recognition as a measure of quality in healthcare, although it still needs better definition and more objective methods of measurement29, 30.
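The distinction between deficient and delinquent records depends only on the time elapsed since discharge. A minimal sketch, assuming the 30-day threshold described above; the function name and example dates are invented for illustration.

```python
from datetime import date

def record_status(discharge_date: date, completed: bool, today: date) -> str:
    """Classify an incomplete medical record as deficient or delinquent."""
    if completed:
        return "complete"
    days_since_discharge = (today - discharge_date).days
    return "deficient" if days_since_discharge <= 30 else "delinquent"

print(record_status(date(2015, 1, 10), False, date(2015, 1, 25)))  # deficient (15 days)
print(record_status(date(2014, 11, 1), False, date(2015, 1, 25)))  # delinquent (85 days)
```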

6. Conclusions and Recommendations

Many of these KPIs are already in use individually by hospitals and healthcare organizations. The main contribution of this work is to provide healthcare professionals with a more focused perspective for classifying KPIs into functional categories and specific measurement groups; what matters more is "why" we measure rather than just "how". Building this framework for strategic KPIs is essential not only to report performance, highlight deficiencies and suggest improvements, but also to provoke new ideas and suggestions for monitoring tactical and operational performance as well.

Most of the discussed indicators are already automated, generated electronically from the hospital information system and reported retrospectively through the hospital data warehouse system. Before considering these KPIs reliable, we have to validate the reported figures against manual records, nurses' books and departmental information, such as the number of operations done or patients seen. In addition to the validation stage, another important stage before these KPIs become fully comprehensive is setting triggers. Triggers are alarm thresholds that KPIs should not cross, whether these are lower-limit triggers, such as the number of patients seen in clinics, which we do not want to decrease, upper-limit triggers, such as infection rates, which we do not want to increase, or both an upper and a lower limit, such as occupancy rates, which we need to keep within a certain range31, 32.
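A trigger of this kind is simply a lower bound, an upper bound, or both, attached to an indicator. A minimal Python sketch follows; the threshold values are invented for illustration and are not the hospital's actual targets.

```python
from typing import Optional

def check_trigger(value: float, lower: Optional[float] = None,
                  upper: Optional[float] = None) -> bool:
    """Return True if the KPI value breaches its lower or upper trigger."""
    if lower is not None and value < lower:
        return True
    if upper is not None and value > upper:
        return True
    return False

# Illustrative thresholds only
print(check_trigger(4200, lower=5000))               # clinic visits fell below the lower limit -> True
print(check_trigger(3.1, upper=2.5))                 # infection rate exceeded the upper limit -> True
print(check_trigger(82.0, lower=75.0, upper=90.0))   # occupancy rate within its range -> False
```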

The researchers suggest two methods of setting up triggers. The first is to use international benchmarks and published evidence-based figures from similar hospitals and healthcare institutions; some of these could also be standards that the hospital has to meet for accreditation or compliance with international programs, such as JCI (Joint Commission International) standards or other specialized accreditation programs. The second method is to utilize the available data on previous performance; for example, using last year's performance values of the hospital and statistically setting an upper control limit, a lower control limit and an average for each indicator, from which a suggested trigger, based on hospital or departmental objectives, could be estimated.
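The second method can be sketched as computing a mean and control limits from last year's monthly values. The example below uses mean plus or minus three standard deviations, a common control-chart convention assumed here for illustration; the monthly values are invented.

```python
from statistics import mean, stdev

# Hypothetical monthly values of one indicator for the previous year
monthly_values = [410, 395, 430, 405, 420, 415, 400, 425, 435, 410, 405, 420]

avg = mean(monthly_values)
sd = stdev(monthly_values)
upper_control_limit = avg + 3 * sd
lower_control_limit = avg - 3 * sd
print(f"average={avg:.1f}, UCL={upper_control_limit:.1f}, LCL={lower_control_limit:.1f}")
```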

Since these suggested strategic KPIs are mostly automated, the researchers suggest two types of further expansion. The first is to expand these KPIs horizontally to include more related indicators or more department-specific indicators, such as specialty-specific rates, e.g. fetal death rates, IVF (in vitro fertilization) success rates and caesarean section rates for a department like obstetrics and gynecology. The second is to expand these KPIs vertically to include multiple levels of detail, e.g. providing scorecard users with the ability to drill down to a lower level of detail to check certain indicators in certain departments, units or services. For example, users should be able to compare the average length of stay among different departments or nursing units.
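Vertical expansion (drill-down) amounts to recomputing an indicator over finer groupings. A minimal sketch using pandas follows; the department names and discharge-level values are invented for illustration and do not come from the hospital's data warehouse.

```python
import pandas as pd

# Hypothetical discharge-level data (departments and lengths of stay are invented)
discharges = pd.DataFrame({
    "department": ["Oncology", "Oncology", "Cardiology", "Cardiology", "Surgery"],
    "length_of_stay": [12, 8, 5, 7, 3],
})

# Hospital-wide average length of stay
print(discharges["length_of_stay"].mean())

# Drill-down: average length of stay per department
print(discharges.groupby("department")["length_of_stay"].mean())
```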

It is crucial to provide system developers with a valid and accurate definition for each indicator and to delegate ownership of each indicator to specific departments, services or users. These indicator definitions should include two main components: the first is the importance and value of the indicator, that is, why we need this indicator and how we are going to use it in setting future objectives and achieving performance improvement; the second is how the indicator should be calculated, including formulae and inclusion and exclusion criteria, such as the difference between total licensed beds and operational or functional (accessible) beds.
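Such a definition can be captured as a small structured record handed to the system developers. The sketch below is one possible schema with hypothetical field names and an invented example; it is illustrative of the two components described above, not the hospital's actual specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IndicatorDefinition:
    name: str
    owner: str                 # department or service that owns the indicator
    rationale: str             # why the indicator is needed and how it will be used
    formula: str               # how the indicator is calculated
    inclusions: List[str] = field(default_factory=list)
    exclusions: List[str] = field(default_factory=list)

# Invented example for illustration only
bed_occupancy = IndicatorDefinition(
    name="Average Bed Occupancy Rate",
    owner="Inpatient Services",
    rationale="Supports bed planning and monitoring of inpatient capacity.",
    formula="Total inpatient days / (operational beds x days in period) x 100",
    inclusions=["Operational (accessible) beds"],
    exclusions=["Licensed but non-functional beds"],
)
```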

References

1. Arah, O. A., Klazinga, N. S., Delnoij, D. M. J., Ten Asbroek, A. H. A., & Custers, T. (2003). Conceptual frameworks for health systems performance: a quest for effectiveness, quality, and improvement. International Journal for Quality in Health Care, 15(5), 377-398.

2. Kaplan, R. S., & Norton, D. P. (1995). Putting the balanced scorecard to work. Performance measurement, management, and appraisal sourcebook, 66.

3. Parmenter, D. (2010). Key performance indicators (KPI): developing, implementing, and using winning KPIs. John Wiley & Sons.

4. Hwang, U., McCarthy, M. L., Aronsky, D., Asplin, B., Crane, P. W., Craven, C. K. & Bernstein, S. L. (2011). Measures of crowding in the emergency department: a systematic review. Academic Emergency Medicine, 18(5), 527-538.

5. Eckerson, W. W. (2009). Performance management strategies. Business Intelligence Journal, 14(1), 24-27.

6. Gilbert, S. M. (2014). Revisiting structure, process, and outcome. Cancer.

7. Porter, M. E. (2010). What is value in health care?. New England Journal of Medicine, 363(26), 2477-2481.

8. Bauer, J. C. (2014). Paradox and Imperatives in Health Care: Redirecting Reform for Efficiency and Effectiveness. CRC Press.

9. Making health care safer: a critical analysis of patient safety practices. Rockville, MD: Agency for Healthcare Research and Quality, 2001.

10. Grimshaw, J., Thomas, R. E., MacLennan, G., Fraser, C. R. R. C., Ramsay, C. R., Vale, L., ... & Donaldson, C. (2004). Effectiveness and efficiency of guideline dissemination and implementation strategies.

11. Oates, J., Weston, W. W., & Jordan, J. (2000). The impact of patient-centered care on outcomes. Fam Pract, 49, 796-804.

12. Brilli, R. J., Allen, S., & Davis, J. T. (2014). Revisiting the Quality Chasm. Pediatrics, 133(5), 763-765.

13. Chow, C. W., Ganulin, D., Haddad, K., & Williamson, J. (1998). The balanced scorecard: a potent tool for energizing and focusing healthcare organization management. Journal of Healthcare Management, 43(3), 263-280.

14. Black, D., & Pearson, M. (2002). Average length of stay, delayed discharge, and hospital congestion: A combination of medical and managerial skills is needed to solve the problem. BMJ: British Medical Journal, 325(7365), 610.

15. Keegan, A. D. (2010). Hospital bed occupancy: more than queuing for a bed. Med J Aust, 193(5), 291-3.

16. Medical Home Initiatives for Children With Special Needs Project Advisory Committee. (2002). The medical home. Pediatrics, 110(1), 184-186.

17. Vincent, J. L., Bihari, D. J., Suter, P. M., Bruining, H. A., White, J., Nicolas-Chanoin, M. H., ... & Hemmer, M. (1995). The prevalence of nosocomial infection in intensive care units in Europe: results of the European Prevalence of Infection in Intensive Care (EPIC) Study. Jama, 274(8), 639-644.

18. Mandel, J. S., Church, T. R., Ederer, F., & Bond, J. H. (1999). Colorectal cancer mortality: effectiveness of biennial screening for fecal occult blood. Journal of the National Cancer Institute, 91(5), 434-437.

19. Cayirli, T., & Veral, E. (2003). Outpatient scheduling in health care: a review of literature. Production and Operations Management, 12(4), 519.

20. Goldman, L., Freidin, R., Cook, E. F., Eigner, J., & Grich, P. (1982). A multivariate approach to the prediction of no-show behavior in a primary care center. Archives of Internal Medicine, 142(3), 563-567.

21. Schofield, W. N., Rubin, G. L., Piza, M., Lai, Y. Y., Sindhusake, D., Fearnside, M. R., & Klineberg, P. L. (2005). Cancellation of operations on the day of intended surgery at a major Australian referral hospital. Med J Aust, 182(12), 612-615.

22. González-Arévalo, A., Gómez-Arnau, J. I., DelaCruz, F. J., Marzal, J. M., Ramírez, S., Corral, E. M., & García-del-Valle, S. (2009). Causes for cancellation of elective surgical procedures in a Spanish general hospital. Anaesthesia, 64(5), 487-493.

23. Asplin, B. R., Magid, D. J., Rhodes, K. V., Solberg, L. I., Lurie, N., & Camargo, C. A. (2003). A conceptual model of emergency department crowding. Annals of emergency medicine, 42(2), 173-180.

24. Hoot, N. R., & Aronsky, D. (2008). Systematic review of emergency department crowding: causes, effects, and solutions. Annals of emergency medicine, 52(2), 126-136.

25. Moskop, J. C., Sklar, D. P., Geiderman, J. M., Schears, R. M., & Bookman, K. J. (2009). Emergency department crowding, part 1—concept, causes, and moral consequences. Annals of Emergency Medicine, 53(5), 605-611.

26. Vincent, C. (2011). Patient safety. John Wiley & Sons.

27. Grosskopf, S., & Valdmanis, V. (1987). Measuring hospital performance: A non-parametric approach. Journal of Health Economics, 6(2), 89-107.

28. Kheterpal, S., Gupta, R., Blum, J. M., Tremper, K. K., O'Reilly, M., & Kazanjian, P. E. (2007). Electronic reminders improve procedure documentation compliance and professional fee reimbursement. Anesthesia & Analgesia, 104(3), 592-597.

29. Williams, B. (1994). Patient satisfaction: a valid concept?. Social science & medicine, 38(4), 509-516.

30. Ware, J. E., Snyder, M. K., Wright, W. R., & Davies, A. R. (1983). Defining and measuring patient satisfaction with medical care. Evaluation and program planning, 6(3), 247-263.

31. Pivnicka, M. (2011, September). The Balanced Scorecard and its Practical Applications in Oracle Balanced Scorecard. In European Conference on Information Management and Evaluation (p. 540). Academic Conferences International Limited.

32. Inamdar, N., Kaplan, R. S., Bower, M., & Reynolds, K. (2002). Applying the balanced scorecard in healthcare provider organizations. Journal of healthcare management, 47(3), 179-196.