Usability of accident and incident reports for evidence-based risk modeling - A case study on ship grounding reports

Arsham Mazaheri a,*, Jakub Montewka a, Jari Nisula b, Pentti Kujala a

a Aalto University, School of Engineering, Department of Applied Mechanics, Espoo, Finland
b Risk in Motion S.A.S. for Finnish Transport Safety Agency & Université Toulouse III - Paul Sabatier, France

ABSTRACT

This paper presents a study of 115 grounding accident reports from the Safety Investigation Authority of Finland and the Marine Accident Investigation Branch of the UK, as well as 163 near-miss grounding reports from the ForeSea and Finnpilot incident databases. The objective was to find the type of knowledge that can be extracted from such sources and to discuss the usability of accident and incident reports for evidence-based risk modeling. A new version of the Human Factors Analysis and Classification System (HFACS) is introduced as a framework to review the accident reports. A new positive taxonomy of Safety Factors, based on high-level positive functions that are prerequisites for safe transport operations, is used for reviewing the incident reports. Accident reports are shown to be a reliable source of evidence for extracting the most significant contributing factors in the events. Mandatory incident reports are considered useful for understanding the effective barriers as risk control measures. Voluntary incident reports, though, are seen as not reliable enough in their current form to be used for evidence-based risk modeling. © 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license

(http://creativecommons.org/licenses/by-nc-nd/4.0/).


Article history:

Received 12 November 2014
Received in revised form 15 January 2015
Accepted 22 February 2015
Available online 25 March 2015

Keywords:

Ship grounding

Accident and incident reports

Near-miss

HFACS

Safety Factor

Evidence-based risk modeling

1. Introduction

Risk models are developed for understanding the behavior of a system and its components, in order to mitigate the involved risks by implementing proper control measures (IMO, 2002). In this regard, a suitable model for risk management purposes should reflect the available background knowledge on the system and its components (Aven, 2013; Montewka et al., 2014). Here the term "knowledge" is used as "know-how" (Ackoff, 1989), which in the risk management context could mean "knowing how to control the risk". Most of the available models for maritime risk analysis focus on producing risk figures rather than presenting the available background knowledge of the system (Goerlandt and Kujala, 2014). The models are mostly based on the intuition of their developers rather than on evidence, and thus may not be suitable for risk management purposes; for a thorough discussion on this subject the reader is referred to Mazaheri et al. (2014b). Lack of background knowledge about the underlying causes within a system, or improper presentation of the available background knowledge, leads to uncertainty in the resulting risk models (Aven and Zio, 2011). Therefore, evidence-based risk modeling that addresses real accident scenarios, as opposed to imaginary scenarios, is encouraged (IMO, 2002, 2012; Kristiansen, 2010; Mazaheri et al., 2013b, 2014b).

* Corresponding author. Tel.: +358 50 5769989. E-mail address: arsham.mazaheri@aalto.fi (A. Mazaheri).

One of the main sources of evidence that is available and can be used for evidence-based risk modeling is accident reports prepared by expert accident investigators (Schröder-Hinrichs et al., 2011). Since obtaining primary data about an accident that happened in the past is nearly impossible, using accident reports as a secondary source of data is unavoidable (Mazaheri et al., 2013b); see Fig. 1. However, there are some concerns regarding the use of accident reports alone for modeling. One is that accidents are rare, so the number of scenarios that can be analyzed is limited (Ladan and Hänninen, 2012). To overcome this limitation, one suggested solution is to utilize incident reports (Rothblum et al., 2002), as incidents occur much more frequently than accidents (Bole et al., 1987). Besides, since incidents are governed by mechanisms and underlying factors similar to those that cause accidents (Harrald et al., 1998) but do not end in actual accidents, analyzing incidents is likely to give insight into the in-place risk control options that stopped an incident from becoming an accident. Here, an incident or near-miss refers to an individual mishap or a series of mishaps that did not result in a serious accident, such as a ship grounding with consequences for human life or the environment.

In light of the above, utilizing accident and incident reports may be beneficial for evidence-based risk modeling. This is because accident and incident reports can be useful for uncovering the factors that contributed to the occurrence of a mishap as well as for evaluating the level of importance of each factor.

http://dx.doi.org/10.1016/j.ssci.2015.02.019 0925-7535/© 2015 The Authors. Published by Elsevier Ltd.

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Fig. 1. Framework for evidence-based risk modeling.

Besides, the way the contributing factors are linked together may be understood from such reports. In this regard, the aim of this paper is to study the usability of accident and incident reports for evidence-based risk modeling by assessing the type of knowledge that one can extract from such reports. For this study, we have used ship grounding related reports due to the high frequency of this accident type both locally and globally (Kujala et al., 2009; Samuelides et al., 2009). This makes reports of grounding accidents and incidents more readily available than those of other accident types. Besides, the importance of this type of maritime accident with regard to its consequences (Hänninen et al., 2014; Mazaheri et al., 2014b) makes it worth studying.

As Lundberg et al. (2009) highlighted, in practice the result of an accident analysis depends on two issues, namely the causes and the causality. The causes are the contributing factors whose presence in the accident is observed, and the causality is the mechanism by which the causes are interconnected and ultimately produce the accident. In this paper, we merely look for the presence of different causes in the causal networks of grounding accidents based on the reviewed reports; the analysis of causality relations is left for further studies. In other words, we only searched for the most important nodes that could later be present in a probabilistic causal risk model of an accident, such as a Bayesian Belief Network (Pearl, 1988; Hänninen, 2014) (i.e. Parameters of the Model in Fig. 1), and used this only to support our discussion.

The remainder of this paper is organized as follows: the accident and incident reports used for the study are introduced in the next section. The methodologies applied for reviewing the reports are presented in Section 3. The results of the study are presented in Section 4, followed by a discussion in Section 5. The paper is concluded in Section 6.

2. Accident and incident reports as data sources

2.1. Accident reports

Accident reports are categorized as a secondary source of data, in which the reports are prepared from the primary data that the investigator obtained first-hand by interviewing the operators and analyzing the evidence, normally a short time after an accident (Mazaheri et al., 2013b). In maritime safety analysis, the official accident reports prepared by the accident investigation boards usually present valuable information regarding why and how an accident happens. For this study, we have utilized 73 grounding accident reports from the Safety Investigation Authority of Finland (SIAF) and 42 reports from the Marine Accident Investigation Branch (MAIB) of the UK, both of which are freely accessible to the public.

Although more systematic analysis of, and attention toward, the organizational contributing factors can be seen in the recent SIAF reports, the structures of the reports are more or less the same. They all start with a summary, which briefly explains the event and the findings of the investigators. The reports continue with a general description of the vessel, the external conditions at the time of the accident, and then the accident itself and any rescue operations performed. These are followed by the analysis of the accident and its causes. At the end, the reports are mostly concluded by presenting the causal chain of events and the underlying factors in the accident, as well as some recommendations to improve maritime safety. The parts that were fully reviewed for this study are the summaries, the analyses, and the conclusions. However, for some of the reports, other parts were also browsed in order to better understand the accident and the connections between the causal events.

Almost the same approach and structure is taken by MAIB. The reports start with a synopsis of the event and the factual information about the accident. They continue with an analysis of the accident and the conclusions of that analysis. Then the actions taken by different organizations following the accident are presented, and the investigators' final recommendations conclude the reports. The parts of the MAIB reports that were fully reviewed for this study are the synopsis, the analysis, and the conclusion.

2.2. Incident reports

In contrast to accidents, there is almost no systematic reporting system available for incidents. Currently, quite few sources are available for obtaining near-miss data, and not all of them are open to the public; for a thorough discussion on this subject see for example Ladan and Hänninen (2012). For this study, we have used the incident reports of ForeSea and Finnpilot, neither of which is freely accessible to the public. They were accessed through signed agreements for this study and solely for research purposes.

ForeSea is an anonymous, voluntary, user-fed database that was initiated by Finnish and Swedish government agencies. The database was created in order to collect the hazardous conditions that are not normally reported to the authorities (ForeSea, 2014). Reporting to ForeSea can be performed only by registered users, by filling in a form with four questions. The reporting parties need to answer the questions in their own words as clearly as possible. The questions are: What happened? What caused the event? What were the consequences of the event? and What measures were taken? The answers are first handled by a third party, which removes any parts that could jeopardize anonymity and assigns keywords to each case to make it searchable. Thereafter, the reports are added to the database.

Finnpilot is the company that provides comprehensive pilotage services in all Finnish territorial waters (Finnpilot, 2014). The company collects near-miss cases with the help of its sea pilots. After each pilotage task, every pilot fills in an online multiple-choice questionnaire regarding the performed pilotage; and in case the pilot faced any abnormality or difficulty during the task, he or she writes a short report to explain the situation and the actions taken to handle it. The collected answers to the questionnaires and the attached reports are available through the Finnpilot intranet for use by its members, with the purpose of increasing awareness among the pilots and improving the pilotage services.

For this study we have utilized 73 reports from ForeSea and 90 reports from Finnpilot, all returned as near-miss groundings by keyword search. Due to the shortness of the reports and the small amount of information they provide, the incident reports, unlike the accident reports, were reviewed in full for this study.

3. Methodology

Generally, accident and incident reports are in text format, and the information first needs to be extracted before one can utilize it. The extraction normally requires human effort, so a risk of subjective interpretation exists. There are some text-mining techniques that use machine-learning algorithms to eliminate the need for human effort in extracting the information and thus cope with the subjectivity issue; see for example (Artana et al., 2005; Tirunagari et al., 2012a,b). However, quite many challenges still exist in this regard, as the reports are written in different natural languages with their own abbreviations and no standard template, and they often contain misspellings (Hänninen et al., 2013). Additionally, since most of the available data sets, whether in categorical or text format, are prepared by humans at some stage, they contain the views of their creators and thus some level of subjectivity anyway (Hänninen et al., 2013). Therefore, being aware of such possible subjectivity, we used human reviewers to review the reports and extract the embedded information. Nevertheless, to minimize subjectivity, the reviewers extracted the information solely based on the words mentioned in the reports, and thus avoided further investigation of the cases, which could have introduced opinion subjectivity into the extracted information. Besides, since the effect of the background knowledge of the person who reviews the reports is not critical for the extracted information (Hyttinen, 2013; Hyttinen et al., 2014), the reports were all reviewed by researchers who are experts in risk analysis and risk modeling.

3.1. Accident reports

Since the accident reports were prepared in a systematic way by expert accident investigators, a framework is needed in order to uniformly extract the information from all the reports. There are a handful of tools and frameworks available for accident and incident analysis and reporting (Johnson, 2003), which are mostly based on linear or non-linear accident theories that deal with complex socio-technical systems. Since a ship and her interactions within a maritime traffic system also form a complex socio-technical system (Hollnagel, 2004), in this study we have utilized a redefined version of a well-established complex-linear method, the Human Factors Analysis and Classification System (HFACS), as the framework to review the reports.

HFACS, which is based on Reason's Swiss Cheese linear accident theory (Reason, 1990), was initially developed to study the contribution of human elements in military aviation accidents (Shappell and Wiegmann, 1997, 2000). The framework was further developed to also cover causal factors other than human factors, namely environmental factors such as machinery failures and meteorological conditions (Wiegmann et al., 2005). The success of the method in detecting the contributing latent and active failures made it popular in the field of accident analysis; it has been widely used in the analysis of civil aviation accidents (Shappell et al., 2007) as well as accidents in other domains such as rail (Reinach and Viale, 2006) and maritime transport (Chen and Chou, 2012; Chen et al., 2013). Reinach and Viale (2006) further developed the method by adding a fifth level, namely "external factors", to the initial four levels in order to cover the latent failures that come from outside a particular domain. The same practice has been followed by recent studies using HFACS; see for example (Schröder-Hinrichs et al., 2011; Chauvin et al., 2013; Chen et al., 2013).

Since every single accident is unique from its own perspective, frameworks like HFACS try to assign the unique causes of an accident to more global factors, giving a better understanding of the phenomena by accumulating the causes into frequent factors. In this regard, having a specific framework with more specialized global factors for each domain and purpose seems beneficial. The different versions of HFACS recently introduced in the maritime domain, such as HFACS-MA for general maritime accidents (Chen et al., 2013), HFACS-Coll for collision accidents (Chauvin et al., 2013), and HFACS-MSS for machinery space accidents (Schröder-Hinrichs et al., 2011), support this belief. Therefore, we have revised HFACS into a specific version suitable for grounding accident analysis (HFACS-Ground; see Fig. 2) by implementing factors that are more related to grounding accidents (see also Mazaheri and Montewka, 2014). HFACS-Ground is also built as a five-level framework and has many similarities with HFACS-Coll and HFACS-MA. However, in addition to the factors that cover traffic control and piloting services as factors affecting grounding accidents, "infrastructure" is added as a latent failure subcategory under "environmental factors" in order to cover waterway complexity related issues, such as design and markings (Fig. 2 and Table A-1). These are the factors that are believed to have an effect on the frequency of grounding accidents, as reported in Mazaheri et al. (2013a) and Mazaheri et al. (2014a). The accident reports in this study were then reviewed using this novel HFACS-Ground framework, and the results are reported in Section 4.1.

Fig. 2. HFACS-Ground.

As mentioned before, in order to avoid subjective interpretations of the reports, only the factors that were explicitly mentioned in the reports were extracted and classified based on HFACS-Ground. This basically means that the reviewers avoided further investigating the causality of the mentioned causes up to the higher levels. In total, HFACS-Ground contains 147 factors, each of which is assigned a nanocode. The nanocodes were used during the reviewing process to record the frequency of the causal factors mentioned in the reports.
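As a rough illustration of this tallying step (not the authors' actual tooling), the following Python sketch counts hypothetical nanocodes per report and turns them into the within-layer relative frequencies reported in Section 4.1; the nanocodes, category names, and example reports are invented for the illustration.

from collections import Counter

# Rough sketch of the tallying step; the nanocodes, category names and the
# example reports below are hypothetical, not the actual HFACS-Ground coding.
NANOCODE_TO_CATEGORY = {            # nanocode -> (HFACS-Ground level, category)
    "UA-E-JD1": (1, "Judgment/Decision errors"),
    "UA-E-SB1": (1, "Skill-based errors"),
    "PC-PF-CCP1": (2, "Coordination/Communication/Planning"),
}

def relative_frequencies(reviewed_reports):
    """reviewed_reports: one list of extracted nanocodes per reviewed report."""
    category_counts, category_level = Counter(), {}
    for nanocodes in reviewed_reports:
        for code in nanocodes:
            level, category = NANOCODE_TO_CATEGORY[code]
            category_counts[category] += 1
            category_level[category] = level

    # Relative frequency = count of a category divided by the total count of
    # all categories on the same level/layer (cf. Table 1).
    level_totals = Counter()
    for category, count in category_counts.items():
        level_totals[category_level[category]] += count
    return {category: count / level_totals[category_level[category]]
            for category, count in category_counts.items()}

reports = [["UA-E-JD1", "PC-PF-CCP1"], ["UA-E-JD1", "UA-E-SB1"]]
print(relative_frequencies(reports))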

3.2. Incident reports

In contrast to the accident reports, which were prepared in a systematic analytical way by expert accident investigators, the incident reports lack a systematic view of the event. The ForeSea reports are short (from a few sentences to a maximum of half a page) and may have been reported by people of different expertise, so their quality depends on the reporters' skills (Hänninen et al., 2013). The Finnpilot reports have more structured keywords, thanks to the preliminary questionnaire each pilot needs to fill in. Although the actual reports are still short (less than a page), they have the advantage of containing an expert analysis of the situation by a certified mariner. Nevertheless, like the ForeSea reports, the Finnpilot reports have a high potential to be subjective and biased, as they are prepared by the same person who was involved in the event. As a result, the reports vary in quality with regard to the data and information they provide. Most of the ForeSea reports contain merely what was observed, and only a few attempt to hypothesize the proximate causes of the events. Even those few reports lack the evidence to support the provided hypotheses. Finnpilot reports, likewise, contain only the factors that the pilot was able to observe while performing his or her task, so they also lack a broad systematic view of the event. Therefore, using HFACS-Ground to analyze the incident reports is not practical; rather, it might be misleading, as active failures are normally the only causes reported in incident reports. Instead, another approach and taxonomy, the Safety Factor (SF), has been followed in this study for reviewing the incident reports.

The SFs are high-level positive functions that are believed to be prerequisites for safe transport operations. The SFs were initially drafted by Nisula (2014) and include the (airline) pilot competencies produced in the Evidence Based Training (EBT) project (IATA, 2013). They were then refined and customized for maritime purposes in expert panel discussions at the Finnish Transport Safety Agency (TraFi), as part of an ongoing experimental project there. The principles for creating the SFs were that (1) each factor must be a positive function, not a failure condition or a technical device; (2) the set should cover all high-level safety-critical functions; and (3) overlap among the SFs should be avoided (Nisula, 2014). The SFs provide an approximation of the real system functions and do not go into as much depth as other methods like HFACS. Many SFs may depend on each other, and safety is not simply the sum of these factors. Besides, the positive nature of the SFs, as opposed to the failure-condition taxonomies used in methods like HFACS, helps the researchers to look for the measures that were present in the incident scenarios and presumably stopped the situation from becoming an accident. These features of the SFs provide a suitable platform with a proper taxonomy for analyzing incident reports that are not prepared in a systematic analytical way. For a more comprehensive explanation of the SFs, the reader is referred to Nisula (2014).

Incident reports in this study are thus reviewed using the SFs (see Table A-2) to find which of the functions represented by the SFs failed, and also whether any of the SFs acted as an in-place barrier and had a significant role in preventing the escalation of the event. This way, both positive and negative experiences can be tracked, even though the SFs are presented as positive functions. The positive phrasing of the SFs, such as "Controllability of the ship", is desirable for detecting the safety factors that acted as barriers and stopped the incident from escalating into an accident. However, for the purpose of analyzing the contributing factors in the incidents it was necessary to have the negated or failed SFs, such as "Loss of controllability of the ship". In this way, the SFs not only help us to understand why an incident occurred, but also help us to find what it takes for a serious situation not to become an accident. The results of reviewing the incident reports are presented in Section 4.2.

3.3. Statistical analysis

To identify any significant links between the extracted contributing factors, as well as between the SFs, from both types of reports, the statistical dependencies of the factors are studied two-by-two using the Pearson correlation coefficient (r). The significance of the correlation is tested by computing the p-values using the Student's t-distribution. The Spearman rank coefficient (ρ) is also used to study the rank relation between the frequencies of the factors extracted from the accident and incident reports. The results of the statistical analysis are used in Section 4.3.1 to compare the knowledge acquired from the accident and incident reports, as well as to briefly discuss the interrelations between some of the extracted contributing factors (see Fig. 1).
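As an illustration of this pairwise screening, the following minimal Python sketch (not the authors' code) computes Pearson coefficients and their p-values over 0/1 presence indicators of factors per report; the factor names and the data are invented for the example.

import numpy as np
from scipy import stats

# A minimal sketch of the pairwise screening described above, not the authors'
# code. Each row stands for one reviewed report and each column is a 0/1
# indicator of whether a given factor was mentioned in it.
factors = ["inappropriate communication", "inappropriate route planning",
           "lack of redundancy", "accidental loss of control"]
reports = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 1],
])

# Pearson r for every pair of factors; scipy's p-value tests the null
# hypothesis r = 0 (the same test as the Student's t check mentioned above).
for i in range(len(factors)):
    for j in range(i + 1, len(factors)):
        r, p = stats.pearsonr(reports[:, i], reports[:, j])
        if p < 0.05:
            print(f"{factors[i]} <-> {factors[j]}: r = {r:.2f}, p = {p:.3f}")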

4. Results

4.1. Accident reports

Table 1 shows the results of reviewing the accident reports as the relative frequency of each class of factors. The relative frequency is the occurrence frequency of a specific class of factors in the reports in relation to the occurrence frequency of all the other factors in the same layer (see Fig. 2). It can be seen that level 1 and level 2 failures, i.e. unsafe acts and preconditions, appear most frequently in the accident reports. Level 5 failures, the external factors, have the smallest frequency. Level 2, preconditions, has the highest frequency of all the levels, which may be the result of its having the largest coverage of factors: of the 147 causal factors that HFACS-Ground covers, 88 belong to this level.

Table 1 also shows that "judgment/decision errors" is the most frequent active failure. When this is seen together with the most frequent latent failure, namely issues related to coordination/communication/planning, the importance of proper planning and communication becomes apparent, as most of the errors might be avoided as a result of them. Although this is an interrelation (i.e. causality) issue that needs to be studied further, we discuss it a bit more using the results of the correlation analysis in Section 4.3.1.

Moreover, Table 1 shows that the most frequent failures are found in the first two levels of the HFACS-Ground framework. Since these two levels are mostly related to failures of the frontline operators, this shows that either the reviewed accident reports somehow failed to investigate further the top-tier causes, i.e. organizational, supervisory, and external factors, or those higher-level factors were less involved in the causality network of the accidents. If the first conclusion is true, then there is a risk that the recommendations made based on these investigations may not be able to tackle the actual problem.

4.2. Incident reports

As mentioned before, the SFs are phrased as positive functions, e.g. "Controllability of the ship", which is desirable for detecting the safety factors that acted as barriers and stopped the incident from escalating into an accident. However, for the purpose of analyzing the contributing factors in the incidents, as well as comparing the incident and accident events with each other, it was necessary to have the negation of the SFs, e.g. "Loss of controllability of the ship". Therefore, the incident reports were reviewed to find simultaneously those SFs that were reported present in the event as barriers and also whether the absence or failure of any SF contributed to the occurrence of the event. This means that the SFs are collected not only as safety barriers (Pos. columns in Tables 2 and A-2) but their negations are also collected as contributing factors in the incidents (Neg. columns in Tables 2 and A-2).

Table 2 shows the summary of the results of reviewing the incident reports as the relative frequency of each SF, i.e. the occurrence frequency of an SF or its negation in relation to the occurrence frequency of all the other SFs. The complete results of reviewing the incident reports are presented in Table A-2 in the Appendix. Table 2 shows that the factors contributing most to the occurred incidents were the absence or failure of the Fundamental and External Safety Factors. Looking into the details of the SF categories (Table A-2), one can see that problems with the propulsion systems as well as pilotage-related problems were the most frequent contributing factors in the reviewed incidents (Fig. 3).

Table 1

Analyzing the accident reports using HFACS-Ground.

Level 1, Unsafe Acts (Active Failure), 27.6%:
  Error, 83.3%: Skill-based 34% (rank 4); Judgment/Decision 56% (rank 2); Perceptional 10% (rank 10)
  Violation, 16.7%: Routine 57.8% (rank 8); Exceptional 42.2% (rank 11)
Level 2, Preconditions (Latent Failure/Condition), 49.1%:
  Environmental Factors, 40.7%: Physical environment 37.9% (rank 5); Technological environment 41% (rank 3); Infrastructures 21.1% (rank 6)
  Condition of Operator, 16.9%: Cognitive factors 34.3% (rank 7); Psycho-behavioral factors 30.3% (rank 9); Adverse physiological states 22.9% (rank 12); Physical/Mental limitations 9.8% (rank 14); Perceptual factors 2.7% (rank 15)
  Personal Factors, 42.4%: Coordination/Communication/Planning 93.7% (rank 1); Personal readiness 6.3% (rank 13)
Level 3, Unsafe supervision (Latent Failure/Condition), 8.5%:
  Inadequate supervision 52.7% (rank 18); Planned inappropriate operations 20.3% (rank 20); Failed to correct known problems 15% (rank 22); Supervisory violations 12% (rank 23)
Level 4, Organizational influence (Latent Failure/Condition), 12%:
  Resource management 41.2% (rank 17); Organizational climate 13.1% (rank 21); Organizational process 45.7% (rank 16)
Level 5, External factors (Latent Failure/Condition), 2.8%:
  Regulation gaps 15.9% (rank 24); Other factors 84.1% (rank 19)

Table 2

Summary of the incident reports analysis using the defined Safety Factors. See Table A-2 in Appendix for the details. The columns "Pos" present the positive Safety Factors as safety barriers. The columns "Neg" present the negation of Safety Factors as contributing factors in the incidents. The column "Total" represents the cumulative frequency of both data sources using equal weights.

Categories of Safety Factors | Finnpilot Pos. (%) | Finnpilot Neg. (%) | ForeSea Pos. (%) | ForeSea Neg. (%) | Total Pos. (%) | Total Neg. (%)
Fundamental safety factors | 1.8 | 8.9 | 31.1 | 56.0 | 10.8 | 28.9
Competencies with respect to different crew categories | 31.3 | 15.6 | 29.7 | 9.0 | 30.8 | 12.8
Knowing and respecting operational limitations | 0.0 | 2.2 | 1.4 | 1.0 | 0.4 | 1.7
Fitness for work | 0.0 | 0.0 | 8.1 | 1.0 | 2.5 | 0.4
Procedures practices and culture | 28.3 | 8.9 | 0.0 | 7.0 | 19.6 | 8.1
Ergonomics and redundancy | 0.0 | 3.0 | 1.4 | 3.0 | 0.4 | 3.0
Availability of timely and reliable information | 1.8 | 5.2 | 0.0 | 6.0 | 1.3 | 5.5
External safety factors | 36.7 | 56.3 | 28.4 | 17.0 | 34.2 | 39.6

Table 3

Frequency of the extracted causes from the ForeSea and Finnpilot incident reports compared with the extracted causes from the accident reports.

General categories | ForeSea incidents (%) | Finnpilot incidents (%) | Incidents total (%) | Accident reports (%)
Accidental loss of control | 9.0 | 0.0 | 4.4 | 0.8
Alarm missing or not clear | 0.6 | 0.0 | 0.3 | 4.0
Bad visibility | 0.6 | 0.0 | 0.3 | 3.3
Darkness | 0.6 | 0.0 | 0.3 | 1.7
Errors (skill-based/judgment/decision) | 10.2 | 3.5 | 6.8 | 8.9
Fairway | 4.2 | 2.9 | 3.6 | 4.1
Hazardous natural environment | 2.4 | 7.6 | 5.1 | 5.2
Inappropriate communication and cooperation | 0.6 | 4.1 | 2.4 | 10.3
Inappropriate maintenance | 2.4 | 0.0 | 1.2 | 0.6
Inappropriate regulations and practices | 9.0 | 31.8 | 20.5 | 7.9
Inappropriate route planning | 0.0 | 1.8 | 0.9 | 6.5
Inappropriate ship/bridge system design or equipment | 3.6 | 2.9 | 3.3 | 3.8
Inappropriate training | 5.4 | 0.6 | 2.9 | 5.1
Lack of redundancy | 7.8 | 0.6 | 4.1 | 1.0
Lack of situational awareness | 0.6 | 0.0 | 0.3 | 3.0
Mechanical failure or unexpected behavior | 32.9 | 7.6 | 20.1 | 3.5
Organizational factors and support | 5.4 | 32.9 | 19.4 | 7.0
Other personal factors | 1.2 | 1.2 | 1.2 | 6.2
Ship moving off course | 0.6 | 0.0 | 0.3 | 3.7
Traffic | 2.4 | 0.0 | 1.2 | 0.6
Under-manning of necessary stations | 0.0 | 0.0 | 0.0 | 5.6
Violation of good seamanship practices | 0.6 | 2.4 | 1.5 | 7.0

The effective SFs that acted as barriers also belong to the External Safety Factors, as well as to the Crew Competencies; the detailed SF categories (see Table A-2) show that the manageability of the external weather conditions, as well as the readiness of the crew for the upcoming demanding situation, helped the crew to manage the threats they faced safely (Fig. 3).

Moreover, looking at each group of incident reports separately (i.e. ForeSea and Finnpilot), some patterns can be seen that are most probably rooted in the way the reports are prepared. The most frequent contributing factor in the incidents reported to ForeSea is the unavailability of propulsion, while for the Finnpilot reports it is pilotage-related factors (see Table A-2). Since the ForeSea incident reports are prepared voluntarily by the person who was involved in the incident, this may be a sign that people tend to see technological failures, rather than human-related factors, as the cause of a mishap. In contrast, the Finnpilot reports, which are also prepared by the pilot who was involved in the incident, show that pilotage-related difficulties and problems, a human-related factor, are the dominant contributing factor in the incidents. One possible reason for this contradiction might be that, unlike the ForeSea incident reports, the incidents reported by Finnpilot are mandatory reports requested by the company, which must be prepared right after each incident. Besides, the Finnpilot reports concern incidents in which at least two independent parties were involved (i.e. the pilot for the pilotage company and the crew for the shipping company). Thus, clearer and more informative reports are naturally prepared, as there were at least two independent witnesses (i.e. sources of information). Moreover, although the ForeSea reports also have the purpose of informing the maritime community about possible threats, the Finnpilot reports are internal reports requested by a company that has a clearer purpose for them, i.e. enhancing the piloting services it offers. This may show the weakness of voluntary reports, compared with mandatory reports, as a reliable source of information for uses other than enhancing safety awareness throughout the maritime community.

4.3. Comparison of incidents and accidents

Fig. 3. The most frequent safety barriers (green; positive SFs) and contributing factors (red; negated SFs) in the reviewed incidents.

Since the accident and incident reports were reviewed with different approaches, in order to be able to compare the knowledge extracted from them we translated the results of both into a common terminology, using general categories that are based on failure terminology. Categories from Rothblum (2000) and McCafferty and Baker (2006) were used as guidelines in building the general categories for this part. Since fixed categories may result in information loss and thus introduce uncertainty (Hänninen et al., 2013), a dynamic category definition, in which the categories change during the process, was implemented. The process was both iterative and collective, meaning that once all the contributing factors extracted from the reports were at hand, they were assigned to pre-created categories. If a contributing factor could not be fitted into an existing category, or a change to the taxonomy of an existing category was needed to accommodate it, a new category was created. Whenever a new category was created, all the contributing factors were checked again to see whether the new category was a better fit for any of the previously assigned factors, as sketched below.
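The loop below is a minimal sketch of this dynamic categorization, assuming the "does it fit" judgment can be stubbed out; in the study that judgment was made collectively by the reviewers, so the fits() test, the seed categories, and the factor strings are placeholders only.

# Minimal sketch of the dynamic categorization loop described above; fits(),
# the seed categories and the factor strings are placeholders, since in the
# study the fit judgment was a collective human decision, not string matching.
def fits(factor, category):
    return category.lower() in factor.lower()   # placeholder for a human judgment

def categorize(factors, seed_categories):
    categories = list(seed_categories)
    assignment = {}
    for factor in factors:
        match = next((c for c in categories if fits(factor, c)), None)
        if match is None:
            # No existing category fits: create a new one ...
            match = factor
            categories.append(match)
            # ... and re-check the already assigned factors against it.
            for done in assignment:
                if fits(done, match):
                    assignment[done] = match
        assignment[factor] = match
    return assignment

extracted = ["mechanical failure of steering gear",
             "judgment error by the master",
             "unexpected blackout in the engine room"]
print(categorize(extracted, ["Error", "Mechanical failure"]))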

The results of this process for both accident and incident reports are shown in Table 3. As mentioned before, it can be seen that mechanical failure or unexpected behavior of equipment is by far the most frequent mishap category reported in the ForeSea incident reports. The next category, a distant second to mechanical failure, is errors. Knowing that the ForeSea incident reports are voluntary reports written mostly by the person who was involved in the incident, this may be a sign that people tend to see technological failures, rather than the human element, as the cause of a mishap. Although "errors" comes next in the list, all the errors reported in the reviewed ForeSea reports were made by a person other than the reporter, and most of them by people outside the circle of the crew, such as port operators or an individual in a third company. Only one report contained an error made by a crew member, and it was reported by his superior as the cause of the mishap. This may show that, despite the existing discussions opposing the blame culture (see for example Russell, 1999 and Bond, 2008), this culture is still alive in the minds of mariners and makes them hesitant either to see or to report mistakes made by themselves or their close colleagues. This could be seen as another reason why voluntary reports in their current format are not reliable enough to be used for evidence-based risk modeling.

Looking at the most frequent factors from the Finnpilot reports, inappropriate regulations and organizational factors come first in the list. These are mostly related to difficulties in the performed pilotage tasks, when, for example, the pilot either embarked after the pilot boarding position or disembarked before it, because the rules on these issues are not clear. Although such deviations are mostly justified (e.g. due to weather conditions), it can be seen that the pilots reported them as threats to safety. This threat was also spotted as a contributing factor in some of the accidents in the reviewed SIAF reports. Like the Finnish pilots, the SIAF accident investigators pointed out the generality and ambiguity of the regulations on pilotage practices, whether issued by the maritime authorities or by the ship operating companies, as a threat to safety.

4.3.1. Statistical analysis

The Spearman rank correlation between the rank orders of contributing factors for accident and total incident reports (Table 3), which shows the difference between the most frequent causes in the accident and incident reports, is weak in general (ρ ≈ 0.3). Nevertheless, the rank correlation changes significantly if we perform the test for the accident reports and each source of incident reports (i.e. Finnpilot and ForeSea) separately. The rank correlation between the accident reports and the Finnpilot incident reports is quite strong (ρ ≈ 0.6, p < 0.05), while the same test with the ForeSea database shows an extremely weak correlation (ρ ≈ 0.07). This difference may be interpreted in two ways. The first interpretation comes from the way the reports are prepared. Since a systematic approach is followed for the accident reports, they may be considered more reliable with regard to the presented knowledge. Thus, the difference between the rank orders of the important contributing factors in accident and incident reports may be a good criterion for examining the reliability of the incident data sources. Based on this assumed criterion, we may conclude that the uncertainties involved in the voluntary incident reports of ForeSea are so high that the reports are almost unreliable for use as such in evidence-based risk modeling. Thus, unless the current way of investigating and voluntarily reporting incidents is changed, the reports may only be useful in the way they are used nowadays, i.e. as alerts of possible threats in the workplace or for increasing safety awareness among mariners. In contrast, the mandatory Finnpilot incident reports and the information embedded in them can be considered reliable enough to be utilized in evidence-based risk modeling.
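For illustration, the Python fragment below computes Spearman's ρ for a small subset of the Table 3 categories; the ρ values quoted above are based on the full table, so this subset is not expected to reproduce them.

from scipy import stats

# Illustrative only: a subset of the Table 3 category frequencies.  The rho
# values reported in the text are computed over the full table, so this small
# sample will not reproduce them exactly.
categories = ["Errors", "Inappropriate communication and cooperation",
              "Inappropriate regulations and practices",
              "Mechanical failure or unexpected behavior",
              "Organizational factors and support"]
accident  = [8.9, 10.3, 7.9, 3.5, 7.0]    # % in accident reports (Table 3)
foresea   = [10.2, 0.6, 9.0, 32.9, 5.4]   # % in ForeSea incident reports
finnpilot = [3.5, 4.1, 31.8, 7.6, 32.9]   # % in Finnpilot incident reports

for name, incident in (("ForeSea", foresea), ("Finnpilot", finnpilot)):
    rho, p = stats.spearmanr(accident, incident)
    print(f"accident vs {name}: rho = {rho:.2f}, p = {p:.2f}")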

The second interpretation comes from the idea that incidents are incomplete chains of events toward an actual accident. This means that a more in-depth parallel analysis of accident and incident reports may reveal the risk control options that were in place to stop the incident from becoming an accident. For instance, although errors rank second in frequency in both types of reports (see Table 3), based on the frequencies of inappropriate communication and cooperation in the accident and incident reports we may conclude that appropriate communication and cooperation in the incident cases stopped the situation from becoming serious, thus highlighting the importance of proper interaction among the crew. This conclusion is confirmed to some extent by the statistical results of the study (see Fig. 4A). The Finnpilot incident reports show that inappropriate communication and cooperation interrupts the flow of information (i.e. Availability of timely and reliable information/Aboard the ship; see Table A-2) between the crew (r = 0.6, p < 0.01), which in turn increases the likelihood of errors. In such cases, the skills and knowledge of a pilot (i.e. External Safety Factors/Pilotage; see Table A-2) may compensate for this flaw (r = 0.6, p < 0.01) and act as a safety barrier. Moreover, inappropriate communication and cooperation is linked to inappropriate route planning (r = 0.5, p < 0.05) in the accidents. The importance of appropriate route planning itself can be understood from the link it has with the presence of a pilot (r = 0.5, p < 0.05) in the accident cases, knowing that in about 40% of the reviewed grounding accident reports a licensed pilot was present onboard the vessel. However, the Finnpilot reports show that when the pilotage does not go as planned, in favorable environmental conditions and good visibility (i.e. External Safety Factors/Manageability of threats related to conditions; see Table A-2) the awareness of the crew on the bridge can save the day by recognizing the error in time (r = 0.9, p < 0.01). It is worth mentioning that there is also a strong link between inappropriate communication and personal factors (r = 0.7, p < 0.01) in the incident reports in general, which may show how the personality of the crew affects safety through inappropriate communication.

The results also show that lack of redundancy could be a factor affecting safety, as it is reported in about 4% of the incident cases and shows a strong link with accidental loss of control (r = 0.8, p < 0.01) in the accidents (see Fig. 4B). The results of the accident cases show that accidental loss of control can itself be affected by inappropriate training (r = 0.8, p < 0.01) and by inappropriate maintenance through technical failures (r = 0.7, p < 0.05).

Under-manning, which is reported in almost 6% of the accident cases, shows a strong connection with inappropriate regulations and practices (r = 0.6, p < 0.01), which are present in both incident (~21%) and accident (~8%) cases (see Fig. 4C). This illustrates the importance of proper policies and training, especially in human resource management, for maritime safety.

5. Discussion

The results show that the most frequent active failure contributing to the reviewed grounding accidents is operators' errors (Table 1). Operators' errors also rank second among the frequent causes in both accident and incident reports (Table 3). Combining this knowledge with the high frequency of inappropriate communication and cooperation in the accidents and the low frequency of the same factor in the incident reports (Table 3) highlights the importance of proper interaction between the crew members. Most of the operators' errors may be detected and avoided in time through proper monitoring and checking, which is the result of proper communication and cooperation (Fig. 4A). Moreover, the statistical analysis of the extracted causes in accident and incident reports suggests that inappropriate planning is an important factor affecting safety with regard to grounding accidents (Fig. 4A). The results show that inappropriate planning can affect maritime safety to the extent that it may even cancel out the presence of a licensed local pilot onboard.

Fig. 4. Interrelations between some of the contributing factors in grounding events, supported by the knowledge extracted from the accident and incident reports. The links are supported by the Pearson coefficients, while the directions of the links are extracted directly from the logic of the reports.

Most of the contributing factors detected in the reviewed accident reports belong to the first and second levels of failures in Fig. 2, which are mostly related to the operators' failures. This may be a sign that either the current frameworks used for analyzing the accident reports are not suitable for detecting the upper-level failures (level 3 upwards in Fig. 2), or the upper-level failures were not highly involved in the studied accidents. This may be interpreted in two ways. A more conservative interpretation is that the active and latent failures of the frontline operators are the failures most responsible for grounding accidents, which then suggests the need for a causality analysis of these failures in order to be able to implement proper control measures. Another interpretation could be that the frameworks used for accident analysis by MAIB and SIAF are not capable of analyzing an accident in depth up to the higher levels of failures, which then suggests the use of a more suitable and updated framework for the investigations. It is worth mentioning that HFACS was not used as the investigation framework in any of the reviewed accident reports. Therefore, since the knowledge presented in each report is affected by the framework used to investigate the accident (Lundberg et al., 2009), our study shows that using a different framework to review the reports does not have much effect on the knowledge that can be acquired.

The analysis of the reviewed incident reports shows that the current practice of voluntarily reporting near-misses cannot contribute much to evidence-based risk modeling, because such reports only highlight active failures as the contributing factors in the occurred mishap. Besides, such incident reports tend to overlook mishaps related to organizations and operators and to emphasize technology failures instead. This is either because the reports are not prepared on the basis of a holistic investigation and only the observations are reported, or because the blame culture is still so present in the minds of the crew that even in an anonymous report they hesitate to report causes other than technology failures. This can be seen as a problem of voluntary near-miss reporting systems, as it makes the incident reports unreliable for detecting the significant causes of an accident that could be used in evidence-based risk modeling. However, the mandatory incident reporting carried out by the companies for their own purposes has better potential to be used for evidence-based risk modeling, especially if more than one party is involved in the event. Although such reports still suffer from the lack of a systematic view, they are considered more reliable with regard to the presented information, and thus a more consistent source for evidence-based risk modeling, as the blame culture seems to have no effect on them. For instance, the study of the Finnpilot reports, which are prepared by the pilots themselves, shows that difficulties and problems during pilotage are responsible for the incidents in more than 38% of the cases.

Moreover, since incidents can be seen as broken chains of hazardous events that did not end in an accident, if the current practice of reporting near-misses is changed into a more systematic way of analysis, these reports may be used in evidence-based risk modeling in combination with accident reports in order to find proper safety barriers and control options. One cannot say much about possible control options merely by knowing the most frequent causes. Effective control options can be defined and implemented only if the interrelations between the contributing factors are known. The interrelations between the factors were not studied in this paper as such; however, the study shows that statistical analysis of the frequency of the causes may give hints regarding the causal relations between the extracted causes (see Fig. 4). Analysis of causal relations may tell why a mishap like inappropriate communication occurred. For instance, we found that the causes recognized by the investigators for inappropriate communication and cooperation on the bridge varied, for example:

• Steep authority gradient due to the master's age or years of experience.

• Lack of guidelines from the shipping or piloting companies regarding cooperation and communication in different situations.

• Over-trust in one's own knowledge and maneuvering skills.

• Established faulty practices, such as seeing piloting as a one-man job.

This variation in the reasoning behind a single cause shows the difficulty of addressing causality issues, as each needs a different approach.

Additionally, the majority of the reviewed near-miss grounding cases from ForeSea were categorized as near-miss groundings because a mechanical problem, such as a steering malfunction, could have led to a possible loss of maneuverability. Therefore, near-miss grounding cases extracted from the ForeSea database may not be precisely near-miss accidents from our perspective. This also shows that incident databases should be used cautiously in evidence-based risk modeling, as the near-miss definition for an accident type may differ from the one used in such databases. Besides, by looking at the frequencies in the incident reports, some level of subjectivity in the reports can be recognized. The Finnpilot reports mostly highlight the causes that are more important for the pilots, and the ForeSea reports may show some level of underreporting of human elements. This may call into question the reliability of near-miss reports in general for detecting the contributing factors in accident modeling.

Despite the above discussion, the study is bounded by the uncertainties involved. The use of specific databases to extract the reports may introduce some level of uncertainty rooted in the way the databases are formulated. Thus, the study needs to be repeated using reports prepared by other authorities and data providers. Besides, although we tried not to investigate the events further, in order to avoid subjectivity when reviewing the reports, some level of reviewer subjectivity may still be introduced into the extracted knowledge as uncertainty. However, this subjectivity is considered not very significant (Hyttinen, 2013; Hyttinen et al., 2014).

Table A-1

Description of the HFACS-Ground categories.

Skill-based error: Errors occur in the operator's execution of a routine and practiced task relating to procedure, training or proficiency.
Judgment/Decision error: Actions of an individual are performed as intended, but the chosen action was inadequate or wrong and did not lead to the desired end-state.
Perceptional error: Misperception of an object, threat, or situation causes a human error.
Routine violation: Actions of the operator that happen on a regular basis, deliberately disregarding rules and instructions.
Exceptional violation: The operator intentionally violates procedures or policies without need; this mostly happens due to lack of discipline of an individual.
Physical environment: Environmental phenomena such as weather and climate affect the actions of individuals and result in an unsafe situation.
Technological environment: Design of the workspace or failure of an automation system affects the actions of individuals and results in an unsafe situation. Mechanical failure or breakage of equipment that is necessary for ship handling is included.
Infrastructures: Design of the waterway/fairway or markings/nav-aids is inadequate and creates an unsafe situation. The availability and adequacy of pilot boarding places for the area is also included.
Cognitive factors: Cognitive conditions affect the perception or performance of individuals and result in an unsafe situation.
Psycho-behavioral factors: An individual's personality traits, psychosocial problems, psychological disorders, or inappropriate motivation create an unsafe situation.
Adverse physiological states: A physiological event decreases the performance of an individual and results in an unsafe situation.
Physical/mental limitations: The individual lacks the physical or mental capabilities to cope with a situation, which causes an unsafe situation.
Perceptual factors: Misperception of an object, threat, or situation creates an unsafe situation.
Coordination, Communication, Planning: Inadequate interactions among individuals, crews, and teams involved with the preparation, planning, and execution of a mission result in an unsafe situation. It includes inappropriate or inadequate ship and bridge resource management that affects the functionality of operators and results in an unsafe situation.
Personal readiness: The individual disregards rules and instructions that govern the individual's readiness, or exhibits poor judgment when it comes to readiness to perform the mission.
Inadequate supervision: Supervision is inappropriate or improper, and fails to identify hazards, recognize and control risks, or provide guidance and training, which results in an unsafe situation.
Planned inappropriate operations: Supervision fails to adequately assess the hazards associated with an operation, or allows non-proficient or inexperienced personnel to participate in missions beyond their capabilities.
Failed to correct known problems: Supervision fails to correct known deficiencies in documents, processes or procedures, or fails to correct inappropriate or unsafe actions of individuals.
Supervisory violations: Supervision willfully disregards instructions, guidance, rules, or operating instructions.
Resource management: Resource management or acquisition processes or policies, directly or indirectly, influence system safety and result in poor error management or create an unsafe situation.
Organizational climate: Organizational environment, structure, policies, and culture influence individual actions and result in an unsafe situation.
Organizational process: Organizational operations and procedures negatively influence individual, supervisory, or organizational performance and result in an unsafe situation.
Regulation gaps: International or national regulations, laws, or policies influence system safety and result in poor management or unsafe actions of the operator.
Other factors: Decisions, actions, or products from outside the organization influence system safety and result in poor management or unsafe actions of the operator.

6. Conclusion

The possibility of using accident and incident reports in evidence-based risk modeling was investigated using accident reports prepared by the Finnish and British accident investigation boards (SIAF and MAIB), as well as the ForeSea and Finnpilot incident databases, with regard to grounding cases. A version of the HFACS framework, HFACS-Ground, is introduced to review the grounding accident reports, and the concept of Safety Factors, i.e. high-level positive functions that are prerequisites for safe transport operations, is used for reviewing the incident reports. In conclusion, accident reports are seen as a reliable source of evidence for extracting the most significant contributing factors in grounding accidents. Nevertheless, their reliability will be better confirmed if their usability for extracting the interrelations between the contributing factors is also tested in future studies. On the other hand, voluntary incident reports are shown to be not very useful or reliable in their current form for evidence-based risk modeling, while mandatory incident reports are in a better position in that respect. The voluntary reports may only be useful in the way they are currently used, as alerts for possible hazards in the daily operation of shipping. In general, in order to make incident reports useful for accident modeling, they first need to be prepared in a more systematic way that can address the causality of the mishaps, and second, a more consistent definition of a near-miss situation needs to be established to reliably assign occurred mishaps to a specific type of accident.

Moreover, the results of this study, namely the frequent contributing factors extracted from grounding accidents and the interrelations detected between some of those factors, can be used for structuring a risk model for grounding accidents. Since these results are based on actual accident and incident cases (i.e. real scenarios), a model structured from them can be considered an evidence-based risk model. Such a model, as discussed briefly in the Introduction and more comprehensively in Mazaheri et al. (2014b), is suitable for risk management purposes, as it reflects the available background knowledge of the system and its components.
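For illustration only, and not the model developed in this study: the following minimal Python sketch shows one way that contributing-factor frequencies extracted from reports could be combined into a simple grounding-probability node using a noisy-OR assumption, a common device when structuring Bayesian network risk models. All factor names and probabilities below are hypothetical placeholders, not values taken from this paper.

```python
from itertools import product

# Hypothetical per-voyage presence probabilities of contributing factors
# (in practice these would be estimated from report frequencies and exposure data).
factor_presence = {
    "loss_of_situation_awareness": 0.05,
    "inadequate_pilotage": 0.02,
    "mechanical_failure": 0.01,
}

# Hypothetical probabilities that each factor alone leads to grounding
# (the "link" probabilities of a noisy-OR gate).
link_prob = {
    "loss_of_situation_awareness": 0.10,
    "inadequate_pilotage": 0.15,
    "mechanical_failure": 0.05,
}

def grounding_probability(present):
    """Noisy-OR: P(grounding | factors) = 1 - prod over present factors of (1 - p_i)."""
    p_no_grounding = 1.0
    for factor, is_present in present.items():
        if is_present:
            p_no_grounding *= 1.0 - link_prob[factor]
    return 1.0 - p_no_grounding

# Marginal grounding probability, enumerating all combinations of factor states.
factors = list(factor_presence)
p_grounding = 0.0
for combo in product([False, True], repeat=len(factors)):
    present = dict(zip(factors, combo))
    p_scenario = 1.0
    for f, on in present.items():
        p_scenario *= factor_presence[f] if on else 1.0 - factor_presence[f]
    p_grounding += p_scenario * grounding_probability(present)

print(f"Illustrative marginal grounding probability: {p_grounding:.4f}")
```

In an actual evidence-based model, the link probabilities would be estimated from the accident-report evidence, and the factors would be connected through the interrelations identified in the analysis rather than treated as independent causes.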

Acknowledgements

This study was conducted partly within the "Minimizing risks of maritime oil transport by holistic safety strategies" (MIMIC) project and partly within the "Winter Navigation Risks and Oil Contingency Plan" (WinOil) project. The projects were funded by the European Union, with financing from the European Regional Development Fund; the Central Baltic INTERREG IV A Programme 2007-2013; the South-East Finland - Russia ENPI CBC 2007-2013 programme; the City of Kotka; the Kotka-Hamina Regional Development Company (Cursor Oy); and the Centre for Economic Development, Transport and the Environment of Southwest Finland (VARELY).

We thank our colleagues Valtteri Laine, Otto Sormunen, Suhail Aziz, and Noora Hyttinen for their assistance in this study. We are also grateful to FinnPilot and ForeSea for providing us with their near-miss databases.

Appendix A

See Tables A-1 to A-3.

Table A-2

Details of the Safety Factors and their presence and absence in the incident reports (values in %). The columns "Pos." give the positive Safety Factors acting as safety barriers; the columns "Neg." give the negation of the Safety Factors, i.e. their appearance as contributing factors in the incidents. An illustrative tabulation sketch is given after the table.

Defined Safety Factors | Finnpilot Pos. | Finnpilot Neg. | ForeSea Pos. | ForeSea Neg.

Fundamental safety factors
Availability of propulsion | 0.0 | 3.7 | 0.0 | 41.0
Awareness of ship position in relation to the correct safe route | 0.0 | 3.7 | 0.0 | 5.0
Capability to evacuate (escape routes, equipment, emergency communications) | 0.0 | 0.0 | 0.0 | 0.0
Capability to maintain survivable conditions aboard ship | 0.0 | 0.0 | 0.0 | 0.0
Capability to stop ship and sea-keeping ability | 1.8 | 0.0 | 10.8 | 1.0
Controllability of ship stability | 0.0 | 0.0 | 0.0 | 0.0
Maneuverability | 0.0 | 1.5 | 2.7 | 8.0
Structural integrity and damage stability | 0.0 | 0.0 | 12.2 | 0.0
Technical redundancy | 0.0 | 0.0 | 5.4 | 1.0

Competencies with respect to different crew categories
Application of procedures and knowledge | 2.4 | 0.7 | 9.5 | 4.0
Communication | 3.0 | 2.2 | 0.0 | 1.0
Knowledge | 6.6 | 1.5 | 0.0 | 2.0
Leadership and teamwork | 0.6 | 3.0 | 5.4 | 0.0
Management of ship's route and related automation/equipment | 3.0 | 4.4 | 0.0 | 1.0
Manual steering of ship | 3.0 | 3.0 | 1.4 | 0.0
Problem-solving and decision-making | 3.0 | 0.0 | 8.1 | 0.0
Ship maneuvering in port | 2.4 | 0.7 | 0.0 | 0.0
Situation awareness (including anticipation) | 7.2 | 0.0 | 5.4 | 1.0
Workload management | 0.0 | 0.0 | 0.0 | 0.0

Knowing and respecting operational limitations
Limitations concerning the route, speeds, etc. | 0.0 | 2.2 | 1.4 | 1.0
Shipload planning and loading: stowage, appreciation of cargo characteristics, volume | 0.0 | 0.0 | 0.0 | 0.0

Fitness for work
Psycho-physical performance level | 0.0 | 0.0 | 0.0 | 1.0
Vigilance level | 0.0 | 0.0 | 8.1 | 0.0

Procedures, practices and culture
Adapted to real operational situations | 0.0 | 0.0 | 0.0 | 0.0
Adequate focus on safety in the presence of commercial pressures | 0.0 | 3.7 | 0.0 | 6.0
Anticipating demanding operations and situations | 28.3 | 0.0 | 0.0 | 0.0
Managing a multitude of cultures (and languages) | 0.0 | 0.0 | 0.0 | 1.0
Operational planning | 0.0 | 3.0 | 0.0 | 0.0
Quality and clarity | 0.0 | 2.2 | 0.0 | 0.0

Ergonomics and redundancy
Adequate redundancy within the crew (deck officers) | 0.0 | 0.0 | 1.4 | 0.0
Ergonomics in how information is presented | 0.0 | 0.7 | 0.0 | 0.0
Usability of bridge automation (ergonomics, HCI) | 0.0 | 2.2 | 0.0 | 3.0

Availability of timely and reliable information
Aboard ship | 1.2 | 2.2 | 0.0 | 0.0
Between the ship and the external world | 0.6 | 3.0 | 0.0 | 6.0

External safety factors
Icebreaker assistance | 0.6 | 0.0 | 0.0 | 0.0
Manageability of exceptional phenomena and situations (e.g. icebergs, pirates) | 0.0 | 0.0 | 0.0 | 1.0
Manageability of external threats (e.g. restricted waters, fairways, infrastructure) | 0.0 | 1.5 | 14.9 | 7.0
Manageability of threats caused by other vessels | 0.0 | 0.0 | 8.1 | 3.0
Manageability of threats related to conditions (e.g. weather, visibility, ice, currents) | 27.7 | 9.6 | 1.4 | 3.0
Pilotage | 6.0 | 38.5 | 4.1 | 1.0
Port operations | 0.0 | 3.7 | 0.0 | 1.0
Towage | 0.0 | 0.7 | 0.0 | 1.0
VTS operations | 2.4 | 2.2 | 0.0 | 0.0
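For illustration only, and not the procedure used to produce Table A-2: a minimal Python sketch, assuming incident records have been tagged with Safety Factors that were either present as barriers ("Pos.") or negated as contributing factors ("Neg."), showing how percentages of this kind could be tabulated. The record contents below are hypothetical.

```python
from collections import Counter

# Hypothetical tagged incident records: each record lists Safety Factors that
# acted as barriers (positive) and Safety Factors whose negation contributed
# to the incident (negative).
incidents = [
    {"positive": ["Situation awareness (including anticipation)"],
     "negative": ["Pilotage"]},
    {"positive": [],
     "negative": ["Availability of propulsion", "Pilotage"]},
    {"positive": ["Capability to stop ship and sea-keeping ability"],
     "negative": []},
]

pos_counts = Counter(f for rec in incidents for f in rec["positive"])
neg_counts = Counter(f for rec in incidents for f in rec["negative"])
n = len(incidents)  # here the denominator is the number of incident records

for factor in sorted(set(pos_counts) | set(neg_counts)):
    pos_pct = 100.0 * pos_counts[factor] / n
    neg_pct = 100.0 * neg_counts[factor] / n
    print(f"{factor}: Pos. {pos_pct:.1f}% | Neg. {neg_pct:.1f}%")
```

The choice of denominator (all records in a database versus all tagged factors) is an assumption of this sketch and would need to match the convention actually used in the table.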

Table A-3

Description of the categories used for comparing the incident and accident reports.

General categories and their descriptions:

Accidental loss of control: Losing the maneuvering control of the vessel in a hazardous situation for any reason other than a mechanical failure.

Alarm missing or not clear: An error or a dangerous situation has no assigned alarm, or the alarm was not clear enough to make the crew aware of the danger.

Bad visibility: Poor visibility due to e.g. fog or snow that affects visual or radar visibility.

Darkness: No natural light outside (moon or sun).

Errors (Skill-based/Judgment/Decision): Errors in the operator's execution of a task, in the chosen action, or in the decision made.

Fairway: The design of the fairway makes it hard to navigate, or the fairway markings are inadequate.

Hazardous natural environment: Natural environmental phenomena such as weather and climate create a hazardous situation for performing the mission.

Inappropriate communication and cooperation: Inadequate interactions between the crew on the bridge/engine room.

Inappropriate maintenance: Inappropriate routine maintenance of vital equipment causes an unsafe situation.

Inappropriate regulations and practices: Inappropriate or missing regulations for specific or normal situations (onboard), issued by the company or the authorities, that cause an unsafe situation.

Inappropriate route planning: A route plan is not prepared, or is prepared inadequately, by the master or the pilot for the intended voyage.

Inappropriate ship/bridge system design or equipment: The ergonomic design of the working space or other equipment involved in ship navigation/steering is inappropriate and causes human error or an unsafe situation.

Inappropriate training: The crew has not received the training necessary to do their jobs in normal or emergency situations.

Lack of redundancy: Inappropriate or missing backup system for equipment that is vital for safely performing the mission.

Lack of situational awareness: The individual is uncertain or unaware of what is happening around him/her, e.g. weather conditions and traffic; this includes uncertainty about the location of the vessel.

Mechanical failure or unexpected behavior: Failure of vital equipment such as navigation/steering systems, or unexpected behavior of the equipment, that causes an unsafe situation.

Organizational factors and support: Inappropriate support from the responsible organization or lack of proper regulations/instructions; this includes inappropriate support from the responsible organization in an emergency situation that causes an unsafe situation or a human error.

Other personal factors: Physiological, physical, mental and behavioral state of an individual, such as fatigue, intoxication, distraction, panic, stress, and hurry.

Ship moving off course: Not being on the planned route or in the waterway causes an unsafe situation.

Traffic: Nearby or en-route traffic causes an unsafe situation.

Under-manning of necessary stations: Given the situation, there is not enough crew at one or more vital positions such as the engine room, bridge, and lookout, which causes task accumulation on an individual or creates an unsafe situation.

Violation of good seamanship practices: Any other deviation from routine seamanship practices that makes an individual take unnecessary risk and causes human error or an unsafe situation.

References

Ackoff, R.L., 1989. From data to wisdom. J. Appl. Syst. Anal. 16, 3-9.

Artana, K.B., Putranta, D.D., Nurkhalis, I.K., Kuntjoro, Y.D., 2005. Development of simulation and data mining concept for marine hazard and risk management. In: Proceedings of the 7th International Symposium on Marine Engineering.

Aven, T., 2013. A conceptual framework for linking risk and the elements of the data-information-knowledge-wisdom (dikw) hierarchy. Reliabil. Eng. Syst. Saf. 111, 30-36.

Aven, T., Zio, E., 2011. Some considerations on the treatment of uncertainties in risk assessment for practical decision making. Reliabil. Eng. Syst. Saf. 96, 64-74.

Bole, A.G., Dineley, W., Nicholls, C.E., 1987. The Navigation Control Manual. William Heinemann Ltd., London.

Bond, J., 2008. The blame culture - an obstacle to improving safety. J. Chem. Health Saf. 15 (2), 6-9.

Chauvin, C., Lardjane, S., Morel, G., Clostermann, J.-P., Langard, B., 2013. Human and organizational factors in maritime accidents: analysis of collisions at sea using the HFACS. Accid. Anal. Prev. 59, 26-37.

Chen, S.-T., Chou, Y.-H., 2012. Examining human factors for marine casualties using HFACS - maritime accidents (HFACS-MA). In: 12th International Conference on ITS Telecommunications. Taipei, Taiwan.

Chen, S.-T., Wall, A., Davies, P., Yang, Z., Wang, J., Chou, Y.-H., 2013. A human and organisational factors (HOFS) analysis method for marine casualties using HFACS-maritime accidents (HFACS-MA). Saf. Sci. 60, 105-114.

FinnPilot, 2014. Finnpilot pilotage ltd - firmly on the fairway.

ForeSea, 2014. Purpose. <http://foresea.net/about.aspx>.

Goerlandt, F., Kujala, P., 2014. On the reliability and validity of ship-ship collision risk analysis in light of different perspectives on risk. Saf. Sci. 62 (February), 348-365.

Hänninen, M., 2014. Bayesian networks for maritime traffic accident prevention: benefits and challenges. Accid. Anal. Prev. 73, 305-312.

Hänninen, M., Mazaheri, A., Kujala, P., Montewka, J., Laaksonen, P., Salmiovirta, M., 2014. Expert elicitation of a navigation service implementation effects on ship groundings and collisions in the Gulf of Finland. Proc. Inst. Mech. Eng., Part O, J. Risk Reliabil. 228 (1), 19-28.

Hänninen, M., Sladojevic, M., Tirunagari, S., Kujala, P., 2013. Feasibility of collision and grounding data for probabilistic accident modeling. In: 6th Conference on Collision and Grounding of Ships and Offshore Structures (ICCGS), Trondheim, Norway.

Harrald, J.R., Mazzuchi, T.A., Spahn, J., Dorp, R.V., Merrick, J., Shrestha, S., Grabowski, M., 1998. Using system simulation to model the impact of human error in a maritime system. Saf. Sci. 30 (1-2), 235-247.

Hollnagel, E., 2004. Barriers and Accident Prevention. Ashgate, Hampshire.

Hyttinen, N., 2013. The effect of experience on knowledge extraction from accident reports. Aalto University.

Hyttinen, N., Mazaheri, A., Kujala, P., 2014. Effects of the background and experience on the experts' judgments through knowledge extraction from accident reports. In: Proceedings of the 12th Probabilistic Safety Assessment and Management (PSAM 12), Honolulu, Hawaii, US.

IATA, 2013. Evidence-based training implementation guide. International Air Transport Association, Montreal, Canada.

IMO, 2002. Guidelines for formal safety assessment (FSA) for use in the IMO rulemaking process. MSC/Circ. 1023. International Maritime Organization, London.

IMO, 2012. Formal safety assessment, outcome of MSC 90: draft revised FSA guidelines and draft HEAP guidelines. International Maritime Organization, London.

Johnson, C.W., 2003. Failure in Safety-critical Systems: A Handbook of Accident and Incident Reporting. University of Glasgow Press, Glasgow, Scotland.

Kristiansen, S., 2010. A BBN approach for analysis of maritime accident scenarios. In: Proceedings of the ESREL, Rhodes, Greece.

Kujala, P., Hänninen, M., Arola, T., Ylitalo, J., 2009. Analysis of the marine traffic safety in the Gulf of Finland. Reliabil. Eng. Syst. Saf. 94 (8), 1349-1357.

Ladan, M., Hänninen, M., 2012. Data sources for quantitative marine traffic accident modeling. Aalto University publication series SCIENCE + TECHNOLOGY 11/2012. Aalto University, Espoo, p. 68.

Lundberg, J., Rollenhagen, C., Hollnagel, E., 2009. What-you-look-for-is-what-you-find - the consequences of underlying accident models in eight accident investigation manuals. Saf. Sci. 47, 1297-1311.

Mazaheri, A., Kotilainen, P., Sormunen, O., Montewka, J., Kujala, P., 2013a. Correlation between the ship grounding accident and the ship traffic - a case study based on the statistics of the Gulf of Finland. Int. J. Marine Navigat. Saf. Sea Transport. 7 (1), 119-124.

Mazaheri, A., Sormunen, O.-V.E., Hyttinen, N., Montewka, J., Kujala, P., 2013b. Comparison of the learning algorithms for evidence-based BBN modeling - a case study on ship grounding accidents. In: Annual European Safety and Reliability Conference (ESREL 2013), Amsterdam, The Netherlands.

Mazaheri, A., Montewka, J., Kotilainen, P., Sormunen, O.-V.E., Kujala, P., 2014a. Assessing grounding frequency using ship traffic and waterway complexity. J. Navigat., 1-18

Mazaheri, A., Montewka, J., Kujala, P., 2014b. Modeling the risk of ship grounding -a literature review from a risk management perspective. WMU J. Maritime Aff. 13 (2), 269-297.

Mccafferty, D.B., Baker, C.C., 2006. Trending the causes of marine incidents. 3rd Learning from Marine Incidents Conference. London, UK.

Montewka, J., Goerlandt, F., Kujala, P., 2014. On a systematic perspective on risk for formal safety assessment (FSA). Reliabil. Eng. Syst. Saf. 127 (July), 77-85.

Nisula, J., 2014. Safety factors in the 'Tiedosta toimenpiteisiin' (TITO) project. Liikenteen Turvallisuusvirasto, Trafi, Helsinki.

Pearl, J., 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco.

Reason, J., 1990. The contribution of latent human failures to the breakdown of complex systems. Philos. Trans. Roy. Soc. London, Ser. B, Biol. Sci. 327 (1241), 475-484.

Reinach, S., Viale, A., 2006. Application of a human error framework to conduct train accident/incident investigations. Accid. Anal. Prev. 38, 396-406.

Rothblum, A.M., 2000. Human error and marine safety. National Safety Council Congress and Expo, Orlando.

Rothblum, A.M., Wheal, D., Withington, S., Shappell, S.A., Wiegmann, D.A., 2002. Improving incident investigation through inclusion of human factors. Publications and Papers, Paper 32. United States Department of Transportation.

Russell, L., 1999. Incident investigation: fix the problem, not the blame. J. Chem. Health Saf. 6 (1), 32-34.

Samuelides, M.S., Ventikos, N.P., Gemelos, I.C., 2009. Survey on grounding incidents: statistical analysis and risk assessment. Ships Offshore Struct. 4 (1), 55-68.

Schröder-Hinrichs, J.U., Baldauf, M., Ghirxi, K.T., 2011. Accident investigation reporting deficiencies related to organizational factors in machinery space fires and explosions. Accid. Anal. Prev. 43, 1187-1196.

Shappell, S.A., Wiegmann, D.A., 1997. A human error approach to accident investigation: the taxonomy of unsafe operations. Int. J. Aviat. Psychol. 7, 269-291.

Shappell, S.A., Wiegmann, D.A., 2000. The human factors analysis and classification system - HFACS. In: Dot/Faa/Am-00/7 ed. Federal Aviation Administration, Washington DC.

Shappell, S.A., Detwiler, C., Holcomb, K., Hackworth, C., Boquet, A., Wiegmann, D.A., 2007. Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system. Hum. Factors 49, 227-242.

Tirunagari, S., Hänninen, M., Guggilla, A., Ståhlberg, K., Kujala, P., 2012a. Impact of similarity measures on causal relation based feature selection method for clustering maritime accident reports. J. Glob. Res. Comput. Sci. 3 (8), 46-50.

Tirunagari, S., Hänninen, M., Ståhlberg, K., Kujala, P., 2012b. Mining causal relations and concepts in maritime accidents investigation reports. In: Proceedings of the International Conference cum Exhibition on Technology of the Sea.

Wiegmann, D., Faaborg, T., Boquet, A., Detwiler, C., Holcomb, K., Shappell, S., 2005. Human error and general aviation accidents: a comprehensive, fine-grained analysis using HFACS. Office of Aerospace Medicine.