Accepted Manuscript

A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors

Michael Hutchinson, Hyondong Oh, Wen-Hua Chen

PII: S1566-2535(16)30152-X

DOI: 10.1016/j.inffus.2016.11.010

Reference: INFFUS 821

To appear in: Information Fusion

Received date: 25 August 2016
Revised date: 26 October 2016
Accepted date: 14 November 2016

Please cite this article as: Michael Hutchinson, Hyondong Oh, Wen-Hua Chen, A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors, Information Fusion (2016), doi: 10.1016/j.inffus.2016.11.010


Highlights

• A review of techniques to gain information about atmospheric dispersion is presented.

• Optimisation- and Bayesian inference-based estimation methods are summarised.

• Mobile sensors provide an ideal platform for data gathering of atmospheric events.

• The current limitations and recommendations for future research are discussed.




Abstract

Understanding atmospheric transport and dispersal events has an important role in a range of scenarios. Of particular importance is aiding in emergency response after an intentional or accidental chemical, biological or radiological (CBR) release. In the event of a CBR release, it is desirable to know the current and future spatial extent of the contaminant as well as its location in order to aid decision makers in emergency response. Many dispersed substances are not visible to the eye, so monitoring them using visual methods is difficult or impossible. In these scenarios, relevant concentration sensors are required to detect the substance, where they can form a static network on the ground or be placed upon mobile platforms. This paper presents a review of techniques used to gain information about atmospheric dispersion events using static or mobile sensors. The review is concluded with a discussion of the current limitations of the state of the art and recommendations for future research.

Index Terms

Source Estimation, Inverse Modelling, Boundary Tracking, Atmospheric Dispersion, Optimisation, Bayesian Inference, Source localisation, Dispersion Modelling

I. INTRODUCTION

The growing threat of terrorism [1], the Fukushima nuclear accident (2011) [2] and the Eyjafjallajokull volcanic eruption (2010) [3] are significant events with a detrimental impact on public health and several industries including aviation and transport. What these events have in common is the dispersal of hazardous material into the atmosphere. Atmospheric transport and dispersion (ATD) models are used to forecast the spread of the contaminants to provide emergency responders with crucial intelligence to aid efficient response and post emergency assessment. For an accurate forecast, several variables are needed as an input to the model including, but not limited to: meteorological data, the strength of the release and its location. In general, sparse meteorological data are available from local weather stations or even across the globe. The strength, location and time of the release are often unknown, and thus should be inferred from relevant sensor measurements.

This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) Grant number EP/K014307/1, the MOD University Defence Research Collaboration in Signal Processing and the Future Innovation Research Fund (Project Number 1.160086) of UNIST (Ulsan National Institute of Science and Technology).

Michael Hutchinson and Wen-Hua Chen are with the Department of Aeronautical and Automotive Engineering, Loughborough University, Loughborough, Leicestershire, LE11 3TU, United Kingdom

Hyondong Oh is with the School of Mechanical and Nuclear Engineering, UNIST, Ulsan, South Korea h.oh@unist.ac.kr

For visibly detectable substances, such as volcanic ash, satellite images are the preferred form of measurement data [3]; however, this approach is limited in terms of spatial and temporal resolution of the satellite and obstruction by clouds. Alternatively, sensors that can measure the concentration of ash or a chemical, biological, radiological or nuclear (CBRN) substance are available. The determination of source parameters from these sensor measurements is a problem in inverse modelling; the inverse problem is highly non-linear, ill-posed [4] and subject to input data that are typically sporadic, noisy and sparse [5]. Traditionally, with regards to CBRN source term estimation (STE), a network of static sensors on the ground is used to estimate the source term as illustrated in Fig. 1. A benefit of this approach lies in early detection near places of strategic importance (e.g. nuclear power-plant sites). However, for accidents or deliberate attacks in random places, it is infeasible to cover all regions of importance with sensors dense enough to determine the source before it has spread significantly.

Fig. 1. Example of a static sensor network, showing sensor positions, measured concentrations and the source location relative to the wind direction.

With the technological developments in sensing and robotics, mobile sensors such as unmanned aerial vehicles (UAVs) are now well equipped for STE. Mobile sensors provide the additional ability to perform boundary tracking of the contaminant and source seeking to aid in the emergency response. Boundary tracking will provide a direct picture of the spatial extent of the contaminant without modelling efforts. For instance, mobile sensors have been employed to determine the spread of a range of boundaries such as oil spills [6], forest fires [7], ocean temperatures [8] and the growth of harmful algae bloom [9]. Since the ultimate goal of STE is to predict the spread of hazardous material, the boundary can be used as a means to verify the source estimate. In addition, the detected boundary can be used as additional observational data within STE algorithms and to constrain the parameter space. Source seeking will attempt to drive the robot to the location of an emitting source without a direct attempt to estimate the release rate; similarly to boundary tracking, this provides an estimate without modelling efforts. Using mobile sensors for STE introduces an additional area of research concerning how to optimally move the sensor in order to produce the best estimate of source parameters in the minimum amount of time or effort. The method is related to a number of robotics research areas such as autonomous search, multiple robot cooperation, informative path planning and control.

In this paper, the techniques used to gain information about atmospheric dispersion events are explored where the substance is not detectable visibly. This includes STE using static or mobile sensors, boundary tracking and source seeking. Although there are a few reviews on STE using static sensors [4], [10], [11], this paper aims to provide a more up to date and thorough review, featuring many new developments in the area and also an extension to the application of mobile sensors.

This paper is organised as follows. Section II provides a brief discussion of dispersion modelling, the adjoint source-receptor relationship and STE datasets. Section III contains a review of STE techniques using a static network of sensors. Section IV presents a review of the literature on the use of mobile sensors to gain information about dispersing phenomena, specifically boundary tracking, source seeking and STE. Section V provides conclusions and recommendations for future research.

II. PRELIMINARY BACKGROUND

Dispersion modelling, the adjoint source-receptor relationship and experimental dispersion datasets are of high importance to source term estimation and will be referred to several times throughout this paper. However, since they are not the main focus of this review, only a brief outline is provided in this section. For more detailed information on atmospheric dispersion, the interested reader is referred to [12].

A. Dispersion modelling

Atmospheric transport and dispersion models are used to estimate the dispersion of pollutants into the atmosphere. Models in the literature vary in terms of applicable scenarios, assumptions and complexities. Five types of fundamental dispersion models exist along with a number of hybrids and extensions of them as below:

• Box models [13]

• Gaussian plume models [14]

• Lagrangian models [15]

• Eulerian dispersion models [16]

• Dense gas models [17], [18].

A comprehensive list of atmospheric transport and dispersion (ATD) models is provided by the US Environmental Protection Agency (EPA), including sections for recommended and alternative models. For more information a review can be found in [19]. In this section, the Gaussian plume model is described in further detail as it has been popular throughout the literature in STE due to its simplicity and fast computation. The key parameters in the model are the atmospheric turbulence coefficients σy and σz, which represent standard deviations that describe the crosswind and vertical mixing of the pollutant. Several derivations of these values exist, where a popular approach is based on Pasquill's atmospheric stability class [20]. The equation of the Gaussian plume is derived from the turbulent diffusion equation by assuming homogeneous, steady state flow and a steady state point source, resulting in:

C(x, y, z; Q) = \frac{Q}{2\pi u \sigma_y \sigma_z} \exp\left(-\frac{y^2}{2\sigma_y^2}\right) \left[ \exp\left(-\frac{(z-h)^2}{2\sigma_z^2}\right) + \exp\left(-\frac{(z+h)^2}{2\sigma_z^2}\right) \right]    (1)

where C is a concentration at a given position, Q is the release rate, x, y and z are the downwind, crosswind and vertical distances, and u is the mean wind speed at the height h of the release [3]. Several extensions of the Gaussian plume model exist to overcome some of its limiting assumptions such as the Gaussian puff model.
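For illustration, Eq. (1) translates directly into code. The sketch below uses purely illustrative parameter values; in practice σy and σz would be obtained from, e.g., Pasquill stability classes and would grow with downwind distance x (held fixed here for simplicity):

```python
import math

def gaussian_plume(x, y, z, Q, u, h, sigma_y, sigma_z):
    """Concentration of a steady point release at height h (Eq. (1)).

    x, y, z: downwind, crosswind and vertical distances (m).
    Q: release rate (g/s); u: mean wind speed (m/s) at release height.
    sigma_y, sigma_z: crosswind/vertical dispersion coefficients (m).
    The (z + h) term models reflection of the plume from the ground.
    """
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Illustrative values only: 10 g/s release at 2 m height, 5 m/s wind.
c = gaussian_plume(x=100.0, y=10.0, z=1.5, Q=10.0, u=5.0, h=2.0,
                   sigma_y=8.0, sigma_z=4.0)
```

Note that the model is symmetric in the crosswind direction and linear in the release rate Q, a property exploited by several of the estimation methods reviewed below.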

B. The adjoint source-receptor relationship

The adjoint source-receptor relationship is created by an inverse run of an ATD model from a sensor. Effectively the ATD model is run where sensors act as sources and meteorological variables such as wind speed are reversed. Concentrations expected at that sensor can then be calculated for any source term by computing the inner product of the source distribution and the adjoint concentration field [21] .
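A toy discretised sketch of this inner product (with hypothetical sensitivity and source values) shows why a single backward run can be reused across many candidate source terms:

```python
def expected_reading(adjoint_field, source_distribution):
    """Expected concentration at the sensor that generated adjoint_field.

    adjoint_field[i]: sensitivity of the sensor to a unit release in grid
    cell i (from one backward ATD run); source_distribution[i]: candidate
    release rate in cell i. Their inner product predicts the sensor
    reading, so one adjoint run tests any number of candidate sources.
    """
    return sum(a * s for a, s in zip(adjoint_field, source_distribution))

# One backward run per sensor, reused across candidates (toy values).
adjoint = [0.0, 0.02, 0.10, 0.30]       # sensitivities on a 4-cell grid
candidate_a = [0.0, 0.0, 5.0, 0.0]      # release of 5 units in cell 2
candidate_b = [0.0, 0.0, 0.0, 5.0]      # release of 5 units in cell 3
```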

Within the literature, the adjoint source-receptor relationship has been used standalone to estimate the source term [22], and to quantify the uncertain relationship/sensitivity between source parameters and sensor concentration readings [23]. By using the adjoint, the number of potentially expensive dispersion model runs can be significantly reduced as a single adjoint can be used to test multiple inferences [21]. This provides great benefit in scenarios which prefer a complex and computationally expensive ATD model. However, the adjoint can be limited by non-linearities in the concentration field and, in some complex scenarios (e.g. urban environments), the backwards and forwards dispersion runs will not match. This can be caused by effects from building interactions or puff splitting. Nonetheless, these complex events have seen limited research in the literature on STE.

A simplified version of the adjoint models are back trajectory techniques, where only the inverse run is used. The method is effective in splitting up regions where a source may occur by incorporating null sensor measurements to determine where it is likely the source is not present [24], effectively reducing the parameter space for the location estimate. Back trajectory techniques have a number of limitations, the most critical of which is the reliance on rich meteorological information. Under situations where meteorological data are inaccurate, unreliable or unavailable, the accuracy of STE will suffer. Despite this, the method is effective when used to determine likely source regions as an initial guess in estimation algorithms.

C. Experimental datasets

Experimental datasets are of high importance to validate STE algorithms. When tested upon experimental data, significant STE accuracy is often lost. This is most likely due to discrepancies between the ATD model simulations and real dispersion events and to the current ability (e.g. accuracy and resolution) of available sensors. Collecting atmospheric transport datasets is an expensive task and significant planning is required. For this reason, the number of available datasets is quite limited. Popular datasets used to validate STE algorithms are the Fusion Field Trial 2007 (FFT07) experiment [25] and the Joint Urban Experiment 2003 [26]. The datasets can vary among equipment used, the amount of meteorological information available, the contaminant material and the experiment scale. Alternative experimental methods use wind tunnels to validate STE algorithms, for example the mock urban setting test (MUST) [27]. These experiments benefit from better knowledge of the wind field, enabling researchers to focus on refining STE algorithms with less meteorological or dispersion modelling uncertainties. A large collection of datasets and their descriptions can be found at the Atmospheric Transport and Diffusion Data Archive [28] and the Comprehensive Atmospheric Modelling Program [29].

III. Source term estimation using static sensors

Fig. 2. Flow diagram of a generic STE algorithm: concentration measurements (measured data with noise) and prior information (meteorological data such as wind speed/direction, an initial estimate of location and strength, and other domain knowledge) feed a dispersion model; the predicted output is evaluated against the measured data in a cost/likelihood function, and the source term estimate is updated iteratively until a stopping criterion is reached, at which point the source term estimate is output.

The goal of STE is to estimate the parameters that describe the source of a release: namely its location and strength. In the literature, meteorological variables have also been included as parameters to account for spatial variations in meteorological conditions in order to find a better estimate of the overall source. The most popular methods of STE use a network of concentration sensors on the ground. Measurements of concentration are fused with prior information such as meteorological data to estimate the unknown source parameters. Estimation has been performed using two dominant approaches: i) optimisation methods and ii) probabilistic approaches based on Bayesian inference. Regardless of the approach, inferred source parameters are run in a forward ATD model to generate predicted concentrations that are compared with the observations in a cost or likelihood function. The overall goal of these methods is to find the best or most likely match between the predicted and observed data, as illustrated in Fig. 2.
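The forward-model/compare/update loop of Fig. 2 can be sketched generically. The snippet below is a minimal illustration, not any specific published algorithm: the forward model is a hypothetical one-dimensional stand-in for an ATD model, and the update rule is naive random search.

```python
import random

def estimate_source(observations, sensors, forward_model, n_iter=2000, seed=0):
    """Generic STE loop: propose a source term, run the forward model,
    score predicted vs. observed concentrations, keep the best candidate."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        # Candidate source term: location in [0, 100], strength in (0, 10].
        theta = (rng.uniform(0, 100), rng.uniform(0.01, 10))
        predicted = [forward_model(theta, s) for s in sensors]
        cost = sum((c - d) ** 2 for c, d in zip(predicted, observations))
        if cost < best_cost:          # update the estimate
            best, best_cost = theta, cost
    return best, best_cost

# Hypothetical stand-in for an ATD model: 1-D decay away from the source.
def toy_model(theta, sensor_x):
    loc, q = theta
    return q / (1.0 + abs(sensor_x - loc))

sensors = [10.0, 40.0, 70.0]
truth = (55.0, 4.0)
observations = [toy_model(truth, s) for s in sensors]
estimate, cost = estimate_source(observations, sensors, toy_model)
```

The optimisation and Bayesian methods reviewed below replace the naive proposal step with far more principled update rules, but share this overall structure.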

The major difference between the optimisation and Bayesian approaches is in the probabilistic aspect of the Bayesian approach. The Bayesian approach allows inputs and models used in the algorithm to be specified via a probability density function (PDF), taking into account uncertainties in the input data and the chosen ATD model. With probabilistic inputs, the final output of the algorithm will be in the form of a PDF, thereby producing an estimate of the source term with associated confidence levels. In contrast, the optimisation approach takes inputs without uncertainty and attempts to find a single optimal solution to the problem. Both methods have been shown to perform well in simulations; however, it was discovered that there is significant room for improvement for both when tested on experimental data [30]. Aside from the main estimation algorithm used, the STE algorithms developed have several other differences making a direct comparison difficult. Some of the differences include:

• The source term parameters

• The likelihood/cost function used to measure the goodness of fit

• The type of release

• The atmospheric dispersion model

• The domain size

• The prior information

As mentioned earlier, the STE parameters include the source strength or release rate, its location, the number of sources, and meteorological variables. Note that this review has been limited to models that estimate at least the source strength and location. Under such scenarios, it is common to assume a constant release rate. The literature is rich with estimation methods for releases of known origin and varying release rate such as the Fukushima accident. For this scenario, Kalman filters and variational data assimilation approaches have been more popular [11]. Source estimation of multiple releases is a particularly complex problem which has been tackled in more recent research [23], [24], [31]-[40]. Several forms of likelihood and cost functions have been used throughout the literature which will be discussed in the following sections. The type of release has varied from: i) a steady state plume [21], [23], [31], [37], [41]-[51], ii) a dynamic plume [24], [32]-[36], [38], [39], [52]-[55] and iii) an instantaneous release or puff [24], [39], [55]. Most research has focused on continuous steady state plumes using the Gaussian plume equation. Dynamic plumes and instantaneous releases yield a more demanding problem which is more applicable to emergency response situations. The domain size can range from small scale (<km) to continental scale; however, with a relevant dispersion model, the majority of techniques can be applied to any domain size [10]. Several forms of prior information have been used throughout the literature including meteorological variables, the geometry of the network and parameter bounds such as the time of release, release rate and domain size.

This section is organised as follows. Section III.A reviews STE solutions using the optimisation approach. Section III.B does the same for solutions formulated in the Bayesian framework. Finally, the work on STE using static sensor networks is summarised in Section III.C.

A. Optimisation

The optimisation approach to STE aims to find the combination of parameters that minimises a cost/objective function J. The objective function has taken many forms, although most often it is derived from the sum of the squared differences between predicted Cr and observed concentrations Dr. Cr are obtained from an ATD model run using the inferred source term and Dr are concentration data from deployed sensors. It is assumed that the parameter combination that produces the minimal difference is the optimal estimate of the source term. Most optimisation techniques employ an iterative process, where the objective function is minimised by using different update rules to provide new improved estimates of the parameters.

The main focus of research on the optimisation approach has been on assessing the performance of existing algorithms in optimising a cost function; however, the different methods have also explored various cost functions and the use of better initial estimates. A variety of methods have been used to optimise the objective function, such as gradient-based methods [23], [56], direct search methods (e.g. the pattern search method [54]) and intelligent optimisation methods (e.g. simulated annealing [51] and the genetic algorithm [38], [57]-[60]). Details about the specific optimisation approaches are described in this section.

1) Gradient based: This section describes gradient-based STE algorithms found in the literature. The methods include least squares, an extension of the least squares technique known as re-normalisation or regularised least squares, and the BFGS quasi-Newton algorithm.

a) Least squares: The goal of least squares estimation is to minimise the sum of the squares of the residuals between measured Dr and predicted Cr concentrations for the total number of sensors N. The cost function can be written as:

J = \sum_{r=1}^{N} (C_r - D_r)^2    (2)

The least-squares method is applicable only for an over-determined inverse problem. The iterative minimisation of the cost function Eq. (2) requires an initial guess of source term [61]. Since the least squares optimisation method is not a global optimisation technique, it is largely dependent on a good initial guess, otherwise it may get stuck in a local minimum leading to a poor solution due to the non-linearity of the solution space.
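In the special case where the source location is fixed and only the release rate Q is sought, Eq. (2) is quadratic in Q (the plume model is linear in Q) and the minimiser is available in closed form. The sketch below uses toy, noise-free data; estimating the location as well makes the problem non-linear and is what forces the iterative schemes, and good initial guesses, discussed above.

```python
def least_squares_release_rate(phi, D):
    """Minimise J = sum_r (Q*phi_r - D_r)^2 over the release rate Q.

    phi[r]: modelled concentration at sensor r per unit release rate
    (the Gaussian plume is linear in Q); D[r]: measured concentration.
    Setting dJ/dQ = 0 gives Q = sum(phi * D) / sum(phi^2).
    """
    num = sum(p * d for p, d in zip(phi, D))
    den = sum(p * p for p in phi)
    return num / den

# Toy data: unit-rate model predictions phi, measurements from true Q = 3.
phi = [0.2, 0.5, 0.1, 0.05]
D = [3 * p for p in phi]
Q_hat = least_squares_release_rate(phi, D)
```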

b) Re-normalisation: Re-normalisation or regularised least squares is a strategy for linear assimilation of concentration measurements to identify the unknown releases [62], [63]. The method exploits the natural statistics provided by the geometry of the monitoring network. These statistics are expressed in the form of a weight function derived by a minimum entropy criterion, which prevents the over-estimation of the available information that would lead to artefacts, especially close to the detectors. These weight functions serve as a priori information about the release apparent to the monitoring network and provide regularisation, thus limiting the search space of the algorithm and providing an initial guess. The weight functions can be computed iteratively using an algorithm proposed by Issartel [63]. In addition, a minimum norm weighted solution provides an estimate for the distributed emissions and is seen as a generalised inverse solution to the under-determined class of linear inverse problems [64]. Overall, the re-normalisation approach utilises the adjoint source-receptor relationship mentioned in Section II.B and constructs a source estimate among a vector space of acceptable sources, which describes the possible distribution of the emission sources [65]. The method is applicable for both over-determined and under-determined problems.
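For a single measurement, the weighted minimum-norm solution takes a one-line form. The sketch below uses illustrative sensitivities and weights (not Issartel's actual entropy-derived weight functions): it minimises the weighted norm of the source subject to exactly reproducing the measurement.

```python
def min_norm_weighted_source(a, w, d):
    """Weighted minimum-norm source for one measurement d = sum_j a_j s_j.

    a[j]: adjoint sensitivity of the sensor to grid cell j;
    w[j]: weight for cell j (a larger weight allows the cell to carry
    more of the emission). The solution
        s_j = w_j * a_j * d / sum_k w_k * a_k**2
    minimises sum_j s_j**2 / w_j subject to reproducing the measurement.
    """
    denom = sum(wk * ak * ak for wk, ak in zip(w, a))
    return [wj * aj * d / denom for wj, aj in zip(w, a)]

a = [0.1, 0.4, 0.2]    # adjoint sensitivities (toy values)
w = [1.0, 1.0, 0.25]   # down-weight the cell nearest the detector
s = min_norm_weighted_source(a, w, d=2.0)
```

The under-determined character of the problem is visible here: one measurement constrains three cells, and the weights decide how the inferred emission is spread among them.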

Sharan et al. [56] used regularised least squares to determine the source term of a point release using the fact that the maximum of the source estimate will coincide with the location of the release. An advection-diffusion based dispersion model [66] was used to generate an adjoint model of the source-receptor relationship. Unlike many other STE methods, the domain was discretised into a grid, where the grid size was dependent on the density of the sensor network. The method was extended in [67] for identification of an elevated release with an inversion error estimate. The algorithm was further extended to identify multiple-point releases [68] where the number of releases was known. Two steps were applied to reduce the computational time of the algorithm. First, regions associated with weak weight functions were removed. Then, only one in five grid points in each direction were considered, and this was iteratively refined to obtain an estimate of the source. In [23], Singh and Rani applied the algorithm to data from the FFT07 experiment [25]. A sensitivity analysis was performed to determine the effect of the number of measurements on the inversion results. It was found that on average nine measurements were required to sufficiently identify the source parameters and that the accuracy of estimation was subject to the locations of sensors downwind and crosswind of the release. In [40], Singh and Rani applied the framework to multiple source scenarios of the FFT07 dataset. Recently, Kumar et al. [69], [70] have extended the regularised least squares inversion approach to urban environments, where CFD has replaced the underlying ATD model [71]. The method was tested on experimental data from the Mock Urban Setting Test (MUST) field experiment under various stability conditions. Reasonable accuracy was demonstrated in an experimental setting with an idealised urban geometry.

c) Broyden-Fletcher-Goldfarb-Shanno algorithm (BFGS): The BFGS algorithm [72]-[75] is one of the most popular quasi-Newton optimisation techniques [75]. The method is used to rapidly search for extrema of a function. It is similar to Newton's method; however, the inverse of the Hessian is approximated directly, greatly reducing computational requirements. On its own, the algorithm would struggle to determine the source term since it can become stuck in local minima. To overcome this issue, inverse ATD models have been used to generate a suitable initial guess.

In [24], Bieringer et al. used the BFGS algorithm to refine an initial guess of source parameters obtained from an inverse SCIPUFF run. To reduce computation, the simple Gaussian plume equation was used in the iterative optimisation. This equation was enhanced by using dispersion coefficients generated from the SCIPUFF run. The paper attempted to produce a final estimate where the final SCIPUFF and Gaussian plume runs matched as closely as possible with each other and the sensor readings. The algorithm was tested on experimental data from the FFT07 experiment, showing similar performance to previous SCIPUFF-based methods but with reduced computational complexity. The method was created to be computationally efficient for emergency scenarios where a timely solution would be critical. It was tested more rigorously than previous algorithms under scenarios including different numbers of sensors, inconsistencies in observations and large distances between sensors and source. The performance was degraded in cases where the measured gradients in the concentration field were reduced (such as longer source to sensor distances, fewer sensors, larger sensor spacing, etc.). The need for proper concentration gradients highlights the importance of having null sensor measurements that effectively characterise the spatial extent of the plume.

2) Meta-heuristics: Meta-heuristic optimisation algorithms have been among the most popular of the STE algorithms in the literature. They benefit from their global search performance in order to prevent the estimate from becoming stuck in a local minimum. The algorithms reviewed in this section include the pattern search method (PSM), simulated annealing (SA) and the genetic algorithm (GA). The algorithms use different methods to iterate until convergence to a solution based on evaluation of a cost function. The methods differ by the means in which they alter the parameters to find improved solutions.

a) Pattern search method: The pattern search method (PSM) is one of the basic optimisation methods, consisting of two simple steps. The first step defines the theoretical parameters (source strength Q and location x, y) and their initial values. In the second step, the algorithm varies each parameter by increasing or decreasing its value from the current point by a constant factor, known as the axis direction move. The cost function (the difference between calculated and measured concentration) is then evaluated for the new set of parameter values. If no improvement of the cost function value is found compared with the previous points, the step size is halved (the pattern move) and the process is repeated until the termination criteria are reached [76].
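The two steps above can be sketched as follows; the cost function here is a toy quadratic standing in for the concentration mismatch of Eq. (2), with an illustrative minimum at source strength Q = 5 and location x = 20:

```python
def pattern_search(cost, x0, step=8.0, tol=1e-3, max_iter=1000):
    """Coordinate pattern search: try +/- step moves on each parameter
    (axis direction moves); if none improves the cost, halve the step
    (pattern move) until it falls below tol."""
    x = list(x0)
    best = cost(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, best

# Toy cost: minimum at (Q, x) = (5, 20); illustrative values only.
cost = lambda p: (p[0] - 5.0) ** 2 + (p[1] - 20.0) ** 2
est, c = pattern_search(cost, x0=[0.0, 0.0])
```

On this convex toy problem the search converges from any start; the local-search limitation noted below shows up only once the cost surface has multiple minima.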

In [54], Zheng and Chen developed a PSM to determine the strength and locations of a contaminant source. The method was shown to be more efficient than other intelligent optimisation methods such as the GA; however, it was limited because the PSM is a local optimisation method, meaning that it was highly dependent on its initial value. To overcome this limitation, Zheng and Chen [77] developed a hybrid algorithm that incorporated the global search performance of the GA with the local search performance of the PSM. The GA was used to produce a reasonable initial value for use in the PSM. The algorithm was able to determine the location and strength of a contaminant source with great accuracy. The algorithm's performance was compared with that of an original GA, showing an increase in accuracy and efficiency [77].

b) Simulated annealing: The simulated annealing (SA) algorithm is a global optimisation algorithm that was introduced by Kirkpatrick et al. [78]. It is based on an analogy of thermodynamics, specifically the process of heating and controlled cooling of a material to reduce defects. This process directly depends on the thermodynamic energy E. Applying this thermodynamic analogy to the optimisation problem, the goal is to bring the system from its initial state to a convergent state in which the system uses the minimum possible energy. The rule for accepting a change in state is based on the Boltzmann probability distribution [51], given as:

R < \exp\left(-\frac{E_{n+1} - E_n}{T_n}\right)    (3)

where R is a random number from the uniform distribution u between zero and one, En is the energy of the system (similar to a cost function) and Tn is the temperature or cooling parameter. This enables the algorithm to occasionally accept parameter sets that increase En, thus achieving global search performance as it is able to escape from local minima. The algorithm repeats, generating new parameter estimates randomly, until it converges to a solution. Throughout the simulation, Tn is decreased to improve the convergence behaviour of the system.
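A minimal SA sketch implementing this acceptance rule, with a toy multimodal energy standing in for an STE cost function (the cooling schedule, step size and parameter values are illustrative choices):

```python
import math
import random

def simulated_annealing(energy, x0, t0=1.0, cooling=0.995, n_iter=4000, seed=1):
    """SA search: propose random perturbations of the parameters and
    accept worse states with probability exp(-dE / T), cooling T each
    step so the search settles into a (hopefully global) minimum."""
    rng = random.Random(seed)
    x, e, t = list(x0), energy(x0), t0
    best, best_e = list(x0), e
    for _ in range(n_iter):
        trial = [xi + rng.gauss(0, 1.0) for xi in x]
        e_trial = energy(trial)
        # Boltzmann acceptance rule: R < exp(-(E_new - E_old) / T).
        if e_trial < e or rng.random() < math.exp(-(e_trial - e) / t):
            x, e = trial, e_trial
            if e < best_e:
                best, best_e = list(x), e
        t *= cooling
    return best, best_e

# Toy multimodal energy over (strength, location); basin near (3, 10).
energy = lambda p: (p[0] - 3) ** 2 + (p[1] - 10) ** 2 + math.sin(3 * p[0]) ** 2
best, best_e = simulated_annealing(energy, x0=[0.0, 0.0])
```

The sin² term creates the local minima that a purely greedy search could get trapped in; the occasional acceptance of worse states lets SA hop between them early on, while cooling makes the late phase effectively greedy.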

Thomson et al. [51] applied SA to locate a gas source from measurements of concentration and wind data. The search algorithm was employed to find the source location and emission rate. SA was found to be advantageous as it helps prevent the search algorithm from converging to local minima that might surround the targeted global minimum. Three cost functions with different regularisation terms were evaluated, and the cost function that minimises the total source emissions was found to be the most robust, producing successful event reconstructions. In addition, SA was also used by Newman et al. [79] to determine contaminant source zones in natural ground water. The paper compares SA with Minimum Relative Entropy (MRE) methods for STE, and concluded that SA was more robust and converged more quickly than MRE; however, it was found that the optimal solution was to use a hybrid algorithm, which ran MRE after SA in order to refine the solution and add confidence limits to the parameter space.

c) Genetic algorithm: The genetic algorithm (GA) is a popular global optimisation technique used in numerous STE algorithms. It is classified as one of the artificial intelligent optimisation methods. Similarly to most optimisation techniques, the GA is based on iterations, but the major difference of the algorithm is in the alteration of parameter estimates to generate new solution candidates. This is inspired by the process of natural evolution [80]. The process of the GA can be summarised by the following steps:

1) Initialisation: A random population of candidate solutions called chromosomes are generated.

2) Selection: A cost function is evaluated to measure the quality (fitness) of the solutions.

3) Mating: High quality solutions are mated with each other to generate new parameter estimates while creating a second generation population of solutions. The second generation contains a higher quality of chromosomes than the earlier generation.

4) Mutation: As is the process in evolution, a selection of chromosomes are mutated in order to generate more new solutions.

5) Convergence or termination check is performed.

6) Repeat steps 2)-5).
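The steps above can be sketched in minimal form. The mating and mutation operators used here (arithmetic blending of parents, Gaussian perturbation) are illustrative choices among the many variants, and the cost function is a toy stand-in for the concentration mismatch of Eq. (2):

```python
import random

def genetic_algorithm(cost, bounds, pop_size=40, generations=60,
                      mutation_rate=0.2, seed=2):
    """Minimal GA for source parameters: selection by cost, arithmetic
    mating of elite pairs, Gaussian mutation of a fraction of offspring."""
    rng = random.Random(seed)
    # 1) Initialisation: random population of candidate solutions.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # 2) Selection: rank chromosomes by fitness (here, low cost).
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            # 3) Mating: blend two high-quality parents.
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            # 4) Mutation: occasionally perturb one gene.
            if rng.random() < mutation_rate:
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0, 1.0)
            children.append(child)
        pop = parents + children  # 5)/6) check and repeat for next generation
    return min(pop, key=cost)

# Toy cost: hypothetical source at location (30, 40) with strength 5.
cost = lambda p: (p[0] - 30) ** 2 + (p[1] - 40) ** 2 + (p[2] - 5) ** 2
best = genetic_algorithm(cost, bounds=[(0, 100), (0, 100), (0, 10)])
```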

Several variations of the GA exist, incorporating different mutation, mating and population generation strategies. It is important to tune parameters such as population size and mutation rate to optimise the performance of the algorithm with regards to efficiency, accuracy and avoidance of local minima. In [59], [81], Haupt et al. first demonstrated the ability of the GA to link readings from receptor data with the Gaussian plume ATD model. Later in [57], Allen et al. used this method to characterise a pollutant source by estimating its two-dimensional location, strength and the surface wind direction. Including the surface wind direction as a parameter to be optimised in the GA could account for the sparse resolution of meteorological wind field data and any error therein [57]. The algorithm performed very well during twin experiments (where the Gaussian plume was used to create synthetic data), while performance decreased for sensor grids with fewer than 8x8 receptors. It is worth noting that the algorithm showed reasonable performance under sensor noise provided that the noise was less than the signal [57]. To further refine the final estimate of the source term, a hybrid GA was formulated in [58]. A traditional local search algorithm, the Nelder-Mead downhill simplex (NMDS), was run after the GA. The GA produced a suitable initial estimate to prevent the NMDS from becoming stuck in a non-global minimum. The hybrid algorithm benefited from the speed and performance of the NMDS in a local search combined with the global search performance of the GA.

To improve the performance of the algorithm in more realistic scenarios, Allen et al. [38] replaced the simple Gaussian plume model with SCIPUFF. This was also used by Long et al. [60] to determine the location of a contaminant release. The sensitivity of the GA in STE was assessed in [82]. The paper investigated the number of sensors necessary to identify source location, height, strength, surface wind direction, surface wind speed, and time of release. It was found that the number of sensors required varied depending on the signal to noise ratio.

In [55], Annunzio et al. combined the GA with the adjoint method in an Entity and Field framework (where entities are Gaussian plumes) for an improved estimate of the source term. The framework had previously been demonstrated by Young et al. [83], although that implementation required a large amount of wind and concentration data as input. The approach estimates the axis of the plume/puff while providing an estimate of the wind direction and the spread of the contaminant. The source was located using a GA with a cost function based on contaminant spread.

To estimate the source terms in a scenario of multiple releases, Annunzio et al. [39] extended the Entity and Field framework to use multiple entities. The number of entities was increased to improve the concentration field approximation; when a further increase in the number of entities no longer yielded an improved field approximation, the number of sources had been found. As there were too many correlated unknowns (i.e. entity mass M, release duration Δt and wind speed U), the source strength was not estimated directly; instead, a scaling variable M/(UΔt) was determined during the optimisation process. Based on a comparison by Platt and Deriggi [30] using the FFT07 experimental data, the algorithm obtained a better source location estimate than several other optimisation and Bayesian-based approaches.

3) Summary on optimisation: Optimisation methods provide a single point estimate of the source parameters by minimising discrepancies between predicted and measured concentrations. The gradient climbing methods are limited in that, without a suitable initial guess, they can become stuck in an incorrect local minimum. However, with a reasonable initial estimate, obtained for instance by using the adjoint, they can converge to a solution quite rapidly. Intelligent global search algorithms such as the GA, SA and the PSM have been classified as meta-heuristics in this paper. These methods have an advantage over gradient descent methods in that they can handle poor initial estimates, as they employ mechanisms to avoid becoming stuck in local minima.

Many modifications of the original algorithms have been presented, in which some interesting features include:

• The wind direction in the parameter space to account for sparse meteorological data [57].

• Hybrid algorithms to gain the benefits of global and local search [58].

• Prior information to limit the search space of the algorithms [63].

• The combination of global search algorithms or the adjoint to generate a good initial guess to be refined by a local search algorithm [55].

• Complex ATD models to improve upon the simple Gaussian plume equation, yielding improved accuracy without an excessive increase in computational load [24].

• Null sensor readings to narrow down where the source is not present [24].

In twin experiments, the majority of optimisation methods perform well [84]. When tested on experimental data, however, the accuracy of the solution is heavily reliant upon the ATD model and knowledge of the atmospheric conditions/stability. Several more complex ATD models exist that may overcome this issue; unfortunately, an accurate simulation also requires a vast number of meteorological parameters. Furthermore, the benefit of a more accurate dispersion model may be outweighed by the increase in computational time.

B. Bayesian inference

Bayesian-based methods of STE allow probabilistic considerations to be introduced to the problem in order to account for uncertainties in input data. Another way of exploiting the Bayesian approach consists of seeking not just a single optimal solution, but obtaining the probability density function (PDF) of the estimated source parameters. In this case, the source is defined by a set of parameters, which are the quantities of interest. By means of stochastic sampling, the posterior probability distribution of these parameters is evaluated to fully describe the parameters of the source and the uncertainty in them. The goal of STE is then to look for the most probable parameters of the source in terms of posterior probability.

Bayes' theorem estimates the probability of a hypothesis or inference being true, given a new piece of evidence [85]:

Posterior = (Prior x Likelihood) / Evidence:    P(θ|D, M, I) = P(θ|I) P(D|θ, M, I) / P(D|M, I),    (4)

where the theorem estimates the probability of a hypothesis θ being true, given the data (measurements) D, model M and prior information I. The prior distribution P(θ|I) expresses the state of knowledge about θ prior to the arrival of the data D. The likelihood function P(D|θ, M, I) describes the probability of the data D, assuming the hypothesis θ is true; it is also known as the sampling distribution when considered as a function of the data. The posterior distribution P(θ|D, M, I) is the full solution to the inference problem and, in contrast to the likelihood, expresses the probability of θ given D. The final goal is to conduct inference over the parameters which define θ, and the posterior expresses the complete state of knowledge of these parameters given all of the available data. Once completed, post-processing is often required in order to extract useful summary information from the posterior.

The evidence (sometimes known as the marginal likelihood) P(D|M, I) is so named because it measures the support for the hypothesis of interest. For inference problems where only a single hypothesis has been or will ever be considered, the evidence is an unimportant constant of proportionality. When applied to STE, the hypothesis θ is an inferred set of parameters that describe the source term, the data D are the measured concentrations from the sensors, the model M is an ATD model, and the prior information I can be any information related to the problem. In early work, where only a single source is considered, the evidence term is neglected, so Eq. (4) may be simplified to

Posterior ∝ Prior x Likelihood:    P(θ|D, M, I) ∝ P(θ|I) x P(D|θ, M, I).    (5)

The likelihood function is used to quantify the probability of the discrepancy between the measured and predicted concentrations at each sensor, where predictions are made by inputting the inferred parameters into an ATD model. The prior probability encompasses any information about the source parameters known before any detection. It is often assumed that no prior information is available, in which case the prior is initially given a uniform distribution and the posterior probability of the parameters is then proportional to the likelihood. When the inference is performed sequentially, the prior is set to the posterior of the previous iteration.
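As an illustrative sketch (assuming, as is common in the literature, independent Gaussian measurement errors and a uniform prior box; the function names are ours, not from any cited work), the prior, likelihood and unnormalised posterior of Eq. (5) can be written in log form:

```python
import numpy as np

def log_prior(theta, lo, hi):
    # Uniform prior over a search box: constant log-density inside,
    # -inf outside (the inference never leaves the box).
    theta, lo, hi = map(np.asarray, (theta, lo, hi))
    return 0.0 if np.all((theta >= lo) & (theta <= hi)) else -np.inf

def log_likelihood(theta, data, predict, sigma):
    # Independent Gaussian errors: each sensor reading is the ATD-model
    # prediction plus zero-mean noise with standard deviation sigma.
    resid = np.asarray(data) - predict(theta)
    return -0.5 * np.sum((resid / sigma) ** 2) \
        - resid.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def log_posterior(theta, data, predict, sigma, lo, hi):
    # Unnormalised log-posterior: log prior + log likelihood, as in Eq. (5).
    lp = log_prior(theta, lo, hi)
    return lp if np.isinf(lp) else lp + log_likelihood(theta, data, predict, sigma)

# Hypothetical usage: four sensors and a flat predictor of a strength q.
predict = lambda theta: np.full(4, theta[0])
data = np.full(4, 3.0)
lp = log_posterior(np.array([3.0]), data, predict, 1.0, np.array([0.0]), np.array([10.0]))
```

Working in the log domain avoids numerical underflow when many sensor readings are multiplied together.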

Monte Carlo (MC) sampling methods are employed to determine an accurate estimate of the posterior PDF of the source parameters θ. Parameter estimates and uncertainty can be determined from the statistics of the posterior, commonly the mean and the standard deviations. In a high dimensional space, where many parameters are inferred, the computational effort increases exponentially. For this reason, efficient sampling techniques are used, such as the popular Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods. The sequential aspect of SMC enables it to assimilate data as they arrive, making it more applicable to dynamic plumes. In the following sections, improvements and modifications of the Bayesian approach to STE conducted in the literature are discussed. Improvements have been made in terms of the computational efficiency of the algorithms, accuracy, the likelihood function, the extension of the methods to handle multiple-source release scenarios and urban environments, and robustness under sensor noise. The Bayesian-based methods explored in this section include MCMC [21], [41], [52], SMC [36], [43], [52], [86], differential evolution Monte Carlo (DEMC) [53] and polynomial chaos quadrature (PCQ) [49], among others.

1) Markov Chain Monte Carlo (MCMC): MCMC methods are used to efficiently sample from probability distributions by constructing a Markov chain whose equilibrium distribution is the desired distribution [87]. From an initial random or informed starting point, a Markov chain is created in which new inferences are drawn from the current link in the chain. The likelihood of the current inference is evaluated and, based on acceptance criteria, it is either rejected or accepted as the next link in the Markov chain. Several techniques have been proposed to generate and accept new inferences. The most popular one is the Metropolis-Hastings (MH) algorithm [88], described by the following steps.

Step 1 Initialisation: Propose a starting estimate of the source parameters, θ1.
For i = 1 : N
Step 2 Proposal: Generate a candidate estimate θ* by sampling from the proposal distribution q(θ*|θi).
Step 3 Acceptance probability: Calculate the probability of accepting the candidate,
α = min{1, [P(θ*|D, M, I) q(θi|θ*)] / [P(θi|D, M, I) q(θ*|θi)]}.
Step 4 Accept/reject: Draw u from a uniform distribution on [0, 1]; if u < α, set θi+1 = θ*, otherwise set θi+1 = θi.

The starting estimate of the source parameters should be based on prior information, as the initial guess can have a significant impact on the convergence of the algorithm. Each new proposal is generated by sampling around the end of the previous link in the Markov chain; a random walk is the most popular technique, although more informed techniques have been proposed in the literature. During Step 3, the probability of the proposal being accepted is calculated based on the posterior distribution and proposal density of the prior estimate and of the proposed one. In Step 4, this is compared with a random number to determine whether or not the proposal is accepted as the next link in the Markov chain.
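The procedure can be sketched in a few lines. The following is a generic random-walk MH sampler applied to a toy one-dimensional target (our illustrative example, not the implementation of any cited paper); with a symmetric Gaussian proposal, the proposal-density ratio in Step 3 cancels.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.5):
    # Random-walk Metropolis-Hastings. The Gaussian proposal is symmetric,
    # so the acceptance probability reduces to the posterior ratio.
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[0])
    for _ in range(n_iter):
        prop = chain[-1] + step * rng.normal(size=chain[-1].shape)  # Step 2
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                    # Steps 3-4
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# Toy 1-D target: an unnormalised log-posterior centred on "strength" q = 4.
chain = metropolis_hastings(lambda th: -0.5 * np.sum((th - 4.0) ** 2), [0.0])
estimate = chain[1000:].mean(axis=0)  # discard burn-in samples
```

Discarding the burn-in before computing posterior statistics matters here because the chain starts far from the mode.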

The MCMC algorithms have been popular in STE due to their computational benefit over the more traditional Monte Carlo method. In [52], Johannesson et al. proposed a number of benefits and implementations of MCMC algorithms for inverse problems, including STE of ATD events. Several approaches to generating proposals were discussed, including the Gibbs sampler, the random walk and Langevin diffusion, the last of which was suggested to yield the most effective random walk. In [41], Borysiewicz et al. compared several MCMC algorithms for STE:

• Standard MCMC

• MCMC via maximal likelihood

• MCMC via rejuvenation and extension

• MCMC via rejuvenation, modification and extension

MCMC via rejuvenation, modification and extension was found to be the most effective in a number of synthetic tests, which included an assessment of efficiency when fewer measurements were available. In [42], Senocak et al. extended the MCMC algorithm for STE to incorporate null/zero sensor measurements. Another extension enhanced the simple Gaussian plume model by incorporating the turbulent diffusion parameters into the parameter space, enabling better matching of predicted and observed concentrations.

In [21], Keats et al. estimated the source strength and location of a contaminant plume in an urban environment with the MCMC MH algorithm. A key feature of the method was the adjoint-based source-receptor relationship, which greatly reduced the computational burden as the advection-diffusion equation was solved only once for each detector, as opposed to once for every combination of source parameters. The method was tested on experimental data from the Joint Urban 2003 atmospheric dispersion study, and the true parameters were shown to lie within one standard deviation of the estimate. In [31], Yee et al. successfully extended the aforementioned method [21] to estimate the parameters of multiple sources in synthetic simulations where the number of sources was known a priori. Here, the MH procedure was applied with simulated tempering (ST) [89]. ST was used to alter the likelihood function so that the effects of the measured concentration data were introduced gradually. This allowed the algorithm to explore the prior distribution for a number of different source parameter hypotheses, helping with the burn-in phase of the MCMC algorithm by delaying sampling from the posterior. In [32], Yee used a reversible jump MCMC algorithm to detect multiple sources where the number of sources was unknown a priori. The reversible jump sampling algorithm, first introduced by Green [90], enables the Markov chain to jump between model spaces of different dimensions; in this STE case, a different dimension corresponds to a different number of sources. A jump could either add a single new source or remove an existing source from the inferred parameters. The method successfully estimated the number of sources when tested on synthetic data.

In [33], Yee improved the method by employing a simulated annealing scheme to move between hypothesis spaces, increasing the mixing rate of the Markov chains and thereby leading to faster convergence. Similarly to ST in [31], the algorithm alters the likelihood function over time to facilitate the burn-in phase of MCMC. The algorithm was tested on data from the FFT07 experiment, successfully identifying the parameters of up to four sources along with their associated uncertainties. However, the large parameter space that results from adding the number of sources to the estimation problem caused slow computation. This issue was addressed in [34], where a model selection approach was proposed to determine the number of sources, defined as the minimum number of sources necessary to represent the concentration signal in the data. The accuracy of the method was similar to [31], [33], with the computational load significantly reduced.

In [37], Wade and Senocak presented another method to determine the parameters of an unknown number of sources using the Bayesian MCMC algorithm. The method used a ranking system inspired by the Environmental Protection Agency's (EPA) metric for assessing the quality of ATD models. The method successfully determined the correct number of sources on experimental data from the FFT07 experiment. The major drawback of the method, however, was its need to run simulations for each candidate number of sources.

It is worth noting that most of the algorithms above performed well on synthetic data and on data from the FFT07 experiment. This experiment was conducted in an idealised scenario, featuring a high number of sensors, releases in the vicinity of the sensor array and a rich amount of available meteorological data. A real-world application was presented in [91], [92] by Yee et al. Here, the location and emission rate of a source (the Chalk River Laboratories medical isotope production facility) were estimated using a small number of activity concentration measurements of a noble gas (Xenon-133) obtained from three stations that form part of the International Monitoring System radionuclide network [92]. It was discovered that the key difficulty in the STE lay in the correct specification of the model errors. The initial algorithm obtained a reasonable estimate of the source parameters, though the precision of the estimate was poor, as the uncertainty bounds of the estimated source parameters did not include the actual values. An alternative measurement model was therefore proposed, which incorporated scale factors on the predicted concentrations in order to compensate for the model errors [92].

2) Sequential Monte Carlo (SMC): SMC is another technique for efficient sampling. Unlike MCMC, the method is inherently parallel, which allows all Monte Carlo proposals to be generated and evaluated simultaneously [93]. For this reason, it is considered computationally more efficient than MCMC, provided the algorithm converges well. Another benefit is the sequential nature of SMC, which allows new data to be incorporated into the algorithm as they become available [93]. A popular SMC method uses importance sampling (IS). This involves drawing a number of samples from the current estimate of the source parameters, weighting them, and using these weights to form a new posterior distribution from which new samples are drawn. The steps are outlined as follows:

Step 1 Initialisation: Propose an initial importance sample Θ1:t0 = {θ(i)1:t0, w(i)1:t0 : i = 1, ..., N}.
For t = t0 : T
Step 2 Proposal: For i = 1 : N, sample a new estimate from the proposal distribution q(·),
θ*(i)t ~ qt(θt|θ(i)1:t-1).
Step 3 Update importance weights: For i = 1 : N, evaluate the importance weight
w*(i)t ∝ w(i)t-1 P(Dt|θ*(i)t, M, I) P(θ*(i)t|θ(i)1:t-1) / qt(θ*(i)t|θ(i)1:t-1).
Step 4 Normalise weightings:
w(i)t = w*(i)t / Σj=1..N w*(j)t.
Step 5 Approximate the posterior distribution:
P(θt|D1:t, M, I) ≈ Σi=1..N w(i)t δ(θt - θ*(i)t).
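For illustration, a single sequential importance resampling (SIR) update and its repeated application to a toy source-localisation problem can be sketched as follows (the Gaussian likelihood, jitter value and all other settings are our illustrative assumptions, not those of any cited algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_step(particles, weights, log_lik, jitter=0.1):
    # One sequential importance resampling (SIR) update: reweight the
    # particles by the new data's likelihood (Step 3), normalise (Step 4),
    # then resample and jitter to avoid degeneracy.
    logw = np.log(weights) + np.array([log_lik(p) for p in particles])
    logw -= logw.max()                       # numerical stabilisation
    w = np.exp(logw)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    new_particles = particles[idx] + jitter * rng.normal(size=particles.shape)
    return new_particles, np.full(len(particles), 1.0 / len(particles))

# Toy problem: infer a 2-D source location from a stream of noisy readings.
true_loc = np.array([3.0, -1.0])
particles = rng.uniform(-10.0, 10.0, size=(2000, 2))
weights = np.full(2000, 1.0 / 2000)
for _ in range(30):                          # assimilate one reading per pass
    obs = true_loc + 0.3 * rng.normal(size=2)
    log_lik = lambda p, obs=obs: -0.5 * np.sum((obs - p) ** 2) / 0.3 ** 2
    particles, weights = sir_step(particles, weights, log_lik)
posterior_mean = particles.mean(axis=0)      # Step 5: posterior summary
```

The sequential structure is visible in the loop: each new reading updates the particle approximation without reprocessing old data.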

In [52], Johannesson et al. first proposed SMC for STE of an atmospheric release. The article provides an introduction to the SMC algorithm for Bayesian inference and some sampling techniques, including a hybrid MCMC-SMC algorithm. In [43], Gunatilaka et al. used SMC with a progressive correction (PC) technique to converge to a solution for STE. Some limitations of the Gaussian plume model were addressed; in particular, the assumption of uniform wind speed and diffusivity caused the plume height and ground-level concentration to be underestimated.

The concentration read by the sensors was represented by the sum of the mean and fluctuating components where the mean was derived from an analytic solution of the turbulent diffusion equation and the fluctuating part modelled by a PDF. The performance of the algorithm was tested on synthetic data for a range of sensor grid densities. Reasonable performance was attained using grid densities as small as three by three.

In [86], Wawrzynczak et al. estimated the source strength, location and ATD coefficients using SMC. Sequential importance re-sampling (SIR) was used, which combines IS with a re-sampling procedure. Re-sampling was used to replace samples with low importance weights by samples with higher weights. The algorithm was initialised by running several iterations of multiple MCMC chains using MH and a random walk; after a number of iterations, the importance weights were found and the initial SMC sample was drawn. The paper compared the performance of the MCMC and SMC algorithms using synthetic data generated with SCIPUFF. It was found that SMC performed significantly better in obtaining the location estimate of the source; however, neither found the correct release rate. This was thought to be caused by differences between the Gaussian dispersion model and SCIPUFF. Additionally, no results were presented for the estimate of the ATD coefficients, which were said to differ between the SCIPUFF and Gaussian puff models.

One reason many STE algorithms lose substantial performance when tested on experimental data arises from poor probabilistic models of the likelihood function. Errors in the measurements come from both sensor noise and modelling inaccuracies, both of which are difficult to specify precisely. Issues due to a lack of knowledge of the correct form of the likelihood function were addressed by Lane et al. [36]. Approximate Bayesian computation (ABC) was used to replace the likelihood function in the SMC algorithm with a measure of the difference between predicted and measured concentrations. The method was able to estimate the strength and location of a release, in addition to the release time. Multiple hazardous releases were handled via a trans-dimensional version of the ABC-SMC algorithm. Ristic et al. [46] used ABC-SMC with multiple dispersion models to find the most relevant ATD model for the release scenario. A rejection sampler was used, which removes inferences that do not match the observed data within a specified tolerance. An adaptive iterative multiple-model ABC sampler was proposed to increase the acceptance rate of the rejection sampler by adaptively generating the proposal distribution for each sample. The algorithm was tested on experimental data sets collected by the COANDA Research and Development Corporation, which used a recirculating water channel specifically designed for dispersion modelling. Results were shown for scenarios with and without obstacles. Without obstacles, very good results were obtained; in the presence of obstacles, however, the estimate of the upwind source location was affected, producing a bimodal posterior distribution.

In [47], Gunatilaka et al. used binary sensor measurements, where the threshold was unknown, to determine the parameters of a biochemical source. The achievable accuracy of binary measurements for dispersion events had previously been explored using the Cramer-Rao bounds by Ristic et al. [45], with promising results. The algorithm found a solution iteratively using SMC IS with PC. The wind speed was included in the parameter space to account for uncertainty in the prior meteorological data. The method was tested on experimental data, showing that the algorithm could reasonably estimate the source location, wind speed and a normalised release rate. Due to the unknown sensor threshold, it was unable to determine the exact source strength; only the source strength normalised by the assumed sensor threshold could be estimated.

3) Differential Evolution Monte Carlo (DEMC): DEMC is a combination of differential evolution (DE) and Bayesian MCMC methods; essentially, it is an MCMC version of the GA [94]. The method is a population MCMC algorithm in which multiple Markov chains are run in parallel. The selection process is based on the Metropolis acceptance ratio, and the main difference from MCMC lies in the generation of new proposals via a jump. Instead of a tuned random walk or multivariate normal distribution, DEMC uses multiple chains to adaptively determine the jump proposal based on the difference between them.
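A minimal sketch of the DEMC proposal-and-accept loop follows (the toy correlated target and tuning values are our illustrative assumptions, not the setup of [53] or [94]); each chain's jump is a scaled difference of two other chains, so the proposals automatically adapt to the posterior's scale and correlation:

```python
import numpy as np

rng = np.random.default_rng(3)

def demc(log_post, pop, n_gen=2000, eps=1e-4):
    # Differential evolution Monte Carlo: the jump for chain i is the scaled
    # difference of two other randomly chosen chains plus small noise;
    # acceptance uses the standard Metropolis ratio.
    n, d = pop.shape
    gamma = 2.38 / np.sqrt(2 * d)            # commonly used DEMC scale factor
    lp = np.array([log_post(x) for x in pop])
    for _ in range(n_gen):
        for i in range(n):
            r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
            prop = pop[i] + gamma * (pop[r1] - pop[r2]) + eps * rng.normal(size=d)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp[i]:
                pop[i], lp[i] = prop, lp_prop
    return pop

# Toy correlated 2-D posterior over, say, (x-location, strength).
mu = np.array([2.0, 1.0])
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda x: -0.5 * (x - mu) @ cov_inv @ (x - mu)
pop = demc(log_post, rng.normal(size=(10, 2)))
```

Because the jump direction is drawn from the population itself, strongly correlated parameters (a common situation in STE, e.g. strength versus distance) are handled without manual tuning of a proposal covariance.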

In [53], Robins et al. used DEMC to determine the source term of a biological [95] or chemical [96] release. DEMC was used to enable the jump size to adapt itself to the current state of the posterior estimate, relieving the user of the responsibility to specify a reasonable jump size. To reduce the number of expensive dispersion calculation runs, a two-step decision process was used: a proposal was first accepted or rejected based on prior information and, only if accepted, passed to the dispersion model. Unlike much of the related work, the method had a strong focus on operational aspects of emergency response, such as incorporating time-variant data and additional data collected by newly alerted sensors, and removing older data and inferences. The approach used a probabilistic sensor model proposed in [97], based on an analysis of experimental data.

4) Polynomial Chaos Expansion (PCE): Polynomial chaos-based estimation algorithms have received increasing attention in recent research. They arise from an extension of the homogeneous chaos idea developed by Wiener [98] as a non-sampling-based method to determine the evolution of uncertainty in a dynamical system. The main principle of the PCE approach, when applied to inverse problems such as STE, is to expand the random variables using polynomial basis functions; suitably chosen polynomials converge rapidly to a solution of the posterior probability distribution. To manage the difficulties that non-polynomial nonlinearities pose for polynomial chaos integration, Dalbey et al. proposed a formulation known as polynomial chaos quadrature (PCQ) [99]. PCQ replaces the projection step of PCE with numerical quadrature; the resulting method can be viewed as a Monte Carlo evaluation of the system equations with sample points selected by quadrature rules.
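The quadrature idea can be illustrated with a scalar toy problem (our example: a hypothetical exponential sensor response and a Gaussian uncertain release rate; this is not the conjugate unscented transform of [100]): the statistics of the model output are obtained by evaluating the model only at a handful of deterministic quadrature points rather than at random samples.

```python
import numpy as np

# Gauss-Hermite nodes/weights are defined for the weight exp(-x^2);
# rescale them so the rule integrates against a standard normal density.
nodes, weights = np.polynomial.hermite.hermgauss(8)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def forward_model(q):
    # Hypothetical nonlinear map from release rate q to a sensor reading.
    return np.exp(0.1 * q)

# Uncertain release rate q ~ N(5, 0.5^2): the model is evaluated only at
# the 8 deterministic quadrature points, not at random samples.
samples = forward_model(5.0 + 0.5 * z)
mean = np.sum(w * samples)
var = np.sum(w * (samples - mean) ** 2)
```

For this smooth toy model, the 8-point rule recovers the analytic lognormal mean and variance essentially exactly, illustrating why quadrature can need far fewer model runs than Monte Carlo sampling.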

In [49], Madankan et al. used a PCE based minimum variance approach for STE. PCQ was implemented using the conjugate unscented transform method [100] to generate new sampling points from the posterior distribution using the Bayesian framework. The paper compared the performance of PCQ with SMC and an extended Kalman filter (EKF) to determine the source parameters of an atmospheric release using SCIPUFF as the underlying ATD model. It was found that the PCQ technique outperformed the EKF in terms of accuracy and the SMC method in computational speed.

5) Summary on Bayesian inference: Bayesian-based approaches to STE were described in this section. The major benefit of these methods is the output of posterior PDFs, which provide parameter estimates with associated uncertainties or confidence levels. The methods presented implement efficient sampling schemes to determine the source term. The algorithms varied in the source parameters estimated, the specification of the likelihood function, the ATD models used and the schemes employed to improve performance with regard to computational efficiency, solution accuracy and robustness. A range of scenarios has been considered, including varying meteorological information, steady or dynamic plumes, long/short range dispersion events, urban/plain environments and single/multiple releases.

One of the advantages of the Bayesian-based approaches lies in specifying probability distributions of the measured and modelled data. In most cases, these have been assumed to take a Gaussian distribution; in [53], more complex models were derived based on the characteristics of particular sensors and the agent.

Several approaches have been proposed to reduce the computational time of the algorithms, predominantly by reducing the number of ATD model runs. This was achieved via: i) a two-step inference acceptance criterion, so that poor samples are not run through a dispersion model [53]; ii) the adjoint source-receptor relationship [21]; and iii) storing a library of pre-computed ATD simulations. The focus of DEMC and PCQ was on reducing the number of iterations required in an MCMC-like algorithm by generating better inferences.

The event of multiple releases posed a significant problem: methods to determine the number of sources and to correctly characterise them required significantly more computational time. Earlier methods simply ran the original Bayesian algorithms with a specified number of sources and parameters in the parameter space and determined the number that most closely matched the data. Yee [33] determined the number of sources using simulated annealing to move a Markov chain among parameter spaces, and later work used a more efficient model selection method [34].

Upon testing in realistic scenarios or on experimental data, several problems were identified, including the limitations of theoretical/ideal dispersion models (e.g. the Gaussian plume model) and the difficulty of attaining accurate representations of model errors and noise. Yee discovered the significance of the representation of model errors and the loss in accuracy caused by differences between the dispersion model and the real dispersion event [92]. Other limitations included the computational time, despite several improvements to reduce it, the amount of prior information required and the increase in computational cost when more variables are included in the parameter space. Ristic et al. proposed several strategies to overcome these problems, such as: using ABC to account for the fact that it is nearly impossible to know the exact model and sensor errors [36]; using multiple dispersion models to find the most appropriate one for the current scenario [46]; using binary measurements to reduce noise effects and enable the use of cheaper sensors [45]; and using binary sensors with an unknown threshold to account for sensor bias/drift and to allow the easy inclusion of alternative data sources [47].

An example of the limitations of the Gaussian dispersion model was found in [86], where the Gaussian plume dispersion model was unable to accurately estimate the strength of a release from simulated data generated using SCIPUFF. A trade-off is required between the accuracy of the dispersion model and its calculation speed. The difficulty of estimating the strength of the release was highlighted further in [30], where algorithms attempted to estimate the strength of a release from experimental data: among eight different algorithm developers, employing a range of techniques, only a few were able to consistently estimate the strength to within a factor of ten.

C. Discussion on STE

The STE methods examined have been split into optimisation and Bayesian-based approaches. At the end of each subsection, a summary of the techniques was given, discussing innovative ideas and problems found in the literature. Within each section, there was a range of ideas and implementations of the algorithms; in the following, we discuss the application of the general frameworks and describe the key problems found in the STE literature.

The Bayesian methods benefit from producing a final estimate with confidence levels and the fact that prior information can be incorporated into the algorithm with a probability distribution. Any inaccuracies due to modelling errors or sensor noise could be accounted for with appropriate distributions, though these might be difficult to characterise perfectly, in particular, when applied to a real scenario.

The optimisation methods produce a single point estimate of the source parameters. They suffer from their inability to place confidence intervals on any prior information they may use or on the final estimate. In spite of this, the optimisation methods are often less computationally expensive and may converge faster than Bayesian methods. They also benefit from requiring little or no prior information, though more information, where available, can result in better performance.

Incorporating the adjoint source-receptor relationship or back-trajectory methods produces a point estimate of the source by inverting meteorological variables and back-tracking from triggered sensors. The method is very fast but highly dependent on accurate, rich meteorological information and accurate dispersion models. As a technique to obtain an initial estimate to be optimised, it has shown significant performance benefits. The back-trajectory algorithms show how the system can benefit from null sensor readings, as these can be used to narrow down the search space of possible source locations; in other words, they provide more information about where the source is not present. By narrowing down the search space, the accuracy of the source term estimate can be increased significantly and the computational time reduced. A summary of the STE algorithms that have been reviewed is given in Table II, which is accompanied by Table I describing the variables and acronyms that have not been previously defined in the paper. The algorithms described were created for a static network; however, with some modification, most would be applicable to data gathered by mobile sensors.


TABLE I
Variables and Acronyms used in Table II

Variable  Description
x, y, z   Location coordinates, typically downwind, crosswind, height
q         Source strength or release rate
n         Number of sources
t0        Release time
Δt        Release duration
U         Wind speed
θ         Wind direction
ζ         Dispersion model parameters, dependent on the model used
SS        Steady state
LS        Lagrangian stochastic

TABLE II
Summary of STE methods

[Table II could not be recovered intact from the source PDF. For each reviewed STE method it lists: reference, date, plume type (steady-state plume, plume or puff), single or multiple sources, estimation algorithm (MCMC with MH, SA or MS sampling, RJ-MCMC, SMC-MCMC, ABC-SMC, Monte Carlo importance sampling, DEMC, EnKF, generalised polynomial chaos minimum variance, least squares, MRE-PSO, PSM, simulated annealing, GA, GA-NMDS and BFGS), estimated parameters (combinations of x, y, z, q, t, n, u, e, z), domain knowledge and priors (e.g. uniform, Gaussian or informative priors, urban maps, geometry exploitation, parameter bounds), meteorological variables used, and the dispersion model employed (e.g. Gaussian plume/puff, INPUFF, SCIPUFF, advection-diffusion, turbulent diffusion, backward-time Lagrangian stochastic, CFD, HLEPM).]

To summarise the literature in STE, a number of methods produce very good performance in the idealised scenario of little or no noise, a plain flat environment, plenty of sensors and a single source. Difficulties arise when these conditions are not met, which is generally the case in real scenarios. The difficulties found in STE when moving from a theoretical to a realistic setting are common to most research fields. Some of the key issues are listed in Table III. In the following section, the use of mobile sensors to solve atmospheric dispersion problems is reviewed. Mobile sensors provide several benefits that address many of the limitations encountered by static networks.

TABLE III
Key difficulties in STE

Prior knowledge:    meteorological data; parameter space; domain knowledge
Modelling issues:   dispersion modelling accuracy; modelling errors
Sensing:            noise; bias/drift; sampling frequency
Release scenario:   multiple sources; environment; release type
Sensor locations:   not enough triggered sensors; poor sensor locations

IV. Boundary tracking and source estimation using mobile sensors

The use of mobile sensors for STE is a relatively new area of research. It incorporates many of the same research disciplines as static networks for STE, with the addition of sensor movement strategies, cooperation between mobile sensors, and vehicle dynamics. In estimation of environmental plumes, mobile sensors also provide the ability to track the contaminant boundary directly and to perform source seeking. Boundary tracking refers to approaches that direct sensors along a contour of interest. Source seeking refers to guiding sensors towards the location of a source. Both are highly relevant to gaining information in contaminant dispersal events. They can be used as data collection strategies for STE and also for verification of the source term estimate. For this reason, a brief review of boundary tracking and source seeking approaches is presented in Sections IV.A and B, followed by a review of algorithms developed specifically for STE using mobile sensors in Section IV.C. Note that source seeking and source term estimation are treated differently: source seeking attempts to move the sensor towards the source, whereas source term estimation estimates the source position and strength without necessarily attempting to move towards it.

A. Boundary tracking

Boundary tracking algorithms are used to determine the edge of a region. Researchers have explored boundary tracking algorithms to monitor oil spills, algae growth, volcanic ash clouds, contaminant gases and nuclear radiation levels. In the literature, boundary tracking algorithms have taken the form of control approaches [101]-[113] and combined estimation and control approaches [114]-[129], where several estimation techniques have been used to produce more informative trajectories. A major difference among methods lies in the approximations of the concentration field. Most methods use point measurements of the concentration value of the substance provided by sensors on-board mobile robots, and with these measurements, various approximations have been made. Many methods use the point measurement itself [107], [109], [110], [118], [119] or as a binary signal to determine whether the sensor is inside or outside the affected/contaminated region [6], [101]-[105], [116], [117]. Some use an estimate of the gradient or Hessian of the contaminant obtained either through spatially separated simultaneous measurements by multiple collaborating sensors or via consecutive measurements by a single sensor [106], [111], [112], [121], [124]-[129]. Another approach is to estimate the curvature of the boundary; this has been done using several sensors in a formation or by visually estimating the curvature using a camera [7]. The majority of researchers have assumed slow-moving, clearly defined, 2-D boundaries with accurate sensors. Some have attempted to extend the state of the art by researching the effect of sensor noise and studying 3-D boundaries [125]. The remainder of this section provides a brief description of the boundary tracking algorithms found in the literature.

1) Control law:

a) Bang-bang control: Bang-bang control is a simple algorithm which involves switching abruptly between two states. In the case of tracking a boundary, the turning direction of the vehicle is changed upon crossing the contour boundary. Several papers in the literature have researched the use of bang-bang control for tracking an environmental boundary [101]-[106].

Kemp et al. [105] implemented a bang-bang control algorithm that required only a concentration sensor to monitor an underwater perimeter using unmanned underwater vehicles (UUVs). Some drawbacks of the method include: i) with a large crossing angle, the tracking can become very inefficient; ii) noise can cause the UUV to turn the wrong way and fail to track the boundary; and iii) narrow bottlenecks in the boundary may cause sections to be missed. A turning angle correction was proposed by Bertozzi et al. [103] to improve efficiency, and a cumulative sum algorithm was implemented to provide robustness to noise. The turning angle correction was based on the assumption that the boundary between the last two crossing points and beyond was a straight line. In [104], this method was extended to multiple vehicles, where separation was maintained between them by alternating their speed should they come too close to one another. In [6], the authors used a random coverage controller, a collision avoidance controller and a bang-bang angular velocity controller to detect and surround an oil spill. In [102], a bang-bang controller was used to follow contours of a radiation field with an autonomous helicopter. The forward speed of the helicopter was set at the beginning of the test and could be adjusted to adapt to the search area, the desired speed of the search, and the desired accuracy of the finished contour. The applicability of these sensor movement strategies has only been evaluated for static phenomena, or the authors assumed that the movement of the sensing vehicles was much faster than that of the observed phenomenon. In [101], Brink adapted the method in [103] to track the boundary of a dynamic plume in an environment where a low-density static sensor network was installed. An estimate of the plume centre movement was added to the sensors to account for plume dynamics [101].
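The switching logic itself is compact. The sketch below assumes a binary in/out sensor and a circular contaminated region (both hypothetical): the vehicle turns one way while inside the region and the other way while outside, which keeps it weaving along the boundary.

```python
import math

def inside(x, y, r=10.0):
    # Binary sensor: is the vehicle inside the circular contaminated region?
    return x * x + y * y < r * r

def bang_bang_track(x, y, heading, steps=2000, v=0.1, omega=0.2, r=10.0):
    """Bang-bang boundary tracking: switch the turning direction on the
    binary inside/outside signal while advancing at constant speed."""
    path = []
    for _ in range(steps):
        heading += -omega if inside(x, y, r) else omega
        x += v * math.cos(heading)
        y += v * math.sin(heading)
        path.append((x, y))
    return path
```

Starting on the boundary with a tangential heading, the vehicle circulates the region, deviating from the true contour by roughly its turning radius (here v/omega = 0.5), which illustrates the crossing-angle inefficiency noted above.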

b) Sliding mode control: When applied to boundary tracking, sliding mode control [107] is similar to bang-bang control as both methods change the turning direction of the vehicle based on its position relative to the contour. Sliding mode control can produce more efficient tracking as the vehicle turns before exiting/entering the contour. The sliding variable was defined as the difference between the desired/threshold density and the measured density of the contaminant. In [107], a sliding mode control law was used to steer a vehicle to a location where the distribution assumed a pre-specified value and afterwards ensured circulation of the vehicle along this set at the prescribed speed. In simulation, the algorithm tracked a boundary with noise added to the concentration data. In [108], this method was extended to multiple vehicles, where a guidance law that altered the longitudinal speed was used to ensure effective distribution of the team. In [109], a real-world experiment was performed to validate the navigation and guidance algorithms. The experiments showed some robustness to common sources of uncertainty in robotic applications. The chattering effect, common in sliding mode based approaches, was not observed in the experiments. In [110], [130], a sliding mode control algorithm was proposed that allowed a single sensor-enabled agent to navigate along the boundary of a contaminated region. The efficacy of the proposed approach was demonstrated on a realistic example pertaining to synthetic volcanic eruption dispersion data generated by the NAME ATD model [131].
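A minimal sketch of the idea, in the spirit of these works but not reproducing any specific law: the sliding variable s is the gap between the measured and desired concentration, and the turning rate switches on the sign of (s_dot + lam * s), steering the vehicle onto the desired level set. The smooth radial field and all constants are invented for illustration.

```python
import math

def conc(x, y, sigma=5.0):
    # Smooth radial concentration field with a single peak at the origin.
    return math.exp(-(x * x + y * y) / (2 * sigma * sigma))

def sliding_mode_track(x, y, theta, c_star, steps=8000, dt=0.05,
                       v=1.0, gain=2.0, lam=0.1):
    """Sliding-mode contour tracking: drive s = conc - c_star to the
    sliding surface s_dot = -lam * s by switching the turning rate."""
    s_prev = conc(x, y) - c_star
    path = []
    for _ in range(steps):
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
        s = conc(x, y) - c_star
        s_dot = (s - s_prev) / dt          # finite-difference estimate
        theta += -gain * math.copysign(1.0, s_dot + lam * s) * dt
        s_prev = s
        path.append((x, y))
    return path
```

Started outside the desired level set with a tangential heading, the vehicle spirals onto the contour and then circulates it, with only a small chattering amplitude, unlike bang-bang control, because it reacts to the concentration trend before crossing.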

c) Formation control: Based on estimated concentration gradient, Hessian matrix and curvature of the environmental contour line, Zhang and Leonard [111] used a formation of Newtonian particles to track level sets of a field at unitary speed. The desired formation was maintained by a formation shape control law based on Jacobi transform. The Jacobi transform decoupled the dynamics of the formation centre from the dynamics of the formation shape, which allowed separate control laws to be developed. Following a differential geometric approach, steering control laws were developed separately that controlled the formation centre to detect and move to a desired level surface and track a curve on the surface with known curvatures. The particles' relative position changed so that they optimally measured the gradient, and the curvature of the field in the centre of the formation was estimated using data fusion. In [112], [113], the estimates from the cooperative filter were used in a provable convergent motion control law that drove the centre of the formation along level curves of an environmental field. The method was later extended [112] to track a 3-D surface.
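The gradient estimation underpinning such formation control can be sketched generically as a least-squares plane fit to simultaneous measurements at the formation members' positions. This is a standard construction, not the specific cooperative filter of [111]-[113]:

```python
import numpy as np

def estimate_gradient(positions, values):
    """Least-squares plane fit: recover the local concentration gradient
    from simultaneous measurements by spatially separated sensors."""
    P = np.asarray(positions, dtype=float)
    c = np.asarray(values, dtype=float)
    # Model c ~ c0 + g . (p - mean(p)); solve for [c0, gx, gy].
    A = np.column_stack([np.ones(len(P)), P - P.mean(axis=0)])
    coeffs, *_ = np.linalg.lstsq(A, c, rcond=None)
    return coeffs[1:]          # the gradient [dc/dx, dc/dy]
```

Three or more non-collinear sensors suffice in 2-D; larger formations average out sensor noise, which is one motivation for the optimal formation shapes discussed above.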

2) Estimation and control:

a) Approximation of boundaries: In [114], White et al. presented a method of approximating a cloud boundary using a 2-D splinegon defined by a set of vertices linked by segments of constant curvature. The method was inspired by the fact that it is beneficial to be able to express the predicted dispersion of a contaminant cloud in a compact form so that it can be shared among a UAV group with minimal communication overhead and maximum utility in guidance algorithms. Traditional methods of modelling cloud dispersion are computationally expensive and have limited use for directing UAVs. The cloud's behaviour must be expressed in a simplified manner to allow fast algorithms to guide UAVs and track the contaminant. The research in [114] is one of very few methods that estimate the dispersion of the cloud at low computational cost. The splinegon algorithm was tested against contours produced using SCIPUFF and showed a good representation; however, there was some error in predicting the future dispersion of the cloud. The dispersion estimation used a simple linear equation and is a potential area for improvement using more sophisticated estimation techniques. Subchan et al. [115] presented a path planning algorithm comprising Dubins paths and straight lines to guide UAVs to approximate a boundary. Equipped with a relevant sensor, the UAVs recorded the entry and exit points of the cloud. These points were used as vertex data in construction of a splinegon [114] that represented the contaminant cloud. In [116], [117], Sinha et al. proposed two methods for coordinating a group of UAVs to gather the vertex data. In [117], the paths of the UAVs were designed progressively, after every transition through the cloud. A transition ended near the centre of the cloud, where the UAVs negotiated optimum target vertices based on their distance from them. Though efficient, this method presented problems in collision and obstacle avoidance. In [116], each UAV was assigned a sector. It circulated in its sector and updated the location of two neighbouring vertices. This provided collision avoidance among UAVs, and obstacle avoidance was achieved by a simple alteration of the planned path.

b) Model predictive control: In [118], Zhang and Pei used model predictive control (MPC) to track the boundary of an oil spill using a single UAV. Universal Kriging, otherwise known as Gaussian process regression, was used to predict the future state of the system for use in the MPC. The advantage of the Kriging method was that it is an optimal interpolator in the sense that the estimates were unbiased and the minimum variance was known, so that it could relatively accurately construct the environment map. In addition, the advantage of the MPC was its constraint handling capacity. Nonlinear MPC was used to estimate the future states at sampling instants and determine the optimal manoeuvre based on minimising a cost function with control constraints. The cost function was derived from the difference between measured concentration and the desired threshold with a penalty weight added to constrain the angular rate of the vehicle. The method was tested on simulated data based on the advection-diffusion equation which demonstrated the proposed method was feasible and effective; however, this was in the absence of sensor noise and the contaminant boundary was relatively well defined and bounded.

Euler et al. [119] proposed an adaptive sampling strategy to track multiple concentration levels of an atmospheric plume by a team of UAVs. The approach combined uncertainty and correlation-based concentration estimates to generate sampling points based on already gathered data. The adaptive generation of sampling locations was coupled to a distributed MPC for planning optimal vehicle trajectories under collision and communication constraints. The domain area was represented as a grid of discrete cells. Each cell stored a Gaussian distribution defined by the expected concentration value and variance. A vehicle remained at a sampling location for a number of time steps in order to successfully process the sample. A correlation among adjacent measurements was assumed and used to infer information about the concentration at locations surrounding the sampling point. New sampling points were selected based on the maximum variance of reachable positions. Numerical simulation results demonstrated the ability of the method to track a boundary with noise added to the data. The major limitation was in the amount of time taken to generate an estimate of the perimeter, caused by sampling times used to handle noise.
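The variance-driven sampling idea can be sketched as follows; the grid, the exponential correlation model and the reachability radius are simplifications invented for illustration, not the exact scheme of [119]: sample the reachable cell of largest posterior variance, then shrink the variance of correlated neighbouring cells.

```python
import numpy as np

def choose_next(pos, variance, reach=2):
    """Pick the reachable grid cell with the largest posterior variance."""
    best, best_var = pos, -1.0
    rows, cols = variance.shape
    for di in range(-reach, reach + 1):
        for dj in range(-reach, reach + 1):
            i, j = pos[0] + di, pos[1] + dj
            if 0 <= i < rows and 0 <= j < cols and variance[i, j] > best_var:
                best, best_var = (i, j), variance[i, j]
    return best

def update(variance, cell):
    """Measurement update: collapse variance at the sampled cell and
    shrink correlated neighbours (assumed exponential correlation in
    Manhattan distance d: new_var = var * (1 - exp(-d)))."""
    rows, cols = variance.shape
    for i in range(rows):
        for j in range(cols):
            d = abs(i - cell[0]) + abs(j - cell[1])
            variance[i, j] *= 1.0 - np.exp(-d)
    return variance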

c) Support vector learning: Kim et al. [120] used mobile sensors to estimate the boundary of physical events such as oil spills. The boundary estimation problem was set in the form of a classification problem of the region in which the physical events occur. Support vector domain description (SVDD) was employed, which was able to represent boundaries in a mathematical form regardless of the shape. Furthermore, by using the hyper-dimensional radius function obtained from SVDD, a velocity vector field was generated which gave asymptotic convergence to

the boundary with circulation at the desired speed. The desired speed was adjusted to coordinate the mobile sensor so that their intra-vehicular spaces were maximised for efficient estimation of the boundary and fast reaction when the boundary changes. The method was tested in both simulations and experiments though the boundary was clearly defined and bounded with no account for sensor noise. It was noted by the authors [120] that future work would focus on time-varying boundaries and other methods such as the MPC.

d) Optimisation: In [121], Srinivasan and Ramamritham estimated the contour of a specified concentration in a bounded region with mobile sensors. The spatial domain was modelled as a grid and the sensor was assumed to be able to measure the concentration at its current and neighbouring grid points. At each time step, the sensors could remain still or move to a neighbouring point. The contour was tracked by minimising a cost function based on the difference between the desired and measured concentration of pollutant. The ability to minimise the cost function and track the boundary was assessed for three optimisation algorithms: i) the greedy algorithm; ii) simulated annealing; and iii) a newly proposed collaborative algorithm based on minimising centroid distance. It was found that the collaborative method estimated the contour with less error and latency. The method was capable of estimating complex shaped contours though it required a number of assumptions such as: a well-defined closed curve, an interior point known by the sensors, no sensor error, and that the sensor could determine concentrations at its neighbouring grid locations. In [122], Srinivasan et al. improved the method and named it ACE (adaptive contour estimation). The method estimated and exploited information regarding the gradients in the field to move towards the contour. Instead of assuming knowledge of the centroid, the centroid of the contour was estimated based on history of movements, points already traced on the contour and sensor's current locations. A comparison was made among techniques of approaching the contour, including a direct descent algorithm, a spread always algorithm and the newly proposed adaptive algorithm. In ACE, at each step, a sensor decides whether to move towards the contour or spread, (direct descent or spread always). 
A bias parameter was used to determine whether the sensors should spread or approach the contour, and it was computed based on the size of the contour, the spread of the sensors and distance from the contour. In numerical simulations, ACE was shown to significantly reduce latency in contour estimation when compared to directly approaching the contour.
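The greedy variant can be sketched on a synthetic grid field (the field and contour value are invented for illustration): at each step the sensor moves to the neighbouring grid point whose reading is closest to the desired contour concentration.

```python
import math

def field(i, j):
    # Synthetic concentration field over the grid: radial, peak at (25, 25).
    return math.exp(-((i - 25) ** 2 + (j - 25) ** 2) / 200.0)

def greedy_contour_step(pos, c_star):
    """Greedy move: step to the neighbouring grid point whose measured
    concentration is closest to the desired contour value c_star."""
    i, j = pos
    neighbours = [(i + di, j + dj)
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
    return min(neighbours, key=lambda p: abs(field(*p) - c_star))
```

Starting away from the contour, the rule descends onto it within a few steps, but it can then oscillate between adjacent cells rather than circulate, which is the kind of latency the collaborative strategies above were designed to reduce.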

Glow-worm swarm optimisation (GSO) is an algorithm originally proposed in [123] primarily to detect multiple optima of a function and is considered ideal for implementation on multi-robot platforms. It is commonly used for the detection of multiple emission sources. In [124], this method was extended to simultaneously detect multiple emission sources and map the boundary. Subsequently, the methodology was also extended to map 3-D boundaries [125]. The algorithm finds the source by following the gradient until it reaches a maximum; conversely, it finds the boundary by following the gradient in the negative direction until it reaches a threshold concentration. Once on the boundary, the swarm stops moving, and the agents repel one another to prevent clumping. The method was successful in simulations [124] using 150 agents to map a boundary and detect three sources. Although the algorithm performed well, the use of such a large number of agents is not ideal. Other problems include becoming stuck in local minima or maxima when the assumed distribution of the field does not hold.
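A deterministic, single-source simplification of the GSO update can illustrate the mechanism. The probabilistic neighbour selection of [123] is replaced here by a move toward the brightest neighbour, and the field and constants are hypothetical:

```python
import math
import random

def fitness(p):
    # Concentration field with a single emission peak at (5, 5).
    return math.exp(-((p[0] - 5) ** 2 + (p[1] - 5) ** 2) / 8.0)

def gso(agents, iters=200, rho=0.4, gamma=0.6, step=0.1, radius=20.0):
    """Simplified glow-worm swarm optimisation: each agent updates its
    luciferin from the local fitness, then steps toward the brightest
    neighbour within its sensing radius (brightest agent stays put)."""
    luciferin = [0.0] * len(agents)
    for _ in range(iters):
        luciferin = [(1 - rho) * l + gamma * fitness(a)
                     for l, a in zip(luciferin, agents)]
        new = []
        for i, a in enumerate(agents):
            best, best_l = None, luciferin[i]
            for j, b in enumerate(agents):
                d = math.dist(a, b)
                if j != i and 1e-9 < d < radius and luciferin[j] > best_l:
                    best, best_l = b, luciferin[j]
            if best is None:
                new.append(a)
            else:
                d = math.dist(a, best)
                new.append((a[0] + step * (best[0] - a[0]) / d,
                            a[1] + step * (best[1] - a[1]) / d))
        agents = new
    return agents
```

With a negated fitness and a repulsion term, the same update would drive agents onto a threshold contour instead, as in the boundary-mapping extension [124].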

e) Neural networks: Sun et al. [126] proposed a robust wavelet neural network (WNN) control method to address the problem of environmental contour line tracking using a Newtonian particle. It was assumed that each vehicle was able to estimate the concentration value, the gradient and its current location. To track the contour line, a dynamic control law was designed using the vehicle's uncertain dynamics and the Hessian matrix of the environmental concentration function, which was approximated by an on-line learning WNN. The method was analysed using Lyapunov functions, showing accurate tracking of a well-defined, bounded contour line in the absence of sensor noise.

In [127], Sun et al. used a radial basis function neural network (NN) in a similar manner; however, the method was designed for a non-holonomic mobile robot as opposed to a Newtonian particle. A radial basis function NN was used to approximate a non-linear function containing the uncertain model terms and the elements of the Hessian matrix of the environmental concentration function. Then, the NN approximation was combined with robust control to construct a robust adaptive NN controller for the mobile robot to track the desired environment boundary. The method was tested in simulations similar to [126].

f) Model based prediction and control: Li et al. developed a control strategy to track the front of an evolving dynamic plume in a marine environment modelled by the advection-diffusion equation [128]. Instead of using only concentration gradient measurements, the transport and dispersion model was incorporated into the control design. An observer was designed to estimate the dynamic movement of the plume front, and a feedback control law was constructed for a robot to track it. The method was extended to a multi-robot scenario where the control laws were designed to account for a robot team in a nearest-neighbour communication topology. For the single-robot case, the aim was to patrol along the plume front, and for the multi-robot case, the aim was to achieve an even distribution of the robots around the plume front. The methods were tested in simulations without consideration of noise.

In [129], Fahad et al. tested the method presented above in a more realistic environmental model set-up. A probabilistic Lagrangian environmental model was used, which can capture both the time-averaged, idealised structure and the instantaneous, realistic structure of a dynamic plume. The simulation demonstrated how a single robot was capable of patrolling a plume front using the control law designed in [128] where the plume front was noisy and fairly realistic. It was found that the sensor measurement of the concentration and estimation of the gradient and divergence of the concentration were of vital importance to the success of the plume tracking. It was assumed that the sensors were area-level measurement sensors (such as ultraviolet, infra-red, visible band, radar or passive microwave sensors) rather than point detectors (such as chemical sensors). If the sampling radius was reduced to a very small value, the plume concentration had very high variance so that the controller struggled to produce accurate tracking results.

3) Summary: A range of methods have been proposed to track the boundary of environmental fields. The methods vary in their measurements of the field such as binary, concentration values (point measurements), gradients or curvature and also in the types of tracking algorithms used to trace the boundary. The effect of 3-D boundaries, sensor noise, and dynamics has been briefly explored with a large area available for potential improvements. Table IV provides a summary of the boundary tracking methods that have been reviewed.

TABLE IV Boundary tracking summary

Ref Date Boundary type Vehicle Cooperation Measurement approximation Tracking algorithm Boundary estimation

[101] 2014 Cloud UAV NA Binary Bang-Bang NA

[102] 2012 Radiation UAV NA Binary Bang-Bang NA

[103] 2007 Ellipse Robot NA Binary Bang-Bang Optimised Ellipse

[104] 2009 Well defined edge Robot Speed control Binary Bang-Bang NA

[105] 2004 Underwater plume UUV Speed control Binary Bang-Bang NA

[6] 2005 Well defined edge Robot Potential Field Binary Bang-Bang NA

[107] 2011 Radiation Nonholonomic NA Conc Sliding mode NA

[109] 2014 Scalar field Nonholonomic Speed control Conc Sliding mode NA

[110] 2014 Cloud Nonholonomic NA Conc Sliding mode NA

[116], [117] 2008 Cloud UAV Geometrical Binary Geometrical Geometrical Splinegon

[106] 2008 Oil spill UAV Speed control Curvature Polygon

[118] 2014 Oil spill UAV NA Conc MPC Kriging

[119] 2012 Cloud UAV MPC Conc MPC Correlation

[121] 2006 Environmental Agent Est centre Gradient Minimise cost function NA

[124], [125] 2012 Environmental Agent Repel Gradient GSO NA

[126] 2011 Environmental Newtonian NA Gradient Dynamic control law WNN

[127] 2011 Environmental Newtonian NA Gradient Dynamic control law NN

[128] 2014 Cloud USV Geometric Gradient Estimator-controller Transport model

[129] 2015 Cloud USV Geometric Gradient Estimator-controller Lagrangian model

[111], [112] 2011 Scalar field Newtonian Formation Curvature Curve tracking control Curvature by formation

*Conc: Concentration

B. Source seeking

This section explores source seeking algorithms with mobile sensors. The methods aim to localise a source by moving towards it without attempting to estimate other parameters such as the release rate. Although they output less information than STE techniques, source seeking algorithms are still very relevant to the STE problem using mobile sensors. A number of techniques exist, ranging from simple gradient climbing algorithms to more complex techniques that account for sporadic measurements of concentration. As this is not the primary topic of the current work, only a brief overview of source seeking is presented in this paper. A more detailed review, focused on odour source localisation, has been given by Kowadlo and Russell [132], though a lot of research has been conducted in the field since then.

1) Bio-inspired: Chemotaxis is used throughout the literature for source seeking [133], [134]. The method is biologically inspired by the behaviour of a number of organisms (moths, lobsters, E. coli bacteria, dung beetles and blue crabs). Most chemotactic methods focus on climbing a gradient of the concentration value. The gradient is determined by taking measurements of the concentration at spatially separated positions. These methods rely on the assumption that the concentration gradient is consistently positive in the direction of the source; this is often not a valid assumption for atmospheric dispersion due to turbulence.
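The basic gradient-climbing idea can be sketched as follows, assuming an idealised smooth field with no turbulence, which is precisely the condition under which chemotaxis works; in a real turbulent plume the finite-difference gradient below would be unreliable. The field and step sizes are invented for illustration.

```python
import math

def conc(x, y):
    # Idealised smooth concentration field with the source at the origin.
    return math.exp(-(x * x + y * y) / 50.0)

def chemotaxis(x, y, steps=300, delta=0.2, step=0.3):
    """Chemotaxis by finite-difference gradient climbing: sample the
    field at spatially separated points and move up the gradient."""
    for _ in range(steps):
        gx = (conc(x + delta, y) - conc(x - delta, y)) / (2 * delta)
        gy = (conc(x, y + delta) - conc(x, y - delta)) / (2 * delta)
        norm = math.hypot(gx, gy)
        if norm < 1e-9:
            break          # flat gradient: the strategy stalls here
        x += step * gx / norm
        y += step * gy / norm
    return x, y
```

In this smooth field the agent homes in on the source to within its step size; intermittent or noisy readings break the positive-gradient assumption and motivate the probabilistic strategies below.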

Anemotaxis is another method that has been used in the literature [135], [136]. This technique uses knowledge of the motion of the fluid to help find the source. Several researchers have combined chemical concentration and fluid flow measurements to find an odour source. Some techniques include:

• The Zigzag/Dung Beetle method, which involved moving upwind within the odour plume in a zigzagging motion [130], [135]

• Plume-centred upwind search [130], [136]

• Silkworm moth inspired algorithm [133], [137]

Fluxotaxis is a source seeking technique that incorporates fluid and chemical concentration measurements and estimation of the mass flux. Zarzhitsky et al. developed a fluxotaxis algorithm for a swarm, which found the source by climbing the mass flux gradient [138]-[141]. Computational fluid dynamics was used to estimate the average bearing of the flow. The technique outperformed several chemotaxis and anemotaxis methods in simulations, though there was no experimental comparison.

2) Bayesian: Bayesian methods introduced probabilistic robotics to the source localisation problem [142], [143]. In [143], Pang and Farrell modelled the plume using stochastic methods based on Bayesian reasoning. A hidden Markov model (HMM) was used to implement the stochastic approach for plume modelling and predicting the most likely location of a source. The approach was tested in simulations and with experimental data. The global wind field was used to integrate upwind and predict the path of the contaminant. Several other approaches have located a source using the Bayesian framework. Li et al. [144] and Neumann et al. [145] used a particle filter to localise an odour source in outdoor environments. In [146], Vergassola et al. proposed a search strategy based on information theoretic principles, referred to as infotaxis. A measurement strategy was adopted which measured the rate of particle encounters rather than a concentration reading. In a lattice environment, the searcher would determine the move that maximised the expected information gain in the form of entropy reduction or increase in particle encounters. The expectations were based on the information currently available, which was the posterior field. The method capitalised on the fact that the closer to the source, the higher the rate of information acquisition (particle encounters); hence tracking the rate of information acquisition would guide the searcher to the source, similarly to the concentration gradients in chemotaxis. The method could handle situations of sporadic and intermittent concentration information where the chemotaxis algorithms would struggle. The infotaxis search attempts to find a balance between exploring to gain more information and exploiting the information currently available. This method was shown to successfully find the source where the data was intermittent and sporadic.
Following [146], several researchers have studied the efficacy of infotaxis and proposed modifications and extensions [147]-[151].
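The entropy-reduction move selection at the heart of infotaxis can be sketched on a lattice. The binary encounter model below is a hypothetical stand-in for the particle-encounter statistics of [146]: each candidate move is scored by the expected entropy of the posterior over source locations, averaged over the two possible outcomes (encounter or no encounter).

```python
import math
import random

def detect_prob(agent, src, scale=2.0):
    # Assumed probability of a particle encounter in one step.
    return 0.8 * math.exp(-math.dist(agent, src) / scale)

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def bayes_update(belief, agent, hit):
    post = {s: p * (detect_prob(agent, s) if hit else 1 - detect_prob(agent, s))
            for s, p in belief.items()}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def infotaxis_step(belief, agent, n=11):
    """Choose the move minimising the expected posterior entropy over
    the two possible outcomes at the new position."""
    moves = [(agent[0] + dx, agent[1] + dy)
             for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]
    moves = [m for m in moves if 0 <= m[0] < n and 0 <= m[1] < n]

    def expected_entropy(pos):
        p_hit = sum(p * detect_prob(pos, s) for s, p in belief.items())
        return (p_hit * entropy(bayes_update(belief, pos, True))
                + (1 - p_hit) * entropy(bayes_update(belief, pos, False)))

    return min(moves, key=expected_entropy)
```

Run in closed loop (move, observe, Bayes-update), the posterior entropy falls as the searcher accumulates encounters and non-encounters, even when individual detections are rare.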

3) Summary: Source seeking algorithms have featured many techniques whose success depends on the quality of information available to the robot. Gradient climbing methods such as chemotaxis perform well in concentration fields with well-defined gradients; however, in turbulent flows or with a noisy sensor, the gradient does not always lead directly to the source. Several biologically inspired algorithms have been proposed using a combination of chemotaxis and anemotaxis to capitalise on available wind information. Bayesian source seeking algorithms benefit from their probabilistic formulation, enabling a robot to localise a source in stochastic environments with uncertainty in the observations. An interesting measurement strategy was adopted in [146], where the number of particle encounters was used rather than a concentration reading.

C. Source term estimation

STE using mobile sensors is a relatively immature area of research. The increase in performance and decrease in cost of small computers and electronics has made it a more appealing and feasible option than in the past. Mobile sensors can be used independently or in conjunction with static sensors, and can overcome many of the limitations imposed by a static network. It is infeasible to cover all regions of importance with static sensors, particularly with a grid dense enough for STE to be performed before the contaminant has spread significantly. Sensors are expensive, as are their communication network, power supply, maintenance and protective housings. Mobile sensors enable measurements to be taken from more informative locations. This introduces a new area of research to STE, concerning sensor path planning strategies that provide an accurate estimate of the source term in the least amount of time. In the literature, sensor movement strategies for STE include expert systems, where the sensors follow a set of pre-set guidance rules, and information-driven motion control, where the movement of the sensor is based on estimates of the expected information gain. These techniques are described in more detail in the remainder of this section.

1) Pre-planned rules: In [152], Kuroki et al. used an expert system of navigation rules to guide a UAV to determine the strength and location of a contaminant source. Concentration data were collected throughout the flight and used in the GA described in [82] to estimate the source term. The method required a single concentration sensor on the ground to help guide the UAV. The rules guide the UAV to fly towards that sensor, then downwind and crosswind to gather concentration data. In simulations, the estimate was more accurate than that obtained using the GA with an 8x8 grid of static sensors, with less computation required. Tests were performed for both Gaussian plume and puff models. Particular difficulty was found with the puff model, where a large number of UAVs and plume traverses were required to estimate the source location.
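To make the idea of such pre-set guidance rules concrete, the sketch below generates waypoints for a simplified rule set in that spirit: fly to the alerting ground sensor, move downwind of it, then sweep crosswind while drifting downwind to traverse the plume. The rule structure, function name and parameters are illustrative assumptions, not the actual rules of [152].

```python
import math

def preplanned_waypoints(sensor_pos, wind_dir_deg, leg=50.0, n_legs=3):
    """Hypothetical expert-system rule set (illustrative, not that of [152]):
    fly to the alerting ground sensor, move downwind of it, then sweep
    crosswind back and forth while drifting downwind to traverse the plume."""
    wind = math.radians(wind_dir_deg)
    downwind = (math.cos(wind), math.sin(wind))    # unit vector pointing downwind
    crosswind = (-downwind[1], downwind[0])        # perpendicular to the wind
    waypoints = [sensor_pos]                       # rule 1: fly towards the ground sensor
    x, y = sensor_pos
    x, y = x + leg * downwind[0], y + leg * downwind[1]
    waypoints.append((x, y))                       # rule 2: move downwind of the sensor
    sign = 1.0
    for _ in range(n_legs):                        # rule 3: alternating crosswind traverses
        x += sign * leg * crosswind[0] + 0.5 * leg * downwind[0]
        y += sign * leg * crosswind[1] + 0.5 * leg * downwind[1]
        waypoints.append((x, y))
        sign = -sign
    return waypoints
```

In practice the concentration samples logged along such a path would then be passed to the STE algorithm (here, the GA) exactly as static-sensor readings would be.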

Hirst et al. [153] used the Bayesian framework to estimate the location and strength of multiple methane sources with remotely obtained concentration data gathered using an aircraft. The aircraft was flown in a somewhat pre-planned manner, flying consecutive crosswind transects downwind of the source. Concentration measurements were modelled as the sum of a spatially and temporally smooth atmospheric background concentration, augmented by concentrations due to local sources. The underlying dispersion model was a Gaussian plume atmospheric eddy dispersion model. Initial estimates of background concentrations and source emission rates were found using optimisation over a discrete grid of potential source locations. Refined estimates (including uncertainty) of the number, emission rates and locations of sources were then found using a reversible jump MCMC algorithm. Other parameters estimated include the source area, atmospheric background concentrations, and model parameters including the plume spread and Lagrangian turbulence time scale. The method was tested on synthetic and real data. Two real scenarios were considered, the first featuring two landfills in a 1600 km² area and the second a gas flare stack in a 225 km² area. Experiments showed good performance of the algorithms. An interesting feature was an extra source estimated downwind of the actual source, which was attributed to bias in the wind direction data.

2) Informative path planning: An information-guided search strategy can be formulated as a partially observed Markov decision process (POMDP) [154]. This consists of an information state, a set of possible actions and a reward function. With regard to STE, the information state is the current estimate of the source parameters, the set of possible actions are the locations where the robot can move next, and the reward function provides a measure of the amount of information gained by each manoeuvre. The reward function can take several forms, such as the Kullback-Leibler divergence [155] (a measure of relative entropy), the Renyi divergence [156] or the mutual information.

a) Information gain: In [157], Ristic and Gunatilaka presented an algorithm to detect and estimate the location and intensity of a radiological point source. The estimation was carried out in the Bayesian framework using a particle filter, with the sensor motion and radiation exposure time controlled by the algorithm. The search began with a predefined motion until a detection was made, after which control vectors were selected to reduce the observation time, using a multiple-step-ahead maximisation of the Fisher information gain (the Hessian of the Kullback-Leibler divergence). In [158], this was extended to the estimation of multiple point sources using the Renyi divergence between the current and future posterior densities. This enabled decision making by maximum information gain for the entire search duration, regardless of the estimated number of sources. The method was tested on experimental data with one- and two-source scenarios and compared with uniform random and deterministic searches. The information-driven search obtained much more accurate estimates of the location and strength of the source with a similar, slightly shorter search time.

In [5], Ristic et al. presented a method to determine the location of a diffusive source in an unknown environment featuring randomly placed obstacles. The method used a particle filter to simultaneously estimate the source parameters, the map of the search domain and the location of the searcher in the map. The map was represented as a lattice where missing links represented obstacles, and the source was assumed to be located at a node. The gas and the searcher travelled along links in the lattice, and concentration measurements were taken at the nodes. Measurements were drawn from a Poisson distribution to mimic their sporadic nature. The searcher travelled along the grid and stopped at nodes to measure the gas concentration and to determine the existence of neighbouring links (available paths). At each step, the searcher remained at its current node or moved along one link, with movement based on information gain similar to that mentioned previously [158]. Numerical simulations demonstrated the concept with a high rate of success.
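The Poisson measurement step can be illustrated with a minimal Bayes update of particle weights; the function below is a sketch under that assumption only, not the full simultaneous-localisation algorithm of [5].

```python
import math
import numpy as np

def poisson_update(weights, expected_counts, observed):
    """Bayes update of particle weights under a Poisson sensor model: particle i
    predicts a mean particle-encounter rate expected_counts[i] at the current
    node, and `observed` is the integer count actually recorded. Illustrative
    sketch of the measurement step in lattice searches such as [5]."""
    lik = np.array([math.exp(-m) * m ** observed / math.factorial(observed)
                    for m in expected_counts])
    w = np.asarray(weights, dtype=float) * lik
    return w / w.sum()
```

Repeated updates of this form concentrate weight on the source hypotheses whose predicted encounter rates best explain the sporadic counts, even when individual readings are zero.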

In [159], a number of different search strategies based on information-theoretic rewards were compared for determining the location of a diffusive source in turbulent flows. The reward functions compared were the Infotaxic reward, the Infotaxic II reward and the Bhattacharyya distance. The Infotaxic reward is the expected information gain for a single step ahead, defined as the decrement of the entropy, under the assumption that the source location coincides with one of the nodes of the square lattice introduced to restrict the motion of the searcher. The Infotaxic II reward is a slight modification to account for the case where the source may not coincide with a node of the lattice. The Bhattacharyya distance is a particular type of Renyi divergence, which measures the similarity between two densities; in this context, the densities are the posterior distributions at the current time and that expected at the next step. The control is selected based on the maximum reward. The techniques were compared on synthetic and experimental data, implemented using the SMC method. It was found that the ratio between the search and sensing areas was a key factor in performance. With a larger search area, a systematic search such as a parallel sweep outperformed the information-theoretic searches; with a smaller search area, the cognitive strategies were far more efficient. For a smaller search area, the Infotaxic reward also performed slightly worse than the others, which was attributed to its more exploratory behaviour.
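A minimal sketch of the Bhattacharyya-distance reward over a discrete posterior (e.g. over lattice nodes), assuming both densities are represented on the same support; [159] embeds this inside an SMC implementation.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions over the same
    support, e.g. the posterior over candidate source locations now and the
    posterior predicted after a candidate manoeuvre. A larger value means the
    measurement is expected to change the posterior more, i.e. to be more
    informative."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    bc = float(np.sum(np.sqrt(p * q)))   # Bhattacharyya coefficient, 1 when p == q
    return -np.log(bc)
```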

b) Mutual information: In [160], Madankan et al. presented an information-driven sensor movement strategy that attempted to maximise the mutual information between the model output and the data measurements. A combination of generalised polynomial chaos and Bayesian inference was used for data assimilation, similar to the previous work that used static sensors [49]. A sensor movement strategy was created to move a group of UAVs so as to maximise the mutual information between the sequence of observational data and the source parameters over time. To reduce computational complexity, a limited look-ahead policy was used and the optimal positions of the UAVs were chosen individually, meaning the only cooperation among them was to maintain a distance from one another. This approach was compared with a static network approach using synthetic data. The results show significant improvements in accuracy and confidence in the estimates.
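The quantity being maximised can be illustrated with a simple Monte Carlo estimate of the mutual information for a single future measurement; the Gaussian sensor model and all names below are simplifying assumptions and do not reproduce the polynomial chaos machinery of [160].

```python
import numpy as np

def mutual_information(thetas, sensor_pos, signal, noise_sd, rng, n_mc=2000):
    """Monte Carlo estimate of the mutual information I(theta; y) between the
    source parameters and a single measurement taken at sensor_pos, assuming a
    Gaussian sensor model y = signal(theta, pos) + N(0, noise_sd^2). `thetas`
    are samples from the prior/posterior; everything here is illustrative."""
    mus = np.array([signal(th, sensor_pos) for th in thetas])
    total = 0.0
    for _ in range(n_mc):
        th = thetas[rng.integers(len(thetas))]
        y = signal(th, sensor_pos) + rng.normal(0.0, noise_sd)
        # log p(y | theta) and log p(y); the shared Gaussian constant cancels
        log_cond = -0.5 * ((y - signal(th, sensor_pos)) / noise_sd) ** 2
        log_marg = np.log(np.mean(np.exp(-0.5 * ((y - mus) / noise_sd) ** 2)))
        total += log_cond - log_marg
    return total / n_mc
```

A position whose predicted signal does not vary with the source parameters yields zero mutual information, so maximising this quantity drives the UAVs towards measurements that actually constrain the source term.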

3) Summary: The main area of research in mobile sensors for STE has been the development of intelligent motion strategies that maximise the information gained by the sensors. The STE algorithms themselves are similar to those reviewed earlier for static networks. Pre-planned rules have been shown to be capable of moving the sensor to determine the source term, provided there is enough information on the wind and there exists at least one static sensor within the contaminant plume. Informative path planning strategies have featured maximising information in terms of entropy gain and mutual information. In [5], the need to sample from a position for a significant amount of time was highlighted whilst using a Lagrangian stochastic dispersion model, in order to obtain a more accurate concentration estimate from noisy sensor readings. The effect of the search area size was also studied, along with its impact on the performance of reactive and informative search strategies.

Mobile sensors provide an ideal platform for data gathering of atmospheric events. Approaches to perform boundary tracking, source seeking and STE have been summarised. The main limitations of the algorithms presented arise from assumptions that limit their applicability in realistic scenarios, such as: gradient estimation, which is infeasible in turbulent flows where the gradient is not consistent; sensor measurement models in which sampling times are neglected or errors are assumed Gaussian or ignored; static assumptions with regard to the plume; and the availability and certainty of prior information such as the source release rate and meteorological data.

V. Conclusions and future work

This paper has presented the problem and importance of estimating atmospheric dispersion events, a review of STE algorithms using static or mobile sensors, and a brief review of boundary tracking and source seeking.

Static sensors have been the dominant method of STE in the literature, particularly for emergency response applications, arguably due to their benefit of early detection. Despite this, they have a number of limitations when it comes to estimating the source, which have been referred to throughout this paper. The algorithms of STE are relevant for both static and mobile sensors. Mobile sensors reveal new research opportunities given by their mobility. STE algorithms are dominantly iterative, based on probabilistic or optimisation techniques. The iterative behaviour results in high computational demand, and for this reason many researchers have used the simple Gaussian plume equation as the underlying dispersion model. When applied to real data from experiments such as the FFT07 dataset, the loss in accuracy of this model is undeniable. In fact, even complex dispersion models have shown a significant loss of accuracy on real data, with a distinct problem in estimating the release rate. This limitation is one of many that favour the use of mobile sensors, which can provide a boundary or source location estimate without modelling errors. Moreover, much more data are needed than can be provided by a static network; mobile sensors can gather data from more desirable locations and be used to check source estimates by also searching for where the contaminant is not present. Most research into STE has focused on improving existing methods to reduce computational cost, a crucial factor in emergency response. It was found that reducing the search space and a good initial estimate worked best in reducing computation, by decreasing the number of iterations needed and hence the number of dispersion model runs.
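For reference, the Gaussian plume equation that dominates the STE literature fits in a few lines of code, which is precisely what makes it attractive inside iterative estimation loops. The sketch below gives the standard textbook form with a ground-reflection (image source) term; the dispersion coefficients are passed in directly rather than computed from a stability class.

```python
import math

def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
    """Standard Gaussian plume concentration at crosswind offset y and height z
    (both m) for a continuous point source: emission rate Q (g/s), release
    height H (m), mean wind speed u (m/s), and dispersion coefficients sigma_y,
    sigma_z (m) evaluated at the downwind distance of interest. Ground
    reflection is included via the image-source term."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

In an STE loop this function (or a more complex model) is evaluated once per sensor per candidate source term per iteration, which is why the model-run count highlighted above dominates the computational cost.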

Future research could take many directions to improve the state of the art. Prior information, in terms of narrowing down possible source locations, can significantly improve performance, as shown by the results of [39] and [30]. Prior information may be used further with regard to the release time, and possible source locations may be narrowed more finely with levels of uncertainty included to account for errors in meteorological variables. Improvements to sampling techniques, such as adaptive sampling or using prior information to generate better inferences, could significantly reduce the number of iterations required. Dispersion modelling could be improved by applying a multiple-model filter. Computational time has been reduced by applying a two-step acceptance criterion in [53] to reduce the number of expensive dispersion model runs, and it could be reduced further by adding more steps, improving the generation of inferences, or emulating the dispersion model [161], [162]. The effect of poor or varying (or indeed anything other than ideal) meteorological data has received limited attention in the literature, but its effect on STE results will be of high importance and should be studied. Dynamic plumes have also received little attention and will introduce much more difficulty in estimating the source term; variations in temporal concentration readings could provide useful information here.

There is limited research in the area of STE using mobile sensors. In simplified simulations, current approaches have obtained more accurate, less uncertain estimates than static sensors thanks to their ability to sample from more informative locations [49]; however, this benefit has not yet been experimentally validated. Research has focused on optimal information collection strategies. In future research, cooperative multiple-vehicle approaches should be explored and their performance benefit over a single vehicle or a static network analysed. In addition, alternative formulations of the information gain should be researched. It is expected that maximum entropy will provide good results, following the theory that the most information may be gained by sampling from positions


where the least is known. Other extensions follow from STE using static networks, such as incorporating uncertainty in meteorological data and improving the estimation algorithms. Computational complexity will play an especially important role in reducing the idle time of the mobile sensor, so fast-converging sequential algorithms could be explored for faster on-line estimation, such as variational Bayesian inference, use of the adjoint source-receptor relationship and null sensor readings. It will be valuable to investigate the performance trade-off between waiting for an algorithm to converge to an optimal manoeuvre and collecting more information while the algorithm runs with available sub-optimal manoeuvres.

Boundary tracking algorithms have been shown to perform well in simulations under many simplifying assumptions. Future research should focus on tracking boundaries in more complex scenarios that may feature plume splitting, dynamic boundaries and noisy or intermittent sensing. Probabilistic boundary tracking is expected to be one approach that could extend the current state of the art. Other areas of future research should extend the cooperation among mobile sensors, estimate the boundary growth and capitalise on prior information such as meteorological data for more effective tracking.

Source seeking algorithms have been created for various applications, and performance comparisons have been made between reactive and cognitive strategies. The best approach has been shown to depend greatly on the scenario: the type of source, the meteorological conditions and the size of the search domain. Algorithms have been developed that can handle complex scenarios; however, their efficiency can still be improved. Possible areas of future research in this domain include: i) exploration of varying meteorological conditions; ii) the application of probabilistic chemical sensor models; and iii) development of more efficient source seeking systems, either by extension to multiple cooperating vehicles or by developing hybrid approaches that combine the benefits of different strategies, for example an approach that effectively balances exploration and exploitation to handle multiple scenarios.

Unmanned mobile sensor platforms have seen a huge growth in popularity and capability over the past few years. With the reduction in cost and size of electronics and the growth of research, they will soon have applications in a vast range of disciplines. They are the preferred tool for environmental monitoring tasks such as STE as they can sample from optimal positions in the atmosphere without putting humans in harm's way. For emergency response, UAVs provide a particular benefit as they can travel to and within the search area quickly, unobstructed by objects on the ground. Some issues encountered by mobile sensors for environmental monitoring include the need to sample from a position in the atmosphere for a duration of time, and the effect that movement will have on sensing accuracy and the local airflow.

References

[1] A. Gunatilaka, B. Ristic, R. Gailis, On localisation of a radiological point source, in: Information, Decision and Control, 2007. IDC'07, IEEE, 2007, pp. 236-241.

[2] T. J. Yasunari, A. Stohl, R. S. Hayano, J. F. Burkhart, S. Eckhardt, T. Yasunari, Cesium-137 deposition and contamination of Japanese soils due to the Fukushima nuclear accident, Proceedings of the National Academy of Sciences 108 (49) (2011) 19530-19534.

[3] A. Stohl, A. Prata, S. Eckhardt, L. Clarisse, A. Durant, S. Henne, N. I. Kristiansen, A. Minikin, U. Schumann, P. Seibert, et al., Determination of time-and height-resolved volcanic ash emissions and their use for quantitative ash dispersion modeling: the 2010 eyjafjallajokull eruption, Atmospheric Chemistry and Physics 11 (9) (2011) 4333-4351.

[4] S. K. Singh, M. Sharan, J.-P. Issartel, Inverse modelling methods for identifying unknown releases in emergency scenarios: an overview, International Journal of Environment and Pollution 57 (1-2) (2015) 68-91.

[5] B. Ristic, A. Skvortsov, A. Walker, Autonomous search for a diffusive source in an unknown structured environment, Entropy 16 (2) (2014) 789-813.

[6] J. Clark, R. Fierro, Cooperative hybrid control of robotic sensors for perimeter detection and tracking, in: American Control Conference, 2005. Proceedings of the 2005, IEEE, 2005, pp. 3500-3505.

[7] D. W. Casbeer, R. W. Beard, T. W. McLain, S.-M. Li, R. K. Mehra, Forest fire monitoring with multiple small uavs, in: Proceedings of the 2005, American Control Conference, 2005., IEEE, 2005, pp. 3530-3535.

[8] F. Zhang, N. E. Leonard, Cooperative filters and control for cooperative exploration, IEEE Transactions on Automatic Control 55 (3) (2010) 650-663.

[9] D. Marthaler, A. L. Bertozzi, Tracking environmental level sets with autonomous vehicles, in: Recent developments in cooperative control and optimization, Springer, 2004, pp. 317-332.

[10] M. Redwood, Source term estimation and event reconstruction : a survey, Tech. rep., Atmospheric Dispersion Modelling Liaison Committee report : ADMLC-R6 (2011).

[11] K. S. Rao, Source estimation methods for atmospheric dispersion, Atmospheric Environment 41 (33) (2007) 6964-6973.

[12] I. Lagzi, R. Meszaros, G. Gelybo, A. Leelossy, Atmospheric Chemistry, 2013.

[13] K. W. Ragland, Multiple box model for dispersion of air pollutants from area sources, Atmospheric Environment (1967) 7 (11) (1973) 1017 - 1032.

URL http://www.sciencedirect.com/science/article/pii/0004698173902138

[14] C. H. Bosanquet, J. L. Pearson, The spread of smoke and gases from chimneys, Trans. Faraday Soc. 32 (1936) 1249-1263. URL http://dx.doi.org/10.1039/TF9363201249

[15] A. Stohl, C. Forster, A. Frank, P. Seibert, G. Wotawa, Technical note: The lagrangian particle dispersion model flexpart version 6.2, Atmospheric Chemistry and Physics 5 (9) (2005) 2461-2474.

[16] N. S. Holmes, L. Morawska, A review of dispersion modelling and its application to the dispersion of particles: an overview of different dispersion models available, Atmospheric Environment 40 (30) (2006) 5902-5928.

[17] J. A. Havens, T. O. Spicer, Development of an atmospheric dispersion model for heavier-than-air gas mixtures. volume 3. degadis user's manual., Tech. rep., DTIC Document (1985).

[18] D. L. Ermak, User's manual for slab: An atmospheric dispersion model for denser-than-air-releases, Tech. rep., Lawrence Livermore National Lab., CA (USA) (1990).

[19] N. S. Holmes, L. Morawska, A review of dispersion modelling and its application to the dispersion of particles: an overview of different dispersion models available, Atmospheric Environment 40 (30) (2006) 5902-5928.

[20] F. Pasquill, The estimation of the dispersion of windborne material, Meteorol. Mag 90 (1063) (1961) 33-49.

[21] A. Keats, E. Yee, F.-S. Lien, Bayesian inference for source determination with applications to a complex urban environment, Atmospheric environment 41 (3) (2007) 465-479.

[22] J. A. Pudykiewicz, Application of adjoint tracer transport equations for evaluating source parameters, Atmospheric environment 32 (17) (1998) 3039-3050.

[23] S. K. Singh, R. Rani, A least-squares inversion technique for identification of a point release: Application to fusion field trials 2007, Atmospheric Environment 92 (2014) 104-117.

[24] P. E. Bieringer, L. M. Rodriguez, F. Vandenberghe, J. G. Hurst, G. Bieberbach, I. Sykes, J. R. Hannan, J. Zaragoza, R. N. Fry, Automated source term and wind parameter estimation for atmospheric transport and dispersion applications, Atmospheric Environment 122 (2015) 206-.

[25] D. Storwold, Detailed test plan for the fusing sensor information from observing networks (FUSION) field trial 2007 (FFT 07), US Army Dugway Proving Ground West Desert Test Center Doc. WDTC-TP-07-078.

[26] M. J. Brown, D. Boswell, G. Streit, M. Nelson, T. McPherson, T. Hilton, E. R. Pardyjak, S. Pol, P. Ramamurthy, B. Hansen, et al., Joint urban 2003 street canyon experiment, in: 84th AMS Meeting, Paper J, Vol. 7, Citeseer, 2004.

[27] C. Biltoft, E. Yee, C. Jones, Overview of the mock urban setting test (MUST), in: Proceedings of the Fourth Symposium on the Urban Environment, 2002, pp. 20-24.

[28] J. S. Irwin, Atmospheric transport and diffusion data archive, Online. URL http://www.jsirwin.com/Tracer_Data.html

[29] Z. Boybeyi, Comprehensive atmospheric modelling program, Online. URL http://camp.cos.gmu.edu/data_resources_overview.html

[30] N. Platt, D. DeRiggi, Comparative investigation of source term estimation algorithms using fusion field trial 2007 data: linear regression analysis, International Journal of Environment and Pollution 48 (1-4) (2012) 13-21.

[31] E. Yee, Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements, in: Defense and Security Symposium, International Society for Optics and Photonics, 2007, pp. 65540W-65540W.

[32] E. Yee, Bayesian inversion of concentration data for an unknown number of contaminant sources, Tech. rep., DTIC Document (2007).

[33] E. Yee, Validation of a bayesian inferential framework for multiple source reconstruction using fft-07 data, HARMO13-1-4 June 2010, Paris, France-13 th Conference on Harmonisation within Atmospheric Dispersion Modeling for Regulatory Purposes, 2010.

[34] E. Yee, Inverse dispersion for an unknown number of sources: model selection and uncertainty analysis, ISRN Applied Mathematics.

[35] C. Huang, T. Hsing, N. Cressie, A. R. Ganguly, V. A. Protopopescu, N. S. Rao, Bayesian source detection and parameter estimation of a plume model based on sensor network measurements, Applied Stochastic Models in Business and Industry 26 (4) (2010) 331-348.

[36] R. Lane, M. Briers, K. Copsey, Approximate bayesian computation for source term estimation, Mathematics in Defence 2009 .

[37] D. Wade, I. Senocak, Stochastic reconstruction of multiple source atmospheric contaminant dispersion events, Atmospheric Environment 74 (2013) 45-51.

[38] C. T. Allen, S. E. Haupt, G. S. Young, Source characterization with a genetic algorithm-coupled dispersion-backward model incorporating scipuff, Journal of applied meteorology and climatology 46 (3) (2007) 273-287.

[39] A. J. Annunzio, G. S. Young, S. E. Haupt, A multi-entity field approximation to determine the source location of multiple atmospheric contaminant releases, Atmospheric environment 62 (2012) 593-604.

[40] S. K. Singh, R. Rani, Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling, Atmospheric Environment 119 (2015) 402-414.

[41] M. Borysiewicz, A. Wawrzynczak, P. Kopka, Bayesian-based methods for the estimation of the unknown models parameters in the case of the localization of the atmospheric contamination source, Foundations of Computing and Decision Sciences 37 (4) (2012) 253-270.

[42] I. Senocak, N. W. Hengartner, M. B. Short, W. B. Daniel, Stochastic event reconstruction of atmospheric contaminant dispersion using bayesian inference, Atmospheric Environment 42 (33) (2008) 7718-7727.

[43] A. Gunatilaka, B. Ristic, A. Skvortsov, M. Morelande, Parameter estimation of a continuous chemical plume source, in: Information Fusion, 2008 11th International Conference on, IEEE, 2008, pp. 1-8.

[44] B. Ristic, A. Gunatilaka, R. Gailis, Achievable accuracy in parameter estimation of a gaussian plume dispersion model, in: Statistical Signal Processing (SSP), 2014 IEEE Workshop on, IEEE, 2014, pp. 209-212.

[45] B. Ristic, A. Gunatilaka, R. Gailis, Achievable accuracy in Gaussian plume parameter estimation using a network of binary sensors, Information Fusion 25 (2015) 42-48.

[46] B. Ristic, A. Gunatilaka, R. Gailis, A. Skvortsov, Bayesian likelihood-free localisation of a biochemical source using multiple dispersion models, Signal Processing 108 (2015) 13-24.

[47] B. Ristic, A. Gunatilaka, R. Gailis, Localisation of a source of hazardous substance dispersion using binary measurements, Atmospheric Environment 142 (2016) 114-119.

[48] Y. Wang, H. Huang, W. Zhu, Stochastic source term estimation of hazmat releases: algorithms and uncertainty .

[49] R. Madankan, P. Singla, T. Singh, Application of conjugate unscented transform in source parameters estimation, in: American Control Conference (ACC), 2013, IEEE, 2013, pp. 2448-2453.

[50] D. Ma, S. Wang, Z. Zhang, Hybrid algorithm of minimum relative entropy-particle swarm optimization with adjustment parameters for gas source term identification in atmosphere, Atmospheric Environment 94 (2014) 637-646.

[51] L. C. Thomson, B. Hirst, G. Gibson, S. Gillespie, P. Jonathan, K. D. Skeldon, M. J. Padgett, An improved algorithm for locating a gas source using inverse methods, Atmospheric Environment 41 (6) (2007) 1128-1134.

[52] G. Johannesson, B. Hanley, J. Nitao, Dynamic bayesian models via monte carlo-an introduction with examples, Lawrence Livermore National Laboratory, UCRL-TR-207173 .

[53] P. Robins, V. Rapley, N. Green, Realtime sequential inference of static parameters with expensive likelihood calculations, Journal of the Royal Statistical Society: Series C (Applied Statistics) 58 (5) (2009) 641-662.

[54] X. Zheng, Z. Chen, Back-calculation of the strength and location of hazardous materials releases using the pattern search method, Journal of hazardous materials 183 (1) (2010) 474-481.

[55] A. J. Annunzio, G. S. Young, S. E. Haupt, Utilizing state estimation to determine the source location for a contaminant, Atmospheric environment 46 (2012) 580-589.

[56] M. Sharan, J.-P. Issartel, S. K. Singh, P. Kumar, An inversion technique for the retrieval of single-point emissions from atmospheric concentration measurements, in: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, The Royal Society, 2009, pp. rspa-2008.

[57] C. T. Allen, G. S. Young, S. E. Haupt, Improving pollutant source characterization by better estimating wind direction with a genetic algorithm, Atmospheric Environment 41 (11) (2007) 2283-2289.

[58] S. E. Haupt, G. S. Young, C. T. Allen, A genetic algorithm method to assimilate sensor data for a toxic contaminant release, Journal of Computers 2 (6) (2007) 85-93.

[59] S. E. Haupt, G. S. Young, C. T. Allen, Validation of a receptor-dispersion model coupled with a genetic algorithm using synthetic data, Journal of applied meteorology and climatology 45 (3) (2006) 476-490.

[60] K. J. Long, S. Haupt, G. S. Young, L. M. Rodriguez, M. McNeal III, Source term estimation using genetic algorithm and scipuff, in: 7th Conference on Artificial Intelligence and its Applications to the Environmental Sciences, 2009.

[61] M. Sharan, S. K. Singh, J. Issartel, Least square data assimilation for identification of the point source emissions, Pure and applied geophysics 169 (3) (2012) 483-497.

[62] J.-P. Issartel, Rebuilding sources of linear tracers after atmospheric concentration measurements, Atmos. Chem. Phys 3 (2003) 2111-2125.

[63] J. Issartel, Emergence of a tracer source from air concentration measurements, a new strategy for linear assimilation, Atmospheric Chemistry and Physics 5 (1) (2005) 249-273.

[64] G. Turbelin, S. K. Singh, J.-P. Issartel, Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique, Journal of Advances in Modeling Earth Systems 6 (4) (2014) 1244-1255.

[65] J.-P. Issartel, M. Sharan, M. Modani, An inversion technique to retrieve the source of a tracer with an application to synthetic satellite measurements, in: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 463, The Royal Society, 2007, pp. 2863-2886.

[66] M. Sharan, A. K. Yadav, M. Singh, P. Agarwal, S. Nigam, A mathematical model for the dispersion of air pollutants in low wind conditions, Atmospheric Environment 30 (8) (1996) 1209-1220.

[67] M. Sharan, J.-P. Issartel, S. K. Singh, A point-source reconstruction from concentration measurements in low-wind stable conditions, Quarterly Journal of the Royal Meteorological Society 138 (668) (2012) 1884-1894.

[68] S. K. Singh, M. Sharan, J.-P. Issartel, Inverse modelling for identification of multiple-point releases from atmospheric concentration measurements, Boundary-layer meteorology 146 (2) (2013) 277-295.

[69] P. Kumar, A.-A. Feiz, S. K. Singh, P. Ngae, G. Turbelin, Reconstruction of an atmospheric tracer source in an urban-like environment, Journal of Geophysical Research: Atmospheres 120 (24) (2015) 12589-12604.

[70] P. Kumar, S. K. Singh, A.-A. Feiz, P. Ngae, An urban scale inverse modelling for retrieving unknown elevated emissions with building-resolving simulations, Atmospheric Environment 140 (2016) 135-146.

[71] P. Kumar, A.-A. Feiz, P. Ngae, S. K. Singh, J.-P. Issartel, Cfd simulation of short-range plume dispersion from a point release in an urban like environment, Atmospheric Environment 122 (2015) 645-656.

[72] C. G. Broyden, The convergence of a class of double-rank minimization algorithms: 2. the new algorithm, IMA Journal of Applied Mathematics 6 (3) (1970) 222-231.

URL http://imamat.oxfordjournals.org/content/6/3/222.abstract

[73] R. Fletcher, A new approach to variable metric algorithms, The Computer Journal 13 (3) (1970) 317-322. URL http://comjnl.oxfordjournals.org/content/13/3/317.abstract

[74] D. Goldfarb, A family of variable-metric methods derived by variational means, Mathematics of computation 24 (109) (1970) 23-26.

[75] D. F. Shanno, Conditioning of quasi-newton methods for function minimization, Mathematics of computation 24 (111) (1970) 647-656.

[76] W. C. Davidon, Variable metric method for minimization, SIAM Journal on Optimization 1 (1) (1991) 1-17.

[77] X. Zheng, Z. Chen, Inverse calculation approaches for source determination in hazardous chemical releases, Journal of Loss Prevention in the Process Industries 24 (4) (2011) 293-301.

[78] S. Kirkpatrick, Optimization by simulated annealing: Quantitative studies, Journal of statistical physics 34 (5-6) (1984) 975-986.

[79] M. Newman, K. Hatfield, J. Hayworth, P. Rao, T. Stauffer, A hybrid method for inverse characterization of subsurface contaminant flux, Journal of contaminant hydrology 81 (1) (2005) 34-62.

[80] R. B. Goldberg, S. J. Barker, L. Perez-Grau, Regulation of gene expression during plant embryogenesis, Cell 56 (2) (1989) 149-160.

[81] S. E. Haupt, A demonstration of coupled receptor/dispersion modeling with a genetic algorithm, Atmospheric Environment 39 (37) (2005) 7181-7189.

[82] K. J. Long, S. E. Haupt, G. S. Young, Assessing sensitivity of source term estimation, Atmospheric Environment 44 (12) (2010) 1558-1567.

[83] G. Young, J. Limbacher, S. Haupt, A. Annunzio, Back trajectories for hazard origin estimation, in: Seventh Conference on Artificial Intelligence and Its Applications to the Environmental Sciences at AMS Annual Meeting, Phoenix, AZ, January, 2009, pp. 11-15.

[84] D. Ma, J. Deng, Z. Zhang, Comparison and improvements of optimization methods for gas emission source identification, Atmospheric Environment 81 (2013) 188-198.

[85] V. N. Vapnik, Statistical learning theory, Vol. 1, Wiley, New York, 1998.

[86] A. Wawrzynczak, P. Kopka, M. Borysiewicz, Sequential Monte Carlo in Bayesian assessment of contaminant source localization based on the sensors concentration measurements, in: Parallel Processing and Applied Mathematics, Springer, 2013, pp. 407-417.

[87] W. R. Gilks, Markov chain Monte Carlo, Wiley Online Library, 20

[88] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, Equation of state calculations by fast computing machines, The Journal of Chemical Physics 21 (6) (1953) 1087-1092.

[89] E. Marinari, G. Parisi, Simulated tempering: a new Monte Carlo scheme, EPL (Europhysics Letters) 19 (6) (1992) 451.

[90] P. J. Green, Reversible jump Markov chain Monte Carlo computation and Bayesian model determination, Biometrika 82 (4) (1995) 711-732.

[91] E. Yee, I. Hoffman, R. P. Branch, K. Ungar, A. Malo, N. Ek, P. Bourgouin, Bayesian inference for source term estimation: Application to the International Monitoring System radionuclide network, 2014.

[92] E. Yee, I. Hoffman, K. Ungar, Bayesian inference for source reconstruction: A real-world application, International Scholarly Research Notices.

[93] A. Doucet, N. De Freitas, N. Gordon, An introduction to sequential Monte Carlo methods, in: Sequential Monte Carlo Methods in Practice, Springer, 2001, pp. 3-14.

[94] C. J. F. ter Braak, A Markov chain Monte Carlo version of the genetic algorithm differential evolution: easy Bayesian computing for real parameter spaces, Statistics and Computing 16 (3) (2006) 239-249.

[95] P. Robins, V. E. Rapley, P. A. Thomas, Biological source term estimation using particle counters and immunoassay sensors, in: 2006 9th International Conference on Information Fusion, IEEE, 2006, pp. 1-8.

[96] P. Robins, P. Thomas, Non-linear Bayesian CBRN source term estimation, in: 2005 7th International Conference on Information Fusion, Vol. 2, IEEE, 2005, 8 pp.

[97] P. Robins, V. Rapley, P. Thomas, A probabilistic chemical sensor model for data fusion, in: 2005 7th International Conference on Information Fusion, Vol. 2, IEEE, 2005, 7 pp.

[98] N. Wiener, The homogeneous chaos, American Journal of Mathematics 60 (4) (1938) 897-936.

[99] K. Dalbey, A. Patra, E. Pitman, M. Bursik, M. Sheridan, Input uncertainty propagation methods and hazard mapping of geophysical mass flows, Journal of Geophysical Research: Solid Earth 113 (B5).

[100] N. Adurthi, P. Singla, T. Singh, The conjugate unscented transform: an approach to evaluate multi-dimensional expectation integrals, in: American Control Conference (ACC), 2012, IEEE, 2012, pp. 5556-5561.

[101] J. Brink, Boundary tracking and estimation of pollutant plumes with a mobile sensor in a low-density static sensor network, Urban Climate 14 (2015) 383-395.

[102] J. Towler, B. Krawiec, K. Kochersberger, Radiation mapping in post-disaster environments using an autonomous helicopter, Remote Sensing 4 (7) (2012) 1995-2015.

[103] Z. Jin, A. L. Bertozzi, Environmental boundary tracking and estimation using multiple autonomous vehicles, in: Decision and Control, 2007 46th IEEE Conference on, IEEE, 2007, pp. 4918-4923.

[104] A. Joshi, T. Ashley, Y. R. Huang, A. L. Bertozzi, Experimental validation of cooperative environmental boundary tracking with on-board sensors, in: American Control Conference, 2009. ACC'09., IEEE, 2009, pp. 2630-2635.

[105] M. Kemp, A. L. Bertozzi, D. Marthaler, Multi-UUV perimeter surveillance, in: Proceedings of, 2004, pp. 102-107.

[106] S. Susca, F. Bullo, S. Martinez, Monitoring environmental boundaries with a robotic sensor network, Control Systems Technology, IEEE Transactions on 16 (2) (2008) 288-296.

[107] A. S. Matveev, H. Teimoori, A. V. Savkin, Method for tracking of environmental level sets by a unicycle-like vehicle, Automatica 48 (9) (2012) 2252-2261.

[108] K. Ovchinnikov, A. Semakova, A. Matveev, Decentralized multi-agent tracking of unknown environmental level sets by a team of nonholonomic robots, in: Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2014 6th International Congress on, IEEE, 2014, pp. 352-359.

[109] K. Ovchinnikov, A. Semakova, A. Matveev, Cooperative surveillance of unknown environmental boundaries by multiple nonholonomic robots, Robotics and Autonomous Systems 72 (2015) 164-180.

[110] P. P. Menon, C. Edwards, Y. B. Shtessel, D. Ghose, J. Haywood, Boundary tracking using a suboptimal sliding mode algorithm, in: Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, IEEE, 2014, pp. 5518-5523.

[111] F. Zhang, N. E. Leonard, Generating contour plots using multiple sensor platforms, in: SIS, 2005, pp. 309-316.

[112] W. Wu, F. Zhang, Cooperative exploration of level surfaces of three dimensional scalar fields, Automatica 47 (9) (2011) 2044-2051.

[113] F. Zhang, E. Fiorelli, N. E. Leonard, Exploring scalar fields using multiple sensor platforms: Tracking level curves, in: Decision and Control, 2007 46th IEEE Conference on, IEEE, 2007, pp. 3579-3584.

[114] B. A. White, A. Tsourdos, I. Ashokaraj, S. Subchan, R. Zbikowski, Contaminant cloud boundary monitoring using network of UAV sensors, Sensors Journal, IEEE 8 (10) (2008) 1681-1692.

[115] S. Subchan, B. A. White, A. Tsourdos, M. Shanmugavel, R. Zbikowski, Dubins path planning of multiple UAVs for tracking contaminant cloud, in: Proceedings of the 17th World Conference on the International Federation of Automatic Control, Seoul, Korea, 2008, pp. 6-11.

[116] A. Sinha, A. Tsourdos, B. White, Multi-UAV coordination for tracking the dispersion of a contaminant cloud in an urban region, European Journal of Control 15 (3) (2009) 441-448.

[117] A. Sinha, A. Tsourdos, B. White, Multi-UAV negotiation for coordinated tracking of contaminant cloud, in: Control Conference (ECC), 2009 European, IEEE, 2009, pp. 109-114.

[118] C. Zhang, H. Pei, Oil spills boundary tracking using universal kriging and model predictive control by UAV, in: Intelligent Control and Automation (WCICA), 2014 11th World Congress on, IEEE, 2014, pp. 633-638.

[119] J. Euler, A. Horn, D. Haumann, J. Adamy, O. Stryk, Cooperative n-boundary tracking in large scale environments, in: Mobile Adhoc and Sensor Systems (MASS), 2012 IEEE 9th International Conference on, IEEE, 2012, pp. 1-6.

[120] W. Kim, D. Kwak, H. J. Kim, Joint detection and tracking of boundaries using cooperative mobile sensor networks, in: Robotics and Automation (ICRA), 2013 IEEE International Conference on, IEEE, 2013, pp. 889-894.

[121] S. Srinivasan, K. Ramamritham, Contour estimation using collaborating mobile sensors, in: Proceedings of the 2006 workshop on Dependability issues in wireless ad hoc networks and sensor networks, ACM, 2006, pp. 73-82.

[122] S. Srinivasan, K. Ramamritham, P. Kulkarni, Ace in the hole: Adaptive contour estimation using collaborating mobile sensors, in: Information Processing in Sensor Networks, 2008. IPSN'08. International Conference on, IEEE, 2008, pp. 147-158.

[123] K. Krishnanand, D. Ghose, Detection of multiple source locations using a glowworm metaphor with applications to collective robotics, in: Swarm Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE, IEEE, 2005, pp. 84-91.

[124] P. P. Menon, D. Ghose, Simultaneous source localization and boundary mapping for contaminants, in: American Control Conference (ACC), 2012, IEEE, 2012, pp. 4174-4179.

[125] P. Menon, D. Ghose, Boundary mapping of 3-dimensional regions, in: American Control Conference (ACC), 2013, IEEE, 2013, pp. 2984-2989.

[126] T. Sun, H. Pei, Y. Pan, C. Zhang, Robust wavelet network control for a class of autonomous vehicles to track environmental contour line, Neurocomputing 74 (17) (2011) 2886-2892.

[127] T. Sun, H. Pei, Y. Pan, C. Zhang, Robust adaptive neural network control for environmental boundary tracking by mobile robots, International Journal of Robust and Nonlinear Control 23 (2) (2013) 123-136.

[128] S. Li, Y. Guo, B. Bingham, Multi-robot cooperative control for monitoring and tracking dynamic plumes, in: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE, 2014, pp. 67-73.

[129] M. Fahad, N. Saul, Y. Guo, B. Bingham, Robotic simulation of dynamic plume tracking by unmanned surface vessels, in: Robotics and Automation (ICRA), 2015 IEEE International Conference on, IEEE, 2015, pp. 2654-2659.

[130] R. Sykes, D. Henn, S. Parker, R. Gabruk, SCIPUFF - a generalized hazard dispersion model, Tech. rep., American Meteorological Society, Boston, MA (United States) (1996).

[131] A. Jones, D. Thomson, M. Hort, B. Devenish, The UK Met Office's next-generation atmospheric dispersion model, NAME III, in: Air Pollution Modeling and its Application XVII, Springer, 2007, pp. 580-589.

[132] G. Kowadlo, R. A. Russell, Robot odor localization: a taxonomy and survey, The International Journal of Robotics Research 27 (8) (2008) 869-894.

[133] L. Marques, A. T. De Almeida, Electronic nose-based odour source localization, in: Advanced Motion Control, 2000. Proceedings. 6th International Workshop on, IEEE, 2000, pp. 36-40.

[134] L. Marques, N. Almeida, A. De Almeida, Olfactory sensory system for odour-plume tracking and localization, in: Sensors, 2003. Proceedings of IEEE, Vol. 1, IEEE, 2003, pp. 418-423.

[135] H. Ishida, K.-i. Suetsugu, T. Nakamoto, T. Moriizumi, Study of autonomous mobile sensing system for localization of odor source using gas sensors and anemometric sensors, Sensors and Actuators A: Physical 45 (2) (1994) 153-157.

[136] R. A. Russell, Laying and sensing odor markings as a strategy for assisting mobile robot navigation tasks, IEEE Robotics & Automation Magazine 2 (3) (1995) 3-9.

[137] R. A. Russell, A. Bab-Hadiashar, R. L. Shepherd, G. G. Wallace, A comparison of reactive robot chemotaxis algorithms, Robotics and Autonomous Systems 45 (2) (2003) 83-97.

[138] D. Zarzhitsky, D. F. Spears, Swarm approach to chemical source localization, in: Systems, Man and Cybernetics, 2005 IEEE International Conference on, Vol. 2, IEEE, 2005, pp. 1435-1440.

[139] D. Zarzhitsky, D. Spears, D. Thayer, W. Spears, Agent-based chemical plume tracing using fluid dynamics, in: Formal Approaches to Agent-Based Systems, Springer, 2004, pp. 146-160.

[140] D. Zarzhitsky, D. F. Spears, W. M. Spears, D. R. Thayer, A fluid dynamics approach to multi-robot chemical plume tracing, in: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3, IEEE Computer Society, 2004, pp. 1476-1477.

[141] D. Zarzhitsky, D. F. Spears, W. M. Spears, Swarms for chemical plume tracing, in: Swarm Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE, IEEE, 2005, pp. 249-256.

[142] J. A. Farrell, S. Pang, W. Li, R. Arrieta, Chemical plume tracing experimental results with a REMUS AUV, in: OCEANS 2003. Proceedings, Vol. 2, IEEE, 2003, pp. 962-968.

[143] S. Pang, J. A. Farrell, Chemical plume source localization, Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 36 (5) (2006) 1068-1080.

[144] J.-G. Li, Q.-H. Meng, Y. Wang, M. Zeng, Odor source localization using a mobile robot in outdoor airflow environments with a particle filter algorithm, Autonomous Robots 30 (3) (2011) 281-292.

[145] P. P. Neumann, V. Hernandez Bennetts, A. J. Lilienthal, M. Bartholmai, J. H. Schiller, Gas source localization with a micro-drone using bio-inspired and particle filter-based algorithms, Advanced Robotics 27 (9) (2013) 725-738.

[146] M. Vergassola, E. Villermaux, B. I. Shraiman, 'Infotaxis' as a strategy for searching without gradients, Nature 445 (7126) (2007) 406-409.

[147] J. Masson, M. B. Bechet, M. Vergassola, Chasing information to search in random environments, Journal of Physics A: Mathematical and Theoretical 42 (43) (2009) 434009.

[148] E. M. Moraud, D. Martinez, Effectiveness and robustness of robot infotaxis for searching in dilute conditions, Frontiers in Neurorobotics 4 (1) (2010) 1-8.

[149] C. Barbieri, S. Cocco, R. Monasson, On the trajectories and performance of infotaxis, an information-based greedy search algorithm, EPL (Europhysics Letters) 94 (2) (2011) 20005.

[150] N. Voges, A. Chaffiol, P. Lucas, D. Martinez, Reactive searching and infotaxis in odor source localization, PLoS Comput Biol 10 (10) (2014) e1003861.

[151] H. Hajieghrary, M. A. Hsieh, I. B. Schwartz, Multi-agent search for source localization in a turbulent medium, Physics Letters A 380 (20) (2016) 1698-1705.

[152] Y. Kuroki, G. S. Young, S. E. Haupt, UAV navigation by an expert system for contaminant mapping with a genetic algorithm, Expert Systems with Applications 37 (6) (2010) 4687-4697.

[153] B. Hirst, P. Jonathan, F. G. del Cueto, D. Randell, O. Kosut, Locating and quantifying gas emission sources using remotely obtained concentration data, Atmospheric Environment 74 (2013) 141-158.

[154] L. P. Kaelbling, M. L. Littman, A. R. Cassandra, Planning and acting in partially observable stochastic domains, Artificial Intelligence 101 (1-2) (1998) 99-134. URL http://www.sciencedirect.com/science/article/pii/S000437029800023X

[155] S. Kullback, R. A. Leibler, On information and sufficiency, The Annals of Mathematical Statistics.

[156] A. Rényi, On measures of entropy and information.

[157] B. Ristic, A. Gunatilaka, Information driven localisation of a radiological point source, Information Fusion 9 (2) (2008) 317-326.

[158] B. Ristic, M. Morelande, A. Gunatilaka, Information driven search for point sources of gamma radiation, Signal Processing 90 (4) (2010) 1225-1239.

[159] B. Ristic, A. Skvortsov, A. Gunatilaka, A study of cognitive strategies for an autonomous search, Information Fusion 28 (2016) 1-9.

[160] R. Madankan, P. Singla, T. Singh, Optimal information collection for source parameter estimation of atmospheric release phenomenon, in: American Control Conference (ACC), 2014, IEEE, 2014, pp. 604-609.

[161] P. M. Tagade, B.-M. Jeong, H.-L. Choi, A Gaussian process emulator approach for rapid contaminant characterization with an integrated multizone-CFD model, Building and Environment 70 (2013) 232-244.

[162] M. C. Kennedy, A. O'Hagan, Bayesian calibration of computer models, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (3) (2001) 425-464.

[163] R. I. Sykes, S. Parker, D. Henn, C. Cerasoli, L. Santos, PC-SCIPUFF version 1.2PD technical documentation, ARAP Rep 718 (180) (1998) 08543-2229.

[164] A. J. Cimorelli, S. G. Perry, A. Venkatram, J. C. Weil, R. J. Paine, W. D. Peters, AERMOD: description of model formulation.

[165] C. McHugh, D. Carruthers, H. Edmunds, ADMS-Urban: an air quality management system for traffic, domestic and industrial pollution, International Journal of Environment and Pollution 8 (3-6) (1997) 666-674.

[166] N. Hazon, G. A. Kaminka, On redundancy, efficiency, and robustness in coverage for multiple robots, Robotics and Autonomous Systems 56 (12) (2008) 1102-1114.

[167] S. Rutishauser, N. Correll, A. Martinoli, Collaborative coverage using a swarm of networked miniature robots, Robotics and Autonomous Systems 57 (5) (2009) 517-525.

[168] N. Agmon, N. Hazon, G. A. Kaminka, Constructing spanning trees for efficient multi-robot coverage, in: Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, IEEE, 2006, pp. 1698-1703.

[169] H. Choset, Coverage for robotics - a survey of recent results, Annals of mathematics and artificial intelligence 31 (1-4) (2001) 113-126.

[170] M. H. Jaward, D. Bull, N. Canagarajah, Sequential Monte Carlo methods for contour tracking of contaminant clouds, Signal Processing 90 (1) (2010) 249-260.

[171] A. M. Lyapunov, The general problem of the stability of motion, International Journal of Control 55 (3) (1992) 531-534.

[172] Z. Cheng, P. Hailong, Tracking oil spills boundary using universal kriging and barrier method by UAV.

[173] T. Sun, C. Zhang, H. Pei, Lyapunov-based environmental boundary tracking control of mobile robots, in: Networking, Sensing and Control (ICNSC), 2012 9th IEEE International Conference on, IEEE, 2012, pp. 340-345.

[174] K. Dantu, G. S. Sukhatme, Detecting and tracking level sets of scalar fields using a robotic sensor network, in: Robotics and Automation, 2007 IEEE International Conference on, IEEE, 2007, pp. 3665-3672.

[175] E. Yee, Source reconstruction: a statistical mechanics perspective, International Journal of Environment and Pollution 48 (1-4) (2012) 203-213.

[176] A. Keats, E. Yee, F.-S. Lien, Information-driven receptor placement for contaminant source determination, Environmental Modelling & Software 25 (9) (2010) 1000-1013.

[177] E. Yee, An operational implementation of a CBRN sensor-driven modeling paradigm for stochastic event reconstruction, Tech. rep., DTIC Document (2010).

[178] E. Yee, T. K. Flesch, Inference of emission rates from multiple sources using Bayesian probability theory, Journal of Environmental Monitoring 12 (3) (2010) 622-634.

[179] E. Yee, F.-S. Lien, A. Keats, R. D'Amours, Bayesian inversion of concentration data: Source reconstruction in the adjoint representation of atmospheric diffusion, Journal of Wind Engineering and Industrial Aerodynamics 96 (10) (2008) 1805-1816.

[180] E. Yee, F. Lien, W. Keats, K. Hsieh, R. D'Amours, Validation of Bayesian inference for emission source distribution reconstruction using the Joint Urban 2003 and European tracer experiments, in: Fourth International Symposium on Computational Wind Engineering (CWE2006), Yokohama, Japan, 2006.

[181] A. Keats, F.-S. Lien, E. Yee, Source determination in built-up environments through Bayesian inference with validation using the MUST array and Joint Urban 2003 tracer experiments, in: Proc 14th Annual Conference of the Computational Fluid Dynamics Society of Canada, July, 2006, pp. 16-18.

[182] W. A. Keats, Bayesian inference for source determination in the atmospheric environment.

[183] E. Yee, Theory for reconstruction of an unknown number of contaminant sources using probabilistic inference, Boundary-Layer Meteorology 127 (3) (2008) 359-394.

[184] T. J. Loredo, Bayesian adaptive exploration, arXiv preprint astro-ph/0409386.

[185] E. Yee, A Bayesian approach for reconstruction of the characteristics of a localized pollutant source from a small number of concentration measurements obtained by spatially distributed electronic noses, in: Russian-Canadian Workshop on Modeling of Atmospheric Dispersion of Weapon Agents, Karpov Institute of Physical Chemistry, Moscow, Russia, 2006.

[186] E. Yee, Probability theory as logic: Data assimilation for multiple source reconstruction, Pure and applied geophysics 169 (3) (2012) 499-517.

[187] B. Ristic, A. Skvortsov, A. Walker, Autonomous search for a diffusive source in an unknown structured environment, Entropy 16 (2) (2014) 789-813.

[188] E. Yee, A. Gunatilaka, B. Ristic, Comparison of two approaches for detection and estimation of radioactive sources, ISRN Applied Mathematics 2011.

[189] M. R. Morelande, A. Skvortsov, Radiation field estimation using a Gaussian mixture, in: Information Fusion, 2009. FUSION'09. 12th International Conference on, IEEE, 2009, pp. 2247-2254.

[190] M. Morelande, B. Ristic, A. Gunatilaka, Detection and parameter estimation of multiple radioactive sources, in: Information Fusion, 2007 10th International Conference on, IEEE, 2007, pp. 1-7.

[191] K. S. Rao, Source estimation methods for atmospheric dispersion, Atmospheric Environment 41 (33) (2007) 6964-6973.

[192] P. Tricard, S. Fang, J. Wang, H. Li, J. Qu, J. Tong, D. Fang, Fast on-line source term estimation of non-constant releases in nuclear accident scenario using extended Kalman filter, in: 2013 21st International Conference on Nuclear Engineering, American Society of Mechanical Engineers, 2013, pp. V003T06A004-V003T06A004.

[193] X. Zhang, J. Chen, G. Su, H. Yuan, Study on source inversion technology for nuclear accidents based on Gaussian puff model and EnKF, in: The 10th International ISCRAM Conference, 2013, pp. 634-639.

[194] M. Yuanwei, W. Dezhong, Source term estimation based on environmental radiation data in Qinshan nuclear power plant of China.

[195] M. Drews, B. Lauritzen, H. Madsen, Analysis of a Kalman filter based method for on-line estimation of atmospheric dispersion parameters using radiation monitoring data, Radiation protection dosimetry 113 (1) (2005) 75-89.

[196] X. Zhang, G. Su, H. Yuan, J. Chen, Q. Huang, Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: Prediction improved and source estimated, Journal of hazardous materials 280 (2014) 143-155.

[197] X. Zhang, G. Su, J. Chen, W. Raskob, H. Yuan, Q. Huang, Iterative ensemble Kalman filter for atmospheric dispersion in nuclear accidents: An application to Kincaid tracer experiment, Journal of hazardous materials 297 (2015) 329-339.

[198] X. Zhang, Q. Li, G. Su, M. Yuan, Ensemble-based simultaneous emission estimates and improved forecast of radioactive pollution from nuclear power plant accidents: application to ETEX tracer experiment, Journal of environmental radioactivity 142 (2015) 78-86.

[199] M. Drews, B. Lauritzen, H. Madsen, J. Q. Smith, Kalman filtration of radiation monitoring data from atmospheric dispersion of radioactive materials, Radiation protection dosimetry 111 (3) (2004) 257-269.

[200] V. Winiarek, M. Bocquet, O. Saunier, A. Mathieu, Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, Journal of Geophysical Research: Atmospheres 117 (D5).

[201] X. Davoine, M. Bocquet, Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport, Atmospheric Chemistry and Physics 7 (6) (2007) 1549-1564.

[202] M. Bocquet, Inverse modelling of atmospheric tracers: non-Gaussian methods and second-order sensitivity analysis, Nonlinear Processes in Geophysics 15 (1) (2008) 127-143.

[203] V. Winiarek, J. Vira, M. Bocquet, M. Sofiev, O. Saunier, Towards the operational estimation of a radiological plume using data assimilation after a radiological accidental atmospheric release, Atmospheric environment 45 (17) (2011) 2944-2955.

[204] M. R. Koohkan, M. Bocquet, L. Wu, M. Krysta, Potential of the International Monitoring System radionuclide network for inverse modelling, Atmospheric environment 54 (2012) 557-567.

[205] M. Krysta, M. Bocquet, J. Brandt, Probing ETEX-II data set with inverse modelling, Atmospheric Chemistry and Physics 8 (14) (2008) 3963-3971.

[206] R. Madankan, P. Singla, A. Patra, M. Bursik, J. Dehn, M. Jones, M. Pavolonis, B. Pitman, T. Singh, P. Webley, Polynomial chaos quadrature-based minimum variance approach for source parameters estimation, Procedia Computer Science 9 (2012) 1129-1138.

[207] R. Madankan, P. Singla, T. Singh, P. D. Scott, Polynomial-chaos-based Bayesian approach for state and parameter estimations, Journal of Guidance, Control, and Dynamics 36 (4) (2013) 1058-1074.

[208] Y. Kuroki, G. S. Young, S. E. Haupt, UAV navigation by an expert system for contaminant mapping with a genetic algorithm, Expert Systems with Applications 37 (6) (2010) 4687-4697.

[209] L. M. Rodriguez, S. E. Haupt, G. S. Young, Impact of sensor characteristics on source characterization for dispersion modeling, Measurement 44 (5) (2011) 802-814.

[210] A. J. Annunzio, S. E. Haupt, G. S. Young, 7B.3 Methods of mitigating uncertainty in contaminant dispersion in a turbulent flow: Data assimilation vs. multisensor data fusion.

[211] M. A. Rege, R. W. Tock, A simple neural network for estimating emission rates of hydrogen sulfide and ammonia from single point sources, Journal of the Air & Waste Management Association 46 (10) (1996) 953-962.

[212] V. Smidl, R. Hofman, Tracking of atmospheric release of pollution using unmanned aerial vehicles, Atmospheric Environment 67 (2013) 425-436.

[213] R. K. Williams, G. S. Sukhatme, Probabilistic spatial mapping and curve tracking in distributed multi-agent systems, in: Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE, 2012, pp. 1125-1130.

[214] A. D. Woodbury, Minimum relative entropy, Bayes and Kapur, Geophysical Journal International 185 (1) (2011) 181-189.

[215] R. Addis, G. Fraser, F. Girardi, G. Graziani, Y. Inoue, N. Kelly, W. Klug, A. Kulmala, K. Nodop, J. Pretel, et al., ETEX: a European tracer experiment; observations, dispersion modelling and emergency response, Atmospheric Environment 32 (24) (1998) 4089-4094.

[216] C. Persson, H. Rodhe, L.-E. De Geer, The Chernobyl accident: A meteorological analysis of how radionuclides reached and were deposited in Sweden,