Real Time Mapping and Dynamic Navigation for Mobile Robots
Maki K. Habib
Graduate School of Science and Engineering, Saga University, Japan maki@ieee.org
Abstract: This paper discusses the importance, the complexity and the challenges of mapping a mobile robot's unknown and dynamic environment, besides the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing dynamic navigation systems for mobile robots. The paper presents the state of the art in map building and localization for mobile robots navigating within unknown environments, and then introduces a solution to the complex problem of autonomous map building and maintenance, with a focus on developing an incremental grid-based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally, using the concept of occupancy grids and the fusion of multiple ultrasonic sensory readings, while wandering in it and staying away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in a parallel and distributed framework. Simulation-based experiments have been conducted and are illustrated to show the validity of the developed mapping and obstacle avoidance approach.
Keywords: Mapping, Real-Time Navigation, Mobile Robot, Dynamic Environment, Localization, SLAM, Obstacle Avoidance, Topology, Occupancy Grid, Uncertainty, Unknown Environment.
1. Introduction
An autonomous mobile robot is required to wander around and explore its environment without colliding with any obstacles, in order to fulfill its mission by successfully executing an assigned task, and to survive by finding energy sources and avoiding hazards. To efficiently carry out complex missions, autonomous robots need to learn and maintain a model of their environment. The knowledge acquired through learning is used to build an internal representation. Knowledge differs from information in that it is structured in long-term memory and is the outcome of learning. In order to enable an autonomous mobile robot to navigate in an unknown or changing environment and to update in real time the existing knowledge of the robot's surroundings, it is important to have an adaptable representation of such knowledge and to maintain a dynamic model of its environment. Navigation in unknown or partially unknown environments remains one of the biggest challenges for today's autonomous mobile robots. Mobile robot dynamic navigation, perception, modeling, localization, and mapping of the robot's environment have been central research topics in the field of developing robust and reliable navigation approaches for autonomous mobile robots. To efficiently carry out complex missions in indoor environments, autonomous mobile robots must be able to
acquire and maintain models of their environments. Robotic mapping addresses the problem of acquiring spatial models of physical environments through mobile robots, and it is generally regarded as one of the most important problems in the pursuit of building truly autonomous mobile robots. Acquiring and mapping unstructured, dynamic, or large-scale environments remains largely an open research problem (Kuipers & Byun, 1991; Thrun & Bucken, 1996; Murphy, 2000; Thrun, 2002). Building maps of unknown and dynamic environments is an essential problem in robotics and requires taking care of connected problems other than mapping itself, such as localization, sensor uncertainty, obstacle avoidance and real-time navigation. There are many factors imposing practical limitations on a robot's ability to learn and use accurate models. The availability of efficient mapping systems to produce accurate representations of initially unknown environments is undoubtedly one of the main requirements for autonomous mobile robots.
A key component of this task is the robot's ability to ascertain its location in the partially explored map or to determine that it has entered new territory. Accurate localization is a prerequisite for building a good map, and having an accurate map is essential for good localization (Se et al., 2002; Choset, 2001). All robots that do not use pre-placed landmarks or GPS must employ a localization algorithm while mapping an unknown space. Therefore,
accurate simultaneous localization and mapping (SLAM) represents a critical factor for successful mobile robot dynamic navigation in a large and complex environment because it enables the robot to function autonomously, intelligently, purposefully, and robustly. The term SLAM was first coined by Leonard and Durrant-Whyte (Leonard & Durrant-Whyte, 1991) to describe a technique used by robots and autonomous vehicles to build up a map within an unknown environment while at the same time keeping track of their current position. This technique has attracted immense attention in the mobile robotics literature and has been applied successfully by many researchers (Leonard & Durrant-Whyte, 1991; Se et al., 2002; Choset & Nagatani, 2001). SLAM has not yet been fully perfected, but it is starting to be employed in unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, and newly emerging domestic robots. All the numerous methods proposed in the literature are based on some sort of incremental integration: a newly acquired partial map is integrated with the old maps. To integrate the partial map obtained at each sensing step into the global map of the environment, the localization of the robot is fundamental. To perform localization, both the robot's pose and the obstacle positions need to be estimated.
Map building in an unknown and dynamic environment has been under study for a long time and many different approaches have been developed and evaluated (Borenstien & Koren, 1991a; Thrun & Bucken, 1996; Singhal, 1997; Borenstien & Ulrich, 1998; Murphy, 2000; Ellore, 2002). Other important issues related to navigation of an autonomous mobile robot are the need to deal with moving obstacles/objects, and fusing sensory information from multiple heterogeneous and/or homogeneous sensors. These issues usually cannot be resolved through the use of conventional navigation techniques. During real time simultaneous map building and localization, the robot is incrementally conducting distance measurements. At any iteration of map building the measured distance and direction traveled will have a slight inaccuracy, and then any features being added to the map will contain corresponding errors. If unchecked, these positional errors build cumulatively, grossly distorting the map and therefore affect the robot's ability to know its precise location. One of the greatest difficulties of map building arises from the nature of the inaccuracies and uncertainties in terms of noise in sensor measurements, which often lead to inaccurate maps. If the noise in different measurements is statistically independent, a robot can simply take multiple measurements to cancel out the effects of the noise. But, the measurement errors are statistically dependent due to odometry errors that accumulate over time and affect the way that future sensor measurements are interpreted. Small odometry errors can have large effects on later position estimates. There are various techniques to compensate for this such as recognizing features that the
robot has come across previously and re-skewing recent parts of the map to make sure the two instances of that feature become one. For the last decade, the field of robot mapping has been dominated by probabilistic techniques for simultaneously solving the mapping problem and the induced problem of localizing the robot relative to its growing map, and, accordingly, different approaches have evolved. The first category includes approaches that employ the Kalman filter to estimate the map and the robot location (Lu & Milios, 1997; Castellanos & Tardos, 1999; Thrun, 2002). Another approach is based on Dempster's expectation maximization algorithm (Thrun, 2001; Thrun, 2002). This category specifically addresses the correspondence problem in mapping, which is the problem of determining whether sensor measurements recorded at different points in time correspond to the same physical entity in the real world. The Extended Kalman Filter (EKF) has been the de facto approach to the SLAM problem. However, the EKF has two serious deficiencies that prevent it from being applied to large, real-world environments: quadratic complexity and sensitivity to failures in data association. An alternative approach called Fast-SLAM is based on the Rao-Blackwellized Particle Filter, and can scale logarithmically with the number of landmarks in the map (Montemerlo & Thrun, 2003). The other category of approaches seeks to identify objects and landmarks in the environment, which may correspond to ceilings, walls, doors, furniture and other objects that move. Over the last two decades, tremendous progress has been made in the development of efficient and highly accurate map building techniques. Most of these techniques focus either on capturing the metric layout of an environment with high accuracy (Moravec & Elfes, 1985; Moravec, 1988; Elfes, 1989a; Elfes, 1989b; Borenstien & Koren, 1991a and b; Borenstien & Koren, 1998; Ribo & Pinz, 2001; Ellore, 2002), or on representing the topological structure of an environment (Habib & Yuta, 1988; Kuipers & Byun, 1991; Habib & Yuta, 1993; Kuipers, 2000; Choset & Nagatani, 2001).
To acquire a map and achieve efficient simultaneous localization, robots must possess sensors that enable them to perceive the outside world. There are different types of sensor modalities commonly brought to bear for this task, such as ultrasonic, laser range finders, radar, compasses, vision, infrared, tactile sensors, etc. However, while most robot sensors are subject to strict range limitations, all these sensors are subject to errors, often referred to as measurement noise. Laser scanning systems are active and accurate but slow. Vision systems are passive and of high resolution, but they demand heavy computation. Ultrasonic range finders are common in mobile robot navigation due to their simplicity of operation, high working speed and low cost, but they are usually very crude. These sensors provide relative distances between them and surrounding obstacles/objects located within their radiation cone. However, these devices are prone to
several measuring errors due to various phenomena, such as multiple reflections, a wide radiation cone, and low angular resolution. Robot motion is also subject to errors, and the controls alone are therefore insufficient to determine a robot's pose relative to its environment. Hence, one of the main problems in SLAM comes from the uncertainty in the estimated robot pose. This uncertainty creates correlation between the robot pose and the estimated map. Maintaining such a correlation increases computational complexity. This characteristic of SLAM makes the algorithm hard to apply to estimating very dense maps due to the computational burden. This paper discusses the importance, the complexity and the challenges of mapping a robot's unknown and dynamic environment, besides the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing an autonomous navigation system for mobile robots. The paper introduces an autonomous map building and maintenance method with a focus on an incremental grid-based mapping technique that is suitable for real-time obstacle detection and avoidance. The robot maps its environment incrementally while wandering in it and staying away from all obstacles. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment. This implementation uses the concept of occupancy grids and a modified Histogrammic In-Motion Mapping (HIMM) algorithm to build and maintain a model of the robot's environment by enabling the robot to recognize and track the elements of the occupancy grid in real time. In parallel to this, the incrementally built and maintained map model is integrated directly to support dynamic navigation and obstacle avoidance in real time. Simulation-based experiments have been conducted and are illustrated to show the validity of the developed mapping and obstacle avoidance approach.
2. Mapping Approaches for Autonomous Mobile Robots Navigation
An accurate model of the environment surrounding a robot enables it to complete complex tasks quickly, reliably and successfully. Without such a model, a robot can neither plan a path to a place not currently sensed by its sensors nor effectively search for an object or place.
Map based navigation calls for three processes: map learning, localization, and path planning (Levitt & Lawton, 1990). These three processes may rely on three distinct sources of information available to a robot; the first source enables the robot to sense its own personal configurations and its relationship to the environment; the second is required to enable the robot to survive in the mission's environment by detecting and avoiding obstacles along the desired trajectory toward its target;
and the third is to recognize the associated requirements and features of the targeted mission and assure its successful execution. There are many ways to integrate these sources of information in a representation that is useful for robot navigation. The drawbacks and advantages of these sources of information are complementary. Localization is a critical issue in mobile robotics. If a robot does not know where it is, it cannot effectively plan movements, locate objects, or reach goals. Localization and map learning are interdependent processes, because using a map to localize a robot requires that the map exists, while building a map requires the robot's position to be estimated relative to the partial map learned so far. Map building and learning is a crucial issue in autonomous mobile robot navigation, where robots must acquire appropriate models of their environment from their actual sensory perceptions while interacting with the real world. Generally, there are three main approaches to generating purposeful maps: metric (geometric), topological or hybrid navigation schemes have been proposed and studied. Each one of these methods is optimal concerning some characteristics but can be very disappointing with respect to other requirements. These approaches present complementary strengths and weaknesses.
The mapping problem is generally regarded as one of the most important problems in the pursuit of building truly autonomous mobile robots. Despite significant progress in this area, it still poses great challenges (Thrun, 2002). Robust methods are available for mapping environments that are static, structured, and of limited size, whereas mapping unstructured, dynamic, or large-scale environments remains largely an open research problem. Since the 1990s, the field of robot mapping has been dominated by probabilistic techniques because of their ability to deal successfully with the inherent uncertainties associated with robot perception that would otherwise make map building a very hard process. Some algorithms build the map incrementally to suit real-time needs, whereas others require multiple passes through the data. Some algorithms require exact pose information to build a map, whereas others can do so by using odometry measurements. Dynamic environments with moving objects, together with imperfect sensors and actuators, can lead to serious errors in the resulting map. The problem of learning maps in unknown and dynamic environments has been studied by many researchers. Some of these efforts include tracking dynamic objects while filtering out the measurements reflected by those objects (Hahnel et al. 2002; Wang & Thorpe 2002). The approach developed by Hahnel et al. interleaves mapping and localization with a probabilistic technique to identify spurious measurements. Such approaches have demonstrated some robustness compared to traditional approaches, but they produce maps that describe only the static aspects of the environment. Enhanced sensor models combined with the EM algorithm have
been used to filter out arbitrary dynamic objects to improve the scan registration and lead to more accurate maps (Hahnel et al., 2003). Another approach deals with mapping dynamic environments by explicitly modelling the low-dynamic or quasi-static states; this was achieved by dividing the entire map of the environment into several sub-maps and learning the typical configurations of the corresponding part of the environment by clustering local grid maps for each of the sub-maps (Stachniss & Burgard, 2005). Similar work aims to learn models of non-stationary objects from proximity data by applying a hierarchical EM algorithm based on occupancy grids recorded at different points in time. A robotic mapping method based on locally consistent 3D laser range scans has been developed (Nuchter et al. 2006). In this method, scan matching is combined with a heuristic for closed-loop detection and a global relaxation method, resulting in 6D concurrent localization and mapping on natural surfaces. In recent years, a few researchers have discussed the possibility of decoupling the mapping and localization processes in SLAM in order to gain computational efficiency. Since the observations made by a robot are about the relative locations between the robot and features, a natural way to decouple mapping and localization is to extract information about the relative locations among the features and then construct a relative map using this part of the information. Martinelli et al. (2004) made use of relative maps where the map state only contains distances between features that are invariant under shift and rotation. However, this approach has redundant elements in the state vector of the relative map, and a significant increase in the computational complexity of the SLAM algorithm results from the constraints applied to avoid generating an inconsistent map. Furthermore, the issue of how to extract the information about the relative map from the original observations has not yet been properly addressed. Some of these issues have been tackled by developing a decoupled SLAM algorithm (D-SLAM) based on a new formulation of a 2-D relative map with no redundant elements (Wang et al., 2005).
The Extended Kalman Filter (EKF) has served as the primary approach to SLAM for the last fifteen years. However, EKF-based SLAM algorithms suffer from two well-known shortcomings that complicate their application to large, real-world environments: quadratic complexity and sensitivity to failures in data association. The complexity of the EKF grows because the update step requires computation time proportional to the square of the number of landmarks, which obviously becomes difficult to deal with in large environments. To address these issues, the FastSLAM family of algorithms, which applies particle filters to the SLAM problem, has been developed and provides new insights into the data association problem.
In the original form of FastSLAM, each particle may have a different number of features and maintains these features in its local map. Multiple readings can be incorporated per time step by processing each observation sequentially to increase the accuracy of data association. The weight for each particle is equal to the product of the weights due to each observation considered alone (Montemerlo & Thrun, 2003). Modifications, extensions and analyses have been made to the original FastSLAM approach, and recent analysis shows that it degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and will always produce optimistic estimates of uncertainty in the long term. FastSLAM behaves like a non-optimal local search algorithm; in the short term it may produce consistent uncertainty estimates but, in the long term, it is unable to adequately explore the state space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean. Given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice (Bailey et al., 2006). FastSLAM has several practical advantages, particularly with regard to data association, and may work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM. FastSLAM-type algorithms have enabled robots to acquire maps of unprecedented size and accuracy in a number of robot application domains and have been successfully applied in different dynamic environments, including the solution to the problem of people tracking (Nieto et al, 2003; Montemerlo & Thrun, 2007). Since the complexity of the global map cannot be avoided, some researchers have proposed a hierarchical division of the global map into sub-maps, within which the complexity can be bounded through the use of the generalized Voronoi graph (GVG). The GVG serves as a high-level topological map organizing a collection of feature-based maps at the lower level, and this leads to a hierarchical approach to simultaneous localization and mapping (Lisien et al, 2003).
2.1 Types of map representation
2.1.1 Metric maps
The geometric approach to world representation attempts to capture the geometric properties, and build a detailed metrical description of the robot's environment from sensor data (Preciado et al., 1991; Durrant-Whyte, 1998; Thrun, 2001). This kind of representation has a reasonably well defined relation to the real world, but it is highly vulnerable to metrical inaccuracy in sensory devices and movement actuators.
The map quantitatively reproduces the metric spatial features of the environment in an accurate way. The occupancy grid framework represents a fundamental departure from conventional geometric approaches; it uses a stochastic tessellated representation of spatial information, maintaining
probabilistic estimates of the occupancy state of each cell in a spatial lattice. An early representative of the grid-based metric approach was Elfes and Moravec's important occupancy grid mapping algorithm (Elfes, 1987; Moravec, 1988; Elfes, 1989a; Elfes, 1989b). In this approach, the metric map represents the environment with a grid of fixed resolution. At a low level, a local map, such as a fine array of pixels, can model the environment of the robot. The grids hold information about the observed environment and each cell of the grid represents some amount of space in the real world. Sensors are used to detect obstacles and objects, and the acquired sensory data are used to determine the occupancy value of each grid space so that a map can be generated. Each grid cell estimates the occupancy probability of the corresponding region of the environment. Metric maps are easy to construct because grids reproduce explicitly the metrical structure of the environment. Besides, the geometry of the grid corresponds directly to that of the real environment, so that the robot position and orientation within its model can be determined by its position and orientation in the real world. However, any approach using purely metric maps is vulnerable to inaccuracies in both the map-making and dead-reckoning abilities of the robot. Even when taking into account all relationships between features and the robot itself, in large environments the drift in the odometry causes serious problems for map maintenance and makes the global consistency of the map difficult to maintain. Landmark-based approaches, which rely on the topology of the environment, can better handle this problem, because they only have to maintain topological global consistency, not metric consistency. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments, and they can be very memory intensive as the world map grows. This is because the resolution of a grid must be fine enough to capture every important detail of the world (Kuipers & Byun, 1991; Engelson & McDermott, 1992; Borghi & Brugali, 1995; Thrun & Bucken, 1996). On the downside, the approach is slow, not very expressive, and associated with the local minima problem. The grid-based metric approach has been used in a great number of robotic systems (Borenstien & Koren, 1991a and b; Burgard et al., 1999; Yamaguchi & Langley, 2000). Alternative metric mapping algorithms were proposed by many researchers aiming to describe the free space geometrically using natural landmarks, polyhedral shapes, prime rectangular areas, and cones to formulate the geometrical features of the environments (Habib & Yuta, 1993, Chatila & Laumond, 1985; Habib & Yuta, 1990; Brooks, 1983). While metric maps have the advantage of high resolution and of being well suited to robot navigation tasks, they are typically unstructured and contain no information about the different types of places or objects in the environment. They also suffer from enormous space and time complexity.
2.1.2 Topological maps
Topological and metrical methods for representing spatial knowledge have complementary strengths. Topological maps describe the connectivity of different places and can be automatically extracted while building metric maps. They can also be enriched by path shape information, i.e. the geometrical relations between places, to help planning and navigation (Fabrizi & Saffiotti, 2000; Engelson & McDermott, 1992).
A topological map is a feature-based map that uses a symbolic description. Such maps can be used to reduce the SLAM problem to a graph-matching problem at the topological scale. Hence, in office buildings with corridors and rooms, or on roads, the topology of important locations and their connections can highly support the navigational requirements. The topological approach is a qualitative one, less sensitive to sensor errors, less complex, and permits more efficient planning than metric maps. Topological maps capture the structure of the free space in the environment in terms of the basic topological notions of connectivity and adjacency, as a list of significant places connected via arcs, which is what is needed to plan a navigation strategy. At a high level, a topological map is a set of symbols and the connections between them. The symbols, which are local maxima of the distance to nearby obstacles, are nodes in a graph and correspond to perceptually distinct regions, places, landmarks or situations. The connections between the symbols represent the graph edges/arcs that link the distinct places and indicate the spatial relations between them, i.e. that a direct path exists between them. Arcs are usually annotated with information on how to navigate from one place to another. For an indoor building environment, junctions and termination points of corridors represent symbols while the corridors themselves are the connections (Habib & Yuta, 1988; Habib & Yuta, 1993).
Distances are usually not considered; instead, only relative positions are considered. The graph describing a topological map is easy to store and communicate because it has the advantage of using a small amount of memory and the ability to record only those parts of the environment that are of interest and necessary for navigation. Such models involve more compact representations. However, this approach relies heavily on the existence of recognizable landmarks (Kuipers & Byun, 1991; Engelson & McDermott, 1992; Borghi & Brugali, 1995). Since sensory input depends strongly on the viewpoint of the robot, topological approaches may fail to recognize geometrically nearby places. The resolution of topological maps corresponds directly to the complexity of the environment. The compactness of topological representations gives them three key advantages over grid-based approaches: they permit fast planning; they facilitate interfacing to symbolic planners and problem solvers; and they provide natural interfaces for human instructions. Moreover, it is easier to generate and
maintain global consistency for topological maps than for metric maps. Since topological approaches usually do not require the exact determination of the geometric position of the robot, they often recover better from the drift and slippage that must constantly be monitored and compensated for in grid-based approaches. However, these techniques often ignore valuable metric information and they still lack the ability to describe individual objects in an environment. In addition, coherent topological maps are often difficult to learn and maintain in large-scale environments. A fundamental problem in topological mapping is recognizing an already-visited place, i.e., closing the loop. With sufficiently rich sensory information and feature abstraction, detecting unique sensing signatures that describe such a place will easily solve this problem. Topology-based maps are fairly robust against sensor noise and small environmental changes, and have nice computational properties. Some examples of topological approaches are available in the following references (Habib & Yuta, 1988; Mataric, 1990; Kuipers & Byun, 1991; Engelson & McDermott, 1992; Habib & Yuta, 1993; Choset, 1996; Kuipers et al, 2004).
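As an illustration of the graph structure just described, the following sketch stores distinct places as nodes and arcs as annotated connections; all class names, place names and fields are hypothetical and not taken from any of the cited systems.

```python
# Minimal sketch of a topological map: nodes are perceptually distinct places,
# arcs are traversable connections annotated with navigation hints (no distances).
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str                 # e.g. "corridor_junction_A" (hypothetical)
    signature: tuple = ()     # sensing signature used to re-recognize the place

@dataclass
class TopologicalMap:
    places: dict = field(default_factory=dict)   # name -> Place
    arcs: dict = field(default_factory=dict)     # name -> list of (neighbor, hint)

    def add_place(self, place):
        self.places[place.name] = place
        self.arcs.setdefault(place.name, [])

    def connect(self, a, b, hint=""):
        # Arcs only record adjacency and how to navigate, not metric distances.
        self.arcs[a].append((b, hint))
        self.arcs[b].append((a, hint))

# Usage: corridor junctions and doorways become nodes, corridors become arcs.
tmap = TopologicalMap()
tmap.add_place(Place("junction_A"))
tmap.add_place(Place("room_12_door"))
tmap.connect("junction_A", "room_12_door", hint="follow corridor, door on the right")
```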
2.1.3 Hybrid maps
The hybrid integration of both metric and topological paradigms is gaining popularity among researchers because it maximizes their effectiveness by combining the best characteristics of both universes and can support coverage of large-scale environments. For this, the overall environmental model embodies both a metric and a topological representation (Habib & Yuta, 1993). In a hybrid map, the environment is described by a global topological map, which facilitates planning and replanning within the whole environment to enable the robot to find a suitable route to its currently stated target; in contrast, local metric maps can be used by the robot for safe navigation toward a given target as soon as the robot needs further localization precision. The world can be represented as a graph, like a pure topological map. The nodes in this graph represent topological locations and each node can be a metric map. Traveling between nodes along the edges causes a switch from the topological to the metric paradigm. A global topological map connects the disconnected local metric maps, and allows the representation of a compact environment model, which does not require global metric consistency and permits both precision and robustness. The environment models allow the use of two different navigation methods with complementary characteristics. This gives the map more precise detail where it is needed, while leaving out details that are not needed. This reduces memory size dramatically. Detectable metric features are needed to determine the transition point and to initialize the metric localization. The metric localization permits very precise positioning at the goal point, whereas the topological one guarantees robustness against getting lost due to the multimodal representation
of the robot's location. The effectiveness of such integration for navigation and localization has already been demonstrated in different works (Elfes, 1987; Habib & Yuta, 1993; Tomatis et al., 2001).
In (Habib & Yuta 1988; Habib & Yuta 1993) the author developed and implemented a 4-level hierarchical topological map integrated with local metric maps. The 4-level hierarchical topological map, used to describe a large indoor environment, consists of a network of buildings connected through a common corridor-like passageway. The four levels are:
a) Network of buildings with nodes at the center of each passage way between adjacent buildings,
b) Network of corridors within each building. Each corridor has two nodes, one at each of its terminals.
c) Rooms within each corridor. Each room is represented by a node at each entry point of the associated doors,
d) The free space within a room has been structured into prime rectangular areas (PRAs). The PRAs intersect with each other, and the header of each PRA is treated as a node.
The geometric maps are generated and required for each corridor and each PRA that lies along a path generated by connecting the current robot location to a given target.
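Purely as an illustration of this 4-level nesting (the building, corridor, room and PRA names, coordinates and fields below are invented for the example, not taken from the cited implementation), the hierarchy could be held as:

```python
# Illustrative nesting of the 4-level hierarchical topological map:
# buildings -> corridors -> rooms -> prime rectangular areas (PRAs).
hierarchical_map = {
    "building_A": {                       # level (a): network of buildings
        "corridor_1": {                   # level (b): corridor, node at each terminal
            "room_101": {                 # level (c): room, node at each door entry
                "PRA_1": {"header": (2.0, 3.5), "extent": (4.0, 6.0)},  # level (d)
                "PRA_2": {"header": (5.5, 3.5), "extent": (3.0, 2.0)},
            },
        },
    },
}
```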
2.2 Sensors and problems inherent in map building
Map building potentially provides a good framework for integrating information from many different sensors, of the same type or heterogeneous types, that can perceive the surroundings from many different poses (position and orientation) into a single knowledge source. The use of heterogeneous sensors is essential because it compensates for many weaknesses inherent in the various sensors used in map building. If the information from different sets of sensors is integrated into the same map, then they can be used to either confirm or dispute each other, resulting in a more robust and less error-prone model of the environment. One of the most prominent difficulties in mapping an environment is knowing the exact position and orientation the robot was in when it received the sensor readings. If a wrong position is estimated, then the incorrect part of the map will be updated and this leads to large errors. Information regarding the distance the robot has traveled, and in which direction, is usually calculated by measuring the number of times each wheel has turned, i.e., from odometry. Unfortunately, occurrences of wheel slippage and angular drift can lead to the estimation of a wrong robot position and orientation. It is therefore necessary to integrate other sensors with the robot, or to recognize certain situations with environmental features, to help correct errors in pose estimation. This is called localization. Global localization is necessary when the robot has a previously generated map of its environment. When it is first turned on, it must be able to know where
it is within that map in order to incorporate its new sensor readings into the correct area of that map. Position tracking is used to compensate for wheel slippage and angular drift. When performing map building with mobile robots, a method called Simultaneous Localization and Map-Building (SLAM) is usually performed in parallel with continuous corrected updates of the robot's estimated pose.
The real world is a harsh place: knowledge of the environment is incomplete and inconsistent; it is non-deterministic (interactions in dynamic and open environments can neither be formulated, designed, nor analyzed fully using stationary or static models); it is dynamic (changes happen while decisions are being made, such as the closing/opening of doors and the movement of chairs and people); and it is continuous, not discrete. Accordingly, the robot should not only be capable of controlling its motion in response to sensor inputs, but it should also,
a) React to unexpected events and accordingly change course if necessary by efficiently deciding the next action to be performed,
b) Learn from its experiences in different ways to improve its behavior and performance over time and when facing similar situations, and
c) Deal with multiple conflicting objectives simultaneously, and overcome errors in perceptions and actions.
Primary difficulties arise when building maps of an environment, and such difficulties may include:
a) Sensors are inaccurate, noisy, faulty, and with limited field of view, which means that decisions might be based on wrong information. Limitation is related to range, resolution, occlusion, etc. Accordingly, there is a need to efficiently interpret sensor readings into knowledge about the environment, while compensating for the considerable amount of inaccuracy in sensor readings.
b) The more information is stored about an environment by increasing its dimensionality, the more use can be made of it, but also the more memory and computation time is required.
c) The time available to decide what to do is limited, because the robot needs to respond quickly to environmental demands and has to operate at a pace dictated by its surroundings. Such limits on response may be due to computation time or the control paradigm.
d) A robot cannot assume correct and perfect execution of its actions due to imperfections of actuators and uncertainties in the environment, such as the accumulation of odometry errors over time. Accordingly, there is a need of a reliable approach to estimate the position and orientation of the robot.
e) The dynamic and complex features of robot environments make it principally impossible to maintain exact models and to predict accurately.
f) The problem is how to extract reliable evidence from unreliable sources of information.
3. Representation Requirements and Incremental Grid Map Building
Any representation used for modeling the physical attributes of an environment, and of any entities present in it, must possess certain qualities. It must be powerful enough to express all the entities that need to be represented. At the same time, it must be adaptive with respect to those entities that affect the navigation strategy at any given state of the environment. Also, any representation must be able to quickly update its knowledge about the current state of the environment without heavy computational effort (Singhal, 1997). In order to effectively support the goal of autonomously acquiring a world model, the robot must exhibit several characteristics: it must be able to manage a collection of different sensors; provide mechanisms to manage the uncertainty associated with sensory information; be able to recover from the uncertainty on its position; and be able to provide exploration strategies in order to plan autonomously the sequence of actions that allow it to execute an assigned task.
3.1 Spatial information and occupancy grid
The spatial information is represented by a metric map and is based on occupancy grids. Occupancy grids, also known as evidence grids or certainty grids, were pioneered by Moravec and Elfes (Moravec & Elfes, 1985; Elfes, 1987, Moravec, 1988; Elfes, 1989a; Elfes, 1989b) and formulated at CMU (Martin & Moravec, 1996) as a way to construct an internal representation of static environments by evenly spaced grids based on ultrasonic range measurements. Occupancy grids provide a data structure that allows for the fusion of sensor data, and a representation of the world created from sensor inputs. Apart from being used directly for sensor fusion, there also exist interesting variations of evidence grids, such as place-centric grids (Youngblood, 2000), histogram grids (Koren & Borenstein, 1991) and response grids (Howard & Kitchen, 1996). These variations will not be studied in this research. The occupancy grid is certainly the state-of-the-art method in the field of grid-based mapping. It is the most widely used robot mapping technique due to its simplicity and robustness, and also because it is flexible enough to accommodate many kinds of spatial sensors with different modalities and to combine different sensor scans. It also adapts well to dynamic environments. Occupancy grids have been implemented with laser range finders, stereo vision sensors (Moravec, 1996) and even with a combination of sonar, infrared sensors and sensory data obtained from stereo vision (Lanthier et al., 2004).
In general, the occupancy grid technique divides the environment into two-dimensional discrete grid cells, as shown in Fig. 1.a. The occupancy grid map is considered a discrete-state stochastic process defined over a set of continuous spatial coordinates. Each grid cell is an
element and represents an area of the environment. The state variable associated with any grid cell Cij in the grid map yields the occupancy probability value of the corresponding region. Since the probabilities are identified based on the sensor data, they are purely conditional. Given sensor data, each cell in the occupancy grid can be in one of two states, s(Cij) = Occupied or s(Cij) = Empty, and to each cell a probability P[s(Cij) = Occupied] is attached, which reflects the belief that the cell Cij is occupied by an object. Since
P[s(Cij) = Empty] = 1 - P[s(Cij) = Occupied]                (1)

Sensor readings supply uncertainty regions within which an obstacle is expected to be. The grid locations that fall within these regions of uncertainty have their values increased, while locations in the sensing path between the robot and the obstacle have their probabilities decreased. For the purpose of this work, the histogram grid approach has been selected due to its simplicity, robustness and adaptability to dynamic environments. As in the certainty grid concept, each cell Cij in the histogram grid holds a certainty value (CVij) that represents the confidence of the algorithm in the existence of an obstacle at that location.
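A minimal sketch of such a grid is given below, assuming the 10 m x 10 m environment and 10 cm cells used later in this paper; the variable and function names are illustrative only, and Eq. (1) appears as the helper p_empty.

```python
import numpy as np

# Sketch of an occupancy grid: one scalar occupancy belief per cell.
CELL_SIZE = 0.10            # 10 cm x 10 cm cells (assumed, as in Section 3.3)
GRID_DIM = 100              # 10 m x 10 m environment -> 100 x 100 cells

# P[s(C_ij) = Occupied] for every cell; 0.5 marks an unknown cell.
p_occ = np.full((GRID_DIM, GRID_DIM), 0.5)

def p_empty(i, j):
    # Eq. (1): P[s(C_ij) = Empty] = 1 - P[s(C_ij) = Occupied]
    return 1.0 - p_occ[i, j]

def world_to_cell(x, y):
    # Map world coordinates (metres) to grid indices.
    return int(x / CELL_SIZE), int(y / CELL_SIZE)
```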
3.2 Ultrasonic sensor and map building
Map building methods depend strongly on the characteristics of the sensors that provide the raw data. In order to create a map using sensors such as ultrasonic range measurements, it is necessary to consider the following important steps (Thrun, 1998): sensor interpretation, integration over time, pose estimation, global grid building, and exploration. The probabilistic representation approach yields a world model, called a certainty grid, that is especially suited to the unified representation of data from different sensors such as ultrasonic, vision, and proximity sensors, as well as the accommodation of inaccurate sensor data (Moravec, 1988). Ultrasonic sensors provide good range data but offer only poor directionality and are associated with inaccuracy. A typical ultrasonic sensor returns a radial measure of the distance to the nearest object within its conical field of view (Polaroid, 1989), yet it does not specify the angular location of the object. The angle of the cone depends on the noise absorption of the material which causes the reflection. Typically, it lies between 10° and 30°. The robot used in this research is cylindrical in shape (the robot is 20 cm in radius), equipped with multiple simulated ultrasonic sensors (24 Polaroid ultrasonic ranging modules that are equally spaced (15° apart), arranged and mounted on a horizontal ring around the robot's circumference at a height of 30 cm above the ground), and has been designed, modeled and simulated by the author as a test bed for this work (see Fig. 1.b). Scanning with a horizontal ultrasonic ring does not require rotating parts and motors, and 360° coverage can be acquired rapidly. However, due to the possibility of significant crosstalk, all sensors cannot be
fired at the same time. Hence, there is a need to design a firing sequence that can reduce crosstalk (6 groups, each group with 4 sensors, where the sensors of each group are triggered simultaneously), but this inevitably increases the overall time needed to obtain all 24 sensor readings for full 360° coverage. Typical scan times range from 60 to 500 ms for a full scan cycle (Crowley, 1989), and it is important to keep this as small as possible. Because of the wide beam angle, any individual range reading provides only indirect information about the location of the detected objects. Hence, the constraints from individual readings are combined to reduce the uncertainty. The ultrasonic sensors were set up to detect an object within a range of between 15 cm and 180 cm with an accuracy of ±1%, and the beam of acoustic energy spreads in a cone of approximately 30°. Therefore, the longest time for each sensor, i.e. each sensor group, to wait for its echo is bounded by the maximum measurable distance, i.e., R = 180 cm, and this time equals t = 2*1.8/340 = 10.58 ms. A complete 360° scan should therefore take 6*10.58 = 63.48 ms. Accordingly, if the sensor returns a distance value between the specified minimum and maximum range (12 < r < 180), then the returned distance measurement is proportional to the distance of the nearest object located within the field of view of the sensor, with a small error along the arc perpendicular to the sensor's acoustic axis. In this case, we have the range to the nearest point on the object, but the sensor cannot determine the angle between the sensor axis and the nearest point on the object. An object may be detected at a certain instant by one or more sensors simultaneously. An object located near the acoustic axis is more likely to produce an echo than an object further away from the acoustic axis. Whether or not an echo is received as an indication of the presence of an object depends on the relative angles, the surface structure, object reflectivity, and the distance to the object.
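The scan-cycle arithmetic above can be reproduced with a few lines; the speed of sound (taken here as 340 m/s), the maximum range and the grouping follow the values stated in the text, while the variable names are of course invented.

```python
# Reproduces the echo-wait and full-scan-cycle timing quoted above.
SPEED_OF_SOUND = 340.0      # m/s, value assumed in the text
R_MAX = 1.80                # maximum measurable range in metres
N_SENSORS = 24              # ultrasonic ring, sensors 15 degrees apart
GROUP_SIZE = 4              # sensors fired simultaneously per group
n_groups = N_SENSORS // GROUP_SIZE            # 6 firing groups to limit crosstalk

t_echo = 2.0 * R_MAX / SPEED_OF_SOUND         # ~10.6 ms per group (quoted as 10.58 ms)
t_scan = n_groups * t_echo                    # ~63.5 ms per 360° scan (quoted as 63.48 ms)
print(f"echo wait: {t_echo*1000:.2f} ms, full scan: {t_scan*1000:.2f} ms")
```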
Let us now consider the regions in the general sonar model. We can recognize four zones within the operational view in front of the sensor. These zones are named Z1, Z2, Z3 and Z4, and they are superimposed over a grid of cells as shown in Fig. 1.b. In zone Z1, no reading is returned from anything in this region, which indicates that cells within this zone must be free from objects. Zone Z2 (the area occupied by the arc at the radial distance equal to the measured distance (r)) indicates, for a given sensor range reading, the cells in which the nearest point on the nearest object that caused the measured distance (r) is likely to be found, i.e., a reflection from an object lying in this region. Zones Z3 and Z4 are undetected by the current sonar reading. Our interest in this work focuses on the area of Z1 and mainly the cells along the acoustic axis of each ultrasonic sensor. Given that the beam is superimposed on an occupancy grid, the sonar model looks similar to Fig. 1.d. The elevation indicates a high probability of the grid element being occupied and the depression signifies a high probability of being empty.
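As a hedged sketch of how a single range reading r partitions the cells along the sensor's acoustic axis into the zones named above, the tolerance band used to delimit Z2 in the following function is an assumption made for illustration.

```python
# Classify a cell at distance d (metres) along the acoustic axis, given a reading r.
# Z1: between the sensor and the echo -> probably empty.
# Z2: a narrow band around the measured range r -> probably holds the reflecting surface.
# Beyond r (Z3/Z4): shadowed by the echo -> this reading carries no information.
def classify_along_axis(d, r, band=0.05):
    if d < r - band:
        return "Z1_probably_empty"
    if abs(d - r) <= band:
        return "Z2_probably_occupied"
    return "Z3_Z4_unobserved"
```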
Fig. 1. The model of the designed mobile robot and the regions in the sonar model: (a) the designed mobile robot used as a testbed; (b) an example of the occupancy grid structure; (c) the classification of regions (Z1-Z4) in the sonar model, with the distances R and r marked along the acoustic axis of the ultrasonic sensor; (d) visualization of the probability distribution.
Sensor interpretation is the first phase, in which sonar data are interpreted in order to perform the higher-level mapping and navigation functions that support occupancy grid based navigation. The grids allow the efficient accumulation of small amounts of information from individual sensor readings into increasingly accurate maps of the robot's surroundings. During this process, the incoming 24 sonar range scalar values are interpreted and converted to local occupancy values around the robot. It should be noted that the range readings of each sensor group are communicated asynchronously to the map building algorithm. Since different sonar measurements give different values for a grid cell because of noise/error and the change in the sensor's viewpoint, it is important to integrate the conditional probabilities of distinct
moments. After the local grid is created around the robot's current location, its values have to be merged into the global grid. Beyond the coordinate transformation between the grids, a global position is needed at which the local grid can be integrated. The robot position can be estimated using a position estimation method such as odometry, a continuous calculation of changes in the robot pose combined with correction of the errors accumulated from sensor and motor noise (Moravec, 1988). The occupancy grid is basically a data structure to store data acquired from different sources, and it takes into account the uncertainty of sensory data by working with probabilities or certainty values. The resulting probabilistic spatial models obtained from these sensor models serve as maps of the robot's environment, and can be used directly for robot navigation tasks such as path planning and obstacle avoidance and position estimation, and also to incorporate the motion uncertainty into the mapping process. It provides a way of representing data to be later extracted as useful and informative maps that can be used directly in robotic planning and navigation (Moravec, 1988). The data structure for each point in the environment represents the probability that some object occupies the point. In principle, uncertainty grids store qualitative information about which areas of the robot's surroundings are empty, and which areas are occupied by obstacles. Besides this, no other characterization of the environment is of interest.
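As an illustrative sketch of the coordinate transformation mentioned above (the pose representation, cell size and helper name are assumptions, not the paper's implementation), a point sensed in the robot frame is rotated and translated by the odometry-estimated pose before being merged into the global grid.

```python
import math

CELL_SIZE = 0.10   # metres per cell, as assumed for the global grid

def local_to_global_cell(x_local, y_local, pose):
    """Transform a point sensed in the robot frame into global grid indices.

    pose = (x_r, y_r, theta_r): odometry-estimated robot position and heading.
    """
    x_r, y_r, theta = pose
    xg = x_r + x_local * math.cos(theta) - y_local * math.sin(theta)
    yg = y_r + x_local * math.sin(theta) + y_local * math.cos(theta)
    return int(xg / CELL_SIZE), int(yg / CELL_SIZE)

# Usage: merge a locally detected obstacle cell into the global histogram grid.
# i, j = local_to_global_cell(1.2, 0.0, pose=(4.0, 2.5, math.pi / 2))
# grid[i][j] = min(grid[i][j] + 4, 16)
```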
3.3 Building the grid map incrementally with robot in motion
There are different uncertainty calculi techniques for building occupancy grids of an unknown environment using sensory information provided by a ring of ultrasonic range finders. These techniques are based on Bayesian theory, Dempster-Shafer theory of evidence, fuzzy set theory, and the histogrammic in-motion mapping (HIMM) method (Murphy, 2000; Ribo & Pinz, 2001; Borenstein & Koren, 1991). Each of these methods has its own approach to transforming sensor readings into a representation of uncertainty and its own update rule for combining the uncertainty of a pair of observations. It has been recognized that the Dempster-Shafer method provides a more accurate map than HIMM. However, the computational time required by the HIMM method is significantly less than that of both the Bayesian and Dempster-Shafer methods. The method introduced by (Moravec & Elfes, 1985; Elfes, 1989a; Elfes, 1989b) projects a heuristic probability function that assigns CVs to all cells in zone 1 (see Fig. 1.c); this makes it computationally intensive and would impose a heavy time penalty on real-time execution, and because of this the mobile robot should remain stationary while taking a full scan with its 24 ultrasonic sensors. The histogram grid differs from the certainty grid (Moravec & Elfes, 1985; Elfes, 1989a; Elfes, 1989b) in the way it is built and updated. Let us focus on the incremental grid based map building approach using ultrasonic range measurements. The map
of the robot's environment has been constructed incrementally based on the histogrammic in-motion mapping approach, as it is a good method for real-time map building with a mobile robot in motion, is well suited to modeling inaccurate and noisy range sensor data, and requires minimal computational time. What makes it attractive is its capability to provide instantaneous environmental information that is ready to be used with integrated real-time obstacle avoidance. Each cell in the histogram grid holds a certainty value (CV) that represents the confidence in the existence of an obstacle at that location. Upon receiving a sonar range reading from each of the available six sensor groups, the algorithm updates only the CVs of the cells that lie directly along the acoustic axis and within the sensor range reading. This approach results in a histogram probability distribution obtained by continuously and rapidly sampling each sensor while the mobile robot is in motion. High certainty values are obtained in cells close to the actual location of the obstacle. For illustration, the size of the environment has been taken as 10 m x 10 m, and the size of each cell as 10 x 10 cm. Each cell has its own coordinates and contains a CV. The CVs are updated by a heuristic probabilistic function that is applied to each one of the 24 range readings and takes into account the characteristics of a given sensor. The range of a CV has been chosen arbitrarily from 0 (the minimum CV of a cell) to 16 (the maximum CV of a cell) to represent the confidence in the existence of an obstacle. A cell with CV = 16 has a high probability of being occupied by an obstacle or object at that instant, while a cell with CV = 0 has a high probability of being empty. The sensor position is determined relative to the robot reference frame. Sensor readings are mapped into CVs according to specific rules that produce high CVs for cells that correspond to obstacles, while keeping low CVs for empty cells or for cells that were incremented due to misreadings or moving objects. The update rule for the CV of any cell consists of the following steps (a short code sketch of this rule is given after the list):
1. At the beginning, the certainty values of all cells representing the environment are set to zero, i.e., all cells are assumed to be free from obstacles.
2. The robot starts at an unknown position and obtains measurements of the environment relative to its momentary position.
3. For each sensor group, sensor readings are communicated to the map building module. This information is used to build and maintain incrementally a map that can support navigation and localization.
4. For each sensor range reading, the algorithm updates the CVs of the cells that are located only along the acoustic axis of the sensor and at a radial distance within the view of the sensor. The update rules are as follows:
a) The certainty value (CV) of the cell on the acoustic axis that corresponds to the measured distance (r) is updated by adding 4 to it, with the CV bounded by its maximum value, i.e., only one cell is incremented for each range reading. A larger increment would make the robot react to single and possibly false readings, while a smaller increment would not build up CVs quickly enough to support real-time avoidance maneuvers.
b) The CVs of other cells that lie along the acoustic axis and at a range less than the sensor range reading are decremented. The CVs of the cells are decremented according to the following:
I. If the range reading is r < 60, then decrement the CVs of the cells that are at a distance less than the range reading by 1, i.e., CV = CV - 1.
II. If the range reading is 60 < r < 120, then decrement the CVs of the cells that are within 60 cm distance along the acoustic axis by 2, and decrement the other cells that are at a distance between 60 cm and less than 120 cm by 1.
III. If the range reading is 120 < r < 180, then decrement the CVs of the cells that are within 60 cm distance along the acoustic axis by 3, decrement the cells within the distance between 60 and 120 cm by 2, and decrement the remaining cells within the distance between 120 and 180 cm by 1.
c) When there is no echo reflected back, i.e., no obstacle within the view of the sensor and the sensor reading is r = R, there is no need to increment the CV of the cell corresponding to the sensor range reading, and the decrement follows the rule given in 4.b.III.
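The increment/decrement rule of steps 4.a and 4.b can be summarized in the following sketch. The grid layout, the cells_along_axis helper and the half-cell tolerance used to pick the cell at range r are assumptions made for illustration, while the increment of 4, the bound of 16 and the distance-banded decrements follow the rules above.

```python
CV_MIN, CV_MAX = 0, 16
R_MAX = 180            # maximum range in cm; a no-echo reading returns r = R_MAX

def decrement_amount(r, d):
    """Decrement for a cell at distance d (cm) along the acoustic axis,
    given a range reading r (cm), following rules 4.b.I-III."""
    if r < 60:
        return 1
    if r < 120:
        return 2 if d < 60 else 1
    return 3 if d < 60 else (2 if d < 120 else 1)

def update_along_axis(grid, cells_along_axis, r):
    """cells_along_axis: list of (i, j, d) grid indices with their distance d (cm)
    from the sensor along the acoustic axis (a hypothetical helper)."""
    for (i, j, d) in cells_along_axis:
        if d < r:
            # Rule 4.b: cells closer than the reading are decremented.
            grid[i][j] = max(CV_MIN, grid[i][j] - decrement_amount(r, d))
        elif abs(d - r) < 5 and r < R_MAX:
            # Rule 4.a: only the single cell at the measured range is incremented (+4);
            # the 5 cm tolerance (half a cell) is an assumption for the sketch.
            grid[i][j] = min(CV_MAX, grid[i][j] + 4)
```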
A histogram-type probability distribution is obtained by continuous and rapid sampling of the sensors while the robot is in motion (see Fig. 2.a and b for illustration). Global path-planning and obstacle avoidance strategies can be employed iteratively as the map is incrementally updated.
Fig. 2. Example of a robot environment and histogram grid-based map building: (a) a robot environment with a robot inside it; (b) the incrementally constructed histogram grid-based map
4. Dynamic Navigation Supporting Obstacle Avoidance
Different obstacle avoidance techniques have been developed and are available in the literature. Some of the relevant obstacle avoidance techniques are based on edge detection, potential fields and certainty grids. In edge detection, the algorithm tries to determine the position of the vertical edges of a detected obstacle using ultrasonic sensors, in order to enable the robot to steer around the obstacle (Kuc & Barshan, 1989). A disadvantage of this method is the need for the robot to remain stationary while gathering sensor information about the obstacle. In addition, this method is very sensitive to sensor accuracy. As for the certainty grid for obstacle detection and avoidance, a method for the probabilistic representation of obstacles in a certainty grid-type world model has been developed (Moravec & Elfes, 1985, Elfes, 1987; Moravec, 1988). This approach is especially suited to accommodating inaccurate sensor data such as range measurements from ultrasonic sensors. With this approach too, the mobile robot has to remain stationary while it takes a panoramic scan with its 24 ultrasonic sensors. After updating the certainty values of the relevant grid cells, the robot moves to a new location, stops, and repeats the procedure. After the robot has traversed its environment and mapped it, a global path-planning method is then employed for offline calculation of subsequent robot paths. The critical issue with this method is computation cost. In addition, there are widely used methods for obstacle avoidance that are based on the concept of the potential field. The idea of imaginary forces acting on a robot was suggested by (Khatib, 1983), in which obstacles exert repulsive forces while the target applies an attractive force to the robot; accordingly, a resultant force vector, comprising the sum of a target-directed attractive force and the repulsive forces from obstacles, is calculated for a given robot position. With the resultant force vector acting as an accelerating force on the robot, the robot's new position for a given time interval is calculated, and the algorithm is repeated. Krogh and Thorpe (Krogh & Thorpe, 1986) suggest a combined method for global and local path planning, which uses a generalized potential field approach. This method assumes a known and prescribed world model, in which simple, predefined geometric shapes represent obstacles and the robot's path is generated off-line.
What is needed, however, is a method that deals with real-time sensory information and allows fast, continuous, and smooth motion of the controlled vehicle among unexpected obstacles, without requiring the vehicle to stop in front of obstacles. Obstacle avoidance methods must also account for the sensors' shortcomings, such as (in the case of ultrasonic sensors) inaccuracies, crosstalk, and spurious readings. The approach adopted in this work uses the extended vector field histogram obstacle avoidance technique (Borenstein & Ulrich, 1998), which takes as real-time input an intermediate data structure, a 2D histogram grid representing the robot's local environment, based on the occupancy grid concept. This approach was further developed for real-time mobile robot obstacle avoidance to offer smoother robot trajectories and better reliability.
This approach employs the following steps:
1. The two-dimensional map grid is reduced to a one-dimensional polar histogram that is constructed around the robot's momentary location. This includes:
a) Select a circular masking window of a certain diameter and attach it to the robot reference frame at the robot centre point. This window moves together with the robot. The area covered by this window at any time is called the active region.
b) Based on the current map information, the content of every cell covered by the circular masking window is represented by an obstacle vector whose direction is defined by the robot's current position and the position of the relevant cell. The magnitude is proportional to the squared CV of that cell and decreases with the squared distance between that cell and the current position of the robot, so occupied cells produce a larger vector magnitude when they are close to the robot. According to the adopted axes and angle convention (Fig. 3.b (upper)), the direction of the obstacle vector is tan⁻¹[(x_j − x_0)/(y_j − y_0)], where the signs of both arguments are used to determine the quadrant of the returned value. Here x_0, y_0 are the coordinates of the robot's current position at its center point, and x_j, y_j are the coordinates of a cell C within the circular mask window. Global coordinates are used to describe the robot's position.
c) The circumference of the circular mask window is divided into m angular sectors. The resolution of each sector has been selected to be 5°, i.e., the total number of sectors is m = 360°/5° = 72, with sectors S_n, n = 1, 2, ..., 72 (see Fig. 3.c). It is possible to use a finer sector resolution, but this results in extra computational cost. Each sector is represented by a value, the obstacle density, that reflects the confidence in the existence of an object/obstacle within it. This value is calculated by summing up the obstacle vector magnitudes of all cells within the sector to yield the confidence value of the corresponding sector.
d) Map the confidence value of each sector onto the primary polar histogram using its angular position (5° × n).
e) Build the polar histogram around the current position of the robot using the adopted axes and angle convention; it is independent of the robot's orientation. The output of this step is a polar histogram that takes the robot's width into consideration.
2. The results of the previous step are used to generate a binary polar histogram. This is performed with a hysteresis technique based on two threshold values, high and low (these values are decided experimentally). The two thresholds help to build the binary polar histogram and to identify sectors within it that are free from objects or blocked and, if a sector is free, whether it offers a wide or a narrow opening through which the robot can pass. In addition, the two thresholds help to smooth the desired trajectory and avoid oscillations in the steering command. The binary polar histogram is updated in real time.
3. Select the most appropriate candidate as the new direction for the robot based on the generated binary polar histogram, the given target, and a cost function. This selection aims to enable the robot to reach the desired target through simultaneous translation and rotation (a sketch of steps 1-3 is given below).
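The following is a minimal sketch of steps 1-3, assuming the grid is stored as a dictionary of cell certainty values. The magnitude law (squared CV weighted by a − b·d², with b chosen so the magnitude tapers to zero at the window edge), the threshold values, and the simple angular cost function are assumptions of the sketch; the text states only that the magnitude grows with the squared CV, falls with the squared distance, and that the thresholds are chosen experimentally.

```python
import math

N_SECTORS = 72                      # 360° / 5°, as in step 1.c
SECTOR_DEG = 360.0 / N_SECTORS


def polar_histogram(grid, robot_xy, window_radius, a=1.0):
    """Step 1: sum obstacle-vector magnitudes of the active-region cells into 5° sectors."""
    x0, y0 = robot_xy
    b = a / (window_radius ** 2)                        # illustrative: magnitude reaches zero at the window edge
    hist = [0.0] * N_SECTORS
    for (xj, yj), cv in grid.items():
        dx, dy = xj - x0, yj - y0
        d2 = dx * dx + dy * dy
        if d2 > window_radius ** 2:                     # outside the active region
            continue
        beta = math.degrees(math.atan2(dx, dy)) % 360.0  # angle convention of the text: tan^-1[(xj-x0)/(yj-y0)]
        hist[int(beta // SECTOR_DEG) % N_SECTORS] += (cv ** 2) * max(a - b * d2, 0.0)
    return hist


def binary_histogram(hist, prev_binary, thresh_high, thresh_low):
    """Step 2: hysteresis; blocked above the high threshold, free below the low one,
    otherwise the sector keeps its previous state (this damps steering oscillation)."""
    return [1 if h > thresh_high else 0 if h < thresh_low else p
            for h, p in zip(hist, prev_binary)]


def select_direction(binary, target_sector):
    """Step 3: among free sectors, pick the one with the smallest angular cost to the target."""
    def cost(s):
        diff = abs(s - target_sector)
        return min(diff, N_SECTORS - diff)
    free = [s for s, blocked in enumerate(binary) if not blocked]
    if not free:
        return None                                     # everything blocked: caller must stop or replan
    return min(free, key=cost) * SECTOR_DEG             # steering direction in degrees
```

In this sketch, the binary histogram from the previous cycle is fed back in as prev_binary, and the returned angle is handed to the steering controller together with the translational speed command.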
The navigation system continuously reads sensory input and writes motor commands at regular intervals. The sensory input comprises the readings of the ultrasonic and odometry sensors, while the motor commands are governed by the information about the environment that is stored in the internal map. To navigate reliably, a mobile robot must know its pose, and this is achieved by estimating the robot's pose relative to its environment. A kinematic model of the designed differential-drive robot is used in the simulation-based experiments to track the robot's position and orientation; other parameters, such as wheel radius and slip angle, can be considered to increase accuracy. This approach has been combined with the mapping algorithm discussed in Section 3, implemented, and demonstrated.
Figure 3.a shows an example of a selected robot environment with a target location framed by dark lines and the robot shown as a dark spot, and Figure 3.b (bottom) shows a momentary situation faced by the robot within the selected environment. The primary polar histogram for this situation is shown in Figure 3.d. For the sectors represented by 'A' and 'E', the obstacle density is very small. For the sectors represented by 'B' and 'F', the obstacle density is high. The sectors covered by 'D' possess a medium obstacle density value, as the obstacle at 'D' has only just been detected and hence its CV is still small. Although the sectors represented by 'C' are free of obstacles, they do not have a small obstacle density value; this is due to the poor angular resolution of the sonar, which restricts the detection of small openings. Figure 3.e shows the binary polar histogram for the considered momentary situation.
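For completeness, the pose-tracking step mentioned above can be sketched with the standard differential-drive dead-reckoning update below; the track-width parameter and the midpoint integration are textbook assumptions rather than details taken from the designed robot.

```python
import math


def update_pose(x, y, theta, d_left, d_right, track_width):
    """Dead-reckon the new pose from incremental left/right wheel travel (same length units).

    The heading is advanced by the differential wheel travel divided by the track width,
    and the position is advanced along the mid-arc heading (a common midpoint approximation).
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / track_width
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
```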
Fig. 3. Obstacle avoidance example with highlight for a robot's momentary location: (a) a selected robot environment populated with objects; (b) a robot's momentary situation; (c) the circular mask window and the divided sectors S1 to S72; (d) the primary polar histogram and the two threshold parameters (high and low); (e) the binary polar histogram.
5. Conclusion and Future Work
This paper presented the state of the art, the complexity, and the challenges facing simultaneous and real-time mapping and localization for mobile robots. The development of an autonomous map building and maintenance method, with a focus on an incremental approach suitable for real-time obstacle detection and avoidance, has been presented. The paper has introduced a modified histogram grid-based navigation technique that is integrated with the concept of the vector field histogram to support real-time obstacle avoidance in parallel with incremental map building. Through the integrated implementation, the mobile robot was able to map the environment, avoid obstacles, and move towards its target successfully. The results are important for simultaneous localization and map building applications, and can facilitate the use of additional natural landmarks to improve the accuracy of the localization algorithm.
5.1 Future work
Future expansion can include the following:
1. As grid-based mapping in general, and histogram grids in particular, require a fixed-size environment, it is important to consider a technique that can handle a dynamically sized map of the robot's environment, i.e., variable-sized maps without imposing size or shape constraints on the grid. Dynamically expanding occupancy grids seek to remove the dependency on a fixed array for representing a histogram or occupancy grid, and they tackle the issues of efficient use of available memory and the requirement for real-time searching and action. Such techniques help to increase the size of the map dynamically when a new area is detected (a minimal sketch is given after this list).
2. Integrating heterogeneous sensor modules, such as ultrasonic range data with an active stereo-vision system or other types of sensors. Owing to their complementary error characteristics with respect to range and angular resolution, fusing stereo-vision data with ultrasonic range information can improve mapping precision significantly. In addition, vision can support the classification of dynamic obstacles and the modeling of obstacles in 3D.
3. The consideration of building the grid map in 3D.
4. It is essential to develop a powerful incremental integration between the geometric and topological mapping approaches, supported by belief values. This integration should simultaneously support localization, path planning, and navigation.
5. Multi-robot shared map building: accurately merging topological maps, metric maps, or hybrid maps created by different mobile robots. In addition, a key challenge is how the representation (i.e., any form of world model and mapping) can be effectively distributed over the behavior structure.
6. Due to the limitations associated with grid-based and topological mapping, it is necessary to find a way to efficiently integrate both paradigms.
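As a minimal sketch of item 1 above, a dynamically expanding grid can be realized by storing cells in a dictionary keyed by integer cell indices, so that memory is allocated only where the robot has actually sensed something. The cell size and CV clamp below are illustrative choices, not values from this work.

```python
class ExpandingGrid:
    """Sparse occupancy/histogram grid that grows on demand instead of using a fixed array."""

    def __init__(self, cell_size_cm=10.0, cv_max=15):
        self.cell_size = cell_size_cm
        self.cv_max = cv_max
        self.cells = {}                       # (ix, iy) -> certainty value

    def _index(self, x_cm, y_cm):
        return (int(x_cm // self.cell_size), int(y_cm // self.cell_size))

    def add(self, x_cm, y_cm, delta):
        """Increment or decrement the CV at a world position; new cells appear on demand."""
        key = self._index(x_cm, y_cm)
        cv = self.cells.get(key, 0) + delta
        self.cells[key] = min(max(cv, 0), self.cv_max)

    def bounds(self):
        """Current extent of the mapped area in cell indices (grows as the robot explores)."""
        if not self.cells:
            return None
        xs = [k[0] for k in self.cells]
        ys = [k[1] for k in self.cells]
        return (min(xs), min(ys), max(xs), max(ys))
```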
6. References
Bailey, T.; Nieto, J. & Nebot, E. (2006). Consistency of the FastSLAM algorithm. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'06), May 2006, pp. 424-429.
Borenstein, J. & Koren, Y. (1991a). Histogrammic In-Motion Mapping for Mobile Robot Obstacle Avoidance, IEEE Transactions on Robotics and Automation, Vol. 7, No. 4, pp. 535-539, 1991.
Borenstein, J. & Koren, Y. (1991b). The Vector Field Histogram: Fast Obstacle Avoidance for Mobile Robots, IEEE Transactions on Robotics and Automation, Vol. 7, No. 3, June 1991, pp. 278-288.
Borenstein, J. & Ulrich, I. (1998). VFH+: Reliable Obstacle Avoidance for Fast Mobile Robots, IEEE International Conference on Robotics and Automation (ICRA'98), Leuven, Belgium, pp. 1572-1577, 1998.
Borghi, G. & Brugali, D. (1995). Autonomous Map Learning For A Multisensor Mobile Robot Using Diktiometric Representation And Negotiation Mechanism, Proceedings International Conference on Advanced Robotics, Spain, September 1995, pp. 1-8.
Brooks, R.A. (1983). Solving the Find-Path Problem by Good Representation of Free Space, IEEE Transactions on Systems, Man and Cybernetics, Vol. 13, No. 2, March 1983, pp. 190-197.
Burgard, W.; Cremers, A.B.; Fox, D.; Hahnel, D.H.; Lakemeyer, G.; Schulz, D.; Steiner, W. & Thrun, S. (1999). Experiences with an Interactive Museum Tour-Guide Robot. Artificial Intelligence, Vol. 114, No. 1-2, pp. 3-55, 1999.
Castellanos, A. & Tardos, J.D. (1999). Mobile Robot Localization and Map Building: A Multisensor Fusion Approach, Kluwer Academic Publishers, Boston, MA, 1999.
Chatila, R. & Laumond, J.-P. (1985). Position Referencing and Consistent World Modeling for Mobile Robots, Proceedings of the 1985 IEEE International Conference on Robotics and Automation (ICRA'85), March 1985, pp. 138-145 vol. 2.
Choset, H. (1996). Sensor Based Motion Planning: The Hierarchical Generalized Voronoi Graph. PhD thesis, California Institute of Technology, 1996.
Choset, H. & Nagatani, K. (2001). Topological Simultaneous Localization and Mapping (SLAM): Toward Exact Localization Without Explicit Localization, IEEE Transactions on Robotics and Automation, Vol. 17, No. 2, April 2001, pp. 125-137.
Crowley, J.L. (1989). World Modeling and Position Estimation for a Mobile Robot Using Ultrasonic Ranging, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'89), Scottsdale, Arizona, May 1989, pp. 674-680.
Durrant-Whyte H.F. (1998). Integration, Coordination and Control of Multi-Sensor Robot Systems, Kluwer Academic Publishers, 1988.
Elfes, A. E. (1987). Sonar-based Real-World Mapping and Navigation, IEEE Journal of Robotics and Automation, Vol. RA-3, No 3, June 1987, pp. 249-265.
Elfes, A. E. (1989a). Occupancy Grids: A Probabilistic Framework for Robot Perception and Navigation. PhD thesis, Department of Electrical and Computer Engineering, Carnegie Mellon University, 1989.
Elfes, A. E. (1989b). Using Occupancy Grids for Mobile Robot Perception and Navigation, Computer Magazine, , Vol. 22, No. 6, June 1989, pp. 46-57.
Ellore, B.K. (2002). Dynamically Expanding Occupancy Grids, M. Sc. Thesis, Computer Science, Texas Technical University, 2002.
Engelson, S.P. & McDermott, D.V. (1992). Error Correction in Mobile Robot Map Learning, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'92), Nice-France,
1992, pp. 2555-2560.
Fabrizi, E. & Saffiotti, A. (2000). Extracting Topology-Based Maps from Grid maps, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'2000), pp. 2972-2978, 2000.
Habib, M.K. & Yuta, S. (1988). Map Representation and Path Planning for Mobile Robots in a Building Environment, IEEE International Conference on Intelligent Robots and Systems (IROS'88), Tokyo, Oct. 1988, pp.173-178.
Habib, M.K. & Yuta, S. (1990). Efficient On-line Path Planning Algorithm and Navigation for a mobile robot, International Journal of Electronics, Vol. 69, No. 2, 1990, pp. 187-210.
Habib, M.K. & Yuta, S. (1993). Map Representation for Large In-door Environment, Path Planning and Navigation Techniques for an Autonomous mobile robot with its Implementation, International Journal of Automation in Construction, Vol.1, No.2,
1993, pp.155-179.
Hahnel, D.; Schulz, D. & Burgard, W. (2002). Map Building with Mobile Robots in Populated Environments, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'2002), Lausanne, October 2002, pp. 496-501 vol. 1.
Hahnel, D.; Triebel, R.; Burgard, W. & Thrun, S. (2003). Map Building with Mobile Robots in Dynamic Environments, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'03), September 2003, pp. 1557-1563 vol. 2.
Howard, A. & Kitchen, L. (1997). Sonar Mapping for Mobile Robots, Technical Report 96/34, Department of Computer Science, University of Melbourne, March 1997.
Khatib, O. (1985). Real-Time Obstacle Avoidance for Manipulators and Mobile Robots, Proceedings of the IEEE International Conference on Robotics and Automation(ICRA'85), St. Louis, Missouri, March 1985, pp. 500-505.
Koren, Y. & Borenstein, J. (1991). Potential Field Methods and Their Inherent Limitations for Mobile Robot Navigation, IEEE International Conference on Robotics and Automation (ICRA'91), pp. 1398-1404.
Krogh, B. H. & Thorpe, C. E. (1986). Integrated Path Planning and Dynamic Steering Control for Autonomous Vehicles, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'86), San Francisco, California, April, 1986, pp. 1664-1669.
Kuc, R. & Barshan, B. (1989). Navigating Vehicles Through an Unstructured Environment With Sonar, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'89), Scottsdale, Arizona, May 1989, pp. 1422-1426.
Kuipers, B. & Byun, Y.T. (1991). A Robot Exploration and Mapping Strategy Based on a Semantic Hierarchy of Spatial Representations, Robotics and Autonomous Systems, 8, pp. 47-63, 1991.
Kuipers, B. (2000). The Spatial Semantic Hierarchy, Artificial Intelligence, 119, pp. 191-233, 2000.
Kuipers, B.; Modayil, J.; Beeson, P.; MacMahon, M. & Savelli F. (2004). Local metrical and global topological maps in the Hybrid Spatial Semantic Hierarchy. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'04), April-May 2004, pp. 4845-4851 Vol.5.
Lanthier, M.; Nussbaum, D. & Sheng, A. (2004). Improving Vision-Based Maps by Using Sonar and Infrared Data, submitted to Robotics and Applications (IASTED 2004), August 2004.
Leonard, J.J. & Durrant-Whyte, H. F. (1991). Simultaneous Map Building and Localization for an Autonomous Mobile Robot, Proceedings of IEEE/RSJ Int. Conference on Intelligent Robots and Systems (IROS'91), Osaka-Japan, November 1991, pp. 1442-1447.
Levitt, T.S. & Lawton, D.T. (1990). Qualitative navigation for Mobile Robots, Artificial Intelligence, Vol. 44, No. 3, pp. 305-360.
Lisien, B.; Morales, D.; Silver, D.; Kantor, G.; Rekleitis, I. & Choset, H. (2003). Hierarchical Simultaneous Localization and Mapping, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'03), Las Vegas, Nevada, October 2003, pp. 448-453.
Lu, F. & Milios, E. (1997). Globally Consistent Range Scan Alignment for Environment Mapping, Autonomous Robots, Vol.4, N. 4, pp. 333-349, October 1997.
Martin, M. & Moravec, H. (1996). Robot Evidence Grids, Technical Report CMU-RI-TR-96-06, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, March 1996.
Martinelli, A.; Tomatis, N. & Siegwart, R. (2004). Open Challenges in SLAM: An Optimal Solution Based on Shift and Rotation Invariants, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'04), 2004, pp. 1327-1332.
Mataric, M. J. (1990). A distributed model for mobile robot environment-learning and navigation, Master thesis, MIT, Cambridge, MA, January 1990. Also available as MIT AI Lab Tech Report AITR-1228.
Montemerlo, M. & Thrun, S. (2003). Simultaneous Localization and Mapping with Unknown Data Association Using FastSLAM, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'2003), September 2003, pp. 1985 - 1991 vol.2.
Montemerlo, M. & Thrun, S. (2006). FastSLAM: A Scalable Method for the Simultaneous Localization and Mapping Problem in Robotics. Springer Tracts in Advanced Robotics, 2007.
Moravec, H. P. & Elfes, A. E. (1985), High Resolution Maps from Wide angle Sonar, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'85), St. Louis, Missouri, March 1985, pp. 116-121.
Moravec, H. P. (1988). Sensor Fusion in Certainty Grids for Mobile Robots, AI Magazine, Vol. 9, No. 2, Summer 1988, pp. 61-74.
Moravec, H. P. (1996, September). Robot spatial perception by stereoscopic vision and 3d evidence grids. Technical report, Carnegie Mellon University.
Murphy, R.R. (2000). Introduction to AI Robotics. The MIT Press, Massachusetts Institute of Technology, MA-USA, 2000.
Nieto, J.; Guivant, J. & Nebot, E. (2003). Real Time Data Association for FastSLAM, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'2003), Taipei, Taiwan, September 2003, pp. 412-418.
Nüchter, A.; Lingemann, K.; Hertzberg, J. & Surmann, H. (2006). Heuristic-Based Laser Scan Matching for Outdoor 6D SLAM, Proceedings of the International Workshop on Safety, Security and Rescue Robotics (SSRR'06), Gaithersburg, Maryland, USA, August 2006.
Preciado, A.; Meizel, D.; Segovia, A. & Rombaut, M. (1991). Fusion of Multi-Sensor Data: A Geometric Approach, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'1991), Sacramento, California, April 1991, pp. 2806-2811.
Ribo, M. & Pinz, A. (2001). A Comparison of three Uncertainty Calculi for Building Sonar-Based Occupancy Grids, The Journal of Robotics and Autonomous Systems, 35, pp. 201-209, 2001.
Polaroid Corporation, Ultrasonic Components Group, 119 Windsor Street, Cambridge, Massachusetts 02139, 1989.
Se, S.; Lowe, D. & Little, J. (2002). Mobile Robot Localization and Mapping with Uncertainty Using Scale-Invariant Visual Landmarks, The International Journal of Robotics Research, Vol. 21, No. 8, August 2002, pp. 735-758.
Singhal, A. (1997). Issues in Autonomous Mobile Robot Navigation, Report, Computer Science Department, University of Rochester, 1997.
Stachniss, C. & Burgard, W. (2005). Mobile Robot Mapping and Localization in Non-Static Environments. In Proc. of the National Conference on Artificial Intelligence (AAAI), Pittsburgh, PA, USA, 2005, pp. 1324-1329.
Szabó, R. (2004). Topological Navigation of Simulated Robots Using Occupancy Grid, International Journal of Advanced Robotic Systems, Vol. 1, No. 3, 2004, pp. 201-206, ISSN 1729-8806.
Thrun, S. & Bucken, A. (1996). Learning Maps for Indoor Mobile Robot Navigation, Technical Report CMU-CS-96-121, Carnegie Mellon University, School of Computer Science, Pittsburgh, PA 15213, 1996.
Thrun, S. (1998). Learning Metric-Topological Maps for Indoor Mobile Robot Navigation, Artificial Intelligence, Vol. 99, No. 1, pp. 21-71, 1998.
Thrun, S. (2001). A Probabilistic Online Mapping Algorithm for Teams of Mobile Robots, International Journal of Robotics Research, Vol. 20, No. 5, pp. 335-363, 2001.
Thrun, S. (2002). Robotic Mapping: A Survey, CMU-CS-02-111 report, Feb. 2002.
Tomatis, N.; Nourbakhsh, I.; Arras, K. & Siegwart, R. (2001). A Hybrid Approach for Robust and Precise Mobile Robot Navigation with Compact Environment Modeling, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'2001), Seoul-Korea, May 2001, pp. 1111-1116.
Wang, C.-C., & Thorpe, C. (2002). Simultaneous localization and mapping with detection and tracking of moving objects. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'2002), Sep. 2002, pp. 842-849 vol.1.
Wang, Z.; Huang, S. & Dissanayake, G. (2005). Decoupling Localization and Mapping in SLAM Using Compact Relative Maps, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'05), Edmonton, Canada, August 2005, pp. 1041-1046.
Yamauchi, B. & Langley, P. (2000). Place Recognition in Dynamic Environments. Journal of Robotic Systems, Vol. 14, No. 2, pp. 107-120.
Youngblood, G. M.; Holder, L. B. & Cook, D. J. (2000). A Framework for Autonomous Mobile Robot Exploration and Map Learning Through the Use of Place-Centric Occupancy. ICML Workshop on Machine Learning of Spatial Knowledge, 2000.