Research Article • DOI: 10.2478/s13230-011-0008-6 • JBR • 1(4) • 2010 • 231-239

Tidying and Cleaning Rooms using a Daily Assistive Robot - An Integrated System for Doing Chores in the Real World -

Kimitoshi Yamazaki*, Ryohei Ueda, Shunichi Nozawa, Yuto Mori, Toshiaki Maki, Naotaka Hatao, Kei Okada, Masayuki Inaba

Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan

Received 20 November 2010 Accepted 14 March 2011

Abstract

This paper describes the system integration of a daily assistive robot. We focus on several tasks related to cleaning and tidying up rooms. Recognition and motion generation functions were integrated on the robot, and these provided failure detection and, in some cases, recovery. Experiments on several daily tasks involving the handling of daily tools showed the effectiveness of our system.

Keywords

Assistive robots ■ Integrated software system ■ Doing chores

1. Introduction

Daily assistive robots working in the real world need various abilities to achieve their tasks. Because various types of furniture and tools exist in human living environments, it is desirable that a robot be capable of handling them in order to complete chores.

In general, daily routine work includes various object manipulations. Because recent robots have many DOFs, like a human, such a robot has the potential to take over chores from housekeepers. In order to leverage this capability, the software system running on the robot should be equipped with a wealth of functions for recognizing and manipulating furniture and tools. Moreover, a framework for generating robot behavior is also essential. The purpose of this research is to develop and deploy a highly integrated software system for completing daily work using a robot. A 3D geometrical simulator is placed at the heart of the system; it effectively connects the recognition and motion generation functions.

Daily assistive robots have been developed over several decades. Researchers evaluated their control intelligence or teaching system by applying their method to a single daily task in a real environment [5, 7, 9, 16]. From the viewpoint of system integration, Petersson et al. [15] developed a mobile manipulator system which could pick up an instructed object, recognise it and then hand the object to a person.

In recent years, daily assistance using humanoid robots has become an active area of robotics research [1, 12]. Sugano et al. presented assistance behavior using a human symbiotic robot which has object manipulation skills [21]. We also have developed daily assistive robots

*E-mail: {yamazaki, ueda, nozawa, y-mori, maki, hatao, k-okada, inaba}@jsk.t.u-tokyo.ac.jp

with perception, learning and motion planning skills. These robots were used for several daily tasks, cooperative work and so on [14]. One difference from the above research is that we aim to develop a multifunctional robot, similar to a housekeeper, that is able to carry out "cleaning and tidying up rooms'' one room after another. Since little robotics research related to daily assistance has reported on sequential task execution, we focus on such continuous operation and prove our system together with a certain level of failure detection and recovery functions.

2. Issues and approach

We aim to build an integrated software system for a robot which has several degrees of freedom, like a human. This section describes the basic policy.

2.1. Previous knowledge

Manipulation targets satisfy the following conditions:

■ A 3D geometrical model is given in advance. If the object has an articulated structure, this information is also added to the model. Although we also treat clothes as manipulation targets, only their 3D position is considered, for picking them up.

■ The pose of a target object is given in advance. However, a certain level of error is permitted because it is assumed that the robot estimates and corrects for the error automatically.

■ The robot has basic knowledge of its manipulation target. This means that the robot knows what features can be used for recognizing the target and which sensors should be used for effective recognition.

Table 1. Specification of the IRT Daily Assistive Robot

Dimensions: H 1550 mm x W 500 mm x D 650 mm
Weight: 130 kg
Head/Neck: 3 DOF (Yaw-Pitch-Roll)
Arms: 7 DOF each (Shoulder: Pitch-Roll-Yaw, Elbow: Pitch, Wrist: Yaw-Pitch-Yaw)
Hands: 3 fingers (each finger has 2 pitch joints)
Waist: 1 DOF (Pitch)
Mobile platform: Two-wheeled mobile base (Tread 500 mm, Wheel diameter 170 mm)

Figure 1. A Daily Assistive Robot.


Although this policy indicates that environmental models are given, it is expected that manufacturers will provide robots with such model data in the future.

2.2. Tidying and Cleaning Task

There are various daily assistance tasks which can be performed by an autonomous robot. This research copes with "tidying and cleaning rooms'', which includes the following chores:

• Carry a tray from a table to a kitchen,

• Collect clothes in a room and put them into a washing machine,

• Clean a floor by using a broom.

Because this task requires various recognition and manipulation skills such as tool recognition, tool manipulation, door opening and so on, it is a good example of daily assistance with which to prove our implementation.

2.3. System configuration

Figure 1 shows the daily assistive robot used in this work. The upper body consists of two arms with 7 DOFs each, a head with 3 DOFs, and a waist with 1 DOF. Each end-effector is equipped with 3 fingers, and each finger has 2 DOFs. So that an object can be grasped against the palm, the fingers are fixed in an arrangement that does not place them as a diametrically opposed pair. The lower body is a two-wheeled mobile platform. (See Table 1)

The robot mounts a stereo camera (STH-MDCS3, VIDERE Design Inc.) on the head and a LRF (Laser Range Finder, LMS200, SICK Inc.) on the wheelbase. Force sensors are also mounted on the wrists and shoulders.

Figure 2. Software Architecture.

To develop a robot system for achieving the tasks indicated above, behavior generation functions covering mobility, dual arm manipulation and dexterous handling are needed. Meanwhile, recognition functions such as environment recognition, self monitoring and positioning must also be provided.

2.4. Software system overview

Figure 2 shows the software system overview. Several recognition functions and motion generation functions are prepared, and a 3D geometrical simulator combines them. While the robot executes a task, the simulator provides a 3D shape model and appearance information using recognition functions, and also provides handling information using motion generation functions. There are 5 groups of functions as follows:

Environment recognition (Section 3) These functions have the role of providing manipulation information. In this paper, it is assumed that the approximate poses of furniture and daily tools are given in advance; the main purpose of these functions is to estimate the accurate pose of a manipulation target. A stereo camera and a LRF are used, and the relative pose between the robot and the target is calculated.

Motion generation of upper body (Section 4.1) These functions have the role of generating manipulation behavior. Based on recognition results, references such as grasp points and attentive viewpoints are specified. From these, the joint angles of both arms, the head and the waist are calculated.

Motion generation of wheelbase (Section 4.2) The role of these functions is to control the motion of the wheelbase. Because the assumed task includes several series of subtasks, the robot needs to move around the workspace. Navigation functions are implemented for a mobile platform which has two active wheels.

Self localization (Section 5.1) This function provides the system with the pose of the wheelbase based on a pre-constructed 2D map. Scan matching and odometry are combined so that stable localization can be achieved.

State monitoring (Section 5.2) These functions are very important to determine the state of manipulation. In addition to external sensors such as cameras and a LRF, internal sensors such as force sensors and joint angles are used in the observation.

Meanwhile, a different layer is implemented to observe and manage the robot state in real time, for instance collision checking of the wheelbase using the LRF, measurement of joint loads, and so on. These functions draw upon the plugin system described in [8].

3. Environment Recognition

3.1. 3D model based pose estimation

The room where the robot works includes a variety of objects. In our case, large objects such as a chair and a washing machine, and tools such as a tray and a broom, are manipulation targets.

3.1.1. Candidates selection

Although the poses of target objects are roughly given in advance, their accuracy is not sufficient to obtain information for manipulation. In addition, the positioning error of the wheelbase has harmful effects on the prediction of the objects. The purpose of pose estimation is to identify the object pose x = (x, y, \theta) relative to the robot on the spot.

A LRF mounted on the wheelbase can be used to narrow the region in which target objects exist. A partial matching of LRF data with 3D geometrical models is performed using the following procedure. A planar model of the furniture is first generated from the 3D geometrical model; Figure 3 shows examples sliced by a horizontal plane whose height is the same as the region scanned by the LRF. Next, groups of scan points which correspond well with the model are extracted as candidates. In the case of the table and the chair, only 3 of the feet are used for the matching because of occlusion. The top row of Figure 4 shows examples of the matching result; in this case, 2 or 3 candidates were found for each piece of furniture.
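As an illustration of this candidate selection step, the following sketch scores hypothesized 2D poses of a sliced furniture model against an LRF scan by counting model foot points that have a nearby scan point; the model representation, tolerance and score threshold are our own assumptions, not the authors' implementation.

import numpy as np

def transform(points, pose):
    """Apply a 2D rigid transform (x, y, theta) to an Nx2 point set."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])

def candidate_score(scan_xy, model_feet, pose, tol=0.05):
    """Fraction of model foot points that have a scan point within tol metres."""
    feet = transform(model_feet, pose)
    dists = np.linalg.norm(scan_xy[None, :, :] - feet[:, None, :], axis=2)
    return np.mean(dists.min(axis=1) < tol)

def select_candidates(scan_xy, model_feet, pose_hypotheses, min_score=0.7):
    """Keep hypothesized poses whose sliced model matches the scan well."""
    return [p for p in pose_hypotheses
            if candidate_score(scan_xy, model_feet, p) >= min_score]

# Example: a chair model sliced at LRF height, reduced to 3 visible feet (occlusion).
chair_feet = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4]])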

This procedure is only applied to furniture placed on the floor, and the results are input to the pose estimation method described below. On the other hand, candidate selection is skipped in the case of small objects, which are assumed to exist on top of or beside already recognized furniture.

Figure 3. 2D furniture models: (1) Table Model, (2) Chair Model, (3) Dustbin Model.


Figure 4. Pose estimation of a table and a chair. Top row: results of candidate selection based on the 2D planar models. Middle and bottom rows: wireframe models drawn as dark-red lines show candidates of target objects, and the red wireframe models in the lower figures show pose estimation results. In these experiments, the weight parameters were w_1 = w_2 = w_3 = 1.0 and w_4 = 0.0.


3.1.2. Pose estimation

After the candidate selection described above, 3D geometrical models are virtually placed in a simulator world based on their estimated positions. From these, a set of virtual figures is generated based on a probabilistic model, and they are projected onto an image captured by a stereo camera. Each projected model is compared with several types of features, and the degree of estimation accuracy is evaluated. We adopt a particle filter based approach in this process [13]. The estimation proceeds according to the following probabilistic formulation:

p(x_t | Z_{t-1}) = \int p(x_t | x_{t-1}) p(x_{t-1} | Z_{t-1}) \, dx_{t-1}.   (1)

This equation indicates the prior probability, which is calculated from an object pose x_t and a sensor measurement z_t. We denote Z_t = \{z_i, i = 1, \ldots, n\}.

The posterior probability p(x_t | Z_t) can be calculated obeying Bayes rule as follows:

p(x_t | Z_t) \propto p(z_t | x_t) p(x_t | Z_{t-1}),   (2)

where p(z_t | x_t) denotes the likelihood at each time. In order to calculate the likelihood, easily extractable image features such as edges and line segments are used. If a large part of the line segments that are the projected outline of a 3D model crosses over image edges, the matching score is increased. Likewise, LRF data and stereo data are also compared with the surface of the 3D model. The scores are integrated using the following equation:

F(f_e, f_d, f_l, f_c) = w_1 f_e + w_2 f_d + w_3 f_l + w_4 f_c,   (3)

where f_e, f_d, f_l and f_c are the results of correspondence based on edge, depth, LRF data and color, respectively, and w_1 to w_4 indicate predefined weight values which depend on the target object. After F(\cdot) is normalized by dividing by the sum of F(\cdot) over all particles, it is directly substituted for p(z_t | x_t).

These low level features are defined as multi-cue visual knowledge. The model having the most feasible pose x is selected. As Figure 4 shows, this approach is suitable for recognizing the pose of patternless furniture.
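To make the estimation loop of equations (1)-(3) concrete, the following is a minimal particle-filter sketch; the feature scorers stand in for the projection-and-comparison steps described above and are hypothetical placeholders, as are the noise parameters.

import numpy as np

def estimate_pose(particles, weights_w, scorers, motion_noise=(0.01, 0.01, 0.02)):
    """One update of a particle filter over the object pose x = (x, y, theta).

    particles : (N, 3) array of pose hypotheses.
    weights_w : (w1, w2, w3, w4) cue weights of equation (3).
    scorers   : (score_edge, score_depth, score_lrf, score_color),
                each mapping a pose to a non-negative score (hypothetical).
    """
    N = len(particles)
    # Prediction step, equation (1): diffuse particles with a simple motion model.
    particles = particles + np.random.normal(0.0, motion_noise, size=(N, 3))

    # Likelihood step, equations (2) and (3): weighted sum of multi-cue scores.
    F = np.array([sum(w * s(p) for w, s in zip(weights_w, scorers)) for p in particles])
    likelihood = F / F.sum()            # normalized F substitutes for p(z_t | x_t)

    # Resampling: draw new particles in proportion to the posterior weights.
    idx = np.random.choice(N, size=N, p=likelihood)
    best = particles[np.argmax(likelihood)]   # most feasible pose x
    return particles[idx], best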

3.2. Cloth recognition based on wrinkly features [17]

A recognition approach that does not rely on a 3D geometrical model is needed to find clothes because they are soft bodies. Our approach is to find wrinkles in images, and it involves image learning to define the wrinkly features.

In the learning process, Gabor filtering is first applied to several images capturing clothes placed in daily environments. Next, we crop partial images including cloth regions; background regions are also cropped randomly. 20-bin histograms are calculated from these regions, and then the vector of the discriminant function is calculated using the following equation:

L(w, h, \alpha) = \frac{1}{2} \|w\|^2 - \sum_i \alpha_i \{ y_i (w^{T} x_i - h) - 1 \},   (4)

where L(\cdot) denotes the Lagrange function, w and h are the parameters of the discriminant function, \alpha_i are Lagrange multipliers, x_i denotes the ith training sample and y_i its class label.

In the recognition process, an image is divided into three regions: (1) a region which can obviously be judged as a wrinkle region, (2) a non-wrinkle region, and (3) an unclear region. These are calculated using the following discriminant:

y = w^{T} x - h.

Next, the region belonging to (3) is segmented by means of graph cut [3], regarding regions (1) and (2) as seeds. Figure 5 shows an example of shirt detection.

Figure 5. Cloth detection based on wrinkly features in an image: original image, Gabor filtering, and detection result.

This process judges whether or not clothes exist in front of the robot. The 3D position of the clothes is calculated by combining the result with a 3D depth image.
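The sketch below illustrates the wrinkle-feature pipeline (Gabor filtering, a 20-bin histogram and a linear discriminant) in a simplified form; the Gabor parameters, the single filter orientation and the margin for the "unclear" band are our own assumptions, and w, h are taken to come from the learning step.

import cv2
import numpy as np

def wrinkle_feature(gray_patch, n_bins=20):
    """20-bin histogram of Gabor responses over a patch, as in the learning step."""
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5)
    response = cv2.filter2D(gray_patch.astype(np.float32), -1, kernel)
    hist, _ = np.histogram(response, bins=n_bins,
                           range=(response.min(), response.max()))
    return hist / (hist.sum() + 1e-9)

def classify_patch(gray_patch, w, h, margin=0.2):
    """Linear discriminant y = w^T x - h, with an 'unclear' band around zero."""
    y = float(np.dot(w, wrinkle_feature(gray_patch)) - h)
    if y > margin:
        return "wrinkle"        # region (1): foreground seed for graph cut
    if y < -margin:
        return "non-wrinkle"    # region (2): background seed for graph cut
    return "unclear"            # region (3): resolved later by graph cut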

3.3. Attention area extraction and change monitoring

For successful object manipulation, one effective approach is to visually confirm the state of the object while the manipulation is being performed. Two types of functions are provided to estimate the manipulation condition: one uses specific color extraction, and the other uses the difference between two images. The regions extracted by these methods are classified, and their shapes and areas are used to judge whether or not the manipulation is going well.
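A minimal sketch of the image-differencing variant is given below; the threshold values and the notion of an attention mask are our own assumptions.

import cv2
import numpy as np

def changed_area(img_before, img_after, attention_mask, thresh=30):
    """Ratio of pixels in the attention region that changed between two images."""
    diff = cv2.absdiff(cv2.cvtColor(img_before, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(img_after, cv2.COLOR_BGR2GRAY))
    changed = (diff > thresh) & (attention_mask > 0)
    return changed.sum() / max(int((attention_mask > 0).sum()), 1)

def manipulation_succeeded(img_before, img_after, attention_mask, min_change=0.2):
    """Judge success (e.g. a washer door opened) by appearance change in the attention area."""
    return changed_area(img_before, img_after, attention_mask) > min_change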

4. Motion generation

4.1. Upper body motion

In our assumptions, the coordinates which provide the robot with visual and grasping points are embedded in the 3D object model. This means that once the pose of the target object is estimated, it is possible to sense the object in more detail and to initialize motion planning for manipulation. For the motion planning, Jacobian-based inverse kinematics is used. In particular, we use the SR-inverse [11], which behaves well in terms of stability around singular points. The joint velocity \dot{\theta} is calculated from the end-effector velocity \dot{x} as follows:

\dot{\theta} = J^{\#}_{W} \dot{x} + (I - J^{\#}_{W} J) y,   (5)

where J^{\#} is the SR-inverse of J, and J^{\#}_{W} is the product of J^{\#} and a weight matrix W. y indicates an optimization function for avoiding self collision by using the redundant degrees of freedom.

The diagonal elements w_i of W are determined from the following equation [4]:

w_i = \frac{(\theta_{i,max} - \theta_{i,min})^2 (2\theta_i - \theta_{i,max} - \theta_{i,min})}{4 (\theta_{i,max} - \theta_i)^2 (\theta_i - \theta_{i,min})^2},   (7)

where \theta_{i,max} and \theta_{i,min} indicate the maximum and minimum angle limits of joint i, respectively. In equation (7), w_i is replaced by 1/(1 + w_i) when its value becomes smaller than the previous value; in the alternate case, it is replaced by 1. This arrangement results in small weights when joint angles approach the angle limits.
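As an illustration of equation (5), the following minimal numpy sketch computes joint velocities with an SR-inverse and a null-space term; the damping factor k and the function signatures are our own assumptions, not the authors' implementation.

import numpy as np

def sr_inverse(J, k=0.01):
    """Singularity-robust (damped least squares) inverse of the Jacobian J."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + k * np.eye(m))

def ik_step(J, xdot, w, y_null, k=0.01):
    """Equation (5): theta_dot = J#_W xdot + (I - J#_W J) y.

    J      : (m, n) end-effector Jacobian.
    xdot   : (m,) desired end-effector velocity.
    w      : (n,) diagonal joint-limit weights, e.g. from equation (7).
    y_null : (n,) gradient of a self-collision avoidance criterion.
    """
    W = np.diag(w)
    J_sharp_w = sr_inverse(J, k) @ W      # product of the SR-inverse and W, as in the text
    n = J.shape[1]
    return J_sharp_w @ xdot + (np.eye(n) - J_sharp_w @ J) @ y_null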

4.2. Wheelbase motion

Because we focus on performing several types of chores with a single robot, the robot is required to shift position to where each task is performed. The motion of the wheelbase is controlled using line trajectory tracking [18]. First, the floor is manually discretized into a set of coordinates. The trajectory of the wheelbase is then generated to follow these coordinates. Figure 6 shows an example of this motion. The coordinates marked (1) to (5) are connected by sequential line segments, and these are regarded as the target trajectory. The initial heading of the robot is assumed to be in the positive x direction of coordinate (1). The controller outputs a velocity v and an angular velocity \omega with respect to the relative pose of the coordinates. In the case shown in Figure 6, the robot first moves away from the goal in interval 'A', because coordinate (2) is located behind (1), and then it proceeds towards the goal coordinates via the remaining coordinates. For smooth tracking, the target line is changed before the robot reaches the termination coordinates. In our implementation, a new target line is given to the control system when the distance between the wheelbase and the termination coordinates becomes less than 150 mm. The initial wheelbase pose of every task is defined in advance on an environment map, and intermediate coordinates are also given as pass points. When the robot moves to the initial position of another task, or restarts a task from the beginning, the motion generator provides the robot with a set of coordinates to shift towards the required target pose.

Figure 6. Wheelbase motion generation. The robot tracks several planned coordinates in order.
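The following is a minimal sketch of one line trajectory tracking step of the kind described above, switching to the next target line within 150 mm of the segment end; the control gains and the specific feedback law are our own assumptions rather than the controller of [18].

import numpy as np

def track_lines(pose, waypoints, seg_idx, v_nom=0.2, k_y=1.5, k_th=1.0, switch_dist=0.15):
    """One control step of line trajectory tracking toward waypoint seg_idx+1.

    pose      : (x, y, theta) of the wheelbase in the map frame.
    waypoints : list of (x, y) coordinates (1)..(n) discretized on the floor.
    Returns (v, omega, seg_idx).
    """
    x, y, theta = pose
    p0, p1 = np.array(waypoints[seg_idx]), np.array(waypoints[seg_idx + 1])

    # Switch to the next target line before reaching the end of the current one (150 mm).
    if np.hypot(*(p1 - [x, y])) < switch_dist and seg_idx + 2 < len(waypoints):
        seg_idx += 1
        p0, p1 = np.array(waypoints[seg_idx]), np.array(waypoints[seg_idx + 1])

    d = p1 - p0
    line_th = np.arctan2(d[1], d[0])                     # heading of the target line
    # Signed lateral offset of the robot from the target line.
    e_y = ((x - p0[0]) * -d[1] + (y - p0[1]) * d[0]) / np.linalg.norm(d)
    e_th = (theta - line_th + np.pi) % (2 * np.pi) - np.pi

    v = v_nom
    omega = -k_y * e_y - k_th * e_th                     # steer back onto the line
    return v, omega, seg_idx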

5. Localization and state monitoring

5.1. Self localization

Wheelbase localization is achieved using a LRF (Laser RangeFinder) mounted on the wheelbase. The environment map was generated by SLAM techniques in advance, and the present robot pose is calculated by means of scan matching [6].

In the map generation phase, we apply a SLAM approach which combines scan matching based on the ICP algorithm [2] with GraphSLAM [10]. Because this map is represented as dozens of reference scans and robot positions, the ICP algorithm can be used to match the input scan against the reference scans in the localization phase. However, this matching is prone to failure when the wheelbase rotates sharply. In order to eliminate such mismatches, the odometry difference from time t-1 to t is incorporated into the scan matching process.
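As a rough illustration of the localization step, the sketch below aligns an input scan to a reference scan with a brute-force point-to-point ICP loop seeded by the odometry prediction; it is not the ICP/GraphSLAM implementation of [2, 10], and the iteration count and correspondence search are simplifications.

import numpy as np

def icp_2d(scan, ref, init_pose, iters=20):
    """Align `scan` to `ref` with point-to-point ICP, seeded by the odometry pose.

    scan, ref : (N, 2) and (M, 2) point sets.
    init_pose : (x, y, theta) predicted from odometry between t-1 and t.
    """
    x, y, th = init_pose
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        moved = scan @ R.T + [x, y]
        # Nearest-neighbour correspondences (brute force, for clarity only).
        nn = ref[np.argmin(np.linalg.norm(moved[:, None] - ref[None], axis=2), axis=1)]
        # Closed-form rigid alignment (Kabsch) between the matched point sets.
        mu_s, mu_r = moved.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (nn - mu_r))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:       # avoid reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_r - dR @ mu_s
        th += np.arctan2(dR[1, 0], dR[0, 0])
        x, y = dR @ np.array([x, y]) + dt
    return x, y, th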


5.2. State monitoring

When the robot performs object manipulation, its state, such as the loads on its joints, should be observed in order to detect manipulation failures. For this reason, monitoring functions are required. Examples of these are (i) load monitoring using the force sensors embedded in the arms and (ii) monitoring the difference between the joint angles of the present pose and those of a reference pose. In order to maintain navigation safety, collision risks for the wheelbase are also assessed using the LRF.
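A minimal sketch of such monitoring checks is given below; the thresholds and the rule for judging a lost grasp are our own assumptions.

import numpy as np

def load_anomaly(wrench, baseline, limit=15.0):
    """Flag an abnormal external load on a wrist or shoulder force sensor."""
    return np.linalg.norm(np.asarray(wrench) - np.asarray(baseline)) > limit

def pose_deviation(joint_angles, reference_angles, tol=0.1):
    """Flag joints whose present angle differs from the reference pose by more than tol rad."""
    return np.abs(np.asarray(joint_angles) - np.asarray(reference_angles)) > tol

def grasp_lost(finger_angles, closed_reference, tol=0.15):
    """Assume the grasped object has separated if all fingers closed far past the reference."""
    return np.all(np.asarray(finger_angles) > np.asarray(closed_reference) + tol)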


6. Behavior description of daily assistive tasks

In order for a life-sized robot to provide assistance in various daily tasks, a unified framework of behavior description is needed. Moreover, this framework should be capable of recovery from failures. This section describes the policy we develop.


6.1. Basic configuration

A manipulation behavior consists of two phases: "approach" and "manipulation". The approach phase is responsible for finding the manipulation target and navigating near to it. The manipulation phase is concerned with confirming the pose of the target, planning the robot motion and executing it. We call the smallest element of a behavior a "behavior unit'', which is a set of a condition check (check), motion generation (plan) and motion execution (do). That is, daily assistive tasks consist of an approach phase and a manipulation phase, and each task is constructed from several behavior units. Figure 7 shows an example of this behavior description for chair handling. The task starts from the approach part and proceeds as follows:

Approach part, check1 confirm the position of a chair and whether or not it can be recognized at the present robot pose

Approach part, plan1 plan a wheelbase motion for shifting to the initial pose

Approach part, do1 execute the planned motion

Approach part, check2 recognize a chair pose

Approach part, plan2 plan a wheelbase motion for approaching the chair



Approach part, do2 execute the wheelbase motion

Manipulation part, check1 confirm the pose of the chair again

Manipulation part, plan1 plan a grasping pose of the chair

Manipulation part, do1 execute the grasping

Manipulation part, check2 check the grasping state

Manipulation part, plan2 plan a motion to draw the chair

Manipulation part, do2 execute the drawing motion

Figure 7. Task structure in the case of chair manipulation.

One advantage of this framework is that each behavior unit can be reused as the situation demands. For instance, if the hand grasping a chair loses hold of it while drawing it, the error is immediately detected by the state monitoring functions, for example from the force load on the wrist or the joint angles of the fingers. In such a case, the task can be continued by moving from step 12 back to step 1 in Figure 7. The recognition and motion generation functions run at a high layer, while the state observation functions run at a low layer; the latter have the role of observing and detecting abnormal manipulation states in real time.
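A minimal sketch of this check/plan/do structure is given below; the class layout, function signatures and retry policy are our own assumptions rather than the authors' implementation.

class BehaviorUnit:
    """One behavior unit: a condition check, motion planning, and motion execution."""
    def __init__(self, name, check, plan, do):
        self.name, self.check, self.plan, self.do = name, check, plan, do

    def run(self, world):
        result = self.check(world)          # e.g. recognize the chair pose
        if result is None:
            return False                    # cannot even start: treated as a failure
        motion = self.plan(world, result)   # e.g. grasping or drawing motion
        return self.do(world, motion)       # False if state monitoring reports a failure

def run_task(units, world, max_retries=3):
    """Execute approach and manipulation units in order, restarting from the top on failure."""
    for _ in range(max_retries):
        if all(unit.run(world) for unit in units):
            return True                     # e.g. steps 1-12 of Figure 7 all succeeded
    return False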

6.2. Classification of failures

The term "failure" in this paper refers to the condition that the result of a sensor measurement is differs from its assumed value, and that fact adversely affects task execution. Because cleaning and tidying each require that several sub-tasks are continuously performed, it is not acceptable to stop the execution when a failure is detected. From this reason, failure detection and recovery are absolutely necessary. There are many types of failures and correspondingly many ways to recover from these failures. Examples in our task are that a cloth hangs

out of a washing tab, or a broom lies down on a floor because the robot failed to grasp it. In order to establish the basic structure of error recovery, we classify actually observed failures into three groups from the viewpoint of the levels of recovery intractability.

■ (A) Failures observed before manipulation: This type of failure occurs while approaching or at the beginning of object manipulation. One example is that the robot cannot plan its handling pose because of a wheelbase positioning error after it approaches a manipulation target.

■ (B) Failures observed after manipulation, with almost no changes to the manipulation target: This type of failure occurs in the manipulation phase and is observable by checking whether or not the sensing data is similar to the assumed data. If the robot can recover from a failure just by repeating the same behavior units again, it is classified into this group.

■ (C) Failures that significantly change the manipulation conditions: Although this type of failure is observed in the same way as (B), the manner of recovery differs from (B) because simple repetition of a behavior is not sufficient in this case. For instance, if the robot tries to grab a broom propped against a wall but drops it, a new picking-up motion is needed.

In the case of failures classified as (A), the most basic class of failures, the system should discover the failure with a "check" function and a "plan" function, and then a subsequent call to the "check" function. If an adequate result cannot be acquired through this process, a behavior unit which has already been called is used again. In the case of (B), recovery is achieved by moving from a "do" function back to a "check" function in the same behavior unit. If this does not succeed, a behavior unit which has already been called is reused, in the same way as for (A). In the case of (C), a completely different behavior unit is called first. For example, if the robot discovers a cloth falling out of the washing tub, a new motion is inserted to pick the cloth up. Next, the original behavior unit for putting the cloth into the washer is called again. If the new behavior succeeds, the system returns to the behavior which was originally being executed.

Figure 8 shows a state transition diagram in which such arrows and new behavior units are added. More arrows may be depicted in this diagram because "check" functions should be callable from the check/plan/do functions of other behavior units. Note that this representation allows for the effective reuse of behavior units. It also suggests that the implementation of a new function does not have to be restarted from the beginning, because a previously implemented function is easy to remodel for the new function based on our software system shown in Figure 2.

Figure 8. Task structure including standard failure recovery.
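The sketch below illustrates how recovery could be dispatched according to the three failure classes, reusing the BehaviorUnit sketch above; the helper rerun_previous_unit and the keying of recovery units by name are hypothetical.

def rerun_previous_unit(task_units, unit, world):
    """Re-run the most recent already-called unit before `unit` (hypothetical helper)."""
    i = task_units.index(unit)
    return task_units[max(i - 1, 0)].run(world)

def recover(failure_class, unit, task_units, recovery_units, world):
    """Dispatch recovery according to the failure classes (A), (B) and (C).

    unit           : the behavior unit in which the failure was observed.
    task_units     : the ordered units of the current task.
    recovery_units : extra units such as 'pick up the dropped cloth', keyed by unit name.
    """
    if failure_class == "A":
        # Detected between check and plan: re-run an earlier, already-called unit.
        return rerun_previous_unit(task_units, unit, world)
    if failure_class == "B":
        # Target almost unchanged: go back from 'do' to 'check' of the same unit,
        # falling back to an earlier unit as in case (A).
        return unit.run(world) or rerun_previous_unit(task_units, unit, world)
    if failure_class == "C":
        # Conditions changed significantly: call a different behavior unit first,
        # then return to the unit that was originally being executed.
        return recovery_units[unit.name].run(world) and unit.run(world)
    return False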

7. Experiments

7.1. Settings

7.1.1. Division into behavior units

Figure 9 shows the experimental environment. The size of the room was 5 m x 8 m. A set of furniture including a table, chairs and shelves was placed in the room, as well as some home electrical appliances such as a fridge and a washing machine. The tasks imposed on the robot were (i) to carry a tray, (ii) to collect clothes and (iii) to clean the floor.

Figure 9. Experimental Environment.

1. Carry a tray: The robot stands in front of a table (point P in Figure 9) at the beginning, picks the tray up, moves to point Q and puts it in the kitchen.

2. Collect a cloth: The robot moves to point R and finds a cloth placed on the back of a chair. After picking up the cloth, the robot puts it into the washing machine after moving to point S.

3. Sweep the floor: The robot grabs a broom which is propped up against the washing machine, and moves to point R. It sweeps under the table after removing the chair, and then moves around the room sweeping, as shown in the upper figure of Figure 9.

These three types of tasks were divided into 14 behavior units in total: 2 for carrying the tray, 5 for collecting the cloth, and 7 for sweeping the floor. In each behavior unit, a "check" function first returned the pose of a target object, then a "plan" function generated the robot pose stream based on the target pose, and finally a "do" function executed the poses while observing the manipulation state.

In this task division, we did not enumerate behaviors such as "maintaining the present grasping state'' as separate behavior units. The reason is that such behaviors should be supervised by the state monitoring functions which run in parallel with the main process. If a monitoring function detects an irregular state, for instance that a grasped object has separated from the hand, the result is used as a trigger which calls a behavior unit for recovery.

7.2. Tidying and Cleaning a Room

Figure 10 shows one set of experimental results including the three types of tasks mentioned above. Thanks to the integrated system and the failure recovery routines, the experiments succeeded more than 10 times without interruption. The execution time was 8 minutes 20 seconds on average. However, when failures belonging to (B) or (C) occurred, which needed explicit additional robot motion, the time was extended by the recovery.

Several examples of error recovery are introduced here. Figure 11 shows failure detection in the case of opening the door of a washing machine. This is a class (B) failure. In this case, a function to check the present door state was invoked, based on the differentiation between two images described in Section 3.3. Two images were captured, before and after door opening, and the appearance change in the door region was observed after removing illumination effects (a result is shown in Figure 11). This function was called whenever door opening was attempted. If the function returned an unexpected result, the behavior unit to push the button for opening the door was called again. Moreover, if several such trials did not succeed, the error handling layer selected another behavior unit to retry the task and achieve success; for instance, the standing point of the robot was replanned. We found and implemented other failures and recoveries classified as (B) as follows: (i) regrasping a cloth: if the robot could not grasp the cloth, the motion was repeated; (ii) adjusting the broom direction: the orientation of the broom head was rotated to be suitable for sweeping.

Figure 12 shows an example of failure recovery related to cloth handling. In this case the robot dropped a cloth after picking it up from the back of a chair. This is a class (C) failure. Here, functions to find and pick up the cloth on the floor were added; that is, a new behavior unit was invoked. We also found and implemented additional behavior units in a similar fashion: (i) putting a cloth into the washer tub: if a part of the cloth hangs out of the washing tub, the robot pushes it into the tub; (ii) picking up a broom: if the broom falls onto the floor because the robot failed to grasp it, the pose of the broom is detected in order to plan the picking-up motion.

Figure 10. An experiment.

Figure 11. Failure detection in washer door opening: after pushing the button, success (the door opened) is judged from the observable manipulation effect.

Figure 12. Failure Recovery by picking up a cloth.

8. Conclusion

This paper described an integrated system for robots which provide assistance with daily activities. In particular, the focus was placed on several tasks related to cleaning and tidying up rooms. We designed functions related to recognition, motion generation and state monitoring, which were then coordinated through the use of a 3D geometrical simulator. Our system relied on simple task descriptions, as well as a number of recognition and motion generation functions. In addition, functions for detecting and recovering from failures were also developed. The effectiveness of our framework and system was demonstrated through experiments showing robots performing several daily tasks requiring the handling of tools.

In our future work, we will aim to develop more functions to detect failures automatically. In addition, we predict that automatic behavior unit generation is needed because manually dividing a task into behavior units is a tedious process.

References

[1] T. Asfour et al., "ARMAR-III: An Integrated Humanoid Platform for Sensory-Motor Control,'' IEEE-RAS Int'l Conf. on Humanoid Robots, 2006.

[2] Besl, P. J. and McKay, N. D., "A Method for Registration of 3D Shapes,'' IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 239-256, 1992.

[3] Y. Boykov and V. Kolmogorov, "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision,'' IEEE Trans. Pattern Analysis and Machine Intelligence, 26(9), pp.1124-1137, 2004.

[4] T. F. Chang and R.-V. Dubey, "A weighted least-norm solution based scheme for avoiding joint limits for redundant manipulators,'' IEEE Trans. on Robotics and Automation, Vol. 11, No. 2, pp. 286-292, April 1995.

[5] N.Y. Chong and K. Tanie, "Object Directive Manipulation Through RFID,'' Proc. Int'l Conf. on Control Automation and Systems, pp.22-25, 2003.

[6] N. Hatao, K. Okada and M. Inaba, "Autonomous Mapping and Navigation System in Rooms and Corridors for Indoor Support Robots,'' in Proc. of the IEEE Int'l Conf. on Mechatronics and Automation, TE3-6, 2008.

[7] H. Jang et al., "Spatial Reasoning for Real-time Robotic Manipulation,'' Proc. of IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, pp. 2632-2637, 2006.

[8] K. Yokoi et al., "Experimental Study of Humanoid Robot HRP-1S,'' The International Journal of Robotics Research, 2004.

[9] R. Katsuki et al., "Handling of Objects with Marks by a Robot,'' Proc. of the IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, pp. 130-135, 2003.

[10] F. Lu and E. Milios, "Globally consistent range scan alignment for environment mapping,'' Autonomous Robots, Vol. 4, No. 4, pp.333-349, 1997.

[11] Y. Nakamura and H. Hanafusa, "Inverse Kinematic Solutions with Singularity Robustness for Robot Manipulator Control,'' Journal of Dynamic Systems, Measurement, and Control, Vol. 108, pp. 163-171, 1986.

[12] E. S. Neo et. al., "Operating Humanoid Robots in Human Environments'', Proc. Workshop on Manipulation for Human Environments, Robotics: Science and Systems, 2006.

[13] K. Okada, M. Kojima, S. Tokutsu, T. Maki, Y. Mori and M. Inaba, "Multi-cue 3D Object Recognition in Knowledge-based Vision-guided Humanoid Robot System,'' Proc. of the IEEE/RSJ Int'l. Conf. on Intelligent Robots and Systems, pp.1505-1506, 2007.

[14] K. Okada, M. Kojima, Y. Sagawa, T. Ichino, K. Sato and M. Inaba, "Vision based behavior verification system of humanoid robot for daily environment tasks,'' 6th IEEE-RAS Int'l Conf. on Humanoid Robots, pp. 7-12, 2006.

[15] L. Petersson, et al., "Systems Integration for Real-World Manipulation Tasks,'' Proc. of Int'l Conf. on Robotics and Automation, Vol.3, pp.2500-2505, 2002.

[16] T. Takahama, K. Nagatani and Y. Tanaka, "Motion Planning for Dual-arm Mobile Manipulator - Realization of "Tidying a Room Motion'' -,'' Proc. of Int'l Conf. on Robotics and Automation, pp. 4338-4343, 2004.

[17] K. Yamazaki and M. Inaba, "A Cloth Detection Method Based on Image Wrinkle Feature for Daily Assistive Robots,'' IAPR Conf. on Machine Vision Applications, 2009. (to appear)

[18] Y. Watanabe and S. Yuta, "Estimation of Position and its Uncertainty in the Dead Reckoning System for the Wheeled Mobile Robot,'' Proc. of 20th International Symposium on Industrial Robots, pp.205-210, 1989.

[19] K. Yamazaki, T. Tsubouchi and M. Tomono, "Modeling and Motion Planning for Handling Furniture by a Mobile Manipulator,'' Proc. of IEEE Int'l Conf. on Intelligent Robots and Systems, pp.1926-1931, 2007.

[20] R. Zollner, T. Asfour and R. Dillman, "Programming by Demonstration: Dual-Arm Manipulation Tasks for Humanoid Robots,'' IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems , 2004.

[21] http://twendyone.com/index.html