International Journal of Advanced Robotic Systems
Design of Robotic Human Assistance Systems Using a Mobile Manipulator
Regular Paper
Yunyi Jia1, Yong Liu2*, Ning Xi1, Hai Wang1 and Philipp Stürmer1
1 Department of Electrical and Computer Engineering, Michigan State University, USA
2 School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
* Corresponding author E-mail: liuy1602@njust.edu.cn
Received 30 May 2012; Accepted 19 Jun 2012 DOI: 10.5772/50828
© 2012 Jia et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract Robotic systems have been widely used in many areas to assist human beings, and mobile manipulators are among the most popular choices. This paper investigates human assistance systems using a mobile manipulator, for example, to guide the blind and to transport objects. Distinct from existing systems, an integrated dynamic model and controller of the mobile manipulator are designed. Singularity, manipulability and safety are all considered in the system design. Furthermore, two human assistance modes - a Robot-Human mode and a Teleoperator-Robot-Human mode - are designed and analysed. The Teleoperator-Robot-Human mode integrates human intelligence into the assistance system to further enhance its efficiency and safety. Experimental results on a real mobile manipulator demonstrate the effectiveness of the designed systems.
Keywords human assistance, mobile manipulator, cooperation
1. Introduction
Robotic systems have been incorporated into our lives to assist us in many respects, much as a human would, and sometimes they can even help us better than a human could. With the development of robotic technologies, more and more robotic systems will be invented to assist humans in numerous ways, including manufacturing assistance, homecare and patient care, among others [1][2].
Many researchers have studied different ways to assist human beings using different kinds of robotic systems. Pereira et al. used a mobile robot to help humans perform object transportation tasks [3]. Wang et al. did the same task using a passive mobile robot and taking torque analysis into account [4]. In papers [5] and [6], robotic manipulators were also used to achieve human-robot cooperative manipulation and transportation. These studies considered either mobile robots or manipulators. In addition, some studies have begun to consider using mobile manipulators. Yamanaka et al. proposed a cooperative motion control approach for a human and a mobile manipulator using the interactive virtual impedance method [7]. Nozaki and Murakami used a more dexterous two-wheel driven mobile manipulator to achieve human-robot cooperative transportation [8]. These two papers, however, both investigated this problem by simulation rather than through experiments.
In paper [9], Umeda et al. considered hybrid position/force control of a real mobile manipulator based on cooperative task sharing with humans, treating the mobile manipulator as a redundant manipulator. However, it only considered the kinematic modelling and control of the system. Kosuge et al. studied a control algorithm for a mobile manipulator with dual arms for cooperative work with humans [10]. However, it controlled the mobile base and the dual arms separately rather than treating them as one system. Most of these past studies assumed that the robotic system stays away from singularities. This assumption, however, limits the working space of the system and is not practical in real robotic applications.
This paper introduces the design and implementation of a robotic human assistance system using a mobile manipulator. The dynamic model of the mobile manipulator is derived and an effective task-space dynamic controller is designed. Besides this, the problems of singularity, manipulability and safety are all considered in the system design. Two modes of human assistance system are designed: a Robot-Human mode and a Teleoperator-Robot-Human mode. Most of the previous studies used a Robot-Human mode to provide assistance for humans. In this mode, the robot communicates directly and exclusively with the human it is assisting. The mode is very helpful in many areas, like cooperative transportation. However, in some more complicated cases - like blind guidance - this mode is not the most reliable or the safest choice, because the efficiency and safety depend only upon the intelligence and stability of the robotic system. As such, this mode should be improved for such cases. Thus, this paper first designs a Robot-Human assistance system using a mobile manipulator; then, an improved assistance system using the mobile manipulator - the Teleoperator-Robot-Human mode - is also designed. This mode integrates the intelligence of the teleoperator into the human assistance system so that it can provide more efficient, more reliable and safer assistance for the human. Both modes are implemented on a mobile manipulator, and the experimental results demonstrate the effectiveness of the proposed methods. This paper extends previous work [11], complementing and improving some details.
2. Mobile Manipulator System
2.1 System Architecture
Figure 1 shows a picture and the architecture of the mobile manipulator. It consists of a PUMA 560 robotic manipulator and a mobile platform. The mobile platform was completely updated from the original NOMAD XR4000 [12]. Multiple onboard sensors are added to the
mobile manipulator system, including a laser scanner, cameras, sonar and infrared sensors, and force/torque and acceleration sensors. The system is controlled by three controllers which run three different operating systems. An industrial PC running the real-time QNX system is used to control the PUMA 560 manipulator and to connect the force/torque/acceleration sensors. A PMAC controller running a real-time embedded system is used to control the mobile platform. Another industrial PC running Linux is used as a host system to coordinate the manipulator and the mobile platform; it also provides the connection to the Internet and serves as the controller for all other sensors, including the laser scanner, cameras, sonar and infrared distance sensors. A PHANTOM haptic device and a Microsoft force-feedback joystick are used for the operation of the system. Using these controllers, the mobile manipulator can work either autonomously or by teleoperation via the Internet.
Figure 1. Robotic system architecture (laser scanner, camera, sonar and infrared sensors on the mobile platform)
In what follows, the model and controller of the mobile manipulator are introduced based on previous work [13]. We define the world frame as $\Sigma_W$ and the mobile platform frame as $\Sigma_B$. The origin of $\Sigma_B$ is located at the centre of the top of the mobile platform. For the modelling of the system, an additional frame $\Sigma'_B$ is also defined, which shares the same origin as frame $\Sigma_B$ but is always parallel with frame $\Sigma_W$. Based on these coordinate frames, the variables of the mobile manipulator are defined as follows.
• $Y = \{x, y, z, O, A, T\}^T$: end-effector position and orientation in $\Sigma_W$;
• $Y_B = \{x_b, y_b, z_b, O_b, A_b, T_b\}^T$: end-effector position and orientation in $\Sigma_B$;
• $Y'_B = \{x'_b, y'_b, z'_b, O'_b, A'_b, T'_b\}^T$: end-effector position and orientation in $\Sigma'_B$;
• $X = \{x_m, y_m, \varphi_m\}^T$: the coordinates of the mobile platform in $\Sigma_W$;
• $q = \{q_1, q_2, q_3, q_4, q_5, q_6\}^T$: joint angles of the manipulator.
2.2 Model of the Mobile Manipulator
By multiplying the six link transformation matrices, the forward kinematics of the manipulator in frame $\Sigma_B$ can easily be obtained and denoted as $Y_B = h(q)$. Differentiating both sides, we get the velocity kinematics $\dot{Y}_B = J\dot{q}$. Differentiating again, we get the acceleration relationship between the task-space and joint-space variables, $\ddot{Y}_B = J\ddot{q} + \dot{J}\dot{q}$. The frames $\Sigma'_B$ and $\Sigma_B$ share the same origin; the transformation between them is a rotation by the mobile platform orientation $\varphi_m$. Then, defining the joint angles of the manipulator in frame $\Sigma'_B$ as $q'$, we have the following relationships in frame $\Sigma'_B$:
$$\dot{Y}'_B = J'\dot{q}', \qquad \ddot{Y}'_B = J'\ddot{q}' + J'\ddot{\Theta}_m + \dot{J}'\dot{q}' \tag{1}$$

where $\Theta_m = \{\varphi_m, 0, 0, 0, 0, 0\}^T$ is the orientation of the mobile platform and $J'$ is the Jacobian considering $\Theta_m$ in frame $\Sigma'_B$.
Combining these with the motion of the mobile platform, we have the velocity kinematics and the acceleration relationship of the mobile manipulator in the world frame $\Sigma_W$, as below:

$$\dot{Y} = \dot{\bar{Y}}_B + J'\dot{q}', \qquad \ddot{Y} = \ddot{\bar{Y}}_B + J'\ddot{q}' + J'\ddot{\Theta}_m + \dot{J}'\dot{q}' \tag{2}$$

where $\bar{Y}_B = \{x_m, y_m, h_m, 0, 0, 0\}^T$ is the augmented position coordinate vector of the origin of frame $\Sigma'_B$.
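As a small illustrative sketch (not from the paper), the velocity composition in (2) can be written directly in code; the Jacobian $J'$ and the joint velocities $\dot{q}'$ are assumed to be supplied by the manipulator's kinematics routines.

```python
import numpy as np

def end_effector_velocity(xm_dot, ym_dot, J_prime, q_prime_dot):
    """World-frame end-effector velocity following (2): Ydot = Ybar_B_dot + J' q'dot.

    xm_dot, ym_dot : planar velocity of the mobile platform origin (world frame)
    J_prime        : 6x6 manipulator Jacobian expressed in frame Sigma'_B
    q_prime_dot    : joint velocity vector q'dot, length 6
    """
    Ybar_B_dot = np.array([xm_dot, ym_dot, 0.0, 0.0, 0.0, 0.0])  # platform contribution
    return Ybar_B_dot + np.asarray(J_prime) @ np.asarray(q_prime_dot)  # arm contribution
```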
The next step is to derive the dynamic model of the mobile manipulator. First, considering the 6-DOF manipulator, its dynamics in frame $\Sigma'_B$ can be described as:
$$D(q)\ddot{q}' + C(q, \dot{q}) + G(q) = \tau_1 \tag{3}$$
where $\tau_1$ is the $6 \times 1$ vector of applied joint torques, $D(q)$ is the $6 \times 6$ positive definite manipulator inertia matrix, $C(q, \dot{q})$ is the $6 \times 1$ vector of centripetal and Coriolis torques, and $G(q)$ is the $6 \times 1$ vector of gravity terms.
Substituting (2) into (3), we have:

$$D(q)J'^{-1}\big(\ddot{Y} - \ddot{\bar{Y}}_B - \dot{J}'\dot{q}' - J'\ddot{\Theta}_m\big) + C(q, \dot{q}) + G(q) = \tau_1 \tag{4}$$

Using the variables $Y$ and $X$, (4) is written as:

$$D(q)J'^{-1}\ddot{Y} + M\ddot{X} - D(q)J'^{-1}\dot{J}'\dot{q}' + C(q, \dot{q}) + G(q) = \tau_1 \tag{5}$$

where $M\ddot{X}$ represents the effect of the mobile platform on the dynamics of the manipulator; it combines the terms $-D(q)J'^{-1}\ddot{\bar{Y}}_B$ and $-D(q)J'^{-1}J'\ddot{\Theta}_m$.
From the previous work [13] and [14], the dynamics of a holonomic mobile platform are represented by:

$$D_2(p)\ddot{Y} + M_2(p)\ddot{X} + C_2(p, \dot{p}) = \tau_2 \tag{6}$$

where $p = \{q_1, q_2, q_3, q_4, q_5, q_6, x_m, y_m, \varphi_m\}^T$, $D_2(p)$ is the inertia matrix which reflects the dynamic effect of the arm motion on the platform, $M_2(p)$ is the inertia matrix of the mobile platform considering the presence of the manipulator, $C_2(p, \dot{p})$ is the vector of centripetal and Coriolis terms of the platform, and $\tau_2$ is the input torque of the mobile platform.
Combining (5) and (6), we have the final dynamic model of the entire mobile manipulator as:
$$\begin{bmatrix} D(q)J'^{-1} & M \\ D_2(p) & M_2(p) \end{bmatrix} \begin{bmatrix} \ddot{Y} \\ \ddot{X} \end{bmatrix} + \begin{bmatrix} -D(q)J'^{-1}\dot{J}'\dot{q}' + C(q, \dot{q}) + G(q) \\ C_2(p, \dot{p}) \end{bmatrix} = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} \tag{7}$$
2.3 Controller for the Mobile Manipulator
The dynamic model (7) is a nonlinear system. By nonlinear feedback linearization, the system can be linearized and decoupled. The nonlinear feedback law is expressed as:

$$\begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} = \begin{bmatrix} D(q)J'^{-1} & M \\ D_2(p) & M_2(p) \end{bmatrix} u + \begin{bmatrix} -D(q)J'^{-1}\dot{J}'\dot{q}' + C(q, \dot{q}) + G(q) \\ C_2(p, \dot{p}) \end{bmatrix} \tag{8}$$
Substituting (8) into (7), the linearized and decoupled system is expressed as:

$$\ddot{v} = u \tag{9}$$

where $v = \{x, y, z, O, A, T, x_m, y_m, \varphi_m\}^T$ is the $9 \times 1$ output of the mobile manipulator and $u$ is the $9 \times 1$ input of the linearized system.
Given the desired states of the mobile manipulator $v_d = \{x_d, y_d, z_d, O_d, A_d, T_d, x_m^d, y_m^d, \varphi_m^d\}^T$, a PD tracking controller is designed to drive the system to the desired states. The PD controller is expressed as:

$$u = k_D(\dot{v}_d - \dot{v}) + k_P(v_d - v) \tag{10}$$
After applying the nonlinear feedback control law (8) and the PD control law (10) to system (7), the closed-loop system is asymptotically stable and the states track the desired states provided that $k_P$ and $k_D$ are both positive. In model (7), the desired states of the mobile manipulator consist of the desired position and orientation of the end-effector and of the mobile platform in the world frame $\Sigma_W$, i.e., $Y_d$ and $X_d$. In real applications, the task of a mobile manipulator is usually given only as the desired position and orientation of the end-effector in $\Sigma_W$. Thus, the relationship between them should be derived online. Given the end-effector states, the desired states of the mobile manipulator have multiple solutions, and one best solution should be selected from among them. This can be done by considering the singularity, manipulability, safety and other factors of the entire system. First, by properly positioning the mobile platform, singular arm configurations can be avoided. Second, the coordinated motion of the mobile platform and the arm can maximize the capability for manipulation. Third, based on the intention of the human, a human-like and safe human/robot interaction to assist humans can be obtained through a flexible task distribution scheme between the manipulator and the mobile platform. Therefore, the human/robot interaction for human assistance is introduced in the following section.
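To make the control structure of (7)-(10) concrete, the following minimal sketch (an illustration under stated assumptions, not the authors' implementation) computes one control cycle. The `model` object, with methods D, M, M2, D2, C, C2, G, J_prime, Jp_dot and qdot_prime, is hypothetical and stands in for the dynamic terms of the mobile manipulator; the gains are only illustrative.

```python
import numpy as np

def control_step(model, Y, Ydot, X, Xdot, v_d, vdot_d, kP=25.0, kD=10.0):
    """One cycle of the task-space dynamic controller, following (7)-(10).

    `model` is a hypothetical object exposing the dynamic terms of the mobile
    manipulator: D(), M(), M2(), D2(), C(), C2(), G(), J_prime(), Jp_dot()
    and qdot_prime(). The gains kP, kD are illustrative (both positive).
    """
    v = np.concatenate([Y, X])            # 9x1 output: end-effector pose + platform pose
    vdot = np.concatenate([Ydot, Xdot])

    # PD law (10): acceleration-level input of the linearized system (9)
    u = kD * (vdot_d - vdot) + kP * (v_d - v)

    # Nonlinear feedback law (8): map u back to the applied torques [tau1; tau2]
    Jp_inv = np.linalg.inv(model.J_prime())                  # J'^{-1}
    H = np.block([[model.D() @ Jp_inv, model.M()],
                  [model.D2(),         model.M2()]])         # inertia-like matrix of (7)
    n = np.concatenate([
        -model.D() @ Jp_inv @ model.Jp_dot() @ model.qdot_prime()
        + model.C() + model.G(),                             # upper rows of (7)
        model.C2()])                                         # lower rows of (7)
    return H @ u + n
```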
3. Human Assistance System
3.1 Human Assistance Mode
Two human assistance modes are proposed to build the system: a Robot-Human mode and a Teleoperator-Robot-Human mode, as shown in Figure 2 and Figure 3. In the Robot-Human mode, the mobile manipulator communicates with the human directly via force interaction. The robot runs autonomously according to the intention of the human, which can be detected through the force/torque sensor mounted on the gripper of the mobile manipulator. In the Teleoperator-Robot-Human mode, the mobile manipulator communicates with both the human and the teleoperator. The robot runs autonomously by considering both the human's intention and the teleoperator's commands. This mode benefits the assistance system thanks to the high intelligence of the human teleoperator: the teleoperator can observe the environmental information and provide much better advice to assist the human in accomplishing certain complicated tasks.
3.2 Robot-Human Mode
In the Robot-Human mode, the desired movement of the end-effector is decided exclusively by the intentions of the human. The robot senses the intentions of the human through the force and torque information from the force/torque sensor mounted on the end-effector. The desired speed of the end-effector, for both position and orientation, can be obtained as:
$$\dot{Y}_d = k_F F_h \tag{11}$$
Figure 2. Robot-Human mode
Figure 3. Teleoperator-Robot-Human mode
where $F_h = \{f_x, f_y, f_z, r_x, r_y, r_z\}^T$ is the vector of the three forces and three torques from the force/torque sensor and $k_F$ is a force-to-speed coefficient matrix.
When using the mobile manipulator to complete complicated tasks, the human may need the robot to move in different ways. For example, sometimes only the motion of the arm or of the mobile platform is needed, and sometimes the motion of both performs better. To provide the human with more flexible control of the mobile manipulator, three motion functions are designed to assist the human: Arm-Moving, Base-Moving and Full-Moving. The switching among the three motions is determined by the human by applying different forces on the end-effector. The switching scheme for the motion of the manipulator in frame $\Sigma'_B$ can be described as:

$$\begin{cases} \text{Arm-Moving}, & |F_h| < F_A \\ \text{Full-Moving}, & F_A \le |F_h| < F_B \\ \text{Base-Moving}, & |F_h| \ge F_B \end{cases} \tag{12}$$

When the human force is under the threshold $F_A$, the Arm-Moving function is used: a proper manipulation motion is performed according to the human force, and proper mobile platform motion may also be performed when necessary. When the human force is beyond the threshold $F_B$, the Base-Moving function is used: only the mobile platform motion is performed and the manipulator remains still. When the human force is between the two thresholds, the Full-Moving function is used: the motions of both the manipulator and the mobile platform are performed. The switching among these three motion functions can occur at any time while the system is running, according to the requirements of the tasks.
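A minimal sketch of the force-based switching in (11)-(12) is given below. The force-to-speed coefficient is taken as a scalar here for simplicity, and the thresholds are the ones reported for the Robot-Human experiment in Section 4; the input force is assumed to be the compensated force of Section 3.4.

```python
import numpy as np

# Illustrative thresholds, as reported for the Robot-Human mode experiment
F_A, F_B = 14.0, 40.0   # newtons

def select_motion_function(F_h, k_F=0.01):
    """Force-based switching of (11)-(12), sketched with a scalar k_F.

    F_h : compensated 6-vector {fx, fy, fz, rx, ry, rz} from the wrist sensor.
    Returns the active motion function and the desired end-effector speed (11).
    """
    F_h = np.asarray(F_h, dtype=float)
    Y_d_dot = k_F * F_h                      # eq. (11): desired end-effector speed
    f = np.linalg.norm(F_h[:3])              # magnitude of the applied force
    if f < F_A:
        mode = "Arm-Moving"                  # mainly manipulator motion
    elif f < F_B:
        mode = "Full-Moving"                 # arm and platform move together
    else:
        mode = "Base-Moving"                 # platform only, manipulator stays still
    return mode, Y_d_dot
```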
When the Arm-Moving function is used, the desired speed of the manipulator for both position and orientation in frame $\Sigma'_B$ should ideally be equal to $\dot{Y}_d$. However, because singularities exist for the manipulator, directly assigning $\dot{Y}_d$ to $\dot{Y}_A$ cannot guarantee that the manipulator always stays away from the singularity points. As such, the singularity of the manipulator should be analysed and a proper motion of the manipulator should be assigned.

In [13], by a singularity analysis of the manipulator, the following singular conditions were obtained:

$$\Lambda_b = d_4 c_3 - a_3 s_3, \qquad \Lambda_l = d_4 s_{23} + a_2 c_2 + a_3 c_{23}, \qquad \Lambda_w = -s_5 \tag{13}$$

where $s_i = \sin q_i$, $c_i = \cos q_i$, $s_{23} = \sin(q_2 + q_3)$, $c_{23} = \cos(q_2 + q_3)$, and $a_2$, $a_3$, $d_4$ are link parameters of the manipulator. If any $\Lambda_*$ in (13) is equal to zero or very close to zero, a singularity occurs. To avoid the singularity, the motion of the manipulator needs to be planned online so as to avoid such singularity points. It is not necessary to keep these $\Lambda_*$ maximal at all times: if they were always maximized, the manipulator would be fixed at a certain configuration and lose its dexterous manipulability. As long as the values are greater than certain predefined minimum values, the manipulator stays away from the singularity without losing dexterity.

Define $\Lambda = \{\Lambda_b, \Lambda_l, \Lambda_w\}^T$; then, at time $t$, $\Lambda$ is a function of the manipulator's configuration, expressed as:

$$\Lambda(t) = \xi(Y_A(t)) \tag{14}$$
For the Arm-Moving function, in order to make the manipulator follow the speed $\dot{Y}_d$ to its maximum ability, the desired speed of the manipulator $\dot{Y}_A^d$ is chosen as the speed closest to $\dot{Y}_d$ with which the manipulator will not cross, or come close to, the singularity points. $\dot{Y}_A^d$ can therefore be described as a function of $\dot{Y}_d$ and calculated as:

$$\dot{Y}_A^d(t) = \Phi(\dot{Y}_d(t)) = \arg\min_{\dot{Y}_A} \big\{ \|\dot{Y}_A - \dot{Y}_d\| : \Lambda(Y_A(t) + \varepsilon \dot{Y}_A) \ge \Upsilon_{\min},\; 0 < \varepsilon \le \Delta t \big\} \tag{15}$$

where the minimization is performed over a neighbourhood of $\dot{Y}_d$, $\Upsilon_{\min}$ is the predefined minimum-value threshold for the three singularity conditions, and $\Delta t$ is the sampling time (control interval).
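The selection in (13)-(15) can be sketched as a sampled search, shown below. The helper `ik_step`, which returns the joint configuration reached after applying a candidate task-space speed for one control interval, is an assumed stand-in for the robot's inverse kinematics, and random neighbourhood sampling is only one simple way to approximate the arg min.

```python
import numpy as np

def singularity_measures(q, a2, a3, d4):
    """Singularity conditions of (13) for the PUMA 560 arm."""
    _, q2, q3, _, q5, _ = q
    lam_b = d4 * np.cos(q3) - a3 * np.sin(q3)
    lam_l = d4 * np.sin(q2 + q3) + a2 * np.cos(q2) + a3 * np.cos(q2 + q3)
    lam_w = -np.sin(q5)
    return np.array([lam_b, lam_l, lam_w])

def arm_speed_near(Y_d_dot, q, ik_step, dt, lam_min, a2, a3, d4, n_samples=200):
    """Sampled approximation of (14)-(15): among candidate speeds in a neighbourhood
    of Y_d_dot, keep the one closest to Y_d_dot whose short-horizon motion leaves
    all singularity measures above lam_min. `ik_step(q, speed, dt)` is an assumed
    helper returning the joint configuration reached after dt."""
    best, best_dist = np.zeros(6), np.inf   # fall back to stopping if nothing is feasible
    for _ in range(n_samples):
        cand = Y_d_dot + 0.05 * np.random.randn(6)           # neighbourhood sample
        q_next = ik_step(q, cand, dt)
        if np.all(singularity_measures(q_next, a2, a3, d4) >= lam_min):
            dist = np.linalg.norm(cand - Y_d_dot)
            if dist < best_dist:
                best, best_dist = cand, dist
    return best
```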
With the Full-Moving function, the manipulator and the mobile platform need to coordinate so as to achieve the motion objective of the end-effector. In order for the mobile manipulator to maintain its maximal ability in assisting the human, it should keep the manipulability as high as possible. Thus, the motion of the manipulator considers not only the avoidance of the singularity points but also the maximization of the manipulability. The manipulability can frequently be expressed as a function of the manipulator configuration, $H = \eta(Y_A)$, based on different criteria, e.g., the maximal moving range, the maximal force, etc. Based on this manipulability evaluation function, the desired manipulation motion can be obtained by maximizing this function while avoiding the singularity points. Thus $\dot{Y}_A^d$ can be calculated as:

$$\dot{Y}_A^d(t) = \Phi\Big(\arg\max_{\dot{Y}_A,\; \Lambda \in [\Upsilon_{\min}, \Upsilon_{\max}]} \big\{ \eta\big(Y_A(t) + \dot{Y}_A \Delta t\big) \big\}\Big) \tag{16}$$
After the motion of the manipulator in frame $\Sigma'_B$ is determined, and given the desired speed of the end-effector in the world frame, $\dot{Y}_d$, the desired motion of the mobile platform can be obtained by considering the difference between the desired speed $\dot{Y}_d$ and the computed manipulator motion $\dot{Y}_A^d$. Thus, the desired motion of the mobile platform can be described as:
$$\dot{X}_d(t) = \chi\big(\dot{Y}_d - \dot{Y}_A^d\big) \tag{17}$$

where the function $\chi(\cdot)$ translates the difference between $\dot{Y}_d$ and $\dot{Y}_A^d$ into the motion of the mobile platform so that $\dot{Y}_d$ is satisfied as closely as possible.
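A sketch of the Full-Moving coordination in (16)-(17) is shown below: a candidate arm speed is chosen that keeps the singularity measures within bounds while maximizing a user-supplied manipulability function, and the leftover task-space speed is handed to the mobile platform. The helpers `measures`, `manipulability` and `ik_step` are assumptions (e.g., `measures` could be the `singularity_measures` function from the previous sketch), and the naive planar mapping used for $\chi(\cdot)$ is a simplifying choice, not the paper's.

```python
import numpy as np

def full_moving_split(Y_d_dot, q, measures, manipulability, ik_step, dt,
                      lam_min, lam_max, n_samples=200):
    """Sketch of (16)-(17): choose an arm speed that keeps the singularity measures
    inside [lam_min, lam_max] while maximizing a manipulability function eta, then
    hand the leftover task-space speed to the mobile platform.

    `measures(q)`, `manipulability(q)` and `ik_step(q, speed, dt)` are assumed helpers.
    """
    best_arm, best_eta = np.zeros(6), -np.inf
    for _ in range(n_samples):
        cand = Y_d_dot + 0.05 * np.random.randn(6)    # candidate arm speed
        q_next = ik_step(q, cand, dt)
        lam = measures(q_next)
        if np.all(lam >= lam_min) and np.all(lam <= lam_max):
            eta = manipulability(q_next)
            if eta > best_eta:
                best_arm, best_eta = cand, eta
    # eq. (17): chi(.) -- here naively the planar part (x, y, yaw) of the leftover speed
    leftover = Y_d_dot - best_arm
    X_d_dot = np.array([leftover[0], leftover[1], leftover[5]])
    return best_arm, X_d_dot
```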
It is important to notice that, because the end-effector has 6 DOF and the mobile platform only has 3 DOF, the mobile platform will not always be able to make the end-effector achieve $\dot{Y}_d$ in all six degrees of freedom. However, with this method the singularity points are all avoided and the most important thing for the human - safety - is guaranteed. Moreover, the maximal manipulability can always be kept when the Full-Moving function is used. To achieve the complete 6-DOF motion of the end-effector, the human can switch among the three motion functions.

After the desired speeds of both the end-effector and the mobile platform in the world frame are determined, the desired positions can be calculated by integrating the desired speeds. Taking them as the input of the dynamic model and the dynamic controller, the desired positions and speeds can be achieved very quickly thanks to the dynamic control of the system.

3.3 Teleoperator-Robot-Human Mode

In the Teleoperator-Robot-Human mode, the teleoperator and the mobile manipulator work together to assist the human. The teleoperator observes the environmental information and sends motion commands to move the end-effector in the task space based on the task requirements. The human and the mobile manipulator communicate with each other through the force/torque sensor mounted on the end-effector. Considering the motion ability of the human, the human may not move as the teleoperator advises in some cases, e.g., the human cannot track the speed sent by the teleoperator, or the human is blocked by an unexpected obstacle. Since the mobile manipulator cannot always move exactly according to what the teleoperator sends, its motion should consider not only the motion commands from the teleoperator but also the motion ability of the human. To achieve this objective, a force-based switching scheme is proposed. The desired motion of the mobile manipulator is designed as:

$$\dot{Y}_d = \dot{Y}_T^d, \quad \dot{X}_d = \chi\Big(\dot{Y}_T^d - \Phi\big(\arg\max_{\dot{Y}_A,\; \Lambda \in [\Upsilon_{\min}, \Upsilon_{\max}]} \{\eta(Y_A(t) + \dot{Y}_A \Delta t)\}\big)\Big), \quad \text{if } |F_h| < F_M \tag{18}$$

$$\dot{Y}_d = \dot{Y}_T^d, \quad \dot{X}_d = \chi\big(\dot{Y}_T^d - \Phi(k'_F(F_h - F_M))\big), \quad \text{if } F_M \le |F_h| < F_{OK} \tag{19}$$

$$\dot{Y}_d = 0, \quad \dot{X}_d = 0, \quad \text{if } |F_h| \ge F_{OK} \tag{20}$$

where $\dot{Y}_T^d$ denotes the desired velocity sent by the teleoperator, $k'_F$ denotes the force-to-speed coefficient matrix, $F_M$ and $F_{OK}$ are two force thresholds for switching, and the functions $\chi(\cdot)$ and $\Phi(\cdot)$ are the same as in (15) and (17).
In this mode, when the human force is below $F_M$, this indicates that the human can follow the speed sent by the teleoperator. In this case, the manipulator autonomously adjusts its motion to achieve maximal manipulability and avoid the singularity points. When the human force is between $F_M$ and $F_{OK}$, this indicates that the human cannot fully follow the desired speed sent by the teleoperator, but the speed difference is still acceptable for the human. In this case, the manipulator acts like a real human arm and adjusts its own motion autonomously so as to help the human follow the teleoperator's speed. When the human force is beyond $F_{OK}$, this indicates that the human cannot follow the teleoperator's speed at all. In this case, the human may be blocked by some obstacle, or the teleoperator may be sending commands at too high a speed. It would be very dangerous to the human if the robot were to continue to move. Accordingly, the whole system stops and does not restart until the obstacle is removed or the teleoperation speed is slowed down.
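The switching logic of (18)-(20) can be sketched as follows, using the thresholds of the experiment in Section 4. The callables `arm_speed_autonomous` and `chi`, as well as the scalar `k_F_prime`, are assumptions standing in for (16) and (17).

```python
import numpy as np

# Illustrative thresholds, as reported for the Teleoperator-Robot-Human experiment
F_M, F_OK = 20.0, 60.0   # newtons

def teleop_switching(Y_T_dot, F_h, arm_speed_autonomous, chi, k_F_prime=0.01):
    """Force-based switching of (18)-(20), sketched.

    Y_T_dot              : desired end-effector speed sent by the teleoperator (6,)
    F_h                  : compensated human force/torque vector (6,)
    arm_speed_autonomous : assumed callable returning the manipulability-optimal
                           arm speed of (16)
    chi                  : assumed callable mapping a 6-D speed difference to a
                           platform motion, as in (17)
    k_F_prime            : force-to-speed coefficient, scalar here for simplicity
    """
    F_h = np.asarray(F_h, dtype=float)
    f = np.linalg.norm(F_h[:3])
    if f < F_M:                       # (18): the human keeps up with the teleoperator
        return Y_T_dot, chi(Y_T_dot - arm_speed_autonomous())
    if f < F_OK:                      # (19): the human lags; the platform compensates
        return Y_T_dot, chi(Y_T_dot - k_F_prime * (F_h - F_M))
    return np.zeros(6), np.zeros(3)   # (20): blocked or too fast -> stop everything
```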
3.4 Human Force Compensation
The human force is very important for the designed human assistance system; it is crucial to the stability, safety and efficiency of the system. For many reasons, the human force obtained from the force/torque sensor may not accurately represent the force the human actually applies. As such, it is important to compensate the measurement and obtain an accurate estimate of the human force.
One important improvement is to cancel the gravity of the end-effector. The mass of the end-effector cannot usually be ignored, and so the gravity of the end-effector will definitely be sensed by the force/torque sensor, which is mounted on the end-effector. This noise can be removed based on the position and orientation of the end-effector. Defining mE as the mass of the end-effector and g as the gravity acceleration, the force compensation for removing gravity can be described as:
$$F = F_{raw} - \Pi(Y_B)\{m_E g, m_E g, m_E g, m_E g, m_E g, m_E g\}^T \tag{21}$$
where $F_{raw}$ is the raw data from the force/torque sensor and $\Pi(Y_B)$ is the transformation matrix that maps the gravity of the end-effector to the three force directions and the three torque directions of the sensor, based on the current manipulator configuration.
In addition, another important improvement is to integrate the acceleration information so as to compensate the force and obtain a more accurate human force. Acceleration can be considered as a type of force, especially when the robot changes its speed suddenly, e.g., when the robot starts moving from rest. Accordingly, the acceleration information can be used as an early indicator of a new movement. Thus, the acceleration can be treated as an additional force that reinforces the human's intention. The final force compensation can be described as:
$$F = F_{raw} - \Pi(Y_B)\{m_E g, m_E g, m_E g, m_E g, m_E g, m_E g\}^T + K_A \ddot{Y}_A \tag{22}$$
where $\ddot{Y}_A$ is the acceleration of the end-effector sensed by the acceleration sensor mounted on the end-effector and $K_A$ is an acceleration-to-force coefficient matrix. The advantage of using the acceleration is that it provides valuable information at the very beginning of a new movement command. This approach increases the accuracy and responsiveness of the robot to the human's motion.
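A simplified sketch of the compensation in (21)-(22) is shown below: the end-effector weight is rotated into the sensor frame and subtracted, and the measured acceleration is added back through a gain. Unlike the paper's $\Pi(Y_B)$ mapping, this sketch neglects the gravity-induced torque components, and the mass and gain values are purely illustrative.

```python
import numpy as np

def compensate_force(F_raw, R_world_ee, Y_A_ddot, m_E=0.5, K_A=None):
    """Simplified sketch of the compensation in (21)-(22).

    F_raw      : raw 6-vector from the wrist force/torque sensor
    R_world_ee : 3x3 rotation of the end-effector w.r.t. the world frame
    Y_A_ddot   : 6-vector end-effector acceleration from the on-board accelerometer
    m_E, K_A   : assumed end-effector mass [kg] and acceleration-to-force gain
    """
    if K_A is None:
        K_A = 0.1 * np.eye(6)                        # illustrative gain matrix
    g_world = np.array([0.0, 0.0, -9.81 * m_E])      # end-effector weight in the world frame
    g_sensor = R_world_ee.T @ g_world                # weight expressed in the sensor frame
    gravity_term = np.concatenate([g_sensor, np.zeros(3)])   # induced torques neglected here
    return (np.asarray(F_raw, dtype=float) - gravity_term
            + K_A @ np.asarray(Y_A_ddot, dtype=float))
```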
4. Experimental results
The designed human assistance systems were implemented on the mobile manipulator shown in Figure 1. Both the Robot-Human mode and the Teleoperator-Robot-Human mode were tested. Using the Robot-Human mode, the mobile manipulator helped a woman to transport an object, as shown in Figure 4 (a). Using the Teleoperator-Robot-Human mode, the mobile manipulator guided a blind person to a chair and helped him to sit down, as shown in Figure 4 (b).
In the Robot-Human mode experiment, the robot helped a woman to pick up an object; they then worked together to transport the object to a desk, and the robot finally helped her to place the object on the desk. The force thresholds were chosen as $F_A = 14$ N and $F_B = 40$ N. The control parameters were determined through a series of tests to give the robot the best performance. The experiment showed that the robot could help the woman to accomplish this task easily and smoothly. The singularity points of the manipulator were efficiently avoided and the manipulability was guaranteed. Figures 5-7 show the experimental results of this mode. Figure 5 and Figure 6 show the compensated force and the manipulator movement speed in three directions, and Figure 7 shows the moving direction and velocity of the mobile platform. From these results, we can see that, based on the accurately compensated human force, the movements of the mobile platform and the manipulator were well coordinated. The motions of both the mobile platform and the manipulator were smooth, and the switching among the three moving functions was smooth, effective and easily operated by the human. Through these movements, the robot helped the woman to finish the transportation work smoothly, and the assistance system proved very convenient and helpful for her.
Figure 4. (a) Robot-Human mode (b) Teleoperator-Robot-Human mode
Figure 5. Compensated force measurement in Robot-Human mode
Figure 6. XYZ speed of the manipulator in Robot-Human mode
In the Teleoperator-Robot-Human mode experiment, a blind person was first guided to the mobile manipulator and then held the end-effector. The teleoperator could observe the environment through the environmental sensors via the Internet, including the on-board sensors (the force/torque/acceleration sensor, cameras and a laser scanner) and two off-board top-view cameras. Besides this, the teleoperator could also talk to the blind person via the Internet. The task was to guide the blind person from the original position to a chair and then help him to sit down on the chair. An obstacle was placed between the original position and the chair, so obstacle avoidance for both the robot and the human was required. Moreover, an unexpected obstacle was suddenly placed to block the blind person and was then removed later on. The force thresholds were chosen as $F_M = 20$ N and $F_{OK} = 60$ N. The control parameters were also determined through a series of tests to give the robot the best performance. The experiment showed that the robot could successfully guide the person to the chair and help him to sit down. The obstacles were successfully avoided by both the robot and the human, and the safety of the entire assistance process was always guaranteed.
The results of the Teleoperator-Robot-Human mode experiment are shown in Figures 8-10, which have the same meaning as the figures of the first experiment. We can see that, based on the accurately compensated human force, the mobile platform and the manipulator were smoothly coordinated. In addition, as in the first experiment, the manipulator could perform like a human arm by adjusting its position adaptively according to the blind person's intentions. As for obstacle avoidance, at about 80 s the blind person was suddenly blocked by an unexpected obstacle and the mobile manipulator stopped; at about 100 s the obstacle was removed and the robot immediately and autonomously started to move again. The safety of the entire system is effectively guaranteed using the proposed method.
Figure 7. Moving direction and speed of the mobile platform in Robot-Human mode
Figure 8. Compensated force measurement in the Teleoperator-Robot-Human mode
Figure 9. XYZ speed of the manipulator in the Teleoperator-Robot-Human mode
Figure 10. Moving direction and speed of the mobile platform in the Teleoperator-Robot-Human mode
5. Conclusions
Two kinds of human assistance systems were designed to help humans in different ways using a mobile manipulator. The dynamic model and the control of the mobile manipulator system were first introduced. Two human assistance modes - a Robot-Human mode and a Teleoperator-Robot-Human mode - were then designed and implemented. To improve the assistance, singularity, manipulability, safety and human force compensation were all considered in the system design. Experiments on a real mobile manipulator demonstrated the effectiveness of the designed human assistance systems in accomplishing different complicated tasks. As for future work, some advanced sensors and control devices could be added to make the assistance interface between the human and the robot more efficient and convenient. Besides this, applying this system to other human assistance applications - like homecare and factory transportation - will be an important direction for future work.
6. Acknowledgements
This work was supported in part by the China National Science Foundation under grant 61175082 and the Jiangsu Science & Technology Pillar Program under grant BE2011192.
7. References
[1] K. Kosuge, M. Sato and N. Kazamura, "Mobile Robot Helper", Proceedings of the IEEE Conference on Robotics and Automation, vol. 1, 2000, pp. 583-588.
[2] K. Kosuge and Y. Hirata, "Human-Robot Interaction", IEEE International Conference on Robotics and Biomimetics, 2004, pp. 8-11.
[3] F. G. Pereira, F. B. de Sa, B. D. Ferreira and R. F. Vassallo, "Object Transportation Task by a Human and a Mobile Robot", 2010 IEEE International Conference on Industrial Technology (ICIT), 2010, pp. 1445 - 1450.
[4] Z. Wang, K. Fukaya, Y. Hirata and K. Kosuge, "Control passive mobile robots for object transportation - braking torque analysis and motion control," Proceedings of the IEEE Conference on Robotics and Automation, 2007, pp. 2874-2879.
[5] T. Takubo, H. Arai, Y. Hayashibara and K. Tanie, "Human-robot cooperative manipulation using a virtual nonholonomic constraint," The International Journal of Robotics Research, vol. 21, no. 5-6, pp. 541-553, 2002.
[6] N. Uchiyama, A. Mori, Y. Kajita, S. Sano and S. Takagi, "Object transportation control for a human-operated robotic manipulator," IEEE International Conference on Emerging Technologies and Factory Automation, 2008, pp. 164-169.
[7] E. Yamanaka, T. Murakami and K. Ohnishi, "Cooperative Motion Control by Human and Mobile Manipulator", Proceedings of the 7th International Workshop on Advanced Motion Control, 2002, pp. 494-499.
[8] K. Nozaki and T. Murakami, "A Motion Control of Two-wheels Driven Mobile Manipulator for Human-Robot Cooperative Transportation", The IEEE 35th Annual Conference on Industrial Electronics, 2009, pp. 1574-1579.
[9] Y. Umeda, D. Nakamura, T. Murakami and K. Ohnishi, "Hybrid Position/Force Control of a Mobile Manipulator based on Cooperative Task Sharing", Proceedings of the IEEE International Symposium on Industrial Electronics, vol. 1, pp. 139-144, 1999.
[10] K. Kosuge, H. Kakuya and Y. Hirata, "Control Algorithm of Dual Arms Mobile Robot for Cooperative Works with Human", Proceedings of the 2001 IEEE International Conference on Systems, Man, and Cybernetics, vol. 5, 2001, pp. 3223-3228.
[11] Y. Jia, H. Wang, P. Sturmer and N. Xi, "Human/Robot Interaction for Human Support System by Using A Mobile Manipulator", IEEE International Conference on Robotics and Biomimetics, 2010, pp. 190-195.
[12] Y. Liu, Y. Jia and N. Xi, "Model and Control for Four-Powered-Caster Vehicle: A Probability-based Approach", Transactions of the Institute of Measurement and Control, pp. 1-8, May, 2012.
[13] J. Tan and N. Xi, "Unified Model Approach for Planning and Control of Mobile Manipulators", Proceedings of the 2001 IEEE International Conference on Robotics and Automation, 2001, pp. 3145-3152.
[14] J. K. Lee and H. S. Cho, "Mobile Manipulator Motion Planning for Multiple Tasks Using Global Optimization Approach," Journal of Intelligent and Robotic Systems, vol. 18, pp. 169-190, 1997.