

Procedia CIRP 23 (2014) 65 - 70

Conference on Assembly Technologies and Systems

Integrated sensors system for human safety during cooperating with industrial robots for handing-over and assembling tasks

Mohamad Bdiwi*

Department of Joining and Assembling, Fraunhofer Institute for Machine Tools and Forming Technology IWU, Chemnitz, Germany
* Corresponding author. Tel.: +49-371-5397-1658; Fax: -61658. E-mail address: Mohamad.bdiwi@iwu.fraunhofer.de

Abstract

Human safety is the main concern which prevents performing some tasks requiring physical interaction between human and robot. Therefore, the safety concept was previously based on eliminating contact between humans and robots. This paper proposes a robot system which integrates different types of sensors to ensure human safety during the physical human-robot interaction. The implemented sensors are vision, force and sensitive skin. Using the vision system, the robot is able to detect and recognize the human face, the load-free human hand and any object carried by the human hand (active hand). Furthermore, it helps the robot to define in which directions the force should be applied and which directions are dangerous for human safety. The force sensor helps the robot to react to the motion of the human hand during the handing-over or assembling task. The sensitive skin prevents any collision between the human and the robot arm. The proposed system is supported with a voice system for informing the human about the actual status of the system.

© 2014 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

Selection and peer-review under responsibility of the International Scientific Committee of 5th CATS 2014 in the person of the Conference Chair Prof. Dr. Matthias Putz matthias.putz@iwu.fraunhofer.de

"Keywords: Human-robot interaction; Human-safety, visual servoing; force control; handing-over and assembly tasks ;"

1. Introduction

Physical interaction between human and robot usually involves two main parts: the human hand and the robot hand. However, in most service robot applications this interaction is not performed directly between the human hand and the robot hand; instead, a target object serves as a connection bridge between them. The target object could be an object transferred between the human hand and the robot hand, or an object which needs to be assembled, operated, etc.

For a better understanding of the behavior of human and robot before the physical interaction phase starts, the handing-over task will be taken as an illustrative example. In every handing-over task there are two parties, the giver and the receiver, and an object which will be transferred. When handing over an object from the human hand to the robot, the human is the giver and the robot is the receiver; otherwise, the robot is the giver and the human is the receiver. This work divides giver/receiver (the two parties, human and robot) into three types depending on their behavior during the handing-over:

• Positive giver (receiver): In this case, the giver will play a positive role during handing-over of the objects. In other words, the giver will move toward the receiver and track it to achieve smooth handing-over task.

• Neutral giver (receiver): In this case, this party will try to fix its hand in a stable pose, and the receiver should move toward it to achieve the handing-over task.

• Negative giver (receiver): Here, this party plays a negative role, e.g. the party is elderly or blind, or he/she is doing something else at the same time. In this case, the receiver should expect some random motions from the giver during the task and react to them accordingly.

Fig. 1 Human-robot interaction

doi:10.1016/j.procir.2014.10.099

Hence, if both parties, the receiver and the giver, behave as negative or neutral parties, the physical interaction will not be accomplished. At least one of the two parties should perform the task positively by tracking the other party, defining the contact point, searching for contact and also tracking during the interaction phase in order to achieve a smooth handing-over task. All the possible cases are represented in Table I.

In general, the commonly used approach in previous works is case 8, e.g. [1], [2], [3] and [4], where the task is performed exclusively by the human. This means that the robot brings its hand into a specified position and orientation and then waits until the human places the object between the fingers of the gripper. When the robot detects that an object has been placed in its hand, it attempts to grasp the object. The same approach has been applied previously for handing over an object from robot to human and for performing assembly tasks. In fact, this scenario is not suitable for assisting blind, disabled or elderly people, or even for supporting workers concentrating on their work. The main reason which led previous works to choose this approach is human safety: the robot is not allowed to move (it stays stable and behaves as a neutral party) while the human is moving toward it.

In this work, we assume that the human is the weakest party of the human-robot team (blind, disabled or concentrating on his/her own work), i.e. the transfer or the physical interaction task is established and controlled exclusively by the robot. Therefore, we propose an integrated sensor system which ensures the safety of the human. As a consequence, if the robot is able to perform the task when the human behaves as a negative or neutral party, it will certainly be able to perform the task efficiently when the human is a positive party.

This paper will not only present the vision and force information as control signals, but will also illustrate how the robot system can benefit from these signals in order to ensure the safety of the human, how to integrate this information together with the skin sensor feedback, and how the robot can use all the available information provided by the vision sensor, not only for the target object but also for the whole scene. Using the vision system, the robot will be able to detect the human face and to recognize the load-free human hand or any object carried by the human hand (active hand). Furthermore, the vision system will help the robot to define the optimal combination of vision, force and skin sensing (in which directions the force should be applied and which directions are dangerous) in order to guarantee human safety, to ensure the fulfillment of the grasping task and to react to the motion of the human hand during the interaction phase. Hence, the robot will play the main role as the positive party performing the task. This scenario could be useful in various applications, e.g. robot assistants for blind, disabled or elderly people helping them in fetching, carrying or transporting objects. In other applications the robots serve as members of a human-robot team providing physical support to humans, e.g. in space exploration, construction, assembly etc.

The proposed system is presented briefly in the next section. In section 3, the procedures for safety issues are illustrated. Section 4 briefly presents the control algorithms. The last section contains the conclusion, future work and the benefits of improving the physical interaction between human and robot.

2. Proposed System

This section describes the proposed system, demonstrating the feasibility of integrating vision, force and skin sensor feedback in order to ensure human safety during the physical interaction between human and robot.

Many algorithms have been proposed to avoid collisions for the whole arm of an industrial robot using skin sensors. However, many of them require a large number of sensors. In [5], for example, a prototype of a sensing skin for a robot arm is presented: rings of sensors are placed around the robot link, each ring consisting of several infrared range sensors which can detect objects at a distance between 4 and 30 cm. In [6] a sensitive skin consisting of hundreds of active infra-red proximity sensors covering the whole arm body has been developed. Another work [7] has presented a cost-effective invisible sensitive skin that can cover a large area without utilizing a large number of sensors and which is built inside the robot arm. Only 5 contactless capacitive sensors and specially designed antennas are used to cover the whole arm of a 6-DOF industrial robot. The sensors, antennas and wires are all built inside the covers of the robot arm, and the sensing distance of each sensor is 10 cm. In fact, this approach is well suited to be combined with the proposed system, and its information could easily be integrated with the vision and force feedback.

The overall setup of the proposed system consists of an industrial robot provided with vision and force sensors. In our experiment, the implemented system is a Staubli RX90 robot with a JR3 multi-axis force/torque sensor together with an eye-in-hand camera system. The end-effector is mounted on a collision protection device. The end-effector is a two-finger gripper which holds the object; it has a digital input (0 = open, 1 = closed). The JR3 (120M50A) is a six-component force/torque sensor and its effective measurement range is ±100 N for forces and ±10 N.m for torques.

Table I Behaviors of human/robot

Case   Human      Robot      Physical human-robot interaction
1      Negative   Negative   Unsuccessfully performed
2      Negative   Neutral    Unsuccessfully performed
3      Negative   Positive   Could be successfully performed, if the robot is faster than the human
4      Neutral    Negative   Unsuccessfully performed
5      Neutral    Neutral    Unsuccessfully performed
6**    Neutral    Positive   Successfully performed
7      Positive   Negative   Could be successfully performed, if the human is faster than the robot
8*     Positive   Neutral    Successfully performed
9**    Positive   Positive   Successfully performed (the optimal case)

* The common approach in previous works.

** The proposed approach.

The vision camera is an RGBD (Kinect) camera which delivers depth and color images with VGA resolution (640x480 pixels).

Fig. 2 Proposed system

Fig. 2 demonstrates the structure of the robotic system and the environment components, and it presents the safety regions covered by the different sensors. As shown in Fig. 2, the implemented sensors provide the robot system with complementary information. The sensitive skin prevents any collision between the human and the whole robot arm. In most applications, the force sensor is mounted on the manipulator's wrist before the end-effector and provides information about six components: three Cartesian forces and three Cartesian torques. Hence, the force sensor helps the robot to react to the motion of the human hand during the physical interaction phase and prevents any collision between the human hand and the robot gripper. The vision system detects and segments the object from the human hand without any information about its model [8]. After that, the system tracks the object visually and then grasps it by combining the vision and force feedback. Before grasping, the system calculates the graspability (whether the robot is able to grasp the object) and the grasp point (where the robot hand will contact the object) [9]. The output of the vision sensor is the position of the face and of the object as well as the status of the task, as follows:

(face_x, face_y, face_z, obj_x, obj_y, obj_z, time_dif, vision_status)    (1)

where (face_x, face_y, face_z) is the middle point of the face rectangle (face_reg), (obj_x, obj_y, obj_z) is the tracking point of the object, which will later be the contact point between the object and the gripper, and (time_dif) is the time difference between two frames. (vision_status) represents the current status of the vision system, which can return the following values (a sketch of how these outputs might be consumed is given after the list):

• vision_status = 1; no face is detected.

• vision_status = 2; face is detected.

• vision_status = 3; face and human hand (load-free hand) are detected.

• vision_status = 4; face, human hand and object are detected (tracking phase could start).

• vision_status = 5; face, human hand and object are detected; however, the robot is not able to grasp the object because the graspability conditions are not satisfied, e.g. when the height of the robot hand is greater than the distance between the contact point on the object and the human hand (safety factor for the human fingers during the grasping phase).
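As a minimal sketch only (not taken from the paper's implementation), the following Python snippet shows one way a controller might consume the vision output of Eq. (1) and branch on vision_status; all names (VisionOutput, handle_vision, the returned action strings) are hypothetical placeholders.

```python
# Sketch: branching on the vision output tuple of Eq. (1). Assumed names, not the authors' code.
from dataclasses import dataclass

@dataclass
class VisionOutput:
    face: tuple          # (face_x, face_y, face_z), middle point of the face rectangle
    obj: tuple           # (obj_x, obj_y, obj_z), tracking/contact point of the object
    time_dif: float      # time difference between two frames, seconds
    vision_status: int   # 1..5 as defined in the list above

def handle_vision(v: VisionOutput) -> str:
    """Map the reported vision_status to the next robot action."""
    if v.vision_status == 1:
        return "wait"            # no face detected: do not start the task
    if v.vision_status == 2:
        return "wait"            # face only: user is not presenting an object yet
    if v.vision_status == 3:
        return "announce_hand"   # load-free hand: voice subsystem reports "hand only"
    if v.vision_status == 4:
        return "start_tracking"  # face, hand and object detected: tracking may begin
    if v.vision_status == 5:
        return "abort"           # graspability conditions violated: protect the fingers
    return "abort"               # unknown status: fail safe

# Example: a frame with everything detected and graspable
print(handle_vision(VisionOutput((0.1, 0.0, 1.2), (0.05, -0.1, 0.8), 0.033, 4)))
```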

3. Safety Procedures

Haddadin in [10] and [11] has evaluated the injuries which could occur during human-robot interaction in relation to robot speed, robot mass and constraints of the environment. Many papers have proposed different solutions for improving the safety factor during the interaction between human and robot: [12] and [13], for example, propose to improve the mechanical design by reducing the mass of the robot, whereas another work [14] proposes trajectories which consider constraints related to the human body. However, the present work does not focus on robot design or trajectory planning to improve the safety factor; instead, it focuses on the benefits of using an integrated sensor system in order to improve the safety factor during the physical interaction between the human and the robot. In our opinion, even if the system used a lightweight robot and predefined trajectories, it would be indispensable to integrate sensor information to guarantee the safety of the human, especially when unexpected problems or errors happen during the physical interaction. This section illustrates how the proposed procedures ensure safety with the help of the vision and force information, and it presents the proposed voice subsystem, which helps to increase the safety of the user, especially if the user is blind or must concentrate on his/her own work.

3.1. Vision procedures for safety

This section proposes two vision safety factors. The first one (SF_body) relates to the safety of the whole human body, whereas the second (SF_hand) relates only to the safety of the fingers during the physical interaction between human and robot. The values of both factors are zero as long as the safety requirements are fulfilled. Otherwise, if any error or dangerous position of the human is recognized, the safety variables are immediately activated and the task is canceled.

In the proposed system, face detection algorithms are implemented so that the human face can be detected. The robot system can detect the human face only when the human looks directly at the camera or with a deviation of up to ±50°. When the robot system can detect the human face, this means that the robot is also within the field of view of the human and the human can see the motion of the robot. Face detection can therefore be considered a positive sign which helps the robot to recognize whether the human is able to follow its movements and is prepared to react. In the case of a blind user, the voice subsystem of the robot helps the user to recognize the robot's direction, so that the user can move his/her head toward the robot.

Fig. 3 Human body segmentation

Fig. 3 presents the results of the segmentation of the human body and of the target object carried by the human hand. This segmentation step precedes distinguishing the object from the human hand [8]. Human body segmentation helps the robot system to build a depth map of the body. With the help of the human body depth map, the robot system can detect whether any part of the human body except the active hand (the hand which carries the target object) is located near the area of interest. In our procedures the area of interest, which contains the target object and the active hand, should be the nearest part of the human body to the robot, and all other parts of the human body should be located at least 80 mm away from the area of interest, as shown in Fig. 4. Hence, SF_body is activated and the task is canceled in the following cases: if the robot system is no longer able to detect the human face during the task, if any part of the human body is closer to the robot than the active hand, or if any part of the human body such as the head or chest is located at a distance of less than 80 mm from the area of interest (in the depth map). If more than one person is inside the field of view of the robot system, it handles and interacts only with the closest person and all other users are ignored. If the closest person has no object in his/her hand, the operation is not performed.
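The conditions above can be summarized in a simple rule. The following is an illustrative sketch only, under an assumed data layout (a per-part distance map in millimetres); it is not the paper's implementation.

```python
# Sketch: evaluating the body safety factor SF_body from per-part distances (assumed layout).
def sf_body(face_detected: bool,
            body_part_distances: dict,   # e.g. {"head": 950, "chest": 900, "active_hand": 600}
            area_of_interest_dist: float,
            min_clearance: float = 80.0) -> int:
    """Return 0 if the body safety requirements hold, 1 (activated) otherwise."""
    if not face_detected:
        return 1                                      # face lost during the task
    active = body_part_distances.get("active_hand")
    if active is None:
        return 1                                      # no active hand found
    for part, dist in body_part_distances.items():
        if part == "active_hand":
            continue
        if dist < active:                             # a body part is closer than the active hand
            return 1
        if dist - area_of_interest_dist < min_clearance:
            return 1                                  # e.g. head or chest too close to the area of interest
    return 0

# Example: active hand nearest, every other part at least 80 mm behind the area of interest
print(sf_body(True, {"head": 950, "chest": 900, "active_hand": 600}, 600))  # -> 0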

Fig. 4 Area of interest

The second safety factor (SF_hand) is needed during the first contact phase between the human hand and the robot hand, when the robot moves toward the human hand to grasp the object. As shown in Fig. 5, the robot system calculates the graspability by defining the boundary line between the human hand and the object. When the robot system determines the graspability, it compares the characteristics of the robot hand with the size of the object (width and length).

Fig. 5 Segmentation of human hand and object

During this phase the robot system adds a safety zone of 20 mm to protect the fingers of the user. If the user carries the object in such a way that the robot is not able to grasp it, the safety factor of the hand is activated and the mission is canceled.
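A rough sketch of such a graspability test, under assumed geometry (gripper opening, finger height, object width and the distance from the hand boundary to the contact point, all in millimetres), might look as follows; the actual criterion in [9] may differ.

```python
# Sketch: graspability test with the 20 mm finger-safety zone (assumed geometry, not the paper's algorithm).
def sf_hand(gripper_opening: float, gripper_finger_height: float,
            object_width: float, boundary_to_contact: float,
            safety_zone: float = 20.0) -> int:
    """Return 0 if the object can be grasped without endangering the user's fingers."""
    if object_width + safety_zone > gripper_opening:
        return 1   # object (plus margin) too wide for the gripper
    if gripper_finger_height + safety_zone > boundary_to_contact:
        return 1   # fingers would reach past the contact point toward the human hand
    return 0

# Example: 40 mm object, 100 mm gripper opening, contact point 90 mm above the hand boundary
print(sf_hand(gripper_opening=100, gripper_finger_height=60,
              object_width=40, boundary_to_contact=90))  # -> 0
```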

3.2. Force procedures for safety

In [15] the importance of monitoring and controlling the force information was presented, especially in cooperative systems and in motions guided by a human. The procedure proposed in this work includes one force safety factor (SF_force). The value of this factor is zero as long as the force safety requirements are fulfilled. However, if any errors or unexpected force values are recognized, the safety factor is immediately activated and the mission is canceled. Monitoring the force values is very important, especially when the robot is moving toward the human (z direction). In this phase the speed of the robot can be high, which means that a hard impact force could arise if any unexpected obstacle appears or if the human moves toward the robot in an unexpected way.

Fig. 6 illustrates how the robot system reacts if any unexpected force is measured, especially when the robot is moving toward the object in the z direction. As shown in Fig. 6, the measurement starts when the robot is moving toward the human to grasp the object. The initial position in the z direction was z = -1.9 cm. During this phase, an unexpected obstacle was encountered at t = 5.7 s, so that the applied force in the negative z direction increases. When the measured force exceeds the safety limit (SL = ±20 N), the force safety factor is immediately activated and the task is canceled. As shown in Fig. 6, at t = 6.6 s the measured force in the z direction exceeded the safety limit (Fz = -20.24 N), so the task was immediately aborted and the robot returned back.

[Fig. 6 plot: robot position in the z direction and measured force Fz over time; the motion-to-object phase, the appearance of the obstacle, the contact force exceeding the safety limit, the mission abort and the return to the home position are marked.]

Fig. 6 Canceled task because of force safety factor
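A minimal sketch of this force-safety check, assuming a simple scalar reading of the z force (the function names and sample values below are illustrative, with the last sample taken from the experiment of Fig. 6):

```python
# Sketch: monitoring Fz against the safety limit SL = ±20 N during the approach phase.
SAFETY_LIMIT_N = 20.0

def check_force_safety(fz_newton: float, limit: float = SAFETY_LIMIT_N) -> int:
    """Return 0 while |Fz| stays inside the limit, 1 (SF_force activated) otherwise."""
    return 0 if abs(fz_newton) <= limit else 1

# Replaying the situation described above: the obstacle drives Fz to -20.24 N at t = 6.6 s
samples = [(5.0, -2.1), (5.7, -8.5), (6.2, -15.0), (6.6, -20.24)]
for t, fz in samples:
    if check_force_safety(fz):
        print(f"t = {t} s: force safety factor activated (Fz = {fz} N), task aborted")
        break
```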

Fig. 7 shows another experiment where the requirements of the force safety factor were fulfilled. In this experiment, the robot moves toward the human hand to grasp the object in the z direction (at t = 0 s, z = -4.6 cm, and then at t = 27.5 s, z = 29.7 cm). The robot arrived at the target point (tracking point) of the object at t = 27.5 s without any unexpected obstacles, so in the next phase, at t = 28 s, the robot starts searching for the first contact with the object. As shown in Fig. 7, the robot moves slowly in the x direction (at t = 28 s, x = 9.12 cm, and then at t = 33 s, x = 7.45 cm) until the applied force in the x direction exceeds the desired threshold (e.g. 3 N is high enough for the robot system to recognize contact with the object). In this case the robot system closes the robot hand in order to grasp the object.

[Fig. 7 plot: robot position and measured forces over time; the motion-to-object phase in the z direction, the search for the first contact in the x direction, the grasping of the object and the force recalibration are marked; no obstacles occur during the motion to the object.]

Fig. 7 Moving toward the object and searching for the first contact
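The contact-search phase described above can be sketched as a simple loop. This is an assumption-laden illustration (read_force and move_increment are placeholder interfaces, step size and iteration limit are invented), not the authors' controller.

```python
# Sketch: slow search for the first contact in the x direction; contact assumed when Fx exceeds ~3 N.
CONTACT_THRESHOLD_N = 3.0

def search_first_contact(read_force, move_increment, max_steps: int = 500) -> bool:
    """Advance in small x steps until the measured contact force indicates contact."""
    for _ in range(max_steps):
        fx = read_force()                 # measured force in x direction, N
        if abs(fx) >= CONTACT_THRESHOLD_N:
            return True                   # contact found: close the gripper next
        move_increment(dx_mm=-0.5)        # keep moving slowly toward the object
    return False                          # no contact within the search range: abort

# Example with a fake force signal that ramps up as the gripper meets the object
forces = iter([0.1, 0.2, 0.4, 1.1, 2.4, 3.3])
print(search_first_contact(lambda: next(forces), lambda dx_mm: None))  # -> True
```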

An important question in this section is how the robot can recognize that, for example, the applied force in the x direction is the contact force with the object, whereas the applied force in the z direction is an impact force with an unexpected obstacle. The answer to this question is illustrated in section 4. Briefly, with the help of the vision system, the robot system defines when and in which direction it should apply the contact force with the target object, so that any other force measured at an unexpected time or in a different direction is a warning sign and the force safety factor is activated.
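The following sketch illustrates this idea under assumed representations (a dictionary of Cartesian forces and a single expected contact axis provided by vision); the axis names, thresholds and API are placeholders rather than the paper's implementation.

```python
# Sketch: force in an unexpected direction is a warning; force on the expected axis is contact.
def classify_force(forces: dict, expected_axis: str,
                   contact_threshold: float = 3.0, safety_limit: float = 20.0) -> str:
    """Classify measured Cartesian forces as 'no_contact', 'contact' or 'warning'."""
    for axis, value in forces.items():
        if axis == expected_axis:
            continue
        if abs(value) > contact_threshold:
            return "warning"              # significant force in an unexpected direction
    if abs(forces[expected_axis]) >= safety_limit:
        return "warning"                  # even the expected direction must stay below SL
    if abs(forces[expected_axis]) >= contact_threshold:
        return "contact"                  # expected contact with the target object
    return "no_contact"

print(classify_force({"x": 3.4, "y": 0.2, "z": 0.5}, expected_axis="x"))   # -> contact
print(classify_force({"x": 0.1, "y": 0.3, "z": -8.0}, expected_axis="x"))  # -> warning
```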

3.3. Skin sensor procedures for safety

Skin sensor information is necessary when the human is outside the field of view of the vision system. This can happen when the robot is not facing the human side and then starts rotating to interact with the human. Here, the robot system should ensure the safety of the human during the motion with the help of the skin sensor (SF_skin). If a human is very near (within 10 cm of) the robot arm during the rotation, the safety factor SF_skin is activated and the task is aborted.
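Taken together, the four safety factors of sections 3.1 to 3.3 can be combined by a single abort rule. The sketch below assumes each factor has already been computed as a 0/1 flag as described above; the function name is hypothetical.

```python
# Sketch: the mission is aborted as soon as any single safety factor is activated.
def mission_may_continue(sf_body: int, sf_hand: int, sf_force: int, sf_skin: int) -> bool:
    """All safety factors must stay at 0 for the physical interaction to continue."""
    return not any((sf_body, sf_hand, sf_force, sf_skin))

# Example: the skin sensor detects a human within 10 cm during rotation -> abort
print(mission_may_continue(sf_body=0, sf_hand=0, sf_force=0, sf_skin=1))  # -> False
```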

3.4. Voice subsystem

The robot system is supported by a voice subsystem. It announces the current phase (what the robot is going to do), the status of the operation, and whether any errors have occurred. The voice subsystem gives the human the opportunity to know and understand what the robot is doing and to be ready if any error occurs during the task. It increases the safety between the human and the robot, especially if the user is blind or must concentrate on his/her own work.

The voice subsystem can easily be extended to announce different states. Table II lists only the main states of the proposed task (transferring objects between human and robot); a sketch of such a status-to-message mapping is given after the table. This version of the voice subsystem is suitable for our experiments. If a mobile robot were used, the voice messages could easily be modified.

Table II Status of voice subsystem

Event                                                 Status                 Voice message
New face detected                                     FACE DETECTED          Nice to meet you
Person leaves frame                                   NO FACES               Good bye
Person detected but distance is too big               FACE DETECTED          Please come closer
Person detected, object not found                     HAND ONLY              Hand only detected
Distance O.K., object segmentation successful         TRACKING               Object size in mm. I am tracking
Vision phase complete, robot begins moving to object  COMING PHASE           I am coming to you
Moving failed                                         COMING PHASE BREAKS    I can't come to you. Operation failed
Searching first contact point with force sensor       SEARCHING CONTACT      Searching for contact
Starting force interaction                            INTERACTION            Starting interaction phase
Failing interaction phase                             INTERACTION BREAKS     No contact with object
Interaction completely successful                     NO MORE FACES          Return to home position
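As a sketch only, the status-to-message mapping of Table II could be implemented as a simple lookup; the status strings are taken from the table, while the dictionary and function names are assumptions.

```python
# Sketch: lookup from system status (Table II) to spoken message.
VOICE_MESSAGES = {
    "FACE DETECTED":       "Nice to meet you",          # or "Please come closer" if the distance is too big
    "NO FACES":            "Good bye",
    "HAND ONLY":           "Hand only detected",
    "TRACKING":            "Object size in mm. I am tracking",
    "COMING PHASE":        "I am coming to you",
    "COMING PHASE BREAKS": "I can't come to you. Operation failed",
    "SEARCHING CONTACT":   "Searching for contact",
    "INTERACTION":         "Starting interaction phase",
    "INTERACTION BREAKS":  "No contact with object",
    "NO MORE FACES":       "Return to home position",
}

def announce(status: str) -> str:
    """Return the voice message for a status; unknown states stay silent."""
    return VOICE_MESSAGES.get(status, "")

print(announce("SEARCHING CONTACT"))  # -> Searching for contact
```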

4. Control algorithms

Numerous papers have discussed the fusion of vision and force information, e.g. [16] and [17]. However, in this work the visual information is not only used as simple feedback; it also determines and helps to control the values of the selection matrix [18], as shown in Fig. 8, in order to define the type of feedback used in every direction. In other words, with the help of vision the robot system decides (through the values of the selection matrix S, with Si = 1 or 0) which subspace is force controlled (ΔF) and which subspace is position controlled (ΔX), as follows:

0 0 " F S1 0 0 0 0 0 APX Afx,

0 0 0 S2 0 0 0 0 AP, %

0 0 S'AX= 0 0 S3 0 0 0

S4 0 0 K, % 0 0 0 S4 0 0 %

1-s5 0 0 0 0 0 S5 0 AR, iR„

0 1-S( ml 0 0 0 0 0 S6 ARZ,

Fig. 8 Control algorithms

Fig. 8 represents only the vision/force feedback as control signals, whereas the feedback of the skin sensor is considered only as a safety signal. In other words, the skin sensor has no effect on the control algorithm; it just aborts the mission if its safety factor is activated. In Fig. 8, X represents the pose of the end effector in different coordinate systems. The upper and lower indices C, W and E denote the coordinate systems of the camera, the world and the end effector, respectively. A pose is represented by a homogeneous matrix, e.g. ᴱX_m, or by an equivalent vector ᴱX_m^T = [ᴱX_m, ᴱY_m, ᴱZ_m, ᴱφ_m, ᴱϑ_m, ᴱψ_m]. Here ᴱX_m is the measured pose of the end effector from the robot control system, and ᴱX_v is the desired pose of the end effector determined from vision. Poses are transformed from the camera coordinate system and the world coordinate system to the end effector coordinate system using the transformation matrices ᴱT_C and ᴱT_W.
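To make the role of the selection matrix concrete, the following numerical sketch combines a vision error and a force error into one Cartesian correction. It is an illustration only: the gains, the error values and the function name are invented, and the authors' controller in [18] may be structured differently.

```python
# Sketch: diagonal selection matrix S splits the six Cartesian directions into
# position-controlled (S_i = 1, fed by vision) and force-controlled (S_i = 0) subspaces.
import numpy as np

def hybrid_command(s_diag, delta_x_vision, delta_f, kp=0.5, kf=0.002):
    """Combine vision and force errors into one Cartesian correction."""
    S = np.diag(s_diag)                       # selection matrix, entries 0 or 1
    I = np.eye(6)
    # position-controlled subspace uses the vision error, force-controlled uses the force error
    return kp * (S @ np.asarray(delta_x_vision)) + kf * ((I - S) @ np.asarray(delta_f))

# Example: x is force controlled (searching for contact), the other directions follow vision
s = [0, 1, 1, 1, 1, 1]
dx = [0.0, 0.01, -0.02, 0.0, 0.0, 0.0]        # m / rad, error from vision
df = [2.5, 0.0, 0.0, 0.0, 0.0, 0.0]           # N / Nm, error from the force sensor
print(hybrid_command(s, dx, df))
```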

5. Conclusion

This work has presented the importance of integrating different kinds of sensors (vision, force and skin sensors) in order to ensure the safety of the human during the physical interaction between the human and the robot. The work did not present the vision and force information only as control signals; it also illustrated how the robot system can benefit from these signals in order to ensure the safety of the human, how to integrate this information together with the skin sensor feedback, and how the robot can use all the available information, not only about the target object but also about the whole scene. Using the vision system, the robot is able to detect and recognize the human face, the load-free human hand and any object carried by the human hand (active hand). Furthermore, the vision system helps the robot to define in which directions the force should be applied and which directions are dangerous for human safety. The force sensor helps the robot to react to the motion of the human hand during the handing-over or assembly task. The sensitive skin prevents any collision between the human and the robot arm. In addition, the proposed system is supported by a voice system which informs the human about the actual status of the system.

This work assumed that the human is the weakest party of the human-robot team (blind, disabled or concentrating on his/her own work), i.e. the transfer or the physical interaction task is established and controlled exclusively by the robot. Therefore, an integrated sensor system was essential to ensure the safety of the human. The integrated sensor system can easily be modified to fit different kinds of applications and could be implemented in service and rescue robots, industrial human-robot teamwork (assembly tasks), etc.

In future work, tactile sensors could be integrated with the system to optimize the grasping algorithms.

References

[1] R. Bischoff and V. Graefe, "HERMES - a Versatile Personal Assistant Robot," Proc. IEEE - Special Issue on Human Interactive Robots for Psychological Enrichment, vol. 92, pp. 1759-1779, November 2004.

[2] A. Edsinger and C. Kemp, "Human-Robot Interaction for Cooperative Manipulation: Handing Objects to One Another," 16th IEEE International Conference on Robot & Human Interactive Communication, vol. 2, pp. 1167-1172, 2007.

[3] M. Cakmak, S. Srinivasa, M. Lee, J. Forlizzi and S. Kiesler, "Human Preferences for Robot-Human Hand-Over Configurations," IEEE/RSJ International Conference on Intelligent Robots and Systems, USA, pp. 1986-1993, September 2011.

[4] M. Huber, M. Rickert, A. Knoll, T. Brandt and S. Glasauer, "Human-Robot Interaction in Handing-Over Tasks," Proceedings of the 17th International Symposium on Robot and Human Interactive Communication, Germany, pp. 107-112, August 2008.

[5] D. Gandhi, E. Cervera, "Sensor Covering of a Robot Arm for Collision Avoidance," IEEE International Conference on Systems, Man and Cybernetics, 2003, Vol. 5, pp. 4951-4955.

[6] E. Cheung and V. Lumelsky, "Development of Sensitive Skin for a 3D Robot Arm Operating in an Uncertain Environment," Proc. 1989 IEEE Conference on Robotics and Automation, Scottsdale, AZ, May 1989.

[7] T. Lam, H. Yip, H. Qian and Y. Xu, "Collision Avoidance of Industrial Robot Arms using an Invisible Sensitive Skin," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4542-4543, 2012.

[8] M. Bdiwi, A. Kolker, J. Suchy and A. Winkler, "Segmentation of Model-Free Objects Carried by Human Hand: Intended for Human-Robot Interaction Applications," The International Conference on Advanced Robotics, Uruguay, 2013.

[9] M. Bdiwi, A. Kolker, J. Suchy and A. Winkler, "Automated Assistance Robot System for Transferring Model-Free Objects From/To Human Hand Using Vision/Force Control", The International Conference on Social robotics, England, 2013.

[10] S. Haddadin, A. Albu-Schaffer, M. Strohmayr, M. Frommberger and G. Hirzinger, "Injury Evaluation of Human-Robot Impacts," IEEE International Conference on Robotics and Automation, USA, pp. 2203-2204, May 2008.

[11] S. Haddadin, A. Albu-Schaffer and G. Hirzinger, "Safety Evaluation of Physical Human-Robot Interaction via Crash-Testing," Robotics: Science and Systems Conference (RSS), USA, June 2007.

[12] A. Bicchi and G. Tonietti, "Fast and Soft Arm Tactics: Dealing with the Safety-Performance Trade-off in Robots Arms Design and Control," IEEE Robotics and Automation Mag. Vol. 11, pp. 22-33, 2004.

[13] M. Zinn, O. Khatib, and B. Roth, "A New Actuation Approach for Human Friendly Robot Design," Int. J. of Robotics Research, vol. 23, pp. 379-398, 2004.

[14] J. Mainprice, E. Sisbot, T. Simeon, and R. Alami, "Planning Safe and Legible Hand-Over Motions for Human-Robot Interaction," IARP Workshop on Tech. Challenges for Dependable Robot in Human Environments, 2010.

[15] G. Reinhart, S. Zaidan, and J. Hublele, "Is Force Monitoring in Cooperating Industrial Robots Necessary?", 6th German Conference on Robotik, Germany, pp.523-530, June 2010.

[16] B. Nelson, D. Morrow, and P. Khosla, "Robotic Manipulation Using High Bandwidth Force and Vision Feedback," Mathematical and Computer Modeling, vol. 24, No. 5/6, pp. 11-29, 1996.

[17] J. Beaten, and H. Bruyninckx, "Integrated Vision/Force Robotics Servoing in the Task Frame Formalism," International Journal of Robotics Research, vol. 22, No. 10-11, pp.941-954, 2003.

[18] M. Bdiwi and J. Suchy, "Automatic Decision System for the Structure of Vision-Force Robotic Control," 10th International IFAC Symposium on Robot Control, Dubrovnik, Croatia, pp. 172-177, 2012.
