
Digital Communications and Networks (2015) 1, 112-124


Building technology platform aimed to develop service robot with embedded personality and enhanced communication with social environment

Aleksandar Rodić (a,*), Miloš Jovanović (a), Ilija Stevanović (a), Branko Karan (b), Veljko Potkonjak (c)

(a) Mihajlo Pupin Institute, University of Belgrade, Volgina 15, 11060 Belgrade, Serbia

(b) Institute of Technical Sciences, Serbian Academy of Sciences and Arts, Knez Mihajlova 35, 11000 Belgrade, Serbia

(c) School of Electrical Engineering, University of Belgrade, Bulevar Kralja Aleksandra 73, 11000 Belgrade, Serbia

Received 24 December 2014; received in revised form 3 March 2015; accepted 12 March 2015 Available online 6 April 2015

Abstract

This paper addresses the prototyping of a technology platform for developing an ambient-aware, human-centric indoor service robot with attributes of emotional intelligence that enhance its interaction with the social environment. The robot consists of a wheel-based mobile platform with a spinal (segmented) torso, a bi-manual manipulation system with multi-finger robot hands, and a robot head. The prototype is designed to see, hear, speak, and use a multimodal interface for enhanced communication with humans. It can demonstrate affective and social behavior using its audio and video interfaces as well as body gestures. The robot is equipped with an advanced perception system built on heterogeneous sensors, including a laser range finder, ultrasonic distance sensors and proximity detectors, a 3-axis inertial sensor (accelerometer and gyroscope), a stereo vision system, two wide-range microphones, and two loudspeakers. The device is intended to operate autonomously, but it can also be operated remotely from a host computer over a wireless communication link, or from a smartphone based on an advanced client-server architecture. The prototype has embedded attributes of artificial intelligence and advanced cognitive capabilities such as spatial reasoning, obstacle and collision avoidance, and simultaneous localization and mapping. The robot is designed so that new algorithms of emotional intelligence can be uploaded, and existing ones modified, to give the robot human-like affective and social behavior. The key objective of the project presented in this paper is to build an advanced technology platform for the research and development of personal robots for different purposes, e.g. entertainer, butler, medical-care, or security robots. In short, the designed technology platform is expected to support the development of human-centered service robots for use at home, in offices, in public institutions, etc.

KEYWORDS

Human-centric robot; Personal robot; Artificial emotional intelligence

*Corresponding author.

E-mail addresses: aleksandar.rodic@pupin.rs (A. Rodić), milos.jovanovic@pupin.rs (M. Jovanović), ilija.stevanovic@pupin.rs (I. Stevanović), karan@afrodita.rcub.bg.ac.rs (B. Karan), potkonjak@yahoo.com (V. Potkonjak). Peer review under responsibility of Chongqing University of Posts and Telecommunications.

http://dx.doi.org/10.1016/j.dcan.2015.03.002

2352-8648/© 2015 Chongqing University of Posts and Telecommunications. Production and hosting by Elsevier B.V. All rights reserved.

1. Introduction

Human-centered high-tech systems, such as personal devices (PCs, smartphones, cars, or personal robots), are designed to be used by humans in everyday life. In recent decades, personal devices (widely accepted and available to the majority) have been recognized as a forthcoming technology of modern society that will initiate a new technological revolution, much as the invention and exploitation of the Internet did in the 1990s. Robotics scholars and practitioners particularly emphasize human-like movement, human-like intelligence, and human-like communication as necessary features of future personal robots [1-3], the technology of the 21st century.

Recognizing the need to intensify work in human-centric robotics, a group of academic institutions [4] launched a project for the research and development of ambient-intelligent service robots. The development of new laboratory prototypes of intelligent service robots, intended for indoor applications, was designated as one of the main project objectives. At the same time, the project addressed a variety of software solutions, from modeling and simulating the motion of service robots to algorithms for multimodal interaction and emotion modeling, including social, i.e. inter-personal (human-robot or robot-robot), interaction. The prototype was developed with two main objectives in mind: first, to serve as a test-bed for scientific research, and second, to provide a basis for possible commercial and non-commercial applications. The intended operating area (indoor space) influenced the selection of the locomotion structure and navigation sensors. Starting from the decomposition of service-robot motion into transport (global) motion, by which the robot changes its operating position (using legs or wheels), and local task-specific manipulation, by which the working task is directly performed, a structure consisting of a mobile platform with rolling wheels as the means of transport was adopted as the basis of the prototype. Similarly, navigation relies on sensors suitable for indoor environments, primarily ultrasonic sensors and low-cost visual depth sensors. To bring the manner of carrying out service tasks as close as possible to the human one, the task performer was chosen in the form of a humanoid torso with a head and artificial hands of anthropomorphic characteristics, with the ability of multimodal (audio, gesture) communication with human users. The resulting prototype has been assigned the acronym UBMSR (Universal Bi-Manual Service Robot).

The paper is organized as follows. An overview of the prototype as a whole is given in the next section. The most important subsystems (motion, sensory, and control) are elaborated in more detail in Sections 3-5; the development of the cognitive features of the system is also considered in Section 5. In Section 6, selected experimental results are presented to verify potential system capabilities. The closing Section 7 contains a short description of the state of the project, conclusions, and planned further developments.

2. System overview

The UBMSR prototype is designed for indoor applications. The robot is intended primarily for research purposes, although its functionality gives it a universal character, and its modules are of industrial quality. The device can be used in households (as a butler-like household appliance or as a socializing or entertainment robot) and in public places (airports, railway stations, museums, shopping malls, offices, medical institutions, etc.) to provide information and user assistance or to carry loads or medical waste, as well as in industry as a mobile, flexible robot system easily adaptable to changes in the technological process and production program. Thanks to its soft, compliant joints, the robotic system may work in cooperation with people without special workplace-safety measures. As a research tool, the UBMSR may be used for the research, development, and evaluation of motion algorithms for mobile robots in unstructured environments, the evaluation of artificial intelligence algorithms, sound and pattern recognition, the development of multimodal human-robot interfaces (HRI), etc.

When designing the system, care was taken to partly preserve anthropomorphic proportions as well as functionalities that imitate human behavior and activities. For this reason, the device possesses a high level of anthropomorphism, although it integrates some purely industrial components, e.g. lightweight robot arms and hands. In the future, these elements may be replaced by corresponding elements inspired by the human body.

The robot structure consists of several functional modules (Fig. 1): (i) a wheel-driven mobile robot platform; (ii) a segmented torso; (iii) a bi-manual robotic system with three-fingered graspers; (iv) a movable robot head; (v) a sensory-acquisition system; and (vi) a control and communication module. The adopted system structure, displayed in Fig. 1, gives the system fine mobility in different indoor environments. When determining the robot dimensions, human proportions were partly taken into account, since the robot is planned to be used in spaces where people live and work and is expected to cooperate with people on particular tasks. The overall dimensions of the mobile platform, including the wheels, are L x W x H = 0.70 m x 0.70 m x 0.72 m. The torso is placed so that the shoulders are at a height of 1.5 m, and the span between the shoulders is 0.475 m, which corresponds to human proportions [5]. The robot head measures 0.26 m from chin to crown, proportional to the width of the torso, whereas the width of the head departs from human proportion due to the need to install quality microphones at the position of the ears. These dimensions allow the robot to pass easily through a standard door frame (0.80 m) in households and public areas. The drive wheels have a diameter of 0.35 m, whereas the diameter of the auxiliary wheels is 0.20 m. These dimensions allow the mobile platform to move over flat floor surfaces as well as diverse floor coverings. The robot cannot climb stairs, but it can roll over small obstacles whose height does not exceed 0.07 m. More details about the mechanical structure of the robot are given in the next section.

Fig. 1 The Universal Bi-Manual Service Robot as a technology platform for research and development of personal robots and wireless networked systems.

3. Mechanical design

3.1. Mobile robot platform

The mobile robot platform is a motorized trolley with two driving, microprocessor-controlled wheels and two auxiliary wheels that ensure stability and maintenance of the direction of motion. The platform houses a cabinet with two horizontally divided compartments. The lower level holds two 12 V, 120 Ah batteries together with a 12/230 V AC adapter/charger, electrical fuses, etc., whereas the upper level houses the control and communication electronics, the "brain" of the device. To unload the torso actuators and achieve an optimal mass distribution, i.e. to place the center of mass as low as possible, care was taken to seat all electronic boards inside the cabinet rather than in the head or torso of the robot. The system is designed to be safe against overturning even with the arms maximally abducted/extended and with the maximum payload of 5 kg per arm.

3.2. Segmented torso

The torso of the robot consists of a lower and an upper part. The upper part carries the robot head and two 6-DOF robot arms with artificial hands. The lower part is a central support pillar ending in a two-axis rotation drive with a slewing-drive transmission, allowing the upper part of the torso to bend (change elevation) about the horizontal axis in the range [-30, +60] degrees and to turn (rotate about the vertical axis) in the range [-180, +180] degrees. These redundant degrees of freedom significantly expand the operational workspace of the robot (the reachability of its hands).

3.3. Bi-manual manipulation system

The bi-manual manipulation system consists of two 6-DOF, 5 kg payload UR5 universal robot arms [6] of industrial quality. Each arm is equipped with a three-fingered Barrett-type robotic hand grasper [7]. The two-arm manipulation system, combined with the ability to bend the torso, gives the overall system good manipulation properties. The UR5 arms have compliant joints, facilitating interaction with humans without risk of injury. The arms have their own industrial controllers, which are integrated into the UBMSR control architecture. In the future, the UR5 lightweight robot arms will be replaced by two anthropomorphic arms with poly-articulated joints [8].

3.4. Robot head

The robot head (Fig. 2) contains: (i) a 3-DOF kinematic mechanism actuated by three DC electric motors with slewing-drive transmissions, (ii) two industrial-grade cameras for robot vision, (iii) two wide-band, professional-grade microphones with increased sensitivity, and (iv) an optional visual depth sensor placed on the crown of the head. The head has good kinematic abilities of rotation about three axes: vertical, frontal, and sagittal. Thanks to the visual and audio capabilities of the robot, the head can be turned toward a source of light and/or sound. The robot is also equipped with LED illumination that facilitates operation under reduced ambient light.
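The paper does not detail how the head localizes a sound source with the two ear-mounted microphones. A common technique is to estimate the time difference of arrival (TDOA) between the two channels by cross-correlation and convert it to an azimuth under a far-field assumption. The Python sketch below illustrates the idea; the microphone spacing, sampling rate, and all identifiers are illustrative assumptions, not values taken from the UBMSR head.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature
MIC_SPACING = 0.30       # m, assumed distance between the two head microphones
SAMPLE_RATE = 44100      # Hz, assumed audio sampling rate

def estimate_azimuth(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the horizontal angle (radians) to a sound source from
    the time difference of arrival between two microphone signals."""
    # Cross-correlate the two channels; the lag of the peak is the TDOA in samples.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    tdoa = lag / SAMPLE_RATE
    # Far-field approximation: path difference = spacing * sin(azimuth).
    # Negative values indicate a source on the left under this sign convention.
    sin_az = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.arcsin(sin_az))

# Synthetic check: the same burst delayed by 10 samples on the right channel.
signal = np.random.randn(4096)
delayed = np.concatenate([np.zeros(10), signal[:-10]])
print(np.degrees(estimate_azimuth(signal, delayed)))
```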

4. Sensorial system

The sensory and data-acquisition system of the robot is heterogeneous. It serves to perceive the robot's physical and social environment. Sensors are embedded throughout almost the whole mechanical structure of the system. The robot has joint position sensors (optical encoders) and velocity sensors (tachogenerators) as well as external distance sensors (a laser range finder and ultrasonic proximity probes), inertial sensors (a 3-axis gyroscope and a 3-axis accelerometer), and optional special-purpose sensors (a humidity and temperature sensor, a smoke detector, etc.). The universal robot arms are equipped with joint encoders, torque sensors in the arm and wrist joints, and force/torque sensors in the hand graspers. A tilt sensor on the platform detects collisions with surrounding objects.

Two fast, industrial-grade CMOS color cameras [9] and audio equipment (microphones [10] and speakers [11,12]) are embedded in the robot head. An optional RGBD sensor (a combined color and depth camera) can also be placed on the crown of the head and used for human gesture recognition and identification of moving obstacles within a range of 1-4 m around the robot. The current version of the employed RGBD sensor is the Kinect for XBOX 360, operating as a structured-light sensor [13], but it is planned to replace it with the more advanced Kinect for Windows v2 time-of-flight sensor [14].

The visual system can be set to operate in different configurations, yielding different performances of vision-based navigation. The basic configuration is bifocal stereo vision consisting of the two CMOS cameras (a monocular system is also feasible, but it can derive depth only while the robot is in motion). The system can operate in both outdoor and indoor environments, but its effectiveness depends significantly on the texture of the surrounding objects. The robustness and speed of the system can be improved by adding a third camera, obtaining in this manner a trifocal sensor. Alternatively, the CMOS cameras can be completely replaced by the RGBD sensor, which is limited to indoor use and is slower than the industrial-grade CMOS cameras, but provides denser depth maps. Further combinations of the RGBD sensor and ordinary cameras are also possible, and they can be tuned to inherit the good features of both sensors.
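As an illustration of depth recovery in the bifocal stereo configuration, the sketch below uses OpenCV's block-matching stereo correspondence, whose reliance on local texture mirrors the texture dependence noted above. The calibration constants and file names are placeholders, not the parameters of the robot's cameras.

```python
import cv2
import numpy as np

# Assumed (illustrative) calibration values, not taken from the UBMSR cameras.
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.12   # distance between the two cameras in meters

# Block matching works only where the scene has enough texture.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def depth_map(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Return a per-pixel depth map (meters) from a rectified gray-scale pair."""
    # StereoBM returns disparities as fixed-point values scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan           # unmatched / textureless pixels
    return FOCAL_PX * BASELINE_M / disparity      # triangulation: Z = f*B/d

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
print(np.nanmean(depth_map(left, right)))
```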

5. Control and communication architecture

5.1. Control, communication and networking

The control module is a hierarchically distributed structure best suited to the individual tasks imposed on the service robot in advance. A controller at the highest hierarchical level coordinates all tasks, data acquisition, and communication traffic, and supervises subordinate low-level controllers: (i) the mobile platform controller, responsible for motion planning, navigation, and path following together with obstacle and collision avoidance, taking into account the varying friction characteristics between the wheels and the ground; (ii) the controller of the manipulation system (12 DOF in total) including the torso (2 DOF) and the robot head (3 DOF), which also integrates the two separate industrial controllers of the UR5 arms; and (iii) controllers for image and sound processing, interpretation of voice commands, etc.

The communication block provides networking of the robot in a LAN, using Wi-Fi access or a GSM/GPRS modem (in the case of an information-structured environment), and connects the robot to the Internet. It is also planned to command the robot remotely using a smartphone with an Android application, or a standard notebook or PC with a corresponding remote GUI application. This assumes a Wi-Fi hotspot or GSM/GPRS modem on the robot, which adds the functionality of remote communication with authorized access and the ability to monitor system parameters or to transmit image and sound remotely.
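A minimal sketch of the robot-side half of such a client-server link is given below: a TCP server that accepts newline-delimited JSON commands and returns status replies. The port number and message format are assumptions for illustration; the actual UBMSR protocol is not specified in the paper.

```python
import json
import socket

HOST, PORT = "0.0.0.0", 9000   # illustrative port, not the robot's actual one

def serve_commands(handler):
    """Accept one client at a time and pass each JSON line to `handler`."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        while True:
            conn, _addr = srv.accept()
            with conn, conn.makefile("r") as stream:
                for line in stream:
                    command = json.loads(line)      # e.g. {"cmd": "drive", "v": 0.2}
                    reply = handler(command)        # robot status back to the client
                    conn.sendall((json.dumps(reply) + "\n").encode())

# serve_commands(lambda cmd: {"ok": True, "echo": cmd})
```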

As shown in Fig. 1, which presents the key components of the mobile service robot system, the control system is complex and distributed. For that reason, a modular control system design is adopted, combined with a high-speed CAN bus architecture [15]. This high-speed CAN bus is the main bus of the entire mobile service robot control system. Owing to its speed and reliability, it provides proper functioning of all modules, reliable and time-precise data exchange, and proper supervision of the whole system by the main controller, as well as remote supervision via tablet-PC software specially developed for remote supervision, commanding, and control of the robotic system and global task setting. The global block control schematic based on the high-speed CAN bus architecture is presented in Fig. 3.

Fig. 3 Global block structure of the control system of the mobile service robot.
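To make the role of the CAN bus concrete, the following sketch (using the python-can library) shows how a supervisory node might address a subordinate controller on the bus. The arbitration IDs and payload layout are invented for the example; the real system defines its own message set.

```python
import can  # python-can; the real system runs its own CAN stack

# Illustrative arbitration IDs for subsystems on the main CAN bus.
TROLLEY_CMD_ID = 0x101   # assumed ID of the trolley controller
TORSO_CMD_ID   = 0x102   # assumed ID of the torso pan/tilt controller

def send_velocity(bus: can.BusABC, left_mm_s: int, right_mm_s: int) -> None:
    """Send a wheel-velocity setpoint to the trolley controller."""
    payload = left_mm_s.to_bytes(2, "big", signed=True) + \
              right_mm_s.to_bytes(2, "big", signed=True)
    bus.send(can.Message(arbitration_id=TROLLEY_CMD_ID,
                         data=payload, is_extended_id=False))

with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    send_velocity(bus, 200, 200)          # drive straight ahead
    status = bus.recv(timeout=0.1)        # subsystems report back on the same bus
    print(status)
```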

The whole control structure is based on the high-speed CAN bus and is coordinated by the ADLE3800PC PCIe/104 processor board [16]. This is a powerful PCIe/104 processor module on which the ROS operating system is implemented [17]. It controls all subsystems connected to the high-speed CAN bus. The first of these is the trolley controller, based on the National Instruments single-board RIO architecture; a detailed description of this subsystem follows below. The machine vision system is another subsystem of the global robot control structure, also connected to the high-speed CAN bus. It is based on a powerful graphics processor in charge of image acquisition and image processing, as well as decision making based on the results of the machine vision algorithms; this subsystem and its main characteristics are also described below. As shown in Fig. 3, the dual-arm manipulation subsystem is likewise connected to the high-speed CAN bus and is part of the robot control architecture. The dual-arm system is based on the UR5 lightweight robot arms produced by Universal Robots [18]. Each lightweight arm is directly controlled by its own controller, and both controllers are connected to the CAN bus. On this basis, a prescribed dual-arm manipulation task can be completed without any collision, according to the results of the machine vision algorithm. This global command structure makes it possible to realize complex task movements such as obstacle avoidance and avoidance of dual-arm collisions during the desired task.

The ADLE3800PC PCIe/104 processor module directly controls the implemented audio system. Using the microphones built into the mobile robot, generic speech recognition is realized together with voice synthesis. These two functions run under the ROS operating system and are responsible for audio command recognition and robot-human voice interaction during the task. The complete human-robot interface (HRI), together with the voice and image data interpreter, is implemented on the touch-screen panel mounted on the chest of the robot. In this way, the audio, visual, and touch facilities of the HRI can be controlled and supervised by the operator. The touch-screen panel is used not only for HR interaction but also for global control of the overall system and its constituent modules. Global control and supervision of the mobile service robot can also be established remotely, using a smartphone and the wireless GSM/GPRS communication integrated into the control loop of the system structure. In this way, it is possible to set commands remotely, supervise the robot status, and transmit image data in real time from the robot to the screen of a smartphone.
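Since the speech pipeline runs under ROS, a voice-command node of this kind can be sketched as a subscriber on a recognized-speech topic that publishes base-motion commands. The topic names below (recognizer/output, cmd_vel) are common ROS conventions assumed for the example, not the actual UBMSR interface.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS voice-command node (topic names are assumptions).
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

def on_speech(msg):
    """Map a recognized phrase to a simple base-motion command."""
    twist = Twist()
    if "forward" in msg.data:
        twist.linear.x = 0.2          # m/s
    elif "turn left" in msg.data:
        twist.angular.z = 0.5         # rad/s
    cmd_pub.publish(twist)

rospy.init_node("voice_commander")
cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rospy.Subscriber("recognizer/output", String, on_speech)
rospy.spin()
```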

In the text to follow, the other functional subsystems forming the global control structure of the mobile robot are presented and analyzed in detail. First, the wheeled trolley subsystem is in charge of moving the robot in human-populated areas. The detailed control structure of the wheel-based trolley is presented in Fig. 4.

Fig. 4 Detailed control structure of the wheeled trolley of the service robot.

The trolley of the service robot is equipped with a large number of sensors, which implies that the controller itself must be equipped with a large number of I/O ports to receive and collect all these data. The controller must have a structure that allows all data to be collected adequately and on time, and to be available at any moment to the other subsystems for proper supervision and control. Reliable obstacle detection comes first. For that purpose, the trolley is equipped with both ultrasound and infrared sensors capable of detecting obstacles at mid-range distances (10-80 cm). At least 8 ultrasound distance sensors and 8 infrared obstacle detectors are mounted on the trolley, two of each on every edge of the vehicle. By combining the ultrasound and IR detectors, the accuracy and reliability required to detect obstacles in various indoor environments are obtained. These sensors are connected to an internal I2C bus providing easy and timely reading of each sensor; accordingly, the sensors are mounted on the edges of every side of the trolley. By properly reading and combining the sensor information, it is possible to detect obstacles, measure their proximity, and generate a suitable path to avoid both immobile and mobile obstacles. An accurate laser range finder is mounted on the front panel of the trolley. It maps the whole frontal plane with respect to the path of motion, enabling precise obstacle detection in the direction of motion; accordingly, a desired path between the detected obstacles can be generated. The laser range finder is connected to the controller over an RS232 link, as presented in Fig. 3. The collection and basic processing of the sensor data is the responsibility of the NI sbRIO-9626 single-board controller by National Instruments [19], which runs under the LabVIEW real-time OS [20]. This controller is responsible for timely data acquisition from the sensors, basic sensor data processing, and transmission of the sensor data over the high-speed CAN bus to the supervisory controller; data received from the sensors are thus promptly available to the global controller of the system. In parallel, the NI sbRIO-9626 controller uses the data collected from the sensors for local servo control of the main wheels during motion. The global robot controller is responsible for navigation and path planning; simultaneous localization and mapping (SLAM) algorithms run on this controller. Besides the proximity sensors and the range finder, the robot has a 3-axis inertial sensor comprising a 3-axis accelerometer and a 3-axis gyroscope. This sensor is mounted on the trolley and connected to the NI sbRIO-9626 controller through the internal I2C bus. By appropriate sensor-data fusion, the NI controller and the global controller calculate the global position/orientation (coordinates) and speed of the robotic system during movement. In addition to collecting and analyzing the data acquired from the robot sensors, the NI sbRIO controller manages intelligent operation of the front lights: LED lamps mounted on the front panel of the trolley light the path ahead in the direction of motion, and the light intensity is measured by the visual subsystem and transmitted over the high-speed CAN bus. The NI sbRIO controller is also in charge of controlling the power-driven wheels of the trolley. Two robust and reliable OTTOBOCK DC brush motors with brakes and incremental encoders are mounted in the trolley for that purpose. Control of the driving motors is realized by the NI sbRIO-9626 architecture, which contains a powerful FPGA. The FPGA is directly included in the control structure to supervise the driving wheels, the motor brakes, and the incremental encoders; the current and speed of the motors are calculated from the incremental encoder data and the motor status. The FPGA inside the sbRIO board is programmed to provide current-mode and velocity-mode control of the motors directly, and it can acquire the motor current as well as the incremental encoder count. Combined with the determined global position of the system, this also makes an odometry-based approach to task accomplishment possible. The NI controller is further responsible for controlling the pan and tilt joint actuators of the robot torso, realized with slewing-drive actuators; the FPGA inside the controller directly controls these two slewing-drive joints as well. The encoders in these two joints are of the absolute type, so their values are directly readable as accurate positions in the task space. Current-mode and velocity-mode control are implemented, so the inclination of the torso can be harmonized with the desired contact task or with a desired two-arm movement, together with the machine vision algorithm.

Since, as already mentioned, the NI control board is directly connected to the high-speed CAN bus, real-time data flow to and from the global controller and the other subsystems is fully provided.
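The odometry approach mentioned above can be illustrated by the standard dead-reckoning update for a differential-drive base. The wheel radius below follows from the 0.35 m wheel diameter given in Section 2; the wheel base and encoder resolution are assumptions for the sketch, not values from the OTTOBOCK drive documentation.

```python
import math

WHEEL_RADIUS = 0.175      # m, from the 0.35 m drive-wheel diameter (Section 2)
WHEEL_BASE = 0.60         # m, assumed distance between the drive wheels
TICKS_PER_REV = 2048      # assumed incremental-encoder resolution

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Dead-reckoning pose update from incremental encoder counts."""
    per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * per_tick
    d_right = d_ticks_right * per_tick
    d_center = 0.5 * (d_left + d_right)           # distance of the midpoint
    d_theta = (d_right - d_left) / WHEEL_BASE      # change of heading
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    return x, y, theta + d_theta

pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, 512, 640)   # one control cycle of encoder deltas
print(pose)
```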

The vision system is a robot subsystem based on machine vision algorithms and vision-based task operation (navigation and/or manipulation). The detailed control structure of this subsystem is presented in Fig. 5.

Fig. 5 Control structure of the machine vision system of the service robot.

The visual system is based on a stereovision arrangement consisting of two high-speed USB3 cameras. A Kinect camera is also mounted on top of the head of the mobile service robot for precise depth vision; it is connected to the controller through a USB2 port. By combining the data from the stereovision system with the depth images from the Kinect sensor, accurate information can be obtained about surrounding objects, their shape, structure, and distance. A powerful WAFER-ULT-i1 controller [21] is in charge of processing the information from the vision system. It has an Intel i7 processor and a powerful NVIDIA graphics accelerator and imaging processor, allowing maximal speed of image processing. The data from the image processor can be used for further analysis on the main controller or at a remote station (smartphone, tablet, etc.). The images can be transferred to a remote user over wireless GSM/GPRS communication for real-time supervision or for further processing in support of task realization. The vision subsystem controller is also connected to the other subsystems in the control loop over the high-speed CAN bus.
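One simple way to combine the two depth sources as described, assuming the stereo and Kinect depth maps have been registered to a common image grid, is to prefer the denser structured-light reading where it is valid and fall back to stereo elsewhere. The sketch below is an assumption about how such fusion could be done, not the robot's actual algorithm.

```python
import numpy as np

def fuse_depth(stereo_m: np.ndarray, kinect_m: np.ndarray) -> np.ndarray:
    """Per-pixel fusion of two registered depth maps (meters).

    Where the structured-light sensor returns a valid reading it is
    preferred (denser depth); elsewhere the stereo estimate fills in.
    """
    return np.where(np.isfinite(kinect_m) & (kinect_m > 0), kinect_m, stereo_m)

# Toy example: the Kinect map has a hole (NaN) that stereo fills.
stereo = np.full((2, 2), 2.5)
kinect = np.array([[2.4, np.nan], [2.45, 2.4]])
print(fuse_depth(stereo, kinect))
```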

5.2. Remote monitoring and supervisory control

The remote control system operates over GSM/GPRS or Wi-Fi communication. This kind of remote connection enables the transmission of commands, sensor data, audio signals (sound), and images to and from the robot. Monitoring can be done via a computer, mobile phone, or tablet. Two-way communication allows remote control of the task(s), reading the status of the service robot, and transmission of sound and images from the robot to a smartphone and vice versa. Thus, it is possible to give the robot voice commands directly, without invoking specific commands on the screen. The supervisory system is implemented as a cloud application, which provides maximum reliability during data transmission, real-time video transmission, and maximum availability of information, as well as real-time information transfer. The realized supervisory structure enables data to be updated and received at any time. The data are always available to the operator and can be simultaneously shared, analyzed, and modified in real time. This applies especially to the sensor-system data, which are of great importance both for accomplishing the service robot's tasks and for monitoring the robot by the human operator.

A specially designed voice (audio) control application, implemented on a smartphone under the Android operating system, allows the user to impose voice commands directly and then listen to the robot's voice response from a remote site. After receiving the command(s), the robot proceeds with their execution, and the user can directly supervise the robot's operation via the video stream.

5.3. Cognitive EI interface

Since the UBMSR is intended to accomplish service tasks that require interaction with people, a special cognitive interface, the so-called EI (Emotional Intelligence) controller, is designed to give the robot better integrative performance and better social acceptance by humans. The EI robot interface (EI-controller) should provide the robot with the ability to perceive/acquire human emotions (affective states), to understand emotions, and to control its own affective and social behavior, as well as its interaction with people (inter-personal interaction). These capabilities are developed on the basis of embodied attributes of artificial emotional intelligence, using a human personality profile model that comprises personality type, temperament, and character. The basis for this functionality is research from the humanities, especially psychology.

The research presented in this paper should enable the development of a specialized EI interface that forms the basis for the design of the robot's EI-controller. The research is based on experimental work with human examinees. A set of 237 examinees of different ages (adolescents 53.58%, junior adults 24.47%, senior adults 21.52%, and elderly persons 0.63%), genders (males 41%, females 59%), degrees of education (low 2.53%, intermediate 76.79%, highly educated 20.68%), and exposure to different social frames (e.g. family education, companions, religion, etc.) underwent tests identifying the attributes of their psychological behavior in different life situations, referred to as "trigger events". Initially, the anonymous examinees were asked to fill out online questionnaires for acquiring their personality types and temperaments [22,23]. The same examinees were then asked to fill out a questionnaire acquiring their socio-economic conditions as well as their current psycho-physical condition (interior state). After that, the examinees were asked to give their opinion (assessment) of their physical and psychological reactions to imposed scenarios (trigger events) taken from everyday life (ranging, e.g., from jolly and funny to frightening, tragic, or disgusting). The examinees ranked the degree to which they experienced the imposed trigger events on a scale of 0-100%, assessing their degree of effectiveness. Based on these tests, statistical processing of the results was performed and the knowledge was generalized. Knowledge fusion was performed by merging the obtained results with the acquired personality profiles of the examinees. From these outputs, a mathematical model of human affective and social behavior [24] was developed. The model (software) was implemented in the controller of the service robot [25]. The EI-controller should ensure better social acceptance of the device by potential customers as well as more effective and intuitive communication between the robot and other persons. Starting from the theory of personality and the experimental results acquired in advance, the mathematical model of human affective and social behavior was developed; its fundamentals are given in the text to follow.
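The statistical generalization step can be pictured as averaging the examinees' 0-100% ratings per emotional axis; the resulting mean is what Section 6 calls the "non-profiled" emotional state. The toy data and axis names below are illustrative, not the survey results.

```python
import numpy as np

# Toy ratings: each row is one examinee's 0-100% scores for a trigger event
# along two illustrative emotional axes (e.g. sadness, anger).
ratings = np.array([
    [70.0, 55.0],
    [80.0, 60.0],
    [65.0, 75.0],
])

# The "non-profiled" emotional state is the average reaction of the sample.
non_profiled_es = ratings.mean(axis=0)
spread = ratings.std(axis=0)            # how much individuals deviate
print(non_profiled_es, spread)
```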

While artificial intelligence (AI) copes with tasks such as pattern recognition, spatial reasoning, context reasoning, and decision making, emotional intelligence (EI) is responsible in humans for self-awareness, self-management, awareness of others, and relationship management [26-28]. According to the theory of personality from the humanities, individual emotional conditions depend primarily on three factors [29]: (i) personality type, (ii) personality character, and (iii) type of temperament. These factors are determined by the genetic code of an individual and are acquired at birth. Besides, the behavior of individuals is strongly influenced by additional exterior and interior factors (Fig. 6): (i) the "exterior world", i.e. the physical and social environment, (ii) "interior stimuli", and (iii) "socio-economic factors". The personality character, although an important factor, was disregarded in this research, since it relates mostly to certain pathological states that are not of interest here: in developing the model of human affective and social behavior, only healthy, regular human behavior was considered [24].

The term "exterior world" denotes our physical and social environment (including trigger events, situations, acts, and relationships with other persons). The "interior stimuli" mainly concern human physical and psychological conditions (of body and psyche), such as the senses of fatigue, pain, disease, etc.; feelings like love, depression, anger, hatred, fear, satisfaction, etc., also fall under this term. The "socio-economic factors" refer to conditions that influence features of individual behavior. The social factors include family and school education, the influence of companions, religion, affiliation with social groups (e.g. political parties, civil communities, clubs, fans), economic conditions, etc. Bearing these facts in mind, the psychological (affective) behavior of an individual is determined by many different exterior and interior factors. To derive a generic model of human psychological behavior, the following assumptions are made. The considered biological system can be approximated by a deterministic, dynamic, non-linear, multi-input-multi-output (MIMO) system of arbitrary complexity. The input and output variables of the emotion-driven behavior (EDB) model are hardly measurable, but they can be quantified using the corresponding psychological questionnaires [22,23], as described previously.

Fig. 6 High-level model description: trigger/stimuli - personality profile - behavior - reactions.

Fig. 7 Structure of the three-stage EDB model with fuzzy blocks: emotional state generator, emotional state modulator, and behavior interpreter.

The variables of interest for modeling the EDB (Fig. 7) include qualitative information regarding the personality profile, the physical and social environment, emotional and health conditions, social factors, etc. They are quite heterogeneous, and linguistic and descriptive rather than measurable and numeric. Generally speaking, humans behave (in the psychological sense) in a fuzzy manner: people use their sensory-motor skills, symbolic body and facial gestures, and phonetic forms to express their psychological state, behavior attributes, mood, etc. Due to the fuzzy nature of human behavior, the modeling approach adopted in this project belongs to the knowledge-based type [22,23]. It uses fuzzy inference systems, i.e. a fuzzy logic structure. Accordingly, the EDB model is represented by a three-stage block structure (Fig. 7). It consists of three fuzzy inference systems, FIS-1 to FIS-3, aligned in a cascade with the aim of propagating individual behavior attributes toward the output of the model. The achieved behavior attributes depend on the information entering the model through the input variables: "personality profile", "exterior world", "interior stimuli", and "social factors".

The biological mechanism of human affective behavior produces emotive reactions based on three excitation signals (information carriers): (i) the "trigger" of behavior (which arouses different psychological reactions), (ii) the "profiler" (which shapes the event-driven emotional state (ES) to fit the personality profile of the individual), and (iii) the behavior "booster" or "inhibitor" (which increases or decreases the expressiveness of the individual's affective manifestations).

The trigger of psychological behavior (mood) is the "exterior world" of Fig. 6, i.e. our physical and social environment (type of event, situation, inter-personal relationship, interaction, or act). The initiated emotional state is then gained (amplified or attenuated) by the influence of the "interior stimuli" and "socio-economic factors". According to Fig. 7, the trigger variables are led to the input of the FIS-1 block, named the "ES generator". This block produces at its output the "non-profiled" ES, which can also be called "non-personalized", since it does not reflect an individual's affective behavior but one estimated from the emotional states of a large sample of individuals. For example, a sad or frightening event (situation) certainly causes, in the majority of persons, the corresponding feelings of sadness or fear as the expected, normal psychological reactions; but the expressiveness of the feelings differs from person to person, depending on their personality profiles (personality type and temperament), interior stimuli, and social factors. The individual traits (personality type and temperament) therefore represent the so-called "ES profiler", while the "interior stimuli" and "social factors" represent the "ES booster" or "ES inhibitor", depending on whether they increase or decrease the energy of the emotions. Together, the emotional state "profiler" and "booster" (or "inhibitor") shape the "non-profiled" ES into one that is "profiled", also called "personalized", since its ES attributes are individually specific. The fuzzy block FIS-2 (Fig. 7) is called the "ES modulator" because the adjustment (modulation) of the non-profiled ES is achieved in this block.

The fuzzy model blocks FIS-1 to FIS-3 (Fig. 7) are assumed to be of Mamdani type. The input and output vectors are of arbitrary dimension, depending on how complex (general) a model is to be developed. Since the considered biological system is non-linear, the corresponding input and output membership functions (MFs) of the FIS-1 to FIS-3 blocks are assumed to be Gaussian curves or sigmoidal functions [22]. The number of fuzzy rules in the corresponding rule bases can vary depending on how refined the behavior properties captured by the model are intended to be: if the model is required to provide high resolution (distinction) between similar psychological states, a larger number of rules is needed in the corresponding rule base. The proposed model architecture (Fig. 7), with fuzzy blocks aligned in a cascade, enables easy monitoring of the intermediate model states (ES attributes corresponding to the non-personalized or personalized states) generated at the particular FIS blocks. Such a modular structure also enables easier analysis of the separate or synergetic influence of the "interior stimuli" and "social factors" on individual affective behavior.
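To make the cascade concrete, the following hand-rolled sketch implements a scalar Mamdani inference with Gaussian membership functions for the first two stages of Fig. 7: FIS-1 maps a trigger severity to a non-profiled ES, and FIS-2 modulates it with a single personality attribute. The universes, rules, and membership parameters are invented for illustration; the paper's actual rule bases are multi-input and far richer.

```python
import numpy as np

def gauss(x, mean, sigma):
    """Gaussian membership function, as assumed for the FIS blocks."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

ES = np.linspace(0.0, 1.0, 101)   # universe of the emotional-state intensity

# FIS-1 ("ES generator"): trigger severity -> non-profiled emotional state.
def fis1(trigger):                # trigger severity in [0, 1]
    w_mild = gauss(trigger, 0.2, 0.2)     # rule: mild trigger -> weak ES
    w_severe = gauss(trigger, 0.8, 0.2)   # rule: severe trigger -> strong ES
    # Mamdani: clip each output MF by its rule weight, aggregate by max.
    agg = np.maximum(np.minimum(w_mild, gauss(ES, 0.25, 0.15)),
                     np.minimum(w_severe, gauss(ES, 0.85, 0.15)))
    return np.sum(ES * agg) / np.sum(agg)   # centroid defuzzification

# FIS-2 ("ES modulator"): a personality attribute boosts or inhibits the ES.
def fis2(es, extraversion):       # extraversion in [0, 1] as the "profiler"
    w_intro = gauss(extraversion, 0.0, 0.3)   # introvert -> inhibited expression
    w_extra = gauss(extraversion, 1.0, 0.3)   # extravert -> boosted expression
    agg = np.maximum(np.minimum(w_intro, gauss(ES, es * 0.7, 0.1)),
                     np.minimum(w_extra, gauss(ES, min(es * 1.3, 1.0), 0.1)))
    return np.sum(ES * agg) / np.sum(agg)

non_profiled = fis1(0.75)                     # an irritating trigger event
print(non_profiled, fis2(non_profiled, 0.9))  # profiled ES of an extravert
```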

6. Experiments and model simulation

The prototype of the personal robot presented in Fig. 1 is still in progress; hardware integration is expected to be completed soon. Since the main novelty presented in the paper concerns the EI-model design described in Section 5, the selected results presented here serve to verify the concept and feasibility of the approach. In that sense, test results obtained with human examinees, as well as the corresponding simulation results of modeling human affective behavior, are presented in this section. The emotion-driven behavior model (Fig. 7) is the basis for building the EI-controller of the service robot presented in Fig. 1. To verify feasibility, two characteristic, independent demo scenarios were selected. These demo scenarios serve as characteristic trigger events used to excite the human emotional state and cause affective reactions. The selected test scenarios are described as follows:

Scenario 1 (irritating trigger event): You are in the supermarket, waiting in front of the cash register to pay for the things you have chosen. You are late and in danger of missing a scheduled appointment that is very important for your future professional career. A leisurely, boring, and talkative person slows down the cashier and does not care about anybody around him/her.

Scenario 2 (unpleasant trigger event): At the dinner table, someone tells a funny joke. While laughing, you suddenly burp loudly.

How do you feel, and what are your reactions, in each of these scenarios?

Fig. 8 Representation of human emotionally driven behavior caused by the trigger event described in Scenario 1: (a) emotional states acquired from persons of type ESFJ(a) and ESFJ(b) with Choleric temperament; (b) emotional state acquired from a person of type INTP with Choleric temperament.

Within the frame of the experimental verification of the approach proposed here for building the EI-controller, the 237 human examinees were asked to state how much the chosen demo (virtual) Scenarios 1 and 2 affected their emotional experience. The largest group of examinees, 22.14%, was categorized as the ESFJ (Extraverted-Sensing-Feeling-Judging) personality type, and 44.25% were identified as having a temperament of the Choleric type. To examine the sensitivity of the emotion-driven behavior model, i.e. its ability to capture fine differences in affective behavior caused by small variations of the personality attributes, the experimental results obtained from two examinees belonging to the group "ESFJ and Choleric" were compared first. The expression of their personality attributes was identified in percentages: (i) person ESFJ(a) is E(7%), S(39%), F(13%), J(19%) and Choleric 77%; (ii) person ESFJ(b) is E(36%), S(14%), F(25%), J(9%) and Choleric 30%. The obtained emotional states of persons ESFJ(a) and ESFJ(b) are presented in Fig. 8a by the corresponding graphs in an emotional state plane. The same graph also presents the so-called "non-profiled" emotional state, determined as the average state derived from the emotional states acquired from all 237 examinees. After that, the emotion-driven behavior model of Fig. 7 was simulated. In the simulation, the personal attributes of the previously described examinees ESFJ(a) and ESFJ(b) and the demo Scenario 1 (irritating trigger event) were introduced as inputs to the model. The obtained model outputs (emotional states) are shown in Fig. 9a. These results demonstrate relatively fine congruence with the test results shown in Fig. 8a, which were acquired from the human examinees. This verifies the assumption that the designed emotion-driven behavior model fits human affective behavior well. The test also confirms that the model developed in the paper is sufficiently sensitive to variations of the personality attributes and that it distinguishes between individuals within the same personality category (e.g. ESFJ). The experimental results with human examinees, as well as the corresponding model simulation results, for the unpleasant trigger event (Scenario 2) are presented in Figs. 10 and 11, respectively. Both personality types, ESFJ and INTP, were again taken into account.

Validation tests were carried out not only on the two scenarios presented here but also on additional scenario examples. The obtained results confirm the required generality of the model developed in the paper. This leads us to believe that the designed model can be extended to fit a broader domain of human affective states. The model can be used effectively with the service robot to enhance its social and inter-personal compatibility and communication skills.

In the second test, the simulation of the human emotion-driven behavior model was performed for an INTP (Introverted-iNtuitive-Thinking-Perceiving) personality type with Choleric temperament. The chosen person has the attributes I(36%), N(16%), T(51%), P(5%) and Choleric temperament 55%. The emotional states acquired by examination of the person with these personality traits are presented in Fig. 8b, and the corresponding simulation results of the model for these traits are shown in Fig. 9b. Comparing the emotional states of ESFJ and INTP presented in Figs. 8a and 8b, significant differences are noticeable, owing to the differences in the personality type attributes. The emotional states presented in Figs. 8b and 9b show satisfactory congruence, like the examples presented previously in Figs. 8a and 9a.

The affective reactions generated by the emotion-driven behavior model proposed in the paper are demonstrated using a simple graphical interface in the form of the feeling charts presented in Fig. 12.

Fig. 9 Simulation results plotted in an ES plane, obtained using the emotion-driven behavior model (Fig. 7) excited by the trigger event described in Scenario 1: (a) emotional states obtained by model simulation for the personality profiles ESFJ(a) and ESFJ(b); (b) emotional state obtained by model simulation for the personality profile INTP.

Fig. 10 Representation of human emotionally driven behavior caused by the trigger event described in Scenario 2: (a) emotional states acquired from persons of type ESFJ(a) and ESFJ(b) with Choleric temperament; (b) emotional state acquired from a person of type INTP with Choleric temperament.

The simulation results presented in Figs. 9 and 11 verify the assumption that the model proposed in this paper fits human affective behavior with satisfactory accuracy. Additional improvements can be achieved by further tuning the fuzzy membership functions using the extensive experimental results obtained from the human examinees. With additional tuning of the fuzzy rules in the model, as well as certain modifications of the aggregation algorithms, the accuracy of the model can be improved further. The next step in the project will be a microprocessor implementation of the model algorithms within the robot's EI-controller. The obtained results lead us to believe that the emotion-driven behavior model proposed in this paper can be effectively implemented on the robot of Fig. 1 to give it better emotional awareness of others and of itself.

Fig. 11 Simulation results plotted in the ES plane, obtained using the emotion-driven behavior model (Fig. 7) excited by the trigger event described in Scenario 2: (a) emotional states obtained by model simulation for the personality profiles ESFJ(a) and ESFJ(b); (b) emotional state obtained by model simulation for the personality profile INTP.

Fig. 12 Example of the feeling charts used in the graphical interface: a mood rating scale (0-100%, from "not at all" through "moderate" to "entirely") with respect to sadness and happiness.

Fig. 13 Personal entertainment robot [30]. Robot design derived from the basic model presented in Fig. 1.

7. Conclusion and future development

The paper describes the architecture and design details of a technology platform for developing a laboratory prototype of a human-centric service robot for indoor applications. The development of such a system is an important step toward integrating future "smart" home technology with social robots developed for personal purposes.

During the work done so far, many individual modules have been developed and/or separately tested. As an interim outcome of the project, a personal entertainment robot, named the Personal Robot Entertainer, was developed for exhibition at the Festival of Science 2014 in Belgrade, Serbia [30]. This model has fewer technical capabilities than the basic platform presented in Fig. 1, but it can be regarded as one of the possible robot modifications (Fig. 13).

The laboratory prototype (Fig. 1) will be completed, tested, and evaluated in typical household tasks, which should open up possibilities for its commercial application. The designed robot has an original hardware and software architecture and is especially interesting for its original cognitive EI robot interface, necessary for social and service robots operating and interacting with people.

Acknowledgment

The work presented in this paper is supported by the Serbian Ministry of Education, Science and Technology Development within project no. TR-35003, "Research and Development of Ambient Intelligent Service Robots of Anthropomorphic Characteristics". The researchers are partially supported by the Alexander von Humboldt Foundation (Germany) under the project "Building attributes of artificial emotional intelligence aimed to make robots feel and sociable as humans (Emotionally Intelligent Robots - EIrobots)", contract no. 3.4-IP-DEU/112623. The authors are grateful to the staff of the intermediate education center "Gosa" from Smederevska Palanka (Serbia) for meticulously administering the questionnaires for acquiring the personality attributes of the human examinees.

References

[1] R.A. Brooks, C. Breazeal, R. Irie, C.C. Kemp, M. Marjanovic, B. Scassellati, and M.M. Williamson, Alternative essences of intelligence, in: Proceedings of the AAAI'98/IAAI'98, American Association for Artificial Intelligence, Menlo Park, CA, USA, 1998, pp. 961-968.

[2] T. Fukuda, R. Michelini, V. Potkonjak, S. Tzafestas, K. Valavanis, M. Vukobratovic, How far away is artificial man, IEEE Robot. Autom. Mag. 8 (1) (2001) 66-73.

[3] B.R. Duffy, Anthropomorphism and the social robot, Robot. Auton. Syst. 42 (3-4) (2003) 177-190.

[4] M. Vukobratovic, Belgrade School of Robotics, Mihajlo Pupin Institute, Belgrade, 2002.

[5] HPC - Human Proportion Calculator (ideal). online at: (http:// hpc.anatomy4sculptors.com) (accessed 16.11.14).

[6] Universal Robots A/S, Universal robot UR5. online at: (http:// www.universal-robots.com/GB/Products/Downloads.aspx) (accessed 16.11.2014).

[7] Barrett Technology Inc., Barrett hand, online at: (http://www. barrett.com/robot/products-hand.htm) (accessed 16.11.2014).

[8] A. Rodic, B. Miloradovic, D. Urukalo, On developing anthropomimetic robot-arm, in: Proceedings of the 17th International Multiconference Information Society (IS 2014), Volume F, Robotics, Institut "Jozef Stefan", Ljubljana, Slovenia, October 8th, 2014, pp. 29-32.

[9] Point Grey Research Inc., Flea3 1.3 MP color USB 3.0 camera (e2v EV76C560), online at: (http://www.ptgrey.com/flea3-13-mp-color-usb3-vision-e2v-ev76c560-camera) (accessed 16.11.14).

[10] Sennheiser Electronic GmbH, Sennheiser MK 4 condenser microphone for professional studio recordings, online at: (http://en-de.sennheiser.com/condenser-microphone-studio-recordings-professional-mk-4) (accessed 16.11.14).

[11] ProElectronic, Piezo visokotonski zvucnik KHS106 (Piezo tweeter KHS106), online at: (http://www.proelectronic.rs/PIEZO-VISOKOTONAC-KHS106-KHS106-4367.htm) (accessed 16.11.14).

[12] ProElectronic, Mini zvucnik 45021 (Mini speaker 45021), online at: (http://www.proelectronic.rs/MINI-ZVUCNIK-45021-MINI-ZVUCNIK-13181.htm) (accessed 16.11.14).

[13] Microsoft Corporation, Microsoft XBOX Kinect, online at: (http://www.microsoft.com/education/ww/products/Pages/kinect.aspx) (accessed 16.11.14).

[14] Kinect for Windows team, Revealing Kinect for Windows v2 hardware, online at: (http://blogs.msdn.com/b/kinectforwindows/archive/2014/03/27/revealing-kinect-for-windows-v2-hardware.aspx) (accessed 16.11.14).

[15] CAN with Flexible Data-Rate, Bosch user manual, Specification Version 1.0 (released April 17th, 2012), (http://www.bosch-semiconductors.de/media/pdf_1/canliteratur/can_fd_spec.pdf).

[16] (http://adl-usa.com/products/detail/103/adle3800pc) (accessed 16.11.14).

[17] (http://www.ros.org/) (accessed 16.11.14).

[18] (http://www.universal-robots.com/GB/Products.aspx) (accessed 16.11.14).

[19] (http://www.ni.com/pdf/manuals/373378c.pdf) (accessed 16.11.14).

[20] (http://www.ni.com/labview/) (accessed 16.11.14).

[21] (http://www.ieiworld.com/product_groups/industrial/content.aspx?gid=00001000010000000001&cid=09050665109937740140&id=0D270425292726476606.VIlWUMu85aQ) (accessed 16.11.14).

[22] On-line personality test: (http://www.16personalities.com/ free-personality-test) (accessed 20.12.2014).

[23] On-line temperament test: (http://www.neoxenos.org/wp-content/blogs.dir/1/files/temperaments/temperament_test.htm) (accessed 20.12.14).

[24] A. Rodic, K. Addi, Mathematical modeling of human affective behavior aimed to design of robot EI-controller, in: A. Rodic, D. Pisla, H. Bleuler (Eds.), New Trends in Medical and Service Robots: Challenges and Solutions, Mechanisms and Machine Science, vol. 20, Springer, 2014, pp. 141-163, http://dx.doi.org/10.1007/978-3-319-05431-5, ISSN: 2211-0984.

[25] A. Rodic, M. Jovanovic, How to make robots feel and social as humans, in: Proceedings of the 6th IARIA International Conference on Advanced Cognitive Technologies and Applications (COGNITIVE), Venice, Italy, May 25th-29th, 2014, pp. 133-139.

[26] D.H. Kluemper, Trait emotional intelligence: The impact of core-self evaluations and social desirability, Personal. Individ. Differ. 44 (6) (2008) 1402-1412.

[27] J.D. Mayer, P. Salovey, What is emotional intelligence?, in: P. Salovey, D. Sluyter (Eds.), Emotional Development and Emotional Intelligence: Implications for Educators, Basic Books, New York, 1997, pp. 3-31.

[28] K.V. Petrides, R. Pita, F. Kokkinaki, The location of trait emotional intelligence in personality factor space, Br. J. Psychol. 98 (2007) 273-289.

[29] D.G. Myers, Psychology, 9th ed., Worth Publishers, New York, 2010.

[30] Personal entertainment robot video: (http://youtu.be/wM2ea ViT_i0) (accessed 20.12.2014).