

2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom 2012), December 2–5, 2012, Kosice, Slovakia

Towards Easier Human–Robot Interaction to Help Inexperienced Operators in SMEs

Sakari Pieskä, Jari Kaarela, Ossi Saukko Centria Research and Development, Vierimaantie 7, 84100 Ylivieska, Finland

[email protected]

Abstract—The capability to use advanced tools, devices, and software is essential for enterprises to survive in global competition. Inter-cognitive communication is an important element in the development of engineering applications where natural and artificial cognitive systems should work together efficiently. Small and medium-sized enterprises (SMEs) are becoming increasingly important in society, as our economies depend heavily on SMEs, which represent the majority of jobs created. Global competition has forced these SMEs to change and develop their production systems radically, to be more flexible. Industrial robots are seen as a key element in flexible manufacturing systems. However, currently, industrial robots are not commonly used in SMEs; one reason is the complex handling, especially the time-consuming programming. Industrial robot systems usually lack simple user interfaces, and the programming is usually carried out by the typical teach pendant teaching method. This method is a tedious and time-consuming task that requires a remarkable amount of expertise. In industry, this type of robot programming can be justified economically only for the production of large lot sizes, which are not typical for SMEs. Therefore, new approaches to human–robot interaction are required. Cognitive infocommunication can play a key role in these applications. Accordingly, our main goal in this paper is to present experiences in developing easier human–robot interaction to help even inexperienced operators use robots in SMEs. These examples show a variety of ways that inter-cognitive communication between human and artificial cognitive systems can be utilized in robotics. We also present our system and software architecture used in the development of generic industrial robot programming for easy-to-use applications, as well as some examples of our service robot development. Service robotics offers numerous possibilities to utilize cognitive infocommunication, but the development of reliable and flexible solutions is challenging, due to dynamic environments and because inexperienced users often understand very little about the robots and their internal states.

I. INTRODUCTION

Human–robot interaction requires easier and more flexible ways of guiding robots in order to increase the utilization potential of robotics in small and medium-sized enterprises (SMEs), which are widely recognized as a key driver for economic growth, innovation, and employment [1, 2, 3]. Manufacturing SMEs are an especially crucial factor in competitiveness and employment; for instance, two-thirds of European workers in manufacturing are employed in SMEs [2]. However, SMEs have limited resources for continuously monitoring the progress of technology or developing new methods. On the other hand, new technology for production operations is typically not available as a plug-and-play solution but rather requires customization in every application. That is especially the case with industrial robots. Industrial robot programming in SMEs is currently often carried out by a tedious and time-consuming teaching method. Flexible, low-cost, and easy-to-use methods are certainly needed for expanding robotics in SMEs. Therefore, we need new approaches to human–robot interaction, where cognitive infocommunications [4, 5] can play a key role. Pires has stated that this means taking special care of the human–machine interfaces (HMI), i.e., the devices, interfaces, and systems that enable humans and machines to cooperate on the shop floor as coworkers benefitting from each other's capabilities [3]. The development of robots more suitable for SMEs has also been recognized at the EU level, as shown by the accepted SMErobotics project [2].

In our paper, we present some related work and our inter-cognitive experiences toward easy programming. These include user-friendly interfaces for offline programming, remote monitoring and control of robots, a graphical user interface supplemented with a 3D measurement arm, and a depth camera system for hand gesture recognition. We also present how laser scanning can be used in sensor-bridging communication and how virtual design and simulation can be used for cognitive infocommunication in the entire development process of robotic work cells. The last sections present our system and software architecture used in the development of generic industrial robot programming for easy-to-use applications, as well as some examples of our service robot development.

II. RELATED WORK

Robots are recognized as complex machines that are not easy to program. Programming an industrial robot is widely regarded as a tedious and time-consuming task that requires a remarkable amount of technical expertise [6, 7, 8]. Human–robot interaction is problematic because robot systems usually lack simple user interfaces. This situation is a challenge for manufacturing SMEs.

On the other hand, in recent decades, several different robot programming methods, languages, and systems have been developed, and there are several surveys and reviews of robot programming systems. Lozano-Perez [9] presented his widely cited review in 1983. He divided robot programming systems into three main categories: guiding, robot-level programming, and task-level programming. Later, Biggs and MacDonald [10] conducted a survey of robot programming systems, dividing them into automatic programming, manual programming, and software architectures. In addition, Kramer and Scheutz [11] presented a survey of development environments for autonomous mobile robots. However, most of the robot programming methods presented in these reviews and surveys are not suitable for SMEs.

978-1-4673-5188-1/12/$31.00 ©2012 IEEE

The teaching method, using the robot teach pendant, has remained the typical method, even though it is a tedious and time-consuming task that requires some technical expertise. In industry, this type of robot programming can be justified economically only for the production of large lot sizes. One alternative is offline programming, where the actual programming is carried out on a computer using 3D models of the work cell, including the robot, the workpieces, and the surroundings. The advantages of offline programming are that production can continue during programming and that robot programs can, in most cases, be created quickly through the reuse of existing CAD data. The major disadvantage is that it demands a considerable investment in offline programming software and the associated training. Offline programming systems are often out of reach for SMEs because they are very expensive and require long training periods to be utilized efficiently. Therefore, new approaches to robot programming are needed. Pires [3, 6], Neto, Pires, and Moreira [7, 8], Schraft and Meyer [12], Naumann, Wegener, Schraft, and Lachello [13], and Solvang, Sziebig, and Korondi [14] presented examples of how human–robot interaction can be developed to be more suitable for SMEs by means that include inter-cognitive communication elements.

Recently, hand gesture recognition has emerged as a promising method by which natural and artificial cognitive systems might work together efficiently. Hand gesture recognition-based HRIs are natural and easy to use for human operators. Suarez and Murphy [15] recently presented an interesting review of applications where gesture recognition was tested. Nakachi, Takeuchi, and Katagami [16] investigated individuality in perception analysis of these gesture recognition processes. Barattini, Morand, and Robertson [17] presented a gesture set that could be used as a sign language for controlling industrial robots. Salem, Kopp, Wachsmuth, Rohlfing, and Joublin [18] investigated how humans perceive representational hand and arm gestures performed by a robot during a task-related interaction.

III. INTER-COGNITIVE COMMUNICATION EXPERIENCES TOWARD EASY ROBOT PROGRAMMING

We have developed inter-cognitive communication between humans and robots through easy programming methods that enable even operators who are not robotics experts to program robots. The common factor in these solutions is that the user can utilize and even extend his/her cognitive capabilities through infocommunication devices. In the most successful implementations, robot operators could feel that they were using their robot with no programming at all.

A. User-Friendly Interfaces for Offline Programming

Offline robot programming is an alternative to the tedious and time-consuming teaching method. However, it usually requires a long training period for the robot operators. A guided menu-driven system, as presented in Fig. 1, is one way to help users focus their cognitive capabilities on the work itself rather than on the programming details. In the case presented in Fig. 1, a robotic flange welding application with parameter-based offline programming was designed to enable rapid production changes with a user-friendly, menu-driven, graphical user interface [19]. The application software provided guided help to the operator with macro functions, starting from the creation of a product model, where a macro function automatically generated parametric polygon geometry from the given product dimensions. Automatic robot program creation also included a simulation of the welding tracks before the program was translated for the real robot. The entire robot program creation took no more than five minutes, so the developed user interface was found to be very effective and easy to use.
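The macro step that derives weld-track geometry from product dimensions can be illustrated with a small sketch. This is an illustrative reconstruction under stated assumptions, not the actual application code: the function name, parameters, and the reduction to bare corner waypoints are all simplifications of the described macro function.

```python
import math

def polygon_weld_track(num_sides, diameter, z=0.0):
    """Generate corner waypoints of a regular polygon weld track.

    A simplified stand-in for the macro function that derives
    parametric polygon geometry from product dimensions; the real
    system also generates welding parameters and approach moves.
    """
    radius = diameter / 2.0
    points = []
    for i in range(num_sides + 1):  # repeat the first corner to close the loop
        angle = 2.0 * math.pi * i / num_sides
        points.append((radius * math.cos(angle),
                       radius * math.sin(angle),
                       z))
    return points
```

Such a parametric generator is what lets the operator think in product dimensions (number of sides, diameter) instead of individual robot positions.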

In the robotic flange welding programming process, inter-cognitive representation-bridging communication is used so that the same robot program information is transferred to both the user and the robot control system, but in different representations—in 3D simulation form for the user and in correct robot program syntax form for the robot controller.

Figure 1. Menu-driven graphical user interface for robotic welding

B. Graphical User Interface with 3D Measurement Arm

Figure 2 shows how inter-cognitive communication between a human and a robot control system occurs with a simple 3D measurement arm, which is used for generating complex-shaped robot tracks. The user does not have to deal with robot commands; the user only shows the tracks along which he/she wishes the robot to move. The guiding graphical user interface enables the human operator to control the industrial robot without previous experience in robot programming. The idea of interaction based on an external measurement device has also been applied in other robot programming studies with different types of input devices. VTT recently developed a method based on an interactive 3D sensor system for robotic applications [20]. In the European SMErobot project, a digital pen was developed to be used as a robot programming device [21]. This type of easy programming method could radically change inter-cognitive communication in robot programming procedures. Development is moving in the no-programming direction, where the robot operator no longer has to be a robot expert to utilize his/her cognitive capabilities.
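One typical processing step between the measurement arm and the robot program is thinning the densely sampled track into waypoints the robot can follow. The sketch below is a minimal illustration of that idea only; the function name and the distance threshold are assumptions, not part of the described system.

```python
import math

def thin_track(samples, min_step=5.0):
    """Reduce a densely sampled measurement-arm track to waypoints.

    Keeps a sample only when it is at least `min_step` (in the same
    units as the input, e.g. mm) away from the last kept waypoint --
    a simple stand-in for the processing between arm and robot program.
    """
    if not samples:
        return []
    waypoints = [samples[0]]
    for p in samples[1:]:
        if math.dist(p, waypoints[-1]) >= min_step:
            waypoints.append(p)
    return waypoints
```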



Figure 2. An easy way to program a robot with a 3D measurement arm

C. Hand Gesture Recognition-Based Interaction

An alternative to using a 3D measurement arm in human–robot interaction is recognition of human hand movements. We have developed natural human–robot interaction based on hand gesture recognition with a depth camera system. The hand recognition sensor in our system is Microsoft Kinect, a widely used, low-cost sensor originally developed by Microsoft for full-body tracking in games. The application programming interface (API) for Kinect that we used is based on Zigfu's OpenNI bindings for Unity. Zigfu contains the three software elements that are required for Kinect applications: OpenNI, NITE, and SensorKinect. OpenNI is a general-purpose framework for obtaining data from 3D sensors, while NITE is used for gesture recognition and skeleton tracking. SensorKinect is a driver for interfacing with the Kinect.
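On top of such a stack, the application layer mainly maps recognized gestures to robot commands. The sketch below shows that dispatch idea only; the gesture names, command names, and queue structure are illustrative assumptions, not the NITE API or our actual implementation.

```python
# Hypothetical mapping from recognized hand gestures to robot commands
# for a palletizing demo. The gesture names resemble NITE-style events,
# but the dispatch logic here is purely illustrative.

GESTURE_COMMANDS = {
    "wave":  "HOME",    # send the robot to its home position
    "push":  "PICK",    # pick the object at the indicated location
    "click": "PLACE",   # place the held object
}

def on_gesture(gesture, position, command_queue):
    """Translate one gesture event into a queued robot command."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return False                      # unrecognized gesture: ignore
    command_queue.append((command, position))
    return True
```

Keeping the mapping in a table like this is what makes the interaction feel like "no programming" to the operator: the vocabulary can change without touching the robot-side code.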

Figure 3. Testing gesture recognition-based control of industrial robot

This gesture-based communication was demonstrated in our case with a palletizing application on an industrial robot (Motoman MH6). In the demonstration, the operator used her cognitive skills to define where the robot should pick up and place the objects (Fig. 3). The developed graphical user interface enables the human to control the industrial robot with no previous experience in robot programming. This type of low-cost solution for cognitive infocommunication is well suited for SMEs that lack robot programming experts. It is also a good example of an application where natural and artificial cognitive systems work together efficiently and humans can utilize their cognitive capabilities without being robot programming experts.

D. Virtual Design and Cognitive Infocommunication

Virtual layout design and simulation provide effective tools to examine the effects of layout changes before implementing them, or to compare layout alternatives before the construction of a new factory, production line, or production cell. 3D visualization may be used for intra-cognitive communication, which takes place not only in robot programming but also in the entire development process of robotic work cells [22]. Figure 4 presents an example of results from a collaboration project for designing and manufacturing a robotized work cell for log-building production. The development changed the SME's regular design process, because virtual design and simulations were used, with the help of the research team, from the early design phase through manufacturing and export. The actual work cell was constructed, tested, and finalized after some modifications, in collaboration at the SME's factory. After successful tests, it was delivered abroad to the SME's customer. During the development, the virtual model was used for intra-cognitive communication among the research team, the work cell manufacturer, and the customer (a log-building company).

Figure 4. Virtual design and simulation were used in the development of a robotic log-building work cell

Laser scanning can be integrated into virtual modeling.

The procedure starts with measurements that sweep an area and return an accurate 3D point cloud, a high-definition map of surfaces. From that point cloud, a 3D model can be created with suitable image processing software. The 3D model can be used on several infocommunication devices, such as laptops, tablets, and smartphones. Figure 5 shows an example of laser-scanning data developed further into a 3D model of our production engineering laboratory. The results of laser scanning-based virtual modeling were then used as a 3D map for the navigation of mobile robots, as presented in Fig. 5. The robot also had its own local 3D scanner that it could use for sensor- and representation-sharing communication when moving in the area. Two SME production facilities were modeled in the same way when the SMEs were planning layout changes; the SMEs could also use these models as 3D maps for navigating mobile work machines or robots. The results of laser scanning can be utilized in the design process for robot work environments and robot movements. They can also be used in online robot control to provide information about the dynamic environment. We have utilized laser scanning information in the remote control of a mobile robot and in providing information during autonomous operations, with both industrial and mobile robots [23].
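The step from point cloud to navigation map can be sketched as a simple projection onto a 2D occupancy grid. This is a minimal sketch of the general technique, not our actual pipeline: the function name, cell size, and height band are assumptions, and real systems add filtering, free-space ray tracing, and registration of multiple scans.

```python
def occupancy_grid(points, cell=0.5, z_min=0.2, z_max=2.0):
    """Project a 3D point cloud onto a 2D occupancy grid.

    Points whose height falls in the obstacle band [z_min, z_max]
    mark their (x, y) cell as occupied; floor and ceiling points
    are ignored. Returns the set of occupied cell indices.
    """
    occupied = set()
    for x, y, z in points:
        if z_min <= z <= z_max:
            occupied.add((int(x // cell), int(y // cell)))
    return occupied
```

A mobile robot can then plan paths over the free cells, and compare its own local scanner data against the same grid for localization.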



Figure 5. Robot positioned in the 3D model by wireless sensor network (left) and presentation of the robot's own laser scanner data (right)

IV. SYSTEM AND SOFTWARE ARCHITECTURE FOR GENERIC INDUSTRIAL ROBOT PROGRAMMING

We have developed a generic industrial robot programming platform for efficient human–robot interaction. This platform allows inputs from different sources to be used as a basis for robot program creation for many different robot types. The platform is suitable for both robot programming experts and those with no experience in robot programming, such as CAD designers. The basic principle of the inter-cognitive human–robot interaction is presented in Fig. 6, which shows how a CAD model, for example, can easily be translated into a robot program.

Figure 6. Generic principle of the software used in RobotProgrammer

The alternative data sources we used are 3D sensors (e.g., Kinect) and smart camera systems. The operator is provided with a guiding graphical user interface with a set of ready-made plugins. The operator chooses one, such as cutting, which then uses the information from the input CAD model. The plugin asks the operator to provide the parameters for the operation, after which it creates a generic robot program. Then, the operator can choose which robot type will be used. Usually, Robot Simulator is used first to verify the robot program. After that, the generic robot program is translated into vendor-specific robot programs, such as JBI format for Motoman robots or PRG format for ABB robots. Then, the robot program can be downloaded to the robot controller for execution. Figure 7 presents the software architecture of RobotProgrammer in the case where the data source is a CAD model. The structure consists of the main application, program plugins, save plugins, and libraries.


Figure 7. The architecture of the RobotProgrammer software

Program plugins are dynamically loaded when the main application starts. This makes it possible to add new functionality to RobotProgrammer without modifying the main application. Program plugins get access to the CAD data, which they can use to make generic robot programs. These plugins can also add new items to the CAD scene to show the user the route the program is going to follow.
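The dynamic loading described above is a standard plugin pattern. A minimal sketch of it, assuming Python-style packages and a `register()` convention that is our own illustration rather than RobotProgrammer's actual mechanism, could look like this:

```python
import importlib
import pkgutil

def load_plugins(package):
    """Discover and load all plugin modules in a package at startup.

    Mirrors the described design: plugins are found dynamically, so
    new operations and file formats can be added without modifying
    the main application. The register() convention is illustrative.
    """
    plugins = []
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package.__name__}.{info.name}")
        if hasattr(module, "register"):   # each plugin exposes register()
            plugins.append(module.register())
    return plugins
```

Because discovery happens at startup, dropping a new module into the plugin package is enough to extend the application.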

RobotProgrammer is the main application, in which the user can load a CAD image. The CAD data is then given to the program plugins. The selected program plugin shows the route over the CAD image and produces a GeneralRobotProgram when required. The main application passes this generic robot program to the save plugin selected by the user, which writes the program to an actual file.

Save plugins are dynamically loaded when the main application starts. This makes it possible to add support for new target formats without modifying the main application. A save plugin provides a list of supported file formats to the main application; therefore, one plugin may support more than one file format. In the main application, the user can select the desired format from the save dialog, where the formats from all the plugins are collected.

GrplCompiler is a testing application for save plugins. It can be used to compile GRPL (General Robot Programming Language) source files and then save the program in a target format using a save plugin.

LibRobotProgram is a collection of common classes and functions that the main application and program plugins might need. This library also provides a base class for quick program plugin development.



LibDxfReader is used to load CAD files in DXF format. The main application shows the loaded scene in a view that the user can zoom and move. The user can also select visible layers. Program plugins use loaded CAD data to make generic robot programs.

LibGrplTools library is used to compile and decompile GRPL source files. The main application provides the user with the ability to save GRPL source files using a decompiler, while program plugins can use a compiler to make a program from the GRPL source code.

LibGrpl is the library of General Robot Programming Language for making generic robot programs. Program plugins make programs in this format, while save plugins convert these into actual files in the target format.
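The role of a save plugin, rendering one generic program in a vendor-specific form, can be sketched as follows. This is a simplified stand-in under stated assumptions: the tuple-based program representation stands in for GRPL, and the instruction templates are illustrative, not actual Motoman JBI or ABB syntax.

```python
def save_generic(program, dialect):
    """Render a generic robot program in one target dialect.

    `program` is a list of (command, args) tuples standing in for a
    GRPL program; `dialect` selects the output templates. Both the
    data model and the templates are illustrative simplifications.
    """
    templates = {
        "motoman": {"move": "MOVL P{0}",   "wait": "TIMER T={0}"},
        "abb":     {"move": "MoveL p{0};", "wait": "WaitTime {0};"},
    }
    t = templates[dialect]
    return "\n".join(t[cmd].format(*args) for cmd, args in program)
```

The point of the split is that program plugins only ever emit the generic representation; adding a new robot brand means adding one template table, not touching the plugins.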

Our experiences using RobotProgrammer have validated the system and software architecture. The experiments have shown that even inexperienced personnel can create robot programs from CAD files without tedious and time-consuming robot programming procedures. The generic features of the architecture allow the operator to either program or use the robot flexibly at his/her own cognitive level.

V. MULTIMODAL INTERACTION FOR SERVICE ROBOTS

Robotics has recently expanded dramatically from purely industrial applications to service robotics, such that the number of service robots is currently larger than the number of industrial robots. SMEs will be using them more in the future, and more service robots will be used for professional purposes. Service robots have already been placed into use in several social environments, including healthcare, welfare, domestic, and leisure applications [24, 25, 26]. In these dynamic social environments, interaction may often be even more difficult than with industrial robots, because inexperienced users do not understand the robots' internal states, intentions, actions, and expectations [18]. For welfare applications, we have implemented a multimodal system combining voice and a multi-touchscreen to allow elderly people to interact comfortably with the robot. Regarding the voice-based interface, an important research question is which words and syntax should be chosen so that they are easily understood by the elderly. Currently, there are about 100 Finnish words that we have chosen with respect to the robot's capabilities and functionalities. We have taken into account the robot manufacturer's experiences with the voice-based interface, and we have adapted it for the Finnish-language user interface with the help of a Finnish voice recognition software dealer. We have also developed a Finnish-language version of the graphical user interface, and now, for most of the commands, the user is free to choose either voice command or touchscreen. The functionalities of the multimodal user interface currently include greetings, general information such as time and date, a wake-up function, appointments, drug management, a shopping list, e-mail, medical diagnostics, news, weather forecasts, photo albums, music, videos, Skype calls, and commands for robot movement. The solution also offers intra-cognitive communication between elderly persons and their relatives, nurses, and doctors. An example of the multimodal interface is presented in Fig. 8.

Figure 8. Kompaï service robot with a multimodal user interface
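The voice-or-touchscreen choice described above amounts to routing two input modalities to the same command handlers. The sketch below shows that idea only; the vocabulary, handler names, and return strings are illustrative assumptions, not the actual Kompaï interface.

```python
# Sketch of multimodal dispatch: a recognized voice word or a touched
# button resolves to the same handler, so the user can freely choose
# either modality. All names here are illustrative.

HANDLERS = {
    "weather": lambda: "Showing weather forecast",
    "music":   lambda: "Playing music",
    "wake_up": lambda: "Starting wake-up routine",
}

VOICE_VOCABULARY = {          # recognized (Finnish) words map to commands
    "sää":      "weather",
    "musiikki": "music",
}

def dispatch(event_type, value):
    """Route a voice word or a touchscreen button press to one handler."""
    if event_type == "voice":
        command = VOICE_VOCABULARY.get(value, value)
    else:                                  # touchscreen buttons carry the command directly
        command = value
    handler = HANDLERS.get(command)
    return handler() if handler else None
```

Keeping the vocabulary separate from the handlers is what makes it practical to localize the roughly 100-word command set without changing the robot functionality behind it.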

We have also demonstrated the use of our service robot Kompaï as part of a smart restaurant system. The purpose of this system is to simplify and speed up ordering in restaurants and to help restaurant employees with their work. The applications run on tablet or laptop computers equipped with touchscreens. The Smart Menu system covers applications for the entire order process of a restaurant, from the customer or waiter to the kitchen and the cashier (Fig. 9). It offers a platform for both inter- and intra-cognitive communication. The robot's touchscreen has a service menu application that contains different applications for restaurant customers. The service menu may include information (e.g., weather, news), Skype calling, and entertainment, such as videos and games. The children's menu consists of pictures of the foods, so children find it easy to browse.

Figure 9. Service robot as a part of a restaurant system

Service robots will be a challenging field for cognitive infocommunications in the future. The same infocommunication devices that are used for everyday communications are often available to be used with service robots. However, dynamic social environments and the users' inexperience with technology can cause extra challenges for development efforts. Interactive service robots used in everyday life must be especially simple to operate, because the users are ordinarily not technical experts.



VI. CONCLUSIONS

Inter-cognitive communication is an important element in the development of engineering applications where natural and artificial cognitive systems need to work together efficiently. Herein, we have presented our experiences in developing easier human–robot interaction to help even inexperienced operators utilize robots in SMEs. We also presented the system and software architecture used in the development work for generic industrial robot programming, as well as examples of multimodal interaction in service robotics.

ACKNOWLEDGMENTS

The authors would like to acknowledge everyone who participated in the development of these HRI applications. In particular, we want to thank Rumi Takahashi from Ochanomizu University and Jari Mäkelä, Juhana Jauhiainen, and Esa Pyykölä from Centria for their valuable contribution. This work was carried out within projects supported by EU Structural Funds, the TE Centre for Northern Ostrobothnia, the Council of Oulu Region, the Finnish Funding Agency for Technology and Innovation (Tekes), Ylivieska Region, Nivala-Haapajärvi Region, and Haapavesi-Siikalatva Region.

REFERENCES

[1] Small and medium-sized enterprises (SMEs). Facts and figures about the EU’s Small and Medium Enterprises (SMEs). Available at: http://ec.europa.eu/enterprise/policies/sme/facts-figures-analysis/index_en.htm

[2] SMErobotics homepage: http://www.smerobotics.org/

[3] J.N. Pires, “Robotics for small and medium enterprises: control and programming challenges.” Industrial Robot 2006; 33(6).

[4] P. Baranyi and A. Csapo, “Cognitive Infocommunications: CogInfoCom.” 11th IEEE International Symposium on Computational Intelligence and Informatics, Budapest, Hungary, 2010.

[5] P. Baranyi and A. Csapo, “Definition and synergies of cognitive infocommunications.” Acta Polytechnica Hungarica 2012; 9(1): 67–83.

[6] J.N. Pires, “New challenges for industrial robotic cell programming.” Industrial Robot 2008; 36(1).

[7] P. Neto, J.N. Pires, and A.P. Moreira, “3D CAD-based robot programming for the SME shop-floor.” 20th International Conference on Flexible Automation and Intelligent Manufacturing, FAIM 2010, San Francisco, CA, 2010.

[8] P. Neto, J.N. Pires, and A.P. Moreira, “High-level programming and control for industrial robotics: using a hand-held accelerometer-based input device for gesture and posture recognition.” Industrial Robot 2010; 37(2): 137–147.

[9] T. Lozano-Perez, “Robot programming.” Proceedings of the IEEE, Vol. 71, July 1983, 821–841.

[10] G. Biggs and B. MacDonald, “A survey of robot programming systems.” In Proceedings of the Australasian Conference on Robotics and Automation, CSIRO, Brisbane, Australia, 2003.

[11] J.F. Kramer and M. Scheutz, “Development environments for autonomous mobile robots: A survey.” Autonomous Robots 2007; 22(2): 101–132.

[12] R.D. Schraft and C. Meyer, “The need for an intuitive teaching method for small and medium enterprises.” ISR 2006 – ROBOTIK 2006, Proceedings of the Joint Conference on Robotics, Munich, Germany, 2006.

[13] M. Naumann, K. Wegener, R.D. Schraft, and L. Lachello, “Robot cell integration by means of application-P’n’P.” ISR 2006 – ROBOTIK 2006, Proceedings of the Joint Conference on Robotics, Munich, Germany, 2006.

[14] B. Solvang, G. Sziebig, and P. Korondi, “Vision based robot programming.” IEEE International Conference on Networking, Sensing and Control (ICNSC), Sanya, China, 2008.

[15] J. Suarez and R.R. Murphy, “Hand gesture recognition with depth images: a review.” IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 411–417, Paris, France, 2012.

[16] I. Nakachi, Y. Takeuchi, and D. Katagami, “Perception analysis of motion contributing to individuality using Kinect sensor.” IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 308–313, Paris, France, 2012.

[17] P. Barattini, C. Morand, and N.M. Robertson, “A proposed gesture set for the control of industrial collaborative robots.” IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 132–137, Paris, France, 2012.

[18] M. Salem, S. Kopp, I. Wachsmuth, K. Rohlfing, and F. Joublin, “Generation and evaluation of communicative robot gesture.” International Journal of Social Robotics 2012; 4(2): 201–217.

[19] S. Pieskä, M. Sallinen, J. Kaarela, V.-M. Honkanen, and Y. Sumi, “Applying remote monitoring and control for rapid and safe changes in robotic production cells.” Proceedings of the 5th International Conference on Machine Automation (ICMA), pp. 523–527, Osaka, Japan, 2004.

[20] T. Heikkilä, J.M. Ahola, E. Viljamaa, and M. Järviluoma, “An interactive 3D sensor system and its programming for target localizing in robotics applications.” Proceedings of the IASTED International Conference on Robotics (Robo 2010), Phuket, Thailand, 2010.

[21] J.N. Pires, T. Godinho, K. Nilsson, M. Haage, and C. Meyer, “Programming industrial robots using advanced input-output devices: test-case example using a CAD package and a digital pen based on the Anoto technology.” International Journal of Online Engineering 2007; 3(3): 7.

[22] P. Baranyi, B. Solvang, H. Hashimoto, and P. Korondi, “3D internet for cognitive infocommunication.” 10th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics (CINTI ’09), pp. 229–243, Budapest, Hungary, 2009.

[23] J. Jämsä, M. Luimula, S. Pieskä, V. Brax, O. Saukko, and P. Verronen, “Indoor positioning with laser scanned models in metal industry.” Proceedings of the International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Service, Helsinki, Finland, 2010.

[24] B. Graf, C. Parlitz, and M. Hägele, “Robotic Home Assistant Care-O-bot® 3: Product Vision and Innovation Platform.” In: J.A. Jacko (ed.), Human–Computer Interaction, Part II, HCII 2009, LNCS 5611, Berlin Heidelberg: Springer-Verlag, pp. 312–320.

[25] C. Granata, M. Chetouani, A. Tapus, P. Bidaud, and V. Dupourqué, “Voice and graphical-based interfaces for interaction with a robot dedicated to elderly and people with cognitive disorders.” 19th IEEE International Symposium on Robot and Human Interactive Communication, pp. 785–790, Viareggio, Italy, 2010.

[26] S. Pieskä, M. Luimula, J. Jauhiainen, and V. Spitz, “Social service robots in public and private environments.” 12th WSEAS International Conference on Robotics, Control and Manufacturing Technology (ROCOM ’12), Rovaniemi, Finland, 2012.
