design of hybrid cells to facilitate safe and efficient human–robot … paper presents a framework...

11
Krishnanand N. Kaipa 1 Department of Mechanical and Aerospace Engineering, Old Dominion University, Norfolk, VA 23529 e-mail: [email protected] Carlos W. Morato ABB Corporate Research Center ABB Inc., Windsor, CT 06065 e-mail: [email protected] Satyandra K. Gupta Center for Advanced Manufacturing, University of Southern California, Los Angeles, CA 90089-1453 e-mail: [email protected] Design of Hybrid Cells to Facilitate Safe and Efficient Human–Robot Collaboration During Assembly Operations This paper presents a framework to build hybrid cells that support safe and efficient human–robot collaboration during assembly operations. Our approach allows asynchro- nous collaborations between human and robot. The human retrieves parts from a bin and places them in the robot’s workspace, while the robot picks up the placed parts and assembles them into the product. We present the design details of the overall framework comprising three modules—plan generation, system state monitoring, and contingency handling. We describe system state monitoring and present a characterization of the part tracking algorithm. We report results from human–robot collaboration experiments using a KUKA robot and a three-dimensional (3D)-printed mockup of a simplified jet-engine assembly to illustrate our approach. [DOI: 10.1115/1.4039061] 1 Introduction Factories of the future will be expected to produce increasingly complex products, demonstrate flexibility by rapidly accommo- dating changes in products or volumes, and remain cost competi- tive by controlling capital and operational costs. Networked machines with built-in intelligence will become the backbone of these factories. Humans will continue to play a vital role in the operation of the factories of the future to achieve flexibility at low costs. Realizing complete automation that meets all three above- described requirements does not appear to be feasible in the near foreseeable future. The goal of achieving flexibility at low costs simply means that humans will continue to play a vital role in the operation of the factories of the future. Their role will change from doing routine tasks to performing challenging tasks that are difficult to automate. Humans and robots share complementary strengths in perform- ing assembly tasks. Humans offer the capabilities of versatility, dexterity, performing in-process inspection, handling contingen- cies, and recovering from errors. However, they have limitations in terms of factors of consistency, labor cost, payload size/weight, and operational speed. In contrast, robots can perform tasks at high speeds, while maintaining precision and repeatability, operate for long periods of times, and can handle high payloads. However, currently robots have the limitations of high capital cost, long programming times, and limited dexterity. Owing to the reasons discussed above, small batch and custom production operations predominantly use manual assembly. The National Association of Manufacturers estimates that the U.S. has close to 300,000 small and medium manufacturers (SMM), repre- senting a very important segment of the manufacturing sector. As we move toward shorter product life cycles and customized prod- ucts, the future of manufacturing in the U.S. will depend upon the ability of SMM to remain cost competitive. The high labor cost is making it difficult for SMM to remain cost competitive in high wage markets. They need to find a way to reduce the labor cost. 
Clearly, setting up purely robotic cells is not an option for them as they do not provide the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical prox- imities is a potential solution. However, current generation indus- trial robots impose safety risks to humans, so physical separation has to be maintained between humans and robots. This is typically accomplished by installing the robot in a cage. In order for the robot to be operational, the cage door has to be locked and elabo- rate safety protocol has to be followed in order to ensure that no human operator is present in the cage. This makes it very difficult to design assembly cells where humans and robots can collaborate effectively. In this paper, we design and develop a framework for hybrid cells that support safe and efficient human–robot collaboration during assembly operations. Our prior work on this topic focused on the problem of ensuring safety during human–robot collabora- tions inside a hybrid cell by developing a human-monitoring sys- tem and precollision robot control strategies [1]. The specific contributions of this work include: (1) Details on the interaction between different system compo- nents of the human–robot collaboration framework (2) New part-tracking system that augments the state- monitoring capability of the hybrid cell significantly. The part-tracking system enables efficient monitoring of the assembly operations by detecting whether the correct part is being picked by the human and whether it is placed at the correct location/orientation in front of the robot. (3) New experimental results consisting of a collaboration between a human and a KUKA robot to assemble a three- dimensional (3D)-printed mockup of a simplified jet-engine. These experiments also demonstrate how the part-tracking system, combined with the human-instruction module, ena- bles replanning of assembly operations on-the-fly. Preliminary works related to this paper were presented in Refs. [2] and [3]. There are several works in the human–robot col- laboration literature that compared different modes of collabora- tion [47]. Since this paper is mainly focused on the part estimation system, we present quantitative results on this topic. More system-level comparative results are outside the scope of this paper. Recent advances in safer industrial robots [810] and exteroceptive safety systems [1,11] create a potential for hybrid cells where humans and robots can work side-by-side, without 1 Corresponding author. Contributed by the Computer-Aided Product Development Committee of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received October 26, 2017; final manuscript received January 10, 2018; published online June 12, 2018. Special Editor: Jitesh H. Panchal. Journal of Computing and Information Science in Engineering SEPTEMBER 2018, Vol. 18 / 031004-1 Copyright V C 2018 by ASME Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

Upload: vomien

Post on 15-Jun-2018

216 views

Category:

Documents


0 download

TRANSCRIPT

Krishnanand N. Kaipa1

Department of Mechanical

and Aerospace Engineering,

Old Dominion University,

Norfolk, VA 23529

e-mail: [email protected]

Carlos W. MoratoABB Corporate Research Center ABB Inc.,

Windsor, CT 06065

e-mail: [email protected]

Satyandra K. GuptaCenter for Advanced Manufacturing,

University of Southern California,

Los Angeles, CA 90089-1453

e-mail: [email protected]

Design of Hybrid Cells toFacilitate Safe and EfficientHuman–Robot CollaborationDuring Assembly OperationsThis paper presents a framework to build hybrid cells that support safe and efficienthuman–robot collaboration during assembly operations. Our approach allows asynchro-nous collaborations between human and robot. The human retrieves parts from a bin andplaces them in the robot’s workspace, while the robot picks up the placed parts andassembles them into the product. We present the design details of the overall frameworkcomprising three modules—plan generation, system state monitoring, and contingencyhandling. We describe system state monitoring and present a characterization of the parttracking algorithm. We report results from human–robot collaboration experiments usinga KUKA robot and a three-dimensional (3D)-printed mockup of a simplified jet-engineassembly to illustrate our approach. [DOI: 10.1115/1.4039061]

1 Introduction

Factories of the future will be expected to produce increasinglycomplex products, demonstrate flexibility by rapidly accommo-dating changes in products or volumes, and remain cost competi-tive by controlling capital and operational costs. Networkedmachines with built-in intelligence will become the backbone ofthese factories. Humans will continue to play a vital role in theoperation of the factories of the future to achieve flexibility at lowcosts. Realizing complete automation that meets all three above-described requirements does not appear to be feasible in the nearforeseeable future. The goal of achieving flexibility at low costssimply means that humans will continue to play a vital role in theoperation of the factories of the future. Their role will changefrom doing routine tasks to performing challenging tasks that aredifficult to automate.

Humans and robots share complementary strengths in perform-ing assembly tasks. Humans offer the capabilities of versatility,dexterity, performing in-process inspection, handling contingen-cies, and recovering from errors. However, they have limitationsin terms of factors of consistency, labor cost, payload size/weight,and operational speed. In contrast, robots can perform tasks athigh speeds, while maintaining precision and repeatability,operate for long periods of times, and can handle high payloads.However, currently robots have the limitations of high capitalcost, long programming times, and limited dexterity.

Owing to the reasons discussed above, small batch and customproduction operations predominantly use manual assembly. TheNational Association of Manufacturers estimates that the U.S. hasclose to 300,000 small and medium manufacturers (SMM), repre-senting a very important segment of the manufacturing sector. Aswe move toward shorter product life cycles and customized prod-ucts, the future of manufacturing in the U.S. will depend upon theability of SMM to remain cost competitive. The high labor cost ismaking it difficult for SMM to remain cost competitive in highwage markets. They need to find a way to reduce the labor cost.Clearly, setting up purely robotic cells is not an option for them as

they do not provide the necessary flexibility. Creating hybrid cellswhere humans and robots can collaborate in close physical prox-imities is a potential solution. However, current generation indus-trial robots impose safety risks to humans, so physical separationhas to be maintained between humans and robots. This is typicallyaccomplished by installing the robot in a cage. In order for therobot to be operational, the cage door has to be locked and elabo-rate safety protocol has to be followed in order to ensure that nohuman operator is present in the cage. This makes it very difficultto design assembly cells where humans and robots can collaborateeffectively.

In this paper, we design and develop a framework for hybridcells that support safe and efficient human–robot collaborationduring assembly operations. Our prior work on this topic focusedon the problem of ensuring safety during human–robot collabora-tions inside a hybrid cell by developing a human-monitoring sys-tem and precollision robot control strategies [1]. The specificcontributions of this work include:

(1) Details on the interaction between different system compo-nents of the human–robot collaboration framework

(2) New part-tracking system that augments the state-monitoring capability of the hybrid cell significantly. Thepart-tracking system enables efficient monitoring of theassembly operations by detecting whether the correct partis being picked by the human and whether it is placed atthe correct location/orientation in front of the robot.

(3) New experimental results consisting of a collaborationbetween a human and a KUKA robot to assemble a three-dimensional (3D)-printed mockup of a simplified jet-engine.These experiments also demonstrate how the part-trackingsystem, combined with the human-instruction module, ena-bles replanning of assembly operations on-the-fly.

Preliminary works related to this paper were presented inRefs. [2] and [3]. There are several works in the human–robot col-laboration literature that compared different modes of collabora-tion [4–7]. Since this paper is mainly focused on the partestimation system, we present quantitative results on this topic.More system-level comparative results are outside the scope ofthis paper. Recent advances in safer industrial robots [8–10] andexteroceptive safety systems [1,11] create a potential for hybridcells where humans and robots can work side-by-side, without

1Corresponding author.Contributed by the Computer-Aided Product Development Committee of ASME

for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN

ENGINEERING. Manuscript received October 26, 2017; final manuscript receivedJanuary 10, 2018; published online June 12, 2018. Special Editor: Jitesh H. Panchal.

Journal of Computing and Information Science in Engineering SEPTEMBER 2018, Vol. 18 / 031004-1Copyright VC 2018 by ASME

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

being separated from each other by physical cages. However, real-izing this goal is challenging. Humans might accidentally come inthe way of the robot. Therefore, the robot must be able to executeappropriate collision avoidance strategies. Humans are prone tomaking errors and doing operations differently. Therefore, robotmust be able to replan in response to an unpredictable humanbehavior and modify its motion accordingly. The robot must beable to communicate the error to the human as well.

We consider a one-robot one-human model that exploits com-plementary strengths of either agents. The human identifies a partfrom a bin of multiple parts, picks it, and places it in front ofthe robot. The part is then picked up, and assembled, by therobot. The human also assists the robot in critical situations byperforming dexterous fine manipulation tasks required duringpart-placing. A state monitoring system allows to maintain a“knowledge” about the development of the assembly tasks, andprovide additional information to the human operator if needed.After placing the part in front of the robot, the human can proceedwith executing the next task instruction, rather than waiting untilthe robot finishes its intended task. The robot also replans andadaptively responds to different human actions (e.g., robot pausesif the human accidently comes very close to it, waits if the humanplaces an incorrect part in front of it, etc.). All these features resultin asynchronous collaborations between robot and the human. Anoverview of the hybrid cell is shown in Fig. 1.

2 Related Work

2.1 Support Human Operations in the Assembly Cell.Recent advances in information visualization and human–computer interaction have given rise to different approaches toautomated generation of instructions that aid humans in assembly,maintenance, and repair. Heiser et al. [12] derived principlesfor generating assembly instructions based on insights intohow humans perceive the assembly process. They compare theinstructions generated by their system with factory-provided andhand-designed instructions to show that instruction generationinformed by cognitive design principles reduces assembly timesignificantly. Dalal et al. [13] developed a knowledge-basedsystem that generates temporal multimedia presentations. Thecontent included speech, text, and graphics. Zimmerman et al.[14] developed web-based delivery of instructions for inherently3D construction tasks. They tested the instructions generatedby their approach by using them to build paper-based origamimodels. Kim et al. [15] used recent advances in information visu-alization to evaluate the effectiveness of visualization techniquesfor schematic diagrams in maintenance tasks.

Several research efforts have indicated that instruction presen-tation systems can benefit from augmented reality techniques.

Kalkofen et al. [16] integrated exploded view diagrams into aug-mented reality. The authors developed algorithms to compose vis-ualization images from exploded/nonexploded real world data andvirtual objects. Henderson and Feiner [17] developed an aug-mented reality system for a mechanic performing maintenanceand repair tasks in a field setting. The authors carried out aqualitative survey to show that the system enabled easier taskhandling. Dionne et al. [18] developed a model of automaticinstruction delivery to guide humans in virtual 3D environments.Brough et al. [19] developed VIRTUAL TRAINING STUDIO, a virtualenvironment-based system that allows (i) training supervisors tocreate instructions and (ii) trainees to learn assembly operations ina virtual environment. A survey of virtual environments-basedassembly training can be found in Ref. [20].

2.2 Assembly Part Recognition. The increasing availabilityof 3D sensors such as laser scanners, time-of-flight cameras,stereo cameras, and depth cameras has stimulated research in theintelligent processing of 3D data. Object detection and pose esti-mation is a vast area of research in the computer vision. In thepast decade, researchers focused on designing robust and discrimi-native 3D features to find reliable correspondences between 3Dpoint sets [21–24]. Very few approaches are available for objectdetection based on feature correspondences when scenes are char-acterized by clutters and occlusions [25–27]. In addition, thesemethods cannot deal with the presence of multiple instances of agiven model, which is also the case with bag-of-3D featuresmethods [28–31] (refer to Ref. [32] for a survey on this topic).Feature-free approaches have also been developed based on theinformation available from depth cameras. The use of depth cam-eras became popular after the introduction of the low-cost Kinecttechnology. Kinect camera provides good-quality depth sensingby using a structured light technique [33] to generate 3D pointclouds in real time. Approaches based on local shape descriptorsare expected to perform better [25,26] in environments with manyobjects that have different shapes. However, these approaches donot work in the presence of symmetries and objects with similarshapes.

3 System Overview

The hybrid cell will operate in the following manner:

(1) The cell planner will generate a plan that will provideinstructions for the human and the robot in the cell.

(2) Instructions for the human operator will be displayed on ascreen in the assembly cell.

(3) The human will be responsible for retrieving parts frombins and bringing them within the robot’s workspace.

(4) The robot will pick up parts from its workspace and assem-ble them into the product.

(5) If needed, the human will perform the dexterous finemanipulation to secure the part in place in the product.

(6) The human and robot operations will be asynchronous.(7) The cell will be able to track the human, the locations of

parts, and the robot at all time.(8) If the human operator makes a mistake in executing an

assembly instruction, replanning will be performed torecover from that mistake. Appropriate warnings and errormessages will be displayed in the cell.

(9) If the human comes too close to the robot to cause a colli-sion, the robot will perform a collision avoidance strategy.

The overall framework used to achieve the above list of hybridcell operations consists of the following three modules:

Plan generation. We should be able to automatically generateplans in order to ensure efficient cell operation. This requires gen-erating feasible assembly sequences and instructions for robotsand human operators, respectively. Automated planning poses thefollowing two challenges. First, generating precedence constraintsfor complex assemblies is challenging. The complexity can come

Fig. 1 Hybrid cell in which a human and a robot collaborate toassemble a product

031004-2 / Vol. 18, SEPTEMBER 2018 Transactions of the ASME

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

due to the combinatorial explosion caused by the size of theassembly or the complex paths needed to perform the assembly.Second, generating feasible plans requires accounting for robotand human motion constraints. In Sec. 4, we present methods forautomatically generating plans for the operation of hybrid cells.

System state monitoring. We need to monitor the state of theassembly operations in the cell to ensure error-free operations. Wepresent methods for real-time tracking of the parts, the humanoperator, and the robot in Sec. 5.

Contingency handling. Contingency handling involves collisionavoidance between robot and human, replanning, and warninggeneration. In Sec. 6.1, we describe how the state information isused to take appropriate measures to ensure human safety whenthe planned move by the robot may compromise safety. If thehuman makes an error in part selection or placement, and the errorgoes undetected, it can lead to a defective product and inefficientcell operation. Human error can occur due to either confusionabout poor instructions or human not paying adequate attention.In Sec. 6.2, we describe how the part tracking information is usedto automatically generate instructions for taking corrective actionsif a human operator deviates from the selected plan. Correctiveactions involve replanning if it is possible to continue assemblyfrom the current state or issuing warning instructions to undo thetask.

4 Plan Generation

4.1 Assembly Sequence Generation. Careful planning isrequired to assemble the complex products [34–36]. Precedenceconstraints among assembly operations must be used to guidefeasible assembly sequence generation. We utilize a methoddeveloped in our earlier works [37,38] that automatically detectspart interaction clusters that reveal the hierarchical structure in aproduct. This thereby allows the assembly sequencing problem tobe applied to part sets at multiple levels of hierarchy. A 3D CADmodel of the product, with the individual parts in their assembledconfiguration, is used as an input to the algorithm. Our approachdescribed in Ref. [38] combines motion planning and part interac-tion clusters to generate assembly precedence constraints. Weassume that the largest part PartL of the assembly guides theassembly process. Therefore, this part is extracted from the CADmodel and kept aside. Next, spatial k–means clustering is used togroup the remaining parts into k part sets. Accordingly, the

assembly comprises kþ 1 part sets (PartL, PartSet1, PartSet2,…,PartSetk) in the first step. Now, the assembleability of this newassembly is verified. This is achieved by using motion planning tofind the part sets that can be removed from the assembly. Theseparts sets are removed from the assembly and added to a new dis-assembly layer. Again, we find the part sets that can be removedfrom the simplified assembly. These part sets are removed fromthe assembly and added to the second disassembly layer. If thisprocess halts before all part sets are removed, the method goesback to the first step where the number of clusters is incrementedby one. This results in a different grouping of kþ 1 new clusters.This cycle is repeated until all disassembly layers are identified.Next, the above process is recursively applied to find disassemblylayers for each part set identified in the previous step. The infor-mation extracted in this way is used to generate a list of assemblyprecedence constraints among part sets, which can be used to gen-erate feasible assembly sequences for each part set and the wholeassembly. More details on the principal techniques (motion plan-ning, generation of disassembly layers, and spatial partitioning-based part interaction cluster extraction), the corresponding algo-rithms used to implement the above approach, and test results on awide variety of assemblies can be found in Ref. [38].

The assembly model used to illustrate the concepts developedin this paper is a jet engine assembly as shown in Figs. 2(a) and2(b). The result of applying the above-mentioned method onthis assembly model is a feasible assembly sequence as shown inFig. 2(c).

4.2 Instruction Generation. The human worker inside thehybrid cell follows a list of instructions to perform assembly oper-ations. However, poor instructions lead to the human committingmistakes related to the assembly. We address this issue by utiliz-ing an instruction generation system developed in our previouswork [39] that creates effective and easy-to-follow assemblyinstructions for humans. A linearly ordered assembly sequence(result of Sec. 4.1) is given as input to the system. The output is aset of multimodal instructions (text, graphical annotations, and 3Danimations) that are displayed on a screen. Text instructions arecomposed using simple verbs such as Pick, Place, Position,Attach, etc. As mentioned in Sec. 4.1, we compute a feasibleassembly sequence directly from the given 3D CAD model of thechassis assembly. Therefore, the following assembly sequence isinput to the instruction generation system:

Fig. 2 (a) Assembly computer-aided design (CAD) parts from a simplified jet engine, (b) asimple jet engine assembly, and (c) feasible assembly sequence generated by the algorithm

Journal of Computing and Information Science in Engineering SEPTEMBER 2018, Vol. 18 / 031004-3

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

(1) Pick up FRONT SHROUD SAFETY(2) Place FRONT SHROUD SAFETY on ASSEMBLY

TABLE(3) Pick up MAIN FAN(4) Place MAIN FAN on ASSEMBLY TABLE(5) Pick up SHROUD(6) Place SHROUD on ASSEMBLY TABLE(7) Pick up FRONT SHAFT(8) Place FRONT SHAFT on ASSEMBLY TABLE(9) Pick up FIRST COMPRESSOR

(10) Place FIRST COMPRESSOR on ASSEMBLY TABLE(11) Pick up SECOND COMPRESSOR(12) Place SECOND COMPRESSOR on ASSEMBLY

TABLE(13) Pick up REAR SHAFT(14) Place REAR SHAFT on ASSEMBLY TABLE

(15) Pick up SHELL(16) Place SHELL on ASSEMBLY TABLE(17) Pick up REAR BEARING(18) Place REAR BEARING on ASSEMBLY TABLE(19) Pick up EXHAUST TURBINE(20) Place EXHAUST TURBINE on ASSEMBLY TABLE(21) Pick up COVER(22) Place COVER on ASSEMBLY TABLE

Figure 3 shows the instructions used by the system for some ofthe assembly steps. Humans may get confused about which topick when two parts look similar to each other. To addressthis problem, we utilize a part identification tool developed inRef. [39] that automatically detects such similarities and presentthe parts in a manner that enables the human worker to select thecorrect part. For this purpose, a similarity metric between two

Fig. 3 Generation of instructions for chassis assembly (1–6)

031004-4 / Vol. 18, SEPTEMBER 2018 Transactions of the ASME

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

parts was constructed based on attributes like part volume, surfacearea, types of surfaces, and curvature [40,41].

5 System State Monitoring

Monitoring the system state inside the hybrid cell involvestracking of the states of the robot, the human, and the part cur-rently being manipulated by the human. We assume that the robotwill be able to execute motion commands given to it, so that theassembly cell will know the state of the robot.

A human tracking system was developed in our previous works[1,11] by using multiple Microsoft–Kinect sensors. The system iscapable of building an explicit model of the human in near realtime. Human activity is captured by the Kinect sensors that repro-duce the human’s location and movements virtually in the form ofa simplified animated skeleton. Occlusion problems are resolvedby using multiple Kinects. The output of each Kinect is a 20-jointhuman model. Data from all the Kinects are combined in a filter-ing scheme to obtain the human motion estimates. A systematicexperimental analysis of factors like shape of the workspace,number of sensors, placement of sensors, and presence of deadzones was carried out in Ref. [1].

The assembly cell state monitoring uses a discrete state-to-statepart monitoring system that was designed to be robust anddecrease any possible robot motion errors. A failure in correctlyrecognizing the part and estimating its pose can lead to significanterrors in the system. To ensure that such errors do not occur, themonitoring system consists of two control points—the first controlpoint detects the part selected by the human and the second con-trol point detects the part’s spatial transformation when it isplaced in the robot’s workspace. The detection of the selected partin the first control point helps the system to track the changesintroduced by the human in real time and trigger the assemblyreplanning and the robot motion replanning based on the newsequence. Moreover, the detection of the posture of the assemblypart related to the robot in the second control point sends a feed-back to the robot with the “pick and place” or “wait” flag.

The part monitoring system is based on a 3D mesh matchingalgorithm, which uses a real-time 3D part registration and a 3Dmesh interactive refinement [42]. In order to register the assemblypart in 3D format, multiple acquisitions of the surface are neces-sary given that a single acquisition is not sufficient to describe theobject. These views are obtained by the Kinect sensors and repre-sented as dense point clouds. The point clouds are refined inreal time by a dense projective data association and a point-planeiterative closest point, all embedded in KINECTFUSION [43–46].KINECTFUSION is used to acquire refined point-clouds from bothcontrol points and for every single assembly part. In order to per-form a 3D mesh-to-mesh matching, an interactive refinementrevises the transformations composed of scale, rotation, and trans-lation. Such transformations are needed to minimize the distancebetween the refined point cloud in a time ti and the refinedpoint cloud at the origin t0, also called mesh model. Point corre-spondences were extracted from both meshes using a variation ofProcrustes analysis [47–49] and then compared with an iterativeclosest point algorithm [50]. Details of the 3D mesh matchingalgorithm follows.

5.1 Three-Dimensional Mesh Matching Algorithm. Three-dimensional vision measurements produce 3D coordinates of therelevant object or scene with respect to a local coordinate system.3D point cloud registration transforms multiple data sets into thesame coordinate system. Currently, there is no standard methodfor the registration problem and the performance of the algorithmsis often related to preliminary assumptions.

Consider a point cloud representation of a rigid object with aset of n points X ¼ fx1;…; xng 2 R3g that is subject to an orthog-onal rotation R 2 R3x3 and a translation t 2 R3. Then the goal isto fit the set of points X into a given point cloud representation ofthe same object or scene with n points Y¼ {y1,…, yn} under the

choice of an unknown rotation R, an unknown translation t, andan unknown scale factor s. We can represent several configura-tions of the same object in a common space by maximizing thegoodness-of-fit criterion. We do this with the aid of three high-level transformations: (1) translation (move the centroids of eachconfiguration to a common origin), (2) isotropic scaling (shrink orstretch each configuration isotropically to make them as similar aspossible), and (3) rotation/reflection (turn or flip the configurationsin order to align the point clouds).

Algorithm 1 Weighted extended orthogonal Procrustes analysisalgorithm

Input:

X¼ {x1, x2,…, xn} (point cloud reference)Y¼ {y1, y2,…, yn}Initial transformation values R0, T0, s0

Output:

R ¼2 R3x3 (rotation)t 2 R3; (translation)s ¼2 R; (scale)

k¼ 0, D¼ 10–9, Dk¼Dþ 1;while Dk>D do

if Hessian is positive definite then

Compute a Newton search direction;else

Compute a Gauss–Newton search direction;end if

Update R, t and sk¼ kþ 1Update fitting error D

end while

Return R, t, s

The set of transformations of the rigid object can be representedby sxiRþ jtT¼ yi, where j is a 1� n unit vector. The optimizationproblem of finding R, t, and s that minimizes the fitting error isoften called extended orthogonal Procrustes analysis [51]. Wecast our matching/registration problem as a weighted extendedorthogonal Procrustes analysis (WEOPA). The rotation R can becomputed by solving

minRksRX þ jtT � Yk2

F subject to RTR ¼ I3; detðRÞ ¼ 1

where k:kF is the Frobenius matrix norm. The pseudocode of theWEOPA algorithm is given in algorithm 1 to compute a solutionto the orthogonal Procrustes problem.

Algorithm 2 Heuristic iterative-WEOPA

k¼ 0;while k< number of iterations do

R0¼ random orthogonal matrix with RTR¼ I and det(R)¼ 1;t0¼ random translation vector;s0¼ random scale unit;

R; t; s:¼ computed minimum with R0, t0, s0 as initial values for theWEOPA fitting algorithm.if R; t; s is a new minimum then

Store R; t; s;k¼ 0;

end if

k¼ kþ 1end while

Return R, t, s

The WEOPA algorithm depends on a good R0, t0, s0 initializa-tion; therefore, the algorithm is not stable. In order to solve thestability problem, a heuristic method was designed in Ref. [51],which we call the heuristic iterative-WEOPA (algorithm 2). R0

and s0 initially take an identity value and t0 takes the value of

Journal of Computing and Information Science in Engineering SEPTEMBER 2018, Vol. 18 / 031004-5

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

zero. This initialization is sufficient for noise-free point clouds butmost of the point clouds generated by the sensor contain noise,which shifts the centroid of the 3D point cloud far from the trueposition. Our algorithm deals with this problem by randomly gen-erating orthogonal rotations, translations, and scaling as part ofthe initialization process. The heuristic combines these data withthe WEOPA fitting algorithm, to compute and store additionalminimums. When no new minimum is found after a certain num-ber of iterations (¼150), the algorithm is terminated. Later, thetotal number of minimums is used in order to draw conclusions.Moreover, experimentation showed that in most of the cases thealgorithm found the minimum in less than 35 initialization param-eters. The system was developed in Cþþ and uses a prebuilt PCL

visualization package. In addition, the sensing and reconstructionof point clouds was customized from the original manufacturer toallow quasi-real-time reconstruction and processing.

5.2 Part Tracking Results. We have created a 3D printed jetengine replica, which is composed of eleven assembly parts. Weselected five representative parts (shown as inputs in Fig. 4) thatafford different recognition complexities to illustrate the chal-lenges encountered during an assembly task. A block diagram ofthe part tracking system is shown in Fig. 4. The first step is to

perform segmentation on the point cloud in order to retrieve allassembly parts. In this case, we performed a plane segmentationto find any table in the scene, and consider only clusters sitting onit. Later, we removed all clusters that are too small or too big inorder to reduce the number of clusters and therefore the noise inthe scene. After human places the part, it is ready to be picked bythe robot. Uncertainties related to pose estimation are reduced to asmall variation in the final location. That is, any attempt by therobot to pick up the part results in a successful grasping(Fig. 5(c)).

Regardless of the control point, the algorithm uses the pointcloud generated from the 3D CAD model as a target and comparesthis target against the N point clouds or clusters extracted fromthe scanned scene. This approach allows the system to evaluatethe alignment error for each assembly part, detected under theassumption that the minimum error belongs to the matchingcluster. Once this analysis is completed, the system identifies thecluster that represents the best matching cluster, and thereby,recognizes the cluster. Experiments showed that our Iterative–WEOPA algorithm successfully detected the correspondingmatching between point clouds obtained from scanning and pointclouds generated from 3D CAD models. Cluster identification andscene labeling provide the system with a tracking mechanism todetect, and report, changes in the scene.

Fig. 4 Three-dimensional part tracking block diagram

Fig. 5 The state-state discrete monitoring system has two control points: (a) Initial location:parts are located out of the robot workspace in a random configuration. Human pic the partsone by one. (b) Intermediate location: human place the parts at the robot workspace in a spe-cific configuration. (c) Robot successfully picking up the part from the assembly table andperform the task.

031004-6 / Vol. 18, SEPTEMBER 2018 Transactions of the ASME

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

We compared the results with the classical iterative closestpoint algorithm. Our algorithm performs better for every part. Inorder to evaluate and compare performance of our approach, aresidual error was computed as the mean square distance betweenthe points of the current mesh and the model mesh and their clos-est point. After 100 iterations, very small changes were observedin terms of these parameters. Therefore, we set 150 as a fixednumber of iterations for this specific experiment. The objects con-sidered in this study are assumed to be rigid bodies. Therefore,rotation, translation, and scaling transformation do not deformtheir corresponding point clouds. This allows the algorithm to usescaling as a compensatory transformation between a noisy pointcloud and the point cloud generated by the CAD model. In addi-tion, scaling transformation evaluated at step one is also used as atermination flag. This is valid under the assumption that if scalingtransformation is above a specific threshold, then there is a highprobability that the scanned part is actually different than theCAD model used for the query.

5.3 Algorithm Characterization. A complex problem incomputer vision is detecting and identifying a part in a subset ofparts that are similar. In order to test our model, we analyzed fiveparts that are geometrically similar. Due to the intrinsic noiseand resolution of the sensor, the generated point cloud has manyirregularities that eventually can affect the performance of thealgorithm. Figure 6 shows the mean square error on point corre-spondence between five parts, where three of them have a lot ofsimilarities between each other. Despite these irregularities, thealgorithm was able to identify the correct part. Any mean squareerror on point correspondence below 0.09 can be considered as atrue positive. Figure 6 shows that the mean square error (MSE) ofthe three most similar parts are below the threshold. In order toreduce the uncertainty, our algorithm uses a local comparisonbetween parts that belong to a specific assembly. This step helpsto sort the parts based on the MSE and identify the one with mini-mum MSE as the matched part. Experimental results showed thatincreasing the density of the point cloud improved the perform-ance of the algorithm, in terms of MSE, until some point afterwhich there was no visible improvement. However, the processingtime increased exponentially (Fig. 7).

6 Contingency Handling

6.1 Collision Avoidance Between Robot and Human.Ensuring safety in the hybrid cell via appropriate control of therobot motion is related to traditional robot collision avoidance.

However, interaction scenarios in shared work cells differ fromclassical settings significantly. For instance, we cannot ensuresafety always, if the robot reacts to a sensed imminent collision bymoving along alternative paths. This is primarily due to therandomness of human motion, which is difficult to estimate inadvance, and the dynamics of the robot implementing such acollision avoidance strategy. Also, these methods increase thecomputational burden as collision-free paths must be computed inreal time. Velocity-scaling [52] can be used to overcome theseissues by operating the robot in a tri-modal state: the robot is in aclear (normal operation) state when the human is far away fromit. When the distance between them is below a user specifiedthreshold, the robot changes into a slow (same path, but reducedspeed) state. When the distance is below a second threshold(whose value is lesser than that of the first threshold), the robotchanges to a pause (stop) state.

Our approach to ensuring safety in the hybrid cell is based onthe precollision strategy developed in Ref. [11]: robot’s pauses tomove whenever an imminent collision between the human and therobot is detected. This is a simpler bi-modal strategy, in which therobot directly changes from clear to pause when the estimateddistance is below a threshold. This stop-go safety approach con-forms to the recommendations of the ISO standard 10218 [53,54].In order to monitor the human–robot separation, the human modelgenerated by the tracking system is augmented by fitting all pairsof neighboring joints with spheres that move as a function of thehuman’s movements in real time. A roll-out strategy is used, inwhich the robot’s trajectory into the near future is precomputed tocreate a temporal set of robot’s postures for the next few seconds.Now, we verify if any of the postures in this set collides with oneof the spheres of the augmented human model. The method isimplemented in a virtual simulation engine developed based onTUNDRA software. More details on this safety system can be foundin Ref. [11].

6.2 Replanning and Warning Generation. If a deviationfrom the plan is detected, the system will automatically generateplans to handle the contingency. We present a proposal for thedesign of a contingency handling architecture for hybrid assemblycell that has the ability to replan its sequence in real time. Thisdesign permits a human operator to introduce adjustments orimprovements into the assembly sequence in real time with littledelays to the assembly cell output.

From the disassembly layers generated from the CAD model ofthe jet engine assembly, we can extract the following assembly

Fig. 6 First compressor identified in a subset of similar parts:cluster 1 (rear bearing), cluster 2 (first compressor), cluster 3(second compressor) and cluster 4 (third compressor), andcluster 5 (exhaust turbine)

Fig. 7 Performance characterization: region close to the inter-section between processing time and MSE, and below thethreshold represents the “sweet spot”

Journal of Computing and Information Science in Engineering SEPTEMBER 2018, Vol. 18 / 031004-7

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

sequence: (1) front shroud safety, (2) main fan, (3) shroud, (4)front shaft, (5) first compressor, (6) second compressor, (7) rearshaft, (8) shell, (9) rear bearing, (10) exhaust turbine, and (11)cover. This assembly sequence also defines the plans for thehuman and the motion planning for the robot. Although humanoperator and robot handle the same assembly parts, their kinemat-ics constraints are different and have to be considered in theassembly planning.

Initially we can describe a scene where the human operator fol-lows the system generated assembly plan with no-errors orrequested adjustments. Figure 8 shows the complete process ofthe assembly operation. An initial assembly plan is generatedbefore the operations begin in the hybrid assembly cell. The plangenerates the sequence for the human pick and place operationsand the motion plan for the robot assembly operations. A full inte-gration among the assembly plan, human tracking system, and therobot significantly reduces the probability of error introduced bythe robot in the cell. We will ignore those errors in this work. Thisconfiguration leaves the human operator as the only agent with thecapacity to introduce errors in the assembly cell. We define devia-tions in the assembly cell as a modification to the predefined plan.These modifications can be classified into three main categories:(1) Deviations that leads to process errors, (2) deviations thatleads to improvements in the assembly speed or output quality,and (3) deviations that leads to adjustment in the assemblysequence.

6.2.1 Deviations That Lead to Process Errors. Deviationsthat lead to process errors are modifications introduced by thehuman operator that cannot generate a feasible assembly plan.These errors can generate an error in the assembly cell in a waythat will require costly recovery. In order to prevent this type oferrors, the system has to detect the presence of this modificationby the registration of the assembly parts. Once the system has theinformation about the selected assembly part, it evaluates the errorin real time by propagating the modification in the assembly planand giving a multimodal feedback (e.g., text, visual and audible

annotations). We have hand-coded several examples to illustratethe deviation described above. Following the assembly plan in ourexample and after placing the rear-bearing, the next part to beassembled is “exhaust turbine.” Rather than following the assem-bly sequence, the human operator can decide to use a differentsequence. For example, the human picks the “compressor” partinstead of exhaust turbine as shown in Fig. 9(a). In order to find afeasible plan, the new assembly sequence with Compressor as asecond step is evaluated in real time. Using the explorationmatrix, the system determines that there is no possibility to find afeasible assembly sequence following this step. Therefore, thesystem raises an alarm and generates appropriate feedback usingtext annotations. This forces the human operator to rely on thepredefined assembly sequence.

6.2.2 Deviations That Leads to Improvement. Every singlemodification to the master assembly plan is detected and eval-uated in real time. The initial assembly plan is one of the manyfeasible plans that can be found. A modification in the assemblyplan that generates another valid feasible plan classifies as animprovement. These modifications are accepted and give the abil-ity and authority to the human operators to use their experience inorder to produce better plans. This process helps the system toevolve and adapt quickly using the contributions made by thehuman agent. Following the assembly sequence, the next part tobe assembled is “Front Shaft”. The human operator decides basedon his/her previous experience that placing the “first compressor”next will improve the performance of the assembly process. Thepart first compressor is selected and the step is evaluated in realtime. The system discovers that the changes made in the prede-fined assembly sequence can also generate a feasible assemblysequence. Therefore, the step is accepted and human is promptedto continue with the assembly operation. The updated assemblysequence becomes: (1) front shroud safety, (2) main fan, (3)shroud, (4) first compressor, (5) front shaft, (6) second compres-sor, (7) rear shaft, (8) shell, (9) rear bearing, (10) exhaust turbine,and (11) cover.

Fig. 8 Assembly operations: (a) human picks up the part, (b) in order to allow synchronization, the system recognizes thepart, (c) human moves the part to the intermediate location, and (d) human places the part in the intermediate location

031004-8 / Vol. 18, SEPTEMBER 2018 Transactions of the ASME

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

The most important feature of the framework is that the hybridassembly cell not only accepts the modification in the assemblysequence, but also adapts its configuration in order to completethe assembly process.

6.2.3 Deviations That Leads to Adjustment. Adjustments inthe assembly process may occur when the assembly cell can easilyrecover from the error introduced by the human by requestingadditional interaction in order to fix it. Assuming that the humanoperator is following the predefined assembly sequence, the nextassembly part to be assembled is front shaft. The system recog-nizes the assembly part and validates the step. Therefore the partcan be moved and placed in the intermediate location. Anothercommon mistake in assembly part placement is the wrong pose(rotational and translational transformation that diverges fromthe required pose). The human is informed by the system aboutthe mistake and is prompted to correct it. The system verifies theposes of the assembly parts in the intermediate location in realtime and forces the human operator to place the part in the rightlocation in order to resume the assembly process. Once the

assembly part is located in the right position and orientation, theassembly process resumes.

7 Conclusions

We presented the design details of a framework for hybrid cellsthat support safe and efficient human–robot collaboration duringassembly operations. We presented an approach for monitoringthe state of the hybrid assembly cell during assembly operations.The discrete state-to-state part monitoring was designed to berobust and decrease any possible robot motion errors. While theassembly operations are performed by human and robot, thesystem constantly sends feedback to the human operator aboutthe performed tasks. This constant feedback, in the form of 3Danimations, text and audio, helps to reduce the training time andeliminate the possibility of assembly errors. We will conductexperiments to quantitatively demonstrate these benefits of theproposed method in the future. A Microsoft–Kinect sensor, whichhas an effective range of approximately 1 to 4 m, was used forboth part monitoring and human monitoring. Therefore, the moni-toring equipment is placed sufficiently far from the robot without

Fig. 9 (a) Human picks a part (compressor); appropriate text annotations are generated as a feedback to the human. (b) Partselected is different from the assembly sequence; after a real-time evaluation, the system does not accept the modification inthe assembly plan. (c) Human returns the part to location 1. (d) Human picks a part (exhaust turbine), after real-time evaluationthe part is accepted. (e) Human places the part into the robot’s workspace. (f) The robot motion planning is executed for theexhaust turbine. If the assembly plan is modified (replanning), the robot uses the altered motion plan to pick the part and placeit in its target position in the assembly.

Journal of Computing and Information Science in Engineering SEPTEMBER 2018, Vol. 18 / 031004-9

Downloaded From: https://asmedigitalcollection.asme.org/ on 07/09/2018 Terms of Use: http://www.asme.org/about-asme/terms-of-use

affecting its normal working process. We carried out a detailedsensor placement analysis w.r.t. the human-monitoring system inRef. [1]. We will carry out a similar placement analysis of thepart-monitoring system in the future. The proposed method uses aprecollision strategy to predict human’s impending collision withthe robot and pauses its motion. We will compliment this capabil-ity in the future by exploiting the KUKA robot’s inbuilt forcesensing and impedance control features to implement compliantcontrol for handling postcollision scenarios. In our previous work,we have developed other modules including ontology for task par-titioning in human–robot collaboration for kitting operations [55]and resolving perception uncertainties [56] and occlusions inrobotic bin-picking in hybrid cells [57]. Future work consists ofinvestigating how to integrate them into the development ofhybrid work cells for assembly applications.

Funding Data

� National Science Foundation (Grant Nos. 1634431 and1713921).

References[1] Morato, C., Kaipa, K. N., Zhao, B., and Gupta, S. K., 2014, “Toward Safe Human

Robot Collaboration by Using Multiple Kinects Based Real-Time HumanTracking,” ASME J. Comput. Inf. Sci. Eng., 14(1), p. 011006.

[2] Morato, C., Kaipa, K. N., Liu, J., and Gupta, S. K., 2014, “A Framework forHybrid Cells That Support Safe and Efficient Human-Robot Collaboration inAssembly Operations,” ASME Paper No. DETC2014-34671.

[3] Morato, C., Kaipa, K. N., and Gupta, S. K., 2017, “System State Monitoring toFacilitate Safe and Efficient Human-Robot Collaboration in Hybrid AssemblyCells,” ASME Paper No. DETC2017-68269.

[4] Bauer, A., Wollherr, D., and Buss, M., 2008, “Human-Robot Collaboration: ASurvey,” Int. J. Humanoid Rob., 5(1), pp. 47–66.

[5] Shi, J., Jimmerson, G., Pearson, T., and Menassa, R., 2012, “Levels of Humanand Robot Collaboration for Automotive Manufacturing,” Workshop on Per-formance Metrics for Intelligent Systems (PerMIS), College Park, MD, Mar.20–22, pp. 95–100.

[6] Cherubini, A., Passama, R., Crosnier, A., Lasnier, A., and Fraisse, P., 2016,“Collaborative Manufacturing With Physical Human-Robot Interaction,” Rob.Comput.-Integr. Manuf., 40, pp. 1–13.

[7] Sadrfaridpour, B., and Wang, Y., 2017, “Collaborative Assembly in HybridManufacturing Cells: An Integrated Framework for Human-Robot Interaction,”IEEE Trans. Autom. Sci. Eng., PP(99), pp. 1–15.

[8] Baxter, 2010, “Rethink Robotics,” Rethink Robotics, accessed Jan. 29, 2018,http://www.rethinkrobotics.com/baxter

[9] KUKA, 2010, “KUKA LBR IV,” KUKA Robotics Corporation, Shelby CharterTownship, MI, accessed Jan. 29, 2018, https://www.kuka.com/en-us/products/robotics-systems/industrial-robots/lbr-iiwa

[10] ABB, 2013, “ABB Friendly Robot for Industrial Dual Arm FRIDA,” ABB,accessed Jan. 29, 2018, http://new.abb.com/products/robotics/industrial-robots/yumi

[11] Morato, C., Kaipa, K. N., Zhao, B., and Gupta, S. K., 2013, “Safe Human RobotInteraction by Using Exteroceptive Sensing Based Human Modeling,” ASMEPaper No. DETC2013-13351.

[12] Heiser, J., Phan, D., Agrawala, M., Tversky, B., and Hanrahan, P., 2004,“Identification and Validation of Cognitive Design Principles for AutomatedGeneration of Assembly Instructions,” Working Conference on Advanced Vis-ual Interfaces (AVI), Gallipoli, Italy, May 25–28, pp. 311–319.

[13] Dalal, M., Feiner, S., McKeown, K., Pan, S., Zhou, M., H€ollerer, T., Shaw, J.,Feng, Y., and Fromer, J., 1996, “Negotiation for Automated Generation ofTemporal Multimedia Presentations,” Fourth ACM International Conference onMultimedia (MULTIMEDIA), Boston, MA, Nov. 18–22, pp. 55–64.

[14] Zimmerman, G., Barnes, J., and Leventhal, L., 2003, “A Comparison of theUsability and Effectiveness of Web-Based Delivery of Instructions forInherently-3D Construction Tasks on Handheld and Desktop Computers,”Eighth International Conference on 3D Web Technology (Web3D), Saint Malo,France, Mar. 9–12, pp. 49–54.

[15] Kim, S., Woo, I., Maciejewski, R., Ebert, D. S., Ropp, T. D., and Thomas, K.,2010, “Evaluating the Effectiveness of Visualization Techniques for SchematicDiagrams in Maintenance Tasks,” Seventh Symposium on Applied Perceptionin Graphics and Visualization (APGV), Los Angeles, CA, July 23–24, pp.33–40.

[16] Kalkofen, D., Tatzgern, M., and Schmalstieg, D., 2009, “Explosion Diagramsin Augmented Reality,” IEEE Virtual Reality Conference (VR), Lafayette, LA,Mar. 14–18, pp. 71–78.

[17] Henderson, S., and Feiner, S., 2011, “Exploring the Benefits of AugmentedReality Documentation for Maintenance and Repair,” IEEE Trans. Visualiza-tion Comput. Graph., 17(10), pp. 1355–1368.

[18] Dionne, D., de la Puente, S., Le�on, C., Herv�as, R., and Gerv�as, P., 2009, “AModel for Human Readable Instruction Generation Using Level-Based

Discourse Planning and Dynamic Inference of Attributes Disambiguation,”12th European Workshop on Natural Language Generation, Athens, Greece,Mar. 30–31, pp. 66–73.

[19] Brough, J. E., Schwartz, M., Gupta, S. K., Anand, D. K., Kavetsky, R., andPettersen, R., 2007, “Towards the Development of a Virtual Environment-Based Training System for Mechanical Assembly Operations,” Virtual Reality,11(4), pp. 189–206.

[20] Gupta, S. K., Anand, D., Brough, J. E., Kavetsky, R., Schwartz, M., and Thakur,A., 2008, “A Survey of the Virtual Environments-Based Assembly TrainingApplications,” Virtual Manufacturing Workshop, Turin, Italy, pp. 1–10.

[21] Ohbuchi, R., Osada, K., Furuya, T., and Banno, T., 2008, “Salient Local VisualFeatures for Shape-Based 3D Model Retrieval,” IEEE International Conferenceon Shape Modeling and Applications (SMI), Stony Brook, NY, June 4–6, pp.93–102.

[22] Chen, H., and Bhanu, B., 2007, “3D Free-Form Object Recognition in Range Images Using Local Surface Patches,” Pattern Recognit. Lett., 28(10), pp. 1252–1262.

[23] Liu, Y., Zha, H., and Qin, H., 2006, “Shape Topics: A Compact Representation and New Algorithms for 3D Partial Shape Retrieval,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, June 17–22, pp. 2025–2032.

[24] Frome, A., Huber, D., Kolluri, R., Bulow, T., and Malik, J., 2004, “Recognizing Objects in Range Data Using Regional Point Descriptors,” European Conference on Computer Vision (ECCV), Prague, Czech Republic, May 11–14, pp. 224–237.

[25] Mian, A., Bennamoun, M., and Owens, R., 2009, “On the Repeatability and Quality of Keypoints for Local Feature-Based 3D Object Retrieval From Cluttered Scenes,” Int. J. Comput. Vision, 89(2–3), pp. 348–361.

[26] Mian, A., Bennamoun, M., and Owens, R., 2009, “A Novel Representation and Feature Matching Algorithm for Automatic Pairwise Registration of Range Images,” Int. J. Comput. Vision, 66(1), pp. 19–40.

[27] Zhong, Y., 2009, “Intrinsic Shape Signatures: A Shape Descriptor for 3D Object Recognition,” IEEE 12th International Conference on Computer Vision Workshops (ICCV), Kyoto, Japan, Sept. 27–Oct. 4, pp. 689–696.

[28] Johnson, A., and Hebert, M., 1999, “Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes,” IEEE Trans. Pattern Anal. Mach. Intell., 21(5), pp. 433–449.

[29] Chua, C., and Jarvis, R., 1997, “Point Signatures: A New Representation for 3D Object Recognition,” Int. J. Comput. Vision, 25(1), pp. 63–85.

[30] Stein, F., and Medioni, G., 1992, “Structural Indexing: Efficient 3-D Object Recognition,” IEEE Trans. Pattern Anal. Mach. Intell., 14(2), pp. 125–145.

[31] Hetzel, G., Leibe, B., Levi, P., and Schiele, B., 2001, “3D Object Recognition From Range Images Using Local Feature Histograms,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, Dec. 8–14, pp. II-394–II-399.

[32] Tangelder, J., and Veltkamp, R., 2004, “A Survey of Content Based 3D Shape Retrieval Methods,” IEEE International Conference on Shape Modeling Applications, Genova, Italy, June 7–9, pp. 145–156.

[33] Freedman, A., Shpunt, B., Machline, M., and Arieli, Y., 2008, “Depth Mapping Using Projected Patterns,” Prime Sense Ltd., Israel, Patent No. WO 2008/120217 A2.

[34] Gupta, S. K., Regli, W. C., Das, D., and Nau, D. S., 1997, “Automated Manufacturability Analysis: A Survey,” Res. Eng. Des., 9(3), pp. 168–190.

[35] Gupta, S. K., Paredis, C., Sinha, R., Wang, C., and Brown, P. F., 1998, “An Intelligent Environment for Simulating Mechanical Assembly Operations,” ASME Design Engineering Technical Conferences (DETC), Atlanta, GA, Sept. 13–16, pp. 1–12.

[36] Gupta, S. K., Paredis, C., Sinha, R., and Brown, P. F., 2001, “Intelligent Assembly Modeling and Simulation,” Assem. Autom., 21(3), pp. 215–235.

[37] Morato, C., Kaipa, K. N., and Gupta, S. K., 2012, “Assembly Sequence Planning by Using Multiple Random Trees Based Motion Planning,” ASME Paper No. DETC2012-71243.

[38] Morato, C., Kaipa, K. N., and Gupta, S. K., 2013, “Improving Assembly Precedence Constraint Generation by Utilizing Motion Planning and Part Interaction Clusters,” Comput.-Aided Des., 45(11), pp. 1349–1364.

[39] Kaipa, K. N., Morato, C., Zhao, B., and Gupta, S. K., 2012, “Instruction Generation for Assembly Operations Performed by Humans,” ASME Paper No. DETC2012-71266.

[40] Cardone, A., Gupta, S. K., and Karnik, M., 2003, “A Survey of Shape Similarity Assessment Algorithms for Product Design and Manufacturing Applications,” ASME J. Comput. Inf. Sci. Eng., 3(2), pp. 109–118.

[41] Cardone, A., and Gupta, S. K., 2006, “Similarity Assessment Based on Face Alignment Using Attributed Applied Vectors,” Comput.-Aided Des. Appl., 3(5), pp. 645–654.

[42] Petitjean, S., 2002, “A Survey of Methods for Recovering Quadrics in Triangle Meshes,” ACM Comput. Surv., 34(2), pp. 211–262.

[43] Newcombe, R., and Davison, A., 2010, “Live Dense Reconstruction With a Single Moving Camera,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, June 13–18, pp. 1498–1505.

[44] Newcombe, R., Lovegrove, S., and Davison, A., 2011, “DTAM: Dense Tracking and Mapping in Real-Time,” International Conference on Computer Vision (ICCV), Barcelona, Spain, Nov. 6–13, pp. 2320–2327.

[45] Newcombe, R., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A., Kohli, P., Shotton, J., Hodges, S., and Fitzgibbon, A., 2011, “KinectFusion: Real-Time Dense Surface Mapping and Tracking,” Tenth IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Basel, Switzerland, Oct. 26–29, pp. 127–136.

[46] Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., and Fitzgibbon, A., 2011, “KinectFusion: Real-Time 3D Reconstruction and Interaction Using a Moving Depth Camera,” 24th Annual ACM Symposium on User Interface Software and Technology (UIST), Santa Barbara, CA, Oct. 16–19, pp. 559–568.

[47] Toldo, R., Beinat, A., and Crosilla, F., 2010, “Global Registration of Multiple Point Clouds Embedding the Generalized Procrustes Analysis Into an ICP Framework,” International Conference on 3D Data Processing, Visualization, and Transmission (DPVT), Paris, France, May 17–20, pp. 1–8.

[48] Goodall, C., 1991, “Procrustes Methods in the Statistical Analysis of Shape,” J. R. Stat. Soc. Ser. B, 53(2), pp. 285–339.

[49] Krishnan, S., Lee, P., Moore, J., and Venkatasubramanian, S., 2005, “Global Registration of Multiple 3D Point Sets Via Optimization-on-a-Manifold,” Third Eurographics Symposium on Geometry Processing (SGP), Vienna, Austria, July 4–6, pp. 1–11.

[50] Rusinkiewicz, S., and Levoy, M., 2001, “Efficient Variants of the ICP Algorithm,” IEEE Third International Conference on 3D Digital Imaging and Modeling, Quebec City, QC, Canada, May 28–June 1, pp. 145–152.

[51] Wedin, P. A., and Viklands, T., 2006, “Algorithms for 3-Dimensional Weighted Orthogonal Procrustes Problems,” Umeå University, Umeå, Sweden, Technical Report No. UMINF-06.06.

[52] Davies, S., 2007, “Watching Out for the Workers [Safety Workstations],” IET Manuf., 86(4), pp. 32–34.

[53] ISO, 2011, “Robots and Robotic Devices: Safety Requirements for Industrial Robots—Part 1: Robots,” International Organization for Standardization, Geneva, Switzerland, Standard No. ISO 10218-1:2011.

[54] ISO, 2011, “Robots and Robotic Devices: Safety Requirements for Industrial Robots—Part 2: Robot Systems and Integration,” International Organization for Standardization, Geneva, Switzerland, Standard No. ISO/FDIS 10218-2:2011.

[55] Banerjee, A. G., Barnes, A., Kaipa, K. N., Liu, J., Shriyam, S., Shah, N., and Gupta, S. K., 2015, “An Ontology to Enable Optimized Task Partitioning in Human-Robot Collaboration for Warehouse Kitting Operations,” Proc. SPIE, 9494, p. 94940H.

[56] Kaipa, K. N., Kankanhalli-Nagendra, A. S., Kumbla, N. B., Shriyam, S., Thevendria-Karthic, S. S., Marvel, J. A., and Gupta, S. K., 2016, “Addressing Perception Uncertainty Induced Failure Modes in Robotic Bin-Picking,” Rob. Comput.-Integr. Manuf., 42, pp. 17–38.

[57] Kaipa, K. N., Shriyam, S., Kumbla, N. B., and Gupta, S. K., 2016, “Resolving Occlusions Through Simple Extraction Motions in Robotic Bin-Picking,” ASME Paper No. MSEC2016-8661.
