This paper presents a framework for building hybrid cells that support safe and efficient human–robot collaboration during assembly operations. Our approach allows asynchronous collaboration between the human and the robot: the human retrieves parts from a bin and places them in the robot's workspace, while the robot picks up the placed parts and assembles them into the product. We present the design details of the overall framework, comprising three modules: plan generation, system state monitoring, and contingency handling. We describe system state monitoring and present a characterization of the part tracking algorithm. We report results from human–robot collaboration experiments using a KUKA robot and a three-dimensional (3D)-printed mockup of a simplified jet-engine assembly to illustrate our approach.
Introduction
Factories of the future will be expected to produce increasingly complex products, demonstrate flexibility by rapidly accommodating changes in products or volumes, and remain cost competitive by controlling capital and operational costs. Networked machines with built-in intelligence will become the backbone of these factories. Realizing complete automation that meets all three of these requirements does not appear feasible in the foreseeable future. Achieving flexibility at low cost therefore means that humans will continue to play a vital role in the operation of the factories of the future; their role, however, will shift from performing routine tasks to performing challenging tasks that are difficult to automate.
Humans and robots have complementary strengths in performing assembly tasks. Humans offer versatility, dexterity, the ability to perform in-process inspection, and the ability to handle contingencies and recover from errors. However, they have limitations in terms of consistency, labor cost, payload size/weight, and operational speed. In contrast, robots can perform tasks at high speeds while maintaining precision and repeatability, can operate for long periods of time, and can handle high payloads. However, current robots have the limitations of high capital cost, long programming times, and limited dexterity.
Owing to the reasons discussed above, small batch and custom production operations predominantly use manual assembly. The National Association of Manufacturers estimates that the U.S. has close to 300,000 small and medium manufacturers (SMMs), representing a very important segment of the manufacturing sector. As we move toward shorter product life cycles and customized products, the future of manufacturing in the U.S. will depend upon the ability of SMMs to remain cost competitive. High labor costs make it difficult for SMMs to remain competitive in high-wage markets, so they need ways to reduce labor costs. Setting up purely robotic cells is not an option for them, as such cells do not provide the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical proximity is a potential solution. However, current-generation industrial robots pose safety risks to humans, so physical separation has to be maintained between humans and robots. This is typically accomplished by installing the robot in a cage. For the robot to be operational, the cage door has to be locked and an elaborate safety protocol has to be followed to ensure that no human operator is present in the cage. This makes it very difficult to design assembly cells where humans and robots can collaborate effectively.
In this paper, we design and develop a framework for hybrid cells that support safe and efficient human–robot collaboration during assembly operations. Our prior work on this topic focused on the problem of ensuring safety during human–robot collaborations inside a hybrid cell by developing a human-monitoring system and precollision robot control strategies [1]. The specific contributions of this work include:
- (1)
Details on the interaction between different system components of the human–robot collaboration framework
- (2)
New part-tracking system that augments the state-monitoring capability of the hybrid cell significantly. The part-tracking system enables efficient monitoring of the assembly operations by detecting whether the correct part is being picked by the human and whether it is placed at the correct location/orientation in front of the robot.
- (3)
New experimental results consisting of a collaboration between a human and a KUKA robot to assemble a three-dimensional (3D)-printed mockup of a simplified jet-engine. These experiments also demonstrate how the part-tracking system, combined with the human-instruction module, enables replanning of assembly operations on-the-fly.
Preliminary works related to this paper were presented in Refs. [2] and [3]. Several works in the human–robot collaboration literature have compared different modes of collaboration [4–7]. Since this paper is mainly focused on the part estimation system, we present quantitative results on this topic; more system-level comparative results are outside the scope of this paper. Recent advances in safer industrial robots [8–10] and exteroceptive safety systems [1,11] create a potential for hybrid cells where humans and robots can work side-by-side, without being separated from each other by physical cages. However, realizing this goal is challenging. Humans might accidentally get in the way of the robot; therefore, the robot must be able to execute appropriate collision avoidance strategies. Humans are prone to making errors and to doing operations differently; therefore, the robot must be able to replan in response to unpredictable human behavior and modify its motion accordingly. The robot must also be able to communicate the error to the human.
We consider a one-robot one-human model that exploits the complementary strengths of the two agents. The human identifies a part from a bin of multiple parts, picks it, and places it in front of the robot. The robot then picks up the part and assembles it. The human also assists the robot in critical situations by performing dexterous fine manipulation tasks required during part placement. A state monitoring system maintains knowledge about the progress of the assembly tasks and provides additional information to the human operator if needed. After placing the part in front of the robot, the human can proceed with executing the next task instruction rather than waiting until the robot finishes its intended task. The robot also replans and adaptively responds to different human actions (e.g., the robot pauses if the human accidentally comes very close to it, waits if the human places an incorrect part in front of it, etc.). All these features result in asynchronous collaboration between the robot and the human. An overview of the hybrid cell is shown in Fig. 1.
Related Work
Supporting Human Operations in the Assembly Cell.
Recent advances in information visualization and human–computer interaction have given rise to different approaches to the automated generation of instructions that aid humans in assembly, maintenance, and repair. Heiser et al. [12] derived principles for generating assembly instructions based on insights into how humans perceive the assembly process. They compared the instructions generated by their system with factory-provided and hand-designed instructions to show that instruction generation informed by cognitive design principles significantly reduces assembly time. Dalal et al. [13] developed a knowledge-based system that generates temporal multimedia presentations; the content included speech, text, and graphics. Zimmerman et al. [14] developed web-based delivery of instructions for inherently 3D construction tasks. They tested the instructions generated by their approach by using them to build paper-based origami models. Kim et al. [15] used recent advances in information visualization to evaluate the effectiveness of visualization techniques for schematic diagrams in maintenance tasks.
Several research efforts have indicated that instruction presentation systems can benefit from augmented reality techniques. Kalkofen et al. [16] integrated exploded view diagrams into augmented reality. The authors developed algorithms to compose visualization images from exploded/nonexploded real world data and virtual objects. Henderson and Feiner [17] developed an augmented reality system for a mechanic performing maintenance and repair tasks in a field setting. The authors carried out a qualitative survey to show that the system enabled easier task handling. Dionne et al. [18] developed a model of automatic instruction delivery to guide humans in virtual 3D environments. Brough et al. [19] developed Virtual Training Studio, a virtual environment-based system that allows (i) training supervisors to create instructions and (ii) trainees to learn assembly operations in a virtual environment. A survey of virtual environment-based assembly training can be found in Ref. [20].
Assembly Part Recognition.
The increasing availability of 3D sensors such as laser scanners, time-of-flight cameras, stereo cameras, and depth cameras has stimulated research in the intelligent processing of 3D data. Object detection and pose estimation is a vast area of research in computer vision. In the past decade, researchers have focused on designing robust and discriminative 3D features to find reliable correspondences between 3D point sets [21–24]. Very few approaches are available for object detection based on feature correspondences when scenes are characterized by clutter and occlusion [25–27]. In addition, these methods cannot deal with the presence of multiple instances of a given model, which is also the case with bag-of-3D-features methods [28–31] (refer to Ref. [32] for a survey on this topic). Feature-free approaches have also been developed based on the information available from depth cameras. The use of depth cameras became popular after the introduction of the low-cost Kinect technology. The Kinect camera provides good-quality depth sensing by using a structured light technique [33] to generate 3D point clouds in real time. Approaches based on local shape descriptors are expected to perform better [25,26] in environments with many objects that have different shapes. However, these approaches do not work in the presence of symmetries and objects with similar shapes.
System Overview
The hybrid cell will operate in the following manner:
- (1)
The cell planner will generate a plan that will provide instructions for the human and the robot in the cell.
- (2)
Instructions for the human operator will be displayed on a screen in the assembly cell.
- (3)
The human will be responsible for retrieving parts from bins and bringing them within the robot's workspace.
- (4)
The robot will pick up parts from its workspace and assemble them into the product.
- (5)
If needed, the human will perform the dexterous fine manipulation to secure the part in place in the product.
- (6)
The human and robot operations will be asynchronous.
- (7)
The cell will be able to track the human, the locations of parts, and the robot at all times.
- (8)
If the human operator makes a mistake in executing an assembly instruction, replanning will be performed to recover from that mistake. Appropriate warnings and error messages will be displayed in the cell.
- (9)
If the human comes close enough to the robot to risk a collision, the robot will execute a collision avoidance strategy.
The overall framework used to achieve the above list of hybrid cell operations consists of the following three modules:
Plan generation. We should be able to automatically generate plans in order to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses the following two challenges. First, generating precedence constraints for complex assemblies is challenging. The complexity can arise from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. In Sec. 4, we present methods for automatically generating plans for the operation of hybrid cells.
System state monitoring. We need to monitor the state of the assembly operations in the cell to ensure error-free operations. We present methods for real-time tracking of the parts, the human operator, and the robot in Sec. 5.
Contingency handling. Contingency handling involves collision avoidance between the robot and the human, replanning, and warning generation. In Sec. 6.1, we describe how the state information is used to take appropriate measures to ensure human safety when a planned move by the robot may compromise it. If the human makes an error in part selection or placement and the error goes undetected, it can lead to a defective product and inefficient cell operation. Human error can occur due to confusion caused by poor instructions or due to the human not paying adequate attention. In Sec. 6.2, we describe how the part tracking information is used to automatically generate instructions for taking corrective actions if a human operator deviates from the selected plan. Corrective actions involve replanning, if it is possible to continue the assembly from the current state, or issuing warning instructions to undo the task.
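To make the interplay between the three modules concrete, the following minimal C++ sketch (our own illustration; the types and decision logic are assumptions, not the cell's actual interfaces) shows how the monitoring output could drive contingency decisions once per control cycle.

```cpp
// Illustrative only: hypothetical interfaces showing how state-monitoring
// output could drive the contingency-handling decisions described above.
#include <string>

struct CellState {
  bool humanTooClose;        // from the human-tracking system (Sec. 5)
  bool wrongPartPlaced;      // from the part-tracking system (Sec. 5)
  std::string placedPart;    // label of the part detected in the robot workspace
};

enum class Action { Continue, PauseRobot, Replan, Warn };

// Called once per control cycle with the latest monitored state.
Action handleContingency(const CellState& s, bool deviationStillFeasible) {
  if (s.humanTooClose) return Action::PauseRobot;           // safety first (Sec. 6.1)
  if (s.wrongPartPlaced)
    return deviationStillFeasible ? Action::Replan          // accept and replan (Sec. 6.2)
                                  : Action::Warn;           // ask the human to undo the step
  return Action::Continue;
}
```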
Plan Generation
Assembly Sequence Generation.
Careful planning is required to assemble complex products [34–36]. Precedence constraints among assembly operations must be used to guide feasible assembly sequence generation. We utilize a method developed in our earlier works [37,38] that automatically detects part interaction clusters revealing the hierarchical structure in a product. This allows the assembly sequencing problem to be solved over part sets at multiple levels of the hierarchy. A 3D CAD model of the product, with the individual parts in their assembled configuration, is used as the input to the algorithm. Our approach described in Ref. [38] combines motion planning and part interaction clusters to generate assembly precedence constraints. We assume that the largest part PartL of the assembly guides the assembly process. Therefore, this part is extracted from the CAD model and kept aside. Next, spatial k-means clustering is used to group the remaining parts into k part sets. Accordingly, the assembly comprises k + 1 part sets (PartL, PartSet1, PartSet2,…, PartSetk) in the first step. The assembleability of this new assembly is then verified. This is achieved by using motion planning to find the part sets that can be removed from the assembly. These part sets are removed from the assembly and added to a new disassembly layer. Again, we find the part sets that can be removed from the simplified assembly. These part sets are removed from the assembly and added to the second disassembly layer. If this process halts before all part sets are removed, the method goes back to the first step, where the number of clusters is incremented by one; this results in a different grouping of k + 1 new clusters. This cycle is repeated until all disassembly layers are identified. Next, the above process is recursively applied to find disassembly layers for each part set identified in the previous step. The information extracted in this way is used to generate a list of assembly precedence constraints among part sets, which can be used to generate feasible assembly sequences for each part set and the whole assembly. More details on the principal techniques (motion planning, generation of disassembly layers, and spatial partitioning-based part interaction cluster extraction), the corresponding algorithms used to implement the above approach, and test results on a wide variety of assemblies can be found in Ref. [38].
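As an illustration of the spatial clustering step, the sketch below groups part centroids into k part sets with a plain k-means loop. This is our own simplified example (the seeding, data structures, and convergence handling are assumptions), not the implementation from Ref. [38].

```cpp
// Hypothetical sketch: spatial k-means over part centroids, used to group the
// remaining parts into k part sets before assembleability is checked with
// motion planning. Assumes there are at least k parts; the first k centroids
// seed the cluster means.
#include <array>
#include <limits>
#include <vector>

using Point3 = std::array<double, 3>;

static double sqDist(const Point3& a, const Point3& b) {
  double d = 0.0;
  for (int i = 0; i < 3; ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
  return d;
}

// Returns a cluster label (0..k-1) for each part centroid.
std::vector<int> spatialKMeans(const std::vector<Point3>& centroids, int k,
                               int maxIters = 100) {
  std::vector<Point3> means(centroids.begin(), centroids.begin() + k);  // seeds
  std::vector<int> label(centroids.size(), 0);
  for (int it = 0; it < maxIters; ++it) {
    // Assignment step: attach every part to its nearest cluster mean.
    for (size_t i = 0; i < centroids.size(); ++i) {
      double best = std::numeric_limits<double>::max();
      for (int c = 0; c < k; ++c) {
        double d = sqDist(centroids[i], means[c]);
        if (d < best) { best = d; label[i] = c; }
      }
    }
    // Update step: recompute each cluster mean from its members.
    std::vector<Point3> sum(k, {0.0, 0.0, 0.0});
    std::vector<int> count(k, 0);
    for (size_t i = 0; i < centroids.size(); ++i) {
      for (int d = 0; d < 3; ++d) sum[label[i]][d] += centroids[i][d];
      ++count[label[i]];
    }
    for (int c = 0; c < k; ++c)
      if (count[c] > 0)
        for (int d = 0; d < 3; ++d) means[c][d] = sum[c][d] / count[c];
  }
  return label;
}
```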
Instruction Generation.
The human worker inside the hybrid cell follows a list of instructions to perform assembly operations. Poor instructions, however, can lead the human to commit assembly mistakes. We address this issue by utilizing an instruction generation system developed in our previous work [39] that creates effective and easy-to-follow assembly instructions for humans. A linearly ordered assembly sequence (the result of Sec. 4.1) is given as input to the system. The output is a set of multimodal instructions (text, graphical annotations, and 3D animations) that are displayed on a screen. Text instructions are composed using simple verbs such as Pick, Place, Position, Attach, etc. As mentioned in Sec. 4.1, we compute a feasible assembly sequence directly from the given 3D CAD model of the jet-engine assembly. Therefore, the following assembly sequence is input to the instruction generation system:
- (1)
Pick up FRONT SHROUD SAFETY
- (2)
Place FRONT SHROUD SAFETY on ASSEMBLY TABLE
- (3)
Pick up MAIN FAN
- (4)
Place MAIN FAN on ASSEMBLY TABLE
- (5)
Pick up SHROUD
- (6)
Place SHROUD on ASSEMBLY TABLE
- (7)
Pick up FRONT SHAFT
- (8)
Place FRONT SHAFT on ASSEMBLY TABLE
- (9)
Pick up FIRST COMPRESSOR
- (10)
Place FIRST COMPRESSOR on ASSEMBLY TABLE
- (11)
Pick up SECOND COMPRESSOR
- (12)
Place SECOND COMPRESSOR on ASSEMBLY TABLE
- (13)
Pick up REAR SHAFT
- (14)
Place REAR SHAFT on ASSEMBLY TABLE
- (15)
Pick up SHELL
- (16)
Place SHELL on ASSEMBLY TABLE
- (17)
Pick up REAR BEARING
- (18)
Place REAR BEARING on ASSEMBLY TABLE
- (19)
Pick up EXHAUST TURBINE
- (20)
Place EXHAUST TURBINE on ASSEMBLY TABLE
- (21)
Pick up COVER
- (22)
Place COVER on ASSEMBLY TABLE
Figure 3 shows the instructions used by the system for some of the assembly steps. Humans may get confused about which part to pick when two parts look similar to each other. To address this problem, we utilize a part identification tool developed in Ref. [39] that automatically detects such similarities and presents the parts in a manner that enables the human worker to select the correct part. For this purpose, a similarity metric between two parts was constructed based on attributes such as part volume, surface area, types of surfaces, and curvature [40,41].
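The following sketch shows one way such an attribute-based similarity score could be computed. The attributes follow the list above, but the weights and normalization are our own illustrative assumptions, not the metric of Refs. [40,41].

```cpp
// Hypothetical part-similarity score built from volume, surface area, a
// surface-type histogram, and curvature. Weights are illustrative only.
#include <algorithm>
#include <array>
#include <cmath>

struct PartAttributes {
  double volume;                    // mm^3
  double surfaceArea;               // mm^2
  std::array<double, 4> surfTypes;  // fractions: planar/cylindrical/conical/freeform faces
  double meanCurvature;             // aggregate curvature measure
};

// Returns a similarity in [0, 1]; 1 means the attribute vectors are identical.
double partSimilarity(const PartAttributes& a, const PartAttributes& b) {
  auto ratio = [](double x, double y) {             // symmetric ratio in (0, 1]
    return std::min(x, y) / std::max(std::max(x, y), 1e-9);
  };
  double histOverlap = 0.0;                         // overlap of surface-type histograms
  for (int i = 0; i < 4; ++i) histOverlap += std::min(a.surfTypes[i], b.surfTypes[i]);
  return 0.3 * ratio(a.volume, b.volume) +
         0.3 * ratio(a.surfaceArea, b.surfaceArea) +
         0.2 * histOverlap +
         0.2 * ratio(std::abs(a.meanCurvature), std::abs(b.meanCurvature));
}
```

Parts whose pairwise similarity exceeds a chosen threshold would then be flagged and presented side by side so the worker can pick the correct one.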
System State Monitoring
Monitoring the system state inside the hybrid cell involves tracking of the states of the robot, the human, and the part currently being manipulated by the human. We assume that the robot will be able to execute motion commands given to it, so that the assembly cell will know the state of the robot.
A human tracking system was developed in our previous works [1,11] using multiple Microsoft Kinect sensors. The system is capable of building an explicit model of the human in near real time. Human activity is captured by the Kinect sensors, which reproduce the human's location and movements virtually in the form of a simplified animated skeleton. Occlusion problems are resolved by using multiple Kinects. The output of each Kinect is a 20-joint human model. Data from all the Kinects are combined in a filtering scheme to obtain the human motion estimates. A systematic experimental analysis of factors such as the shape of the workspace, the number of sensors, the placement of sensors, and the presence of dead zones was carried out in Ref. [1].
The assembly cell state monitoring uses a discrete state-to-state part monitoring system that was designed to be robust and to reduce possible robot motion errors. A failure to correctly recognize the part and estimate its pose can lead to significant errors in the system. To ensure that such errors do not occur, the monitoring system consists of two control points: the first control point detects the part selected by the human, and the second control point detects the part's spatial transformation when it is placed in the robot's workspace. The detection of the selected part at the first control point enables the system to track the changes introduced by the human in real time and to trigger the assembly replanning and the robot motion replanning based on the new sequence. Moreover, the detection of the pose of the assembly part relative to the robot at the second control point sends feedback to the robot with a "pick and place" or "wait" flag.
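A minimal sketch of the two control points, assuming hypothetical interfaces: the first simply compares the recognized part against the part the plan expects, and the second converts the estimated pose error of the placed part into the "pick and place" or "wait" flag sent to the robot. Thresholds are illustrative assumptions.

```cpp
// Illustrative control-point logic; part labels, pose-error struct, and
// thresholds are assumptions, not the system's actual interfaces.
#include <string>

enum class RobotFlag { PickAndPlace, Wait };

struct PoseError { double translation_m; double rotation_rad; };

// Control point 1: does the recognized part match the part the plan expects?
// If not, the planner is asked whether the new sequence is still feasible.
bool controlPoint1(const std::string& recognizedPart, const std::string& expectedPart) {
  return recognizedPart == expectedPart;
}

// Control point 2: is the placed part close enough to its nominal pose for the
// robot to grasp it reliably?
RobotFlag controlPoint2(const PoseError& e) {
  const double kMaxTrans = 0.01;   // 10 mm (assumed)
  const double kMaxRot   = 0.05;   // ~3 deg (assumed)
  return (e.translation_m < kMaxTrans && e.rotation_rad < kMaxRot)
             ? RobotFlag::PickAndPlace
             : RobotFlag::Wait;
}
```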
The part monitoring system is based on a 3D mesh matching algorithm, which uses real-time 3D part registration and 3D mesh interactive refinement [42]. In order to register the assembly part in 3D, multiple acquisitions of the surface are necessary, given that a single acquisition is not sufficient to describe the object. These views are obtained by the Kinect sensors and represented as dense point clouds. The point clouds are refined in real time by a dense projective data association and a point-plane iterative closest point, both embedded in KinectFusion [43–46]. KinectFusion is used to acquire refined point clouds from both control points and for every single assembly part. In order to perform a 3D mesh-to-mesh matching, an interactive refinement revises the transformations composed of scale, rotation, and translation. Such transformations are needed to minimize the distance between the refined point cloud at time ti and the refined point cloud at the origin t0, also called the mesh model. Point correspondences are extracted from both meshes using a variation of Procrustes analysis [47–49] and then compared with an iterative closest point algorithm [50]. Details of the 3D mesh matching algorithm follow.
Three-Dimensional Mesh Matching Algorithm.
Three-dimensional vision measurements produce 3D coordinates of the relevant object or scene with respect to a local coordinate system. 3D point cloud registration transforms multiple data sets into the same coordinate system. Currently, there is no standard method for the registration problem and the performance of the algorithms is often related to preliminary assumptions.
Consider a point cloud representation of a rigid object with a set of n points X = {x1,…, xn} that is subject to an orthogonal rotation R and a translation t. The goal is then to fit the set of points X to a given point cloud representation of the same object or scene with n points Y = {y1,…, yn} under the choice of an unknown rotation R, an unknown translation t, and an unknown scale factor s. We can represent several configurations of the same object in a common space by maximizing the goodness-of-fit criterion. We do this with the aid of three high-level transformations: (1) translation (move the centroids of each configuration to a common origin), (2) isotropic scaling (shrink or stretch each configuration isotropically to make them as similar as possible), and (3) rotation/reflection (turn or flip the configurations in order to align the point clouds).
Algorithm 1: WEOPA fitting algorithm

Input:
  X = {x1, x2,…, xn} (reference point cloud)
  Y = {y1, y2,…, yn} (target point cloud)
  Initial transformation values R0, t0, s0
Output:
  R (rotation), t (translation), s (scale)

k = 0, Δ = 10^−9, Δk = Δ + 1;
while Δk > Δ do
  if the Hessian is positive definite then
    Compute a Newton search direction;
  else
    Compute a Gauss–Newton search direction;
  end if
  Update R, t, and s
  k = k + 1
  Update the fitting error Δk
end while
Return R, t, s
The fitting error is measured using the Frobenius matrix norm of the residual between the transformed reference point cloud (X after applying s, R, and t) and Y. The pseudocode of the WEOPA algorithm, which computes a solution to this orthogonal Procrustes problem, is given in algorithm 1.
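For intuition, the sketch below computes s, R, and t in closed form for the simpler unweighted case with known one-to-one point correspondences, using the standard SVD-based Procrustes solution. This is our own illustration; it is not the weighted Newton/Gauss–Newton solver of algorithm 1.

```cpp
// Closed-form similarity fit: find s, R, t minimizing sum_i ||y_i - (s*R*x_i + t)||^2
// for corresponding point sets, via centroids and an SVD of the cross-covariance.
#include <Eigen/Dense>

struct SimilarityFit { Eigen::Matrix3d R; Eigen::Vector3d t; double s; };

// X, Y are n x 3 matrices whose rows are corresponding points.
SimilarityFit fitProcrustes(const Eigen::MatrixXd& X, const Eigen::MatrixXd& Y) {
  const Eigen::RowVector3d muX = X.colwise().mean();
  const Eigen::RowVector3d muY = Y.colwise().mean();
  const Eigen::MatrixXd Xc = X.rowwise() - muX;      // centered configurations
  const Eigen::MatrixXd Yc = Y.rowwise() - muY;

  // Cross-covariance and its SVD give the optimal rotation.
  const Eigen::Matrix3d H = Xc.transpose() * Yc;
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
  if (R.determinant() < 0) {                         // avoid reflections
    Eigen::Matrix3d V = svd.matrixV();
    V.col(2) *= -1.0;
    R = V * svd.matrixU().transpose();
  }
  // Optimal isotropic scale and translation follow from the centered data.
  const double s = (Xc * R.transpose()).cwiseProduct(Yc).sum() / Xc.squaredNorm();
  const Eigen::Vector3d t = muY.transpose() - s * R * muX.transpose();
  return {R, t, s};
}
```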
Algorithm 2: Heuristic iterative-WEOPA

k = 0;
while k < number of iterations do
  R0 = random orthogonal matrix with R0^T R0 = I and det(R0) = 1;
  t0 = random translation vector;
  s0 = random scale factor;
  e := minimum computed by the WEOPA fitting algorithm with R0, t0, s0 as initial values;
  if e is a new minimum then
    Store R, t, s;
    k = 0;
  end if
  k = k + 1
end while
Return R, t, s
The WEOPA algorithm depends on a good initialization of R0, t0, and s0; therefore, it is not stable on its own. To solve this stability problem, a heuristic method was designed in Ref. [51], which we call the heuristic iterative-WEOPA (algorithm 2). R0 is initialized to the identity matrix, s0 to unit scale, and t0 to zero. This initialization is sufficient for noise-free point clouds, but most of the point clouds generated by the sensor contain noise, which shifts the centroid of the 3D point cloud far from its true position. Our algorithm deals with this problem by randomly generating orthogonal rotations, translations, and scalings as part of the initialization process. The heuristic combines these initializations with the WEOPA fitting algorithm to compute and store additional minima. When no new minimum is found after a certain number of iterations (150), the algorithm terminates. The total number of minima found is then used to draw conclusions. Moreover, experimentation showed that in most cases the algorithm found the minimum in fewer than 35 initializations. The system was developed in C++ and uses a prebuilt PCL visualization package. In addition, the sensing and reconstruction of point clouds were customized from the manufacturer's original implementation to allow quasi-real-time reconstruction and processing.
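The restart logic of algorithm 2 can be summarized in a few lines. In this sketch, the WEOPA-style local solver is passed in as a callable that draws its own random R0, t0, s0 and returns the fitting error; this is an assumption made to keep the example self-contained.

```cpp
// Random-restart heuristic: keep drawing random initializations and running the
// local solver, and stop once no new minimum has appeared for `patience`
// consecutive trials (150 in the text).
#include <functional>
#include <limits>
#include <random>

double heuristicIterativeFit(const std::function<double(std::mt19937&)>& localFit,
                             int patience = 150) {
  std::mt19937 rng(42);
  double bestError = std::numeric_limits<double>::max();
  int sinceImprovement = 0;
  while (sinceImprovement < patience) {
    double err = localFit(rng);   // draws R0, t0, s0 internally and fits
    if (err < bestError) {        // new minimum found: reset the counter
      bestError = err;
      sinceImprovement = 0;
    } else {
      ++sinceImprovement;
    }
  }
  return bestError;
}
```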
Part Tracking Results.
We created a 3D-printed jet-engine replica composed of eleven assembly parts. We selected five representative parts (shown as inputs in Fig. 4) that afford different recognition complexities to illustrate the challenges encountered during an assembly task. A block diagram of the part tracking system is shown in Fig. 4. The first step is to perform segmentation on the point cloud in order to retrieve all assembly parts. In this case, we performed a plane segmentation to find the table in the scene and considered only the clusters sitting on it. We then removed all clusters that were too small or too large in order to reduce the number of clusters and therefore the noise in the scene. After the human places the part, it is ready to be picked up by the robot. Uncertainties related to pose estimation are reduced to a small variation in the final location; that is, any attempt by the robot to pick up the part results in a successful grasp (Fig. 5(c)).
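The segmentation front end described above can be sketched with standard PCL calls: RANSAC plane fitting to find the table, removal of the plane, and Euclidean clustering with size limits. The thresholds and cluster-size bounds here are illustrative assumptions, not the values used in our system.

```cpp
// Plane segmentation + Euclidean clustering sketch using stock PCL components.
#include <vector>
#include <pcl/ModelCoefficients.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

std::vector<pcl::PointIndices> segmentParts(const Cloud::Ptr& scene) {
  // 1. Find the dominant plane (the table) with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);                    // 1 cm inlier band (assumed)
  pcl::PointIndices::Ptr plane(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.setInputCloud(scene);
  seg.segment(*plane, *coeffs);

  // 2. Remove the plane, keeping the objects sitting on it.
  Cloud::Ptr objects(new Cloud);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(scene);
  extract.setIndices(plane);
  extract.setNegative(true);                         // keep everything but the table
  extract.filter(*objects);

  // 3. Euclidean clustering; size limits prune noise and oversized blobs.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);                      // 2 cm (assumed)
  ec.setMinClusterSize(300);
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);
  return clusters;                                   // indices refer to `objects`
}
```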
Regardless of the control point, the algorithm uses the point cloud generated from the 3D CAD model as the target and compares this target against the N point clouds, or clusters, extracted from the scanned scene. This approach allows the system to evaluate the alignment error for each detected assembly part under the assumption that the minimum error belongs to the matching cluster. Once this analysis is completed, the system identifies the best matching cluster and thereby recognizes the part. Experiments showed that our iterative-WEOPA algorithm successfully detected the correct matches between point clouds obtained from scanning and point clouds generated from 3D CAD models. Cluster identification and scene labeling provide the system with a tracking mechanism to detect, and report, changes in the scene.
We compared the results with the classical iterative closest point algorithm; our algorithm performs better for every part. In order to evaluate and compare the performance of our approach, a residual error was computed as the mean square distance between each point of the current mesh and its closest point on the model mesh. After 100 iterations, very small changes were observed in the residual error. Therefore, we set 150 as the fixed number of iterations for this specific experiment. The objects considered in this study are assumed to be rigid bodies. Therefore, rotation, translation, and scaling transformations do not deform their corresponding point clouds. This allows the algorithm to use scaling as a compensatory transformation between a noisy point cloud and the point cloud generated from the CAD model. In addition, the scaling transformation evaluated at step one is also used as a termination flag. This is valid under the assumption that if the scaling transformation is above a specific threshold, then there is a high probability that the scanned part is actually different from the CAD model used for the query.
Algorithm Characterization.
A complex problem in computer vision is detecting and identifying a part among a set of similar parts. In order to test our model, we analyzed five parts that are geometrically similar. Due to the intrinsic noise and resolution of the sensor, the generated point cloud has many irregularities that can eventually affect the performance of the algorithm. Figure 6 shows the mean square error (MSE) on point correspondence for the five parts, three of which are highly similar to one another. Despite these irregularities, the algorithm was able to identify the correct part. Any MSE on point correspondence below 0.09 can be considered a true positive; Fig. 6 shows that the MSEs of the three most similar parts are below this threshold. In order to reduce the uncertainty, our algorithm uses a local comparison between parts that belong to a specific assembly. This step helps sort the parts by MSE and identify the one with the minimum MSE as the matched part. Experimental results showed that increasing the density of the point cloud improved the performance of the algorithm, in terms of MSE, up to a point after which there was no visible improvement; however, the processing time increased exponentially (Fig. 7).
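The local comparison step reduces to keeping the candidates whose MSE falls below the true-positive threshold (0.09 above) and reporting the one with the minimum MSE. A minimal sketch, with hypothetical data types, is shown below.

```cpp
// Pick the matched part as the minimum-MSE candidate below the threshold.
#include <string>
#include <vector>

struct Candidate { std::string partName; double mse; };

// Returns the matched part name, or an empty string if no candidate qualifies.
std::string identifyPart(const std::vector<Candidate>& candidates,
                         double threshold = 0.09) {
  std::string best;
  double bestMse = threshold;
  for (const auto& c : candidates) {
    if (c.mse < bestMse) {   // below threshold and better than the current best
      bestMse = c.mse;
      best = c.partName;
    }
  }
  return best;
}
```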
Contingency Handling
Collision Avoidance Between Robot and Human.
Ensuring safety in the hybrid cell via appropriate control of the robot motion is related to traditional robot collision avoidance. However, interaction scenarios in shared work cells differ significantly from classical settings. For instance, we cannot always ensure safety if the robot reacts to a sensed imminent collision by moving along alternative paths. This is primarily due to the randomness of human motion, which is difficult to estimate in advance, and the dynamics of the robot implementing such a collision avoidance strategy. These methods also increase the computational burden, as collision-free paths must be computed in real time. Velocity scaling [52] can be used to overcome these issues by operating the robot in a tri-modal scheme: the robot is in a clear (normal operation) state when the human is far away from it; when the distance between them falls below a user-specified threshold, the robot changes to a slow (same path, but reduced speed) state; and when the distance falls below a second, smaller threshold, the robot changes to a pause (stop) state.
Our approach to ensuring safety in the hybrid cell is based on the precollision strategy developed in Ref. [11]: the robot pauses its motion whenever an imminent collision between the human and the robot is detected. This is a simpler bi-modal strategy, in which the robot changes directly from clear to pause when the estimated distance is below a threshold. This stop-go safety approach conforms to the recommendations of the ISO standard 10218 [53,54]. In order to monitor the human–robot separation, the human model generated by the tracking system is augmented by fitting all pairs of neighboring joints with spheres that move as a function of the human's movements in real time. A roll-out strategy is used, in which the robot's trajectory into the near future is precomputed to create a temporal set of robot postures for the next few seconds. We then verify whether any posture in this set collides with one of the spheres of the augmented human model. The method is implemented in a virtual simulation engine developed based on the Tundra software. More details on this safety system can be found in Ref. [11].
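A minimal sketch of this bi-modal check, under assumed data types: the human is represented by the joint-fitted spheres, the roll-out supplies the robot's postures for the next few seconds as sampled link points, and the robot is switched to pause as soon as any rolled-out point penetrates any sphere.

```cpp
// Bi-modal (clear / pause) precollision check over a sphere-augmented human
// model and a precomputed roll-out of future robot postures. Types and the
// clearance margin are illustrative assumptions.
#include <array>
#include <cmath>
#include <vector>

struct Sphere { std::array<double, 3> center; double radius; };  // human joint-pair sphere
using Posture = std::vector<std::array<double, 3>>;              // sampled points on robot links

enum class RobotState { Clear, Pause };

static double dist(const std::array<double, 3>& a, const std::array<double, 3>& b) {
  double s = 0.0;
  for (int i = 0; i < 3; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
  return std::sqrt(s);
}

// `rollout` holds the robot's precomputed postures for the next few seconds.
RobotState precollisionCheck(const std::vector<Sphere>& human,
                             const std::vector<Posture>& rollout,
                             double clearance = 0.05 /* extra 5 cm margin */) {
  for (const Posture& posture : rollout)
    for (const auto& p : posture)
      for (const Sphere& s : human)
        if (dist(p, s.center) < s.radius + clearance)
          return RobotState::Pause;   // imminent collision: stop the robot
  return RobotState::Clear;
}
```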
Replanning and Warning Generation.
If a deviation from the plan is detected, the system will automatically generate plans to handle the contingency. We propose a contingency handling architecture for the hybrid assembly cell that can replan its assembly sequence in real time. This design permits a human operator to introduce adjustments or improvements into the assembly sequence in real time with little delay to the assembly cell output.
From the disassembly layers generated from the CAD model of the jet-engine assembly, we can extract the following assembly sequence: (1) front shroud safety, (2) main fan, (3) shroud, (4) front shaft, (5) first compressor, (6) second compressor, (7) rear shaft, (8) shell, (9) rear bearing, (10) exhaust turbine, and (11) cover. This assembly sequence also defines the plan for the human and the motion plan for the robot. Although the human operator and the robot handle the same assembly parts, their kinematic constraints are different and have to be considered in the assembly planning.
Consider first a scenario in which the human operator follows the system-generated assembly plan with no errors or requested adjustments. Figure 8 shows the complete process of the assembly operation. An initial assembly plan is generated before the operations begin in the hybrid assembly cell. The plan defines the sequence of the human's pick-and-place operations and the motion plan for the robot's assembly operations. A full integration among the assembly plan, the human tracking system, and the robot significantly reduces the probability of errors introduced by the robot in the cell; we ignore those errors in this work. This configuration leaves the human operator as the only agent with the capacity to introduce errors in the assembly cell. We define deviations in the assembly cell as modifications to the predefined plan. These modifications can be classified into three main categories: (1) deviations that lead to process errors, (2) deviations that lead to improvements in the assembly speed or output quality, and (3) deviations that lead to adjustments in the assembly sequence.
Deviations That Lead to Process Errors.
Deviations that lead to process errors are modifications introduced by the human operator from which no feasible assembly plan can be generated. Such errors can put the assembly cell in a state that requires costly recovery. In order to prevent this type of error, the system has to detect the modification through the registration of the assembly parts. Once the system has the information about the selected assembly part, it evaluates the error in real time by propagating the modification through the assembly plan and providing multimodal feedback (e.g., text, visual, and audible annotations). We have hand-coded several examples to illustrate the deviation described above. Following the assembly plan in our example, after placing the rear bearing, the next part to be assembled is the "exhaust turbine." Rather than following the assembly sequence, the human operator can decide to use a different sequence. For example, the human picks the "compressor" part instead of the exhaust turbine, as shown in Fig. 9(a). In order to find a feasible plan, the new assembly sequence with the compressor as the next step is evaluated in real time. Using the exploration matrix, the system determines that no feasible assembly sequence exists after this step. Therefore, the system raises an alarm and generates appropriate feedback using text annotations. This forces the human operator to fall back on the predefined assembly sequence.
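A simplified sketch of this real-time deviation check, assuming the precedence constraints are available as ordered part pairs (the exploration matrix itself is not reproduced here): a picked part is accepted only if all of its prerequisites have already been assembled; otherwise a warning is raised and the predefined sequence is enforced.

```cpp
// Accept a picked part only if every precedence constraint "A must precede B"
// that targets it is already satisfied. The constraint encoding is an
// illustrative assumption.
#include <set>
#include <string>
#include <utility>
#include <vector>

using Constraint = std::pair<std::string, std::string>;  // first must precede second

bool deviationIsFeasible(const std::string& pickedPart,
                         const std::set<std::string>& alreadyAssembled,
                         const std::vector<Constraint>& constraints) {
  for (const auto& c : constraints)
    if (c.second == pickedPart && alreadyAssembled.count(c.first) == 0)
      return false;   // a prerequisite of the picked part is still missing
  return true;        // feasible: accept the human's adjustment and replan
}
```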
Deviations That Lead to Improvement.
Every modification to the master assembly plan is detected and evaluated in real time. The initial assembly plan is only one of many feasible plans. A modification to the assembly plan that generates another valid feasible plan is classified as an improvement. These modifications are accepted, giving human operators the ability and authority to use their experience to produce better plans. This process helps the system evolve and adapt quickly using the contributions made by the human agent. Following the assembly sequence, the next part to be assembled is the "front shaft." The human operator decides, based on his/her previous experience, that placing the "first compressor" next will improve the performance of the assembly process. The first compressor is selected and the step is evaluated in real time. The system discovers that the modified sequence is also a feasible assembly sequence. Therefore, the step is accepted and the human is prompted to continue with the assembly operation. The updated assembly sequence becomes: (1) front shroud safety, (2) main fan, (3) shroud, (4) first compressor, (5) front shaft, (6) second compressor, (7) rear shaft, (8) shell, (9) rear bearing, (10) exhaust turbine, and (11) cover.
The most important feature of the framework is that the hybrid assembly cell not only accepts the modification in the assembly sequence, but also adapts its configuration in order to complete the assembly process.
Deviations That Lead to Adjustment.
Adjustments in the assembly process may occur when the assembly cell can easily recover from an error introduced by the human by requesting additional interaction to fix it. Assuming that the human operator is following the predefined assembly sequence, the next part to be assembled is the front shaft. The system recognizes the assembly part and validates the step; therefore, the part can be moved and placed at the intermediate location. A common mistake in assembly part placement is a wrong pose (a rotational and translational transformation that deviates from the required pose). The system informs the human about the mistake and prompts him/her to correct it. It verifies the poses of the assembly parts at the intermediate location in real time and requires the human operator to place the part at the right location before the assembly process can continue. Once the assembly part is located in the right position and orientation, the assembly process resumes.
Conclusions
We presented the design details of a framework for hybrid cells that support safe and efficient human–robot collaboration during assembly operations. We presented an approach for monitoring the state of the hybrid assembly cell during assembly operations. The discrete state-to-state part monitoring was designed to be robust and to reduce possible robot motion errors. While the assembly operations are performed by the human and the robot, the system constantly sends feedback to the human operator about the performed tasks. This constant feedback, in the form of 3D animations, text, and audio, helps reduce the training time and eliminate the possibility of assembly errors. We will conduct experiments to quantitatively demonstrate these benefits of the proposed method in the future. A Microsoft Kinect sensor, which has an effective range of approximately 1 to 4 m, was used for both part monitoring and human monitoring. Therefore, the monitoring equipment can be placed sufficiently far from the robot without affecting its normal working process. We carried out a detailed sensor placement analysis for the human-monitoring system in Ref. [1]; we will carry out a similar placement analysis of the part-monitoring system in the future. The proposed method uses a precollision strategy to predict the human's impending collision with the robot and pause its motion. We will complement this capability in the future by exploiting the KUKA robot's inbuilt force sensing and impedance control features to implement compliant control for handling postcollision scenarios. In our previous work, we developed other modules, including an ontology for task partitioning in human–robot collaboration for kitting operations [55] and methods for resolving perception uncertainties [56] and occlusions [57] in robotic bin-picking in hybrid cells. Future work consists of investigating how to integrate them into the development of hybrid work cells for assembly applications.
Funding Data
National Science Foundation (Grant Nos. 1634431 and 1713921).