Research Projects

Path Constrained Trajectory Planning

Path-constrained toolpaths require a robot to keep a tool center point (TCP) aligned with waypoints along the path. Waypoints can be either fully constrained or semi-constrained; a semi-constrained waypoint allows the tool to align within pre-defined position and orientation tolerances. Additionally, some processes, such as robotic finishing, use multiple TCPs. The constraints imposed by the toolpath include, but are not limited to, general end-effector constraints, collision and singularity avoidance, maintaining desired velocity and force at the tool tip, and complying with joint limits. The path constraints or the motion of the objects can be generated using traditional motion planners. However, to automate these tasks with robots, we must generate configuration-space (C-space) trajectories for the robots in addition to the motion plans for the objects. This is known as the path-constrained trajectory generation problem. Traditionally, versions of the problem that impose several path constraints are solved using graph search techniques. This approach has limitations: approximating the constraint manifold in C-space is computationally expensive, there is no established way to define path consistency, and large problems become intractable. We developed an approach that successively constructs a reduced graph using cues from the workspace and biases the start point based on the quality of the paths generated so far. The workspace heuristic uses a nearest-neighbor technique to find the pose of the flange in the workspace and then samples points in C-space. We explain why the heuristic works using the kinematic relationship established by the robot Jacobian. Compared to state-of-the-art techniques, our planner evaluates 90% fewer nodes on average and handles large problems with as many as 3000 different poses for a single waypoint.
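The graph formulation above can be illustrated with a minimal sketch: each waypoint contributes a layer of candidate C-space configurations, edges connect consecutive layers, and an edge cost penalizes joint motion. A dynamic-programming pass then extracts the cheapest path-consistent joint trajectory. The function name and the simple absolute-difference joint metric are assumptions for illustration, not the actual planner or its heuristics:

```python
# Toy layered-graph search for path-constrained trajectory generation.
# layers[i] holds candidate joint configurations (tuples) for waypoint i;
# the cost of an edge is the summed absolute joint displacement.

def min_cost_trajectory(layers):
    """Return (cost, trajectory) minimizing total joint-space motion."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    # Dynamic programming: best[j] = (cost, path) ending at config j
    # of the current layer.
    best = [(0.0, [q]) for q in layers[0]]
    for layer in layers[1:]:
        new_best = []
        for q in layer:
            cost, path = min(
                ((c + dist(p[-1], q), p) for c, p in best),
                key=lambda t: t[0],
            )
            new_best.append((cost, path + [q]))
        best = new_best
    return min(best, key=lambda t: t[0])
```

A real planner would generate each layer from IK solutions at the waypoint (pruned by the workspace heuristic) rather than enumerating them exhaustively.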

Robot Base Placement Planning

The workspace of a serial manipulator is limited, and additional complexity arises from joint-limit and self-collision-avoidance constraints. The workspace contains singularities, and it is generally preferred for a robot to carry out tasks away from them. Different locations in the workspace impose different constraints on achievable forces and velocities, and the variation can be quite large. Moreover, a continuous path may not exist between two workspace points that are far apart. Workspace characteristics can be especially complex for redundant manipulators, and unfortunately there is no simple model to capture them. Many complex manufacturing tasks, such as welding, robotic finishing, composite layup, painting, and 3D printing, use manipulators to follow continuous paths under constraints on the workpiece. It is important to find the right position and orientation of the workpiece with respect to the robot so that all task constraints are met; this is called the workpiece placement problem. We formulate the identification of a feasible placement as a non-linear optimization problem over constraint violation functions, which is computationally challenging. Our approach searches for a solution by applying the different constraints incrementally. We explain the resulting gains in the success rate of a gradient-based algorithm by developing a theory based on attractor basins: when all constraints are applied at once, an initial guess can be attracted to regions of infeasibility, whereas successive application prioritizes constraints and directs subsequent iterates toward feasible regions.
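The successive-application idea can be sketched in one dimension: minimize the summed squared constraint violations, but introduce the constraint sets in priority order, seeding each stage with the previous stage's solution. The toy violation functions, learning rate, and iteration counts below are assumptions for demonstration, not the actual task constraints or solver:

```python
# Toy sketch of successive constraint application in a gradient-based
# search over constraint violation functions. g(x) <= 0 means satisfied;
# the penalty for one constraint is max(0, g(x))**2.

def violation_gradient_step(x, constraints, lr=0.1):
    """One numerical-gradient descent step on the summed violations."""
    eps = 1e-6
    def cost(p):
        return sum(max(0.0, g(p)) ** 2 for g in constraints)
    grad = (cost(x + eps) - cost(x - eps)) / (2 * eps)
    return x - lr * grad, cost(x)

def solve_successively(x0, constraint_sets, iters=200):
    x, active = x0, []
    for cset in constraint_sets:        # add one constraint set at a time
        active = active + cset
        for _ in range(iters):
            x, c = violation_gradient_step(x, active)
            if c == 0.0:
                break                   # feasible for all active constraints
    return x

# Illustrative constraints: stay within |x| <= 2, then keep x >= 0.5.
reach = lambda x: abs(x) - 2.0
clearance = lambda x: 0.5 - x
x = solve_successively(5.0, [[reach], [clearance]])
```

Applied all at once to a harder instance, the same descent can be pulled into an infeasible attractor basin; the staged version lets the high-priority constraint shape the iterate before lower-priority constraints are switched on.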

Sequence of Mobile Base Placements

Sensors are widely used in industry to collect information about physical objects. Because a sensor's operational range is limited, the sensor must be moved around a large, complex part to capture complete information. A robotic arm can provide the degrees of freedom needed to maneuver the sensor over the complex geometry; however, the arm's workspace is also limited and cannot cover large parts. A mobile base enhances the arm's capability by carrying it around the part, but the base must relocate several times during the process. Relocating the base increases execution time and introduces localization uncertainty, since the base moves inaccurately. It is therefore important to reduce the number of base repositionings, which reduces both execution time and uncertainty. We developed a motion planner that finds the minimum number of mobile base placements such that the resulting arm trajectories can cover a large, complex part with an RGB-D camera. This planning problem, also known as optimal base sequencing, is challenging due to the immensity of the search space, and the computational cost of inverse kinematics calculations adds to the search time. We developed a branch-and-bound search algorithm with efficient branch-guiding and pruning heuristics that quickly explores the search space, along with a capability-map-based method that improves the search-space construction time. The output of our method is an optimal sequence of base placements that minimizes the number of placements and the execution time required for the process.
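At its core, choosing the fewest base placements is a covering search: each candidate placement covers the viewpoints the arm can reach from it (precomputed, e.g., via a capability map), and branch and bound prunes any partial selection that cannot beat the best solution found so far. This is a minimal sketch under those assumptions; the coverage sets, names, and the most-new-coverage branching rule are illustrative, not the actual planner:

```python
# Toy branch-and-bound search for a minimum set of base placements.
# coverage maps a placement id to the set of viewpoints reachable from it.

def min_base_placements(coverage, all_targets):
    """Return a smallest list of placements whose coverage spans all_targets."""
    best = [None, float("inf")]          # incumbent: [placements, count]

    def search(chosen, covered):
        if covered >= all_targets:       # all viewpoints covered
            if len(chosen) < best[1]:
                best[0], best[1] = list(chosen), len(chosen)
            return
        if len(chosen) + 1 >= best[1]:
            return                       # bound: cannot improve incumbent
        # Branch guiding: try placements covering the most uncovered
        # viewpoints first, so good incumbents are found early.
        for p in sorted(coverage, key=lambda p: -len(coverage[p] - covered)):
            if coverage[p] - covered:
                search(chosen + [p], covered | coverage[p])

    search([], set())
    return best[0]
```

In the real system each coverage set is expensive to build (it requires inverse kinematics over many arm configurations), which is why the capability-map construction step matters as much as the search itself.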

Robotic Composite Layup

Hand layup is a commonly used process for making composite structures from several plies of carbon-fiber prepreg. The process involves multiple human operators manipulating and conforming layers of prepreg to a mold. Manual layup is ergonomically challenging, tedious, and limits throughput; moreover, different operators may perform the process differently and thus introduce inconsistency. We are developing a smart robotic cell to automate the prepreg sheet layup process. The cell uses multiple robots to manipulate and drape sheets over a tool. As input to the system, a human expert provides the sequence in which to conform the ply and the types of end-effectors to be used. The system then automatically generates robot trajectories that achieve the specified layup. Our planning algorithms (a) generate plans to grasp and manipulate the ply and (b) generate feasible robot trajectories, and they do so in a computationally efficient manner for complex parts. We are also developing an approach for selecting and placing robots in the cell, along with a description of the tools and end-effectors needed to utilize it. We have demonstrated the automated layup through physical experiments on an industry-inspired mold using the generated plans; our system performs sheet layup at a speed comparable to human operators.

Planning Algorithms for Acquiring High-Fidelity Point Clouds for Fast and Accurate 3D Reconstruction

Sensors are widely used to construct 3D models of parts by collecting data. The accuracy of the collected data depends on the sensor's placement with respect to the part and on its operational range. These range limitations must be applied as constraints when planning the robot motions that move the sensor around the part; overly conservative constraints on sensor placement, however, lead to high execution times. We developed a robot motion planning algorithm that accounts for camera performance constraints and produces output with low error. An RGB-D camera is used to obtain a point cloud of the part, and an offline planning method improves point density in regions with zero or low density, guaranteeing a high point density across the surface of the part. We present results on six geometries of varying complexity and surface properties, as well as results on how camera parameters influence the output of our method. These algorithmic advances enable low-cost depth cameras to produce high-accuracy, uniform-density scans of physical objects.
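The density check that drives replanning can be sketched simply: flag any point of the acquired cloud whose neighborhood within a fixed radius is too sparse, so additional camera placements can be planned for those regions. The radius, threshold, and brute-force neighbor search below are assumptions for a small illustration; a real system would use a spatial index over full RGB-D scans:

```python
# Toy local-density check over a point cloud: a point is "low density"
# if it has fewer than min_neighbors other points within `radius`.

def low_density_points(points, radius=1.0, min_neighbors=2):
    """Return the points whose local neighborhood is too sparse."""
    r2 = radius * radius
    flagged = []
    for i, p in enumerate(points):
        n = sum(
            1 for j, q in enumerate(points)
            if i != j and sum((a - b) ** 2 for a, b in zip(p, q)) <= r2
        )
        if n < min_neighbors:
            flagged.append(p)
    return flagged
```

The flagged regions become the targets for the next batch of sensor placements, which is how the method drives density toward uniformity across the surface.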

Robotic Supportless Additive Manufacturing

Extrusion-based additive manufacturing systems usually use a three-degrees-of-freedom extrusion tool to perform the deposition operation. This requires support structures when depositing parts with overhang features. The need for support structures can be avoided by adding degrees of freedom to the build platform; eliminating them reduces build time and removes post-processing costs. Our work demonstrated that a three-degrees-of-freedom build platform enables printing of complex shapes without support structures. We presented computational foundations for generating paths and trajectories that synchronize the motion of the three-degrees-of-freedom build platform with the three-degrees-of-freedom extrusion tool. We reported results on six test parts in terms of reduction in build time, accuracy, and surface roughness. The parts had an average accuracy of 0.193 mm and a maximum error of about 0.9 mm using a 0.8 mm extrusion tool tip for material deposition, and build time was reduced by 88% compared to printing with supports.
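One ingredient of synchronizing the two subsystems is reorienting the part so the local deposition direction stays gravity-aligned while the tool handles translation. This is a minimal sketch of that reorientation under an assumed x-then-y tilt convention for unit normals; it is an illustration, not the actual trajectory generator:

```python
import math

# Toy sketch: given the unit surface normal of the region being printed,
# compute the two platform tilt angles that rotate the normal to +z so
# deposition is vertical. The tool then only needs to translate.

def platform_tilt(normal):
    """Return (rx, ry): tilt about x, then y, mapping unit `normal` to +z."""
    nx, ny, nz = normal
    rx = math.atan2(ny, nz)        # tilt about x zeroes the y component
    nz2 = math.hypot(ny, nz)       # z component remaining after the x tilt
    ry = -math.atan2(nx, nz2)      # tilt about y zeroes the x component
    return rx, ry
```

A trajectory generator would evaluate this along the toolpath and time-synchronize the platform's rotary axes with the extrusion tool's translation.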

Human Robot Collaboration on Assembly Operations

Factories of the future are expected to produce increasingly complex products, demonstrate flexibility by rapidly accommodating changes in products or volumes, and remain cost-competitive by controlling capital and operational costs. Humans and robots have complementary strengths in performing assembly tasks. Humans offer versatility, dexterity, in-process inspection, contingency handling, and error recovery, but have limitations in consistency, payload size and weight, and operational speed. In contrast, robots can perform tasks at high speed while maintaining precision and repeatability, operate for long periods of time, and handle high payloads; however, they currently require long programming times and have limited dexterity. We are developing a framework for building assembly cells that support safe and efficient human-robot collaboration during assembly operations. Our approach allows asynchronous collaboration between the human and the robot: the human retrieves parts and places them in the robot's workspace, while the robot picks up the placed parts and assembles them into the product. We are developing technologies for automated plan generation, system state monitoring, and contingency handling.