Advances in Industrial Robot Intelligence

Using vision and tactile sensing, intelligent robots can now perform industrial tasks that were not possible just a few years ago. Ongoing cost reductions have made it practical to deploy robots for many more of these tasks.


September 18, 2006
URL:http://www.drdobbs.com/embedded-systems/advances-in-industrial-robot-intelligenc/193001587

According to the Robotic Industries Association, the North American robotics industry grew at an average annual rate of 20% from 2003 to 2005. Given a relatively soft automotive market and increased pressure from overseas manufacturers, how has this strong growth occurred? One factor is an ongoing trend of cost reductions: the price of both robots and overall turnkey systems has continued to decline. Also driving the growth is the continually improving performance of robots. Robots can perform tasks today that were not possible just a few years ago, and they can do more in less time, providing higher levels of productivity.

Perhaps the most important long-term trend has been the advance of robot intelligence. Since their inception, robots have had some level of intelligence for making decisions about part availability, checking whether a feature is present, detecting error conditions, and related issues. In most cases, this intelligence was based on a specific sensor detecting a specific condition.

For example, a photo eye detects that a part is present and in the correct orientation by sensing the presence or absence of a pin, detent, or other feature. The photo eye is wired to a PLC or directly to the robot controller. At the appropriate point in the robot program, the robot checks the photo eye to confirm that the part is in position and in the correct orientation before picking it up or performing some other operation.
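In control terms, that check is just a gate in front of the pick. The sketch below is a generic illustration, assuming a hypothetical controller interface in which the photo eye is wired to a digital input; the names are placeholders, not any vendor's API.

```python
# Minimal sketch of a photo-eye check before a pick.  The robot interface
# (read_digital_input, move_to, close_gripper) and the input number are
# hypothetical placeholders, not a specific vendor's API.

PART_PRESENT_INPUT = 1   # digital input the photo eye is wired to

def pick_if_present(robot, pick_position):
    """Pick the part only if the photo eye confirms it is present and oriented."""
    if not robot.read_digital_input(PART_PRESENT_INPUT):
        raise RuntimeError("part missing or mis-oriented; pick aborted")
    robot.move_to(pick_position)
    robot.close_gripper()
```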

Using a photo eye or similar sensor is a simple and reliable approach, and probably the right choice in this instance. However, opportunities for automation are not always this simple: multiple part styles may need to be handled, and the means of differentiating them may be more complex.

The parts or the manufacturing process may not lend themselves to simple conveyors. Metal parts, for example, are commonly shipped in bins, with layers separated by a slip sheet. Parts may also have complex geometries, making them more difficult to locate without the additional cost of fixtures to hold them in known positions.

Two-Dimensional Vision Location

Adding the means necessary to deal with these types of complexities has been a major barrier to the increased use of robotics in some industries. Recently this has begun to change. The technology that has had the most significant short-term impact has been two-dimensional vision systems. For info on vision, see the Automated Imaging Association.

2D vision systems consist of standard industrial cameras used to take images that are processed by the robot to make decisions on how parts should be handled. Industrial vision systems have been available for some time, but they have reached the price-performance-reliability point that allows them to be used for applications that were not feasible just a few years ago.


Figure 1: FANUC Robotics Offers Integrated Robot Vision.

A good example is using a vision system in conjunction with a robot to locate parts stacked in bins and separated by standard slip sheets. This is a common means of transporting parts from plant to plant, or even within a plant. Without a vision system, manufacturers must use formed plastic dunnage or some other means of accurately locating parts within a bin. Stackable formed plastic dunnage is relatively expensive, with the mold alone costing $60,000 to $100,000 to design and manufacture. 2D vision systems are a good alternative to formed dunnage and other costly methods of locating parts within a bin. Until recently, many issues made using a vision system difficult, including variations in part color from batch to batch, variations in the condition of the bins, and markings left on reused separator sheets. With ongoing advances in vision technology, these issues can now be overcome with good success. Today's 2D vision systems can locate most parts that stack on top of separator sheets within bins.

The typical approach to this application is to use a camera mounted over the bin to locate parts. The camera is mounted high enough so that a robot can move underneath the camera and into the bin. At the beginning of each layer, the robot processes an image of a layer of parts and determines where to pick up each part.
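The vision result is only useful once pixel coordinates are mapped into the robot's frame. Below is a minimal sketch of that step, assuming the fixed overhead camera has been calibrated against the bin plane so a planar transform suffices; the calibration numbers are purely illustrative.

```python
import numpy as np

# Hypothetical calibration for a fixed overhead camera: a 2x2 matrix
# (scale and rotation) plus an offset maps image pixels to robot XY
# coordinates (millimetres) on the plane of the current layer.
A = np.array([[0.50, 0.00],
              [0.00, 0.50]])        # mm per pixel
b = np.array([120.0, -340.0])       # robot-frame position of the image origin, mm

def pixel_to_robot(u, v):
    """Convert a part location found in the image (u, v) to robot XY in mm."""
    return A @ np.array([u, v]) + b

# Example: a part detected at pixel (812, 451)
x, y = pixel_to_robot(812, 451)
print(f"Pick at X = {x:.1f} mm, Y = {y:.1f} mm")
```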

If more precise positioning is required, a camera can also be mounted on the robot. The robot moves the camera over a part or group of parts, takes a picture, and uses this information to determine where to pick each part. When all of the parts in a layer have been removed, the robot removes the separator sheet and starts on the next layer.
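Taken together, the de-stacking cycle is a simple nested loop: locate the parts in the current layer, pick each one, lift out the slip sheet, and repeat until the bin is empty. Here is a rough sketch of that cycle, with the camera and robot calls standing in for whatever interfaces a real cell would expose.

```python
# Rough sketch of a vision-guided de-stacking cycle.  The camera and robot
# objects and their methods are hypothetical stand-ins for real interfaces.

def find_parts_in_layer(camera):
    """Return a pick pose for each part visible in the current layer."""
    return camera.locate_parts()        # hypothetical vision call

def destack_bin(robot, camera, layer_count):
    for _ in range(layer_count):
        for pose in find_parts_in_layer(camera):
            robot.pick(pose)            # hypothetical motion + gripper call
        robot.remove_slip_sheet()       # expose the next layer of parts
```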

With either a fixed camera or a robot-mounted camera, the incremental investment to add vision is significantly less than the cost of developing special dunnage or other alternatives for locating parts. Vision systems also provide greater flexibility for handling different parts on the same line or adapting to a part changeover. Automated de-stacking systems like this were cost prohibitive just a few years ago; with integrated vision, they are now feasible and affordable.


Vision technology has also enabled the use of robots by lowering the cost of the conveyors used to present parts to them. Before 2D vision systems, many parts had to be located on fixtured pallets carried by pallet conveyors. Even the simplest pallet conveyor costs $30,000, with longer runs costing more. With advances in 2D vision technology, parts can be transported from operation to operation on relatively inexpensive belt conveyors. Parts placed onto a conveyor, either by operators or by other robots, are conveyed to the robot. A camera located over the end of the conveyor first detects when a part has reached the end, stopping the conveyor drive, and then locates the part so the robot can pick it up.
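A simplified control loop for that arrangement might look like the sketch below; the conveyor, camera, and robot interfaces are hypothetical placeholders rather than any particular product's API.

```python
import time

# Simplified sketch of vision-gated part presentation on a belt conveyor:
# run the belt until the camera sees a part at the end, stop, locate the
# part, and hand its position to the robot.  All interfaces are hypothetical.

def feed_parts(conveyor, camera, robot):
    while True:
        conveyor.run()
        while not camera.part_at_end():   # wait for a part to reach the end
            time.sleep(0.05)
        conveyor.stop()
        pose = camera.locate_part()       # same camera gives the pick position
        robot.pick(pose)
```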

As with picking parts off separator sheets in bins, vision technology has advanced to the point where most parts can be identified and picked up by a robot off of a belt conveyor.

2D vision systems are great for parts that lie flat, but this is not always possible. Within the past few years, three-dimensional vision systems have become feasible for some applications where parts do not lie flat. For example, parts may stack on top of one another but shift from side to side as the stack grows. A 2D image does not provide enough information to handle this shifting.

A simple technique that has proven effective is to use laser light stripes in conjunction with a 2D camera. An overhead 2D camera provides a rough location of parts in a bin and identifies the next part to be picked. A second camera mounted on the robot works in conjunction with a laser. The robot moves the laser and camera over the next part, and the laser projects a crosshair onto a target on the part. This target could be an edge, circle, or other distinct feature. Through simple triangulation, the camera can determine the position and orientation of the part in 3D.
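The range measurement at the heart of this technique is elementary geometry. In the simplest arrangement, assume the laser beam runs parallel to the camera's optical axis at a known baseline offset; how far the projected spot appears from the image center then gives the distance to the surface directly. The numbers below are illustrative only.

```python
# Minimal laser-triangulation sketch.  Assumed geometry: the laser beam is
# parallel to the camera's optical axis, offset by a known baseline, so the
# image position of the projected spot determines the range to the surface.
# All values are illustrative.

FOCAL_LENGTH_PX = 1200.0   # camera focal length, in pixels
BASELINE_MM = 80.0         # offset between laser and camera axes, in mm

def range_from_spot(u_spot_px, u_center_px=640.0):
    """Distance (mm) to the surface the laser hits, from the spot's image column."""
    disparity_px = u_spot_px - u_center_px   # offset from the optical axis
    return FOCAL_LENGTH_PX * BASELINE_MM / disparity_px

# Example: a spot imaged 160 pixels from the image center is about 600 mm away.
print(f"Range: {range_from_spot(800.0):.0f} mm")
```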

The ultimate application is to use a 3D vision system to locate randomly oriented parts in a bin. This application presents many challenges, including parts that may be tangled with one another and the need to avoid the bin walls.

Tactile Feedback

Although vision systems are the most common form of intelligent sensor for robots, they are not the only option. Six-degree-of-freedom force sensors are commonly used to give robots tactile feedback. For high-precision assembly, force sensors guide tight-fit insertions such as inserting shafts, with or without keys, into holes. Robots with force sensors can also handle more complex assembly tasks, such as inserting gears into housings like transmissions. A gear being inserted into a clutch often needs to engage and then pass through multiple stages. The robot can be programmed, much as a person would work, to move the gear back and forth until it engages with each stage.
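The back-and-forth search described above can be written as a simple force-feedback loop. The sketch below is a generic illustration with hypothetical robot and sensor calls; the thresholds and step sizes are illustrative, not recommendations.

```python
# Generic sketch of force-guided insertion: advance in small steps, and when
# the axial force shows the gear has jammed against the next stage, wiggle it
# from side to side instead of pushing harder.  All interfaces are hypothetical.

AXIAL_JAM_FORCE_N = 20.0   # axial force that signals the gear has met resistance
INSERT_STEP_MM = 1.0
WIGGLE_STEP_MM = 0.5

def insert_with_wiggle(robot, sensor, target_depth_mm):
    depth_mm = 0.0
    side = 1
    while depth_mm < target_depth_mm:
        robot.move_relative(z=-INSERT_STEP_MM)   # push the gear in a little
        fz = sensor.read_forces()[2]             # axial component of the reading
        if abs(fz) > AXIAL_JAM_FORCE_N:
            # Jammed: work back and forth until the gear drops into the next stage.
            robot.move_relative(x=side * WIGGLE_STEP_MM)
            side = -side
        else:
            depth_mm += INSERT_STEP_MM
```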

Another example of using force sensors to give robots tactile feedback is polishing or grinding a complex contour. Traditionally this is handled with compliant devices, but these devices may not meet the tolerances required for precise applications. Adding a six-degree-of-freedom force sensor to a robot and attaching a grinding disk gives the robot the ability to maintain a constant force as the tool orientation varies, compensating for gravitational effects.
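Conceptually, the controller reads the force at the tool, subtracts the contribution of the tool's own weight at the current orientation, and nudges the tool along the contact direction to hold the remaining contact force at a setpoint. Here is a hedged sketch of that idea, with hypothetical interfaces and purely illustrative mass and gain values.

```python
import numpy as np

# Sketch of a constant-force loop for grinding or polishing a contour.
# The robot and sensor interfaces, tool mass, setpoint and gain are all
# hypothetical; sign conventions depend on the actual sensor mounting.

TOOL_MASS_KG = 1.8
GRAVITY = np.array([0.0, 0.0, -9.81])   # m/s^2, in the robot base frame
TARGET_FORCE_N = 15.0                   # desired contact force into the surface
GAIN_MM_PER_N = 0.02                    # displacement per newton of force error

def constant_force_step(robot, sensor):
    """One control step: adjust tool position to hold a constant contact force."""
    f_measured = np.array(sensor.read_forces())       # force on the tool, base frame
    push_dir = np.array(robot.tool_push_direction())  # unit vector into the surface
    # Remove the tool's weight so only the true contact force remains.
    f_contact = f_measured - TOOL_MASS_KG * GRAVITY
    force_error = TARGET_FORCE_N - f_contact @ push_dir
    # Too little force: move further into the surface; too much: back off.
    robot.move_along(push_dir, GAIN_MM_PER_N * force_error)
```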

Beyond Today's Solutions

Intelligent sensor technology has played a critical part in the successful use of robots in a variety of applications. As intelligent sensor technologies continue to advance, robots will gain even greater capabilities. Nothing will ever replace the supercomputer in each person's brain, which can make very complex distinctions, but applications once thought impractical are now common tasks for intelligent robots.

Mark Handelsman is an industry marketing manager at FANUC Robotics America Inc.
