
Automation: Science Fact

Dec. 1, 2008
Robotic 3D vision could become standard.

As you look into the future of manufacturing technology, the technology may begin to look back at you. As manufacturing demands increase, as technology expands, and as costs for robots decrease, 3D vision could become standard equipment on robots in the next several years, according to John Burg, president of Ellison Technologies Automation, a Council Bluffs, Iowa-based integrator of robotic systems.

“I believe that the technology is moving at a fast enough pace now,” he states. “Prices will decrease, ease of use will increase and customers will be demanding more of the flexibility that vision brings.”

Burg believes that robotics and 3D vision will work hand in hand in activities, such as material handling, welding and machine-tool loading. The technologies will improve efficiency, reduce manual labor costs and improve quality. “One of the reasons you install new technology is to reduce costs in the long run,” he explains.

So, why is 3D vision technology becoming more affordable? As processing power increases, and the cost of processors and memory decreases, programmers can do more than just take a single picture and match it up with a reference model, which was the basis of most early 2D vision images.

“With the processors available today, we can actually look at multiple images and analyze an object’s geometry in real time,” explains Edward Roney, manager of intelligent robotics and vision systems at Fanuc Robotics, based in Rochester Hills, Mich. Fanuc has developed what it calls the industry’s first integrated 3D system. Known as iRVision, the technology is available on all Fanuc R-30iA systems. Processes are executed from the main robot CPU, eliminating communication delays and the need for additional hardware.

Vision Evolution
Early uses of robotic vision involved robots locating complete auto bodies on automotive production lines. Without vision, special tooling had to engage the tooling holes on the body so that the robots would know where the bodies were. When vision cameras were developed, the need for this expensive tooling was eliminated. The robots were able to determine where the auto bodies were and then locate the four holes mathematically.

“From the very early days, vision was recognized as a way to reduce tooling costs,” notes Fanuc’s Roney. “Tooling is very expensive, and it is fixed. If you need to change your model next year, you have to redo all of your tooling.” According to Roney, auto plants can save millions of dollars a year on tooling and rack costs alone by using vision technology.

How Vision Works
A vision system uses algorithms to recognize what is in an image and then find the things that the robot is trained to find. An image consists of pixel data: each pixel has a gray-scale value, and the algorithm analyzes that data.

Robots can determine where an image was taken, so they can identify where an object is sitting and then make judgments about its size, type and quality compliance.
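The pixel-level matching described above can be illustrated with a minimal sketch. This is a generic sum-of-squared-differences template search, not any particular vendor's algorithm, and the image data is invented for the example:

```python
import numpy as np

def find_template(image, template):
    """Locate a small gray-scale template inside a larger image by
    sliding it over every position and scoring the match with the
    sum of squared differences (lower = better)."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = float("inf"), None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            score = np.sum((patch - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# A 6x6 "image" with a bright 2x2 blob at row 3, column 2.
img = np.zeros((6, 6))
img[3:5, 2:4] = 255
tmpl = np.full((2, 2), 255.0)
print(find_template(img, tmpl))  # (3, 2)
```

Production systems use far faster correlation methods and handle rotation and scale, but the core idea, scoring candidate positions against a trained reference, is the same.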

[Photo captions: The LR Mate 200iC picks and sorts different sizes and types of bolts delivered by an intelligent parts feeder. An M-710iC/50 with iRVision 3DL picks parts from a wheeled cart, takes them to a vision inspection station for error checking and places them on a transfer stand. The LR Mate 200iC robot picks and places randomly located computer chips. Parts for 12 assemblies are loaded onto a cart, with the correct parts for each assembly located in each slot; the M-16iB uses iRVision 3DL to locate the cart.]

They can also change the program based on the images and algorithms. For example, parts of different sizes may take different paths: part A can be dropped off at one location while part B is dropped off at another.
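The path-changing behavior described above amounts to a lookup keyed on the vision system's classification. A hedged sketch, with hypothetical part labels and drop-off coordinates:

```python
# Hypothetical drop-off routing: the vision result selects the path.
DROP_OFF = {
    "part_A": (120.0, 40.0, 10.0),   # assumed robot-frame coordinates (mm)
    "part_B": (120.0, 80.0, 10.0),
}

def route(part_label):
    """Return the drop-off position for the part the camera identified."""
    try:
        return DROP_OFF[part_label]
    except KeyError:
        raise ValueError(f"no drop-off defined for {part_label!r}")

print(route("part_A"))  # (120.0, 40.0, 10.0)
```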

“Ultimately, a robot with machine vision will be able to manipulate any part in any orientation,” explains David Dechow, president of Aptura Machine Vision Solutions in Lansing, Mich.

With an onboard camera system or a remotely mounted camera, it is possible to take a snapshot of an object and find where that object is in space, relative to the robot’s position. “The robot can then use that positional data,” states Jeremy Pennington, a controls engineer with Guide Engineering in Ft. Wayne, Ind. “It allows the robot to locate that object, no matter where you move it.”
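Turning a camera snapshot into positional data the robot can use typically involves a calibrated transform from pixel coordinates into the robot's frame. A simplified 2D sketch, with an assumed calibration matrix invented for the example:

```python
import numpy as np

def pixel_to_robot(px, py, T):
    """Map a pixel coordinate into the robot's base frame using a
    2D homogeneous transform T (found during camera calibration)."""
    p = np.array([px, py, 1.0])
    x, y, w = T @ p
    return (float(x / w), float(y / w))

# Assumed calibration: 0.5 mm per pixel, camera origin at (100, 200) mm.
T = np.array([[0.5, 0.0, 100.0],
              [0.0, 0.5, 200.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_robot(40, 60, T))  # (120.0, 230.0)
```

The same idea extends to 3D with a full pose transform; the point is that once the transform is known, the object can be found no matter where it moves in the camera's field of view.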

3D Vision Applications
While 3D vision can be used in a number of applications, some present better opportunities than others. The most beneficial results can be found in bin picking, machine-tool loading, packaging and welding.

• Bin picking. To date, the premier application for vision-guided robots has been picking randomly stacked parts from bins. This requires three elements: vision, bin avoidance and collision detection. Vision is the most obvious: Where is the part? The second element, bin avoidance, deals with the constraints imposed by the bin walls: as the robot reaches deeper into the bin, the parts become more difficult to pick. The modeling used in bin-picking applications accounts for the picking tool, the sensor and the constraints of the arm itself, so once a part is located, the robot calculates whether it can actually remove that part from the bin. The third element is collision detection. Eventually, the robot will hit a bin wall, so it needs to be able to differentiate a soft hit from a hard hit; hard contact can damage the robot.
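The soft-versus-hard distinction in the third element can be sketched as a threshold test on a joint-torque spike, a common collision-detection signal. The limits here are invented for illustration:

```python
# Assumed thresholds for distinguishing a light touch from a damaging hit.
SOFT_LIMIT = 5.0    # Nm: light contact, e.g. brushing a part or wall
HARD_LIMIT = 20.0   # Nm: stop immediately, contact could damage the arm

def classify_contact(torque_spike):
    """Classify a measured joint-torque spike during a pick attempt."""
    if torque_spike >= HARD_LIMIT:
        return "hard"    # halt motion and retract
    if torque_spike >= SOFT_LIMIT:
        return "soft"    # slow down and adjust the approach
    return "none"

print(classify_contact(3.0), classify_contact(8.0), classify_contact(25.0))
# none soft hard
```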

“We have been very successful with both structured and random bin picking,” says Fanuc’s Roney. Structured picking is where everything is facing up. In random picking, parts are in a random pile. Roney admits that the latter has more challenges, but with the three elements (vision, bin avoidance and collision detection) in place, it is possible.

• Machine-tool loading. “In many applications, we are picking up a part and loading it directly into a machine tool,” says Ellison Technologies’ Burg. “In many cases, the fixturing of that machine tool does not have the forgiveness for an out-of-location robot placement.” Accurate placement is critical so the part clamps properly; if it does not, a very costly crash can occur in the machine tool. Vision technology enables accurate placement.

• Packaging. Vision is critical for robotic packaging. For example, food products often come down a conveyor or slide down a ramp into a pickup area. There is no repeatable positioning. The products are in different positions and need to be picked up, oriented and placed in packages. Vision allows the robots to find the products and perform the required tasks.

• Welding. When welding, robots that use vision can adapt to subtle changes in the presentation of the two components that are to be welded. Even in spot welding applications, vision can be used for error proofing.

3D in Action
Ellison Technologies Automation installed a robotics system with 3D vision for processing its 4 x 2 x ¼ plates. The plates begin as bar stock, are cut to size, dropped into a bucket and then transferred into a room where robots, using a welding process, put a hard surface on the plates to make them last longer.

“The welding process tends to contaminate the gripper,” explains Burg. “If the gripper encounters a different-sized part, the new-sized part can be contaminated.” Prior to the introduction of vision technology, these adjustments had to be made manually. Now, positions can be verified on each part. “The technology allows the robot to show the part to the 3D camera, and the robot can adjust all its points,” continues Burg. “This allows it to run without manual intervention.”

For 10 years prior to the introduction of vision technology, the company had four robots, and at least one of the robots was always in need of manual adjustment. Now, the company is able to do the same amount of work with three robots and much less manual intervention.

The Future
Beyond further cost reductions and more powerful systems, what does the future hold for robotic 3D vision systems?

“There is more interest going into what is called visual servoing,” replies Fanuc’s Roney. “Right now, we think of vision being used to find an object at one point in time.” However, parts often move or sway, making a one-point-in-time snapshot invalid. Visual servoing repeatedly re-acquires the object’s position, allowing the robot to be guided to the object by constantly adapting to where it is.
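A visual-servoing loop, in contrast to a single snapshot, re-measures the target every cycle and steps toward it. A toy sketch with a simulated drifting part and a proportional step; all names and gains are hypothetical:

```python
def visual_servo(get_target, get_pose, move_toward, tolerance=1.0):
    """Closed-loop guidance: re-acquire the target every cycle and
    step toward it, so a moving part is tracked rather than missed."""
    while True:
        target = get_target()              # fresh camera measurement
        pose = get_pose()
        error = [t - p for t, p in zip(target, pose)]
        if max(abs(e) for e in error) < tolerance:
            return pose                    # close enough to act on the part
        move_toward(error)

# Toy simulation: the part drifts along x while the robot chases it.
robot_pose = [0.0, 0.0]
part_pos = [10.0, 5.0]

def get_target():
    part_pos[0] += 0.1                     # part sways on the line
    return list(part_pos)

def get_pose():
    return list(robot_pose)

def move_toward(error):
    for i, e in enumerate(error):
        robot_pose[i] += 0.5 * e           # proportional step toward target

final = visual_servo(get_target, get_pose, move_toward)
```

Because the measurement is refreshed every cycle, the loop converges on the moving part instead of arriving at a stale position.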

“This would come in handy in assembly-line situations, where assemblies are hanging from a moving drive chain or other material handling device,” suggests Roney.

Even without advancements, such as visual servoing, it is clear that processing power is increasing, and costs are decreasing. This is a combination that is likely to make 3D vision much more prevalent in the years to come.

Gerard Jackson is a Detroit-area freelance writer.
