Vision-Guided Robots Tighten Tolerances
December 31, 2002
Adil Shafi, president of robot user-interface software developer Shafi Inc., says users will benefit greatly from robot-vision systems. “What they get is consistency over manual [processes],” says Shafi, who cites the case of a major automotive lighting supplier that managed to reduce gate-flash tolerances to within 0.2 mm. The primary benefits of vision-equipped robots include labor savings, throughput consistency, reduction of workplace injuries, flexibility, and alleviation of retooling.
“You can install [a robot] with vision for not that much more,” says Joseph Portelli, plastics program manager for Fanuc Robotics, Rochester Hills, MI. Prices have come down dramatically since the early days of vision-guided robots.
Improved usability is an important factor in encouraging the adoption of vision-guided robots. “Most of the [recent] development has been towards ease of use,” says Portelli. “Vision systems have been notoriously difficult to use.”
“Ease of use to an engineer is very different from ease of use to a non-technical person,” Shafi says. “Ease of use has to do with how much support [is needed] — how many support calls come from a workcell, how many times technicians have to be sent out.”
Over and above wizards for specific tasks, Shafi says an essential component of usability is the availability and quality of diagnostic tools. Good, easily navigable diagnostic tools can make machine operation much simpler. “Often, the information is there,” Shafi says. “It’s just [hard] to find.
“We don’t expect plants to raise their level of expertise,” Shafi notes, since that would make the robotic system more expensive to the processor.
With Shafi’s Reliabot software, the Cognex vision system by Cognex Corp., Natick, MA, obtains a picture from one or more cameras. Then, the software finds and recognizes objects and features in each picture, and converts the 2-D coordinates of each object or feature into 3-D robot coordinates and orientation. A 3-D pick offset is applied, which allows objects to be picked up off-center, and the final 3-D robot coordinates and orientation are passed to the robot over a serial or Ethernet line. Then, the robot moves to the object and performs the desired operation. The Reliabot software is written to perform all of these procedures automatically, except for the application-specific robot action on the object.
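The pipeline described above — locate a part in the image, map its 2-D position to 3-D robot coordinates, then apply a pick offset — can be sketched in a few lines. This is not Reliabot’s actual API; the function names, the assumption that the part lies on a known flat plane, and the simple scale-based camera model are all illustrative.

```python
from dataclasses import dataclass


@dataclass
class Pose3D:
    """A robot target: position in mm plus in-plane rotation in degrees."""
    x: float
    y: float
    z: float
    rz: float


def pixel_to_robot(u, v, theta_deg, mm_per_px, origin_mm, plane_z_mm):
    """Map a 2-D image detection (pixel coordinates plus part rotation)
    to a 3-D robot pose, assuming the part rests on a plane of known
    height and the camera scale has been calibrated. Hypothetical model."""
    x = origin_mm[0] + u * mm_per_px
    y = origin_mm[1] + v * mm_per_px
    return Pose3D(x, y, plane_z_mm, theta_deg)


def apply_pick_offset(pose, offset_mm):
    """Add a fixed 3-D pick offset so the gripper can grasp off-center."""
    return Pose3D(pose.x + offset_mm[0],
                  pose.y + offset_mm[1],
                  pose.z + offset_mm[2],
                  pose.rz)


# Part found at pixel (100, 50), rotated 30 deg; pick it 5 mm off-center
# and 10 mm above the part plane, then hand the pose to the robot.
target = apply_pick_offset(
    pixel_to_robot(100, 50, 30.0, 0.5, (200.0, -100.0), 25.0),
    (5.0, 0.0, 10.0))
```

In the real system the resulting pose would then be serialized and sent to the robot controller over the serial or Ethernet link, which is the step this sketch omits.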
Three-dimensional images are possible by taking pictures from different angles with a single arm-mounted camera. “If the camera is stationary, [the software] does more math” to get a 3-D image, Shafi explains. On a multicamera 3-D vision system, a third, redundant camera verifies the integrity of the system and serves as an instant backup: when data from one camera are not available, the system automatically uses data from the third camera.
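The failover behavior of the redundant third camera amounts to simple view selection: stereo 3-D needs at least two usable images, and the spare camera fills in when a feed drops out. A minimal sketch, with hypothetical names and `None` standing in for a failed camera feed:

```python
def select_views(frames):
    """Pick two usable camera frames for stereo triangulation.
    frames is a list of three captures; a failed camera yields None.
    The third camera acts as a hot spare: if a primary feed is down,
    its frame is used instead. Illustrative sketch, not a vendor API."""
    usable = [f for f in frames if f is not None]
    if len(usable) < 2:
        raise RuntimeError("need at least two working cameras for 3-D")
    return usable[:2]


# Camera 2 has failed; the redundant third camera takes its place.
views = select_views(["cam1_frame", None, "cam3_frame"])
```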
Refinements to systems include improvements to both 3-D robot guidance software and color-sensing hardware. “In 2-D you get fields of view, but in 3-D you have cubes of view,” Shafi explains. “A lot of times you can do stereo [imaging],” he continues, but the precision can be problematic. Refinements to the software have raised precision.
The firm says the software gives any networkable robot the ability to use vision feedback, and it features an interface operators can learn in a few minutes. Applications include racking, sorting, bin-picking, and fixture elimination. Operators can quickly perform 3-D calibration with step-by-step wizards, and the firm says the software’s support for a third camera increases the reliability of results.
The firm says that it has worked with controllers from Nachi, Kawasaki, Kuka, Staubli, ABB, Adept, Panasonic, Kiefel, and Fanuc, allowing a vision system to be plugged into existing robots. The software also serves as a simplified interface to a Cognex MVS-8100, In-Sight, or Checkpoint vision system.
Fanuc Robotics’ vision-guided robotics system, Visloc, is a PC-based vision package used for part location and orientation. It provides 2-D and 3-D robotic guidance tools for material-handling applications. The company says the system’s graphical environment simplifies integration of the vision system, since visual tools handle camera calibration, part handling, and operation. Applications include machine loading and unloading, material handling, packaging, assembly, depalletizing, and measurement. Visloc is integrated into the robot controller.
Meanwhile, Vistrac is a vision and line-tracking package that the firm says requires no programming to set up and operate.
Recently, the firm upgraded Vistrac with tools for vision-coordinated line-tracking. “Line-tracking is nothing new,” says Portelli, but this upgrade allows for circular line-tracking. For example, parts could be dropped onto a circular conveyor belt revolving around the robot. “As long as the part is stationary on the belt, the robot will remember where the part is,” Portelli explains.
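The idea Portelli describes — remembering where a part sits on a rotating belt — reduces to tracking the part’s angular position: once the part is seen at a known angle, its current location follows from how far the belt encoder has advanced. A geometric sketch under that assumption (the function name and angle convention are illustrative, not Vistrac’s interface):

```python
import math


def part_position(radius_mm, part_angle_deg, belt_angle_at_sight_deg,
                  belt_angle_now_deg):
    """Circular line-tracking sketch: a part fixed to a rotating carousel
    advances by exactly the belt's encoder rotation. Given the part's
    angle when the camera saw it and the belt angles then and now,
    return the part's current (x, y) in mm relative to the belt center."""
    angle_now = part_angle_deg + (belt_angle_now_deg - belt_angle_at_sight_deg)
    rad = math.radians(angle_now)
    return (radius_mm * math.cos(rad), radius_mm * math.sin(rad))


# Part seen at radius 300 mm, angle 0 deg, with the belt encoder at 10 deg;
# after the belt advances to 100 deg, the part has swept 90 deg.
x, y = part_position(300.0, 0.0, 10.0, 100.0)
```

The robot controller would evaluate this continuously against the live encoder reading to intercept the moving part.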
Vision systems add a lot of flexibility to line-tracking systems, says Portelli. “With vision you can do a lot more.” One example is the use of the robot to sort parts by color or appearance, since the vision system can distinguish parts by their visual features.
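Once the vision system has classified a part, the sorting step itself is a simple routing decision. A toy sketch — the station names and color labels here are invented for illustration:

```python
def sort_station(part_color):
    """Route a classified part to a drop-off station by color.
    Unrecognized parts go to a reject chute. Hypothetical labels."""
    stations = {"red": "bin_A", "blue": "bin_B", "clear": "bin_C"}
    return stations.get(part_color, "reject_chute")


destination = sort_station("red")
```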
Numerous variables can adversely affect robot vision — from variation in part color and specification, to changes in ambient light. Even a cloudy day or a passing shadow can throw off finicky vision systems. “Vision systems live or die by lighting [conditions],” says Portelli. New vision systems can better accommodate poor lighting. “They can accept a wider range of parts,” says Portelli, and they work even if the image is blurry or the part is rotated.
Portelli claims that improvements to Fanuc’s vision system have dramatically eased robot setup. “[An experienced technician] can literally get a robot to track a part…within about 15 minutes,” he says.
One example is the crucial calibration step at installation, although calibration needs to be done only when the robot or camera is moved. “Calibration is tricky,” notes Portelli. “Now we have calibration wizards,” which make the process simpler, he adds. Shifting to PC-based controls has also benefited usability. “That made everything easier,” Portelli says. New data-logging tools allow users to create a library of images, which aids servicing. Another new ease-of-use feature is the addition of software wizards to automate setup of inspection and measurement. One tool, for example, puts an image of the part and a caliper onscreen, allowing processors to quickly measure parts.
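An on-screen caliper of the kind described boils down to converting a pixel distance into real units via the camera’s calibrated scale. A minimal sketch, assuming a flat part plane and a single millimeters-per-pixel factor established at calibration (the function name is hypothetical):

```python
def caliper_measure(p1_px, p2_px, mm_per_px):
    """On-screen caliper sketch: Euclidean distance between two points
    clicked on the image, converted to millimeters using the camera's
    calibrated scale. Assumes the part lies flat in the image plane."""
    dx = p2_px[0] - p1_px[0]
    dy = p2_px[1] - p1_px[1]
    return (dx * dx + dy * dy) ** 0.5 * mm_per_px


# Two caliper endpoints 50 px apart, at 0.5 mm per pixel: 25 mm.
width_mm = caliper_measure((0, 0), (30, 40), 0.5)
```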
Since the firm’s vision system is completely integrated, Portelli notes that processors don’t have to worry about maintaining separate software packages. “Everything is done in the robot software,” says Portelli. “You only have to work with one set of software.”
In February 2002, Braintech released its first commercialized version of eVisionFactory (eVF), a software environment for developing vision-guided robotics. Braintech has strategic relationships with robot supplier ABB Flexible Automation and with Marubeni Corp., Osaka, Japan.
The software contains the firm’s Single-Camera 3D (SC3D) technology, which allows the robot to see parts in three dimensions with the use of a single camera. “The importance of eVF and SC3D is profound,” says Owen Jones, CEO and director. “What we’ve brought to the market is the next-generation application for multiplanar, rigid-parts handling and assembly.”
Braintech recently released a multicamera 3-D technology (MC3D) for development of vision-guided robotic applications. The system enables guidance of robots in applications involving large parts or objects with a mixture of visual landmark features. For example, the system can be used for guidance of industrial robots to automotive body panels and other parts to perform secondary processes and assembly.
Babak Habibi, Braintech’s president, says the development expands the firm’s products for the automotive sector and pushes the company further toward its goal of biologically-inspired multifunctional robotics.
“With increases in usability, accuracy, and value, vision-guided robots are becoming more attractive,” says Portelli.