Robot Vision Systems

- August 27, 2017


Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for applications such as automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science; machine vision attempts to integrate existing technologies in new ways and apply them to solve real-world problems. The term is also used in a broader sense by trade shows and trade groups; this broader definition also encompasses products and applications most often associated with image processing.







Definition

Definitions of the term "Machine vision" vary, but all include the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection and robot and process guidance in industry. This field encompasses a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of basic computer science; machine vision attempts to integrate existing technologies in new ways and apply them to solve real world problems in a way that meets the requirements of industrial automation and similar application areas. The term is also used in a broader sense by trade shows and trade groups such as the Automated Imaging Association and the European Machine Vision Association. This broader definition also encompasses products and applications most often associated with image processing. The primary uses for machine vision are automatic inspection and industrial robot/process guidance.





Imaging-based automatic inspection

The primary uses for machine vision are imaging-based automatic inspection and robot guidance; in this section the former is abbreviated as "automatic inspection". The larger-scale process includes defining and creating a solution; this section describes the technical process that occurs during operation of that solution.

Methods and sequence of operation

The first step in the automatic inspection sequence of operation is acquisition of an image, typically using cameras, lenses, and lighting that has been designed to provide the differentiation required by subsequent processing. MV software packages and programs developed in them then employ various digital image processing techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information.
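As a concrete, hedged illustration of this sequence, the sketch below loads a single image, applies a fixed grayscale threshold, and derives a pass/fail decision from the number of dark pixels. It assumes the OpenCV library is available; the file name part.png, the threshold of 60, and the limit of 500 pixels are placeholder values, not part of any particular product.

```python
import cv2

# Acquire: in a deployed system this frame would come from the camera and
# lighting described above; here a stored image stands in for it
# (hypothetical file name).
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("part.png not found")

# Process: separate dark pixels from the bright background using a fixed
# gray-level threshold (placeholder value).
_, dark = cv2.threshold(image, 60, 255, cv2.THRESH_BINARY_INV)

# Decide: count the dark pixels and compare against an allowed maximum.
dark_pixels = cv2.countNonZero(dark)
result = "PASS" if dark_pixels <= 500 else "FAIL"
print(f"{dark_pixels} dark pixels -> {result}")
```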

Equipment

The components of an automatic inspection system usually include lighting, a camera or other imager, a processor, software, and output devices.

Imaging

The imaging device (e.g. camera) can either be separate from the main image processing unit or combined with it, in which case the combination is generally called a smart camera or smart sensor. When separate, the connection may be made to specialized intermediate hardware called a frame grabber, using either a standardized interface (Camera Link, CoaXPress) or a custom one. MV implementations also use digital cameras capable of direct connection to a computer (without a frame grabber) via FireWire, USB or Gigabit Ethernet interfaces.
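For cameras that connect directly to a computer without a frame grabber, a generic capture interface is often enough to bring frames into software. The minimal sketch below uses OpenCV's VideoCapture with the first camera the operating system exposes (assumed to be a USB/UVC device); GigE Vision or Camera Link devices usually require the camera vendor's SDK instead, so treat this as an assumption rather than a general recipe.

```python
import cv2

# Open the first camera exposed by the operating system (assumed USB/UVC device).
camera = cv2.VideoCapture(0)
if not camera.isOpened():
    raise RuntimeError("No camera found at index 0")

# Grab a single frame; ok is False if acquisition failed.
ok, frame = camera.read()
camera.release()

if ok:
    print("Acquired frame with shape", frame.shape)
```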

While conventional (2D visible light) imaging is most commonly used in MV, alternatives include hyperspectral imaging, imaging various infrared bands, line scan imaging, 3D imaging of surfaces and X-ray imaging. Key divisions within MV 2D visible light imaging are monochromatic vs. color, resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes.

Though the vast majority of machine vision applications are solved using two-dimensional imaging, machine vision applications utilizing 3D imaging are a growing niche within the industry. The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or of the imaging system during the imaging process. A laser line is projected onto the surface of an object and viewed by a camera from a different angle; the deviation of the line represents shape variations. The scanning motion is achieved either by moving the workpiece or by moving the camera and laser imaging system, and lines from multiple scans are assembled into a depth map or point cloud. Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras. Other 3D methods used for machine vision include time of flight and grid-based approaches, such as the pseudorandom structured-light system employed by the Microsoft Kinect circa 2012.
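The sketch below illustrates the triangulation geometry in its simplest idealized form: with the laser sheet perpendicular to the reference surface and the camera viewing it at a known angle, a height change h shifts the imaged laser line sideways by roughly h · tan(angle), so height can be recovered from the measured line offset. The angle, pixel scale, and offsets are hypothetical values; real systems use a full calibration model rather than this formula.

```python
import numpy as np

# Hypothetical setup: laser sheet perpendicular to the reference surface,
# camera viewing it at laser_angle_deg. A surface height h displaces the
# imaged line by about h * tan(laser_angle) in world units.
laser_angle_deg = 30.0        # hypothetical triangulation angle
mm_per_pixel = 0.05           # hypothetical image scale

# Hypothetical laser-line offsets (in pixels) measured along one scan.
line_offsets_px = np.array([0.0, 1.8, 4.1, 4.0, 1.9, 0.1])

# Convert the pixel displacement of the line into surface height (mm).
heights_mm = line_offsets_px * mm_per_pixel / np.tan(np.radians(laser_angle_deg))
print(np.round(heights_mm, 3))
```

Repeating this for every scan line as the part moves yields the rows of the depth map or point cloud mentioned above.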

Image processing

After an image is acquired, it is processed. Multiple stages of processing are generally used in a sequence that ends with the desired result. A typical sequence might start with tools such as filters which modify the image, followed by extraction of objects, then extraction of data from those objects (e.g. measurements, reading of codes), followed by communicating that data, or comparing it against target values to create and communicate "pass/fail" results. Machine vision image processing methods include the following (a short sketch combining several of these steps appears after the list):

  • Stitching/Registration: Combining of adjacent 2D or 3D images.
  • Filtering (e.g. morphological filtering)
  • Thresholding: Setting or determining a gray value that will be useful for the following steps. The value is then used to separate portions of the image, and sometimes to transform each portion of the image to simply black and white based on whether it is below or above that grayscale value.
  • Pixel counting: counts the number of light or dark pixels
  • Segmentation: Partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.
  • Edge detection: finding object edges
  • Color Analysis: Identify parts, products and items using color, assess quality from color, and isolate features using color.
  • Blob discovery & manipulation: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure.
  • Neural net / deep learning processing: weighted and self-training multi-variable decision making
  • Pattern recognition including template matching. Finding, matching, and/or counting specific patterns. This may include location of an object that may be rotated, partially hidden by another object, or varying in size.
  • Barcode, Data Matrix and "2D barcode" reading
  • Optical character recognition: automated reading of text such as serial numbers
  • Gauging/Metrology: measurement of object dimensions (e.g. in pixels, inches or millimeters)
  • Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or bar code verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For verification of alphanumeric codes, the OCR'd value is compared to the proper or target value. For inspection for blemishes, the measured size of the blemishes may be compared to the maximums allowed by quality standards.
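Several of the steps listed above (thresholding, blob discovery, gauging, and comparison against tolerances) can be chained as in the minimal sketch below. It assumes the OpenCV library is available; the file backlit_part.png, the calibration factor, and the nominal width and tolerance are placeholder values.

```python
import cv2

# Hypothetical backlit image: the part appears dark on a bright background.
image = cv2.imread("backlit_part.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("backlit_part.png not found")

# Thresholding: Otsu's method picks the gray value separating part from background.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Blob discovery: keep the largest connected region as the part.
count, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
areas = stats[1:, cv2.CC_STAT_AREA]          # label 0 is the background
part = 1 + int(areas.argmax())

# Gauging: measure the bounding-box width and convert it to millimetres
# using a hypothetical calibration factor.
mm_per_pixel = 0.1
width_mm = stats[part, cv2.CC_STAT_WIDTH] * mm_per_pixel

# Pass/fail: compare the measurement against a nominal value and tolerance.
nominal_mm, tol_mm = 25.0, 0.5
result = "PASS" if abs(width_mm - nominal_mm) <= tol_mm else "FAIL"
print(f"width = {width_mm:.2f} mm -> {result}")
```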

Outputs

A common output from automatic inspection systems is pass/fail decisions. These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information from robot guidance systems. Additionally, output types include numerical measurement data, data read from codes and characters, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and process control signals. This also includes user interfaces, interfaces for the integration of multi-component systems and automated data interchange.
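How these outputs leave the vision system varies widely (digital I/O, fieldbus protocols, vendor-specific robot interfaces). As one hedged illustration, the sketch below packages a pass/fail decision together with an object position and orientation as a plain-text message and sends it over TCP to a hypothetical controller address; a real installation would use whatever protocol the PLC or robot controller actually expects.

```python
import socket

# Hypothetical inspection result and object pose for robot guidance.
result = "PASS"
x_mm, y_mm, angle_deg = 104.2, 56.7, 12.5

# Simple comma-separated message; the format here is purely illustrative.
message = f"{result},{x_mm:.1f},{y_mm:.1f},{angle_deg:.1f}\n"

# Hypothetical controller endpoint; replace with the real address and port.
with socket.create_connection(("192.168.0.10", 5000), timeout=2.0) as conn:
    conn.sendall(message.encode("ascii"))
```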




Market

As recently as 2006, one industry consultant reported that MV represented a $1.5 billion market in North America. However, the editor-in-chief of an MV trade magazine asserted that "machine vision is not an industry per se" but rather "the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, agriculture, and defense."

As of 2006, experts estimated that MV had been employed in less than 20% of the applications for which it is potentially useful.

Source of the article: Wikipedia


