
A vision for manufacturing

TWI Bulletin, November/December 1990

 

Robbie Birrell

Robbie Birrell joined TWI in 1988 as a Research Engineer in the Manufacturing Systems Department, after obtaining a degree in manufacturing engineering and spending two years working with factory data-collection systems. He has investigated a number of areas related to Computer Integrated Manufacture and manufacturing automation, ranging from studies of sensors and their application in welding automation to computer communications using the Manufacturing Automation Protocol.

Information technology is now widely used throughout manufacturing. As a result there is a need to capture information at plant level, and to disseminate it throughout the business hierarchy where it can be used to control manufacturing activities and provide input to the business decision-making process. Robbie Birrell takes up the story ...



Sensing

To capture information on the shop floor at any stage of the manufacturing process, some form of sensing system is needed - for example, a bar-code reader. The information might be the location of a component being manufactured, and might be used as an input for work-in-progress tracking. Another means of information capture might be via a sensor that can monitor whether or not a machine is running which, when stored against a timebase, allows machine utilisation to be determined.
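The machine-utilisation idea can be sketched in a few lines of code. The event format (timestamp in seconds, running flag) and the shift length are assumptions for illustration only:

```python
# Sketch: deriving machine utilisation from a timebase of on/off
# sensor events. The (timestamp_seconds, is_running) event format
# is a hypothetical one chosen for this example.

def utilisation(events, shift_end):
    """Fraction of the observation period the machine was running.

    events: list of (timestamp, running) pairs, sorted by timestamp.
    shift_end: end of the observation period, in the same units.
    """
    running_time = 0.0
    # sum the lengths of the intervals during which the machine ran
    for (t0, running), (t1, _) in zip(events, events[1:]):
        if running:
            running_time += t1 - t0
    # include the interval from the last event to the end of the shift
    if events and events[-1][1]:
        running_time += shift_end - events[-1][0]
    return running_time / shift_end

# one hour running, half an hour idle, half an hour running again
events = [(0, True), (3600, False), (5400, True), (7200, False)]
print(utilisation(events, 7200))  # 0.75
```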

However, there are circumstances within manufacturing where conventional sensors would not provide enough information from which a man or a machine could reach a decision, and here a more 'data-rich' sensor is needed.

One such available sensing technology is machine vision.

Vision systems

Machine vision is concerned with both sensing visual information and interpreting it by computer. Vision systems of this type normally consist of a camera, a computer and all the hardware and software necessary for interfacing. The operation of the vision system can be considered as follows:

  • capturing an image;
  • converting it into a digital format;
  • processing;
  • analysing.

Images are captured by a camera focused upon the scene of interest. In most industrial vision systems the camera is of the charge-coupled device (CCD) type, which is based upon a two-dimensional sensor array consisting of approximately 250 000 light-sensitive elements.

A major consideration at this stage is the contrast in the scene being viewed; an object or objects of interest should be easily distinguishable from the background. Scene illumination is important, as it is far simpler to achieve good contrast by controlling lighting than by enhancing the contrast of a poorly illuminated scene using computer-based techniques.

The captured raw vision data are then digitised and stored in a computer. Storage size might range from a matrix of 32 x 32 to one of 1024 x 1024 picture elements (pixels). Each pixel can have an associated grey-scale value which might range from 0 to 255 (where 0 represents black and 255 white).
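In code, such a stored image is simply a matrix of grey-scale values. A toy 4 x 4 example (far smaller than any real frame store) makes the representation concrete:

```python
# A digitised image as a matrix of grey-scale pixels:
# 0 is black, 255 is white. A real frame store would hold
# anything from 32 x 32 up to 1024 x 1024 such values.
image = [
    [ 10,  12,  11, 200],
    [  9, 210, 220, 205],
    [ 11, 215, 225,  10],
    [ 12,  10,   9,  11],
]

height = len(image)
width = len(image[0])
darkest = min(min(row) for row in image)
brightest = max(max(row) for row in image)

print(width, height)          # 4 4
print(darkest, brightest)     # 9 225
```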

The stored digitised image can now be processed to reduce the quantity of data present. This decreases data-processing time and therefore allows faster system response; the most common methods are thresholding and windowing.

Thresholding relies on the pixels of the object within the scene of interest having grey-scale values different from those of the background. A threshold grey-scale value lying between those of the object of interest and those of the background is selected - see Figure 1. Typically, all the pixels above the selected threshold are made white and all those below become black, thus significantly simplifying the image.
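A minimal sketch of thresholding, with the threshold level picked by hand as it might be from a histogram such as that in Fig.1:

```python
# Grey-scale thresholding: pixels at or above the chosen level
# become white (255), the rest black (0). The threshold value
# here is illustrative.

def threshold(image, level):
    return [[255 if p >= level else 0 for p in row] for row in image]

image = [
    [ 10,  12, 200],
    [  9, 210, 205],
    [ 11,  12,  10],
]
binary = threshold(image, 128)
print(binary)
# [[0, 0, 255], [0, 255, 255], [0, 0, 0]]
```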

Fig.1. Grey-scale image and corresponding histogram showing threshold
Fig.1. Grey-scale image and corresponding histogram showing threshold

Windowing involves selecting a specific portion - or window - of the stored image for processing and analysis. The rationale for windowing is that only a portion of an image is relevant to an application. For example, to read an identification number on a component a rectangular window would be selected around the area of interest (the identification number), and only those pixels within the window would be processed.
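Windowing amounts to slicing the stored matrix. A sketch, with illustrative (row, column) bounds:

```python
# Windowing: keep only a rectangular region of interest from the
# stored image. Bounds are (top, left) inclusive, (bottom, right)
# exclusive, and are chosen here purely for illustration.

def window(image, top, left, bottom, right):
    return [row[left:right] for row in image[top:bottom]]

image = [
    [ 1,  2,  3,  4],
    [ 5,  6,  7,  8],
    [ 9, 10, 11, 12],
]
roi = window(image, 0, 1, 2, 3)
print(roi)  # [[2, 3], [6, 7]]
```

Only the pixels inside `roi` would then be passed to the processing and analysis stages.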

Once the image has been processed to reduce the quantity of data, it may be analysed. Image analysis is concerned with extracting features - such as area, perimeter or diameter - so that one object may be distinguished from another. The chosen features, when considered together, form a unique description of the object; this is the basis of object recognition.
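Two of the simplest such features can be sketched directly on a thresholded (binary) image. The perimeter estimate used here - counting object pixels that touch the background - is one common crude method, not the only one:

```python
# Simple feature extraction from a binary image (1 = object,
# 0 = background): area as a pixel count, and a crude perimeter
# estimate as the number of object pixels adjoining the background.

def area(binary):
    return sum(sum(row) for row in binary)

def perimeter(binary):
    h, w = len(binary), len(binary[0])
    count = 0
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            # 4-connected neighbours; off-image counts as background
            neighbours = [
                binary[y2][x2] if 0 <= y2 < h and 0 <= x2 < w else 0
                for y2, x2 in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            ]
            if min(neighbours) == 0:
                count += 1
    return count

blob = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
print(area(blob), perimeter(blob))  # 6 6
```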

Applications

An application for vision might be to recognise an object and then determine its location and orientation for a welding robot. These three separate tasks might be performed as follows.

As we have seen, object recognition is based on feature extraction. However, a database of prior knowledge, or virtual model, is needed against which to compare the features. In the simplest case, this is achieved by teaching the system using objects similar to those it will later be required to recognise. When the vision system is in use it will attempt to find a best fit between the features of the object in view and those in its database.
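The best-fit step can be sketched as a nearest-neighbour search over feature vectors. The feature choice (area, perimeter), the taught object names and their values are all hypothetical:

```python
# Recognition as a best fit between a measured feature vector and a
# taught database. The objects and feature values are illustrative.

import math

taught = {
    "bracket": (120.0, 46.0),   # (area, perimeter), in pixels
    "flange":  (300.0, 62.0),
}

def recognise(features, database):
    """Return the taught object whose features are nearest (Euclidean)."""
    return min(
        database,
        key=lambda name: math.dist(features, database[name]),
    )

print(recognise((118.0, 44.0), taught))  # bracket
```

In practice the features would be scaled so that no single feature dominates the distance measure.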

Fig.2. Determining object orientation
Fig.2. Determining object orientation

Once an object has been recognised, its location and orientation can be determined. The location of the object can be derived from the pixel co-ordinates of its centroid; the relationship between image positions and the welding robot will have been established previously. Orientation is found by determining the major and minor axes of an equivalent ellipse for the object - see Figure 2. The direction of these two axes defines the orientation of the object relative to the image axes - which are calibrated relative to the welding robot.
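Both quantities follow from the moments of the binary image: the centroid from the first-order moments, and the equivalent-ellipse orientation from the second-order central moments. A sketch:

```python
# Location from the centroid, orientation from the axes of an
# equivalent ellipse computed via second-order central moments
# of a binary image (1 = object pixel).

import math

def centroid(binary):
    xs = ys = n = 0
    for y, row in enumerate(binary):
        for x, p in enumerate(row):
            if p:
                xs += x; ys += y; n += 1
    return xs / n, ys / n

def orientation(binary):
    """Angle of the major axis, in radians, relative to the image x-axis."""
    cx, cy = centroid(binary)
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(binary):
        for x, p in enumerate(row):
            if p:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# a horizontal bar: major axis along x, so the orientation is zero
bar = [[1, 1, 1, 1, 1]]
print(centroid(bar))     # (2.0, 0.0)
print(orientation(bar))  # 0.0
```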

If a single camera is used, orientation and location of the object are two dimensional. The third dimension may be inferred from prior knowledge of the object and its environment. Alternatively, multiple cameras may be used to give additional information for determining the third dimension.

The location and orientation data are passed to the robot, which then moves to a position and posture in which it can weld the object.

Visual data may provide other information of specific use to welding, namely the true joint geometry. The geometric parameters are passed to a welding procedure generator which provides a correct procedure for the specific joint geometry, or rejects the object if the geometry does not fall within the specified tolerance bands.
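The tolerance check at the heart of that step can be sketched as follows; the parameter names and tolerance bands are hypothetical:

```python
# Accepting or rejecting a joint on its measured geometry: each
# parameter must lie within its specified tolerance band. The
# parameters and bands below are illustrative, not real procedure data.

tolerance_bands = {
    "gap_mm":    (0.0, 2.0),
    "angle_deg": (40.0, 50.0),
}

def check_geometry(measured, bands):
    """Return True if every measured parameter is within its band."""
    return all(
        bands[name][0] <= value <= bands[name][1]
        for name, value in measured.items()
    )

print(check_geometry({"gap_mm": 1.2, "angle_deg": 45.0}, tolerance_bands))  # True
print(check_geometry({"gap_mm": 2.5, "angle_deg": 45.0}, tolerance_bands))  # False
```

Only geometries that pass this check would be handed to the welding procedure generator; the rest would be rejected.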

After welding, a vision system could be used as part of a statistical process-control system to inspect both the weldment for surface defects, and the complete component for final size. This final inspection would again require some model or database of prior knowledge so that the vision system could determine whether the weld surface was flawed.

A final look

To improve business performance, current data must be captured from all areas of a company, including the shop floor. At this level the most efficient method of capturing data from the dynamic environment is by using sensors to measure and monitor critical aspects of the manufacturing process. One of the most data-rich sensors is machine vision, for it is applicable to measuring and monitoring many aspects of manufacturing.