Machine vision: rice grain inspection application

Many such applications require hard real-time I/O, particularly for controlling external devices in a prompt and highly deterministic fashion. An overview of a rice grain inspection system, running entirely on a vision processor board (i.e., image capture, image processing and analysis, and I/O control), will be used to illustrate the board's capabilities in this regard. The purpose of this document is to provide an overview of the vision processor board's real-time output control capabilities and to outline a technique for performing blob analysis on a continuous and seamless video stream.

Time-critical setup

The rice grain inspection process consists of one or more line-scan cameras capturing images of rice grains as they fall off the end of a conveyor belt, or through a chute, into the camera's field of view (Figure 1). The images captured by the line-scan cameras are transmitted to and captured by the vision processor board located in a host PC. The line-scan data is assembled into a virtual image, a 2D image built up from successive 1D line captures, which is then processed and analysed to identify undesirable rice grains. Once these are identified, the vision processor board, by way of a third-party I/O board, controls a bank of air jets located at a known distance from the line-scan camera and ejects the undesired rice grains at a precise moment in time.
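The assembly of 1D line captures into a 2D virtual image can be sketched as follows. This is a minimal illustration, not the board's actual capture API; the image geometry (2048 pixels wide, 128 lines) is taken from the worked example later in this document, and the simulated camera generator is purely hypothetical.

```python
import numpy as np

LINE_WIDTH = 2048      # pixels per line-scan line (from the worked example)
LINES_PER_IMAGE = 128  # lines per virtual image (from the worked example)

def assemble_virtual_image(line_source):
    """Stack successive 1D line-scan captures into one 2D virtual image."""
    lines = [next(line_source) for _ in range(LINES_PER_IMAGE)]
    return np.stack(lines)  # shape: (LINES_PER_IMAGE, LINE_WIDTH)

def fake_camera():
    """Hypothetical stand-in for the camera: yields one grey-scale line per call."""
    while True:
        yield np.random.randint(0, 256, LINE_WIDTH, dtype=np.uint8)

image = assemble_virtual_image(fake_camera())
print(image.shape)  # (128, 2048)
```

In a real system the lines would arrive from the frame grabber rather than a generator, but the principle is the same: each virtual image is simply a rolling window of the most recent line captures.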

Closing the inspection loop

The hard real-time requirement for closing the inspection loop comes from the fact that all the rice grains must be imaged and analysed consistently before the undesired grains reach the bank of air jets. The entire process must therefore be completed within a fixed and known distance and, consequently, a fixed and known time period. For example, with a 50 ms interval for the rice grains to travel from the camera's field of view to the front of the air jets, one can calculate the maximum allowable time to capture an image, analyse it and then respond to the results (Figure 2).

The period to capture the image with a line-scan camera depends on the speed of the falling rice grains, the camera's image resolution, the data capture time and the required size of the virtual image. To be effective, the number of lines per virtual image must exceed the length of the largest possible rice grain. In this example, the virtual image size is deliberately overestimated at 2048 pixels × 128 lines. With a new line captured every 100 µs, the virtual image is captured in 12.8 ms. The maximum allowable period to analyse the virtual image is the Total Delay Period minus the Capture Period, or roughly 37.2 ms. Moreover, the application running on the vision processor board must be able to alter the state of the 64 air jets at very short and regular intervals without exception1. That is, the air jets must be synchronized with the camera's line rate (100 µs, or 10 kHz) or multiples of this line rate. While it is possible to achieve a higher output update frequency, doing so increases the load on the vision processor board's CPU. In this example, the 10 kHz output update rate would use about 25% of a G4 PowerPC running at 1 GHz, and increasing the rate would leave fewer CPU cycles available for other tasks. Note that if it becomes necessary either to increase the speed of the inspection application or to increase the amount of image analysis, the available processing power can be increased by adding another vision processor board to scale up the system.
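The timing budget described above reduces to simple arithmetic. The sketch below uses the values from the example in the text (100 µs line period, 128 lines per virtual image, 50 ms total delay from field of view to air jets):

```python
LINE_PERIOD_US = 100    # one new line every 100 µs (10 kHz line rate)
LINES_PER_IMAGE = 128   # lines per virtual image
TOTAL_DELAY_MS = 50.0   # field of view to air jets (Figure 2)

# Capture Period: time to accumulate one full virtual image.
capture_ms = LINES_PER_IMAGE * LINE_PERIOD_US / 1000.0   # 12.8 ms

# Analysis budget: Total Delay Period minus Capture Period.
analysis_budget_ms = TOTAL_DELAY_MS - capture_ms         # ~37.2 ms

print(f"capture: {capture_ms} ms, analysis budget: {analysis_budget_ms} ms")
```

Any change to the line rate or virtual image depth changes the capture period, and with it the time left over for analysis and output control.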

Blob analysis on a continuous and seamless image

At the heart of the application is the analysis of a continuous and seamless image. Since the application running on the vision processor board works with successive virtual images, rice grains can appear anywhere within a virtual image, including straddling two successive virtual images.
Blob analysis is the technique typically employed for analysing objects (blobs) such as rice grains. However, rice grains divided between images will be incorrectly analysed and may even be wrongly rejected. It is therefore essential to reconstitute rice grains that straddle adjoining virtual images using blob merging2 (Figure 3).
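The seam-merging step can be sketched as follows. This is a minimal illustration using hypothetical bounding-box records, not the board's actual blob-analysis API: a blob cut by the seam appears in the last row of one virtual image and in row 0 of the next, so such pairs are matched by horizontal overlap and fused into one box.

```python
def touches_bottom(blob, height):
    return blob["ymax"] == height - 1

def touches_top(blob):
    return blob["ymin"] == 0

def x_overlap(a, b):
    return a["xmin"] <= b["xmax"] and b["xmin"] <= a["xmax"]

def merge_across_seam(prev_blobs, curr_blobs, height):
    """Reconstitute blobs split between two successive virtual images.

    Blobs are dicts with pixel-coordinate bounding boxes (xmin, xmax,
    ymin, ymax). Returns (completed blobs in prev-image coordinates,
    remaining current-image blobs to carry into the next comparison).
    """
    merged = []
    unmatched_curr = list(curr_blobs)
    for p in prev_blobs:
        if not touches_bottom(p, height):
            merged.append(p)            # wholly inside the previous image
            continue
        partner = next((c for c in unmatched_curr
                        if touches_top(c) and x_overlap(p, c)), None)
        if partner is None:
            merged.append(p)            # ends exactly at the seam
            continue
        unmatched_curr.remove(partner)
        merged.append({                 # fuse the two halves into one blob
            "xmin": min(p["xmin"], partner["xmin"]),
            "xmax": max(p["xmax"], partner["xmax"]),
            "ymin": p["ymin"],
            "ymax": height + partner["ymax"],
        })
    return merged, unmatched_curr
```

In practice the merge criterion would use full blob contours or run-length data rather than bounding boxes, but the principle is the same: objects touching the seam are held back until the adjoining virtual image has been analysed.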

System and application considerations

Application determinism on a vision processor board (i.e., firing the air jets at a precise moment in time) is not a given. Several factors must be considered in order to achieve consistent real-time performance: the complexity of the image processing and analysis algorithms employed, and the optimization of the application code. In both cases, processing and analysis must be performed within the available time constraint and with the available resources (i.e., CPU cycles), knowing that some of these resources are needed to control the outputs. If the capture rate and image resolution are too high, processing time increases, which also introduces delays. Furthermore, ensuring deterministic behavior requires developing the application so that there is little or no interaction with the host platform.
Finally, consideration must be given to how the vision processor board controls the air jets when using a third-party I/O board connected directly to the host system (as opposed to directly to the vision processor board). Care must be taken to avoid or prevent any host platform activity that might adversely interrupt the programming of the I/O board from the vision processor board. If instead the third-party I/O board is connected directly to a Matrox vision processor board, application performance can be isolated from any unexpected behavior of the host platform, including its operating system. Moreover, since the application no longer needs to communicate with the third-party I/O board through the host platform, a higher level of consistent real-time performance can be expected.

Source: Rauscher