The Promise and Challenge of Autonomous Casting Inspection Systems
In this digital age, new breakthroughs in data storage, manipulation, and graphics seem to arrive daily. Easily the most notable recent advance in the communication and analysis of data is the advent of artificial intelligence and its accessibility to the masses.
AI-guided writing and decision-making has captured our imagination, but in the context of work it offers the promise of smart systems that reduce or eliminate our reliance on humans. It is a promise perhaps even more exciting than the one almost a century ago, when automation came to factories to reduce human labor. Now we are seeing AI as capable of reducing the need for human thinking.
While the applications of artificial intelligence are many, the manufacturing sector, and foundries in particular, have shown great interest in reducing labor cost through AI. One of the most obvious places for labor reduction is the non-value-added inspection process.
Foundry engineers and their customers are interested in this potential. Foundry engineers want to reduce not only the cost of labor but also the high cost of nonconformity. The thought is that, since robots don’t make mistakes, AI-guided inspection systems won’t make mistakes either; after all, “to err is human.”
Foundry customers are also excited about potential price reductions from their foundry suppliers and certainly want to see fewer part returns and warranty claims.
To understand a bit of the reality beyond the hype, it is helpful to understand the basics of the functional elements of an autonomous inspection system and their limitations.
The System Elements
If we consider what must happen in an autonomous inspection system, the process requires four major components: part presentation and handling, image capture, image processing, and part disposition.
Each of the necessary elements can represent challenges to the foundry interested in applying AI-guided inspection. Only image processing makes use of the computer algorithms associated with artificial intelligence.
While some parts might simply present themselves on a belt or roller conveyor to be “seen” by the inspection system, it is important to recognize that for a computer to recognize and evaluate images of a part, the part must be presented to the image-capture apparatus (a lens or series of lenses) in a consistent way.
Positioning accuracy is important to image analysis. Complex parts that have critical surfaces in more than one plane will need to be manipulated rapidly and accurately to present each view to the image capture system. In foundries, this will likely include many surfaces and thus many views. A simple robot gripper may hide critical surfaces and thus full-surface evaluation often requires two or more placements of the handling device. The engineering challenge of rapid, precise, and efficient movement of the part at a planned viewing distance from the camera lens is not trivial when part geometry is complex. This programming will need to be customized for each unique part geometry.
Lighting Must Be Right
In cases where a variety of parts might be presented to the inspection system, some type of machine vision process is used to identify the part so that the proper manipulation occurs and the proper inspection sequence and criteria are employed in later processing. This relatively simple vision task does not need artificial intelligence enhancement; it can be handled by a visual comparison or by reading an alphanumeric marking, a barcode, or a QR code.
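As a rough illustration of how simple this identification step can be, the sketch below uses the open-source OpenCV library’s built-in QR-code reader. The part numbers and the part-to-program lookup table are hypothetical stand-ins for illustration, not a real foundry interface.

```python
import cv2  # OpenCV, a widely used open-source machine-vision library

# Hypothetical lookup from decoded part number to the inspection program
# the cell should run; a real table would come from the foundry's own system.
PART_PROGRAMS = {
    "PN-1001": "bracket_inspection_v3",
    "PN-2044": "housing_inspection_v1",
}

def identify_part(image_path: str) -> str:
    """Decode a QR code on the part and look up its inspection program."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    decoded, _points, _raw = cv2.QRCodeDetector().detectAndDecode(image)
    if not decoded:
        raise ValueError("No readable QR code; route part to manual station")
    if decoded not in PART_PROGRAMS:
        raise ValueError(f"Unknown part number: {decoded}")
    return PART_PROGRAMS[decoded]
```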
Image capture is the sensing functionality of the process and is typically an advanced optical system of lenses that creates an image in much the same way as a digital camera. The image itself consists of a very large number of picture elements (pixels), which can be analyzed by the “brains” of the system, where AI is utilized.
It is a classic photographer’s challenge to choose the right lens and exposure time to capture an image with good resolution. Likewise, decisions regarding the focal distance and aperture size of the lens determine both the light required for a good, well-exposed image and a reasonable field of view and depth of field. Providing effective lighting for complex surfaces is challenging, as texture, shadows, and the gloss of parts all create visual patterns that are difficult to repeat part after part. Shadowed areas in one image must be illuminated in another, increasing the number of views required and thus increasing the handling complexity and cycle time.
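To make those optical trade-offs concrete, here is a minimal sketch of the standard thin-lens arithmetic. Every numeric value below is an assumption chosen for illustration, not a recommendation for any particular installation.

```python
# Illustrative machine-vision optics arithmetic (thin-lens approximations).
# All parameter values are assumed for the example.
sensor_width_mm = 8.8          # e.g., a 2/3-inch sensor
focal_length_mm = 25.0         # lens focal length
working_distance_mm = 400.0    # lens-to-part distance
f_number = 8.0                 # aperture setting
circle_of_confusion_mm = 0.01  # acceptable blur spot on the sensor

# Horizontal field of view at the working distance.
fov_mm = sensor_width_mm * working_distance_mm / focal_length_mm

# Approximate depth of field around the plane of focus.
magnification = focal_length_mm / (working_distance_mm - focal_length_mm)
dof_mm = (2 * f_number * circle_of_confusion_mm
          * (magnification + 1) / magnification**2)

print(f"Field of view: {fov_mm:.0f} mm")   # ~141 mm
print(f"Depth of field: {dof_mm:.0f} mm")  # ~38 mm
```

Note how the two goals pull against each other: a wider field of view spreads the same pixels over more part surface, while a larger aperture (smaller f-number) gathers more light but shrinks the depth of field.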
Image processing is performed in a very fast computer utilizing artificial intelligence software, typically enabled by neural network processing. The capability of this software is breathtaking: images are assessed in just a few milliseconds, with the huge number of pixels examined for their contrast and their connection in groups to adjacent pixels, making sense of an image in literally one-hundredth the duration of a blink of an eye.
This speed comes at a bit of an upfront cost, however. Despite the seemingly human ability to make decisions demonstrated by this software, it needs to be taught what to look for and what is acceptable and not acceptable. This requires the user to present to the machine hundreds, if not thousands, of images that are representative of the discontinuities that may be encountered by the vision system. Each must be identified to the computer as acceptable or not. The software then establishes its own inferences, its own boundaries, of what constitutes something acceptable or not.
It learns by observing the judgments associated with many parts that must be prepared and presented to the system in the same manner as in actual inspection. This teaching burden is a significant challenge for smaller-volume projects. Some foundries’ entire production run of a part would be smaller than the sample required to properly train the software to recognize defective parts.
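For a sense of what this teaching step looks like in practice, the sketch below fine-tunes a small pretrained network from the PyTorch/torchvision libraries on labeled accept/reject images. The folder layout, learning rate, and epoch count are illustrative assumptions; a production system would involve far more care in data preparation and validation.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Labeled example images sorted into folders named "accept" and "reject";
# the path and folder layout here are assumptions for this illustration.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("training_images/", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a small pretrained network and replace its final layer
# with a two-class (accept/reject) output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # epoch count is illustrative
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```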
Even in large foundries with thousands of parts to run, it is impossible to explain to a customer exactly what criteria the software is using to accept or reject parts. No specific “rules” have been fed into the software; the software has figured them out by itself through the repetition of examples, and it cannot express them in understandable terms. Acceptance of such systems therefore comes only through actual testing against sets of parts whose conformity is known, measuring the equipment’s miss rate and false alarm rate.
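Such an acceptance test reduces to simple counting. A minimal sketch follows; the verdict data is, of course, hypothetical.

```python
# Compare system verdicts against parts of known conformity.
# True = defective, False = conforming; this data is hypothetical.
actual =  [True, True, False, False, False, True, False, False]
flagged = [True, False, False, True, False, True, False, False]

defects = sum(actual)
good = len(actual) - defects

misses = sum(a and not f for a, f in zip(actual, flagged))
false_alarms = sum(f and not a for a, f in zip(actual, flagged))

print(f"Miss rate: {misses / defects:.0%}")            # defective parts passed
print(f"False alarm rate: {false_alarms / good:.0%}")  # good parts rejected
```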
Once the software system has determined the acceptability of the part, it can send a signal to handling equipment to direct the part accordingly. If parts are rejected, they may be marked and sent through the inspection cycle again for confirmation, reducing false alarms at the risk of missing a truly defective part. In other installations, a human samples accepted parts and reviews rejected parts as a double-check on system function.
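The trade-off in that confirm-on-reject strategy can be quantified with a quick calculation. If each pass falsely rejects good parts at rate f and misses true defects at rate m, then requiring two consecutive rejections cuts false alarms to roughly f squared but raises the miss rate, since a defect must now be caught twice. The rates below are assumed for illustration, and the sketch further assumes the two passes err independently, which real systems may not.

```python
# Illustrative per-pass error rates for a single inspection (assumed values).
f = 0.05  # false alarm rate: good part wrongly rejected
m = 0.10  # miss rate: defective part wrongly accepted

# Re-inspect rejects; final rejection requires two rejections in a row.
# Assumes the two passes err independently of each other.
false_alarm_two_pass = f * f          # good part flagged twice: 0.25%
miss_two_pass = 1 - (1 - m) ** 2      # defect not caught both times: 19%

print(f"False alarms: {f:.1%} -> {false_alarm_two_pass:.2%}")
print(f"Misses:       {m:.1%} -> {miss_two_pass:.1%}")
```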
Application Challenges
The description above may give the impression that machine-operated visual inspections are so challenging that few can implement them effectively. In truth, while autonomous inspection is not a simple “plug-and-play” solution, it can be extremely useful. Some tasks that are well within the reach of today’s systems include:
Recognizing parts and routing them properly, for example, to be sent to a riser cutting cell or machining operation.
Identifying the presence or absence of a feature, such as gate removal, hole clearance, sand removal, etc.
Evaluating a specific planar surface for discontinuities as small as 1 mm in diameter.
Performing dimensional measurements to ±0.01 mm (a back-of-the-envelope resolution check follows this list).
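That last figure implies a demanding camera setup. The sketch below runs the rough arithmetic; the field of view, sensor resolution, and subpixel factor are illustrative assumptions.

```python
# Back-of-the-envelope check: what camera resolution does a +/-0.01 mm
# measurement imply? All values here are assumptions for illustration.
field_of_view_mm = 100.0   # width of the imaged region
sensor_pixels = 5120       # horizontal pixel count of the camera
subpixel_factor = 10       # edge-finding algorithms often resolve ~1/10 pixel

mm_per_pixel = field_of_view_mm / sensor_pixels        # ~0.02 mm per pixel
effective_resolution = mm_per_pixel / subpixel_factor  # ~0.002 mm

print(f"Raw pixel size on part: {mm_per_pixel:.4f} mm")
print(f"Effective measurement resolution: {effective_resolution:.4f} mm")
# A common rule of thumb asks for resolution 5-10x finer than the tolerance,
# so this setup would be plausible for +/-0.01 mm on a 100 mm field of view.
```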
Some tasks or situations where autonomous inspection is currently very difficult include:
Fully inspecting parts with a complex geometry, especially with critical interior features.
Evaluating texture or textural lack of uniformity.
Inspecting polished or glossy surfaces, or those with a very light-absorbing matte finish.
Inspecting optically for cracks or thin laminar indications (these are better detected with NDT methods such as dye penetrant or magnetic particle inspection, which enhance their visual contrast).
Inspecting low-volume products with high dimensional variation.
The technology for robotics, AI software, and image capturing equipment is constantly improving, and it is best to discuss a potential application with a “general contractor” type automation company that can help determine the best equipment and layout for the specific task identified.
A great starting place to find help is your state’s Manufacturing Extension Partnership (MEP). Often associated with a university, these organizations are government-funded entities whose reason for existence is to support manufacturing development. They have access to contacts and resources that can prove invaluable.