How Hyperspectral Image Data Supports Machine Vision
Discover how easy it can be by watching the video from our presentation at CHII 2018 in Graz.
The Power of Spectroscopy
Spectroscopy enables users to identify spectral features that are invisible to common cameras or the human eye. These features are typically directly related to the optical properties of the analyzed surface. Since each material has a unique spectral signature, such data can be used not only to distinguish specific materials from one another, but also to make qualitative statements about the analyzed object. Spectral imaging additionally allows for the examination of the spatial distribution of different materials and quality variations.
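To make the idea of a "unique spectral signature" concrete, the sketch below compares a pixel spectrum against reference signatures using the spectral angle, a common shape-based similarity measure in spectral analysis. The band count and all reflectance values are invented for illustration.

```python
import numpy as np

def spectral_angle(s, r):
    """Angle (radians) between a measured spectrum s and a reference r.
    Smaller angles mean more similar spectral shapes."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference signatures (reflectance over four bands)
references = {
    "material_a": np.array([0.10, 0.15, 0.40, 0.55]),  # rising spectrum
    "material_b": np.array([0.30, 0.28, 0.25, 0.22]),  # falling spectrum
}

# A measured pixel: its shape closely follows material_a
pixel = np.array([0.11, 0.16, 0.38, 0.52])
best = min(references, key=lambda m: spectral_angle(pixel, references[m]))
print(best)  # → material_a
```

Because the angle compares spectral shape rather than absolute intensity, the match is robust to uniform brightness changes, which is one reason signature-based matching works despite varying illumination.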
Quantification, Qualification, and Classification
A key focus is identifying different materials and surfaces where the human eye is insufficient. Spectral features may be too small to recognize, hidden in the near-infrared spectrum, or missed due to the eye’s inability to keep up with fast-moving processes. Image classification techniques help identify these differences and quantify the results. Hyperspectral imaging, when used for the supervision and evaluation of industrial processes, can support and even automate decisions, speeding up processes and ultimately saving money.
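"Quantifying the results" typically means turning a classifier's per-pixel label map into counts and area fractions per class, which is what an automated decision would act on. A minimal sketch with an invented label map and hypothetical class names:

```python
import numpy as np

# Hypothetical output of a pixel classifier: one class id per pixel
label_map = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
])
class_names = {0: "background", 1: "good", 2: "defect"}

# Quantify: pixel counts and area fractions per class
ids, counts = np.unique(label_map, return_counts=True)
for i, c in zip(ids, counts):
    print(f"{class_names[i]}: {c} px ({c / label_map.size:.0%})")
```

A threshold on such fractions (e.g. "reject if the defect share exceeds 5 %") is the kind of rule that lets hyperspectral classification automate a process decision.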
Challenges in Software Development
Setting up an appropriate software application to retrieve information from spectral data typically involves extensive development, testing, and evaluation. This process can be time-consuming and often suffers from a lack of expertise due to the sophisticated requirements in fields such as mathematics, statistics, remote sensing, optics, and programming.
Providing a Solution
To address these challenges, we now offer support for our customers. Through our collaboration with perClass BV, a software company specializing in tools for interpreting spectral images and machine learning solutions, users can (1) record spectral data, (2) create a statistical classifier for specific materials, and (3) apply this classifier to the live data stream as a plugin to the Cubert Utils software—all within minutes.
The perClass software is a classification tool based on machine learning and includes state-of-the-art classifiers such as Support Vector Machines and Random Forests. With perClass Mira, a GUI built on the perClass engine, users can obtain working classifiers without a deep understanding of machine learning and classification techniques.
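perClass itself is proprietary, but the underlying workflow it automates, training a Random Forest on labeled pixel spectra and applying it to new pixels, can be sketched with scikit-learn. All spectra below are synthetic stand-ins invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_bands = 50

def make_class(center, n=200):
    """Simulate n labeled pixel spectra scattered around a class centre."""
    return center + 0.02 * rng.standard_normal((n, n_bands))

# Three hypothetical material classes with slightly different signatures
base = np.linspace(0.2, 0.6, n_bands)
X = np.vstack([make_class(base), make_class(base + 0.05), make_class(base[::-1])])
y = np.repeat([0, 1, 2], 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new, unseen spectrum drawn from class 1's distribution
new_pixel = base + 0.05 + 0.02 * rng.standard_normal(n_bands)
pred = clf.predict(new_pixel.reshape(1, -1))
print(pred[0])  # → 1
```

In a real workflow the labeled training pixels would come from user annotations on a recorded hyperspectral image, as described for perClass Mira below, rather than from simulated spectra.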
Real-time Classification Using Machine Vision
To demonstrate the potential of hyperspectral cameras for machine vision, we placed samples of different herbs (chamomile, oregano, basil) on a rotary plate in our laboratory. The hyperspectral snapshot camera FireflEYE 185 was positioned above the samples, equipped with a 23mm lens, providing a 13° field of view. This setup ensures that the camera captures all necessary details and sufficient spectral information from the samples.
Image Capture and Classification Training
Fig. 1: The samples as captured by the hyperspectral camera.
Fig. 2: The spectra shown in Fig. 4 represent the spectral information of all pixels within the corresponding rectangles in Fig. 3. The reflectance of the different herbs is similar across the entire spectrum, which makes it difficult for the classifier to distinguish them.

We recorded images with our software to train the classifier.
After exporting the recorded images to perClass Mira, the first step in training was to define three classes for the herbs and one for the background. This was done by simply labeling known pixels within the image (Fig. 5). Using this reference information, the model was trained and directly applied to the data (Fig. 6). The initial classification results were promising, although some artifacts in the form of misclassified pixels were observed.
Optimizing the Model
To improve the model and reduce misclassification, several techniques can be applied interactively in Mira. A simple step in this example was to exclude some bands at the beginning and end of the wavelength region during model training. This was done by redefining the band start and end (Fig. 7). The goal was to reduce the spectral information required for the classifier to achieve an acceptable result, thereby making the model less prone to errors.
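Outside a GUI such as Mira, the same band-trimming step can be expressed directly on the data cube. The wavelength range and band count below are illustrative, not the camera's exact specification:

```python
import numpy as np

# Hypothetical cube: height x width x bands, with band-centre wavelengths
cube = np.random.rand(100, 100, 125)
wavelengths = np.linspace(450, 950, 125)  # nm, illustrative values

# Keep only bands inside a trimmed window, dropping the often-noisy
# edges of the sensor's wavelength range before training
lo, hi = 470, 900
keep = (wavelengths >= lo) & (wavelengths <= hi)
trimmed = cube[:, :, keep]
print(trimmed.shape)  # fewer bands than the original cube
```

Restricting the classifier to the informative part of the spectrum both removes noisy edge bands and reduces the number of features the model must handle, which is the error-robustness effect described above.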
Once the classifier delivered satisfactory results, it was easily exported from the perClass Mira interface and integrated into the Cubert Cuvis software, where it was directly applied to the live data stream (Fig. 8). The classifier performed as expected, especially considering it took only a few minutes to generate and optimize. Most of the pixels were classified correctly (chamomile in purple, basil in blue, oregano in green, and background in dark red). The few misclassified pixels were primarily in the border regions of the herbs, which was expected since we did not define a class for those spectrally mixed pixels. The stability of the classifier was demonstrated when the rotary plate was activated (see the video). The pixels were still classified correctly, despite changes in light conditions, such as the illumination angle for each pixel.
This example showcases the strong potential of hyperspectral snapshot cameras and how intelligent software solutions, such as machine vision, can be a valuable asset for various applications, particularly when working with live data.
About the Author
Dr. Matthias Locherer has been the Sales Director at Cubert GmbH since 2017. With a PhD in Earth Observation from Ludwig Maximilian University of Munich, he brings extensive expertise in remote sensing, spectral imaging, and data analysis. Matthias has contributed to numerous research projects and publications, particularly in the multispectral monitoring of biophysical and biochemical parameters using hyperspectral satellite missions. His deep knowledge of optical measurement techniques and physical modeling makes him a key driver in advancing innovative hyperspectral technologies at Cubert.