# Hyperspectral


# Multispectral imaging

Multispectral imaging involves measuring the wavelength-dependent distribution of energy at imaged points in a scene. Sometimes we know the spectral power distribution (SPD) of the ambient illumination; then we try to estimate the spectral reflectance functions of the scene points. In color digital imaging applications, SPDs are usually digitized by sampling them every 10 nm over the wavelength interval [400, 700] nm (roughly the visible range of the spectrum), so we need to measure the values of the SPD at 31 different wavelengths.
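As a quick check of this sampling convention, the grid of sample wavelengths can be generated as follows (a minimal Python sketch):

```python
import numpy as np

# Sample the visible range [400, 700] nm in 10 nm steps.
wavelengths = np.arange(400, 701, 10)
print(wavelengths.size)   # 31 samples, one SPD value per wavelength
```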

Commonly, special instruments designed to measure SPDs (spectrophotometers or spectroradiometers) make measurements that are averaged over a large number of points in the scene. To acquire 2D multispectral image data, a common technique is to use a digital camera (color or monochrome) augmented in some way.

Digital color cameras usually give three readings. These are three differently weighted sums of the SPDs of scene points. The three sets of weights have dominating values in different regions of the spectrum, typically red, green, and blue. Without augmenting these three measurements in some way, we would need to estimate 31 values at each point from only three measurements. This is usually a difficult task. Instead, it is common to take multiple acquisitions with a color camera to get more measurements. The multiple measurements are taken either with different optical filters or with different illuminants. An optical filter or an illuminant alters the weights associated with the summed SPDs and allows us to increase the number of measurements. For example, we may acquire three different images with a color camera under different illuminant combinations to make a total of 9 measurements. Then we only have to estimate 31 values from 9 measurements. This is usually possible since common SPDs vary slowly with wavelength.

Mathematically, for each point in the scene we have to solve the problem:

$y = S^T x\,,$

where $S$ has size $31 \times n$, $n$ is the number of measurements, and each column of $S$ holds the weights associated with one camera color filter and illuminant (or optical filter) combination. We must estimate the 31-vector $x$ from the $n$-vector of measurements $y$. There are several ways to address this problem. Some of them are described in the Reflectance and Illuminant Estimation page.
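A minimal numerical sketch of this estimation with NumPy. The slow variation of real SPDs is what makes the underdetermined problem tractable; here we stand in for it with a hypothetical low-frequency cosine basis $B$, and use a random $S$ in place of a real calibration (PCA bases trained on measured SPDs are also common):

```python
import numpy as np

n_wave, n_meas, n_basis = 31, 9, 6   # 31 wavelength samples, 9 measurements

# S: weights for each camera-channel/illuminant combination (assumed known
# from calibration; random here purely for illustration).
rng = np.random.default_rng(0)
S = rng.random((n_wave, n_meas))

# Because SPDs vary slowly, represent x with a few smooth basis functions.
t = np.linspace(0.0, 1.0, n_wave)
B = np.stack([np.cos(np.pi * k * t) for k in range(n_basis)], axis=1)  # 31 x 6

x_true = B @ rng.random(n_basis)      # a smooth "scene" SPD
y = S.T @ x_true                      # the 9 camera measurements

# Solve y = S^T B w for the basis weights w, then reconstruct x = B w.
w, *_ = np.linalg.lstsq(S.T @ B, y, rcond=None)
x_hat = B @ w
print(np.allclose(x_hat, x_true))     # True: x_true lies exactly in the basis
```

With real data the reconstruction is approximate, since real SPDs do not lie exactly in any low-dimensional basis.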

# Experiments - data acquisition

Our multispectral imaging setup is built around the LED-based illuminator Max Klein designed and built for his Psych 221 project (Winter 2008). The specifications of this illuminator and details about its design and construction (including CAD diagrams and Spice models) are online at his project webpage. Here we describe its use to acquire multispectral images. We need the following equipment:

• A calibrated camera
• The multispectral illuminator and its controller (hyperterminal on a PC)
• A uniform gray chart for spatial calibration

## Camera calibration

During camera calibration we determine the spectral sensitivity functions of the camera sensor. These functions describe the wavelength-dependent weights associated with the camera color channels. We can find these weights by relating camera measurements of a number of known monochromatic lights to their corresponding SPD measurements. We solve the following mathematical problem to find $S$:

$Y = S^T X\,,$

where we measure $k$ different lights with the camera and a spectroradiometer. $Y$ is of size $3 \times k$ and its columns hold the camera measurements. The columns of $X$ hold the corresponding SPD measurements.
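In matrix terms this is an ordinary least-squares problem for $S$. A small NumPy sketch, with synthetic random data standing in for the real camera and spectroradiometer measurements:

```python
import numpy as np

n_wave, n_lights = 31, 50   # 50 calibration lights, 31 wavelength samples

rng = np.random.default_rng(1)
X = rng.random((n_wave, n_lights))   # SPDs measured by the spectroradiometer
S_true = rng.random((n_wave, 3))     # the sensitivities we want to recover
Y = S_true.T @ X                     # corresponding camera RGB measurements

# Least-squares solution of Y = S^T X, rewritten as X^T S = Y^T.
S_hat = np.linalg.lstsq(X.T, Y.T, rcond=None)[0]   # 31 x 3
print(np.allclose(S_hat, S_true))    # True for this noise-free synthetic data
```

With real, noisy measurements the recovery is approximate, and regularization of the least-squares solution may be needed.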

Figure 1 shows a diagram of how the different devices are set up. The monochromator is the light source and is capable of producing light in several different narrowband ranges of wavelengths. The light from the monochromator is made to fall on a standard reflectance target. The spectroradiometer measures the SPD of the light, while the camera to be calibrated takes an image of the same light. We collect measurements of SPDs and corresponding camera images to find $S$.

Figure 1: Camera calibration. A monochromator (the ORIEL 3000) is used to shine several narrowband lights onto a reflectance target. The monochromatic light is measured simultaneously with the camera and a spectroradiometer (the PR-650 or the PR-715).

Figure 2: Remove the integrating sphere. We need to remove the integrating sphere that is typically attached to the monochromator. Otherwise, the intensity of the light is very low and we cannot acquire images in reasonable lengths of time.

### Notes

• Typically, light from the monochromator comes out through an integrating sphere that blurs the light over a uniform area. In our experiments, we found that the integrating sphere reduces light intensity too much. So, we remove the integrating sphere; this gives a light with more focused energy.

## Spatial calibration

The different LED types are arranged on a grid on the illuminator's PCB. Since the LED types are shifted with respect to each other, the light fall-off pattern due to each LED is different. Also, the beam widths of the LED types are not similar. To account for these effects, we must individually correct for the lens fall-off of each color channel for each LED type. To find the correction factors used to correct for lens/light combinations, we use images of a uniform gray chart.

The gray chart may have some texture. To prevent the fine texture on the gray chart from affecting the fall-off calibration, capture the gray chart images with a large defocus. You can do this by turning off the autofocus on the camera and manually setting the focus to infinity. Figure 9 shows one channel of an image taken with the blue LED. Note that some texture shows up in this image. Correcting scene images by dividing by the raw calibration image would introduce the structure of the gray chart into the scene. To prevent this, we find the best polynomial fit to the gray chart image data and use the fitted surface for spatial calibration. The polynomial-fitted version of the image in Figure 9 is shown in Figure 10.

Figure 9: GrayExampleImage.png  Figure 10: GrayExamplePolyfit.png
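The polynomial-fit step can be sketched as follows. This is a NumPy illustration with a synthetic fall-off image, not the actual calibration code (which is in lensfalloff.m); the function name and degree are illustrative choices:

```python
import numpy as np

def polyfit_surface(img, degree=3):
    """Fit a 2-D polynomial surface to a fall-off image and return it."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w            # normalize coordinates for conditioning
    y = yy.ravel() / h
    # Design matrix with all monomials x^i * y^j, i + j <= degree.
    cols = [x**i * y**j for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

# Synthetic example: a smooth fall-off pattern with fine texture on top.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
falloff = 1.0 - 0.5 * (((xx - w / 2) / w) ** 2 + ((yy - h / 2) / h) ** 2)
rng = np.random.default_rng(2)
gray = falloff * (1 + 0.02 * rng.standard_normal((h, w)))  # textured chart

fit = polyfit_surface(gray)
corrected = gray / fit   # divide scene images by the smooth fit, not by gray
```

Dividing by the smooth fitted surface corrects the fall-off without stamping the chart's texture into the scene images.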

### Notes

• The code for spatial calibration is in $svn/pdcprojects/LEDms/Code/CameraCalibration. The relevant file is lensfalloff.m

## Controlling the camera

We use Nikon's Camera Control Pro software to control the camera remotely from the PC. Figure 11 shows the main window. Camera settings should not be changed between acquisitions (under different LEDs). Some parameters should always be fixed:

1. Output type - RAW at finest resolution
2. ISO - same setting at which camera was calibrated (ISO 100 for the D200 and the D2Xs)
3. AWB - should not matter if the output is RAW; we nevertheless set this to daylight
4. f # - same setting at which the camera was calibrated (f/8 for the Nikkor 50 mm f/1.8 lens)
5. Exposure mode - manual
6. Exposure compensation - off

The only parameter you will need to change is shutter speed. Determine the best exposure by varying the shutter speed using the Nikon Camera Control Pro software. Look at the RGB histogram as you vary the shutter speed (you can see a rough histogram in the image preview window - Figure 12). The best exposure is the longest shutter speed for which none of the RGB channels is saturated.

Figure 11: Nikon ccp.png  Figure 12: Nikon ccp rgb histogram.png
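The exposure rule above can be sketched in code. This is a hypothetical helper assuming 12-bit RAW values; `best_shutter` and the synthetic exposure stack are illustrations, not part of the acquisition software:

```python
import numpy as np

def best_shutter(images_by_speed, max_value=4095):
    """Return the longest shutter duration whose image has no saturated pixels.

    images_by_speed maps shutter duration (seconds) to a RAW image array;
    12-bit RAW (max_value = 4095) is assumed here.
    """
    ok = [s for s, img in images_by_speed.items() if img.max() < max_value]
    return max(ok) if ok else None

# Synthetic example: pixel values scale with duration and clip at 4095.
scene = np.linspace(0, 1000, 64).reshape(8, 8)
stack = {s: np.minimum(scene * s, 4095) for s in (1, 2, 4, 8)}
print(best_shutter(stack))   # → 4 (at 8 s the brightest pixels saturate)
```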

### Notes

• Use the Nikon Camera Control Pro software to specify where the data will be stored and the naming convention. We suggest that you name the calibration files "calib_N" where N is the order in which the lights turn on. Use the tools>download options to specify the folder. Select "edit" to specify the prefix (e.g. "calib") and starting number, N.

# Recovery

The modern recovery code (before Manu leaves) will:

• Take in the 9-channel images from the Max Klein illuminator and the Nikon (no IR-blocking filter) setup
• Use look-up tables, defined with labels, comments, and so forth, built by Steve
• Produce multispectral image representations

The way in which the illuminator and data acquisition software works should be above.

The principles of designing the look-up tables should be in the Reflectance and Illuminant Estimation part of the wiki. These are the ideas that will be in Steve's dissertation.

JEF and SL are very interested in evaluating the reconstructions. This will also be in the Reflectance and Illuminant Estimation section (and possibly a paper). We can do this using the PR-650 and the camera data along with the sLUTs in some specific cases. JEF wants to work with people (hard). BW and hopefully SL want to work with flowers. Or toy cars. Or stuff that doesn't mind staying still for an hour.

We could do the evaluation by printing large targets, or by using MCC targets, or by sewing cloth targets, or something. We can't really evaluate based on natural surfaces. We could put some of the targets on cylinders (tubes), or systematically change the position/angle.

In evaluating, we should probably get illumination numbers and do the evaluation for a known illuminant rather than for an unknown. If that works, we then expand to trying to go for the color signal or for the illuminant and surface.