Hyperspectral

Multispectral imaging

Multispectral imaging involves measuring the wavelength-dependent distribution of energy at imaged points in a scene. Sometimes we know the spectral power distribution (SPD) of the ambient illumination; then we try to estimate the spectral reflectance functions of the scene points. In color digital imaging applications, SPDs are usually digitized by sampling every 10 nm over the wavelength interval [400, 700] nm (roughly the visible range of the spectrum), so we need to measure the value of the SPD at 31 different wavelengths.

Commonly, special instruments designed to measure SPDs (spectrophotometers or spectroradiometers) make measurements that are averaged over a large number of points in the scene. To acquire 2D multispectral image data, a common technique is to use a digital camera (color or monochrome) augmented in some way.

Digital color cameras usually give three readings. These are three differently weighted sums of the SPDs of scene points. The three sets of weights have their dominant values in different regions of the spectrum, typically red, green, and blue. Without augmenting these three measurements in some way, we would need to estimate 31 values at each point from only three measurements, which is usually a difficult task. Instead, it is common to take multiple acquisitions with a color camera to get more measurements. The multiple measurements are taken either with different optical filters or with different illuminants. An optical filter or an illuminant alters the weights associated with the summed SPDs and allows us to increase the number of measurements. For example, we may acquire three different images with a color camera under different illuminant combinations for a total of 9 measurements, as the sketch below illustrates. Then we only have to estimate 31 values from 9 measurements. This is usually possible because common SPDs vary slowly with wavelength.
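As an illustrative sketch (not code from this project), here is how the combined weight matrix for three acquisitions under three illuminants could be assembled in MATLAB. The variable names C and E and the random placeholder data are assumptions for the example:

  % Illustrative sketch only (not the lab's code): build the 31 x 9
  % weight matrix S for three acquisitions under three illuminants.
  % C holds the camera channel sensitivities, E the illuminant SPDs;
  % both are random placeholders here.
  C = rand(31, 3);                       % camera sensitivities (31 x 3)
  E = rand(31, 3);                       % illuminant SPDs (31 x 3)
  S = zeros(31, 9);
  for i = 1:3
      % Illuminant i re-weights all three camera channels
      S(:, 3*(i-1)+(1:3)) = bsxfun(@times, C, E(:, i));
  end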

Mathematically, for each point in the scene we have to solve the problem:

<math> y = S^T x\,, </math>

where <math> S </math> has size <math>31 \times n</math>. <math> n </math> is the number of measurements and each column of <math> S </math> has the weights associated with a camera color filter and illuminant (or optical filter) combination. There are several ways to address this problem. Some of them are described in the Reflectance and Illuminant Estimation page.
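One simple approach, shown here only as a sketch, is to assume the SPD lies in a low-dimensional linear model; the polynomial basis and the test data below are placeholders for illustration, not the methods on the Reflectance and Illuminant Estimation page:

  % Sketch: recover a 31-sample SPD x from n measurements y = S'*x by
  % assuming x = B*w for a low-dimensional basis B (31 x m, m <= n).
  wave = 400:10:700;                        % 31 wavelength samples
  S = rand(31, 9);                          % placeholder weight matrix
  xTrue = 0.5 + 0.3*sin(2*pi*(wave - 400)/300)';   % smooth test SPD
  y = S' * xTrue;                           % the 9 measurements

  m = 6;                                    % linear-model dimension
  B = bsxfun(@power, (wave(:) - 550)/150, 0:m-1);  % polynomial basis
  w = (S' * B) \ y;                         % least-squares model weights
  xHat = B * w;                             % estimated SPD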

Experiments - data acquisition

Our multispectral imaging setup is built around the LED-based illuminator Max Klein designed and built for his Psych 221 project (Winter 2008). The specifications of this illuminator and details about its design and construction (including CAD diagrams and Spice models) are online at his project webpage. Here we describe its use to acquire multispectral images. We need the following equipment:

  • A calibrated camera
  • The multispectral illuminator and its controller (HyperTerminal on a PC)
  • A uniform gray chart for spatial calibration

Camera calibration

During camera calibration we determine the spectral sensitivity functions of the camera sensor. These functions describe the wavelength-dependent weights associated with the camera color channels. We can find these weights by relating camera measurements of a number of known monochromatic lights to their corresponding SPD measurements. We solve the following mathematical problem to find <math> S </math>:

<math> Y = S^T X\,, </math>

where we measure <math> k </math> different lights with the camera and a spectroradiometer. <math> Y </math> is of size <math> 3 \times k </math> and its columns hold the camera measurements. <math> X </math> is of size <math> 31 \times k </math> and its columns hold the corresponding SPD measurements.
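A minimal sketch of this solve in MATLAB, with random placeholder data standing in for the real measurements (the actual calibration code is filtersCharacterize.m, noted below):

  % Sketch: estimate the 31 x 3 sensor matrix S from k calibration
  % lights, given Y (3 x k camera values) and X (31 x k SPDs).
  % Random placeholders stand in for real measurements.
  k = 40;                          % number of calibration lights
  X = rand(31, k);                 % spectroradiometer SPDs
  Strue = rand(31, 3);
  Y = Strue' * X;                  % camera measurements
  % Least-squares solution of Y = S'*X (k >= 31 measurements are
  % needed for a unique solution without added constraints)
  S = (Y / X)';                    % S' = Y * pinv(X), in effect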

Figure 1 shows a diagram of how the different devices are set up. The monochromator is the light source and is capable of producing light in several different narrowband ranges of wavelengths. The light from the monochromator is made to fall on a standard reflectance target. The spectroradiometer measures the SPD of the light, while the camera to be calibrated takes an image of the same light. We collect measurements of SPDs and corresponding camera images to find <math> S </math>.

Figure 1: Camera calibration. A monochromator (the ORIEL 3000) is used to shine several narrowband lights onto a reflectance target. The monochromatic light is measured simultaneously with the camera and a spectroradiometer (the PR-650 or the PR-715).
Figure 2: Removing the integrating sphere. We need to remove the integrating sphere that is typically attached to the monochromator. Otherwise, the intensity of light is very low and we cannot acquire images in a reasonable length of time.

Notes

  • Typically, light from the monochromator exits through an integrating sphere that diffuses the light over a uniform area. In our experiments, we found that the integrating sphere reduces the light intensity too much, so we remove it; this gives a light with more focused energy.
  • The code for camera calibration is in PDC at: $svn/pdcprojects/LEDms/CameraCalibration. The relevant file is: filtersCharacterize.m

Multispectral illuminator

Figures 3-5 show the multispectral illuminator setup in Packard 070. We have placed the illuminator on an optical grid platform facing a wall. Figure 5 indicates the connections we need to make:

  • The power supply.
  • The serial connection to a PC (we can use a Serial to USB adapter to operate the illuminator from newer PCs that do not have serial ports).
  • The cable to the remote shutter release of the camera.
Figure 3: Ms illuminator front.jpg
Figure 4: Ms illuminator side.jpg
Figure 5: Ms illuminator ports.png

The interface used to control the illuminator operates through Windows HyperTerminal. The HyperTerminal properties are available on Max Klein's project page. An instance of HyperTerminal with the appropriate properties, called leds.ht, is saved on the Desktop of the lab PC.

Note: Please follow this sequence of operations while using the illuminator:

  1. Connect the remote shutter release to the camera
  2. Connect the serial cable to the PC
  3. Open leds.ht on the PC
  4. Connect power cable
  5. Turn camera on

This order is important. If you open leds.ht after power to the illuminator has been turned on, you will not receive the initialization commands used to take over control of the illuminator via the serial link. The initialization stage requires leds.ht to be open before the illuminator is powered on.

The illuminator's PC interface - leds.ht

As soon as you power on the illuminator, a list of instructions will appear in your open leds.ht HyperTerminal window. Figure 6 is a screenshot of this list. You can return to this list at any point by typing '?' at the prompt. Note that backspace does not work in this HyperTerminal. You can print the current configuration by typing 'p' at the prompt. Figure 7 is a screenshot of the default configuration.

Figure 6: Leds ht intro screen.png
Figure 7: Leds ht default.png

An example

Figure 8: Leds ht config visible.png
In the default case, all 9 LED types are enabled and the switching time is set to 400 ms. To capture images in the visible range, you have to turn off LEDs 1, 8, and 9. You can do this by typing 'Dn' at the prompt, one LED at a time, where n is the LED number. Figure 8 shows a screenshot with the sequence of commands used to set up the illuminator to capture visible-range images.
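If you prefer scripting over HyperTerminal, a sketch like the following could send the same commands over the serial link from MATLAB. This is hypothetical: the port name and baud rate are assumptions (check Max Klein's project page for the real serial settings), and only '?', 'p', and 'Dn' are commands documented on this page:

  % Hypothetical sketch: drive the illuminator from MATLAB instead of
  % HyperTerminal.  The port and baud rate are assumptions, not the
  % real settings for this device.
  s = serialport("COM1", 9600);
  writeline(s, "p");              % print the current configuration
  for n = [1 8 9]
      writeline(s, "D" + n);      % disable LED n (visible-range setup)
  end
  clear s                         % deleting the object closes the port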

Notes

  • You must set the wait time for each camera fire to a value greater than the exposure time of the camera. We also recommend leaving a buffer of about 150 ms; this allows sufficient time for the camera to transfer images from its buffer to the PC/CF card and ensures smooth, continuous operation. For example, if your exposure time is 500 ms, you should set the wait time for each camera fire to at least 650 ms (see the short sketch after these notes).
  • If the illuminator has been in operation for a long time, it may overheat. Max has built in some protection against this: at a board temperature of 50 °C, the illuminator turns off automatically.
  • A copy of leds.ht is in: $svn/pdcprojects/LEDms/Code/Tools/
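A trivial sketch of the wait-time rule above (the values are the example numbers from the note, not device limits):

  % Sketch of the wait-time rule: wait >= exposure + ~150 ms buffer
  exposure = 0.500;                % camera exposure time (s)
  buffer   = 0.150;                % recommended transfer buffer (s)
  waitTime = exposure + buffer;    % 0.650 s, the minimum switching time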

Spatial calibration

The different LED types are arranged on a grid on the illuminator's PCB. Since the LED types are shifted with respect to each other, the light fall-off pattern of each LED type is different. Also, the beam widths of the LED types differ. To account for these effects, we must individually correct the lens fall-off of each color channel for each LED type. To find the correction factors for the lens/light combinations, we use images of a uniform gray chart.

The gray chart may have some texture. To prevent the fine texture on the gray chart from affecting the fall-off calibration, capture the gray chart images with a large defocus: turn off the autofocus on the camera and manually set the focus to infinity. Figure 9 shows one channel of an image taken with the blue LED; note that some texture shows up in this image. Correcting scene images by dividing by this raw calibration image would introduce the structure of the gray chart into the scene. To prevent this, we find the best polynomial fit to the gray chart image data and use that fit for spatial calibration (a sketch of such a fit follows the figures below). The polynomial-fitted version of the image in Figure 9 is shown in Figure 10.

Figure 9: GrayExampleImage.png
Figure 10: GrayExamplePolyfit.png
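A minimal sketch of the polynomial fit, assuming a second-order surface and placeholder file names (the actual code is lensfalloff.m, noted below):

  % Sketch: fit a smooth 2-D polynomial surface to one channel of a
  % gray-chart image and use it as the fall-off correction for one
  % camera-channel / LED combination.  The file name and polynomial
  % order are assumptions for the example.
  img  = im2double(imread('grayChart.png'));   % placeholder image
  gray = img(:, :, 1);                         % one color channel
  [r, c] = size(gray);
  [X, Y] = meshgrid(linspace(-1, 1, c), linspace(-1, 1, r));

  % Second-order polynomial in x and y: six basis terms
  A = [ones(numel(X), 1), X(:), Y(:), X(:).*Y(:), X(:).^2, Y(:).^2];
  p = A \ gray(:);                             % least-squares fit
  falloff = reshape(A * p, r, c);              % smooth fall-off surface
  % A scene image under the same LED would then be corrected by
  % dividing: corrected = scene ./ falloff;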

Notes

  • The code for spatial calibration is in $svn/pdcprojects/LEDms/Code/CameraCalibration. The relevant file is lensfalloff.m

Controlling the camera

We use Nikon's Camera Control Pro software to control the camera remotely from the PC. Figure 11 shows the main window. Camera settings must not be changed between acquisitions (under different LEDs). Some parameters should always be fixed:

  1. Output type - RAW at finest resolution
  2. ISO - same setting at which the camera was calibrated (ISO 100 for the D200 and the D2Xs)
  3. AWB - should not matter if the output is RAW; we nevertheless set this to daylight
  4. f # - same setting at which the camera was calibrated (f/8 for the Nikkor 50 mm f/1.8 lens)
  5. Exposure mode - manual
  6. Exposure compensation - off

The only parameter you will need to change is the shutter speed. Determine the best exposure by varying the shutter speed in the Nikon Camera Control Pro software. Watch the RGB histogram as you vary the shutter speed (a rough histogram appears in the image preview window - Figure 12). The best exposure is the longest shutter duration for which none of the RGB channels saturate; a sketch of this check appears after the figures below.

Figure 11: Nikon ccp.png
Figure 12: Nikon ccp rgb histogram.png
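As an illustrative sketch of the saturation test (placeholder shutter durations and file names; in practice you would simply read the histogram in the Camera Control Pro preview):

  % Sketch: pick the longest unsaturated exposure from a test series.
  % Shutter durations and file names are placeholders.
  speeds = [1/8 1/4 1/2 1 2];                  % candidate durations (s)
  best = NaN;
  for i = 1:numel(speeds)
      img = im2double(imread(sprintf('test_%d.png', i)));
      if max(img(:)) < 1                       % no channel saturated
          best = speeds(i);                    % longest so far
      end
  end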

Notes

  • Use the Nikon Camera Control Pro software to specify where the data will be stored and the naming convention. We suggest naming the calibration files "calib_N", where N is the order in which the lights turn on. Use Tools > Download Options to specify the folder, and select "Edit" to specify the prefix (e.g., "calib") and the starting number N.

Recovery

The modern recovery code (to be completed before Manu leaves) will:

  • Take in the 9-channel images from the setup with the Max Klein illuminator and the Nikon camera without an IR-blocking filter.
  • Use look-up tables, built by Steve, that are documented with labels, comments, and so forth.
  • Produce multispectral image representations.

The operation of the illuminator and the data acquisition software is described above.

The principles of designing the look-up tables should be in the Reflectance and Illuminant Estimation part of the wiki. These are the ideas that will be in Steve's dissertation.

JEF and SL are very interested in evaluating the reconstructions. This will also be in the Reflectance and Illuminant Estimation section (and possibly a paper). We can do this using the PR-650 and the camera data along with the sLUTs in some specific cases. JEF wants to work with people (hard). BW and hopefully SL want to work with flowers, or toy cars, or other things that don't mind staying still for an hour.

We could do the evaluation by printing large targets, by using MCC targets, by sewing cloth targets, or something similar. We can't really evaluate based on natural surfaces. We could put some of the targets on cylinders (tubes), or systematically change their position/angle.

In evaluating, we should probably measure the illumination and do the evaluation for a known illuminant rather than an unknown one. If that works, we then expand to estimating the color signal, or the illuminant and surface together.
