# Hyperspectral


# Multispectral imaging

Multispectral imaging involves measuring the wavelength-dependent distribution of energy at imaged points in a scene. Sometimes we will know the spectral power distribution (SPD) of the ambient illumination; then we try to estimate the spectral reflectance functions of the scene points. In color digital imaging applications, SPDs are usually digitized by sampling them every 10 nm over the wavelength interval [400, 700] nm (roughly the visible range of the spectrum), so we need to measure the SPD at 31 different wavelengths.
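As a quick check of this convention (a minimal sketch, not part of the original toolchain), sampling every 10 nm over [400, 700] nm inclusive yields exactly 31 samples:

```python
import numpy as np

# Wavelength grid: 400 nm to 700 nm inclusive, in 10 nm steps.
wavelengths = np.arange(400, 701, 10)

print(wavelengths.size)                 # 31 samples per SPD
print(wavelengths[0], wavelengths[-1])  # 400 700
```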

Commonly, special instruments designed to measure SPDs (spectrophotometers or spectroradiometers) make measurements that are averaged over a large number of points in the scene. To acquire 2D multispectral image data, a common technique is to use a digital camera (color or monochrome) augmented in some way.

Digital color cameras usually give three readings: three differently weighted sums of the SPDs of scene points. The three sets of weights have their dominant values in different regions of the spectrum, typically red, green, and blue. Without augmenting these three measurements in some way, we would need to estimate 31 values at each point from only three measurements, which is usually a difficult task. Instead, it is common to take multiple acquisitions with a color camera to get more measurements, either with different optical filters or with different illuminants. An optical filter or an illuminant alters the weights associated with the summed SPDs and so increases the number of measurements. For example, we may acquire three images with a color camera under different illuminant combinations for a total of 9 measurements; then we only have to estimate 31 values from 9 measurements. This is usually possible because common SPDs vary slowly with wavelength.

Mathematically, for each point in the scene we have to solve the problem:

$y = S^T x\,,$

where $S$ has size $31 \times n$, with $n$ the number of measurements; each column of $S$ holds the weights for one camera color filter and illuminant (or optical filter) combination. There are several ways to address this problem. Some of them are described in the Reflectance and Illuminant Estimation page.
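Because $n < 31$, the system is underdetermined, and recovery relies on the smoothness of typical SPDs. The following is a minimal sketch of one standard approach, Tikhonov-regularized least squares with a second-difference (curvature) penalty; the matrix `S` and the spectrum here are synthetic stand-ins, not the lab's calibrated weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_meas = 31, 9

# Illustrative weight matrix S (31 x 9); in practice each column holds the
# weights for one camera-channel / illuminant (or filter) combination.
S = rng.random((n_bands, n_meas))

# A smooth, synthetic reflectance-like spectrum x to recover.
wl = np.linspace(400, 700, n_bands)
x_true = 0.5 + 0.3 * np.sin(2 * np.pi * (wl - 400) / 300)

y = S.T @ x_true  # the 9 measurements (noiseless here)

# Regularized estimate: penalize the second difference of x (curvature),
# encoding the assumption that SPDs vary slowly with wavelength.
D = np.diff(np.eye(n_bands), n=2, axis=0)   # (29 x 31) second-difference operator
lam = 1e-3
A = np.vstack([S.T, np.sqrt(lam) * D])
b = np.concatenate([y, np.zeros(D.shape[0])])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.max(np.abs(x_hat - x_true)))  # reconstruction error under the smoothness prior
```

The penalty weight `lam` trades off fidelity to the 9 measurements against smoothness of the recovered spectrum; in practice it would be tuned on known reflectances.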

# Experiments - data acquisition

Our multispectral imaging setup is built around the LED-based illuminator that Max Klein designed and built for his Psych 221 project (Winter 2008). The specifications of this illuminator and details about its design and construction (including CAD diagrams and Spice models) are online at his project webpage (http://scien.stanford.edu/class/psych221/projects/08/Klein/index.html). Here we describe its use to acquire multispectral images. We need the following equipment:

- A calibrated camera
- The multispectral illuminator and its controller (a hyperterminal on a PC)
- A uniform gray chart for spatial calibration

## Camera calibration

During camera calibration we determine the spectral sensitivity functions of the camera sensor. These functions describe the wavelength-dependent weights associated with the camera color channels. We can find these weights by relating camera measurements of a number of known monochromatic lights to their corresponding SPD measurements. We solve the following mathematical problem to find $S$:

$Y = S^T X\,,$

where we measure $k$ different lights with the camera and a spectroradiometer. $Y$ is of size $3 \times k$, and its columns hold the camera measurements; the columns of $X$ (size $31 \times k$) hold the corresponding SPD measurements.
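With $k \geq 31$ well-chosen lights, $S$ can be recovered by ordinary least squares, since $Y = S^T X$ is equivalent to $X^T S = Y^T$. The following NumPy sketch uses synthetic sensitivities and lights (all values are made up) just to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(1)

n_waves, k = 31, 60                  # 31 wavelength samples, 60 calibration lights
S_true = rng.random((n_waves, 3))    # unknown sensor sensitivities (toy values)

X = rng.random((n_waves, k))         # measured SPDs of the k lights (31 x k)
Y = S_true.T @ X                     # corresponding camera responses (3 x k)

# Least-squares estimate: Y = S^T X  <=>  X^T S = Y^T, solve for S (31 x 3)
S_hat, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
```

In this noiseless toy setup the recovery is exact; with real measurements the same least-squares fit gives the best $S$ in the mean-squared-error sense.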

Figure 1 shows a diagram of how the different devices are set up. The monochromator is the light source and is capable of producing light in several different narrowband ranges of wavelengths. The light from the monochromator is made to fall on a standard reflectance target. The spectroradiometer measures the SPD of the light, while the camera to be calibrated takes an image of the same light. We collect measurements of SPDs and corresponding camera images to find $S$.

### Notes

• Typically, light from the monochromator comes out through an integrating sphere that spreads the light uniformly over an area. In our experiments, we found that the integrating sphere reduces light intensity too much. So we remove the integrating sphere; this gives a more concentrated, higher-intensity beam.

## Spatial calibration

The different LED types are arranged on a grid on the illuminator's PCB. Since the LED types are shifted with respect to each other, the light fall-off pattern due to each LED is different. Also, the beam widths of the LED types differ. To account for these effects, we must individually correct for the lens fall-off of each color channel for each LED type. To find the correction factors for each lens/light combination, we use images of a uniform gray chart.

The gray chart may have some texture. To prevent the fine texture on the gray chart from affecting the fall-off calibration, capture the gray chart images with a large defocus. You can do this by turning off the autofocus on the camera and manually setting the focus to infinity. Figure 9 shows one channel of an image taken with the blue LED. Note that some texture shows up in this image. Correcting scene images by dividing by the raw calibration image would introduce the structure of the gray chart into the scene. To prevent this, we find the best polynomial fit to the gray chart image data, and use this fit for spatial calibration. The polynomial-fitted version of the image in Figure 9 is shown in Figure 10.
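A minimal sketch of this fitting step (NumPy, with a synthetic gray-chart image; the quadratic model and all numbers are illustrative assumptions, not what lensfalloff.m necessarily does): fit a low-order 2D polynomial surface to the chart image and divide scene images by the smooth fit rather than by the textured image itself.

```python
import numpy as np

# Toy "gray chart" image: smooth fall-off plus fine texture/noise
h, w = 64, 96
yy, xx = np.mgrid[0:h, 0:w]
yn, xn = yy / h - 0.5, xx / w - 0.5            # normalized coordinates
falloff = 1.0 - 0.6 * (xn**2 + yn**2)          # lens/light fall-off (assumed)
rng = np.random.default_rng(2)
img = falloff * (1.0 + 0.02 * rng.standard_normal((h, w)))

# Fit a 2D quadratic surface a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
A = np.stack([np.ones_like(xn), xn, yn, xn**2, xn * yn, yn**2],
             axis=-1).reshape(-1, 6)
coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
fit = (A @ coef).reshape(h, w)

# Correction image: divide scene images by this, not by the textured `img`
correction = fit / fit.max()
```

The least-squares fit averages away the chart texture, so dividing by `correction` flattens the fall-off without stamping the chart's texture into the scene.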

### Notes

• The code for spatial calibration is in $svn/pdcprojects/LEDms/Code/CameraCalibration; the relevant file is lensfalloff.m.

## Controlling the camera

We use Nikon's Camera Control Pro software to control the camera remotely from the PC. Figure 11 shows the main window. Camera settings should not be changed between acquisitions (under different LEDs). Some parameters should always be fixed:

1. Output type - RAW at finest resolution
2. ISO - same setting at which camera was calibrated (ISO 100 for the D200 and the D2Xs)
3. AWB - should not matter if the output is RAW; nevertheless, we set this to daylight
4. f # - same setting at which the camera was calibrated (f/8 for the Nikkor 50 mm f/1.8 lens)
5. Exposure mode - manual
6. Exposure compensation - off

The only parameter you will need to change is the shutter speed. Determine the best exposure by varying the shutter speed in the Nikon Camera Control Pro software. Watch the RGB histogram as you vary the shutter speed (a rough histogram is visible in the image preview window - Figure 12). The best exposure is the longest shutter duration for which none of the RGB channels is saturated.
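This selection rule can be expressed compactly. The sketch below (NumPy; the helper function, 12-bit saturation level, and toy images are all assumptions for illustration) picks the longest shutter duration whose RAW image contains no clipped channel.

```python
import numpy as np

def best_shutter(raw_images, shutter_s, sat_level=4095):
    """Pick the longest shutter duration (in seconds) whose RAW image has
    no saturated pixel. `raw_images` maps duration -> HxWx3 array.
    Hypothetical helper; assumes a 12-bit sensor (saturation at 4095)."""
    ok = [t for t in sorted(shutter_s) if raw_images[t].max() < sat_level]
    return max(ok) if ok else None

# Toy example: response proportional to exposure, clipped at saturation
rng = np.random.default_rng(3)
scene = rng.uniform(100.0, 1000.0, size=(8, 8, 3))
scene[0, 0, 0] = 1000.0                        # pin the brightest pixel
speeds = [1/250, 1/125, 1/60, 1/30, 1/15]
images = {t: np.minimum(scene * t * 250.0, 4095.0) for t in speeds}

choice = best_shutter(images, speeds)          # longest unsaturated exposure
```

Here 1/60 s already clips the brightest pixel, so the rule settles on 1/125 s, mirroring the manual procedure with the histogram.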

### Notes

• Use the Nikon Camera Control Pro software to specify where the data will be stored and the file-naming convention. We suggest naming the calibration files "calib_N", where N is the order in which the lights turn on. Use Tools > Download Options to specify the folder, and select "Edit" to specify the prefix (e.g., "calib") and starting number N.

# Recovery

The modern recovery code (before Manu leaves) will

• Take in the 9-channel images from the Max Klein illuminator and Nikon no IR blocking filter set up.
• Use look-up tables, built by Steve, that are defined with labels, comments, and so forth
• Produce multi-spectral image representations

The workings of the illuminator and the data acquisition software should be documented above.

The principles of designing the look-up tables should be in the Reflectance and Illuminant Estimation part of the wiki. These are the ideas that will be in Steve's dissertation.

JEF and SL are very interested in evaluating the reconstructions. This will be in the Reflectance and Illuminant Estimation section (and possibly a paper) as well. The way we can do this is by using the PR-650 and the camera data along with the sLUTs in some specific cases. JEF wants to work with people (hard). BW and hopefully SL want to work with flowers, or toy cars, or stuff that doesn't mind staying still for an hour.

We could do the evaluation by printing large targets, or by using MCC targets, or by sewing cloth targets, or something. We can't really evaluate based on natural surfaces. We could put some of the targets on cylinders (tubes), or systematically change the position/angle.

In evaluating, we should probably get illumination numbers and do the evaluation for a known illuminant rather than for an unknown. If that works, we then expand to trying to go for the color signal or for the illuminant and surface.