ISET plans



This list describes a range of minor and major adjustments we plan to make to ISET. These are being managed in ISET-4.0, since ISET-3.0 is fixed.

The items here are grouped loosely into categories.


Software (Generic)

Change ieSaveMultiSpectralImage.m to handle the new illuminant structure properly. See notes in the file. sceneFromFile does the right thing already and has an example of what to do.

Change vcAddAndSelect ... etc. to ieAddAndSelect ...

Use the sensor characterization worksheet in Scripts\Sensors as a basis for running an ISET simulation for Flextronics?

Font size fixes for design microlens and design CFA

Regularize plotting routine names

Here are the current plotting routines

plotScene, plotSceneRadiance, scenePlotCIE, scenePlotLuminance
plotOI, plotOTF, plotOIIrradiance, oiPlotCIE, oiPlotIlluminance
plotSensorEtendue, plotSensorFFT, plotSensorHistogram, plotSensorSNR, plotSensorVignetting

sensorPlotColor, sensorPlotLine, sensorPlotVignetting

Utility - plotContrastHistogram

There may be some vcimage-related ones, but I haven't found them yet.

One way to organize them is to route all plots through gateway routines:

oiPlot, scenePlot, sensorPlot, vcimagePlot

These gateway routines should be documented at the top and should call all of the other specialty routines.
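A minimal sketch of what such a gateway might look like, using the specialty routine names already listed above (the dispatch strings are illustrative, not a final API):

```matlab
function figHdl = scenePlot(scene, plotType, varargin)
% Gateway routine: dispatch to the specialty scene plotting functions.
% plotType is a string such as 'radiance', 'luminance', or 'cie'.
switch lower(plotType)
    case 'radiance'
        figHdl = plotSceneRadiance(scene, varargin{:});
    case 'luminance'
        figHdl = scenePlotLuminance(scene, varargin{:});
    case 'cie'
        figHdl = scenePlotCIE(scene, varargin{:});
    otherwise
        error('Unknown scene plot type: %s', plotType);
end
```

The oiPlot, sensorPlot, and vcimagePlot gateways would follow the same pattern.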



Create an off-axis rendering for the image. Specify that the scene is not on-axis but is located at some off-center location. Perhaps put the field height and angle into the GUI on the OI window (below the RT pulldown). This could be adjusted by changing the imageCenter position computed in rtOTF. Allow specification of the chief ray angle at the center of the scene. This should allow us to test the image rendering at off-axis positions for ray-trace modeling. Not sure about rtGeometry.

Read in an array of rotated PSFs, not just field height

Complete the set(gcf,'userdata') calls in rtPlot

Add an additional blur for camera shake following Feng Xiao's paper.

Read in grid distortion file from Dmitry and use it instead of our current code. See zemaxLoadGridDistortion

Add possibility of geometric distortion for large field height to the shift-invariant calculation

Check for wavelength problems with ray trace when scene or optical image requirements differ from the ray trace data. In particular, what if the wavelength requirement is beyond the range of the ray trace information? Do we alert and compute? Or do we stop?

(Zemax from Dmitry)

Read the chief ray angle (CRA) data and make sure we can easily calculate (plot) CRA vs. distorted Image Height for any specified wavelength

Change all field of view to diagonal from horizontal

Create illustrative scripts using the ray trace toolbox.

Work on the Zemax interface Macro. We have a script, I think (s_opticsRTPSFandFigs)

More testing of the ray trace with regard to blocking issues - though these don't seem so bad right now.

When we set shift-invariant but there are no optics.OTF.OTF data, we should query the user for a file from which to get the data.

We should have more microlens summary plots ...


Should DSNU be dependent on integration time? Currently it is modeled just as an offset, which is appropriate for all the other offset FPN components, which are integration-time independent, but (the claim goes) not for DSNU. - Hmm. Well, this is what someone said to me. I am not sure this is needed in a phenomenological model. If we have two sources of dark current, we can't distinguish them anyway. And I see no particular reason why the variance of the dark voltage offset should be duration dependent. The dark voltage rise is; the rate of this rise is governed by the PRNU. The DSNU could be modeled as having a variance that increases with time, but I am not sure that is true or needed. This might matter when we get to a video module.
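As a reminder, the phenomenological model discussed above might be written roughly as follows (a sketch with illustrative variable names, not the ISET implementation):

```matlab
% Phenomenological dark signal model (sketch).
% darkRate: mean dark voltage rise (volts/sec)
% prnuSD:   multiplicative gain spread (percent) that scales the rise rate
% dsnuSD:   additive offset spread (volts), currently time-independent
gainMap   = 1 + (prnuSD/100) * randn(nRows, nCols);  % PRNU scales the rise
offsetMap = dsnuSD * randn(nRows, nCols);            % DSNU is a fixed offset
darkVolts = darkRate * integrationTime .* gainMap + offsetMap;
```

The question above amounts to whether offsetMap should instead have a standard deviation that grows with integrationTime.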


  • BUG: If there is an old sensor around, say with spectralQE specified at only one wavelength, and we create a new 3-wavelength scene/OI, then the sensor computation breaks. The sensor doesn't have enough information to interpolate down. So we need to figure out how to ask the person to interpolate the sensor data up. Why would we ever have set the sensor low rather than interpolate on the fly?
  • ESS: nBadPixels. How should that get set?

Probably with a checkbox, the way we turn column FPN on and off. sensorSet/Get for the bad pixel list. Method for identifying dead pixels - document it better (ESS)

  • In sensor window, make a tool that dumps out a picture and verbal description of the sensor color filter array properties. Maybe a way to make some more Rory-like windows.
  • Re: Intel. This is a question about the units of PRNU. Amit asked me why this is a standard deviation instead of a hard unit. He also asked me why DSNU is not a standard deviation. So I told him that I liked hard units, not standard deviations. But then I had to deal with the fact that PRNU is a standard deviation.

"Given what I said in (1), you might ask why I didn't specify this term using the physical units of conversion gain (uv/electron). I am asking that myself. The answer is (a) moral weakness, and (b) when I first inserted the parameter I didn't have an intuition for PRNU in terms of conversion gain. Maybe I should change one or both of these in the next major release."

  • The CFA data are scattered around the sensor in sensor.color and elsewhere. We should restructure the sensorSet/Get routines so that filterSpectra, filterNames, and so forth are all inside that field. We should then make the load-CFA routine in the GUI load the structure, checking for wavelength consistency.
  • The routine sensorReadFilter and related data formats need to be fixed for reading/writing and loading/saving CFAs. Read the comments in the header of that function.
  • Load Sensor Data ... check what it does and document
  • Print out a spec sheet from the sensor window that looks like the kind you might get from a company (Excel format?).
  • Rotate sensor; Check the MCC selection with the rotation. (Done?)
  • Check that sensorCompute is based on this position being the upper left corner, not the center of the photodetector.



  • Implement alternative MTF methods to ISO 12233. Get reference (Joyce).

Color management

  • Explain what MCC Optimized does better in a script
  • Create a script that compares the sensorCCM calculation with the MCC Optimized approach. The sensorCCM uses actual MCC RGB values in the sensor to compute the best 3x3. The MCC Optimized calculation uses the sensor spectral sensitivity curves (not actual RGB values) to optimize the MCC rendering in the color space specified (sensor, XYZ, etc.) It selects a transform that optimizes the rendering of an MCC illuminated under a daylight (D65).
  • Build a data structure of transforms and how to map R/B into a transform (transform mixture).
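The core of the sensorCCM calculation described above is a least-squares 3x3 fit from measured sensor RGB to target values. A sketch, assuming the MCC values are arranged as 24x3 matrices (variable names are illustrative):

```matlab
% rgbSensor: 24 x 3 measured MCC patch values from the sensor
% rgbTarget: 24 x 3 desired values (e.g., linear RGB or XYZ of the MCC)
T = rgbSensor \ rgbTarget;                   % least-squares 3x3 transform
rgbCorrected = rgbSensor * T;                % apply to the measured values
err = norm(rgbCorrected - rgbTarget, 'fro'); % residual fit error
```

The MCC Optimized path differs in that it starts from the sensor spectral sensitivity curves rather than measured RGB values.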

Color processing should be expanded via tutorials and examples. It is started in s_imageIlluminantCorrection

One of the scripts in the color balancing part of the v_ suite has a problem. Have a look; the lines are wrong.


  • The Adaptive Laplacian doesn't seem to preserve the mean, so that comparing it with bilinear shows either a mean or contrast shift or something. Fix that.
  • Clarify the plane2mosaic call. Variable naming, logic.
  • Add other demosaicking routines from Brainard and Manu - see Notes directory
  • Improve demosaicking analyses


  • Image metrics problems with unequal size metric images
  • Add additional metrics like Spatial CIELAB. Fix metrics window and related.
  • Improve metrics window
  • Fix ColorBalancing hard-coded parameter issues in white world?
  • S-CIELAB calculation. See scApplyFilters near line 78. When the filter size is just a little different from the image size, the FFT calculation introduces a problem. A place to catch this condition is in scielab, just prior to the call to scPrepareFilters, near line 122.
   % We should check here whether the filter parameters for the size are
   % close to the image size.  If they are close, we should probably
   % simply make the filter support equal to the size of the image.


  • Wavelength by wavelength blurring. Introduce FFT blurring. Show effects of human chromatic aberration.
  • Show energy and quanta issues in defining sensors, and why, if the signal is in energy, we use one sensor QE, but if it is in photons, we use another.
  • Defocus, and possibly depth of field.


  • Do a Lytro simulation - light field reconstruction stuff, microlens plus multiple detectors.
  • Write a script that illustrates the point source as we sweep out the focal distance. According to SN the blur from this is the same, independent of the depth. We could illustrate this by creating a 3D scene with lines at, say, 3 depths and then computing a set of depth of field images. We just add up the images and show that the blur is more or less the same. Must check.
  • Suppose someone provides a set of Macbeth RGB under two or three illuminants. Can we write a script to deduce something about the sensor? Is this a class tutorial or project?
  • Write more documentation about the tests and analysis of the optics. Illustrate the issues with Ray Trace and Zemax.
  • Measure point spread and geometric distortion of a camera using a display as the target
  • Scripts and tutorials for lux-sec vs. nyquist tradeoff
  • Script for ISO12233 simplification
  • Ability to read and store raw images
    • We need a standard ISET script for reading many raw images and storing in an array for analysis
    • Potential problem with Logitech – we don’t have duration information
    • Measurements
  • Noise (1-2 hours) - We need a general script for noise calculations
    • Take many frames of dark images at different exposure durations and calculate dark voltage
    • Take many frames of dark images at one very short exposure duration and calculate read noise and DSNU (dark signal non-uniformity)
    • Take many frames of a uniformly illuminated scene at different exposure durations and calculate PRNU (photo response non-uniformity)
  • Spectral Measurements (3-4 hours)
    • Take image of each narrow-band wavelength light (generated by the monochromator)
    • Run lab software to calculate spectral responsivities
  • Spatial Measurements – Optics (1-2 hours)
    • Capture an image of a slanted line at different field heights (distances from the center of the lens) and calculate the PSF using the ISO 12233 method

Scene database

For those multispectral scenes that have illuminants, we can calculate the best 3x3 between arbitrary pairs of illuminants. It would be nice to calculate the 3x3 color transform in different ways and illustrate the effects. This could be done in a simple script.
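Such a script could be quite short. A sketch of the calculation, assuming the scene reflectances and illuminant SPDs are available (variable names are illustrative):

```matlab
% R:  nSurfaces x nWave surface reflectances from the scene
% e1, e2: nWave x 1 illuminant SPDs;  S: nWave x 3 sensor spectral QE
rgb1 = (R .* repmat(e1', size(R,1), 1)) * S;  % responses under illuminant 1
rgb2 = (R .* repmat(e2', size(R,1), 1)) * S;  % responses under illuminant 2
T12  = rgb1 \ rgb2;   % least-squares 3x3 mapping illuminant 1 -> illuminant 2
```

Comparing T12 fit in sensor space versus, say, XYZ space would illustrate the different ways of computing the transform.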

Manuals and Tutorials

Search engine for the web manual - Google has a solution we could use for searching the imageval site, for example.

Glossary of technical terms for the web manual

Add to the manual - Sensor page, Image Processing Page

Discuss infrared stuff some more in tutorials



October 2012

Camera models – We are starting to introduce a higher level notion of a camera object, with cameraSet/Get/Create functions. We are implementing metrics that apply to cameras. There is a new camera directory.

Added space-varying illumination and scene depth in anticipation for integration with PBRT. See the files in scene/illumination

Methods of creating arbitrary charts and automated routines for specifying the central regions of interest (see gui/charts). They are named chart<TAB>

New reflectance data added for human skin (hyper-spectral)

Absorbance functions for oximetry simulations (data/absorbances)

New ISO slanted bar related functions in metrics/ISO

Additional features for computing ideal (XYZ) sensors and human cones

Renamed plane2mosaic as plane2rgb

Many new tutorials added in scripts/tutorials/vset and related (t_<TAB>)

Removed waitbar in many functions to speed up calculations when looping. It is possible to use setpref to turn the waitbar on or off in many functions: setpref('ISET','waitbar',0) or setpref('ISET','waitbar',1)

Permit setting the default white point as equal energy or equal photons using setpref('ISET','whitePoint','ee') or setpref('ISET','whitePoint','ep')

Using 32 bit, rather than 16 bit depth for spectral data in scene and optical image.

August 2011 Add L* step ramp and linear L-step ramp to the scene pattern pulldowns.

We can use empirical noise (as in bootstrap methods) by taking raw camera data, finding the read noise or other noise, and saving the distribution. We can then draw from that distribution during the simulations, rather than using the current Gaussian approximations.
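A sketch of the bootstrap idea (hypothetical variable names): measure a noise sample once, then resample it during simulation instead of drawing from a Gaussian.

```matlab
% noiseSamples: a long vector of measured noise values (e.g., read noise)
% obtained from raw camera frames and stored with the sensor model.
idx = randi(numel(noiseSamples), nRows, nCols);  % resample with replacement
readNoise = reshape(noiseSamples(idx), nRows, nCols);
volts = volts + readNoise;   % replaces sigma * randn(nRows, nCols)
```

For spatially correlated noise this simple per-pixel resampling would not suffice, but it covers the read noise case.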

Print the complete matrix for an image pipeline calculation

Something wrong with loading display information

Make black and white input images look black and white

Fix the showSelection option in both vcimage and sensor for MCC optimization.

Remove custom compute button and variables

Moving things here makes me feel better and serves as a reminder of what to tell people when we announce an update.

Worry about the first letter defining the filter type - also, how do we figure out the filter type from the spectrum? It seems like we are doing an analysis somewhere, because it is figuring out a different color. Look at sensorFilterType.

Create a surface reflectance chart with randomly drawn surfaces from a natural database. Illuminate, and compute transforms for several lights.


Reduced number of directories, integrating teaching and scripts better.

Implemented validation scripts.

Implement new model for reading RGB data with respect to a display calibration.

Started 3D implementation with Maya and RenderToolbox

Implement display structure and include example displays in the data subdirectory.

Improve chromaticity diagram plotting functions.

Show locations (middle? rect?) for macbeth color checker on the vcimage screen (color metrics).

sceneSet(meanLuminance) ... logic needs improvement. See comments in sceneSet.

Include class tutorials in ISET distribution and update the scripts to work more with ISET.

Create a new surface reflectance chart with patches that are selected automatically or by the user and a light source that is chosen by the user. This can augment the MCC

  • cmatrix and colorChangeSpace should be eliminated and replaced by colorTransformMatrix and imageLinearTransform. There are a few routines in scielab that use changeColorSpace. Only those few need to be updated for the more modern routines.

August 2010 - Version

July 2010 - Version

The pixelSet function seems to have a bug in setting the fill factor. There seems to be a typo in the "case 'sizeconstantfillfactor'" part of the script. It's OK through the GUI.

Worked through calculations to compare colorimetry with the PsychToolBox and confirm they match.

Set the illuminant SPD units (sceneIlluminantScale) to be consistent with the known reflectance

Inserted reflectance information into the scene structure (sceneGet reflectance, known reflectance)

After renaming and saving the session file, the close didn't work properly.

The rendering into the vcimage window was changed so that the largest RGB value (in the 0-1 range) is the same fraction as the largest sensor value divided by the largest possible sensor value (normally the voltage swing)
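In other words (a sketch of the scaling rule, with illustrative variable names):

```matlab
% Scale display RGB so its maximum equals the sensor's fraction of full scale.
frac = max(sensorVolts(:)) / voltageSwing;  % fraction of the voltage swing
img  = img / max(img(:)) * frac;            % largest RGB value equals frac
```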

The srgb calculations were adjusted to match the Wikipedia entries properly.

The font size adjustment code was edited and clarified in the comments

Finish sceneAdjustIlluminant function to change simulated illuminant SPD.

Adjusted Scene load of images so that RGB is preserved in the display (vcReadImage, SceneFromFile)

Adjusted ieSessionSet to allow rendering of various SPDs as (1,1,1) in scene and OI window

Fixed xyz2uv formula

Create a uniform target with Tungsten light from 350nm-1100nm. Add this into the Scene | Scene | (specify wave) grouping. Make one for D65 and Tungsten just as there is one for equal energy.

Added new SPD options to uniform pattern (SCENE) and slanted bar (SCENE)

Added easy way to adjust illuminant on a scene that already has an illuminant stored (e.g., a multispectral scene)

Fixed a bug in the custom rendering approach in the Processor window

Fix the code that fails to catch when no scene illuminant is stored. This is in sceneSPDScale.

Create options for the slantedBar target with different illuminants: Tungsten, D65, equal energy. And store the illuminant in the scene structure.

June 2010

Fixed OTF plotting routines for some RT calculations

Fixed bug in Scene | Scene | (user specified); added disp('User canceled') condition

Adjusted 'SKIP' to 'SKIP OTF' and re-arranged button display depending on condition.

Removed Custom button and those options, which were unused in the GUI.

Changed oiCompute to set the name of the oi to match the name of the scene

adjusted slantedBar scene creation to allow arbitrary wavelength

Fixed s_slantedBarInfraredMTF

Checked pixelSNR and sensorSNR calculations ... they seem OK but I added comments to clarify

Adjusted pixelSNR lux-sec and sensorSNR lux-sec plotting routines

Fixed ieMainW session renaming, load and so forth by inserting refreshMain (which was missing).

Adjusted how sessions are named to iset-DateTime format.

Added more functions to s_Exercise

Multichannel linear demosaicking is very slow. Speed it up for a simple linear method.

Added showCFAPattern

Added sensorColorFilter to create Gaussian (and in the future other) color filter lists

Developing s_irSensorSimulation script

Inserted iePoisson calculation into noiseShot - low noise levels really are Poisson now.

Fixed BUG: The sensor row/col resize should not allow creating a sensor that has only part of a pixel block. This can happen from the window when we resize the sensor with the pulldown. Presumably I am not calling the sensorAdjustSize ... routine from the window callback. Make sure it gets called there properly.

Added true size plot of image to scene window

Added new parameters for the display (dpi, viewing distance)

Added vSNR calculation in the vcimage window

Fixed an S-CIELAB filtering bug that padded the image and filter with zeros, causing problems at the image edge.

Speed up the birefringent calculation, which relies on interp2, modifying and substituting in qinterp2 from Matlab Central.

Color array extensions for CFA

New exposure models (bracketing, pixel-array dependent)

Extensions to IR range

Add denoising algorithm to Processor window.

Updated geometric distortion, rtPlot, and so forth from ISET-3.0; confirmed Zemax imports.

Precomputed PSFs for big ray trace speedup. Inserted the new rt code into oiCompute.

Added birefringent optical filter option

We did a lot of Zemax testing and code clarification. This specifies the output files, and their meaning, that any Zemax/Code V script should produce. We will have examples of the Zemax and, hopefully some day, Code V scripts. But we don't aim to provide and check these for people (i.e., we don't support the Code V or Zemax programming side).

Created a real Poisson noise generator using Matlab functions (iePoisson). The code snippet this was based on:

% Poisson distribution
% The Matlab Statistics Toolbox has the built-in command POISSRND for
% generating random numbers from the Poisson distribution.
% NxM matrix of random numbers from Po(lambda):
poissrnd(lambda, n, m);
% If that is not available, Knuth's method works for modest lambda:
L = exp(-lambda); k = 0; p = 1;
while p > L, k = k + 1; p = p * rand; end
val = k - 1;   % one Poisson(lambda) sample


Extended many features to IR sensors

Changed SNR-Lux compute to use uniform Equal Energy instead of D65. This made the IR computation a little more sensible - though it still needs more thought.

Make an image in the SCENE window of a waveband. In the same Analyze menu as luminance image; ask the user for the waveband range. (Done, I think)

Fill in function of sceneWindow Transform image calls

Add POCS demosaicing

Fix Bilinear size issue

The SCIELAB images in RGB are odd and should be moved to another location.

Adjust the Help files to go to some of the new manual pages. Get those pages up on the ImagEval site? Or have them be files within ISET?

Create web-page of Feng's HDR images.

ISET-2.0 doesn't create human optics properly. It appears to have an OTF in the wrong domain -- or something. The JOV materials illustrated this, but you can see it just by using the pulldowns. It is probably the OTF data in opticsCreate('human').

Draft a manual and procedure inside a manual directory within ISET.

Fix the figure titles in some of the plotOTF calls.

Create some scripts to illustrate something.

See the plotOTF line spread by wavelength and opticsGet(...psfData), where there is this really weird problem with fftshift. Figure out why we need an fftshift in opticsGet(...psfData) for the human case but not for other shift-invariant optics. This is also true for the plotOTF routines; see the comments in there. Fixed all of the FFTSHIFT/FFT stuff in ISET-2.0. Fixed up plotOTF and psfMovie to go with it. Good day!

Create interface for building shift-invariant lens model; use siSynthetic as the starting point.

Fix the vignetting plot routine based on mlAnalyzeEtendue call from the microlens window.

Get vignetting with proper shift implemented into the pipeline within sensorCompute using the microlens code. Now we just have bare, centered and skip. We need to allow for a shifted position.

Set plotting routines so that z-axis size is never less than 10%.

Build one vignetting routine, by comparing pvVignetting with mlRadiance(ml,0) and make sure that we end up with one that is close to right.

Units on the four graphs. Check the optimal offset case. Write the manual and explanation while we click through it.

Case sensitive fixes

How do we make synchronization simpler? We should show it is not synchronized by a '*' or 'red'

Make a slanted line

fix plotSpectra

Hiroshi related demosaicking changes

OTF, MTF, etc with diffuser in Optics window

sensor = sensorSet(sensor,'pattern',pattern);
sensor = sensorCompute(sensor,vcGetObject('oi'),0);
vcReplaceAndSelectObject(sensor);
sensorImageWindow;

Change sd in vertical and horizontal direction for oidiffuser

Arrange ieKeyVerify to return [num2str(1951),date], and have the other routines check for this value. Add the check into a couple of additional crucial routines - maybe sceneGet, or some such. Figure out how to deal with this in CVS. Perhaps we should write an rm command into the script that makes the distribution. Also, we should write a script that makes all of the p-files.

Adjust model for optics. Create popup to select model. Make sure the shift invariant, diffraction limited, and related calls are done properly. Create some more shift-invariant examples based on human OTF. In fact, figure out whether human OTF can just be handled as a shift invariant case nicely.

Add PSF read and apply for custom compute in optics

Deal with isetLicense issue in demo copy

There is a problem with the adaptive laplacian. Use Macbeth, sensor BGGR. See what happens. It is bad and needs to be fixed before next release.

Create dummy ZemaxRT files for algorithm testing. Use small Gaussian in middle and larger (oriented?) as we go peripheral. Either use fspecial to make the gaussian, or use outer product of two gaussians
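Either construction gives a separable Gaussian; for instance, the outer-product version (the support size and spreads here are only examples):

```matlab
% Separable Gaussian PSF via the outer product of two 1D Gaussians.
support = 32; sigmaX = 2; sigmaY = 4;   % oriented: sigmaX ~= sigmaY
x  = (1:support) - (support+1)/2;
gx = exp(-x.^2 / (2*sigmaX^2));
gy = exp(-x.^2 / (2*sigmaY^2));
psf = gy(:) * gx;                       % outer product, support x support
psf = psf / sum(psf(:));                % normalize to unit volume
% fspecial('gaussian', support, sigma) gives the circular case directly.
```

Making sigmaX, sigmaY, and the orientation grow with field height would produce the small-center/large-peripheral test set described above.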

Added photometric exposure to Sensor | Analyze | SNR | Photom Exp (lux-sec)

Flag to turn off shot noise, and create a 0 (completely black) scene -- document it.

demosaic comments

Fix manual matrix entry in vcimage window

Write script for finding optimal color matrix transformation (3x3) for a light and selected set of surfaces to ....

Select RGB values from MCC sensor data automatically and compute linear transform from sensor to lRGB for color correction

displayRender: Check logic and more important make some of the pulldowns go away when we choose Manual Matrix Entry for Color Conversion.

Get Manual Matrix values when we switch into that state. Allow manual matrix values when we are in sensor mode. Rethink sensor/XYZ modes and what is allowed.

Remove Announcement about Stanford from ISET

Create a scene with the perfect colors and the measured colors embedded (color metrics).

Add analysis of noise in a uniform vcimage patch (noise metrics)

Create noise in terms of sd of luminance contrast, sd(luminance)/mean(luminance) (colormetrics)

vcReplaceObject should be vcAddAndSelectObject when the vcSESSION is empty.

Allow calling format vcAddAndSelectObject(obj), rather than vcAdd..('scene',scene)

Why does macbeth color sometimes produce a (0,0) chromaticity?

Use guide to fix Untitled1 and Untitled2 callbacks. File | Load and File | Save, I think.

Add possibility of evaluating the Macbeth function color error automatically. (color metrics)

Figure out what to do about the imrotate problem in rtPSFApply. The rotation introduces many odd infelicities (tested with pillbox). I think the solution is to disallow sampling less than, say, 16x16. (That's kind of what I did; I alert people to the < 20x20 case.)

Figure out the right orientation for the PSFs given the data from Dmitry/Zemax

Add zoom and pan to the images for the various views. Or bring up the imview window with some form of the data.

Waveband calculation. Get rid of additional noise terms. Fix read-noise and dark current noise parts of wb calculation. See whether the photon/shot noise needs to be recomputed also. Think about CDS in this case. Should we also do the calculation image section by image section?

Convert the Huygens data to be readable in Matlab 6.5

The spatial dimension of the optical image is not the same in the DL and RT calculations. One of them is wrong, probably the RT. Check. Or possibly it has to do with the magnification factor of the DL not matching that of the RT, and both are right. Anyway, check.

imageSPD2RGB line 25, divide by zero

Rewrite vcMainWindow as ieMainWindow

Add bggr in addition to rggb (for ESS). Why is this not working with guide, properly?

Download the repository version of vcMainWindow. I broke it with version 7 stuff

Allow manual setting of spacing parameter in rtPSFApply calculation.

10-bit setting made to sensor does not refresh in the window

Re-write sceneOpen, oiOpen, etc. and place in license M files

Finish rtPSFApply by inserting into the opticsRayTrace function. More debugging.

|| instead of | in sensorCompute

Add Choose File -> {Multispectral, RGB, Monochrome} sub-items. Fix sceneFromFile and related callbacks in sceneWindow.

sensor comments

pixel comments

Draft introduction for programmers

Front page for the web manual

Delete obsolete routines

Read the data spacing and other header information from the Zemax file.

File comments

GUI comments

utility comments

imgproc comments

delete vcSelectName

Scripts comments

We need to make some other kinds of pictures of the irradiance at the detector.

Optics module for Code V and Zemax

Remove vcClickImageType

Microlens calculations

attached microlens window to sensor window

reduce and modify grid sampling

Create new macbeth surface file for Kartik

Fix announcement about version 5.0 Image Processing Toolbox.

Complete S-CIELAB update

ieSessionGet, not checking GUI stuff properly.

Figure out how to improve the Scene->OI computation so that the circularconv is avoided ... we really want to splash over and treat the scene as padded by the mean ...? Or zeroes? Or what ?

Compute ISO MTF using oriented bar.

Figure out about the msvrcp.dll stuff ...

Scene random noise doesn't work.

Metrics adjustments, particularly for vcxyz2lab

Fix optics calculation for wrapping of OTF

ISO 12233

Save, Edit, Copy, New ... programming details

plotTextString -- fix string placement

Reinterpolate PSF data to a common format, if needed, in zemaxLoad. Interpolate the PSF data to 0.25 microns -- not needed.

Configure the size and sampling per lens.

The field of view checks in ray trace and rtOTF are comparing diagonal and horizontal fields of view. Fix.

Trap NaN Scene DR, when 0 print Inf instead of NaN

pixel layers set in sensor window does not work right

RB/RG plot in sensor window doesn't work right for digital values. Something about the scaling of the DV.

Add a line plot to the scene window

Add the plot radiance image on grid option to the scene window

Indicate OTF units (lines/mm) in oiWindow | Analyze | Optics | OTF (plotOTF; should rename, too). Finish routines for showing the PSF function and the geometric distortion data. Specifically, move the rtPlot from inside of plotOTF to conditional checks in oiWindow (rtPlot OTF plot, near line 659?). Label the axes in the OTF plot.

Get spatial units right on luminance mesh plot in scene (and oi)?
