
A description of the scene2Absorptions function.


Introduction

The scene2Absorptions function uses the ISET toolbox to compute cone absorptions from an image. It first computes a noise-free mean absorption and then adds photon noise to it to simulate realistic shot noise. This makes the overall computation faster when you want to compute many time steps for the same image. You can choose between a human and a mouse eye.
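The speed-up described above can be sketched as follows. This is an illustration of the idea, not the toolbox's internal code: the noise-free mean image is computed once, and only the cheap per-frame Poisson draw is repeated.

 % Sketch of the mean-then-noise computation (illustrative, not the actual
 % scene2Absorptions internals). Requires poissrnd from the Statistics Toolbox.
 meanAbs = 50 * rand(64, 64);        % stand-in for the mean cone absorption image
 nTimeFrames = 100;
 absFrames = zeros([size(meanAbs), nTimeFrames]);
 for t = 1:nTimeFrames
     % Photon (shot) noise is Poisson-distributed around the mean photon count,
     % so each frame is an independent Poisson draw from the same mean image.
     absFrames(:, :, t) = poissrnd(meanAbs);
 end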

 [absorptions, sensor, oi, rSeed, scene] = scene2Absorptions(scene,sensor,oi,nTimeFrames, expT, mLum, fov, coneSpacing, monochromatic, noNoise,isMouse)

Output

  • absorptions : structure with fields:
    • : the actual cone absorptions
    • absorptions.unadapted : the absorptions before adaptation
    • absorptions.sensor : the sensor used
    • absorptions.oi : the oi used (see below). These fields are returned in the structure so that the rgcParameters structure knows the context of the data.
    • absorptions.scene : the scene used
    • absorptions.typeAdapt : 1 or 2, depending on the type of adaptation computation (see Remarks)
    • absorptions.adaptGain : gain value or gain image used for adaptation
  • rSeed : seed used to create the random color mosaic; this is not needed if you already have the sensor.

Input

All distances are in µm, all times in ms.

  • sensor : ISET structure describing the sensor. If you are using a non-monochromatic sensor and want to compute several absorptions with the same sensor, pass in the sensor so that the same color mosaic is reused.
  • oi : ISET structure describing the optics and the optical image. Default is human optics, but you can create it beforehand using:
    • oi = rgcOiCreate('human'); % (human optics)
    • oi = rgcOiCreate('default'); % (default = diffraction limited optics).
  • scene : either an image represented by an N-by-M-by-3 double matrix with entries in [0 255], or an ISET scene structure describing the scene (no default).
  • nTimeFrames : number of time steps (default is 100).
  • expT : exposure time, i.e. the duration of each time step, in ms (default is 1 ms).
  • mLum : mean luminance of the image (default is 300 cd/m^2).
  • fov : horizontal field of view in degrees (default is scene field of view if ISET scene provided, 10 otherwise)
  • coneSpacing : spacing of the cones in µm (default is the value in sensor if a sensor is given, 1.5 µm otherwise).
  • monochromatic : (0 or 1) whether the sensor is monochromatic. If 1, 100% red cones; if 0, 60% red cones and 40% green cones.
  • noNoise : 1 to switch off all noise sources except shot noise; 0 to leave all noise sources on (shot and read noise).
  • isMouse : 1 if it is a mouse, 0 if not (usually a human, unless another sensor/oi is given).
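As a minimal illustration of the input conventions above (a sketch assuming ISET and this toolbox are on the path; the RGB-matrix form of scene is taken from the scene argument's description, and the variable names are illustrative):

 % Sketch: calling scene2Absorptions with a raw image matrix and defaults.
 img = 255 * rand(128, 128, 3);   % N-by-M-by-3 double matrix, entries in [0 255]
 nTimeFrames = 50;    % time steps
 expT        = 1;     % exposure per step, ms
 mLum        = 300;   % mean luminance, cd/m^2
 fov         = 10;    % horizontal field of view, degrees
 coneSpacing = 1.5;   % cone spacing, um
 % Empty [] arguments fall back to the defaults listed above.
 [absorptions, sensor, oi] = scene2Absorptions(img, [], [], nTimeFrames, ...
     expT, mLum, fov, coneSpacing, 0, 0, 0);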

Example

 scene = sceneCreate('slantedBar');
 nFrames = 100;   % number of time steps
 expT = 2;        % exposure time per step, ms
 absorptions = scene2Absorptions(scene, [], [], nFrames, expT, [], 0.3, 1.5, 0);

Remarks

So far, two different human cone adaptations are implemented:

  • First method : after computation, the absorptions are multiplied by the gain 1e6*300/mLum. Of course, this gives a poor SNR at low light levels.
  • Second method : compute the mean output of each cone type and divide by it, so that every cone type outputs in the same range. The signal is then multiplied by mean(signal)*1e6*300 to give the same output range as the first method. This second method helps the classification of absorptions in rgcContrastSensitivityAbsorptions. Mouse adaptation works in a similar way.
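The two gain computations above can be written out as follows. This is a sketch of the formulas as stated, not the toolbox's code; absorptions, coneMosaic, and nConeTypes are illustrative names.

 % Sketch of the two adaptation gains described above (illustrative names).
 % Method 1: a single global gain, inversely proportional to mean luminance.
 gain1    = 1e6 * 300 / mLum;
 adapted1 = absorptions * gain1;

 % Method 2: normalize each cone type by its own mean response so all types
 % cover the same range, then rescale to match the output range of method 1.
 normalized = absorptions;
 for coneType = 1:nConeTypes
     idx = (coneMosaic == coneType);                      % pixels of this cone type
     normalized(idx) = normalized(idx) / mean(absorptions(idx));
 end
 adapted2 = normalized * mean(normalized(:)) * 1e6 * 300;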

TODO : we should get rid of this 1e6*300 gain, but note that the variables for spiking would be affected. The simplest fix would be to fold it into the linear filters (inTR, fbTR, and cpTR) so that the inputs to the RGCs remain the same in the end.

NOTE : in a saturation case, the ISET sensor saturates first and the adaptation gain is applied afterwards, so saturation is not avoided. By applying the adaptation before saturation, we could in principle have infinite dynamic range. How do real cones work?
