RgcContrastSensitivityISET


[Figure: Contrast sensitivity curve]

This function runs the simulator on sinusoidal grating scenes, at a given spatial frequency, over a range of contrast values. It then uses a linear classifier to try to distinguish between the spike outputs generated by the gratings and by blank scenes, and plots the resulting contrast sensitivity curve. You need the svmLib library installed to use this function (see the svmLib section below).

We then fit a Weibull curve to the results to extract the contrast at which the classification reaches a given accuracy.
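A minimal sketch of such a fit, assuming a standard Weibull psychometric function rising from chance (50%) to 100% and fit with fminsearch; the parameterization, the initial guess, and the 82% criterion are illustrative assumptions, not necessarily what the function itself uses.

 % Hedged sketch: fit a Weibull psychometric function to classification
 % accuracy versus contrast, then read off the contrast giving a target accuracy.
 % pCorrect and contrast are the outputs of rgcContrastSensitivityISET; pCorrect
 % is assumed here to be a fraction in [0.5, 1] (divide by 100 if it is a percentage).
 weibull = @(p, c) 0.5 + 0.5 * (1 - exp(-(c ./ p(1)).^p(2)));   % p = [alpha beta]
 sse     = @(p) sum((weibull(p, contrast) - pCorrect).^2);      % least-squares error
 pFit    = fminsearch(sse, [0.1 2]);                            % arbitrary initial guess
 
 % Contrast at which the fitted curve reaches, e.g., 82% correct (assumed criterion)
 target     = 0.82;
 cThreshold = pFit(1) * (-log(2 * (1 - target)))^(1 / pFit(2));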

Use of the function

 [pCorrect contrast blankData] = rgcContrastSensitivityISET(rgcP, cpd, contrast, stimulusLength, blankData, plotRes, mLum, spikeRateGain, alreadyComputedLinTS, alreadyComputedBlankLinTS, linTSSavingFile, spikeSavingFile, uncoupled, dT)
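For example, a call setting only the first few arguments might look like the sketch below; whether empty arguments fall back to their defaults, and whether rgcParameters can be called with no arguments, are assumptions here.

 % Hedged usage sketch; parameter values are illustrative, not recommendations.
 rgcP      = rgcParameters;                      % assumed default constructor (one default layer)
 cpd       = 10;                                 % spatial frequency, cycles per degree
 contrasts = [0 0.05 0.075 0.1 0.15 0.2 0.25 0.3];
 stimLenMs = 500;                                % total stimulus length in ms
 
 [pCorrect, contrast, blankData] = ...
     rgcContrastSensitivityISET(rgcP, cpd, contrasts, stimLenMs, [], 1);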

Inputs

  • rgcP : object of the rgcParameters class. Default is a default object with one default layer. Several layers are possible; they are trained and classified separately, so you can see the effect of different layer parameters.
  • cpd : cycles per degree at which to run the simulation (default 10).
  • contrast : contrasts at which to compute the classification accuracy (default [0 0.05 0.075 0.1 0.15 0.2 0.25 0.3]).
  • stimulusLength : total length of the stimulus in ms. Longer stimuli give less noisy results but take longer to compute (default 500).
  • blankData : computed spikes for the blank stimulus; can be passed back in to save time when calling the function in a loop. Optional.
  • plotRes : (0 or 1) whether to plot a curve of the results at the end.
  • mLum : mean luminance of the image (default 300 cd/m²).
  • spikeRateGain : gain on the spike rate; the higher the gain, the more spikes (default 100. TODO: is this good?).
  • alreadyComputedLinTS : to make computations faster, you can compute the linTS once and for all (see the sketch after this list).

Format: alreadyComputedLinTS{contrastNumber,trainOrTest}{layerNumber} = double[nRGC x nTimeFrames]

  • alreadyComputedBlankLinTS : same, for blank data.

Format: alreadyComputedBlankLinTS = cell(1,2), one cell for training and one for test.

  • linTSSavingFile : if this is specified, the linTS will be saved, to be used in later computations.

Warning: this can take a lot of memory: 3.5 MB per layer for each contrast value, times two (training and test), plus the blank data.

  • uncoupled : if 1, coupling is disabled in the layer. Only used when no rgcP is given.
  • dT : time between two frames (i.e. the exposure time of the individual sensor samples).
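As a concrete illustration of the two formats above, the sketch below builds empty versions of both cell arrays; nContrast, nLayers, nRGC and nTimeFrames are placeholder sizes, and the inner layout of the blank cells is assumed to mirror the non-blank one.

 % Hedged sketch of the expected cell-array layout for the precomputed
 % linear time series (linTS); all sizes are placeholders.
 nContrast = 8;  nLayers = 1;  nRGC = 100;  nTimeFrames = 125;
 
 alreadyComputedLinTS = cell(nContrast, 2);          % columns: {1} training, {2} test
 for c = 1:nContrast
     for tt = 1:2
         alreadyComputedLinTS{c, tt} = cell(1, nLayers);
         for L = 1:nLayers
             % each entry is a double matrix of size [nRGC x nTimeFrames]
             alreadyComputedLinTS{c, tt}{L} = zeros(nRGC, nTimeFrames);
         end
     end
 end
 
 alreadyComputedBlankLinTS = cell(1, 2);             % {training, test} for the blank stimulus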


Outputs

  • pCorrect : percentage of correct guesses on the test data.
  • contrast : contrasts used in the function.
  • blankData : computed spikes for the blank stimulus; returned so that it can be reused to save time when calling the function in a loop. Optional.
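Because blankData is both an input and an output, it can be computed once and reused, for example when sweeping spatial frequency. A hedged sketch, with illustrative parameter values:

 % Hedged sketch: reuse blankData across spatial frequencies so the blank
 % stimulus only has to be simulated once.
 rgcP      = rgcParameters;                      % assumed default constructor
 cpdList   = [1 2 5 10 20];                      % spatial frequencies to test
 contrasts = [0 0.05 0.075 0.1 0.15 0.2 0.25 0.3];
 blankData = [];                                 % empty on the first call
 pCorrect  = cell(1, numel(cpdList));
 
 for k = 1:numel(cpdList)
     [pCorrect{k}, contrasts, blankData] = ...
         rgcContrastSensitivityISET(rgcP, cpdList(k), contrasts, 500, blankData);
 end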

Integration function

Because individual spike images are random, we need to sum them over time to get a coherent output that we can classify. In the figure below, the first image is a single spike image, the second is the mean over time of 10 spike images, and the third the mean over 100 spike images. In the third image, we can see the grating appearing.

[Figure: IntegratedSpikeImage.png]

For this reason, the data fed to the classifier is integrated over time. This integration can be done in different ways, using a temporal integration function. In practice, it is a row vector of weights assigned to each image, over a certain number of frames (e.g. 10 or 100 in the example above). The default is a vector of ones that sums over 50 ms.

With these parameters, the samples used as training data are running averages of the spike images over 50 ms, with a shift of dT between consecutive samples.
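The sketch below shows one way to implement this default integration; dT, the window length, and the fake spike matrix are placeholders, and conv2 is just one convenient way to form the sliding sums.

 % Hedged sketch of the default temporal integration: a row vector of ones
 % summing the spike images over a 50 ms window, slid by dT between samples.
 dT       = 4;                                   % ms between frames (assumed)
 windowMs = 50;                                  % integration window length
 intFun   = ones(1, round(windowMs / dT));       % default: equal weights (a plain sum)
 
 % spikeImages: placeholder [nRGC x nFrames] matrix of spikes per frame
 spikeImages = double(rand(100, 125) < 0.2);     % fake binary spike data for illustration
 
 % Each column of 'samples' is one classifier sample: the weighted sum of the
 % spike images over one window; consecutive columns differ by one dT shift.
 samples = conv2(spikeImages, intFun, 'valid');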

svmLib

svmLib is a Matlab library needed to run the classification functions. It is already present in the synapse/vrn/class/libsvm directory; you can get it from the synapse svn repository, or download it online. You also need to add the svmLib directory to the Matlab path (which should already be the case if you have set up the synapse path).

The svn version is already compiled for several operating systems, but if you download it online you will need to compile it yourself (read the README file). If you do that, you will get (among other things) a file called svmtrain.mex*** (where *** depends on your operating system), which you need to rename to libsvm_svmtrain.mex***; otherwise the RGC functions won't find it. (It is renamed to avoid confusion with another function of the same name in the bioinfo toolbox.)
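For reference, a classification with the libsvm Matlab interface looks roughly like the sketch below; the renamed libsvm_svmtrain comes from the paragraph above, while svmpredict is the standard libsvm prediction function (it may be named differently in your setup), and the data here is fake.

 % Hedged sketch: train a linear SVM to separate integrated grating samples
 % from blank samples.  libsvm expects one sample per row and a label column.
 nSamp = 100;  nRGC = 100;
 gratingTrain = rand(nSamp, nRGC) + 0.1;         % fake "grating" samples (illustration only)
 blankTrain   = rand(nSamp, nRGC);               % fake "blank" samples
 gratingTest  = rand(nSamp, nRGC) + 0.1;
 blankTest    = rand(nSamp, nRGC);
 
 trainData   = [gratingTrain; blankTrain];
 trainLabels = [ones(nSamp, 1); -ones(nSamp, 1)];
 testData    = [gratingTest; blankTest];
 testLabels  = [ones(nSamp, 1); -ones(nSamp, 1)];
 
 model = libsvm_svmtrain(trainLabels, trainData, '-t 0');   % '-t 0' = linear kernel
 [~, accuracy, ~] = svmpredict(testLabels, testData, model);
 pCorrect = accuracy(1);                                    % percent correct on the test data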

Remarks

  • The format of the training data is complicated; there must be a way to simplify it. If it is kept for compatibility with spikeClassify, then bypassing or rewriting spikeClassify might be clearer. The contrast experiment on the absorptions, rgcContrastSensitivityAbsorptions, does this.
  • Two consecutive training samples are almost the same: each is an average of 50 ms of consecutive spike images, and they differ only by one dT. This means the information they carry is VERY correlated, and that in practice we don't have that much information in the training data, even when we have many samples. Maybe we should consider the 500 frames of a 500 ms total exposure as ONE training sample. This would take considerably longer to run, but would give better results...
  • The sample time dT has an important impact on the noisiness of the simulation. With dT = 4 ms, you can get acceptable results. Below that, the curves are too noisy to be fit with a Weibull function and the results don't make sense.
  • The contrast values are in [0,1], NOT percentages.