
This function runs a contrast sensitivity experiment on cone absorption data. For a given spatial frequency, harmonic images at different contrasts are input to the ISET simulator, and the cone absorptions are computed and classified. You need to have svmLib installed to run this function.


Use of the function

 [contrast pCorrect cpd] = rgcContrastSensitivityAbsorptions(cpd, isMouse, saveDirectory)

Input

  • cpd : the spatial frequency, in cycles per degree (default: 0.1 cpd for the mouse eye, 10 cpd for the human eye).
  • isMouse : 1 if the sensor is a mouse eye, 0 if it is a human eye.
  • saveDirectory : directory in which to save the results. If none is provided, the results are not saved.

  • Default contrast values : [.2 .3 .4 .5 .6 .7 .8 .9 1]
  • Default fov : 0.5 deg for the human eye, 30 deg for the mouse eye (both yield sensors of size about 100x100).
  • Default noise : shot noise only for the mouse eye, shot + read noise for the human eye.

Output

  • contrast : the contrast values used.
  • pCorrect : the proportion of correct classifications at each contrast.
  • cpd : the spatial frequency used, in cycles per degree.

Remarks

  • The format of the training and test data is simpler in this function than in rgcContrastSensitivityISET, and does not use the spikeClassify function. The data is computed in two formats: the first is for visualisation (height*width*numTimeFrames*numContrasts), and the second is for input to the SVM classifier:
    • trainingLabels : [numexamples, 1] (column vector), 1 or 2 (half and half)
    • trainingShaped (data set) : [numexamples, numpixels] : images as row vectors, stacked.
    • trainingOptions : -c val -t type (c is the cost parameter; t selects the kernel type, default 0 = linear)
    • testingLabels : [numTestExamples, 1]
    • testingShaped (data set) : [numTestExamples, numpixels] : images as row vectors, stacked.
  • Each absorption image carries enough information on its own, so we do not need to integrate over time (unlike spike images):

So when we run 100 frames, we actually get 100 full, uncorrelated samples (see the remarks on rgcContrastSensitivityISET about this problem).
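The reshaping from the visualisation format to the classifier format can be sketched as follows. This is a Python/NumPy illustration of the data layout only, not the MATLAB implementation; the array names, the toy sizes, and the choice of a two-condition (class 1 vs. class 2) split are invented for the example. Each time frame becomes one row of the data matrix, and labels are half 1s and half 2s:

```python
import numpy as np

# Toy absorption data in the visualisation format:
# height x width x numTimeFrames (one such block per contrast condition).
height, width, numTimeFrames = 10, 10, 100
absorptions_class1 = np.random.rand(height, width, numTimeFrames)  # e.g. reference condition
absorptions_class2 = np.random.rand(height, width, numTimeFrames)  # e.g. test-contrast condition

def frames_to_rows(a):
    """Flatten each time frame into a row vector of numpixels = height*width."""
    h, w, t = a.shape
    return a.reshape(h * w, t).T  # [numTimeFrames, numpixels]

# Stack both conditions: [numexamples, numpixels], images as row vectors.
data = np.vstack([frames_to_rows(absorptions_class1),
                  frames_to_rows(absorptions_class2)])

# Labels: [numexamples, 1] column vector, 1 or 2, half and half.
labels = np.concatenate([np.ones(numTimeFrames),
                         2 * np.ones(numTimeFrames)])[:, None]
```

Because each frame is treated as an independent sample, 100 frames per condition yield 200 training rows in this toy example.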
