RGC Todo Page


This page is for keeping track of what is being done on the RGC simulator, and the problems that still need to be addressed.

Done

By David, in prehistoric times

  • coded the first version, from Pillow's code and article

By Guillaume, from Spring 2009 to Winter 2010

  • Cleaned up the code and the data structures
  • Perception experiments
    • Contrast and frequency sensitivity
    • Vernier acuity

By Estelle, in Spring 2010

Stuff done with the Spiking

  • New spiking method (a minimal sketch of the Poisson spiking follows this block):
    • Spiking is now truly Poisson (see rgcCalculateSpikesByLayer and Spike generation using a Poisson process)
    • New variable: spikeRateGain. A value of around 100 or 120 seems reasonable, but it needs to be tuned using the contrast/frequency sensitivity and the spontaneous spike rate (see testing_spontaneousSpikeRate.m for values I computed already). (But BEFORE that, check that the absorptions contain sufficient information.)


Figures: Frequency sensitivity (spikeRateGain=120, dT=4ms); Frequency sensitivity, uncoupled layer (spikeRateGain=120, dT=4ms)
    • Needs dT=4ms at least so that the results are less noisy. With dT=2ms, as in Guillaume's code, the results are too noisy to be useful (both the curves and the spike images, which you can visualize with testing_spikeImages_contrastSensitivity.m). With dT=4ms, we get results similar to Guillaume's (slightly lower).
    • With different response functions: I tried to imitate functions that looked more like Pillow's, to see if it impacted the results. I'm not sure it did anything, but my data was overwritten, so this should be redone.
      • Coupling: Gaussian function <math>Ke^{-(\frac{t-m}{\sigma})^2}</math>, with K=0.0005, m=5, sigma=15.
      • Feedback: same dip + twoGammas, but with different weights for the gammas: f=[0.3,1] (instead of [1, 0.5]; see here and in rgcLayer for how fbTR is built)
Figures: Pillow's functions and ours (original version); Pillow-like response functions (new version)
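
For reference, a minimal sketch of Poisson spike generation in the spirit described above, assuming the rate formula <math> \lambda = K e^{linTS + spkTS}</math> quoted in the #Spiking section. All inputs are placeholders; this is not the code in rgcCalculateSpikesByLayer.

<pre>
% Minimal sketch of Poisson spiking (placeholder inputs, not the simulator code).
dT            = 4e-3;                         % time step in seconds (dT = 4ms)
spikeRateGain = 120;                          % the gain K discussed above
nSteps        = 1000;
linTS = 0.3 + 0.1*randn(1, nSteps);           % placeholder stimulus-driven input
spkTS = zeros(1, nSteps);                     % placeholder feedback/coupling input

lambda = spikeRateGain * exp(linTS + spkTS);  % instantaneous rate, in spikes/s
spikes = poissrnd(lambda * dT);               % Poisson spike counts per time bin
fprintf('mean rate: %.1f spikes/s\n', sum(spikes) / (nSteps*dT));
</pre>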



  • Lots of data saved: linTS and spikeData, for different values of cpd, contrast, and dT. Stored on celadon for now; they should be moved, since they take up a lot of space (40 GB). Most of it is from experiments with dT=1 or 2ms, before I knew the results were too noisy to be used, so it may not be worth keeping.


  • Uncoupling layers, in rgcLayer (see the usage sketch after this list)
    • rgcP.getLayer(1).disableCoupling() : raises the cutoff value for the connection matrix so that no connections will be computed (the function checks this before computing, so it won't even go through the loop). Sets the hasCoupling flag to 0. Does NOT recompute the connection matrix if one is already there.
    • rgcP.getLayer(1).enableCoupling() : puts the cutoff value back to its default. Sets the hasCoupling flag to 1. Does NOT recompute the connection matrix if one is already there.
    • Don't fiddle with the hasCoupling flag by hand; use disableCoupling() and enableCoupling().
    • You can also disable feedback by setting the hasFeedback flag to 0 or 1 by hand. The getter for fbTR returns 0 if hasFeedback=0.
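
A short usage sketch of the switches described above. It assumes an existing rgcP object with layers; only the calls and flags named in the list are used.

<pre>
% Usage sketch for the coupling / feedback switches described above.
layer = rgcP.getLayer(1);

layer.disableCoupling();   % raises the connection cutoff, sets hasCoupling = 0
% ... run the decoupled simulation here ...
layer.enableCoupling();    % restores the default cutoff, sets hasCoupling = 1

% Feedback is toggled by hand on the flag; the fbTR getter then returns 0.
layer.hasFeedback = 0;     % disable feedback
layer.hasFeedback = 1;     % re-enable feedback
</pre>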


Stuff done with the Absorptions

  • Mouse simulator : see Mouse vision, the class project page, for explanations.
    • works up to the absorptions, including classification. No spiking.
    • useful for comparing with human.


  • Adaptation: new method (a sketch follows the figure below).
    • For each cone type, compute the mean signal and divide by that mean. Then apply a gain of 1e6*300/mLum, to get the same kind of values as we had before. (This works like the mouse adaptation.)
    • Before adaptation, each cone type has its own range of values, so adapting reduces noise visually. But since a given pixel/cone always outputs in the same range, whether that range is normalized or not, the classifier should be able to handle the absorptions either way (test this!).
    • plotAbso plots absorptions before and after adaptation.
Figure: Human absorption output, before and after adaptation
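
A rough sketch of this adaptation step, with illustrative data and variable names (absorptions, coneType, mLum are placeholders, not the actual arguments of scene2Absorptions):

<pre>
% Sketch of the per-cone-type adaptation described above (illustrative data and names).
mLum        = 100;                              % mean scene luminance (placeholder)
coneType    = randi(3, 72, 88);                 % placeholder cone mosaic labels (1=L, 2=M, 3=S)
absorptions = rand(72, 88) .* coneType * 1e-3;  % placeholder cone outputs (volts)

gain    = 1e6 * 300 / mLum;        % extra gain to land in the previous value range
adapted = zeros(size(absorptions));
for t = unique(coneType(:))'       % for each cone type, divide by that type's mean
    idx          = (coneType == t);
    adapted(idx) = absorptions(idx) / mean(absorptions(idx)) * gain;
end
</pre>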


  • Analysis of noise in absorptions (noise levels can be set in scene2Absorptions, or in pixelCreate)
    • Human and mouse get about the same number of photons before the noise is added: see the Signal Current Density Image (scdImage) in sensorComputeSamples > sensorComputeMean > signalCurrent > SignalCurrentDensity, converted into photon units by dividing by conversionGain:
Figures: Number of photons on each human cone type; Number of photons on each mouse cone type
    • However, at the end of the sensor computation, the human absorption outputs are lower than the mouse's. The human absorptions are more similar to the mouse UV-part absorptions.
Figures: Human absorption output, before and after adaptation; Mouse absorption output, before and after adaptation; Mouse absorption output, before adaptation, zoom on the UV part

The units are volts since they are sensor outputs. The human output values are about ten times lower than the mouse's (6e-4 for the human versus 6e-3 for the mouse). Why???

    • Noise was measured a little in testing_measureAbsoNoise.m, with odd results. Shot noise alone was measured in testing_noiseStd.m, with sensible shot noise values.
    • I think the human is getting fewer photons somewhere, which would explain the heavier photon noise. (??) A quick sanity check of that intuition is sketched below.
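
A quick way to sanity-check that intuition with synthetic data (illustrative numbers): for pure shot noise the relative standard deviation should be 1/sqrt(mean photon count), so fewer photons means relatively heavier noise.

<pre>
% Shot-noise sanity check on synthetic photon counts (illustrative numbers).
meanPhotons = 50;                                         % photons per cone per exposure
photonStack = poissrnd(meanPhotons * ones(64, 64, 200));  % 200 noisy repeats of a flat scene
relStd      = std(photonStack, 0, 3) ./ mean(photonStack, 3);
fprintf('measured relative std %.3f vs 1/sqrt(N) = %.3f\n', ...
        median(relStd(:)), 1/sqrt(meanPhotons));
</pre>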


  • Contrast/frequency sensitivity on absorptions
    • mouse : ok
    • human without noise (no shot noise, no read noise) : ok, 100% accuracy.
    • human with only shot noise : not ok. Very bad results. Images are much noisier than with the mouse. You need to study and tune the noise first.
    • human with shot noise and read noise : not ok either. We should try to tune the quantity of read noise we want here.

New scripts

In synapse/vrn/rgc/scripts/

  • Spiking :
    • testing_rgcPlots : different plots for visualizing saved spiking data, maybe this should be made into individual functions.
    • testing_spikeRateGain_scenes.m : results of spiking for different types of scenes
    • testing_spikeImages_contrastSensitivity.m : plots spike output for different contrast values
    • testing_spontaneousSpikeRate.m : values of spontaneous spike rates, for different setups.
    • testing_uncoupling.m : plots to study the decoupled spike process.
  • Absorptions :
    • testing_measureAbsoNoise.m : attempt to measure noise on absorptions. Needs to be worked on more, I get strange results.
    • testing_noiseStd.m : better attempt at measuring shot noise only.
  • Other :
    • testing_displayScene.m : this displays an ISET scene in RGB. Isn't there a function that does this in ISET???
    • testing_myContrastSensitivity.m : this displays very small contrasts, to test my vision. The screen's shades of grey are too coarse to really see.
    • testing_cosineBase.m : visualizes Pillow's cosine bump basis

New functions

  • plotAbso : plots absorptions (and unadapted absorptions if present)
  • rgcGetDataFromFigure : retrieves the data from a figure (useful when results were saved only as plots)
  • rgcContrastSensitivityAbsorptions : classification on absorptions.
  • rgcFrequencySensitivityAbsorptions : same
  • rgcCreateSpikeProbaHistogram : plots the spike probability histogram, to check on the Poisson process.
  • Mouse functions : mouseCore and mouseOTF, plus modifications to the human functions.

New wiki pages

Todo

Absorptions

We know the contrast/frequency performance on spiking is too low, but the absorptions are also odd. It could be useful to compare with the mouse eye (which has only shot noise) to see which behaviours differ and why.

  • Quantify the noise on absorptions (noise levels can be set in scene2Absorptions, or in pixelCreate)
    • Compare to the mouse: why are the absorptions different if there are as many photons in the input?
    • Read noise: what is the optimal value? Tune it on the contrast/frequency sensitivity experiments.
    • Vary dT: how does the noise change? Relative shot noise should lessen with longer sensor exposure (more collected photons). Is read noise time-dependent, and how? Is there a link with why we need longer exposure times (dT=4ms) for the contrast/frequency sensitivity to give acceptable results? What kind of values do we expect for a human eye?
    • Test: dT=1 vs. the sum of 10 images at dT=0.1: do we get the same thing, or more/less noise? (no read noise) A sketch of this test follows the list.
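
A sketch of that test on synthetic Poisson photon counts (no read noise; illustrative numbers): since sums of independent Poisson counts are Poisson, the single long exposure and the sum of ten short ones should have the same mean and variance.

<pre>
% Sketch of the dT=1 vs sum of 10 x dT=0.1 test (synthetic photons, no read noise).
photonRate = 500;                                              % photons per unit time (illustrative)
nTrials    = 1e4;
oneExp     = poissrnd(photonRate * 1.0, nTrials, 1);           % one exposure at dT = 1
tenExp     = sum(poissrnd(photonRate * 0.1, nTrials, 10), 2);  % sum of 10 exposures at dT = 0.1
fprintf('single: mean %.1f, var %.1f | summed: mean %.1f, var %.1f\n', ...
        mean(oneExp), var(oneExp), mean(tenExp), var(tenExp));
</pre>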


  • Contrast/frequency sensitivity (rgcContrastSensitivityAbsorptions and rgcFrequencySensitivityAbsorptions)
    • Classification is not great: it doesn't distinguish gratings that are obvious to the eye. Improve it!
    • Does the classifier see a difference between adapted and unadapted absorptions? (not necessarily...?)
    • Make the sensitivity experiments work with the human eye.


  • Adaptation
    • get rid of the extra gain in scene2Absorptions. Put it on the spiking filters instead? See #Spiking
    • Other method : Each cone looks only at its own time history
    • Other method : Each cone looks at neighbors of its own type + itself + history

Spiking

  • We have a membrane potential <math>rgcV = linTS + spkTS + randV</math>, with randV an added Gaussian voltage of mean rgcP.meanV and standard deviation rgcP.stdV (written out in the sketch below).
    • Why do we have these values for rgcP.meanV and rgcP.stdV, and where do they come from? Is randV a voltage, a current, and what does it correspond to in real neurons? Why add a random component here when the spiking is already a random process?
    • Does rgcV really correspond to a membrane potential? Does it have units? Are the inputs from linTS and spkTS currents or voltages? If they are currents, how do we convert them to voltages to add to the membrane potential?
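
Written out as a sketch, with placeholder inputs standing in for linTS, spkTS, rgcP.meanV, and rgcP.stdV:

<pre>
% The quantity questioned above, written out (placeholder inputs).
nSteps = 1000;
linTS  = 0.3 + 0.1*randn(1, nSteps);     % placeholder stimulus-driven term
spkTS  = 0.02*randn(1, nSteps);          % placeholder feedback/coupling term
meanV  = 0;   stdV = 0.05;               % stand-ins for rgcP.meanV and rgcP.stdV
randV  = meanV + stdV*randn(1, nSteps);  % the added Gaussian component
rgcV   = linTS + spkTS + randV;          % the "membrane potential" in question
</pre>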


  • Influence of dT
    • We need dT=4ms at least for the frequency/contrast sensitivity to make sense. Below that, the results are too noisy to be interpreted. This is obvious in the sensitivity curves, but also in the spike images (you can use testing_spikeImages_contrastSensitivity.m to visualize this). Why is this? What is dT in real eyes?
    • The spike probability does not follow the Poisson model satisfactorily for dT bigger than 1ms. Is this a problem? (see Spike generation using a Poisson process#Influence of sampling time). The size of this deviation seems to depend a bit on the type of image input, but it is generally consistent. A small numerical illustration follows.
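
A small numerical illustration of the deviation (not the analysis on the linked page): in a Poisson bin the probability of at least one spike is 1 - exp(-lambda*dT), which matches the naive lambda*dT only when lambda*dT is small.

<pre>
% Illustration of the Poisson binning deviation for larger dT (sketch only).
lambda = 100;                              % spikes per second (illustrative)
for dT = [1e-3 2e-3 4e-3]
    pNaive   = lambda * dT;                % naive "one spike per bin" probability
    pAtLeast = 1 - exp(-lambda * dT);      % true P(at least one spike) in the bin
    fprintf('dT = %g ms: lambda*dT = %.3f, P(>=1 spike) = %.3f\n', ...
            dT*1e3, pNaive, pAtLeast);
end
</pre>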


  • spikeRateGain : find the link with the performance of contrast/frequency sensitivity.
    • spikeRateGain = 100 seems ok.
    • Use the spontaneous spike rate: <math> \lambda = K e^{linTS + spkTS}</math>. Therefore, with no stimulus input and no spiking, we should get <math> \lambda = K </math>, the spontaneous spike rate. This could be a way of tuning K: ln(K) is the 'spontaneous input'. (A small sketch follows.)
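
A small sketch of that tuning idea (illustrative numbers): with no input the expected count per bin is K*dT, so K can be read directly off a measured spontaneous rate.

<pre>
% Sketch: recovering K from the spontaneous spike rate (illustrative numbers).
dT     = 4e-3;                              % time step (s)
nBins  = 1e5;
trueK  = 100;                               % the spikeRateGain we want to recover
spikes = poissrnd(trueK * dT, nBins, 1);    % spontaneous spiking: no stimulus, no history
estK   = sum(spikes) / (nBins * dT);        % measured spontaneous rate = estimate of K
fprintf('estimated K = %.1f, ln(K) = %.2f (the "spontaneous input")\n', estK, log(estK));
</pre>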


  • Investigate the shape of the response functions. The coupling function cpTR is simple, but the feedback fbTR is NOT.
    • Why is A = linF/tShift? Why use two variables? (tShift is not a time shift in practice; is there a mistake there?)
    • The gamma functions in fbTR are used only in <math>\gamma(g_i)</math>, at one single point. The shape of the bumps does NOT come from the gamma functions, but from <math>t^5 e^{-t}</math>. The gamma functions are needlessly complicated gain values, and they are lost in the renormalization!!
    • We could add a refractory period: a flat negative zone in the first time steps of the feedback function (is this really needed?). A sketch of such a kernel is below.

The response functions are in rgcLayer.
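
If the refractory idea is tried, a minimal way to build such a kernel could look like the sketch below. The <math>t^5 e^{-t}</math> bump shape is the one noted above; the depth and length of the flat negative zone, and the overall sign and scale, are assumptions.

<pre>
% Sketch of a feedback kernel with an explicit refractory zone (assumptions noted above).
t     = 0:49;                      % kernel support, in time steps of dT
bump  = t.^5 .* exp(-t);           % bump shape the text points to (peaks at t = 5 steps)
bump  = bump / max(bump);          % normalize to 1
fbTR  = -0.5 * bump;               % suppressive feedback; sign and scale illustrative
nRefr = 2;                         % length of the flat negative zone (assumption)
fbTR(1:nRefr) = -5;                % strong flat suppression in the first bins = refractory period
plot(t, fbTR); xlabel('time step'); ylabel('fbTR (a.u.)');
</pre>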


  • Our scaling of the response functions is very arbitrary. This matters for the spike rate gains.
    • We should also add gains in front of linTS and spkTS, and use <math> \lambda = K e^{K_1 linTS + K_2 spkTS}</math>.
    • Equivalently, we could add gains to the filter functions (the RFs and inTR for producing linTS, fbTR and cpTR for producing spkTS). So far, the filter functions are arbitrarily scaled.
    • In real neurons, linTS and spkTS represent real currents, so we should check (1) what their intensity is, and (2) what their relative intensity is: is the spiking driven more by the stimulus or by the feedback and coupling?
    • If we can't get measurements of this, then looking at the relative size of Pillow's filter functions could be a good start for (2), using <math> \lambda = K e^{K_1(linTS + K_2 spkTS)}</math> or <math> \lambda = K e^{K_1(K_2 linTS + spkTS)}</math>, or even just <math> \lambda = K e^{K_2 linTS + spkTS}</math>.
    • For now, there is a big gain on absorptions: gain = adaptGain*1e6*300/mLum. We should get rid of it before we start tuning the filter functions. We should also tune the relative weight of feedback and coupling (which make up spkTS), or at least imitate Pillow's relative weights.
    • Values: in the steady state, linTS oscillates in [0.1;0.6] and spkTS in [-0.03;0.07], so the spike rate is largely driven by the stimulus input (in absolute value, linTS is over 85% of the input; see the check sketched below). There is a script called testing_rgcPlots for visualizing this and other things.
"Gamma" component of fbTR
Components of fbTR
Our and Pillow's coupling and feedback functions
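
The 85% figure can be checked in a couple of lines. This is a sketch using placeholder series drawn from the ranges quoted above, not the actual linTS/spkTS data:

<pre>
% Relative weight of the stimulus-driven vs spike-history input (placeholder series).
linTS   = 0.1   + 0.5*rand(1, 1000);    % steady-state range [0.1, 0.6] quoted above
spkTS   = -0.03 + 0.1*rand(1, 1000);    % steady-state range [-0.03, 0.07] quoted above
fracLin = mean(abs(linTS)) / (mean(abs(linTS)) + mean(abs(spkTS)));
fprintf('linTS carries %.0f%% of the input (in absolute value)\n', 100*fracLin);
</pre>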

Weibull function

The gradient descent method is not used because it doesn't find the optimum, so grid search is used instead, but it takes much more time.

  • Modification: use gradient descent to find an acceptable zone, then grid search to find the minimum. Though this seems to be working fairly well as it is. (A sketch of the two-stage idea follows.)
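
A sketch of the proposed two-stage fit. The Weibull parameterization, the example data, and the use of fminsearch as a stand-in for the descent step are all assumptions, not the existing fitting code.

<pre>
% Sketch of the two-stage Weibull fit: cheap descent first, then a local grid search.
cLev     = [0.01 0.02 0.04 0.08 0.16];                 % illustrative contrast levels
pCorrect = [0.52 0.60 0.78 0.95 0.99];                 % illustrative proportions correct

weib = @(p, c) 1 - 0.5*exp(-(c./p(1)).^p(2));          % 2AFC Weibull, p = [alpha, beta]
sse  = @(p) sum((weib(p, cLev) - pCorrect).^2);        % fit error

% Stage 1: descent (here fminsearch) to locate an acceptable zone.
p0 = fminsearch(sse, [0.05 2]);

% Stage 2: grid search around that zone to pin down the minimum.
[A, B] = meshgrid(p0(1)*logspace(-0.3, 0.3, 40), p0(2)*linspace(0.5, 1.5, 40));
err    = arrayfun(@(a, b) sse([a b]), A, B);
[~, i] = min(err(:));
fprintf('alpha = %.3g, beta = %.3g\n', A(i), B(i));
</pre>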