RGC programming

From VISTA LAB WIKI


This page describes functions and scripts for the vismodel RGC section of the vismodel project code. The vismodel page contains an overview of the project as well as installation instructions. If you have no patience for manuals and want to see the code in action right away, run through the two main functions described in the Overview below: scene2Absorptions and rgcComputeSpikes.

Another overview can be seen in the function rgcComputeEverything.

  • Note: all distances in the code are in um, and all times are in ms. The units of the voltage levels still need to be confirmed (millivolts?).


Todo

Here is a todo list of things to change/check in the code:

  • Check the scaling of the signal with dT (see line 136 in rgcCalculateSpikesByLayer.m).
  • Make the Display functions rgcDisplayParameter.m and rgcDisplayLayerParameter.m more talkative for the parameters that currently lack explanations.

Overview

Conceptually, the code follows the boxes and arrows at the bottom of this image: RGC code pipeline.

The RGC pipeline can be followed by reading and executing the two main functions: scene2Absorptions and rgcComputeSpikes.

  • scene2Absorptions inputs a scene or a filename of an image (e.g., from the virtual environment), and passes the data through simulations of the human lens and cone absorption. This yields an output of cone absorptions over some period of time (typically 1 ms). This function relies on the ISET routines.
  • rgcComputeSpikes specifies a model of the receptive field of an array of retinal ganglion cells; the function inputs the cone absorption images and transforms them into a spike train for each of the simulated retinal ganglion cells.

The comments at the top of rgcComputeSpikes describe how to run the full computation from an ISET scene (a definition of the image in physical units of spectral radiance) to retinal ganglion cell (rgc) spikes. The header of rgcComputeSpikes reflects the latest changes. The following basic example should work.

 scene       = sceneCreate('slantedBar');        % ISET scene in physical units
 absorptions = scene2Absorptions(scene);         % human lens + cone absorptions
 figure; imagesc(sum(absorptions.data,3));       % total absorptions over time
 rgcP = rgcParameters;                           % simulation parameters object
 rgcP.addOneLayer('default');                    % one default RGC layer
 spikes = rgcComputeSpikes(absorptions,rgcP);    % spike trains, one cell array entry per layer
 figure; imagesc(sum(spikes{1},3));              % total spikes per cell in layer 1

To make sure that all parameters are consistent between the absorptions computation and the spikes computation, you should use the rgcComputeEverything function. This function correctly handles the borders of the image during the computation.

 [spikes, absorptions, rgcInterm] = rgcComputeEverything(scene,rgcP);

After running this code, the field of view of the image (rgcInterm.imageFOV), the RF support (rgcInterm.networkFov.support), and the center of the RGC network (rgcInterm.networkFov.center) are all specified.
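
For example, you can inspect that geometry information at the Matlab prompt (a minimal sketch that simply echoes the fields listed above):

 rgcInterm.imageFOV              % field of view of the image
 rgcInterm.networkFov.support    % RF support of the network
 rgcInterm.networkFov.center     % center of the RGC network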

[edit] Main functions and structures

[edit] Important remarks on the code

You're strongly advised to read these few remarks before using the code; a short sketch pulling them together follows the list.

  • The main class organising the simulation is rgcParameters. You can create an rgcParameters object using:
 rgcP = rgcParameters;
  • The class organising the layers is rgcLayer. You should not create a layer on its own; instead, add a layer to an existing object of the rgcParameters class:
 rgcP.addOneLayer();
  • You can then retrieve a layer using the getLayer(layer number) method:
 layer = rgcP.getLayer(1);
  • For both classes, there are set, get, and Display methods that should be used to manage the parameters (see General parameter management below).
  • These classes are subclasses of the handle class, which means their objects are passed by reference. Objects of these classes may therefore be modified by functions without being returned; for example, the command "rgcP.addOneLayer();" above modifies the rgcP object in place. It also means that rgcP2 = rgcP does not create a new object: modifying one will modify the other. To get a copy of rgcP, use the Copy method; you then have two independent rgcParameters objects holding the same values. Copy also duplicates the attached layers in the same way, so their layers are independent too.
 rgcP2 = rgcP.Copy();
  • The simulation remembers its state between runs. This means that if you modify a parameter that affects the computation of the connectivity matrix, the time series, or the spikes calculation, you should reset the current state of any affected layer with the resetCurrentState method. If you do not, you may get errors, and the results will not mean much.
 layer.resetCurrentState();
  • To add absorptions to the simulation (for instance, absorptions computed in another simulation), use the set method. The absorptions can be either a structure (as returned by scene2Absorptions) or a matrix.
 rgcP.set('data',absorptions)

If 'absorptions' is a structure with 'sensor' and 'oi' as fields, the same (private) fields will be set in the rgcP object.

  • For a new rgcParameters object, the network is initialized in a zero state. Before the first spikes computation, you probably want to run rgcComputeSpikesNoiseInit, which initializes the network on a few white noise frames to 'warm up' the network and avoid border effects in time (see the example on the rgcLayer page). For subsequent runs you should not use this function, because you want to restart from the state you left off in.
  • If you change the cellSpacing parameter of an rgcLayer, you may want to compute new distance function parameters. To do that, you can use the following code. It recomputes the distance parameters so that the coupling has an effect up to a distance of about 2 cells around each cell.
 rgcFindGoodDistanceParameters(layer,'yes');

See the part on network connectivity for more details.
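
As a quick recap, the sketch below strings the remarks above together into one hypothetical session. It reuses only calls shown on this page; the exact parameter string 'cellSpacing' and the argument of rgcComputeSpikesNoiseInit are assumptions, so check the function headers before relying on them.

 absorptions = scene2Absorptions(sceneCreate('slantedBar'));  % as in the Overview
 rgcP  = rgcParameters;                      % main simulation object (handle class)
 rgcP.addOneLayer('default');                % layers are always added through rgcP
 layer = rgcP.getLayer(1);                   % retrieve the first layer
 rgcP.set('data', absorptions);              % attach absorptions (struct or matrix)
 rgcP2 = rgcP.Copy();                        % independent copy; rgcP2 = rgcP would not be
 layer.set('cellSpacing', 2*layer.get('cellSpacing'));  % change a layer parameter (assumed name)
 rgcFindGoodDistanceParameters(layer, 'yes');            % recompute coupling distance parameters
 layer.resetCurrentState();                  % clear stale state before recomputing
 rgcComputeSpikesNoiseInit(rgcP);            % warm up the network (assumed call form)
 spikes = rgcComputeSpikes(absorptions, rgcP);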

Controlling RGC parameters

General parameter management

Both rgcParameters and rgcLayer objects have specific methods to manage their parameters:

  • set('name',value) to assign a value to a parameter.
  • get('name') to retrieve the value of a parameter.
  • Display('name',[options]) to print information about this parameter.

Some specific parameters, and the way they have to be handled, are described in the sections below.
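
For instance (a minimal sketch; 'layerCenter' is the layer parameter used in the Layer position section below, and it is assumed here that get mirrors the names used by set):

 rgcP  = rgcParameters;
 rgcP.addOneLayer('default');
 layer = rgcP.getLayer(1);
 c = layer.get('layerCenter');          % read the current value (in um)
 layer.set('layerCenter', c + [5 0]);   % assign a new value
 layer.Display('layerCenter');          % print information about this parameter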

rgcParameters object

rgcParameters is a Matlab class; it is a subclass of handle.

This object controls all the layers and all the parameters of the simulation that are layer independent. It has get, set, and Display functions.

You can create a default object of this class, and then add a layer to it, with:

 rgcP = rgcParameters;  
 rgcP.addOneLayer('layerType');

See rgcParameters for details.

rgcLayer object

rgcLayer is a Matlab class; it is also a subclass of handle (see rgcLayer).

This object controls all the parameters for a particular layer. It is attached to an rgcParameters object (its parent). You should not create a layer directly; use the method described above.

The following script illustrates how to make different types of layers:

  s_rgcExampleLayer

See rgcLayer for details.

Layer position

By default, a layer has a size that covers all the data and is centered in the middle of the data. You can change the position and the size of a layer by overriding the defaults:

 layer.overrideLayerSize([10 20]);   % set a 10x20 um layer
 layer.set('layerCenter',[-10,20]);  % set the new coordinates of the center (in um)


Spatial receptive field

(The following note may be outdated: it used to be possible to change the spatial RF through the s parameter in each layer, with s = [spi spj; sni snj], where sp holds the variances of the positive component, sn those of the negative component, and i and j index the axes. The default setting of s for the On layer was [1 1; 2 2].)

It is possible to change the spatial RF of the ganglion cells. To do this, you can set the covariance matrices of the center and surround fields:

 layername.set('RFcov', {centerCovMatrix, surroundCovMatrix})

You can also set it in terms of standard deviations:

 layername.set('RFstd', {centerStd,surroundStd}) 
 % with std=[std along main direction, std along perpendicular direction, angle of direction]

Example: to make a vertical edge detector, you need to use specific receptive fields:

 rgcP = rgcParameters;
 rgcP.addOneLayer;
 % You can set the covariance matrix directly:
 rgcP.getLayer(1).set('RFcov',{[16 0; 0 1] [36 0; 0 1]});
 % You can also set the standard deviations instead:
 % rgcP.getLayer(1).set('RFstd',{[4 .5 pi/2] [8 .5 pi/2]});
 % To visualise the covariance and std:
 rgcP.Display('rfcov');
 rgcP.Display('rfstd');
 % To visualize the RF :
 rgcP.Display('rf');
 % making a square stimulus (faking cone absorptions):
 square = 200*ones(100,100,100);
 square(31:70,31:70,:) = 1400*ones(40,40,100);
 % computing the spikes 
 spikes = rgcComputeSpikes(square,rgcP);
 mSpikes = sum(spikes{1},3);
 figure;imagesc(mSpikes);

(Gallery: the related pictures of the default and modified receptive fields.)

These simulations depend on many different parameters; the main ones are described in the sections below. (This section still needs to be reorganized.)

Timing

The timing parameters of the network are used by both scene2Absorptions and rgcComputeSpikes. They are determined by the inputs of the scene2Absorptions function:

 scene2Absorptions(scene,sensor,oi,nTimeFrames, expT, mLum, fov, coneSpacing, monochromatic)

Where:

  • expT will be the exposure time for the cones, and also the duration between 2 spiking times. It has to be given in ms.
  • nTimeFrames is the number of absorption periods computed for the cones, and will also be the number of spiking times.

For example, expT = 2 and nTimeFrames = 50 give 50 spiking times separated by 2 ms each, which represents spiking activity over a total duration of 100 ms.
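
A small sketch of this arithmetic; the commented-out call assumes that empty sensor and oi arguments fall back to defaults, which has not been checked here:

 expT        = 2;                      % ms between two spiking times (cone exposure time)
 nTimeFrames = 50;                     % number of absorption / spiking frames
 totalDuration = expT * nTimeFrames;   % = 100 ms of simulated activity
 % absorptions = scene2Absorptions(scene, [], [], nTimeFrames, expT);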

Network connectivity

Every time a neuron spikes, it induces a coupling effect on its neighbours. The farther the neighbor is, the less chance of coupling. There is a random component in the coupling.

A cell A is affected by a spike from cell B if f(A,B) > cutoff, where f is the connectivity function.

f is computed as f(d) = F(d) + nRange*x, where d is the distance between the two cells, F is a deterministic function, nRange is a layer parameter, and x is a random value drawn uniformly from [0,1].

The default F function is F(d) = wRange*exp(-d).

(Does this come from a paper? Reference needed.)

For memory reasons, we impose nRange > cutoff/2.
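
A minimal sketch of this coupling test, with made-up numeric values for the layer parameters (wRange, nRange, cutoff) and a hypothetical vector of pairwise cell distances:

 wRange = 0.5;  nRange = 0.15;  cutoff = 0.2;   % hypothetical layer parameter values
 dist   = [1 2 3 4];                            % example distances (in cell spacings)
 F = @(d) wRange .* exp(-d);                    % deterministic part of the connectivity
 f = @(d) F(d) + nRange .* rand(size(d));       % plus uniform noise on [0, nRange]
 coupled = f(dist) > cutoff;                    % true where a spike in cell B affects cell A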

The coupling matrix describes the network connectivity. For computational reasons, we do not advise using layers with f(3*cellSpacing) > cutoff (i.e., a connectivity of depth 3 or more); if you do, computing the spikes will probably take a very long time.

Therefore, if you change the cellSpacing, you can recompute distance parameters such that cells have an effect only at distances of less than 3 cells. To do that, run:

 [wRange nRange cutoff] = rgcFindGoodDistanceParameters(layer,assign);

With assign = 1, the layer's values are automatically replaced with the new ones.

Feedback responses

There are 3 types of feedback responses:

  • cpTR (coupling)
  • inTR (input temporal response)
  • fbTR (feedback temporal response)

Effect of the tShift and linF parameters:

 tShift;  % low tShift => high negative feedback: see figures below
 linF;    % low tShift => high positive feedback: see figures below


Spatial mapping

The code is written in such a way that you can always determine the spatial position of any cone or rgc and the corresponding pixels in the stimulus image. The spatial parameters you control are the fov and the coneSpacing (arguments of scene2Absorptions) and the cellSpacing (a layer parameter). If you wish to explore the precision of the positioning, the script s_rgcMappingTesting.m will give you several examples.

The following gallery gives an idea of the spatial precision. The lines in the pictures are supposed to cross at [40 40]; you can see the result after either computing spikes on the whole network or computing only a sub-network using a small layer. Taking the difference shows that the two methods give very similar results.

NoBorder functions

(Obsolete: use rgcComputeEverything instead.)

If you want to compute a sub-image of a bigger image, you will get inaccuracies at the border of your network due to zero-padding during the RF convolution step. To avoid that problem and compute the network as if it were a sub-network of a bigger network processing the whole image, use scene2AbsorptionsNoBorder and rgcSpikesComputeNoBorder. Two code examples using these functions are given at the end of s_rgcMappingTesting.m. Running that code produces the following images:

  1. for a square surrounded by a black background.
  2. for a square surrounded by a white background.


Reverse correlation for RF characterization

To measure the RF of a neuron using reverse correlation, we have two functions.

  • One function uses white-noise images and an rgcP object to create a time-synchronized set of stimuli (white noise) and spikes.
  • The second function takes in the spikes and the stimuli. It returns a short video sequence for each simulated ganglion cell. The video sequence for a single cell is computed as follows: whenever there is a spike on that cell, we take the previous N frames of the stimulus and add those frames into the output video (see the sketch after this list).
  • Note: we could build this video from the actual stimulus (prior to passing through the optics), from the cone absorptions, or from the mean signal given by the linear convolution of the cone absorptions with the spatial RF. All of these may be interesting at some point; we should probably start with the stimulus prior to the optics and then move to the cone mosaic stimulus.
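
A minimal sketch of that spike-triggered averaging, with hypothetical variable names and data (this is the idea described above, not the actual vismodel function):

 stim      = randn(20, 20, 200);               % hypothetical stimulus movie (rows x cols x T)
 cellSpike = rand(1, 200) > 0.9;               % hypothetical spike train for one cell
 N   = 10;                                     % number of frames preceding each spike
 sta = zeros(size(stim,1), size(stim,2), N);   % accumulated frames
 nSpk = 0;
 for t = (N+1):size(stim,3)
     if cellSpike(t) > 0
         sta  = sta + stim(:, :, t-N:t-1);     % add the N frames before this spike
         nSpk = nSpk + 1;
     end
 end
 sta = sta / max(nSpk, 1);                     % average -> reverse-correlation movie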


Computing layers reverse correlation

There is a function you can use to compute the reverse correlation of the layers, either on the absorptions or on the stimulus (i.e., reversing the optics as well):

 reverseMapping = rgcReverseRFComputation(rgcP,monochromatic,nTrials,optics);

On the default layer, this gives us the following results:
