Linderman Lab

Stanford University

Welcome to the Linderman Lab! We are part of the Statistics Department and the Wu Tsai Neurosciences Institute at Stanford University. We work at the intersection of machine learning and computational neuroscience, developing models and algorithms to better understand complex biological data. Check out some of our research below, and reach out if you'd like to learn more!

Scott W. Linderman

I'm an Assistant Professor of Statistics and Computer Science (by courtesy) at Stanford University. I'm also an Institute Scholar in the Wu Tsai Neurosciences Institute and a member of Stanford Bio-X and the Stanford AI Lab. Previously, I was a postdoctoral fellow with Liam Paninski and David Blei at Columbia University, and I completed my PhD in Computer Science at Harvard University with Ryan Adams and Leslie Valiant. Following family tradition, I slogged up Libe Slope as an undergraduate at Cornell University, just like my three brothers, parents, and a few generations of Lindermans before me. Now I prefer Adirondack summers and California winters.

E-mail: scott.linderman@stanford.edu | CV | Scholar | Twitter

Research Group

Sophia Lu

Research Assistant (Statistics)

sophialu@stanford.edu

Jimmy Smith

Graduate Student (ICME)

jsmith14@stanford.edu

Alumni

Ben Antin

Research Assistant (2019-20)
(now PhD Student at Columbia)

bantin@stanford.edu

Research Highlights

Here are some highlights of our work.

Hierarchical Recurrent State Space Models of Neural Activity

With Annika Nichols, David Blei, Manuel Zimmer, and Liam Paninski. bioRxiv, 2019

We develop hierarchical and recurrent state space models for whole-brain recordings of neural activity in C. elegans. We identify states of brain activity that correspond to discrete elements of worm behavior, and we find that the dynamics are modulated by brain state and sensory input.

Tree-structured Recurrent SLDS

With Josue Nassar, Monica Bugallo, and Il Memming Park. ICLR, 2019

We develop an extension of the rSLDS to capture hierarchical, multi-scale structure in dynamics via a tree-structured stick-breaking model. We recursively partition the latent space to obtain a piecewise linear approximation of nonlinear dynamics. A hierarchical prior smooths dynamics estimates, and inference is performed via an augmented Gibbs sampling algorithm.
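The tree-structured stick-breaking construction can be illustrated with a short sketch. This is a minimal, hypothetical implementation (not the paper's code): a perfect binary tree whose internal nodes each carry a logistic hyperplane gate on the latent state, so that each leaf's probability is the product of the gate probabilities along its root-to-leaf path, and the leaves partition the latent space into nested regions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def tree_stick_breaking(x, R, r, depth=2):
    """Leaf probabilities of a perfect binary tree of the given depth.

    Each internal node n (indexed breadth-first) has a hyperplane (R[n], r[n]);
    sigmoid(R[n] @ x + r[n]) is the probability of branching left at that node
    given latent state x. A leaf's probability is the product of the gate
    probabilities along its root-to-leaf path, so deeper levels refine the
    partition of the latent space at finer scales.
    """
    n_leaves = 2 ** depth
    probs = np.ones(n_leaves)
    for leaf in range(n_leaves):
        node = 0  # start at the root
        for level in range(depth):
            # bits of the leaf index encode the left/right path from the root
            go_left = ((leaf >> (depth - 1 - level)) & 1) == 0
            p_left = sigmoid(R[node] @ x + r[node])
            probs[leaf] *= p_left if go_left else (1.0 - p_left)
            node = 2 * node + (1 if go_left else 2)
    return probs
```

By telescoping, the leaf probabilities always sum to one; attaching a separate linear dynamical system to each leaf then yields a piecewise linear approximation whose pieces share statistical strength through the tree.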

Point process latent variable models of larval zebrafish behavior

With Anuj Sharma, Robert Johnson, and Florian Engert. NeurIPS, 2018

We develop deep state space models with point process observation models to capture structure in larval zebrafish behavior. The models combine discrete and continuous latent variables. We marginalize the discrete states with message passing and perform inference with bidirectional LSTM recognition networks.
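Marginalizing the discrete states with message passing amounts to the standard forward recursion; the sketch below shows the generic algorithm on a plain HMM (a simplification of the models in the paper, not their implementation), summing out all K^T discrete paths in O(T K^2) time.

```python
import numpy as np

def logsumexp(a, axis=0):
    """Numerically stable log-sum-exp along one axis."""
    m = a.max(axis=axis)
    return m + np.log(np.exp(a - np.expand_dims(m, axis)).sum(axis=axis))

def hmm_log_marginal(log_pi, log_A, log_lik):
    """Forward recursion: sum out the discrete states of an HMM.

    log_pi: (K,) initial log-probabilities; log_A: (K, K) transition
    log-probabilities (rows index the previous state); log_lik: (T, K)
    per-step observation log-likelihoods. Returns log p(y_{1:T}) with
    all K^T discrete state paths marginalized.
    """
    alpha = log_pi + log_lik[0]
    for t in range(1, len(log_lik)):
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_lik[t]
    return float(logsumexp(alpha, axis=0))
```

In the paper's setting the per-step log-likelihoods come from the continuous latent variables and point process observations, and the recognition network proposes the continuous states; the discrete sum-out above is what keeps that inference tractable.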

Variational Sequential Monte Carlo

With Christian Naesseth, Rajesh Ranganath, and David Blei. AISTATS, 2018

We view SMC as a variational family indexed by the parameters of its proposal distribution and show how this generalizes the importance weighted autoencoder. As the number of particles goes to infinity, the variational approximation approaches the true posterior.
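The idea can be sketched on a toy model. Below is a minimal particle filter for a 1D linear Gaussian state space model (all function names and the model are illustrative assumptions, not the paper's code): the proposal distribution is the variational family, the returned log normalizing-constant estimate is (in expectation) a lower bound on log p(y), and the bound tightens as the number of particles grows.

```python
import numpy as np

def smc_log_evidence(ys, prop_mean_fn, prop_std, n_particles=100, seed=0):
    """SMC estimate of log p(y_{1:T}) for x_t ~ N(x_{t-1}, 1), y_t ~ N(x_t, 1).

    The proposal q(x_t | x_{t-1}, y_t) = N(prop_mean_fn(x_{t-1}, y_t), prop_std)
    plays the role of the variational family: E[log Z_hat] lower-bounds the
    log marginal likelihood, so optimizing the proposal tightens the bound.
    """
    rng = np.random.default_rng(seed)

    def log_normal(x, m, s):
        return -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((x - m) / s) ** 2

    x = np.zeros(n_particles)  # x_0 = 0 deterministically
    log_Z = 0.0
    for y in ys:
        m = prop_mean_fn(x, y)
        x_new = m + prop_std * rng.standard_normal(n_particles)
        # importance weight: (transition * likelihood) / proposal
        log_w = (log_normal(x_new, x, 1.0) + log_normal(y, x_new, 1.0)
                 - log_normal(x_new, m, prop_std))
        c = log_w.max()
        w = np.exp(log_w - c)
        log_Z += c + np.log(w.mean())  # running log normalizing constant
        # multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x_new[idx]
    return log_Z
```

With the bootstrap proposal (prop_mean_fn = lambda x, y: x and prop_std = 1.0) this reduces to a standard particle filter; the variational view instead treats the proposal parameters as variational parameters to be optimized.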

Rejection Sampling Variational Inference

With Christian Naesseth, Francisco Ruiz, and David Blei. AISTATS, 2017
Best Paper Award

We derive reparameterization gradients through rejection samplers, enabling automatic variational inference in models with gamma, beta, and Dirichlet latent variables.
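The core trick can be sketched for the gamma distribution. The Marsaglia-Tsang rejection sampler proposes z = h(eps, alpha) with eps drawn from a standard normal; holding the accepted eps fixed and differentiating h with respect to alpha gives a pathwise gradient. This sketch shows only that pathwise term and omits the (typically small) score-function correction from the paper; the function names are illustrative.

```python
import numpy as np

def sample_gamma_eps(alpha, n, rng):
    """Marsaglia-Tsang rejection sampler for Gamma(alpha, 1), alpha >= 1.

    Returns the *accepted* standard-normal draws eps, so that h(eps, alpha)
    below reproduces the gamma samples deterministically.
    """
    d = alpha - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)
    out = np.empty(n)
    i = 0
    while i < n:
        eps = rng.standard_normal()
        v = (1.0 + c * eps) ** 3
        if v <= 0:
            continue
        u = rng.random()
        # standard squeeze-free acceptance test
        if np.log(u) < 0.5 * eps**2 + d - d * v + d * np.log(v):
            out[i] = eps
            i += 1
    return out

def h(eps, alpha):
    """Deterministic reparameterization of the accepted proposal."""
    d = alpha - 1.0 / 3.0
    return d * (1.0 + eps / np.sqrt(9.0 * d)) ** 3

rng = np.random.default_rng(0)
alpha = 2.0
eps = sample_gamma_eps(alpha, 20000, rng)
z = h(eps, alpha)
# Pathwise gradient of E[z] w.r.t. alpha: hold the accepted eps fixed and
# differentiate the transform (central difference stands in for autodiff).
grad = np.mean((h(eps, alpha + 1e-4) - h(eps, alpha - 1e-4)) / 2e-4)
```

Since E[Gamma(alpha, 1)] = alpha, the exact gradient is 1; because the acceptance rate of this sampler is very high, the pathwise term alone lands close to it.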

Recurrent Switching Linear Dynamical Systems

With Matt Johnson, Andy Miller, Ryan Adams, David Blei, and Liam Paninski. AISTATS, 2017

We develop Bayesian learning and inference algorithms for models with co-evolving discrete and continuous latent states.

Dependent Multinomial Models Made Easy

With Matt Johnson and Ryan Adams. NIPS, 2015

We use a stick-breaking construction and Pólya-gamma augmentation to derive block Gibbs samplers for linear Gaussian models with multinomial observations.
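The stick-breaking construction itself is simple to sketch. In this minimal, assumed implementation, K-1 real-valued logits map to a K-dimensional probability vector, with each coordinate entering through a logistic function of a single logit; that logistic form is exactly what the Pólya-gamma augmentation renders conditionally Gaussian.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def stick_breaking_multinomial(psi):
    """Map K-1 real stick-breaking logits psi to a K-dim probability vector.

    pi_k = sigmoid(psi_k) * prod_{j < k} (1 - sigmoid(psi_j)), with the
    leftover stick mass assigned to the final category. Each break is a
    binary (logistic) choice, so a multinomial likelihood factors into
    K-1 binomial likelihoods amenable to Polya-gamma augmentation.
    """
    p = sigmoid(psi)
    stick = np.concatenate([p, [1.0]])                     # break fractions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - p)])  # mass left over
    return stick * remaining
```

The probabilities telescope to one by construction, so placing a linear Gaussian model on the logits psi yields correlated multinomial observations with closed-form block Gibbs updates.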

Publications

2020

  1. Williams, A. H., Degleris, A., Wang, Y., & Linderman, S. W. (2020). Point process models for sequence detection in high-dimensional neural spike trains. Advances in Neural Information Processing Systems (NeurIPS). Selected for Oral Presentation (1.1% of all submissions)
    arXiv Code
  2. Glaser, J. I., Whiteway, M., Cunningham, J. P., Paninski, L., & Linderman, S. W. (2020). Recurrent switching dynamical systems models for multiple interacting neural populations. Advances in Neural Information Processing Systems (NeurIPS).
    bioRxiv Code
  3. Low, I. I. C., Williams, A. H., Campbell, M. G., Linderman, S. W., & Giocomo, L. M. (2020). Dynamic and reversible remapping of network representations in an unchanging environment. bioRxiv.
    bioRxiv
  4. Tansey, W., Li, K., Zhang, H., Linderman, S. W., Rabadan, R., Blei, D. M., & Wiggins, C. H. (2020). Dose-response modeling in high-throughput cancer drug screenings: An end-to-end approach. Biostatistics (in press).
    arXiv
  5. Zoltowski, D. M., Pillow, J. W., & Linderman, S. W. (2020). A general recurrent state space framework for modeling neural dynamics during decision-making. Proceedings of the International Conference on Machine Learning (ICML).
    Paper arXiv Code
  6. Johnson*, R. E., Linderman*, S. W., Panier, T., Wee, C. L., Song, E., Herrera, K. J., Miller, A., & Engert, F. (2020). Probabilistic models of larval zebrafish behavior reveal structure on many scales. Current Biology, 30(1), 70–82.
    Paper bioRxiv

2019

  1. Sun*, R., Linderman*, S. W., Kinsella, I., & Paninski, L. (2019). Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models. Advances in Neural Information Processing Systems (NeurIPS). Selected for Oral Presentation (0.5% of all submissions)
    Paper Code
  2. Apostolopoulou, I., Linderman, S. W., Miller, K., & Dubrawski, A. (2019). Mutually regressive point processes. Advances in Neural Information Processing Systems (NeurIPS).
    Paper Code
  3. Schein, A., Linderman, S. W., Zhou, M., Blei, D., & Wallach, H. (2019). Poisson-randomized gamma dynamical systems. Advances in Neural Information Processing Systems (NeurIPS).
    Paper arXiv Code
  4. Batty*, E., Whiteway*, M., Saxena, S., Biderman, D., Abe, T., Musall, S., Gillis, W., Markowitz, J., Churchland, A., Cunningham, J., Linderman†, S. W., & Paninski†, L. (2019). BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos. Advances in Neural Information Processing Systems (NeurIPS).
    Paper Code
  5. Linderman, S. W., Nichols, A. L. A., Blei, D. M., Zimmer, M., & Paninski, L. (2019). Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans. bioRxiv. https://doi.org/10.1101/621540
    bioRxiv
  6. Nassar, J., Linderman, S. W., Park, M., & Bugallo, M. (2019). Tree-structured locally linear dynamics model to uproot Bayesian neural data analysis. Computational and Systems Neuroscience (Cosyne) Abstracts.
  7. Raju, R. V., Li, Z., Linderman, S. W., & Pitkow, X. (2019). Inferring implicit inference. Computational and Systems Neuroscience (Cosyne) Abstracts.
  8. Glaser, J., Linderman, S. W., Whiteway, M., Perich, M., Dekleva, B., Miller, L., Cunningham, J. P., & Paninski, L. (2019). State space models for multiple interacting neural populations. Computational and Systems Neuroscience (Cosyne) Abstracts.
  9. Markowitz, J., Gillis, W., Murmann, J., Linderman, S. W., Sabatini, B., & Datta, S. (2019). Resolving the neural mechanisms of reinforcement learning through new behavioral technologies. Computational and Systems Neuroscience (Cosyne) Abstracts.
  10. Linderman, S. W., Sharma, A., Johnson, R. E., & Engert, F. (2019). Point process latent variable models of larval zebrafish behavior. Computational and Systems Neuroscience (Cosyne) Abstracts.
  11. Nassar, J., Linderman, S. W., Bugallo, M., & Park, I. M. (2019). Tree-Structured Recurrent Switching Linear Dynamical Systems for Multi-Scale Modeling. International Conference on Learning Representations (ICLR).
    Paper arXiv

2018

  1. Sharma, A., Johnson, R. E., Engert, F., & Linderman, S. W. (2018). Point process latent variable models of freely swimming larval zebrafish. Advances in Neural Information Processing Systems (NeurIPS).
    Paper Code
  2. Markowitz, J. E., Gillis, W. F., Beron, C. C., Neufeld, S. Q., Robertson, K., Bhagat, N. D., Peterson, R. E., Peterson, E., Hyun, M., Linderman, S. W., Sabatini, B. L., & Datta, S. R. (2018). The Striatum Organizes 3D Behavior via Moment-to-Moment Action Selection. Cell. https://doi.org/10.1016/j.cell.2018.04.019
    Paper
  3. Linderman, S. W., Nichols, A., Blei, D. M., Zimmer, M., & Paninski, L. (2018). Hierarchical recurrent models reveal latent states of neural activity in C. elegans. Computational and Systems Neuroscience (Cosyne) Abstracts.
  4. Markowitz, J. E., Gillis, W. F., Beron, C. C., Neufeld, S. Q., Robertson, K., Bhagat, N. D., Peterson, R. E., Peterson, E., Hyun, M., Linderman, S. W., Sabatini, B. L., & Datta, S. R. (2018). Complementary Direct and Indirect Pathway Activity Encodes Sub-Second 3D Pose Dynamics in Striatum. Computational and Systems Neuroscience (Cosyne) Abstracts.
  5. Johnson*, R. E., Linderman*, S. W., Panier, T., Wee, C., Song, E., Herrera, K., Miller, A. C., & Engert, F. (2018). Revealing multiple timescales of structure in larval zebrafish behavior. Computational and Systems Neuroscience (Cosyne) Abstracts.
  6. Mena, G. E., Belanger, D., Linderman, S. W., & Snoek, J. (2018). Learning Latent Permutations with Gumbel-Sinkhorn Networks. International Conference on Learning Representations (ICLR).
    Paper Code
  7. Linderman, S. W., Mena, G. E., Cooper, H., Paninski, L., & Cunningham, J. P. (2018). Reparameterizing the Birkhoff Polytope for Variational Permutation Inference. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper arXiv
  8. Naesseth, C. A., Linderman, S. W., Ranganath, R., & Blei, D. M. (2018). Variational Sequential Monte Carlo. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper arXiv Code

2017

  1. Linderman, S. W., Wang, Y., & Blei, D. M. (2017). Bayesian inference for latent Hawkes processes. Advances in Approximate Bayesian Inference Workshop at the 31st Conference on Neural Information Processing Systems.
    Paper
  2. Buchanan, E. K., Lipschitz, A., Linderman, S. W., & Paninski, L. (2017). Quantifying the behavioral dynamics of C. elegans with autoregressive hidden Markov models. Workshop on Worm’s Neural Information Processing at the 31st Conference on Neural Information Processing Systems.
    Paper
  3. Mena, G. E., Linderman, S. W., Belanger, D., Snoek, J., Cunningham, J. P., & Paninski, L. (2017). Toward Bayesian permutation inference for identifying neurons in C. elegans. Workshop on Worm’s Neural Information Processing at the 31st Conference on Neural Information Processing Systems.
    Paper
  4. Linderman, S. W., & Johnson, M. J. (2017). Structure-Exploiting Variational Inference for Recurrent Switching Linear Dynamical Systems. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing.
    Paper
  5. Linderman, S. W., & Blei, D. M. (2017). Comment: A Discussion of “Nonparametric Bayes Modeling of Populations of Networks.” Journal of the American Statistical Association, 112(520), 1543–1547.
    Paper Code
  6. Linderman, S. W., & Gershman, S. J. (2017). Using computational theory to constrain statistical models of neural data. Current Opinion in Neurobiology, 46, 14–24.
    Paper bioRxiv Code
  7. Linderman, S. W., Miller, A. C., Adams, R. P., Blei, D. M., Johnson, M. J., & Paninski, L. (2017). Neuro-behavioral Analysis with Recurrent switching linear dynamical systems. Computational and Systems Neuroscience (Cosyne) Abstracts.
  8. Linderman*, S. W., Johnson*, M. J., Miller, A. C., Adams, R. P., Blei, D. M., & Paninski, L. (2017). Bayesian learning and inference in recurrent switching linear dynamical systems. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper Slides Talk Code
  9. Naesseth, C. A., Ruiz, F. J. R., Linderman, S. W., & Blei, D. M. (2017). Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS). Best Paper Award.
    Paper Blog Post Code

2016

  1. Naesseth, C. A., Ruiz, F. J. R., Linderman, S. W., & Blei, D. M. (2016). Rejection sampling variational inference. Advances in Approximate Bayesian Inference Workshop at the 30th Conference on Neural Information Processing Systems.
    Paper
  2. Chen, Z., Linderman, S. W., & Wilson, M. A. (2016). Bayesian Nonparametric Methods For Discovering Latent Structures Of Rat Hippocampal Ensemble Spikes. IEEE Workshop on Machine Learning for Signal Processing.
    Paper Code
  3. Elibol, H. M., Nguyen, V., Linderman, S. W., Johnson, M. J., Hashmi, A., & Doshi-Velez, F. (2016). Cross-Corpora Unsupervised Learning of Trajectories in Autism Spectrum Disorders. Journal of Machine Learning Research, 17(133), 1–38.
    Paper
  4. Linderman, S. W., Adams, R. P., & Pillow, J. W. (2016). Bayesian latent structure discovery from multi-neuron recordings. Advances in Neural Information Processing Systems (NIPS).
    Paper Code
  5. Linderman, S. W. (2016). Bayesian methods for discovering structure in neural spike trains [PhD thesis]. Harvard University. Leonard J. Savage Award for Outstanding Dissertation in Applied Bayesian Methodology from the International Society for Bayesian Analysis
    Thesis Code
  6. Linderman, S. W., Johnson, M. J., Wilson, M. A., & Chen, Z. (2016). A Bayesian nonparametric approach to uncovering rat hippocampal population codes during spatial navigation. Journal of Neuroscience Methods, 263, 36–47.
    Paper Code
  7. Linderman, S. W., Tucker, A., & Johnson, M. J. (2016). Bayesian Latent State Space Models of Neural Activity. Computational and Systems Neuroscience (Cosyne) Abstracts.

2015

  1. Linderman*, S. W., Johnson*, M. J., & Adams, R. P. (2015). Dependent Multinomial Models Made Easy: Stick-Breaking with the Pólya-gamma Augmentation. Advances in Neural Information Processing Systems (NIPS), 3438–3446.
    Paper arXiv Code
  2. Linderman, S. W., & Adams, R. P. (2015). Scalable Bayesian Inference for Excitatory Point Process Networks. ArXiv Preprint ArXiv:1507.03228.
    arXiv Code
  3. Linderman, S. W., Adams, R. P., & Pillow, J. W. (2015). Inferring structured connectivity from spike trains under negative-binomial generalized linear models. Computational and Systems Neuroscience (Cosyne) Abstracts.
  4. Johnson, M. J., Linderman, S. W., Datta, S. R., & Adams, R. P. (2015). Discovering switching autoregressive dynamics in neural spike train recordings. Computational and Systems Neuroscience (Cosyne) Abstracts.

2014

  1. Linderman, S. W., Stock, C. H., & Adams, R. P. (2014). A framework for studying synaptic plasticity with neural spike train data. Advances in Neural Information Processing Systems (NIPS), 2330–2338.
    Paper arXiv
  2. Linderman, S. W. (2014). Discovering Latent States of the Hippocampus with Bayesian Hidden Markov Models. CBMM Memo 024: Abstracts of the Brains, Minds, and Machines Summer School.
    Paper
  3. Linderman, S. W., & Adams, R. P. (2014). Discovering Latent Network Structure in Point Process Data. Proceedings of the International Conference on Machine Learning (ICML), 1413–1421.
    Paper arXiv Talk Code
  4. Linderman, S. W., Stock, C. H., & Adams, R. P. (2014). A framework for studying synaptic plasticity with neural spike train data. Annual Meeting of the Society for Neuroscience.
    Paper
  5. Nemati, S., Linderman, S. W., & Chen, Z. (2014). A Probabilistic Modeling Approach for Uncovering Neural Population Rotational Dynamics. Computational and Systems Neuroscience (Cosyne) Abstracts.

2013

  1. Linderman, S. W., & Adams, R. P. (2013). Fully-Bayesian Inference of Structured Functional Networks in GLMs. Acquiring and Analyzing the Activity of Large Neural Ensembles Workshop at Neural Information Processing Systems (NIPS).
  2. Linderman, S. W., & Adams, R. P. (2013). Discovering structure in spiking networks. New England Machine Learning Day.
  3. Linderman, S. W., & Adams, R. P. (2013). Inferring functional connectivity with priors on network topology. Computational and Systems Neuroscience (Cosyne) Abstracts.