Linderman Lab

Stanford University

Welcome to the Linderman Lab! We belong to the Statistics Department and the Wu Tsai Neurosciences Institute at Stanford University. We work at the intersection of machine learning and computational neuroscience, developing models and algorithms to better understand complex biological data. Check out some of our research below, and reach out if you'd like to learn more!

Research

Modern recording technologies allow simultaneous measurements of hundreds, if not thousands, of neurons in freely moving animals. In parallel, advances in computer vision enable fast and accurate tracking of animal pose, yielding rich quantification of the nervous system's behavioral output. These technologies offer exciting opportunities to link brain activity to behavior, but they also pose statistical challenges: Neural and behavioral data are noisy, high-dimensional time series with nonlinear dynamics and substantial variability across contexts and subjects. We develop probabilistic models, scalable inference algorithms, and robust software tools to overcome these challenges and offer new insights into the neural basis of natural behavior.

State Space Models for Neural Data

State space models (SSMs) are probabilistic models that capture latent states underlying high-dimensional data. We develop SSMs tailored to neural data, like the recurrent switching linear dynamical system (rSLDS) (Linderman et al., 2017) and related models (Nassar et al., 2019; Glaser et al., 2020; Zoltowski et al., 2020; Smith et al., 2021; Lee et al., 2023). Recently, we introduced the Gaussian process SLDS (Hu et al., 2024), which generalizes the rSLDS while retaining its interpretable structure. We work closely with experimental neuroscientists to apply our techniques and to motivate new methodological work. For example, we are working with Prof. David Anderson at Caltech, using these methods to study attractor dynamics in the hypothalamus during social interactions (Nair et al., 2023; Liu*, Nair* et al., 2024; Mountoufaris et al., 2024; Vinograd*, Nair* et al., 2024). Finally, we develop software packages like SSM and Dynamax (Linderman et al., 2024) for practitioners and methodological researchers alike.
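
To make these models concrete, here is a toy sketch of the rSLDS generative process in Python. Everything in it is illustrative: the dimensions, parameters, and softplus-Poisson emissions are our choices, and for brevity the discrete transitions depend only on the previous continuous state, whereas the full model also conditions on the previous discrete state. See SSM and Dynamax for complete implementations and inference algorithms.

    import numpy as np

    rng = np.random.default_rng(0)
    K, D, N, T = 2, 2, 10, 500   # discrete states, latent dim, neurons, timesteps

    # Per-state dynamics, biases, recurrent weights, and an emission matrix.
    As = [0.95 * np.eye(D) + 0.05 * rng.standard_normal((D, D)) for _ in range(K)]
    bs = [0.1 * rng.standard_normal(D) for _ in range(K)]
    R = rng.standard_normal((K, D))   # maps latents to transition logits
    C = rng.standard_normal((N, D))

    def softmax(u):
        e = np.exp(u - u.max())
        return e / e.sum()

    x = np.zeros(D)
    states, latents, spikes = [], [], []
    for t in range(T):
        z = rng.choice(K, p=softmax(R @ x))       # "recurrent" switching: z depends on x
        x = As[z] @ x + bs[z] + 0.1 * rng.standard_normal(D)
        y = rng.poisson(np.log1p(np.exp(C @ x)))  # Poisson spike counts with softplus rates
        states.append(z); latents.append(x); spikes.append(y)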

Behavioral Time Series Models

Quantifying natural behavior poses both computational and statistical challenges: tracking animals' posture, identifying stereotyped patterns of movement, and relating movement to simultaneously recorded neural measurements. For several years, we have worked with Prof. Bob Datta at Harvard Medical School to extend and apply Motion Sequencing (MoSeq) (Wiltschko et al., 2015), an SSM that parses videos of freely moving animals into sequences of short, stereotyped movements called syllables. We have developed methods like Time-Warped MoSeq, which disentangles discrete and continuous forms of behavioral variation (Costacurta et al., 2022), and GIMBAL, a probabilistic model for 3D keypoint tracking (Zhang et al., 2021). Recently, we collaborated with the Datta Lab on Keypoint-MoSeq (Weinreb et al., 2024), which combines GIMBAL and MoSeq to extract syllables directly from keypoint trajectories. Using these tools, we have studied how natural behavior relates to neural activity in the basal ganglia of mice (Markowitz et al., 2018, 2023), how it correlates with whole-brain activity patterns measured with Fos (Friedmann et al., 2024), and how internal state and external stimuli drive natural behavior of larval zebrafish (Johnson*, Linderman* et al., 2020).
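
Under the hood, MoSeq is built on an autoregressive hidden Markov model (AR-HMM). Here is a toy sketch of that generative model in Python; the sticky transition matrix, dimensions, and noise scales are illustrative, and Keypoint-MoSeq layers a keypoint observation model on top of these latent pose dynamics.

    import numpy as np

    rng = np.random.default_rng(1)
    K, D, T = 3, 4, 1000   # syllables, pose dimensions, video frames
    stay = 0.98            # self-transition probability ("sticky" syllables)

    P = np.full((K, K), (1 - stay) / (K - 1))
    np.fill_diagonal(P, stay)   # rows sum to one
    As = [0.95 * np.eye(D) + 0.05 * rng.standard_normal((D, D)) for _ in range(K)]
    bs = [0.2 * rng.standard_normal(D) for _ in range(K)]

    z, y = 0, np.zeros(D)
    syllables, poses = [], []
    for t in range(T):
        z = rng.choice(K, p=P[z])   # discrete syllable follows a sticky Markov chain
        y = As[z] @ y + bs[z] + 0.05 * rng.standard_normal(D)   # syllable-specific AR dynamics
        syllables.append(z); poses.append(y)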

Deep State Space Models

There has been a resurgence of interest in state space models within the machine learning community thanks to the impressive performance of deep SSMs (Gu et al., 2021, 2023). The architecture is surprisingly simple: each layer is a standard linear dynamical system, and the layers are nonlinearly coupled to one another to capture more complex relationships. We contributed to this line of work with S5 (Smith et al., 2023a,b), which simplified earlier work and paved the way for deep SSMs with time-varying dynamics. Now we are developing a theoretical framework for understanding deep SSMs (Smékal et al., 2024) and extending the parallel scan at the core of S5 to allow nonlinear dynamics within each layer (Gonzalez et al., 2024).
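
The computational trick enabling these models is that an affine recurrence is associative, so a parallel (associative) scan can evaluate it in logarithmic rather than linear depth. Here is a minimal JAX sketch with diagonal dynamics, as in S5; the sequence length, dimensions, and dynamics values are illustrative.

    import jax
    import jax.numpy as jnp

    def combine(elem_i, elem_j):
        # Compose two steps of the affine recurrence x -> a * x + b.
        a_i, b_i = elem_i
        a_j, b_j = elem_j
        return a_j * a_i, a_j * b_i + b_j

    T, D = 1024, 16
    a = 0.99 * jnp.ones((T, D))                            # stable diagonal dynamics
    b = jax.random.normal(jax.random.PRNGKey(0), (T, D))   # per-timestep inputs

    # xs[t] == a[t] * xs[t-1] + b[t], computed in O(log T) parallel depth.
    _, xs = jax.lax.associative_scan(combine, (a, b))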

Point Processes

Neural spike trains are naturally modeled as multivariate point processes. My doctoral thesis focused on Bayesian models and inference algorithms for discovering latent network structure underlying point processes (Linderman and Adams, 2014; Linderman et al., 2016). More recently, we developed PP-Seq, a doubly stochastic point process model known as a Neyman-Scott process, for detecting sequential firing patterns in spike train data (Williams et al., 2020), and we made novel theoretical connections between Neyman-Scott processes and Bayesian nonparametric mixture models (Wang et al., 2023).
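
To give a feel for the model class, here is a toy Neyman-Scott sampler in Python: latent "parent" events arrive as a Poisson process, and each spawns a cluster of observed spikes. All rates and kernel choices are illustrative; PP-Seq additionally models sequence types and neuron-specific delays and amplitudes, which we omit here.

    import numpy as np

    rng = np.random.default_rng(2)
    T_max, n_neurons = 100.0, 50   # recording length (s) and number of neurons
    parent_rate = 0.2              # latent events per second
    mean_spikes = 30               # expected spikes evoked by each latent event
    jitter = 0.1                   # temporal spread of each cluster (s)

    n_parents = rng.poisson(parent_rate * T_max)   # latent events form a Poisson process
    parents = rng.uniform(0.0, T_max, n_parents)

    spikes = []
    for p in parents:
        n = rng.poisson(mean_spikes)                 # cluster size is itself random
        times = p + jitter * rng.standard_normal(n)  # Gaussian jitter around the parent time
        neurons = rng.integers(0, n_neurons, n)      # assign each spike to a neuron
        spikes.extend(zip(times, neurons))
    spikes.sort()   # observed data: (time, neuron) pairs; the parents are unobserved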

Scalable Inference Algorithms

As neural and behavioral recording technologies advance, the size of the resulting datasets is growing exponentially. To fit complex models to these data, we need algorithms that scale to the challenge. We develop approximate Bayesian inference algorithms for complex models and large-scale datasets, including novel approaches to gradient-based variational inference (VI) (Naesseth et al., 2017), methods that blend sequential Monte Carlo and VI (Naesseth et al., 2018; Lawson et al., 2022, 2023), and structure-exploiting inference algorithms for variational autoencoders (Zhao and Linderman, 2023).
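
To illustrate the core idea, here is a toy sketch of gradient-based VI with the reparameterization trick in JAX. The conjugate Gaussian model, mean-field family, and hyperparameters are all illustrative; the papers above build far more sophisticated estimators on this foundation.

    import jax
    import jax.numpy as jnp
    from jax.scipy.stats import norm

    def log_joint(z, y=2.0):
        # Toy model: z ~ N(0, 1) and y | z ~ N(z, 1), so the posterior is N(y/2, 1/2).
        return norm.logpdf(z) + norm.logpdf(y, z, 1.0)

    def neg_elbo(phi, key, n_samples=32):
        mu, log_sigma = phi
        eps = jax.random.normal(key, (n_samples,))
        z = mu + jnp.exp(log_sigma) * eps               # reparameterized draws from q(z)
        log_q = norm.logpdf(z, mu, jnp.exp(log_sigma))
        return -jnp.mean(log_joint(z) - log_q)          # Monte Carlo estimate of -ELBO

    grad_fn = jax.jit(jax.grad(neg_elbo))
    phi, key = jnp.array([0.0, 0.0]), jax.random.PRNGKey(0)
    for step in range(500):
        key, subkey = jax.random.split(key)
        phi = phi - 0.05 * grad_fn(phi, subkey)         # plain SGD on the negative ELBO
    # phi[0] approaches the true posterior mean, y / 2 = 1.0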

Group


Scott W. Linderman

I'm an Assistant Professor of Statistics and an Institute Scholar in the Wu Tsai Neurosciences Institute at Stanford University. I hold a courtesy appointment in Computer Science, and I'm a member of Stanford Bio-X and the Stanford AI Lab. I was a postdoctoral fellow with Liam Paninski and David Blei at Columbia University, and I completed my PhD in Computer Science at Harvard University with Ryan Adams and Leslie Valiant. I obtained my undergraduate degree in Electrical and Computer Engineering from Cornell University and spent three years as a software engineer at Microsoft prior to graduate school.

E-mail: scott.linderman@stanford.edu   CV   Google Scholar

Alisa Levin

PhD Student (CS, co-advised by Prof. Jaimie Henderson)


Alumni

Ben Antin

Research Assistant (2019-20)
(now PhD Student at Columbia)

Lea Duncker

Postdoc (2021-23)
(now Asst. Prof. at Columbia)

Maxwell Kounga

Undergraduate RA (2023-24)
(now RA at Harvard Medical School)

Sophia Lu

Undergraduate RA (2020-21)
(now PhD Student at Stanford)

Alex Williams

Postdoc (2019-21)
(now Asst. Prof. at NYU and Research Scientist at the Flatiron Institute)

Teaching

STAT 320: Machine Learning Methods for Neural Data Analysis

This course is organized around a series of coding labs. Each week, we introduce the theory behind a state-of-the-art method for neural data analysis. Then, in the lab, we develop a minimal version of that method from scratch in Python. The methods include: spike sorting and calcium deconvolution methods for extracting relevant signals from raw data; markerless tracking methods for estimating animal pose in behavioral videos; generalized linear models and deep learning models for neural encoding and decoding; and state space models for analysis of high-dimensional neural and behavioral time series.
Online Textbook (in development): https://slinderman.github.io/ml4nd

STAT 305B: Applied Statistics II

This is a course about models and algorithms for discrete data. We cover models ranging from generalized linear models to sequential latent variable models, autoregressive models, and transformers. On the algorithm side, we cover a few techniques for convex optimization, as well as approximate Bayesian inference algorithms like MCMC and variational inference. I think the best way to learn these concepts is to implement them from scratch, so coding is a big focus of this course. By the end, you will have a strong grasp of classical techniques as well as modern methods for modeling discrete data.
Course Reader: https://slinderman.github.io/stats305b

STAT 305C: Applied Statistics III

This course is about probabilistic modeling and (approximate) Bayesian inference algorithms for high dimensional data. Topics include multivariate Gaussian models, probabilistic graphical models, hierarchical Bayesian models, MCMC and variational Bayesian inference, principal components analysis, factor analysis, matrix completion, topic modeling, state space models, variational autoencoders, Gaussian processes, and point processes. Each week pairs a family of models with an approximate inference algorithm. The course involves extensive Python programming using PyTorch and applied statistical analyses of real datasets.
Course Reader: https://slinderman.github.io/stats305c

Publications

2024

  1. Friedmann, D., Gonzalez, X., Moses, A., Watts, T., Degleris, A., Ticea, N., Song, J. H., Datta, S. R., Linderman*, S. W., & Luo*, L. (2024). Concerted modulation of spontaneous behavior and time-integrated whole-brain neuronal activity by serotonin receptors. bioRxiv. https://doi.org/10.1101/2024.08.02.606282
    bioRxiv
  2. Vinograd*, A., Nair*, A., Kim, J., Linderman, S. W., & Anderson, D. J. (2024). Causal evidence of a line attractor encoding an affective state. Nature. https://doi.org/10.1038/s41586-024-07915-x
    Paper bioRxiv
  3. Mountoufaris, G., Nair, A., Yang, B., Kim, D. W., Vinograd, A., Kim, S., Linderman, S. W., & Anderson, D. J. (2024). Neuropeptide signaling is required to implement a line attractor encoding a persistent internal behavioral state. Cell. (In press.)
  4. Liu*, M., Nair*, A., Coria, N., Linderman, S. W., & Anderson, D. J. (2024). Encoding of female mating dynamics by a hypothalamic line attractor. Nature. https://doi.org/10.1038/s41586-024-07916-w
    Paper bioRxiv
  5. Bedbrook, C., Nath, R., Zhang, E., Linderman, S. W., Brunet, A., & Deisseroth, K. (2024). Life-long behavioral monitoring reveals dynamics that forecast lifespan and discrete transitions defining an architecture of aging. (Under review.)
  6. Linderman, S. W., Chang, P., Harper-Donnelly, G., Kara, A., Li, X., Duran-Martin, G., & Murphy, K. P. (2024). Dynamax: A Python package for probabilistic state space modeling with JAX. (Under review.)
    Paper Code
  7. Hu, A., Zoltowski, D., Nair, A., Anderson, D., Duncker, L., & Linderman, S. (2024). Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems. arXiv. (Under review.)
    arXiv
  8. Costacurta, J. C., Bhandarkar, S., Zoltowski, D. M., & Linderman, S. W. (2024). Structured flexibility in recurrent neural networks via neuromodulation. bioRxiv. https://doi.org/10.1101/2024.07.26.605315 (Under review.)
    bioRxiv
  9. Gonzalez, X., Warrington, A., Smith, J. T. H., & Linderman, S. W. (2024). Towards Scalable and Stable Parallelization of Nonlinear RNNs. arXiv. https://doi.org/10.48550/arXiv.2407.19115 (Under review.)
    arXiv
  10. Zhao, Y., Shi, J., Mackey, L., & Linderman, S. (2024). Informed Correctors for Discrete Diffusion Models. arXiv. https://doi.org/10.48550/arXiv.2407.21243 (Under review.)
    Paper
  11. Vloeberghs, R., Urai, A. E., Desender, K., & Linderman, S. W. (2024). A Bayesian Hierarchical Model of Trial-to-Trial Fluctuations in Decision Criterion. bioRxiv. https://doi.org/10.1101/2024.07.30.605869
    bioRxiv
  12. Gershman, S. J., Assad, J. A., Datta, S. R., Linderman, S. W., Sabatini, B. L., Uchida, N., & Wilbrecht, L. (2024). Explaining dopamine through prediction errors and beyond. Nature Neuroscience, 1–11.
    Paper
  13. Weinreb, C., Pearl, J. E., Lin, S., Osman, M. A. M., Zhang, L., Annapragada, S., Conlin, E., Hoffmann, R., Makowska, S., Gillis, W. F., Jay, M., Ye, S., Mathis, A., Mathis, M. W., Pereira, T., Linderman*, S. W., & Datta*, S. R. (2024). Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics. Nature Methods, 21(7), 1329–1339. https://doi.org/10.1038/s41592-024-02318-2
    Paper bioRxiv Code
  14. Smékal, J., Smith, J. T. H., Kleinman, M., Biderman, D., & Linderman, S. W. (2024). Towards a theory of learning dynamics in deep state space models. ICML 2024 Workshop on Next Generation of Sequence Modeling Architectures. Selected for Spotlight Presentation
    arXiv

2023

  1. Smith, J. T. H., De Mello, S., Kautz, J., Linderman, S., & Byeon, W. (2023). Convolutional State Space Models for Long-Range Spatiotemporal Modeling. Thirty-Seventh Conference on Neural Information Processing Systems.
    arXiv Code
  2. Lawson, D., Li, M. Y., & Linderman, S. (2023). NAS-X: Neural Adaptive Smoothing via Twisting. Thirty-Seventh Conference on Neural Information Processing Systems.
    arXiv Code
  3. Lee, H. D., Warrington, A., Glaser, J. I., & Linderman, S. (2023). Switching Autoregressive Low-rank Tensor Models. Thirty-Seventh Conference on Neural Information Processing Systems.
    arXiv Code
  4. Hennig, J., Pinto, S. A. R., Yamaguchi, T., Linderman, S. W., Uchida, N., & Gershman, S. J. (2023). Emergence of belief-like representations through reinforcement learning. PLoS Computational Biology. https://doi.org/10.1101/2023.04.04.535512
    bioRxiv
  5. Wang, Y., Degleris, A., Williams, A. H., & Linderman, S. W. (2023). Spatiotemporal Clustering with Neyman-Scott Processes via Connections to Bayesian Nonparametric Mixture Models. Journal of the American Statistical Association.
    arXiv Code
  6. Bukwich, M., Campbell, M. G., Zoltowski, D., Kingsbury, L., Tomov, M. S., Stern, J., Kim, H. G. R., Drugowitsch, J., Linderman, S. W., & Uchida, N. (2023). Competitive integration of time and reward explains value-sensitive foraging decisions and frontal cortex ramping dynamics. bioRxiv.
    bioRxiv
  7. Zhao, Y., & Linderman, S. W. (2023). Revisiting Structured Variational Autoencoders. International Conference on Machine Learning (ICML).
    Paper arXiv Code
  8. Smith, J. T. H., Warrington, A., & Linderman, S. W. (2023). Simplified State Space Layers for Sequence Modeling. International Conference on Learning Representations (ICLR). Selected for Oral Presentation (top 5% of accepted papers, top 1.5% of all submissions)
    Paper arXiv Code
  9. Markowitz, J., Gillis, W., Jay, M., Wood, J., Harris, R., Cieszkowski, R., Scott, R., Brann, D., Koveal, D., Kuila, T., Weinreb, C., Osman, M., Pinto, S. R., Uchida, N., Linderman, S. W., Sabatini, B., & Datta, S. R. (2023). Spontaneous behavior is structured by reinforcement without exogenous reward. Nature. https://doi.org/10.1038/s41586-022-05611-2
    Paper
  10. Nair, A., Karigo, T., Yang, B., Ganguli, S., Schnitzer, M. J., Linderman, S. W., Anderson, D. J., & Kennedy, A. (2023). An approximate line attractor in the hypothalamus encodes an aggressive state. Cell, 186(1), 178–193.
    Paper bioRxiv

2022

  1. Lawson, D., Raventos, A., Warrington, A., & Linderman, S. (2022). SIXO: Smoothing Inference with Twisted Objectives. Advances in Neural Information Processing Systems. Selected for Oral Presentation
    Paper arXiv Code
  2. Costacurta, J. C., Duncker, L., Sheffer, B., Gillis, W., Weinreb, C., Markowitz, J. E., Datta, S. R., Williams, A. H., & Linderman, S. (2022). Distinguishing discrete and continuous behavioral variability using warped autoregressive HMMs. Advances in Neural Information Processing Systems.
    Paper bioRxiv
  3. Beller, A., Xu, Y., Linderman, S. W., & Gerstenberg, T. (2022). Looking into the past: Eye-tracking mental simulation in physical inference. Proceedings of the Annual Meeting of the Cognitive Science Society, 44(44).
    Paper arXiv
  4. Beron, C. C., Neufeld, S. Q., Linderman*, S. W., & Sabatini*, B. L. (2022). Mice exhibit stochastic and efficient action switching during probabilistic decision making. Proceedings of the National Academy of Sciences, 119(15), e2113961119. https://doi.org/10.1073/pnas.2113961119
    Paper bioRxiv
  5. Lin, A., Witvliet, D., Hernandez-Nunez, L., Linderman, S. W., Samuel, A. D. T., & Venkatachalam, V. (2022). Imaging whole-brain activity to understand behaviour. Nature Reviews Physics, 1–14.
    Paper
  6. Linderman, S. W. (2022). Weighing the evidence in sharp-wave ripples. Neuron, 110(4), 568–570. https://doi.org/10.1016/j.neuron.2022.01.036
    Paper Code

2021

  1. Williams, A. H., & Linderman, S. W. (2021). Statistical neuroscience in the single trial limit. Current Opinion in Neurobiology, 70, 193–205. https://doi.org/10.1016/j.conb.2021.10.008
    Paper arXiv
  2. Smith, J. T. H., Linderman, S. W., & Sussillo, D. (2021). Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems. Advances in Neural Information Processing Systems (NeurIPS).
    Paper arXiv Code
  3. Williams, A. H., Kunz, E., Kornblith, S., & Linderman, S. W. (2021). Generalized Shape Metrics on Neural Representations. Advances in Neural Information Processing Systems (NeurIPS).
    Paper arXiv Code
  4. Yu, X., Creamer, M. S., Randi, F., Sharma, A. K., Linderman, S. W., & Leifer, A. M. (2021). Fast deep neural correspondence for tracking and identifying neurons in C. elegans using semi-synthetic training. eLife, 10, e66410.
    Paper arXiv
  5. Low, I. I. C., Williams, A. H., Campbell, M. G., Linderman, S. W., & Giocomo, L. M. (2021). Dynamic and reversible remapping of network representations in an unchanging environment. Neuron.
    Paper bioRxiv
  6. Zhang, L., Marshall, J. D., Dunn, T., Ölveczky, B., & Linderman, S. W. (2021). Animal pose estimation from video data with a hierarchical von Mises-Fisher-Gaussian model. Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper

2020

  1. Mittal, A., Linderman, S. W., Paisley, J., & Sajda, P. (2020). Bayesian recurrent state space model for rs-fMRI. Machine Learning for Health (ML4H) Workshop at NeurIPS 2020.
    arXiv
  2. Williams, A. H., Degleris, A., Wang, Y., & Linderman, S. W. (2020). Point process models for sequence detection in high-dimensional neural spike trains. Advances in Neural Information Processing Systems (NeurIPS). Selected for Oral Presentation (1.1% of all submissions)
    Paper arXiv Code
  3. Glaser, J. I., Whiteway, M., Cunningham, J. P., Paninski, L., & Linderman, S. W. (2020). Recurrent switching dynamical systems models for multiple interacting neural populations. Advances in Neural Information Processing Systems (NeurIPS).
    Paper bioRxiv Code
  4. Tansey, W., Li, K., Zhang, H., Linderman, S. W., Rabadan, R., Blei, D. M., & Wiggins, C. H. (2020). Dose-response modeling in high-throughput cancer drug screenings: An end-to-end approach. Biostatistics.
    Paper arXiv
  5. Zoltowski, D. M., Pillow, J. W., & Linderman, S. W. (2020). A general recurrent state space framework for modeling neural dynamics during decision-making. Proceedings of the International Conference on Machine Learning (ICML).
    Paper arXiv Code
  6. Johnson*, R. E., Linderman*, S. W., Panier, T., Wee, C. L., Song, E., Herrera, K. J., Miller, A., & Engert, F. (2020). Probabilistic models of larval zebrafish behavior reveal structure on many scales. Current Biology, 30(1), 70–82.
    Paper bioRxiv

2019

  1. Sun*, R., Linderman*, S. W., Kinsella, I., & Paninski, L. (2019). Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models. Advances in Neural Information Processing Systems (NeurIPS). Selected for Oral Presentation (0.5% of all submissions)
    Paper Code
  2. Apostolopoulou, I., Linderman, S. W., Miller, K., & Dubrawski, A. (2019). Mutually regressive point processes. Advances in Neural Information Processing Systems (NeurIPS).
    Paper Code
  3. Schein, A., Linderman, S. W., Zhou, M., Blei, D., & Wallach, H. (2019). Poisson-randomized gamma dynamical systems. Advances in Neural Information Processing Systems (NeurIPS).
    Paper arXiv Code
  4. Batty*, E., Whiteway*, M., Saxena, S., Biderman, D., Abe, T., Musall, S., Gillis, W., Markowitz, J., Churchland, A., Cunningham, J., Linderman†, S. W., & Paninski†, L. (2019). BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos. Advances in Neural Information Processing Systems (NeurIPS).
    Paper Code
  5. Linderman, S. W., Nichols, A. L. A., Blei, D. M., Zimmer, M., & Paninski, L. (2019). Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans. bioRxiv. https://doi.org/10.1101/621540
    bioRxiv
  6. Nassar, J., Linderman, S. W., Park, M., & Bugallo, M. (2019). Tree-structured locally linear dynamics model to uproot Bayesian neural data analysis. Computational and Systems Neuroscience (Cosyne) Abstracts.
  7. Raju, R. V., Li, Z., Linderman, S. W., & Pitkow, X. (2019). Inferring implicit inference. Computational and Systems Neuroscience (Cosyne) Abstracts.
  8. Glaser, J., Linderman, S. W., Whiteway, M., Perich, M., Dekleva, B., Miller, L., & Cunningham, J. P. (2019). State space models for multiple interacting neural populations. Computational and Systems Neuroscience (Cosyne) Abstracts.
  9. Markowitz, J., Gillis, W., Murmann, J., Linderman, S. W., Sabatini, B., & Datta, S. (2019). Resolving the neural mechanisms of reinforcement learning through new behavioral technologies. Computational and Systems Neuroscience (Cosyne) Abstracts.
  10. Linderman, S. W., Sharma, A., Johnson, R. E., & Engert, F. (2019). Point process latent variable models of larval zebrafish behavior. Computational and Systems Neuroscience (Cosyne) Abstracts.
  11. Nassar, J., Linderman, S. W., Bugallo, M., & Park, I. M. (2019). Tree-Structured Recurrent Switching Linear Dynamical Systems for Multi-Scale Modeling. International Conference on Learning Representations (ICLR).
    Paper arXiv

2018

  1. Sharma, A., Johnson, R. E., Engert, F., & Linderman, S. W. (2018). Point process latent variable models of freely swimming larval zebrafish. Advances in Neural Information Processing Systems (NeurIPS).
    Paper Code
  2. Markowitz, J. E., Gillis, W. F., Beron, C. C., Neufeld, S. Q., Robertson, K., Bhagat, N. D., Peterson, R. E., Peterson, E., Hyun, M., Linderman, S. W., Sabatini, B. L., & Datta, S. R. (2018). The Striatum Organizes 3D Behavior via Moment-to-Moment Action Selection. Cell. https://doi.org/10.1016/j.cell.2018.04.019
    Paper
  3. Linderman, S. W., Nichols, A., Blei, D. M., Zimmer, M., & Paninski, L. (2018). Hierarchical recurrent models reveal latent states of neural activity in C. elegans. Computational and Systems Neuroscience (Cosyne) Abstracts.
  4. Markowitz, J. E., Gillis, W. F., Beron, C. C., Neufeld, S. Q., Robertson, K., Bhagat, N. D., Peterson, R. E., Peterson, E., Hyun, M., Linderman, S. W., Sabatini, B. L., & Datta, S. R. (2018). Complementary Direct and Indirect Pathway Activity Encodes Sub-Second 3D Pose Dynamics in Striatum. Computational and Systems Neuroscience (Cosyne) Abstracts.
  5. Johnson*, R. E., Linderman*, S. W., Panier, T., Wee, C., Song, E., Herrera, K., Miller, A. C., & Engert, F. (2018). Revealing multiple timescales of structure in larval zebrafish behavior. Computational and Systems Neuroscience (Cosyne) Abstracts.
  6. Mena, G. E., Belanger, D., Linderman, S. W., & Snoek, J. (2018). Learning Latent Permutations with Gumbel-Sinkhorn Networks. International Conference on Learning Representations (ICLR).
    Paper Code
  7. Linderman, S. W., Mena, G. E., Cooper, H., Paninski, L., & Cunningham, J. P. (2018). Reparameterizing the Birkhoff Polytope for Variational Permutation Inference. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper arXiv
  8. Naesseth, C. A., Linderman, S. W., Ranganath, R., & Blei, D. M. (2018). Variational Sequential Monte Carlo. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper arXiv Code

2017

  1. Linderman, S. W., Wang, Y., & Blei, D. M. (2017). Bayesian inference for latent Hawkes processes. Advances in Approximate Bayesian Inference Workshop at the 31st Conference on Neural Information Processing Systems.
    Paper
  2. Buchanan, E. K., Lipschitz, A., Linderman, S. W., & Paninski, L. (2017). Quantifying the behavioral dynamics of C. elegans with autoregressive hidden Markov models. Workshop on Worm’s Neural Information Processing at the 31st Conference on Neural Information Processing Systems.
    Paper
  3. Mena, G. E., Linderman, S. W., Belanger, D., Snoek, J., Cunningham, J. P., & Paninski, L. (2017). Toward Bayesian permutation inference for identifying neurons in C. elegans. Workshop on Worm’s Neural Information Processing at the 31st Conference on Neural Information Processing Systems.
    Paper
  4. Linderman, S. W., & Johnson, M. J. (2017). Structure-Exploiting Variational Inference for Recurrent Switching Linear Dynamical Systems. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing.
    Paper
  5. Linderman, S. W., & Blei, D. M. (2017). Comment: A Discussion of “Nonparametric Bayes Modeling of Populations of Networks.” Journal of the American Statistical Association, 112(520), 1543–1547.
    Paper Code
  6. Linderman, S. W., & Gershman, S. J. (2017). Using computational theory to constrain statistical models of neural data. Current Opinion in Neurobiology, 46, 14–24.
    Paper bioRxiv Code
  7. Linderman, S. W., Miller, A. C., Adams, R. P., Blei, D. M., Johnson, M. J., & Paninski, L. (2017). Neuro-behavioral Analysis with Recurrent switching linear dynamical systems. Computational and Systems Neuroscience (Cosyne) Abstracts.
  8. Linderman*, S. W., Johnson*, M. J., Miller, A. C., Adams, R. P., Blei, D. M., & Paninski, L. (2017). Bayesian learning and inference in recurrent switching linear dynamical systems. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS).
    Paper Slides Talk Code
  9. Naesseth, C. A., Ruiz, F. J. R., Linderman, S. W., & Blei, D. M. (2017). Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS). Best Paper Award.
    Paper Blog Post Code

2016

  1. Naesseth, C. A., Ruiz, F. J. R., Linderman, S. W., & Blei, D. M. (2016). Rejection sampling variational inference. Advances in Approximate Bayesian Inference Workshop at the 30th Conference on Neural Information Processing Systems.
    Paper
  2. Chen, Z., Linderman, S. W., & Wilson, M. A. (2016). Bayesian Nonparametric Methods for Discovering Latent Structures of Rat Hippocampal Ensemble Spikes. IEEE Workshop on Machine Learning for Signal Processing.
    Paper Code
  3. Elibol, H. M., Nguyen, V., Linderman, S. W., Johnson, M. J., Hashmi, A., & Doshi-Velez, F. (2016). Cross-Corpora Unsupervised Learning of Trajectories in Autism Spectrum Disorders. Journal of Machine Learning Research, 17(133), 1–38.
    Paper
  4. Linderman, S. W., Adams, R. P., & Pillow, J. W. (2016). Bayesian latent structure discovery from multi-neuron recordings. Advances in Neural Information Processing Systems (NIPS).
    Paper Code
  5. Linderman, S. W. (2016). Bayesian methods for discovering structure in neural spike trains [PhD thesis]. Harvard University. Leonard J. Savage Award for Outstanding Dissertation in Applied Bayesian Methodology from the International Society for Bayesian Analysis
    Thesis Code
  6. Linderman, S. W., Johnson, M. J., Wilson, M. A., & Chen, Z. (2016). A Bayesian nonparametric approach to uncovering rat hippocampal population codes during spatial navigation. Journal of Neuroscience Methods, 263, 36–47.
    Paper Code
  7. Linderman, S. W., Tucker, A., & Johnson, M. J. (2016). Bayesian Latent State Space Models of Neural Activity. Computational and Systems Neuroscience (Cosyne) Abstracts.

2015

  1. Linderman*, S. W., Johnson*, M. J., & Adams, R. P. (2015). Dependent Multinomial Models Made Easy: Stick-Breaking with the Pólya-gamma Augmentation. Advances in Neural Information Processing Systems (NIPS), 3438–3446.
    Paper arXiv Code
  2. Linderman, S. W., & Adams, R. P. (2015). Scalable Bayesian Inference for Excitatory Point Process Networks. arXiv preprint arXiv:1507.03228.
    arXiv Code
  3. Linderman, S. W., Adams, R. P., & Pillow, J. W. (2015). Inferring structured connectivity from spike trains under negative-binomial generalized linear models. Computational and Systems Neuroscience (Cosyne) Abstracts.
  4. Johnson, M. J., Linderman, S. W., Datta, S. R., & Adams, R. P. (2015). Discovering switching autoregressive dynamics in neural spike train recordings. Computational and Systems Neuroscience (Cosyne) Abstracts.

2014

  1. Linderman, S. W., Stock, C. H., & Adams, R. P. (2014). A framework for studying synaptic plasticity with neural spike train data. Advances in Neural Information Processing Systems (NIPS), 2330–2338.
    Paper arXiv
  2. Linderman, S. W. (2014). Discovering Latent States of the Hippocampus with Bayesian Hidden Markov Models. CBMM Memo 024: Abstracts of the Brains, Minds, and Machines Summer School.
    Paper
  3. Linderman, S. W., & Adams, R. P. (2014). Discovering Latent Network Structure in Point Process Data. Proceedings of the International Conference on Machine Learning (ICML), 1413–1421.
    Paper arXiv Talk Code
  4. Linderman, S. W., Stock, C. H., & Adams, R. P. (2014). A framework for studying synaptic plasticity with neural spike train data. Annual Meeting of the Society for Neuroscience.
    Paper
  5. Nemati, S., Linderman, S. W., & Chen, Z. (2014). A Probabilistic Modeling Approach for Uncovering Neural Population Rotational Dynamics. Computational and Systems Neuroscience (Cosyne) Abstracts.

2013

  1. Linderman, S. W., & Adams, R. P. (2013). Fully-Bayesian Inference of Structured Functional Networks in GLMs. Acquiring and Analyzing the Activity of Large Neural Ensembles Workshop at Neural Information Processing Systems (NIPS).
  2. Linderman, S. W., & Adams, R. P. (2013). Discovering structure in spiking networks. New England Machine Learning Day.
  3. Linderman, S. W., & Adams, R. P. (2013). Inferring functional connectivity with priors on network topology. Computational and Systems Neuroscience (Cosyne) Abstracts.