Research

My research focuses on advancing basic systems neuroscience and neuroengineering to develop more effective brain-computer interfaces (BCIs). I have been pursuing this direction in two complementary ways: (1) sensory prostheses, for writing in neural activity corresponding to a stimulus, and (2) motor prostheses, for reading out neural activity corresponding to an intended movement. Below, I outline some of the key projects I have led or been involved in.

Retinal prosthesis

Our primary aim is to restore vision in individuals suffering from retinal degeneration by electrically stimulating the remaining layers of the retina, specifically the retinal ganglion cells (RGCs), at single-cell resolution. Below are some of the projects I worked on as a PhD student (2014-2020) with Prof. E.J. Chichilnisky and as a member of the Stanford Artificial Retina group (2016-current):

  • How do RGCs encode visual information?

To answer this question, we need an encoding model that converts visual stimuli into neural responses in a blind human retina. Using large-scale ex-vivo recordings in healthy primate retina, we developed a subunit model that predicts visually evoked responses by identifying spatial nonlinearities within the receptive field (Shah et al., 2020). We further applied this method to identify unusual properties of new RGC types (Rhoades et al., 2019). In subsequent work, we translated visual response models from white noise stimuli to naturalistic stimuli (Brackbill et al., 2020), from healthy retina to blind retina (Zaidi et al., 2023), and from primates to humans (Kling et al., 2020).
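To make the subunit idea concrete, here is a minimal toy sketch (not the fitted model from Shah et al., 2020; the filters, rectifier, and output nonlinearity are illustrative assumptions): each subunit applies a linear spatial filter followed by rectification, and the cell pools the subunit outputs through an output nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

def subunit_response(stimulus, subunit_filters):
    """Predict an RGC firing rate from a spatial stimulus.

    Each subunit applies a linear filter followed by a rectifying
    nonlinearity; the cell sums the subunit outputs and passes the
    total drive through a softplus output nonlinearity."""
    drive = sum(max(f @ stimulus, 0.0) for f in subunit_filters)
    return float(np.log1p(np.exp(drive)))  # softplus: always non-negative

# Toy example: three subunits tiling a 10-pixel stimulus.
filters = [rng.normal(size=10) * 0.3 for _ in range(3)]
stim = rng.normal(size=10)
rate = subunit_response(stim, filters)
```

Because each subunit is rectified before pooling, the model can capture spatial nonlinearities within the receptive field that a single linear filter would average away.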

To close the gap between the basic neuroscience literature and the clinical requirements of an artificial retina, we used data from nearly a hundred recordings to identify a low-dimensional manifold describing inter-individual variability in neural encoding, and used it to efficiently characterize visual encoding in a previously unseen retina (Shah et al., 2022).
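One way to picture the manifold idea is with a simple PCA sketch on synthetic data (the actual parameterization and fitting procedure in Shah et al., 2022 differ; the shapes and numbers here are assumptions): encoding-model parameters from many retinas define a low-dimensional basis, onto which a new retina can be projected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each row holds encoding-model parameters fit to one retina.
params = rng.normal(size=(100, 20))

# PCA via SVD of the mean-centered parameter matrix.
mean = params.mean(axis=0)
U, S, Vt = np.linalg.svd(params - mean, full_matrices=False)
basis = Vt[:3]  # top 3 components span a low-dimensional manifold

# Characterize a new retina by projecting its parameters onto the manifold.
new_retina = rng.normal(size=20)
coords = basis @ (new_retina - mean)
reconstruction = mean + coords @ basis
```

The payoff is efficiency: once the manifold is known, a previously unseen retina is described by a handful of coordinates rather than a full, lengthy characterization.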

  • How do RGCs respond to cellular resolution electrical stimulation?

We first developed an efficient approach to characterizing neural responses to single-cell resolution electrical stimulation by identifying a simple relationship between electrical activation thresholds and spike amplitudes recorded in parallel (Madugula et al., 2022; Madugula et al., 2023), and used it to develop adaptive, closed-loop experiments (Shah et al., 2019). More recently, we have been working to understand how neurons respond to multi-electrode stimulation patterns and to develop efficient characterization methods for these stimuli (Vasireddy et al., 2023).
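A standard way to summarize a cell's response to electrical stimulation is a sigmoidal activation curve, with the threshold defined as the current amplitude evoking a spike on half of trials. The sketch below is illustrative only (the threshold, slope, and interpolation-based estimator are assumptions, standing in for a proper maximum-likelihood fit):

```python
import numpy as np

def activation_probability(amplitude, threshold, slope):
    """Sigmoidal probability that a cell spikes at a given current amplitude."""
    return 1.0 / (1.0 + np.exp(-slope * (amplitude - threshold)))

def estimate_threshold(amplitudes, spike_probs):
    """Estimate the 50%-activation amplitude by linear interpolation
    between the two measurements bracketing probability 0.5."""
    idx = int(np.searchsorted(spike_probs, 0.5))
    a0, a1 = amplitudes[idx - 1], amplitudes[idx]
    p0, p1 = spike_probs[idx - 1], spike_probs[idx]
    return a0 + (0.5 - p0) * (a1 - a0) / (p1 - p0)

# Simulated activation curve with a true threshold of 1.2 (arbitrary units).
amps = np.linspace(0.5, 2.0, 16)
probs = activation_probability(amps, threshold=1.2, slope=8.0)
est = estimate_threshold(amps, probs)  # recovers a value near 1.2
```

Measuring such curves exhaustively for every cell-electrode pair is slow, which is what motivates exploiting the threshold-amplitude relationship and closed-loop characterization described above.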

  • How do we convert incoming visual stimuli into electrical stimuli?

We developed a new approach for reproducing a given pattern of neural activity by combining simple, single-electrode electrical stimuli using spatial and temporal multiplexing (Shah et al., 2019; Shah et al., 2022). Subsequently, we created an efficient, real-time version of this algorithm by decomposing the problem into smaller, non-overlapping partitions that mimic the geometry of RGC axons (Lotlikar et al., 2023).
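The flavor of this approach can be conveyed by a toy greedy selection loop (a hedged sketch, not the published algorithm: the dictionary, error metric, and stopping rule here are simplifications): given a dictionary of calibrated single-electrode stimuli, each with a known expected activity pattern, repeatedly pick the stimulus that best reduces the residual error to the target pattern.

```python
import numpy as np

def greedy_stimulation(target, dictionary, max_steps=50):
    """Greedily select single-electrode stimuli so that their summed
    expected RGC activity approaches a target activity pattern.

    target:     desired spike counts per cell, shape (n_cells,)
    dictionary: expected activity per stimulus, shape (n_stimuli, n_cells)
    Returns the list of chosen stimulus indices."""
    residual = target.astype(float).copy()
    chosen = []
    for _ in range(max_steps):
        # Residual error if each candidate stimulus were delivered next.
        errors = np.linalg.norm(residual[None, :] - dictionary, axis=1)
        best = int(np.argmin(errors))
        if errors[best] >= np.linalg.norm(residual):
            break  # no stimulus reduces the error further
        chosen.append(best)
        residual -= dictionary[best]
    return chosen

# Toy dictionary: each stimulus activates exactly one of three cells.
dictionary = np.eye(3)
target = np.array([1.0, 1.0, 0.0])
chosen = greedy_stimulation(target, dictionary)
```

Because each step only ever evaluates simple, pre-characterized stimuli, a scheme like this lends itself to the real-time, partitioned implementation described above.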

Motor prosthesis

Our goal is to enable neural decoding of rapid and dexterous movements, such as those of multiple fingers. As a postdoc at the Neural Prosthetics Translational Laboratory with Prof. Jaimie Henderson (2020-current) and Prof. Krishna Shenoy (2020-2022), and as a member of the BrainGate2 consortium (2020-current), I have been pursuing this research using Utah array recordings from human participants with paralysis. Below are some of my ongoing lines of research:

  • How does the motor cortex combine the representation of single finger movements to represent the simultaneous movement of multiple fingers?

  • How do we enable real-time, closed-loop control of multiple fingers? How do we use this control for a useful brain-computer interface, such as typing?