Mohsen's photo

Mohsen Bayati

Assistant Professor of Operations, Information and Technology at Graduate School of Business
Assistant Professor of Electrical Engineering (by courtesy)
Advising faculty in Biomedical Informatics
Stanford University


Announcements


Teaching

  • Spring 2015: OIT 673 Data-driven Decision Making in Healthcare (PhD)
  • Winter 2015: OIT 367 Analytics from Big Data (MBA Core)
  • Winter 2014: OIT 367 Analytics from Big Data (MBA Core)
  • Winter 2013: OIT 367 Analytics from Big Data (MBA Core)
  • Spring 2012: OIT 267 Data and Decisions - Accelerated (MBA Core)

Research Interests

My research interests include applications of graphical models, machine learning, probability theory and statistical physics in:

  • Data-driven healthcare: incentives, predictive models, optimization, and decisions
  • Design and analysis of algorithms for large-scale data
  • Graphical models and message-passing algorithms
  • Theory of algorithms, applied probability and statistics


Selected Projects (for a complete list click here)

Data-Driven Decision Making in Healthcare

  • M. Bayati, M. Braverman, M. Gillam, K. Mack, G. Ruiz, M. Smith, and E. Horvitz, Predictive Models and Policies for Minimizing Rehospitalizations for Congestive Heart Failure, Submitted.
  • Summary: Rehospitalization, the admission of a patient to a hospital soon after discharge, is both common and costly. Nearly one in every five patients is readmitted to the hospital within 30 days of discharge, and the estimated cost of unplanned rehospitalizations to Medicare in 2004 was around $17.4 billion. Research shows that hospital initiatives such as patient education programs, follow-up home visits by pharmacists, and extensive discharge packages can avert many rehospitalizations. However, properly allocating these costly and limited resources is a challenging problem. In this project, we use machine learning and optimization tools to identify the patients at highest risk of rehospitalization and to build a decision-support mechanism for allocating scarce post-discharge resources (a minimal illustrative sketch appears below).

    Video
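
    For illustration only, here is a minimal sketch of the general idea, not the paper's model or data: fit a simple risk model on historical discharges, score a new cohort, and spend a fixed intervention budget on the highest-risk patients. All data, features, and the budget are synthetic placeholders.

    # Hedged sketch: synthetic cohort, illustrative logistic-regression risk model,
    # and a toy "top-k by predicted risk" allocation rule.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic discharges: columns stand in for features such as age, prior
    # admissions, and lab values (placeholders, not the paper's features).
    X = rng.normal(size=(5000, 4))
    true_w = np.array([0.8, 1.2, 0.5, -0.3])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_w - 1.5))))  # 1 = readmitted within 30 days

    # Fit the risk model on "historical" discharges and score the "current" cohort.
    model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
    risk = model.predict_proba(X[4000:])[:, 1]

    # Allocate a limited budget of post-discharge interventions (e.g., follow-up
    # visits) to the patients with the highest predicted risk.
    budget = 50
    chosen = np.argsort(risk)[::-1][:budget]
    print(f"intervening on {budget} patients; mean predicted risk among them: {risk[chosen].mean():.2f}")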

Graphical Models, Approximate Message Passing, and Statistical Learning

  • M. Bayati and A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing, IEEE Transactions on Information Theory, Vol. 57, No. 2, 2011
  • M. Bayati and A. Montanari, The LASSO risk for Gaussian matrices, IEEE Transactions on Information Theory, Vol. 58, No. 4, 2012
  • M. Bayati, M. Lelarge and A. Montanari, Universality in Polytope Phase Transitions and Message Passing Algorithms, Submitted, 2012
  • M. Bayati, M. A. Erdogdu, and A. Montanari, Estimating LASSO risk and noise level, Accepted to NIPS, 2013.
  • Summary: Recently, Donoho, Maleki and Montanari introduced approximate message passing (AMP) as an extremely effective algorithm for reconstructing high-dimensional sparse signals from a small number of observations. They also showed, through extensive numerical experiments, that the dynamics of AMP are accurately tracked by a simple one-dimensional iteration termed "state evolution". In these papers we provided the first rigorous foundation for state evolution: we proved that it indeed holds asymptotically, in the large system limit, for sensing matrices with i.i.d. Gaussian entries. Our techniques also provide a new approach for the analysis of message-passing algorithms on dense graphs.

    In the second paper, we applied our results to the popular approach for dimensionality reduction known as the LASSO. We established asymptotically sharp predictions and explicit expressions for operating characteristics of the LASSO (e.g., its mean square error); this is the first rigorous derivation of such explicit formulas for random instances. Through simulations on real data matrices (gene expression data and hospital medical records), we observed that these results can be relevant in a broad array of practical applications (a toy AMP iteration is sketched below).
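
    For illustration, here is a toy AMP iteration for compressed sensing with soft thresholding, assuming an i.i.d. Gaussian sensing matrix. The threshold rule, problem sizes, and noise level are illustrative choices, not the ones analyzed in the papers.

    import numpy as np

    def soft_threshold(v, theta):
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    rng = np.random.default_rng(0)
    N, n, k, sigma = 2000, 1000, 100, 0.01    # signal length, measurements, sparsity, noise level
    delta = n / N

    # Sparse signal, i.i.d. Gaussian sensing matrix (entries of variance 1/n), noisy observations.
    x0 = np.zeros(N)
    x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, N))
    y = A @ x0 + sigma * rng.normal(size=n)

    x, z = np.zeros(N), y.copy()
    for t in range(30):
        pseudo_data = x + A.T @ z                  # behaves like x0 plus Gaussian noise
        theta = 1.5 * np.sqrt(np.mean(z ** 2))     # illustrative threshold rule
        x_new = soft_threshold(pseudo_data, theta)
        # The Onsager correction term below distinguishes AMP from plain iterative
        # soft thresholding and is what makes state evolution track the dynamics.
        z = y - A @ x_new + (z / delta) * np.mean(x_new != 0)
        x = x_new

    print("reconstruction MSE:", np.mean((x - x0) ** 2))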


Modeling of Large Networks with Given Constraints

  • M. Bayati, J. H. Kim and A. Saberi, A sequential algorithm for generating random graphs, Algorithmica, Vol. 58, No. 4, 2010
    Software (by David Gleich)
  • Summary: A central hurdle in working with large networks is the lack of good random graph models that capture their specific properties. Such models are useful, for example, in simulating algorithms for the Internet or in detecting motifs (patterns) in biological systems. Unfortunately, the existing algorithms for generating random graphs with given degrees have large running times (quadratic in the problem size), making them impractical for networks with millions of nodes. In this paper we designed the fastest known algorithm (almost linear running time) and proved its correctness when the node degrees are in a certain range; a simplified sketch of the sequential construction appears below. More recently, with Andrea Montanari and Amin Saberi, we extended that algorithm to an efficient algorithm for generating random graphs with no short cycles, which appeared in SODA 2009. Such graphs are used to construct high-performance codes that can achieve the Shannon capacity.
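
    The sketch below illustrates sequential edge placement for a prescribed degree sequence, in the spirit of the paper's procedure; the actual algorithm's carefully chosen sampling probabilities and importance weights are omitted, and a naive restart is used instead.

    import random

    def sequential_simple_graph(degrees, max_restarts=100):
        """Try to build a simple graph (no loops or multi-edges) with the given degrees."""
        for _ in range(max_restarts):
            residual = list(degrees)
            edges = set()
            failed = False
            while sum(residual) > 0:
                # Candidate pairs: distinct nodes with remaining stubs, not yet connected.
                candidates = [(i, j)
                              for i in range(len(degrees)) if residual[i] > 0
                              for j in range(i + 1, len(degrees))
                              if residual[j] > 0 and (i, j) not in edges]
                if not candidates:
                    failed = True   # dead end; restart from scratch
                    break
                # Pick a pair with probability proportional to the product of residual degrees.
                weights = [residual[i] * residual[j] for i, j in candidates]
                i, j = random.choices(candidates, weights=weights, k=1)[0]
                edges.add((i, j))
                residual[i] -= 1
                residual[j] -= 1
            if not failed:
                return edges
        raise RuntimeError("could not realize the degree sequence")

    print(sequential_simple_graph([2, 2, 2, 2]))   # e.g., a 4-cycle on nodes 0..3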


Belief Propagation Algorithm for Distributed Optimization: Theory and Applications

  • M. Bayati, C. Borgs, J. Chayes, and R. Zecchina, Belief Propagation for Weighted b-Matchings on Arbitrary Graphs and its Relation to Linear Programs with Integer Solutions, SIAM Journal on Discrete Mathematics, Vol. 25, pp. 989-1011, 2011
  • M. Bayati, D. Shah, and M. Sharma, Max-product for maximum weight matching: convergence, correctness and LP duality, IEEE Transactions on Information Theory, Vol. 54, No. 3, 2008
  • Summary: Belief propagation (BP) is a distributed algorithm that has been used very successfully in many areas, including machine learning and optimization. Despite its spectacular success, the BP algorithm is known to be correct only on cycle-free networks; in practice, however, the (loopy) BP algorithm works rather well on networks with many cycles. In these papers we used the BP algorithm to solve the well-known maximum weight matching (MWM) problem. We proved that BP solves this problem correctly on all graphs when the linear programming (LP) relaxation of the problem has integer solutions, and that BP and LP are equivalent for finding the MWM. The proof technique gives a better understanding of the often-noted but poorly understood connection between BP and LP (a toy implementation of the simplified max-product updates is sketched after this list).

    In the article below, we used BP for the NP-hard problem of finding a minimum weight Steiner tree and obtained a very efficient algorithm. Our algorithm is very effective in systems biology for finding previously undetected protein associations in cell signaling, as well as in data clustering.

  • M. Bayati, C. Borgs, A. Braunstein, J. Chayes, A. Ramezanpour, and R. Zecchina, Statistical Mechanics of Steiner Trees, Physical Review Letters (PRL), 101, 037208, 2008
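
    As a toy illustration of the simplified max-product updates analyzed in the matching paper above, the sketch below runs synchronous message passing on a small complete bipartite graph with random weights, using a fixed iteration count. Correct convergence assumes the maximum weight matching is unique (which holds almost surely for continuous random weights); the brute-force search is included only as a check.

    import numpy as np
    from itertools import permutations

    rng = np.random.default_rng(0)
    n = 5
    W = rng.uniform(size=(n, n))   # W[i, j]: weight of the edge between left node i and right node j

    def max_excluding(M, axis):
        """out[i, j] = max of M along `axis`, skipping the entry at (i, j) itself."""
        out = np.empty_like(M)
        for i in range(M.shape[0]):
            for j in range(M.shape[1]):
                vec = np.delete(M[i, :], j) if axis == 1 else np.delete(M[:, j], i)
                out[i, j] = vec.max()
        return out

    # msg_lr[i, j]: message from left i to right j; msg_rl[i, j]: message from right j to left i.
    msg_lr = np.zeros((n, n))
    msg_rl = np.zeros((n, n))
    for _ in range(2000):   # fixed iteration count, plenty for this toy instance
        new_lr = W - max_excluding(msg_rl, axis=1)   # subtract the best competing offer left i received
        new_rl = W - max_excluding(msg_lr, axis=0)   # subtract the best competing offer right j received
        msg_lr, msg_rl = new_lr, new_rl

    bp_match = msg_rl.argmax(axis=1)                 # left node i is matched to right node bp_match[i]
    brute = max(permutations(range(n)), key=lambda p: W[np.arange(n), list(p)].sum())
    print("BP matching:    ", list(bp_match))
    print("optimal matching:", list(brute))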

Fast Algorithms for Aligning Large Networks

  • M. Bayati, D. F. Gleich, A. Saberi, and Y. Wang, Message-passing algorithms for sparse network alignment, ACM Transactions on Knowledge Discovery from Data (TKDD), Vol. 7, pp. 3:1-3:31, 2013
    Software and Data
  • Summary: Network alignment generalizes and unifies several approaches for forming a matching, or alignment, between the nodes of two networks. We propose a mathematical programming framework for the network alignment problem, together with a sparse variant in which only a small number of matches between the nodes of the two networks are allowed. We also propose new message-passing algorithms that compute, very efficiently, near-optimal solutions to sparse network alignment problems with networks as large as hundreds of thousands of nodes. Using rigorous linear programming upper bounds, we show that our algorithms produce near-optimal solutions on problems in bioinformatics, in ontology matching, and on synthetic instances (a toy version of the alignment objective is sketched below).

    Network alignment
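
    To make the objective concrete, here is a toy version of the sparse network alignment score: the total weight of the selected candidate matches plus the number of overlapped edges (pairs of matches whose endpoints are adjacent in both networks), maximized by brute force over matchings of a tiny made-up instance. This only illustrates the objective, not the paper's message-passing solver; all graphs, weights, and parameters below are invented.

    from itertools import combinations

    A_edges = {(0, 1), (1, 2), (0, 2)}                          # small graph A (a triangle)
    B_edges = {(0, 1), (1, 2)}                                  # small graph B (a path)
    L = {(0, 0): 1.0, (1, 1): 0.8, (2, 2): 0.6, (2, 0): 0.5}    # candidate matches and their weights
    alpha, beta = 1.0, 1.0                                      # trade-off between match weight and overlap

    def overlap(matching):
        """Number of edge pairs preserved by the matching."""
        return sum(1 for (i, j), (i2, j2) in combinations(matching, 2)
                   if tuple(sorted((i, i2))) in A_edges and tuple(sorted((j, j2))) in B_edges)

    def is_matching(pairs):
        """Each node of A and each node of B is used at most once."""
        a_side = [i for i, _ in pairs]
        b_side = [j for _, j in pairs]
        return len(set(a_side)) == len(a_side) and len(set(b_side)) == len(b_side)

    best = max((pairs for r in range(len(L) + 1) for pairs in combinations(L, r) if is_matching(pairs)),
               key=lambda pairs: alpha * sum(L[p] for p in pairs) + beta * overlap(pairs))
    print("best alignment:", best)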

Contact Info

Knight Management Center
655 Knight Way, E363
Stanford, CA 94305
Phone: (650) 725-2285
Fax: (650) 724-7402
E-mail: aaa@stanford.edu
aaa = bayati

Assistant: Sandra Davis
Knight Management Center
655 Knight Way, E324
Phone: (650) 736-0939
Fax: (650) 724-7402
Email: bbb@stanford.edu
bbb = srdavis