Weihua Hu (胡 緯華)

I am a third-year Computer Science Ph.D. student at Stanford, advised by Prof. Jure Leskovec.

I received a B.E. in Mathematical Engineering in 2016 and an M.S. in Computer Science in 2018, both from the University of Tokyo, where I worked with Prof. Masashi Sugiyama on machine learning and Prof. Hirosuke Yamamoto on information theory. I also worked with Prof. Jun'ichi Tsujii and Prof. Hideki Mima on natural language processing.

[CV] [Google Scholar]

Research Interests

Machine learning for graph-structured data

  1. Developing machine learning methods that can efficiently and effectively handle complex graph-structured data.

  2. Leveraging large amounts of data to learn useful graph representations.

  3. Developing large-scale benchmarks for machine learning on graphs (see the sketch below).
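
As a concrete illustration of the third interest, here is a minimal sketch of loading a dataset from the Open Graph Benchmark (OGB). It assumes the ogb and torch_geometric Python packages are installed; ogbn-arxiv is just one example dataset.

    # Minimal sketch: load an OGB node-property-prediction dataset.
    from ogb.nodeproppred import PygNodePropPredDataset

    # Downloads the data on first use and wraps it as a PyTorch Geometric dataset.
    dataset = PygNodePropPredDataset(name="ogbn-arxiv")

    # Standardized train/validation/test splits ship with every OGB dataset,
    # so reported results are directly comparable across papers.
    split_idx = dataset.get_idx_split()
    train_idx = split_idx["train"]

    graph = dataset[0]  # a torch_geometric Data object (x, edge_index, y, ...)
    print(graph.num_nodes, graph.num_edges)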

Publications

2021

  1. Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, Jure Leskovec.
    OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs.
    KDD Cup 2021.
    [arXiv] [project page] [code]

  2. Weihua Hu, Muhammed Shuaibi, Abhishek Das, Siddharth Goyal, Anuroop Sriram, Jure Leskovec, Devi Parikh, C. Lawrence Zitnick.
    ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations.
    ICLR 2021 Workshop on Deep Learning for Simulation. (Best paper award)
    [arXiv] [code] [talk]

  3. Lowik Chanussot*, Abhishek Das*, Siddharth Goyal*, Thibaut Lavril*, Muhammed Shuaibi*, Morgane Riviere, Kevin Tran, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Aini Palizhati, Anuroop Sriram, Brandon Wood, Junwoong Yoon, Devi Parikh, C. Lawrence Zitnick, Zachary Ulissi.
    The Open Catalyst 2020 (OC20) Dataset and Community Challenges.
    ACS Catalysis, 2021.
    [arXiv] [project page] [code]

  4. C. Lawrence Zitnick, Lowik Chanussot, Abhishek Das, Siddharth Goyal, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Thibaut Lavril, Aini Palizhati, Morgane Riviere, Muhammed Shuaibi, Anuroop Sriram, Kevin Tran, Brandon Wood, Junwoong Yoon, Devi Parikh, Zachary Ulissi.
    An Introduction to Electrocatalyst Design using Machine Learning for Renewable Energy Storage.
    arXiv preprint.
    [arXiv]

  5. Pang Wei Koh*, Shiori Sagawa*, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang.
    WILDS: A Benchmark of in-the-Wild Distribution Shifts.
    International Conference on Machine Learning (ICML), 2021. (long talk)
    [arXiv] [project page] [code]

2020

  1. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec.
    Open Graph Benchmark: Datasets for Machine Learning on Graphs.
    Neural Information Processing Systems (NeurIPS), 2020. (spotlight)
    [arXiv] [project page] [code] [talk]

  2. Weihua Hu*, Bowen Liu*, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, Jure Leskovec.
    Strategies for Pre-training Graph Neural Networks.
    International Conference on Learning Representations (ICLR), 2020. (spotlight)
    NeurIPS 2019 Workshop on Graph Representation Learning. (oral)
    [OpenReview] [project page] [code]

  3. Hongyu Ren*, Weihua Hu*, Jure Leskovec.
    Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings.
    International Conference on Learning Representations (ICLR), 2020.
    [OpenReview] [project page] [code]

2019

  1. Keyulu Xu*, Weihua Hu*, Jure Leskovec, Stefanie Jegelka.
    How Powerful are Graph Neural Networks?
    International Conference on Learning Representations (ICLR), 2019. (oral)
    [OpenReview] [arXiv] [code]

  2. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama.
    Unsupervised Discrete Representation Learning.
    Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer, Cham, 2019, pp. 97-119.
    (Book chapter contribution of our ICML 2017 work.)
    [Chapter]

2018

  1. Weihua Hu, Gang Niu, Issei Sato, Masashi Sugiyama.
    Does Distributionally Robust Supervised Learning Give Robust Classifiers?
    International Conference on Machine Learning (ICML), 2018.
    [arXiv]

  2. Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama.
    Co-teaching: Robust training of deep neural networks with noisy labels.
    Neural Information Processing Systems (NeurIPS), 2018.
    [arXiv]

2017

  1. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama.
    Learning Discrete Representations via Information Maximizing Self-Augmented Training.
    International Conference on Machine Learning (ICML), 2017.
    [arXiv] [code] [talk]

  2. Weihua Hu, Hirosuke Yamamoto, Junya Honda.
    Worst-case Redundancy of Optimal Binary AIFV Codes and their Extended Codes.
    IEEE Transactions on Information Theory, vol. 63, no. 8, pp. 5074-5086, August 2017.
    [arXiv]

  3. Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama.
    Learning from Complementary Labels.
    Neural Information Processing Systems (NeurIPS), 2017.
    [arXiv]

2016

  1. Weihua Hu, Hirosuke Yamamoto, Junya Honda.
    Tight Upper Bounds on the Redundancy of Optimal Binary AIFV Codes.
    IEEE International Symposium on Information Theory (ISIT), 2016.
    [paper] [slide]

  2. Weihua Hu, Jun'ichi Tsujii.
    A Latent Concept Topic Model for Robust Topic Inference Using Word Embeddings.
    Annual Meeting of the Association for Computational Linguistics (ACL), 2016.
    [paper] [poster] [code]

Awards

Professional Services

Talks

Workshop Organization

Teaching

Work Experience

Contact

Email: weihuahu [at] stanford.edu
URL: http://web.stanford.edu/~weihuahu/
Github: https://github.com/weihua916/