Weihua Hu (胡 緯華)

I am a second-year Computer Science Ph.D. student at Stanford, advised by Prof. Jure Leskovec.

I received a B.E. in Mathematical Engineering in 2016, and an M.S. in Computer Science in 2018, both from the University of Tokyo, where I worked with Prof. Masashi Sugiyama on machine learning and Prof. Hirosuke Yamamoto on information theory. I also worked with Prof. Jun'ichi Tsujii and Prof. Hideki Mima on natural language processing.

[CV] [Google Scholar]

Research Interests

Machine learning for graph-structured data

  1. Developing machine learning methods that can efficiently and effectively handle complex graph-structured data.

  2. Leveraging large amounts of data to learn useful graph representations.

  3. Applying graph representation learning to scientific domains, e.g., chemistry and biology.

Publications

Pre-print

  1. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec.
    Open Graph Benchmark: Datasets for Machine Learning on Graphs.
    [arXiv] [project page] [code]

2020

  1. Weihua Hu*, Bowen Liu*, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, Jure Leskovec.
    Strategies for Pre-training Graph Neural Networks.
    International Conference on Learning Representations (ICLR), 2020. (spotlight)
    NeurIPS 2019 Workshop on Graph Representation Learning (oral)
    [OpenReview] [project page] [code]

  2. Hongyu Ren*, Weihua Hu*, Jure Leskovec.
    Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings.
    International Conference on Learning Representations (ICLR), 2020.
    [OpenReview] [project page] [code]

2019

  1. Keyulu Xu*, Weihua Hu*, Jure Leskovec, Stefanie Jegelka.
    How Powerful are Graph Neural Networks?
    International Conference on Learning Representations (ICLR), 2019. (oral)
    [OpenReview] [arXiv] [code]

  2. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama.
    Unsupervised Discrete Representation Learning.
    Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer, Cham, 2019, pp. 97–119.
    (Book chapter based on our ICML 2017 work.)
    [Chapter]

2018

  1. Weihua Hu, Gang Niu, Issei Sato, Masashi Sugiyama.
    Does Distributionally Robust Supervised Learning Give Robust Classifiers?
    International Conference on Machine Learning (ICML), 2018.
    [arXiv]

  2. Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama.
    Co-teaching: Robust Training of Deep Neural Networks with Noisy Labels.
    Neural Information Processing Systems (NeurIPS), 2018.
    [arXiv]

2017

  1. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama.
    Learning Discrete Representations via Information Maximizing Self-Augmented Training.
    International Conference on Machine Learning (ICML), 2017.
    [arXiv] [code] [talk]

  2. Weihua Hu, Hirosuke Yamamoto, Junya Honda.
    Worst-case Redundancy of Optimal Binary AIFV Codes and Their Extended Codes.
    IEEE Transactions on Information Theory, vol. 63, no. 8, pp. 5074–5086, August 2017.
    [arXiv]

  3. Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama.
    Learning from Complementary Labels.
    Neural Information Processing Systems (NeurIPS), 2017.
    [arXiv]

2016

  1. Weihua Hu, Hirosuke Yamamoto, Junya Honda.
    Tight Upper Bounds on the Redundancy of Optimal Binary AIFV Codes.
    IEEE International Symposium on Information Theory (ISIT), 2016.
    [paper] [slides]

  2. Weihua Hu, Jun'ichi Tsujii.
    A Latent Concept Topic Model for Robust Topic Inference Using Word Embeddings.
    Annual Meeting of the Association for Computational Linguistics (ACL), 2016.
    [paper] [poster] [code]

Awards

Professional Services

Talks

Work Experience

Contact

Email: weihuahu [at] stanford.edu
URL: http://web.stanford.edu/~weihuahu/
Github: https://github.com/weihua916/