Beliz Gunel


PhD Candidate at Stanford University
PhD Advisor: Prof. John Pauly

Resume

Contact

Email: bgunel [at] stanford (dot) edu
Twitter: belizgunel

Google Scholar; Semantic Scholar; LinkedIn

Please do not add me on LinkedIn without sending a message/email. I'm not interested in joining any startups, and my expected PhD graduation date is December 2022.

About me

I am originally from Izmir, Turkey, and my friends & family from this lovely laid-back Mediterranean culture still don't understand why I'd like to "work this much". I received my bachelor's degree in Electrical Engineering and Computer Science with Honors from the University of California, Berkeley in 2017, where I spent more time in Caffe Strada than in any classroom. During my time at Berkeley, I was extremely fortunate to meet and work with Prof. Steven Conolly for 3 years on all things related to Magnetic Particle Imaging -- a new(ish) imaging modality that enables cell tracking and targeted drug delivery, and has great potential to enable early cancer detection. I will be forever grateful to Steve for introducing me to the joys of feeling stupid in research.

I have been working on my PhD in Electrical Engineering at Stanford since Autumn 2017, where I'm very fortunate to be advised by Prof. John Pauly. I'm humbled by John's kindness and genius every single day. My primary research interest is in representation learning for medical imaging and natural language processing. I am interested in learning better data representations ("the structure of your network/embedding should resonate with the structure of your data"). My secondary research interest is in building data-efficient and robust machine learning systems that work in the real world, along with the data management challenges that naturally arise. I collaborate very closely with Prof. Akshay Chaudhari and Prof. Shreyas Vasanawala.

Throughout my PhD, I have had the incredible opportunity to work with many amazing researchers across Google AI, Facebook AI, and Microsoft Research in the form of research internships (and ongoing collaborations afterwards). I was also very fortunate to work with Prof. Christopher Ré on some projects related to non-Euclidean machine learning.

Recent News

[4/9/21] I served on the Program Committee of the Graph Neural Networks and Systems Workshop held in conjunction with MLSys 2021.

[3/11/21] Our paper Self-training Improves Pre-training for Natural Language Understanding got accepted to NAACL 2021! This was a big group effort with many amazing collaborators across Facebook AI. Code is open-sourced here.

[2/24/21] Our paper on weakly supervised MR image reconstruction using untrained neural networks got accepted to ISMRM 2021!

[1/15/21] Our paper Glean: Structured Extractions from Templatic Documents got accepted to VLDB 2021! This was a big group effort with many amazing collaborators across Google AI and Google Cloud.

[1/12/21] Our paper Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning got accepted to ICLR 2021! This work is in collaboration with Jingfei, Alexis, and Ves from Facebook AI.

[6/12/20] The project I worked on as a research intern at Google Research on representation learning for form-like documents in Sandeep Tata's team was featured on the Google AI blog.

[5/26/20] I started my research internship at Facebook AI working on representation learning for few-shot natural language understanding in Ves Stoyanov's team in Necip Fazil Ayan's LATTE org.

[1/27/20] I started my research internship at Google AI working on representation learning for information extraction in Sandeep Tata's team in Marc Najork's org.

[1/15/20] Jane Street, Google Ads AI, and IBM Research invited me to give talks about my work on fact-aware abstractive summarization, which I did as a research intern at Microsoft Research with Chenguang Zhu in Xuedong Huang's org.

Publications

  • Glean: Structured Extractions from Templatic Documents
    Sandeep Tata, Navneet Potti, James B. Wendt, Lauro Beltrão Costa, Marc Najork, Beliz Gunel
    In VLDB, August 2021.

    [paper]

  • Self-training Improves Pre-training for Natural Language Understanding
    Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Ves Stoyanov, Alexis Conneau
    In NAACL, June 2021.

    [paper]

  • Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
    Beliz Gunel, Jingfei Du, Alexis Conneau, Ves Stoyanov
    In ICLR, May 2021.

    [paper]

  • Weakly Supervised MR Image Reconstruction using Untrained Neural Networks
    Beliz Gunel, Morteza Mardani, Akshay Chaudhari, Shreyas Vasanawala, John Pauly
    In ISMRM, May 2021.

    [paper]

  • Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization
    Beliz Gunel, Chenguang Zhu, Michael Zeng, Xuedong Huang
    In Knowledge Representation & Reasoning Meets Machine Learning at NeurIPS, December 2019.

    [paper]

  • Learning Mixed-Curvature Representations in Products of Model Spaces
    Albert Gu, Fred Sala, Beliz Gunel, Christopher Ré
    In ICLR, May 2019.

    [paper]

  • Hyperbolic Word Embeddings
    Beliz Gunel, Fred Sala, Christopher Ré
    In WiML at NeurIPS, December 2018.

    [embedding release]

  • Optimal Broadband Noise Matching to Inductive Sensors: Application to Magnetic Particle Imaging
    Bo Zheng et al.
    In IEEE Transactions on Biomedical Circuits and Systems, October 2017.

    [paper]

  • Quantitative Magnetic Particle Imaging Monitors the Transplantation, Biodistribution, and Clearance of Stem Cells In Vivo
    Bo Zheng et al.
    In Theranostics, January 2016.

    [paper]

Teaching

Teaching assistant at University of California, Berkeley for:

Honors and Professional Service

  • Ranked first in the Turkish National Board Exam out of 1.5 million students.

  • Stanford University Electrical Engineering Departmental Fellowship

  • Reviewer for Relational Representation Learning (NeurIPS 2018), Women in Machine Learning (NeurIPS 2018 & 2019), Representation Learning on Graphs and Manifolds (ICLR 2019), Learning and Reasoning with Graph-Structured Data (ICML 2019), Graph Representation Learning (NeurIPS 2019), Graph Representation Learning and Beyond (ICML 2020), DiffGeo4DL (NeurIPS 2020), NAACL 2021, ICML 2021, ACL 2021, EMNLP 2021, NeurIPS 2021.

  • Program Committee member for the Graph Neural Networks and Systems workshop at MLSys 2021.

  • Co-organizer of the Representation Learning on Graphs and Manifolds workshop at ICLR 2019.

Interests

I am passionate about better healthcare, public policy, and effective mentorship. I love all things comedy, music, and languages/cultures/traveling.