Beliz Gunel


PhD Candidate at Stanford University
Resume

Contact

Email: bgunel [at] stanford (dot) edu
Twitter: belizgunel

Google Scholar; Semantic Scholar; LinkedIn

I will graduate with my PhD in August 2022 and will start at Google AI as a Research Scientist in Fall 2022! If you are an undergrad, master's, or early-PhD student interested in working with me, please reach out via email. For my list of publications, please refer to my Google Scholar.

About me

I am originally from Izmir, Turkey. I received my bachelor's in Electrical Engineering and Computer Science with Honors from the University of California, Berkeley in 2017, where I spent more time in Caffe Strada than in any classroom. During my time at Berkeley, I was extremely fortunate to meet and work with Prof. Steven Conolly for 3 years on Magnetic Particle Imaging -- a novel imaging modality that enables cell tracking and targeted drug delivery, and has great potential to enable early cancer detection. I will be forever grateful to Steve for introducing me to the joys of feeling stupid in research.

I have been working on my PhD at Stanford since Autumn 2017, where I'm very fortunate to be advised by Prof. John Pauly. I am broadly interested in developing machine learning methods for natural language processing and healthcare that leverage the structure of the underlying data and domain knowledge, and in building data-efficient machine learning systems that are robust to distribution shifts. I have collaborated closely with Prof. Akshay Chaudhari and Prof. Shreyas Vasanawala.

Throughout my PhD, I have had the incredible opportunity to work with many amazing researchers across Google AI, Google Brain, Facebook (Meta) AI, and Microsoft Research through research internships (and ongoing collaborations afterward). I was also very fortunate to work with Prof. Christopher Ré on some projects related to non-Euclidean machine learning.

Recent News

[2/28/22] Our paper VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction got accepted to MIDL 2022 as an Oral!

[2/24/22] I was invited as a panelist to Stanford AIMI's Language Models in Medicine AI Happy Hour, which can be found here.

[2/4/22] Three papers got accepted to ISMRM 2022! Two of them were led by students I (co)mentored.

[10/19/21] Our paper SSFD: Self-Supervised Feature Distance as an MR Image Reconstruction Quality Metric got accepted to NeurIPS Deep Inverse 2021!

[9/24/21] I started part-time at Google Brain working on discrete sentence-level language modeling based on VQ-VAEs for coherent long-form generation and paraphrasing with Yao Zhao, Peter Liu, and David Grangier.

[9/2/21] I was invited to give a talk at Stanford's MedAI. I covered much of our past and ongoing work related to self-training, weak supervision, consistency-based approaches, and more in the context of AI in medical imaging -- the talk can be found here.

[8/15/21] Our paper Data-Efficient Information Extraction from Form-Like Documents got accepted to KDD-DI 2021! Our approach is currently in production use at Google AI, and I gave a talk on it at KDD-DI that can be found here.

[4/9/21] I served on the Program Committee of the Graph Neural Networks and Systems Workshop held in conjunction with MLSys 2021.

[3/11/21] Our paper Self-training Improves Pretraining for Natural Language Understanding got accepted to NAACL 2021! This was a big group effort with many amazing collaborators across Facebook AI. Code is open-sourced here.

[2/24/21] Our paper on weakly supervised MR image reconstruction using untrained neural networks got accepted to ISMRM 2021!

[1/15/21] Our paper Glean: Structured Extractions from Templatic Documents got accepted to VLDB 2021! This was a big group effort with many amazing collaborators across Google AI and Google Cloud.

[1/12/21] Our paper Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning got accepted to ICLR 2021! This work is in collaboration with Jingfei, Alexis, and Ves from Facebook AI.

[6/12/20] The project I worked on as a research intern at Google Research, on representation learning for form-like documents in Sandeep Tata's team, was featured on the Google AI blog.

[5/26/20] I started my research internship at Facebook AI working on representation learning for few-shot natural language understanding in Ves Stoyanov's team in Necip Fazil Ayan's LATTE org.

[1/27/20] I started my research internship at Google AI working on representation learning for information extraction in Sandeep Tata's team in Marc Najork's org.

[1/15/20] Jane Street, Google Ads AI, and IBM Research invited me to give a talk about my work on fact-aware abstractive summarization that I did as a research intern at Microsoft Research with Chenguang Zhu in Xuedong Huang's org.

Teaching

Teaching assistant at the University of California, Berkeley.

Honors and Professional Service

  • Ranked first in the Turkish National Board Exam out of 1.5 million students.

  • Stanford University Electrical Engineering Departmental Fellowship

  • Reviewer for Relational Representational Learning (NeurIPS 2018), Women in Machine Learning (NeurIPS 2018 & 2019), Representation Learning on Graphs and Manifolds (ICLR 2019), Learning and Reasoning with Graph-Structured Data (ICML 2019), Graph Representation Learning (NeurIPS 2019), Graph Representation Learning and Beyond (ICML 2020), DiffGeo4DL (NeurIPS 2020), NAACL 2021, ICML 2021, ACL 2021, EMNLP 2021, NeurIPS 2021.

  • Program Committee member for the Graph Neural Networks and Systems workshop at MLSys 2021.

  • Co-organizer of the Representation Learning on Graphs and Manifolds workshop at ICLR 2019.

Interests

I am passionate about better healthcare, public policy, and effective mentorship. I love all things comedy, music, and languages/cultures/traveling.