Final Poster Session

We thank our sponsor, Sky9 Capital, for supporting the poster session!
The poster session was held at the AOERC basketball courts from 7-10 PM on March 18th, 2024.

Outstanding Projects

Student choice for best poster

Outstanding custom project reports

Outstanding default project reports

Custom Projects

Fast, Interpretable AI-Generated Text Detection Using Style Embeddings Giulia Zoe Socolof, Ritika Kacholia
CapNet: Making Science More Accessible via a Neural Caption Generator Aman Ladia, Tirth Dharmesh Surti
Do LLMs exhibit Nominal Compound Understanding, or just Nominal Understanding? Elijah Song, Nathan Andrew Chi
Using Language Model for Emission Factor Mapping Amarkumar Kallappa Gadkari, Esteban Jose Barrero-Hernandez, Gerald Chavin Kang
Affective Emotional Layer for Conversational LLM Agents Aditya Bora, Nikhil Suresh
Synthesized Strategy for Mental Health Support David Yuan, Evelyn Song
Text2Gloss: Translation into Sign Language Gloss with Transformers Jenna Sara Mansueto, Luke C Babbitt
BondBERT: An ensemble-based model for named entity recognition in materials science texts Bella Crouch
The Beat Goes On: Symbolic Music Generation with Text Controls Javokhir B Arifov, Nathanael James Cadicamo, Philip Andrew Baillargeon
Near-Infinite Sub-Quadratic Convolutional Attention James Poetzscher
Llama-UL2: Emerging New Capabilities with Continued Pretraining using UL2 Jason Shu-Yang Wang
Feedback or Autonomy? Analyzing LLMs’ Ability to Self-Correct Kai Mica Fronsdal
Brain-to-Text Ethan Trepka
Finetuning Provides a Window Into Transformer Circuits Sachal Sohan Srivastava-Malick
Funding Sources and Values of NLP Research Parth Sarin, Patricia Wei, Vyoma Raman
Understanding Complex Emotions in Sentences Hannah Rachel Levin, Juan Pablo Triana Martinez, Samir Agarwala
Lyricade: An Integrated Acoustic Signal-Processing Transformer for Lyric Generation Arnav Somayaji Krishnamoorthi, Klara Bjork Andra-Thomas, Rhea Malhotra
Training a Chinese RapStar: Applying Rapformer Model to Generate Chinese Rap Lyrics Bihan Liu, Mike Yang
20 Questions: Efficient Adaptation for Individualized LLM Personalization Michael Joseph Ryan
LLMs for Google Maps Sukrut Oak
BERT one-shot movie recommender system Chi Trung Nguyen
Over-Complicating GPT Daniel Li Yang
Semantics of Empire: A Neural Machine Translation Approach for Ottoman Turkish Texts Merve Tekgurler
Outrageously Fast LLMs: Faster Inference and Fine-Tuning with Moefication and LoRA Chi Yo Tsai, Jay Martin
Speaking the Language of Sight Jean Rodmond Junior Laguerre, Vicky Wu
Reliable Ambient Intelligence Through Large Language Models David Dai
Predicting Big Brother Brasil 2024 Evictions Through Sentiment Analysis of Tweets Laura Fiuza Dubugras
Count Your Words Before They Hatch: Investigating Word Count Control Katherine Li
Novelty: Optimizing StreamingLLM for Novel Plot Generation Joyce Chuyi Chen, Megan Mou
Multimodal MoE for InfographicsVQA Manolo Alvarez
Patent Acceptance Prediction With Large Language Models Miguel Gerena Rivera, Akayla Hackson
Salience-Based Adversarial Attacks for Empirical Evaluation of NLP Classification Robustness Fletcher Lee Newell
Computing semantic textual similarity through transformer-based encoders and combining multiple content similarity measures Ethan Yi Ko, Kayson Taka Hansen, Peng Hao Lu
AI-Driven Fashion Cataloging: Transforming Images into Textual Descriptions Nishant Gopinath, Si Yi Ma
A Multi-tiered Approach to Debiasing Language Models Aman Kansal, Saanvi Chawla, Shreya Shankar
Deciphering the Dynamics of Reddit Comment Popularity Abdulwahab Omira
Natural Language Enhanced Neural Program Synthesis for Abstract Reasoning Task Yijia Wang
Self-Improvement for Math Problem-Solving in Small Language Models Artyom Shaposhnikov, Roberto Garcia Torres, Shubhra Mishra
Guided Image Concept Decomposition using Textual Inversion Yvette Yinyin Lin
From Beethoven to Beyoncé: A Deep Learning Approach to Music Genre Classification Dominic Joseph DeMarco, Eric Martz, Regina T.H. Ta
Identifying and Neutralizing Gender Bias from Text Dante Emanuel Danelian, Liam Michael Smith, Maya Bedge
Efficient Alignment of Medical Language Models using Direct Preference Optimization Brendan Murphy
Boosting Embodied Reasoning in LLMs in Multi-agent Mixed Incentive Environments Agam Mohan Singh Bhatia
Virgilian Poetry Generation with LSTM Networks August Wyatt Burton, Jonathan Daniel Merchan
Agent Retrieval on Textual and Relational Knowledge Bases Shiyu Zhao
High-fidelity Human Representation for Large Language Models Brian Xu, Henry Jin Weng
Improving Human-LLM Interactions by Redesigning the Chatbot Rashon Poole
GRAPHGEM: Improving Graph Reasoning in Language Models with Synthetic Data Tony Sun
Mathematical Reasoning Through LLM Finetuning Karthik Vinay Seetharaman, Yash Mehta
Effects of Pre-training and Fine-tuning Time on the Linear Connectivity of Language Models for Natural Language Inference Kushal Thaman
Cross-Lingual Summarization of Notice to Air Missions (NOTAMs) Zixi Liu
GILgaMeSH: Glyph-Interpreting Language Models for Sumerian History Cole Simmons
A Good Novelist Should be a Good Coder: From Language Critics to Automatic Code Generation Brian Manuel Munoz, Mu-sheng Lin
Wrestling Mamba: Exploring Early Fine-Tuning Dynamics on Mamba and Transformer Architectures Daniel Guo, Lucas Emmanuel Brennan-Almaraz
An Inside Look Into How LLMs Fail to Express Complete Certainty: Are LLMs Purposely Lying? Joshua De Guzman Fajardo
Spatial-Enhanced Summarization of Placement Preferences For Robot-Action Personalization Sid Potti
RoSA Text Style Transfer & Evaluation Arnav Gupta, Ayaan Naveed Malik, MacVincent Somtochukwu Agha-Oko
SUPaHOT: Universally Scalable and Private Method to Demystify FHIR Health Records Hamed Hekmat, Michael N Brockman, Nina Boord
The Development of Facticity—from Preliminary Findings to Accepted Implicit Knowledge: Case Studies Jingruo Sun, Tianyu Du, Yuze Sui
Short Text Classification of Political Reddit Posts Shirley Cheng
Predicting Patent Litigation Risk Using RoBERTa and Metadata Augmentation Techniques Brian Park, Nikita Bhardwaj, Simone Yi-Yi Hsu
Puzzle in a Haystack: Understanding & Enhancing Long Context Reasoning Jessica Chudnovsky, Salman Abdullah, Sudharsan Sundar
Graph-based Logical Reasoning for Legal Judgement Ein Jun
Enabling Cross-Linguistic Compatibility in Image Generation: Text Embedding Alignment Techniques for CLIP Models Bofei Zhu
Autoformalization with Backtranslation: Training an Automated Mathematician Jakob Nordhagen
Tree-Based Retrieval Using Gaussian Statistics Irfan Nafi, Luke Jongsung Park, Ray Hotate
Predicting Protein-Protein Interaction via Protein Textual Description using Large Language Model Khoa Hoang
Information Dense Question Answering for RLHF Chet Anand Bhateja
Evaluating the Culture-awareness in Pre-trained Language Model Ryan Li, Yutong Zhang, Zhiyu Xie
Stanford CS Course + Quarter Classification based on CARTA Reviews Enok Choe, Juben Rana
Robotics Tasks Generation through Factorization Angela Yi, Ranajit Gangopadhyay, Yihan Zhou
Learning Strategic Play with Language Agents in Text-Adventure Games Miranda Lin Li, Nic Becker
Detect failure root cause and predict faults from software logs Sandip Pal
Direct Clinician Preference Optimization: Clinical Text Summarization via Expert Feedback-Integrated LLMs Mike Timmerman, Onat Dalmaz, Tim Niklaus Reinhart
Exploring Unsupervised Machine Translation for Highly Under-resourced Languages (Hausa) Sajid Omar Farook, Zouberou Sayibou
Automated Extraction of ICD-10 Diagnosis Codes from Clinical Notes Arjun Jain, Devanshu Ladsaria, Rishi Raj Verma
Detecting Misinformation in News Articles via Natural Language Processing Siya Goel, Thu Le, Tia Vasudeva
Model Mixture: Merging Task-Specific Language Models Elizabeth Zhu, Sherry Xie
Prototype-then-Refine: A Neurosymbolic Approach for Improved Logical Reasoning with LLMs Bassem Akoush, Hashem Elezabi
Minimal Clues for Maximal Understanding: Solving Linguistic Puzzles with RNNs, Transformers, and LLMs Anavi Baddepudi, Emma Wang, Ishan Khare
Difficulty-Controllable Text Generation Aligned to Human Preferences Allison Guman, Nikhil Pandit, Udayan Mandal
Optimized Linear Attention for TPU Hardware Gabrael Levine
Claim-level Uncertainty Estimation through Graph Mingjian Jiang
Extracting Material Measurement Knowledge Graphs from Academic Research Papers Arthur Cerqueira Campello
GaS – Graph and Sequence Modeling for Web Agent Pathing Arantxa Ramos del Valle, Marcel Arzhang Heshmati Roed, Matthew Noto
SpamResponder: Automatic Response System for Voice Phishing Chae Young Lee
Improving Low-Resource POS Tagging with Transfer Learning: A Case in Cantonese Manh D Dao, Ting Lin, Trevor William Carrell
Fine-tuning CodeLlama-7B on Synthetic Training Data for Fortran Code Generation using PEFT Andrew C Shi, Soham Govande, Taeuk Kang
Logic-LangChain: Translating Natural Language to First Order Logic for Logical Fallacy Detection Abhinav Lalwani, Ishikaa Lunawat
Comparative Analysis of Preference-Informed Alignment Techniques for Language Model Alignment Soo Wei Koh
How you can convince ChatGPT the world is flat Julian Cheng
Exploring Machine Unlearning in Large Language Models Jay Gupta, Lawrence Y Chai
MediGANdist: Improving Smaller Model’s Medical Reasoning via GAN-inspired Distillation Allen Kiriroath Chau, Aryan Siddiqui
PromptCom: Cost Optimization of Language Models Based on Prompt Complexity Mengze Gao, Yonatan Urman
Extractive Question Answering On Large Structural Engineering Documents Adam Usmani Banga, Thomas Sounack
Task-Agnostic Low-Rank Dialectal Adapters for Speech-Text Models Amy Shilin Guan, Azure Siyi Zhou, Claire Lynn Shao
NatuRel: Advancing Relational Understanding in Vision-Language Models with Natural Language Variations Ashna Khetan, Isabel Paz Reyes Sieh, Laya Balaji Iyer
More Effectively Searching Trees of Thought for Increased Reasoning Ability in Large Language Models Kamyar John Salahi, Pranav Gurusankar, Sathya Edamadaka
Improving performance in large language models through diversity of thoughts Cornelia Weinzierl, Sreethu Sura, Suguna Varshini Velury
Increasing the Efficiency of the Sophia Optimizer: Continuous Adaptive Information Averaging Caia Mai Costello, Jason Daniel Lazar
Unstructured Data Abstraction utilizing Selective Prediction-Oriented Neural Networks in Healthcare Settings Emily Yiduo Chen, Nathan N. Mohit, Nicole Tong
Claim Verification for Fictional Narratives with Large Language Models Jennifer Jing Xu, Lauren Yumi Kong
Enhancing Factuality in Language Models through Knowledge-Guided Decoding Jirayu Burapacheep
Temporal Grounding of Activities using Multimodal Large Language Models Young Chol Song
AI Lie Detection: Is the Hype Justified? Jack Ryan
A Comparative Study of Deep Learning Architectures for Long Text Classification in Mental Health Ivy Sun, Siqi Ma, Yiran Fan
Towards Natural Language Reasoning for Unified Robotics Description Format Files Aakash Mishra, Austin Anil Patel, Neil Nie
SEER-MoE: Sparse Expert Efficiency through Regularization for Mixture-of-Experts Alex Muzio, Alex Sun, Churan He
Compression Ratio Controlled Text Summarization Zheng Wang
Multi-Agent Frameworks in Domain-Specific Question Answering Tasks Ethan Duncan He-Li Hellman, Maria Angelika-Nikita, Spencer Louis Paul
Understanding Visual Shortcomings of Multimodal Large Language Model Through Training Data Distribution Alan Li, Binxu Li
Clinically relevant summarization of multimodal emergency medical data Elsa Bismuth, Jan Michael Krause, Lucas A Leanza
LLMs with Low-Resource Translation: Syriac-to-English Case Study Andrew Tin-Lok Lee
Llama2.pi: Running LLMs on the Bleeding Edge Matthew Ding
“Not All Information is Created Equal”: Leveraging Metadata for Enhanced Knowledge Curation Hong Meng Yam, Yucheng Jiang
GeoPolitical Risk Predictor Lucas P Bosman, William Toby Denton
Multimodal Social Media Sentiment Analysis Mubarak Ali Seyed Ibrahim, Pratyush Muthukumar
Few-Shot Prompt-Tuning: An Extension to a Finetuning Alternative James J Morice, Samuel Edward Kwok
On Fairness Implications and Evaluations of Low-Rank Adaptation of Large Language Models Zhoujie Ding
Predicting Yelp Star Ratings: An Analysis of Different Models and Fine-Tuned RoBERTa Model Rishi Alluri, Upamanyu Dass-Vattam

Default Projects

SMARTer BERT Alexey Alexandrovich Tuzikov, Naijing Guo, Tatiana Veremeenko
Using Stochastic Layer Dropping as a Regularization Tool to Improve Downstream Prediction Accuracy Karthik Jetty
AdaptBert: Parameter Efficient Multitask Bert Jieting Qiu, Shweta Agrawal
An Exploration of Fine-Tuning Techniques on minBERT Optimizations Gabriela Cortes, Iris T Fu, Victoria Hsieh
Three Heads are Better than One: Implementing Multiple Models with Task-Specific BERT Heads Matt Alexander Kaplan, Prerit Choudhary, Sina Mohammadi
Multitask BERT Bradley Hu, Shannon Xiao
2-Tier SimCSE: Elevating BERT for Robust Sentence Embeddings Aubrey Wang, Candice Wang, Ziran Zhou
Sentence-BERT-inspired Improvements to minBERT Raj V Pabari
minBERT and Downstream Tasks Optimization with Disentangled Attention Jeremy Linfield, Sean Bai
Choose Your PALs Wisely Zach Peter Rotzal
Good Things Come to Those Who Weight: Effective Pairing Strategies for Multi-Task Fine-Tuning Nachat Jatusripitak, Pawan Wirawarn
Semantic Symphonies: BERTrilogy and BERTriad Ensembles Haoyi Duan, Yaohui Zhang
MinBERT and PALs: Multi-Task Learning for Downstream Tasks Tetsuya Hayashi
minBERT Multi Tasks Augustin Boissier, Maxime Pedron
Simple Contrastive Learning for Multitask Finetuning Annie Z Zhu, Gui David, Khaing Su Mon
minBERT, NLP Tasks, and More Tiankai Yan
Exploring Pretraining, Finetuning and Regularization for Multitask Learning of minBERT Weicheng Song, Xinyu Hu, Zhiyin Pan
Enhancing BERT for NLP Tasks: Pretraining, Fine-tuning, and Model Augmentation Bryant Perkins, Dylan Ryan Dipasupil
Loss Weighting in Multi-Task Language Learning Anna Little
Optimizing minBERT on Downstream Tasks Using Pretraining and Siamese Network Architecture Edwin Antonio Pua
Learning by Prediction and Diversity with BERT Alex Lin
Exploring Challenges in Multi-task BERT Optimization Isabel Michel
SMARTCS: Additional Pretraining and Robust Finetuning on BERT Ayesha Khawaja, Rachel Sinai Clinton, Yasmine Fatima Mabene
Even Language Models Have to Multitask in This Economy Jacqueline Pang, Paul Woringer
BERTogether: Multitask Ensembling with Hyperparameter Optimization Erik Luna, Ivan Miranda Liongson
Implementation of BERT with Projected Attention Layers and Its Effectiveness Dayoung Kim, Wanbin Song
Experiments in Improving NLP Multitask Performance Carl Shan
ExTraBERT: Exclusive Training for BERT Language Models Chinmay Keshava Lalgudi, Medhanie Isaias Irgau
Enhancing MinBert Embeddings for Multiple Downstream Tasks Donald Stephens
minBERT: Contrastive Learning Method Long D Pham
An examination of multitask training strategies for different BERT downstream tasks Bjorn Engdahl, Matthias Heubi
Beyond Fine-tuning: Iterative Ensemble Strategies for Enhanced BERT Generalizability Megan Dass, Riya Dulepet, Shreya D'Souza
Progressive Layer Sharing on BERT Nathaniel Thomas Grudzinski
Improving minBERT and Its Downstream Tasks Madhumita Vijay Dange, Yuwen Yang
GradAttention: Attention-Based Gradient Surgery for Multitask Fine-Tuning Anil Yildiz
OptiMinBERT: A Comparative Study on the Efficacy of Multitask Versus Specialist Neural Networks Paras Malhotra
Regular(izing) BERT Eric Zhu, Parker Thomas Kasiewicz
Margin for Error: Exploration of a Dynamic Margin for Cosine-Similarity Embedding Loss and Gradient Surgery to Enhance minBERT on Downstream Tasks Alex Kwon, Jimming He
MinBERT Task Prioritization, Cross-Attention and Other Extensions for Downstream Tasks Armando Alejandro Borda, Parker Joseph Stewart
Less is More: Exploring BERT and Beyond for Multitask Learning Liuxin Yang, Yichun Qian
Exploring LoRA Adaptation of minBERT Model on Downstream NLP Tasks James Joseph Hennessy, Suxi Li
SMART Multitask MinBERT Weilun Chen
MinBERT and Downstream Tasks Wenlong Ji
Learning with PALs: Enhancing BERT for Multi-Task Learning Michael Qui Sung Hoang
Improving minBERT Embeddings Through Multi-Task Learning Rahul Thapa, Rohit Khurana
BERTina Aguilera: Extensions in a Bottle Kokhinur Kalandarova, Mhar Eisen Santos Tenorio, Sam Prieto Serrano
Implementation of minBERT and contrastive learning to improve Sentence Embeddings Akshit Goel, Linyin Lyu, Nourya A Cohen
Finetuning minBERT for Downstream Tasks with Multitasking Niall Thomas Kehoe, Pranav Sai Ravella
MultiBERT: Enhanced Multi-Task Fine-Tuning on minBERT Christina Tsangouri
Enhanced Sentence Embeddings with SimCSE Brendan Lee Adams McLaughlin, Christo Dimitrov Hristov, William Shane Healy
A Bilingual BERT Model Ensemble for English-based Multitask Fine-tuning Ryan James Dwyer
Pretrain and Fine-tune BERT for Multiple NLP Tasks Mengge Pu, Yawen Guo
Combining Contrastive Learning with Adaptive Attention and Experimental Dropout to Improve mini-BERT Performance Janene Rachana Kim, Lucy Zimmerman, Rachel Liu
Minhbert Minh Vu
Implementing RO-BERT from Scratch: A BERT Model Fine-Tuned through Regularized Optimization for Improved Performance on Sentence-Level Downstream Tasks Cat Gonzales Fergesen, Clarisse Yu Hokia
UmBERTo: Enhancing Performance in NLP Tasks through Model Expansion, SMARTLoss, and Ensemble Techniques Julian Rodriguez Cardenas, May Levin
Effects of Appropriate Modeling of Tasks and Hyperparameters on Downstream Tasks Adrian L Gamarra Lafuente, Avi Udash
MiniBERT: Training Jointly on Multiple Tasks Manasven Grover, Xiyuan Wang
Finetune minBERT for Multi-Tasks Learning Yingbo Li
BERTology: Improving Sentence Embeddings for Multi-Task Success Kyuil Lee
Not-So-SMART BERT Eliot Krzysztof Jones
Multi-Tasking BERT: The Swiss Army Knife of NLP Esteban Wanhoe Wu, Nicole Garcia, Simba Xu
Exploring LSTM minBERT with DCT Benita Wong, Tina Wu
BERT Extension Using Sentence-BERT for Sentence Embedding Anicet Dushime Wa Mungu
Extending Min-BERT for Multi-Task Prediction Capabilities Grace Yang, Xianchen Yang
Multitask Learning for BERT Model Chunwei Chan, Shuojia Fu
Optimizing minBERT for Downstream Tasks using Multitask Fine-Tuning Carlos Emmanuelle Ayala Bellido
Enhancing BERT through Multitask Fine-Tuning, Multiple Negatives Ranking and Cosine-Similarity Emily Broadhurst, Michael Maffezzoli
minBERT and Downstream Tasks Qian Zhong
Enhancing Multi-Task Learning on BERT Paris Zhang, Yiming Ni
BERT’s Odyssey: Enhancing BERT for Multifaceted Downstream Tasks Haoming Zou, Minghe Zhang
Efficient Multi-Task MinBERT for Three Default Tasks and Question Answering Fanglin Lu, Gerardus de Bruijn, Rachel Ruijia Yang
BERT but BERT-er Hamzah Daud
BERT Mastery: Explore Multitask Learning Chu Lin, Fuhu Xiao
minBERT using PALs with Gradient Episodic Memory Christopher Nguyen
Jack of All Trades, Master of Some: Improving BERT for Multitask Learning Chijioke Mgbahurike, Iddah Mlauzi, Kwame Ocran
Multi-task Learning and Fine-tuning with BERT Mel Guo
Enhanced TreeBERT: High-Performance, Computationally Efficient Multi-Task Model Pann Sripitak, Thanawan Atchariyachanvanit
The Best of BERT Worlds: Improving minBERT with multi-task extensions Julius Hillebrand
Improving BERT – Lessons from RoBERTa Channing Lee, Hannah Gail Prausnitz-Weinbaum, Haoming Song
A Rigorous Analysis on Bert’s Language Capabilities Gaurav Kiran Rane
minBERT and Multitask Learning Enhancements Naman Govil
Multi-BERT: A Multi-Task BERT Approach with the Variation of Projected Attention Layer Haijing Zhang
QuarBERT: Optimizing BERT with Multitask Learning and Quartet Ensemble Carrie Gu, Ericka Liu, Zixin Li
BERT-icus, Transform and Ensemble! Helen April He, Maya Waleria Czeneszew, Sidra Nadeem
Exploring Improvements on BERT Sara Hong, Sophie Wu
minBERT and Downstream Tasks Shouzhong Shi
Efficient Fine-Tuning of BERT with ELECTRA Akshay Dev Gupta, Erik Rozi, Vincent Jianlin Huang
Implementing BERT for multiple downstream tasks Hamad M Musa
Multi Task Fine Tuning of BERT Using Adversarial Regularization and Priority Sampling Chloe Trujillo, Mohsen Mahvashmohammady
EquiBERT: (An Attempt At) Equivariant Fine-Tuning of Pretrained Large Language Models Patrick James Sicurello
"That was smooth": Exploration of S-BERT with Multiple Negatives Ranking Loss and Smoothness-Inducing Regularization Johnny Chang, Kanu Grover, Kaushal Atul Alate
Enhancing BERT for Advanced Language Understanding: A Multitask Learning Approach with Task-Specific Tuning Anusha Aditi Kuppahally, Malavi Ravindran, Ziyue (Julia) Wang
Triple-Batch vs Proportional Sampling: Investigating Multitask Learning Architectures on minBERT Ethan Sargo Tiao, Rikhil Paresh Vagadia
Extending Applications of Layer Selecting Rank Reduction Abraham Alappat
Improving BERT for Downstream Tasks Kevin Nguyen Phan
BEAKER: Exploring Enhancements of BERT through Learning Rate Schedules, Contrastive Learning, and CosineEmbeddingLoss Elizabeth Theresa Baena
Gradient Descent in Multi-Task Learning David Saykin, Kfir Shmuel Dolev
SMART Surgery: Combining Finetuning Methods for Multitask BERT Ethan Paul Foster
Fine-tuning minBERT For Multi-task Classification Jimmy Otieno Ogada
Three Headed Mastery: minBERT as a Jack of All Trades in Multi-Task NLP Ifdita Hasan Orney, Rafael Perez Martinez, Valerie Ann Fanelle
Balancing Performance and Computational Efficiency: Exploring Low-Rank Adaptation for Multi-Transferring Learning Caroline Santos Marques da Silva
ExtraBERT: Applying BERT to Multiple Downstream Language Tasks Isaac I. Gorelik, Rishi Dange
BERT and Beyond: A Study of Multitask Learning Strategies for NLP Febie Jane Lin, Jack P Le
Mini Bert Optimized for Multi Tasks Lin Lin
Methods to Improve Downstream Generalization of minBERT Ramgopal Venkateswaran
Maximizing MinBert for Multi-Task Learning Jordan Andy Paredes, Shumann R Xu
minBERT Multi-Task Fine-Tuning Antonio Davi Macedo Coelho de Castro
Fine-tuning minBERT for multi-task prediction Ishita Mangla
Grid Search for Improvements to BERT Harsh Goyal
SMART loss vs DeBERTa Michael Liu, Michael Phillip Hayashi, Roberto Lobato Lopez
Extending Phrasal Paraphrase Classification Techniques to Non-Semantic NLP Tasks Nikhil Sharma, Samy Cherfaoui
Loss Weighting in Multi-Task Language Learning Julia Kwak
Task-specific attention Chaoqun Jia
minBERT and Downstream Tasks Xinpei Yu
Integrating Cosine Similarity into minBERT for Paraphrase and Semantic Analysis Gerald John Sufleta
SlapBERT: Shared Layers and Projected Attention For Enhancing Multitask Learning with minBERT Alex He Zhai, Allison Jia, Deven Kirit Pandya
MT-DNN with SMART Regularisation and Task-Specific Head to Capture the Pairwise and Contextually Significant Words Interplay Haoyu Wang
Speedy SBERT Leyth Ramez Toubassy, Renee Duarte White
Using Gradient Surgery, Cosine Similarity, and Additional Data to Improve BERT on Downstream Tasks Chanse H. Bhakta, Joseph Anthony Seiba, Kasen Stephensen
Multitask BERT Model with Regularized Optimization and Gradient Surgery Jenny Xu
BitBiggerBERT: An Extended BERT Model with Custom Attention Mechanisms, Enhanced Fine-Tuning, and Dynamic Weights Khanh V Tran, Thomas Charles Hatcher, Vladimir A Gonzalez Migal
minBERT and Downstream Tasks Final Report Bingqing Zu, Yixuan Lin
Evaluating Contrastive Learning Strategies for Enhanced Performance in Downstream Tasks Georgios Christoglou, Zachary Evans Behrman
A SMARTer minBERT Arisa Sugiyama Chue, Daphne Liu, Poonam Sahoo
OptimusBERT: Exploring BERT Transformer with Multi-Task Fine-Tuning, Gradient Surgery, and Adaptive Multiple Negative Rank Loss Learning Fine-Tuning Gabe Eduardo Seir, Ryder Thompson Matheny, Shawn Charles
Optimizing minBert via Cosine Similarity and Negative Sampling Ananya Siri Vasireddy, Neha Vinjapuri