| Title | Authors |
| --- | --- |
| SMARTer BERT | Alexey Alexandrovich Tuzikov, Naijing Guo, Tatiana Veremeenko |
| Using Stochastic Layer Dropping as a Regularization Tool to Improve Downstream Prediction Accuracy | Karthik Jetty |
| AdaptBert: Parameter Efficient Multitask Bert | Jieting Qiu, Shweta Agrawal |
| An Exploration of Fine-Tuning Techniques on minBERT Optimizations | Gabriela Cortes, Iris T Fu, Victoria Hsieh |
| Three Heads are Better than One: Implementing Multiple Models with Task-Specific BERT Heads | Matt Alexander Kaplan, Prerit Choudhary, Sina Mohammadi |
| Multitask BERT | Bradley Hu, Shannon Xiao |
| 2-Tier SimCSE: Elevating BERT for Robust Sentence Embeddings | Aubrey Wang, Candice Wang, Ziran Zhou |
| Sentence-BERT-inspired Improvements to minBERT | Raj V Pabari |
| minBERT and Downstream Tasks Optimization with Disentangled Attention | Jeremy Linfield, Sean Bai |
| Choose Your PALs Wisely | Zach Peter Rotzal |
| Good Things Come to Those Who Weight: Effective Pairing Strategies for Multi-Task Fine-Tuning | Nachat Jatusripitak, Pawan Wirawarn |
| Semantic Symphonies: BERTrilogy and BERTriad Ensembles | Haoyi Duan, Yaohui Zhang |
| MinBERT and PALs: Multi-Task Learning for Downstream Tasks | Tetsuya Hayashi |
| minBERT Multi Tasks | Augustin Boissier, Maxime Pedron |
| Simple Contrastive Learning for Multitask Finetuning | Annie Z Zhu, Gui David, Khaing Su Mon |
| minBERT, NLP Tasks, and More | Tiankai Yan |
| Exploring Pretraining, Finetuning and Regularization for Multitask Learning of minBERT | Weicheng Song, Xinyu Hu, Zhiyin Pan |
| Enhancing BERT for NLP Tasks: Pretraining, Fine-tuning, and Model Augmentation | Bryant Perkins, Dylan Ryan Dipasupil |
| Loss Weighting in Multi-Task Language Learning | Anna Little |
| Optimizing minBERT on Downstream Tasks Using Pretraining and Siamese Network Architecture | Edwin Antonio Pua |
| Learning by Prediction and Diversity with BERT | Alex Lin |
| Exploring Challenges in Multi-task BERT Optimization | Isabel Michel |
| SMARTCS: Additional Pretraining and Robust Finetuning on BERT | Ayesha Khawaja, Rachel Sinai Clinton, Yasmine Fatima Mabene |
| Even Language Models Have to Multitask in This Economy | Jacqueline Pang, Paul Woringer |
| BERTogether: Multitask Ensembling with Hyperparameter Optimization | Erik Luna, Ivan Miranda Liongson |
| Implementation of BERT with Projected Attention Layers and Its Effectiveness | Dayoung Kim, Wanbin Song |
| Experiments in Improving NLP Multitask Performance | Carl Shan |
| ExTraBERT: Exclusive Training for BERT Language Models | Chinmay Keshava Lalgudi, Medhanie Isaias Irgau |
| Enhancing MinBert Embeddings for Multiple Downstream Tasks | Donald Stephens |
| minBERT: Contrastive Learning Method | Long D Pham |
| An examination of multitask training strategies for different BERT downstream tasks | Bjorn Engdahl, Matthias Heubi |
| Beyond Fine-tuning: Iterative Ensemble Strategies for Enhanced BERT Generalizability | Megan Dass, Riya Dulepet, Shreya D'Souza |
| Progressive Layer Sharing on BERT | Nathaniel Thomas Grudzinski |
| Improving minBERT and Its Downstream Tasks | Madhumita Vijay Dange, Yuwen Yang |
| GradAttention: Attention-Based Gradient Surgery for Multitask Fine-Tuning | Anil Yildiz |
| OptiMinBERT: A Comparative Study on the Efficacy of Multitask Versus Specialist Neural Networks | Paras Malhotra |
| Regular(izing) BERT | Eric Zhu, Parker Thomas Kasiewicz |
| Margin for Error: Exploration of a Dynamic Margin for Cosine-Similarity Embedding Loss and Gradient Surgery to Enhance minBERT on Downstream Tasks | Alex Kwon, Jimming He |
| MinBERT Task Prioritization, Cross-Attention and Other Extensions for Downstream Tasks | Armando Alejandro Borda, Parker Joseph Stewart |
| Less is More: Exploring BERT and Beyond for Multitask Learning | Liuxin Yang, Yichun Qian |
| Exploring LoRA Adaptation of minBERT Model on Downstream NLP Tasks | James Joseph Hennessy, Suxi Li |
| SMART Multitask MinBERT | Weilun Chen |
| MinBERT and Downstream Tasks | Wenlong Ji |
| Learning with PALs: Enhancing BERT for Multi-Task Learning | Michael Qui Sung Hoang |
| Improving minBERT Embeddings Through Multi-Task Learning | Rahul Thapa, Rohit Khurana |
| BERTina Aguilera: Extensions in a Bottle | Kokhinur Kalandarova, Mhar Eisen Santos Tenorio, Sam Prieto Serrano |
| Implementation of minBERT and contrastive learning to improve Sentence Embeddings | Akshit Goel, Linyin Lyu, Nourya A Cohen |
| Finetuning minBERT for Downstream Tasks with Multitasking | Niall Thomas Kehoe, Pranav Sai Ravella |
| MultiBERT: Enhanced Multi-Task Fine-Tuning on minBERT | Christina Tsangouri |
| Enhanced Sentence Embeddings with SimCSE | Brendan Lee Adams McLaughlin, Christo Dimitrov Hristov, William Shane Healy |
| A Bilingual BERT Model Ensemble for English-based Multitask Fine-tuning | Ryan James Dwyer |
| Pretrain and Fine-tune BERT for Multiple NLP Tasks | Mengge Pu, Yawen Guo |
| Combining Contrastive Learning with Adaptive Attention and Experimental Dropout to Improve mini-BERT Performance | Janene Rachana Kim, Lucy Zimmerman, Rachel Liu |
| Minhbert | Minh Vu |
| Implementing RO-BERT from Scratch: A BERT Model Fine-Tuned through Regularized Optimization for Improved Performance on Sentence-Level Downstream Tasks | Cat Gonzales Fergesen, Clarisse Yu Hokia |
| UmBERTo: Enhancing Performance in NLP Tasks through Model Expansion, SMARTLoss, and Ensemble Techniques | Julian Rodriguez Cardenas, May Levin |
| Effects of Appropriate Modeling of Tasks and Hyperparameters on Downstream Tasks | Adrian L Gamarra Lafuente, Avi Udash |
| MiniBERT: Training Jointly on Multiple Tasks | Manasven Grover, Xiyuan Wang |
| Finetuning minBERT for Multi-Task Learning | Yingbo Li |
| BERTology: Improving Sentence Embeddings for Multi-Task Success | Kyuil Lee |
| Not-So-SMART BERT | Eliot Krzysztof Jones |
| Multi-Tasking BERT: The Swiss Army Knife of NLP | Esteban Wanhoe Wu, Nicole Garcia, Simba Xu |
| Exploring LSTM minBERT with DCT | Benita Wong, Tina Wu |
| BERT Extension Using Sentence-BERT for Sentence Embedding | Anicet Dushime Wa Mungu |
| Extending Min-BERT for Multi-Task Prediction Capabilities | Grace Yang, Xianchen Yang |
| Multitask Learning for BERT Model | Chunwei Chan, Shuojia Fu |
| Optimizing minBERT for Downstream Tasks using Multitask Fine-Tuning | Carlos Emmanuelle Ayala Bellido |
| Enhancing BERT through Multitask Fine-Tuning, Multiple Negatives Ranking and Cosine-Similarity | Emily Broadhurst, Michael Maffezzoli |
| minBERT and Downstream Tasks | Qian Zhong |
| Enhancing Multi-Task Learning on BERT | Paris Zhang, Yiming Ni |
| BERT’s Odyssey: Enhancing BERT for Multifaceted Downstream Tasks | Haoming Zou, Minghe Zhang |
| Efficient Multi-Task MinBERT for Three Default Tasks and Question Answering | Fanglin Lu, Gerardus de Bruijn, Rachel Ruijia Yang |
| BERT but BERT-er | Hamzah Daud |
| BERT Mastery: Explore Multitask Learning | Chu Lin, Fuhu Xiao |
| minBERT using PALs with Gradient Episodic Memory | Christopher Nguyen |
| Jack of All Trades, Master of Some: Improving BERT for Multitask Learning | Chijioke Mgbahurike, Iddah Mlauzi, Kwame Ocran |
| Multi-task Learning and Fine-tuning with BERT | Mel Guo |
| Enhanced TreeBERT: High-Performance, Computationally Efficient Multi-Task Model | Pann Sripitak, Thanawan Atchariyachanvanit |
| The Best of BERT Worlds: Improving minBERT with multi-task extensions | Julius Hillebrand |
| Improving BERT – Lessons from RoBERTa | Channing Lee, Hannah Gail Prausnitz-Weinbaum, Haoming Song |
| A Rigorous Analysis of BERT’s Language Capabilities | Gaurav Kiran Rane |
| minBERT and Multitask Learning Enhancements | Naman Govil |
| Multi-BERT: A Multi-Task BERT Approach with the Variation of Projected Attention Layer | Haijing Zhang |
| QuarBERT: Optimizing BERT with Multitask Learning and Quartet Ensemble | Carrie Gu, Ericka Liu, Zixin Li |
| BERT-icus, Transform and Ensemble! | Helen April He, Maya Waleria Czeneszew, Sidra Nadeem |
| Exploring Improvements on BERT | Sara Hong, Sophie Wu |
| minBERT and Downstream Tasks | Shouzhong Shi |
| Efficient Fine-Tuning of BERT with ELECTRA | Akshay Dev Gupta, Erik Rozi, Vincent Jianlin Huang |
| Implementing BERT for multiple downstream tasks | Hamad M Musa |
| Multi Task Fine Tuning of BERT Using Adversarial Regularization and Priority Sampling | Chloe Trujillo, Mohsen Mahvashmohammady |
| EquiBERT: (An Attempt At) Equivariant Fine-Tuning of Pretrained Large Language Models | Patrick James Sicurello |
| "That was smooth": Exploration of S-BERT with Multiple Negatives Ranking Loss and Smoothness-Inducing Regularization | Johnny Chang, Kanu Grover, Kaushal Atul Alate |
| Enhancing BERT for Advanced Language Understanding: A Multitask Learning Approach with Task-Specific Tuning | Anusha Aditi Kuppahally, Malavi Ravindran, Ziyue (Julia) Wang |
| Triple-Batch vs Proportional Sampling: Investigating Multitask Learning Architectures on minBERT | Ethan Sargo Tiao, Rikhil Paresh Vagadia |
| Extending Applications of Layer Selecting Rank Reduction | Abraham Alappat |
| Improving BERT for Downstream Tasks | Kevin Nguyen Phan |
| BEAKER: Exploring Enhancements of BERT through Learning Rate Schedules, Contrastive Learning, and CosineEmbeddingLoss | Elizabeth Theresa Baena |
| Gradient Descent in Multi-Task Learning | David Saykin, Kfir Shmuel Dolev |
| SMART Surgery: Combining Finetuning Methods for Multitask BERT | Ethan Paul Foster |
| Fine-tuning minBERT for Multi-task Classification | Jimmy Otieno Ogada |
| Three Headed Mastery: minBERT as a Jack of All Trades in Multi-Task NLP | Ifdita Hasan Orney, Rafael Perez Martinez, Valerie Ann Fanelle |
| Balancing Performance and Computational Efficiency: Exploring Low-Rank Adaptation for Multi-Task Transfer Learning | Caroline Santos Marques da Silva |
| ExtraBERT: Applying BERT to Multiple Downstream Language Tasks | Isaac I. Gorelik, Rishi Dange |
| BERT and Beyond: A Study of Multitask Learning Strategies for NLP | Febie Jane Lin, Jack P Le |
| Mini Bert Optimized for Multi Tasks | Lin Lin |
| Methods to Improve Downstream Generalization of minBERT | Ramgopal Venkateswaran |
| Maximizing MinBert for Multi-Task Learning | Jordan Andy Paredes, Shumann R Xu |
| minBERT Multi-Task Fine-Tuning | Antonio Davi Macedo Coelho de Castro |
| Fine-tuning minBERT for multi-task prediction | Ishita Mangla |
| Grid Search for Improvements to BERT | Harsh Goyal |
| SMART loss vs DeBERTa | Michael Liu, Michael Phillip Hayashi, Roberto Lobato Lopez |
| Extending Phrasal Paraphrase Classification Techniques to Non-Semantic NLP Tasks | Nikhil Sharma, Samy Cherfaoui |
| Loss Weighting in Multi-Task Language Learning | Julia Kwak |
| Task-specific attention | Chaoqun Jia |
| minBERT and Downstream Tasks | Xinpei Yu |
| Integrating Cosine Similarity into minBERT for Paraphrase and Semantic Analysis | Gerald John Sufleta |
| SlapBERT: Shared Layers and Projected Attention For Enhancing Multitask Learning with minBERT | Alex He Zhai, Allison Jia, Deven Kirit Pandya |
| MT-DNN with SMART Regularisation and a Task-Specific Head to Capture Pairwise and Contextually Significant Word Interplay | Haoyu Wang |
| Speedy SBERT | Leyth Ramez Toubassy, Renee Duarte White |
| Using Gradient Surgery, Cosine Similarity, and Additional Data to Improve BERT on Downstream Tasks | Chanse H. Bhakta, Joseph Anthony Seiba, Kasen Stephensen |
| Multitask BERT Model with Regularized Optimization and Gradient Surgery | Jenny Xu |
| BitBiggerBERT: An Extended BERT Model with Custom Attention Mechanisms, Enhanced Fine-Tuning, and Dynamic Weights | Khanh V Tran, Thomas Charles Hatcher, Vladimir A Gonzalez Migal |
| minBERT and Downstream Tasks Final Report | Bingqing Zu, Yixuan Lin |
| Evaluating Contrastive Learning Strategies for Enhanced Performance in Downstream Tasks | Georgios Christoglou, Zachary Evans Behrman |
| A SMARTer minBERT | Arisa Sugiyama Chue, Daphne Liu, Poonam Sahoo |
| OptimusBERT: Exploring the BERT Transformer with Multi-Task Fine-Tuning, Gradient Surgery, and Adaptive Multiple Negatives Ranking Loss | Gabe Eduardo Seir, Ryder Thompson Matheny, Shawn Charles |
| Optimizing minBert via Cosine Similarity and Negative Sampling | Ananya Siri Vasireddy, Neha Vinjapuri |