Final poster session

We thank Hudson River Trading, Forethought AI, Hugging Face, ServiceNow, Jane Street, Mem, and Apple for co-sponsoring the poster session!

The poster session was held in the Oak Lounge at Tresidder Union. The event was not open to the general public; attendance was limited to the Stanford community and invited guests. The schedule was:

4:30-5:00pm Session A Check-in
5:00-6:00pm Session A
6:00-6:30pm Break (Session B Check-in)
6:30-7:30pm Session B
7:30-8:00pm Break (Session C Check-in)
8:00-9:00pm Session C

Prizes

Congratulations to the following teams, who produced exceptional, prize-winning projects!

Best custom projects

Best default projects

TAs' choice for best poster

Sponsor's prize for best poster

Student choice for best poster

Custom Projects

Project name | Authors
Optimizing Encoder for Retrieval via Multi-Vector Late Interaction | Xin Ran Song
Using Named Entity Recognition to Supplement The Ocean Cleanup’s Global Beached Plastics Dataset | Stephen J Peng, Mitty Yu, Joe Jamison
Bert-Powered Book Genre Classification | Jessica Xinting Chen, Karen Wang
Dynamic Fed Attention | Amar Venugopal
Automated Basketball Video Captioning | Lucas Andrew Pauker, Beri Kohen Behar
Few-Shot Causal Distillation | Thomas Starshak
Data Generation for NLP Classification Dataset Augmentation: Using Existing LLMs to Improve Dataset Quality | Elliot Kenneth Dauber, Sahit Dendekuri
Automating English Language Proficiency Assessments | Ethan Tejas Allavarpu, Spencer Teal Siegel, Duncan Ross
Hate Speech Detection Using Natural Language Processing | Durga Prasad Malladi, Neha Keshari, Utkarsh Mittal
Statistically-augmented Neural Detection of AI-generated text | Michael Jarek Yan, Jeffrey Heo, Simon Kim
Using Knowledge Graph Embeddings from Biomedical Language Models to Infer Drug Repurposing Candidates for Rare Diseases | Yash Sanjay Patil, John N Wang
Compositional Generalization Based on Semantic Interpretation: Where can Neural Networks Improve? | Carolyn Qu, Rodrigo J Nieto
Performing and Analyzing Named Entity Recognition on Foreign English Contexts | Alex Zhang Shan
Generative Word Embeddings with New Similarity Techniques for Legal Linking | Andres Felipe Suarez, Jared William Azevedo
Guideline for GPT-2: Investigating Model Variants For Different Computational Budgets | Ryan Kang
Enlightened Imagery: Multi-modal Image Captioning with Transformer-Based unified architecture | Prashan Malintha Somapala
Adapting the Contrast-Consistent Search Method to Multiclass Classification | Santiago Hernandez, Tomas Pfeffer, Diego Zancaneli
Universal Tabular Data Generator with Large Language Models | Julian Chu
Language Modelling using Latent Diffusion Models | Sebastian Charmot, Ryan Lok Him Po
Knowing What You Do Not Know: Investigating Large Language Models for Out-of-Domain Intent Classification | Claire Tang
ASKIT: Search and Ask Model | Adam C Klein, Matthew Jordan Villescas
DeepLyrics: GPT2 for lyrics generation with finetuning and prompting techniques | Xiaoli Yang, Li Tian
Novel Data Augmentation for resource constrained Image captioning | Parth Nilesh Dodhia, Anirudh Sriram
SuperHF: Supervised Finetuning from Human Feedback | Gabriel Mugisa Mukobi, Wilder Dwight Abraham Fulford, Peter Samuel Chatain
Looking Under the Hood of DetectGPT | Max Du, Kaien Yang, Ryan Lian
AV-HuBERT with Multi-Resolution Attention | Paul Thomas Calamia, Jacob Donley
Multi-Modal Model for Speech to Text Entity | Xiang Jiang
Abstractive Summarization of Legal Text Corpuses Using Transfer Learning | Alexander Antonio Alvarado-Barahona, Michael Zhang
Haiku Generation with Large Language Models | Brennan Emily Megregian, Victoria Larisa DiMelis
Nano Backpack Language Model on Chinese Characters | Hao Sun
Legal-SBERT: Creating a Sentence Transformer for the Legal Domain and Generating Data | Jayendra Singh Chauhan
Building a Natural Language Chess Engine with Pretraining and Instruction Fine-Tuning | Bowen Jiang
ConTAXt Retrieval for Long-Form Question-Answering | Winston Shum, Usman Iqbal Hanif, Will Frank Roberts
Enabling Interpretable Histopathology Representation Learning via Multimodal Language Guided Self-Supervision | Ekin Yokhan Tiu, Tom Thuc Ky Nguyen
Question Span Extraction from Chats of Instant Messaging Platforms | Abhishek Kumar
Paper Trading From Sentiment Analysis on Twitter and Reddit Posts | Chinmaya Mohan Andukuri, Eden Y Wang, Shobha Dasari
Generating Recipe Ingredients and Instructions with Controlled Text Generation | Kerrie Wu, Justine Breuch, Ben Alexander Randoing
Are Attention Flows All You Need? | Tomas Mika Bosschieter
Next-Song Recommendations for Spotify Playlists Using GPT-2 and T5 | Janice Yeuhthong Teoh, Carrie Jiayi Chen
Semantic Code Search | Dinesh Rathinasamy Thangavel, Suhit Anand Pathak
Multimodal Patient Evaluation for Depression and Anxiety | Ally Nakamura, Roshan Swaroop
Classifying Partisan Bias in News Articles: Leveraging an Understanding of Political Language and Article Structure | Edoardo Yin, Emily Jin
AI Can Look Up StackOverflow too: Retrieval-Augmented Code Generation | Shreyas Vinayakumar, Minh Tue Vo Thanh, Swagata Ashwani
Deep Learning Approach to Predicting Success of Medical Crowdfunding Campaigns | Advait Avinash Patil
Semantic Understanding of Genius Music Annotations | Wesley Tjangnaka, Brent Ju, Andrew Victor Li
Examining Misinformation via Search Directives | Amy Dunphy, Michal Maciej Adamkiewicz
Making the Most of Your Data: Few Shot Learning for Automated Essay Scoring | Abel Philip John
Ambiguity Resolution in Conversational Question Answering through Minimal Question Identification | Sahil Kulkarni
Looking Outside the Context Window: In-Context Learning with Up to Hundreds of Examples | Varun Shenoy, Linden Sky Li
Few-shot Classification of Disaster-related Tweets | Jubayer Ibn Hamid, Jitendra Nath Pandey, Sheikh Rifayet Daiyan Srijon
Are Distilled Models Just Deep-Sea Octopi? Probing Linguistic Representations of Distillation-Finetuned Models | Christos Polzak, Joy Yun
CodeSage: A Generative Approach to Improving Code Quality | Shounak Ray, Michael Deb Nath, Joseph Tey
RDF Triple-Text-Story: An Integrated Workflow for Controllable Short Story Generation | Yuer Zhou, Yifu Han
MetaMapper: Interpretable Metaphor Detection | Yining Mao
BERT Injections: Fine-Tuning BERT Using Tree-Based Word Representations to Address Syntactical Ambiguity | Aakriti Lakshmanan, Sathvik Nallamalli, Aditya Srinivas Tadimeti
Predicting Associated Comorbidities of Obesity from MIMIC-IV Clinical Notes | Ana Delphin Selvaraj, Om Balkrishna Jahagirdar, Peyton Chen
Rewriting Stack Overflow Questions to Improve Writing Quality | Allison Sandoval Casasola, Maximilien Angelo Munz Cura
Won’t You Be My Neighbor? Probing Informational Spread in Contextual Representations of Natural Language | Sevahn Kayaneh Vorperian, Hagop Jake Chinchinian, Avi Gupta
GPTNo: A Deep Learning LLM to Beat the Turing Test | Will Z Li, Sri Jaladi, Abhinav Sinha
MetaMapper: Interpretable Metaphor Detection | Ziwen Chen
Improving Neural Machine Translation of Spanish to Quechua with Transfer Learning | Kaiyu Ren
Unpacking Social Biases: An Analysis of Sense Embeddings Using the Backpack Model | Camron Timothy Sallade, Vedant Garg, Molly Cantillon
Human Writing is as Uniform as Machine Writing | Ryan Tan, Raghav Mittal Garg, Jacob Eleftherios Stavrianos
GhostWriter: Dynamic Programming and Deep Learning for Lyric Generation | Kiran Vincent Bhat, Niveditha Subramanyam Iyer, Tejas Narayanan
Multimodal Transformer-Based Lyric Generation from MIDI and Text Data | Vivek Vajipey, Steven Sun Zhao, Anthony Zhan
Investigating Methods of Using Context to Augment pre-trained Language Models for Question Answering | Sanjay Nagaraj, Rohan Reddy Davidi, Josh Sanyal
Engagement-based response generation for open-domain dialogue | Marcelo Pena, Ernesto Sung Woo Nam Song
Style EmuLoRAtion in Text Generation: A Case Study with Joe Biden and Donald Trump | Luke Joseph Mann, Ori Spector
STaR-plus: building robust and efficient language model reasoners | Kunal Sinha
Prompting for Diverse Responses: Making Large Language Models More Truthful | Eric YE, Matthew Joseph Kerr Smith
Generating Tricky Multiple-Choice QA Pairs from Contexts using Hierarchical Conditional VAEs | Davyn Christoper Sudirdjo
Learning Word Embedding from Dictionary Definitions | Madhurima Mahajan, Keertana Veeramony Chidambaram, Handi Zhao
Contrastive Learning for Sentence Embeddings in BERT and its Smaller Variants | Vrishab Krishna, Rohan Bansal
GAN-BERT for Automated Essay Scoring | Theodore Asa Kanell, Griffin Bryan Holt
Investigating SoTA Entity-Linking Methods for Dialogue | Isaac Dan Zhao, Arpit Arvind Ranasaria, Katherine Yang Yu
Bidirectional Transformer with Phonetic Embedding | Jiabin Wang
Natural Language Generation with Pixels | Rajan Pathe Vivek, Gautam Mittal
Investigating Disfluency Generation for the Creation of Humanlike Utterances in Conversation | Zuyi Liz Zhao, Alice Bai Zhang, Ayushi Gupta
VoBERTal: Variant Objective BERT Pretraining Approaches with HyperLinks | Yangyi Shen, Joey Ji
Deep Q-Learning for Text Generation | Felix Meng, Liwen Ouyang, Phee Nimitsurachat
DialogDiffAE: Dialogue Generation with Diffusion-Equipped Auto-Encoder | Fangzhao Zhang, Xiaohan Song
Detoxifying Language Model with Context Distillation | Andrew Hyungmin Lee
Novel Genre-Based Story Summary Generation | Alexis Catherine Echano, Minh Chau Mai
Making the Most of Your Data: Few Shot Learning for Automated Essay Scoring | Samarth Eshwar Kadaba
ADRAGGAN: ADversarial training for RAtionale Generation: a GAN for moral dilemmas | Poojan Pandya, Priya Khandelwal, Kavin Anand
Reading Between the Lines: Measuring | Andy Viet Huynh, David Haikuo Wang, Katherine Whitney Crandell
Unsupervised Question Answering Using Custom NLP Library Built for Egyptian Arabic | Ahmed Mostafa Sharaf, Michael Samouel Ghatas Souliman
Interpretability and Controllability of Backpack LMs | Tae Kyu Kim, Sarah Li Chen
Activation Sparsity: An Insight into the Interpretability of Trained Transformers | Carolyn Akua Asante Dartey, Jaime Eli Mizrachi Eshkenazi, Anushree Aggarwal
Interpreting Transformers using Spectral Analysis | Tulika Jha, Vishal Mohanty, Rishu Garg
Improved Methods for Solving Diverse Winograd Schemas | Max Atsunobu Vandervelden, Rohan Kumar Cherivirala
PiGGyBacking off of PEGASUS: Pre-training with Gap-sentences for Government Bills | Evelyn Hejin Choi, Karsen Lee Wahal, Alice Zhaoyi Chen
Constructing a Transformer-Based Architecture for Explainable Conversational Recommendation | Brock Grassy
ED Radiology Report Label Extraction | Serena Zhang, Jenny Shi, Iris Xia
Filtering Out Unreliable Language Model Outputs Using Contrast-Consistent Search | Michael Byun, Mauricio Baker
MEDI-BERPT: A Novel Multitask Approach to Streamlining Chinese Healthcare | Sunny Sun, Bi Tian Yuan
Semantic-Augment: Augmenting the Semantic Space of Transformers Improves Generalization | Emirhan Kurtulus
Steering Natural Language Generation by Optimizing Vector-to-Cluster Distance | Aniketh Nandakumar Iyengar, Vrushank Yatish Gunjur
Adapting to Word Shifts: Teaching LLMs the Urban Dictionary | Justin Wu, Sheryl Hsu
Adaptation, Sensitivity, and Introspection: Investigating the Capabilities of LLMs as Hypernetworks | Joseph Thomas Guman, Joey Coleman O'Brien, Christopher Lawrence Marcelino Pondoc
Argue Better: Using Large Language Models to Generate Better Examples for Ineffective Persuasive Essay Arguments | Anjali Ragupathi, Ashley Zhang
DeepRhymes: Efficient End-to-end Conditional Rap Lyrics Generation | Bessie Zhang, Catherine Kung, Ivan Villa-Renteria
Applying Natural Language Processing in Answering Multiple-Choice Questions for Assessing Child/Youth and Adults Needs and Strengths (CANS®/ANSA) | Kalikant Ganeshchandra Jha, Bohdan Metchko Junior
Leveraging Patient Portal Messages to Predict Emergency Department Visits | Jasmine Selin Bilir, Tran Le
Leveraging Patient Portal Messages to Predict Emergency Department Visits | Julia L Kadie
Are GPT-3 Models Pragmatic Reasoners? | Ariane Lee
Contextual Counterspeech Generation | Tanvi Misra Deshpande
Automatic Speech Recognition Error Correction on ICU Clinical Narration Dataset | Zhuoyi Huang, Han Bai, Adam Sun
Summarizing Charts and Graphs with Context | Nandita S Naik, Akankshita Dash
Longformer-based Automated Writing Assessment for English Language Learners | Peiqi Zhang
How well can Hippos learn? A Novel Foray into the In-Context Learning Capabilities of H3 | Shreyas Kar, Andres Carranza, Dhruv Bhandarkar Pai
Multi Distribution Dense Information Retrieval | Soumya Chatterjee
Data Augmentation for Low-resourced Language Modeling | Shubo Yang, Wanyue Zhai
MOPS: Memory Occupancy and Performance Surveying when using Late-Stage Hard Parameter Sharing for BERT Multitask Learning | Callum Jan Burgess, Mark Peter Bechthold
Multi-Task Learning BERT Model with Task-Specific Decoders | Zhen Li
Exploring the Effect of Semantic Similarity on Model Generalization | Dustin Ryan Zubke, Hong Ju Jeon
Minimum Generative Pre-trained Transformer with Human Feedback | Yanjia Li
Calibrated Contrast-Consistent Search | Holly McCann, Lucas Tao, Felipe Calero Forero
More Informative Relative Position Encoding for Table-to-Text Generation | Yuan Wang
Efficient Two-stage Approach for Long Document Summarization | Fengmin Tang, Jialuo Yuan, Benson Zu
Predicting Emergency Department Disposition from Radiology Reports | Karen Garcia Mesa, Andy Zhang
Audio-Text Cross-Modal Retrieval | Vladimir Tourbabin, Zamir Ben-Hur
Bias in clinical notes | Betty Xiong
NEWS2DIAL: News to Dialogue Utterance | Rishi Agarwal, Pratyush Agarwal, Ali Rehan
Finetuning minBERT Model for Multiple Downstream Tasks | Yuan Wang
Today Years Old: Adapting Language Models to Word Shifts | Jason Jin Chen, Zachary Xi, Olivia Y Lee
Rationale Belief Aggregation for Self-Verified Reasoning | Vaish Shrivastava
Multi-Task Zero-shot modeling with test Domain Shift: an exploration of sampling and fine-tuning techniques on DistilGPT-2 and BIG-bench | Lara Malinov
Deep Auctions: Using Economics to Improve NMT Decoding | Abhy Ravi Devalapura, Logan Mondal Bhamidipaty
Does Learning Syntax Help Models Learn Language? | Lian Wang
TAKG: Importance-augmented Knowledge Graphs | Josh Cho
Transformer-based solutions using transfer learning and instruction fine-tuning conditional on context input data for downstream NLP tasks in the domain of job application pain points | Aris Aristorenas
Controlling Toxicity using Backpacks | Advaya Gupta, Apoorva Dixit, Aditya Ashwini Agrawal
Domain Adaptation to Climate Change with Improved BLEU Evaluation Method | Yunan Li
BabyLLM Challenge: Encouraging Tree-Structured Calculations in Transformers | Vincelot Ravoson, Thomas James Little
Embedding Freedom? An NLP Approach to Uncovering Pre- and Post-Abolition Racial Bias in Brazilian Literature | Ana Carolina Queiroz
Bringing Back Black Boxes: Classification of TV news using neural nets | Jennifer A Wu, Shun Yamaya
Reinforcement Learning for Language Models | Wanqiao Xu, Paul Dupenloup, Gary Lurui Qian
Measuring Mission Deviation in California Non-Profit Hospitals | Nova Josephine Bradford, Pranay Agrawal, Cesar Augusto Portocarrero Rodriguez
Text Classification with language models and graph structures | Jian Xu, Fang Shu
Probing Frozen NL Models for Alignment with Human Reasoning | Clara Greene MacAvoy, Claire Cheng
Contextual Question Answering using variations of BiDAF and QANet | Achilleas Martinis
DetectChatGPT: Black-Box Zero-Shot Detection of LLM-Generated Text | Julia Park
DetectChatGPT: Black-Box Zero-Shot Detection of LLM-Generated Text | Armaan Rashid
Tweet Sentiment Analysis to Predict Stock Market | Christian Luther Palomo
Interpreting Transformers through Activation Sparsity | Quinn Isaiah Smalling, Dmitri Michelangelo Saberi
Generating Molecules from Natural Language with Multimodal Contrastive Pre-Training | Romain Lacombe, Kateryna Pistunova, David Ludeke
Exploring the Logical and Mathematical Capabilities of the BERT Embedding Space using Contrastive Learning | Mona Anvari

Default Projects

Project name | Authors
Unifying Different NLP Tasks with A Question-answering Model | Polycarpos Yiorkadjis, Yiyuan Wang
Extending the BERT Model to a Multitask Loss Function Using Gradient Surgery | Ali Lasemi
Finetuning minBERT for Downstream Tasks | Pete Rushton, Tyler Lee Nichols
Default Project: minBERT and Downstream Tasks | Rachel Yu, Mabel Jiang
BERT’s Multitask Learning Adventures | Yipeng Liu, Jonathan Richard Larkin
Adjusting Dropout in Contrastive Learning of Sentence Embeddings | Guillermo Frontera Sanchez, Maurice Andre Georgi
minBERT and Downstream Tasks | Regina Li Wang, Reva Parag Agashe, Jennie Jaeyoung Chung
minBERT and Downstream Tasks | Jason Alexander Chan
Enhancing Multi-Task Text Classification with Contrastive Learning and Dataset Augmentation in BERT-like Models | Phillip Yao-Lakaschus
PALs of Alto: Comparing Adapter Modules and Projected Attention Layers for Multitask Learning | Jonah Gordon Cader
minBERT and Downstream Tasks | Ibrahim Gulluk
swissBERT: A Ready-to-Use Multitask Transformer | Yixin Liu, Tom Shen, Violet Yao
Improved BERT embeddings through Negative Rank Loss | Aarya Cyril Mecwan, Torstein Orbeck Eliassen, Natalie V Bishay
Exploring Strategies for Improved Performance in Multi-Task Learning with Pretrained-BERT | Xiaolei Shi
Is training all you need? Exploring further pretraining and multi-task finetuning on BERT | Richard Liu, Umar Dizon Maniku
BERT-MTS: Fine Tuning BERT for Multi-Task Serving | Nishant Bharat Kanakia
MinBERT and Downstream Tasks | Adam Lida Zhao, Rohan Virani, Priyanka Mathikshara Mathialagan
Multi-task NLP with BERT | Christopher Edward King
Implementing Projected Attention Layers (PALs) for Joint Multi-Task Learning | TJ Tan
Losing to Win: Evaluating the Success of Various Loss Functions Within Multitask Settings | Jadon Armon Geathers
minBERT and Multi-Task Learning for Downstream Tasks | Jonathan Nathaniel Coronado, Michael Song Zhu
Multiple Strategies to Improve minBERT Multitask Learning | Colin Hall Kalicki
Investigating BERT through Fine-Tuned Regularization and Layer-Level Adaptations for Multi-Task Performance | Arjun Pandey, Neel S Narayan
Adapting the Contrast-Consistent Search Method to Multiclass Classification | Diego Zancaneli
Multi-Task Learning With a BERT-y Good Model | Nabil Ahmed, David Karamardian
Multitask Learning with Pre-trained BERT | Yuntao Ma, Kevin Li, Jack Albright
Pals and Gradient Vaccine for NLP Multitask Learning | Hannah Cussen, Michela Marchini, Kate Madeline Callon
Fine-Tuning BERT with Multi-Task Learning, Gradient Surgery, and Masked Language Modeling for Downstream NLP Tasks | Gabriela F Aranguiz-Dias, Janelle Cheung
Cuts and Stitches: Does Model Merging Produce Better Multitask Learners? | Koren Gilbai Koren, Suppakit Waiwitlikhit, Akshana Mario Dassanaike-Perera
minBERT for Multi-Task Learning | Maoan Wang, Emily Chanel Stanford
BERT Multi-Task Cosine Surgery: Applying Cosine Similarity and Gradient Surgery in a BERT Multi-Task Fine-Tuning Setting | Graciela Magdalena Maria Smet, Nick Hisaka Aughney Walker
Learning Better Together: Exploring Multi-Task Learning for Natural Language | Kapil E Iyer
BERT’s Mean Teacher and Multitask Fine-Tuning | Kevin Tran, Anthony Qin
Contrastive Pretraining of minBERT to Improve Performance in Downstream Tasks | Nick Phillips
Improving MinBERT: Gradient Surgery and Mixed-Precision Training | Maxwell Chen
Extending BERT for General Task Applicability | Ben Jeon
Around the BERT model: from a basic implementation to advanced optimizations and Multi-Task Learning | Joachim Studnia, Yoni David Gozlan, Ines Dormoy
Multitask BERT | Caroline Kelsey Zanze, Drew Wadsworth
Exploring Methods to Improve Robustness of Downstream Tasks for the BERT Language Model | Kenny Dao, Viraj Mehta, Jeremy Tian
Exploring Multitask BERT Optimizations for Sentiment Classification, Paraphrase Detection, and Semantic Textual Similarity | Gashon Halif Hussein
BERT-CF: Contrastive Flows for MultiTask-BERT | George Hu
Investigate multitask Performance of minBERT Ensemble | Cheng Chang
Multi-task Fine-tuning with BERT | Sanjaye Elayattu
BERT With Multitask Fine-Tuning and Loss Construction | Prarthna Khemka, Grace Casarez
Sentence part-enhanced minBERT: Incorporating sentence parts to improve BERT performance on downstream tasks | Aaron Long Wan
Improving Multitask MinBERT with Regularized Optimization and Contrastive Learning | Zhengdan Li, Weian Yin
Multi-task Learning with BERT in NLP | Fan Wang
Unitary Scalarization or Gradient Surgery? Best Practices for Multitask Fine-Tuning | John David McEnany
Generalizing BERT through Multi-Task Learning | Caroline Wang
minBERT and extensions for downstream tasks | Shiqi Xia, Yixing Jiang
Multitask BERT: Exploration and Extension | Aqil Daud Naeem
CS 224N: MinBERT and Downstream Tasks | Rita Tlemcani, Cole Porter Sohn
BERT Fine-Tuning with Contrastive Loss and Smoothness-Inducing Regularization | Laura Wu, Frank Zhao
BERT Extension Using SMART and Cosine Similarity Methodology | Victor Cheruiyot, Donghun Daniel Kim, Xinwei Liu
Improving minBERT Performance on Multiple Tasks through In-domain Pretraining, Negatives Ranking Loss Learning, and Hyperparameter Optimization | Catherine Huang, Addison Reese Jadwin
miniBERT and Multitasking: An Architectural Analysis | Jack Francis Michaels
Fine-tuning Multi-Task Learning in BERT Model | Emily Guo, Cole Hobbs Crichton
Failures of Improving minBERT with Similarity-based Triplet Networks | JINPU CAO
Improving MiniBERT’s Semantic Performance with Semantic-rich Sentence Embeddings | Melvin Orichi Socana, Julia Rose Chin, Jay Sahil Chiruvolu
Walk Less and Only Down Smooth Valleys | Julian Edwin Lovett Cooper, Thomas Brink, Quinn Hollister
Use Siamese BERT-Networks to fine-tune minBERT with downstream tasks | Zihan Yi
Exploring Multi-Task Learning for Robust Language Encoding with BERT | Laura Maria Bravo Sanchez, Eduardo Alejandro Lozano Garcia
BERT and MNRLLie: Extending minBERT with Deep Metric Learning and Gradient Surgery | Jorge Martinez Alba, Henry Alexander Bradley, Ben Auslin
Impact of BERT Extensions on Cross-Domain Text Classification | Michelle Wa Lok, Arun Karthikeyan
Fine-tuning BERT for Sentiment Analysis, Paraphrase Detection and Semantic Textual Similarity | Annie Ma, Alexander Peng, Joseph Zhang
Optimizing Multi-Task Classification Finetuning in BERT: a Multi-Pronged Approach | Bar Weiner, Soham Konar, Aadi Nashikkar
Style EmuLoRAtion in Text Generation: A Case Study with Joe Biden and Donald Trump | Ori Spector
Investigating BERT Model’s Abilities in Multi-Task Learning and Methods for Performance Improvement | Mac Ya, Tommy Li
Investigating Methods of Using Context to Augment pre-trained Language Models for Question Answering | Rohan Reddy Davidi
Engagement-based response generation for open-domain dialogue | Ernesto Sung Woo Nam Song, Marcelo Pena
Style EmuLoRAtion in Text Generation: A Case Study with Joe Biden and Donald Trump | Luke Joseph Mann
BERT and Learnie: Multi-task Classification Across Semantics Street | Flora Huang, Sonia Hangjie Chu, Kachachan Chotitamnavee
Combining Improvements for a Better BERT | Diego Mitsutaka Ahmad-Stein, Emily Wesel
BERT Goes to College: Exploring additional pretraining and multitask fine-tuning strategies with minBERT | Peyton M Lee, Michelle Fu, Eric Zhang
Fine-tuning minBERT on Downstream Tasks with Gradient Surgery and Weighted Losses | Andrew J Gan, Gareth A Cockroft, Tee Monsereenusorn
Efficient Finetuning for Multi-tasking minBERT | Tz-Wei Mo, Annie Ho
Enhancing BERT with Self-Supervised Attention | Joshua Christopher Francis
BERT Fine-tuning with Meta Learning | Hui Xue
Multitask Finetuning on BERT using Gradient Surgery and Linear Probing before Finetuning | Arvind Venkat Mahankali
Low Rank Adaptation for Multitask BERT | Marco Tacke, Johannes Fuest
Three Heads Are Better Than One | Jesus E Meza Rosales
Multi-task Learning using BERT | Nina Cruz, Pankhuri Aggarwal
Enhancing minBERT for Sentence Similarity with Cosine Similarity and Contrastive Learning | Xiaomiao Zhang, Yi-Chin Huang
Evaluating fine-tuning methods for robust multi-task sentence embeddings | Connor Toups, Ammar A Alinur, Kaleb Berhe Tsegay
Fine-tuning minBERT for Various Downstream Tasks | Siqi Wang, Longling Tian
Pre-training BERT: Swapped Subject Phrase Detection | Maggie Wu
BERT: A Master of All Trades or Jack of None? | Tom Pritsky, Josselin Martin Somerville Roberts, Marie Amale Huynh
Fine Tuning Multi Downstream Tasks based on BERT with Gradient Surgery | Jiwen Chen
Three Heads Are Better Than One | Esteban Cambronero Saba
Building Robust Adaptation for Multi-Task Learning over minBERT | Joyce Pan, Tess Tao, Guhui Zhang
minBERT Optimization with the SMART Learning Framework | Yihan Shi, Zixuan Xu, Zeyu Sun
Multi-task BERT Classification | Shen Gao
Multitask minBert | Emma E Passmore, Sajel Galhotra, Riya Shirish Sankhe
BERT Finetuning Analysis | Xueying Xie
BERT++: Trustworthy MultiTask Learning with BERT | Zilu Wang, Yuwei Wu, Anh Hoang Nguyen
Multi-Task Learning for Robust Contextualized Sentence Embedding Generation | Yash Dalmia, Santino L Ramos
PolarBERT: Enhancing Robustness and Generalizability of BERT Sentence Embeddings through Multiple Negatives Ranking Loss and Contrastive Learning | Tyler Allan Hanson, Jaisal Kothari, Lucy Zhu
Multi-Task Learning with BERT | Naveen Kumar
Investigating minBERT’s Performance on Negated Sentence Classification | Emily Ito Okabe
Enhance minBERT’s Performance on Multiple Sentence-Level Tasks using Downstream Techniques | Vibhaakar Sharma, Mandy Leung, Chung Ching Cheung
minBERT and Downstream Tasks | Harvey Cai
Data Augmentation with Feedback Control for BERT Multitask Finetuning | Kevin Titat Supakkul, Ryan Mason Beauchamp
BERT-based Multi-task Learning | Sahar Kazemzadeh
MultitaskBERT with Contrastive Pretraining and Fine-Grained Feature Learning | Rui Deng, Jack Chen, ZHENGJI YANG
minBERT and Multi-task Training with Gradient Surgery | Yihe Tang, Yunqi Li
A Comprehensive Analysis of Fine-Tuning Strategies for BERT | Adam Hyungsuk Chun, Emily Angel Hsu, Emily Quynh Nguyen
SuperBERT: Multi-task Finetuning with Domain Adaptation | Mohamed Ibrahim Osman, Mohamed A Owda
SMARTBert: Improving BERT Model Performance on Downstream Tasks Using Smoothness Inducing Adversarial Regularization | Roy Yuan, Jennifer He
Adversarial Transfer Learning for Continuous Natural Language Representation | Jinyoung Kim, Matthew Jonathan Turk, Zhaoqiang Bai
SerBERTus: A SMART Three-Headed BERT Ensemble | Matthew John Hayes
Techniques for Extracting Meaningful BERT Sentence Embeddings for Downstream Tasks | Jacob Anwar Mejia, Michael Yuanyi Xue, Matthew Harvill
Contrastive Learning for Generalizable Sentence Embeddings | Shenghan Chen
MT-BERT: Fine-tuning BERT for Downstream Tasks Using Multi-Task Learning | Neha Kunjal, Hermann Nyuykonge Kumbong
Prototypical Pre-Training for Robust Multi-Task Learning in Natural Language Processing | Andre Yu Yeung, Rohan Sikand
minBERT and Multiple Downstream Tasks | Xianling Zhang
Pals for PALs: Exploring Extensions to Projected Attention Layers for Sentence-Level Tasks | Lainey Yifei Wang
How to Fine-Tune BERT for Multiple Tasks? | Jingru Cheng, Bohao He
Finetuning a multitask BERT for downstream tasks | Chenchen Gu
Genre Classifications using Book and Film Descriptions | Ari Webb, Mattheus Borges Wolff, Shaunak Bhandarkar
Multitask Bert with Task Embedded Attentions (TEA-BERT) | Chunjiang Mou, Sally Yao, Zifei Xu
Fine-Tuning BERT for Sentiment Analysis, Paraphrase Detection and Semantic Text Similarity NLP Tasks | Swathi Gangaraju, Andrew Cheng
MOPS: Memory Occupancy and Performance Surveying when using Late-Stage Hard Parameter Sharing for BERT Multitask Learning | Callum Jan Burgess, Mark Peter Bechthold, Anthony David Weng
Multi-Task Learning BERT Model with Task-Specific Decoders | Zhen Li
Default Final Project: Improving minBERT with Entailment Learning | Saksham Consul, Akash Rajesh Chaurasia, Carlota Pares Morlans
Leave It To BERT: Exploring Methods for Robust Multi-Task Performance | Abhi Kumar, Finn Alexander Dayton, Christopher Moffitt
Extending BERT with Multi-task and Meta-learning | Cam Scott Anton Burton, Anna NING
Default Final Project: minBERT and Downstream Tasks (Multi-task Learning) | Maximilian Sabayev, Samuel Chian
Freeze Your Layers | Alex Scott Thiesmeyer, Gautham Ryota Gorti
Robust Embeddings using Contrastive and Multiple Negatives on BERT | Ishan Sabane, Shoaib Mohammed
A Sentence-BERT Extension to the minBERT Model | Lisa Xuejie Yi, Shruti Sridhar
BERTer Multi-task Fine-tuning for Sentence-Level Tasks | Danny SungIn Park, Stanley Yang, Andrew S Chen
Enhanced generalizable minBERT model for multiple downstream tasks | Yi Qi
Finetuning minBERT Model for Multiple Downstream Tasks | Yuan Wang
Adapting BERT for Multi-Task Learning with PALs | Kathleen Cheng
Beyond BERT: Deepening Natural Language Understanding with Multi-Task Learning and Advanced Embedding Techniques | Matt Peng, Varun Madhu Kutirakulam, Mohammed Minhajuddin Majid
Extension-BERT: Multitask Learning with BERT | Jingwen Wu
SBRRT: Investigating Extensions on BERT | Diego Adrian Valdez Duran
Data Augmentation for Multi-task BERT models | Aditya Chandrasekar
Multitasking with BERT | Jack Jin Hung, Jiacheng Hu
minBERT and Extensions over Downstream Tasks | Taiqi Zhao, Weimin Wan, Jerry Lin
Enhancing miniBERT: Exploring Methods to Improve BERT Performance | Shivangi Agarwal, Ben Charles Hora, Yasmin Salehi
We do it BERTer: Comparison of Finetuning Methods to Improve Sentence Embeddings | Alex Hodges, Ramya Ayyagari
Exploring minBERT Performance Optimizations | Marie Chu, Emmy Thamakaison
Implementing minBERT and Extensions for Multi-Task Learning | Yan Wang, Jiani Wang, Qinchen Wang
Parameter Efficient Fine-tuning for Multi-task Learning | Jeffery Shen, Chih-Ying Liu
Comparative Analysis of SimCSE for minBERT Optimization with Multiple Downstream Tasks | Runqiu Zhang
SimCSE Lessens your Need to Seek BERT’s Attention | Andre Klawa
Optimizing minBERT for Multiple Classification Tasks | Savitha Srinivasan, Edmond John Dilworth, Priyanka Shrestha
Multitasking with minBERT | JungSuk Lee
minBERT for Sentiment Analysis, Paraphrase Detection, and Semantic Textual Similarity | Shelly Goel, Haya Hidayatullah, Yoko Nagafuchi
minBERT and Downstream Tasks | Yvonne Hong, Hodan Farah, Simrin Kalkat
Training MinBERT with Contrastive Learning | Robert Walter Markham Thompson, Sal Rocco Spina, Patrick John Donohue
Improving BERT computational efficiency | Julio Alberto Oscanoa Aida
Glaucoma Surgery Outcome Prediction Using Progress Notes: A Comparative Study | Samuel Barry, Sarvesh R. Babu
BERT for Sentiment Analysis, Paraphrase Detection and Semantic Textual Similarity with Cosine Similarity | Debolina Paul
Multitasking with a single set of BERT embeddings | Adrien Lemercier
Enhancing minBert Embeddings for Multiple Downstream Tasks | Donald Stephens
Convolutional Gated Unit for Improved Multi-Task Learning | Princess Vongchanh, Daniel Contreras-Esquivel
BERT Downstream task training utilizing different pooling methods | Kaushik Sampath