Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative

Lijing Wang, Timothy Miller, Steven Bethard, Guergana Savova

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In this paper, we investigate ensemble methods for fine-tuning transformer-based pretrained models for clinical natural language processing tasks, specifically temporal relation extraction from the clinical narrative. Our experimental results on the THYME data show that ensembling as a fine-tuning strategy can further boost model performance over single learners optimized for hyperparameters. Dynamic snapshot ensembling is particularly beneficial as it fine-tunes a wide array of parameters and results in a 2.8% absolute improvement in F1 over the base single learner.
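
The abstract names dynamic snapshot ensembling as the strategy that fine-tunes the widest range of parameters and yields the largest gain. The proceedings entry itself contains no code, so below is a minimal, illustrative Python/PyTorch sketch of plain snapshot ensembling (a cyclic learning rate, one saved snapshot per cycle, averaged class probabilities at prediction time). The toy linear classifier, data, and hyperparameters are assumptions for illustration only; they do not reproduce the authors' THYME setup or their dynamic variant.

# Minimal sketch of snapshot ensembling as a fine-tuning strategy.
# A toy linear classifier stands in for a pretrained transformer; the data,
# label set, and schedule are illustrative assumptions, not the paper's setup.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

X = torch.randn(256, 32)                  # 256 examples, 32-dim features (toy data)
y = torch.randint(0, 3, (256,))           # 3 relation labels, purely illustrative
model = nn.Linear(32, 3)

cycles, steps_per_cycle = 4, 50           # one snapshot is kept per learning-rate cycle
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=steps_per_cycle)       # cyclic schedule that drives the snapshots

snapshots = []
for cycle in range(cycles):
    for step in range(steps_per_cycle):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(X), y)
        loss.backward()
        optimizer.step()
        scheduler.step()
    # End of a cycle: the learning rate has annealed toward zero, so keep a
    # copy of the current weights as one ensemble member.
    snapshots.append(copy.deepcopy(model.state_dict()))

# Ensemble prediction: average the class probabilities of all snapshots.
with torch.no_grad():
    probs = torch.zeros(len(X), 3)
    for state in snapshots:
        member = nn.Linear(32, 3)
        member.load_state_dict(state)
        probs += F.softmax(member(X), dim=-1)
    ensemble_pred = (probs / len(snapshots)).argmax(dim=-1)
print("ensemble accuracy on toy data:", (ensemble_pred == y).float().mean().item())

In this sketch the ensemble is built from a single training run, which is the appeal of snapshot ensembling over training several independent learners; how snapshots are selected and weighted is where a dynamic variant such as the one studied in the paper would differ.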

Original language: English (US)
Title of host publication: ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings
Editors: Tristan Naumann, Steven Bethard, Kirk Roberts, Anna Rumshisky
Publisher: Association for Computational Linguistics (ACL)
Pages: 103-108
Number of pages: 6
ISBN (Electronic): 9781955917773
State: Published - 2022
Externally published: Yes
Event: 4th Workshop on Clinical Natural Language Processing, ClinicalNLP 2022 - Seattle, United States
Duration: Jul 14 2022 → …

Publication series

Name: ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings

Conference

Conference: 4th Workshop on Clinical Natural Language Processing, ClinicalNLP 2022
Country/Territory: United States
City: Seattle
Period: 7/14/22 → …

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
