The CLASSE GATOR (CLinical Acronym SenSE disambiGuATOR): A Method for predicting acronym sense from neonatal clinical notes - ScienceDirect

peter.suber's bookmarks 2020-08-17

Summary:

Abstract: Objective

To develop an algorithm for identifying acronym ‘sense’ from clinical notes without requiring a clinically annotated training set.

Materials and Methods

Our algorithm is called CLASSE GATOR: CLinical Acronym SenSE disambiGuATOR. CLASSE GATOR extracts acronyms and their definitions from PubMed Central (PMC). A logistic regression model is trained on the words associated with each specific acronym-definition pair from PMC. CLASSE GATOR then uses this library of acronym-definition pairs and their corresponding word feature vectors to predict acronym ‘sense’ in Beth Israel Deaconess (MIMIC-III) neonatal notes.
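As an illustration only, here is a minimal sketch of the kind of per-acronym sense classifier the Methods describe, assuming scikit-learn, simple bag-of-words context features, and made-up example sentences for the acronym "PDA"; the paper's actual feature construction, context windows, and PMC extraction pipeline are not reproduced here.

    # Sketch: train a per-acronym sense classifier on PMC contexts where the
    # long form (the "sense") appears, then predict the sense in a note context.
    # All data below are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pmc_contexts = [
        "echocardiogram showed a patent ductus arteriosus with left to right shunt",
        "indomethacin was given to close the patent ductus arteriosus",
        "stenosis of the posterior descending artery was seen on angiography",
        "the posterior descending artery supplies the inferior wall of the heart",
    ]
    pmc_senses = [
        "patent ductus arteriosus",
        "patent ductus arteriosus",
        "posterior descending artery",
        "posterior descending artery",
    ]

    # One classifier per acronym: bag-of-words context features -> sense label.
    model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(pmc_contexts, pmc_senses)

    # Disambiguate the acronym in a hypothetical neonatal-note context window.
    note_context = "murmur noted, echo confirms PDA, plan indomethacin course"
    print(model.predict([note_context])[0])  # likely: patent ductus arteriosus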

Results

We identified 1,257 acronyms and 8,287 definitions (including a random definition) from 31,764 PMC articles on prenatal exposures and 2,227,674 PMC open access articles. The average number of senses (definitions) per acronym was 6.6 (min = 2, max = 50). The average internal 5-fold cross-validation accuracy was 87.9 % (on PMC). We found that 727 unique acronyms (57.29 %) from PMC were present in 105,044 neonatal notes (MIMIC-III). We evaluated acronym prediction performance using 245 manually annotated clinical notes containing 9 distinct acronyms. CLASSE GATOR achieved an overall accuracy of 63.04 % and outperformed random for 8/9 acronyms (88.89 %) when applied to clinical notes. We also compared our algorithm against the University of Minnesota (UMN) acronym set, and found that CLASSE GATOR outperformed random for 63.46 % of 52 acronyms when using logistic regression, 75.00 % when using BERT, and 76.92 % when using BioBERT as the prediction algorithm within CLASSE GATOR.
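The per-acronym "outperformed random" comparison and the internal 5-fold cross-validation can be illustrated with a hedged sketch like the one below, assuming scikit-learn's cross_val_score and a uniform-random DummyClassifier baseline; the paper's exact folds, features, and baseline definition may differ.

    # Sketch: 5-fold cross-validated accuracy of the sense classifier for one
    # acronym, compared against a uniform-random baseline over its senses.
    from sklearn.dummy import DummyClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    def beats_random(contexts, senses, folds=5):
        """True if the classifier's mean CV accuracy beats a random baseline."""
        clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
        baseline = make_pipeline(CountVectorizer(),
                                 DummyClassifier(strategy="uniform"))
        clf_acc = cross_val_score(clf, contexts, senses, cv=folds).mean()
        base_acc = cross_val_score(baseline, contexts, senses, cv=folds).mean()
        return clf_acc > base_acc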

Conclusions

CLASSE GATOR is the first automated acronym sense disambiguation method for clinical notes. Importantly, CLASSE GATOR does not require an expensive manually annotated acronym-definition corpus for training.

From the body of the paper: "Patients are shared decision makers in their healthcare [1] and those who read more of their clinicians’ notes are more engaged and able to share further in the decision making process [1]. Some academic medical centers, including Beth Israel Deaconess, have made their clinical notes available to patients themselves [2]. The reported feedback from patients viewing their notes is generally positive [3,4]. However, approximately 7.6 % of patients reading their clinicians’ notes reported that they were more confusing than helpful and 2.9 % reported that they were more worried after reading their clinicians’ notes [5]. Further, patients may vary in their reading abilities. For example, patients with more comorbidities and medications have demonstrated lower numeracy scores [6], indicating, perhaps, that their ability to comprehend medical text may be even lower."


Link:

https://www.sciencedirect.com/science/article/pii/S1386505619312122

Updated:

08/17/2020, 10:47

From feeds:

Open Access Tracking Project (OATP) » peter.suber's bookmarks

Tags:

oa.ai oa.intelligibility oa.patients oa.medicine oa.lay

Date tagged:

08/17/2020, 14:46

Date published:

05/01/2020, 10:47