BioALBERT: A Simple and Effective Pre-trained Language Model for Biomedical Named Entity Recognition
Naseem, Usman, Khushi, Matloob, Reddy, Vinay, Rajendran, Sakthivel, Razzak, Imran, and Kim, Jinman (2021) BioALBERT: A Simple and Effective Pre-trained Language Model for Biomedical Named Entity Recognition. In: Proceedings of the 2021 International Joint Conference on Neural Networks. From: IJCNN: 2021 International Joint Conference on Neural Networks, 18-22 July 2021, Shenzhen, China.
Abstract
In recent years, with the growing volume of biomedical documents and advances in natural language processing algorithms, research on biomedical named entity recognition (BioNER) has increased exponentially. However, BioNER remains challenging because NER in the biomedical domain is: (i) often restricted by the limited amount of training data; (ii) complicated by entities that can refer to multiple types and concepts depending on context; and (iii) heavily reliant on acronyms that are sub-domain specific. Existing BioNER approaches often neglect these issues and directly adopt state-of-the-art (SOTA) models trained on general corpora, which often yields unsatisfactory results. We propose biomedical ALBERT (A Lite Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) - BioALBERT - an effective domain-specific pre-trained language model trained on a large biomedical corpus and designed to capture biomedical context-dependent NER. We adopted the self-supervised loss function used in ALBERT, which targets modelling inter-sentence coherence, to better learn context-dependent representations, and incorporated parameter reduction strategies to minimise memory usage and reduce training time for BioNER. In our experiments, BioALBERT outperformed comparative SOTA BioNER models on 8 biomedical NER benchmark datasets covering 4 different entity types. Performance increased for: (i) disease-type corpora by 7.47% (NCBI-disease) and 10.63% (BC5CDR-disease); (ii) drug/chemical-type corpora by 4.61% (BC5CDR-Chem) and 3.89% (BC4CHEMD); (iii) gene/protein-type corpora by 12.25% (BC2GM) and 6.42% (JNLPBA); and (iv) species-type corpora by 6.19% (LINNAEUS) and 23.71% (Species-800), yielding state-of-the-art results. The performance of the proposed model across four different biomedical entity types shows that it is robust and generalisable in recognising biomedical entities in text.
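To make the described setup concrete, the sketch below wires an ALBERT encoder to a token-classification head, the standard configuration for fine-tuning a transformer encoder on an NER task. It is a minimal illustration, not the authors' released code: the generic albert-base-v2 checkpoint, the single-entity Disease tag set, and the example sentence are placeholder assumptions standing in for the domain-specific BioALBERT weights and the benchmark-specific label sets.

```python
# Minimal sketch of an ALBERT-based NER setup (assumptions: generic
# albert-base-v2 weights instead of BioALBERT; hypothetical BIO tag set).
import torch
from transformers import AlbertTokenizerFast, AlbertForTokenClassification

# Hypothetical BIO tag set for a single entity type (disease).
labels = ["O", "B-Disease", "I-Disease"]

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForTokenClassification.from_pretrained(
    "albert-base-v2", num_labels=len(labels)
)

sentence = "The BRCA1 mutation is associated with breast cancer."
inputs = tokenizer(sentence, return_tensors="pt")

# The classification head is randomly initialised here; fine-tuning on a
# BioNER benchmark (e.g., NCBI-disease) would train it.
model.eval()
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predicted_ids):
    print(f"{token}\t{labels[label_id]}")
```

ALBERT's parameter reduction (factorised embeddings and cross-layer parameter sharing) is what the abstract credits for the lower memory usage and faster training relative to BERT-sized encoders; the head-plus-encoder pattern above is unchanged by it.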
| Item ID: | 79236 |
|---|---|
| Item Type: | Conference Item (Research - E1) |
| ISBN: | 978-1-6654-3900-8 |
| Copyright Information: | © 2021 IEEE |
| Date Deposited: | 06 Jul 2023 00:23 |
| FoR Codes: | 46 INFORMATION AND COMPUTING SCIENCES > 4602 Artificial intelligence > 460208 Natural language processing @ 100% |
| SEO Codes: | 22 INFORMATION AND COMMUNICATION SERVICES > 2204 Information systems, technologies and services > 220403 Artificial intelligence @ 100% |