Explainable hybrid word representations for sentiment analysis of financial news

Adhikari, Surabhi; Thapa, Surendrabikram; Naseem, Usman; Lu, Hai Ya; Bharathy, Gnana; and Prasad, Mukesh (2023) Explainable hybrid word representations for sentiment analysis of financial news. Neural Networks, 164. pp. 115-123.

PDF (Published Version) - Restricted to Repository staff only

View at Publisher Website: https://doi.org/10.1016/j.neunet.2023.04...


Abstract

Due to the increasing interest of people in the stock and financial markets, sentiment analysis of news and texts related to this sector is of utmost importance. It helps potential investors decide which companies to invest in and what their long-term benefits are. However, analyzing the sentiment of texts in the financial domain is challenging, given the enormous amount of information available. Existing approaches are unable to capture complex attributes of language such as word usage, including semantics and syntax across the context, and polysemy within the context. Further, these approaches fail to explain the models' predictions, which remain opaque to humans. Model interpretability for justifying predictions has remained largely unexplored, yet it is important for engendering users' trust by providing insight into how predictions are made. Accordingly, in this paper, we present an explainable hybrid word representation that first augments the data to address the class imbalance issue and then integrates three embeddings to capture polysemy, semantics, and syntax in context. We then feed the proposed word representation into a convolutional neural network (CNN) with attention to capture sentiment. The experimental results show that our model outperforms several baselines, both classic classifiers and combinations of various word embedding models, in the sentiment analysis of financial news. The results also show that the proposed model outperforms several baselines of word embeddings and contextual embeddings when they are fed separately to a neural network model. Further, we demonstrate the explainability of the proposed method by presenting visualization results that explain the reason for a prediction in the sentiment analysis of financial news.
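The abstract does not include implementation details, so the following is only a minimal sketch of the kind of architecture it describes: three per-token embeddings (contextual, semantic, syntactic) concatenated into a hybrid representation and passed to a CNN with additive attention, whose attention weights can be visualized for explanation. The framework (PyTorch), the embedding dimensions, and all class and variable names are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: hybrid embedding + CNN with attention for sentence-level sentiment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridCNNAttention(nn.Module):
    def __init__(self, dims=(768, 300, 100), n_filters=128, kernel_size=3, n_classes=3):
        super().__init__()
        hybrid_dim = sum(dims)                       # size of the concatenated embedding
        self.conv = nn.Conv1d(hybrid_dim, n_filters, kernel_size, padding=kernel_size // 2)
        self.att = nn.Linear(n_filters, 1)           # additive attention scores per token
        self.fc = nn.Linear(n_filters, n_classes)    # e.g., positive / neutral / negative

    def forward(self, contextual, semantic, syntactic):
        # each input: (batch, seq_len, dim_i); concatenate along the feature axis
        x = torch.cat([contextual, semantic, syntactic], dim=-1)
        h = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (batch, seq_len, n_filters)
        weights = torch.softmax(self.att(h).squeeze(-1), dim=1)   # (batch, seq_len)
        pooled = torch.einsum("bs,bsf->bf", weights, h)           # attention-weighted sum
        return self.fc(pooled), weights               # logits + weights usable for explanation

# Toy usage with random tensors standing in for precomputed embeddings (assumed dimensions).
model = HybridCNNAttention()
ctx = torch.randn(2, 20, 768)    # e.g., BERT-style contextual vectors
sem = torch.randn(2, 20, 300)    # e.g., GloVe/word2vec-style semantic vectors
syn = torch.randn(2, 20, 100)    # e.g., syntax-oriented vectors
logits, attn = model(ctx, sem, syn)
print(logits.shape, attn.shape)  # torch.Size([2, 3]) torch.Size([2, 20])
```

The returned attention weights give a per-token importance score, which is one common way such visual explanations are produced; the paper's own explainability method may differ.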

Item ID: 79247
Item Type: Article (Research - C1)
ISSN: 1879-2782
Keywords: Contextual embeddings, Explainability, Explainable sentiment analysis, Hybrid word embeddings, Natural Language Processing, XAI
Copyright Information: © 2023 Elsevier Ltd. All rights reserved.
Date Deposited: 13 Dec 2023 01:24
FoR Codes: 46 INFORMATION AND COMPUTING SCIENCES > 4602 Artificial intelligence > 460208 Natural language processing @ 100%
SEO Codes: 22 INFORMATION AND COMMUNICATION SERVICES > 2204 Information systems, technologies and services > 220403 Artificial intelligence @ 100%
