General Words Representation Method for Modern Language Model

Abbas Saliimi, Lokman and Mohamed Ariff, Ameedeen and Ngahzaifa, Ab. Ghani (2023) General Words Representation Method for Modern Language Model. Journal of Telecommunication, Electronic and Computer Engineering, 15 (1). pp. 1-5. ISSN 2180-1843 (Print); 2289-8131 (Online). (Unpublished)



This paper proposes a new word representation method that emphasizes general words over specific words. The main motivation for developing this method is to address the weighting bias in modern Language Models (LMs). Based on the Transformer architecture, contemporary LMs tend to naturally emphasize specific words through the Attention mechanism in order to capture the key semantic concepts in a given text. As a result, general words, including question words, are often neglected by LMs, leading to a biased word significance representation (where specific words have heightened weights, while general words have reduced weights). This paper presents a case study in which the semantics of general words are as important as those of specific words, specifically in the abstractive answer area within the Natural Language Processing (NLP) Question Answering (QA) domain. Based on the selected case study datasets, two experiments are designed to test the hypothesis that "the significance of general words is highly correlated with their Term Frequency (TF) percentage across various document scales". The results from these experiments support this hypothesis, justifying the proposed method's intention to emphasize general words over specific words in a corpus of any size. The output of the proposed method is a list of token (word)-weight pairs. These generated weights can be used to leverage the significance of general words over specific words in suitable NLP tasks. An example of such a task is question classification (classifying whether a question expects a factual or an abstractive answer). In this context, general words, particularly question words, are more semantically significant than specific words, because the same specific words in different questions might require different answers depending on the question words (e.g., "How many items are on sale?" versus "What items are on sale?").
By employing the general weight values produced by this method, the weightage of question words can be heightened relative to specific words, making it easier for the classification system to differentiate between such questions. Additionally, the token (word)-weight pair list is made available online at
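The core idea above, that a word's significance tracks its term-frequency percentage across the corpus, can be illustrated with a minimal sketch. This is not the authors' published implementation; the function name `general_word_weights` and the use of raw TF percentages as weights are assumptions made for illustration only.

```python
from collections import Counter

def general_word_weights(documents):
    """Illustrative sketch: weight each token by its term-frequency
    percentage over the whole corpus, so frequent (general) words such
    as question words receive higher weights than rare (specific) words.
    The exact weighting scheme of the paper may differ."""
    counts = Counter()
    total = 0
    for doc in documents:
        tokens = doc.lower().split()
        counts.update(tokens)
        total += len(tokens)
    # Token -> TF percentage across all documents
    return {tok: 100.0 * n / total for tok, n in counts.items()}

corpus = [
    "how many items are on sale",
    "what items are on sale",
]
weights = general_word_weights(corpus)
# "items" occurs in both questions, so its weight exceeds
# that of the single-occurrence word "how"
```

Under this toy scheme the resulting token-weight pairs give the shared (general) vocabulary of the two example questions higher weight than their distinguishing content, which is the direction of emphasis the abstract describes.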

Item Type: Article
Uncontrolled Keywords: Word representation; Word embedding; Language model; QA System
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Faculty/Division: Faculty of Computing
Depositing User: Mr. Abbas Saliimi Lokman
Date Deposited: 30 Aug 2023 00:57
Last Modified: 30 Aug 2023 00:57
