eprintid: 5660
rev_number: 8
eprint_status: archive
userid: 2
dir: disk0/00/00/56/60
datestamp: 2023-02-01 23:30:06
lastmod: 2023-02-01 23:30:20
status_changed: 2023-02-01 23:30:06
type: article
metadata_visibility: show
creators_name: Hafeez, Rabab
creators_name: Anwar, Muhammad Waqas
creators_name: Jamal, Muhammad Hasan
creators_name: Fatima, Tayyaba
creators_name: Martínez Espinosa, Julio César
creators_name: Dzul López, Luis Alonso
creators_name: Bautista Thompson, Ernesto
creators_name: Ashraf, Imran
creators_id:
creators_id:
creators_id:
creators_id:
creators_id: ulio.martinez@unini.edu.mx
creators_id: luis.dzul@uneatlantico.es
creators_id: ernesto.bautista@unini.edu.mx
creators_id:
title: Contextual Urdu Lemmatization Using Recurrent Neural Network Models
ispublished: pub
subjects: uneat_eng
divisions: uneatlantico_produccion_cientifica
divisions: unincol_produccion_cientifica
divisions: uninimx_produccion_cientifica
divisions: uninipr_produccion_cientifica
divisions: unic_produccion_cientifica
full_text_status: public
keywords: neural networks; natural language processing; inflectional morphology; derivational morphology; MSC: 68T50
abstract: In the field of natural language processing, machine translation is a rapidly growing research area that helps humans communicate more effectively by bridging the linguistic gap. In machine translation, normalization and morphological analysis are the first and perhaps the most important modules for information retrieval (IR). To build a morphological analyzer, or to complete the normalization process, it is important to extract the correct root from different words. Stemming and lemmatization are techniques commonly used to find the correct root words in a language. However, a few studies on IR systems for the Urdu language have shown that lemmatization is more effective than stemming because of the infixes found in Urdu words. Lemmatization techniques for resource-scarce languages such as Urdu are nevertheless not very common. This paper presents a lemmatization algorithm based on recurrent neural network models for the Urdu language. The proposed model is trained and tested on two datasets, namely, the Urdu Monolingual Corpus (UMC) and the Universal Dependencies Corpus of Urdu (UDU). The datasets are lemmatized with the help of recurrent neural network models. The Word2Vec model and edit trees are used to generate semantic and syntactic embeddings. Bidirectional long short-term memory (BiLSTM), bidirectional gated recurrent unit (BiGRU), bidirectional gated recurrent neural network (BiGRNN), and attention-free encoder–decoder (AFED) models are trained under defined hyperparameters.
Experimental results show that the attention-free encoder–decoder model achieves an accuracy, precision, recall, and F-score of 0.96, 0.95, 0.95, and 0.95, respectively, and outperforms existing models.
date: 2023
publication: Mathematics
volume: 11
number: 2
pagerange: 435
id_number: doi:10.3390/math11020435
refereed: TRUE
issn: 2227-7390
official_url: http://doi.org/10.3390/math11020435
access: open
language: en
citation: Hafeez, Rabab; Anwar, Muhammad Waqas; Jamal, Muhammad Hasan; Fatima, Tayyaba; Martínez Espinosa, Julio César; Dzul López, Luis Alonso; Bautista Thompson, Ernesto and Ashraf, Imran (2023) Contextual Urdu Lemmatization Using Recurrent Neural Network Models. Mathematics, 11 (2). p. 435. ISSN 2227-7390
document_url: http://repositorio.unincol.edu.co/id/eprint/5660/1/mathematics-11-00435.pdf
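The following is a minimal, illustrative sketch (Python/PyTorch, not the authors' released code) of the general approach the abstract describes: word embeddings such as those produced by Word2Vec feed a bidirectional LSTM that predicts, for each token in context, an edit-tree class mapping the surface word to its lemma. All class names, dimensions, and the toy input below are assumptions made for illustration only.

import torch
import torch.nn as nn

class BiLSTMLemmatizer(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_edit_trees):
        super().__init__()
        # In the described setting this layer would be initialised from
        # pretrained Word2Vec vectors; here it is learned from scratch.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One logit per candidate edit tree (lemma transformation class).
        self.classifier = nn.Linear(2 * hidden_dim, num_edit_trees)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        emb = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        ctx, _ = self.bilstm(emb)          # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(ctx)        # (batch, seq_len, num_edit_trees)

# Toy usage with assumed sizes: a 5,000-word vocabulary, 100-dimensional
# embeddings, 128 hidden units, and 300 distinct edit-tree classes.
model = BiLSTMLemmatizer(vocab_size=5000, embed_dim=100,
                         hidden_dim=128, num_edit_trees=300)
batch = torch.randint(0, 5000, (2, 7))    # 2 sentences of 7 tokens each
logits = model(batch)
predicted_trees = logits.argmax(dim=-1)   # one edit-tree id per token
print(predicted_trees.shape)              # torch.Size([2, 7])

Casting lemmatization as edit-tree classification keeps the output space small, while the bidirectional context helps disambiguate inflected forms that share a surface string.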