Competitive Coevolution-Based Improved Phasor Particle Swarm Optimization Algorithm for Solving Continuous Problems
Article
Subjects > Engineering
Universidad Europea del Atlántico > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Articles and Books
Universidad Internacional Iberoamericana México > Research > Scientific Production
Universidad Internacional Iberoamericana Puerto Rico > Research > Scientific Production
Universidad Internacional do Cuanza > Research > Scientific Production
Open
English
UNSPECIFIED
Ali, Omer; Abbas, Qamar; Mahmood, Khalid; Bautista Thompson, Ernesto; Arambarri, Jon and Ashraf, Imran
Email: UNSPECIFIED, UNSPECIFIED, UNSPECIFIED, ernesto.bautista@unini.edu.mx, jon.arambarri@uneatlantico.es, UNSPECIFIED
(2023) Competitive Coevolution-Based Improved Phasor Particle Swarm Optimization Algorithm for Solving Continuous Problems. Mathematics, 11 (21), p. 4406. ISSN 2227-7390
Text: mathematics-11-04406.pdf, available under a Creative Commons Attribution License. Download (995kB)
| Document Type: | Article |
|---|---|
| Keywords: | particle swarm optimization; phasor PSO; PSO coevolution; optimization; multi-swarm |
| Subject Classification: | Subjects > Engineering |
| Divisions: | Universidad Europea del Atlántico > Research > Scientific Production; Fundación Universitaria Internacional de Colombia > Research > Articles and Books; Universidad Internacional Iberoamericana México > Research > Scientific Production; Universidad Internacional Iberoamericana Puerto Rico > Research > Scientific Production; Universidad Internacional do Cuanza > Research > Scientific Production |
| Deposited: | 25 Oct 2023 23:30 |
| Last Modified: | 25 Oct 2023 23:30 |
| URI: | https://repositorio.unincol.edu.co/id/eprint/9376 |
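The article above improves on particle swarm optimization. As context only, here is a minimal NumPy sketch of the canonical global-best PSO update that such variants extend, run on a toy sphere objective. This is an illustrative baseline, not the paper's phasor or competitive-coevolution algorithm; all parameter values below are assumptions.

```python
import numpy as np

def pso_minimize(f, dim=5, swarm=20, iters=200, seed=0):
    """Canonical global-best PSO (illustrative baseline, not the paper's variant)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
    x = rng.uniform(-5, 5, (swarm, dim)) # particle positions
    v = np.zeros_like(x)                 # particle velocities
    pbest = x.copy()                     # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        # classic velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

sphere = lambda z: float(np.sum(z ** 2))   # toy continuous objective
best, best_val = pso_minimize(sphere)
```

Phasor PSO and the paper's coevolutionary multi-swarm scheme replace the fixed coefficients above with adaptive control strategies; this sketch only shows the shared skeleton.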
Histopathological evaluation is necessary for the diagnosis and grading of prostate cancer, which is still one of the most common cancers in men globally. Traditional evaluation is time-consuming, prone to inter-observer variability, and challenging to scale. The clinical usefulness of current AI systems is limited by the need for comprehensive pixel-level annotations. The objective of this research is to conduct a large-scale benchmarking study of a weakly supervised deep learning framework that minimizes the need for annotation and ensures interpretability for automated prostate cancer diagnosis and International Society of Urological Pathology (ISUP) grading using whole slide images (WSIs). This study rigorously tested six cutting-edge multiple instance learning (MIL) architectures (CLAM-MB, CLAM-SB, ILRA-MIL, AC-MIL, AMD-MIL, WiKG-MIL), three feature encoders (ResNet50, CTransPath, UNI2), and four patch extraction techniques (varying sizes and overlap) using the PANDA dataset (10,616 WSIs), yielding 72 experimental configurations. The methodology used distributed cloud computing to process over 31 million tissue patches, implementing advanced attention mechanisms to ensure clinical interpretability through Grad-CAM visualizations. The optimum configuration (UNI2 encoder with ILRA-MIL, 256×256 patches, 50% overlap) achieved 78.75% accuracy and 90.12% quadratic weighted kappa (QWK), outperforming traditional methods and approaching expert pathologist-level diagnostic capability. Overlapping smaller patches offered the best balance of spatial resolution and contextual information, while domain-specific foundation models performed noticeably better than generic encoders. This work is the first large-scale, comprehensive comparison of weakly supervised MIL methods for prostate cancer diagnosis and grading. The proposed approach has excellent clinical diagnostic performance, scalability, practical feasibility through cloud computing, and interpretability using visualization tools.
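The MIL architectures compared above share a common core: an attention mechanism that scores individual patch embeddings and pools them into a single slide-level representation. A minimal NumPy sketch of that shared attention-pooling step (illustrative shapes and random weights; not any specific architecture from the study):

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attention_mil_pool(instances, w, v):
    """Attention-based MIL pooling: score each patch embedding, then
    combine all patches into one bag (slide-level) representation."""
    scores = np.tanh(instances @ v) @ w   # one attention logit per patch
    alpha = softmax(scores)               # weights sum to 1 over the bag
    bag = alpha @ instances               # weighted average of embeddings
    return bag, alpha

rng = np.random.default_rng(0)
instances = rng.normal(size=(100, 32))    # 100 patch embeddings, dim 32
v = rng.normal(size=(32, 16))             # projection weights (assumed shapes)
w = rng.normal(size=16)                   # attention vector
bag, alpha = attention_mil_pool(instances, w, v)
```

The attention weights `alpha` are also what makes such models interpretable: high-weight patches can be highlighted on the slide, analogous to the Grad-CAM visualizations mentioned in the abstract.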
Naveed Anwer Butt, Dilawaiz Sarwat, Irene Delgado Noya (irene.delgado@uneatlantico.es), Kilian Tutusaus (kilian.tutusaus@uneatlantico.es), Nagwan Abdel Samee, Imran Ashraf
Human Activity Recognition in Domestic Settings Based on Optical Techniques and Ensemble Models
Human activity recognition (HAR) is essential in many applications, such as smart homes, assisted living, healthcare monitoring, rehabilitation, physiotherapy, and geriatric care. Conventional HAR methods use wearable sensors, e.g., acceleration sensors and gyroscopes. However, they are limited by issues such as sensitivity to position, user inconvenience, and potential health risks with long-term use. Vision-based optical camera systems provide a non-intrusive alternative, but they are susceptible to variations in lighting, occlusions, and privacy issues. This paper presents an optical method of recognizing human domestic activities based on pose estimation and deep learning ensemble models. Skeletal keypoint features are extracted from video data using PoseNet to generate a privacy-preserving representation that captures key motion dynamics without being sensitive to changes in appearance. Data were collected from 30 subjects (15 male and 15 female), yielding 2734 activity samples spanning nine daily domestic activities. Six deep learning architectures were evaluated: the Transformer, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Multilayer Perceptron (MLP), One-Dimensional Convolutional Neural Network (1D CNN), and a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) architecture. The results on the hold-out test set show that the CNN–LSTM architecture achieves an accuracy of 98.78% within our experimental setting. Leave-One-Subject-Out cross-validation further confirms robust generalization across unseen individuals, with CNN–LSTM achieving a mean accuracy of 97.21% ± 1.84% across 30 subjects. The results demonstrate that vision-based pose estimation with deep learning is a useful, precise, and non-intrusive approach to HAR in smart healthcare and home automation systems.
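Before a sequence model like the CNN–LSTM above can consume pose data, the per-frame keypoints must be normalized and resampled to a fixed length. A minimal NumPy sketch of that preprocessing step, assuming 17 PoseNet keypoints per frame and an illustrative sequence length (the paper's exact pipeline may differ):

```python
import numpy as np

def keypoints_to_sequence(frames_kpts, seq_len=60):
    """Turn per-frame 2-D keypoints (T, 17, 2) into a fixed-length,
    translation/scale-normalized feature sequence of shape (seq_len, 34)."""
    kpts = np.asarray(frames_kpts, dtype=float)
    # centre each frame on its keypoint centroid, scale by overall spread,
    # so the features ignore where in the frame the person stands
    centred = kpts - kpts.mean(axis=1, keepdims=True)
    scale = np.linalg.norm(centred, axis=(1, 2), keepdims=True) + 1e-8
    flat = (centred / scale).reshape(len(kpts), -1)   # (T, 34)
    # resample to a fixed number of timesteps by linear interpolation
    idx = np.linspace(0, len(flat) - 1, seq_len)
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    return (1 - frac) * flat[lo] + frac * flat[hi]

clip = np.random.default_rng(1).normal(size=(45, 17, 2))  # 45-frame toy clip
seq = keypoints_to_sequence(clip)
```

The resulting fixed-shape tensor is what allows a single CNN–LSTM to be trained on clips of varying duration.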
Muhammad Amjad Raza, Nasir Mehmood, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Roberto Marcelo Álvarez (roberto.alvarez@uneatlantico.es), Yini Airet Miró Vera (yini.miro@uneatlantico.es), Isabel de la Torre Díez
Introduction: Jackfruit cultivation is highly affected by leaf diseases that reduce yield, fruit quality, and farmer income. Early diagnosis remains challenging due to the limitations of manual inspection and the lack of automated and scalable disease detection systems. Existing deep-learning approaches often suffer from limited generalization and high computational cost, restricting real-time field deployment. Methods: This study proposes CNNAttLSTM, a hybrid deep-learning architecture integrating Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) units, and an attention mechanism for multi-class classification of algal leaf spot, black spot, and healthy jackfruit leaves. Each image is divided into ordered 56×56 spatial patches, treated as pseudo-temporal sequences to enable the LSTM to capture contextual dependencies across different leaf regions. Spatial features are extracted via Conv2D, MaxPooling, and GlobalAveragePooling layers; temporal modeling is performed by LSTM units; and an attention mechanism assigns adaptive weights to emphasize disease-relevant regions. Experiments were conducted on a publicly available Kaggle dataset comprising 38,019 images, using predefined training, validation, and testing splits. Results: The proposed CNNAttLSTM model achieved 99% classification accuracy, outperforming the baseline CNN (86%) and CNN–LSTM (98%) models. It required only 3.7 million parameters, trained in 45 minutes on an NVIDIA Tesla T4 GPU, and achieved an inference time of 22 milliseconds per image, demonstrating high computational efficiency. The patch-based pseudo-temporal approach improved spatial–temporal feature representation, enabling the model to distinguish subtle differences between visually similar disease classes. Discussion: Results show that combining spatial feature extraction with temporal modeling and attention significantly enhances robustness and classification performance in plant disease detection. The lightweight design enables real-time and edge-device deployment, addressing a major limitation of existing deep-learning techniques. The findings highlight the potential of CNNAttLSTM for scalable, efficient, and accurate agricultural disease monitoring and broader precision agriculture applications.
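The pseudo-temporal idea described above, splitting an image into ordered patches that a recurrent layer reads like a sequence, can be sketched in a few lines of NumPy. This illustrates the patching only, not the paper's full model; the 224×224 input size is an assumption chosen to tile evenly into 56×56 patches:

```python
import numpy as np

def image_to_patch_sequence(img, patch=56):
    """Split an (H, W, C) image into ordered patch×patch tiles, read
    row-major, so a recurrent model can treat them as a pseudo-temporal
    sequence (illustrating the patching idea, not the paper's exact code)."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    tiles = img.reshape(h // patch, patch, w // patch, patch, c)
    tiles = tiles.transpose(0, 2, 1, 3, 4)        # (rows, cols, p, p, c)
    return tiles.reshape(-1, patch, patch, c)     # (rows*cols, p, p, c)

# toy 224×224 RGB image -> sequence of 16 patches of 56×56
img = np.arange(224 * 224 * 3, dtype=float).reshape(224, 224, 3)
seq = image_to_patch_sequence(img)
```

Each patch would then pass through the convolutional feature extractor, and the resulting 16-step feature sequence feeds the LSTM and attention layers.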
Gaurav Tuteja, Fuad Ali Mohammed Al-Yarimi, Amna Ikram, Rupesh Gupta, Ateeq Ur Rehman, Jeewan Singh, Irene Delgado Noya (irene.delgado@uneatlantico.es), Luis Alonso Dzul López (luis.dzul@uneatlantico.es)
Enhancing fault detection in new energy vehicles via novel ensemble approach
New energy vehicles (NEVs) have emerged as a sustainable alternative to conventional vehicles, but face unresolved reliability challenges due to their complex electronic systems and varying operating conditions. Faults in drivetrain and battery systems, occurring at rates up to 12% annually, present significant barriers to the widespread adoption of NEVs. This study proposes a robust fault detection framework that applies multiple machine learning and deep learning models to address these challenges. The research utilizes the benchmark NEV fault diagnosis dataset, which contains real-world sensor data from NEVs. The models tested include logistic regression, passive-aggressive classifier, ridge classifier, perceptron, gated recurrent unit (GRU), convolutional neural network, and artificial neural network. The proposed ensemble GRULogX model stands out among the implemented models, leveraging GRU with logistic regression and other key classifiers, and achieved 99% accuracy, demonstrating high precision and recall. Cross-validation and hyperparameter optimization were adopted to further ensure the model's generalizability and reliability. This research enhances the fault detection capabilities of NEVs, thereby improving their reliability and supporting the wider adoption of clean energy transportation solutions.
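An ensemble like GRULogX combines the probability outputs of several base learners. A generic soft-voting sketch in NumPy (the base-model outputs and class layout below are toy values, not the paper's actual combiner):

```python
import numpy as np

def soft_vote(prob_matrices, weights=None):
    """Soft-voting ensemble: average (optionally weighted) class-probability
    outputs of several base models, then take the argmax per sample."""
    probs = np.stack(prob_matrices)              # (models, samples, classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    weights = np.asarray(weights, dtype=float)
    avg = np.tensordot(weights, probs, axes=1)   # (samples, classes)
    return avg.argmax(axis=1), avg

# toy outputs of three base models on 2 samples, 2 classes (fault / normal)
p_gru = np.array([[0.9, 0.1], [0.4, 0.6]])
p_log = np.array([[0.8, 0.2], [0.3, 0.7]])
p_cnn = np.array([[0.7, 0.3], [0.6, 0.4]])
labels, avg = soft_vote([p_gru, p_log, p_cnn])
```

Weighting the sequence model (here the hypothetical `p_gru`) more heavily than the linear classifiers is one common design choice when the base learners differ sharply in accuracy.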
Iqra Akhtar, Mahnoor Nabeel, Umair Shahid, Kashif Munir, Ali Raza, Irene Delgado Noya (irene.delgado@uneatlantico.es), Santos Gracia Villar (santos.gracia@uneatlantico.es), Imran Ashraf
Mango is one of the most beloved fruits and plays an indispensable role in the agricultural economies of many tropical countries like Pakistan, India, and other Southeast Asian countries. Similar to other fruits, mango cultivation is also threatened by various diseases, including Anthracnose and Red Rust. Although farmers try to mitigate such situations on time, early and accurate detection of mango diseases remains challenging due to multiple factors, such as limited understanding of disease diversity, similarity in symptoms, and frequent misclassification. To avoid such instances, this study proposes a multimodal deep learning framework that leverages both leaf and fruit images to improve classification performance and generalization. Individual CNN-based pre-trained models, including ResNet-50, MobileNetV2, EfficientNet-B0, and ConvNeXt, were trained separately on curated datasets of mango leaf and fruit diseases. A novel Modality Attention Fusion (MAF) mechanism was introduced to dynamically weight and combine predictions from both modalities based on their discriminative strength, as some diseases are more prominent on leaves than on fruits, and vice versa. To address overfitting and improve generalization, a class-aware augmentation pipeline was integrated, which performs augmentation according to the specific characteristics of each class. The proposed attention-based fusion strategy significantly outperformed individual models and static fusion approaches, achieving a test accuracy of 99.08%, an F1 score of 99.03%, and an ROC-AUC of 99.96% using EfficientNet-B0 as the base. To evaluate the model's real-world applicability, an interactive web application was developed using the Django framework and evaluated through out-of-distribution (OOD) testing on diverse mango samples collected from public sources. These findings underline the importance of combining visual cues from multiple organs of plants and adapting model attention to contextual features for real-world agricultural diagnostics.
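The core of an attention fusion step like MAF is weighting each modality's prediction by a normalized attention score. A generic NumPy sketch of that idea, assuming fixed per-modality logits for illustration (the paper's learned gating is more elaborate):

```python
import numpy as np

def modality_attention_fuse(p_leaf, p_fruit, logit_leaf, logit_fruit):
    """Fuse two modality predictions with softmax-normalized attention
    weights (a generic sketch of attention fusion, not the MAF module)."""
    logits = np.array([logit_leaf, logit_fruit], dtype=float)
    w = np.exp(logits - logits.max())
    w = w / w.sum()                              # modality weights sum to 1
    fused = w[0] * np.asarray(p_leaf) + w[1] * np.asarray(p_fruit)
    return fused, w

# toy class probabilities from the two branches for a 3-class problem
p_leaf = np.array([0.7, 0.2, 0.1])    # leaf branch output
p_fruit = np.array([0.3, 0.6, 0.1])   # fruit branch output
# here the leaf branch is trusted more (higher attention logit)
fused, w = modality_attention_fuse(p_leaf, p_fruit, 2.0, 0.0)
```

In a trained system the attention logits would be produced per sample by a small network, letting the fusion lean on whichever organ shows the disease more clearly.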
Muhammad Mohsin, Muhammad Shadab Alam Hashmi, Irene Delgado Noya (irene.delgado@uneatlantico.es), Helena Garay (helena.garay@uneatlantico.es), Nagwan Abdel Samee, Imran Ashraf
