DiabSense: early diagnosis of non-insulin-dependent diabetes mellitus using smartphone-based human activity recognition and diabetic retinopathy analysis with Graph Neural Network
Article
Subjects > Biomedicine
Subjects > Engineering
Universidad Europea del Atlántico > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Articles and Books
Universidad Internacional Iberoamericana México > Research > Scientific Production
Universidad Internacional Iberoamericana Puerto Rico > Research > Scientific Production
Universidad de La Romana > Research > Scientific Production
Open
English
Non-Insulin-Dependent Diabetes Mellitus (NIDDM) is a chronic health condition caused by high blood sugar levels that, if not treated early, can lead to serious complications such as blindness. Human Activity Recognition (HAR) offers potential for early NIDDM diagnosis, which is emerging as a key application of HAR technology. This research introduces DiabSense, a state-of-the-art smartphone-based system for early staging of NIDDM. DiabSense combines HAR with Diabetic Retinopathy (DR) analysis by leveraging two different Graph Neural Networks (GNNs): HAR covers a comprehensive array of 23 human activities resembling diabetes symptoms, while DR is a prevalent complication of NIDDM. The Graph Attention Network (GAT) used for HAR achieved 98.32% accuracy on sensor data, while the Graph Convolutional Network (GCN) scored 84.48% on the APTOS 2019 dataset, surpassing other state-of-the-art models. The trained GCN analyzed retinal images of four experimental human subjects to generate DR reports, and the GAT estimated the average duration of their daily activities over 30 days. Daily activities during the non-diabetic periods of diabetic patients were measured and compared with those of the experimental subjects to derive risk factors. Fusing these risk factors with DR conditions enabled early diagnosis recommendations for the experimental subjects despite the absence of any apparent symptoms. The system's outcome was compared with clinical diagnosis reports for the experimental subjects obtained via the A1C test; the results confirmed that the system accurately assessed which subjects required early diagnosis. Overall, DiabSense exhibits significant potential for ensuring early NIDDM treatment, improving millions of lives worldwide.
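The abstract's HAR component relies on graph attention over sensor-derived graphs. As an illustrative sketch only (not the paper's implementation), a single-head graph attention layer of the kind GAT uses can be written in plain NumPy; all array names and sizes below are assumptions:

```python
import numpy as np

def gat_layer(X, A, W, a_src, a_dst):
    """Single-head Graph Attention (GAT) layer, NumPy sketch.

    X: (N, F) node features; A: (N, N) binary adjacency with self-loops;
    W: (F, F') linear projection; a_src/a_dst: (F',) attention vectors.
    Returns the (N, N) attention matrix and (N, F') node embeddings.
    """
    H = X @ W                                   # project node features
    # attention logits e[i, j] = LeakyReLU(a_src·h_i + a_dst·h_j)
    e = H @ a_src[:, None] + (H @ a_dst)[None, :]
    e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -1e9)                # mask non-edges
    e = e - e.max(axis=1, keepdims=True)        # numerically stable softmax
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return alpha, np.tanh(alpha @ H)            # attention-weighted aggregation
```

Each node attends only to its graph neighbours, so each row of `alpha` is a probability distribution over that node's neighbourhood.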
Metadata
Authors: Alam, Md Nuho Ul; Hasnine, Ibrahim; Bahadur, Erfanul Hoque; Masum, Abdul Kadar Muhammad; Briones Urbano, Mercedes; Masías Vergara, Manuel; Uddin, Jia; Ashraf, Imran and Samad, Md. Abdus
Email: mercedes.briones@uneatlantico.es, manuel.masias@uneatlantico.es (others not specified)
(2024)
DiabSense: early diagnosis of non-insulin-dependent diabetes mellitus using smartphone-based human activity recognition and diabetic retinopathy analysis with Graph Neural Network.
Journal of Big Data, 11 (1).
ISSN 2196-1115
Text: s40537-024-00959-w.pdf, available under a Creative Commons Attribution Non-commercial No Derivatives license. Download (4MB)
Document Type: Article
Keywords: Graph Neural Network; Diabetic retinopathy; Human activity recognition; Diabetes; NIDDM
Subject Classification: Subjects > Biomedicine; Subjects > Engineering
Divisions: Universidad Europea del Atlántico > Research > Scientific Production; Fundación Universitaria Internacional de Colombia > Research > Articles and Books; Universidad Internacional Iberoamericana México > Research > Scientific Production; Universidad Internacional Iberoamericana Puerto Rico > Research > Scientific Production; Universidad de La Romana > Research > Scientific Production
Deposited: 19 Sep 2024 23:30
Last Modified: 19 Sep 2024 23:30
URI: https://repositorio.unincol.edu.co/id/eprint/14282
Efficient CNN architecture with image sensing and algorithmic channeling for dataset harmonization
The process of image formulation uses semantic analysis to extract influential vectors from image components. The proposed approach integrates DenseNet with ResNet-50, VGG-19, and GoogLeNet using an innovative bonding process that establishes algorithmic channeling between these models. The goal is compact, efficient image feature vectors that process data in parallel regardless of color or grayscale input and work across different datasets and semantic categories. Image patching techniques with corner straddling and isolated responses help detect peaks and junctions while addressing anisotropic noise through curvature-based computations and auto-correlation calculations. An integrated channeled algorithm processes the refined features by uniting local-global features with primitive-parameterized features and regioned feature vectors. K-nearest neighbor indexing methods are then used to analyze and retrieve images from the harmonized signature collection effectively. Extensive experimentation is performed on state-of-the-art datasets including Caltech-101, Cifar-10, Caltech-256, Cifar-100, Corel-10000, 17-Flowers, COIL-100, FTVL Tropical Fruits, Corel-1000, and Zubud. These results place the contribution at the forefront of deep and complex image sensing analysis, delivering strong channeling accuracy together with robust dataset harmonization performance.
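The retrieval step described above — K-nearest neighbor indexing over a collection of image signatures — reduces to a nearest-neighbor search in feature space. A minimal sketch, with synthetic signatures standing in for the harmonized collection:

```python
import numpy as np

def knn_retrieve(signatures, query, k=3):
    """Rank a signature collection by Euclidean distance to a query
    feature vector and return the indices of the k nearest images."""
    d = np.linalg.norm(signatures - query, axis=1)  # distance to every image
    return np.argsort(d)[:k]                        # k closest, nearest first
```

Real systems typically replace the brute-force scan with an approximate index, but the ranking principle is the same.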
Khadija Kanwal, Khawaja Tehseen Ahmad, Aiza Shabir, Li Jing, Helena Garay (helena.garay@uneatlantico.es), Luis Eduardo Prado González (uis.prado@uneatlantico.es), Hanen Karamti, Imran Ashraf
Efficient image retrieval from a variety of datasets is crucial in today's digital world. In Content-Based Image Retrieval (CBIR), visual properties are represented using primitive image signatures, and feature vectors are employed to classify images into predefined categories. This research presents a unique suppression-based feature identification technique that locates interest points by computing the sum of pixel derivatives and the differentials for corner scores. Scale-space interpolation is applied to define interest points by combining color features from spatially ordered, L2-normalized coefficients with shape and object information. Object-based feature vectors are formed from high-variance coefficients to reduce complexity and are converted into bag-of-visual-words (BoVW) representations for effective retrieval and ranking. The presented method combines feature vectors for information synthesis and improves the discriminating strength of the retrieval system by extracting deep image features, including primitive, spatial, and overlaid features, using multilayer fusion of Convolutional Neural Networks (CNNs). Extensive experimentation is performed on standard image dataset benchmarks, including ALOT, Cifar-10, Corel-10k, Tropical Fruits, and Zubud. These datasets cover a wide range of categories, including shape, color, texture, spatial layout, and complicated objects. Experimental results demonstrate considerable improvements in precision and recall, average retrieval precision and recall, and mean average precision and recall across various image semantic groups within versatile datasets. The fusion of traditional feature extraction methods with multilevel CNNs advances image sensing and retrieval systems, promising more accurate and efficient image retrieval solutions.
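The BoVW step mentioned above — quantizing local descriptors against a visual-word codebook into a normalized histogram — can be sketched as follows; the codebook and descriptors here are synthetic placeholders, not the paper's data:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalized bag-of-visual-words histogram."""
    # pairwise distances: (n_descriptors, n_codewords)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                    # hard vector quantization
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                    # normalize to a distribution
```

The resulting fixed-length histogram is what gets indexed and ranked, regardless of how many descriptors each image produced.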
Jyotismita Chaki, Aiza Shabir, Khawaja Tehseen Ahmed, Arif Mahmood, Helena Garay (helena.garay@uneatlantico.es), Luis Eduardo Prado González (uis.prado@uneatlantico.es), Imran Ashraf
Fundus image classification using feature concatenation for early diagnosis of retinal disease
Background: Deep learning models assist ophthalmologists in the early detection of diseases from retinal images and their timely treatment. Aim: Owing to the robust and accurate results of deep learning models, we aim to use a convolutional neural network (CNN) to provide a non-invasive method for early detection of eye diseases. Methodology: We used a hybridized CNN with deep learning (DL), based on two separate CNN blocks, to identify Optic Disc Cupping, Diabetic Retinopathy, Media Haze, and Healthy images. We used the RFMiD dataset, which contains various categories of fundus images representing different eye diseases. Data augmentation, resizing, cropping, and one-hot encoding are used among other preprocessing techniques to improve the performance of the proposed model. Color fundus images are analyzed by CNNs to extract relevant features. Two CNN models that extract deep features are trained in parallel. To obtain more noticeable features, the gathered features are further fused using the Canonical Correlation Analysis fusion approach. To assess effectiveness, we employed eight classification algorithms: gradient boosting, support vector machines, voting ensemble, medium KNN, Naive Bayes, coarse KNN, random forest, and fine KNN. Results: With the greatest accuracy of 93.39%, ensemble learning performed better than the other algorithms. Conclusion: The accuracy rates suggest that the deep learning model has learned to distinguish between different eye disease categories and healthy images effectively. It contributes to the field of eye disease detection through the analysis of color fundus images by providing a reliable and efficient diagnostic system.
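The fusion step described above uses Canonical Correlation Analysis (CCA) to combine the two CNN feature sets. A minimal SVD-based CCA sketch, under the assumption of two generic feature views rather than the paper's actual CNN outputs:

```python
import numpy as np

def orthobasis(M):
    """Orthonormal basis for the column space of a centered data matrix."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > 1e-10]

def cca_fuse(X, Y, k=2):
    """Minimal Canonical Correlation Analysis fusion of two feature views.
    Returns the top-k canonical correlations and the concatenated canonical
    variates of both views as one fused feature matrix."""
    Ux = orthobasis(X - X.mean(0))               # whiten view 1
    Uy = orthobasis(Y - Y.mean(0))               # whiten view 2
    U, corr, Vt = np.linalg.svd(Ux.T @ Uy)       # singular values = correlations
    Zx = Ux @ U[:, :k]                           # canonical variates, view 1
    Zy = Uy @ Vt.T[:, :k]                        # canonical variates, view 2
    return corr[:k], np.hstack([Zx, Zy])         # fused representation
```

Concatenating the maximally correlated projections of both views is one common way of using CCA for feature fusion before a downstream classifier.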
Sara Ejaz, Hafiz U Zia, Fiaz Majeed, Umair Shafique, Stefanía Carvajal-Altamiranda (stefania.carvajal@uneatlantico.es), Vivian Lipari (vivian.lipari@uneatlantico.es), Imran Ashraf
The aim of the present work was to determine whether the State-Trait Anxiety Inventory (STAI) score correlates with pupillary diameter and, if so, to develop a predictive model of anxiety based on the pupillary diameter of students exposed to high-fidelity clinical simulation. This was a randomized, blinded, simulation-based clinical trial. The study was conducted at the Advanced Clinical Simulation Center, Faculty of Medicine, Valladolid University (Spain), from February 1 to April 15, 2023, and involved volunteer sixth-year undergraduate medical students. The STAI score, vital signs (oxygen saturation, perfusion index, blood pressure, heart rate, and temperature), and pupillary response were assessed. The primary outcomes were the delta (pre/post-simulation) of the state STAI score and the delta of the pupillary diameter. Sixty-one sixth-year students fulfilled the inclusion criteria. There was no difference regarding the clinical scenario. There was a statistically significant correlation between the state STAI score and pupillary diameter. The predictive model had an AUC of 0.876, with the pupillary diameter delta being the only statistically significant variable for anxiety prediction. Our results showed that both the pupillary response and the STAI score allowed the identification of students with disabling anxiety. These results could pave the way for the development of appropriate protocols that allow personalized tutoring of students with elevated anxiety levels.
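The AUC of 0.876 reported above summarizes how well the predictor separates anxious from non-anxious students. As a generic illustration (the scores below are made up, not study data), the AUC equals the Mann-Whitney probability that a random positive case outranks a random negative one:

```python
def rank_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive case scores above a
    randomly chosen negative case (ties count as half a win)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means perfect separation; 0.5 means the predictor is no better than chance.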
Francisco Martín-Rodríguez, Rafael Martín-Sánchez, Carlos del Pozo Vegas, Raúl Lopez-Izquierdo, José Luis Martín-Conty, Eduardo René Silva Alvarado (eduardo.silva@funiber.org), Santos Gracia Villar (santos.gracia@uneatlantico.es), Luis Alonso Dzul López (luis.dzul@uneatlantico.es), Silvia Aparicio Obregón (silvia.aparicio@uneatlantico.es), Rubén Calderón Iglesias (ruben.calderon@uneatlantico.es), Ancor Sanz-García, Miguel Ángel Castro Villamor
Novel Transfer Learning Approach for Detecting Infected and Healthy Maize Crop Using Leaf Images
Maize is a staple crop worldwide, essential for food security, livestock feed, and industrial uses. Its health directly impacts agricultural productivity and economic stability. Effective detection of maize crop health is crucial for preventing disease spread and ensuring high yields. This study presents VG-GNBNet, an innovative transfer learning model that accurately detects healthy and infected maize crops through a two-step feature extraction process. The proposed model begins by leveraging the visual geometry group (VGG-16) network to extract initial pixel-based spatial features from the crop images. These features are then further refined using the Gaussian Naive Bayes (GNB) model and feature decomposition-based matrix factorization mechanism, which generates more informative features for classification purposes. This study incorporates machine learning models to ensure a comprehensive evaluation. By comparing VG-GNBNet's performance against these models, we validate its robustness and accuracy. Integrating deep learning and machine learning techniques allows VG-GNBNet to capitalize on the strengths of both approaches, leading to superior performance. Extensive experiments demonstrate that the proposed VG-GNBNet+GNB model significantly outperforms other models, achieving an impressive accuracy score of 99.85%. This high accuracy highlights the model's potential for practical application in the agricultural sector, where the precise detection of crop health is crucial for effective disease management and yield optimization.
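The second stage described above classifies extracted features with Gaussian Naive Bayes. A minimal, self-contained NumPy version of that classifier — a generic sketch on synthetic features, not the VG-GNBNet code:

```python
import numpy as np

class TinyGaussianNB:
    """Gaussian Naive Bayes: fit per-class feature means and variances,
    predict by maximum log-likelihood (uniform class priors assumed)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-6 for c in self.classes])
        return self

    def predict(self, X):
        # log N(x; mu, var), summed over conditionally independent features
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None, :, :] - self.mu[:, None, :]) ** 2
                     / self.var[:, None, :]).sum(-1)   # shape (C, N)
        return self.classes[ll.argmax(0)]
```

In the pipeline the abstract describes, the rows of `X` would be the refined feature vectors produced by the VGG-16 extraction step.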
Muhammad Usama Tanveer, Kashif Munir, Ali Raza, Laith Abualigah, Helena Garay (helena.garay@uneatlantico.es), Luis Eduardo Prado González (uis.prado@uneatlantico.es), Imran Ashraf