Ultra-Wide Band Radar Empowered Driver Drowsiness Detection with Convolutional Spatial Feature Engineering and Artificial Intelligence
Article
Subjects > Engineering
Universidad Europea del Atlántico > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Articles and Books
Universidad Internacional Iberoamericana México > Research > Scientific Production
Universidad Internacional do Cuanza > Research > Scientific Production
Universidad de La Romana > Research > Scientific Production
Open
English
Siddiqui, Hafeez Ur Rehman; Akmal, Ambreen; Iqbal, Muhammad; Saleem, Adil Ali; Raza, Muhammad Amjad; Zafar, Kainat; Zaib, Aqsa; Dudley, Sandra; Arambarri, Jon (jon.arambarri@uneatlantico.es); Kuc Castilla, Ángel Gabriel and Rustam, Furqan
(2024) Ultra-Wide Band Radar Empowered Driver Drowsiness Detection with Convolutional Spatial Feature Engineering and Artificial Intelligence. Sensors, 24 (12), p. 3754. ISSN 1424-8220
Text
sensors-24-03754 (1).pdf. Available under License Creative Commons Attribution. Download (7MB)
Abstract
Driving while drowsy poses significant risks, including reduced cognitive function and the potential for accidents, which can lead to severe consequences such as trauma, economic losses, injuries, or death. Artificial intelligence can enable effective detection of driver drowsiness, helping to prevent accidents and improve driver performance. This research addresses the need for real-time, accurate drowsiness detection to mitigate the impact of fatigue-related accidents. Ultra-wideband radar data collected over five minutes were segmented into one-minute chunks and transformed into grayscale images. Spatial features were extracted from the images using a two-dimensional convolutional neural network, and these features were then used to train and test multiple machine learning classifiers. The ensemble classifier RF-XGB-SVM, which combines Random Forest, XGBoost, and Support Vector Machine under a hard voting criterion, performed best with an accuracy of 96.6%. The approach was further validated with a k-fold cross-validation score of 97% and a standard deviation of 0.018. Finally, the dataset was augmented using generative adversarial networks, which improved the accuracy of all models; among them, RF-XGB-SVM again outperformed the rest with an accuracy of 99.58%.
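As a rough illustration of the pipeline summarized in the abstract, the sketch below trains a hard-voting RF-XGB-SVM ensemble on pre-extracted CNN features and reports a k-fold score. The file names, hyperparameters, and fold count are illustrative assumptions rather than the authors' exact configuration; the radar-to-grayscale-image preprocessing and 2D CNN feature extraction are assumed to have already produced the feature matrix.

```python
# Minimal sketch of the RF-XGB-SVM hard-voting ensemble described in the
# abstract. X is assumed to hold CNN-extracted spatial feature vectors and
# y the alert/drowsy labels; all names and hyperparameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

X = np.load("cnn_spatial_features.npy")   # hypothetical feature file
y = np.load("labels.npy")                 # 0 = alert, 1 = drowsy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Hard voting: each base model casts one vote per sample and the majority
# class wins, mirroring the RF-XGB-SVM combination in the paper.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("xgb", XGBClassifier(n_estimators=100, eval_metric="logloss")),
        ("svm", SVC(kernel="rbf", C=1.0)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))

# k-fold cross-validation, analogous to the reported 97% mean and 0.018 std.
scores = cross_val_score(ensemble, X, y, cv=10)
print(f"k-fold mean: {scores.mean():.3f}, std: {scores.std():.3f}")
```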
| Document Type: | Article |
| --- | --- |
| Keywords: | drowsiness; ultra-wideband radar; convolutional neural network; spatial features; ensemble models |
| Subject classification: | Subjects > Engineering |
| Divisions: | Universidad Europea del Atlántico > Research > Scientific Production; Fundación Universitaria Internacional de Colombia > Research > Articles and Books; Universidad Internacional Iberoamericana México > Research > Scientific Production; Universidad Internacional do Cuanza > Research > Scientific Production; Universidad de La Romana > Research > Scientific Production |
| Deposited: | 17 Jun 2024 23:30 |
| Last Modified: | 17 Jun 2024 23:30 |
| URI: | https://repositorio.unincol.edu.co/id/eprint/12747 |
Actions (login required)
View Item
<a href="/17140/1/s41598-025-90616-w.pdf" class="ep_document_link"><img class="ep_doc_icon" alt="[img]" src="/style/images/fileicons/text.png" border="0"/></a>
en
open
Efficient CNN architecture with image sensing and algorithmic channeling for dataset harmonization
The process of image formulation uses semantic analysis to extract influential vectors from image components. The proposed approach integrates DenseNet with ResNet-50, VGG-19, and GoogLeNet using an innovative bonding process that establishes algorithmic channeling between these models. The goal is compact, efficient image feature vectors that process data in parallel regardless of input color or grayscale consistency and work across different datasets and semantic categories. Image patching techniques with corner straddling and isolated responses help detect peaks and junctions, while anisotropic noise is addressed through curvature-based computations and auto-correlation calculations. An integrated channeled algorithm processes the refined features by uniting local-global features with primitive-parameterized features and regioned feature vectors. K-nearest neighbor indexing methods are then used to analyze and retrieve images from the harmonized signature collection effectively. Extensive experimentation is performed on state-of-the-art datasets including Caltech-101, Cifar-10, Caltech-256, Cifar-100, Corel-10000, 17-Flowers, COIL-100, FTVL Tropical Fruits, Corel-1000, and Zubud. The method delivers strong channeling accuracy together with robust dataset harmonization performance, placing it at the forefront of deep image sensing analysis.
Khadija Kanwal, Khawaja Tehseen Ahmad, Aiza Shabir, Li Jing, Helena Garay (helena.garay@uneatlantico.es), Luis Eduardo Prado González (uis.prado@uneatlantico.es), Hanen Karamti, Imran Ashraf
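A minimal sketch of the k-nearest-neighbor indexing and retrieval step this abstract mentions, assuming the harmonized feature signatures have already been computed and stored; the file name, distance metric, and neighbor count are illustrative assumptions.

```python
# Sketch of KNN indexing over precomputed harmonized feature signatures.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical signature matrix of shape (n_images, feature_dim).
signatures = np.load("harmonized_signatures.npy")
index = NearestNeighbors(n_neighbors=10, metric="euclidean").fit(signatures)

def retrieve(query_vec, k=10):
    """Return indices and distances of the k images closest to the query."""
    dist, idx = index.kneighbors(query_vec.reshape(1, -1), n_neighbors=k)
    return idx[0], dist[0]
```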
<a href="/17392/1/journal.pone.0317863.pdf" class="ep_document_link"><img class="ep_doc_icon" alt="[img]" src="/style/images/fileicons/text.png" border="0"/></a>
en
open
Efficient image retrieval from a variety of datasets is crucial in today's digital world. In Content-Based Image Retrieval (CBIR), visual properties are represented using primitive image signatures, and feature vectors are employed to classify images into predefined categories. This research presents a suppression-based feature identification technique that locates interest points by computing the sum of pixel derivatives and their differentials for corner scores. Scale-space interpolation is applied to define interest points by combining color features from spatially ordered, L2-normalized coefficients with shape and object information. Object-based feature vectors are formed from high-variance coefficients to reduce complexity and are converted into bag-of-visual-words (BoVW) representations for effective retrieval and ranking. The presented method synthesizes these feature vectors and improves the discriminating strength of the retrieval system by extracting deep image features, including primitive, spatial, and overlaid features, using multilayer fusion of Convolutional Neural Networks (CNNs). Extensive experimentation is performed on standard image benchmarks, including ALOT, Cifar-10, Corel-10k, Tropical Fruits, and Zubud, which cover a wide range of categories including shape, color, texture, spatial structure, and complicated objects. Experimental results demonstrate considerable improvements in precision and recall, average retrieval precision and recall, and mean average precision and recall across various image semantic groups. The fusion of traditional feature extraction methods with multilevel CNN features advances image sensing and retrieval systems, promising more accurate and efficient image retrieval solutions.
Jyotismita Chaki, Aiza Shabir, Khawaja Tehseen Ahmed, Arif Mahmood, Helena Garay (helena.garay@uneatlantico.es), Luis Eduardo Prado González (uis.prado@uneatlantico.es), Imran Ashraf
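The BoVW conversion mentioned above can be sketched as follows, assuming local descriptors have already been extracted for each image; the vocabulary size and normalization are illustrative choices, not the paper's exact settings.

```python
# Minimal bag-of-visual-words sketch: cluster local descriptors into a
# visual vocabulary, then represent each image as a normalized histogram
# of nearest-word assignments.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_list, n_words=256):
    """Cluster all local descriptors (one array per image) into words."""
    stacked = np.vstack(descriptor_list)        # (total_descriptors, dim)
    return KMeans(n_clusters=n_words, n_init=10).fit(stacked)

def bovw_histogram(descriptors, vocab):
    """L2-normalized histogram of visual-word occurrences for one image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)
```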
<a href="/17450/1/ejaz-et-al-2025-fundus-image-classification-using-feature-concatenation-for-early-diagnosis-of-retinal-disease.pdf" class="ep_document_link"><img class="ep_doc_icon" alt="[img]" src="/style/images/fileicons/text.png" border="0"/></a>
en
open
Fundus image classification using feature concatenation for early diagnosis of retinal disease
Background: Deep learning models assist ophthalmologists in the early detection of diseases from retinal images and enable timely treatment. Aim: Owing to the robust and accurate results of deep learning models, we aim to use a convolutional neural network (CNN) to provide a non-invasive method for early detection of eye diseases. Methodology: We used a hybridized CNN with deep learning (DL) based on two separate CNN blocks to identify Optic Disc Cupping, Diabetic Retinopathy, Media Haze, and Healthy images. We used the RFMiD dataset, which contains various categories of fundus images representing different eye diseases. Data augmentation, resizing, cropping, and one-hot encoding were used among other preprocessing techniques to improve the performance of the proposed model. Color fundus images were analyzed by CNNs to extract relevant features: two CNN models that extract deep features were trained in parallel, and to obtain more noticeable features, the gathered features were further fused using the Canonical Correlation Analysis fusion approach. To assess effectiveness, we employed eight classification algorithms: gradient boosting, support vector machines, a voting ensemble, medium KNN, Naive Bayes, coarse KNN, random forest, and fine KNN. Results: With the greatest accuracy of 93.39%, the ensemble learning performed better than the other algorithms. Conclusion: The accuracy rates suggest that the deep learning model has learned to distinguish between different eye disease categories and healthy images effectively. It contributes to the field of eye disease detection through the analysis of color fundus images by providing a reliable and efficient diagnostic system.
Sara Ejaz, Hafiz U Zia, Fiaz Majeed, Umair Shafique, Stefanía Carvajal-Altamiranda (stefania.carvajal@uneatlantico.es), Vivian Lipari (vivian.lipari@uneatlantico.es), Imran Ashraf
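A minimal sketch of the Canonical Correlation Analysis fusion step described above, assuming the two parallel CNN blocks have already produced per-image feature matrices; the file names, component count, and concatenation choice are illustrative assumptions.

```python
# Sketch of CCA fusion: project features from two CNN branches into a
# shared canonical space and concatenate the projections as the fused
# representation fed to the downstream classifiers.
import numpy as np
from sklearn.cross_decomposition import CCA

feats_a = np.load("cnn_branch_a.npy")   # hypothetical (n_images, d1)
feats_b = np.load("cnn_branch_b.npy")   # hypothetical (n_images, d2)

cca = CCA(n_components=64)              # illustrative component count
proj_a, proj_b = cca.fit_transform(feats_a, feats_b)
fused = np.hstack([proj_a, proj_b])     # fused feature vectors
```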
<a href="/17594/1/s41598-025-95290-6.pdf" class="ep_document_link"><img class="ep_doc_icon" alt="[img]" src="/style/images/fileicons/text.png" border="0"/></a>
en
open
The aim of the present work was to determine the correlation between the State-Trait Anxiety Inventory (STAI) score and pupillary diameter and, if such a correlation exists, to develop a predictive model of anxiety based on the pupillary diameter of students exposed to high-fidelity clinical simulation. This was a randomized, blinded, simulation-based clinical trial. The study was conducted at the Advanced Clinical Simulation Center, Faculty of Medicine, Valladolid University (Spain), from February 1 to April 15, 2023, and involved volunteer sixth-year undergraduate medical students. The STAI score, vital signs (oxygen saturation, perfusion index, blood pressure, heart rate, and temperature), and pupillary response were assessed. The primary outcomes were the delta (pre/post-simulation) of the state STAI score and the delta of the pupillary diameter. Sixty-one sixth-year students fulfilled the inclusion criteria. There was no difference regarding the clinical scenario. There was a statistically significant correlation between the state STAI score and pupillary diameter. The predictive model had an AUC of 0.876, with the delta of the pupillary diameter being the only statistically significant variable for anxiety prediction. Our results showed that both the pupillary response and the STAI score allowed the identification of students with disabling anxiety. These results could pave the way for the development of appropriate protocols that allow personalized tutoring of students with elevated anxiety levels.
Francisco Martín-Rodríguez, Rafael Martín-Sánchez, Carlos del Pozo Vegas, Raúl Lopez-Izquierdo, José Luis Martín-Conty, Eduardo René Silva Alvarado (eduardo.silva@funiber.org), Santos Gracia Villar (santos.gracia@uneatlantico.es), Luis Alonso Dzul López (luis.dzul@uneatlantico.es), Silvia Aparicio Obregón (silvia.aparicio@uneatlantico.es), Rubén Calderón Iglesias (ruben.calderon@uneatlantico.es), Ancor Sanz-García, Miguel Ángel Castro Villamor
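A minimal sketch of the reported analysis, assuming a tabular export of the study variables; the column names are hypothetical, and logistic regression is one plausible choice for the single-predictor model, since the abstract does not name the model family.

```python
# Sketch: correlate delta state-STAI with delta pupillary diameter, then
# fit a single-variable model and report its AUC (the paper reports 0.876).
import pandas as pd
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("simulation_study.csv")            # hypothetical file
r, p = pearsonr(df["delta_stai_state"], df["delta_pupil_diameter"])
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

X = df[["delta_pupil_diameter"]]                    # single predictor
y = df["disabling_anxiety"]                         # assumed binary label
model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```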
<a href="/15983/1/Food%20Science%20%20%20Nutrition%20-%202025%20-%20Tanveer%20-%20Novel%20Transfer%20Learning%20Approach%20for%20Detecting%20Infected%20and%20Healthy%20Maize%20Crop.pdf" class="ep_document_link"><img class="ep_doc_icon" alt="[img]" src="/style/images/fileicons/text.png" border="0"/></a>
en
open
Novel Transfer Learning Approach for Detecting Infected and Healthy Maize Crop Using Leaf Images
Maize is a staple crop worldwide, essential for food security, livestock feed, and industrial uses. Its health directly impacts agricultural productivity and economic stability. Effective detection of maize crop health is crucial for preventing disease spread and ensuring high yields. This study presents VG-GNBNet, an innovative transfer learning model that accurately detects healthy and infected maize crops through a two-step feature extraction process. The proposed model begins by leveraging the visual geometry group (VGG-16) network to extract initial pixel-based spatial features from the crop images. These features are then further refined using the Gaussian Naive Bayes (GNB) model and a feature decomposition-based matrix factorization mechanism, which generates more informative features for classification purposes. This study incorporates machine learning models to ensure a comprehensive evaluation; by comparing VG-GNBNet's performance against these models, we validate its robustness and accuracy. Integrating deep learning and machine learning techniques allows VG-GNBNet to capitalize on the strengths of both approaches, leading to superior performance. Extensive experiments demonstrate that the proposed VG-GNBNet+GNB model significantly outperforms other models, achieving an impressive accuracy score of 99.85%. This high accuracy highlights the model's potential for practical application in the agricultural sector, where precise detection of crop health is crucial for effective disease management and yield optimization.
Muhammad Usama Tanveer, Kashif Munir, Ali Raza, Laith Abualigah, Helena Garay (helena.garay@uneatlantico.es), Luis Eduardo Prado González (uis.prado@uneatlantico.es), Imran Ashraf
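A minimal sketch of the two-step idea behind VG-GNBNet, using VGG-16 as a frozen feature extractor followed by a Gaussian Naive Bayes classifier; the feature decomposition and matrix factorization refinement described in the abstract is omitted, and file names and shapes are illustrative assumptions.

```python
# Sketch: frozen VGG-16 backbone extracts pixel-based spatial features,
# then Gaussian Naive Bayes classifies healthy vs. infected leaves.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

images = np.load("maize_leaves.npy")    # hypothetical (n, 224, 224, 3)
labels = np.load("maize_labels.npy")    # 0 = healthy, 1 = infected

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(images))   # (n, 512)

gnb = GaussianNB().fit(features, labels)
print("training accuracy:", gnb.score(features, labels))
```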