Image and Signal Processing

A special issue of Technologies (ISSN 2227-7080). This special issue belongs to the section "Information and Communication Technologies".

Deadline for manuscript submissions: closed (31 July 2023) | Viewed by 31,943

Special Issue Editors


Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Dr. Imran Ahmed
Guest Editor
School of Computing and Information Science, Anglia Ruskin University Cambridge, East Road, Cambridge CB1 1PT, UK
Interests: data science; machine learning; computer vision; image processing

Special Issue Information

Dear Colleagues,

Over the last few years, we have witnessed huge developments in technologies that adopt artificial intelligence for image and signal processing and their applications. Deep learning has impacted modern life more than any other technology since its breakthrough at the 2012 ImageNet challenge. This development builds on artificial intelligence techniques including fuzzy and rough systems, neural networks, and evolutionary algorithms. Deep learning requires a huge amount of data to be appropriately trained for real-life applications; to mitigate this issue, lightweight deep-learning models and explainable deep-learning models are being actively studied. This Special Issue invites authors to submit their novel and recent research on deep-learning applications in image and signal processing. Possible contributions include, but are not limited to, new deep-learning algorithms; deep learning for data mining, computer vision, forecasting, natural language processing, and clustering; image filtering, restoration, and enhancement; image segmentation; video segmentation and tracking; feature extraction and analysis; motion detection and estimation; pattern recognition; and content-based image retrieval. Furthermore, contributions on the application of deep learning in fields such as robotics, industrial automation, autonomous systems, or gaming are highly welcome.

Dr. Gwanggil Jeon
Dr. Imran Ahmed
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • signal processing
  • image processing
  • artificial intelligence
  • deep neural network
  • deep learning

Published Papers (14 papers)


Research

16 pages, 4403 KiB  
Communication
Exploiting PlanetScope Imagery for Volcanic Deposits Mapping
by Maddalena Dozzo, Gaetana Ganci, Federico Lucchi and Simona Scollo
Technologies 2024, 12(2), 25; https://doi.org/10.3390/technologies12020025 - 08 Feb 2024
Viewed by 1319
Abstract
During explosive eruptions, tephra fallout represents one of the main volcanic hazards and can be extremely dangerous for air traffic, infrastructure, and human health. Here, we present a new technique for identifying the area covered by tephra after an explosive event, based on processing PlanetScope imagery. We estimate the mean reflectance values of the visible (RGB) and near-infrared (NIR) bands, analyzing pre- and post-eruptive data in specific areas and introducing a new index, which we call the ‘Tephra Fallout Index (TFI)’. We use the Google Earth Engine computing platform and define a TFI threshold for different eruptive events to distinguish the areas affected by tephra fallout and quantify the surface coverage density. We apply our technique to the 2021 eruptive events at Mt. Etna (Italy), which mainly involved the eastern flank of the volcano, sometimes two or three times within a day, making field surveys difficult. Whenever possible, we compare our results with field data and find an optimal match. This work could have important implications for near-real-time, short-term volcanic hazard assessment during an eruption, as well as for mapping other hazardous events worldwide.
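
The TFI itself is only named in the abstract; as a rough illustration of the idea, the following Python sketch computes mean band reflectance before and after an event and an assumed relative-change index. The arrays, the darkening factor, and the 0.15 threshold are hypothetical; the published formula and per-event thresholds are in the paper.

```python
import numpy as np

def mean_reflectance(stack):
    """Mean reflectance per band over an area of interest.

    stack: float array of shape (bands, height, width) holding
    PlanetScope surface reflectance (B, G, R, NIR).
    """
    return stack.reshape(stack.shape[0], -1).mean(axis=1)

def tephra_fallout_index(pre, post):
    """Illustrative TFI: relative drop in mean reflectance after the
    event (tephra darkens the surface). The published index may differ."""
    r_pre = mean_reflectance(pre)
    r_post = mean_reflectance(post)
    return (r_pre - r_post) / r_pre

# Hypothetical pre-/post-eruption composites over the same footprint.
pre = np.random.rand(4, 512, 512).astype(np.float32) * 0.4 + 0.2
post = pre * 0.7  # darker after ash deposition
tfi = tephra_fallout_index(pre, post)
covered = tfi.mean() > 0.15  # threshold calibrated per event in the paper
print(tfi, covered)
```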

22 pages, 5470 KiB  
Article
An Intelligent System-Based Coffee Plant Leaf Disease Recognition Using Deep Learning Techniques on Rwandan Arabica Dataset
by Eric Hitimana, Omar Janvier Sinayobye, J. Chrisostome Ufitinema, Jane Mukamugema, Peter Rwibasira, Theoneste Murangira, Emmanuel Masabo, Lucy Cherono Chepkwony, Marie Cynthia Abijuru Kamikazi, Jeanne Aline Ukundiwabo Uwera, Simon Martin Mvuyekure, Gaurav Bajpai and Jackson Ngabonziza
Technologies 2023, 11(5), 116; https://doi.org/10.3390/technologies11050116 - 01 Sep 2023
Viewed by 2524
Abstract
Rwandan coffee holds significant importance and immense value within the realm of agriculture, serving as a vital commodity, and coffee plays a pivotal role in generating foreign exchange for numerous developing nations. However, the coffee plant is vulnerable to pests and diseases, which weaken production. Farmers, in cooperation with experts, use manual methods to detect diseases, which introduces human error. With the rapid improvements in deep learning methods, it is possible to detect and recognize plant diseases to support crop yield improvement. Therefore, it is an essential task to develop an efficient method for intelligently detecting, identifying, and predicting coffee leaf diseases. This study builds a Rwandan coffee plant dataset, in which coffee rust, miner, and red spider mites are identified as the most prevalent threats given the region’s geography. From the collected dataset of 37,939 coffee leaf images, preprocessing and modeling were carried out with five deep learning models: InceptionV3, ResNet50, Xception, VGG16, and DenseNet. The training, validation, and testing split was 80%, 10%, and 10%, respectively, with a maximum of 10 epochs. A comparative analysis of the models’ performance was conducted to select the best one for future portable use. The experiments showed the DenseNet model to be the best, with an accuracy of 99.57%. The efficiency of the suggested method is validated through an unbiased evaluation against existing approaches with different metrics.
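
As a sketch of the transfer-learning setup described above: the abstract does not say which DenseNet variant, training configuration, or class layout was used, so DenseNet121, the directory layout, and NUM_CLASSES below are assumptions; the 80/10/10 split and 10 epochs follow the abstract.

```python
import tensorflow as tf

NUM_CLASSES = 4   # assumption: rust, miner, red spider mite, healthy
IMG_SIZE = (224, 224)

# Hypothetical directory layout: one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "coffee_leaves/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "coffee_leaves/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # train only the new classification head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)  # 10 epochs, as in the paper
```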

20 pages, 17498 KiB  
Article
A Foreign Object Detection Method for Belt Conveyors Based on an Improved YOLOX Model
by Rongbin Yao, Peng Qi, Dezheng Hua, Xu Zhang, He Lu and Xinhua Liu
Technologies 2023, 11(5), 114; https://doi.org/10.3390/technologies11050114 - 26 Aug 2023
Cited by 2 | Viewed by 1748
Abstract
As one of the main pieces of equipment in coal transportation, the belt conveyor and its detection system are an important area of research for the development of intelligent mines. Non-coal foreign objects coming into contact with belts is common in complex production environments and under improper human operation. In order to avoid major safety accidents caused by scratches, deviation, and belt breakage, a foreign object detection method for belt conveyors is proposed in this work. Firstly, a foreign object image dataset is collected and established, and an IAT image enhancement module and a CBAM attention mechanism are introduced to enhance the image data samples. Moreover, to predict the angle information of foreign objects with large aspect ratios, a rotating decoupled head is designed and a MO-YOLOX network structure is constructed. Experiments are carried out with the belt conveyor in the mine’s intelligent mining equipment laboratory, and different foreign objects are analyzed. The experimental results show that the accuracy, recall, and mAP50 of the proposed rotating-frame foreign object detection method reach 93.87%, 93.69%, and 93.68%, respectively, and the average inference time for foreign object detection is 25 ms.
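
The abstract names CBAM as the attention mechanism. Below is a minimal PyTorch sketch of the standard CBAM block (channel attention followed by spatial attention, after Woo et al., 2018); the feature-map shape is hypothetical, and the paper’s exact integration into MO-YOLOX may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

feat = torch.randn(1, 64, 80, 80)   # hypothetical conveyor-image feature map
print(CBAM(64)(feat).shape)          # torch.Size([1, 64, 80, 80])
```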

18 pages, 7564 KiB  
Article
A Novel Approach to Quantitative Characterization and Visualization of Color Fading
by Woo Sik Yoo, Kitaek Kang, Jung Gon Kim and Yeongsik Yoo
Technologies 2023, 11(4), 108; https://doi.org/10.3390/technologies11040108 - 08 Aug 2023
Viewed by 1579
Abstract
Color fading naturally occurs over time under light illumination, triggered by the high photon energy of light. The rates of color fading and darkening depend on the substance, lighting conditions, and storage conditions. Color fading is only observed after some time has passed: the current color of objects of interest can only be compared with old photographs or the observer’s perception at the time of reference, and color fading and darkening rates between two or more points in the past can only be determined using photographic images from the past. For objective characterization of color difference between two or more different times, quantification of color in either digital or printed photographs is required. A newly developed image analysis and comparison software (PicMan) has been used in this study for color quantification and pixel-by-pixel color difference mapping. Images of two copies of Japanese wood-block prints, with and without color fading, were selected for an exemplary study of the quantitative characterization of color fading and darkening; the fading occurred during a long period of exposure to light. Pixel-by-pixel, line-by-line, and area-by-area comparisons between the two images were very effective for quantifying color change and visualizing the phenomena. RGB, HSV, and CIE L*a*b* values, and their differences, can be quantified from a single pixel up to areas of interest of any shape. Color fading and darkening results were presented in numerical, graphical, and image formats for completeness; each format has its own advantages and disadvantages in terms of data size, complexity, readability, and communication among interested parties. This paper demonstrates various display options for color analysis and summaries of color fading or color difference among images of interest for practical applications in art, cultural heritage conservation, and museums. Color simulation for various moments in time was proposed and demonstrated by interpolating or extrapolating the color change between images with and without fading using PicMan. The degree of color fading and darkening over time (past and future) can be simulated and visualized for decision-making in public display, storage, and restoration planning.
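
PicMan is proprietary, but the core pixel-by-pixel comparison can be approximated with OpenCV and NumPy. A minimal sketch, assuming the two images are aligned copies of the same print and using the simple CIE76 color difference; the linear interpolation is a crude stand-in for the paper’s color simulation over time.

```python
import cv2
import numpy as np

def delta_e_map(path_a, path_b):
    """Pixel-by-pixel CIE76 colour difference between two aligned images."""
    a = cv2.imread(path_a).astype(np.float32) / 255.0
    b = cv2.imread(path_b).astype(np.float32) / 255.0
    lab_a = cv2.cvtColor(a, cv2.COLOR_BGR2LAB)  # L in [0,100], a/b signed
    lab_b = cv2.cvtColor(b, cv2.COLOR_BGR2LAB)
    return np.linalg.norm(lab_a - lab_b, axis=2)

def simulate_fading(path_a, path_b, t):
    """Linear interpolation (0 <= t <= 1) or extrapolation (t > 1) between
    the unfaded (a) and faded (b) copies."""
    a = cv2.imread(path_a).astype(np.float32)
    b = cv2.imread(path_b).astype(np.float32)
    return np.clip(a + t * (b - a), 0, 255).astype(np.uint8)
```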

14 pages, 6158 KiB  
Communication
Adapting the H.264 Standard to the Internet of Vehicles
by Yair Wiseman
Technologies 2023, 11(4), 103; https://doi.org/10.3390/technologies11040103 - 03 Aug 2023
Cited by 4 | Viewed by 1155
Abstract
We suggest two steps for reducing the amount of data transmitted on Internet of Vehicles networks. The first step shifts the image from full-color resolution to only an 8-color resolution. The reduction in the number of colors is noticeable; however, 8-color images are sufficient for the requirements of common vehicular applications. The second step modifies the quantization tables employed by H.264 into tables more suitable for an image with only 8 colors. The first step alone usually reduces the size of the image by more than 30%, and performing the second step as well decreases the size of the image by more than 40%. That is to say, the combination of the two steps can significantly reduce the amount of data that must be transferred on vehicular networks.
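
The abstract does not define the 8-color palette; a minimal NumPy sketch, assuming the 8 colors are the corners of the RGB cube (one bit per channel):

```python
import numpy as np

def to_eight_colors(img):
    """Reduce a full-colour image to 8 colours by keeping one bit per
    channel (an assumption; the paper's palette may differ)."""
    return np.where(img > 127, 255, 0).astype(np.uint8)

# Hypothetical frame from a vehicle camera.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
reduced = to_eight_colors(frame)
print(len(np.unique(reduced.reshape(-1, 3), axis=0)))  # <= 8 distinct colours
```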

25 pages, 7137 KiB  
Article
Comparative Analysis of Image Classification Models for Norwegian Sign Language Recognition
by Benjamin Svendsen and Seifedine Kadry
Technologies 2023, 11(4), 99; https://doi.org/10.3390/technologies11040099 - 15 Jul 2023
Viewed by 1888
Abstract
Communication is integral to every human’s life, allowing individuals to express themselves and understand each other. This process can be challenging for the hearing-impaired population, who rely on sign language, due to the limited number of individuals proficient in sign language. Image classification models can be used to create assistive systems that address this communication barrier. This paper conducts a comprehensive literature review and experiments to establish the state of the art in sign language recognition, and it identifies a lack of research on Norwegian Sign Language (NSL). To address this gap, we created a dataset from scratch containing 24,300 images of 27 NSL alphabet signs and performed a comparative analysis of various machine learning models, including the Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Convolutional Neural Network (CNN), on the dataset. The models were evaluated on accuracy and computational efficiency. Based on these metrics, our findings indicate that SVM and CNN were the most effective models, achieving accuracies of 99.9% with high computational efficiency. The research conducted in this report thus aims to contribute to the field of NSL recognition and serve as a foundation for future studies in this area.
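
A sketch of the SVM baseline with scikit-learn: the image size, SVM hyperparameters, and data loading here are assumptions, and random arrays stand in for the NSL alphabet images.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for the 24,300-image NSL dataset:
# X holds flattened grayscale sign images, y the 27 class labels.
X = np.random.rand(2700, 64 * 64)
y = np.random.randint(0, 27, size=2700)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
svm.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```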

11 pages, 2334 KiB  
Communication
Identifying Growth Patterns in Arid-Zone Onion Crops (Allium Cepa) Using Digital Image Processing
by David Duarte-Correa, Juvenal Rodríguez-Reséndiz, Germán Díaz-Flórez, Carlos Alberto Olvera-Olvera and José M. Álvarez-Alvarado
Technologies 2023, 11(3), 67; https://doi.org/10.3390/technologies11030067 - 10 May 2023
Cited by 1 | Viewed by 1520
Abstract
The agricultural sector is undergoing a revolution that requires sustainable solutions to the challenges arising from traditional farming methods. To address these challenges, technical and sustainable support is needed to develop projects that improve crop performance. This study focuses on onion crops and the challenges presented throughout their phenological cycle. Unmanned aerial vehicles (UAVs) and digital image processing were used to monitor the crop and identify patterns such as humid areas, weed growth, vegetation deficits, and decreased harvest performance. An algorithm was developed to identify the patterns that most affected crop growth. The average local production reported was 40.166 tons/ha; however, only 25.00 tons/ha were reached, due to blight caused by constant humidity and limited sunlight. This resulted in the death of leaves and poor development of bulbs, with 50% of the production being medium-sized. Approximately 20% of the production was lost to blight and unfavorable weather conditions.
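
The paper’s pattern-identification algorithm is not detailed in the abstract; as a generic stand-in, the following sketch segments vegetation in a UAV frame with the standard Excess Green index (the frame and threshold are hypothetical, and this is not the authors’ exact method).

```python
import numpy as np

def excess_green_mask(bgr, threshold=0.1):
    """Segment vegetation with the Excess Green index
    (ExG = 2g - r - b on chromatic coordinates) and a fixed threshold."""
    img = bgr.astype(np.float32)
    s = img.sum(axis=2) + 1e-6          # avoid division by zero
    b, g, r = img[..., 0] / s, img[..., 1] / s, img[..., 2] / s
    exg = 2 * g - r - b
    return (exg > threshold).astype(np.uint8)

# Hypothetical stand-in for a UAV frame over an onion plot.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
mask = excess_green_mask(frame)
print(f"vegetation coverage: {100.0 * mask.mean():.1f}%")
```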

19 pages, 11909 KiB  
Article
Image-Based Quantification of Color and Its Machine Vision and Offline Applications
by Woo Sik Yoo, Kitaek Kang, Jung Gon Kim and Yeongsik Yoo
Technologies 2023, 11(2), 49; https://doi.org/10.3390/technologies11020049 - 29 Mar 2023
Cited by 2 | Viewed by 2646
Abstract
Image-based colorimetry has been gaining relevance due to the wide availability of smartphones with image sensors and increasing computational power. Low-cost, portable designs with user-friendly interfaces, and their compatibility with data acquisition and processing, are very attractive for interdisciplinary applications in art, the fashion industry, food science, medical science, oriental medicine, agriculture, geology, chemistry, biology, materials science, environmental engineering, and many other fields. This work describes the image-based quantification of color and its machine vision and offline applications in interdisciplinary fields using specifically developed image analysis software. Examples of color information extraction, from a single pixel to predetermined sizes and shapes of areas, including customized regions of interest (ROIs), are demonstrated for various digital images of dyed T-shirts, tongues, and assays. The corresponding RGB, HSV, CIELAB, Munsell color, and hexadecimal color codes, from a single pixel to ROIs, are extracted for machine vision and offline applications in various fields. Histograms and statistical analyses of colors, from a single pixel to ROIs, are successfully demonstrated. Reliable image-based quantification of color for a wide range of potential applications is proposed, and its validity is verified using color quantification examples in various fields. The objectivity of color-based diagnosis, judgment, and control can be significantly improved by the image-based quantification of color proposed in this study.
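
A minimal sketch of the kind of ROI color extraction described above, using OpenCV rather than the authors’ software; the function and its rectangular-ROI parameters are illustrative only.

```python
import cv2
import numpy as np

def roi_color_summary(bgr, x, y, w, h):
    """Mean RGB, HSV, CIELAB and hex code for a rectangular ROI,
    loosely mirroring single-pixel-to-ROI colour extraction."""
    roi = bgr[y:y + h, x:x + w].astype(np.float32) / 255.0
    rgb = roi[..., ::-1].reshape(-1, 3).mean(axis=0)               # BGR -> RGB
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV).reshape(-1, 3).mean(axis=0)
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB).reshape(-1, 3).mean(axis=0)
    hexcode = "#%02X%02X%02X" % tuple(int(round(c * 255)) for c in rgb)
    return rgb, hsv, lab, hexcode

# Hypothetical image and ROI.
img = (np.random.rand(300, 400, 3) * 255).astype(np.uint8)
print(roi_color_summary(img, x=50, y=50, w=40, h=40))
```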

19 pages, 10511 KiB  
Article
Mobilenetv2_CA Lightweight Object Detection Network in Autonomous Driving
by Peicheng Shi, Long Li, Heng Qi and Aixi Yang
Technologies 2023, 11(2), 47; https://doi.org/10.3390/technologies11020047 - 23 Mar 2023
Cited by 1 | Viewed by 1450
Abstract
To address the high complexity, large number of parameters, and missed detection of small targets in detection networks based on candidate regions and regression methods in autonomous driving scenarios, a lightweight object detection algorithm based on MobileNetv2_CA is proposed. First, Mosaic image enhancement technology is used in the data pre-processing stage to enhance feature extraction for small-target and complex scenes; second, the Coordinate Attention (CA) mechanism is embedded into the MobileNetv2 backbone feature extraction network and combined with the PANet and YOLO detection heads for multi-scale feature fusion; finally, a lightweight object detection network is built. Experimental results show that the designed network achieves the highest average detection accuracy of 81.43% on the VOC2007+2012 dataset, and the highest average detection accuracy of 85.07% with a detection speed of 31.84 FPS on the KITTI dataset. The total number of network parameters is only 39.5 M, which benefits the engineering application of the MobileNetv2 network in autonomous driving.
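
A PyTorch sketch of the Coordinate Attention block referenced above, in its standard formulation (Hou et al., 2021); the reduction ratio and feature-map shape are assumptions, and the paper’s exact embedding into MobileNetv2 may differ.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: factorises spatial attention into one map
    along H and one along W, keeping positional information that plain
    channel attention discards."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        xh = self.pool_h(x)                      # (N, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (N, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # along H
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # along W
        return x * ah * aw

feat = torch.randn(1, 96, 52, 52)   # hypothetical MobileNetv2 feature map
print(CoordinateAttention(96)(feat).shape)
```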

22 pages, 19077 KiB  
Article
GDAL and PROJ Libraries Integrated with GRASS GIS for Terrain Modelling of the Georeferenced Raster Image
by Polina Lemenkova and Olivier Debeir
Technologies 2023, 11(2), 46; https://doi.org/10.3390/technologies11020046 - 22 Mar 2023
Cited by 6 | Viewed by 2106
Abstract
Libraries of pre-written code optimize the cartographic workflow and reduce labour-intensive data processing by iteratively applying scripts to mapping tasks. Most existing Geographic Information System (GIS) approaches are based on traditional software with a graphical user interface, which significantly limits their performance. Although plugins can improve the functionality of many GIS programs, they are usually ad hoc solutions to specific mapping problems, e.g., cartographic projections and data conversion. We address this limitation by applying the principled approach of the Geospatial Data Abstraction Library (GDAL), the cartographic projections library PROJ, and the Geographic Resources Analysis Support System (GRASS) GIS to geospatial data processing and morphometric analysis. This research presents a topographic analysis of the dataset using scripted methods built on several tools: (1) GDAL, a translator library for raster and vector geospatial data formats, used to convert the Earth Global Relief Model (ETOPO1) GeoTIFF in XY Cartesian coordinates into the World Geodetic System 1984 (WGS84) via the ‘gdalwarp’ utility; (2) the PROJ projection transformation library, used to convert the ETOPO1 WGS84 grid to cartographic projections (Cassini–Soldner equirectangular, Equal Area Cylindrical, Two-Point Equidistant Azimuthal, and Oblique Mercator); and (3) GRASS GIS, through sequential use of the modules r.info, d.mon, d.rast, r.colors, d.rast.leg, d.legend, d.northarrow, d.grid, d.text, g.region, and r.contour. The depth frequency was analysed with the module d.histogram. The proposed approach provides a systematic way of measuring the morphometry of topographic data and combines the advantages of the GDAL, PROJ, and GRASS GIS tools, including informativeness, effectiveness, and representativeness in spatial data processing. The morphometric analysis included the computed slope, aspect, and profile and tangential curvature of the study area. The data analysis revealed the distribution pattern of the topographic data: 24% of the data have elevations below 400 m, 13% have depths of −5000 to −6000 m, 4% have depths of −3000 to −4000 m, the least frequent data (−6000 to −7000 m) account for <1%, 2% have depths of −2000 to −3000 m in the basin, and the other values are distributed proportionally. Further, by incorporating the generic coordinate transformation library PROJ, the raster grid was transformed into various cartographic projections to demonstrate distortions in shape and area. Scripting techniques of GRASS GIS are demonstrated for applications in topographic modelling and raster data processing. GRASS GIS shows its effectiveness for mapping and visualization, compatibility with libraries (GDAL, PROJ), and technical flexibility in combining a Graphical User Interface (GUI) with command-line data processing. The research contributes to technical cartographic development.
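
The ‘gdalwarp’ and PROJ steps can be scripted from Python via the GDAL and pyproj bindings. A minimal sketch with hypothetical filenames and a hypothetical sample coordinate; the GRASS GIS modules listed above run inside a GRASS session and are omitted here.

```python
from osgeo import gdal
from pyproj import Transformer

gdal.UseExceptions()

# GDAL step: reproject the ETOPO1 GeoTIFF to WGS84 (mirrors 'gdalwarp').
# Filenames are hypothetical placeholders.
gdal.Warp("etopo1_wgs84.tif", "etopo1_xy.tif", dstSRS="EPSG:4326")

# PROJ step: transform a sample lon/lat point (hypothetical values)
# from WGS84 to an Equal Area Cylindrical projection.
t = Transformer.from_crs("EPSG:4326", "+proj=cea", always_xy=True)
print(t.transform(20.0, -40.0))
```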

33 pages, 11421 KiB  
Article
Identifying Historic Buildings over Time through Image Matching
by Kyriaki A. Tychola, Stamatis Chatzistamatis, Eleni Vrochidou, George E. Tsekouras and George A. Papakostas
Technologies 2023, 11(1), 32; https://doi.org/10.3390/technologies11010032 - 17 Feb 2023
Viewed by 2156
Abstract
The buildings in a city are of great importance; certain historic buildings are landmarks that reflect the city’s architecture and culture. Buildings undergo changes over time due to various factors, such as structural alterations, natural disaster damage, and aesthetic interventions. The form of buildings in each period is perceived and understood by the people of each generation through photography. Nevertheless, each photograph has its own characteristics depending on the camera (analog or digital) used to capture it, and no two photographs, even of the same object, are captured identically in terms of illumination, viewing angle, and scale. Hence, to study two or more photographs depicting the same object, they must first be identified and then properly matched. Nowadays, computer vision contributes to this process by providing useful tools; in particular, several algorithms for detecting and describing homologous feature points have been developed. In this study, the identification of historic buildings over time through feature correspondence techniques is investigated. Specifically, photographs of landmarks of the city of Drama, Greece, taken on different dates and under different conditions (weather, light, rotation, scale, etc.), were gathered, and experiments on 2D image pairs were carried out using traditional feature detector and descriptor algorithms such as SIFT, ORB, and BRISK. This study evaluates the feature matching procedure, focusing both on the algorithms’ performance (accuracy, efficiency, and robustness) and on the identification of the buildings. SIFT and BRISK are the most accurate algorithms, while ORB and BRISK are the most efficient.
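
A minimal OpenCV sketch of the detect-describe-match pipeline with SIFT, ORB, and BRISK, using a brute-force matcher and Lowe’s ratio test; the image filenames and parameters are hypothetical, and the paper’s evaluation protocol may differ.

```python
import cv2

def match_features(img_a, img_b, method="SIFT"):
    """Detect, describe and ratio-test-match keypoints between two photos
    of the same building."""
    detectors = {
        "SIFT": (cv2.SIFT_create(), cv2.NORM_L2),
        "ORB": (cv2.ORB_create(nfeatures=2000), cv2.NORM_HAMMING),
        "BRISK": (cv2.BRISK_create(), cv2.NORM_HAMMING),
    }
    det, norm = detectors[method]
    kp_a, des_a = det.detectAndCompute(img_a, None)
    kp_b, des_b = det.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(norm)
    # Lowe's ratio test filters ambiguous correspondences.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.75 * n.distance]
    return kp_a, kp_b, good

# Hypothetical pair of photographs of the same landmark.
a = cv2.imread("building_1950.jpg", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("building_2020.jpg", cv2.IMREAD_GRAYSCALE)
for name in ("SIFT", "ORB", "BRISK"):
    print(name, len(match_features(a, b, name)[2]), "good matches")
```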

13 pages, 2857 KiB  
Article
Development of Static and Dynamic Colorimetric Analysis Techniques Using Image Sensors and Novel Image Processing Software for Chemical, Biological and Medical Applications
by Woo Sik Yoo, Jung Gon Kim, Kitaek Kang and Yeongsik Yoo
Technologies 2023, 11(1), 23; https://doi.org/10.3390/technologies11010023 - 28 Jan 2023
Cited by 3 | Viewed by 1883
Abstract
Colorimetric sensing techniques for points, lines, and areal arrays were developed using image sensors and novel image processing software for chemical, biological, and medical applications. Monitoring and recording of colorimetric information on one or more specimens can be carried out by the specially designed image processing software. Colorimetric information from real-time monitoring and from recorded images or video clips can be analyzed at points, lines, and areas of interest with manual or automatic data collection. Ex situ and in situ colorimetric data can serve as signals for process control, process optimization, and safety and security alarms, and as inputs for machine learning, including artificial intelligence. As an analytical example, video clips of chromatographic experiments using different colored inks on filter papers dipped in water, and of randomly blinking light-emitting-diode-based decorative lights, were used. Colorimetric information from points, lines, and areas of different sizes was extracted from the video clips and analyzed as a function of time. The video analysis results were visualized both as time-lapse images and as RGB (red, green, blue) color/intensity graphs over time. As a demonstration of the developed technique, the colorimetric information was expressed as static and time-series combinations of RGB intensity, HSV (hue, saturation, value), and CIE L*a*b* values. Both static and dynamic colorimetric analysis of photographs and video files from image sensors were successfully demonstrated using the novel image processing software.
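
A minimal OpenCV sketch of the dynamic part, extracting a mean-RGB time series for a fixed ROI from a video clip; the filename and ROI coordinates are hypothetical stand-ins for the authors’ software.

```python
import cv2

def roi_rgb_series(video_path, x, y, w, h):
    """Mean R, G, B of a fixed ROI for every frame of a video clip —
    a dynamic colorimetric signal of the kind described above."""
    cap = cv2.VideoCapture(video_path)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        b, g, r = roi.reshape(-1, 3).mean(axis=0)  # OpenCV frames are BGR
        series.append((r, g, b))
    cap.release()
    return series

# Hypothetical clip of a chromatography experiment and ROI on the paper strip.
series = roi_rgb_series("chromatography.mp4", x=100, y=50, w=40, h=40)
print(len(series), "frames analysed")
```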

24 pages, 4236 KiB  
Article
Embedded System Performance Analysis for Implementing a Portable Drowsiness Detection System for Drivers
by Minjeong Kim and Jimin Koo
Technologies 2023, 11(1), 8; https://doi.org/10.3390/technologies11010008 - 30 Dec 2022
Viewed by 1923
Abstract
Drowsiness on the road is a widespread problem with fatal consequences; thus, a multitude of detection systems and techniques have been proposed. Among existing methods, Ghoddoosian et al. utilized temporal blinking patterns to detect early signs of drowsiness, but their algorithm was tested only on a powerful desktop computer, which is not practical in a moving vehicle. In this paper, we propose an efficient platform to run Ghoddoosian’s algorithm, detail the performance tests we ran to select this platform, and explain our threshold optimization logic. After considering the Jetson Nano and the Beelink mini PC, we concluded that the mini PC is the most efficient and practical option for running our embedded system in a vehicle. To determine this, we ran communication speed tests and evaluated total processing times for inference operations. Based on our experiments, the average total processing time to run the drowsiness detection model was 94.27 ms on the Jetson Nano and 22.73 ms on the Beelink mini PC. Considering the portability and power efficiency of each device, along with the processing time results, the Beelink mini PC was determined to be the most suitable. Additionally, we propose a threshold optimization algorithm that determines whether the driver is drowsy or alert based on the trade-off between the sensitivity and specificity of the drowsiness detection model. Our study serves as a crucial next step for drowsiness detection research and its application in vehicles: we have determined a favorable platform that can run drowsiness detection algorithms in real time, bridging the gap between an existing embedded system and its actual implementation in vehicles to bring drowsiness detection technology a step closer to widespread real-life deployment.
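
The abstract does not give the optimization criterion; one common way to formalize the sensitivity/specificity trade-off is Youden’s J statistic, sketched here with scikit-learn (the labels and scores are toy data, and the paper’s algorithm may weigh the trade-off differently).

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimize_threshold(y_true, scores):
    """Pick the drowsiness threshold maximising Youden's J
    (sensitivity + specificity - 1)."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr
    return thresholds[np.argmax(j)]

# Hypothetical detector outputs: 1 = drowsy, scores in [0, 1].
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])
print("threshold:", optimize_threshold(y_true, scores))
```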

22 pages, 66210 KiB  
Article
Evaluation of Machine Learning Algorithms for Classification of EEG Signals
by Francisco Javier Ramírez-Arias, Enrique Efren García-Guerrero, Esteban Tlelo-Cuautle, Juan Miguel Colores-Vargas, Eloisa García-Canseco, Oscar Roberto López-Bonilla, Gilberto Manuel Galindo-Aldana and Everardo Inzunza-González
Technologies 2022, 10(4), 79; https://doi.org/10.3390/technologies10040079 - 30 Jun 2022
Cited by 13 | Viewed by 6256
Abstract
In brain–computer interfaces (BCIs), it is crucial to process brain signals to improve the accuracy of motor-movement classification. Machine learning (ML) algorithms such as artificial neural networks (ANNs), linear discriminant analysis (LDA), decision trees (DT), K-nearest neighbor (KNN), naive Bayes (NB), and support vector machines (SVM) have made significant progress on classification problems. This paper presents a signal processing analysis of electroencephalographic (EEG) signals, comparing different feature extraction techniques used to train selected classification algorithms on signals related to motor movements. The motor movements considered relate to the left hand, right hand, both fists, feet, and relaxation, making this a multiclass problem. In this study, nine ML algorithms were trained on a dataset created by feature extraction from the EEG signals of 30 PhysioNet subjects. We used electrodes C3, C1, CZ, C2, and C4 according to the standard 10-10 placement. We then extracted epochs from the EEG signals and applied tone, amplitude-level, and statistical techniques to obtain the feature set. Custom LabVIEW™ 2015 applications were used to read the EEG signals; to perform channel selection, noise filtering, band selection, and feature extraction; and to create the dataset. MATLAB 2021a was used for training, testing, and evaluating the performance metrics of the ML algorithms. The Medium-ANN model achieved the best performance, with an average AUC of 0.9998, a Cohen’s kappa coefficient of 0.9552, a Matthews correlation coefficient of 0.9819, and a loss of 0.0147. These findings suggest the applicability of our approach to scenarios such as robotic prostheses, where superficial features are an acceptable option when resources are limited, as in embedded systems or edge computing devices.
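
A rough scikit-learn sketch of a statistical-feature pipeline of this kind; the paper uses LabVIEW and MATLAB, so the feature set, epoch shape, and MLP size below are assumptions standing in for the Medium-ANN, and random arrays stand in for the PhysioNet epochs.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import cohen_kappa_score

def epoch_features(epochs):
    """Per-channel statistics (mean, std, skewness, kurtosis, peak
    amplitude) for epochs of shape (n_epochs, n_channels, n_samples)."""
    feats = [epochs.mean(-1), epochs.std(-1), skew(epochs, axis=-1),
             kurtosis(epochs, axis=-1), np.abs(epochs).max(-1)]
    return np.concatenate(feats, axis=1)

# Hypothetical epochs from channels C3, C1, CZ, C2, C4; 5 movement classes.
epochs = np.random.randn(600, 5, 640)
labels = np.random.randint(0, 5, size=600)

X = epoch_features(epochs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=500).fit(X_tr, y_tr)
print("Cohen's kappa:", cohen_kappa_score(y_te, clf.predict(X_te)))
```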
