Sensing Technologies and Applications in Infrared and Visible Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (15 February 2023) | Viewed by 18010

Special Issue Editors


Dr. Yanpeng Cao
Guest Editor
School of Mechanical Engineering, Zhejiang University, Hangzhou 310023, China
Interests: infrared thermography; computer vision; intelligent transportation; sensor fusion; visual inspection; non-destructive testing

Prof. Dr. Xin Li
Guest Editor
School of Electrical Engineering & Computer Science, and Center for Computation & Technology, Louisiana State University, Baton Rouge, LA 70808, USA
Interests: computer vision; robotic vision; 3D reconstruction; real-time AI algorithms for virtual reality (VR); computer-aided design; medical image analysis

Dr. Christel-Loic Tisse
Guest Editor
Huawei Sensor Application Innovation Lab, 32 Rue Gustave Eiffel, 38000 Grenoble, France
Interests: embedded imaging systems; computer vision; multimodal biometric system; medical image analysis; thermal image sensors

Special Issue Information

Dear Colleagues,

Recent advances in multispectral imaging technologies are facilitating the development of better-performing optical sensing, processing, and analysis applications. For instance, visible images provide abundant texture details at high spatial resolution, consistent with human visual perception. In contrast, thermal/infrared sensors capture valuable radiation information of targets with high contrast against their surroundings under varying lighting conditions. It is therefore desirable to develop multisensor hardware designs, on- and off-chip digital image processing methods, and multispectral feature fusion schemes that fully exploit the thermal radiation information in infrared images and the detailed texture information in visible images.

This Special Issue highlights the latest advances in infrared and visible sensing technologies and applications. The included articles emphasize the fusion of complementary visible and thermal data, demonstrations ranging from theory to experiment, the latest advances in multispectral imaging research, and innovative sensing and imaging applications in medical diagnosis, security surveillance, autonomous driving, and industrial inspection.

Dr. Yanpeng Cao
Prof. Dr. Xin Li
Dr. Christel-Loic Tisse
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Advances in multispectral imaging
  • Computational imaging
  • Multisensor fusion
  • Thermal imaging
  • Medical diagnosis
  • Security surveillance
  • Autonomous driving
  • Industrial inspection
  • Non-destructive testing

Published Papers (8 papers)


Research

18 pages, 7182 KiB  
Article
Laser-Visible Face Image Translation and Recognition Based on CycleGAN and Spectral Normalization
by Mingyu Qin, Youchen Fan, Huichao Guo and Laixian Zhang
Sensors 2023, 23(7), 3765; https://doi.org/10.3390/s23073765 - 06 Apr 2023
Viewed by 1279
Abstract
The range-gated laser imaging instrument can capture face images in a dark environment, which provides a new approach for long-distance face recognition at night. However, laser images have low contrast, a low SNR and no color information, which hampers observation and recognition. It therefore becomes important to convert laser images into visible images and then identify them. For image translation, we propose a laser-visible face image translation model combined with spectral normalization (SN-CycleGAN). We add spectral normalization layers to the discriminator to solve the problem of low image translation quality caused by the difficulty of training the generative adversarial network. A content reconstruction loss function based on the Y channel is added to reduce the error mapping. Compared with other models, the faces generated by the improved model on a self-built laser-visible face image dataset have better visual quality, reduce the error mapping and largely retain the structural features of the target. The FID value of the evaluation index is 36.845, which is 16.902, 13.781, 10.056, 57.722, 62.598 and 0.761 lower than the CycleGAN, Pix2Pix, UNIT, UGATIT, StarGAN and DCLGAN models, respectively. For face recognition on the translated images, we propose a laser-visible face recognition model based on feature retention. The shallow feature maps with identity information are directly connected to the decoder to solve the problem of identity information loss in network transmission. A domain loss function based on triplet loss is added to constrain the style between domains. We use a pre-trained FaceNet to recognize the generated visible face images and obtain the Rank-1 recognition accuracy. The recognition accuracy of images generated by the improved model reaches 76.9%, which is greatly improved compared with the above models and 19.2% higher than that of laser face recognition.
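
A rough sketch of the spectral-normalization idea described above, assuming PyTorch: each convolution of a PatchGAN-style discriminator is wrapped in spectral_norm to stabilize adversarial training. The layer sizes are illustrative and not the authors' exact SN-CycleGAN architecture.

```python
# Illustrative PatchGAN-style discriminator with spectral normalization.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNPatchDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()

        def block(c_in, c_out, stride):
            # Spectral normalization constrains the Lipschitz constant of each
            # convolution, which stabilizes GAN training.
            return nn.Sequential(
                spectral_norm(nn.Conv2d(c_in, c_out, 4, stride, 1)),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.model = nn.Sequential(
            block(in_channels, 64, 2),
            block(64, 128, 2),
            block(128, 256, 2),
            spectral_norm(nn.Conv2d(256, 1, 4, 1, 1)),  # patch-level real/fake scores
        )

    def forward(self, x):
        return self.model(x)

# Example: score a batch of two 256x256 single-channel laser face images.
scores = SNPatchDiscriminator()(torch.randn(2, 1, 256, 256))
```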

15 pages, 5236 KiB  
Article
Methodology for Designing an Optimal Test Stand for Camera Thermal Drift Measurements and Its Stability Verification
by Kohhei Nimura and Marcin Adamczyk
Sensors 2022, 22(24), 9997; https://doi.org/10.3390/s22249997 - 19 Dec 2022
Cited by 1 | Viewed by 1551
Abstract
The effects of temperature changes on cameras manifest as drifts of characteristic points in the image plane. Compensating for these effects is crucial to maintaining the precision of cameras used in machine vision systems and of those expected to work in environments with varying factors, including temperature changes. Generally, mathematical compensation models are built by measuring the changes in the intrinsic and extrinsic parameters under the effect of temperature; however, due to assumptions about certain factors based on the conditions of the test stand used for the measurements, errors can become apparent. In this paper, test stands for thermal image drift measurements used in other works are assessed, and a methodology for designing a test stand that can measure thermal image drifts while eliminating other external influences on the camera is proposed. A test stand was built accordingly, and thermal image drift measurements were performed along with a measurement to verify that the test stand did eliminate external influences on the camera. The experiment was performed for temperatures from 5 °C to 45 °C, and as a result, the thermal image drift measured with the designed test stand showed a maximum error of 16% during its most rapid temperature change from 25 °C to 5 °C.
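
A small illustration of how such thermal image drift can be quantified, assuming NumPy and that characteristic-point extraction is done elsewhere; the coordinates in the example are made up.

```python
# Quantify thermal image drift as the displacement of matched characteristic
# points between a reference temperature and a test temperature.
import numpy as np

def thermal_image_drift(pts_ref: np.ndarray, pts_t: np.ndarray) -> float:
    """Mean Euclidean drift (in pixels) of matched characteristic points.

    pts_ref, pts_t: (N, 2) arrays of image coordinates at the reference
    temperature and at the evaluated temperature, respectively.
    """
    return float(np.mean(np.linalg.norm(pts_t - pts_ref, axis=1)))

# Example (made-up values): points extracted at 25 degC and at 5 degC.
ref = np.array([[100.0, 200.0], [300.0, 220.0]])
cold = np.array([[100.4, 200.6], [300.5, 220.7]])
print(thermal_image_drift(ref, cold))
```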

15 pages, 52917 KiB  
Article
Performance of QR Code Detectors near Nyquist Limits
by Przemysław Skurowski, Karolina Nurzyńska, Magdalena Pawlyta and Krzysztof A. Cyran
Sensors 2022, 22(19), 7230; https://doi.org/10.3390/s22197230 - 23 Sep 2022
Cited by 3 | Viewed by 4050
Abstract
For interacting with the real world, augmented reality devices need lightweight yet reliable methods for the recognition and identification of physical objects. In that regard, promising possibilities are offered by supporting computer vision with 2D barcode tags. These tags, as high-contrast and visually well-defined objects, can be used to establish fiducial points in space or to identify physical items. Currently, QR code readers have certain demands towards the size and visibility of the codes. However, the increasing resolution of built-in cameras makes it possible to identify smaller QR codes in the scene. On the other hand, growing resolutions increase the computational effort of tag location. Therefore, resolution reduction in decoders is a common trade-off between processing time and recognition capabilities. In this article, we propose a method for simulating QR code scanning near the limits that stem from Shannon's sampling theorem. We analyze the efficiency of three publicly available decoders versus different size-to-sampling ratios (scales) and MTF characteristics of the image capture subsystem. The MTF we used is based on the characteristics of real devices, and it was modeled using Gaussian low-pass filtering. We tested two tasks: decoding and locating-and-decoding. The findings of the work are several-fold. Among others, we identified that, for practical decoding, the QR-code module should be no smaller than 3–3.5 pixels, regardless of MTF characteristics. We confirmed the superiority of ZBar in practical tasks and the worst recognition capabilities of OpenCV. On the other hand, we identified that, for borderline cases at or even below the Nyquist limit where the other decoders fail, OpenCV is still capable of decoding some information.
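
A minimal sketch of the simulated acquisition chain described above, assuming OpenCV and pyzbar are installed: a Gaussian low-pass filter stands in for the capture MTF, rescaling sets the module-to-pixel ratio, and a decoder is then tested on the result. Parameters and helper names are illustrative, not the authors' code.

```python
import cv2
import numpy as np
from pyzbar import pyzbar

def simulate_and_decode(code_img: np.ndarray, scale: float, sigma: float) -> bool:
    """Blur and downscale a rendered QR code, then report whether it still decodes."""
    blurred = cv2.GaussianBlur(code_img, (0, 0), sigmaX=sigma)   # Gaussian MTF model
    h, w = code_img.shape[:2]
    small = cv2.resize(blurred,
                       (max(1, int(w * scale)), max(1, int(h * scale))),
                       interpolation=cv2.INTER_AREA)             # size-to-sampling ratio
    # For comparison, OpenCV's detector can be run on the same image:
    # data, points, _ = cv2.QRCodeDetector().detectAndDecode(small)
    return len(pyzbar.decode(small)) > 0                         # ZBar decoder
```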

13 pages, 8278 KiB  
Article
Land Use Land Cover Labeling of GLOBE Images Using a Deep Learning Fusion Model
by Sergio Manzanarez, Vidya Manian and Marvin Santos
Sensors 2022, 22(18), 6895; https://doi.org/10.3390/s22186895 - 13 Sep 2022
Cited by 5 | Viewed by 2030
Abstract
Most of the land use land cover classification methods presented in the literature have been conducted using satellite remote sensing images. High-resolution aerial imagery is now being used for land cover classification. The Global Learning and Observations to Benefit the Environment (GLOBE) land cover image database is created by citizen scientists worldwide who use their handheld cameras to take a set of six images per land cover site. These images have clutter due to man-made objects, and pixel uncertainties result in incorrect labels. The problem of accurately labeling these land cover images is addressed. An integrated architecture that combines Unet and DeepLabV3 for initial segmentation, followed by a weighted fusion model that combines the segmentation labels, is presented. Land cover images with labels are used for training the deep learning models. The fusion model combines the labels of five images taken from the north, south, east, west, and down directions to assign a unique label to each image set. A total of 2916 GLOBE images have been labeled with land cover classes using the integrated model with minimal human-in-the-loop annotation. The validation step shows that our architecture for labeling the images results in 90.97% label accuracy. Our fusion model can be used for labeling large databases of land cover classes from RGB images.
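
The directional label-fusion step can be pictured with the short sketch below, which assumes each of the five directional images already has a per-image land cover label and a confidence score; the confidence-weighted vote is an illustrative simplification, not the authors' exact fusion model.

```python
import numpy as np

def fuse_site_label(labels, confidences, num_classes: int) -> int:
    """Return the land cover class with the highest confidence-weighted vote."""
    votes = np.zeros(num_classes)
    for lbl, conf in zip(labels, confidences):
        votes[lbl] += conf                      # accumulate weighted votes per class
    return int(np.argmax(votes))

# Example: five directional predictions (north, south, east, west, down) for one site.
print(fuse_site_label(labels=[3, 3, 1, 3, 2],
                      confidences=[0.9, 0.8, 0.4, 0.7, 0.5],
                      num_classes=10))
```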

17 pages, 11245 KiB  
Article
Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network
by Zetian Wang, Fei Wang, Dan Wu and Guowang Gao
Sensors 2022, 22(14), 5430; https://doi.org/10.3390/s22145430 - 20 Jul 2022
Cited by 4 | Viewed by 1951
Abstract
This paper presents an algorithm for infrared and visible image fusion using salience detection and convolutional neural networks, with the aim of integrating discriminatory features and improving the overall quality of visual perception. Firstly, a global contrast-based salience detection algorithm is applied to the infrared image, so that salient features can be extracted, highlighting high brightness values and suppressing low brightness values and image noise. Secondly, a special loss function is designed for infrared images to guide the extraction and reconstruction of features in the network, based on the principle of salience detection, while the more mainstream gradient loss is used as the loss function for visible images in the network. Afterwards, a modified residual network is applied to complete the extraction of features and image reconstruction. Extensive qualitative and quantitative experiments have shown that the fused images are sharper and contain more information about the scene, and the fused results look more like high-quality visible images. The generalization experiments also demonstrate that the proposed model generalizes well, independent of the limitations of the sensor. Overall, the algorithm proposed in this paper performs better than other state-of-the-art methods.
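
For intuition, the sketch below implements a simple histogram-based global contrast saliency map for a single-channel infrared image; it is an illustrative stand-in for the salience detection step, not a reproduction of the detector used in the paper.

```python
import numpy as np

def global_contrast_saliency(ir: np.ndarray) -> np.ndarray:
    """ir: uint8 infrared image -> float saliency map scaled to [0, 1]."""
    hist = np.bincount(ir.ravel(), minlength=256).astype(np.float64)
    hist /= hist.sum()
    levels = np.arange(256, dtype=np.float64)
    # Saliency of a gray level = its expected absolute contrast to all image pixels,
    # so intensities that differ strongly from the rest of the image score high.
    level_saliency = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = level_saliency[ir]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```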

25 pages, 12588 KiB  
Article
Seamless Navigation, 3D Reconstruction, Thermographic and Semantic Mapping for Building Inspection
by Adrian Schischmanow, Dennis Dahlke, Dirk Baumbach, Ines Ernst and Magdalena Linkiewicz
Sensors 2022, 22(13), 4745; https://doi.org/10.3390/s22134745 - 23 Jun 2022
Cited by 8 | Viewed by 2063
Abstract
We present a workflow for seamless real-time navigation and 3D thermal mapping in combined indoor and outdoor environments in a global reference frame. The automated workflow and partly real-time capabilities are of special interest for inspection tasks and also for other time-critical applications. We use a hand-held integrated positioning system (IPS), which is a real-time capable visual-aided inertial navigation technology, and augment it with an additional passive thermal infrared camera and global referencing capabilities. The global reference is realized through surveyed optical markers (AprilTags). By fusing the data of the stereo camera and the thermal camera, the resulting georeferenced 3D point cloud is enriched with thermal intensity values. A challenging calibration approach is used to geometrically calibrate and pixel-co-register the trifocal camera system. By fusing the terrestrial dataset with additional geographic information from an unmanned aerial vehicle, we obtain a complete building hull point cloud and automatically reconstruct a semantic 3D model. A single-family house with surroundings in the village of Morschenich near the city of Jülich (German federal state of North Rhine-Westphalia) was used as a test site to demonstrate our workflow. The presented work is a step towards automated building information modeling.
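
The point-cloud enrichment step can be sketched as a plain perspective projection: each georeferenced 3D point is projected into the co-registered thermal camera and the intensity at the resulting pixel is attached to the point. K, R and t below denote assumed, already-calibrated intrinsics and extrinsics, not values from the paper.

```python
import numpy as np

def attach_thermal_intensity(points_w: np.ndarray, thermal: np.ndarray,
                             K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points_w: (N, 3) world points -> (N,) thermal intensities (NaN if not visible)."""
    cam = R @ points_w.T + t.reshape(3, 1)           # world -> thermal camera frame
    uv = K @ cam
    uv = (uv[:2] / uv[2]).T                          # perspective division
    h, w = thermal.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.full(len(points_w), np.nan)
    out[visible] = thermal[v[visible], u[visible]]   # sample thermal intensity
    return out
```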

18 pages, 3711 KiB  
Article
ROADS—Rover for Bituminous Pavement Distress Survey: An Unmanned Ground Vehicle (UGV) Prototype for Pavement Distress Evaluation
by Alessandro Mei, Emiliano Zampetti, Paola Di Mascio, Giuliano Fontinovo, Paolo Papa and Antonio D’Andrea
Sensors 2022, 22(9), 3414; https://doi.org/10.3390/s22093414 - 29 Apr 2022
Cited by 4 | Viewed by 2439
Abstract
Maintenance has a major impact on the financial plans of road managers. To ameliorate road conditions and reduce safety constraints, distress evaluation methods should be efficient and avoid being time-consuming. That is why road cadastral catalogs should be updated periodically, and interventions should be provided for in specific management plans. This paper focuses on the setup of an Unmanned Ground Vehicle (UGV) for road pavement distress monitoring, and the Rover for bituminOus pAvement Distress Survey (ROADS) prototype is presented. ROADS carries a multisensory platform that is able to collect different parameters. Navigation and environment sensors support a two-image acquisition system composed of a high-resolution digital camera and a multispectral imaging sensor. The Pavement Condition Index (PCI) and the Image Distress Quantity (IDQ) are calculated from field activities and image computation, respectively. The model used to calculate the IROADS index from PCI had an accuracy of 74.2%. Such results show that the retrieval of PCI from an image-based approach is achievable, and values can be categorized as "Good"/"Preventive Maintenance", "Fair"/"Rehabilitation", or "Poor"/"Reconstruction", which are ranges of the custom PCI rating scale and represent typical repair strategies.
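
The final categorization amounts to mapping a PCI value onto a repair strategy; the thresholds in the sketch below are illustrative placeholders, since the paper uses its own custom PCI rating scale.

```python
def repair_strategy(pci: float) -> str:
    """Map a Pavement Condition Index value to a repair strategy (illustrative thresholds)."""
    if pci >= 70:
        return "Good / Preventive Maintenance"
    if pci >= 40:
        return "Fair / Rehabilitation"
    return "Poor / Reconstruction"

print(repair_strategy(55.0))  # -> Fair / Rehabilitation
```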

20 pages, 4900 KiB  
Article
Vertical Cracks Excited in Lock-in Vibrothermography Experiments: Identification of Open and Inhomogeneous Heat Fluxes
by Arantza Mendioroz, Alazne Castelo, Ricardo Celorrio and Agustín Salazar
Sensors 2022, 22(6), 2336; https://doi.org/10.3390/s22062336 - 17 Mar 2022
Cited by 1 | Viewed by 1602
Abstract
Lock-in vibrothermography has proven to be very useful for characterizing kissing cracks that produce ideal, homogeneous, and compact heat sources. Here, we approach real situations by addressing the characterization of non-compact (strip-shaped) heat sources produced by open cracks and inhomogeneous fluxes. We propose combining lock-in vibrothermography data at several modulation frequencies in order to gain both penetration and precision. The approach consists of inverting surface temperature amplitude and phase data by means of a least-squares minimization algorithm without previous knowledge of the geometry of the heat source, assuming only knowledge of the vertical plane in which it is confined. We propose a methodology to solve this ill-posed inverse problem by including in the objective function penalty terms based on the expected properties of the solution. These terms are described in a comprehensive and intuitive manner. Inversions of synthetic data show that the geometry of non-compact heat sources is identified correctly and that the contours are rounded due to the penalization. Inhomogeneous, smoothly varying fluxes are also qualitatively retrieved, but steep variations of the flux are hard to recover. These findings are confirmed by inversions of experimental data taken on calibrated samples. The proposed methodology is capable of identifying heat sources generated in lock-in vibrothermography experiments.
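
The inversion idea can be sketched as a penalized least-squares fit, for example with SciPy: the residual stacks a data-misfit term with a penalty that discourages steep flux variations. Here forward_model is a hypothetical stand-in for the thermal forward problem and alpha is an illustrative regularization weight.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(q, data, forward_model, alpha):
    """q: discretized heat-flux map; data: measured amplitude/phase values."""
    misfit = forward_model(q) - data      # data-fit term
    penalty = alpha * np.diff(q)          # penalizes steep flux variations
    return np.concatenate([misfit, penalty])

# Usage sketch (forward_model, data and n_cells are problem-specific):
# q0 = np.zeros(n_cells)
# sol = least_squares(residuals, q0, args=(data, forward_model, 0.1))
```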