
Feature Papers in "Sensing and Imaging" Section 2023

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 10092

Special Issue Editors


Prof. Dr. Sylvain Girard
Guest Editor
Laboratoire Hubert Curien, CNRS UMR 5516, Université de Lyon, 42000 Saint-Étienne, France
Interests: fiber sensors; optical sensors; image sensors; optical materials; radiation effects

Prof. Dr. Christoph M. Friedrich
Guest Editor
1. Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), 44227 Dortmund, Germany
2. Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, 45147 Essen, Germany
Interests: machine learning; computational intelligence; biomedical applications; interpretable machine learning; natural language processing (NLP); computer vision; augmented reality; information extraction; information retrieval; image processing; biostatistics; bioinformatics; mathematics for computer science

Special Issue Information

Dear Colleagues,

We are pleased to announce that the Sensors Section ‘Sensing and Imaging’ is now compiling a collection of papers submitted by the Editorial Board Members (EBMs) of our section and outstanding scholars in this research field. We welcome contributions as well as recommendations from the EBMs.

We expect original papers and review articles presenting state-of-the-art theoretical and applied advances, new experimental discoveries, and novel technological improvements in sensing and imaging. We expect these papers to be widely read and highly influential within the field. All papers in this Special Issue will be well promoted.

We would also like to take this opportunity to call on more excellent scholars to join the Section ‘Sensing and Imaging’ so that we can work together to further develop this exciting field of research.

Prof. Dr. Sylvain Girard
Prof. Dr. Christoph M. Friedrich
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research

17 pages, 2190 KiB  
Article
The Effectiveness of an Adaptive Method to Analyse the Transition between Tumour and Peritumour for Answering Two Clinical Questions in Cancer Imaging
by Margherita Mottola, Rita Golfieri and Alessandro Bevilacqua
Sensors 2024, 24(4), 1156; https://doi.org/10.3390/s24041156 - 09 Feb 2024
Viewed by 600
Abstract
Based on the well-known role of peritumour characterization in cancer imaging for improving early diagnosis and the timeliness of clinical decisions, this study improves on the state-of-the-art approach to peritumour analysis, which mainly relies on extending the tumour segmentation by a predefined, fixed margin. We present a novel, adaptive method to investigate the zone of transition, straddling tumour and peritumour, conceived as an annular region and detected by analysing gradient variations along tumour edges. To validate the method, we applied it to two datasets (hepatocellular carcinoma and locally advanced rectal cancer) imaged by different modalities, and used both the zone of transition regions and the peritumour regions derived with the literature approach to build predictive models. To measure the benefits of the zone of transition, we compared the predictivity of models relying on the “standard” and novel peritumour regions, using informedness, specificity and sensitivity as the main comparison metrics. For hepatocellular carcinoma, whose lesions have a regular, roughly circular shape, all models showed similar performance (informedness = 0.69, sensitivity = 84%, specificity = 85%). For locally advanced rectal cancer, whose contours are jagged, the zone of transition led to the best informedness of 0.68 (sensitivity = 89%, specificity = 79%). The advantages of the zone of transition include detecting the peritumour adaptively, even when it is not visually noticeable, and minimizing the risk (higher with the literature approach) of including adjacent diverse structures, which emerged clearly during the image gradient analysis. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
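As a rough illustration of the adaptive idea (not the authors' implementation; the function name, 4-neighbourhood dilation, and gradient threshold are all assumptions), one can grow the tumour mask outward one pixel ring at a time and keep only the border pixels whose image gradient magnitude stays above a threshold, so the annulus stops where the edge signal fades:

```python
import numpy as np

def zone_of_transition(image, mask, grad_thresh=0.1, max_rings=5):
    """Grow an annular 'zone of transition' outward from a tumour mask,
    ring by ring, keeping only border pixels whose image gradient
    magnitude exceeds a threshold (a toy proxy for the adaptive
    edge analysis described in the abstract)."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    zot = np.zeros_like(mask, dtype=bool)
    current = mask.astype(bool)
    for _ in range(max_rings):
        # one-pixel binary dilation via shifts in the 4 cardinal directions
        dil = current.copy()
        dil[1:, :] |= current[:-1, :]
        dil[:-1, :] |= current[1:, :]
        dil[:, 1:] |= current[:, :-1]
        dil[:, :-1] |= current[:, 1:]
        ring = dil & ~current
        keep = ring & (grad > grad_thresh)
        if not keep.any():
            break          # edge signal has faded: stop growing
        zot |= keep
        current = dil
    return zot
```

On a synthetic step-edge image, the zone grows only over the columns where the gradient is non-zero, never into the mask itself.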

22 pages, 5890 KiB  
Article
Decomposed Multilateral Filtering for Accelerating Filtering with Multiple Guidance Images
by Haruki Nogami, Yamato Kanetaka, Yuki Naganawa, Yoshihiro Maeda and Norishige Fukushima
Sensors 2024, 24(2), 633; https://doi.org/10.3390/s24020633 - 19 Jan 2024
Viewed by 540
Abstract
This paper proposes an efficient algorithm for edge-preserving filtering with multiple guidance images, so-called multilateral filtering. Multimodal signal processing for sensor fusion is increasingly important in image sensing, and edge-preserving filtering is useful in various sensor fusion applications, such as estimating scene properties and refining inverse-rendered images. The main application is joint edge-preserving filtering, which can preferentially reflect the edge information of a guidance image from an additional sensor. The drawback of edge-preserving filtering lies in its long computational time; thus, many acceleration methods have been proposed. However, most accelerated filters cannot handle multiple guidance images well, even though multiple guidance images offer various benefits. We therefore extend efficient edge-preserving filters so that they can use multiple additional guidance images. Our algorithm, named decomposed multilateral filtering (DMF), extends efficient filtering methods to multilateral filtering by decomposing the filter into a set of constant-time filters. Experimental results show that our algorithm performs efficiently and is sufficient for various applications. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
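For context, a brute-force multilateral filter can be written directly as a bilateral filter whose range kernel is the product of Gaussians over several guidance images; its per-pixel O(r²) cost is what constant-time decompositions such as DMF aim to remove. The following sketch illustrates only that baseline, under assumed parameter names, not the paper's accelerated algorithm:

```python
import numpy as np

def multilateral_filter(src, guides, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force multilateral filter: spatial Gaussian weight times a
    product of range Gaussians, one per guidance image. O(r^2) per pixel."""
    h, w = src.shape
    out = np.zeros_like(src, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # clip the spatial kernel to the valid window at image borders
            win = spatial[y0 - y + radius:y1 - y + radius,
                          x0 - x + radius:x1 - x + radius].copy()
            for g in guides:
                diff = g[y0:y1, x0:x1] - g[y, x]
                win = win * np.exp(-diff**2 / (2 * sigma_r**2))
            out[y, x] = (win * src[y0:y1, x0:x1]).sum() / win.sum()
    return out
```

With the source itself as the only guidance image this reduces to the classic bilateral filter, which preserves a step edge almost exactly.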

19 pages, 6466 KiB  
Article
Quasi Real-Time Apple Defect Segmentation Using Deep Learning
by Mirko Agarla, Paolo Napoletano and Raimondo Schettini
Sensors 2023, 23(18), 7893; https://doi.org/10.3390/s23187893 - 14 Sep 2023
Cited by 1 | Viewed by 798
Abstract
Defect segmentation of apples is an important task in the agriculture industry for quality control and food safety. In this paper, we propose a deep learning approach for the automated segmentation of apple defects using convolutional neural networks (CNNs) based on a U-shaped architecture with skip connections only within the noise reduction block. An ad hoc data synthesis technique has been designed to increase the number of samples and, at the same time, to reduce neural network overfitting. We evaluate our model on a dataset of multi-spectral apple images with pixel-wise annotations for several types of defects. We show that our proposal outperforms, in terms of segmentation accuracy, general-purpose deep learning architectures commonly used for segmentation tasks, and improves on previous methods for apple defect segmentation. A measure of the computational cost shows that our proposal can be employed in real-time (about 100 frames per second on GPU) and quasi-real-time (about 7–8 frames per second on CPU) visual-based apple inspection. To further improve the applicability of the method, we investigate the potential of using only RGB images instead of multi-spectral images as input. The results show that the accuracy in this case is almost comparable to that of the multi-spectral case. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
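The data synthesis step can be illustrated with a minimal copy-paste augmentation: blend a defect patch into a clean apple image at a random position and emit the matching pixel-wise mask. This generic sketch (function names, blending factor, and placement policy are assumptions) stands in for the paper's ad hoc technique:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducible placement

def synthesize_defect(clean, defect_patch, alpha=0.7):
    """Paste a small 'defect' patch at a random location of a clean
    apple image; return the augmented image and its pixel-wise mask."""
    h, w = clean.shape
    ph, pw = defect_patch.shape
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    out = clean.astype(float).copy()
    # alpha-blend the patch over the clean region
    out[y:y + ph, x:x + pw] = ((1 - alpha) * out[y:y + ph, x:x + pw]
                               + alpha * defect_patch)
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + ph, x:x + pw] = True
    return out, mask
```

Generating many such (image, mask) pairs enlarges the training set and, as the abstract notes, helps reduce overfitting of the segmentation network.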

21 pages, 3703 KiB  
Article
Robust PDF Watermarking against Print–Scan Attack
by Lei Li, Hong-Jun Zhang, Jia-Le Meng and Zhe-Ming Lu
Sensors 2023, 23(17), 7365; https://doi.org/10.3390/s23177365 - 23 Aug 2023
Viewed by 1067
Abstract
Portable document format (PDF) files are widely used in file transmission, exchange, and circulation because of their platform independence, small size, good browsing quality, and the ability to contain hyperlinks. However, they also raise thorny security issues: it is common to distribute printed PDF files to different groups and individuals, yet most current PDF watermarking algorithms cannot resist print–scan attacks, making it difficult to apply them to leak tracing of both paper and scanned versions of PDF documents. To tackle this issue, we propose an invisible digital watermarking technology that hides information in PDFs by modifying the edge pixels of text strokes, which achieves high robustness to print–scan attacks and remains undetectable by human perception. The method embeds watermarks by changing features of the text chosen so that the changes are still reflected in the scanned PDF after printing. We first segment each text line into two sub-blocks, then select the row of pixels with the most black pixels, and flip the edge pixels closest to this row. The method requires the original PDF document to take part in detection. The experimental results show that all peak signal-to-noise ratio (PSNR) values of the proposed method exceed 32 dB, which indicates satisfactory invisibility. Meanwhile, the method extracts the hidden information with 100% accuracy under JPEG compression attacks and has high robustness against noise attacks and print–scan attacks; with no attacks, the watermark can be recovered without any loss. In practical terms, our method can be applied to leak tracing of official paper documents after distribution. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
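A simplified sketch of the edge-pixel flipping idea, under loose assumptions (binary text block, one bit per block, thickening the stroke above or below the densest pixel row encodes the bit), might look like this; it is not the authors' exact embedding rule:

```python
import numpy as np

def embed_bit(block, bit):
    """Embed one watermark bit in a binary text block (True = black).
    Find the row with the most black pixels, then flip the white edge
    pixels adjacent to it: bit 1 thickens the stroke on the row above,
    bit 0 on the row below."""
    block = block.copy()
    row = int(np.argmax(block.sum(axis=1)))   # densest row of the stroke
    target = row - 1 if bit else row + 1
    if 0 <= target < block.shape[0]:
        # edge pixels: white pixels in the target row under black ones in `row`
        edge = block[row] & ~block[target]
        block[target] |= edge
    return block
```

Detection would compare the marked block against the unmarked one, which is consistent with the abstract's note that the original PDF must take part in extraction.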

21 pages, 7329 KiB  
Article
Development of Debiasing Technique for Lung Nodule Chest X-ray Datasets to Generalize Deep Learning Models
by Michael J. Horry, Subrata Chakraborty, Biswajeet Pradhan, Manoranjan Paul, Jing Zhu, Hui Wen Loh, Prabal Datta Barua and U. Rajendra Acharya
Sensors 2023, 23(14), 6585; https://doi.org/10.3390/s23146585 - 21 Jul 2023
Cited by 1 | Viewed by 1503
Abstract
Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
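The histogram equalization step used to remove systematic brightness and contrast differences between X-ray sources is a standard operation; a minimal 8-bit implementation maps each grey level through the normalized cumulative histogram:

```python
import numpy as np

def equalize_histogram(img):
    """Classic histogram equalization for an 8-bit image: build the
    grey-level histogram, normalize its cumulative sum to [0, 1], and
    use it as a monotone look-up table so images from different sources
    share a comparable intensity distribution."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                       # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]
```

Applying the same mapping to every dataset removes per-source brightness offsets, one of the confounding variables the debiasing pipeline targets.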

17 pages, 3829 KiB  
Article
An Investigation about Modern Deep Learning Strategies for Colon Carcinoma Grading
by Pierluigi Carcagnì, Marco Leo, Luca Signore and Cosimo Distante
Sensors 2023, 23(9), 4556; https://doi.org/10.3390/s23094556 - 08 May 2023
Cited by 2 | Viewed by 1894
Abstract
Developing computer-aided approaches for cancer diagnosis and grading is in increasing demand: such approaches could overcome intra- and inter-observer inconsistency, speed up the screening process, increase early diagnosis, and improve the accuracy and consistency of treatment planning. Colorectal cancer (CRC) is the third most common cancer worldwide and the second most common in women, and grading CRC is a key task in planning appropriate treatments and estimating the response to them. Unfortunately, it has not yet been fully demonstrated how the most advanced models and methodologies of machine learning can impact this crucial task. This paper systematically investigates the use of advanced deep models (convolutional neural networks and transformer architectures) to improve colon carcinoma detection and grading from histological images. To the best of our knowledge, this is the first attempt at using transformer architectures and ensemble strategies to exploit deep learning paradigms for automatic colon cancer diagnosis. Results on the largest publicly available dataset demonstrate a substantial improvement with respect to the leading state-of-the-art methods. In particular, by exploiting a transformer architecture, it was possible to observe a 3% increase in accuracy in the detection task (two-class problem) and, by also integrating an ensemble strategy, up to a 4% improvement in the grading task (three-class problem). Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
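One common way to realize an ensemble over heterogeneous models (shown here as an assumption, since this abstract does not spell out the exact combination rule) is soft voting: average the softmax outputs of the individual models and predict the argmax:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def soft_vote(logit_list):
    """Soft-voting ensemble: convert each model's logits (e.g. from a
    CNN and a transformer) to probabilities, average them, and predict
    the class with the highest mean probability."""
    probs = np.stack([softmax(z) for z in logit_list])
    avg = probs.mean(axis=0)
    return int(np.argmax(avg)), avg
```

Averaging probabilities rather than hard labels lets a confident model outvote an uncertain one, which is often where ensembles gain their few extra accuracy points.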

18 pages, 4444 KiB  
Article
Training a Two-Layer ReLU Network Analytically
by Adrian Barbu
Sensors 2023, 23(8), 4072; https://doi.org/10.3390/s23084072 - 18 Apr 2023
Cited by 2 | Viewed by 1537
Abstract
Neural networks are usually trained with variants of gradient-descent-based optimization algorithms such as stochastic gradient descent or the Adam optimizer. Recent theoretical work states that the critical points (where the gradient of the loss is zero) of two-layer ReLU networks with the square loss are not all local minima. In this work, we explore an algorithm for training two-layer neural networks with ReLU-like activation and the square loss that alternately finds the critical points of the loss function analytically for one layer while keeping the other layer and the neuron activation pattern fixed. Experiments indicate that this simple algorithm can find deeper optima than stochastic gradient descent or the Adam optimizer, obtaining significantly smaller training loss values on four out of the five real datasets evaluated. Moreover, the method is faster than the gradient descent methods and has virtually no tuning parameters. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
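The alternating idea can be sketched for f(x) = relu(x W1) w2 with the square loss: once the activation pattern D = [X W1 > 0] is frozen, the loss is ordinary least squares in w2, and likewise linear in W1 when D and w2 are frozen, so each half-step has a closed-form solution. A loose NumPy sketch under these assumptions (not the paper's exact algorithm):

```python
import numpy as np

def train_alternating(X, y, hidden=16, iters=20, seed=0):
    """Alternating analytic training of a two-layer ReLU network under
    the square loss: solve w2 by least squares with the activation
    pattern D fixed, then solve W1 by least squares with D and w2 fixed."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.standard_normal((d, hidden))
    for _ in range(iters):
        D = (X @ W1 > 0).astype(float)        # frozen activation pattern
        H = (X @ W1) * D                      # equals relu(X W1)
        w2, *_ = np.linalg.lstsq(H, y, rcond=None)
        # with D and w2 fixed, f_i = sum_j (x_i . w1_j) d_ij w2_j is
        # linear in W1: stack one d-column block per hidden unit
        A = np.concatenate([(X * D[:, [j]]) * w2[j] for j in range(hidden)],
                           axis=1)
        w1_flat, *_ = np.linalg.lstsq(A, y, rcond=None)
        W1 = w1_flat.reshape(hidden, d).T
    D = (X @ W1 > 0).astype(float)
    return W1, np.linalg.lstsq((X @ W1) * D, y, rcond=None)[0]
```

Each half-step is a single `lstsq` solve, which is what removes both the learning-rate schedule and most other tuning parameters from the loop.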

9 pages, 4526 KiB  
Article
Cognitive CAPTCHA Password Reminder
by Natalia Krzyworzeka, Lidia Ogiela and Marek R. Ogiela
Sensors 2023, 23(6), 3170; https://doi.org/10.3390/s23063170 - 16 Mar 2023
Viewed by 1456
Abstract
In recent years, the number of personal accounts assigned to one business user has been constantly growing; according to a 2017 study, an average employee may use as many as 191 individual login credentials. The most recurrent problems users face in this situation are choosing strong passwords and being able to recall them. Researchers have shown that “users are aware of what constitutes a secure password but may forgo these security measures in terms of more convenient passwords, largely depending on account type”. Reusing the same password across multiple platforms or creating one from dictionary words has also proved to be a common practice. In this paper, a novel password-reminder scheme is presented. The user creates a CAPTCHA-like image with a hidden meaning that only he or she can decode; the image must be related in some way to that individual's memory or unique knowledge or experience. This image is presented at each login, and the user is asked to associate with it a password consisting of two or more words and a number. If the image is selected properly and strongly linked to the person's visual memory, recalling the lengthy password he or she created should not be a problem. Full article
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)
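On the systems side, such a scheme still needs a safe way to store the secret; a minimal sketch (an illustrative assumption, not part of the paper) keeps only a salted PBKDF2 hash of the multi-word passphrase, while the reminder image itself carries no secret and can be shown freely at every login:

```python
import hashlib
import hmac
import os

def enroll(passphrase: str):
    """Store only a salted PBKDF2 hash of the passphrase the user
    associates with their personal reminder image."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                                 salt, 100_000)
    return salt, digest

def verify(passphrase: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time check of a login attempt against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                                    salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Because only the salted hash is stored, a leaked database reveals neither the passphrase nor the personal association encoded in the image.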
