Computer-Aided Image Processing and Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 June 2024 | Viewed by 5700

Special Issue Editor

Dr. Zhe Chen
College of Computer and Information Engineering, Hohai University, Nanjing 211100, China
Interests: intelligent information acquisition and processing; pattern recognition and artificial intelligence; remote sensing telemetry system

Special Issue Information

Dear Colleagues,

The last decade has witnessed remarkable progress in computer-aided image processing and analysis. Recently, deep learning has produced a qualitative leap in this field. However, increasing attention has been paid to the potential drawbacks of deep learning architectures, e.g., their high computational and data costs. Moreover, it is difficult to analytically explain the strong performance of deep learning, which works like a “black box”. It is now time to reconsider the future development of image processing and analysis.

Recently, explainable models have attracted increasing attention. For example, explainable networks and bio-inspired models have demonstrated promising performance in many areas, which indicates a potential direction for future research efforts. This Special Issue is therefore intended for the presentation of new ideas and experimental results in the field of explainable models and their applications in image processing and analysis. Comparisons between explainable models and “black box” models are especially encouraged. This Special Issue will publish high-quality, original research papers in the overlapping fields of:

  • Explainable model-based image processing and analysis
  • Bio-inspired models for image detection and recognition
  • Mapping knowledge domains for image processing and analysis
  • Semantic model-based image analysis
  • Image processing and analysis in challenging scenes
  • Optical imaging-based image processing and analysis
  • Comparisons between different image processing strategies

Dr. Zhe Chen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable model
  • bio-inspired model
  • mapping knowledge domain
  • semantic model
  • challenging scene
  • optical imaging
  • performance comparison and evaluation
  • image object detection
  • image recognition
  • optical image
  • sonar image

Published Papers (5 papers)

Research

25 pages, 10427 KiB  
Article
Ensemble Learning-Based Solutions: An Approach for Evaluating Multiple Features in the Context of H&E Histological Images
by Jaqueline J. Tenguam, Leonardo H. da Costa Longo, Guilherme F. Roberto, Thaína A. A. Tosta, Paulo R. de Faria, Adriano M. Loyola, Sérgio V. Cardoso, Adriano B. Silva, Marcelo Z. do Nascimento and Leandro A. Neves
Appl. Sci. 2024, 14(3), 1084; https://doi.org/10.3390/app14031084 - 26 Jan 2024
Viewed by 847
Abstract
In this paper, we propose an approach based on ensemble learning to classify histology tissues stained with hematoxylin and eosin. The proposal was applied to representative images of colorectal cancer, oral epithelial dysplasia, non-Hodgkin’s lymphoma, and liver tissue (including the classification of gender and age from liver tissue samples). The ensemble learning considered multiple combinations of techniques that are commonly used to develop computer-aided diagnosis methods in medical imaging. Feature extraction was defined with different descriptors, exploring both deep-learned and handcrafted methods. The deep-learned features were obtained using five different convolutional neural network architectures. The handcrafted features were representative of fractal techniques (multidimensional and multiscale approaches), Haralick descriptors, and local binary patterns. A two-stage feature selection process (ranking with metaheuristics) was defined to obtain the main combinations of descriptors and, consequently, techniques. Each combination was tested through a rigorous ensemble process exploring heterogeneous classifiers, such as Random Forest, Support Vector Machine, K-Nearest Neighbors, Logistic Regression, and Naive Bayes. The ensemble learning presented here provided accuracy rates from 90.72% to 100.00% and offered relevant information about the combinations of techniques across multiple histological image types, as well as the main features present in the top-performing solutions, using smaller sets of descriptors (limited to a maximum of 53) in each ensemble process, including solutions that had not yet been explored. The developed methodology, which makes the knowledge encoded in each ensemble comprehensible to specialists, complements the main contributions of this study in supporting the development of computer-aided diagnosis systems for histological images.
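
As a hedged illustration of the heterogeneous-ensemble idea in this abstract, the sketch below combines the five classifier families named above over a feature subset capped at 53 descriptors. It is not the authors' pipeline: their two-stage selection uses ranking plus metaheuristics, for which a plain mutual-information ranking stands in here, and synthetic data replaces the histology descriptors.

```python
# Minimal sketch of a heterogeneous ensemble over a reduced feature set.
# NOT the authors' pipeline: a mutual-information ranking stands in for
# their ranking-plus-metaheuristics selection, and the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=200,
                           n_informative=30, random_state=0)

# Stand-in for stage-one selection: keep at most 53 features, matching
# the descriptor budget mentioned in the abstract.
selector = SelectKBest(mutual_info_classif, k=53)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average class probabilities across classifiers
)

model = make_pipeline(StandardScaler(), selector, ensemble)
print("CV accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```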

18 pages, 3508 KiB  
Article
Improving Mobile-Based Cervical Cytology Screening: A Deep Learning Nucleus-Based Approach for Lesion Detection
by Vladyslav Mosiichuk, Ana Sampaio, Paula Viana, Tiago Oliveira and Luís Rosado
Appl. Sci. 2023, 13(17), 9850; https://doi.org/10.3390/app13179850 - 31 Aug 2023
Cited by 1 | Viewed by 974
Abstract
Liquid-based cytology (LBC) plays a crucial role in the effective early detection of cervical cancer, contributing to substantially decreased mortality rates. However, the visual examination of microscopic slides is a challenging, time-consuming, and ambiguous task. Shortages of specialized staff and equipment are increasing interest in developing artificial intelligence (AI)-powered portable solutions to support screening programs. This paper presents a novel approach based on a RetinaNet model with a ResNet50 backbone to detect the nuclei of cervical lesions in mobile-acquired microscopic images of cytology samples, stratifying the lesions according to The Bethesda System (TBS) guidelines. This work was supported by a new dataset of images from LBC samples digitized with a portable smartphone-based microscope, encompassing nucleus annotations of 31,698 normal squamous cells and 1395 lesions. Several experiments were conducted to optimize the model’s detection performance, namely hyperparameter tuning, transfer learning, detected-class adjustments, and per-class score threshold optimization. The proposed nucleus-based methodology improved on the best baseline reported in the literature for detecting cervical lesions in microscopic images acquired exclusively with mobile devices coupled to the µSmartScope prototype, with per-class average precision, recall, and F1 score improvements of up to 17.6%, 22.9%, and 16.0%, respectively. Performance improvements were obtained by transferring knowledge from networks pre-trained on a smaller dataset closer to the target application domain, as well as by including normal squamous nuclei as a class detected by the model. Per-class tuning of the score threshold also yielded a model more suitable for supporting screening procedures, achieving F1 score improvements in most TBS classes. While further improvements are still required before the proposed approach can be used in a clinical context, this work reinforces the potential of AI-powered mobile-based solutions to support cervical cancer screening. Such solutions can significantly impact screening programs worldwide, particularly in areas with limited access and restricted healthcare resources.
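
The per-class score-threshold optimization mentioned above can be pictured with the short sketch below: for each class, candidate thresholds are swept on validation detections and the one maximizing F1 is kept. The detection model itself (RetinaNet/ResNet50) and the TBS classes are outside this sketch; the inputs, synthetic here, are assumed to be per-class confidence scores with binary match labels.

```python
# Hedged sketch of per-class score-threshold tuning on a validation set.
# Inputs are assumed: per-class detection confidences plus a 0/1 flag
# saying whether each detection matched a ground-truth box.
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(scores: np.ndarray, matched: np.ndarray) -> float:
    """Return the score cut-off that maximizes F1 for one class."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 19):
        pred = (scores >= t).astype(int)  # keep detections above t
        f1 = f1_score(matched, pred, zero_division=0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Synthetic stand-in for one class's validation detections.
rng = np.random.default_rng(0)
scores = rng.uniform(size=500)
matched = (scores + rng.normal(scale=0.2, size=500) > 0.6).astype(int)
print("tuned threshold:", best_threshold(scores, matched))
```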

17 pages, 6113 KiB  
Article
Non-Uniform-Illumination Image Enhancement Algorithm Based on Retinex Theory
by Xiu Ji, Shuanghao Guo, Hong Zhang and Weinan Xu
Appl. Sci. 2023, 13(17), 9535; https://doi.org/10.3390/app13179535 - 23 Aug 2023
Cited by 2 | Viewed by 1231
Abstract
To address the issues of fuzzy scene details, reduced definition, and poor visibility in images captured under non-uniform lighting conditions, this paper presents an algorithm for effectively enhancing such images. Firstly, an adaptive color balance method is employed to address the color differences in low-light images, ensuring a more uniform color distribution and yielding a low-light image with improved color consistency. Subsequently, the resulting image is transformed from the RGB space to the HSV space, wherein a multi-scale Gaussian function is combined with Retinex theory to accurately extract the lighting and reflection components. To further improve image quality, the lighting components are categorized into high-light and low-light areas based on their pixel mean values. The low-light areas are improved through an enhanced adaptive gamma correction algorithm, while the high-light areas are enhanced using the Weber–Fechner law. Each block area of the image is then weighted and fused, and the result is converted back to the RGB space, where a multi-scale detail enhancement algorithm further sharpens image details. Through comprehensive experiments comparing various methods in terms of subjective visual perception and objective quality metrics, the proposed algorithm convincingly demonstrates its ability to effectively enhance the brightness of non-uniformly illuminated areas. Moreover, it successfully retains details in high-light regions while minimizing the impact of non-uniform illumination on overall image quality.
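
A rough Python sketch of the Retinex-style decomposition described in this abstract is given below: a multi-scale Gaussian blur of the V channel approximates the illumination component, the reflectance is the residual, and a gamma curve lifts dark illumination regions. The scale choices, the single gamma (standing in for the paper's adaptive gamma plus Weber–Fechner treatment), and the file names are assumptions, not the authors' exact formulation.

```python
# Illustrative Retinex-style enhancement: illumination estimated by
# multi-scale Gaussian blurring, reflectance as the residual, and a
# simple gamma curve in place of the paper's adaptive corrections.
import cv2
import numpy as np

def enhance(bgr: np.ndarray, sigmas=(15, 80, 250), gamma=0.6) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0 + 1e-6  # epsilon avoids division by zero

    # Multi-scale Gaussian estimate of the illumination component.
    illum = np.mean([cv2.GaussianBlur(v, (0, 0), s) for s in sigmas], axis=0)
    reflect = v / illum  # Retinex: image = illumination * reflectance

    # Brighten dark illumination with a gamma curve (the paper instead
    # uses adaptive gamma for low-light areas and Weber-Fechner for
    # high-light areas; one gamma stands in for both here).
    illum_corrected = np.power(illum, gamma)

    hsv[..., 2] = np.clip(reflect * illum_corrected * 255.0, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

img = cv2.imread("night_scene.jpg")  # hypothetical input path
if img is not None:
    cv2.imwrite("enhanced.jpg", enhance(img))
```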

23 pages, 7654 KiB  
Article
DMA-Net: Decoupled Multi-Scale Attention for Few-Shot Object Detection
by Xijun Xie, Feifei Lee and Qiu Chen
Appl. Sci. 2023, 13(12), 6933; https://doi.org/10.3390/app13126933 - 8 Jun 2023
Cited by 1 | Viewed by 952
Abstract
As one of the most important fields in computer vision, object detection has undergone marked development in recent years. Generally, object detection requires many labeled samples for training, but it is not easy to collect and label samples in many specialized fields. In the case of few samples, general detectors typically exhibit overfitting and poor generalizability when recognizing unknown objects, and many few-shot object detection (FSOD) methods also fail to make good use of support information or to manage the problem of unwanted information exchange between the support branch and the query branch. To address these issues, we propose in this paper a novel framework called Decoupled Multi-scale Attention (DMA-Net), the core of which is the Decoupled Multi-scale Attention Module (DMAM), consisting of three primary parts: a multi-scale feature extractor, a multi-scale attention module, and a decoupled gradient module (DGM). DMAM performs multi-scale feature extraction and layer-to-layer information fusion, which uses support information more efficiently, while DGM reduces the impact of optimization-time information exchange between the two branches. DMA-Net can implement incremental FSOD, which makes it suitable for practical applications. Extensive experimental results demonstrate that DMA-Net achieves comparable results on generic FSOD benchmarks and, in the incremental FSOD setting in particular, state-of-the-art performance.
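
For a sense of what a multi-scale attention block can look like, the PyTorch sketch below fuses parallel dilated-convolution branches and reweights the fusion with channel attention. It is an assumption-laden approximation in the spirit of DMAM, not the published DMA-Net layers, and it omits the decoupled gradient module entirely.

```python
# Illustrative multi-scale attention block: parallel dilated convolutions
# at several receptive fields, fused and reweighted by channel attention.
# An approximation in the spirit of DMAM, not the published DMA-Net code.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel branches with different dilation rates (receptive fields).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # Squeeze-and-excitation-style channel attention over the fusion.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = sum(b(x) for b in self.branches)  # layer-to-layer fusion
        return x + fused * self.attn(fused)       # attention-weighted residual

feat = torch.randn(1, 64, 32, 32)
print(MultiScaleAttention(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```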

19 pages, 13408 KiB  
Article
EYOLOv3: An Efficient Real-Time Detection Model for Floating Object on River
by Lili Zhang, Zhiqiang Xie, Mengqi Xu, Yi Zhang and Gaoxu Wang
Appl. Sci. 2023, 13(4), 2303; https://doi.org/10.3390/app13042303 - 10 Feb 2023
Cited by 1 | Viewed by 1182
Abstract
At present, the surveillance of river floating objects in China is labor-intensive, time-consuming, and prone to missed detections, so a fast and accurate automatic detection method is necessary. Two-stage convolutional neural network models offer high detection accuracy but struggle to reach real-time speeds, while one-stage models are less time-consuming but less accurate. In response to these problems, we propose a one-stage object detection model, EYOLOv3, to achieve real-time, high-accuracy detection of floating objects in video streams. Firstly, we design a multi-scale feature extraction and fusion module to improve the feature extraction capability of the network. Secondly, a better clustering algorithm is used to analyze the size characteristics of floating objects and design the anchor boxes, enabling the network to detect objects more effectively. A focal loss function is then proposed to help the network overcome the sample imbalance problem, and finally, an improved NMS algorithm is proposed to address the problem of objects being incorrectly suppressed. Experiments show that the proposed model detects river floating objects efficiently and outperforms both classical object detection methods and more recent ones, realizing real-time floating-object detection in video streams.
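
The anchor-box design step mentioned above is commonly done by clustering the dataset's (width, height) pairs; the sketch below shows a standard k-means variant with a 1 − IoU assignment. The paper's “better clustering algorithm” may well differ, and the sample boxes here are synthetic, so treat this as an illustration of the general technique rather than the authors' method.

```python
# Sketch of anchor-box design by k-means clustering of (w, h) pairs with
# a 1 - IoU distance. A generic technique illustration, not necessarily
# the clustering variant used by EYOLOv3; the boxes are synthetic.
import numpy as np

def iou_wh(box: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between one (w, h) box and each anchor, both origin-aligned."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 100) -> np.ndarray:
    rng = np.random.default_rng(0)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor with the largest IoU
        # (i.e., the smallest 1 - IoU distance).
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for j in range(k):
            if np.any(assign == j):  # skip empty clusters
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sort by area

# Synthetic floating-object box sizes (width, height) in pixels.
boxes = np.abs(np.random.default_rng(1).normal(
    loc=[60, 40], scale=[30, 20], size=(500, 2)))
print(kmeans_anchors(boxes, k=9))
```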
