Advances in Digital Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (25 June 2023) | Viewed by 40795

Special Issue Editor


Prof. Dr. Zhengjun Liu
Guest Editor
School of Instrumentation Science and Engineering, Harbin Institute of Technology, Heilongjiang 150001, China
Interests: computational optical imaging; super-resolution imaging; coherent diffraction imaging; fractional Fourier transforms; image processing and analysis

Special Issue Information

Dear Colleagues,

Digital image processing is a highly useful technique for information acquisition, analysis, and application. It is related to the fields of vision, imaging, display, medicine, image understanding, virtual reality, and more. In particular, deep learning has been developed to advance image processing algorithms and has recently achieved great success in tasks traditionally handled by conventional techniques. The dimensionality of imaging data has also increased, as in hyperspectral images and video, and interest in digital image processing research has grown rapidly. The purpose of this Special Issue is to present the latest progress in relation to digital images and to offer new ways of thinking and future prospects for research on image processing. We welcome innovative work on digital images as contributions to this issue.

Prof. Dr. Zhengjun Liu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image processing
  • pattern recognition and analysis
  • visualization
  • image coding and compression
  • super-resolution imaging
  • image segmentation
  • 3D and surface reconstruction
  • radar image processing
  • sonar image processing
  • spectral analysis
  • image filtering
  • fast algorithms
  • data mining techniques
  • motion detection
  • video signal processing
  • image security
  • computational imaging

Published Papers (22 papers)

Research

14 pages, 4602 KiB  
Article
Improved Retinex-Theory-Based Low-Light Image Enhancement Algorithm
by Jiarui Wang, Hanjia Wang, Yu Sun and Jie Yang
Appl. Sci. 2023, 13(14), 8148; https://doi.org/10.3390/app13148148 - 13 Jul 2023
Cited by 1 | Viewed by 1057
Abstract
Researchers working on image processing have long had a hard time handling low-light images due to their low contrast, heavy noise, and low brightness. This paper presents an improved method that uses the Retinex theory to enhance low-light images, with a network model mainly composed of a Decom-Net and an Enhance-Net. Residual connectivity is fully utilized in both networks to reduce the possible loss of image details, and Enhance-Net introduces a positional pixel attention mechanism that directly incorporates the global information of the image. Specifically, Decom-Net decomposes the low-light image into illumination and reflectance maps, and Enhance-Net increases the brightness of the illumination map. Finally, via adaptive image fusion, the reflectance map and the enhanced illumination map are fused to obtain the final enhanced image. Experiments show better results in terms of both subjective visual quality and objective evaluation indicators. Compared to RetinexNet, the proposed method improves the full-reference evaluation metrics, with a 4.6% improvement in PSNR, a 1.8% improvement in SSIM, and a 10.8% improvement in LPIPS, and it achieves an average improvement of 17.3% in the no-reference evaluation metric NIQE.
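
For readers unfamiliar with the decomposition that Decom-Net and Enhance-Net learn end to end, the following is a minimal sketch of the classical, non-learned Retinex pipeline: illumination is estimated with a wide Gaussian blur, reflectance is the log-domain residual, and the illumination map is brightened before recombination. The file names and the sigma/gamma parameters are illustrative assumptions, not values from the paper.

    import cv2
    import numpy as np

    def retinex_decompose(img, sigma=50):
        # Estimate illumination with a wide Gaussian; reflectance is the
        # log-domain residual (classical single-scale Retinex).
        img = img.astype(np.float64) + 1.0          # avoid log(0)
        illumination = cv2.GaussianBlur(img, (0, 0), sigma)
        reflectance = np.log(img) - np.log(illumination)
        return illumination, reflectance

    def enhance(img, sigma=50, gamma=0.6):
        # Brighten the illumination map with a gamma curve, then fuse it
        # back with the reflectance map.
        illumination, reflectance = retinex_decompose(img, sigma)
        bright = np.power(illumination / illumination.max(), gamma) * 255.0
        out = np.exp(reflectance) * bright
        return np.clip(out, 0, 255).astype(np.uint8)

    low_light = cv2.imread("low_light.png")         # hypothetical input file
    cv2.imwrite("enhanced.png", enhance(low_light))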

14 pages, 4393 KiB  
Article
Hybrid Dark Channel Prior for Image Dehazing Based on Transmittance Estimation by Variant Genetic Algorithm
by Long Wu, Jie Chen, Shuyu Chen, Xu Yang, Lu Xu, Yong Zhang and Jianlong Zhang
Appl. Sci. 2023, 13(8), 4825; https://doi.org/10.3390/app13084825 - 12 Apr 2023
Viewed by 2704
Abstract
Image dehazing has always been one of the main areas of research in image processing. The traditional dark channel prior (DCP) algorithm has shortcomings such as incomplete fog removal and excessively dark images. In order to obtain high-quality haze-free images, a hybrid dark channel prior (HDCP) algorithm is proposed in this paper. HDCP first employs Retinex to remove the interference of the illumination component. A variant genetic algorithm (VGA) is then used to obtain the guidance image required by the guided filter to optimize the atmospheric transmittance. Finally, the modified dark channel prior algorithm is used to obtain the dehazed image. Compared with three other modified DCP algorithms, HDCP achieves the best subjective visual quality in haze removal and color fidelity. HDCP also shows superior objective indexes, namely mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and information entropy (E), for different haze degrees.
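
For context, here is a compact sketch of the baseline dark channel prior that HDCP modifies; the Retinex preprocessing and the VGA-driven guided-filter refinement are omitted, and the patch size and constants are standard textbook choices rather than values from the paper.

    import cv2
    import numpy as np

    def dark_channel(img, patch=15):
        # Per-pixel minimum over color channels, then a minimum filter
        # over a local patch (the "dark channel").
        min_rgb = img.min(axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        return cv2.erode(min_rgb, kernel)

    def dehaze(img, omega=0.95, t0=0.1, patch=15):
        # Baseline DCP recovery: J = (I - A) / max(t, t0) + A.
        img = img.astype(np.float64) / 255.0
        dark = dark_channel(img, patch)
        # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels
        n = max(1, int(dark.size * 0.001))
        idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
        A = img[idx].mean(axis=0)
        transmission = 1.0 - omega * dark_channel(img / A, patch)
        t = np.maximum(transmission, t0)[..., None]
        J = (img - A) / t + A
        return np.clip(J * 255, 0, 255).astype(np.uint8)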

34 pages, 2116 KiB  
Article
Novel Block Sorting and Symbol Prediction Algorithm for PDE-Based Lossless Image Compression: A Comparative Study with JPEG and JPEG 2000
by Časlav Livada, Tomislav Horvat and Alfonzo Baumgartner
Appl. Sci. 2023, 13(5), 3152; https://doi.org/10.3390/app13053152 - 28 Feb 2023
Cited by 2 | Viewed by 1113
Abstract
In this paper, we present a novel compression method based on partial differential equations, complemented by block sorting and symbol prediction. Block sorting is performed using the Burrows–Wheeler transform, while symbol prediction is performed using the context mixing method. After these transformations, a range coder performs the final lossless compression. The objective and subjective quality evaluation of the reconstructed image illustrates the efficiency of this new compression method, which is compared with the current standards, JPEG and JPEG 2000.
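
The block-sorting stage rearranges data so that equal symbols cluster, which makes the subsequent prediction and entropy coding far more effective. Below is a naive sketch of the Burrows–Wheeler transform for illustration only; production implementations use suffix arrays rather than explicit rotations.

    def bwt(data: bytes) -> tuple[bytes, int]:
        # Sort all cyclic rotations of the input and emit the last column
        # plus the index of the original rotation. O(n^2 log n) -- fine
        # for illustration, too slow for real blocks.
        n = len(data)
        rotations = sorted(range(n), key=lambda i: data[i:] + data[:i])
        last_column = bytes(data[(i - 1) % n] for i in rotations)
        return last_column, rotations.index(0)

    transformed, key = bwt(b"banana")
    print(transformed, key)   # equal symbols now cluster, aiding prediction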

18 pages, 777 KiB  
Article
Secure Image Signal Transmission Scheme Using Poly-Polarization Filtering and Orthogonal Matrix
by Zhangkai Luo, Zhongmin Pei, Chengwei Yang, Zhengjun Liu and Hang Chen
Appl. Sci. 2023, 13(4), 2513; https://doi.org/10.3390/app13042513 - 15 Feb 2023
Cited by 1 | Viewed by 911
Abstract
In this paper, a novel secure image signal transmission scheme is proposed for wireless systems, in which poly-polarization filtering and an orthogonal matrix (PPF-OM) are combined to protect the image signal while eliminating the polarization-dependent loss (PDL) caused by the non-ideal wireless channel. The scheme divides the image information sequence into two parts, modulates them, and reshapes the results into symbol matrices of the same size. Two sets of polarization states (PSs) and orthogonal matrices (OMs) are then designed to process the symbols, enhancing information protection and eliminating the PDL. Legitimate users can apply the shared PSs and OMs step by step to recover the information, whereas for eavesdroppers the received signals are random symbols that are difficult to demodulate. The bit error rate and the secrecy rate are derived to evaluate the performance of the PPF-OM scheme. Finally, simulations demonstrate the superior performance of the PPF-OM scheme in enhancing information security and eliminating the PDL.

14 pages, 3959 KiB  
Article
Image Interpolation Based on Spiking Neural Network Model
by Mürsel Ozan İncetaş
Appl. Sci. 2023, 13(4), 2438; https://doi.org/10.3390/app13042438 - 14 Feb 2023
Cited by 2 | Viewed by 1453
Abstract
Image interpolation is used in many areas of image processing. Many techniques developed to date have been successful in both preserving edges and increasing image quality; however, these techniques generally detect edges with gradient-based linear calculations. In this study, spiking neural networks (SNNs), which are known to successfully simulate the human visual system (HVS), are used instead of gradients to detect edge pixels. With the help of the proposed SNN-based model, the pixels marked as edges are interpolated with a 1D directional filter, while the standard bicubic interpolation technique is used for the remaining pixels. The success of the proposed method is compared to that of known methods using various metrics, and the experimental results show that the proposed method is more successful than the other methods.

15 pages, 1837 KiB  
Article
CFSR: Coarse-to-Fine High-Speed Motion Scene Reconstruction with Region-Adaptive-Based Spike Distinction
by Shangdian Du, Na Qi, Qing Zhu, Wei Xu and Shuang Jin
Appl. Sci. 2023, 13(4), 2424; https://doi.org/10.3390/app13042424 - 13 Feb 2023
Viewed by 1129
Abstract
As a novel bio-inspired vision sensor, spike cameras offer significant advantages over conventional cameras with a fixed low sampling rate, recording fast-moving scenes by firing a continuous stream of spikes. Reconstruction methods such as Texture from ISI (TFI), Texture from Playback (TFP), and Texture from Adaptive threshold (TFA) produce undesirable noise or motion blur. A spiking neural model can distinguish dynamic from static spikes before reconstruction, but the reconstruction of motion details remains unsatisfactory even with the advanced TFA method. To address this issue, we propose a coarse-to-fine high-speed motion scene reconstruction (CFSR) method with a region-adaptive-based spike distinction (RASE) framework to reconstruct the full texture of natural scenes from the spike data. We utilize the spike distribution of dynamic and static regions in RASE to distinguish the spikes of different moments. After distinction, TFI, TFP, and patch matching are exploited for image reconstruction in the respective regions, which does not introduce unexpected noise or motion blur. Experimental results on the PKU-SPIKE-RECON dataset demonstrate that our CFSR method outperforms state-of-the-art approaches in terms of objective and subjective quality.
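
To make the TFI baseline concrete: a spike camera pixel fires whenever accumulated light crosses a threshold, so brightness is roughly inversely proportional to the inter-spike interval (ISI). The sketch below reconstructs one frame from a binary spike array under that assumption; the (T, H, W) layout and the constant c are illustrative, not the paper's, and the loop is written for clarity rather than speed.

    import numpy as np

    def texture_from_isi(spikes, c=255.0):
        # spikes: (T, H, W) binary array; brightness ~ c / inter-spike
        # interval straddling a reference time step.
        T, H, W = spikes.shape
        ref = T // 2
        img = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                t = np.flatnonzero(spikes[:, y, x])
                nxt, prv = t[t >= ref], t[t < ref]
                if len(nxt) and len(prv):
                    img[y, x] = c / (nxt[0] - prv[-1])
        return np.clip(img, 0, 255)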

19 pages, 15156 KiB  
Article
Retinex-Based Relighting for Night Photography
by Sou Oishi and Norishige Fukushima
Appl. Sci. 2023, 13(3), 1719; https://doi.org/10.3390/app13031719 - 29 Jan 2023
Cited by 6 | Viewed by 1709
Abstract
The lighting up of buildings is one form of entertainment that makes a city more colorful, and photographers sometimes change this lighting using photo-editing applications. This paper proposes a method for automatically performing such changes that follows the Retinex theory. Retinex theory indicates that the complex scenes caught by the human visual system are affected by surrounding colors, and Retinex-based image processing uses these characteristics to generate images. Our proposed method follows this approach. First, we propose a method for extracting a relighting saliency map using Retinex with edge-preserving filtering. Second, we propose a sampling method to specify the lighting area. Finally, we composite the additional light to match human visual perception. Experimental results show that the proposed sampling method succeeds in keeping the illuminated points in bright locations and equally spaced apart. In addition, the proposed diffusion methods can enhance nighttime skyline photographs with various expressions. Finally, new light can be added in accordance with Retinex theory so as to represent perceptual color.

18 pages, 3676 KiB  
Article
A Novel Interval Iterative Multi-Thresholding Algorithm Based on Hybrid Spatial Filter and Region Growing for Medical Brain MR Images
by Yuncong Feng, Yunfei Liu, Zhicheng Liu, Wanru Liu, Qingan Yao and Xiaoli Zhang
Appl. Sci. 2023, 13(2), 1087; https://doi.org/10.3390/app13021087 - 13 Jan 2023
Cited by 4 | Viewed by 1469
Abstract
Medical image segmentation is widely used in clinical medicine, and the accuracy of the segmentation algorithm affects diagnosis results and treatment plans. However, manual segmentation of medical images requires extensive experience and knowledge and is both time-consuming and labor-intensive. To overcome these problems, we propose a novel interval iterative multi-thresholding segmentation algorithm based on a hybrid spatial filter and region growing for medical brain MR images. First, a hybrid spatial filter is designed and applied to the original image, which makes full use of spatial information while denoising. Second, an interval iterative Otsu method based on region growing is proposed to segment the original image and its filtered layer; the initial thresholds are obtained quickly by the region growing algorithm, which reduces the time complexity, and the interval iterative algorithm is used to optimize the thresholds. Finally, a weighted strategy refines the segmentation results. The segmentation results of the proposed algorithm outperform those of the comparison algorithms in both subjective and objective evaluations: subjectively, the obtained segmentations have clear edges and complete, consistent regions; objectively, using the uniformity measure (U), the algorithm achieved an average U value of 0.9854 across all test images, significantly higher than the comparison algorithms. The proposed algorithm can segment medical images well and expand doctors' ability to utilize medical images.
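
At the core of this family of methods is Otsu's criterion: choose the threshold that maximizes the between-class variance of the gray-level histogram. A minimal single-threshold version is sketched below; the paper iterates this idea over intervals seeded by region growing, which is not reproduced here.

    import numpy as np

    def otsu_threshold(gray):
        # Classic Otsu on an 8-bit image: scan all candidate thresholds
        # and maximize between-class variance.
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        omega = np.cumsum(p)                    # class-0 probability
        mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
        mu_total = mu[-1]
        denom = omega * (1.0 - omega)
        denom[denom == 0] = np.finfo(float).eps  # guard empty classes
        sigma_b2 = (mu_total * omega - mu) ** 2 / denom
        return int(np.argmax(sigma_b2))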

11 pages, 1528 KiB  
Article
Efficient Edge Detection Method for Focused Images
by Agnieszka Lisowska
Appl. Sci. 2022, 12(22), 11668; https://doi.org/10.3390/app122211668 - 17 Nov 2022
Cited by 2 | Viewed by 1629
Abstract
In many areas of image processing, we deal with focused images, in which the most important object is in focus and the background is smooth. Finding edges in such images is difficult, since state-of-the-art edge detection methods assume that edges are sharp; smooth edges go undetected. These methods can therefore detect the main object's edges but skip the background, although we are often interested in detecting the background as well. In this paper, we therefore propose an edge detection method that can efficiently detect the edges of both a focused object and a smooth background alike. The proposed method is based on local use of the k-means algorithm from machine learning (ML), introduced through the proposed enhanced image filtering: k-means is applied within a sliding window in such a way that filtering yields a square image area rather than a single pixel, as in classical filtering. The results of the proposed edge detection method were compared with well-known representatives of different approaches to edge detection: pointwise, geometrical, and ML-based.
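
A minimal sketch of the local k-means idea, assuming grayscale input: within each sliding window, two-cluster k-means separates the intensities, and a pixel is flagged as an edge candidate when the cluster centers are well separated, which also catches smooth transitions. The window size and contrast threshold are illustrative assumptions, and the per-pixel loop is written for clarity rather than speed.

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_edge_map(gray, window=7, min_contrast=10.0):
        # Flag a pixel as an edge when 2-means finds two well-separated
        # intensity clusters in its neighborhood.
        h, w = gray.shape
        r = window // 2
        edges = np.zeros((h, w), dtype=bool)
        for y in range(r, h - r):
            for x in range(r, w - r):
                patch = gray[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 1)
                km = KMeans(n_clusters=2, n_init=3, random_state=0).fit(patch.astype(float))
                c0, c1 = sorted(km.cluster_centers_.ravel())
                edges[y, x] = (c1 - c0) > min_contrast
        return edges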

15 pages, 6972 KiB  
Article
A Novel Method for Unexpected Obstacle Detection in the Traffic Environment Based on Computer Vision
by Wenyan Ci, Tianxiang Xu, Runze Lin and Shan Lu
Appl. Sci. 2022, 12(18), 8937; https://doi.org/10.3390/app12188937 - 06 Sep 2022
Cited by 3 | Viewed by 1832
Abstract
Obstacle detection is the basis for the Advanced Driving Assistance System (ADAS) to take obstacle avoidance measures, but detecting unexpected obstacles on the road is an essential and challenging task. To this end, an unexpected obstacle detection method based on computer vision is proposed. We first present two independent methods for the detection of unexpected obstacles: a semantic segmentation method that can highlight the contextual information of unexpected obstacles on the road, and an open-set recognition algorithm that can distinguish known and unknown classes according to the degree of uncertainty. The detection results of the two methods are then input into a Bayesian framework, in the form of probabilities, for the final decision. Since semantic and uncertainty information differ substantially, the fusion results reflect the respective advantages of the two methods. The proposed method is tested on the Lost and Found dataset and evaluated against various obstacle detection methods and fusion strategies. The results show that our method improves the detection rate while maintaining a relatively low false-positive rate. Especially when detecting unexpected long-distance obstacles, the fusion method outperforms the independent methods while keeping a high detection rate.
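
One standard way to realize such a probabilistic fusion is a product of per-pixel posterior odds. The sketch below assumes the two detectors are conditionally independent given the true class and were calibrated with equal class priors; the paper's exact Bayesian framework may differ.

    import numpy as np

    def bayes_fuse(p_semantic, p_openset, prior=0.5):
        # Naive-Bayes fusion of two per-pixel obstacle probabilities:
        # multiply the prior odds by each detector's likelihood ratio.
        p1 = np.clip(p_semantic, 1e-6, 1 - 1e-6)
        p2 = np.clip(p_openset, 1e-6, 1 - 1e-6)
        odds = (prior / (1 - prior)) * (p1 / (1 - p1)) * (p2 / (1 - p2))
        return odds / (1 + odds)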

20 pages, 12576 KiB  
Article
A Novel Signature and Authentication Cryptosystem for Hyperspectral Image by Using Triangular Association Encryption Algorithm in Gyrator Domains
by Zhonglin Yang, Yanhua Cao, Shutian Liu, Camel Tanougast, Walter Blondel, Zhengjun Liu and Hang Chen
Appl. Sci. 2022, 12(15), 7649; https://doi.org/10.3390/app12157649 - 29 Jul 2022
Cited by 3 | Viewed by 1178
Abstract
A novel optical signature and authentication cryptosystem is proposed by applying a triangular association encryption algorithm (TAEA) and the 3D Arnold transform in Gyrator domains. First, the triangular association encryption algorithm (TAEA) is designed, which turns the diffusion of pixel values within bands into diffusion both within and between bands. In addition, an image signature function is incorporated into the proposed cryptosystem: without the image signature, the original image cannot be restored even if all of the keys are obtained. Moreover, an image integrity authentication function is provided to prevent pixel values from being tampered with. Through numerical simulation of various types of attacks, the effectiveness and capability of the proposed hyperspectral data signature and authentication cryptosystem are verified.
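
For readers unfamiliar with the Arnold transform used for scrambling, its 2D core is a modular shear of pixel coordinates; the paper applies a 3D variant that also permutes across spectral bands. A sketch of the 2D case follows, with an illustrative iteration count.

    import numpy as np

    def arnold_scramble(img, iterations=5):
        # 2D Arnold cat map on a square N x N image:
        # (x, y) -> (x + y, x + 2y) mod N, applied repeatedly.
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = img.copy()
        for _ in range(iterations):
            out = out[(x + y) % n, (x + 2 * y) % n]
        return out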

13 pages, 10538 KiB  
Article
Comparison of Human Intestinal Parasite Ova Segmentation Using Machine Learning and Deep Learning Techniques
by Chee Chin Lim, Norhanis Ayunie Ahmad Khairudin, Siew Wen Loke, Aimi Salihah Abdul Nasir, Yen Fook Chong and Zeehaida Mohamed
Appl. Sci. 2022, 12(15), 7542; https://doi.org/10.3390/app12157542 - 27 Jul 2022
Cited by 3 | Viewed by 1613
Abstract
Helminthiasis is one of the most serious health problems in the world and frequently occurs in children, especially in unhygienic conditions. Manual diagnosis is time-consuming and challenging, especially when there are a large number of samples. An automated system is acknowledged as a quick and easy technique to assess helminth sample images, offering direct visibility on the computer monitor without the need for examination under a microscope. This paper therefore compares human intestinal parasite ova segmentation performance between machine learning and deep learning. Four types of helminth ova are tested: Ascaris lumbricoides ova (ALO), Enterobius vermicularis ova (EVO), hookworm ova (HWO), and Trichuris trichiura ova (TTO). The fuzzy c-means (FCM) technique is used for machine learning segmentation, while a convolutional neural network (CNN) is used for deep learning segmentation. The performance of segmentation algorithms based on the FCM and CNN techniques is investigated and compared to select the best segmentation procedure for helminth ova detection. The results reveal that the accuracy obtained for each helminth species is in the range of 97% to 100% for both techniques. However, IoU analysis showed that the ResNet-based CNN performed better than FCM for ALO, EVO, and TTO, with values of 75.80%, 55.48%, and 77.06%, respectively. Therefore, segmentation through deep learning is more suitable for segmenting human intestinal parasite ova.
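
As a reference point for the machine learning side, here is a minimal fuzzy c-means sketch for intensity-based segmentation, alternating soft-membership and weighted-center updates. The cluster count, fuzzifier m, and iteration budget are generic choices, not the paper's settings.

    import numpy as np

    def fuzzy_c_means(pixels, c=2, m=2.0, iters=50, seed=0):
        # Alternate soft memberships u and weighted cluster centers.
        rng = np.random.default_rng(seed)
        x = pixels.reshape(-1, 1).astype(float)
        u = rng.random((len(x), c))
        u /= u.sum(axis=1, keepdims=True)
        for _ in range(iters):
            w = u ** m
            centers = (w.T @ x) / w.sum(axis=0)[:, None]   # (c, 1)
            d = np.abs(x - centers.T) + 1e-9               # (n, c) distances
            u = 1.0 / (d ** (2 / (m - 1)))                 # standard FCM update
            u /= u.sum(axis=1, keepdims=True)
        return u, centers.ravel()

    # Usage sketch: hard-label each pixel by its strongest membership:
    # u, centers = fuzzy_c_means(gray); labels = u.argmax(axis=1).reshape(gray.shape)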

15 pages, 4075 KiB  
Article
Image Interpolation with Regional Gradient Estimation
by Zuhang Jia and Qingjiu Huang
Appl. Sci. 2022, 12(15), 7359; https://doi.org/10.3390/app12157359 - 22 Jul 2022
Cited by 1 | Viewed by 2118
Abstract
This paper proposes an image interpolation method with regional gradient estimation (GEI) to address the problem of nonlinear interpolation methods not sufficiently considering non-edge pixels. First, the approach expands on the edge diffusion idea used in CGI and proposes a regional gradient estimation strategy to improve gradient calculation in the CGI method. Next, the gradient value is used to determine whether a pixel is an edge pixel; a 1D directional filter is then employed to process edge pixels, while non-edge pixels are interpolated with a 2D directionless filter. Finally, we experimented with various representative interpolation methods for grayscale and color images, including the one presented in this paper, and compared them in terms of subjective results, objective criteria, and computational complexity. The experimental results showed that GEI performed better than the other methods with regard to visual effect, objective criteria, and computational complexity.

11 pages, 1886 KiB  
Article
An Improved Algorithm for Low-Light Image Enhancement Based on RetinexNet
by Hao Tang, Hongyu Zhu, Huanjie Tao and Chao Xie
Appl. Sci. 2022, 12(14), 7268; https://doi.org/10.3390/app12147268 - 19 Jul 2022
Cited by 14 | Viewed by 2166
Abstract
Due to the influence of the environment and the limits of optical equipment, low-light images suffer from problems such as low brightness, high noise, low contrast, and color distortion, which greatly affect their visual perception and downstream image understanding tasks. In this paper, we take advantage of the independence of the YCbCr color channels and apply RetinexNet to the luminance channel (Y) only, reducing color distortion in the enhanced images. Meanwhile, to suppress the noise generated during enhancement, the enhanced image is also denoised. Finally, the original color and the enhanced brightness are recombined along the channel dimension, converted back to the RGB color space, and adjusted to produce the final result. The proposed algorithm is compared with other recently published counterparts on the LOL dataset. The experimental results demonstrate that the proposed algorithm achieves better performance in terms of both quantitative metrics and visual quality.
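
The channel-splitting idea is easy to demonstrate without the learned enhancer. In this sketch, CLAHE stands in for the RetinexNet-based Y-channel enhancement and a generic denoiser for the paper's denoising step; only the luminance is modified, so the chrominance (and thus the color) is preserved. Note that OpenCV orders the channels Y, Cr, Cb.

    import cv2

    def enhance_y_channel(bgr, clip_limit=2.0, tile=8):
        # Enhance and denoise only the luminance; keep chrominance intact.
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        y, cr, cb = cv2.split(ycrcb)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
        y_enhanced = clahe.apply(y)
        y_denoised = cv2.fastNlMeansDenoising(y_enhanced, None, h=7)
        out = cv2.merge([y_denoised, cr, cb])
        return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)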

18 pages, 12410 KiB  
Article
Potential of Deep Learning Methods for Deep Level Particle Characterization in Crystallization
by Janine Lins, Thomas Harweg, Frank Weichert and Kerstin Wohlgemuth
Appl. Sci. 2022, 12(5), 2465; https://doi.org/10.3390/app12052465 - 26 Feb 2022
Cited by 6 | Viewed by 1751
Abstract
Crystalline particle properties, which are defined throughout the crystallization process chain, are strongly tied to the quality of the final product, creating the need for detailed particle characterization. The most important characteristics are size, shape, and purity, which are influenced by agglomeration. A pure size determination is therefore often insufficient, and a deeper evaluation of agglomerates and of primary crystals bound in agglomerates is desirable as a basis for increasing the quality of crystalline products. We present a promising deep learning approach for particle characterization in crystallization that minimizes interactions and processing steps in an end-to-end fashion. Based on instance segmentation, all crystals, including single crystals, agglomerates, and primary crystals within agglomerates, are detected and classified with pixel-level accuracy. The deep learning approach shows superior performance to previous image analysis methods and reaches a new level of detail. In experimental studies, L-alanine is crystallized from aqueous solution. A detailed description of the size and number of all particles, including primary crystals, is provided, and characteristic measures for the level of agglomeration are given. This can lead to a better process understanding and has the potential to serve as a cornerstone for kinetic studies.

13 pages, 7433 KiB  
Article
Image Reconstruction Using Autofocus in Single-Lens System
by Xuyang Zhou, Xiu Wen, Yu Ji, Yutong Li, Shutian Liu and Zhengjun Liu
Appl. Sci. 2022, 12(3), 1378; https://doi.org/10.3390/app12031378 - 27 Jan 2022
Cited by 6 | Viewed by 2173
Abstract
To reconstruct the wavefront in a single-lens coherent diffraction imaging (CDI) system, we propose a closed-loop cascaded iterative engine (CIE) algorithm based on the known information of the imaging planes. Precise knowledge of the diffraction distance is an important prerequisite for perfect reconstruction of samples, so for coherent diffraction imaging with a lens, autofocus is investigated to accurately determine the object distance and image distance. For the case where only the object distance is unknown, a diffuser is used to scatter the coherent beam for speckle illumination, improving the performance of autofocus; the optimal object distance is obtained stably and robustly by combining speckle imaging with clarity evaluation functions. SSIM and MSE, using the average pixel value of the reconstructed data set as a reference, are applied to two-unknown-distance autofocus. Simulation and experimental results are presented to prove the feasibility of the CIE and the proposed auto-focusing method.
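
The search itself is simple once a clarity evaluation function is fixed. The sketch below scans candidate distances and keeps the one whose reconstruction scores sharpest; variance of the Laplacian is one common clarity function (the paper combines several with speckle imaging), and `reconstruct` is a placeholder for the numerical diffraction propagation.

    import cv2
    import numpy as np

    def focus_score(img):
        # A common clarity evaluation function: variance of the Laplacian
        # (higher means sharper).
        return cv2.Laplacian(img.astype(np.float64), cv2.CV_64F).var()

    def autofocus(reconstruct, distances):
        # Reconstruct at each candidate distance, keep the sharpest.
        # `reconstruct` is a user-supplied function: distance -> image.
        scores = [focus_score(reconstruct(d)) for d in distances]
        return distances[int(np.argmax(scores))]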

17 pages, 5069 KiB  
Article
Improved Training of CAE-Based Defect Detectors Using Structural Noise
by Reina Murakami, Valentin Grave, Osamu Fukuda, Hiroshi Okumura and Nobuhiko Yamaguchi
Appl. Sci. 2021, 11(24), 12062; https://doi.org/10.3390/app112412062 - 17 Dec 2021
Cited by 1 | Viewed by 1766
Abstract
The appearance of products is important to companies, as it reflects the quality of their manufacture to customers. Nowadays, visual inspection is conducted by human inspectors; this research attempts to automate the process using convolutional autoencoders (CAE). Our models were trained using images of non-defective parts. Previous research on autoencoders has reported that the accuracy of image regeneration can be improved by adding noise to the training dataset, but no extensive analysis of the noise factor has been done. Our method therefore compares the effects of two different noise patterns on model efficiency: Gaussian noise and noise made of a known structure. The test datasets comprised "defective" parts. Across the experiments, the precision of the CAE mostly improved when noisy data were used during the training phases, and the best results were obtained with structural noise, made of defined shapes randomly corrupting the training data. Furthermore, the models were able to process test data with slightly different positions and rotations compared to those in the training dataset. However, shortcomings appeared when "regular" spots (in the training data) and "defective" spots (in the test data) partially or totally overlapped.
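
A minimal sketch of what structural-noise injection can look like, assuming the training images are numpy arrays larger than the shape size: randomly placed rectangles of random intensity corrupt each training sample, in contrast to i.i.d. Gaussian noise. The shape type, count, and sizes are illustrative assumptions.

    import numpy as np

    def add_structural_noise(img, n_shapes=5, max_size=12, seed=None):
        # Corrupt a copy of the image with random rectangles of random
        # intensity (one simple form of "structural" noise).
        rng = np.random.default_rng(seed)
        noisy = img.copy()
        h, w = img.shape[:2]
        for _ in range(n_shapes):
            sh, sw = rng.integers(2, max_size, size=2)
            y = rng.integers(0, h - sh)
            x = rng.integers(0, w - sw)
            noisy[y:y + sh, x:x + sw] = rng.integers(0, 256)
        return noisy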

16 pages, 11694 KiB  
Article
Data-Driven Convolutional Model for Digital Color Image Demosaicing
by Francesco de Gioia and Luca Fanucci
Appl. Sci. 2021, 11(21), 9975; https://doi.org/10.3390/app11219975 - 25 Oct 2021
Viewed by 2126
Abstract
Modern digital cameras use a specific arrangement of color filters, the Color Filter Array, to sample the light wavelengths corresponding to visible colors. The most common Color Filter Array is the Bayer filter, which samples only one color per pixel. To recover the full-resolution image, an interpolation algorithm can be used; this process is called demosaicing, and it is one of the first processing stages of a digital imaging pipeline. We introduce a novel data-driven model for demosaicing that takes into account the different requirements for reconstructing the image's luma and chrominance channels. The final model is a parallel composition of two reconstruction networks with individual architectures, trained with distinct loss functions. To address overfitting, we prepared a dataset containing groups of patches that share common chromatic and spectral characteristics. We report the reconstruction error on noise-free images and measure the effect of random noise and quantization noise on the demosaicing reconstruction. To test the model's performance, we implemented the network on an NVIDIA Jetson Nano, obtaining an end-to-end running time of under one second for a full-frame 12-Mpixel image.
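
To make the problem setup concrete, the sketch below simulates Bayer sampling from a full-color image and then applies OpenCV's classical (non-learned) demosaicing as a baseline. The file name is hypothetical, and Bayer pattern naming conventions vary between vendors and OpenCV, so the 2x2 layout shown is only one possibility.

    import cv2
    import numpy as np

    def to_bayer(bgr):
        # Keep one color per pixel in a 2x2 tile pattern (one possible
        # Bayer layout; check your sensor's actual arrangement).
        h, w = bgr.shape[:2]
        raw = np.zeros((h, w), dtype=bgr.dtype)
        raw[0::2, 0::2] = bgr[0::2, 0::2, 0]   # blue
        raw[0::2, 1::2] = bgr[0::2, 1::2, 1]   # green
        raw[1::2, 0::2] = bgr[1::2, 0::2, 1]   # green
        raw[1::2, 1::2] = bgr[1::2, 1::2, 2]   # red
        return raw

    bgr = cv2.imread("input.png")               # hypothetical input file
    raw = to_bayer(bgr)
    baseline = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)   # classical interpolation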

10 pages, 5720 KiB  
Article
Blind Image Separation Method Based on Cascade Generative Adversarial Networks
by Fei Jia, Jindong Xu, Xiao Sun, Yongli Ma and Mengying Ni
Appl. Sci. 2021, 11(20), 9416; https://doi.org/10.3390/app11209416 - 11 Oct 2021
Cited by 2 | Viewed by 1345
Abstract
To solve the challenge of single-channel blind image separation (BIS) caused by unknown prior knowledge during the separation process, we propose a BIS method based on cascaded generative adversarial networks (GANs). To ensure that the proposed method performs well in different scenarios and to address the problem of an insufficient number of training samples, a synthesis network is added to the separation network. The method is composed of two GANs: a U-shaped GAN (UGAN), which learns image synthesis, and a pixel-to-attention GAN (PAGAN), which learns image separation; the two networks jointly complete the task of image separation. UGAN uses the unpaired mixed image and the unmixed image to learn the mixing style, thereby generating images with "true" mixing characteristics, which addresses the insufficient number of training samples for the PAGAN. A self-attention mechanism is added to the PAGAN to quickly extract important features from the image data. The experimental results show that the proposed method achieves good results on both synthetic image datasets and real remote sensing image datasets, and it can be used for image separation in different scenarios that lack prior knowledge and training samples.

23 pages, 8983 KiB  
Article
An Advanced AFWMF Model for Identifying High Random-Valued Impulse Noise for Image Processing
by Jieh-Ren Chang, You-Shyang Chen, Chih-Min Lo and Huan-Chung Chen
Appl. Sci. 2021, 11(15), 7037; https://doi.org/10.3390/app11157037 - 30 Jul 2021
Cited by 1 | Viewed by 1402
Abstract
In this study, a novel adaptive fuzzy weighted mean filter (AFWMF) model based on the directional median technique and fuzzy inference is presented for restoring images corrupted by high-ratio random-valued noise. This study aims not only to obtain information from each direction of the filtering window but also to use the information from every pixel of the filtering window. To preserve details and textures for better restoration in high-noise cases, the directional medians are used to dynamically build the membership function in fuzzy inference; fuzzy inference then yields a weight window, corresponding to the filtering window, that represents the importance of valuable pixels, and the restored pixel is computed as the weighted mean of the filtering window under this weight window. The new AFWMF model significantly improves the peak signal-to-noise ratio (PSNR) while preserving detail for noise densities in the range of 20–70% on five well-known test images. In extensive experiments, this study also shows better performance on the proposed peak signal-to-removal-noise ratio (PSRNR) and in psycho-visual tests than the other listed filtering methods, and the AFWMF model achieves a better structural similarity index measure (SSIM) value as well. Conclusively, two interesting and meaningful findings are identified: (1) the proposed AFWMF model is generally the best of the 10 listed filtering methods in terms of both the PSNR and SSIM values; and (2) different impulse noise densities call for different filtering methods, so identifying an appropriate filtering model for various images and noise densities is an important and interesting issue.
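
A much-simplified cousin of this idea, to show the mechanics of a fuzzy weighted mean: each neighbor's weight is a Gaussian membership of its distance to the window median, and the output is the weighted mean. The full AFWMF instead derives its memberships from directional medians via fuzzy inference; the window size and spread below are illustrative.

    import numpy as np

    def fuzzy_weighted_mean_filter(img, window=5, spread=20.0):
        # Weight each neighbor by how close it is to the window median,
        # then take the weighted mean (loop written for clarity).
        h, w = img.shape
        r = window // 2
        padded = np.pad(img.astype(float), r, mode="reflect")
        out = np.empty((h, w), dtype=float)
        for y in range(h):
            for x in range(w):
                patch = padded[y:y + window, x:x + window]
                med = np.median(patch)
                weights = np.exp(-((patch - med) ** 2) / (2 * spread ** 2))
                out[y, x] = (weights * patch).sum() / weights.sum()
        return np.clip(out, 0, 255).astype(img.dtype)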

16 pages, 4318 KiB  
Article
Binary Ghost Imaging Based on the Fuzzy Integral Method
by Xu Yang, Jiemin Hu, Long Wu, Lu Xu, Wentao Lyu, Chenghua Yang and Wei Zhang
Appl. Sci. 2021, 11(13), 6162; https://doi.org/10.3390/app11136162 - 02 Jul 2021
Viewed by 1471
Abstract
The reconstruction quality of binary ghost imaging depends on the speckle binarization process. In order to obtain better binarized speckle and improve the reconstruction quality of binary ghost imaging, a local adaptive binarization method based on the fuzzy integral is proposed in this study. The proposed binarization process has three steps: first, the integral image of the speckle is calculated with the summed-area table algorithm; second, the fuzzy integral image is calculated through the discrete Choquet integral; finally, the binarization threshold for each pixel of the speckle is selected based on the calculated fuzzy integral result. The experimental results verify the feasibility of the proposed method; compared with other methods, both qualitatively and quantitatively, it achieves high performance in reconstructing the target image.
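
The summed-area table is what makes per-pixel local thresholds cheap: any window sum costs four lookups. The sketch below implements plain local-mean binarization on that basis; the paper replaces the plain local mean with a fuzzy (Choquet) integral, which is not reproduced here.

    import numpy as np

    def integral_image(img):
        # Summed-area table with a zero border: S[y, x] = sum of img[:y, :x].
        return np.pad(np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1),
                      ((1, 0), (1, 0)))

    def local_binarize(img, window=15, bias=1.0):
        # Threshold each pixel against the mean of its local window,
        # computed in O(1) per pixel from the summed-area table.
        h, w = img.shape
        r = window // 2
        S = integral_image(img)
        y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
        x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
        Y0, X0 = np.meshgrid(y0, x0, indexing="ij"); Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
        area = (Y1 - Y0) * (X1 - X0)
        local_sum = S[Y1, X1] - S[Y0, X1] - S[Y1, X0] + S[Y0, X0]
        return (img.astype(np.float64) * area > bias * local_sum).astype(np.uint8)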

Review

20 pages, 2671 KiB  
Review
Review: A Survey on Objective Evaluation of Image Sharpness
by Mengqiu Zhu, Lingjie Yu, Zongbiao Wang, Zhenxia Ke and Chao Zhi
Appl. Sci. 2023, 13(4), 2652; https://doi.org/10.3390/app13042652 - 18 Feb 2023
Cited by 6 | Viewed by 3783
Abstract
Establishing an accurate objective evaluation metric of image sharpness is crucial for image analysis, recognition, and quality measurement. In this review, we highlight recent advances in no-reference image quality assessment research, divide the reported algorithms into four groups (spatial domain-based methods, spectral domain-based methods, learning-based methods, and combination methods), and outline the advantages and disadvantages of each group. Furthermore, we conduct a brief bibliometric study to provide an overview of the current trends from 2013 to 2021 and compare the performance of representative algorithms on public datasets. Finally, we describe the shortcomings of and future challenges in the current studies.
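
For reference, here are two classic spatial-domain sharpness scores of the kind surveyed in this group: the Tenengrad score and the variance of the Laplacian, both of which rise as an image gets sharper. These are generic textbook measures, not specific algorithms from the review.

    import cv2
    import numpy as np

    def tenengrad(gray):
        # Spatial-domain sharpness: mean squared Sobel gradient magnitude.
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        return float(np.mean(gx ** 2 + gy ** 2))

    def laplacian_variance(gray):
        # Another classic spatial-domain score: variance of the Laplacian.
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())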
