Advances of Mathematical Image Processing

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 7727

Special Issue Editor


Guest Editor
Information Technologies Institute, Center for Research and Technology Hellas, 57001 Thessaloniki, Greece
Interests: image and video analysis

Special Issue Information

Dear Colleagues,

This Special Issue of Mathematics (MDPI), titled “Advances of Mathematical Image Processing”, invites both original and survey manuscripts that bring together new mathematical tools, models, and techniques to solve image processing problems. Image processing is used in research, industry, and everyday life, with applications to consumer photographs, medical images, outer-space imagery, radar images, seismic data, and natural images; it is therefore relevant to all fields of engineering, the physical sciences, medicine, business, and beyond.

The purpose of this Special Issue is to gather a collection of articles reflecting the latest developments in mathematical image modelling and its applications. We invite authors to contribute original research articles that address significant issues and contribute to the development of new concepts, methodologies, applications, trends, and knowledge in science. Review articles describing the current state of the art are also welcome. The fields of interest include image restoration and reconstruction, image decomposition, image segmentation, image registration, image filtering (in the spatial and frequency domains), feature detection, multi-scale image analysis, and morphology, as well as their applications to real problems in science and engineering.

Dr. Vassilios Solachidis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image analysis
  • image restoration
  • image reconstruction
  • image decomposition
  • image segmentation
  • image registration
  • image filtering
  • feature detection
  • multi-scale image analysis

Published Papers (5 papers)

Research

16 pages, 921 KiB  
Article
Generalized Quantification Function of Monogenic Phase Congruency
by Manuel G. Forero, Carlos A. Jacanamejoy, Maximiliano Machado and Karla L. Penagos
Mathematics 2023, 11(17), 3795; https://doi.org/10.3390/math11173795 - 04 Sep 2023
Viewed by 705
Abstract
Edge detection is a technique in digital image processing that detects the contours of objects based on changes in brightness. Edges can be used to determine the size, orientation, and properties of the object of interest within an image. There are different techniques employed for edge detection, one of them being phase congruency, a recently developed but still relatively unknown technique due to its mathematical and computational complexity compared to more popular methods. Additionally, it requires the adjustment of a greater number of parameters than traditional techniques. Recently, a unique formulation was proposed for the mathematical description of phase congruency, leading to a better understanding of the technique. This formulation consists of three factors, including a quantification function, which, depending on its characteristics, allows for improved edge detection. However, a detailed study of the characteristics had not been conducted. Therefore, this article proposes the development of a generalized function for quantifying phase congruency, based on the family of functions that, according to a previous study, yielded the best results in edge detection.
(This article belongs to the Special Issue Advances of Mathematical Image Processing)
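
For context, phase congruency measures how consistent the local phase of an image is across scales; in the widely used Kovesi-style formulation (given here only as a reference point, not as the paper's generalized quantification function) it reads

PC(\mathbf{x}) \;=\; \frac{\sum_{n} W(\mathbf{x})\,\big\lfloor A_n(\mathbf{x})\,\Delta\Phi_n(\mathbf{x}) - T \big\rfloor}{\sum_{n} A_n(\mathbf{x}) + \varepsilon},

where A_n is the local amplitude at scale n, \Delta\Phi_n a phase-deviation measure, W a frequency-spread weight, T a noise threshold, \lfloor\cdot\rfloor clips negative values to zero, and \varepsilon avoids division by zero. The quantification function studied in the article generalizes one of the factors in a formulation of this kind; its exact form is given in the paper.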

26 pages, 7328 KiB  
Article
Total Fractional-Order Variation-Based Constraint Image Deblurring Problem
by Shahid Saleem, Shahbaz Ahmad and Junseok Kim
Mathematics 2023, 11(13), 2869; https://doi.org/10.3390/math11132869 - 26 Jun 2023
Viewed by 807
Abstract
When deblurring an image, ensuring that the restored intensities are strictly non-negative is crucial. However, current numerical techniques often fail to consistently produce favorable results, leading to negative intensities that contribute to significant dark regions in the restored images. To address this, our study proposes a mathematical model for non-blind image deblurring based on total fractional-order variational principles. Our proposed model not only guarantees strictly positive intensity values but also imposes limits on the intensities within a specified range. By removing negative intensities or constraining them within the prescribed range, we can significantly enhance the quality of deblurred images. The key concept in this paper involves converting the constrained total fractional-order variational-based image deblurring problem into an unconstrained one through the introduction of the augmented Lagrangian method. To facilitate this conversion and improve convergence, we describe new numerical algorithms and introduce a novel circulant preconditioned matrix. This matrix effectively overcomes the slow convergence typically encountered when using the conjugate gradient method within the augmented Lagrangian framework. Our proposed approach is validated through computational tests, demonstrating its effectiveness and viability in practical applications.
(This article belongs to the Special Issue Advances of Mathematical Image Processing)
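
As a rough sketch of this class of models (the notation and the box bound are illustrative, not the paper's exact formulation), the constrained deblurring problem can be written as

\min_{u}\ \tfrac{1}{2}\,\|Ku - f\|_2^2 \;+\; \lambda\,\mathrm{TV}^{\alpha}(u)
\quad \text{subject to} \quad 0 \le u \le u_{\max},

where K is the known blur operator, f the observed image, and \mathrm{TV}^{\alpha} a fractional-order total variation of order \alpha. Introducing an auxiliary variable v and a multiplier \mu turns this into the unconstrained augmented Lagrangian

\mathcal{L}_{\beta}(u, v, \mu) \;=\; \tfrac{1}{2}\,\|Ku - f\|_2^2 \;+\; \lambda\,\mathrm{TV}^{\alpha}(u)
\;+\; \langle \mu,\, u - v\rangle \;+\; \tfrac{\beta}{2}\,\|u - v\|_2^2, \qquad v \in [0, u_{\max}],

which is minimized alternately in u and v; the v-step reduces to a projection onto the box [0, u_{\max}]. Under periodic boundary conditions the linear systems arising in the u-step have block-circulant structure, which is what makes circulant preconditioning and FFT-based solves attractive.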

24 pages, 14664 KiB  
Article
Night Vision Anti-Halation Algorithm of Different-Source Image Fusion Based on Low-Frequency Sequence Generation
by Quanmin Guo, Jiahao Liang and Hanlei Wang
Mathematics 2023, 11(10), 2237; https://doi.org/10.3390/math11102237 - 10 May 2023
Viewed by 1201
Abstract
The abuse of high-beam headlights dazzles oncoming drivers when vehicles meet at night, which can easily cause traffic accidents. Existing night vision anti-halation algorithms based on different-source image fusion can eliminate halation and obtain fusion images with rich color and detail; however, they mistakenly eliminate some important high-brightness information. To address this problem, a night vision anti-halation algorithm based on low-frequency sequence generation is proposed. A low-frequency sequence generation model is constructed to generate image sequences with different degrees of halation elimination. According to the illuminance estimated for the image sequence, the proposed sequence synthesis based on visual information maximization assigns large weights to areas with good brightness so as to obtain a fusion image that is free of halation and rich in detail. In four typical halation scenes covering most night-driving situations, the proposed algorithm effectively eliminates halation while retaining useful high-brightness information and generalizes better than the seven advanced algorithms used for comparison. The experimental results show that the fusion image obtained by the proposed algorithm is better suited to human visual perception and helps to improve night driving safety.
(This article belongs to the Special Issue Advances of Mathematical Image Processing)
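
A minimal Python sketch of the general idea (synthesizing a sequence of progressively halation-attenuated low-frequency layers with illuminance-dependent weights) is given below; the generation step, the illuminance estimate, and the weighting rule here are illustrative assumptions, not the algorithm from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illuminance(layer, sigma=15):
    # Crude illuminance estimate: a heavily smoothed luminance map (assumption).
    return gaussian_filter(layer, sigma)

def synthesize(sequence, target=0.5, kappa=10.0):
    # Stand-in for visual-information maximization: layers whose local
    # illuminance is close to a well-exposed target receive larger weights.
    weights = []
    for layer in sequence:
        illuminance = estimate_illuminance(layer)
        weights.append(np.exp(-kappa * (illuminance - target) ** 2))
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * np.stack(sequence)).sum(axis=0)

# Toy example: one low-frequency luminance image under increasing halation attenuation.
base = gaussian_filter(np.random.rand(240, 320), 5)
sequence = [np.clip(base * s, 0.0, 1.0) for s in (1.0, 0.8, 0.6, 0.4)]
fused = synthesize(sequence)

Normalizing the per-pixel weights across the sequence keeps the synthesized image within the original intensity range while favoring the best-exposed layer at each location.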

21 pages, 9254 KiB  
Article
TGSNet: Multi-Field Feature Fusion for Glass Region Segmentation Using Transformers
by Xiaohang Hu, Rui Gao, Seungjun Yang and Kyungeun Cho
Mathematics 2023, 11(4), 843; https://doi.org/10.3390/math11040843 - 07 Feb 2023
Cited by 1 | Viewed by 1404
Abstract
Glass is a common object in living environments, but detecting it can be difficult because of the reflection and refraction of various colors of light in different environments; even humans are sometimes unable to detect glass. Many methods are currently used to detect glass, but most rely on additional sensors, which are costly and make data collection difficult. This study aims to solve the problem of detecting glass regions in a single RGB image by concatenating contextual features from multiple receptive fields and proposing a new enhanced feature fusion algorithm. To do this, we first construct a contextual attention module to extract backbone features through a self-attention approach. We then propose a ViT-based deep semantic segmentation architecture called MFT, which associates multilevel receptive field features and retains the feature information captured by each level of features. Experiments show that the proposed method outperforms several state-of-the-art glass detection and transparent object detection methods on existing glass detection datasets, demonstrating the effectiveness of TGSNet.
(This article belongs to the Special Issue Advances of Mathematical Image Processing)
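
The contextual attention module described above is essentially self-attention applied to backbone feature maps; a minimal single-head PyTorch sketch (the projections, dimensions, and residual connection are assumptions, not the paper's exact module) is:

import torch
import torch.nn as nn

class ContextualAttention(nn.Module):
    # Single-head self-attention over flattened CNN feature maps
    # (illustrative stand-in for a contextual attention module).
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)           # (B, HW, C)
        k = self.k(x).flatten(2)                           # (B, C, HW)
        v = self.v(x).flatten(2).transpose(1, 2)           # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)   # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                     # residual connection

features = torch.randn(1, 64, 32, 32)                      # toy backbone features
context = ContextualAttention(64)(features)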

14 pages, 7952 KiB  
Article
Heterogeneous Feature Fusion Module Based on CNN and Transformer for Multiview Stereo Reconstruction
by Rui Gao, Jiajia Xu, Yipeng Chen and Kyungeun Cho
Mathematics 2023, 11(1), 112; https://doi.org/10.3390/math11010112 - 26 Dec 2022
Cited by 3 | Viewed by 2243
Abstract
For decades, multiview stereo (MVS), which creates 3D models of a scene from photographs, has been a vital area of computer vision research. This study presents an effective MVS network for 3D reconstruction from multiview images. Learning-based reconstruction techniques work well; however, because CNNs (convolutional neural networks) can extract only an image's local features, their results contain many artifacts. Herein, a transformer and a CNN are used to extract the global and local features of the image, respectively. Additionally, hierarchical aggregation and heterogeneous interaction modules are used to improve these features. These modules are based on the transformer and can extract dense features with the 3D consistency and global context necessary for accurate MVS matching.
(This article belongs to the Special Issue Advances of Mathematical Image Processing)
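
As a simple illustration of combining the two feature streams (the hierarchical aggregation and heterogeneous interaction modules in the paper are considerably more elaborate), local CNN features and global transformer features can be fused along the channel dimension:

import torch
import torch.nn as nn

class HeterogeneousFusion(nn.Module):
    # Illustrative fusion of CNN (local) and transformer (global) feature maps
    # by channel concatenation followed by a 1x1 convolution.
    def __init__(self, c_cnn, c_transformer, c_out):
        super().__init__()
        self.proj = nn.Conv2d(c_cnn + c_transformer, c_out, kernel_size=1)

    def forward(self, f_cnn, f_transformer):               # both (B, C*, H, W)
        return self.proj(torch.cat([f_cnn, f_transformer], dim=1))

fuse = HeterogeneousFusion(32, 64, 64)
fused = fuse(torch.randn(1, 32, 64, 64), torch.randn(1, 64, 64, 64))

Concatenation followed by a 1x1 projection is the simplest way to let the local and global streams interact; attention-based interaction, as described in the abstract, replaces this projection with cross-feature weighting.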
