Topic Editors

Dr. KC Santosh, Department of Computer Science, The University of South Dakota, Vermillion, SD 57069, USA
Prof. Dr. Alejandro Rodríguez-González, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, 28040 Madrid, Spain

Research on the Application of Digital Signal Processing

Abstract submission deadline: closed (30 June 2023)
Manuscript submission deadline: closed (30 September 2023)
Viewed by 19349

Topic Information

Dear Colleagues,

Research on digital signal processing offers a variety of applications, ranging from the entertainment (music) industry to banking (economy). The next entertainment era is expected to feature fully automated tools for music composition, where audio/signal processing is crucial. In banking and finance, most deals are still agreed upon over telephone calls, and healthcare is no exception. According to a recent study by Reaction Data, 62% of the healthcare providers surveyed said they are currently using speech recognition technologies for their records, 4% stated that they are currently implementing medical speech recognition in electronic health records, and 11% of the clinicians surveyed said they plan to adopt speech recognition in the next two years.

In this Topic, we invite papers on high-end machine learning models (deep learning) that can analyze big data (e.g., crowd-sourced and fully labeled lab data) for multiple purposes, including the following categories:

Category 1: Entertainment industry (e.g., music separation and composition)
Category 2: Banking and marketing
Category 3: Healthcare (e.g., understanding emotions using speech data)
Category 4: Language learning (e.g., language recognition in multilingual scenarios)

Therefore, we aim to gather problem-driven research works on the following topics: signal processing, pattern recognition, anomaly detection, computer vision, machine learning, and deep learning. Original research works such as insightful research and practice notes, case studies, and surveys are invited. Submissions from academia, government, and industry are encouraged.

Dr. KC Santosh
Prof. Dr. Alejandro Rodríguez-González
Topic Editors

Participating Journals

Journal Name (journal ID) | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
AI (ai) | - | - | 2020 | 20.8 days | CHF 1600
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 days | CHF 2400
Big Data and Cognitive Computing (BDCC) | 3.7 | 4.9 | 2017 | 18.2 days | CHF 1800
Digital (digital) | - | - | 2021 | 22.7 days | CHF 1000
Healthcare (healthcare) | 2.8 | 2.7 | 2013 | 19.5 days | CHF 2700
Journal of Imaging (jimaging) | 3.2 | 4.4 | 2015 | 21.7 days | CHF 1800
Signals (signals) | - | - | 2020 | 35.1 days | CHF 1000

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint that establishes priority;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (12 papers)

16 pages, 1800 KiB  
Review
Source Camera Identification Techniques: A Survey
by Chijioke Emeka Nwokeji, Akbar Sheikh-Akbari, Anatoliy Gorbenko and Iosif Mporas
J. Imaging 2024, 10(2), 31; https://doi.org/10.3390/jimaging10020031 - 25 Jan 2024
Viewed by 1677
Abstract
The successful investigation and prosecution of significant crimes, including child pornography, insurance fraud, movie piracy, traffic monitoring, and scientific fraud, hinge largely on the availability of solid evidence to establish the case beyond any reasonable doubt. When dealing with digital images/videos as evidence in such investigations, there is a critical need to conclusively prove the source camera/device of the questioned image. Extensive research has been conducted in the past decade to address this requirement, resulting in various methods categorized into brand, model, or individual image source camera identification techniques. This paper presents a survey of all those existing methods found in the literature. It thoroughly examines the efficacy of these existing techniques for identifying the source camera of images, utilizing both intrinsic hardware artifacts such as sensor pattern noise and lens optical distortion, and software artifacts like color filter array and auto white balancing. The investigation aims to discern the strengths and weaknesses of these techniques. The paper provides publicly available benchmark image datasets and assessment criteria used to measure the performance of those different methods, facilitating a comprehensive comparison of existing approaches. In conclusion, the paper outlines directions for future research in the field of source camera identification. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
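As a minimal illustration of the sensor-pattern-noise family of techniques surveyed here, the sketch below extracts a noise residual with a simple Gaussian denoiser and correlates it with an averaged camera reference pattern. The denoiser, array sizes, and synthetic data are assumptions made for the example; they are not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Approximate the sensor noise residual as the image minus a denoised copy."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def reference_pattern(flatfield_images, sigma=1.0):
    """Estimate a camera reference pattern by averaging residuals of many shots."""
    return np.mean([noise_residual(im, sigma) for im in flatfield_images], axis=0)

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation between two same-sized arrays."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage: a fixed high-frequency "sensor pattern" embedded in random images.
rng = np.random.default_rng(0)
pattern = rng.normal(0.0, 1.0, (64, 64))
shots = [rng.normal(128.0, 10.0, (64, 64)) + pattern for _ in range(20)]
ref = reference_pattern(shots)
same_camera = rng.normal(128.0, 10.0, (64, 64)) + pattern
other_camera = rng.normal(128.0, 10.0, (64, 64)) + rng.normal(0.0, 1.0, (64, 64))
print(normalized_correlation(noise_residual(same_camera), ref))    # clearly positive
print(normalized_correlation(noise_residual(other_camera), ref))   # close to zero
```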

10 pages, 4781 KiB  
Article
Defect Isolation from Whole to Local Field Separation in Complex Interferometry Fringe Patterns through Development of Weighted Least-Squares Algorithm
by Zhenkai Chen, Wenjing Zhou, Yingjie Yu, Vivi Tornari and Gilberto Artioli
Digital 2024, 4(1), 104-113; https://doi.org/10.3390/digital4010004 - 29 Dec 2023
Viewed by 536
Abstract
In this paper, a weighted least-squares algorithm based on the Gaussian 1σ-criterion and histogram segmentation is applied and validated on digital holographic speckle pattern interferometric data to perform phase separation on the complex interference fields. This direct structural diagnosis tool is used to investigate defects and their impact on a complex antique wall painting by Giotto. The interferometry data are acquired with a portable off-axis interferometer set-up in which a phase-shifted reference beam is coupled with the object beam in front of the digital photosensitive medium. A digital holographic speckle pattern interferometry (DHSPI) system is used to register digital recordings of interferogram sequences over time. The surface is monitored for as long as it deforms, until it returns to the reference equilibrium state it held prior to excitation. An approach for separating the whole-field and local defect complex amplitudes from the interferometric data is presented. The main aim is to isolate and visualize each defect’s impact amplitude in order to obtain detailed documentation of each defect and its structural impact on the surface for structural diagnosis purposes. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
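The sketch below is not the authors' DHSPI pipeline; it only illustrates the generic weighted least-squares idea under stated assumptions: a low-order polynomial models the whole-field deformation, pixels whose residual exceeds one standard deviation (a Gaussian 1σ criterion) get zero weight, and the localized defect signatures remain in the residual. The polynomial order, iteration count, and synthetic phase map are all assumptions.

```python
import numpy as np

def separate_whole_and_local(phase, order=2, iters=5):
    """Weighted least-squares fit of a smooth polynomial 'whole-field' term.

    Pixels whose residual exceeds one standard deviation (Gaussian 1-sigma
    criterion) get zero weight, so localized defects stay in the residual.
    """
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y, b = xx.ravel() / w, yy.ravel() / h, phase.ravel()
    # Design matrix with monomials x^i * y^j up to the given total order.
    A = np.stack([x**i * y**j for i in range(order + 1)
                  for j in range(order + 1 - i)], axis=1)
    wgt = np.ones_like(b)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * wgt[:, None], b * wgt, rcond=None)
        resid = b - A @ coef
        wgt = (np.abs(resid) <= resid.std()).astype(float)   # 1-sigma criterion
    whole = (A @ coef).reshape(phase.shape)
    return whole, phase - whole      # smooth whole-field part, local (defect) part

# Toy example: a smooth tilt plus one localized bump standing in for a defect.
yy, xx = np.mgrid[0:128, 0:128]
background = 0.02 * xx + 0.01 * yy
defect = 2.0 * np.exp(-((xx - 40.0)**2 + (yy - 90.0)**2) / 30.0)
whole, local = separate_whole_and_local(background + defect)
print(float(np.abs(local).max()))    # roughly the defect amplitude
```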

16 pages, 9223 KiB  
Article
Simulation-Assisted Augmentation of Missing Wedge and Region-of-Interest Computed Tomography Data
by Vladimir O. Alekseychuk, Andreas Kupsch, David Plotzki, Carsten Bellon and Giovanni Bruno
J. Imaging 2024, 10(1), 11; https://doi.org/10.3390/jimaging10010011 - 29 Dec 2023
Viewed by 1316
Abstract
This study reports a strategy to use sophisticated, realistic X-ray Computed Tomography (CT) simulations to reduce Missing Wedge (MW) and Region-of-Interest (RoI) artifacts in FBP (Filtered Back-Projection) reconstructions. A 3D model of the object is used to simulate the projections that include the missing information inside the MW and outside the RoI. Such information augments the experimental projections, thereby drastically improving the reconstruction results. An X-ray CT dataset of a selected object is modified to mimic various degrees of RoI and MW problems. The results are evaluated in comparison to a standard FBP reconstruction of the complete dataset. In all cases, the reconstruction quality is significantly improved. Small inclusions present in the scanned object are better localized and quantified. The proposed method has the potential to improve the results of any CT reconstruction algorithm. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
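A minimal sketch of the augmentation idea using scikit-image's radon/iradon (recent versions expose the filter_name argument): projections simulated from an approximate model fill the angles lost to a missing wedge before the FBP step. The phantom, angular ranges, and the way the model is built are assumptions for illustration and do not reproduce the paper's simulation tool.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# "Measured" object and an approximate model of it used for the simulation.
obj = rescale(shepp_logan_phantom(), 0.5, mode='reflect')
model = np.clip(obj + 0.02 * np.random.default_rng(0).normal(size=obj.shape), 0, None)

theta = np.arange(0.0, 180.0, 1.0)
measured = (theta < 60) | (theta >= 120)           # a 60-degree wedge is missing

sino_exp = radon(obj, theta=theta)                 # experimental projections
sino_sim = radon(model, theta=theta)               # projections simulated from the model

# Augment: keep experimental projections where available, fill the wedge from simulation.
sino_aug = sino_sim.copy()
sino_aug[:, measured] = sino_exp[:, measured]

rec_wedge = iradon(sino_exp[:, measured], theta=theta[measured], filter_name='ramp')
rec_aug = iradon(sino_aug, theta=theta, filter_name='ramp')
print(np.mean((rec_wedge - obj)**2), np.mean((rec_aug - obj)**2))   # augmented error is lower
```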

16 pages, 3538 KiB  
Article
Nearly Linear-Phase 2-D Recursive Digital Filters Design Using Balanced Realization Model Reduction
by Abdussalam Omar, Dale Shpak and Panajotis Agathoklis
Signals 2023, 4(4), 800-815; https://doi.org/10.3390/signals4040044 - 27 Nov 2023
Viewed by 614
Abstract
This paper presents a new method for the design of separable-denominator 2-D IIR filters with nearly linear phase in the passband. The design method is based on a balanced realization model reduction technique. The nearly linear-phase 2-D IIR filter is designed using 2-D model reduction from a linear-phase 2-D FIR filter, which serves as the initial filter. The structured controllability and observability Gramians Ps and Qs serve as the foundation for this technique; they are block-diagonal positive-definite matrices that satisfy 2-D Lyapunov equations. An efficient method is used to compute these Gramians by minimizing the traces of Ps and Qs under linear matrix inequality (LMI) constraints. The use of these Gramians ensures that the resulting 2-D IIR filter preserves stability and can be implemented using a separable-denominator 2-D filter with fewer coefficients than the original 2-D FIR filter. Numerical examples show that the proposed method compares favorably with existing techniques. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
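The paper works with structured Gramians of separable-denominator 2-D filters obtained under LMI constraints; as background, the sketch below only shows ordinary 1-D balanced truncation of a stable discrete-time state-space model, the building block that the 2-D technique generalizes. The toy model and the reduction order are assumptions.

```python
import numpy as np
from scipy.linalg import cholesky, solve_discrete_lyapunov, svd

def balanced_truncation(A, B, C, D, r):
    """Reduce a stable discrete-time model (A, B, C, D) to order r by balanced truncation."""
    # Gramians from the discrete Lyapunov equations
    #   Wc = A Wc A^T + B B^T   and   Wo = A^T Wo A + C^T C.
    Wc = solve_discrete_lyapunov(A, B @ B.T)
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                       # s holds the Hankel singular values
    T = Lc @ Vt.T @ np.diag(s ** -0.5)              # balancing transformation
    Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], D, s

# Toy example: reduce a random stable 8th-order SISO model to order 3.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))     # force spectral radius 0.9 (stable)
B, C, D = rng.normal(size=(8, 1)), rng.normal(size=(1, 8)), np.zeros((1, 1))
Ar, Br, Cr, Dr, hsv = balanced_truncation(A, B, C, D, r=3)
print(hsv)   # decaying Hankel singular values justify the truncation
```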

20 pages, 686 KiB  
Article
High-Quality and Reproducible Automatic Drum Transcription from Crowdsourced Data
by Mickaël Zehren, Marco Alunno and Paolo Bientinesi
Signals 2023, 4(4), 768-787; https://doi.org/10.3390/signals4040042 - 10 Nov 2023
Viewed by 947
Abstract
Within the broad problem known as automatic music transcription, we considered the specific task of automatic drum transcription (ADT). This is a complex task that has recently shown significant advances thanks to deep learning (DL) techniques. Most notably, massive amounts of labeled data obtained from crowds of annotators have made it possible to implement large-scale supervised learning architectures for ADT. In this study, we explored the untapped potential of these new datasets by addressing three key points: First, we reviewed recent trends in DL architectures and focused on two techniques, self-attention mechanisms and tatum-synchronous convolutions. Then, to mitigate the noise and bias that are inherent in crowdsourced data, we extended the training data with additional annotations. Finally, to quantify the potential of the data, we compared many training scenarios by combining up to six different datasets, including zero-shot evaluations. Our findings revealed that crowdsourced datasets outperform previously utilized datasets, and regardless of the DL architecture employed, they are sufficient in size and quality to train accurate models. By fully exploiting this data source, our models produced high-quality drum transcriptions, achieving state-of-the-art results. Thanks to this accuracy, our work can be more successfully used by musicians (e.g., to learn new musical pieces by reading, or to convert their performances to MIDI) and researchers in music information retrieval (e.g., to retrieve information from the notes instead of audio, such as the rhythm or structure of a piece). Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
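As a small illustration of one of the two techniques highlighted above, the sketch below implements a single-head self-attention layer operating on tatum-synchronous feature frames; the feature dimension, tensor shapes, and the absence of a convolutional front-end are assumptions, and this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class FrameSelfAttention(nn.Module):
    """Single-head self-attention over a sequence of spectrogram/tatum frames."""
    def __init__(self, dim=128):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                            # x: (batch, frames, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        att = torch.softmax(q @ k.transpose(1, 2) / (x.shape[-1] ** 0.5), dim=-1)
        return self.out(att @ v)                     # context-mixed frame features

# Toy usage: 100 tatum-synchronous feature frames of dimension 128, batch of 2.
feats = torch.randn(2, 100, 128)
print(FrameSelfAttention(128)(feats).shape)          # torch.Size([2, 100, 128])
```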

14 pages, 1125 KiB  
Article
FPGA Implementation of a Higher SFDR Upper DDFS Based on Non-Uniform Piecewise Linear Approximation
by Xuan Liao, Longlong Zhang, Xiang Hu, Yuanxi Peng and Tong Zhou
Appl. Sci. 2023, 13(19), 10819; https://doi.org/10.3390/app131910819 - 29 Sep 2023
Viewed by 596
Abstract
We propose a direct digital frequency synthesizer (DDFS) by using an error-controlled piecewise linear (PWL) approximation method. For a given function and a preset maximum absolute error (MAE), this method iterates continuously from right to left within the input interval, dividing the entire interval into multiple segments. Within each segment, the least-squares method is used to approximate the objective function, ensuring that each segment meets the error requirement. Based on this method, we first implemented a set of DDFSs under different MAEs to study the relationship between SFDR and MAE, and then evaluated their hardware overhead. To increase the frequency of the output signal, we implement a multi-core DDFS using a time-interleaving scheme. The experimental results show that our DDFS has significant advantages in SFDR, using fewer hardware resources to achieve a high SFDR. Specifically, the SFDR of the proposed DDFS reaches 114 dB using 399 LUTs, 66 flip-flops, and 3 DSPs. More importantly, we demonstrate through experiments that the proposed DDFS exceeds the theoretical SFDR upper bound of DDFSs based on piecewise linear approximation methods. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
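A sketch, under stated assumptions, of the error-controlled segmentation described above: segments are grown from right to left, each fitted by a least-squares line and constrained to a preset MAE. The target function (a quarter sine wave), grid density, and MAE value are assumptions, and the FPGA mapping to LUTs/DSPs is of course not covered.

```python
import numpy as np

def pwl_segments(f, lo, hi, mae, n_grid=4096):
    """Right-to-left segmentation with a least-squares line per segment,
    each segment constrained to a preset maximum absolute error (MAE)."""
    x = np.linspace(lo, hi, n_grid)
    y = f(x)
    segments = []                                   # (x_start, x_end, slope, intercept)
    right = n_grid - 1
    while right > 0:
        left = right - 1
        best_left, best_fit = None, None
        # Grow the segment leftwards while the least-squares line stays within the MAE.
        # (A two-point fit is exact, so the first step always succeeds.)
        while left >= 0:
            xs, ys = x[left:right + 1], y[left:right + 1]
            slope, intercept = np.polyfit(xs, ys, 1)
            if np.max(np.abs(slope * xs + intercept - ys)) <= mae:
                best_left, best_fit = left, (slope, intercept)
                left -= 1
            else:
                break
        segments.append((x[best_left], x[right]) + best_fit)
        right = best_left
    return segments[::-1]

# Example: one quarter of a sine wave, as stored by a DDFS phase-to-amplitude stage.
segs = pwl_segments(lambda t: np.sin(2 * np.pi * t), 0.0, 0.25, mae=1e-3)
print(len(segs), "segments meet the 1e-3 maximum absolute error")
```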

18 pages, 8959 KiB  
Article
Beyond Staircasing Effect: Robust Image Smoothing via ℓ0 Gradient Minimization and Novel Gradient Constraints
by Ryo Matsuoka and Masahiro Okuda
Signals 2023, 4(4), 669-686; https://doi.org/10.3390/signals4040037 - 26 Sep 2023
Cited by 1 | Viewed by 748
Abstract
In this paper, we propose robust image-smoothing methods based on ℓ0 gradient minimization with novel gradient constraints to effectively suppress pseudo-edges. Simultaneously minimizing the ℓ0 gradient, i.e., the number of nonzero gradients in an image, and the ℓ2 data fidelity results in a smooth image. However, this optimization often leads to undesirable artifacts, such as pseudo-edges, known as the “staircasing effect”, and halos, which become more visible in image enhancement tasks, like detail enhancement and tone mapping. To address these issues, we introduce two types of gradient constraints: box and ball. These constraints are applied using a reference image (e.g., the input image is used as a reference for image smoothing) to suppress pseudo-edges in homogeneous regions and the blurring effect around strong edges. We also present an ℓ0 gradient minimization problem based on the box-/ball-type gradient constraints using an alternating direction method of multipliers (ADMM). Experimental results on important applications of ℓ0 gradient minimization demonstrate the advantages of our proposed methods compared to existing ℓ0 gradient-based approaches. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
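For orientation only, the sketch below implements the baseline 1-D ℓ0-gradient smoothing by half-quadratic splitting (a hard threshold on the gradient variable followed by an FFT-based quadratic solve), which is the formulation the paper extends; the authors' box-/ball-constrained ADMM is not reproduced here, and the parameters and test signal are assumptions.

```python
import numpy as np

def l0_gradient_smooth_1d(signal, lam=0.01, kappa=2.0, beta_max=1e5):
    """Baseline 1-D l0-gradient smoothing via half-quadratic splitting.

    Alternates a hard-threshold step on the auxiliary gradient variable h with
    an FFT-based quadratic solve for the smoothed signal, increasing beta geometrically.
    """
    s = signal.astype(np.float64).copy()
    n = s.size
    d = np.zeros(n)
    d[0], d[-1] = -1.0, 1.0                        # circular forward-difference kernel
    Fd = np.fft.fft(d)
    Fi = np.fft.fft(signal.astype(np.float64))
    beta = 2.0 * lam
    while beta < beta_max:
        grad = np.roll(s, -1) - s
        h = np.where(grad**2 > lam / beta, grad, 0.0)          # l0 hard threshold
        # Solve (1 + beta * D^T D) s = input + beta * D^T h in the Fourier domain.
        s = np.real(np.fft.ifft((Fi + beta * np.conj(Fd) * np.fft.fft(h))
                                / (1.0 + beta * np.abs(Fd)**2)))
        beta *= kappa
    return s

# Example: recover a piecewise-constant signal from a noisy observation.
rng = np.random.default_rng(0)
clean = np.concatenate([np.full(100, 0.0), np.full(100, 1.0), np.full(100, 0.3)])
noisy = clean + 0.05 * rng.normal(size=clean.size)
print(float(np.abs(l0_gradient_smooth_1d(noisy) - clean).mean()))   # well below the noise level
```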

14 pages, 2625 KiB  
Article
Application of Wavelet Transform for the Detection of Cetacean Acoustic Signals
by Ruilin He, Yang Dai, Siyi Liu, Yuhao Yang, Yingdong Wang, Wei Fan and Shengmao Zhang
Appl. Sci. 2023, 13(7), 4521; https://doi.org/10.3390/app13074521 - 02 Apr 2023
Cited by 1 | Viewed by 1580
Abstract
Cetaceans are an important part of the ocean ecosystem and are widely distributed in seas across the world. They rely heavily on acoustic signals for communication, and some Odontoceti can perceive their environments using their sonar system, including the detection, localization, discrimination, and recognition of objects. Acoustic signals are one of the most commonly used types of data in cetacean research, so it is necessary to develop cetacean acoustic signal detection methods. This study compared the performance of a manual method, the short-time Fourier transform (STFT), and the wavelet transform (WT) in cetacean acoustic signal detection. The results showed that WT performs better in click detection. Based on these results, we propose using STFT for whistle and burst-pulse marking and WT for click marking when building datasets. This research will facilitate studies of the habits and behaviors of groups and individuals, thus providing information for developing methods to protect species and manage biological resources. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
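A minimal sketch of how a wavelet transform can flag short broadband clicks, using PyWavelets to threshold the finest-scale detail coefficients; the wavelet, decomposition level, threshold rule, and synthetic click train are assumptions rather than the authors' protocol.

```python
import numpy as np
import pywt

def detect_clicks(signal, fs, wavelet="db4", level=4, k=5.0):
    """Flag candidate click times by thresholding the finest-scale wavelet details."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    d1 = coeffs[-1]                                   # finest detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745            # robust noise estimate
    hits = np.flatnonzero(np.abs(d1) > k * sigma)
    return hits * 2 / fs                              # one level-1 coefficient spans ~2 samples

# Synthetic example: low-level noise with three short high-frequency "clicks" inserted.
fs = 96_000
rng = np.random.default_rng(0)
x = 0.01 * rng.normal(size=fs)                        # 1 s of background noise
for t0 in (0.2, 0.5, 0.8):
    i = int(t0 * fs)
    x[i:i + 48] += np.sin(2 * np.pi * 30_000 * np.arange(48) / fs) * np.hanning(48)
print(np.unique(np.round(detect_clicks(x, fs), 2)))   # clusters near 0.2, 0.5 and 0.8 s
```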

13 pages, 5323 KiB  
Article
ReUse: REgressive Unet for Carbon Storage and Above-Ground Biomass Estimation
by Antonio Elia Pascarella, Giovanni Giacco, Mattia Rigiroli, Stefano Marrone and Carlo Sansone
J. Imaging 2023, 9(3), 61; https://doi.org/10.3390/jimaging9030061 - 07 Mar 2023
Cited by 4 | Viewed by 2859
Abstract
The United Nations Framework Convention on Climate Change (UNFCCC) has recently established the Reducing Emissions from Deforestation and forest Degradation (REDD+) program, which requires countries to report their carbon emissions and sink estimates through national greenhouse gas inventories (NGHGI). Thus, developing automatic systems capable of estimating the carbon absorbed by forests without in situ observation becomes essential. To support this critical need, in this work, we introduce ReUse, a simple but effective deep learning approach to estimate the carbon absorbed by forest areas based on remote sensing. The proposed method’s novelty is in using the public above-ground biomass (AGB) data from the European Space Agency’s Climate Change Initiative Biomass project as ground truth to estimate the carbon sequestration capacity of any portion of land on Earth using Sentinel-2 images and a pixel-wise regressive UNet. The approach has been compared with two literature proposals using a private dataset and human-engineered features. The results show a more remarkable generalization ability of the proposed approach, with a decrease in Mean Absolute Error and Root Mean Square Error over the runner-up of 16.9 and 14.3 in the area of Vietnam, 4.7 and 5.1 in the area of Myanmar, 8.0 and 1.4 in the area of Central Europe, respectively. As a case study, we also report an analysis made for the Astroni area, a World Wildlife Fund (WWF) natural reserve struck by a large fire, producing predictions consistent with values found by experts in the field after in situ investigations. These results further support the use of such an approach for the early detection of AGB variations in urban and rural areas. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
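To make the "pixel-wise regressive UNet" idea concrete, here is a deliberately tiny PyTorch encoder-decoder with one skip connection and a linear per-pixel output; the number of input bands, patch size, target scale, and network depth are assumptions and do not match the model used in the paper.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyRegressiveUNet(nn.Module):
    """Minimal encoder-decoder with one skip connection and a 1-channel linear head."""
    def __init__(self, in_ch=12, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)            # per-pixel regression output

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                          # no activation: unbounded target

# One training step on random tensors standing in for 12-band patches and AGB maps.
model = TinyRegressiveUNet(in_ch=12)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(4, 12, 64, 64)                        # assumed patch shape
y = torch.rand(4, 1, 64, 64) * 300                    # assumed target scale (e.g., t/ha)
loss = nn.functional.l1_loss(model(x), y)             # MAE-style objective
loss.backward()
opt.step()
print(float(loss))
```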

21 pages, 5435 KiB  
Article
A Denoising Method for Seismic Data Based on SVD and Deep Learning
by Guoli Ji and Chao Wang
Appl. Sci. 2022, 12(24), 12840; https://doi.org/10.3390/app122412840 - 14 Dec 2022
Cited by 4 | Viewed by 1964
Abstract
When reconstructing seismic data, the traditional singular value decomposition (SVD) denoising method faces the challenge of difficult rank selection. Therefore, we propose a seismic data denoising method that combines SVD and deep learning. In this method, seismic data with different signal-to-noise ratios (SNRs) are processed by SVD. Data sets are created from the decomposed right singular vectors and divided into two categories: effective signal and noise. The lightweight MobileNetV2 network was chosen for training because of its quick response speed and high accuracy. The right singular vectors obtained by SVD were then classified using the trained MobileNetV2 network. The right singular vectors (RSVs) corresponding to noise in the seismic data were removed during reconstruction, while those corresponding to the effective signal were kept, and the effective signal was projected to smooth the RSVs. In this way, low-SNR denoising of two-dimensional seismic data was accomplished. This approach addresses issues with deep learning in seismic data processing, including the challenge of gathering sample data and the weak generalizability of trained models. Compared with the traditional denoising method, the improved method performs well at removing Gaussian and irregular noise with strong amplitudes. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
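The decompose / classify / reconstruct flow can be sketched in a few lines of NumPy; here a simple energy-ratio rule stands in for the trained MobileNetV2 classifier of right singular vectors, and the threshold and toy gather are assumptions.

```python
import numpy as np

def svd_denoise(gather, keep_fn):
    """Reconstruct a 2-D gather from the right singular vectors marked as signal."""
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    keep = np.array([keep_fn(Vt[k], s[k], s) for k in range(s.size)])
    return (U[:, keep] * s[keep]) @ Vt[keep]

def energy_rule(rsv, sk, s_all, thresh=0.1):
    """Stand-in classifier: keep components carrying a non-negligible energy share.
    (In the paper, the right singular vector `rsv` would be fed to MobileNetV2.)"""
    return sk / s_all[0] > thresh

# Toy example: two coherent "events" (low-rank) plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(60))
clean += np.outer(np.sin(2 * np.pi * 9 * t + 0.5), np.linspace(0.3, 1.0, 60))
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = svd_denoise(noisy, energy_rule)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))   # error drops
```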

14 pages, 404 KiB  
Article
Finding Subsampling Index Sets for Kronecker Product of Unitary Matrices for Incoherent Tight Frames
by Jooeun Kwon and Nam Yul Yu
Appl. Sci. 2022, 12(21), 11055; https://doi.org/10.3390/app122111055 - 31 Oct 2022
Cited by 1 | Viewed by 1390
Abstract
Frames are recognized for their importance in many fields of communications, signal processing, quantum physics, and so on. In this paper, we design an incoherent tight frame by selecting some rows of a matrix that is the Kronecker product of Fourier and unitary matrices. The Kronecker-product-based frame allows its elements to have a small number of phases, regardless of the frame length, which is suitable for low-cost implementation. To obtain the Kronecker-product-based frame with low mutual coherence, we first derive an objective function by transforming the Gram matrix expression to compute the coherence. If the Hadamard matrix is employed as a unitary matrix, the objective function can be computed efficiently with low complexity. Then, we find a subsampling index set for the Kronecker-product-based frame by minimizing the objective function. In simulations, we show that the Kronecker-product-based frames can achieve similar mutual coherence to optimized harmonic frames of a large number of phases. We apply the frames to compressed sensing (CS) as the measurement matrices, where the Kronecker-product-based frames demonstrate reliable performance of sparse signal recovery. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
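A small sketch of the construction and of the quantity being minimized: rows are selected from the Kronecker product of a unitary DFT matrix and a scaled Hadamard matrix, and the mutual coherence of the resulting frame is computed from its Gram matrix. The dimensions and the random (rather than optimized) index set are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def kron_frame(n_fourier, n_hadamard, row_idx):
    """Frame built from selected rows of the Kronecker product of a unitary DFT
    matrix and a scaled (unitary) Hadamard matrix."""
    F = np.fft.fft(np.eye(n_fourier)) / np.sqrt(n_fourier)
    H = hadamard(n_hadamard) / np.sqrt(n_hadamard)
    return np.kron(F, H)[row_idx, :]

def mutual_coherence(Phi):
    """Largest absolute inner product between distinct normalized columns."""
    cols = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    G = np.abs(cols.conj().T @ cols)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

# Example: pick 32 of the 256 rows of F_16 (kron) H_16 and measure the coherence.
rng = np.random.default_rng(3)
rows = np.sort(rng.choice(16 * 16, size=32, replace=False))
Phi = kron_frame(16, 16, rows)
print(Phi.shape, mutual_coherence(Phi))
# A random index set only gives a baseline; the paper searches for index sets
# that minimize an objective tied to this coherence.
```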

9 pages, 1986 KiB  
Article
Absolute Distance Measurement Based on Self-Mixing Interferometry Using Compressed Sensing
by Li Li, Yue Zhang, Ye Zhu, Ya Dai, Xuan Zhang and Xuwen Liang
Appl. Sci. 2022, 12(17), 8635; https://doi.org/10.3390/app12178635 - 29 Aug 2022
Cited by 4 | Viewed by 1536
Abstract
An absolute distance measurement sensor based on self-mixing interferometry (SMI) is suitable for aerospace applications due to its small size and light weight. However, an SMI signal with a high sampling rate places a burden on sampling devices and other onboard resources. SMI distance measurement using compressed sensing (CS) is proposed in this work to relieve this burden. The SMI signal was sampled via a measurement matrix at a rate lower than the Nyquist rate and then recovered by a greedy pursuit algorithm. The recovery algorithm was improved to increase its robustness and iteration speed. On a distance measuring system with a measurement error of 60 µm, the difference between the raw data with 1800 points and the CS-recovered data with 300 points was within 0.15 µm, demonstrating the feasibility of SMI distance measurement using CS. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
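A generic compressed-sensing sketch of the sample-then-recover flow described above, with a DCT-sparse stand-in for the SMI signal, a random ±1 measurement matrix, and scikit-learn's orthogonal matching pursuit as the greedy recovery step; the 1800/300 point counts mirror the abstract, but the sparsifying basis, measurement matrix, and solver are assumptions, not the authors' improved algorithm.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

# A DCT-sparse signal stands in for the SMI fringe signal.
n, m, k = 1800, 300, 12                          # raw points, compressed points, sparsity
rng = np.random.default_rng(0)
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
Psi = idct(np.eye(n), axis=0, norm="ortho")      # columns form the (inverse-)DCT basis
x = Psi @ coeffs                                 # "raw" signal of 1800 points

# Compressed acquisition: 300 random +/-1 projections (below the Nyquist budget).
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Greedy-pursuit recovery of the sparse coefficients, then synthesis of the signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)
x_rec = Psi @ omp.coef_
print(float(np.max(np.abs(x_rec - x))))          # small reconstruction error
```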