
Digital Image Processing and Sensing Technologies

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 May 2024 | Viewed by 12809

Special Issue Editors


Guest Editor
EuroMov Digital Health in Motion, Université de Montpellier, IMT Mines Ales, 30100 Ales, France
Interests: image processing; multimedia security; digital images and videos; edge detection; computer vision

Guest Editor
College of Arts and Sciences, University of Nizwa, Nizwa 616, Oman
Interests: image processing; information hiding; watermarking and steganography; data science/analytics; theoretical computer science; machine learning

Guest Editor
Media Integration and Communication Center (MICC), Department of Information Engineering (DINFO), University of Firenze, Via S. Marta 3, 50139 Firenze, Italy
Interests: multimedia; 3D computer vision; artificial intelligence

Guest Editor

Special Issue Information

Dear Colleagues,

Thanks to new technologies, digital images and videos form part of our daily routine, allowing for the easy capture and diffusion of visual information. Digital image processing (DIP) encompasses a broad spectrum of applications, especially manipulations of digital images in the context of computer-aided automation. The boundary between DIP and computer vision (CV) is blurred, and DIP may thus encompass, in addition to core processing tasks, areas such as image understanding, feature extraction, pattern recognition, object detection, and so on. Moreover, multimedia (image, video, audio, text, 3D, etc.) security, in the form of copyrighting, watermarking, and image encryption, is an important aspect of modern communication. Today, digital image/video processing contributes to almost every field, ranging from medicine, astronomy, microscopy, and defense to biology, industry, robotics, security, remote sensing, and so on. This Special Issue aims to collect papers on state-of-the-art DIP and CV, with topics of interest including (but not limited to) the following:

  • Image acquisition;
  • Image analysis;
  • Digital image forensics;
  • Multimedia security (image and video);
  • Digital image watermarking;
  • Machine learning in DIP;
  • Image-based data hiding;
  • Image filtering;
  • Feature extraction;
  • Edge detection;
  • Corner extraction;
  • Keypoint detection;
  • Feature descriptor;
  • Image segmentation;
  • Image compression;
  • Pattern recognition.

Dr. Baptiste Magnier
Dr. Khizar Hayat
Dr. Stefano Berretti
Dr. Jean-Baptiste Thomas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research

Jump to: Review

15 pages, 2142 KiB  
Article
Efficient Image Retrieval Using Hierarchical K-Means Clustering
by Dayoung Park and Youngbae Hwang
Sensors 2024, 24(8), 2401; https://doi.org/10.3390/s24082401 - 09 Apr 2024
Viewed by 250
Abstract
The objective of content-based image retrieval (CBIR) is to locate samples from a database that are akin to a query, relying on the content embedded within the images. A contemporary strategy involves calculating the similarity between compact vectors by encoding both the query and the database images as global descriptors. In this work, we propose an image retrieval method by using hierarchical K-means clustering to efficiently organize the image descriptors within the database, which aims to optimize the subsequent retrieval process. Then, we compute the similarity between the descriptor set within the leaf nodes and the query descriptor to rank them accordingly. Three tree search algorithms are presented to enable a trade-off between search accuracy and speed that allows for substantial gains at the expense of a slightly reduced retrieval accuracy. Our proposed method demonstrates enhancement in image retrieval speed when applied to the CLIP-based model, UNICOM, designed for category-level retrieval, as well as the CNN-based R-GeM model, tailored for particular object retrieval by validating its effectiveness across various domains and backbones. We achieve an 18-times speed improvement while preserving over 99% accuracy when applied to the In-Shop dataset, the largest dataset in the experiments. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
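
As a rough illustration of the indexing strategy described in this abstract, the sketch below builds a K-means tree over pre-computed global descriptors and ranks only the descriptors in the leaf reached by the query. It is a minimal sketch under assumptions (scikit-learn's KMeans, L2-normalized descriptors, hypothetical helper names such as build_tree and search), not the authors' implementation.

```python
# Minimal sketch of hierarchical K-means indexing for global-descriptor retrieval.
# Assumes L2-normalized descriptors (cosine similarity = dot product); illustrative only.
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self, indices):
        self.indices = indices      # database indices stored under this node
        self.children = []          # child subtrees (non-empty clusters only)
        self.centroids = None       # routing centroids aligned with self.children

def build_tree(descriptors, indices, branching=8, leaf_size=64):
    """Recursively partition the descriptor set into a K-means tree."""
    node = Node(indices)
    if len(indices) <= leaf_size:
        return node
    km = KMeans(n_clusters=branching, n_init=4, random_state=0)
    labels = km.fit_predict(descriptors[indices])
    centroids, children = [], []
    for c in range(branching):
        child_idx = indices[labels == c]
        if len(child_idx) > 0:
            centroids.append(km.cluster_centers_[c])
            children.append(build_tree(descriptors, child_idx, branching, leaf_size))
    node.centroids, node.children = np.array(centroids), children
    return node

def search(tree, descriptors, query, topk=10):
    """Greedy descent to a single leaf, then exact ranking inside that leaf."""
    node = tree
    while node.children:
        best = np.argmin(((node.centroids - query) ** 2).sum(axis=1))
        node = node.children[best]
    sims = descriptors[node.indices] @ query
    return node.indices[np.argsort(-sims)[:topk]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.normal(size=(5000, 128)).astype(np.float32)
    db /= np.linalg.norm(db, axis=1, keepdims=True)
    tree = build_tree(db, np.arange(len(db)))
    query = db[42] + 0.01 * rng.normal(size=128).astype(np.float32)
    query /= np.linalg.norm(query)
    print(search(tree, db, query))    # index 42 should rank near the top
```

Descending to a single leaf is the fastest (and least exhaustive) of the possible tree searches; visiting several candidate leaves is the usual way to trade speed back for accuracy.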

26 pages, 3866 KiB  
Article
Lightweight and Real-Time Infrared Image Processor Based on FPGA
by Xiaoqing Wang, Xiang He, Xiangyu Zhu, Fu Zheng and Jingqi Zhang
Sensors 2024, 24(4), 1333; https://doi.org/10.3390/s24041333 - 19 Feb 2024
Viewed by 677
Abstract
This paper presents an FPGA-based lightweight and real-time infrared image processor based on a series of hardware-oriented lightweight algorithms. The two-point correction algorithm based on blackbody radiation is introduced to calibrate the non-uniformity of the sensor. With precomputed gain and offset matrices, the design can achieve real-time non-uniformity correction with a resolution of 640×480. The blind pixel detection algorithm employs the first-level approximation to simplify multiple iterative computations. The blind pixel compensation algorithm in our design is constructed on the side-window-filtering method. The results of eight convolution kernels for side windows are computed simultaneously to improve the processing speed. Due to the proposed side-window-filtering-based blind pixel compensation algorithm, blind pixels can be effectively compensated while details in the image are preserved. Before image output, we also incorporated lightweight histogram equalization to make the processed image more easily observable to the human eyes. The proposed lightweight infrared image processor is implemented on Xilinx XC7A100T-2. Our proposed lightweight infrared image processor costs 10,894 LUTs, 9367 FFs, 4 BRAMs, and 5 DSP48. Under a 50 MHz clock, the processor achieves a speed of 30 frames per second at the cost of 1800 mW. The maximum operating frequency of our proposed processor can reach 186 MHz. Compared with existing similar works, our proposed infrared image processor incurs minimal resource overhead and has lower power consumption. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
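
The two-point correction step lends itself to a compact software analogue: two uniform blackbody frames yield a per-pixel gain and offset, and correction is then a single per-pixel affine operation. The NumPy sketch below, with simulated calibration frames, only illustrates that arithmetic; it is not the FPGA design.

```python
# Software sketch of two-point non-uniformity correction (NUC) for an IR sensor.
# The gain/offset matrices come from two uniform blackbody frames; the simulated
# sensor response below is made up for illustration.
import numpy as np

def two_point_nuc(low_frame, high_frame):
    """Precompute per-pixel gain/offset so both calibration frames map to their means."""
    mean_low, mean_high = low_frame.mean(), high_frame.mean()
    gain = (mean_high - mean_low) / np.maximum(high_frame - low_frame, 1e-6)
    offset = mean_low - gain * low_frame
    return gain, offset

def correct(raw, gain, offset):
    """Per-pixel affine correction: corrected = gain * raw + offset."""
    return gain * raw + offset

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    h, w = 480, 640
    pixel_gain = 1.0 + 0.05 * rng.normal(size=(h, w))   # simulated fixed-pattern gain
    pixel_off = 20.0 * rng.normal(size=(h, w))          # simulated fixed-pattern offset
    low = pixel_gain * 1000 + pixel_off                 # blackbody frame, low temperature
    high = pixel_gain * 3000 + pixel_off                # blackbody frame, high temperature
    gain, offset = two_point_nuc(low, high)
    scene = pixel_gain * 2000 + pixel_off               # uniform scene between the two
    print("residual non-uniformity (std):", correct(scene, gain, offset).std())
```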

16 pages, 4691 KiB  
Article
Revisiting Mehrotra and Nichani’s Corner Detection Method for Improvement with Truncated Anisotropic Gaussian Filtering
by Baptiste Magnier and Khizar Hayat
Sensors 2023, 23(20), 8653; https://doi.org/10.3390/s23208653 - 23 Oct 2023
Viewed by 814
Abstract
In the early 1990s, Mehrotra and Nichani developed a filtering-based corner detection method, which, though conceptually intriguing, suffered from limited reliability, leading to minimal references in the literature. Despite its underappreciation, the core concept of this method, rooted in the half-edge concept and directional truncated first derivative of Gaussian, holds significant promise. This article presents a comprehensive assessment of the enhanced corner detection algorithm, combining both qualitative and quantitative evaluations. We thoroughly explore the strengths, limitations, and overall effectiveness of our approach by incorporating visual examples and conducting evaluations. Through experiments conducted on both synthetic and real images, we demonstrate the efficiency and reliability of the proposed algorithm. Collectively, our experimental assessments substantiate that our modifications have transformed the method into one that outperforms established benchmark techniques. Due to its ease of implementation, our improved corner detection process has the potential to become a valuable reference for the computer vision community when dealing with corner detection algorithms. This article thus highlights the quantitative achievements of our refined corner detection algorithm, building upon the groundwork laid by Mehrotra and Nichani, and offers valuable insights for the computer vision community seeking robust corner detection solutions. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
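
The notion of a directional, truncated first derivative of Gaussian can be illustrated in a few lines of NumPy/SciPy: build an anisotropic derivative-of-Gaussian kernel, zero out one half-plane (the "half-edge" side), and collect responses over a set of orientations. The parameters and the crude corner score below are assumptions chosen for illustration, not the algorithm evaluated in the paper.

```python
# Sketch: responses of truncated, oriented first-derivative-of-Gaussian filters,
# the kind of directional half-edge measure discussed above. Illustrative only.
import numpy as np
from scipy.ndimage import convolve

def oriented_dog_kernel(sigma_u=1.0, sigma_v=3.0, theta=0.0, size=15):
    """Anisotropic first derivative of Gaussian, truncated to one half-plane."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = np.cos(theta) * x + np.sin(theta) * y      # derivative axis
    v = -np.sin(theta) * x + np.cos(theta) * y     # elongation axis
    g = np.exp(-(u ** 2 / (2 * sigma_u ** 2) + v ** 2 / (2 * sigma_v ** 2)))
    k = -u / sigma_u ** 2 * g
    k[v < 0] = 0.0                                 # truncation: keep a single half-edge
    return k / (np.abs(k).sum() + 1e-12)

def directional_responses(image, n_angles=36):
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    return np.stack([convolve(image, oriented_dog_kernel(theta=t)) for t in thetas])

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[20:, 20:] = 1.0  # one L-shaped corner at (20, 20)
    mags = np.abs(directional_responses(img))
    a = mags.shape[0]
    best = mags.argmax(axis=0)                     # dominant half-edge orientation
    orth = np.maximum(                             # response 90 degrees away (either side)
        np.take_along_axis(mags, ((best + a // 4) % a)[None], axis=0)[0],
        np.take_along_axis(mags, ((best + 3 * a // 4) % a)[None], axis=0)[0])
    # crude corner score: strong response in two distinct directions at once
    score = np.minimum(np.take_along_axis(mags, best[None], axis=0)[0], orth)
    print("peak score near:", np.unravel_index(score.argmax(), score.shape))
```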

24 pages, 6790 KiB  
Article
Multi-Scale FPGA-Based Infrared Image Enhancement by Using RGF and CLAHE
by Jialong Liu, Xichuan Zhou, Zhenlong Wan, Xuefei Yang, Wei He, Rulong He and Yingcheng Lin
Sensors 2023, 23(19), 8101; https://doi.org/10.3390/s23198101 - 27 Sep 2023
Cited by 1 | Viewed by 1192
Abstract
Infrared sensors capture thermal radiation emitted by objects. They can operate in all weather conditions and are thus employed in fields such as military surveillance, autonomous driving, and medical diagnostics. However, infrared imagery poses challenges such as low contrast and indistinct textures due to the long wavelength of infrared radiation and susceptibility to interference. In addition, complex enhancement algorithms make real-time processing challenging. To address these problems and improve visual quality, in this paper, we propose a multi-scale FPGA-based method for real-time enhancement of infrared images by using rolling guidance filter (RGF) and contrast-limited adaptive histogram equalization (CLAHE). Specifically, the original image is first decomposed into various scales of detail layers and a base layer using RGF. Secondly, we fuse detail layers of diverse scales, then enhance the detail information by using gain coefficients and employ CLAHE to improve the contrast of the base layer. Thirdly, we fuse the detail layers and base layer to obtain the image with global details of the input image. Finally, the proposed algorithm is implemented on an FPGA using advanced high-level synthesis tools. Comprehensive testing of our proposed method on the AXU15EG board demonstrates its effectiveness in significantly improving image contrast and enhancing detail information. At the same time, real-time enhancement at a speed of 147 FPS is achieved for infrared images with a resolution of 640 × 480. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
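
Both building blocks of this pipeline have off-the-shelf software counterparts, so a rough analogue is easy to sketch: decompose with the rolling guidance filter, boost the detail layers with gains, equalize the base layer with CLAHE, and fuse. The sketch below (assuming opencv-contrib-python for cv2.ximgproc; gains and parameters are arbitrary) mirrors only the structure of the method, not the FPGA implementation.

```python
# Rough software analogue of the multi-scale RGF + CLAHE enhancement pipeline.
# Requires opencv-contrib-python for cv2.ximgproc; all parameters are assumptions.
import cv2
import numpy as np

def enhance_ir(img_u8, gains=(1.5, 2.0), clip_limit=2.0):
    """Decompose with RGF, boost detail layers, apply CLAHE to the base, then fuse."""
    layers, current = [], img_u8.astype(np.float32)
    for _ in gains:
        # default-parameter RGF at each step; a real multi-scale version would vary sigmas
        smoothed = cv2.ximgproc.rollingGuidanceFilter(
            np.clip(current, 0, 255).astype(np.uint8)).astype(np.float32)
        layers.append(current - smoothed)          # detail layer at this scale
        current = smoothed
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    base_eq = clahe.apply(np.clip(current, 0, 255).astype(np.uint8)).astype(np.float32)
    detail = sum(g * d for g, d in zip(gains, layers))   # fuse gain-boosted details
    return np.clip(base_eq + detail, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    ir = (np.random.default_rng(2).normal(128, 10, (480, 640))
          .clip(0, 255).astype(np.uint8))
    print(enhance_ir(ir).shape)
```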

15 pages, 3760 KiB  
Article
Shallow Marine High-Resolution Optical Mosaics Based on Underwater Scooter-Borne Camera
by Yiyuan Liu, Xinwei Wang, Liang Sun, Jianan Chen, Jun He and Yan Zhou
Sensors 2023, 23(19), 8028; https://doi.org/10.3390/s23198028 - 22 Sep 2023
Viewed by 673
Abstract
Optical cameras equipped with an underwater scooter can perform efficient shallow marine mapping. In this paper, an underwater image stitching method is proposed for detailed large scene awareness based on a scooter-borne camera, including preprocessing, image registration and post-processing. An underwater image enhancement algorithm based on the inherent underwater optical attenuation characteristics and dark channel prior algorithm is presented to improve underwater feature matching. Furthermore, an optimal seam algorithm is utilized to generate a shape-preserving seam-line in the superpixel-restricted area. The experimental results show the effectiveness of the proposed method for different underwater environments and the ability to generate natural underwater mosaics with few artifacts or visible seams. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
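
The enhancement step referenced above builds on the dark channel prior. As a simplified stand-in (the paper adapts DCP to channel-dependent underwater attenuation, whereas this sketch keeps a single transmission map, and the window size and constants are assumptions), the general DCP recipe looks like this:

```python
# Sketch of a generic dark-channel-prior (DCP) enhancement step, in the spirit of
# the preprocessing described above. Constants and window size are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels and a local window."""
    return minimum_filter(img.min(axis=2), size=patch)

def enhance_underwater(img, omega=0.9, t_min=0.1):
    """Estimate ambient light and transmission, then invert I = J*t + A*(1 - t)."""
    dc = dark_channel(img)
    flat = dc.ravel()
    bright = np.argsort(flat)[-max(1, flat.size // 1000):]   # brightest 0.1% of the DC
    A = img.reshape(-1, 3)[bright].mean(axis=0)               # ambient-light estimate
    t = np.clip(1.0 - omega * dark_channel(img / A), t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    hazy = np.clip(rng.uniform(0.2, 0.6, (240, 320, 3)), 0, 1)
    print(enhance_underwater(hazy).shape)
```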

16 pages, 8745 KiB  
Article
A Second-Order Method for Removing Mixed Noise from Remote Sensing Images
by Ying Zhou, Chao Ren, Shengguo Zhang, Xiaoqin Xue, Yuanyuan Liu, Jiakai Lu and Cong Ding
Sensors 2023, 23(17), 7543; https://doi.org/10.3390/s23177543 - 30 Aug 2023
Viewed by 701
Abstract
Remote sensing image denoising is of great significance for the subsequent use and research of images. Gaussian noise and salt-and-pepper noise are prevalent noises in images. Contemporary denoising algorithms often exhibit limitations when addressing such mixed noise scenarios, manifesting in suboptimal denoising outcomes and the potential blurring of image edges subsequent to the denoising process. To address the above problems, a second-order removal method for mixed noise in remote sensing images was proposed. In the first stage of the method, dilated convolution was introduced into the DnCNN (denoising convolutional neural network) network framework to increase the receptive field of the network, so that more feature information could be extracted from remote sensing images. Meanwhile, a DropoutLayer was introduced after the deep convolution layer to build the noise reduction model to prevent the network from overfitting and to simplify the training difficulty, and then the model was used to perform the preliminary noise reduction on the images. To further improve the image quality of the preliminary denoising results, effectively remove the salt-and-pepper noise in the mixed noise, and preserve more image edge details and texture features, the proposed method employed a second stage on the basis of adaptive median filtering. In this second stage, the median value in the original filter window median was replaced by the nearest neighbor pixel weighted median, so that the preliminary noise reduction result was subjected to secondary processing, and the final denoising result of the mixed noise of the remote sensing image was obtained. In order to verify the feasibility and effectiveness of the algorithm, the remote sensing image denoising experiments and denoised image edge detection experiments were carried out in this paper. When the experimental results are analyzed through subjective visual assessment, images denoised using the proposed method exhibit clearer and more natural details, and they effectively retain edge and texture features. In terms of objective evaluation, the performance of different denoising algorithms is compared using metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and mean structural similarity index (MSSIM). The experimental outcomes indicate that the proposed method for denoising mixed noise in remote sensing images outperforms traditional denoising techniques, achieving a clearer image restoration effect. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
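
The second stage described above is an adaptive median filter in which the window median is replaced by a neighbour-weighted median. The sketch below is a simplified interpretation of that idea (the impulse test, window growth rule, and distance weighting are assumptions, not the paper's exact formulation).

```python
# Sketch of a second-stage adaptive median filter for salt-and-pepper removal,
# using a distance-weighted median of valid neighbours. Simplified interpretation.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, cum[-1] / 2.0)]

def adaptive_median(img, max_win=7):
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] not in (0, 255):      # only touch likely impulse pixels
                continue
            win = 3
            while win <= max_win:
                r = win // 2
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                patch = img[y0:y1, x0:x1].astype(np.float64)
                mask = (patch > 0) & (patch < 255)   # ignore saturated neighbours
                if mask.any():
                    yy, xx = np.mgrid[y0:y1, x0:x1]
                    d = np.hypot(yy - y, xx - x)[mask]
                    out[y, x] = weighted_median(patch[mask], 1.0 / (d + 1.0))
                    break
                win += 2                              # grow the window if no valid pixel
    return out.astype(img.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    clean = np.full((64, 64), 120, np.uint8)
    noisy = clean.copy()
    noisy[rng.random(clean.shape) < 0.05] = 255       # salt
    noisy[rng.random(clean.shape) < 0.05] = 0         # pepper
    print(np.abs(adaptive_median(noisy).astype(int) - clean.astype(int)).mean())
```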

12 pages, 4301 KiB  
Article
Fourier Ptychographic Microscopic Reconstruction Method Based on Residual Hybrid Attention Network
by Jie Li, Jingzi Hao, Xiaoli Wang, Yongshan Wang, Yan Wang, Hao Wang and Xinbo Wang
Sensors 2023, 23(16), 7301; https://doi.org/10.3390/s23167301 - 21 Aug 2023
Cited by 2 | Viewed by 814
Abstract
Fourier ptychographic microscopy (FPM) is a novel technique for computing microimaging that allows imaging of samples such as pathology sections. However, due to the influence of systematic errors and noise, the quality of reconstructed images using FPM is often poor, and the reconstruction efficiency is low. In this paper, a hybrid attention network that combines spatial attention mechanisms with channel attention mechanisms into FPM reconstruction is introduced. Spatial attention can extract fine spatial features and reduce redundant features while, combined with residual channel attention, it adaptively readjusts the hierarchical features to achieve the conversion of low-resolution complex amplitude images to high-resolution ones. The high-resolution images generated by this method can be applied to medical cell recognition, segmentation, classification, and other related studies, providing a better foundation for relevant research. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
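
For readers who want a concrete picture of "residual hybrid attention", the PyTorch sketch below combines an SE-style channel attention with a spatial attention map inside a residual block. Layer sizes and ordering are arbitrary assumptions; this is a generic block in the spirit of the paper, not the authors' network.

```python
# Minimal PyTorch sketch of a residual block combining channel and spatial attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)              # reweight feature channels

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.conv(torch.cat([avg, mx], dim=1))   # reweight spatial positions

class ResidualHybridAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels), SpatialAttention())
    def forward(self, x):
        return x + self.body(x)            # residual path preserves low-frequency content

if __name__ == "__main__":
    block = ResidualHybridAttention(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```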

19 pages, 5356 KiB  
Article
Peripheral Blood Leukocyte Detection Based on an Improved Detection Transformer Algorithm
by Mingjing Li, Shu Fang, Xiaoli Wang, Shuang Chen, Lixia Cao, Jinye Han and Haijiao Yun
Sensors 2023, 23(16), 7226; https://doi.org/10.3390/s23167226 - 17 Aug 2023
Viewed by 974
Abstract
The combination of a blood cell analyzer and artificial microscopy to detect white blood cells is used in hospitals. Blood cell analyzers not only have large throughput, but they also cannot detect cell morphology; although artificial microscopy has high accuracy, it is inefficient and prone to missed detections. In view of the above problems, a method based on Fourier ptychographic microscopy (FPM) and deep learning to detect peripheral blood leukocytes is proposed in this paper. Firstly, high-resolution and wide-field microscopic images of human peripheral blood cells are obtained using the FPM system, and the cell image data are enhanced with DCGANs (deep convolution generative adversarial networks) to construct datasets for performance evaluation. Then, an improved DETR (detection transformer) algorithm is proposed to improve the detection accuracy of small white blood cell targets; that is, the residual module Conv Block in the feature extraction part of the DETR network is improved to reduce the problem of information loss caused by downsampling. Finally, CIOU (complete intersection over union) is introduced as the bounding box loss function, which avoids the problem that GIOU (generalized intersection over union) is difficult to optimize when the two boxes are far away and the convergence speed is faster. The experimental results show that the mAP of the improved DETR algorithm in the detection of human peripheral white blood cells is 0.936. In addition, this algorithm is compared with other convolutional neural networks in terms of average accuracy, parameters, and number of inference frames per second, which verifies the feasibility of this method in microscopic medical image detection. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
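
The CIOU bounding-box loss mentioned in this abstract adds a normalized center-distance term and an aspect-ratio consistency term to the IoU. The sketch below is a generic, single-pair formulation (boxes as (x1, y1, x2, y2)); it is not the paper's training code.

```python
# Sketch of the complete-IoU (CIoU) bounding-box loss for a single pair of boxes.
import math

def ciou_loss(box_a, box_b, eps=1e-7):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)
    # squared center distance, normalized by the enclosing box diagonal
    cdist = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    diag = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1 + eps))
                              - math.atan((bx2 - bx1) / (by2 - by1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + cdist / diag + alpha * v

if __name__ == "__main__":
    print(ciou_loss((10, 10, 50, 60), (12, 15, 48, 58)))   # small loss for close boxes
```

Unlike GIoU, the center-distance term keeps a useful gradient even when the two boxes do not overlap, which is the convergence advantage the abstract refers to.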

16 pages, 4990 KiB  
Article
A Comparative Study of Structural Deformation Test Based on Edge Detection and Digital Image Correlation
by Ruixiang Tang, Wenbing Chen, Yousong Wu, Hongbin Xiong and Banfu Yan
Sensors 2023, 23(8), 3834; https://doi.org/10.3390/s23083834 - 08 Apr 2023
Cited by 2 | Viewed by 1528
Abstract
Digital image-correlation (DIC) algorithms rely heavily on the accuracy of the initial values provided by whole-pixel search algorithms for structural displacement monitoring. When the measured displacement is too large or exceeds the search domain, the calculation time and memory consumption of the DIC algorithm will increase greatly, and even fail to obtain the correct result. The paper introduced two edge-detection algorithms, Canny and Zernike moments in digital image-processing (DIP) technology, to perform geometric fitting and sub-pixel positioning on the specific pattern target pasted on the measurement position, and to obtain the structural displacement according to the change of the target position before and after deformation. This paper compared the difference between edge detection and DIC in accuracy and calculation speed through numerical simulation, laboratory, and field tests. The study demonstrated that the structural displacement test based on edge detection is slightly inferior to the DIC algorithm in terms of accuracy and stability. As the search domain of the DIC algorithm becomes larger, its calculation speed decreases sharply, and is obviously slower than the Canny and Zernike moment algorithms. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
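
To make the comparison concrete, the sketch below contrasts the two families of displacement measures on a synthetic target: an edge-based estimate (Canny edge pixels of a circular target, averaged to a sub-pixel centroid) versus a correlation search (OpenCV template matching as a simple stand-in for a whole-pixel DIC search). The target shape, template window, and thresholds are assumptions; this is not the paper's test setup.

```python
# Edge-based target centroid vs. correlation (template-matching) displacement search.
import cv2
import numpy as np

def make_frame(cx, cy, size=200):
    img = np.zeros((size, size), np.uint8)
    cv2.circle(img, (cx, cy), 15, 255, -1)       # a bright circular target
    return img

def edge_centroid(img):
    edges = cv2.Canny(img, 50, 150)
    ys, xs = np.nonzero(edges)
    return xs.mean(), ys.mean()                  # sub-pixel estimate from edge pixels

def correlation_shift(ref, cur, box=(60, 60, 80, 80)):
    x, y, w, h = box
    res = cv2.matchTemplate(cur, ref[y:y + h, x:x + w], cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return max_loc[0] - x, max_loc[1] - y        # whole-pixel displacement

if __name__ == "__main__":
    ref = make_frame(100, 100)
    cur = make_frame(105, 103)                   # target moved by (5, 3) pixels
    ex0, ey0 = edge_centroid(ref)
    ex1, ey1 = edge_centroid(cur)
    print("edge-based displacement:", ex1 - ex0, ey1 - ey0)
    print("correlation displacement:", correlation_shift(ref, cur))
```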

14 pages, 7037 KiB  
Article
Tone Mapping Operator for High Dynamic Range Images Based on Modified iCAM06
by Yumei Li, Ningfang Liao, Wenmin Wu, Chenyang Deng, Yasheng Li, Qiumei Fan and Chuanjie Liu
Sensors 2023, 23(5), 2516; https://doi.org/10.3390/s23052516 - 24 Feb 2023
Cited by 3 | Viewed by 2263
Abstract
This study attempted to solve the problem of conventional standard display devices encountering difficulties in displaying high dynamic range (HDR) images by proposing a modified tone-mapping operator (TMO) based on the image color appearance model (iCAM06). The proposed model, called iCAM06-m, combined iCAM06 and a multi-scale enhancement algorithm to correct the chroma of images by compensating for saturation and hue drift. Subsequently, a subjective evaluation experiment was conducted to assess iCAM06-m considering other three TMOs by rating the tone mapped images. Finally, the objective and subjective evaluation results were compared and analyzed. The results confirmed the better performance of the proposed iCAM06-m. Furthermore, the chroma compensation effectively alleviated the problem of saturation reduction and hue drift in iCAM06 for HDR image tone-mapping. In addition, the introduction of multi-scale decomposition enhanced the image details and sharpness. Thus, the proposed algorithm can overcome the shortcomings of other algorithms and is a good candidate for a general purpose TMO. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
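
For orientation, a tone-mapping operator of the general kind discussed here compresses HDR luminance while separately handling detail and chroma. The sketch below is a generic Reinhard-style global operator with a base/detail split and simple chroma preservation; it is explicitly NOT iCAM06 or the proposed iCAM06-m, and all exponents and sigmas are assumptions.

```python
# Generic tone-mapping sketch: luminance compression + detail boost + chroma handling.
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(hdr_rgb, key=0.18, detail_gain=1.5, saturation=0.8):
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    lum = np.maximum(lum, 1e-6)
    log_avg = np.exp(np.mean(np.log(lum)))
    scaled = key * lum / log_avg                      # scale to a mid-grey "key"
    base = gaussian_filter(np.log(scaled), sigma=8)   # base/detail decomposition
    detail = np.log(scaled) - base
    ld = np.exp(base + detail_gain * detail)
    ld = ld / (1.0 + ld)                              # global compression into [0, 1)
    ratio = (hdr_rgb / lum[..., None]) ** saturation  # chroma kept, mildly desaturated
    return np.clip(ratio * ld[..., None], 0, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    hdr = np.exp(rng.normal(0, 2, (120, 160, 3)))     # synthetic HDR radiance map
    ldr = tone_map(hdr)
    print(ldr.min(), ldr.max())
```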

Review

Jump to: Research

44 pages, 12990 KiB  
Review
Comprehensive Analysis of Compressible Perceptual Encryption Methods—Compression and Encryption Perspectives
by Ijaz Ahmad, Wooyeol Choi and Seokjoo Shin
Sensors 2023, 23(8), 4057; https://doi.org/10.3390/s23084057 - 17 Apr 2023
Cited by 3 | Viewed by 1440
Abstract
Perceptual encryption (PE) hides the identifiable information of an image in such a way that its intrinsic characteristics remain intact. This recognizable perceptual quality can be used to enable computation in the encryption domain. A class of PE algorithms based on block-level processing has recently gained popularity for their ability to generate JPEG-compressible cipher images. A tradeoff in these methods, however, is between the security efficiency and compression savings due to the chosen block size. Several methods (such as the processing of each color component independently, image representation, and sub-block-level processing) have been proposed to effectively manage this tradeoff. The current study adapts these assorted practices into a uniform framework to provide a fair comparison of their results. Specifically, their compression quality is investigated under various design parameters, such as the choice of colorspace, image representation, chroma subsampling, quantization tables, and block size. Our analyses have shown that at best the PE methods introduce a decrease of 6% and 3% in the JPEG compression performance with and without chroma subsampling, respectively. Additionally, their encryption quality is quantified in terms of several statistical analyses. The simulation results show that block-based PE methods exhibit several favorable properties for the encryption-then-compression schemes. Nonetheless, to avoid any pitfalls, their principal design should be carefully considered in the context of the applications for which we outlined possible future research directions. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
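
The block-level processing that this review surveys typically combines block scrambling with per-block rotation/flipping and a negative-positive transform under a secret key, keeping the cipher image JPEG-compressible. The sketch below illustrates that generic recipe on a grayscale image; the block size and key schedule are assumptions, not any specific scheme from the paper.

```python
# Minimal sketch of block-level perceptual encryption for a grayscale image:
# block scrambling, per-block rotation/flip, and negative-positive transform.
import numpy as np

def encrypt_blocks(img, key, block=16):
    rng = np.random.default_rng(key)                  # key drives all random choices
    h, w = img.shape[:2]
    h, w = h - h % block, w - w % block               # crop to a whole number of blocks
    blocks = (img[:h, :w].reshape(h // block, block, w // block, block)
              .swapaxes(1, 2).reshape(-1, block, block))
    blocks = blocks[rng.permutation(len(blocks))]     # 1) block scrambling
    for i in range(len(blocks)):
        blocks[i] = np.rot90(blocks[i], rng.integers(4))   # 2) rotation
        if rng.integers(2):
            blocks[i] = blocks[i][:, ::-1]                 #    and horizontal flip
        if rng.integers(2):
            blocks[i] = 255 - blocks[i]                    # 3) negative-positive transform
    return (blocks.reshape(h // block, w // block, block, block)
            .swapaxes(1, 2).reshape(h, w))

if __name__ == "__main__":
    img = np.arange(256, dtype=np.uint8).reshape(16, 16).repeat(8, 0).repeat(8, 1)
    cipher = encrypt_blocks(img, key=42)
    print(cipher.shape, (cipher == img).mean())       # few pixels should survive unchanged
```

Because every operation acts within or between fixed-size blocks, the local pixel statistics that JPEG exploits are largely preserved, which is exactly the compressibility/security tradeoff the review analyses.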

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Comprehensive Analysis of Compressible Perceptual Encryption Methods – Compression and Encryption Perspectives
Authors: Ijaz Ahmad; Wooyeol Choi; Seokjoo Shin
Affiliation: Department of Computer Engineering, Chosun University, Gwangju 61452, South Korea
Abstract: Perceptual encryption (PE) hides the identifiable information of an image in such a way that its intrinsic characteristics remain intact. This recognizable perceptual quality can be used to enable computation in the encryption domain. A class of PE algorithms based on block-level processing has recently gained popularity for their ability to generate JPEG-compressible cipher images. A tradeoff in these methods, however, is between the security efficiency and compression savings due to the chosen block size. Several methods (such as the processing of each color component independently, image representation, and sub-block-level processing) have been proposed to effectively manage this tradeoff. The current study adapts these assorted practices into a uniform framework to provide a fair comparison of their results. Specifically, their compression quality is investigated under various design parameters, such as the choice of colorspace, image representation, chroma subsampling, quantization tables, and block size. Also, their encryption quality is quantified in terms of several statistical analyses. The simulation results show that, although block-based PE methods exhibit favorable properties for encryption-then-compression schemes, they lack the diffusion property, as encryption is realized at the block level. Therefore, careful consideration is required to resist differential attacks.
