Recent Advances in Video Compression and Coding

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Processes".

Deadline for manuscript submissions: closed (20 January 2022) | Viewed by 11161

Special Issue Editors

School of Computer Engineering, Nanjing Institute of Technology, Nanjing 211167, China
Interests: cyber security; applied cryptography; multimedia security; privacy protection; biometrics; security management; location-based services; cloud computing security

Special Issue Information

Dear Colleagues,

With the development of imaging and display technologies, high-definition video has become increasingly common in daily life. However, as resolution, frame rate, sampling precision, and other quality parameters increase, the volume of raw video data grows dramatically. This huge volume of raw data poses a challenge for signal processing, storage, and transmission. Efficient video compression and coding technology is therefore vital for video to be widely used in multimedia applications.
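To give a sense of the data volumes involved, the short sketch below estimates the raw bitrate of one common format. The parameters (1080p, 60 fps, 8-bit samples, 4:2:0 chroma subsampling) are illustrative assumptions chosen for this example, not figures taken from the text.

```python
# Back-of-the-envelope raw (uncompressed) video data rate.
def raw_bitrate_bps(width, height, fps, bit_depth=8, chroma_factor=1.5):
    """Raw bitrate in bits per second; chroma_factor=1.5 models 4:2:0 sampling."""
    return width * height * chroma_factor * bit_depth * fps

# Example: 1080p at 60 fps, 8-bit, 4:2:0 (illustrative assumptions)
bps = raw_bitrate_bps(1920, 1080, 60)
print(f"{bps / 1e9:.2f} Gbit/s")            # ~1.49 Gbit/s uncompressed
print(f"{bps * 3600 / 8 / 1e12:.2f} TB/h")  # ~0.67 TB per hour of video
```

Even this mid-range format produces roughly two thirds of a terabyte per hour before compression, which is why practical distribution depends on codecs achieving compression ratios in the hundreds.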

Video compression is a practical implementation of source coding in information theory, and it fits the scope and development trends of the journal Information well. This Special Issue focuses on theoretical and practical design issues in video compression and coding. Our aim is to bring together researchers, industry practitioners, and individuals working in related areas to share their new ideas, latest findings, and state-of-the-art achievements.

The topics of interest include, but are not limited to:

  • Low-complexity video coding
  • Optimization algorithms for video coding
  • Transform optimization algorithms for video coding
  • Transcoding algorithms
  • Video object detection algorithms
  • Coding algorithms for 3D/HDR videos
  • Video information hiding algorithms
  • Video broadcasting systems
  • Advanced algorithms for video watermarking
  • Artificial intelligence for video processing

Dr. Zhaoqing Pan
Dr. Yuan Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multimedia processing
  • video compression
  • video coding
  • video transcoding

Published Papers (4 papers)

Research

14 pages, 1515 KiB  
Article
A miRNA-Disease Association Identification Method Based on Reliable Negative Sample Selection and Improved Single-Hidden Layer Feedforward Neural Network
by Qinglong Tian, Su Zhou and Qi Wu
Information 2022, 13(3), 108; https://doi.org/10.3390/info13030108 - 24 Feb 2022
Cited by 3 | Viewed by 1703
Abstract
miRNAs are a category of important endogenous non-coding small RNAs that are ubiquitous in eukaryotes. They are widely involved in the regulation of post-transcriptional gene expression and play a critical part in the development of human diseases. With recent advancements in big data technology, identifying causative miRNAs with bioinformatics methods has become a research hotspot. In this paper, a method called RNSSLFN is proposed to identify miRNA-disease associations through reliable negative sample selection and an improved single-hidden layer feedforward neural network (SLFN). It involves, first, obtaining integrated similarities for miRNAs and diseases; next, selecting reliable negative samples from unknown miRNA-disease associations by distinguishing up-regulated from down-regulated miRNAs; and, finally, introducing an improved SLFN to solve the prediction task. Experimental results on the latest dataset, HMDD v3.2, under 5-fold cross-validation (CV) show that the average AUC and AUPR of RNSSLFN reach 0.9316 and 0.9065, respectively, which is superior to the other three state-of-the-art methods. Furthermore, in case studies of 10 common cancers, more than 70% of the top 30 predicted miRNA-disease association pairs are verified in the databases, which further confirms the reliability and effectiveness of the RNSSLFN model. Overall, RNSSLFN has great potential and broad application prospects for predicting miRNA-disease associations.
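For readers unfamiliar with the model class, the sketch below trains a generic single-hidden-layer feedforward network for binary association scoring with plain gradient descent. It is a minimal illustration of an SLFN, not the RNSSLFN implementation; the layer sizes, learning rate, and toy features are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_slfn(X, y, hidden=64, lr=0.05, epochs=200):
    """Generic SLFN for binary labels, trained on binary cross-entropy."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, hidden);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden-layer activations
        p = sigmoid(H @ W2 + b2)           # predicted association probability
        g = (p - y) / n                    # gradient of BCE w.r.t. the logits
        W2 -= lr * (H.T @ g); b2 -= lr * g.sum()
        gH = np.outer(g, W2) * (1 - H**2)  # backprop through tanh
        W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)

# Toy usage: rows could be concatenated miRNA/disease similarity features.
X = rng.normal(size=(200, 32)); y = rng.integers(0, 2, 200).astype(float)
params = train_slfn(X, y)
print(predict(params, X[:5]))
```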
(This article belongs to the Special Issue Recent Advances in Video Compression and Coding)

13 pages, 3073 KiB  
Article
Robust Segmentation Based on Salient Region Detection Coupled Gaussian Mixture Model
by Xiaoyan Pan, Yuhui Zheng and Byeungwoo Jeon
Information 2022, 13(2), 98; https://doi.org/10.3390/info13020098 - 18 Feb 2022
Cited by 4 | Viewed by 1787
Abstract
Impressive progress in image segmentation has been witnessed recently. In this paper, an improved model that introduces frequency-tuned salient region detection into the Gaussian mixture model (GMM), named FTGMM, is proposed. Frequency-tuned salient region detection is used to obtain a saliency map of the original image, and the saliency values are incorporated into the Gaussian mixture model as spatial information weights. The proposed method (FTGMM) calculates the model parameters via the expectation maximization (EM) algorithm with low computational complexity. In both qualitative and quantitative experiments, the subjective visual quality and the evaluation metrics are found to be better than those of other methods. The proposed method (FTGMM) is therefore shown to achieve high precision and better robustness.
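As a minimal sketch of the general idea, the code below runs EM for a one-dimensional pixel-intensity GMM in which each pixel carries a saliency weight that modulates its vote in the M-step. The weighting scheme here is a simplification of ours, not the exact FTGMM formulation.

```python
import numpy as np

def weighted_gmm_em(x, w, K=3, iters=50, eps=1e-8):
    """x: flattened pixel intensities; w: per-pixel saliency weights in [0, 1]."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, K); var = np.full(K, x.var() + eps); pi = np.full(K, 1 / K)
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: saliency-weighted updates (salient pixels count for more)
        rw = r * w[:, None]
        Nk = rw.sum(axis=0) + eps
        mu = (rw * x[:, None]).sum(axis=0) / Nk
        var = (rw * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + eps
        pi = Nk / Nk.sum()
    return pi, mu, var, r.argmax(axis=1)  # hard labels give the segmentation

# Toy usage: a synthetic three-cluster "image" with a flat saliency map.
x = np.concatenate([np.random.normal(m, 0.05, 500) for m in (0.2, 0.5, 0.8)])
w = np.ones_like(x)
pi, mu, var, labels = weighted_gmm_em(x, w)
print(np.sort(mu))
```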
(This article belongs to the Special Issue Recent Advances in Video Compression and Coding)

16 pages, 2737 KiB  
Article
Semantic Residual Pyramid Network for Image Inpainting
by Haiyin Luo and Yuhui Zheng
Information 2022, 13(2), 71; https://doi.org/10.3390/info13020071 - 1 Feb 2022
Cited by 4 | Viewed by 2513
Abstract
Existing image inpainting methods based on deep learning have made great progress. These methods generate either contextually and semantically consistent images or visually excellent images, overlooking that both semantic consistency and visual quality should be achieved. In this article, we propose a Semantic Residual Pyramid Network (SRPNet) based on a deep generative model for image inpainting at both the image and feature levels. The method encodes a masked image with a residual semantic pyramid encoder and then decodes the encoded features into an inpainted image with a multi-layer decoder. At this stage, a multi-layer attention transfer network is used to gradually fill in the missing regions of the image. To generate semantically consistent and visually superior images, multi-scale discriminators are added to the network structure. The discriminators are divided into a global and a local discriminator: the global discriminator assesses the overall consistency of the inpainted image, while the local discriminator judges the consistency of its missing regions. Finally, we conducted experiments on four different datasets and achieved strong performance in filling both regular and irregular missing regions.
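The global/local discriminator arrangement is easier to see in code. The sketch below is a generic GAN-inpainting skeleton in PyTorch, not SRPNet itself: the encoder-decoder, channel counts, and fixed square hole are all illustrative assumptions, and the residual pyramid and attention transfer components are omitted.

```python
import torch
import torch.nn as nn

class Inpainter(nn.Module):
    """Minimal encoder-decoder generator: (masked image + mask) -> RGB image."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(4, ch, 4, 2, 1), nn.ReLU(True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())

    def forward(self, img, mask):
        return self.dec(self.enc(torch.cat([img * (1 - mask), mask], dim=1)))

def make_discriminator(ch=64):
    """PatchGAN-style critic; instantiated once globally and once locally."""
    return nn.Sequential(
        nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(ch * 2, 1, 4, 1, 1))

g = Inpainter()
d_global, d_local = make_discriminator(), make_discriminator()

img = torch.randn(2, 3, 64, 64)
mask = torch.zeros(2, 1, 64, 64); mask[:, :, 16:48, 16:48] = 1  # square hole
fake = g(img, mask)
# Global critic sees the whole result; local critic sees only the hole region.
adv = d_global(fake).mean() + d_local(fake[:, :, 16:48, 16:48]).mean()
print(fake.shape, adv.item())
```

The design point is that the two critics supply complementary gradients: the global one penalizes inconsistency between the fill and its surroundings, while the local one forces detail inside the hole itself.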
(This article belongs to the Special Issue Recent Advances in Video Compression and Coding)

15 pages, 36464 KiB  
Article
Joint Subtitle Extraction and Frame Inpainting for Videos with Burned-In Subtitles
by Haoran Xu, Yanbai He, Xinya Li, Xiaoying Hu, Chuanyan Hao and Bo Jiang
Information 2021, 12(6), 233; https://doi.org/10.3390/info12060233 - 29 May 2021
Viewed by 4161
Abstract
Subtitles are crucial for video content understanding. However, a large number of videos have only burned-in, hardcoded subtitles that prevent video re-editing, translation, etc. In this paper, we construct a deep-learning-based system for the inverse conversion of a burned-in subtitle video into a subtitle file and an inpainted video, by coupling three deep neural networks (CTPN, CRNN, and EdgeConnect). We evaluated the performance of the proposed method and found that it achieves high-precision separation of subtitles and video frames and significantly improves the video inpainting results compared with existing methods. This research fills a gap in the application of deep learning to burned-in subtitle video reconstruction and is expected to be widely applied in the reconstruction and re-editing of videos with subtitles, advertisements, logos, and other occlusions.
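The coupling of the three networks amounts to a detect-recognize-inpaint loop over frames. The sketch below shows that orchestration only; detect_text_boxes, recognize_text, and inpaint_region are hypothetical wrappers standing in for CTPN, CRNN, and EdgeConnect respectively, not the real APIs of those models.

```python
import numpy as np

def process_video(frames, fps, detect_text_boxes, recognize_text, inpaint_region):
    """frames: iterable of HxWx3 arrays. Returns (subtitle entries, clean frames)."""
    subtitles, clean_frames = [], []
    for i, frame in enumerate(frames):
        boxes = detect_text_boxes(frame)          # CTPN-style text detector
        mask = np.zeros(frame.shape[:2], bool)
        for (x0, y0, x1, y1) in boxes:
            text = recognize_text(frame[y0:y1, x0:x1])  # CRNN-style OCR
            subtitles.append({"time": i / fps, "box": (x0, y0, x1, y1), "text": text})
            mask[y0:y1, x0:x1] = True
        # EdgeConnect-style inpainting removes the masked subtitle pixels.
        clean_frames.append(inpaint_region(frame, mask) if mask.any() else frame)
    return subtitles, clean_frames
```

In practice, consecutive frames carrying identical text would be merged into a single subtitle cue with start and end timestamps; the sketch omits that grouping step.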
(This article belongs to the Special Issue Recent Advances in Video Compression and Coding)
