Topic Editors

Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China
Image & Vision Computing Lab. (IVC), University of Waterloo, Waterloo, ON N2L 3G1, Canada

Advances in Perceptual Quality Assessment of User Generated Contents

Abstract submission deadline
closed (31 December 2023)
Manuscript submission deadline
closed (31 March 2024)

Topic Information

Dear Colleagues,

Due to the rapid development of mobile devices and wireless networks in recent years, creating, watching, and sharing user-generated content (UGC) through various applications such as social media has become a popular daily activity for the general public. User-generated content in these applications exhibits markedly different characteristics from conventional, professionally generated content (PGC). Unlike PGC, UGC is generally captured in the wild by ordinary people using diverse capture devices, and it may suffer from complex real-world distortions, such as overexposure, underexposure, and camera shake, which pose additional challenges for quality assessment. An effective quality assessment (QA) model for evaluating the perceptual quality of UGC can, on the one hand, help service providers recommend high-quality content to users and, on the other hand, guide the development of more effective content processing algorithms.

Although subjective and objective quality assessments have been carried out in this area for many years, most have focused on professionally generated content without considering the specific characteristics of user-generated content. This Topic seeks original submissions on the latest technologies concerning the perceptual quality assessment of user-generated content, including—but not limited to—image/video/audio quality assessment databases and metrics for UGC, as well as the perceptual processing, compression, enhancement, and distribution of UGC. Submissions pertaining to related practical applications and model development for user-generated content are also welcome.

Prof. Dr. Guangtao Zhai
Dr. Xiongkuo Min
Dr. Menghan Hu
Dr. Wei Zhou
Topic Editors

Keywords

  • user-generated content
  • perceptual quality
  • image/video/audio quality assessment
  • image analysis and image processing
  • video/audio signal processing
  • cameras
  • user-generated content based on sensing systems

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Sensors (sensors) | 3.9 | 6.8 | 2001 | 17 days | CHF 2600
Journal of Imaging (jimaging) | 3.2 | 4.4 | 2015 | 21.7 days | CHF 1800
Electronics (electronics) | 2.9 | 4.7 | 2012 | 15.6 days | CHF 2400
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 days | CHF 2400
Entropy (entropy) | 2.7 | 4.7 | 1999 | 20.8 days | CHF 2600
Digital (digital) | – | – | 2021 | 22.7 days | CHF 1000
Journal of Intelligence (jintelligence) | 3.5 | 2.5 | 2013 | 32.8 days | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (12 papers)

14 pages, 9803 KiB  
Article
Blind Quality Assessment of Images Containing Objects of Interest
by Wentong He and Ze Luo
Sensors 2023, 23(19), 8205; https://doi.org/10.3390/s23198205 - 30 Sep 2023
Cited by 1 | Viewed by 763
Abstract
To monitor objects of interest, such as wildlife and people, image-capturing devices are used to collect a large number of images with and without objects of interest. As we are recording valuable information about the behavior and activity of objects, the quality of images containing objects of interest should be rated higher than that of images without objects of interest, even if the former exhibit more severe distortion than the latter. However, current quality assessment methods produce the opposite results. In this study, we propose an end-to-end model, named DETR-IQA (detection transformer image quality assessment), which extends DETR (detection transformer) to perform object detection and blind image quality assessment (IQA) simultaneously by adding IQA heads comprising simple multi-layer perceptrons on top of the decoder. Using the IQA heads, DETR-IQA carries out blind IQA based on the weighted fusion of the distortion degree of the regions containing objects of interest and the remaining regions of the image; the predicted quality score of images containing objects of interest is generally greater than that of images without objects of interest. Currently, the subjective quality scores in all public datasets reflect only image distortion and do not consider objects of interest. We therefore manually extracted the images from KonIQ-10k, the largest authentic-distortion dataset, in which five predefined classes of objects form the main content, and used them as the experimental dataset. The experimental results show that, with slight degradation in object detection performance and simple IQA heads, DETR-IQA achieves PLCC and SRCC values of 0.785 and 0.727, respectively, exceeding those of some deep learning-based IQA models designed solely for IQA. With a negligible increase in computation and complexity over object detection alone and no decrease in inference speed, DETR-IQA can perform object detection and IQA via multi-tasking and substantially reduce the workload. Full article
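As a rough illustration of the weighted-fusion idea described in the abstract, the following Python sketch shows an IQA head of the kind mentioned: a small MLP maps decoder query embeddings to per-query quality scores, which are fused with weights derived from object confidence. The layer sizes, the softmax weighting, and the `objectness` input are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class IQAHead(nn.Module):
    """Minimal sketch of an IQA head on top of decoder query embeddings.
    Layer sizes and the objectness-weighted fusion are assumptions."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, queries, objectness):
        # queries: (batch, num_queries, hidden_dim) decoder outputs
        # objectness: (batch, num_queries) confidence that a query covers an object of interest
        per_query_quality = self.mlp(queries).squeeze(-1)   # (batch, num_queries)
        weights = torch.softmax(objectness, dim=-1)          # emphasize object regions
        return (weights * per_query_quality).sum(dim=-1)     # fused image-level quality score
```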

24 pages, 4736 KiB  
Article
A Novel No-Reference Quality Assessment Metric for Stereoscopic Images with Consideration of Comprehensive 3D Quality Information
by Liquan Shen, Yang Yao, Xianqiu Geng, Ruigang Fang and Dapeng Wu
Sensors 2023, 23(13), 6230; https://doi.org/10.3390/s23136230 - 07 Jul 2023
Viewed by 939
Abstract
Recently, stereoscopic image quality assessment has attracted a lot of attention. However, compared with 2D image quality assessment, it is much more difficult to assess the quality of stereoscopic images due to the limited understanding of 3D visual perception. This paper proposes a novel no-reference quality assessment metric for stereoscopic images using natural scene statistics that considers both the quality of the cyclopean image and 3D visual perceptual information (binocular fusion and binocular rivalry). In the proposed method, not only is the quality of the cyclopean image considered, but binocular rivalry and other intrinsic 3D visual properties are also exploited. Specifically, in order to improve the objective quality of the cyclopean image, features of the cyclopean image in both the spatial domain and the transformed domain are extracted based on the natural scene statistics (NSS) model. Furthermore, to better capture the intrinsic properties of the stereoscopic image, the binocular rivalry effect and other 3D visual properties are also considered in the feature extraction process. Following adaptive feature pruning using principal component analysis, improved metric accuracy is achieved by the proposed method. The experimental results show that the proposed metric achieves good and consistent alignment with subjective assessment of stereoscopic images in comparison with existing methods, with the highest SROCC (0.952) and PLCC (0.962) scores being obtained on the LIVE 3D database Phase I. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
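For readers unfamiliar with the cyclopean-image step mentioned in the abstract, the following sketch fuses the left and right views with weights derived from local energy. Using local variance as the energy term is an assumption for illustration; the paper's actual cyclopean model may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cyclopean_image(left, right, win=7, eps=1e-6):
    """Gain-control style fusion of left/right views into a cyclopean image.
    Local variance stands in for the energy term (illustrative assumption)."""
    def local_energy(img):
        mu = uniform_filter(img, win)
        return np.maximum(uniform_filter(img * img, win) - mu * mu, 0.0)
    e_l, e_r = local_energy(left), local_energy(right)
    return (e_l * left + e_r * right) / (e_l + e_r + eps)
```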

23 pages, 11737 KiB  
Article
Image Quality Assessment for Realistic Zoom Photos
by Zongxi Han, Yutao Liu, Rong Xie and Guangtao Zhai
Sensors 2023, 23(10), 4724; https://doi.org/10.3390/s23104724 - 13 May 2023
Viewed by 1576
Abstract
New CMOS imaging sensor (CIS) techniques in smartphones have helped user-generated content overtake traditional DSLRs in our daily lives. However, tiny sensor sizes and fixed focal lengths also lead to more grainy details, especially in zoom photos. Moreover, multi-frame stacking and post-sharpening algorithms can produce zigzag textures and over-sharpened appearances, whose quality traditional image quality metrics may overestimate. To solve this problem, a real-world zoom photo database is first constructed in this paper, which includes 900 tele-photos from 20 different mobile sensors and ISPs. We then propose a novel no-reference zoom quality metric which incorporates the traditional estimation of sharpness and the concept of image naturalness. More specifically, for the measurement of image sharpness, we are the first to combine the total energy of the predicted gradient image with the entropy of the residual term under the framework of free-energy theory. To further compensate for the over-sharpening effect and other artifacts, a set of model parameters of mean-subtracted contrast-normalized (MSCN) coefficients is utilized as the natural statistics representative. Finally, these two measures are combined linearly. Experimental results on the zoom photo database demonstrate that our quality metric achieves SROCC and PLCC over 0.91, while the performance of a single sharpness or naturalness index is around 0.85. Moreover, compared with the best tested general-purpose and sharpness models, our zoom metric outperforms them by 0.072 and 0.064 in SROCC, respectively. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
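The MSCN coefficients referred to in the abstract are a standard NSS preprocessing step; the sketch below computes them with conventional default parameters (the Gaussian window width and stabilizing constant are common choices, not necessarily those used in the paper).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.
    sigma and c are conventional defaults, not necessarily the paper's."""
    mu = gaussian_filter(image, sigma)
    var = np.maximum(gaussian_filter(image * image, sigma) - mu * mu, 0.0)
    return (image - mu) / (np.sqrt(var) + c)
```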

20 pages, 4091 KiB  
Article
Semantically Adaptive JND Modeling with Object-Wise Feature Characterization, Context Inhibition and Cross-Object Interaction
by Xia Wang, Haibing Yin, Yu Lu, Shiling Zhao and Yong Chen
Sensors 2023, 23(6), 3149; https://doi.org/10.3390/s23063149 - 15 Mar 2023
Viewed by 1303
Abstract
Performance bottlenecks have emerged in the optimization of JND modeling based on low-level, hand-crafted visual feature metrics. High-level semantics have a considerable impact on perceptual attention and subjective video quality, yet most existing JND models do not adequately account for this impact. This indicates that there is still much room and potential for performance optimization in semantic feature-based JND models. To address this status quo, this paper investigates the visual attention response induced by heterogeneous semantic features from three aspects, i.e., object, context, and cross-object, to further improve the efficiency of JND models. On the object side, this paper first focuses on the main semantic features that affect visual attention, including semantic sensitivity, object area and shape, and central bias. Following that, the coupling of heterogeneous visual features with HVS perceptual properties is analyzed and quantified. Second, based on the reciprocity of objects and contexts, the contextual complexity is measured to gauge the inhibitory effect of context on visual attention. Third, cross-object interactions are dissected using the principle of bias competition, and a semantic attention model is constructed in conjunction with a model of attentional competition. Finally, to build an improved transform-domain JND model, a weighting factor is used to fuse the semantic attention model with the basic spatial attention model. Extensive simulation results validate that the proposed JND profile is highly consistent with the HVS and highly competitive among state-of-the-art models. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
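To make the final fusion step concrete, the sketch below shows one plausible way a fused attention map could modulate a base JND threshold via a weighting factor; the fusion weight, the clipping range, and the inverse relation between attention and threshold are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def modulated_jnd(base_jnd, semantic_attention, spatial_attention,
                  alpha=0.6, floor=0.5, ceil=1.5):
    """Illustrative attention-weighted JND modulation: fused attention lowers
    the visibility threshold in salient regions. alpha and the clipping range
    are assumed values, not the paper's."""
    attention = alpha * semantic_attention + (1.0 - alpha) * spatial_attention
    modulation = np.clip(1.5 - attention, floor, ceil)  # high attention -> smaller JND
    return base_jnd * modulation
```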

16 pages, 4228 KiB  
Article
Multi-Attention Segmentation Networks Combined with the Sobel Operator for Medical Images
by Fangfang Lu, Chi Tang, Tianxiang Liu, Zhihao Zhang and Leida Li
Sensors 2023, 23(5), 2546; https://doi.org/10.3390/s23052546 - 24 Feb 2023
Cited by 5 | Viewed by 1792
Abstract
Medical images are an important basis for diagnosing diseases, among which CT images are an important tool for diagnosing lung lesions. However, manual segmentation of infected areas in CT images is time-consuming and laborious. With their excellent feature extraction capabilities, deep learning-based methods have been widely used for automatic lesion segmentation of COVID-19 CT images. However, the segmentation accuracy of these methods is still limited. To effectively quantify the severity of lung infections, we propose a Sobel operator combined with multi-attention networks for COVID-19 lesion segmentation (SMA-Net). In our SMA-Net method, an edge feature fusion module uses the Sobel operator to add edge detail information to the input image. To guide the network to focus on key regions, SMA-Net introduces a self-attentive channel attention mechanism and a spatial linear attention mechanism. In addition, the Tversky loss function is adopted in the segmentation network for small lesions. Comparative experiments on COVID-19 public datasets show that the average Dice similarity coefficient (DSC) and intersection over union (IoU) of the proposed SMA-Net model are 86.1% and 77.8%, respectively, which are better than those of most existing segmentation networks. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
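The Tversky loss adopted for small lesions has a standard form; the sketch below follows the common convention in which the false-negative weight is set larger than the false-positive weight (the specific values are common choices, not necessarily the paper's).

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss; beta > alpha penalizes false negatives more heavily,
    which favors small lesions. The weights are common choices."""
    pred, target = pred.flatten(), target.flatten()
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```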

26 pages, 4867 KiB  
Article
NITS-IQA Database: A New Image Quality Assessment Database
by Jayesh Ruikar and Saurabh Chaudhury
Sensors 2023, 23(4), 2279; https://doi.org/10.3390/s23042279 - 17 Feb 2023
Viewed by 2803
Abstract
This paper describes a newly created image database, termed the NITS-IQA database, for image quality assessment (IQA). Despite recently developed IQA databases, which contain huge collections of images and distortion types, there is still a lack of new distortions and of real natural images taken by a camera. The NITS-IQA database contains a total of 414 images, including 405 distorted images (nine types of distortion with five levels for each distortion type) and nine original images. This paper gives a detailed, step-by-step description of the database development along with the procedure of the subjective test experiment. The subjective test experiment was carried out in order to obtain individual opinion scores on the quality of the images presented to the subjects. The mean opinion score (MOS) is obtained from the individual opinion scores. In this paper, the Pearson, Spearman, and Kendall rank correlations between state-of-the-art IQA techniques and the MOS are analyzed and presented. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
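The three correlation measures reported in the abstract can be computed directly with SciPy, as in the sketch below; any preprocessing such as a nonlinear logistic mapping before PLCC is omitted here.

```python
from scipy import stats

def correlation_report(objective_scores, mos):
    """PLCC, SROCC and KROCC between objective scores and the MOS.
    Any nonlinear mapping applied before PLCC is omitted in this sketch."""
    plcc, _ = stats.pearsonr(objective_scores, mos)
    srocc, _ = stats.spearmanr(objective_scores, mos)
    krocc, _ = stats.kendalltau(objective_scores, mos)
    return {"PLCC": plcc, "SROCC": srocc, "KROCC": krocc}
```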

18 pages, 5575 KiB  
Article
Pixel-Domain Just Noticeable Difference Modeling with Heterogeneous Color Features
by Tingyu Hu, Haibing Yin, Hongkui Wang, Ning Sheng and Yafen Xing
Sensors 2023, 23(4), 1788; https://doi.org/10.3390/s23041788 - 05 Feb 2023
Viewed by 1166
Abstract
With rapidly emerging user-generated images, perceptual compression of color images is an inevitable mission. However, in existing just noticeable difference (JND) models, color-oriented features are not fully taken into account to match HVS perception characteristics such as sensitivity, attention, and masking. To fully imitate the color perception process, we extract color-related feature parameters as local features, including color edge intensity and color complexity; region-wise features, including color area proportion, color distribution position, and color distribution dispersion; and an inherent feature irrelevant to color content, called color perception difference. Then, the potential interaction among them is analyzed and modeled as color contrast intensity. To utilize them, color uncertainty and color saliency are envisaged to emanate from feature integration in an information communication framework. Finally, the color uncertainty and color saliency models are applied to improve the conventional JND model, taking the masking and attention effects into consideration. Subjective and objective experiments validate the effectiveness of the proposed model, which delivers superior noise concealment capacity compared with state-of-the-art works. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
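As an illustration of one of the local color features mentioned in the abstract, the sketch below computes a simple color edge intensity map by pooling per-channel gradient magnitudes; the exact formulation in the paper may differ.

```python
import numpy as np
from scipy.ndimage import sobel

def color_edge_intensity(rgb):
    """Illustrative color edge intensity: per-channel gradient magnitude,
    pooled by taking the channel-wise maximum (an assumed formulation)."""
    mags = []
    for ch in range(rgb.shape[-1]):
        gx = sobel(rgb[..., ch], axis=1)
        gy = sobel(rgb[..., ch], axis=0)
        mags.append(np.hypot(gx, gy))
    return np.max(np.stack(mags, axis=-1), axis=-1)
```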

17 pages, 3790 KiB  
Article
Subjective Assessment of Objective Image Quality Metrics Range Guaranteeing Visually Lossless Compression
by Afnan, Faiz Ullah, Yaseen, Jinhee Lee, Sonain Jamil and Oh-Jin Kwon
Sensors 2023, 23(3), 1297; https://doi.org/10.3390/s23031297 - 23 Jan 2023
Cited by 4 | Viewed by 3034
Abstract
The usage of media such as images and videos has increased extensively in recent years. It has become impractical to store images and videos acquired by camera sensors in their raw form due to their huge storage size. Generally, image data are compressed with a compression algorithm and then stored or transmitted to another platform. Thus, image compression helps to reduce the storage size and transmission cost of images and videos. However, image compression might cause visual artifacts, depending on the compression level. In this regard, performance evaluation of compression algorithms is an essential task needed to reconstruct images with visually or near-visually lossless quality in the case of lossy compression. The performance of compression algorithms is assessed by both subjective and objective image quality assessment (IQA) methodologies. In this paper, subjective and objective IQA methods are integrated to evaluate the range of image quality metric (IQM) values that guarantees visually or near-visually lossless compression performed by the JPEG 1 standard (ISO/IEC 10918). A novel “Flicker Test Software” was developed for conducting the proposed subjective and objective evaluation study. In the flicker test, the selected test images are subjectively analyzed by subjects at different compression levels, and the IQMs are calculated at the previous compression level, at which the images were still visually lossless for each subject. The results analysis shows that the objective IQMs whose values are most closely packed (i.e., have the least standard deviation) while guaranteeing visually lossless compression of the images with JPEG 1 are the feature similarity index measure (FSIM), the multiscale structural similarity index measure (MS-SSIM), and the information content weighted SSIM (IW-SSIM), with average values of 0.9997, 0.9970, and 0.9970, respectively. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
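The flicker-test procedure described in the abstract can be summarized by the following sketch: JPEG quality is stepped down until a subject reports visible flicker, and the objective IQM is recorded at the previous (still visually lossless) level. `subject_sees_flicker` and `iqm` are hypothetical callables standing in for the subjective judgment and the chosen metric (e.g., FSIM or MS-SSIM).

```python
from io import BytesIO
from PIL import Image

def visually_lossless_level(image_path, subject_sees_flicker, iqm,
                            qualities=range(95, 4, -5)):
    """Step down JPEG quality until the subject reports flicker, then return
    the last (still visually lossless) quality level and its IQM value.
    subject_sees_flicker and iqm are hypothetical callables."""
    reference = Image.open(image_path).convert("RGB")
    last_lossless = None
    for q in qualities:
        buf = BytesIO()
        reference.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        compressed = Image.open(buf).convert("RGB")
        if subject_sees_flicker(reference, compressed):
            break
        last_lossless = (q, iqm(reference, compressed))
    return last_lossless
```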

10 pages, 1425 KiB  
Article
Person Re-Identification Based on Contour Information Embedding
by Hao Chen, Yan Zhao and Shigang Wang
Sensors 2023, 23(2), 774; https://doi.org/10.3390/s23020774 - 10 Jan 2023
Cited by 1 | Viewed by 1725
Abstract
Person re-identification (Re-ID) plays an important role in the search for missing people and the tracking of suspects. Person re-identification based on deep learning has made great progress in recent years, and the use of pedestrian contour features has also received attention. In this study, we found that pedestrian contour features are not sufficiently represented in CNNs. On this basis, in order to improve the recognition performance of the Re-ID network, we propose a contour information extraction module (CIEM) and a contour information embedding method, so that the network can focus on more contour information. Our method is competitive on experimental data: the mAP on the Market1501 dataset reached 83.8% and Rank-1 reached 95.1%, while the mAP on the DukeMTMC-reID dataset reached 73.5% and Rank-1 reached 86.8%. The experimental results show that adding contour information to the network can improve the recognition rate, and that good contour features play an important role in Re-ID research. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
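One simple way to embed contour information, in the spirit of the method described above, is to derive a contour map with fixed Sobel kernels and append it to the input as an extra channel, as in the sketch below; this is an illustrative stand-in, not the paper's CIEM.

```python
import torch
import torch.nn.functional as F

def append_contour_channel(images):
    """Derive a contour map with fixed Sobel kernels and append it to the
    RGB input as a fourth channel. images: (B, 3, H, W) float tensor."""
    gray = images.mean(dim=1, keepdim=True)                              # (B, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=images.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    contour = torch.sqrt(gx * gx + gy * gy + 1e-12)
    return torch.cat([images, contour], dim=1)                           # (B, 4, H, W)
```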

27 pages, 3157 KiB  
Article
Client-Oriented Blind Quality Metric for High Dynamic Range Stereoscopic Omnidirectional Vision Systems
by Liuyan Cao, Jihao You, Yang Song, Haiyong Xu, Zhidi Jiang and Gangyi Jiang
Sensors 2022, 22(21), 8513; https://doi.org/10.3390/s22218513 - 04 Nov 2022
Cited by 1 | Viewed by 1258
Abstract
A high dynamic range (HDR) stereoscopic omnidirectional vision system can provide users with more realistic binocular and immersive perception, but the HDR stereoscopic omnidirectional image (HSOI) suffers distortions during its encoding and visualization, making its quality evaluation more challenging. To solve this problem, this paper proposes a client-oriented blind HSOI quality metric based on visual perception. The proposed metric mainly consists of a monocular perception module (MPM) and a binocular perception module (BPM), which combine monocular/binocular, omnidirectional, and HDR/tone-mapping perception. The MPM extracts features from three aspects: global color distortion, symmetric/asymmetric distortion, and scene distortion. In the BPM, the binocular fusion map and binocular difference map are generated by joint image filtering. Then, brightness segmentation is performed on the binocular fusion image, and distinctive features are extracted from the segmented high/low/middle brightness regions. For the binocular difference map, natural scene statistical features are extracted from multi-coefficient derivative maps. Finally, feature screening is used to remove the redundancy among the extracted features. Experimental results on the HSOID database show that the proposed metric is generally better than representative quality metrics and is more consistent with subjective perception. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
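The brightness segmentation step mentioned in the abstract can be illustrated as follows: the binocular fusion map is split into low/middle/high brightness regions, here using quantile thresholds that are assumptions for illustration.

```python
import numpy as np

def brightness_regions(fusion_map, low_q=0.3, high_q=0.7):
    """Split the binocular fusion map into low/middle/high brightness regions
    using quantile thresholds (the quantiles are assumed values)."""
    lo, hi = np.quantile(fusion_map, [low_q, high_q])
    return {
        "low": fusion_map < lo,
        "middle": (fusion_map >= lo) & (fusion_map <= hi),
        "high": fusion_map > hi,
    }
```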

23 pages, 1560 KiB  
Article
FDMLNet: A Frequency-Division and Multiscale Learning Network for Enhancing Low-Light Image
by Haoxiang Lu, Junming Gong, Zhenbing Liu, Rushi Lan and Xipeng Pan
Sensors 2022, 22(21), 8244; https://doi.org/10.3390/s22218244 - 27 Oct 2022
Cited by 2 | Viewed by 1407
Abstract
Low-illumination images exhibit low brightness, blurry details, and color casts, which give an unnatural visual experience and further have a negative effect on other visual applications. Data-driven approaches show tremendous potential for lighting up image brightness while preserving visual naturalness. However, these methods may introduce artifacts such as holes, noise enlargement, over-/under-enhancement, and color deviation. To mitigate these challenging issues, this paper presents a frequency-division and multiscale learning network named FDMLNet, comprising two subnets, DetNet and StruNet. This design first applies a guided filter to separate the high and low frequencies of authentic images; then DetNet and StruNet are, respectively, developed to process them, to fully explore their information at different frequencies. In StruNet, a feasible feature extraction module (FFEM), composed of a multiscale learning block (MSL) and a dual-branch channel attention mechanism (DCAM), is introduced to promote its multiscale representation ability. In addition, three FFEMs are connected with a new dense connectivity to utilize multilevel features. Extensive quantitative and qualitative experiments on public benchmarks demonstrate that our FDMLNet outperforms state-of-the-art approaches, benefiting from its stronger multiscale feature expression and extraction ability. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
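The frequency-division step can be illustrated by the sketch below. The paper uses a guided filter to separate structure from detail; here a Gaussian low-pass stands in for the edge-preserving filter so the low/high split is easy to see.

```python
from scipy.ndimage import gaussian_filter

def frequency_split(image, sigma=3.0):
    """Separate an image into low-frequency structure and high-frequency
    detail. A Gaussian low-pass stands in for the paper's guided filter."""
    low = gaussian_filter(image, sigma)   # structure component
    high = image - low                    # detail residual
    return low, high
```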

24 pages, 7167 KiB  
Article
Dynamic Heterogeneous User Generated Contents-Driven Relation Assessment via Graph Representation Learning
by Ru Huang, Zijian Chen, Jianhua He and Xiaoli Chu
Sensors 2022, 22(4), 1402; https://doi.org/10.3390/s22041402 - 11 Feb 2022
Cited by 3 | Viewed by 1887
Abstract
Cross-domain decision-making systems face a huge challenge from rapidly emerging user-generated data of uneven quality, which places a heavy responsibility on online platforms. Current content analysis methods primarily concentrate on non-textual contents, such as the images and videos themselves, while ignoring the interrelationships among the contents of each user post. In this paper, we propose a novel framework named community-aware dynamic heterogeneous graph embedding (CDHNE) for relationship assessment, capable of mining heterogeneous information, latent community structure, and dynamic characteristics from user-generated contents (UGC), which aims to solve complex non-Euclidean structured problems. Specifically, we introduce a Markov-chain-based metapath to extract heterogeneous contents and semantics in UGC. An edge-centric attention mechanism is elaborated for localized feature aggregation. Thereafter, we obtain the node representations from a micro perspective and apply them to the discovery of global structure by a clustering technique. In order to uncover the temporal evolutionary patterns, we devise an encoder–decoder structure, containing multiple recurrent memory units, which helps to capture the dynamics for relation assessment efficiently and effectively. Extensive experiments on four real-world datasets are conducted in this work, demonstrating that CDHNE outperforms other baselines due to its comprehensive node representation, while also exhibiting superiority in relation assessment. The proposed model is presented as a method of breaking down the barriers between traditional UGC analysis and abstract network analysis. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
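To clarify the metapath-based sampling mentioned in the abstract, the sketch below performs a metapath-guided random walk over a heterogeneous graph; the data layout and node-type cycling are illustrative assumptions, not the paper's exact Markov-chain sampler.

```python
import random

def metapath_random_walk(adjacency, start, metapath, walk_length):
    """Metapath-guided random walk over a heterogeneous graph.
    adjacency[node_type][node] lists neighbors of that type; the layout and
    node-type cycling are illustrative assumptions."""
    walk, node = [start], start
    for step in range(walk_length):
        next_type = metapath[(step + 1) % len(metapath)]
        neighbors = adjacency.get(next_type, {}).get(node, [])
        if not neighbors:
            break
        node = random.choice(neighbors)
        walk.append(node)
    return walk
```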
