
Information Theory and Graph Signal Processing

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (17 July 2020) | Viewed by 13430

Special Issue Editors


Prof. Luis Vergara
Guest Editor
Institute of Telecommunications and Multimedia Applications, Universitat Politècnica de València, 46022 València, Spain
Interests: statistical signal processing; pattern recognition; machine learning; graph signal processing

Dr. Addisson Salazar
Guest Editor
Institute of Telecommunications and Multimedia Applications, Universitat Politècnica de València, 46022 València, Spain
Interests: pattern recognition; signal processing on graphs; dynamic modeling; decision fusion; machine learning

Special Issue Information

Dear Colleagues,

Graph signal processing (GSP) is an emerging area of growing interest. In essence, the concept of a signal defined on a uniform time or space grid is extended to more general grids and domains. This opens up new possibilities for the signal processing community by establishing a bridge between signal processing and data processing. Consequently, much current effort is devoted to defining concepts, perspectives, and applications that demonstrate the merits of GSP relative to other areas of data processing. For example, the classical concept of filtering has been extended to GSP through appropriate definitions of shift operators and graph eigenfunctions. Other basic concepts, such as stationarity, have been extended in several different ways, although this remains an open theoretical question.

All of these definitions depend on the graph connectivity matrix under consideration (Laplacian, adjacency, etc.); hence, another key aspect of GSP is the definition of the graph structure and its connections. In many applications, this is done using context-related information that helps to define pairwise connections. In a general data processing setting, however, the connectivity must be learned from training data. Several methods for learning graph connectivity under regularization constraints have been proposed. In most cases, Gaussianity is assumed for the underlying multidimensional distributions, and partial or total correlations are used to quantify the pairwise connections.
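As a concrete illustration of the filtering concept mentioned above, the following sketch (in Python with NumPy; the graph, signal, and cut-off are hypothetical choices of ours, not taken from any specific paper) builds the combinatorial Laplacian of a small path graph, uses its eigenvectors as a graph Fourier basis, and applies an ideal low-pass spectral filter:

```python
import numpy as np

# Hypothetical 4-node path graph: adjacency matrix and combinatorial Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: eigenvectors of L, ordered by eigenvalue ("frequency").
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, -2.0, 3.0, -1.0])             # a signal on the 4 nodes
x_hat = U.T @ x                                  # graph Fourier transform
h = (eigvals <= eigvals.mean()).astype(float)    # ideal low-pass spectral filter
y = U @ (h * x_hat)                              # filtered signal, vertex domain
```

Swapping the Laplacian for the adjacency matrix changes the notion of graph frequency, which is exactly the dependence on the connectivity matrix noted above.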

The main goal of this Special Issue is to contribute to progress in GSP by incorporating concepts emanating from information theory. In particular, new developments may include, but are not limited to the following:

  • Interpretation of existing concepts and methods of GSP from an information theory perspective.
  • New definitions of stationarity, localization, and uncertainty in GSP.
  • Connectivity learning: non-Gaussian models, pairwise connections based on information theory concepts.
  • New applications of GSP.

Prof. Luis Vergara
Dr. Addisson Salazar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph signal processing
  • information theory
  • graph statistical models
  • graph learning

Published Papers (4 papers)


Research

19 pages, 2411 KiB  
Article
Semi-Supervised Bidirectional Long Short-Term Memory and Conditional Random Fields Model for Named-Entity Recognition Using Embeddings from Language Models Representations
by Min Zhang, Guohua Geng and Jing Chen
Entropy 2020, 22(2), 252; https://doi.org/10.3390/e22020252 - 22 Feb 2020
Cited by 18 | Viewed by 4454
Abstract
Increasingly popular online museums have significantly changed the way people acquire cultural knowledge. These online museums have been generating abundant amounts of cultural relics data. In recent years, researchers have used deep learning models, which can automatically extract complex features and have rich representation capabilities, to implement named-entity recognition (NER). However, the lack of labeled data in the field of cultural relics makes it difficult for deep learning models that rely on labeled data to achieve excellent performance. To address this problem, this paper proposes a semi-supervised deep learning model named SCRNER (Semi-supervised model for Cultural Relics’ Named Entity Recognition) that utilizes a bidirectional long short-term memory (BiLSTM) and conditional random fields (CRF) model trained on scarce labeled data and abundant unlabeled data to attain effective performance. To satisfy the semi-supervised sample selection, we propose a repeat-labeled (relabeled) strategy that selects samples of high confidence to enlarge the training set iteratively. In addition, we use embeddings from language models (ELMo) representations to dynamically acquire word representations as the input of the model, addressing the problems of blurred entity boundaries and the characteristics of Chinese texts in the field of cultural relics. Experimental results demonstrate that our proposed model, trained on limited labeled data, achieves effective performance on the task of named-entity recognition of cultural relics. Full article
(This article belongs to the Special Issue Information Theory and Graph Signal Processing)
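The relabeling strategy described in the abstract — iteratively promoting high-confidence predictions on unlabeled data into the training set — can be sketched generically. The toy model below is a hypothetical one-dimensional stand-in for the paper's BiLSTM-CRF tagger; the threshold classifier, confidence cut-offs, and data are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a tagger: a threshold classifier whose
# "confidence" is a logistic function of the distance to the threshold.
def fit(X, y):
    return (X[y == 0].mean() + X[y == 1].mean()) / 2.0  # midpoint of class means

def predict_proba(theta, X):
    return 1.0 / (1.0 + np.exp(-(X - theta)))           # p(label = 1 | x)

# A few labeled points and a large unlabeled pool.
X_lab = np.array([-2.0, -1.5, 1.5, 2.0])
y_lab = np.array([0, 0, 1, 1])
X_unl = rng.normal(0.0, 2.0, size=200)

for _ in range(3):                                      # relabel iterations
    theta = fit(X_lab, y_lab)
    p = predict_proba(theta, X_unl)
    confident = (p > 0.9) | (p < 0.1)                   # high-confidence only
    X_lab = np.concatenate([X_lab, X_unl[confident]])   # enlarge training set
    y_lab = np.concatenate([y_lab, (p[confident] > 0.5).astype(int)])
    X_unl = X_unl[~confident]                           # shrink unlabeled pool
```

The confidence gate is the essential design choice: only predictions the current model is nearly certain about become pseudo-labels, limiting error reinforcement across iterations.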

15 pages, 305 KiB  
Article
Nonasymptotic Upper Bounds on Binary Single Deletion Codes via Mixed Integer Linear Programming
by Albert No
Entropy 2019, 21(12), 1202; https://doi.org/10.3390/e21121202 - 06 Dec 2019
Cited by 3 | Viewed by 2162
Abstract
The size of the largest binary single deletion code has been unknown for more than 50 years. It is known that the Varshamov–Tenengolts (VT) code is an optimum single deletion code for block length n ≤ 10; however, only a few upper bounds on the size of single deletion codes have been proposed for larger n. We provide improved upper bounds using a Mixed Integer Linear Programming (MILP) relaxation technique. In particular, we show that the size of a single deletion code is at most 173 when the block length n is 11. In the second half of the paper, we propose a conjecture that is equivalent to the long-standing conjecture that “VT codes are optimum for all n”. This equivalent formulation of the conjecture contains small sub-problems that can be numerically verified. We provide numerical results that support the conjecture. Full article
(This article belongs to the Special Issue Information Theory and Graph Signal Processing)
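The VT construction referred to in the abstract is simple enough to check directly: VT_a(n) consists of the binary words x_1…x_n with Σ i·x_i ≡ a (mod n+1). The sketch below (standard-library Python; function names are ours) enumerates the code for a tiny block length and verifies the single-deletion property:

```python
from itertools import product

def vt_code(n, a=0):
    """Varshamov-Tenengolts code VT_a(n): binary words x_1..x_n
    with sum(i * x_i) congruent to a modulo n + 1."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

def deletion_ball(x):
    """All words obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

# Single-deletion property: the deletion balls of distinct codewords are
# disjoint, so any single deletion can be uniquely corrected.
code = vt_code(4)
for i, c in enumerate(code):
    for d in code[i + 1:]:
        assert deletion_ball(c).isdisjoint(deletion_ball(d))
```

Brute-force enumeration like this is only feasible for small n, which is precisely why the paper turns to MILP relaxations for bounds at larger block lengths.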
16 pages, 985 KiB  
Article
Embedding Learning with Triple Trustiness on Noisy Knowledge Graph
by Yu Zhao, Huali Feng and Patrick Gallinari
Entropy 2019, 21(11), 1083; https://doi.org/10.3390/e21111083 - 06 Nov 2019
Cited by 12 | Viewed by 3514
Abstract
Embedding learning on knowledge graphs (KGs) aims to encode all entities and relationships into a continuous vector space, which provides an effective and flexible method to implement downstream knowledge-driven artificial intelligence (AI) and natural language processing (NLP) tasks. Since KG construction usually involves automatic mechanisms with little human supervision, it inevitably introduces considerable noise into KGs. However, most conventional KG embedding approaches inappropriately assume that all facts in existing KGs are completely correct and ignore noise issues, which can bring about potentially serious errors. To address this issue, in this paper we propose a novel approach to learning embeddings with triple trustiness on KGs, which takes possible noise into consideration. Specifically, we calculate the trustiness value of triples according to the rich and relatively reliable information from large amounts of entity type instances and entity descriptions in KGs. In addition, we present a cross-entropy based loss function for model optimization. In experiments, we evaluate our models on KG noise detection, KG completion, and classification. Through extensive experiments on three datasets, we demonstrate that our proposed model can learn better embeddings than all baselines on noisy KGs. Full article
(This article belongs to the Special Issue Information Theory and Graph Signal Processing)
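The core idea of down-weighting suspect triples can be sketched with a translation-based score. Note the hedges: the paper derives trustiness from entity types and descriptions and optimizes a cross-entropy loss, whereas the margin loss, dimensions, triples, and trustiness values below are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ent, n_rel, dim = 5, 2, 8                      # toy sizes, purely illustrative
E = rng.normal(size=(n_ent, dim))                # entity embeddings
R = rng.normal(size=(n_rel, dim))                # relation embeddings

def score(h, r, t):
    # TransE-style energy: small when the triple (h, r, t) fits the embeddings.
    return np.linalg.norm(E[h] + R[r] - E[t])

# (head, relation, tail, trustiness): trustiness in [0, 1] down-weights
# suspect triples so that likely noise contributes little to the loss.
triples = [(0, 0, 1, 0.95), (1, 1, 2, 0.80), (3, 0, 4, 0.10)]

def weighted_loss(triples, margin=1.0):
    loss = 0.0
    for h, r, t, conf in triples:
        t_neg = (t + 1) % n_ent                  # crude corrupted-tail negative
        loss += conf * max(0.0, margin + score(h, r, t) - score(h, r, t_neg))
    return loss
```

The design point survives the simplification: a triple with trustiness near zero barely moves the embeddings, so detected noise cannot dominate training.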

18 pages, 646 KiB  
Article
A New Surrogating Algorithm by the Complex Graph Fourier Transform (CGFT)
by Jordi Belda, Luis Vergara, Gonzalo Safont, Addisson Salazar and Zuzanna Parcheta
Entropy 2019, 21(8), 759; https://doi.org/10.3390/e21080759 - 04 Aug 2019
Cited by 21 | Viewed by 2681
Abstract
The essential step of surrogating algorithms is phase-randomizing the Fourier transform while preserving the original spectrum amplitude before computing the inverse Fourier transform. In this paper, we propose a new method based on the graph Fourier transform. In this manner, much more flexibility is gained to define properties of the original graph signal that are to be preserved in the surrogates. The complex case is considered to allow unconstrained phase randomization in the transformed domain; hence, we define a Hermitian Laplacian matrix that models the graph topology, whose eigenvectors form the basis of a complex graph Fourier transform. We show that the Hermitian Laplacian matrix may have negative eigenvalues. We also show that preserving the graph spectrum amplitude implies several invariances that can be controlled by the selected Hermitian Laplacian matrix. The interest of surrogating graph signals is illustrated in the context of scarce training instances for classifier training. Full article
(This article belongs to the Special Issue Information Theory and Graph Signal Processing)
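The surrogate pipeline described in the abstract — transform, preserve spectral amplitude, randomize phase, invert — can be sketched as follows. The Hermitian matrix here is random rather than derived from a real graph topology, so it is a purely illustrative stand-in for the paper's Hermitian Laplacian:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# Hypothetical Hermitian matrix standing in for a Hermitian graph Laplacian;
# being merely Hermitian, it may well have negative eigenvalues.
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (W + W.conj().T) / 2.0

eigvals, U = np.linalg.eigh(H)                   # U: complex graph Fourier basis

x = rng.normal(size=n)                           # original (real) graph signal
x_hat = U.conj().T @ x                           # complex graph Fourier transform

# Surrogate: keep the spectral amplitudes |x_hat|, draw fresh phases, invert.
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=n))
surrogate = U @ (np.abs(x_hat) * phases)
```

Because U is unitary, the surrogate has exactly the same graph-spectrum amplitude as x; each independent phase draw yields a new surrogate, which is how such schemes enlarge scarce training sets.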
