

Representation Learning: Theory, Applications and Ethical Issues II

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 5182

Special Issue Editors

Dr. Fabio Aiolli
Department of Mathematics, University of Padova, via Trieste 63, 35121 Padova, Italy
Interests: kernel methods; preference learning; recommender systems; multiple kernel learning; interpretable machine learning; deep neural networks
Dr. Mirko Polato
Department of Computer Science, University of Torino, 10149 Torino, Italy
Interests: recommender systems; kernel methods; interpretable machine learning; security/privacy in machine learning; deep neural networks

Special Issue Information

Dear Colleagues,

The representation problem has always been at the core of machine learning. Finding a good data representation is the common denominator of many machine learning subtopics, such as feature selection, kernel learning, and deep learning. The recent rise of deep learning technologies has opened up new and fascinating possibilities for researchers in many fields. However, deep networks are often hard to interpret or explain. Hence, in addition to the effectiveness of a representation, many related problems need to be addressed, for example, interpretability, robustness, and fairness.

Information theory has shown great potential to address many of the above-mentioned issues, for example: (i) designing robust loss functions for neural network training (e.g., the Minimum Error Entropy criterion); (ii) explaining the generalization of DNNs through information-theoretic principles such as the Information Bottleneck; (iii) interpreting and explaining DNNs through information-theoretic techniques; (iv) providing an information-theoretic framework for designing fair predictors from data.
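
As a concrete illustration of point (i), the minimal sketch below estimates Rényi's quadratic entropy of the prediction errors with a Gaussian Parzen-window estimator, which is the quantity minimized by the Minimum Error Entropy criterion. The bandwidth `sigma` and the surrounding training loop are assumptions for illustration, not part of this call for papers.

```python
import torch

def mee_loss(predictions: torch.Tensor, targets: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Minimum Error Entropy (MEE) style loss: Renyi's quadratic entropy of the errors,
    estimated with a Gaussian Parzen window (bandwidth sigma is a free hyperparameter,
    and the kernel's normalizing constant is dropped since it does not affect the argmin)."""
    errors = (predictions - targets).view(-1)            # prediction errors e_i
    diff = errors.unsqueeze(0) - errors.unsqueeze(1)     # pairwise differences e_i - e_j
    kernel = torch.exp(-diff.pow(2) / (4.0 * sigma ** 2))
    information_potential = kernel.mean()                # V(e) = (1/N^2) sum_ij G(e_i - e_j)
    return -torch.log(information_potential + 1e-12)     # H2(e) = -log V(e); minimize this

# Hypothetical usage inside a training step:
# loss = mee_loss(model(x_batch), y_batch, sigma=0.5)
# loss.backward()
```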

The purpose of this Special Issue is to highlight the state of the art of representation learning from both a theoretical and a practical perspective, a topic that has received a great deal of attention from the research community. We particularly welcome work on information theory for deep learning and machine learning. This second volume continues the trend started with the first volume. Possible topics include, but are not limited to, the following:

  • Deep and shallow representation learning.
  • Generative and adversarial representation learning.
  • Robust representations for security.
  • Representation learning for fair and ethical learning.
  • Representation learning for interpretable machine learning.
  • Representation learning under privacy constraints, e.g., federated learning.
  • Representation learning in other domains, e.g., recommender systems, natural language processing, cybersecurity, process mining.
  • Information-theoretic principles for the generalization and robustness of deep neural networks.
  • Interpretation/explanation of deep neural networks with information-theoretic methods.
  • Information-theoretic methods in generative models, causal representation learning, and reinforcement learning.

The first volume of this Special Issue is available at:

https://www.mdpi.com/journal/entropy/special_issues/Representation_Learning

Dr. Fabio Aiolli
Dr. Mirko Polato
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • representation learning
  • deep learning
  • interpretability
  • explainability
  • fairness
  • security & privacy
  • information theory

Published Papers (4 papers)


Research

33 pages, 541 KiB  
Article
Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning
by Emanuele Marconato, Andrea Passerini and Stefano Teso
Entropy 2023, 25(12), 1574; https://doi.org/10.3390/e25121574 - 22 Nov 2023
Viewed by 1244
Abstract
Research on Explainable Artificial Intelligence has recently started exploring the idea of producing explanations that, rather than being expressed in terms of low-level features, are encoded in terms of interpretable concepts learned from data. How to reliably acquire such concepts is, however, still fundamentally unclear. An agreed-upon notion of concept interpretability is missing, with the result that concepts used by both post hoc explainers and concept-based neural networks are acquired through a variety of mutually incompatible strategies. Critically, most of these neglect the human side of the problem: a representation is understandable only insofar as it can be understood by the human at the receiving end. The key challenge in human-interpretable representation learning (HRL) is how to model and operationalize this human element. In this work, we propose a mathematical framework for acquiring interpretable representations suitable for both post hoc explainers and concept-based neural networks. Our formalization of HRL builds on recent advances in causal representation learning and explicitly models a human stakeholder as an external observer. This allows us to derive a principled notion of alignment between the machine’s representation and the vocabulary of concepts understood by the human. In doing so, we link alignment and interpretability through a simple and intuitive name transfer game, and clarify the relationship between alignment and a well-known property of representations, namely disentanglement. We also show that alignment is linked to the issue of undesirable correlations among concepts, also known as concept leakage, and to content-style separation, all through a general information-theoretic reformulation of these properties. Our conceptualization aims to bridge the gap between the human and algorithmic sides of interpretability and establish a stepping stone for new research on human-interpretable representations.
(This article belongs to the Special Issue Representation Learning: Theory, Applications and Ethical Issues II)
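
The abstract above ties alignment, concept leakage, and content-style separation to information-theoretic quantities. Purely as an illustration of such a diagnostic (not the authors' framework), the hypothetical sketch below estimates the mutual information between each learned representation dimension and a ground-truth concept label to flag dimensions that "leak" a concept they are not meant to encode; the encoder, arrays, and leakage threshold are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def concept_leakage_scores(z: np.ndarray, concept_labels: np.ndarray) -> np.ndarray:
    """Estimate I(z_d ; c) for every representation dimension d against a discrete
    concept label c. High values on dimensions meant to be concept-free suggest leakage.
    (Illustrative diagnostic only; not the method proposed in the paper.)"""
    return mutual_info_classif(z, concept_labels, discrete_features=False)

# Hypothetical usage:
# z = encoder(images)                      # (n_samples, n_dims) learned representation
# scores = concept_leakage_scores(z, color_labels)
# leaky_dims = np.where(scores > 0.1)[0]   # 0.1 nats is an arbitrary threshold
```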

18 pages, 15469 KiB  
Article
Representation Learning Method for Circular Seal Based on Modified MLP-Mixer
by Yuan Cao, You Zhou, Zhiwen Zhang and Enyi Yao
Entropy 2023, 25(11), 1521; https://doi.org/10.3390/e25111521 - 06 Nov 2023
Viewed by 834
Abstract
This study proposes Stamp-MLP, an enhanced seal impression representation learning technique based on MLP-Mixer. Instead of using the patch linear mapping preprocessing method, this technique uses circular seal remapping, which preserves the seals’ underlying pixel-level information. In the proposed Stamp-MLP, average pooling is replaced by an attention-based global pooling to extract information more comprehensively. Our proposed method involved three classification tasks: categorizing the seal surface, identifying the product type, and distinguishing individual seals. The three tasks shared an identical dataset comprising 81 seals, encompassing 16 distinct seal surfaces, with each surface featuring six diverse product types. The experimental results showed that, in comparison to MLP-Mixer, VGG16, and ResNet50, the proposed Stamp-MLP achieved the highest classification accuracy (89.61%) in the seal surface classification task with fewer training samples. Meanwhile, Stamp-MLP outperformed the others with accuracy rates of 90.68% and 91.96% in the product type and seal impression classification tasks, respectively. Moreover, Stamp-MLP had the fewest model parameters (2.67 M).
(This article belongs to the Special Issue Representation Learning: Theory, Applications and Ethical Issues II)
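
The abstract describes replacing MLP-Mixer's average pooling with an attention-weighted global pooling over token features. A minimal PyTorch sketch of such a pooling head is shown below; it illustrates the general idea only, as the exact Stamp-MLP formulation is not given in the abstract, and the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttentionGlobalPooling(nn.Module):
    """Pool a sequence of token features (B, N, D) into a single vector (B, D)
    using learned attention weights instead of a plain mean over tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one scalar score per token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.score(tokens), dim=1)   # (B, N, 1), sums to 1 over tokens
        return (weights * tokens).sum(dim=1)                  # weighted sum replaces the mean

# Hypothetical usage at the end of a Mixer-style backbone:
# pooled = AttentionGlobalPooling(dim=256)(mixer_tokens)   # mixer_tokens: (B, N, 256)
# logits = nn.Linear(256, num_classes)(pooled)
```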

21 pages, 1721 KiB  
Article
Adversarial Defense Method Based on Latent Representation Guidance for Remote Sensing Image Scene Classification
by Qingan Da, Guoyin Zhang, Wenshan Wang, Yingnan Zhao, Dan Lu, Sizhao Li and Dapeng Lang
Entropy 2023, 25(9), 1306; https://doi.org/10.3390/e25091306 - 07 Sep 2023
Viewed by 798
Abstract
Deep neural networks have made great achievements in remote sensing image analyses; however, previous studies have shown that deep neural networks exhibit incredible vulnerability to adversarial examples, which raises concerns about regional safety and production safety. In this paper, we propose an adversarial denoising method based on latent representation guidance for remote sensing image scene classification. In the training phase, we train a variational autoencoder to reconstruct the data using only the clean dataset. At test time, we first calculate the normalized mutual information between the reconstructed image using the variational autoencoder and the reference image as denoised by a discrete cosine transform. The reconstructed image is selectively utilized according to the result of the image quality assessment. Then, the latent representation of the current image is iteratively updated according to the reconstruction loss so as to gradually eliminate the influence of adversarial noise. Because the training of the denoiser only involves clean data, the proposed method is more robust against unknown adversarial noise. Experimental results on the scene classification dataset show the effectiveness of the proposed method. Furthermore, the method achieves better robust accuracy compared with state-of-the-art adversarial defense methods in image classification tasks.
(This article belongs to the Special Issue Representation Learning: Theory, Applications and Ethical Issues II)
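
The defense described above reconstructs a test image with a VAE trained only on clean data and then iteratively refines the latent code to reduce the reconstruction loss, gradually removing adversarial noise. A hedged PyTorch sketch of that test-time refinement loop follows; the `vae` interface (encode/decode), step size, and iteration count are assumptions, and the NMI-based image-quality gate from the abstract is omitted for brevity.

```python
import torch

def purify(x_adv: torch.Tensor, vae, steps: int = 50, lr: float = 0.05) -> torch.Tensor:
    """Test-time latent refinement: start from the VAE's latent code for the (possibly
    adversarial) input, update it to reduce the reconstruction error, and return the
    decoded, purified image. Assumes vae.encode(x) -> (mu, logvar) and vae.decode(z) -> x_hat."""
    with torch.no_grad():
        mu, _ = vae.encode(x_adv)
    z = mu.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        x_hat = vae.decode(z)
        loss = torch.nn.functional.mse_loss(x_hat, x_adv)   # reconstruction guidance
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return vae.decode(z)

# Hypothetical usage before classification:
# x_clean = purify(x_adversarial, pretrained_vae)
# prediction = classifier(x_clean)
```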

17 pages, 760 KiB  
Article
A Lightweight Method for Defense Graph Neural Networks Adversarial Attacks
by Zhi Qiao, Zhenqiang Wu, Jiawang Chen, Ping’an Ren and Zhiliang Yu
Entropy 2023, 25(1), 39; https://doi.org/10.3390/e25010039 - 25 Dec 2022
Cited by 1 | Viewed by 1496
Abstract
Graph neural networks have been widely used in various fields in recent years. However, the appearance of adversarial attacks makes the reliability of existing neural networks challenging in application. Premeditated attackers can make very small perturbations to the data to fool the neural network into producing wrong results. These incorrect results can lead to disastrous consequences. Therefore, how to defend against adversarial attacks has become an urgent research topic. Many researchers have tried to improve model robustness directly or by using adversarial training to reduce the negative impact of an adversarial attack. However, the majority of the defense strategies currently in use are inextricably linked to the model-training process, which incurs significant running-time and memory-space costs. We offer a lightweight and easy-to-implement approach that is based on graph transformation. Extensive experiments demonstrate that our approach has a defense effect similar to that of existing methods (recovering the accuracy rate to nearly 80%) and only uses 10% of their run time when defending against adversarial attacks on GCNs (graph convolutional neural networks).
(This article belongs to the Special Issue Representation Learning: Theory, Applications and Ethical Issues II)
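
The abstract does not detail the graph transformation itself. As an illustration of the general idea of a training-free, preprocessing-style defense for GCNs, the sketch below prunes edges whose endpoint feature vectors have low Jaccard similarity, a known lightweight heuristic that is not necessarily the transformation used in this paper; the threshold and array layout are assumptions.

```python
import numpy as np

def prune_dissimilar_edges(adj: np.ndarray, features: np.ndarray, threshold: float = 0.01) -> np.ndarray:
    """Remove edges between nodes whose binary feature vectors have Jaccard similarity
    below `threshold`. Applied as a preprocessing step before training or inference of
    a GCN; illustrative only."""
    cleaned = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))       # visit each undirected edge once
    for i, j in zip(rows, cols):
        fi, fj = features[i].astype(bool), features[j].astype(bool)
        union = np.logical_or(fi, fj).sum()
        jaccard = np.logical_and(fi, fj).sum() / union if union > 0 else 0.0
        if jaccard < threshold:
            cleaned[i, j] = cleaned[j, i] = 0         # drop the suspicious edge
    return cleaned

# Hypothetical usage:
# adj_clean = prune_dissimilar_edges(adjacency_matrix, node_features)
# logits = gcn_model(node_features, adj_clean)
```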
