Information Visualization Theory and Applications

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 3957

Special Issue Editors


Dr. Dong Hyun Jeong
Guest Editor
Department of Computer Science and Information Technology, University of the District of Columbia, Washington, DC 20759, USA
Interests: HCI; visual analytics; big data analytics; information visualization

Dr. Soo-Yeon Ji
Guest Editor
Department of Computer Science, Bowie State University, Bowie, MD 20715, USA
Interests: data science; medical informatics and visualization; time-series analysis

Dr. Bong-Keun Jeong
Guest Editor
Department of Management and Decision Sciences, Coastal Carolina University, Conway, SC 29528, USA
Interests: data analytics; visualization; management information systems; HCI

Special Issue Information

Dear Colleagues,

In data science, information visualization has become central to understanding complex scientific problems and data. Designing innovative visualization systems is therefore essential for addressing and solving a wide range of domain problems. This requires a theoretical understanding of visualization models and design principles, expressed through visual encoding, interaction, and/or analysis tasks, as well as attention to the implications of theories of perception, cognition, design, and aesthetics. Automated design guidelines and visualization recommendations are also needed to identify the scientific limits of understanding data through visualization.

This Special Issue seeks high-quality papers that highlight open visualization challenges, present solutions for understanding domain problems through visualization, and articulate theoretical visualization principles that advance visualization techniques and applications.

Dr. Dong Hyun Jeong
Dr. Soo-Yeon Ji
Dr. Bong-Keun Jeong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information visualization
  • science of user interactions in visualization
  • visualization theory
  • knowledge-assisted visualization
  • visualization technique
  • visualization in machine learning
  • visualization applications and design studies
  • evaluation and empirical research in visualization
  • visual data analysis and knowledge discovery
  • visual representation and interaction
  • visualization applications
  • visualization taxonomies and models
  • visualization algorithms and technologies
  • uncertainty visualization
  • visualization tools and systems for simulation and modeling

Published Papers (2 papers)


Research

22 pages, 9658 KiB  
Article
Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System
by Mouadh Guesmi, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh and Rawaa Alatrash
Information 2023, 14(7), 401; https://doi.org/10.3390/info14070401 - 14 Jul 2023
Cited by 2 | Viewed by 1343
Abstract
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
(This article belongs to the Special Issue Information Visualization Theory and Applications)

17 pages, 12932 KiB  
Article
On Isotropy of Multimodal Embeddings
by Kirill Tyshchuk, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev and Alexander Panchenko
Information 2023, 14(7), 392; https://doi.org/10.3390/info14070392 - 10 Jul 2023
Cited by 2 | Viewed by 1871
Abstract
Embeddings, i.e., vector representations of objects, such as texts, images, or graphs, play a key role in deep learning methodologies nowadays. Prior research has shown the importance of analyzing the isotropy of textual embeddings for transformer-based text encoders, such as the BERT model. Anisotropic word embeddings do not use the entire space, instead concentrating on a narrow cone in such a pretrained vector space, negatively affecting the performance of applications, such as textual semantic similarity. Transforming a vector space to optimize isotropy has been shown to be beneficial for improving performance in text processing tasks. This paper is the first comprehensive investigation of the distribution of multimodal embeddings using the example of OpenAI’s CLIP pretrained model. We aimed to deepen the understanding of the embedding space of multimodal embeddings, which has previously been unexplored in this respect, and study the impact on various end tasks. Our initial efforts were focused on measuring the alignment of image and text embedding distributions, with an emphasis on their isotropic properties. In addition, we evaluated several gradient-free approaches to enhance these properties, establishing their efficiency in improving the isotropy/alignment of the embeddings and, in certain cases, the zero-shot classification accuracy. Significantly, our analysis revealed that both CLIP and BERT models yielded embeddings situated within a cone immediately after initialization and preceding training. However, they were mostly isotropic in the local sense. We further extended our investigation to the structure of multilingual CLIP text embeddings, confirming that the observed characteristics were language-independent. By computing the few-shot classification accuracy and point-cloud metrics, we provide evidence of a strong correlation among multilingual embeddings. Embeddings transformation using the methods described in this article makes it easier to visualize embeddings. At the same time, multiple experiments that we conducted showed that, in regard to the transformed embeddings, the downstream tasks performance does not drop substantially (and sometimes is even improved). This means that one could obtain an easily visualizable embedding space, without substantially losing the quality of downstream tasks.
(This article belongs to the Special Issue Information Visualization Theory and Applications)
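To make the notion of (an)isotropy discussed in this abstract concrete, the following minimal Python sketch (not code from the paper; the CLIP-style embeddings are simulated) estimates anisotropy as the average pairwise cosine similarity of a set of vectors and shows how one simple gradient-free transform, mean-centering, moves an embedding cloud toward isotropy. Values near 1 indicate vectors concentrated in a narrow cone; values near 0 indicate a roughly isotropic cloud.

    import numpy as np

    def avg_pairwise_cosine(embeddings: np.ndarray) -> float:
        """Average cosine similarity over all distinct pairs of row vectors.

        Values close to 1 mean the vectors occupy a narrow cone (anisotropic);
        values near 0 mean the cloud is roughly isotropic.
        """
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        sims = normed @ normed.T
        n = sims.shape[0]
        # Exclude the diagonal of self-similarities (all equal to 1.0).
        return float((sims.sum() - n) / (n * (n - 1)))

    def mean_center(embeddings: np.ndarray) -> np.ndarray:
        """A simple gradient-free transform: subtract the mean embedding."""
        return embeddings - embeddings.mean(axis=0, keepdims=True)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in for CLIP text/image embeddings: a Gaussian cloud shifted away
        # from the origin, which makes it strongly anisotropic.
        emb = rng.normal(size=(1000, 512)) + 5.0
        print("raw embeddings:", avg_pairwise_cosine(emb))                # close to 1
        print("mean-centered :", avg_pairwise_cosine(mean_center(emb)))   # close to 0

In practice, one would replace the simulated array with real image or text embeddings produced by a pretrained encoder; the same score can then be compared before and after any candidate transformation.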
