Selected Papers from Computer Graphics & Visual Computing (CGVC 2023)

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (15 January 2024) | Viewed by 4845

Special Issue Editors


Dr. David Hunter
Guest Editor
Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3FL, UK
Interests: visual perception

Dr. Peter Vangorp
Guest Editor
Department of Information and Computing Sciences, Utrecht University, Heidelberglaan 8, 3584 CS Utrecht, The Netherlands
Interests: visual perception; computer graphics; virtual reality

Dr. Helen Miles
Guest Editor
Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3FL, UK
Interests: computer graphics; virtual environments; visual perception and data visualisation

Special Issue Information

Dear Colleagues,

Computer Graphics and Visual Computing (CGVC) 2023, held at Aberystwyth University, Wales, UK, on 14-15 September 2023, was the 41st annual gathering on computer graphics, visualisation, and visual computing organized by the Eurographics UK Chapter. For more information about the conference, please visit https://cgvc.org.uk/CGVC2023/.

Selected papers presented at the conference are invited for submission to this Special Issue of the journal Computers as extended versions after the conference. Submissions should be extended to the length of regular research or review articles, with at least 50% new results. All submitted papers will undergo the journal's standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected on the Special Issue website. There is no page limit.

Please prepare and format your paper according to the Instructions for Authors, using the journal's LaTeX or Microsoft Word template (both available from the Instructions for Authors page). Manuscripts should be submitted online via the susy.mdpi.com editorial system.

Dr. David Hunter
Dr. Peter Vangorp
Dr. Helen Miles
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)

Research

17 pages, 17840 KiB  
Article
User-Centered Pipeline for Synthetic Augmentation of Anomaly Detection Datasets
by Alexander Rosbak-Mortensen, Marco Jansen, Morten Muhlig, Mikkel Bjørndahl Kristensen Tøt and Ivan Nikolov
Computers 2024, 13(3), 70; https://doi.org/10.3390/computers13030070 - 08 Mar 2024
Viewed by 830
Abstract
Automatic anomaly detection plays a critical role in surveillance systems but requires datasets comprising large amounts of annotated data to train and evaluate models. Gathering and annotating these data is a labor-intensive task that can become costly. One way to circumvent this is to use synthetic data to augment anomalies directly into existing datasets: far more diverse scenarios can be created this way, and they come directly with annotations. However, this poses new issues for the end users, computer-vision engineers and researchers, who are not readily familiar with 3D modeling, game development, or computer graphics methodologies and must rely on external specialists to use or tweak such pipelines. In this paper, we extend our previous work on an application that synthesizes dataset variations using 3D models and augments anomalies onto real backgrounds using the Unity Engine. We developed a high-usability user interface for our application through a series of RITE experiments and evaluated the final product with the help of deep-learning specialists, who provided positive feedback regarding its usability, accessibility, and user experience. Finally, we tested whether the proposed solution can be used in the context of traffic surveillance by augmenting the training data from the challenging Street Scene dataset. We found that by using our synthetic data, we could achieve higher detection accuracy. We also propose the next steps to expand the proposed solution for better usability and render accuracy through the use of segmentation pre-processing.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
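The paper's interactive Unity pipeline is not reproduced here, but the core compositing idea can be sketched. The snippet below is a minimal illustration under stated assumptions: an RGBA render of a 3D anomaly with a transparent background, and hypothetical file names and positions. It pastes the render onto a real frame and derives a bounding-box annotation from the alpha mask.

```python
# Minimal sketch: composite a rendered anomaly (RGBA, transparent
# background) onto a real surveillance frame and derive a bounding-box
# annotation from the alpha mask. File names and positions are
# illustrative assumptions, not part of the paper's pipeline.
import numpy as np
from PIL import Image

background = Image.open("street_scene_frame.png").convert("RGBA")
anomaly = Image.open("rendered_anomaly.png").convert("RGBA")

# Paste the anomaly at a chosen position; its alpha channel is the mask.
position = (420, 310)
background.alpha_composite(anomaly, dest=position)

# The annotation comes for free: a tight bounding box around the
# non-transparent pixels of the rendered anomaly.
alpha = np.asarray(anomaly)[:, :, 3]
ys, xs = np.nonzero(alpha)
bbox = (position[0] + int(xs.min()), position[1] + int(ys.min()),
        position[0] + int(xs.max()), position[1] + int(ys.max()))

background.convert("RGB").save("augmented_frame.png")
print("bounding box (x0, y0, x1, y1):", bbox)
```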

19 pages, 44295 KiB  
Article
A U-Net Architecture for Inpainting Lightstage Normal Maps
by Hancheng Zuo and Bernard Tiddeman
Computers 2024, 13(2), 56; https://doi.org/10.3390/computers13020056 - 19 Feb 2024
Viewed by 1014
Abstract
In this paper, we investigate the inpainting of normal maps captured from a lightstage. Parts of the face can be occluded during performance capture by the movement of, e.g., arms, hair, or props. Inpainting is the process of interpolating missing areas of an image with plausible data. We build on previous works on general image inpainting that use generative adversarial networks (GANs), and extend our previous work on normal map inpainting to use a U-Net structured generator network. Our method takes into account the nature of the normal map data, which requires modification of the loss function: we use a cosine loss rather than the more common mean squared error loss when training the generator. Due to the small amount of training data available, even when using synthetic datasets, we require significant augmentation, which also needs to take account of the particular nature of the input data: image flips and in-plane rotations must properly flip and rotate the normal vectors themselves. During training, we monitor key performance metrics, including the average loss, structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) of the generator, alongside the average loss and accuracy of the discriminator. Our analysis reveals that the proposed model generates high-quality, realistic inpainted normal maps, demonstrating the potential for application to performance capture. The results of this investigation provide a baseline on which future researchers can build with more advanced networks, and a point of comparison for inpainting the source images used to generate the normal maps.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
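Two of the normal-map-specific details in the abstract lend themselves to a short sketch: the cosine loss used in place of mean squared error, and a horizontal flip that also negates the x component of every normal. The PyTorch snippet below is a minimal illustration under assumed conventions (tensors shaped (N, 3, H, W), normal components encoded in [-1, 1]); the authors' actual training code may differ.

```python
# Minimal sketch (PyTorch) of a cosine loss on normal maps and a
# normal-aware horizontal flip. Tensor layout (N, 3, H, W) and the
# [-1, 1] encoding of normals are assumptions for illustration.
import torch
import torch.nn.functional as F

def cosine_loss(pred, target):
    """1 minus the mean per-pixel cosine similarity of normal vectors."""
    pred = F.normalize(pred, dim=1)    # force unit-length normals
    target = F.normalize(target, dim=1)
    cos = (pred * target).sum(dim=1)   # per-pixel dot product
    return (1.0 - cos).mean()

def hflip_normal_map(nmap):
    """Horizontal flip for augmentation: mirroring the image left-right
    also requires negating the x component of every normal vector."""
    flipped = torch.flip(nmap, dims=[3])
    flipped[:, 0] = -flipped[:, 0]
    return flipped
```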

16 pages, 1254 KiB  
Article
Investigating Color-Blind User-Interface Accessibility via Simulated Interfaces
by Amaan Jamil and Gyorgy Denes
Computers 2024, 13(2), 53; https://doi.org/10.3390/computers13020053 - 17 Feb 2024
Viewed by 1359
Abstract
Over 300 million people live with color vision deficiency (CVD), a decreased ability to distinguish between colors that limits their ability to interact with websites and software packages. User-interface designers have taken various approaches to tackle the issue, with most offering a high-contrast mode. The Web Content Accessibility Guidelines (WCAG) outline some best practices for maintaining accessibility that have been adopted and recommended by several governments; however, it is currently uncertain how these impact perceived user functionality and whether they result in a reduced aesthetic look. In the absence of subjective data, we aim to investigate how a CVD observer might rate the functionality and aesthetics of existing UIs. A comparative study of CVD vs. non-CVD populations is inherently hard to design; we therefore build on the successful field of physiologically based CVD models and propose a novel simulation-based experimental protocol in which non-CVD observers rate the relative aesthetics and functionality of screenshots of 20 popular websites as seen in full color vs. with simulated CVD. Our results show that relative aesthetics and functionality correlate positively, and that an operating-system-wide high-contrast mode can reduce both. While our results are only valid in the context of simulated CVD screenshots, the approach has the benefit of being easily deployable and can help to spot a number of common pitfalls in production. Finally, we propose an AAA–A classification of the interfaces we analyzed.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
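The simulation step at the heart of the protocol can be sketched with a standard physiologically based model. The snippet below applies what is, to our understanding, the severity-1.0 protanopia matrix reported by Machado et al. (2009) to a screenshot in linear RGB; the matrix values, the simple 2.2 gamma handling, and the file names are assumptions of this sketch, and the paper's exact model and parameters may differ.

```python
# Minimal sketch of the simulation step: apply a physiologically based
# dichromacy matrix to a website screenshot. Matrix values are the
# severity-1.0 protanopia matrix attributed to Machado et al. (2009);
# the 2.2 gamma approximation is an assumption of this sketch.
import numpy as np
from PIL import Image

PROTANOPIA = np.array([[ 0.152286,  1.052583, -0.204868],
                       [ 0.114503,  0.786281,  0.099216],
                       [-0.003882, -0.048116,  1.051998]])

def simulate_protanopia(path):
    srgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    linear = srgb ** 2.2                                  # approximate linearization
    sim = np.clip(linear @ PROTANOPIA.T, 0.0, 1.0) ** (1.0 / 2.2)
    return Image.fromarray((sim * 255).round().astype(np.uint8))

simulate_protanopia("website_screenshot.png").save("simulated_cvd.png")
```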

15 pages, 14826 KiB  
Article
Achieving Better Energy Efficiency in Volume Analysis and Direct Volume Rendering Descriptor Computation
by Jacob D. Hauenstein and Timothy S. Newman
Computers 2024, 13(2), 51; https://doi.org/10.3390/computers13020051 - 13 Feb 2024
Viewed by 1040
Abstract
Approaches aimed at achieving improved energy efficiency in the determination of descriptors used in volumetric data analysis and in one common mode of scientific visualisation are described and evaluated in one x86-class setting. These approaches are evaluated against standard approaches for that computational setting. In all, six approaches for improved efficiency are considered: four computation-based and two memory-based. The descriptors are classic gradient and curvature descriptors. In addition to their use in volume analyses, they are used in classic ray-casting-based direct volume rendering (DVR), which is a particular application area of interest here. An ideal combination of the described approaches applied to gradient descriptor determination allowed the descriptors to be computed with only 80% of the energy of a standard approach in that setting; energy efficiency was improved by a factor of 1.2. For curvature descriptor determination, the ideal combination of described approaches achieved a factor-of-two improvement in energy efficiency.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
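As a point of reference for what a gradient descriptor computation involves, the snippet below computes central-difference gradients over a scalar volume in NumPy. It is a plain baseline sketch only; the computation- and memory-based efficiency techniques evaluated in the paper are not reproduced here.

```python
# Minimal baseline sketch of a gradient descriptor computation:
# central differences over a scalar volume, vectorized with NumPy.
# The paper's x86-specific efficiency techniques are not shown.
import numpy as np

def gradient_descriptors(volume, spacing=(1.0, 1.0, 1.0)):
    """Central-difference gradient at every voxel of a 3D volume
    (boundaries wrap here; a real implementation would clamp them)."""
    grad = np.empty(volume.shape + (3,), dtype=np.float32)
    for axis in range(3):
        fwd = np.roll(volume, -1, axis=axis)
        bwd = np.roll(volume, 1, axis=axis)
        grad[..., axis] = (fwd - bwd) / (2.0 * spacing[axis])
    return grad

vol = np.random.default_rng(0).random((64, 64, 64), dtype=np.float32)
print(gradient_descriptors(vol).shape)  # (64, 64, 64, 3)
```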
