Selected Papers from Computer Graphics & Visual Computing (CGVC 2022)

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (15 January 2023)

Special Issue Editors

Department of Computer Science, The University of Manchester, Oxford Rd, Manchester M13 9PL, UK
Interests: signal and image processing; information and scientific visualization
Department of Information and Computing Sciences, Heidelberglaan 8, 3584 CS Utrecht, The Netherlands
Interests: visual perception; computer graphics; virtual reality
Research Center for Creative Arts, University for the Creative Arts (UCA), Farnham GU9 7DS, UK
Interests: creative technologies (VR, AR, graphics, design, animation); computing (visualization, data mining, high-performance cloud computing)

Special Issue Information

Dear Colleagues,

Computer Graphics and Visual Computing (CGVC) 2022, which took place at Leeds Trinity University, UK, on 15–16 September 2022, was the 40th annual gathering on computer graphics, visualisation, and visual computing organized by the Eurographics UK Chapter. For more information about the conference, please visit https://cgvc.org.uk/CGVC2022/.

Authors of selected papers presented at the conference are invited to submit extended versions to this Special Issue of the journal Computers after the conference. Submitted papers should be extended to the length of regular research or review articles, with at least 50% new results. All submitted papers will undergo the standard peer-review procedure. Accepted papers will be published open access in Computers and collected on this Special Issue website. There is no page limit.

Please prepare and format your paper according to the Instructions for Authors. Use the LaTeX or Microsoft Word template file of the journal (both are available from the Instructions for Authors page). Manuscripts should be submitted online via the susy.mdpi.com editorial system.

Dr. Martin J. Turner
Dr. Peter Vangorp
Prof. Dr. Edmond Prakash
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

15 pages, 5196 KiB  
Article
Subdivision Shading for Catmull-Clark and Loop Subdivision Surfaces with Semi-Sharp Creases
by Jun Zhou, Jan Boonstra and Jiří Kosinka
Computers 2023, 12(4), 85; https://doi.org/10.3390/computers12040085 - 21 Apr 2023
Abstract
Coarse meshes can be recursively subdivided into denser and denser meshes by dividing their faces into several smaller faces and repositioning the vertices according to carefully designed subdivision rules. This process leads to smooth surfaces, such as in the case of Catmull-Clark or Loop subdivision, but often suffers from shading artifacts near extraordinary points due to the lower quality of the normal field there, typically corresponding to only tangent-plane (and not higher) continuity at these points. The idea of subdivision shading is to apply the same subdivision rules that are used to subdivide geometry to also subdivide the normals associated with mesh vertices. This leads to smoother normal fields, which can be used for shading purposes, and this in turn removes the shading artifacts. However, the original subdivision shading method does not support sharp and semi-sharp creases, which are desired ingredients in subdivision surface modelling. We present two approaches to extending subdivision shading to work also on models with (semi-)sharp creases, and demonstrate this in the cases of Catmull-Clark as well as Loop subdivision.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2022))
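The core idea of subdivision shading, as summarized in the abstract, can be sketched as follows. This is a hypothetical illustration, not the paper's code: the same subdivision stencil is applied to vertex normals as to vertex positions, with the normals renormalized afterwards. For brevity, a closed 2D polyline refined by Chaikin's corner-cutting scheme stands in for Catmull-Clark or Loop surface subdivision.

```python
import numpy as np

def chaikin(values, steps=1):
    """Chaikin corner cutting on a closed polygon: each edge (p, q) is
    replaced by the two points 3/4*p + 1/4*q and 1/4*p + 3/4*q."""
    v = np.asarray(values, dtype=float)
    for _ in range(steps):
        a = 0.75 * v + 0.25 * np.roll(v, -1, axis=0)
        b = 0.25 * v + 0.75 * np.roll(v, -1, axis=0)
        v = np.empty((2 * len(a), v.shape[1]))
        v[0::2], v[1::2] = a, b
    return v

def subdivision_shading(points, normals, steps=1):
    """Refine positions and normals with the SAME rules; the refined
    normals are renormalized so they can be used directly for shading."""
    p = chaikin(points, steps)
    n = chaikin(normals, steps)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return p, n
```

For example, subdividing a unit square with outward diagonal vertex normals twice yields 16 positions and 16 unit-length normals that vary smoothly around the refined polygon.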

16 pages, 32019 KiB  
Article
A Comparative Study of Safety Zone Visualisations for Virtual and Physical Robot Arms Using Augmented Reality
by Yunus Emre Cogurcu, James A. Douthwaite and Steve Maddock
Computers 2023, 12(4), 75; https://doi.org/10.3390/computers12040075 - 10 Apr 2023
Abstract
The use of robot arms in various industrial settings has changed the way tasks are completed. However, safety concerns for both humans and robots in these collaborative environments remain a critical challenge. Traditional approaches to visualising safety zones, including physical barriers and warning signs, may not always be effective in dynamic environments or where multiple robots and humans are working simultaneously. Mixed reality technologies offer dynamic and intuitive visualisations of safety zones in real time, with the potential to overcome these limitations. In this study, we compare the effectiveness of safety zone visualisations in virtual and real robot arm environments using the Microsoft HoloLens 2. We tested our system with a collaborative pick-and-place application that mimics a real manufacturing scenario in an industrial robot cell. We investigated the impact of safety zone shape, size, and appearance in this application. Visualisations that used virtual cage bars were found to be the most preferred safety zone configuration for a real robot arm. However, the results for this aspect were mixed for a virtual robot arm experiment. These results raise the question of whether or not safety visualisations can initially be tested in a virtual scenario and the results transferred to a real robot arm scenario, which has implications for the testing of trust and safety in human–robot collaboration environments.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2022))

13 pages, 16987 KiB  
Article
Depth-Aware Neural Style Transfer for Videos
by Eleftherios Ioannou and Steve Maddock
Computers 2023, 12(4), 69; https://doi.org/10.3390/computers12040069 - 27 Mar 2023
Abstract
Temporal consistency and content preservation are the prominent challenges in artistic video style transfer. To address these challenges, we present a technique that utilizes depth data and we demonstrate this on real-world videos from the web, as well as on a standard video dataset of three-dimensional computer-generated content. Our algorithm employs an image-transformation network combined with a depth encoder network for stylizing video sequences. For improved global structure preservation and temporal stability, the depth encoder network encodes ground-truth depth information which is fused into the stylization network. To further enforce temporal coherence, we employ ConvLSTM layers in the encoder, and a loss function based on calculated depth information for the output frames is also used. We show that our approach is capable of producing stylized videos with improved temporal consistency compared to state-of-the-art methods whilst also successfully transferring the artistic style of a target painting.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2022))
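Loss terms of the kind described in the abstract can be sketched as follows. This is a common formulation in video style transfer, not necessarily the paper's exact definitions (the paper uses ConvLSTM layers plus a depth-based loss); here plain numpy arrays stand in for network outputs and a depth estimator.

```python
import numpy as np

def depth_loss(pred_depth, gt_depth):
    """Penalize deviation of the stylized frame's estimated depth from the
    ground-truth depth of the content frame (global structure term)."""
    return float(np.mean((np.asarray(pred_depth) - np.asarray(gt_depth)) ** 2))

def temporal_loss(stylized_t, warped_prev, mask):
    """Penalize flicker: squared difference between the current stylized
    frame and the previous stylized frame warped by optical flow,
    restricted to the non-occluded pixels flagged in `mask`."""
    diff = (np.asarray(stylized_t) - np.asarray(warped_prev)) ** 2
    mask = np.asarray(mask)
    return float(np.sum(mask * diff) / max(np.sum(mask), 1e-8))
```

In training, terms like these would be weighted and added to the usual perceptual style and content losses; identical consecutive frames (after warping) incur zero temporal penalty.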
