
Show-and-Tell: An Interface for Delivering Rich Feedback upon Creative Media Artefacts

School of Computing, Faculty of Science, Agriculture & Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2024, 8(3), 23; https://doi.org/10.3390/mti8030023
Submission received: 19 February 2024 / Revised: 4 March 2024 / Accepted: 11 March 2024 / Published: 14 March 2024

Abstract
In this paper, we explore an approach to feedback which could allow those learning creative digital media practices in remote and asynchronous environments to receive rich, multi-modal, and interactive feedback upon their creative artefacts. We propose the show-and-tell feedback interface, which couples graphical user interface changes (the show) to text-based explanations (the tell). We describe the rationale behind the design and offer a tentative set of design criteria. We report the implementation of a prototype interface supporting either traditional text-only feedback or our proposed show-and-tell feedback, and its deployment into a real-world educational setting across four sessions. The prototype was used to provide formative feedback upon music students’ coursework, resulting in a total of 103 pieces of feedback. Thematic analysis was used to analyse the data obtained through interviews and focus groups with both educators and students (i.e., feedback givers and receivers). Recipients considered show-and-tell feedback to possess greater clarity and detail in comparison with the single-modality text-only feedback they are used to receiving. We also report interesting emergent issues around control and artistic vision, and we discuss how these issues could be mitigated in future iterations of the interface.

1. Introduction

The shift to online working and learning in response to the global pandemic has caused many of us to question the suitability of the tools used for online participation. This shift may have been particularly challenging for those working or learning in domains where hands-on, multi-modal demonstration plays an important part in knowledge exchange and skill acquisition. How to accommodate online more of the beneficial interactions that can occur in face-to-face learning settings remains an important topic for technologists. We believe this type of work has an important role to play in helping foster the participation of those who may struggle to access co-located and/or synchronous learning environments, be that due to geographical location, disability, caring responsibilities, etc.
The work presented here focuses on learning feedback within creative media domains and represents an attempt to bring aspects of multi-modal demonstration available in face-to-face settings to online communities. We present a novel feedback model that applies a ’track changes’ approach to non-text modality creative artefacts. Track changes has proven to be a useful and popular device aiding feedback provision (and co-authoring) for text-modality artefacts [1]. Document authors can easily toggle suggested changes on and off which aids comparison, and track changes are frequently paired with text comments explaining the thinking behind the proposed changes. Track changes’ ability to allow authors to compare pre-/post-feedback states may offer an additional benefit when applied in a creative digital media feedback context. This is because it aligns closely with a cornerstone practice known as A/B comparison [2], where a person pays close attention whilst switching between two artefact states, leading to subtle differences becoming apparent. It is the ability for feedback recipients to make close temporal comparisons which distinguishes the feedback approach explored in this paper from other remote/asynchronous demonstration methods such as screen recording with narration.
Applying a track changes approach to non-text modality digital artefacts necessitates exploring different tools and approaches for feedback creation and presentation. We propose calling such feedback ’show-and-tell feedback’ as it comprises two key components—a ’show’ and a ’tell’:
The ’show’ enables a feedback giver to demonstrate how they would manipulate a digital media artefact via a graphical user interface. To satisfy our requirement for close temporal comparison, recipients must be able to step in and take control of the interface, to explore the processing applied by the feedback giver and to make A/B comparisons between the pre/post feedback artefact states.
The ’tell’ represents descriptive information used to convey the feedback giver’s thought process which underpins their artefact manipulation.
As such, in this work we aim to answer two questions. The first states the larger challenge we want to address, which results in our proposed show-and-tell approach, and the second is specific to our own response to the first: (Q1) How can we design to support multi-modal, rich and interactive feedback upon creative media artefacts in online asynchronous environments? (Q2) How do music educators and learners perceive the learning experience and interactions with our proposed show-and-tell feedback interface?
In the remainder of this paper we present a pilot study where a show-and-tell interface was fully implemented and used to assist formative feedback provision to music students. The paper offers the following three contributions: (1) the concept of show-and-tell feedback is proposed; (2) core design criteria for show-and-tell feedback interfaces are proposed and demonstrated via a music-remixing interface; and (3) we present the perspectives of educators and students who used the show-and-tell feedback interface. The first two contributions correspond to answering the first research question and the third contribution addresses the second research question. We conclude this paper by discussing insights from this pilot study, focusing on observed issues around clarity, level of detail, control and artistic vision. Through this work, we also aim to contribute to the community’s understanding of how interactions with multimodal feedback impact music learning, part of the bigger question around interaction with multimodal technologies, perceptual processing and musical experiences raised by Choi [3] in MTI’s special issue on ’Musical Interactions’ in 2022.

Background and Related Work

Feedback is one of the most powerful influences upon learning and achievement [4] and therefore it deserves close attention from education technologists. Within the research community there is a general orientation towards viewing feedback as information provided to someone in order to try and reduce the gap between their current level of ability and a desired level of ability (e.g., [5]). Effective feedback requires that the recipients are able to make sense of the information they receive and understand how they can use it to enhance their future performance [6]. There exists huge variety in relation to the creation, content and delivery of feedback information with some approaches proving more effective than others.
Directive feedback is a type of formative feedback where prompts, hints, and direct instruction are used to make the feedback recipient aware of the aspects of their work needing remediation [7,8]. There is good evidence that directive feedback, in comparison to less-specific feedback, can enhance learning by decreasing cognitive load [9,10,11], particularly when the feedback contains the correct answer [12]. We believe our show-and-tell feedback aligns closely with the directive feedback classification due to the inclusion of demonstrations which are intended to add specificity.
An important issue relating to feedback within artistic domains is that ‘correct’ is frequently considered more a matter of subjective aesthetic judgement than objective truth [13,14]. Moving away from single-expert feedback towards multi-person feedback can be useful in many domains [15] but may be particularly important here because if aesthetic sensibilities are mismatched, purposeful artistic choice could be misinterpreted as oversight or error. Critique sessions can overcome this issue and are common in the arts, where novices receive feedback from both experts and peers, and benefit from the range of unique perspectives this entails [14].
Participating in peer feedback can benefit an individual whether they participate as a feedback provider or a recipient. Giving peer feedback can help novices view their own work more objectively and gain a better understanding of the standards by which it will be judged [16], whilst receiving peer feedback can serve as a powerful catalyst to trigger reflection, self-assessment, and corrective action [17]. Show-and-tell feedback can incorporate feedback from multiple feedback givers and, therefore, can be used in peer feedback and critique scenarios. Indeed, the pilot deployment presented subsequently is based around the critique concept, with feedback from both peers and expert tutors accommodated within the interface.
While much is known about the impact of the content of feedback (e.g., [4,10]), a potentially significant but less explored factor influencing its effectiveness relates to the way the feedback information is represented and its modality. For example, feedback recipients have been shown to benefit when feedback is presented as both written text and recorded speech [18]. Similarly, researchers investigating an online photography critique community found some participants had adopted a practice of downloading, revising and re-uploading photos in order to demonstrate proposed edits, thus overcoming the limitations of text-based feedback representations [19]. In the context of music education, Bobbe et al. [20] emphasized the importance of enabling meaningful communication between the teacher and the student in distance teaching. A show-and-tell feedback interface will support this type of behaviour by providing integrated tools allowing for in-place artefact processing, and will improve the visibility of feedback givers’ changes both by highlighting how the graphical user interface controls were used to process the artefact and by supporting side-by-side pre-/post-processing artefact comparisons.
HCI researchers have taken a keen interest in developing digital tools to enrich feedback provision for those learning in remote and/or asynchronous contexts. For example, the RichReview system [21,22] enables feedback givers to augment text documents with annotation, voice comments and gestures. Similarly, VidCrit [23] is intended to enrich feedback provision around the creation of digital video content by capturing feedback givers’ spoken commentary and synchronising it to the corresponding location in the video to ensure recipients understand the context of the comments. Whilst these systems offer enriched, multimodal feedback they do not enable the feedback giver to directly manipulate the underlying digital artefact.
A related strand of HCI research explores how digital systems can support feedback from multiple peers. Both PeerPresents [24] and CritViz [14] are examples of this work; however, their aim was to support, rather than replace, face-to-face critique sessions. PeerStudio [25] and CrowdCrit [26], on the other hand, are oriented towards supporting the creation of text feedback in remote and asynchronous settings.
There has been work around the creation of digital systems to support demonstration. For example, Grabler et al. created a system that could automatically generate photo manipulation tutorials [27], whilst DemoCut [28] is intended to automate the production of instructional videos. However, as far as we are aware, these systems have not been explored from a feedback provision perspective.
Whilst HCI researchers have explored enriched feedback, feedback from multiple feedback givers, and systems to support demonstration, we have not found any work that applies a track changes approach to feedback upon non-textual creative media artefacts. As such, we view this work on show-and-tell feedback interfaces as a novel contribution to the field.

2. Materials and Methods

We will start by explaining the design rationale and criteria for a show-and-tell feedback interface, then provide details of an implementation of such an interface for music remixing, before explaining the details of our exploratory study.

2.1. Design Rationale

Our show-and-tell feedback interface is inspired by the idea of applying a track changes approach for feedback upon non-text modality artefacts. Our aim is to propose and explore an approach to feedback which could allow those learning creative digital media practices in remote and/or asynchronous environments to receive rich and interactive feedback upon their creative artefacts.
Non-text modality digital artefacts are often manipulated using tools on a graphical user interface (GUI). Therefore, a show-and-tell feedback interface requires a GUI featuring a typical range of tools used to process the type of artefact in question, e.g., Photoshop style features for photographic content, audio editing features for musical artefacts and so on. Feedback givers will use these tools to modify the artefact when creating the ’show’ part of their feedback, and recipients will look to these tools and notice differences between their own tool use and that of a feedback giver. For example, perhaps a feedback giver has adjusted some dials or sliders on the GUI. These adjustments should be highlighted to the feedback recipient. Thus, when a user switches between the original version of the artefact (created by the feedback seeker) and a modified version created by a feedback giver, the system must alter the processing applied to the artefact accordingly (so the user can see or hear the difference in the artefact), update the GUI’s tool display to reflect this new processing (so the user can see how tools were used to refine the artefact), and present any accompanying feedback text on screen.
In discussing affordances and the design of interactive multimodal systems, Rowe [29] proposed that ’the choice of architecture determines the kinds of parameters available for manipulation’. As such, we propose that for show-and-tell interfaces to be effective, they could utilise a relatively new model of media distribution known as Object-Based Media [30]. This essentially moves the task of constructing the artefact from the producer to the receiver. For example, instead of a piece of music being distributed as a single audio file (e.g., an MP3), a set of audio files, one for each instrument in the music, are packaged up alongside metadata containing instructions detailing the processing to apply to each file and how to combine them prior to their presentation to the recipient. So long as the recipient’s computing device is capable of applying this processing, the music can be reconstructed to represent the creator’s intended presentation. Similarly, digital imagery may be transmitted as a set of image layers, with the accompanying metadata defining the layers’ ordering, opacity, processing and so on. Video can be considered a combination of both of the above. All that is needed for this Object-Based Media to support show-and-tell feedback is for the metadata to contain additional sets of processing instructions—one set for each piece of feedback—along with the accompanying descriptive feedback component (typically, but not necessarily as text). The end user would then be able to select which metadata is used to construct the artefact, be that the metadata associated with the artefact’s original state or metadata associated with a feedback state. The artefact assembly process is depicted in Figure 1.
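To make this assembly process concrete, the sketch below illustrates how an object-based music package might carry the feedback seeker’s original state alongside feedback states. All object, state and field names here are invented for illustration; they do not reproduce any actual Object-Based Media format, nor the metadata format of our implementation.

    // Illustrative object-based music package (all names hypothetical).
    const artefact = {
      // The components: one audio file per instrument.
      objects: [
        { id: "vocals", src: "vocals.wav" },
        { id: "drums", src: "drums.wav" },
      ],
      // Named sets of processing instructions: the seeker's original
      // state plus one additional state per piece of feedback.
      states: {
        original: { vocals: { gain: 0.8, pan: 0.0 }, drums: { gain: 1.0, pan: 0.0 } },
        feedback1: { vocals: { gain: 0.6, pan: -0.2 }, drums: { gain: 1.0, pan: 0.0 } },
      },
      // The accompanying descriptive 'tell' component for each feedback state.
      tells: {
        feedback1: "The vocals were masking the drums, so I pulled them back and panned them slightly left.",
      },
    };

    // Reconstruction: the end user selects which state's metadata is used
    // to process each object before the components are combined for playback.
    function settingsFor(stateName, objectId) {
      return artefact.states[stateName][objectId];
    }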
The show-and-tell feedback concept can leverage digital tools’ unique affordances [29] including persistence and non-destructive processing in order to enhance the effectiveness of the feedback provision. By persistence we mean that it enables a feedback recipient to return to the feedback multiple times which can help with retention [31] and assimilation [32]. Similarly, non-destructive processing enables feedback recipients to easily compare pre-/post-feedback artefact states, a practice which over time can enhance the practitioners’ perceptual skills [33,34].
When examined in the specific context of musical artefacts, our design aligns well with three of the five enabling dimensions for musical interaction suggested by Choi [3], namely affordance, design alignment and temporal integration.

Design Criteria

The following design criteria (DC), derived from the literature and concept presented above, represent the core requirements for show-and-tell feedback provision:
  • DC1: The system must be able to transmit the creative digital artefact in component form, handle its reconstruction upon reception, and provide tools to support its manipulation by users.
  • DC2: Expose tool use. The system must make clear how feedback givers used tools to create the proposed change to the artefact, as tools play a pivotal role in digital artefact creation [35].
  • DC3: Support multimodal feedback. This must include (i) the modality of the creative digital artefact (e.g., auditory, visual or both), (ii) visual information to convey tool use, and (iii) a mode to convey descriptive ’tell’ information.
  • DC4: Ensure the recipient can easily switch between post-feedback and pre-feedback artefact states to facilitate close comparison, as this supports the development of sensory, perceptual tacit competency [33,36]. This necessitates feedback that is persistent and non-destructive.
  • DC5: Support feedback provision from multiple feedback givers, making it possible for recipients to benefit from a range of perspectives, opinions and recommendations [15], and increasing the likelihood of receiving feedback matching their aesthetic sensibility [13,14].

2.2. Music Remixing Implementation

A music mixing-oriented show-and-tell feedback interface is depicted in Figure 2. As the artefact is presented off-screen (via headphones or loudspeakers) it is possible to present all the tools used to manipulate the artefact on screen simultaneously, following a mixing desk design which is well suited to outputting both auditory (music) and visual (tool settings) information (DC3). A feedback giver would use these tools to shape the music when creating the ‘show’ component (DC1), and tools that have been adjusted will be highlighted by yellow boxes (DC2).
This interface also features a feedback panel positioned to the right. Submitted feedback (DC5) is listed below the text-entry box, with each text comment accompanied by a button used to toggle the corresponding auditory artefact feedback on or off, thus facilitating close comparison (DC4).
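As a minimal sketch of how such a toggle might behave, assume one Web Audio GainNode per track, with gain standing in for the full set of mixing-desk controls; updateDialOnScreen and highlightChangedControls are hypothetical helpers standing in for the interface’s dial redraw and yellow-box highlighting:

    // Assumes one GainNode per track; gain stands in for the full set of
    // mixing-desk controls handled by the real interface.
    const ctx = new AudioContext();
    const gains = { vocals: ctx.createGain(), drums: ctx.createGain() };

    function applyState(state) {
      for (const [track, settings] of Object.entries(state)) {
        gains[track].gain.value = settings.gain;  // audible change (the 'show')
        updateDialOnScreen(track, settings.gain); // hypothetical GUI redraw (DC2)
      }
    }

    function toggleFeedback(originalState, feedbackState, isOn) {
      applyState(isOn ? feedbackState : originalState);        // DC4: close A/B comparison
      highlightChangedControls(originalState, feedbackState);  // hypothetical yellow boxes (DC2)
    }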
We have opted not to include a feedback history list to keep this iteration and its implementation simple. This is a design decision we intend to reflect upon as our research moves forward.
The foundation for this implementation was an open source, web-based music production tool described in [37]. This tool was chosen because it provided a virtual mixing desk which could load, play and manipulate object-based music (i.e., music in component form). Being web-based meant it could easily be deployed ‘in-the-wild’ [38], requiring only a computer with a modern web browser to run. Extending this application’s codebase to match our mock-up involved writing standard web code (HTML, CSS, JavaScript) and integrating a MySQL database to store the feedback metadata (text comments and GUI component values).
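By way of illustration, one piece of feedback might be serialised and sent for storage as below; the endpoint, field names and values are invented for this sketch and do not reflect the deployed schema:

    // Hypothetical shape of one stored piece of show-and-tell feedback.
    const feedbackRecord = {
      mixId: 42, // which uploaded mix the feedback targets
      giverId: 7,
      tell: "The kick and bass are fighting, so I cut the bass gain slightly.",
      show: { bass: { gain: 0.7 } }, // only the GUI component values that changed
    };

    // Post the record for persistence (e.g., into the MySQL database).
    fetch("/api/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(feedbackRecord),
    });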

2.3. Exploratory Deployment

Our show-and-tell feedback interface was deployed into a post-compulsory education institute, where it was used to provide peer and expert formative feedback on coursework prior to its submission. While our focus is on the show-and-tell approach, an additional ‘post comment’ button was added to the feedback panel, next to the ‘add suggestion’ button, making it possible for feedback givers to submit text-only feedback and thus offering a familiar point of reference against which both feedback givers and receivers could compare.
Due to the exploratory nature of this deployment, we relied on well-utilised qualitative methods in HCI, including interviews and focus groups, which were then subject to thematic analysis [39] in order to gain insight into participants’ experiences as both feedback givers and receivers.

2.3.1. User Group

We worked with a class of 12 students who were enrolled in a one-year introductory music production course. There was one mature student, with the rest of the group being recent school graduates (aged 16–19; 3 females, 9 males). The majority reported having little or no music production experience prior to starting the course eight months earlier. These students were recruited through their lecturer. Whilst all the class members participated at some point, only six of the group completed all phases of the deployment sessions. The lecturer reflected afterwards that this absentee rate was normal for his class. We were assisted in the feedback-giving process at this site by six lecturers from the music and audio department and three students in the final year of an audio engineering degree programme, who had around four years of prior formal music production training.

2.3.2. Deployment Description

The exploratory deployment was designed to provide participants with formative feedback on the music mixing aspect of their main piece of coursework prior to its submission at the end of the academic year. However, a related objective was for participants to gain experience of both giving and receiving show-and-tell feedback and text-only feedback. Four sessions, all attended by the lead researcher, were held during weekly classes lasting two hours. Whilst the study took place within a co-located environment, there were no face-to-face interactions, and the exchange of the creative artefacts and the feedback occurred asynchronously, which we believe provides a similar research context to that of a purely online environment. The sessions were structured as follows:
  • Session 1: Introduction and practice. The first session introduced the participants to the researcher and the show-and-tell feedback interface. Participants practised mixing music using Remix Portal and giving and receiving feedback, but no data were collected during this session.
  • Session 2: Uploading mixes. Each participant uploaded their work-in-progress music to the system and created their mix.
  • Session 3: Giving feedback. Participants were given the instruction to “leave feedback that will help the recipient get better at mixing”. Each participant was given a set of cards detailing whom they should give feedback to and whether it should be show-and-tell feedback or text-only feedback. The cards were organised to ensure, as far as possible, that each student both gave and received an even spread of text-only and show-and-tell feedback. Students were given 15 min to leave each piece of feedback. The final element of this session was a class-wide focus group aimed at gaining rich and detailed data about the feedback-giving experience.
  • Before session 4, lecturers and senior students participated in feedback giving, followed by a 10–15 min semi-structured interview.
  • Session 4: Receiving feedback, reflecting, and remediating. Students reviewed the feedback left for them and acted upon it by remediating their work. They then completed a 5-min individual semi-structured interview intended to capture their experiences receiving feedback.

2.3.3. Data Collection and Analysis

The deployment resulted in 103 pieces of feedback being generated (55 pieces of show-and-tell feedback and 48 pieces of text-only feedback). Of that feedback, 48 pieces came from classmates, 18 pieces from senior students and 37 pieces from lecturers. Each student received on average 17 pieces of feedback on their submitted work.
Given the exploratory nature of this research, the research team decided to use four basic questions during the focus groups and interviews, to capture participants’ likes and dislikes of the two feedback methods (show-and-tell and text-only). For example, one of the questions was “What did you like about giving text-only feedback?”. Subsequent questions were used to follow any interesting angles that emerged. Transcribed interview and focus group data was then analysed using inductive thematic analysis, following the principles outlined in [40]. The lead researcher coded the transcripts and developed a set of candidate themes, then worked with the research team to revise and reduce overlaps between the themes.

3. Results

Through our thematic analysis of the qualitative data we produced four important themes: clarity, level of detail, the role of artistic vision, and ’who is in control?’. These themes are presented below along with supporting empirical evidence.

3.1. Clarity

When reflecting on feedback they had received, many participants considered that the ’show’ artefact manipulation demonstrations clarified the associated ’tell’ text comment: “This method is very helpful because you’re not only getting text feedback but an actual demonstration of what others would do to the mix from their perspective” [P13]. Some comments referenced show-and-tell feedback’s ability to let recipients see or hear the proposed changes to the artefact, whilst some comments referenced both: “This is much better. You can see and hear what they mean” [P12], or “I like how you’re able to hear and see how the person has changed the settings. It’s really helpful.” [P7]. In using the terms ‘hear’ and ‘see’ these participants are showing appreciation for this feedback method’s ability to allow them to perceive the change in the artefact, as well as its ability to make visible how tools on the interface have been used to implement this change. One participant described this feedback method as providing… “a nice map of how you could mix it yourself” [P9].
In contrast, many participants complained about text-only feedback’s lack of clarity. We received many comments such as “It could sometimes be difficult to understand what someone was trying to say” [P2], or “Some [text-only feedback comments] were very vague” [P3]. A weak connection between the text-only feedback mechanism and the proposed changes to the artefact was evident, for example: “[It’s] hard to understand how people think it should sound without hearing it” [P1], or “[I didn’t like] the fact you can’t actually hear what they mean, only an opinion of what it should be” [P7]. This difficulty was also expressed by feedback givers, e.g., “It can be hard to put what you think into words to then comment about” [P2], or “It was harder to explain without showing” [P14]. Such text-only feedback requires interpretation by the person receiving, which may be particularly challenging for novices: “[Text-only feedback] can be slightly vague in that you have to second-guess what they mean… [It’s] useful if you already have an idea of what is being suggested. Probably better for more advanced mixers.” [P5]. Some participants appeared to be unable to interpret the feedback at all. For example, when asked what he didn’t like about text-only feedback, [P11] stated “It doesn’t show you [the feedback giver’s] thought on the piece”.

3.2. Level of Detail

Participants described show-and-tell as being more detailed and precise. Feedback givers described being able to do more, and one lecturer described how show-and-tell “opened up a new level of subtlety” to them (Lecturer 4). Another lecturer suggested that the precision comes from being able to try out and refine the feedback using the interface tools before submitting it to the recipient:
Lecturers often pretend that they know exactly what needs to happen to fix things, but they can’t know that, not everyone knows that instantly; you need to test what you think, and it’s an experimental process.
(Lecturer 2)
Whilst our student participants were given exactly 15 min to leave each piece of feedback, lecturers were unconstrained by class times and therefore had longer to contribute their thoughts. We observed that most of the lecturers appeared to demonstrate a higher level of immersion when giving show-and-tell feedback, and it frequently required our prompting for them to conclude giving this type of feedback. One lecturer explained how the process of creating show-and-tell feedback made him lose focus on his role as an educator, and instead step back into his former role as a music producer and mix engineer:
I noticed that with the text-only feedback I’ll say the most immediate thing in the mix whereas with the [show-and-tell] feedback I didn’t really know when to stop… I couldn’t really stop myself just continuing to mix it and mix it [laughs] and so perhaps the students wouldn’t be able to take on board all those changes at the one time. Maybe I was thinking more as a mix engineer rather than as a lecturer now I look back, because I was trying to make the best mix I could rather than help someone improve their mixing.
(Lecturer 3)
Some of our participants appeared to find the more expansive show-and-tell feedback left for them somewhat hard to digest: “Too many changes can be confusing and overwhelming” [P5]. Others suggested a tendency for the explanatory text component of show-and-tell to be somewhat overlooked: “Some people changed things in the controls but did not explain why” [P3], and “I’d have liked more explanation” [P2].
Several instructors appeared to pre-empt the issues described above and developed personal strategies to try and make their show-and-tell easier for the recipients to comprehend, which often involved splitting feedback over several entries so that each interface change and piece of explanatory text related to a single concept or tool within the interface:
You need to layout your explanatory text well using numbers or bullet points and making it quite concise so people know what they are looking at when they see the interface change. They’ll be like, “there’s the pan comment… oh now I see what you’ve done with the pan”.
(Lecturer 1)
Other lecturers appeared to start formulating a strategy upon reflection:
Maybe when using show-and-tell feedback I need to write more and do less. Maybe the balance isn’t quite right because I only wrote one sentence, two sentences at the most. So I was really just writing about [obvious] levelling changes and not about the subtleties of the mix. The show-and-tell maybe explored more subtleties of the mix.
(Lecturer 3)

3.3. The Role of Artistic Vision

Most of the feedback givers, whether novice students, senior students or lecturers, reported leaving feedback that represented their own interpretation of how the musical artefact should be modified, as opposed to considering the feedback recipient’s vision for the artefact and trying to help them achieve their aim: “It allows you to tell the artist how you think it should sound” [P1] or “you’ve got to listen to the song and the song will tell you what has to be done to it” (Lecturer 3).
The feedback givers’ artistic vision was more clearly transmitted through show-and-tell, and some participants were happy to try out any show-and-tell feedback given to them, citing its potential to give them new ideas: “You can see how others think and maybe take their opinions” [P4], and “I can hear how other people prefer to mix certain sounds with others, affecting new ideas” [P3]. The majority of recipients, however, expressed some concern around feedback they received which did not align with their own artistic vision for the artefact: “Some suggestions were too different from my ideas” [P6].
One of our lecturers proposed that aligning feedback to the recipient’s artistic vision becomes more important the more advanced the recipient:
Maybe it depends what level, because, let’s say it’s an NC [novice] student, if you aren’t involved in lots of genres and your aesthetic awareness isn’t that good and your stylistic sympathies aren’t great you could still probably help out with the basic stuff couldn’t you? Like balance and panning. You could make a lot of helpful comments I think. But perhaps as you get up the levels and peoples’ mixes are becoming a bit less amateur or beginner then I think it does help to have someone who is within your niche, to make those comments that can push you forward in your craft.
(Lecturer 1)

3.4. Who Is in Control?

An issue that emerged with show-and-tell feedback was that the feedback giver can ’take over’ control of the artefact. One of our senior student feedback givers reported being uncomfortable with this: “It’s sometimes hard to give feedback as it makes me feel bad when changing someone’s mix if they’ve worked on it” (Senior student 2).
Many of the feedback recipients expressed similar concerns such as “I wouldn’t want some random guy being like ‘You should change this thing’ because it’s like ‘bruv I don’t know you’, you know what I mean?” (P3).
In contrast, the ambiguity inherent in text-only feedback allowed the feedback recipient to feel like they retained ownership, with the feedback serving as a jumping off point for experimentation and self-reflection: “since [text-only feedback’s] not always exact, it can be fun to play around with” (P9).
The downside of text-only feedback’s ambiguity is that it does not make visible the competence of the feedback giver, which emerged when asking participant 15 what they didn’t like about text-only feedback: “You are unsure of how musically advanced the person leaving the comments is”.

4. Discussion

Our goal with this work was to explore the potential, limitations and implications of a show-and-tell feedback interface, intended to offer rich feedback for those learning creative digital media practices.
Many participants specifically referenced the demonstrations that show-and-tell feedback facilitated. Our findings, within the themes of clarity and level of detail, evidence the quality of those demonstrations and the participants’ appreciation of them.
Unsurprisingly, our results confirm that the widely used text-only feedback (especially in distance/asynchronous learning settings) cannot convey sufficient clarity and detail when it comes to creative digital artefacts. This was expected given the auditory and creative nature of the music mixing activity, where we can “know more than we can tell” [41]. In our case, ambiguous musical terms contained within the feedback text (such as ‘bright’, ‘harsh’ or ‘boomy’) are made clear through the auditory information, letting the recipient hear what these features actually sound like, and understand which interface controls impact particular musical features.
Whilst our results demonstrate that the show-and-tell feedback interface can facilitate rich multi-modal feedback provision, they also indicate areas where this first iteration of the show-and-tell feedback interface could be improved. A common issue was that feedback givers did not always provide sufficient text-based explanations to accompany their demonstrations, i.e., not enough tell accompanying the show; e.g., “Some people changed things in the controls but did not explain why” [P3]. This can be attributed to an over-reliance on the show element. We may address this going forward by trying to scaffold feedback givers to leave more comprehensive text explanations. One approach would be to append a sentence starter to the text entry box each time an interface control is altered, thus prompting the feedback giver to leave sufficient explanation, for example, “I changed the volume slider because...”. Scaffolding feedback, particularly when it comes from non-experts, is a common technique used by those creating distributed feedback and critique systems [42], and sentence starters can be an effective mechanism [43].
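A minimal sketch of this scaffold, assuming a plain textarea for the ‘tell’ component and an event fired whenever an interface control changes (the element id and control label are illustrative, not part of the current implementation):

    // Append a sentence starter to the 'tell' box whenever a control changes.
    const tellBox = document.querySelector("#feedback-text"); // hypothetical id

    function onControlChanged(controlLabel) {
      const starter = `I changed the ${controlLabel} because... `;
      if (!tellBox.value.includes(starter)) {
        tellBox.value += (tellBox.value ? "\n" : "") + starter;
      }
    }

    // e.g., wired to a volume slider:
    // volumeSlider.addEventListener("input", () => onControlChanged("volume slider"));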
A further improvement would be to clarify the steps the feedback giver went through, as opposed to having our interface simply present their completed feedback. This can be accommodated through a feedback history list (which was not included in this first iteration of the music mixing interface design due to time constraints). It would be simple, yet potentially very useful, to add this feature to the music mixing interface to enable recipients to ‘step through’ the changes the feedback giver has made to the artefact. This would also address an issue raised by the participants, that “Too many changes can be confusing and overwhelming” [P5], coupled with the fact that there was a tendency for some expert feedback givers to provide too much feedback.
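Such a history list could be as simple as a sequence of snapshots of the control state, one per edit, which recipients replay in order; the sketch below assumes the hypothetical applyState helper from the earlier toggle sketch:

    // Record a snapshot of the full control state after each edit the
    // feedback giver makes, so recipients can step through the changes.
    const history = [];
    function recordStep(currentState) {
      history.push(structuredClone(currentState));
    }

    let cursor = 0;
    function stepForward() {
      if (cursor < history.length - 1) applyState(history[++cursor]);
    }
    function stepBack() {
      if (cursor > 0) applyState(history[--cursor]);
    }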
The themes of artistic vision and control are indicative not just of the challenges of implementing a show-and-tell feedback interface, but of those of inducting students into critique processes generally. For example, it is known that feedback which recipients deem controlling or critical can thwart efforts to improve performance [44]. These themes serve as a reminder that novices may find it hard to accept criticism, and may benefit from support as they learn how to take advantage of feedback from critique sessions [14]. Instructors should therefore give thought as to how best to support learners as they begin participating in show-and-tell feedback processes. This appeared particularly relevant when recipients believed the feedback giver possessed a different aesthetic sensibility. Designers might be able to alleviate this to some extent by creating mechanisms to help feedback seekers reflect upon and communicate where they would like feedback givers’ input, perhaps asking for help with a specific aspect of the artefact manipulation. This could enable the feedback recipient to maintain control of their artistic work, as it should reduce the likelihood that feedback givers will creatively re-imagine the work at a point when it would be inappropriate, such as when the recipient is in the latter stages of their work and interested in help with refining their own interpretation.
Reflecting upon our design criteria, we saw good evidence in support of the multimodal feedback approach (DC3), the easy comparison of pre-/post-feedback states (DC4), and the accommodation of multiple feedback givers (DC5). However, we now consider that design criterion 2 could have been interpreted in other ways. It called for the exposure of feedback givers’ tool use, and our interpretation was that only interface tools which produce a tangible change in the artefact should be recorded as part of these demonstrations. The advantage of this approach is that it helps keep the feedback simple and removes the potential for feedback recipients to waste energy scrutinising the artefact when there are no perceptible changes. However, an argument could be made that all interface control changes should be recorded, even those that do not impact the artefact (such as controls used to zoom in and out, and to start and stop playback), as they would give a more detailed picture of the feedback giver’s thought process and thus might help enhance the recipient’s developing cognitive model. We therefore acknowledge that this design criterion should be revisited in the future.
We assert that the show-and-tell concept may be relevant to all creative digital media areas, and have presented an example show-and-tell feedback interface in a music remixing context. We consider music one of the more challenging contexts due to its temporal and dynamic nature, as opposed to more static media such as digital imagery. However, implementing show-and-tell feedback interfaces for media like movies and animations may be somewhat more complex, requiring the feedback to be time-stamped and time-ranged in order to link it to specific points within the media. Time-stamped comments were in fact implemented within the music remixing application but were not utilised in the reported deployment for the sake of simplicity. Similar issues around control and artistic vision are likely to exist in such creative domains too. For example, there is arguably no ‘correct’ way of processing an image in the objective sense; it is a matter of personal aesthetics. Therefore, any feedback suggestions which deviate too far from a feedback seeker’s personal aesthetic sensibilities may well be viewed negatively. Accordingly, our design suggestions (such as facilitating feedback seeker questions to maintain control, and step-through demonstration lists to enhance clarity) may help designers produce effective show-and-tell feedback interfaces for a broad range of creative digital media application areas. These interfaces may be particularly advantageous in asynchronous and/or remote contexts, but may still prove beneficial where face-to-face feedback is possible, due to the persistence and interactivity afforded by show-and-tell feedback, as well as its tight coupling with the artefact production mechanism.
Show-and-tell feedback may provide another tool which educators can use to benefit their students, particularly those who need to learn remotely or asynchronously, but it creates other possibilities too. It could be used by online communities interested in digital media creation to provide rich feedback to one another, or it could be used to help bring outside expertise into formal education settings, providing creative channels for community engagement in education, which has known benefits for students [37].

5. Conclusions

The work presented in this paper was motivated by a desire to help learners who need to participate remotely or asynchronously experience more of the learning opportunities available to those who can access face-to-face learning environments. Our focus was on accommodating demonstrations within learning feedback in creative digital media contexts.
In this work we explored the novel application of a track changes feedback approach to non-text-based creative digital media artefacts. This resulted in the show-and-tell feedback interface concept with a set of core design criteria, which we demonstrated through a music remixing interface and an in-the-wild exploratory deployment into an education setting. This deployment provided insights into how the show-and-tell feedback interface can impact the experience of feedback givers and receivers. Our findings reveal that our show-and-tell feedback interface can facilitate clearer and more detailed feedback than traditional text feedback, which recipients and providers viewed as beneficial. However, we also highlight issues around control and artistic vision that emerged through our deployment, and we conclude by discussing how designers might mitigate these issues.

6. Limitations and Future Work

During the pilot study, show-and-tell feedback was considered alongside text-only feedback. We had to decide between using text-only feedback as the reference point for our study and using a feedback method capable of conveying artefact manipulation information to remote learners, such as video screen capture with narration. This decision was made in collaboration with the music lecturers, who were keen that participants would not change their practices too much, and text-only feedback retained the link to their conventional practices. This decision still aligned with our main aim of introducing, and exploring the potential of, show-and-tell interfaces for providing rich feedback, which could ultimately add to the set of tools available to creative media teachers and learners.
Whilst we argue that show-and-tell feedback may benefit remote learners, the class-based setting of our study meant that our main participant group were co-located. However, we were careful to minimise face-to-face interactions, peer feedback was still provided asynchronously, and the participating lecturers and senior students did provide remote and asynchronous feedback.
A possible limitation is the relatively small sample size used. However, this was a pilot study that occurred across multiple sessions in a realistic setting, and it provided rich qualitative data leading to important insights about the potential of this approach, its limitations, implications and possible solutions, as discussed.
To demonstrate the potential of show-and-tell feedback to creative digital media domains, beyond the music mixing-focused interface used in the pilot study, and to refine this approach even further, we hope to implement and deploy show-and-tell interfaces for other domains as future work.

Author Contributions

Conceptualization, C.D. and A.K.; methodology, C.D.; software, C.D.; validation, C.D. and A.K.; formal analysis, C.D.; investigation, C.D.; resources, C.D.; data curation, C.D.; writing—original draft preparation, C.D.; writing—review and editing, C.D. and A.K.; visualization, C.D.; supervision, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EPSRC Centre for Doctoral Training in Digital Civics at Newcastle University (EP/L016176/1).

Institutional Review Board Statement

Full ethical approval was obtained from the Faculty of Science, Agriculture & Engineering ethics committee at Newcastle University before conducting this study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The code used in this work is available as open source at https://github.com/ColinBD/RemixPortal (accessed on 13 March 2024).

Acknowledgments

We would like to thank all participants for their time and input.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. AbuSeileek, A.F. Using track changes and word processor to provide corrective feedback to learners in writing. J. Comput. Assist. Learn. 2013, 29, 319–333. [Google Scholar] [CrossRef]
  2. Everest, F.A. Critical Listening Skills for Audio Professionals; Thomson Course Technology: Boston, MA, USA, 2007. [Google Scholar]
  3. Choi, I. An Introduction to Musical Interactions. Multimodal Technol. Interact. 2022, 6, 4. [Google Scholar] [CrossRef]
  4. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112. [Google Scholar] [CrossRef]
  5. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144. [Google Scholar] [CrossRef]
  6. Boud, D.; Molloy, E. Rethinking models of feedback for learning: The challenge of design. Assess. Eval. High. Educ. 2013, 38, 698–712. [Google Scholar] [CrossRef]
  7. Shute, V.J. Focus on formative feedback. Rev. Educ. Res. 2008, 78, 153–189. [Google Scholar] [CrossRef]
  8. Hartman, H. Scaffolding & cooperative learning. In Human Learning and Instruction; City College of City University of New York: New York, NY, USA, 2002; pp. 23–69. [Google Scholar]
  9. Moreno, R. Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia. Instr. Sci. 2004, 32, 99–113. [Google Scholar] [CrossRef]
  10. Kluger, A.N.; DeNisi, A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 1996, 119, 254. [Google Scholar] [CrossRef]
  11. Sweller, J.; Van Merrienboer, J.J.; Paas, F.G. Cognitive architecture and instructional design. Educ. Psychol. Rev. 1998, 10, 251–296. [Google Scholar] [CrossRef]
  12. Phye, G.D.; Sanders, C.E. Advice and feedback: Elements of practice for problem solving. Contemp. Educ. Psychol. 1994, 19, 286–301. [Google Scholar] [CrossRef]
  13. Bergmann, S. An Objective Aesthetics? Implications for Arts Education. Philos. Inq. Educ. 1994, 8, 17–29. [Google Scholar] [CrossRef]
  14. Tinapple, D.; Olson, L.; Sadauskas, J. CritViz: Web-based software supporting peer critique in large creative classrooms. Bull. IEEE Tech. Comm. Learn. Technol. 2013, 15, 29. [Google Scholar]
  15. Cho, K.; Schunn, C.D. Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Comput. Educ. 2007, 48, 409–426. [Google Scholar] [CrossRef]
  16. Nicol, D.J.; Macfarlane-Dick, D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud. High. Educ. 2006, 31, 199–218. [Google Scholar] [CrossRef]
  17. Boud, D.; Holmes, H. Self and peer marking in a large technical subject. In Enhancing Learning through Self Assessment; Kogan Page: London, UK, 1995. [Google Scholar]
  18. Ice, P.; Swan, K.; Diaz, S.; Kupczynski, L.; Swan-Dagen, A. An analysis of students’ perceptions of the value and efficacy of instructors’ auditory and text-based feedback modalities across multiple conceptual levels. J. Educ. Comput. Res. 2010, 43, 113–134. [Google Scholar] [CrossRef]
  19. Xu, A.; Bailey, B. What do you think? A case study of benefit, expectation, and interaction in a large online critique community. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, Seattle, WA, USA, 11–15 February 2012; pp. 295–304. [Google Scholar]
  20. Bobbe, T.; Oppici, L.; Lüneburg, L.M.; Münzberg, O.; Li, S.C.; Narciss, S.; Simon, K.H.; Krzywinski, J.; Muschter, E. What Early User Involvement Could Look Like—Developing Technology Applications for Piano Teaching and Learning. Multimodal Technol. Interact. 2021, 5, 38. [Google Scholar] [CrossRef]
  21. Yoon, D.; Chen, N.; Guimbretière, F.; Sellen, A. RichReview: Blending ink, speech, and gesture to support collaborative document review. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, Honolulu, HI, USA, 5–8 October 2014; pp. 481–490. [Google Scholar]
  22. Yoon, D.; Chen, N.; Randles, B.; Cheatle, A.; Löckenhoff, C.E.; Jackson, S.J.; Sellen, A.; Guimbretière, F. RichReview++: Deployment of a Collaborative Multi-modal Annotation System for Instructor Feedback and Peer Discussion. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; pp. 195–205. [Google Scholar]
  23. Pavel, A.; Goldman, D.B.; Hartmann, B.; Agrawala, M. VidCrit: Video-based asynchronous video review. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 517–528. [Google Scholar]
  24. Shannon, A.; Hammer, J.; Thurston, H.; Diehl, N.; Dow, S. PeerPresents: A web-based system for in-class peer feedback during student presentations. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems, Brisbane, Australia, 4–8 June 2016; pp. 447–458. [Google Scholar]
  25. Kulkarni, C.E.; Bernstein, M.S.; Klemmer, S.R. PeerStudio: Rapid peer feedback emphasizes revision and improves performance. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, Vancouver, BC, Canada, 14–18 March 2015; pp. 75–84. [Google Scholar]
  26. Luther, K.; Tolentino, J.L.; Wu, W.; Pavel, A.; Bailey, B.P.; Agrawala, M.; Hartmann, B.; Dow, S.P. Structuring, aggregating, and evaluating crowdsourced design critique. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 473–485. [Google Scholar]
  27. Grabler, F.; Agrawala, M.; Li, W.; Dontcheva, M.; Igarashi, T. Generating photo manipulation tutorials by demonstration. In Proceedings of the SIGGRAPH ’09: ACM SIGGRAPH 2009 Papers, New Orleans, LA, USA, 3–7 August 2009; pp. 1–9. [Google Scholar]
  28. Chi, P.Y.; Liu, J.; Linder, J.; Dontcheva, M.; Li, W.; Hartmann, B. Democut: Generating concise instructional videos for physical demonstrations. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St. Andrews, UK, 8–11 October 2013; pp. 141–150. [Google Scholar]
  29. Rowe, R. Representations, Affordances, and Interactive Systems. Multimodal Technol. Interact. 2021, 5, 23. [Google Scholar] [CrossRef]
  30. Armstrong, M.; Brooks, M.; Churnside, A.; Evans, M.; Melchior, F.; Shotton, M. Object-based broadcasting: Curation, responsiveness and user experience. In Proceedings of the IBC 2014 Conference, Amsterdam, The Netherlands, 11–15 September 2014. [Google Scholar]
  31. Ryan, T.; Henderson, M.; Phillips, M. Feedback modes matter: Comparing student perceptions of digital and non-digital feedback modes in higher education. Br. J. Educ. Technol. 2019, 50, 1507–1523. [Google Scholar] [CrossRef]
  32. Collins, A. Cognitive apprenticeship and instructional technology. Educ. Values Cogn. Instr. Implic. Reform 1991, 1991, 121–138. [Google Scholar]
  33. Schindler, J. Expertise and tacit knowledge in artistic and design processes: Results of an ethnographic study. J. Res. Pract. 2015, 11, 6. [Google Scholar]
  34. Schwartz, D.L.; Tsang, J.M.; Blair, K.P. The ABCs of How We Learn: 26 Scientifically Proven Approaches, How They Work, and When to Use Them; WW Norton & Company: New York, NY, USA, 2016. [Google Scholar]
  35. Skains, R.L. The adaptive process of multimodal composition: How developing tacit knowledge of digital tools affects creative writing. Comput. Compos. 2017, 43, 106–117. [Google Scholar] [CrossRef]
  36. Dahlbom, B.; Mathiassen, L. Computers in Context: The Philosophy and Practice of Systems Design; Blackwell Publishers, Inc.: Oxford, UK, 1993. [Google Scholar]
  37. Dodds, C.; Kharrufa, A.; Preston, A.; Preston, C.; Olivier, P. Remix portal: Connecting classrooms with local music communities. In Proceedings of the 8th International Conference on Communities and Technologies, Troyes, France, 26–30 June 2017; pp. 203–212. [Google Scholar]
  38. Chamberlain, A.; Crabtree, A.; Rodden, T.; Jones, M.; Rogers, Y. Research in the wild: Understanding ‘in the wild’ approaches to design and development. In Proceedings of the Designing Interactive Systems Conference, Newcastle Upon Tyne, UK, 11–15 June 2012; pp. 795–796. [Google Scholar]
  39. Lazar, J.; Feng, J.H.; Hochheiser, H. Research Methods in Human-Computer Interaction; Morgan Kaufmann: Burlington, MA, USA, 2017. [Google Scholar]
  40. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  41. Polanyi, M. The Tacit Dimension; University of Chicago Press: Chicago, IL, USA, 2009. [Google Scholar]
  42. Kou, Y.; Gray, C.M. Supporting distributed critique through interpretation and sense-making in an online creative community. ACM Hum.-Comput. Interact. 2017, 1, 60. [Google Scholar] [CrossRef]
  43. Lazonder, A.W.; Wilhelm, P.; Ootes, S.A. Using sentence openers to foster student interaction in computer-mediated learning environments. Comput. Educ. 2003, 41, 291–308. [Google Scholar] [CrossRef]
  44. Baron, R.A. Criticism (informal negative feedback) as a source of perceived unfairness in organizations: Effects, mechanisms, and countermeasures. In Justice in the Workplace: Approaching Fairness in Human Resource Management; Cropanzano, R., Ed.; Lawrence Erlbaum Associates, Inc.: Mahwah, NJ, USA, 1993; pp. 155–170. [Google Scholar]
Figure 1. Show-and-tell feedback structure and process as compared to traditional feedback.
Figure 2. The music mixing-oriented show-and-tell feedback interface and its application of the core design criteria.