Article

Investigating the Learning Process of Folk Dances Using Mobile Augmented Reality

HCI Lab, Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno, Czech Republic
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 599; https://doi.org/10.3390/app10020599
Submission received: 4 November 2019 / Revised: 17 December 2019 / Accepted: 10 January 2020 / Published: 14 January 2020
(This article belongs to the Special Issue Virtual Reality and Its Application in Cultural Heritage)

Abstract

Learning how to dance is not an easy task, and traditional teaching methods remain the main approach. Digital technologies (such as video recordings of dances) have already been used successfully in combination with traditional methods. However, emerging technologies such as virtual and augmented reality have the potential to provide greater assistance, speeding up the process as well as assisting the learners. This paper presents a prototype mobile augmented reality application for assisting the process of learning folk dances. Initially, a folk dance was digitized based on recordings from professional dancers. Avatar representations (either male or female) are synchronized with the digital representation of the dance. To assess the effectiveness of mobile augmented reality, it was comparatively evaluated against a large back-projection system in laboratory conditions. Twenty healthy participants took part in the study; their movements were captured using a motion capture system and then compared with the recordings from the professional dancers. Experimental results indicate that the augmented reality (AR) application has the potential to be used in the learning process.

1. Introduction

Augmented reality (AR) has the ability to integrate virtual information into the real environment in real time [1]. Nowadays, with simultaneous localization and mapping (SLAM) algorithms, this integration is easier and gives the illusion that real and virtual objects co-exist in the same space. One of the important application areas of AR is the entertainment domain. Dance is a performing art consisting of sequences of different movements. It can carry various messages according to context; e.g., traditional dances are strongly connected to the culture and cultural heritage of a place or nation [2]. Dances are mainly taught in two ways: by attending dance lessons, or by self-learning through watching and imitating demonstrations presented in video or in a three-dimensional (3D) virtual environment [3]. A major drawback of videos and online websites where dances can be found is the lack of interactive feedback, but virtual reality (VR) and/or AR technology can overcome this limitation [4,5]. These approaches can also improve interaction and enable more functionalities, e.g., zooming in, watching the dancer from different angles, and getting feedback.
On the other hand, an obvious advantage of an AR application is that numerous dances can be added to the same application, so users can learn many dances this way. Typically, a teacher or professional dancer is recorded using a motion capture system [6], and the resulting animations are applied to avatars available in the application. During the learning procedure, users' movements are also captured and compared to the teacher's; feedback is then provided, or the captured data are stored and used for further offline analysis. Besides providing additional functionalities and feedback, AR applications for dance learning can be used whenever and wherever users want, allowing them to learn and practice at their own pace.
In this paper, a prototype mobile AR application for learning folk dances is presented and evaluated in laboratory conditions. An avatar of a professional dancer is shown to the users, who are instructed to imitate its movements and try to learn the dance. Their movements are captured using an optical motion capture system with passive markers. Functionalities such as play/pause, switching between showcase and learning mode, changing the speed, and switching between different movements are implemented. It is also possible to move the avatar in space and examine the dance from different viewpoints. The AR application, which contains male and female versions of one folk dance (Pašovská sedlcká), was assessed by comparing it with a projection mode to examine whether it can assist the learning process. A pilot study conducted before the experiment showed that a time span of 10 min should be enough for training, since the chosen dance is around 25 s long. After training, participants performed the dance three times with music only, and the data were recorded using the same motion capture system. Our hypothesis is that the mobile AR application is better for learning how to dance than a large projection screen. Data filtered using cubic interpolation were exported and analyzed using dynamic time warping (DTW) [7]. For each body joint of each participant and of the professional dancers, the change in rotation between successive frames was calculated and stored. These changes were then compared with the corresponding joint data from the professional dancers using DTW, and an average score was calculated to obtain a score for the whole body. Experimental results from 20 participants indicate that the AR application shows potential to be used in the learning process. The workflows for the learning procedure and the data analysis are presented in Figure 1 and Figure 2.

2. Background

Various applications for dance learning have been proposed using different technologies. Magnenat-Thalmann et al. [8] developed a 3D viewer for watching a dance; interaction is provided through changing the point of view and the zoom level, and the playback speed can be controlled. Another interface for dance learning, presented in [9], includes functionalities such as start, stop, zoom, focus, changing the camera position, and changing the speed. Creating game-like applications is also a popular way to encourage specific behaviors and increase motivation and engagement [10]. Examples of game-like environments and interfaces can be found in [4,11,12]. They all propose a similar way of learning: an avatar of the teacher is shown, and the user is supposed to imitate the teacher's movements. The user's movements are streamed to the application, and feedback is given in the form of a score with a corresponding message. Once the task is completed successfully, the user can go to the next step.
VR offers an alternative, experimental way in which dances can be taught. Kyan et al. [13] proposed a cave automatic virtual environment (CAVE) for ballet dance training. The environment consists of projector screens placed on the walls of a room-sized cube, where the user can watch the virtual teacher and replicate the movements. A training system based on motion capture and VR technologies is presented in [14]; the virtual teacher's movements are projected on a wall screen and the user imitates them. In another prototype VR simulator, users can preview folk dance segments performed by a 3D avatar and repeat them [15]; intuitive feedback based on comparison with the template motions is provided. Folk dance training is also described in [16], where a head-mounted display (HMD) was used for the dance presentation in VR; users' movements were captured and compared to the teacher's, and feedback was given in real time. Moreover, a VR application used a 3D computer-generated animation of the teacher's movements [17]. Mousas [18] presented a method for controlling a virtual partner's motion based on a performance capture process: a hidden Markov model (HMM) was trained to learn the structure of the dance motions, and at runtime the system predicts the progress of the user's motion. A user study was conducted to understand the naturalness of the synchronized motion and the control the user has over the partner's synthesized motion.
AR has also been used experimentally in entertainment and dance [1,19]. A conceptual tool for simulated dancing uses smart glasses to project virtual dancers into the environment [20]. Multi-user and single-user scenarios are proposed: in the former, two users dance together without being physically present in the same room; in the latter, the user dances with avatars that have predesigned animations. A large-scale AR mirror for movement training is presented in [21], where the mirror allows users to see their own movements, providing them with visual feedback. In [22], AR is used to present a virtual dancer as an instructor or a dance partner and enables users to see their body movements as an external observer. Clay et al. [23] adapted and integrated AR-related tools to enhance the emotion involved in cultural performances; as part of that work, the stage of a live performance was augmented, with dance as an application case.
Our prototype, presented in [24], offers users the ability to visualize an avatar of a professional dancer with a prerecorded animation, as shown in Figure 3. Users can switch between different parts of the dance without any restrictions. There is no need to complete one step before proceeding to the next, so users can organize the learning procedure at their own pace. In addition, the music of the dance is available so that they can coordinate their movements with it. Similarly to [8,9], it is possible to change the speed and/or play/pause the dance, offering a personalized experience.

3. Capturing and Visualization

3.1. Capturing of Professional Dancers

The first step was the recruitment and recording of professional dancers; recruitment was done through social media. The recordings from the professional dancers are used as ground truth data for the analysis. After user testing, animations were exported from the recordings and used for animating the avatars shown in the application. For obtaining the data, the following procedure was used. Professional dancers with many years of experience were invited to dance a folk dance that is usually performed in pairs. The dancers wore motion capture suits with 37 passive markers [25]. The markers are placed at specifically determined positions on the dancer's body and are coated with a reflective material to reflect the light produced near the cameras' lenses [3]. First, the male dancer was asked to put on a suit with markers and performed his part of the dance alone with music, three times in a row. Then, the male dancer performed three times with music again, but this time with a partner; the male dancer was still wearing a motion capture suit, while the female dancer was not. The next step was to perform the dance as a pair, three times, with music, this time with both dancers wearing a suit with markers. The same procedure was followed for the professional female dancer [26].
After the recording phase, post-processing was done in Motive [27]. Motive has several built-in interpolation methods for filling gaps, e.g., constant, linear, cubic, model-based, and pattern-based interpolation. In this work, cubic interpolation was applied, since it is the simplest method that offers continuity between segments. It is possible to set the maximum size of the gap filled using the selected method. Gaps in marker trajectories were filled using cubic interpolation. In the case of the professional dancers, the maximum gap size was set to 10 frames, while for participants in the experiment it was 100 frames. The filtered data were exported as animations and comma-separated values (CSV) files. The animations with the fewest markers with gaps and with the smallest gap sizes were used in the application; the corresponding CSV files were used for further offline analysis. The data recorded from the professional dancers and used in both applications can be found in [28].
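Motive's gap-filling implementation is proprietary, but the idea behind cubic gap filling with a maximum gap size can be illustrated with a minimal Python sketch (an assumption-laden illustration, not the software's actual code). Here a marker coordinate is a 1D array with NaN for occluded frames, and only gaps up to a configurable maximum size are filled:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps_cubic(trajectory, max_gap=10):
    """Fill NaN gaps in a 1D marker trajectory using cubic interpolation.

    Gaps longer than `max_gap` frames are left unfilled, mirroring the
    maximum-gap-size setting described above (10 frames for the dancers,
    100 frames for the participants).
    """
    traj = np.asarray(trajectory, dtype=float).copy()
    valid = ~np.isnan(traj)
    # Fit a cubic spline through all observed frames (frame index -> value).
    spline = CubicSpline(np.flatnonzero(valid), traj[valid])

    # Group the missing frame indices into contiguous runs (the gaps).
    missing = np.flatnonzero(~valid)
    if missing.size == 0:
        return traj
    gaps = np.split(missing, np.where(np.diff(missing) > 1)[0] + 1)
    for gap in gaps:
        if len(gap) <= max_gap:      # fill short gaps only
            traj[gap] = spline(gap)
    return traj

# Example: one coordinate of a marker with a three-frame occlusion.
x = np.array([0.0, 0.1, np.nan, np.nan, np.nan, 0.5, 0.6])
print(fill_gaps_cubic(x))  # the three NaNs are replaced by spline values
```

Since each marker has x, y, and z trajectories, such a function would be applied per coordinate.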

3.2. Projection Screen Visualization

An alternative way of learning the dance was chosen to be similar to what learners typically use (a TV or computer monitor). In our case, instead of a large TV monitor, a projection screen was used. The same application was ported to a computer and projected on the screen, and interaction was provided using a Bluetooth mouse. The available options were the same as in the learning mode of the AR application. To pause the dance, participants had to press the left button, and to change the speed they had to press the right button. To go to the previous or the next segment, they had to hold and release the left or right button, respectively. Figure 4 depicts a user interacting with the application on the projection screen during the learning phase, and Figure 5 shows a dance sequence of the female avatar presented to the users on the projection screen.

3.3. Mobile Augmented Reality Visualization

A mobile AR prototype application was developed for assisting the process of learning folk dances [29]; a brief overview is shown in Figure 3. The implementation was based on the ARCore platform because it provides motion tracking, environmental understanding, and light estimation [30]. Motion tracking allows the phone to track its position relative to the world; this technology is used to identify features and to track how they move over time. In addition, ARCore can detect flat surfaces, e.g., the floor, which is what was used in this research.
To facilitate learning in AR, an avatar of the professional dancer performing the folk dance was superimposed in front of the user, as shown in Figure 6. The avatar was placed at a user-defined position. To ensure that the avatar stayed in the same pose (position and orientation), a virtual anchor was used. This allows the user to walk around the avatar and examine the dance from different viewpoints and angles.

4. Methodology

4.1. Procedure

First of all, the professional folk dancers were recorded and the animations were exported in fbx file format. Using Adobe Fuse CC [31], male and female avatars were created and uploaded to Mixamo [32]. The characters were rigged using the Auto-Rigger and downloaded in fbx format. In Unity3D [33], the exported animations were attached to the avatars and added to the application.
A comparative user study was conducted to assess the effectiveness of the AR application. Participants were divided into two groups: one group used a Google Cardboard head-mounted viewer for the AR application, and the other group used a back-projection screen. Both applications consist of a showcase and a learning mode, and the AR application can be used in portrait and landscape orientation. In showcase mode, users can watch the whole dance without any interaction before they start learning. The idea is to introduce users to the dance so they can see what they are expected to learn and reproduce. After this step, they can move on to the learning mode. In this mode, the dance is split into two parts, since the dance consists of a singing and a dancing phase. Users can choose to watch the dance part by part or the whole dance at once.
In both cases, a user interface is available. Figure 6 shows the user interface available in the landscape mode of the AR application. The part of the interface shown at the top of the screen on the back-projection screen and in the portrait mode of the AR application is the same for the showcase and learning modes, while the panel at the bottom changes depending on the selected mode. The mode can be switched by selecting the matching button. In the showcase mode, users can pause/play the dance, change the speed of the animation, and change the current part of the dance using a slider. In the learning mode, users can also pause/play the dance and change the animation speed; instead of a slider, two buttons (previous and next) can be used to switch between parts of the dance. It is also possible to play all the steps. In addition, textual instructions are available (e.g., six times left hand swing). Using the interface, it is possible to switch between the male and female version of the dance and to move the avatar. If a user wants to move the avatar, after the corresponding button is pressed, a message prompts them to tap on the green surface to place the avatar at the desired position.

4.2. Participants

A total of 20 healthy participants (12 males and 8 females), without prior knowledge of the dance (Pašovská sedlcká), were recruited for the study. The experiment was approved by the Research Ethics Committee of Masaryk University (reference number EKV-2019-067).
One of the participants was over 50 years old, while the rest were between 18 and 49 years old. The number of participants using the AR application and the projection screen was equal, with 10 in each group. Two participants were excluded: one due to technical issues and the other because their results were an outlier. The order of male and female participants and the assignment to the AR application or the projection screen were randomized, as shown in Table 1. The duration of the experiment was approximately 60 min. None of the participants reported any problems, and all successfully completed the task. Participants were informed that they could drop out at any time during the experiment. All participants were informed about the task and gave their written consent.

4.3. Data Collection

Data collection was done using an optical motion capture system with passive markers, OptiTrack [25], with the proprietary software Motive. Our configuration consists of 16 Prime 13W cameras and suits with 37 passive markers. Participants were asked to wear a suit during the training and performing phases, and data were collected during both. The frame rate was set to 120 frames per second.
Participants were first briefed about the experiment and given the consent form and the pre-experiment questionnaire to fill in. After that, they were instructed to put on a suit with markers and to move to the central position of the motion capture area. Experimenters placed the markers at the determined positions on the participant's body, and a skeleton was created in Motive. The next step was to explain how to interact with the application. Participants then had 10 min to learn the dance using the application. The dance in the application is short and contains only two movements and is therefore suitable for participants without prior knowledge of folk dances. A longer dance would require participants to spend more time learning it, and it might be difficult for some participants to remember. In addition, the experiment itself would then last longer than 60 min, which could result in participants withdrawing.
Since this dance is performed in pairs, male participants had to learn the male part of the dance, with the male avatar shown to them during the learning phase, and female participants had to learn the female version by observing the female avatar. The dance was manually synchronized with the music, which was provided in both applications. After the learning phase, participants took off the headset or the projection screen was turned off, and they were asked to perform the dance three times in a row with music only, still wearing the suit. During the performing phase, they had to begin every recording in a T-pose. The T-pose is the default pose for a 3D model's skeleton before it is animated; it is also known as a bind pose, where a character stands with their arms outstretched [34]. After the recording had started, participants moved to their starting position once the music was played. When participants finished the third performance, they took off the suit and were asked to fill in the post-experiment questionnaires.

4.4. Questionnaires

Four questionnaires were used in total. Before the experiment, a pre-exposure simulator sickness questionnaire (SSQ) [35] was administered. This four-point-scale questionnaire includes a list of symptoms rated from 0 = “None” to 3 = “Severe”. The same questionnaire was filled in after the experiment. The SSQ is the most commonly used measure of cybersickness symptoms; it was developed to measure sickness in the context of simulation and was derived from a measure of motion sickness [36]. Four representative scores are computed: the nausea-related sub-score (N), the oculomotor-related sub-score (O), the disorientation-related sub-score (D), and the total score (TS). They were calculated according to [35,37]. After the experiment, in addition to the post-exposure SSQ, participants were asked to fill in two more questionnaires. To characterize the experience in the environment, the presence questionnaire (PQ) [38] was used. This questionnaire originally consists of 24 questions on a seven-point scale, from 1 = “Not at all” to 7 = “Completely”. Since no haptics were included in the virtual environment, participants did not answer the questions related to that; they answered 22 questions, and the scores were computed as suggested in [38]. The NASA task load index (NASA-TLX) [39,40] was used to assess the cognitive workload of participants. It consists of six questions on 21-point scales, ranging from very low to very high for mental, physical, and temporal demand, effort, and frustration, and from perfect to failure for performance. At the end, a debriefing session was held, and participants were asked to give their comments and impressions.
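As a worked illustration of the SSQ scoring, the four representative scores follow the standard weighting published by Kennedy et al. [35]. A minimal Python sketch, assuming the raw 0–3 symptom ratings have already been summed per cluster (the symptom-to-cluster mapping of the published questionnaire is omitted here):

```python
def ssq_scores(raw_n, raw_o, raw_d):
    """Weighted SSQ scores from raw cluster sums (weights from Kennedy et al. [35]).

    raw_n, raw_o, raw_d: sums of the 0-3 ratings of the symptoms assigned
    to the nausea, oculomotor, and disorientation clusters, respectively.
    """
    return {
        "N":  9.54 * raw_n,                    # nausea-related sub-score
        "O":  7.58 * raw_o,                    # oculomotor-related sub-score
        "D": 13.92 * raw_d,                    # disorientation-related sub-score
        "TS": 3.74 * (raw_n + raw_o + raw_d),  # total score
    }

# Example: mild symptoms in each cluster.
print(ssq_scores(raw_n=2, raw_o=3, raw_d=1))
```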

5. Results

5.1. Motion Data Analysis Using Dynamic Time Warping

Motive allows users to export data in several different formats, e.g., fbx, CSV, bvh, and C3D. The filtered motion data were exported to CSV files and analyzed using MATLAB [41]. Quaternions were extracted from the CSV files, normalized, and stored in 19 matrices, one per joint, where the number of rows corresponds to the total number of frames. The initial T-pose was cut from each performance. For the data analysis, only data from the performing phase were used and compared with the data from the professional dancers. For each joint, the angle between the quaternions in two successive frames was calculated and saved in a vector. The corresponding vectors from the participants were then compared with the vectors from the professional dancers (e.g., the vector for the hip quaternions was compared with the corresponding hip vector) using dynamic time warping (DTW).
DTW is a well-known technique for finding an optimal alignment between two sequences [7,37]. A cost matrix containing the similarities between the compared motions is calculated, and the goal is to find the alignment between the signals with the minimal overall cost. MATLAB has a built-in DTW function, which was used in this work; its arguments were the two vectors being compared. The function returns the distance between the vectors, where a smaller distance means higher similarity of the signals. The total score for the whole body was computed as the mean of the values calculated for all joints [42].
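The analysis itself was run with MATLAB's built-in dtw function; the following Python sketch only illustrates the described pipeline (per-joint rotation change between successive frames, DTW per joint, mean over joints). The joint layout and array shapes are assumptions for the example:

```python
import numpy as np

def quat_angle(q1, q2):
    """Rotation angle (radians) between two unit quaternions (w, x, y, z)."""
    dot = abs(np.clip(np.dot(q1, q2), -1.0, 1.0))  # abs(): q and -q encode the same rotation
    return 2.0 * np.arccos(dot)

def frame_changes(quats):
    """Per-frame rotation change of one joint: angle between successive frames."""
    q = quats / np.linalg.norm(quats, axis=1, keepdims=True)  # normalize
    return np.array([quat_angle(q[i], q[i + 1]) for i in range(len(q) - 1)])

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def body_score(participant, dancer):
    """Mean per-joint DTW distance; lower means closer to the professional dancer.

    participant, dancer: dicts mapping a joint name to an (n_frames, 4)
    array of quaternions for that joint.
    """
    return float(np.mean([
        dtw_distance(frame_changes(participant[j]), frame_changes(dancer[j]))
        for j in participant
    ]))
```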

5.2. Motion Capture Data Comparison

The filtered data were analyzed as described in Section 5.1. Our hypothesis is that the mobile AR application is better for learning how to dance than a large projection screen. The mean values and standard deviations (SD) per device for the individual best scores (best performance AR and best performance projection screen (PS)) and for the average of the three performances (average AR and average PS), obtained from the filtered data, are shown in Table 2; a lower score means a better result. Statistical tests were performed using IBM SPSS. A Shapiro–Wilk test was performed on the best performance and average scores per type of visualization; the data were normally distributed except for the best performance PS scores (p = 0.027), presented in Figure 7. An independent-samples t-test was used to examine significant differences between the average scores and the best performance scores with respect to the used device (AR application or 3D animation on the projection screen), and the p-values were read from the output table.
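The tests were run in IBM SPSS; an equivalent check in Python with scipy.stats (a sketch, with placeholder score arrays) would be:

```python
from scipy import stats

def compare_groups(ar_scores, ps_scores, alpha=0.05):
    """Shapiro-Wilk normality check per group, then an independent-samples t-test."""
    for name, scores in (("AR", ar_scores), ("PS", ps_scores)):
        w, p = stats.shapiro(scores)
        verdict = "normal" if p > alpha else "non-normal"
        print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f} ({verdict})")
    t, p = stats.ttest_ind(ar_scores, ps_scores)
    print(f"Independent-samples t-test: t = {t:.3f}, p = {p:.3f}")

# Example with placeholder per-participant DTW scores (10 per group).
compare_groups(
    ar_scores=[2100, 2050, 2200, 1980, 2150, 2080, 2120, 2010, 2090, 2060],
    ps_scores=[2250, 2180, 2300, 2120, 2210, 2160, 2140, 2230, 2190, 2170],
)
```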
Table 2 shows the mean values and SD of the scores for the filtered data per device. The mean values for the AR application are lower, but without a significant difference (p = 0.682 for the average scores and p = 0.891 for the best performances).
Figure 8 shows a device-based comparison of the different body parts of the participants. The greatest impact on the total scores came from the left and right arm and the left and right leg for both AR and the back-projection screen, so the comparison is based on these body parts. Participants who used AR performed better for the right arm, while participants who used the back-projection screen performed better for the left arm. Again, this supports the finding that there is no significant difference between AR and the back-projection screen.

5.3. Questionnaires Evaluation

Pre- and post-experiment SSQ were used to examine cybersickness symptoms, since one group of participants used AR. Statistical analysis was done for the four representative scores, shown in Figure 9; the mean value and standard deviation of the scores per device were calculated. Although there is no significant difference, the mean values for the oculomotor, disorientation, and total scores for AR on the phone were higher after the experiment, indicating that participants experienced the symptoms more strongly once the experiment was done. The largest difference was in the disorientation score.
In the NASA-TLX questionnaire, the mean values for mental demand and performance were higher for the phone than for the projection screen, but again without a significant difference. According to this questionnaire, participants experienced almost the same level of physical demand, while effort, temporal demand, and frustration were lower for the participants who used the AR application. The results from this questionnaire are shown in Figure 10. Participants who used the AR application rated themselves as more successful in completing the task than participants who used the application on the projection screen, since a lower score on the scale represents a better result. Regarding the PQ, again without a significant difference, participants reported the experience of using the AR application to be more realistic, with better interface quality and better interaction with the environment.
In terms of qualitative feedback, all participants noted that they enjoyed the experiment. None of them had ever used mobile AR in combination with a motion tracking system. Of course, a significant factor here is the ’wow’ effect that is common among users inexperienced with AR applications. Another point commented on as impressive was the graphical representation of the avatar (by both male and female participants). Some participants underestimated how well they performed in the end; since they did not receive any feedback, their self-assessment was highly subjective and dependent on their self-confidence. In the end, all participants managed to remember and perform the dance without any difficulties. According to their comments, they found the experiment a good and refreshing experience.
Moreover, four participants (one who used the projection screen and three who used AR) mentioned that they could not see the avatar of the professional dancer while they were spinning. One participant complained that parts of the suit were sticking together during the movement, and another reported feeling odd with the suit on. In addition, one participant did not realize that they had to follow the music. Seven participants from both groups described the experiment as fun, nice, and refreshing. One participant wrote that the organization of the experiment was good. One participant suggested that it would be good to divide the moves according to body parts, e.g., legs and hands, and in the end to see the full-body movement. Two participants found the interaction using a Bluetooth mouse clumsy and had trouble during the learning part, since they had to hold the mouse.

6. Discussion

Participants’ motion capture data were compared with ground truth data from the professional dancers to measure the similarity between their movements. The determination of ground truth data for motion capture is still an open research question; human motion data obtained using an optical motion capture system are considered the gold standard. In the case of the professional dancers, the maximum gap size was set to 10 frames, and after filtering all gaps were filled, from which we can conclude that the gap sizes were less than or equal to 10 frames. Since the frame rate of the system was 120 Hz, we consider these data good enough to be used as ground truth.
The prototype mobile AR application was designed to provide easier interaction and an environment in which participants could examine the dance from different angles and distances, which should facilitate learning. However, most participants did not use this advantage. In addition, interaction in AR is based on swipe gestures, which were supposed to be natural for participants, since they all use smartphones in everyday life. A possible reason why they struggled with the commands is that the phone was placed in the head-mounted cardboard. After the explanation from the experimenter at the beginning, participants did not have any time to try and repeat the commands before the learning phase started. Some participants spent a significant amount of the time intended for learning the dance on learning how to interact with the application. This is supported by the results from the PQ, where the score related to ’Possibility to Act’, which includes questions like ’How much were you able to control events?’ and ’How completely were you able to actively survey or search the environment using vision?’, was in favor of the application presented on the projection screen.
In our experiment, some participants reported feeling some symptoms more strongly after the experiment, but without a significant difference. Since the folk dance they had to learn includes spinning, this could affect the answers: participants who used the projection screen also had to spin and could experience vertigo or dizziness. This finding is in line with [43], which reported that only a few participants experienced minimal discomfort. Moreover, Pettijohn et al. [17] reported that using an AR/VR headset does not exacerbate motion sickness. It has also been indicated that variation of the latency over time can result in decreased performance and increased simulator sickness [44].
For both testing modes (AR and projection screen), the main limitation was avatar visualization within the field of view. When users move in space, the avatar does not follow them, so at some moments they may not see it. Interaction through the cutout in the Google Cardboard is another part to be improved: users could not see the cutout while they were in AR and had to remember where it was placed. Gesture recognition using the camera could be a better solution.

7. Conclusions

This paper examined the use of an AR application for learning folk dances. To assess the feasibility of the application, it was compared with a back-projection screen, which mimics traditional digital methods of learning how to dance. The experimental results, while showing no significant difference, indicate a tendency for the AR application to be better than the 3D animation presented on the projection screen. The results from the NASA-TLX support the expectation of higher mental demand in AR, since participants are immersed in the environment during the experiment. Even though the tested sample (20 participants) was too small to draw more generalized conclusions, our results indicate how AR can be used as an assisting tool for teaching folk dances. The purpose is not to replace the teacher, but to provide a tool that could help learners adapt and learn faster and more easily. In the future, feedback will be provided at the end of each performance in the form of a score. In addition, more dances will be incorporated into the application and evaluated.

Author Contributions

Conceptualization, I.K. and F.L.; Methodology, I.K.; Validation, I.K. and F.L.; Formal analysis, I.K.; Investigation, I.K.; Resources, I.K.; Data curation, I.K.; Writing—original draft preparation, I.K.; Writing—review and editing, F.L.; Visualization, I.K. and F.L.; Supervision, F.L.; Project administration, F.L.; Funding acquisition, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the H2020 European Union-funded project TERPSICHORE: Transforming Intangible Folkloric Performing Arts into Tangible Choreographic Digital Objects, under grant agreement 691218.

Acknowledgments

This experiment was approved by The Research Ethics Committee of Masaryk University (reference number EKV-2019-067).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Weng, D.; Cheng, D.; Wang, Y.; Liu, Y. Display systems and registration methods for augmented reality applications. Optik 2012, 123, 769–774.
2. Iqbal, J.; Singh-Sidhu, M. A Framework for Correcting Human Motion Alignment for Traditional Dance Training Using Augmented Reality. In Proceedings of the Knowledge Management International Conference (KMICe), Chiang Mai, Thailand, 29–30 August 2016; pp. 59–63.
3. Kico, I.; Grammalidis, N.; Christidis, Y.; Liarokapis, F. Digitization and Visualization of Folk Dances in Cultural Heritage: A Review. Inventions 2018, 3, 72.
4. Stavrakis, E.; Aristidou, A.; Savva, M.; Himona, S.L.; Chrysanthou, Y. Digitization of Cypriot Folk Dances. In Lecture Notes in Computer Science, Proceedings of the 4th International Conference (EuroMed 2012), Limassol, Cyprus, 29 October–3 November 2012; Ioannides, M., Fritsch, D., Leissner, J., Davies, R., Remondino, F., Caffo, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 404–413.
5. Drobny, D.; Borchers, J. Learning Basic Dance Choreographies with Different Augmented Feedback Modalities. In Proceedings of the Extended Abstracts on Human Factors in Computing Systems (CHI ’10), Atlanta, GA, USA, 10–15 April 2010; ACM Press: New York, NY, USA, 2010; pp. 3793–3798.
6. Kennedy, R.S.; Stanney, K.M.; Dunlap, W.P. Duration and Exposure to Virtual Environments: Sickness Curves During and Across Sessions. Presence Teleoperators Virtual Environ. 2000, 9, 463–472.
7. Holt, G.T.; Reinders, M.J.T.; Hendriks, E. Multi-Dimensional Dynamic Time Warping for Gesture Recognition. In Proceedings of the Annual Conference of the Advanced School for Computing and Imaging, Heijen, The Netherlands, 13–15 June 2007.
8. Magnenat-Thalmann, N.; Protopsaltou, D.; Kavakli, E. Learning How to Dance Using a Web 3D Platform. In Advances in Web Based Learning—ICWL 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–12.
9. Bakogianni, S.; Kavakli, E.; Karkou, V.; Tsakogianni, M. Teaching Traditional Dance Using E-learning Tools: Experience from the WebDANCE Project. In Proceedings of the 21st World Congress on Dance Research, Athens, Greece, 5–9 September 2007.
10. Hamari, J.; Koivisto, J.; Sarsa, H. Does Gamification Work?—A Literature Review of Empirical Studies on Gamification. In Proceedings of the 47th Hawaii International Conference on System Science, Waikoloa, HI, USA, 6–9 January 2014; IEEE Computer Society: Washington, DC, USA, 2014; pp. 3025–3034.
11. Laraba, S.; Tilmanne, J. Dance performance evaluation using hidden Markov models. Comput. Animat. Virtual Worlds 2016, 27, 321–329.
12. Kitsikidis, A.; Dimitropoulos, K.; Ugurca, D.; Baycay, C.; Yilmaz, E.; Tsalakanidou, F.; Douka, S.; Grammalidis, N. A Game-Like Application for Dance Learning Using a Natural Human Computer Interface. In Lecture Notes in Computer Science, Proceedings of the International Conference on Universal Access in Human-Computer Interaction, Access to Learning, Health and Well-Being, Los Angeles, CA, USA, 2–7 August 2015; Antona, M., Stephanidis, C., Eds.; Springer: Cham, Switzerland, 2015; pp. 472–482.
13. Kyan, M.; Sun, G.; Li, H.; Zhong, L.; Muneesawang, P.; Dong, N.; Elder, B.; Guan, L. An Approach to Ballet Dance Training through MS Kinect and Visualization in a CAVE Virtual Reality Environment. ACM Trans. Intell. Syst. Technol. 2015, 6, 1–23.
14. Chan, C.P.J.; Leung, H.; Tang, K.T.J.; Komura, T. A Virtual Reality Dance Training System Using Motion Capture Technology. IEEE Trans. Learn. Technol. 2011, 4, 187–195.
15. Aristidou, A.; Stavrakis, E.; Charalambous, P.; Chrysanthou, Y.; Himona, S.L. Folk Dance Evaluation Using Laban Movement Analysis. ACM J. Comput. Cult. Herit. 2015, 8, 1–20.
16. Uzunova, Z.; Chotrov, D.; Maleshkov, S. Virtual Reality System for Motion Capture Analysis and Visualization for Folk Dance Training. In Proceedings of the 12th Annual International Conference on Computer Science and Education in Computer Science (CSECS 2016), Fulda, Germany, 1–2 July 2016.
17. Pettijohn, K.A.; Lukos, J.R.; Peltier, C.; Norris, J.N.; Biggs, A.T. Comparison of Virtual Reality and Augmented Reality: Safety and Effectiveness; Technical Report; Naval Medical Research Unit Dayton: Wright-Patterson AFB, OH, USA, 2019. Available online: https://apps.dtic.mil/dtic/tr/fulltext/u2/1068493.pdf (accessed on 25 October 2019).
18. Mousas, C. Performance-Driven Dance Motion Control of a Virtual Partner Character. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; pp. 57–64.
19. Hsiao, K.F.; Chen, N.S. The Development of the AR-Fitness System in Education. In Edutainment Technologies. Educational Games and Virtual Reality/Augmented Reality Applications, Edutainment 2011; Lecture Notes in Computer Science; Chang, M., Hwang, W.Y., Chen, M.P., Müller, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6872, pp. 2–11.
20. Saxena, V.V.; Feldt, T.; Goel, M. Augmented Telepresence as a Tool for Immersive Simulated Dancing in Experience and Learning. In Proceedings of the India HCI 2014 Conference on Human Computer Interaction (IndiaHCI ’14), New Delhi, India, 7–9 December 2014; ACM Press: New York, NY, USA, 2014; pp. 86–89.
21. Anderson, F.; Grossman, T.; Matejka, J.; Fitzmaurice, G. YouMove: Enhancing Movement Training with an Augmented Reality Mirror. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST ’13), St. Andrews, Scotland, UK, 9–11 October 2013; ACM Press: New York, NY, USA, 2013; pp. 311–320.
22. Yan, S.; Ding, G.; Guan, Z.; Sun, N.; Li, H.; Zhang, L. OutsideMe: Augmenting Dancer’s External Self-Image by Using a Mixed Reality System. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15), Seoul, Korea, 18–23 April 2015; ACM Press: New York, NY, USA, 2015; pp. 965–970.
23. Clay, A.; Couture, N.; Nigay, L.; de la Rivière, J.; Martin, J.; Courgeon, M.; Desainte-Catherine, M.; Orvain, E.; Girondel, V.; Domengero, G. Interactions and Systems for Augmenting a Live Dance Performance. In Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality—Arts, Media, and Humanities (ISMAR-AMH), Atlanta, GA, USA, 5–8 November 2012; pp. 29–38.
24. Kico, I.; Liarokapis, F. A Mobile Augmented Reality Interface for Teaching Folk Dances. In 25th ACM Symposium on Virtual Reality Software and Technology (VRST ’19); Tomas, T., Simeon, S., Deborah, R., Anton, B., Thierry, D., Torsten, K., Huyen, N., Shigeo, M., Yuichi, I., Richard, S., et al., Eds.; ACM: New York, NY, USA, 2019; Volume 47, p. 2.
25. OptiTrack—Motion Capture Systems. 2019. Available online: https://optitrack.com/ (accessed on 10 July 2019).
26. Kico, I.; Liarokapis, F. Comparison of Trajectories and Quaternions of Folk-Dance Movements Using Dynamic Time Warping. In Proceedings of the 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Vienna, Austria, 4–6 September 2019; pp. 1–4.
27. Motive—Optical Motion Capture Software. 2019. Available online: https://optitrack.com/products/motive/ (accessed on 3 December 2019).
28. HCI Lab—Datasets. HCILab 2018. Available online: http://hci.fi.muni.cz/datasets/ (accessed on 14 December 2019).
29. Ivanicky, J. Gamification Techniques for Augmented Reality Dancing. Bachelor’s Thesis, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 2019.
30. ARCore. ARCore Overview. 2019. Available online: https://developers.google.com/ar/discover/ (accessed on 10 July 2019).
31. Adobe Fuse. Create 3D Models, Characters. 2019. Available online: https://www.adobe.com/products/fuse.html (accessed on 10 July 2019).
32. Mixamo. 2019. Available online: https://www.mixamo.com/#/ (accessed on 10 July 2019).
33. Unity. Unity Real-Time Development Platform. 2019. Available online: https://unity.com/ (accessed on 10 July 2019).
34. Autodesk. Bind Pose—Maya LT. 2018. Autodesk Knowledge Network. Autodesk, Inc. Available online: https://knowledge.autodesk.com/support/maya-lt (accessed on 3 December 2019).
35. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. Int. J. Aviat. Psychol. 1993, 3, 203–220.
36. Stone, B.W. Psychometric Evaluation of the Simulator Sickness Questionnaire as a Measure of Cybersickness. Ph.D. Thesis, Iowa State University, Ames, IA, USA, 2017.
37. Sanguansat, P. Multiple Multidimensional Sequence Alignment Using Generalized Dynamic Time Warping. WSEAS Trans. Math. 2012, 11, 684–694.
38. Witmer, B.G.; Singer, M.J. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence Teleoperators Virtual Environ. 1998, 7, 225–240.
39. Hart, S.G. Nasa-Task Load Index (NASA-TLX); 20 Years Later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 904–908.
40. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Adv. Psychol. 1988, 52, 139–183.
41. MATLAB. MathWorks. 2019. Available online: https://www.mathworks.com/products/matlab.html (accessed on 10 July 2019).
42. Hachaj, T.; Ogiela, M.R.; Piekarczyk, M.; Koptyra, K. Averaging Three-Dimensional Time-Varying Sequences of Rotations: Application to Preprocessing of Motion Capture Data. In Image Analysis; Sharma, P., Bianchi, F.M., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 17–28.
43. Vovk, A.; Wild, F.; Guest, W.; Kuula, T. Simulator Sickness in Augmented Reality Training Using the Microsoft HoloLens. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018; ACM Press: New York, NY, USA, 2018; Volume 209, p. 9.
44. Moss, J.D.; Muth, E.R. Characteristics of head-mounted displays and their effects on simulator sickness. Hum. Factors 2011, 53, 308–319.
Figure 1. Workflow for the learning procedure.
Figure 2. Workflow for the data analysis.
Figure 3. Dance animation examples of the male avatar presented in the augmented reality (AR) application.
Figure 4. Participant using the projection screen.
Figure 5. Dance animation of the female avatar presented on the projection screen.
Figure 6. User interface from the user’s perspective in AR.
Figure 7. Boxplots of the best performance scores for the filtered data.
Figure 8. Device-based comparison of raw and filtered data for different body parts.
Figure 9. Average scores calculated from the pre- and post-experiment simulator sickness questionnaire (SSQ) for the AR application.
Figure 10. Comparison of average scores for the NASA task load index (NASA-TLX) questionnaire.
Table 1. Age and gender of participants and used type of visualization.

Participant   Gender   Age Group   Type of Visualization
1             Female   18–25       AR application
2             Male     18–25       AR application
3             Male     34–41       AR application
4             Male     18–25       AR application
5             Male     18–25       Projection Screen
6             Female   18–25       Projection Screen
7             Female   42–49       Projection Screen
8             Male     Over 50     Projection Screen
9             Female   18–25       Projection Screen
10            Female   26–33       Projection Screen
11            Male     18–25       Projection Screen
12            Female   18–25       AR application
13            Male     18–25       Projection Screen
14            Male     26–33       AR application
15            Female   18–25       AR application
16            Female   26–33       AR application
17            Male     26–33       AR application
18            Male     18–25       Projection Screen
19            Male     26–33       AR application
20            Male     26–33       Projection Screen
Table 2. Performance comparison between AR and PS.

Scores                 Mean         SD
Best performance AR    1954.81963   35.93332
Best performance PS    1982.77285   40.72122
Average AR             2087.31173   55.38190
Average PS             2170.41435   22.37585
