Article

The Effects of Synthesizing Music Using AI for Preoperative Management of Patients’ Anxiety

Yeong-Joo Hong, Jaeyeon Han and Hyeongju Ryu
1 Korea Health Industry Development Institute, 187 Osongsaengmyeong 2-ro, Osong-eup, Heungdeok-gu, Cheongju-si 28159, Korea
2 Military Band, Ministry of National Defense, Seoul 04383, Korea
3 Office of Hospital Information, Seoul National University Hospital, Seoul 03080, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8089; https://doi.org/10.3390/app12168089
Submission received: 28 June 2022 / Revised: 8 August 2022 / Accepted: 8 August 2022 / Published: 12 August 2022

Abstract

Before undergoing surgery, patients are likely to complain of anxiety to varying degrees. To address this issue, we designed and implemented a composition program using TensorFlow Recurrent Neural Networks (RNNs) and selected music for it to learn from. The nurses’ preferences and needs regarding the currently used sound sources were assessed using the Geneva Emotional Music Scales-9 (GEMS-9) tool and focus group interview (FGI) methods with nurses at the operating room entrance. The FGI and GEMS-9 preference analysis were conducted on 31 January 2019 in an operating room simulation center with nurses who currently work in the operating room, had experience managing the operating room’s background music, and volunteered to participate in this study. Interviews were held with a total of three nurses, and the data were analyzed using qualitative thematic analysis. When the 16 sample sound sources were evaluated with GEMS-9, the sad–happy dimension had the highest average value, 4.00, while tension was lower at 1.48. Happy, Joy, and Peaceful were classified as appropriate for background music in the operating room. The top six songs were then selected as suitable by calculating the difference between these values and those of Sad, Tension, Tender, Nostalgia, and Trance, which were judged to be inappropriate, together with Power and Wonder. The selected songs were two jazz songs, three bossa nova songs, and two classical piano songs. The results of this study show that music used in the operating room should have a slow tempo, as in slow classical, piano, string, natural-acoustic, and new age music. Music consisting only of instruments (preferably in small arrangements of fewer than five instruments) is preferred over music containing human vocals. Based on the study findings, the conditions of the sound source to be used for learning were suggested after consultation with a music expert.

1. Introduction

Before a patient undergoes surgery, they are likely to complain of anxiety to varying degrees [1,2]. Preoperative anxiety is also associated with adverse consequences, such as emergence delirium, higher postoperative pain levels, and postoperative maladaptive behavioral changes [3]. Specific anxiety factors include pain from surgery, fear of loss of consciousness under anesthesia, physical damage and changes after surgery, and thinking about worst-case scenarios such as death due to a poor surgical prognosis. Furthermore, this feeling of uneasiness begins far earlier than the scheduled date of surgery [4,5] and is aggravated at the entrance to the operating room, where patients are separated from their families. Physical stress can peak in this complex medical environment as patients encounter unfamiliar surroundings.
Therefore, various interventional therapies have been applied to relieve surgical anxiety, but among them, music therapy is non-invasive and economical [6,7]. It has been reported as an intervention method that positively affects physical changes related to anxiety [8].
Compared with pharmacological sedation, music therapy is a cost-effective and risk-free alternative that can provide physical, emotional, and cultural benefits to patients and their families [9,10,11].
A study by Graff cited by the American Society of Regional Anesthesia and Pain Medicine suggests that common characteristics of calming music are melodies without excessive fluctuation, tempos in the range of 60–80 beats per minute, and the absence of percussion [12,13]. This approach is called “binaural beat-infused music”, and electroencephalography can also be used to check the difference in frequency between the tones [14].
However, musical interventions used in the operating room usually do not follow such psychological relaxation standards. In practice, the music indirectly reflects the preferences of the operating room staff or patients: a nurse selects and plays quiet music in the operating room without complying with any particular standard.
In this study, criteria for music in the pre-operation room setting were established through a literature review and group interviews. Based on the software development life cycle (SDLC) model, music without copyright issues was produced through machine learning. In addition, a focus group interview (FGI) was conducted with nurses involved in preparing patients about to undergo surgery at the entrance of the operating room; music experts then selected appropriate music according to the FGI results. After finalizing the chosen songs, music suitable for preoperative patients was generated using deep learning, implemented with Python libraries such as Numerical Python (NumPy), Pandas, MessagePack (msgpack), glob, tqdm (from the Arabic word taqaddum, meaning “progress”), and TensorFlow.
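The study’s actual scripts are not published; the following is a minimal sketch of how the libraries named above could be assembled for the preprocessing step. The corpus directory, the placeholder encoding step, and the cache file name are illustrative assumptions only.

```python
# Illustrative sketch only: the study's actual pipeline is not published.
# These are the Python libraries named in the text, in their standard distributions.
import glob                      # collect MIDI file paths for the training corpus
import numpy as np               # numerical arrays for encoded note sequences
import pandas as pd              # tabulate song metadata and GEMS-9 ratings
import msgpack                   # compact serialization of preprocessed data
from tqdm import tqdm            # progress bars while iterating over the corpus
import tensorflow as tf          # RNN definition and training

# Hypothetical corpus location; the study's directory layout is not reported.
midi_paths = sorted(glob.glob("training_midi/*.mid"))

encoded_songs = []
for path in tqdm(midi_paths, desc="Encoding MIDI files"):
    # Placeholder: a real pipeline would parse each MIDI file into note events here.
    encoded_songs.append({"path": path})

# Cache the preprocessed corpus so later training runs skip the parsing step.
with open("corpus.msgpack", "wb") as f:
    f.write(msgpack.packb(encoded_songs))
```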

2. Materials and Methods

This study was developed according to the four phases of the software development life cycle: analysis, design, implementation, and evaluation. Due to the study process’s limitations, it ended at the implementation stage.
The waterfall model of the software life cycle was used to guide the development. The waterfall model involves the following phases in sequential order: gathering all possible requirements of the system in a requirement specification document; designing the system; implementing each unit and testing its functionality, which is referred to as unit testing; and integrating all of the units developed in the implementation phase into a system after each unit has been tested. After integration, the entire system is tested for faults and failures. Once functional and non-functional testing finishes, the product is deployed. The last step is maintenance [15,16].
SDLC is a methodology for developing high-quality software by following well-defined procedures. The waterfall model is basic and easy to comprehend, its entry and exit criteria are well defined, and, as a whole, it can deliver software of good quality through a systematic procedure, which is especially useful when planning a new type of system or software [17,18]. The model applied here had three stepwise phases, as shown in Figure 1, and each phase is described below.

2.1. Analysis

The analysis was carried out using an existing system analysis and by conducting a preference survey.
In order to synthesize music, the following process was used. First, the environmental characteristics of the existing sound system were analyzed and the current usage of the music was organized; the environmental characteristics collected included the location and size of the space where the music is played. The characteristics of the music patterns currently played in the operating room and the list of music played were also collected. Based on this characteristic information, the existing system was analyzed by a music expert who is a professional musician and a doctoral student in music studies. Based on the analysis results, sample sound sources for the Geneva Emotional Music Scales-9 were selected to reflect the characteristics of music in the operating room.
For the preference analysis, interviews were conducted with nurses and a GEMS-9 evaluation was used. The interview was performed through an FGI using semi-structured questions about the current status of music used in the operation preparation room. Specifically, the following questions were asked: ‘What are the limitations of the conventional method of providing music for the patients who are expecting surgery at the operating room entrance?’, ‘What is the expected effect if the automatic selection system for the operating room (waiting space) is applied? Would it be a better solution compared to the conventional way?’ and ‘If the automatic selection system is applied, what functions would be helpful to the system?’.
Subsequently, a sound analysis using the GEMS-9 tool was performed. GEMS is the first model specifically designed to capture the emotions evoked by listening to music, with nine musical categories evoking different emotions. It is more sophisticated than other categorization tools based on non-music domains. Moreover, this representative measurement tool was the most cited between 2008 and 2013 in the American Psychological Association journal Emotion [11,19]. The participants listened to 16 sample sound sources in sequence and evaluated them using the GEMS-9 tool. The modified GEMS-9 tool was translated into Korean for convenience.
An FGI and GEMS-9 preference analysis were conducted on 31 January 2019 in an operating room simulation center with nurses who currently work in the operating room, had experience managing the operating room’s background music, and volunteered to participate in this study. According to the literature, the ideal sample size for a focus group interview varies [20]. Interviews were held with a total of three operating-room nurses in the focus group. The data were analyzed using qualitative thematic analysis, classifying and arranging the acquired data into related ideas [21].
In accordance with Article 23 of the Personal Information Protection Act of the Republic of Korea, this study did not collect or record sensitive information, and the research subjects were not individually specified; therefore, the study was exempt from approval. Nevertheless, for ethical research, consent was obtained from the participants, who voluntarily agreed to take part in the study. All research processes were conducted in accordance with the Declaration of Helsinki.

2.2. Design

The musical genres were selected according to the analysis result. Moreover, the corresponding genres from the artificial intelligence (AI) implementation were listed for the background music in the operating room. Figure 2 shows the sound source generation process using deep learning.
TensorFlow is an open-source software library for programming dataflows for various tasks. It is a symbolic math library used in machine learning applications such as neural networks.
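The paper does not report the network architecture it used; as an illustration of how a TensorFlow RNN for music generation is typically defined, the following is a minimal sketch of a next-pitch prediction model. The vocabulary size, sequence length, and layer sizes are assumptions, not values from the study.

```python
import tensorflow as tf

VOCAB_SIZE = 128   # assumption: one token per MIDI pitch (0-127)
SEQ_LEN = 64       # assumption: number of past notes fed to the model

# Minimal next-note RNN: embed pitch tokens, run an LSTM over the window,
# and predict a probability distribution over the next pitch.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

A model of this kind would then be trained with model.fit on (input window, next pitch) pairs extracted from the selected songs.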

2.3. Development

Deep learning was performed in the subsequent development stage. However, the results generated were difficult to use immediately, so experts carried out secondary processing. The music used in the operating room was composed of the classical, jazz, and bossa nova genres without any special rules. A list of music for the interview was selected based on the preceding literature.

3. Results

3.1. Analysis

The results from summarizing the nurses’ opinions, collected through an FGI about the problems of the existing method and their views on the new method, are as follows. Because the nurse working in the operation preparation room turns on the music according to their own preference, only a few sound tracks are repeated; sometimes, the music is not even turned on. An advantage of the new method is that patients would be able to listen to new music that does not repeat. As the nurses do not have to worry about the playlist, they can also focus on other medical issues. The sound system is currently limited to the operating room entrance area; it would be better if the music were played throughout the operating room hallway as well, since patients feel most anxious while being transported by cart through the hallway.
As a result of analyzing the characteristics of the sound sources currently used in the operating room, music using pianos, string instruments, pad sounds, natural sounds, etc., as in classical and new age music, was deemed sentimental rather than participatory, with a slow, barely perceptible tempo. Moreover, these types of music consisted only of instruments rather than including human voices and used small arrangements of fewer than five instruments. Based on the comprehensive analysis of the existing system, the specific selection criteria and the music list for GEMS-9 were confirmed (Table 1).
Table 2 presents the results of using the GEMS-9 tool to evaluate the 16 sample sources. The sad–happy dimension had the highest average value, 4.00, while tension was lower at 1.48.
Among the GEMS values measured for each song, Happy, Joy, and Peaceful were classified as appropriate for background music in the operating room. In addition, the top six songs were selected as suitable by calculating the difference between these values and those of Sad, Tension, Tender, Nostalgia, and Trance, which were judged to be inappropriate, together with Power and Wonder (see the footnote of Table 2). The selected songs included two jazz songs (6, 8), three bossa nova songs (9, 10, 11), and two classical piano songs (13, 14).
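The suitability score reported in the Calc column of Table 2 is defined in the table footnote as (Happy + Joy + Peaceful + Power + Wonder) − (Sad + Tension + Tender + Nostalgia + Trance). A minimal pandas sketch of that calculation is shown below; the ratings used here are illustrative placeholders, not the values reported in Table 2.

```python
import pandas as pd

# Illustrative per-song mean GEMS-9 ratings (placeholders, not the Table 2 values).
ratings = pd.DataFrame(
    {
        "Wonder":    [2.7, 1.7, 3.3],
        "Trance":    [2.3, 2.0, 3.7],
        "Power":     [3.0, 2.7, 2.7],
        "Tender":    [4.0, 1.7, 4.0],
        "Nostalgia": [1.0, 2.0, 1.7],
        "Peaceful":  [2.7, 1.7, 3.3],
        "Joyful":    [4.3, 2.0, 3.3],
        "Sad":       [1.0, 1.3, 1.0],
        "Tension":   [1.0, 1.7, 1.0],
        "Happy":     [5.3, 4.7, 5.7],
    },
    index=["Song A", "Song B", "Song C"],
)

# Suitability score from the Table 2 footnote:
# (Happy + Joy + Peaceful + Power + Wonder) - (Sad + Tension + Tender + Nostalgia + Trance)
positive = ratings[["Happy", "Joyful", "Peaceful", "Power", "Wonder"]].sum(axis=1)
negative = ratings[["Sad", "Tension", "Tender", "Nostalgia", "Trance"]].sum(axis=1)
scores = positive - negative

# The six highest-scoring songs would be selected as candidate background music.
print(scores.nlargest(6))
```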

3.2. Design

The bossa nova, jazz, and classical piano genres were chosen as learning data, and the music was learned using a TensorFlow RNN. A music list for learning was prepared: 50 sound sources were initially collected, and 48 were used after two sound sources with a different tempo were removed.
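The paper does not describe how the 48 songs were encoded for the RNN. Assuming each song has already been reduced to a sequence of integer MIDI pitches, a minimal sketch of building fixed-length (input window, next pitch) training pairs might look as follows; the window length and the randomly generated placeholder corpus are assumptions.

```python
import numpy as np

SEQ_LEN = 64  # assumption: must match the RNN's input window


def make_windows(pitches, seq_len=SEQ_LEN):
    """Slice one song's pitch sequence into (input window, next-pitch target) pairs."""
    inputs, targets = [], []
    for i in range(len(pitches) - seq_len):
        inputs.append(pitches[i : i + seq_len])
        targets.append(pitches[i + seq_len])
    return np.array(inputs), np.array(targets)


# Placeholder corpus standing in for the 48 selected songs.
corpus = [np.random.randint(48, 84, size=400) for _ in range(48)]

pairs = [make_windows(song) for song in corpus]
X = np.concatenate([p[0] for p in pairs])   # shape: (num_windows, SEQ_LEN)
y = np.concatenate([p[1] for p in pairs])   # shape: (num_windows,)
print(X.shape, y.shape)
```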

3.3. Development

Sections with long pauses or excessive tones at the beginning or end of a song were excluded from the training data. The cycle was unified, or only the repeated beat was learned as the correct answer, and machine learning was carried out separately for each theme when the tone or theme of the selected learning data differed. Based on several examinations of the learning results, the best results were obtained by playing a musical instrument digital interface (MIDI) file of a certain length consisting of the desired section and a tempo similar to the target genre. Consequently, a sound source was derived (Figure 3).
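The generation procedure itself is not published with the paper; the sketch below shows one common way a trained next-note RNN can be sampled autoregressively, seeded with a clip of the desired genre and tempo as described above. The function name, temperature parameter, and sampling scheme are assumptions, and exporting the resulting pitch list to a MIDI file would be a separate step.

```python
import numpy as np


def generate_pitches(model, seed, n_notes=200, temperature=1.0):
    """Sample pitches one at a time from a trained next-note RNN.

    `seed` is a list of SEQ_LEN integer pitches taken from a clip of the
    desired genre and tempo, which serves as the starting window.
    """
    window = list(seed)
    generated = []
    for _ in range(n_notes):
        probs = model.predict(np.array([window]), verbose=0)[0]
        # Temperature-scaled sampling: lower temperature -> more conservative notes.
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.sum(np.exp(logits))
        next_pitch = int(np.random.choice(len(probs), p=probs))
        generated.append(next_pitch)
        window = window[1:] + [next_pitch]   # slide the input window forward
    return generated
```

The generated pitch list could then be exported to MIDI and refined by a music expert, as was done in this study.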
The source of the MIDI sound returned was raw data obtained through mathematical calculations, and it sounded as if random notes were repeated (Figure 3a). The AI-generated sound source was difficult to use right away in the operating room, so an electronic music expert synthesized the music and made it more appropriate for use in the operating room (Figure 3b).

4. Discussion

In this study, a stepwise approach based on the SDLC was applied to improve quality while synthesizing music with AI for the preoperative management of patients’ anxiety. Nurses working at the operating room entrance were asked to assess their preferences using a GEMS-9 evaluation and an FGI in order to select the music to be used for machine learning. Based on the findings, the selected songs were included in the final database for machine learning. As a result, a raw MIDI file consisting of random notes without any particular regularity was obtained, but better results can be achieved with the assistance of music experts.
As a result of the existing system analysis, despite various research results suggesting that preoperative music has a significant effect on patients’ anxiety levels [8], the nurses in this study stated that music was not actively used in the medical setting. This is due to the tendency to prioritize aseptic technique and quick treatment in the operating room; consequently, the use of music is thought to have relatively little clinical significance and is regarded as less important [22,23]. However, before anesthesia is administered to a patient who is feeling extremely anxious about the surgery, being mentally stable and calm is important for a successful treatment outcome as well as for the surgical procedure itself. Therefore, playing appropriate music developed according to the software development life cycle model can be a new and effective approach.
The Geneva Emotional Music Scales were derived from confirmatory factor analyses of ratings of emotions evoked by various genres of music. According to the results of the preference analysis, each song has different characteristics. Based on the GEMS-9 results, bossa nova, jazz, and classical piano were selected as suitable genres. Our finding that bossa nova music is related to relaxation is consistent with previous studies [24,25]. Using the Geneva Emotional Music Scales can provide evidence-based results for artificially synthesizing music suitable for patients in the hospital setting.
The sound source obtained through the deep learning process in this study was not at a level that could be directly applied in the clinical field. This is due to the small number of sound sources used in deep learning. For practical use, the MIDI file needs to be converted to real instrument sounds using another music editing program; converting it to the original song tempo and adding an accompaniment or guitar alongside the main melody are also recommended. Most existing studies on surgery-related anxiety interventions selected music preferred by the patient or provided music arbitrarily chosen by a relevant practitioner [26,27]. However, staff in the operating room’s waiting area are occupied with tasks such as preparations for surgery, so selecting and providing personalized music is laborious, and it was difficult to find previous studies analyzing what criteria should be used when the same music is offered to all waiting patients. Therefore, this study will support nurses in selecting and providing appropriate music in the operating room’s waiting area and help reduce patients’ anxiety before surgery. Although this study focused only on the anxiety of patients, considering that the music is heard by everyone in the operating room, it is also necessary to consider the communication of medical staff, as was done in other studies on music preference [28,29].
Because this is an indirect study conducted through operating room nurses rather than a study of actual patients in the operating room’s waiting area, the results need to be observed directly in future work. However, as the first study to consider detailed music selection in the operating room’s waiting area and to use a method different from existing studies, namely machine learning, this study is significant for subsequent research [23]. Previous studies mainly focused on the effect of music on general patients. In this study, the synthesis of music using AI is not limited to simply producing random music; gathering medical and musical professionals’ ideas before synthesizing music using AI is a cutting-edge approach in interdisciplinary research. Therefore, further studies applying AI-synthesized music need to be carried out in the future.

5. Conclusions

In this study, a composition program using a TensorFlow RNN was designed and implemented, and music was selected for it to learn from. The nurses’ preferences and needs regarding the currently used sound sources were assessed using the GEMS-9 tool and FGI methods with nurses at the operating room entrance. Based on the study findings, the conditions of the sound source to be used for learning were suggested after consultation with a music expert.
Although substantial sound sources were generated through machine learning, it was confirmed that immediately applying the sound sources generated was challenging, and additional expert correction was required. Composing new music from the sound source included in the first database through this machine learning and applying it to patients waiting for surgery would benefit future studies since the program developed can take over the nurse’s job of selecting music while avoiding the copyright problem. It is also expected to effectively manage patients’ anxiety before surgery.
This study is significant because machine learning was applied. Subsequent studies could compare the effects on patient anxiety reduction of the previously used music list, the music selected through the demand survey, and the newly composed music based on machine learning.

6. Limitations

Nevertheless, this study has limitations because the synthesized music has not yet been applied to actual surgical patients. In addition, although it was expected that AI could immediately create a usable sound source, only the pitch was generated through AI; rhythm, time signature, note length, and dynamics were lacking. Hence, human input was required.

Author Contributions

Conceptualization, H.R.; resources, H.R.; methodology, H.R. and Y.-J.H.; validation, Y.-J.H.; data curation, J.H. and H.R.; writing—original draft preparation, H.R.; writing—review and editing, H.R. and Y.-J.H.; visualization, H.R. and J.H.; supervision, H.R.; project administration, H.R.; funding acquisition, H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Ministry of Science and ICT (MSIT), Korea, under the Information Technology Research Center (ITRC) support program (IITP-2019-2018-0-01833), supervised by the Institute for Information & communications Technology Promotion (IITP).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki.

Informed Consent Statement

Written informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data supporting this study’s findings are not publicly available due to privacy and ethical concerns but can be provided upon request to the corresponding author.

Acknowledgments

We express our gratitude to Jeongeun Kim and musician Jiyoon Lim for their assistance in the study.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in writing of the manuscript; or in the decision to publish the results.

References

  1. Aust, H.; Eberhart, L.; Sturm, T.; Schuster, M.; Nestoriuc, Y.; Brehm, F.; Rüsch, D. A cross-sectional study on preoperative anxiety in adults. J. Psychosom. Res. 2018, 111, 133–139. [Google Scholar] [CrossRef]
  2. Eberhart, L.; Aust, H.; Schuster, M.; Sturm, T.; Gehling, M.; Euteneuer, F.; Rüsch, D. Preoperative anxiety in adults—A cross-sectional study on specific fears and risk factors. BMC Psychiatry 2020, 20, 140. [Google Scholar] [CrossRef]
  3. Banchs, R.J.; Lerman, J. Preoperative anxiety management, emergence delirium, and postoperative behavior. Anesthesiol. Clin. 2014, 32, 1–23. [Google Scholar] [CrossRef]
  4. Jarmoszewicz, K.; Nowicka-Sauer, K.; Zemła, A.; Beta, S. Factors Associated with High Preoperative Anxiety: Results from Cluster Analysis. World J. Surg. 2020, 44, 2162–2169. [Google Scholar] [CrossRef] [PubMed]
  5. Cevik, B. The Evaluation of Anxiety Levels and Determinant Factors in Preoperative Patients. Int. J. Med. Sci. Public Health 2018, 7, 135–143. [Google Scholar]
  6. Gogoularadja, A.; Bakshi, A. A Randomized Study on the Efficacy of Music Therapy on Pain and Anxiety in Nasal Septal Surgery. Int. Arch. Otorhinolaryngol. 2020, 24, 232–236. [Google Scholar] [CrossRef]
  7. De Witte, M.; Spruit, A.; van Hooren, S.; Moonen, X.; Stams, G.-J. Effects of music interventions on stress-related outcomes: A systematic review and two meta-analyses. Health Psychol. Rev. 2020, 14, 294–324. [Google Scholar] [CrossRef] [PubMed]
  8. Gramaglia, C.; Gambaro, E.; Vecchi, C.; Licandro, D.; Raina, G.; Pisani, C.; Burgio, V.; Farruggio, S.; Rolla, R.; Deantonio, L.; et al. Outcomes of music therapy interventions in cancer patients—A review of the literature. Crit. Rev. Oncol./Hematol. 2019, 138, 241–254. [Google Scholar] [CrossRef] [PubMed]
  9. DeLoach Walworth, D. Procedural-Support Music Therapy in the Healthcare Setting: A Cost–Effectiveness Analysis. J. Pediatr. Nurs. 2005, 20, 276–284. [Google Scholar] [CrossRef]
  10. Loewy, J.; Hallan, C.; Friedman, E.; Martinez, C. Sleep/Sedation in Children Undergoing EEG Testing: A Comparison of Chloral Hydrate and Music Therapy. Am. J. Electroneurodiagn. Technol. 2006, 46, 343–355. [Google Scholar] [CrossRef]
  11. Loewy, J.; Stewart, K.; Dassler, A.M.; Telsey, A.; Homel, P. The Effects of Music Therapy on Vital Signs, Feeding, and Sleep in Premature Infants. Pediatrics 2013, 131, 902–918. [Google Scholar] [CrossRef]
  12. Gooding, L.; Swezey, S.; Zwischenberger, J.B. Using music interventions in perioperative care. South. Med. J. 2012, 105, 486–490. [Google Scholar] [CrossRef]
  13. Graff, V.; Cai, L.; Badiola, I.; Elkassabany, N.M. Music versus midazolam during preoperative nerve block placements: A prospective randomized controlled study. Reg. Anesth. Pain Med. 2019, 44, 796–799. [Google Scholar] [CrossRef]
  14. Graff, V. Role of Music in the Perioperative Setting. Reg. Anesth. Pain Med. 2017, 17, 27–29. [Google Scholar]
  15. Kramer, M. Best practices in systems development lifecycle: An analysis based on the waterfall model. Rev. Financ. Stud. 2018, 9, 77–84. [Google Scholar]
  16. Shylesh, S. A study of software development life cycle process models. In Proceedings of the National Conference on Reinventing Opportunities in Management, IT, and Social Sciences, Mangalore, India, 23–24 April 2017; pp. 534–541. [Google Scholar] [CrossRef]
  17. Ryu, H.; Piao, M.; Kim, H.; Yang, W.; Kim, K.H. Development of a Mobile Application for Smart Clinical Trial Subject Data Collection and Management. Appl. Sci. 2022, 12, 3343. [Google Scholar] [CrossRef]
  18. Hong, Y.J.; Piao, M.; Kim, J.; Lee, J.H. Development and Evaluation of a Child Vaccination Chatbot Real-Time Consultation Messenger Service during the COVID-19 Pandemic. Appl. Sci. 2021, 11, 12142. [Google Scholar] [CrossRef]
  19. Zentner, M.; Grandjean, D.; Scherer, K.R. Emotions Evoked by the Sound of Music: Characterization, Classification, and Measurement. Emotion 2008, 8, 494–521. [Google Scholar] [CrossRef] [PubMed]
  20. Carlsen, B.; Glenton, C. What about N? A Methodological Study of Sample-Size Reporting in Focus Group Studies. BMC Med. Res. Methodol. 2011, 11, 26. [Google Scholar] [CrossRef] [PubMed]
  21. Hsieh, H.F.; Shannon, S.E. Three Approaches to Qualitative Content Analysis. Qual. Health Res. 2005, 15, 1277–1288. [Google Scholar] [CrossRef]
  22. Association of Surgical Technologies (AST). Guidelines for Best Practices for Establishing the Sterile Field in the Operating Room Guideline. 2019. Available online: https://www.ast.org/AboutUs/Aseptic_Technique/ (accessed on 5 August 2022).
  23. Fu, V.X.; Oomens, P.; Merkus, N.; Jeekel, J. The perception and attitude toward noise and music in the operating room: A systematic review. J. Surg. Res. 2021, 263, 193–206. [Google Scholar] [CrossRef] [PubMed]
  24. Susino, M.; Schubert, E. Musical emotions in the absence of music: A cross-cultural investigation of emotion communication in music by extra-musical cues. PLoS ONE 2020, 15, e0241196. [Google Scholar] [CrossRef]
  25. Payrau, B.; Quere, N.; Breton, E.; Payrau, C. Fasciatherapy and reflexology compared to hypnosis and music therapy in daily stress management. Int. J. Ther. Massage Bodyw. Res. Educ. Pract. 2017, 10, 4–13. [Google Scholar] [CrossRef]
  26. Kühlmann, A.Y.R.; De Rooij, A.; Kroese, L.F.; van Dijk, M.; Hunink, M.G.M.; Jeekel, J. Meta-analysis evaluating music interventions for anxiety and pain in surgery. BJS 2018, 105, 773–783. [Google Scholar] [CrossRef]
  27. Kipnis, G.; Tabak, N.; Koton, S. Background Music Playback in the Preoperative Setting: Does It Reduce the Level of Preoperative Anxiety Among Candidates for Elective Surgery? J. Perianesth. Nurs. 2016, 31, 209–216. [Google Scholar] [CrossRef]
  28. Wilson, C.; Bungay, H.; Munn-Giddings, C.; Boyce, M. Healthcare professionals’ perceptions of the value and impact of the arts in healthcare settings: A critical review of the literature. Int. J. Nurs Stud. 2016, 56, 90–101. [Google Scholar] [CrossRef]
  29. Weldon, S.M.; Korkiakangas, T.; Bezemer, J.; Kneebone, R. Music and communication in the operating theatre. J. Adv. Nurs. 2015, 71, 2763–2774. [Google Scholar] [CrossRef]
Figure 1. Study process regarding synthesizing music.
Figure 2. Deep learning process.
Figure 3. (a) Produced sample sound source; (b) edited sample sound source https://youtu.be/OQszeBCE7Xg (accessed on 7 August 2022).
Table 1. Music selected for the Geneva Emotional Music Scales-9 (GEMS-9) tool.
N | Music Genre | Title (Meaning/Composer) | Main Instruments | Tempo (M.M.)
1 | World (Gugak—Traditional Korean Music) | Cheongseong-gok (The song of the clear, high sound) | Danso (small-notch vertical flute) | ♩. = 30 (6/8)
2 | World (Contemporary Korean Music) | Bi-ik-ryun-ri (Lovers) | Haegeum (vertical fiddle with two strings) | ♩ = 70 (4/4)
3 | World (Contemporary Korean Music) | Eo-yeop-da (No way) | Vocal Music | ♩. = 35 (6/8)
4 | Gugak (Traditional Korean Music) | Cheongsanri (In the blue mountains) | Vocal Music | ♩. = 27 (6/8)
5 | Jazz (Trio) | The Green Hour (Bob James) | Piano | ♩ = 45 (4/4)
6 | Jazz (Trio) | Baroque and Blue (Claude Bolling) | Flute | ♩. = 70 (6/8)
7 | Jazz (Trio) | Romance (Claude Bolling) | Violin | ♩ = 80 (4/4)
8 | Jazz (Latin) | Return to Forever (Airto Moreira) | Band | ♩ = 67 (4/4)
9 | Bossa nova | Triste (Antonio Carlos Jobim) | Band | ♩ = 156 (4/4)
10 | Bossa nova | Look to The Sky (Antonio Carlos Jobim) | Band | ♩ = 130 (4/4)
11 | Bossa nova | Aguas de Marco (Antonio Carlos Jobim) | Band with Vocals | ♩ = 140 (4/4)
12 | Jazz (New Age) | Luminescence (Goro Ito, Jaques Morelenbaum) | Band with Vocals | ♩ = 78 (3/4)
13 | Classical Music | Piano Sonata No. 11 in A major, K. 331—I. Andante grazioso (Wolfgang Amadeus Mozart) | Piano | ♩. = 44–52 (6/8)
14 | Classical Music | Piano Sonata No. 5 in G major, K. 283—I. Allegro (Wolfgang Amadeus Mozart) | Piano | ♩ = 128–140 (3/4)
15 | Classical Music | String Quartet No. 3 in G Major, K. 156 (Wolfgang Amadeus Mozart) | Violin, Viola, Cello | ♩ = 110 (3/4)
16 | Classical Music | Sicilienne et Allegro Giocoso: Largamente (Gabriel Grovlez) | Bassoon, Piano | ♩ = 44–52 (4/4)
Table 2. GEMS-9 tool results from the nurses. The dark color indicates that the following emotional values are strong.
N | Wonder | Trance | Power | Tender | Nostalgia | Peaceful | Joyful | Sad | Tension | Sad–Happy | Calc *
1 | 1.67 (1.15) | 3 (1.73) | 2.33 (0.58) | 2.33 (0.58) | 3.67 (0.58) | 3.67 (0.58) | 1.33 (0.58) | 3.67 (1.53) | 2 (1) | 2 (0) | 2.00
2 | 2.67 (0.58) | 3.67 (0.58) | 2 (1.73) | 4 (0) | 3.67 (1.53) | 4.67 (0.58) | 2 (1) | 3 (2) | 1 (0) | 3.33 (1.53) | 3.33
3 | 1.67 (0.58) | 3.33 (0.58) | 2 (1.73) | 2.67 (1.15) | 2.67 (1.53) | 3 (1) | 2 (1) | 3.33 (0.58) | 1.67 (1.15) | 3.67 (0.58) | 3.67
4 | 1.67 (1.15) | 1.67 (1.15) | 1.67 (1.15) | 1.33 (0.58) | 1.67 (0.58) | 2 (0) | 1 (0) | 2.67 (1.15) | 2.33 (1.53) | 3 (1) | 3.00
5 | 2.67 (0.58) | 3.33 (1.15) | 2 (1) | 4.33 (1.15) | 2 (1) | 4.67 (0.58) | 2.33 (1.15) | 2 (1) | 1.33 (0.58) | 3.67 (0.58) | 3.67
6 | 2.67 (1.15) | 2.33 (1.53) | 3 (1.73) | 4 (0) | 1 (0) | 2.67 (0.58) | 4.33 (0.58) | 1 (0) | 1 (0) | 5.33 (0.58) | 5.33
7 | 2.67 (1.53) | 3.33 (1.15) | 3 (1.73) | 2.67 (1.15) | 2.33 (0.58) | 2.67 (1.15) | 1.67 (1.15) | 3 (1) | 1.33 (0.58) | 4 (1) | 4.00
8 | 1.67 (1.15) | 2 (1) | 2.67 (0.58) | 1.67 (0.58) | 2 (1.73) | 1.67 (1.15) | 2 (1) | 1.33 (0.58) | 1.67 (0.58) | 4.67 (0.58) | 4.67
9 | 3.33 (1.15) | 3.67 (0.58) | 2.67 (1.15) | 4 (0) | 1.67 (1.15) | 3.33 (1.53) | 3.33 (1.15) | 1 (0) | 1 (0) | 5.67 (0.58) | 5.67
10 | 2.33 (0.58) | 2.33 (0.58) | 1.67 (1.15) | 2.67 (1.15) | 1.67 (0.58) | 3 (0) | 1.67 (0.58) | 1.33 (0.58) | 1.33 (0.58) | 4.33 (0.58) | 4.33
11 | 3.67 (0.58) | 3 (1) | 3.67 (0.58) | 3.33 (0.58) | 1 (0) | 1.67 (0.58) | 4.33 (0.58) | 1 (0) | 1 (0) | 6 (0) | 6.00
12 | 2 (1) | 2.33 (0.58) | 1.67 (1.15) | 3.33 (1.15) | 3.33 (1.15) | 3.67 (0.58) | 1.67 (1.15) | 3 (1) | 1.33 (0.58) | 3 (1) | 3.00
13 | 1.67 (1.15) | 2.33 (1.15) | 1.33 (0.58) | 4 (0) | 2 (1) | 4.33 (0.58) | 2.33 (1.53) | 1.67 (0.58) | 1 (0) | 4 (0) | 4.00
14 | 2.67 (1.15) | 2.67 (1.15) | 3.67 (0.58) | 3 (1.73) | 1.33 (0.58) | 4.33 (0.58) | 3.67 (0.58) | 1.33 (0.58) | 2 (1.73) | 5.33 (0.58) | 5.33
15 | 2.33 (0.58) | 2 (0) | 1.33 (0.58) | 2.67 (0.58) | 2.33 (1.53) | 3.33 (0.58) | 1.33 (0.58) | 2 (1.41) | 1.67 (0.58) | 3.33 (0.58) | 3.33
16 | 2.33 (1.53) | 2.67 (1.15) | 2.33 (1.15) | 2 (0) | 2.67 (1.15) | 2.67 (0.58) | 1.33 (0.58) | 2.33 (0.58) | 2 (1) | 2.67 (0.58) | 2.67
Total | 2.35 | 2.73 | 2.31 | 3.00 | 2.19 | 3.21 | 2.27 | 2.10 | 1.48 | 4.00 | 2.35
* Calculation = (Happy + Joy + Peaceful + Power + Wonder) − (Sad + Tension + Tender + Nostalgia + Trance).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
