Article

Design and Validation of Software for the Training and Automatic Evaluation of Music Intonation on Non-Fixed Pitch Instruments for Novice Students

by Jesús Tejada 1,* and María Ángeles Fernández-Villar 2
1 Institute of Creativity and Educational Innovations, University of Valencia, 46022 Valencia, Spain
2 Department of Developmental and Educational Psychology, University of Murcia, 30107 Murcia, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(9), 860; https://doi.org/10.3390/educsci13090860
Submission received: 8 June 2023 / Revised: 20 August 2023 / Accepted: 21 August 2023 / Published: 23 August 2023
(This article belongs to the Section Technology Enhanced Education)

Abstract
Music education, whether professional or amateur, includes learning musical instruments, and intonation is a critical factor in instrumental training. The main objective of this research is the design and validation of online educational software for the real-time training and evaluation of intonation on non-fixed pitch musical instruments, such as bowed string instruments (violin, viola, and cello) and brass instruments (trumpet, horn, and trombone). The software is intended as a practice artefact for novice music students. A design science research methodology was adopted to obtain a product tested for functionality and usability. Novice students carried out the validation phase through a study consisting of prior practice with the software and the administration of a questionnaire with closed and open-ended items grouped into technical-didactic, emotional, and overall dimensions, plus two additional questions. The results show that the software was well received, confirming previous studies on the design and validation of educational software for music education.

1. Introduction

Intonation is a complex musical skill that involves at least two related skills: (1) the ability to discriminate aurally between two different pitches and (2) the ability to reproduce or imitate a previously heard pitch [1]. These skills involve the activation of the functions of timing, sequencing, and spatial organisation of movements [2], which have to be coordinated for sound production. Although the existence of this coordination is suspected, very little is known about how it is carried out [2]. Some kind of mental representation of sounds is thought to mediate the coordination, but there is no consensus on its nature, origin, generation, content, or relationships to other cognitive entities, although there is evidence of its use by musicians [3]. Furthermore, this coordination requires the execution of some cognitive processes associated with production and perception, as well as the involvement of proprioceptive information stored in the learner’s knowledge schemas [4].
During intonation training, the teacher asks learners to play in tune on their instruments; in real performance, however, this is not always the case, as musicians often intentionally deviate from the target frequency for expressive purposes in certain musical contexts [5,6]. This situation is problematic because it asks novice learners to develop mental representations based on models that they will later have to change. Other factors that add to the difficulty of teaching intonation are the melodic and harmonic context, timbre, vibrato, and portamento [7,8]. Moreover, beginner students of these instruments, unlike more advanced students, lack the cognitive–motor schemas and mental imagery necessary for sound production [9]. This adds further cognitive load (difficulty) to learning [10].
Accordingly, it could be argued that intonation plays a critical role in the initial learning of non-fixed pitch instruments. Software that offers students the opportunity to practice with the instrument and, in addition, provides an automatic evaluation of that practice could be of great help during instrumental training; presumably, this would facilitate the construction of mental images of sounds and intervals. It would be even more beneficial if the software included a visual feedback system that made it possible to visualise the differences between the student's performance and the model to be imitated. Likewise, such software could help teachers with the complex task of correcting learners' intonation problems, reducing the reliance on verbal instructions. Natural language is often polysemous, which makes it difficult to provide efficient feedback during learning [11]; a visual feedback system provided by software would consequently be a clearer and more accurate alternative.
This study presents the design and validation of the software, created specifically as an online educational artefact for the real-time training and assessment of intonation on non-fixed pitch instruments such as brass and bowed string instruments. The software has been developed as an online tool that allows real-time assessment of intonation, customised settings, and the creation and editing of exercises by users.

2. Literature Review

2.1. Teaching and Learning Intonation on Musical Instruments

Compared with other aspects of performance, there is some consensus that there is no single approach to teaching intonation on non-fixed pitch instruments, such as brass and bowed string instruments, as approaches depend on the particularities of each instrument. Relevant factors include construction design [12,13], mouthpiece changes [14], and temperature [15]. Other factors are introduced by performers, such as tongue position and bowing [16] as well as the air column [17]. All this makes intonation the most complex parameter to develop on non-fixed pitch instruments. Thus, students of brass instruments need specific strategies such as compensating with the embouchure, improving breathing, using alternative fingerings, correcting slide position, and the like [12]. In addition, some of these instruments are transposing and some are not.
Although string instruments are more homogeneous in terms of intonation, several factors can affect sound production. An important influence in this regard is the lack of automation: novice bowed string players find their progress in producing sound somewhat restricted, since specific mental patterns need to be internalised (automated) before they can adequately focus on achieving accurate intonation [18]. In conclusion, any software tool dedicated to intonation training and assessment should tailor the instruction to each instrument.
There are three main methods for developing intonation: traditional, auditory, and audio–visual. The traditional method, based on learner practice with a teacher providing verbal feedback, has been suggested to be the least effective [12] due to the polysemy of verbal instructions [11]. The auditory method derives from learning by imitation or modelling [19]. It includes techniques such as imitation of the teacher's physical or sound model, imitation of a recording, analysis of one's own recordings, or co-assessment among learners [7,20]. The audio–visual method involves the use of specific hardware or software that integrates different modes of presenting musical information. The typical actions of the learner are to listen to the model and to their own input, view the visual representation of both on screen, and check whether the two correspond. This forms a process of audio–visual feedback that has been studied extensively, although with mixed results [4,20,21,22].
This disparity in results could be due to several factors that converge in the processing of musical information. One of these factors is divided attention, which occurs when learning materials force the learner to split attention between competing sources of information [23], for example, sound and visual representations. Another factor is the specificity of the ways (modalities) in which musical information is presented in the materials. This factor is related to cognitive load, i.e., the mental effort a learner must expend to perform a task, and reflects the limitations of working memory in relation to the amount of cognitive load that different modalities impose [24,25]. Extrinsic cognitive load, i.e., the cognitive load that is not due to the difficulty of the subject matter itself (the intrinsic cognitive load), can be reduced by careful arrangement of the learning materials and consideration of the context and the agents involved [26,27].
Computers have the ability to integrate graphical and aural modalities in the presentation of music practice exercises. They can also obtain the user’s input and evaluate it in real-time using analysis algorithms, which can help reduce extrinsic cognitive load by bringing together complementary information from different modalities in a structured way, thus avoiding the division of the student’s attention.

2.2. Software for Musical Intonation

Research on software for vocal intonation has studied the effect of visual feedback on user input contrasted with a theoretical model of intonation. Several applications have been developed for this specific task, for example, SINGAD [28], Sing & See [29,30], and WinSINGAD [31].
The aim of these applications is to visually represent the incoming audio so that the learner can contrast their intonation on screen with a reference model and improve it. It has been suggested that this visual feedback can help improve intonation accuracy during training [30,31]; however, the visual feedback from the software could have a distracting effect on the attention of the learner participant, which could have a negative effect on intonation [4].
The Intonia software [32] targets intonation on the violin, allowing the user to visualise their intonation through the real-time representation of their input by means of dynamic waves inserted within a coordinate graph (pitch over time) and overlapping with the representation of the sound model. The user can pre-specify the temperament to be used (equal, meantone, or Pythagorean). It is assumed that learning takes place from the learner’s analysis of the visual feedback.
Another application [33] was designed for the training and automatic assessment of vocal intonation. Based on a literature review of factors influencing intonation in the sung voice, a number of features were implemented: (1) exercise imitation [34]; (2) a chord or drone as a facilitator for the intonation of the first sound of each exercise [5,35]; (3) different modes of representing the pattern information, to enhance information processing, reduce the extrinsic cognitive load of the learning materials [23], and facilitate visual feedback [31]; (4) a limit on the number of sounds per exercise so as not to exceed the user's sensory memory buffer [36]; (5) real-time evaluation of the exercises; and (6) an exercise editor that lets students or teachers create and edit patterns, making the software open and reusable. The results of teacher testing validated this software, which is widely used in the early years of music education at music schools.

2.3. Online Software for Real-Time Instrumental Intonation

The didactic artefact of this study consists of software for the real-time training and evaluation of intonation on non-fixed pitch instruments (trumpet, trombone, French horn, violin, viola, and cello). It is intended for beginner and intermediate-level students. The software evaluates only pitch, without taking rhythmic aspects into account, in order to prevent the novice student from having to deal with two difficulties simultaneously.
When the software is launched, a window prompts the user to choose the instrument to practice and to select the audio input–output devices (microphone and speakers), which can be those of the computer's audio card or any other. Once the instrument is chosen, another window asks the user to select the set of starting exercises. The software also asks the student to configure the evaluation system, i.e., the tolerance the system should apply when evaluating each sound of an exercise. This tolerance is expressed in cents, and the student sets two limit values: (1) a lower limit, below which incoming sounds are rated with the maximum score; and (2) an upper limit, above which incoming sounds are rated with zero points. Incoming sounds whose intonation deviation lies between the two limits are scored by an algorithm, with larger deviations from the model penalised more heavily.
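The article does not specify the exact shape of the penalty between the two limits; the TypeScript sketch below assumes a simple linear fall-off and uses hypothetical names, purely to illustrate the two-limit rule described above.

```typescript
// Sketch of the two-limit tolerance scoring described above.
// Assumption: deviations between the limits are penalised linearly;
// the article only states that larger deviations receive lower scores.

interface ToleranceSettings {
  lowerLimitCents: number; // deviations below this get the maximum score
  upperLimitCents: number; // deviations above this get zero
}

/** Returns a score in [0, 1] for one note, given its deviation in cents. */
function scoreNote(deviationCents: number, t: ToleranceSettings): number {
  const dev = Math.abs(deviationCents);
  if (dev <= t.lowerLimitCents) return 1; // within tolerance: full credit
  if (dev >= t.upperLimitCents) return 0; // beyond tolerance: no credit
  // Linear fall-off between the two limits (assumed shape).
  return (t.upperLimitCents - dev) / (t.upperLimitCents - t.lowerLimitCents);
}

// Example: with limits of 10 and 50 cents, a 30-cent deviation scores 0.5.
console.log(scoreNote(30, { lowerLimitCents: 10, upperLimitCents: 50 }));
```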
After this, the main practice window opens (Figure 1). It includes: (1) a top menu with the exercises to practice; (2) a left-side menu containing the settings, the exercise editor, and the student's practice report; and (3) a set of controls: a button to play the exercise (model or user input), a button to evaluate the student's performance, a sound input level meter, a countdown to align the response with the exercise, and a button to automatically stop the exercise when the student has finished playing.
The software disregards the rhythm component (durations and interonset intervals) in the evaluation of the performance, so the student does not have to follow a set tempo during practice. The absence of a set tempo makes it necessary to automatically establish a correspondence between each pitch value obtained (12 values for each second of audio) and its reference frequency. The best monotonically increasing alignment between the pitch values and the reference notes is therefore sought; for this purpose, a dynamic programming alignment algorithm was implemented.
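As an illustration of this alignment step, the following TypeScript sketch computes one possible monotonically non-decreasing assignment of pitch frames to reference notes by dynamic programming. The cost function and the allowed moves are assumptions, since the article does not detail the actual algorithm.

```typescript
// Dynamic-programming sketch: assign each detected pitch frame to a reference
// note so that note indices never decrease. Assumes at least as many frames as
// notes and an absolute-difference cost in cents (both are assumptions).

/** Cost of assigning one pitch frame (in cents) to one reference note (in cents). */
const cost = (framePitch: number, refPitch: number) => Math.abs(framePitch - refPitch);

/** Returns, for each frame, the index of the reference note it is aligned to. */
function alignFramesToNotes(frames: number[], notes: number[]): number[] {
  const F = frames.length, N = notes.length;
  if (F === 0 || N === 0) return [];
  const INF = Number.POSITIVE_INFINITY;
  // dp[i][j] = minimal cost of aligning frames 0..i when frame i sits on note j
  const dp = Array.from({ length: F }, () => new Array<number>(N).fill(INF));
  const prev = Array.from({ length: F }, () => new Array<number>(N).fill(-1));
  dp[0][0] = cost(frames[0], notes[0]); // the first frame must start on the first note
  for (let i = 1; i < F; i++) {
    for (let j = 0; j < N; j++) {
      // Either stay on the same note or advance to the next one (monotonic moves).
      const stay = dp[i - 1][j];
      const advance = j > 0 ? dp[i - 1][j - 1] : INF;
      const best = Math.min(stay, advance);
      if (best === INF) continue;
      dp[i][j] = best + cost(frames[i], notes[j]);
      prev[i][j] = stay <= advance ? j : j - 1;
    }
  }
  // Backtrack from the last frame aligned to the last note.
  const path = new Array<number>(F);
  let j = N - 1;
  for (let i = F - 1; i >= 0; i--) {
    path[i] = j;
    if (i > 0) j = prev[i][j];
  }
  return path;
}
```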
As mentioned above, the evaluation system assesses each sound in the exercise independently. In a given exercise, each sound is assigned a maximum value of 10/N points, where N is the number of notes in the exercise. The tolerance configuration can be modified graphically by the student, as described earlier, allowing a personalised evaluation system. The evaluation is carried out in real time and provides the student with visual feedback by contrasting the waveform of their performance with the waveform of the model. It also provides feedback in the form of audio messages and texts indicating the errors in each performed sound.
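A minimal sketch of the 10/N grading rule described above, assuming per-note scores in [0, 1] such as those produced by the tolerance sketch given earlier (the function name is hypothetical):

```typescript
// Each of the N notes is worth at most 10/N points, scaled by its per-note score.
function gradeExercise(perNoteScores: number[]): number {
  const n = perNoteScores.length;
  if (n === 0) return 0;
  const pointsPerNote = 10 / n;
  return perNoteScores.reduce((total, s) => total + s * pointsPerNote, 0);
}

// Four notes scored [1, 1, 0.5, 0] give 2.5 + 2.5 + 1.25 + 0 = 6.25 points out of 10.
console.log(gradeExercise([1, 1, 0.5, 0]));
```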
To allow the student to set up customised practice, the software incorporates a section for creating and modifying exercises (Figure 2), allowing the creation of single exercises as well as learning units (collections of thematically related exercises). As can be seen, an attempt has been made to keep the interface simple so that the student can quickly become familiar with the software.
The software is a web application consisting of three main components: a front-end based on the open-source JavaScript library React, a MySQL relational database, and a REST API based on Express, a popular Node.js web framework. The software therefore requires a web server to run the API and the MySQL database in order to allow communication and data storage between the user (front-end) and the application. It uses the HTML5 and Web Audio APIs, a pitch detection algorithm, an intonation evaluation algorithm, JavaScript code for processing and synthesising audio in web applications, and the File API for handling file upload and manipulation (Figure 3).
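The article does not document the actual API surface, so the following TypeScript sketch only illustrates the kind of Express + MySQL back-end described here; the route names, table names, and columns are hypothetical.

```typescript
// Illustrative sketch of an Express + MySQL REST back-end of the kind described
// above. All routes, tables, and fields are hypothetical examples.

import express from "express";
import mysql from "mysql2/promise";

const app = express();
app.use(express.json());

// Connection pool to the relational database that stores exercises and attempts.
const pool = mysql.createPool({
  host: "localhost",
  user: "app",
  password: process.env.DB_PASSWORD,
  database: "intonation",
});

// Hypothetical endpoint: list the exercises of a learning unit.
app.get("/api/units/:unitId/exercises", async (req, res) => {
  const [rows] = await pool.query(
    "SELECT id, title, notes FROM exercises WHERE unit_id = ?",
    [req.params.unitId]
  );
  res.json(rows);
});

// Hypothetical endpoint: store the result of an evaluated practice attempt.
app.post("/api/attempts", async (req, res) => {
  const { userId, exerciseId, score } = req.body;
  await pool.query(
    "INSERT INTO attempts (user_id, exercise_id, score) VALUES (?, ?, ?)",
    [userId, exerciseId, score]
  );
  res.status(201).end();
});

app.listen(3000);
```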
To automatically evaluate the users' performances, the software first records the audio of the performance through the user's microphone using the Web Audio API implemented by most web browsers. Once recorded, a pitch (fundamental frequency) detection algorithm is used to obtain the pitch frequency (in Hz) over time for that performance. This is done with a sliding window of 125 ms and a hop length of 83 ms, i.e., 12 values per second, with an overlap between consecutive windows. The evaluation takes into account the lower and upper limit settings made by the user in the initial window of the software (see above).
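A sketch of the windowing arithmetic described above (125 ms window, 83 ms hop, roughly 12 estimates per second) is shown below in TypeScript; the naive autocorrelation estimator is an assumption, as the article does not name the pitch detection algorithm it uses.

```typescript
// Windowed pitch tracking over audio recorded with the Web Audio API:
// a 125 ms analysis window advanced in 83 ms hops (~12 estimates per second).
// The autocorrelation f0 estimator below is a simplistic stand-in (assumption).

/** Very simple autocorrelation f0 estimate (in Hz) for one window of samples. */
function estimateF0(win: Float32Array, sampleRate: number): number | null {
  const minLag = Math.floor(sampleRate / 1000); // ~1000 Hz upper bound
  const maxLag = Math.floor(sampleRate / 60);   // ~60 Hz lower bound
  let bestLag = -1, bestCorr = 0;
  for (let lag = minLag; lag <= maxLag && lag < win.length; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < win.length; i++) corr += win[i] * win[i + lag];
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  return bestLag > 0 ? sampleRate / bestLag : null; // null for silent windows
}

/** Slides a 125 ms window in 83 ms hops over the recorded samples. */
function pitchTrack(samples: Float32Array, sampleRate: number): (number | null)[] {
  const windowSize = Math.round(0.125 * sampleRate);
  const hopSize = Math.round(0.083 * sampleRate);
  const track: (number | null)[] = [];
  for (let start = 0; start + windowSize <= samples.length; start += hopSize) {
    track.push(estimateF0(samples.subarray(start, start + windowSize), sampleRate));
  }
  return track;
}
```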

3. Method

3.1. Design

Design science (DS) is concerned with designing and crafting artefacts to produce satisfactory outcomes in relation to predetermined objectives. The fulfilment of these objectives involves a relationship between three elements [37]: "the purpose or goal, the character of the artefact, and the environment in which the artefact is used" (p. 56). In this work, a design science research methodology (DSRM) was adopted, a method suitable for the creation of information systems for educational purposes [38]. The DSRM paradigm in information systems is well suited to work oriented towards the creation of educational artefacts, in this case an information system aimed at a specific objective: software for intonation training on non-fixed pitch instruments. The scheme followed in the methodology of this project can be seen in Figure 4.
In phase 1 (not shown here; see Section 3.4 for a summary), a needs assessment was carried out in order to pre-design the software. This detection of needs was addressed through eight focus groups with teachers of the musical instruments for which the software was designed. The conclusions of that study [39] helped to delimit the problems to be solved and to determine the features that the software should include.
Phase 2 (partly shown here) included the design and technical evaluation of a prototype of the software and successive iterations through usability tests with students in groups of five, as advised by [40], as well as functionality tests. This phase included a multiple case study with four students aged 8 to 12 years in order to verify the functionality of the software and its influence on students' self-regulation in instrumental practice [41].
Phase 3, the core of this study, consisted of the validation of the software by instrumental students enrolled in music schools.

3.2. Sample

The sample consisted of 141 music school students (85 from Spain, 13 from Chile, and 43 anonymous students recruited via social networks). Sixty-one were male and eighty were female, aged between 8 and 51 years (X̄ = 13.9; SD = 6.40).
There were 86 string players and 55 wind players. Sixty-nine were studying at the elementary level (four years) of their instrument in music schools, and 72 were studying at the intermediate level (six years) in music schools, conservatories, or authorised professional music teaching centres. The majority of participants (95%) had formal musical studies and had obtained good instrument grades during the previous year at their conservatory or music school (X̄ = 8.28; SD = 0.99; range = 0–10 points). Likewise, participants' previous year's grades in compulsory schooling were excellent (X̄ = 9.07; SD = 0.90; range = 0–10 points).

3.3. Validation Instrument and Dimensions

An anonymous questionnaire on perceptions of practice with the software was designed and validated to collect the data; it included 46 open and closed items. A 4-point scale was used to avoid a neutral point: omitting the neutral point of a Likert scale forces the student to commit to one side of the scale and prevents routine responding at the midpoint. Researchers use this format to stop respondents from choosing a neutral response simply to avoid making a decision, an important issue in this research. The questionnaire collected participants' perceptions in three evaluation dimensions: technical-didactic assessment, emotional assessment, and overall assessment of the software. The emotional dimension was used as a criterion of the quality of the software because emotions are pervasive in learning processes: all people experience emotions when participating in education. Based on a dialogic vision and the socio-genetic basis of emotions, not only primary, basic, and globally shared emotions (joy, sadness, fear, etc.) were considered, but also secondary, culturally learned emotions, which are experienced on certain occasions as a process of emotional scaffolding involved in human educational activity. This line of work on emotions was developed on a psychological basis within the socio-cultural approach and has subsequently been updated by many other authors, for example [42].
In addition to personal and academic data, self-assessments of musical competence and digital competence were included as covariates to see if they influenced the main dimensions of the validation. Finally, the questionnaire asked for preferred and non-preferred features of the software, as well as suggestions for software improvement.
The questionnaire was validated by two expert judges, each with more than 20 years of experience teaching musical instruments and regular use of music technology in the classroom. These judges gave their opinion on the appropriateness of each item of the questionnaire; if they disagreed with the suitability of an item, they suggested changes in a space reserved for this purpose. After two validation rounds, complete agreement was obtained (Cohen's kappa = 1).

3.4. Preliminary Context and Validation Procedure

In order to better understand the development of this software, it is necessary to provide the reader with summary information about the needs detection and design phases. In the needs detection phase, 8 focus groups were carried out in 8 Spanish cities, with a total of 32 music school teachers participating [39]. In summary, the information provided by the teachers revealed certain inconsistencies in the teaching of intonation, indicating the prevalence of intuitive practices based on experience and teacher training, as well as a relative absence of theoretical and pedagogical foundations to support practice. There is also a lack of uniformity in the sequencing of content, a problem that becomes even more complex if we consider a certain laxity on the part of teachers in the approach to intonation in the initial stages. There is a notable absence of a generalised assessment model and of effective coordination between music theory teachers and instrumental specialists, which would allow pupils of all instruments to fulfil the same objectives [39]. The use of ad hoc software could systematise the practice of intonation in the initial stages of learning non-fixed pitch instruments. In this sense, its use could provide unifying criteria for teachers that would help to generate more consistent pedagogical approaches to intonation work with brass instruments. In addition, it could provide reinforcement and greater autonomy in students' daily practice, as it would facilitate work at home and provide room for practice and evaluative feedback without the need for the teacher's physical presence. It could also support self-discipline in the study of the instrument (self-regulation).
With the information obtained from these groups, a functional prototype of the software was designed and implemented—a functionally reduced version—which was tested by students and teachers in a cyclical phase of iterations. This phase provided observational data and users’ perceptions that facilitated the refinement of the prototype.
The prototype was then demonstrated in the laboratory with regard to its ability to solve the problem and provide solutions to the identified needs. Afterward, the prototype was piloted in a real context (6 music schools in the Valencian Community, Spain), which provided more data on functionality and usability to improve the software. After the corresponding iterations with the design and implementation process, a final revised full version was produced. This version was tested by means of a qualitative multiple case study with 4 elementary level students aged 8–12 years old [41]. In addition to verifying the functionality of the software, the secondary objective of this study was to test the influence that the software could have on the students’ self-regulation strategies.
During the validation process, the participating students were given access to a video or a face-to-face lecture providing information on how the software works and how to create an account on the software server. They were also asked to do some practice sessions with the software in order to be able to complete the questionnaire. The participating music education centres were Spanish (Comunidad Valenciana, Castilla-La Mancha, Melilla, and Región de Murcia) and Chilean (Santiago, Valparaíso, Valdivia, and La Serena). Likewise, a link to the software's webpage was disseminated on social networks, together with another link to complete the online questionnaire. The results of the validation are presented below.

4. Results

In this section, the analysis of the data obtained from the questionnaire is presented. First, three covariates that could potentially influence students' opinions are presented: (1) personal and academic data; (2) digital self-competence; and (3) musical self-competence. Then, three evaluation dimensions are analysed: (1) the technical-didactic dimension, which collects opinions on the technical, interaction, and didactic elements of the software; (2) the emotional balance, which collects the self-perception of emotions of well-being and discomfort during practice with the software; and (3) the overall assessment dimension, which collects data related to the students' preference to continue using the software during their instrumental studies, whether at home or not. The results of two further questionnaire items, which do not constitute a scale in themselves, are also presented: (1) the overall rating of the software; and (2) whether the student would recommend the software to others. Finally, the questionnaire included three open questions: (1) the elements of the software the students liked the most; (2) the elements they liked the least; and (3) suggestions for improving the software.

4.1. Personal and Academic Covariate

In order to study covariates that could explain possible influences, the questionnaire included items related to personal data (gender, age) and academic data (instrument, level of instrument studies, and the previous year’s instrument course grade). The influence of these variables will be shown where relevant within each validation dimension.

4.2. Digital Self-Competence Covariate

This covariate was included because higher digital competence is likely to be related to better results in the use of the software and could therefore plausibly explain a bias in the students' assessments. The items were scored on a four-point ordinal scale to avoid a neutral point.
A principal component analysis (PCA) was run and two explanatory factors of variance were detected: (1) skills (n = 141; α = 0.619); and (2) use of technology (n = 141; α = 0.621) (Table 1). Taken as a global scale, the reliability of the covariate is moderate (Cronbach's α = 0.609).
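For reference, the internal consistency coefficient reported in this and the following subsections is the standard Cronbach's alpha. For a scale of k items, with item variances \sigma^{2}_{Y_i} and total-score variance \sigma^{2}_{X}, it is computed as

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)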
This covariate correlates directly with the grade from the previous school year (r = 0.27; p = 0.007) and with the didactic assessment dimension (r = 0.444; p = 0.001): the higher the previous school year's grades, the higher the self-assessed digital competence and the rating of the software's didactic aspects.

4.3. Musical Self-Competence Covariate

This category of the questionnaire was created on the basis of a set of variables that could potentially and systematically influence the perceptions of the students participating in the software assessment. Four items were included to ascertain these perceptions in four domains: general musical skills, instrumental musical skills, ability to perceive intonation correctly, and ability to produce intonation correctly (Table 2). The reliability of the scale was acceptable (α = 0.79). The perception of musical self-competence was high (X̄ = 7.49; SD = 1.72).
This category correlates significantly with the academic variables of instrument grade in the previous year (r = 0.319; p = 0.002) and overall course grade in the previous year (r = 0.272; p = 0.007). The higher the previous course grade, the higher the perception of musical self-competence.

4.4. Technical-Didactic Dimension

This dimension included closed-ended questions (degree of agreement or disagreement with statements about the usefulness of technical, didactic, and interaction features) scored on a four-point ordinal scale to avoid a neutral point. A principal component analysis was used to identify the main explanatory factors of variance, and the reliability of the scale was calculated. The dimension was composed of three components: technical opinion (six items), interaction (four items), and didactic opinion (four items). The results were high both in terms of reliability (Cronbach's α = 0.733) and in terms of participants' perceptions (X̄ = 3.35; SD = 0.36) (Table 3).
This dimension correlates with the musical self-competence covariate (r = 0.24; p = 0.003) and with the digital self-competence covariate (r = 0.23; p = 0.006); that is, the more proficient students felt in both music and technology, the higher they rated this dimension. The most highly rated items on the scale were the preference for the exercise creation module, which allowed them to create and edit their own exercises, and the immediate evaluation of exercises.
Regarding the covariates, neither age, gender, nor the type of instrument had a significant correlation with this dimension.

4.5. Emotional Balance Dimension

Emotion is considered an emergent phenomenon formed by processes that seek to activate the most congruent response to any situation or to deactivate incongruent responses [43]. Several works have highlighted the important role that emotions play during any learning activity [44,45]. It was therefore considered relevant to include an emotional balance in the software validation questionnaire: a simple set of well-being and discomfort emotions, not paired and with no semantic differential. This balance was included in order to assess the emotional reception of the software by counting the emotions felt and recognised by each student. The questionnaire incorporated a multiple-choice item with a set of 14 emotions, 7 of discomfort (boredom, sadness, loneliness, tiredness, anger, worry, and despair) and 7 of well-being (enthusiasm, joy, security, reassurance, confidence, satisfaction, and independence), that participants might have felt during practice with the software. There was no limit to the number of emotions students could select. The results showed a predominance of emotions of well-being (blue) over emotions of discomfort (red) (Figure 5). The students reported a total of 93 emotions of discomfort and 496 of well-being, indicating an overall adequate learning climate, from the emotional perspective, during the practice from which the software evaluation was drawn (Figure 6).

4.6. Overall Assessment Dimension

The overall software evaluation dimension consisted of four questionnaire items (Table 4). The reliability analysis shows a high Cronbach’s Alpha (α = 0.80). This dimension correlates significantly with the global assessment question (r = 0.61; p < 0.001), with the software recommendation question (see below) (r = 0.24; p = 0.004), with the technical perception of the software (r = 0.51; p < 0.001) and with the didactic perception of the software (r = 0.471; p < 0.001). It does not correlate with musical self-competence or digital self-competence.
The overall rating was relatively high (n = 141; X̄ = 3.27; SD = 0.76). It correlates significantly, with a medium-high value, with the technical-didactic assessment dimension (r = 0.564; p = 0.001).
In addition to this evaluation, and in order to validate the dimension, one question asked the participants to rate the software overall on a 4-point scale. The result indicates an excellent reception and a good opinion of the software (n = 141; X̄ = 3.58; SD = 0.49). This item correlated significantly with the overall software evaluation dimension (r = 0.62; p < 0.001).
The instrument level covariate showed significant differences: students at a lower instrumental level (elementary) rated the software higher overall than students at a higher level (intermediate) (χ² = 20; p = 0.029). Likewise, the previous year's grade (academic data covariate) showed a significant negative correlation with the overall assessment (r = −0.249; p = 0.010); in other words, the lower the previous year's grade, the higher the score given to the software.
Finally, another item in the questionnaire asked participants whether they would recommend the software to other students or friends. The result showed that 140 students would recommend using it compared to one student who would not. This item also correlated significantly with the overall software evaluation scale (r = 0.24; p = 0.004).

4.7. Preferred, Non-Preferred Elements, and Suggested Changes

Three open-ended items in the questionnaire asked about (1) the elements that participants had liked the most; (2) those that they had liked the least; and (3) suggestions for features to be included in the software.
What students liked most was the immediate evaluation of the exercise (32%), the ability to create and edit exercises (18%), and the software’s help with intonation (18%). The least liked were the graphical interface (12%) and that the software sometimes crashed due to ambient sound interference or poor microphone sensitivity on their computers (12%). Participants suggested changing the feedback voice of the evaluation (5%) and including other images in the interface (3.5%).
Summarising this Results section, the assessments of the different dimensions of the software (technical-didactic and overall), the emotional balance, and the global evaluation questions show highly promising results (Table 5).

5. Discussion and Conclusions

The results presented here do not differ substantially from those obtained in other studies in which software for music education has been designed and validated. In one study [46,47], ninety-eight primary school music students practised with software for rhythmic training in two 60 min evaluation sessions, rating each session highly for overall usefulness on a five-point scale (X̄ = 4.42; SD = 0.66) and showing an emotional balance with a strong predominance of well-being emotions.
In another study [33], software dedicated to the intonation of the sung voice was designed and validated by thirty music teachers. The teachers practised with the software for a week and then completed a questionnaire covering technical aspects (applicability, effectiveness, navigability, organisation, content suitability, user-friendliness, interface suitability, functionality, versatility, and accessibility) and didactic aspects (on-screen representation of inputs, evaluation, feedback, preset content, neutral syllable for solmisation, non-conventional representations, and order of presentation). The results were high both for the didactic aspects (X̄ = 4.2; SD = 0.72; five-point scale) and for the technical aspects (X̄ = 4.1; SD = 0.69; five-point scale). The emotions that participants reported were overwhelmingly those of well-being. These results converge with those of the aforementioned study [46,47].
Regarding software design features, it is noteworthy that the usefulness students assigned to visual feedback confirms the importance of this aspect, as reflected in the results of the related studies mentioned in the literature review [28,30,31]. Also notable are the students' preferences (Table 3) for the immediate assessment of practice and for the exercise creation module, confirming the findings of other studies [33,47].
The main conclusion drawn from these results is that, according to the students' opinions, the software presented here fulfils the objective of this research, i.e., designing an effective information system for use as a didactic artefact in early instrumental music education. The use of ad hoc software could serve as a significant tool to improve and systematise the daily tasks related to intonation in initial instrumental training on non-fixed pitch instruments: "it seems axiomatic that the more instructional tools the teacher has for teaching intonation, the greater the possibility that the students can learn to perform with good intonation" ([48], p. 393).
However, given that the sample is not representative and that the data obtained refer to students' opinions rather than to the effect of the software on instrumental intonation skills, these results should be interpreted with caution. To overcome these limitations, it will be necessary to conduct a study with a larger sample of students and a detailed, objective, long-term evaluation to determine whether the use of the software has an effect on students' instrumental intonation results.
Another line of research to improve the validity of this study lies in the analysis of psychological aspects related to instrumental practice, such as self-regulation. In the design phase of this work, a case study was conducted that showed some evidence of the influence of this software on the self-regulation of violin and viola students [41]. However, further qualitative studies would be necessary to confirm this potential influence. In addition, a technical-didactic evaluation of the software by instrument teachers would be necessary in order to complement that of the students and to obtain more comprehensive and accurate data on the validity of the software.
The practical implications of this study are related to the transfer of knowledge to society. In the Valencian Community, around 650 public and private music schools make up a network that favours social relations, inclusion, and disciplinary development in a region where music is a preferred activity, as evidenced by the existence of more than 600 music bands. To transfer the knowledge generated in this study, the researchers intend to make this research product available free of charge to students and music schools in the region for two years. This will also allow the aforementioned studies on instrumental intonation and on the influence of the software on the self-regulation of novice students of non-fixed pitch instruments. Once positive data on the effect of the software on pupils' instrumental intonation skills and teacher evaluation data are obtained, marketing actions (contact with companies, technical marketing specifications, and licensing) can be initiated.

Author Contributions

Conceptualisation, methodology, validation and writing, J.T.; data curation, methodology, and formal analysis, M.Á.F.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This study has been funded by the Spanish Research Agency (code AEI/10.13039/50110001103), under grant to project “Plectrus” (grant number PID2019-105762GB-I00), with the collaboration of the European Regional Development Fund (ERDF).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Research in Humans of Universitat de Valencia on 2 December 2021.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Morrison, S.J.; Fyk, J. Intonation. In The Science & Psychology of Music Performance: Creative Strategies for Teaching and Learning; Parncutt, R., McPherson, G., Eds.; Oxford University Press: Oxford, UK, 2002; pp. 182–197. [Google Scholar] [CrossRef]
  2. Zatorre, R.; Chen, J.; Penhune, V.B. When the brain plays music: Auditory-motor interactions in music perception and production. Nat. Rev. Neurosci. 2007, 8, 547–558. [Google Scholar] [CrossRef] [PubMed]
  3. Chaffin, R.; Imreh, G.; Lemieux, A.; Chen, C. Seeing the Big Picture: Piano Practice as Expert Problem Solving. Music Percept. 2003, 20, 465–490. [Google Scholar] [CrossRef]
  4. Pardue, L.S.; McPherson, A. Real-time aural and visual feedback for improving violin intonation. Front. Psychol. 2019, 10, 627. [Google Scholar] [CrossRef] [PubMed]
  5. Gardner, R. Extending the Discussion: Intonation Pedagogy for Bowed Stringed Instruments, Part 1. Update Appl. Res. Music. Educ. 2020, 38, 55–58. [Google Scholar] [CrossRef]
  6. Garzoli, J. Competing Epistemologies of Tuning, Intonation and Melody in the Performance of Thai Classical Music on Non-Fixed-Pitch Instruments. SOJOURN J. Soc. Issues Southeast Asia 2020, 35, 407–436. [Google Scholar] [CrossRef]
  7. Geringer, J.M.; MacLeod, R.B.; Madsen, C.K.; Nápoles, J. Perception of melodic intonation in performances with and without vibrato. Psychol. Music 2015, 43, 675–685. [Google Scholar] [CrossRef]
  8. Larrouy-Maestri, P.; Pfordresher, P.Q. Pitch perception in music: Do scoops matter? J. Exp. Psychol. Hum. 2018, 44, 1523–1541. [Google Scholar] [CrossRef]
  9. Aleman, A.; Nieuwenstein, M.R.; Böcker, K.B.; de Haan, E.H. Music training and mental imagery ability. Neuropsychologia 2000, 38, 1664–1668. [Google Scholar] [CrossRef]
  10. Stambaugh, L.A. Implications of extrinsic cognitive load on three levels of adult woodwind players. Psychol. Music 2016, 44, 1318–1330. [Google Scholar] [CrossRef]
  11. Johnson, M.; Larson, S. ‘Something in the Way She Moves’ Metaphors of Musical Motion. Metaphor. Symb. 2003, 18, 63–84. [Google Scholar] [CrossRef]
  12. Powell, S.R. Wind instrument intonation: A research synthesis. Bull. Coun. Res. Mus. Ed. 2010, 184, 79–96. [Google Scholar] [CrossRef]
  13. Schlegel, A.; Springer, D. Effects of accurate and inaccurate visual feedback on the tuning accuracy of high school and college trombonists. Int. J. Music Educ. 2018, 36, 394–406. [Google Scholar] [CrossRef]
  14. Dalmont, J.P.; Gazengel, B.; Gilbert, J.; Kergomard, J. Some aspects of tuning and clean intonation in reed instruments. Appl. Acoust. 1995, 46, 19–60. [Google Scholar] [CrossRef]
  15. Zendri, G.; Valdan, M.; Gratton, L.M.; Oss, S. Musical intonation of wind instruments and temperature. Phys. Educ. 2015, 50, 348. [Google Scholar] [CrossRef]
  16. Heyne, M.; Derrick, D.; Al-Tamimi, J. Native Language Influence on Brass Instrument Performance: An Application of Generalized Additive Mixed Models (GAMMs) to Midsagittal Ultrasound Images of the Tongue. Front. Psychol. 2019, 10, 2597. [Google Scholar] [CrossRef]
  17. Bucur, V. Resonant Air Column in Wind Instruments. In Handbook of Materials for Wind Musical Instruments; Springer: Cham, Switzerland, 2019; pp. 337–358. ISBN 978-3-030-19175-7. [Google Scholar]
  18. López-Calatayud, F. Music perception, sound production, and their relationships in bowed string instrumentalists: A systematic review. Rev. Electrónica LEEME 2023, 51, 55–81. [Google Scholar] [CrossRef]
  19. Rumjaun, A.; Narod, F. Social Learning Theory—Albert Bandura. In Education in Theory and Practice. An Introductory Guide to Learning Theory; Akpan, B., Kennedy, T.J., Eds.; Springer: Cham, Switzerland, 2020. [Google Scholar]
  20. Springer, D.G. Research to Resource: Evidence-Based Strategies for Improving Wind Intonation. Update Appl. Res. Music. Educ. 2020, 39, 4–7. [Google Scholar] [CrossRef]
  21. Blanco, A.D.; Tassani, S.; Ramírez, R. Effects of Visual and Auditory Feedback in Violin and Singing Voice Pitch Matching Tasks. Front. Psychol. 2021, 12, 684693. [Google Scholar] [CrossRef]
  22. Owens, P.; Sweller, J. Cognitive load theory and music instruction. Educ. Psychol. 2008, 28, 29–45. [Google Scholar] [CrossRef]
  23. Ayres, P.; Cierniak, G. Split-Attention Effect. In Encyclopedia of the Sciences of Learning; Seel, N.M., Ed.; Springer: Cham, Switzerland, 2012; pp. 3172–3175. ISBN 978-1-4419-1428-6_19. [Google Scholar]
  24. Castro-Alonso, J.C.; Sweller, J. The Modality Effect of Cognitive Load Theory. In Advances in Human Factors in Training, Education, and Learning Sciences; Karwowski, W., Ahram, T., Nazir, S., Eds.; Springer: Cham, Switzerland, 2020; pp. 75–84. [Google Scholar] [CrossRef]
  25. Low, R.; Sweller, J. The modality principle in multimedia learning. In The Cambridge Handbook of Multimedia Learning, 2nd ed.; Mayer, R.E., Ed.; Cambridge University Press: Cambridge, UK, 2014; pp. 227–246. [Google Scholar] [CrossRef]
  26. Liao, C.-W.; Chen, C.-H.; Shih, S.-J. The interactivity of video and collaboration for learning achievement, intrinsic motivation, cognitive load, and behavior patterns in a digital game-based learning environment. Comput. Educ. 2019, 133, 43–55. [Google Scholar] [CrossRef]
  27. Fonteles-Furtado, P.G.; Hirashima, T.; Hayashi, Y. Reducing Cognitive Load During Closed Concept Map Construction and Consequences on Reading Comprehension and Retention. IEEE Trans. Learn. Technol. 2019, 12, 402–412. [Google Scholar] [CrossRef]
  28. Welch, G.; Rush, C.; Howard, D. Real-time visual feedback in the development of vocal pitch accuracy in singing. Psychol. Music 1989, 17, 146–157. [Google Scholar] [CrossRef]
  29. Cantovation. Sing & See, version 1.5.8; Cantovation: Auckland, New Zealand, 2023. [Google Scholar]
  30. Wilson, P.; Lee, K.; Callaghan, J.; Thorpe, W. Learning to sing in tune: Does real-time visual feedback help? J. Interdiscip. Music. Stud. 2008, 2, 157–172. [Google Scholar]
  31. Howard, D.; Brereton, J.; Welch, G.; Himonides, E.; DeCosta, M.; Williams, J.; Howard, A. Are real-time displays of benefit in the singing studio? An exploratory study. J. Voice 2007, 21, 20–34. [Google Scholar] [CrossRef]
  32. Agin, J. Intonia, version 1.5.1; Intonia Software: Pittsburgh, PA, USA, 2021; Available online: http://intonia.com/index.shtml (accessed on 3 September 2022).
  33. Pérez-Gil, M.; Tejada, J.; Morant, R.; Pérez, A. Cantus: Construction and evaluation of a software for real-time vocal music training and musical intonation assessment for music education. J. Music. Technol. Educ. 2016, 9, 125–144. [Google Scholar] [CrossRef]
  34. Cummings, D. The Effects of Vocal Modeling, Musical Aptitude, and Home Environment on Pitch Accuracy of Young Children. Bull. Counc. Res. Music. Educ. 2006, 169, 39–50. [Google Scholar]
  35. Gordon, E. A Music Learning Theory for Newborn and Young Children; GIA Publications: Chicago, IL, USA, 2003; ISBN 978-1579990039. [Google Scholar]
  36. Miller, G. The magic number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 1956, 63, 81–97. [Google Scholar] [CrossRef]
  37. Dresch, A.; Pacheco, D.; Valle, J.A. Design Science Research. A Method for Science and Technology Advancement; Springer: Cham, Switzerland, 2015; ISBN 978-3-319-07374-3. [Google Scholar]
  38. Peffers, K.; Tuunanen, T.; Rothenberger, M.; Chatterjee, S. A Design Science Research Methodology for Information Systems Research. J. Manag. Inf. Syst. 2008, 24, 45–77. [Google Scholar] [CrossRef]
  39. Tejada, J.; Murillo, J.; Mateu-Luján, B. The Initial teaching of intonation in the brass wind instruments and music reading in Spain. An exploratory study with music school teachers. Revista Electrónica Complutense de Investigación en Educación Musical 2022, 19, 209–221. [Google Scholar] [CrossRef]
  40. Nielsen, J. Usability Testing with 5 Users: Design Process (Video Recordings). 2018. Available online: https://www.youtube.com/watch?v=RhgUirqki50 (accessed on 12 January 2023).
  41. López-Calatayud, F.; Tejada, J. Self-regulation strategies and behaviors in the initial learning of the viola and violin with the support of software for real-time instrumental intonation assessment. Res. Stud. Music. Educ. 2023. online first. [Google Scholar] [CrossRef]
  42. Cole, M. Cultural Psychology; The Belknap Press of Harvard University Press: London, UK, 1996; ISBN 0674179560. [Google Scholar]
  43. Barret, L.; Ochsner, K.; Gross, J. On the automaticity of emotion. In Social Psychology and the Unconscious: The Automaticity of Higher Mental Processes; Bargh, J.A., Ed.; Psychology Press: London, UK, 2007; pp. 173–217. ISBN 9781138010000. [Google Scholar]
  44. Bostock, S.; Lizhi, W. Gender in student online discussions. Innov. Educ. Teach. Int. 2005, 43, 73–85. [Google Scholar] [CrossRef]
  45. Pekrun, R. Progress and open problems in educational emotion research. Learn. Instr. 2005, 15, 497–506. [Google Scholar] [CrossRef]
  46. Tejada, J.; Pérez-Gil, M.; Pérez, R. Tactus: Didactic Design and Implementation of a Pedagogically Sound Based Rhythm-Training Computer Program. J. Music. Technol. Educ. 2011, 3, 155–165. [Google Scholar] [CrossRef]
  47. Tejada, J.; Pérez-Gil, M.; García-Pérez, R. Discere rhytmum, discere Tactus. Construcción y evaluación de un software educativo para el adiestramiento del ritmo musical. In Proceedings of the Congreso Reinventar la Profesión Docente, Málaga, Spain, 8 October 2010; pp. 399–418. [Google Scholar]
  48. Silvey, B.; Nápoles, J.; Springer, G. Effects of Pre-Tuning Vocalization Behaviors on the Tuning Accuracy of College Instrumentalists. J. Res. Music Educ. 2019, 66, 392–407. [Google Scholar] [CrossRef]
Figure 1. Main practice screen.
Figure 2. Exercise creation and editing window (see the right menu for editing and creating exercises and learning units).
Figure 3. Software architecture.
Figure 4. Application of DSRM to the creation of the software (adapted from [37]). Red arrows are iterations between phases.
Figure 5. Frequency of the number of perceived individual emotions: (a) number of emotions of well-being; (b) number of emotions of discomfort.
Figure 6. Perceived emotions of well-being (blue) and discomfort (red). Number of emotions perceived (X) by each student (Y).
Table 1. Main factors of digital self-competence covariate.
Skills Factor Items 1 | X̄ | SD
Self-perception of technological skills | 3.39 | 0.73
Self-perception of learning with technology | 3.43 | 0.72
Use of Technology Factor Items 1 | X̄ | SD
Frequency of computer use | 2.23 | 0.86
Frequency of mobile phone use | 3.24 | 0.96
Frequency of ICT use for gaming | 2.57 | 1.04
1 Score: min = 1; max = 4.
Table 2. Items of the covariate musical self-competence 1.
Musical Self-Competence Items 1 | X̄ | SD
General music skills | 7.94 | 1.45
Instrumental music skills | 7.49 | 1.65
Intonation skills (perception) | 7.14 | 1.61
Intonation skills (production) | 7.39 | 2.20
1 Score: min = 1; max = 10.
Table 3. Factors of the technical-didactic dimension (n = 141; score: min = 1; max = 4).
Items of the Technical Opinion Factor | X̄ | SD
Evaluation system | 3.37 | 0.72
Usefulness of audio message feedback | 3.14 | 0.83
Preference for visual feedback | 3.46 | 0.79
Preference for the exercise creation module | 3.70 | 0.49
Preference for immediate evaluation | 3.68 | 0.62
Ease of use of the interface | 3.48 | 0.65
Items of the Interaction Factor | X̄ | SD
Understanding actions to intonate | 3.49 | 0.69
Understanding the interface | 3.37 | 0.80
Understanding software operating instructions | 3.49 | 0.70
Software performance | 3.50 | 0.70
Items of the Didactic Opinion Factor | X̄ | SD
Amenity of the exercises | 3.28 | 0.62
Ease of exercises | 3.46 | 0.75
Facilitating intonation practice | 3.35 | 0.67
Facilitating understanding of intonation | 3.22 | 0.82
Table 4. Items of the scale “overall software evaluation”.
Scale Items 1 | X̄ | SD
Preference to continue using the software in their studies | 3.42 | 0.74
Preference for using the software at home | 3.35 | 0.70
Preference for spending more time on intonation with the software | 2.96 | 0.91
Preference for use in intonation activities | 3.36 | 0.72
1 Score: min = 1; max = 4.
Table 5. Summary of results.
Scales 1 | X̄ | SD | Cronbach's α
Technical-didactic dimension | 3.35 | 0.36 | 0.73
Overall assessment dimension | 3.27 | 0.76 | 0.80
Overall questionnaire | 3.34 | 0.37 | 0.82
Global evaluation questions
Software rating question 1 | 3.58 | 0.49 | -
Software recommendation question 2 | yes = 140; no = 1
Emotional balance question 3 | well-being = 496; discomfort = 93
1 Score: min = 1; max = 4. 2 Yes = would recommend; no = would not recommend. 3 Number of emotions.