Article

Accuracy of Hidden Markov Models in Identifying Alterations in Movement Patterns during Biceps-Curl Weight-Lifting Exercise

by André B. Peres 1, Mário C. Espada 2,3,4,*, Fernando J. Santos 2,3,5, Ricardo A. M. Robalo 2,5, Amândio A. P. Dias 6, Jesús Muñoz-Jiménez 7, Andrei Sancassani 8, Danilo A. Massini 8 and Dalton M. Pessôa Filho 8

1 Instituto Federal de Educação, Ciência e Tecnologia de São Paulo (IFSP), Piracicaba, São Paulo 13414-155, Brazil
2 Instituto Politécnico de Setúbal, Escola Superior de Educação, CIEF, CDP2T, 2914-504 Setúbal, Portugal
3 Life Quality Research Centre (LQRC-CIEQV, Leiria), Complexo Andaluz, Apartado, 2040-413 Rio Maior, Portugal
4 CIPER, Faculdade de Motricidade Humana, Universidade de Lisboa, 1499-002 Lisboa, Portugal
5 Faculdade de Motricidade Humana, Universidade de Lisboa, 1499-002 Cruz Quebrada, Portugal
6 Egas Moniz School of Health and Science, Centro de Investigação Interdisciplinar Egas Moniz, 2829-511 Caparica, Portugal
7 Research Group in Optimization of Training and Sports Performance (GOERD), University of Extremadura, Av. De la Universidad, s/n, 10003 Cáceres, Spain
8 Department of Physical Education, São Paulo State University—UNESP, Bauru, São Paulo 17033-360, Brazil
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 573; https://doi.org/10.3390/app13010573
Submission received: 18 November 2022 / Revised: 23 December 2022 / Accepted: 28 December 2022 / Published: 31 December 2022
(This article belongs to the Special Issue Biomechanics and Human Motion Analysis)

Abstract

This paper presents a comparison of mathematical and kinematic motion analysis regarding the accuracy of detecting alterations in positional sequence patterns during the biceps-curl lifting exercise. Two different methods, one with and one without metric data from the environment, were used to identify the changes. Ten volunteers performed a standing biceps-curl exercise with additional loads. A smartphone recorded their movements in the sagittal plane, providing information on joint and barbell sequential position changes during each lift attempt. An analysis of variance revealed significant differences in joint position (p < 0.05) among executions with three different loads. Hidden Markov models were trained with data from the bi-dimensional coordinates of the joint positional sequence to identify meaningful alterations with load increment. Agreement tests were performed between the results provided by the models trained with environmental measurements and by those trained with image coordinates only. The results demonstrated that it is possible to efficiently detect changes in positional sequence patterns with and without measurement and/or environmental control, reaching an agreement of 86% between the two methods, and of 100% and 86%, respectively, between each method and the ANOVA results. The method developed in this study illustrates the viability of smartphone camera use for identifying positional adjustments caused by the inability to control the limbs within an adequate range of motion as load increases during a lifting task.

1. Introduction

Correct supervision during the performance of resistance exercises is imperative for quality of execution, which can generate improvements in performance and avoid injuries resulting from inadequate posture or incorrect loads [1,2]. The resources available for resistance training have increased in recent years, and so has the number of injuries caused by this type of training, mainly due to a lack of proper supervision during execution [3].
An assessment of load, posture and movement during a resistance exercise is often conducted by an experienced coach who, through visual examination, analyses the exercise movements without using any equipment that provides quantitative data on the observed parameters [4]. Coaches perform this form of supervision as a qualitative analysis, based on systematic observation and on an introspective judgment of the quality of human movement that relies solely on the experience of the evaluator [5].
An increase in load during the performance of a resistance exercise can cause a change in movement patterns. A study that analyzed a squat exercise [6] verified a change in movement patterns with increased load. The results revealed that when load was added to the exercise, hip and knee angles increased to accommodate the increased load.
Spatial trajectories, which are described by joint movements during the performance of an exercise, can also be used as references for movement pattern analysis. A trajectory can be defined as the set of all positions occupied by a moving body relative to a given reference [7]. Here, this reference is taken to be the point of view of the evaluator who is supervising the exercise [5]; as such, physical exercises can be characterized by their movement patterns and specific space-time structures, and are therefore susceptible to being modeled as temporal trajectories in a given space, using measurements that are intimately correlated with visual observations [8].
Studies that use trajectory analysis to evaluate human movement have been present in the literature for some time and can be considered a resource for gesture recognition [9]. The trajectories described by the heel and toes during gait were previously analyzed [10]; according to the author of that study, these trajectories can be used to represent specific motor control tasks connected with posture and gait changes. In another study, an algorithm for extracting and classifying bidimensional movements in a frame sequence based on movement trajectories was developed; the motion patterns were learned from the extracted trajectories using neural networks [11], with a recognition rate of 94.42%. Suzuki et al. [12] proposed a method for learning movement patterns and detecting anomalies in these patterns by analyzing human trajectories through long-term observation. The spatial and temporal features of those trajectories were analyzed using hidden Markov models (HMMs). Another study examined human movement patterns in a space-time context by clustering geographic data extracted from smartphone global positioning systems [13], and Calin et al. [14] explored movement analysis by dividing movement into three main components, body posture (posture), range of motion (reach) and movement pattern (positional sequence), obtaining classification accuracies of up to 56% for HMMs. With constant technological improvements, the motion capture of body movements has become more accessible and more widely utilized in sports activities. Currently, with the help of a smartphone camera, it is possible to evaluate movement patterns and measure physical performance [15]. Visual resources that can perform reliable measurements of the studied object in a noninvasive manner and without interfering with the natural flow of the sport, such as the video assistant referee (VAR), are used, for example, in soccer.
In a similar manner, the use of noninvasive methods that are capable of tracking and automatically supervising resistance training exercises could provide great benefits to athletes [16]. For that to occur, it is necessary to use a statistical/mathematical model that can provide measurements or distinguish motion patterns that accurately detect significant changes in human movements. HMMs are suitable for pattern recognition and can be used to solve this problem [17]. Originating from Markov chains, they possess a finite number of states and use a double stochastic process that measures transitions between states and generates an exit symbol for each state, both linked to a probability of occurrence [18].
To apply an HMM, three different problems must be solved [18]: (1) the probability of an unknown observation sequence must be determined; (2) the best sequence of hidden states must be estimated; and (3) the model parameters must be trained. In the model learning phase, the training data are divided into groups (clusters). Training is performed at maximum likelihood using the Baum–Welch algorithm [17]. After completion of the training phase, a recognition process is conducted based on choosing the model that provides the maximal probability of the analyzed observation sequences P(O|λ) [19]. In this study, we aimed to identify changes in load displacement that occur due to alterations in the subjects' movement trajectories during a biceps-curl exercise. Load displacement and joint positional sequence were quantified by means of bi-dimensional video capture while the subjects performed elbow flexion during the biceps-curl exercise, and the data thus obtained were used to train the HMMs. This work tested the hypothesis that HMMs provide accurate automatic detection of alterations in shoulder, elbow and load positional sequences that are considered significant according to kinematic assessment.

2. Methods

2.1. Participants

The participants in this study were ten male volunteers (age: 26.3 ± 4.9 years, height: 177.6 ± 8.0 cm, body weight: 86.2 ± 16.7 kg) who were recruited from a local health club. The inclusion criteria were as follows: (1) age ≥ 22 and ≤ 32 years; (2) engagement in regular exercise at least three times a week in the nine months prior to data collection; and (3) at least six months of experience performing the biceps curl with a barbell. Participants with any disease or medical condition that might have affected their task performance were excluded. This study was conducted in accordance with the Declaration of Helsinki and approved by the UNESP ethics committee under number 17486119.0.0000.5398.

2.2. Procedures

Experimental data were collected at the Laboratory of Human Performance Optimization in Sport (LABOREH). For data collection, three green hemispherical markers (25 mm in diameter) were used. The markers were placed on the barbell and at the greater tubercle and the lateral epicondyle of the humerus (Figure 1).
The subjects were instructed to do the following before testing: (i) maintain their usual dietary and sleep habits; (ii) avoid intake of any energy- or performance-enhancing supplements or drinks for a period of at least 24 h before testing; and (iii) avoid intake of beverages containing alcohol or caffeine for a period of at least 24 h before testing. They were also instructed to wear comfortable clothes during testing.
All subjects performed three sets of three repetitions with 10 min of rest between sets, following the technique fundamentals for the barbell biceps curl [20]. In the first set, the repetitions were performed using only the barbell; the second set was performed with an additional load equivalent to 25% of the subject's body weight added to the barbell [21]. Finally, the third set was performed with an additional load equivalent to 50% of the subject's body weight.
Positional and temporal data were collected through the use of a smartphone digital camera (Galaxy S9 with 2 megapixels and UHD 4K resolution) that was placed so as to have a perpendicular view of the subject’s sagittal plane [22], as shown in Figure 2.
To calibrate the measurements, fixed markers were placed in the background in a plane coincident with the subject's sagittal plane. Measurements of body segments (arm, forearm) and height were also taken to verify the calibration. It was therefore possible to measure the subjects' movements from their bidimensional coordinates in the sagittal plane.
Video capture in the MPEG-4 format was performed for all sets and repetitions at a frequency of 30 frames per second [23,24,25,26,27,28,29,30]. The duration of each video matched the duration of the repetitions. All videos were digitally edited using Wondershare Filmora version 9 (Wondershare Filmora, Hong Kong, China) [31] to apply a chroma-key filter [32] to the markers and an alpha channel [33] for enhanced marker contrast. Kinovea 0.8.27 software (Kinovea, Bordeaux, France) [34] was used to track the markers, and the marker coordinates were exported to XML (Extensible Markup Language) files. The origin of the coordinates in the Cartesian plane (i.e., an ordered pair (x, y) representing position on the horizontal x-axis, from left to right, and on the vertical y-axis, from bottom to top) [35] was assigned to the markers placed at the shoulder, elbow and barbell and was recalibrated for each of the repetitions.
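As a minimal sketch of the origin recalibration step (assuming the tracked marker coordinates have already been exported from Kinovea and loaded as NumPy arrays; the function name and sample values below are hypothetical, not taken from the study):

```python
import numpy as np

def recalibrate_origin(track: np.ndarray) -> np.ndarray:
    """Shift a marker track so that its position in the first frame of the
    repetition becomes the origin (0, 0) of the Cartesian plane."""
    return track - track[0]

# Hypothetical track: (x, y) pixel coordinates of one marker over three frames.
shoulder_rep1 = np.array([[412.0, 305.0],
                          [413.5, 306.2],
                          [415.1, 309.8]])
print(recalibrate_origin(shoulder_rep1))
```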

2.3. Displacement and Vertical Distance Measurements

Displacement measurements were obtained from the markers placed at the greater tubercle (shoulder) and the lateral epicondyle (elbow) of the humerus, and on the barbell (see Figure 1), according to the following formula for the Euclidean distance:
Δd = √((x_f − x_i)² + (y_f − y_i)²)
where Δd is the displacement value, (x_f, y_f) are the coordinates of a marker at its final position, and (x_i, y_i) are the coordinates of the marker at its initial position.
For the barbell, the vertical distance was also estimated from its variation along the y-axis only:
Δy = y_f − y_i
where Δy is the vertical distance, y_f is the y-coordinate at the final position, and y_i is the y-coordinate at the starting position. The barbell Δy may provide additional information indicating undesirable adjustments in limb position (i.e., shoulder flexion, elbow rise and lumbar extension) with load increment [36,37].
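A minimal sketch of these two measures in Python (assuming calibrated coordinates are available as NumPy arrays; the names and values are illustrative, not data from the study):

```python
import numpy as np

def displacement(p_initial: np.ndarray, p_final: np.ndarray) -> float:
    """Euclidean displacement Δd between the initial and final marker positions."""
    return float(np.hypot(p_final[0] - p_initial[0], p_final[1] - p_initial[1]))

def vertical_distance(p_initial: np.ndarray, p_final: np.ndarray) -> float:
    """Vertical distance Δy along the y-axis only (used for the barbell)."""
    return float(p_final[1] - p_initial[1])

# Hypothetical coordinates (cm after calibration) at the start and end of the
# upward phase of one repetition.
elbow_start, elbow_end = np.array([0.0, 0.0]), np.array([3.2, 5.8])
print(displacement(elbow_start, elbow_end))       # Δd for the elbow marker
print(vertical_distance(elbow_start, elbow_end))  # Δy
```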

2.4. Statistical Analysis

Statistical analysis was performed using IBM SPSS Statistics 26.0 (SPSS, Chicago, IL, USA). One-way ANOVA was used to verify the existence of significant differences in the Δd and Δy measures for the shoulder, elbow and barbell during the upward/concentric phase of the biceps-curl exercise across the different loads tested. A post hoc Tukey test was used. The sample power was determined with G*Power 3 from data including the effect size (partial eta squared, η²p) of the ANOVA test between the Δd and Δy values for the 0% vs. 25% and 50% lift attempts, the actual sample size N, and α = 0.05 [38]. Eta squared (η²) was calculated, and its correspondence to Cohen's d was used to classify the effect as small (d = 0.2), medium (d = 0.5) or large (d = 0.8) [39]. The significance level was set at 5%.
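The same analysis steps can be reproduced with open-source tools instead of SPSS; the sketch below (SciPy for the one-way ANOVA and statsmodels for the Tukey post hoc test) uses hypothetical Δd values and is not the authors' workflow:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-repetition Δd values (cm) for one marker and one subject,
# grouped by additional load.
load_0  = np.array([1.5, 1.8, 1.9])
load_25 = np.array([2.8, 3.1, 3.3])
load_50 = np.array([7.2, 8.1, 8.6])

# One-way ANOVA across the three load conditions (alpha = 0.05).
f_stat, p_value = stats.f_oneway(load_0, load_25, load_50)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post hoc test to locate which pairs of loads differ.
values = np.concatenate([load_0, load_25, load_50])
groups = ["0%"] * 3 + ["25%"] * 3 + ["50%"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```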

2.5. HMM Modeling

In HMMs, the hidden state sequence can be estimated from the visible observation sequence [40], and the model is characterized symbolically by five elements [18,41]. N represents the number of states in the model; at time t, the state variable q_t takes a value from S = {S_1, S_2, …, S_N}. M is the number of distinct observation symbols the HMM can emit, designated O = {O_1, O_2, …, O_M}. A transition probability matrix A = [a_ij] gives the transition probabilities between states, where a_ij = Prob(q_(t+1) = S_j | q_t = S_i), 1 ≤ i, j ≤ N, is the probability of moving to state S_j at time t + 1 given that the state at time t is S_i. The emission probability matrix B = [b_j(k)] gives the observation probabilities in each state, where b_j(k) = Prob(O_k(t) | q_t = S_j), 1 ≤ j ≤ N, 1 ≤ k ≤ M, is the probability of emitting the k-th observation symbol in state S_j. Finally, an initial state distribution π = {π_i} gives the probability that a new input sequence starts in a given state at the initial time t = 1, where π_i = Prob(q_1 = S_i), 1 ≤ i ≤ N.
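To make the notation concrete, the sketch below evaluates P(O|λ) with the forward algorithm for a small discrete HMM; the matrices are toy values for illustration only, not parameters estimated in the study:

```python
import numpy as np

def forward_probability(A, B, pi, observations):
    """Forward algorithm: P(O | λ) for a discrete HMM λ = (A, B, π).

    A  -- (N, N) transition matrix, A[i, j] = P(q_{t+1} = S_j | q_t = S_i)
    B  -- (N, M) emission matrix, B[j, k] = P(O_k | q_t = S_j)
    pi -- (N,) initial state distribution
    observations -- sequence of observation symbol indices (0..M-1)
    """
    alpha = pi * B[:, observations[0]]   # initialization, t = 1
    for o in observations[1:]:           # induction over t = 2..T
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())            # termination: sum over final states

# Toy 3-state model with a left-to-right (linear) topology.
A = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.1, 0.9]])
pi = np.array([1.0, 0.0, 0.0])
print(forward_probability(A, B, pi, [0, 0, 1, 1]))
```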
The ANOVA results, which identified the significant differences in bi-dimensional displacement, provided the criteria for modeling the marker position data with HMMs. The Markovian model algorithms [42,43,44,45] were implemented in GNU Octave 4.2.1 (GNU Octave, Madison, WI, USA) [46]. For analysis, the HMMs were provided with bi-dimensional data on the trajectories of each marker during the ascending phase of the exercise. Research on human movement [47,48,49,50] in which a linear type of topology was used [51] provided the reference for the general topology of the HMMs.
The model's learning phase uses clusters [47] obtained with the k-means algorithm [52,53,54,55,56]. The term cluster refers to the set of positional coordinates (x, y) defining a common segment of the path described by the shoulder, elbow and barbell during the biceps curl. A simplified example of the relation between barbell positioning during the exercise's upward movement and the Markovian modeling is shown in Figure 3. Each of the three barbell positions, defined as states S1, S2 and S3 and derived from the clustering process, presents a transition probability (go to the next state or remain in the same state) and an observation probability b_j(k) for each state [57].
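A minimal sketch of this clustering step (using scikit-learn's k-means rather than the Octave routines of the study; the synthetic barbell track below is illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical barbell track for the upward phase of one repetition:
# (x, y) coordinates sampled at 30 Hz after origin recalibration.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
barbell_track = np.column_stack([10 * t, 55 * t]) + rng.normal(0, 0.5, (60, 2))

# k-means groups the positional coordinates into clusters; each cluster is
# interpreted as one hidden state (e.g., S1, S2, S3) along the lifting path.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(barbell_track)
state_sequence = kmeans.labels_    # cluster index assigned to every frame
print(kmeans.cluster_centers_)     # representative position of each state
print(state_sequence[:10])
```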
For the HMM modeling of each subject, the three repetitions performed solely with the barbell were used as the standard model of the positional sequence pattern for the shoulder, elbow and barbell, and served as the basis for training the model. Using the acquired data, the relation between the 25% and 50% loads was also tested, using the 25% body weight condition for training. In this stage, the model parameters λ = (A, B, π) were estimated from the observed symbols O such that the observation probability P(O|λ) was maximized. For this computation, the Baum–Welch algorithm was employed, which iteratively increases the probability of the observation sequences [47].
The movement pattern recognition for the trajectories provided by the markers was based on the maximum likelihood criterion. For the comparison between a trained model and a new observation sequence, the forward algorithm was used [18,58]; the algorithm estimates the probability that the observation data could have been generated by the model. By applying the HMMs, we obtained the minimum number of states necessary to recognize, as changes in the pattern of movement execution, the significant differences detected by ANOVA when the subject performed the exercise with different loads.
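The sketch below illustrates this training-and-recognition step using the hmmlearn package rather than the GNU Octave routines of the study: a Gaussian HMM is fitted by Baum–Welch to the reference repetitions and a new attempt is scored with the forward algorithm. The linear topology used in the study would additionally require constraining the transition matrix, and all data here are placeholders:

```python
import numpy as np
from hmmlearn import hmm

# Reference repetitions (e.g., barbell marker, 0% load): each is an array of
# (x, y) coordinates for the upward phase; the values are hypothetical.
reference_reps = [np.random.rand(60, 2) for _ in range(3)]
X_train = np.vstack(reference_reps)
lengths = [len(rep) for rep in reference_reps]

# Gaussian HMM trained by the Baum-Welch (EM) algorithm on the reference pattern.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(X_train, lengths)

# Recognition by maximum likelihood: score() runs the forward algorithm and
# returns log P(O | λ) for a new observation sequence.
new_attempt = np.random.rand(60, 2)   # e.g., a repetition with added load
print(model.score(new_attempt))
```

A markedly lower likelihood for a lift attempt with added load, relative to the reference repetitions, would flag an altered movement pattern.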
The results (i.e., the minimum number of states needed to recognize the differences detected by ANOVA) obtained using data from measurements of the environment and those obtained using only the image coordinates were subjected to agreement analysis using the Bland–Altman plot [59] in MedCalc 19.0.3 (MedCalc Software, Ostend, Belgium); in this analysis, the agreement limits were defined as ±1.96 × SD (the standard deviation of the difference between the N values for each method, i.e., a 95% confidence interval) [60].
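The agreement limits can also be computed directly; the sketch below is a generic Bland–Altman calculation (not MedCalc's implementation), and the N values are hypothetical:

```python
import numpy as np

def bland_altman_limits(n_metric, n_nonmetric):
    """Mean difference (bias) and 95% limits of agreement (±1.96 × SD) between
    the minimum numbers of states N found by the two methods."""
    diff = np.asarray(n_metric, dtype=float) - np.asarray(n_nonmetric, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical N values for a few participants (metric vs. image-only method).
metric     = [20, 35, 30, 45, 20, 40]
non_metric = [20, 35, 30, 45, 23, 40]
print(bland_altman_limits(metric, non_metric))   # (bias, lower limit, upper limit)
```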

3. Results

The mean values, in centimeters, of the maximal displacements obtained for each marker (Δd for the shoulder and elbow; Δy for the barbell) during the upward phase of the exercise are presented in Table 1.
Using the values presented in Table 1, one-way ANOVA was conducted to verify the existence of significant differences in the maximal Δd and Δy values for each marker and each subject. Eight participants showed significant differences in the shoulder and elbow values in the 0%–50% load comparison. In the 25%–50% comparison, differences in the shoulder and elbow (Δd) values were identified in eight and five participants, respectively. For the barbell (Δy), a difference was observed only in the 0%–50% comparison, and in seven participants. The effect sizes (η²) for the shoulder, elbow and barbell were 0.590, 0.432 and 0.121, respectively. These effect sizes corresponded to Cohen's d values of 2.4, 1.7 and 0.7, which were associated with sample powers of >99%, 97% and 35% for the shoulder, elbow and barbell, respectively. Therefore, the number of participants failed to reach satisfactory power only for the barbell results, despite the medium to large probability of higher means as load increased for all markers.
Through HMM modeling, the minimum number of states (N) required to recognize differences in the positional sequence pattern of each marker was found. For this, the individual significance values for each joint in each subject in the comparison of the loads were considered. The data extracted from this modeling, together with the metric data from the environment, are presented in Table 2. Table 2 also shows the states obtained with video analysis only (no metric or environmental coordinates); these data were also subjected to HMM modeling in which the minimum number of states (N) required to recognize the differences in the sequential pattern of positions was computed.
Based on the data presented in Table 2, few changes occur in the number of states necessary for HMM recognition when the two methods are compared. To compare the methods, a Bland–Altman analysis was performed; the results are shown in Figure 4. They confirm that, given the few changes in the minimum number of states necessary (Table 2), there are no differences between the methods.
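For reference, the reported η² values reproduce the stated Cohen's d values under the usual two-group conversion (an assumption about the formula used, shown here only as a check):

```latex
% Conversion between eta squared and Cohen's d for a two-group comparison
d = 2\sqrt{\frac{\eta^{2}}{1-\eta^{2}}}
% e.g. \eta^{2}=0.590 \Rightarrow d = 2\sqrt{0.590/0.410} \approx 2.4
%      \eta^{2}=0.432 \Rightarrow d \approx 1.7; \qquad \eta^{2}=0.121 \Rightarrow d \approx 0.7
```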

4. Discussion

This study examined whether an HMM trained using different types of coordinates derived from the same movement is able to detect significant variations in shoulder, elbow and load positional sequences, and therefore changes in the movement pattern of the biceps-curl exercise. As demonstrated in Table 2, a linear-topology HMM trained with Cartesian coordinates [47,61] and corrected using environmental measurements was able to detect significant adjustments in movement patterns in the sagittal plane, which was confirmed by ANOVA. Therefore, the findings support confidence in HMMs for detecting undesired joint positional adjustments when comparing the standard reference of a simple human lifting movement (e.g., a single-joint action with no load) to lift attempts with heavier loads.
For the ANOVA test, the maximal displacement values obtained with the three different loads were provided, and the HMMs were trained with the three executions of a given load (0% or 25%) for comparison. With HMMs, the use of a small number of training models provides a more rigorous analysis and yields results that are in better agreement with the reference model [62]. In this study, we searched for the limits at which the pattern of the reference model was no longer recognized; in other words, we searched for instances in which the movement pattern was altered. In this way, a minimum number of states was found that draws attention to the changes in the movement pattern executed by each subject, and these states indicated movements that were inadequate for lifting the specific load.
To dispense with the need for measurements of the environment and/or subject, HMM modeling was completed using only image coordinates. The positional sequence of coordinates provided by the markers while the subjects performed an upward movement with the barbell were obtained through HMM training and testing. Table 2 presents the results of these methods (nonmetric); they differ little from the results obtained using the environmental data. Of 36 cases shown to have significant variation in movements indicated by the environmental measurements, 31 were also identified by the method that uses only image coordinates, an agreement of approximately 86% between the methods.
It is noteworthy that the analysis of the movement described in this work evaluates the adequacy of the movement by comparing the results with a reference that already presents great similarity to the analyzed movement. Wearable sensors used to identify movements made during different actions, such as movements in a badminton game, can provide accuracies of up to 97% in recognizing the type of movement [63]. The authors of the cited study also sought to discriminate the skill levels of professional and amateur badminton athletes, obtaining satisfactory accuracy values ranging from 83.3% to 90%, which is comparable to the recognition percentage obtained by applying HMMs in our research. Current studies in which wearable sensors are used to recognize daily activities such as walking, going up or down stairs, sitting, standing and lying down achieve an accuracy of 93.77% in the recognition of these activities [64], in the absence of information about the skills or evolution of the practitioner. Hence, the accuracy of methods that recognize movement patterns from images with no information on the metrics of the environment reaches approximately 90%, which is close to that observed for the HMMs in this study.
As in the work of Ghorbani Faal et al. [65], to conduct an adequate comparison between methods, Bland–Altman agreement tests were used to compare the data acquired with the image coordinates (CI) and with the environmental data (CA). Figure 4 shows Bland–Altman plot comparisons of the two HMM modeling methods for all the loads tested. The plots show that the mean difference between methods is close to zero and that most of the obtained values are within the agreement limits. For the elbow, all values found are within the confidence interval. This similarity between the two methods (CI and CA) emphasizes that movement pattern analysis, in the absence of data from the environment, is sufficient to detect changes in movement patterns during exercise.
The use of image coordinates alone for movement pattern analysis, by means of the trajectories performed by the objects, joints and limbs during the exercise, could assist in exercise supervision, since it is an ecological method: no information from the environment, nor any other type of information, is necessary, and the analysis does not hinder exercise execution. Furthermore, this analysis could provide a more solid basis for qualitative exercise analysis, as described by Knudson and Morrison [5], as well as single out inadequate loads [6] or postures during the performance of resistance exercises. For example, the HMM method has been explored as an additional tool to help amateur athletes learn the correct movements for a specific action in a sport, aiming to improve their skill level [63], as well as an analysis to support personalized sport training prescriptions according to the abilities of individual practitioners [66].
Unlike previously described methods [63], the method described in this work does not require the use of environmental measurements, expensive computational resources, or wearable sensors, all of which can increase the cost of performing a supervised activity [62]. Based on videos, it is possible to analyze the movements performed in exercises even if the videos were not recorded for this purpose. Thus, it is possible to analyze videos recorded in the past using the method described here.
A limitation of our method is that it analyzes movement displacement in a two-dimensional (2D) way. In turn, three-dimensional (3D) analyses could provide detailed information on motion in other planes of movement, and therefore capture movements in the transverse plane, i.e., possible changes in joint position that might occur to maintain postural adequacy with increased load. In addition, a 3D model would provide more accurate data and would not be subject to the parallax and perspective errors that occur in 2D analyses. However, the biceps-curl movement occurs essentially in the sagittal plane rather than in the transverse and frontal planes [67], and although the 3D approach has an advantage over the 2D approach, 2D analyses such as the one described in this study allow the analysis of motion using data collected in any type of video. Regardless, HMM analysis using 3D data could identify further alterations caused by the increase in load, and thus provide more information about changes in the trajectories of movements through simultaneous modeling in different planes [68].
Future work should include analyses of the subjects' movements in planes other than the sagittal plane (i.e., the use of 3D coordinates). This would make it possible to verify whether the positional sequence of joints and objects can be further refined and whether HMM training can be extended beyond the results obtained with the bi-dimensional analysis. This would make the HMM a potential tool to support the human-based diagnosis of motor patterns, with applications ranging from disorders in neural pathologies to the motor optimization of athletic performance.

5. Conclusions

The findings of the current study showed that the movement trajectory is adjusted, modifying the positional sequence of the barbell and joints, as the load is increased during the biceps-curl exercise. In addition, the ability of HMMs to accurately detect alterations in the upward trajectories of the object and body parts was also demonstrated, supporting the suggestion that HMMs might be a suitable automated method for analyzing the ability to perform single-joint resistance exercises.
Hence, one possible application of the current findings is movement analysis using simple video recordings, which might be a practical and effective alternative for exercise supervision. For example, it could provide cues for maintaining the correct load for the proper execution of resistance exercises without causing postural damage or modifying muscle recruitment. In addition, the HMM method can easily be implemented on mobile devices through which video is recorded, which highlights the practicality of automated supervision for movement monitoring using smartphone cameras.

Author Contributions

Conceptualization, A.B.P. and D.M.P.F.; methodology, A.B.P. and D.M.P.F.; formal analysis, A.B.P., M.C.E., F.J.S. and D.M.P.F.; investigation, A.B.P., M.C.E., F.J.S. and D.M.P.F.; supervision, A.B.P. and D.M.P.F.; data curation, A.B.P., M.C.E., A.S., D.A.M. and D.M.P.F.; writing—original draft preparation, A.B.P., M.C.E., F.J.S., R.A.M.R., A.A.P.D., J.M.-J., A.S., D.A.M. and D.M.P.F.; writing—review and editing, A.B.P., M.C.E., F.J.S., R.A.M.R., A.A.P.D., J.M.-J., A.S., D.A.M. and D.M.P.F.; visualization, A.B.P., M.C.E., F.J.S., R.A.M.R., A.A.P.D., J.M.-J., A.S., D.A.M. and D.M.P.F.; funding acquisition, M.C.E., F.J.S., R.A.M.R. and D.M.P.F. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the São Paulo Research Foundation—FAPESP (PROCESS 2016/04544-3) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brazil (CAPES-Finance Code 001) for partial financial support. This research was also funded by the Foundation for Science and Technology, I.P., Grant/Award Number UIDB/04748/2020 and by the Instituto Politécnico de Setúbal.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and was approved by the UNESP ethics committee under number 17486119.0.0000.5398.

Informed Consent Statement

All subjects signed an informed consent form prior to participation in the research.

Data Availability Statement

The data that support the findings of this study are available from the last author (dalton.pessoa-filho@unesp.br) upon reasonable request.

Acknowledgments

The authors would like to thank the team of research assistants who helped with data collection for this study as well as all the research participants for their time in completing the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Faigenbaum, A.D.; Myer, G.D. Resistance Training Among Young Athletes: Safety, Efficacy and Injury Prevention Effects. Br. J. Sports Med. 2010, 44, 56–63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Gentil, P.; Bottaro, M. Influence of Supervision Ratio on Muscle Adaptations to Resistance Training in Nontrained Subjects. J. Strength Cond. Res. 2010, 24, 639–643. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Kemler, E.; Noteboom, L.; van Beijsterveldt, A.M. Characteristics of Fitness-Related Injuries in The Netherlands: A Descriptive Epidemiological Study. Sports 2022, 10, 187. [Google Scholar] [CrossRef] [PubMed]
  4. Bonilla, D.A.; Cardozo, L.A.; Vélez-Gutiérrez, J.M.; Arévalo-Rodríguez, A.; Vargas-Molina, S.; Stout, J.R.; Kreider, R.B.; Petro, J.L. Exercise Selection and Common Injuries in Fitness Centers: A Systematic Integrative Review and Practical Recommendations. Int. J. Environ. Res. Public Health 2022, 19, 12710. [Google Scholar] [CrossRef] [PubMed]
  5. Knudson, D.V.; Morrison, G.S. Qualitative Analysis of Human Movement; Human Kinetics: Champaign, IL, USA, 2002. [Google Scholar]
  6. McKean, M.R.; Dunn, P.K.; Burkett, B.J. Quantifying the Movement and the Influence of Load in the Back Squat Exercise. J. Strength Cond. Res. 2010, 24, 1671–1679. [Google Scholar] [CrossRef] [PubMed]
  7. Sugiura, K.; Iwahashi, N.; Kashioka, H. Motion Generation by Reference-point-dependent Trajectory HMMs. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011. [Google Scholar] [CrossRef]
  8. Psarrou, A.; Gong, S.; Walter, M. Recognition of Human Gestures and Behaviour Based on Motion Trajectories. Image Vis. Comput. 2002, 20, 349–358. [Google Scholar] [CrossRef]
  9. Nagaya, S.; Seki, S.; Oka, R. A Theoretical Consideration of Pattern Space Trajectory for Gesture Spotting Recognition. In Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, Killington, VT, USA, 14–16 October 1996. [Google Scholar] [CrossRef]
  10. Winter, D.A. Foot Trajectory in Human Gait: A Precise and Multifactorial Motor Control Task. Phys. Ther. 1992, 72, 45–53; discussion 54–56. [Google Scholar] [CrossRef]
  11. Yang, M.H.; Ahuja, N.; Tabb, M. Extraction of 2D Motion Trajectories and its Application to Hand Gesture Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1061–1074. [Google Scholar] [CrossRef]
  12. Suzuki, N.; Hirasawa, K.; Tanaka, K.; Kobayashi, Y.; Sato, Y.; Fujino, Y. Learning Motion Patterns and Anomaly Detection by Human Trajectory Analysis. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007. [Google Scholar] [CrossRef]
  13. Ghosh, S.; Ghosh, S.K. THUMP: Semantic Analysis on Trajectory Traces to Explore Human Movement Pattern. In Proceedings of the 25th International Conference Companion on World Wide Web, Montreal, QC, Canada, 11–15 April 2016. [Google Scholar] [CrossRef]
  14. Calin, A.D.; Pop, H.F.; Boian, R.F. Improving Movement Analysis in Physical Therapy Systems Based on Kinect Interaction. In Proceedings of the 31st International BCS Human Computer Interaction Conference, Sunderland, UK, 3–6 July 2017. [Google Scholar] [CrossRef]
  15. Balsalobre-Fernández, C.; Glaister, M.; Lockey, R.A. The Validity and Reliability of an iPhone app for Measuring Vertical Jump Performance. J. Sports Sci. 2015, 33, 1574–1579. [Google Scholar] [CrossRef]
  16. Crema, C.; Depari, A.; Flammini, A.; Sisinni, E.; Haslwanter, T.; Salzmann, S. Characterization of a Wearable System for Automatic Supervision of Fitness Exercises. Measurement 2019, 147, 106810. [Google Scholar] [CrossRef]
  17. Juang, B.H.; Rabiner, L.R. Hidden Markov Models for Speech Recognition. Technometrics 1991, 33, 251–272. [Google Scholar] [CrossRef]
  18. Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef] [Green Version]
  19. Hong, K.; Lin, G. State Classification of Transformers using Nonlinear Dynamic Analysis and Hidden Markov models. Measurement 2019, 147, 106851. [Google Scholar] [CrossRef]
  20. Baechle, T.R.; Earle, R.W. Essentials of Strength Training and Conditioning, 3rd ed.; Human Kinetics: Champaign, IL, USA, 2008. [Google Scholar]
  21. Vanderburgh, P.M.; Mahar, M.T.; Chou, C.H. Allometric Scaling of Grip Strength by Body Mass in College-age Men and Women. Res. Q. Exerc. Sport 1995, 66, 80–84. [Google Scholar] [CrossRef] [PubMed]
  22. Mota, Y.L.; Mochizuki, L.; Carvalho, G.A. Influência da Resolução e da Distância da Câmera nas Medidas Feitas pelo Software de Avaliação Postural (sapo). Rev. Bras. Med. Esporte 2011, 17, 334–338. [Google Scholar] [CrossRef]
  23. Ahmad, M.; Lee, S.W. Human Action Recognition using Multiview Image Sequences. In Proceedings of the International Conference on Pattern Recognition, Las Vegas, NV, USA, 26–29 June 2006. [Google Scholar] [CrossRef]
  24. Chen, H.-S.; Tsai, W.-J. A Framework for Video Event Classification by Modeling Temporal Context of Multimodal Features using HMM. J. Vis. Commun. 2014, 25, 285–295. [Google Scholar] [CrossRef]
  25. Raheja, J.L.; Minhas, M.; Prashanth, D.; Shah, T.; Chaudhry, A. Robust Gesture Recognition using Kinect: A Comparison between DTW and HMM. Opt. – Int. J. Light Electron. Opt. 2015, 126, 1098–1104. [Google Scholar] [CrossRef]
  26. Shin, J.; Lee, J.; Kim, D. Real-time Lip Reading System for Isolated Korean Word Recognition. Pattern Recognit. 2011, 44, 559–571. [Google Scholar] [CrossRef]
  27. Tran, C.; Doshi, A.; Trivedi, M.M. Modeling and Prediction of Driver Behavior by Foot Gesture Analysis. Comput. Vis. Image Underst. 2012, 116, 435–445. [Google Scholar] [CrossRef]
  28. Yamato, J.; Ohya, J.; Ishii, K. Recognizing Human Action in Time-sequential Images using Hidden Markov model. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; pp. 379–385. [Google Scholar] [CrossRef]
  29. Yang, T.H.; Wu, C.H.; Huang, K.Y. Coupled HMM-based Multimodal Fusion for Mood Disorder Detection Through Elicited Audio-visual Signals. J. Ambient. Intell. Hum. Comput. 2017, 8, 895–906. [Google Scholar] [CrossRef]
  30. Zimmermann, M.; Ghazi, M.M.; Ekenel, H.K.; Thiran, J.-P. Visual Speech Recognition Using PCA Networks and LSTMs in a Tandem GMM-HMM System. In Lecture Notes in Computer Science; Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics; Springer: Cham, Switzerland, 2017; pp. 264–276. [Google Scholar] [CrossRef] [Green Version]
  31. Wondershare Filmora. Available online: https://filmora.wondershare.com/pt-br/ (accessed on 15 March 2022).
  32. Van Den Bergh, F.; Lalioti, V. Software chroma keying in an immersive virtual environment: Research article. S. Afr. Comput. J. 1999, 24, 155–162. [Google Scholar]
  33. Matlani, P.; Shrivastava, M. Hybrid Deep VGG-net Convolutional Classifier for Video Smoke Detection. Comput. Model. Eng. Sci. 2019, 119, 427–458. [Google Scholar] [CrossRef] [Green Version]
  34. Kinovea. Available online: https://www.kinovea.org/ (accessed on 20 May 2022).
  35. Volkwyn, T.S.; Gregorcic, B.; Airey, J.; Linder, C. Learning to use Cartesian coordinate systems to solve physics problems: The case of ‘movability’. Eur. J. Phys. 2020, 41, 045701. [Google Scholar] [CrossRef]
  36. Oliveira, L.F.; Matta, T.T.; Alves, D.S.; Garcia, M.A.; Vieira, T.M. Effect of the shoulder position on the biceps brachii EMG in different dumbbell curls. J. Sports Sci. Med. 2009, 8, 24. [Google Scholar] [PubMed]
  37. Signorile, J.F.; Rendos, N.K.; Vargas, H.H.H.; Alipio, T.C.; Regis, R.C.; Eltoukhy, M.M.; Nargund, R.; Romero, M.A. Differences in muscle activation and kinematics between cable-based and selectorized weight training. J. Strength Cond. Res. 2017, 31, 313–322. [Google Scholar] [CrossRef] [PubMed]
  38. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191. [Google Scholar] [CrossRef] [PubMed]
  39. Fritz, C.O.; Morris, P.E.; Richler, J.J. Effect size estimates: Current use, calculations, and interpretation. J. Exp. Psychol. Genet. 2012, 141, 2–18. [Google Scholar] [CrossRef] [Green Version]
  40. Wang, S.; Chen, J.; Wang, H.; Zhang, D. Degradation Evaluation of Slewing Bearing using HMM and Improved GRU. Measurement 2019, 146, 385–396. [Google Scholar] [CrossRef]
  41. Chen, C.-N.; Liu, T.-K.; Chen, Y.J. Human-Machine Interaction: Adapted Safety Assistance in Mentality Using Hidden Markov Chain and Petri Net. Appl. Sci. 2019, 9, 5066. [Google Scholar] [CrossRef] [Green Version]
  42. Ghahramani, Z. Zoubin Ghahramani Software. 2019. Available online: http://mlg.eng.cam.ac.uk/zoubin/software.html (accessed on 20 May 2022).
  43. González, M.R. Advanced Techniques for Human Activity Classification; Universidad de Oviedo: Oviedo, Spain, 2012; Available online: http://digibuo.uniovi.es/dspace/handle/10651/5399 (accessed on 12 May 2022).
  44. Herta, C. K-Mean Cluster Algorithm. Available online: http://www.christianherta.de/kmeans.php (accessed on 15 May 2022).
  45. Mathworks. Hidden Markov Models (HMM)—MATLAB & Simulink. 2022. Available online: https://www.mathworks.com/help/stats/hidden-markov-models-hmm.html (accessed on 24 May 2022).
  46. Gnu Octave. 2022. Available online: https://www.gnu.org/software/octave/ (accessed on 27 May 2022).
  47. Fujii, K.; Gras, G.; Salerno, A.; Yang, G.Z. Gaze Gesture Based Human Robot Interaction for Laparoscopic Surgery. Med. Image Anal. 2018, 44, 196–214. [Google Scholar] [CrossRef]
  48. Kim, I.-C.; Chien, S.I. Analysis of 3D Hand Trajectory Gestures Using Stroke-based Composite Hidden Markov Models. Appl. Intell. 2001, 15, 131–143. [Google Scholar] [CrossRef]
  49. Urgo, M.; Tarabini, M.; Tolio, T. A Human Modelling and Monitoring Approach to Support the Execution of Manufacturing Operations. CIRP Ann. 2019, 68, 5–8. [Google Scholar] [CrossRef]
  50. Yamada, K.; Matsuura, K.; Hamagami, K.; Inui, H. Motor Skill Development using Motion Recognition Based on an HMM. Procedia Comput. Sci. 2013, 22, 1112–1120. [Google Scholar] [CrossRef]
  51. Fink, G.A. Markov Models for Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
  52. Andersson, M.; Gustafsson, F.; St-Laurent, L.; Prevost, D. Recognition of Anomalous Motion Patterns in Urban Surveillance. IEEE J. Sel. Top. Signal Process. 2013, 7, 102–110. [Google Scholar] [CrossRef] [Green Version]
  53. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical Human Activity Recognition Using Wearable Sensors. Sensors 2015, 15, 31314–31338. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Cointault, F.; Gouton, P. Texture or Color Analysis in Agronomic Images for Wheat Ear Counting. In Proceedings of the Third International IEEE Conference on Signal-Image Technologies and Internet-Based System, Shanghai, China, 16–18 December 2007; pp. 696–701. [Google Scholar] [CrossRef]
  55. Gusmão, G.; Machado, S.C.; Rodrigues, M.A. A New Algorithm for Segmenting and Counting Aedes Aegypti Eggs in Ovitraps. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 6714–6717. [Google Scholar] [CrossRef]
  56. Kothari, S.; Chaudry, Q.; Wang, M.D. Automated Cell Counting and Cluster Segmentation using Concavity Detection and Ellipse Fitting Techniques. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 8 June–1 July 2009; pp. 795–798. [Google Scholar] [CrossRef] [Green Version]
  57. Ong, W.H.; Koseki, T.; Palafox, L. An Unsupervised Approach for Human Activity Detection and Recognition. Int. J. Simul. Syst. Sci. Technol. 2013, 14, 42–49. [Google Scholar] [CrossRef]
  58. Lin, J.F.-S.; Kulic, D. Automatic Human Motion Segmentation and Identification using Feature Guided HMM for Physical Rehabilitation Exercises. In Proceedings of the Workshop on Robotics for Neurology and Rehabilitation, IEEE International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011. [Google Scholar]
  59. Bland, J.M.; Altman, D.G. Measuring Agreement in Method Comparison Studies. Stat. Methods Med. Res. 1999, 8, 135–160. [Google Scholar] [CrossRef]
  60. Giavarina, D. Understanding Bland Altman Analysis. Biochem. Med. (Zagreb) 2015, 25, 141–151. [Google Scholar] [CrossRef] [Green Version]
  61. Singla, A.; Roy, P.P.; Dogra, D.P. Visual Rendering of Shapes on 2D Display Devices Duided by Hand Gestures. Displays 2019, 57, 18–33. [Google Scholar] [CrossRef]
  62. Riccardi, G.; Hakkani-Tur, D. Active and Unsupervised Learning for Automatic Speech Recognition. In Proceedings of the 8th European Conference on Speech Communication and Technology, EUROSPEECH, Geneva, Switzerland, 1–4 September 2003; pp. 1–4. [Google Scholar] [CrossRef]
  63. Wang, Y.; Chen, M.; Wang, X.; Chan, R.H.M.; Li, W.J. IoT for Next-Generation Racket Sports Training. IEEE Internet Things J. 2018, 5, 4558–4566. [Google Scholar] [CrossRef]
  64. Mekruksavanich, S.; Jitpattanakul, A. Smartwatch-based Human Activity Recognition Using Hybrid LSTM Network. In Proceedings of the IEEE Sensors, Rotterdam, The Netherlands, 25–28 October 2020; pp. 1–4. [Google Scholar] [CrossRef]
  65. Ghorbani Faal, S.; Shirzad, E.; Sharifnezhad, A.; Ashrostaghi, M.; Naemi, R. A Novel Method for Field Measurement of Ankle Joint Stiffness in Hopping. Appl. Sci. 2021, 11, 12140. [Google Scholar] [CrossRef]
  66. Cui, C. Using Wireless Sensor Network to Correct Posture in Sports Training Based on Hidden Markov Matching Algorithm. J. Sens. 2021, 5, 1–11. [Google Scholar] [CrossRef]
  67. Nolte, K.; Krüger, P.E.; Els, P.S. Three dimensional musculoskeletal modelling of the seated biceps curl resistance training exercise. Sports Biomech. 2011, 10, 146–160. [Google Scholar] [CrossRef] [PubMed]
  68. Lv, F.; Nevatia, R. Recognition and Segmentation of 3-D Human Action Using HMM and Multi-class AdaBoost. In Proceedings of the Computer Vision—ECCV, 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006. [Google Scholar] [CrossRef]
Figure 1. Hemispherical marker locations for data collection.
Figure 2. Schematic representation of the video capture setup.
Figure 3. Relation between bi-dimensional positions during movement with Markovian modeling in the upward phase of the exercise.
Figure 4. Bland–Altman plots showing the mean difference (y-axis) and 95% agreement limits (±1.96 × SD, dotted lines) between the two HMM methods (with environmental measurements (CA) and using only image coordinates without environmental measurements (CI)) for the analysis of the number of states (N) required to recognize differences in the positional sequences of the elbow, shoulder and barbell during the biceps curl. Panels A and B depict comparisons for the shoulder (with loads at 50% and 25%), Panels C and D for the elbow (with loads at 50% and 25%), and Panel E for the barbell (with load at 50%).
Table 1. Mean maximal values achieved for each marker at the end of the trajectory (shoulder, elbow (Δd), and barbell (Δy)) for the three different loads (cm).

Load    Shoulder      Elbow         Barbell
0%      1.7 ± 0.4     4.4 ± 1.7     56.8 ± 7.7
25%     3.0 ± 1.3     6.4 ± 2.9     59.6 ± 8.5
50%     7.9 ± 3.8     10.4 ± 4.0    64.0 ± 9.0
Table 2. Minimum number of states necessary for HMM recognition of differences in the execution of the biceps curl with different loads, with and without the metric of the environment.

Marker     Loads       Minimum N per participant with a significant ANOVA difference (metric/non-metric)
Elbow      0%–50%      20/20, 35/35, 30/30, 45/45, 20/23 *, 40/40, 25/25, 35/35
Elbow      25%–50%     35/35, 15/16 *, 20/20, 35/35, 40/40
Shoulder   0%–50%      15/15, 15/20 *, 45/45, 35/35, 25/25, 45/45, 20/20, 30/30
Shoulder   25%–50%     25/25, 20/20, 25/25, 15/15, 45/45, 25/25, 35/35, 20/21 *
Barbell    0%–50%      30/30, 45/45, 60/60, 35/35, 45/45, 55/55, 40/42 *
*—different number of states necessary when comparing the two HMM models. Participants for whom ANOVA found no significant differences are omitted.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
