Article

Using AI Motion Capture Systems to Capture Race Walking Technology at a Race Scene: A Comparative Experiment

1 College of P.E. and Sports, Beijing Normal University, Beijing 100875, China
2 Physical Education Institute, Langfang Normal University, Langfang 065000, China
3 Physical Education Institute, Hebei Normal University, Shijiazhuang 050024, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 113; https://doi.org/10.3390/app13010113
Submission received: 21 November 2022 / Revised: 1 December 2022 / Accepted: 6 December 2022 / Published: 22 December 2022
(This article belongs to the Section Applied Biosciences and Bioengineering)

Abstract

Background: This study tested the reliability of the 3D coordinates of human joint points obtained by an AI motion capture system at a race walking competition. Methods: Using the direct linear transformation (DLT) 3D video method, 15 race walking athletes were filmed. We compared the means, standard deviations, and 95% confidence intervals of the multiple correlation coefficients, and of the differences, between the 3D coordinate–time curves of the human joint points parsed automatically by the AI motion capture system and those parsed manually. Results: Except for the left shoulder y coordinate, the left hip y and z coordinates, and the left toe tip z coordinate, the multiple correlation coefficients between each automatically parsed curve and the corresponding mean manually parsed curve were greater than 0.90, and the differences between the automatically and manually parsed curves of the left hand, left wrist, left hip, and left toe were less than 0.025 m. Conclusion: The 3D coordinates of the human joint points obtained via the AI motion capture system were highly similar to the mean of the 3D coordinates obtained via the manual analysis, supporting the use of the AI motion capture system as a highly reliable means of capturing race walking technique in competition.

1. Introduction

The analysis and diagnosis of race walking technique can help athletes improve their technical movements [1], reduce the physical demands of the sport [2], prevent sports injuries [3], and develop optimal movement procedures [4].
Image analysis involves storing a captured race walking video in digital form, importing it into analysis software, and extracting the data for research [5]. Image analysis offers high accuracy, repeatability, and ease of operation, but it is laborious and time-consuming, and its results can deviate depending on the operator’s proficiency. Nevertheless, it remains the main method for analyzing race walking technique both in China and abroad [6,7,8].
Human motion recognition tracks key points in the time domain to record human motion and transform it into a mathematical description of movement. It can be treated as a classification problem over time-varying data, in which pattern recognition matches test sequences against pre-calibrated reference sequences [9]. In recent years, inertial measurement technologies based on accelerometers, gyroscopes, and magnetometers, as well as wearable technologies based on electromyography, have developed rapidly, but they have limitations: the equipment may affect sports performance and cannot be used in the competition environment [10].
Computer vision uses cameras and computers to capture, track, and measure targets, and it applies AI algorithms to achieve automatic motion recognition [11], overcoming many limitations of traditional motion recognition technologies.
Beijing Sport University, Dalian Fast Move Technology Co., Ltd. (Dalian, China), and the China Track and Field Association jointly developed an AI motion capture system (Fastmove 3D Motion AI, Dalian Fast Move Technology Co., Ltd., Dalian, China). The system is a neural network model for automatic human motion tracking and pose estimation based on deep learning for computer vision [12]. It uses optical flow tracking to reduce the motion blur and background interference that arise when tracking a moving human body. Once the body has been captured, a convolutional neural network, trained on a large set of manually parsed datasets, estimates the human posture and builds a probability distribution map of the joint point positions within the outer contour of the body. The trained pose estimation model can therefore identify human joint points across different movements and automatically read their pixel coordinates [13]. After the 3D coordinate data of the joint points are collected, a model-driven calculation function automatically computes the human kinematic data.
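The Fastmove pipeline itself is proprietary, but the optical flow tracking step it describes is a standard computer vision technique. The following minimal sketch uses OpenCV’s pyramidal Lucas-Kanade tracker to illustrate the general idea only; the video path, the corner-based seed points, and all parameters are illustrative assumptions, not the system’s actual implementation.

```python
# Minimal sketch of pyramidal Lucas-Kanade optical-flow tracking with OpenCV.
# Illustrative only: video path, seed points, and parameters are assumptions.
import cv2

cap = cv2.VideoCapture("race_walk.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Seed points to track; a pose network would supply coarse joint detections,
# but strong corners serve as stand-ins here.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.3, minDistance=7)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

while points is not None and len(points) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track each point from the previous frame into the current one.
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, **lk_params)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
cap.release()
```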
At present, the kinematic analysis of race walking technique relies mainly on the manual analysis of race walking videos. Although manual analysis guarantees accuracy, it takes a substantial amount of time: changes in an athlete’s technique during competition and training cannot be fed back to the coach in a timely manner, so the coach loses the best opportunity to provide technical guidance. If Fastmove 3D Motion AI can be applied to the analysis of race walking technique, this feedback lag would be eliminated.
This study captured 3D video at a race walking event to test the similarities and differences between the 3D coordinate–time curves of the human joint points identified automatically by the AI motion capture system and those identified via manual analysis. The results were expected to establish how reliably the AI motion capture system captures race walking technique during an event and thus to verify the system’s applicability.

2. Materials and Methods

2.1. Subjects

We captured 3D video of a women’s 20 km race walking event during the 2022 National Walking Championships (Jinzhou, China), and the top 15 finishers were selected as subjects. See Table 1 for basic information on the chosen subjects.

2.2. Research Hypotheses

(1) The multiple correlation coefficient between the 3D coordinate–time curves of the human joint points identified automatically by the AI motion capture system and those identified via manual analysis should be no less than 0.90.
(2) The difference between the 3D coordinate–time curves of the human joint points identified automatically by the AI motion capture system and those identified via manual analysis should be less than 0.025 m. (A computational sketch of both criteria follows.)
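The paper defers the exact computation to [16,17,18]; a common formulation of the curve-similarity measure is the coefficient of multiple correlation (CMC) of Kadaba et al. [18]. The sketch below shows one plausible way to evaluate both hypotheses; the placeholder curves and the use of the mean absolute difference for hypothesis (2) are assumptions.

```python
# Sketch of the two hypothesis checks, assuming the CMC of Kadaba et al. [18]
# and a mean-absolute-difference criterion; curves here are placeholders.
import numpy as np

def cmc(curves: np.ndarray) -> float:
    """CMC for a set of waveforms; curves has shape (M, T): M repeated
    measurements of one coordinate over T time frames."""
    m, t = curves.shape
    frame_mean = curves.mean(axis=0)   # mean curve across measurements
    grand_mean = curves.mean()
    # Within-frame scatter of the M curves about the mean curve ...
    within = ((curves - frame_mean) ** 2).sum() / (t * (m - 1))
    # ... relative to the total scatter about the grand mean.
    total = ((curves - grand_mean) ** 2).sum() / (m * t - 1)
    return float(np.sqrt(1.0 - within / total))

rng = np.random.default_rng(0)
auto = np.sin(np.linspace(0, 2 * np.pi, 100))   # automatic curve (placeholder)
manual = auto + rng.normal(0, 0.005, 100)       # mean manual curve (placeholder)

print(cmc(np.vstack([auto, manual])) >= 0.90)   # hypothesis (1)
print(np.abs(auto - manual).mean() < 0.025)     # hypothesis (2)
```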

2.3. Test Tools

The AI motion capture system (Fastmove 3D Motion AI, Dalian Fast Move Technology Co., Ltd., Dalian, China) and two SONY high-definition cameras (SONY Company, Tokyo, Japan) were used for the AI motion capture. The focal length of each camera was set to 26 mm, the shooting frequency to 60 fps, and the shutter speed to 1/1200 s. Before the AI motion capture, the athletes’ route was spatially calibrated; the calibration space measured 5 m × 5 m × 2.5 m, and the video resolution was 1920 × 1080. The AI motion capture system automatically performed human key point recognition and 3D model reconstruction on the captured video. The 3D reconstruction model consisted of 21 landmark points; see Table 2 for their locations.
When manually analyzing the race walking video, we double-clicked the Fastmove 3D Motion software (Fast Move, Dalian, China) icon on the computer desktop to enter the system home page. We clicked “New” to enter the workflow interface, entered the project name and camera number, selected a new DLT calibration, imported the DLT model file and calibration video into the system, and clicked “Save Project” after completing the DLT calibration as required.
We then clicked the “New” button in the workflow interface to add a new video group and to complete the video group creation and file addition. We clicked the “Start” button to enter the motion analysis marking interface and completed, step by step, the file processing, analysis processing, and data processing (see Figure 1 for the data processing interface). After processing the 3D analytical data, we clicked “Save” and “Next” to enter the curve representation interface, where we could view the original and filtered data and observe how the curves changed as the cutoff frequency was adjusted. Finally, we selected the data to export and clicked the “Export CSV” or “OneKeyExport” button.

2.4. Data Collection

Before the competition, we set up four columns along the walking route, used the small balls suspended from each column as reference points (see Figure 2), and measured the distances between the ground projections of the four strings of balls. The four strings of balls formed a 5 m × 5 m × 2.5 m calibration space. We placed five ground coordinate marker points within the calibration space to establish a geodetic coordinate system: the x axis points in the direction of travel, the y axis points to the left of the travel route, and the z axis points upward.
We used the direct linear transformation (DLT) 3D video method [14]; the two high-definition cameras (SONY FDR-AX700, SONY Company, Tokyo, Japan) that recorded the subjects’ race walking technique had a resolution of 1920 × 1080.
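For readers unfamiliar with DLT [14], the sketch below shows the standard 11-parameter formulation: calibration solves a linear least-squares system from control points with known world coordinates (here, the suspended calibration balls), and reconstruction triangulates a joint’s 3D position from two or more calibrated views. This is a generic illustration of the method, not the Fastmove implementation.

```python
# Generic 11-parameter DLT [14], sketched for illustration only.
import numpy as np

def dlt_calibrate(xyz: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Solve one camera's 11 DLT parameters from >= 6 non-coplanar control
    points; xyz is (N, 3) world coordinates, uv is (N, 2) pixel coordinates."""
    rows, rhs = [], []
    for (x, y, z), (u, v) in zip(xyz, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs += [u, v]
    L, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return L

def dlt_reconstruct(Ls, uvs) -> np.ndarray:
    """Least-squares 3D point from each camera's DLT parameters L and its
    pixel observation (u, v) of the same joint."""
    rows, rhs = [], []
    for L, (u, v) in zip(Ls, uvs):
        rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        rhs += [u - L[3], v - L[7]]
    xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return xyz
```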
The cameras were numbered. The main optical axis of camera 1 was perpendicular to the athletes’ direction of travel and about 15 m from the center of the travel route. The main optical axis of camera 2 faced the oncoming athletes at an angle of about 45° to that of camera 1, and it was also about 15 m from the center of the athletes’ route (see Figure 2).
The cameras were connected to the AI motion capture system by wire, their main optical axes were set at a height of about 1.10 m, and the focal lengths were adjusted so that the imaging size and position were appropriate.
After the calibration space had been recorded, all of the calibration equipment was removed, and the race walking event was filmed. Each camera was turned on as an athlete was about to enter the calibration space and turned off after the athlete had left it.

2.5. Video Processing

The adult women’s 20 km race walking event was contested on a 1 km loop, so the athletes completed 20 laps. We filmed every lap for every athlete. By calculating each athlete’s average speed per lap, we found that in laps 6, 7, and 8 nearly all of the athletes walked very close to their whole-race average speed; they were still full of energy in these laps and were rarely obstructed by other athletes. We therefore chose the competition video of laps 6, 7, and 8 for analysis.
The AI motion capture system automatically analyzed the race walking video to obtain the two-dimensional coordinates of the 21 joint points of the human body recorded by the two cameras [15], and the direct linear transformation (DLT) method was used to synthesize these data into 3D coordinates in the geodetic coordinate system. The Butterworth low-pass filtering method, with a cutoff frequency of 10 Hz, was used to filter the 3D coordinate–time curves.
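A zero-lag Butterworth low-pass filter with the stated 10 Hz cutoff at the 60 fps sampling rate could be applied as in the sketch below; the fourth-order design and the forward-backward filtfilt pass are common biomechanics conventions and are assumptions here, since the paper specifies only the cutoff frequency.

```python
# Sketch of the 10 Hz low-pass step; the 4th-order, zero-lag design is an
# assumption, as the paper specifies only the cutoff frequency.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 60.0       # video sampling rate, 60 fps
CUTOFF = 10.0   # cutoff ("truncation") frequency, Hz

b, a = butter(N=4, Wn=CUTOFF / (FS / 2), btype="low")

# Placeholder (T, 3) track of one joint's x, y, z coordinates over T frames.
coords = np.random.default_rng(0).normal(size=(120, 3)).cumsum(axis=0)
smoothed = filtfilt(b, a, coords, axis=0)  # forward-backward pass: no phase lag
```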
Six professional researchers independently used the manual analysis function of the AI motion capture system to analyze each athlete’s race walking video, yielding 3D coordinate–time curves of the 21 joint points of the human body.
The means, standard deviations, and 95% confidence intervals of the multiple correlation coefficients, and of the differences, between the 3D coordinate–time curves of the human joint points obtained via the automatic analysis and those obtained via the manual analysis were calculated. For the calculation method, see [16,17,18].
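As an illustration of the summary statistics, the following sketch computes the mean, standard deviation, and a t-based 95% confidence interval over per-athlete values; the t-interval is an assumption, as the paper defers the exact procedure to [16,17,18].

```python
# Sketch of the summary statistics over per-athlete values; the t-based 95%
# confidence interval is an assumption.
import numpy as np
from scipy import stats

def summarize(values: np.ndarray):
    """Mean, SD, and 95% CI of a per-athlete statistic, e.g. the CMC of one
    joint coordinate computed for each athlete."""
    n = len(values)
    mean, sd = values.mean(), values.std(ddof=1)
    half = stats.t.ppf(0.975, df=n - 1) * sd / np.sqrt(n)
    return mean, sd, (mean - half, mean + half)

cmcs = np.array([0.998, 0.997, 0.999, 0.996, 0.998])  # placeholder values
print(summarize(cmcs))
```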

3. Results

The AI motion capture system uses image recognition and deep learning technology to automatically recognize the key points of the human body without markers and to reconstruct a three-dimensional model. Tracking efficiency is improved by using the optical flow method to follow the target, and recognition robustness is improved by the special training of the convolutional neural network, with a temporal constraint on continuous motion added to improve recognition accuracy. The system thus enables the fast collection of athletes’ technical action data and performs 3D kinematic analyses. If future versions of the AI motion capture system improve on algorithm optimization [19], feature selection [20], neural network connections [21], and other aspects, the system’s performance will improve greatly.
Except for the left shoulder y coordinate, the left hip y and z coordinates, and the left toe tip z coordinate, the mean and the lower limit of the 95% confidence interval of the multiple correlation coefficients between each joint point coordinate–time curve obtained via the automatic analysis and the corresponding mean curve obtained via the manual analysis are greater than 0.90. For 93% of the curves, both the mean multiple correlation coefficient and the lower limit of its 95% confidence interval exceed 0.95, and for 65% of the curves they exceed 0.98 (see Table 3).
The mean and the upper limit of the 95% confidence interval of the difference between each joint point coordinate–time curve obtained via the automatic analysis and the corresponding mean curve obtained via the manual analysis are less than 0.025 m. For 92% of the curves, both the mean difference and the upper limit of its 95% confidence interval are less than 0.015 m, and for 53% of the curves they are less than 0.010 m (see Table 4).
We compared the kinematic parameters obtained via the automatic analysis and the manual analysis at the instant of left-foot landing: the body center displacement velocity, torso tilt angle, and upper and lower limb joint angles (see Table 5). The differences in the shoulder joint angle and the elbow joint angle during the backward arm swing were relatively large, but they were not statistically significant. Overall, the kinematic parameters obtained via the automatic analysis and the manual analysis are consistent.
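Joint angles such as those in Table 5 are conventionally computed from three joint point coordinates as the angle between the two body-segment vectors meeting at the joint. The sketch below shows this standard computation; the example coordinates are illustrative placeholders chosen to yield a nearly straight support-leg knee, in line with Table 5.

```python
# Standard three-point joint-angle computation; coordinates are placeholders.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle in degrees at joint b between segments b->a and b->c,
    e.g. the knee angle from hip (a), knee (b), and ankle (c)."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

hip = np.array([0.00, 0.0, 0.90])
knee = np.array([0.02, 0.0, 0.50])
ankle = np.array([0.03, 0.0, 0.10])
print(joint_angle(hip, knee, ankle))  # ~178.6 degrees, a nearly straight knee
```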

4. Discussion

The research results largely support the first hypothesis, that the multiple correlation coefficient between the human joint point 3D coordinate–time curves obtained automatically via the AI motion capture system and those obtained via the manual analysis is not less than 0.90. Except for the left shoulder y coordinate, the left hip y and z coordinates, and the left toe tip z coordinate, the multiple correlation coefficients between the curves obtained via the automatic analysis and the mean curves obtained via the manual analysis are greater than 0.90; the exceptions may be related to both cameras being placed on the right side of the travel route. A further inspection of the data shows that the differences between the automatic and manual curves of the left hand, left wrist, left hip, and left toe tip are less than 0.025 m, indicating that the shapes of the automatically and manually obtained joint point coordinate–time curves are highly similar. This shows that the use of the AI motion capture system in race walking competitions has high reliability.
The research results support the second hypothesis, that the difference between the human joint point 3D coordinate–time curves obtained automatically via the AI motion capture system and those obtained via the manual analysis is less than 0.025 m. The mean and the upper limit of the 95% confidence interval of the difference between each automatic curve and the corresponding mean manual curve are less than 0.025 m, and for 92% of the curves they are less than 0.015 m. These results show that the curves obtained via the automatic analysis closely match the mean curves obtained via the manual analysis, further supporting the high reliability of the AI motion capture system in race walking competitions.
To date, the AI motion capture system has been applied in actual throwing, jumping, and ice and snow competitions [22,23], and our research has now applied it to the race walking scene, with very good performance and effects. The industrial camera version of the AI motion capture system can deliver biomechanical data on motion technique within 1–2 min, a feedback speed essentially equal to that of the marker-based video analysis systems currently in use. Together with this study’s reliability results, these applications show that the AI motion capture system not only provides effective joint point coordinate data, but also greatly saves manpower and improves the speed of data feedback.
In the future, the accuracy of the joint point 3D coordinate data collected via the AI motion capture system can be further improved by increasing the number of cameras. We used two cameras in our research, the minimum required for 3D coordinate data acquisition. During shooting, whenever a joint is occluded or a limb’s position relative to a camera is unfavorable for identifying its joint points, the collected data may contain relatively large errors; this is the main reason for the large differences between the automatic curves and the mean manual curves of the left-side joint coordinates in this study. Therefore, increasing the number of cameras and synthesizing the 3D coordinates using only the cameras that clearly identify a given joint point can significantly improve the accuracy of the 3D joint point coordinates [24].
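This camera-selection idea could be expressed as a small filter in front of the reconstruction step, as in the sketch below; the confidence threshold and the reuse of the dlt_reconstruct helper from the earlier DLT sketch are assumptions for illustration.

```python
# Sketch of the camera-selection idea: keep only views whose joint detection
# confidence clears a threshold before triangulating. The threshold value is
# an assumption, and dlt_reconstruct() is the function from the DLT sketch above.
CONF_THRESHOLD = 0.8  # assumed per-view detection confidence cutoff

def reconstruct_reliable(Ls, uvs, confidences):
    """Triangulate a joint from only the cameras that see it clearly."""
    keep = [i for i, c in enumerate(confidences) if c >= CONF_THRESHOLD]
    if len(keep) < 2:
        return None  # fewer than two clear views: no 3D solution
    return dlt_reconstruct([Ls[i] for i in keep], [uvs[i] for i in keep])
```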

5. Conclusions

The AI motion capture system can quickly and accurately obtain the 3D coordinate data of human joints in outdoor sports scenes, and it can provide athletes’ biomechanical parameters and performance indicators. We verified that the human joint 3D coordinates obtained via the AI motion capture system are highly similar to the mean of the 3D coordinates obtained via the manual analysis, so the system captures race walking technique with high reliability in competition. The system removes the need to place reflective markers on athletes under non-laboratory conditions in order to obtain their kinematic data. At the same time, there is no need to manually identify and analyze videos frame by frame or point by point, which greatly reduces the workload and improves the efficiency of motion technique analysis.

Author Contributions

Conceptualization, D.Z. and W.J.; methodology, Z.J.; software, W.J.; validation, Z.J. and G.J.; formal analysis, W.J.; investigation, G.J.; resources, Z.J.; data curation, W.J.; writing—original draft preparation, D.Z.; writing—review and editing, W.J.; visualization, D.Z.; supervision, G.J.; project administration, Z.J.; funding acquisition, D.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Interdisciplinary Fund Project of Beijing Normal University, grant number BNUXKJC2012.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Acknowledgments

We thank the Interdisciplinary Fund Project of Beijing Normal University for supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, J. Relationship between velocity and technology of elite young race walkers. J. Shandong Sport Univ. 2020, 36, 102–110. [Google Scholar]
  2. Li, H.L.; Tan, Z.Z.; Jing, Y.; Gao, C.; Zhou, X.L.; Tong, Y.Y.; Ren, Y. Research on Kinematics Characteristics of Cai Ze-lin Race Walk Technique and Broken on the Olympic Games Silver Medal Winning. J. Cap. Univ. Phys. Educ. Sport 2017, 29, 451–458. [Google Scholar]
  3. Li, J.C.; Li, H.L.; Xu, X. Research on Technique Pattern Construction of High Speed Range of World Elite Women Youth Athletes. J. Cap. Univ. Phys. Educ. Sport 2017, 29, 244–248. [Google Scholar]
  4. Zhang, D.T. Research on the elements of competitive ability system. J. Xi’an Phys. Educ. Univ. 2015, 7, 27–31. [Google Scholar]
  5. Li, H.L.; Zhang, L.; Sun, J.N.; Ding, H.T.; Jing, Y. Kinematic technique diagnosis of Chinese elite female 20 km race walkers. J. Shandong Sport Univ. 2018, 34, 105–112. [Google Scholar]
  6. Ping, J. Kinematics Analysis of Key Technical Links of Junior Male Race Walkers. Int. J. Educ. Econ. 2020, 3, 473–481. [Google Scholar]
  7. Dibendu, K.B. Kinematic Analysis of Support Phase Characteristics in Women Race Walking. Am. J. Sports Sci. 2020, 8, 1024–1033. [Google Scholar]
  8. Gomez-Ezeiza, J.; Santos-Concejero, J.; Torres-Unda, J.; Hanley, B.; Tam, N. Muscle Activation Patterns Correlate with Race Walking Economy in Elite Race Walkers: A Waveform Analysis. Int. J. Sports Physiol. Perform. 2019, 1, 1251–1259. [Google Scholar] [CrossRef]
  9. Poppe, R. A survey on vision-based human action recognition. Image Vis. Comput. 2010, 28, 976–990. [Google Scholar] [CrossRef]
  10. Wen, X.; Wang, Z.F. AI Enabling Sports: Application of Computer Vision in Human Motion Recognition. J. Shanghai Univ. Sport 2020, 44, 25. [Google Scholar]
  11. Cazzola, D.; Pavei, G.; Preatoni, E. Can coordination variability identify performance factors and skill level in competitive sport? The case of race walking. J. Sport Health Sci. 2016, 5, 35–43. [Google Scholar]
  12. Mathis, A.; Mamidanna, P.; Cury, K.M.; Abe, T.; Murthy, V.N.; Mathis, M.W.; Bethge, M. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 2018, 21, 1281–1289. [Google Scholar]
  13. Liu, H.; Li, H.J.; Qu, Y.; He, X.G.; Zhou, Z.P.; Yu, B. Validity of an Artificial Intelligence System for Markerless Human Movement Automatic Capture. J. Beijing Sport Univ. 2021, 44, 125–133. [Google Scholar]
  14. Abdel-Aziz, Y.I.; Karara, H.M.; Hauck, M. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Photogramm. Eng. Remote Sens. 2015, 81, 103–107. [Google Scholar] [CrossRef]
  15. Hay, J.G. The Biomechanics of Sports Techniques, 4th ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1993; pp. 12–15. [Google Scholar]
  16. Li, H.J.; Tong, L.P.; Zhou, X.L.; Qu, F. Comparison of Kinematic Data of Lower Extremity between Image Analysis and Real Time Motion Capture System. J. Beijing Sport Univ. 2011, 34, 126–128. [Google Scholar]
  17. Yu, B. Effect of external marker sets on between-day reproducibility of knee kinematics and kinetics in stair climbing and level walking. Res. Sports Med. 2003, 11, 209–218. [Google Scholar]
  18. Kadaba, M.P.; Ramakrishnan, H.K.; Wootten, M.E.; Gainey, J.; Gorton, G.; Cochran, G.V. Repeatability of kinematic, kinetic, and electromyographic data in normal adult gait. J. Orthop. Res. 1989, 7, 849–860. [Google Scholar] [CrossRef]
  19. Chen, K.; Yao, L.; Zhang, D.; Wang, X.; Chang, X.; Nie, F. A Semisupervised Recurrent Convolutional Attention Model for Human Activity Recognition. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1747–1756. [Google Scholar] [CrossRef]
  20. Luo, M.; Chang, X.; Nie, L.; Yang, Y. An Adaptive Semisupervised Feature Analysis for Video Semantic Recognition. IEEE Trans. Cybern. 2018, 48, 648–660. [Google Scholar] [CrossRef]
  21. Zhang, D.; Yao, L.; Chen, K.; Wang, S.; Chang, X.; Liu, Y. Making Sense of Spatio-Temporal Preserving Representations for EEG-Based Human Intention Recognition. IEEE Trans. Cybern. 2020, 50, 3033–3044. [Google Scholar] [CrossRef]
  22. Li, H.J.; Liu, J.; Ye, K.G.; Yu, B.; Liu, H. The Impact of Speed Rhythm on Hammer Throw Distance: A Case Study of an Elite Athlete. J. Beijing Sport Univ. 2020, 43, 108–115. [Google Scholar]
  23. Zhang, M.S.; Qu, Y.; Cui, J.; Liu, H. Application of Artificial Intelligence System for Motion Capture in Speed Skating. Sci. Technol. Eng. 2022, 22, 5674–5680. [Google Scholar]
  24. Ji, Z.Q.; Li, J.H.; Zhao, P.C.; Jiang, G.P. Biomechanical characteristics of children with different body weights during vertical jump. Chin. J. Tissue Eng. Res. 2021, 25, 5281–5287. [Google Scholar]
Figure 1. Data processing interface of the artificial intelligence 3D motion analysis system.
Figure 2. Setup of calibration space and geodetic coordinate system. A1~A6, B1~B6, C1~C6, and D1~D6 are calibration mark points; G0~G4 are landmark points of ground coordinates.
Table 1. Basic information on the chosen subjects * (N = 15).
Name | Age | Height (cm) | Weight (kg) | Competition Result | Ranking
Niu, Z.M. | 28 | 159 | 47.5 | 1:33:31 | 1
Qi, J.Z. | 25 | 161 | 46.4 | 1:34:47 | 2
Zhang, L.F. | 25 | 163 | 48.2 | 1:35:48 | 3
Jiang, J.Y. | 29 | 163 | 46.4 | 1:35:50 | 4
Zhang, X.L. | 22 | 165 | 49.5 | 1:36:19 | 5
Shi, Y.X. | 24 | 170 | 50.8 | 1:37:43 | 6
Gao, W.J. | 27 | 168 | 53.1 | 1:39:13 | 7
Yang, L.J. | 26 | 163 | 44.7 | 1:39:29 | 8
Liu, R.M. | 24 | 162 | 46.5 | 1:39:56 | 9
Tan, Y.F. | 29 | 167 | 48.4 | 1:40:12 | 10
Peng, L. | 29 | 163 | 46.3 | 1:41:31 | 11
Ji, F.R. | 23 | 161 | 46.6 | 1:43:59 | 12
Yang, W.Q. | 29 | 160 | 43.8 | 1:44:27 | 13
Su, W.X. | 21 | 164 | 48.7 | 1:44:53 | 14
Chen, M.Y. | 27 | 166 | 52.4 | 1:45:42 | 15
* The subjects are from an adult women’s 20 km race walking event.
Table 2. Locations of landmark points of the 3D reconstruction model.
Number | Name | Position
1 | Head | Vertex
2 | Chin | Cusp of chin
3 | Neck | Midpoint of neck
4 | LShoulder | Midpoint of left shoulder joint
5 | RShoulder | Midpoint of right shoulder joint
6 | LElbow | Midpoint of left elbow joint
7 | RElbow | Midpoint of right elbow joint
8 | LWrist | Midpoint of left wrist
9 | RWrist | Midpoint of right wrist
10 | LHand | Midpoint of left hand
11 | RHand | Midpoint of right hand
12 | LHip | Midpoint of left hip
13 | RHip | Midpoint of right hip
14 | LKnee | Midpoint of left knee joint
15 | RKnee | Midpoint of right knee joint
16 | LAnkle | Midpoint of left ankle joint
17 | RAnkle | Midpoint of right ankle joint
18 | LHeel | Midpoint of left heel
19 | RHeel | Midpoint of right heel
20 | LBigToe | Midpoint of the first metatarsal bone of the left foot
21 | RBigToe | Midpoint of the second metatarsal bone of the right foot
Table 3. Multiple correlation coefficients between the human joint point coordinate–time curves obtained via automatic analysis and the mean curves obtained via manual analysis.
Body Part | x: Mean ± SD | x: 95% CI | y: Mean ± SD | y: 95% CI | z: Mean ± SD | z: 95% CI
Top | 0.9995 ± 0.0002 | 0.9992~0.9998 | 0.9981 ± 0.0153 | 0.9935~0.9991 | 0.9934 ± 0.0095 | 0.9828~0.9973
Mandible/neck | 0.9994 ± 0.0005 | 0.9992~0.9996 | 0.9888 ± 0.0115 | 0.9806~0.9923 | 0.9734 ± 0.0081 | 0.9636~0.9782
Suprasternal notch | 0.9996 ± 0.0002 | 0.9994~0.9999 | 0.9924 ± 0.0078 | 0.9860~0.9937 | 0.9921 ± 0.0163 | 0.9813~0.9951
L-shoulder | 0.9997 ± 0.0001 | 0.9993~0.9998 | 0.8854 ± 0.0112 | 0.8638~0.9432 | 0.9799 ± 0.0051 | 0.9750~0.9919
L-elbow | 0.9996 ± 0.0005 | 0.9991~0.9999 | 0.9911 ± 0.0116 | 0.9849~0.9961 | 0.9911 ± 0.0019 | 0.9857~0.9923
L-wrist | 0.9993 ± 0.0003 | 0.9991~0.9995 | 0.9837 ± 0.0059 | 0.9819~0.9923 | 0.9938 ± 0.0082 | 0.9899~0.9981
L-hand | 0.9995 ± 0.0004 | 0.9992~0.9996 | 0.9877 ± 0.0088 | 0.9758~0.9921 | 0.9969 ± 0.081 | 0.9852~0.9983
R-shoulder | 0.9993 ± 0.0002 | 0.9992~0.9995 | 0.9788 ± 0.0138 | 0.9728~0.9837 | 0.9923 ± 0.0212 | 0.9788~0.9977
R-elbow | 0.9996 ± 0.0005 | 0.9994~0.9998 | 0.9955 ± 0.0116 | 0.9880~0.9979 | 0.9801 ± 0.0126 | 0.9755~0.9963
R-wrist | 0.9997 ± 0.0003 | 0.9995~0.9998 | 0.9895 ± 0.0161 | 0.9872~0.9979 | 0.9696 ± 0.0011 | 0.9651~0.9887
R-hand | 0.9994 ± 0.0001 | 0.9991~0.9996 | 0.9978 ± 0.0043 | 0.9881~0.9996 | 0.9794 ± 0.0031 | 0.9688~0.9931
L-hip | 0.9993 ± 0.0003 | 0.9991~0.9998 | 0.8840 ± 0.0194 | 0.8767~0.9351 | 0.8978 ± 0.0083 | 0.8624~0.9395
L-knee | 0.9995 ± 0.0002 | 0.9993~0.9997 | 0.9728 ± 0.0223 | 0.9686~0.9895 | 0.9688 ± 0.0211 | 0.9529~0.9732
L-ankle | 0.9996 ± 0.0004 | 0.9994~0.9998 | 0.9936 ± 0.0139 | 0.9811~0.9979 | 0.9852 ± 0.0053 | 0.9755~0.9893
L-heel | 0.9993 ± 0.0001 | 0.9991~0.9996 | 0.9983 ± 0.0151 | 0.9853~0.9993 | 0.9916 ± 0.0121 | 0.9810~0.9943
L-tiptoe | 0.9992 ± 0.0002 | 0.9991~0.9995 | 0.9852 ± 0.0117 | 0.9758~0.9991 | 0.8968 ± 0.0039 | 0.8658~0.9186
R-hip | 0.9997 ± 0.0005 | 0.9996~0.9999 | 0.9816 ± 0.0038 | 0.9763~0.9946 | 0.9891 ± 0.0079 | 0.9829~0.9967
R-knee | 0.9995 ± 0.0002 | 0.9993~0.9998 | 0.9735 ± 0.0152 | 0.9713~0.9872 | 0.9925 ± 0.0215 | 0.9849~0.9968
R-ankle | 0.9993 ± 0.0001 | 0.9991~0.9996 | 0.9838 ± 0.0055 | 0.9799~0.9964 | 0.9682 ± 0.0029 | 0.9586~0.9832
R-heel | 0.9993 ± 0.0002 | 0.9991~0.9995 | 0.9915 ± 0.0081 | 0.9878~0.9981 | 0.9925 ± 0.0051 | 0.9828~0.9985
R-tiptoe | 0.9995 ± 0.0004 | 0.9992~0.9999 | 0.9793 ± 0.0062 | 0.9675~0.9957 | 0.9787 ± 0.0083 | 0.9642~0.9863
Table 4. Differences between the human joint point coordinate–time curves obtained via automatic analysis and the mean curves obtained via manual analysis (m).
Body Part | x: Mean ± SD | x: 95% CI | y: Mean ± SD | y: 95% CI | z: Mean ± SD | z: 95% CI
Top | 0.0058 ± 0.0023 | 0.0045~0.0073 | 0.0082 ± 0.0036 | 0.0058~0.0133 | 0.0071 ± 0.0021 | 0.0051~0.0093
Mandible/neck | 0.0087 ± 0.0029 | 0.0063~0.0095 | 0.0028 ± 0.0036 | 0.0011~0.0087 | 0.0065 ± 0.0018 | 0.0028~0.0082
Suprasternal notch | 0.0017 ± 0.0013 | 0.0013~0.0045 | 0.0013 ± 0.0044 | 0.0011~0.0057 | 0.0059 ± 0.0037 | 0.0023~0.0079
L-shoulder | 0.0051 ± 0.0030 | 0.0035~0.0086 | 0.0089 ± 0.0027 | 0.0066~0.0153 | 0.0122 ± 0.0023 | 0.0113~0.0151
L-elbow | 0.0138 ± 0.0021 | 0.0111~0.0145 | 0.0116 ± 0.0022 | 0.0096~0.0138 | 0.0083 ± 0.0023 | 0.0053~0.0129
L-wrist | 0.0093 ± 0.0019 | 0.0073~0.0132 | 0.0031 ± 0.0042 | 0.0016~0.0086 | 0.0071 ± 0.0019 | 0.0056~0.0091
L-hand | 0.0034 ± 0.0025 | 0.0011~0.0076 | 0.0083 ± 0.0031 | 0.0063~0.0112 | 0.0069 ± 0.0038 | 0.0046~0.0083
R-shoulder | 0.0034 ± 0.0037 | 0.0013~0.0086 | 0.0093 ± 0.0037 | 0.0062~0.0146 | 0.0086 ± 0.0032 | 0.0062~0.0112
R-elbow | 0.0118 ± 0.0026 | 0.0095~0.0132 | 0.0107 ± 0.0043 | 0.0089~0.0129 | 0.0096 ± 0.0025 | 0.0073~0.0139
R-wrist | 0.0093 ± 0.0014 | 0.0079~0.0136 | 0.0061 ± 0.0025 | 0.0032~0.0081 | 0.0095 ± 0.0011 | 0.0056~0.0129
R-hand | 0.0014 ± 0.0029 | 0.0011~0.0047 | 0.0088 ± 0.0029 | 0.0061~0.0132 | 0.0062 ± 0.0028 | 0.0021~0.0098
L-hip | 0.0075 ± 0.0025 | 0.0069~0.0125 | 0.0097 ± 0.0038 | 0.0075~0.0191 | 0.0081 ± 0.0021 | 0.0044~0.0158
L-knee | 0.0063 ± 0.0039 | 0.0042~0.0099 | 0.0108 ± 0.0049 | 0.0083~0.0123 | 0.0053 ± 0.0038 | 0.0026~0.0082
L-ankle | 0.0094 ± 0.0010 | 0.0107~0.0121 | 0.0031 ± 0.0044 | 0.0016~0.0088 | 0.0058 ± 0.0023 | 0.0016~0.0091
L-heel | 0.0012 ± 0.0023 | 0.0007~0.0019 | 0.0028 ± 0.0026 | 0.0019~0.0081 | 0.0099 ± 0.0013 | 0.0055~0.0145
L-tiptoe | 0.0059 ± 0.0036 | 0.0033~0.0135 | 0.0018 ± 0.0034 | 0.0012~0.0037 | 0.0091 ± 0.0023 | 0.0062~0.0166
R-hip | 0.0016 ± 0.0037 | 0.0011~0.0022 | 0.0102 ± 0.0041 | 0.0067~0.0118 | 0.0069 ± 0.0026 | 0.0024~0.0081
R-knee | 0.0031 ± 0.0016 | 0.0013~0.0077 | 0.0025 ± 0.0035 | 0.0014~0.0041 | 0.0068 ± 0.0033 | 0.0031~0.0093
R-ankle | 0.0016 ± 0.0024 | 0.0012~0.0052 | 0.0058 ± 0.0022 | 0.0013~0.0087 | 0.0055 ± 0.0012 | 0.0028~0.0089
R-heel | 0.0108 ± 0.0016 | 0.0086~0.0146 | 0.0023 ± 0.0039 | 0.0016~0.0046 | 0.0084 ± 0.0027 | 0.0042~0.0134
R-tiptoe | 0.0072 ± 0.0024 | 0.0055~0.0118 | 0.0093 ± 0.0047 | 0.0075~0.0132 | 0.0077 ± 0.0028 | 0.0045~0.0098
Table 5. Comparison of kinematic parameters at the time of the athletes’ left-foot landing obtained via automatic analysis and manual analysis.
Kinematic Parameter | Automatic: Mean ± SD | Automatic: 95% CI | Manual: Mean ± SD | Manual: 95% CI | t | p
Body center displacement velocity (m/s) | 3.35 ± 0.13 | 3.28~3.43 | 3.36 ± 0.13 | 3.28~3.43 | 0.002 | 0.978
Shoulder joint angle when swinging forward (°) | 10.9 ± 4.9 | 8.9~12.7 | 10.8 ± 4.9 | 8.9~12.6 | 0.009 | 0.995
Shoulder joint angle during backward arm swing (°) | 39.8 ± 5.6 | 37.8~41.8 | 38.5 ± 5.8 | 36.6~41.4 | 0.263 | 0.726
Elbow joint angle when swinging the arm forward (°) | 77.4 ± 9.5 | 74.2~80.3 | 77.3 ± 9.3 | 74.3~80.3 | 0.015 | 0.988
Elbow joint angle during backward arm swing (°) | 73.3 ± 5.1 | 71.5~75.2 | 71.7 ± 5.2 | 70.4~75.9 | 0.297 | 0.663
Torso tilt angle (°) | 3.0 ± 0.9 | 2.5~3.6 | 3.1 ± 0.9 | 2.5~3.6 | 0.013 | 0.970
Hip joint angle when feet touch the ground (°) | 22.2 ± 2.9 | 20.5~23.8 | 22.1 ± 2.9 | 20.5~23.7 | 0.010 | 0.990
Knee joint angle of supporting leg (°) | 179.3 ± 0.6 | 178.9~179.6 | 179.2 ± 0.6 | 178.9~179.6 | 0.006 | 0.902
Knee joint angle of swinging leg (°) | 116.4 ± 3.2 | 114.6~118.2 | 116.4 ± 3.3 | 114.6~118.2 | 0.011 | 0.994
Ankle angle of landing foot (°) | 85.9 ± 2.8 | 84.2~87.4 | 85.9 ± 2.9 | 84.3~87.4 | 0.001 | 0.998
Angle of ankle joint of swinging foot (°) | 111.5 ± 4.7 | 108.8~114.1 | 111.5 ± 4.7 | 108.8~114.1 | 0.001 | 0.999