Article

Is Markerless More or Less? Comparing a Smartphone Computer Vision Method for Equine Lameness Assessment to Multi-Camera Motion Capture

1 Sleip AI, Birger Jarlsgatan 58, 11426 Stockholm, Sweden
2 Department of Anatomy, Physiology and Biochemistry, Swedish University of Agricultural Sciences, 75007 Uppsala, Sweden
3 KTH Royal Institute of Technology, Division of Robotics, Perception and Learning, 10044 Stockholm, Sweden
* Author to whom correspondence should be addressed.
Animals 2023, 13(3), 390; https://doi.org/10.3390/ani13030390
Submission received: 19 November 2022 / Revised: 10 January 2023 / Accepted: 13 January 2023 / Published: 24 January 2023
(This article belongs to the Special Issue Equine Gait Analysis: Translating Science into Practice)

Simple Summary

Lameness, an alteration of the gait due to pain or dysfunction of the locomotor system, is the most common disease symptom in horses. Yet, it is difficult for veterinarians to assess correctly by visual inspection. Objective tools that can aid clinical decision making and provide early disease detection through sensitive lameness measurements are needed. In this study, we describe how an AI-powered measurement tool on a smartphone can detect lameness in horses without the need to mount equipment on the horse. We compare it to a state-of-the-art multi-camera motion capture system using simultaneous, synchronised recordings from both systems. The mean difference between the systems’ output of lameness metrics was below 2.2 mm. Therefore, we conclude that the smartphone measurement tool can detect lameness at relevant levels with ease of use for the veterinarian.

Abstract

Computer vision is a subcategory of artificial intelligence focused on extraction of information from images and video. It provides a compelling new means for objective orthopaedic gait assessment in horses using accessible hardware, such as a smartphone, for markerless motion analysis. This study aimed to explore the lameness assessment capacity of a smartphone single-camera (SC) markerless computer vision application by comparing measurements of the vertical motion of the head and pelvis to an optical motion capture multi-camera (MC) system using skin-attached reflective markers. Twenty-five horses were recorded with a smartphone (60 Hz) and a 13-camera MC-system (200 Hz) while trotting two times back and forth on a 30 m runway. The smartphone video was processed using artificial neural networks detecting the horse’s direction, action and the motion of its body segments. After filtering, the vertical displacement curves from the head and pelvis were synchronised between systems using cross-correlation. This rendered 655 and 404 matching stride segmented curves for the head and pelvis, respectively. From the stride segmented vertical displacement signals, the differences between the two minima (MinDiff) and between the two maxima (MaxDiff) per stride were compared between the systems. Trial mean difference between systems was 2.2 mm (range 0.0–8.7 mm) for head and 2.2 mm (range 0.0–6.5 mm) for pelvis. Within-trial standard deviations ranged between 3.1 and 28.1 mm for MC and between 3.6 and 26.2 mm for SC. The ease of use and good agreement with MC indicate that the SC application is a promising tool for detecting clinically relevant levels of asymmetry in horses, enabling frequent and convenient gait monitoring over time.

1. Introduction

Objective measurement of a horse’s motion at the trot has become an important part of the diagnostic procedures performed during clinical lameness investigation. These measurements, which have been used in clinical practice for more than a decade, trace the vertical displacement of axial body segments: the head, the pelvis and sometimes the withers. Using reflective markers or inertial sensors attached to a point on each body segment, a time series signal is generated. In trot, the vertical displacement signal takes the shape of a sinusoidal double wave for each stride, and it is the positions of the two peaks and two valleys of this signal that are used for lameness analysis.
The degree of asymmetry in the vertical displacement signal, i.e., the difference between the two peaks and between the two valleys, respectively, is known to indicate asymmetric loading of the left versus the right limb during the midstance and push-off phases of the stride [1,2,3]. Measurements of these asymmetries provide the veterinarian with high-resolution data that help overcome the limited time resolution of the human visual system [4,5]. These objective data seem crucial for quality control of the clinical procedure, since subjective lameness assessment has been shown to have moderate to low agreement between veterinarians [6,7] and is affected by expectation bias [8]. The metrics derived from objective motion analysis show high sensitivity for single-limb lameness, acting as early indicators to detect asymmetric loading of the limbs [9]. However, the specificity for lameness on a population level is less clear. Motion asymmetries are commonly observed in cross-sectional studies of different horse populations, such as Warmblood riding horses [10,11], Thoroughbred racehorses [12], working Polo horses [13], elite eventing horses [14], endurance horses [15] and young Standardbred trotters [16], with a prevalence ranging from 50 to 90 percent. Although these asymmetries are of the same magnitude as in horses investigated for lameness in a clinical setting [17], it is currently unknown whether these asymmetry levels indicate that a large proportion of horses in training are lame or whether the asymmetries can be explained by other factors, such as laterality. A key approach to further investigation of this issue is longitudinal monitoring of individual horses over time. For this to be possible, a reliable, easy-to-use and low-cost measuring system is required.
Several motion analysis systems have been developed for clinical use based on inertial measurement units (IMUs) [18,19,20]. Also available is a multi-camera marker-based motion capture (MC) system [21]. This MC-system is considered the gold standard for measuring body segment movement in kinematic gait analysis [22]. It relies on reflective markers attached to the horse’s body. These markers are detected and tracked by a set of cameras that are geometrically calibrated and temporally synchronized. Using the synchronized tracks of the marker positions in the camera images, MC-systems reconstruct the 3D coordinates by multi-view triangulation. It has been shown that under favourable conditions, MC-systems estimate these 3D positions over time with sub-millimetre accuracy [23]. However, placing markers on the horse is resource- and time-consuming in a clinical situation, and the equipment is a substantial financial investment for a veterinary practice. This impedes the system’s large-scale clinical and scientific use.
During the previous decade, the field of computer vision was revolutionized by methods based on deep neural networks [24,25,26]. These networks are computer algorithms that consist of multi-layered (referred to as deep) compositions of parametric functions that can be trained on large datasets to perform classification and regression tasks. Deep learning has demonstrated increased robustness to differing scenarios, light conditions and noise levels compared to traditional computer vision methods. Estimation of human body poses from images has been enabled by deep learning, and as a result of this development it has become possible to perform motion analysis from video, e.g., from a smartphone camera [27]. Recent works on horse lameness classification [28,29] have demonstrated a progressive movement towards the application of computer vision and deep learning within objective motion analysis. However, a binary disease classifier of “lame” versus “not lame” is a difficult approach for clinical use, given that lameness is often not a binary state and that deep learning algorithms act in a black-box manner, which breeds distrust among medical professionals [30]. Instead, providing the clinician with computer-vision-derived metrics to support medical decision making is a more implementable approach. However, until now, the methodological accuracy of deep neural networks for quantification of clinically used lameness metrics has not been investigated.
In this work, we validate a new single-camera markerless (SC) system designed for equine lameness assessment which uses images from a smartphone camera video stream. To achieve robust detection and tracking, the system employs a series of neural networks. These networks were trained to detect and track the pelvis, head and hooves in video streams of trotting horses. The system also detects the trotting direction of the horse, away from or towards the camera, to determine which parts of the horse are visible for measuring. The network designs were inspired by previously proposed methods for object detection [28,31] and segmentation [32,33]. Unlike the MC-system, the deep neural networks of the SC-system do not require that markers are placed on the horse. Instead, the networks learn to detect the points of interest on the horse’s body visible in the images through training on large datasets.
The specific aim of the study was to compare this new markerless smartphone system to a state-of-the-art multi-camera marker-based system with respect to waveform similarity of the derived vertical displacement signals and the limits of agreement for their extracted lameness metrics.

2. Materials and Methods

2.1. Study Protocol

Twenty-five horses were recorded as they underwent motion analysis at the orthopedic gait laboratory situated in the Equine Clinic of the University Animal Hospital in Uppsala, Sweden. The recordings were performed simultaneously with a multi-camera marker-based motion capture system and a single smartphone camera markerless system. The experimental setup is illustrated in Figure 1. The study subjects were a convenience sample selected from the horses visiting the clinic during the 10-day data collection period, without any exclusion criteria. The horses were of different breeds, sizes (range 128 to 180 cm to the withers) and colours (black, bay, chestnut, grey). All owners gave their written informed consent to participate. The study did not in any way alter the clinical procedures or add physical manipulation of the animals. Hence, no ethical approval was required according to the national animal ethics legislation.

2.2. Data Collection

During data collection, each horse was guided by a handler (the owner, or a researcher running at the horse’s left side) to trot at least two times back and forth on a 30 m concrete runway in a corridor. The horses were jogged at the handler’s preferred speed.
We employed an MC-system with 13 cameras (Qualisys AB, Motion Capture Systems). The MC cameras were placed 4 m above the ground and positioned such that the union of the fields of view of the cameras covered as much of the runway as possible while still maintaining sufficient overlap between neighboring cameras to perform tracking and triangulation of the marker positions. The recording rate was set to 200 Hz. We attached spherical reflective markers in the median plane over the poll on the horse’s head and between the tubera sacrale of the pelvis, allowing the MC-system to detect and track the markers over time. Additional markers were placed, but not used in this study (see Figure 2).
The smartphone (iPhone 12 Pro Max) was placed 1.6 m above the ground on a tripod facing the direction of the horse’s trot, recording 4K video (2160 × 3840 pixels) at 60 Hz. The recorded video streams were input to the SC-system. Example frames from the smartphone camera recordings are shown in Figure 2.
For each recording, the MC-system and the smartphone camera were triggered at approximately the same time such that the data streams from both systems would cover the same sequence of events.

2.3. Signal Extraction

The MC-system software (Qualisys Track Manager (QTM), Qualisys AB) automatically tracked the reflective markers and generated 3D coordinates corresponding to the positions of the markers in each frame. The 3D marker coordinates from QTM were exported to .mat files (MATLAB). From these coordinates, vertical displacements were extracted for each frame, one for the head marker and one for the pelvis marker. This resulted in two vertical displacement signals (VDS), $y_{mc}^{head}(t)$ and $y_{mc}^{pelvis}(t)$ for the head and pelvis respectively, where $t$ is time in seconds.
For the SC-system, deep neural networks (software of Sleip AI AB) were applied to the input video stream from the smartphone camera. The deep neural networks were trained to output the pixel coordinates of horse body parts for each frame of the video. The training material for the deep neural networks contained horses of many different coat colours and varying conformation, but none had physical markers attached to the skin. Head ($y_{sc}^{head}(t)$) and pelvis ($y_{sc}^{pelvis}(t)$) VDS were calculated from the pixel coordinates. Additionally, the VDS of all four hooves were extracted from the SC-system for stride splitting purposes (see Section 2.4). Both the MC-system and the SC-system data were further processed using custom written Python scripts.
Note that the pelvis was visible to the SC-system only when the horse was trotting away from the camera, while the head was mostly visible in both directions. Consequently, the SC-system generally produced data from a higher number of strides with head tracking than with pelvic tracking. The following analyses only included strides with matched data from both systems for the body segment in question (head or pelvis). Horses were removed from the dataset when fewer than 10 matching strides were available for either head or pelvis, since we deemed this insufficient for statistical relevance.

2.4. Stride Split and Signal Filtering

The recorded data contained noise, due to measurement errors and because the horses seldom trot in a consistent manner throughout a trot-up. This noise was present for both the MC-system and the SC-system data. Thus, the VDS had to be band-pass filtered in order to remove the noise without affecting the frequency content of the signal that related to movement asymmetries and lameness [34].
In order to perform the VDS filtering described in [34], the within-horse mean stride frequency of a measurement was needed. This was estimated by extracting strides from the hoof VDS of the SC-system. Firstly, we performed a pre band-pass filtering of the hoof VDS to remove trends and high-frequency noise, using a 7th order Butterworth digital filter with a lower cut-off frequency of 0.6 Hz and an upper cut-off frequency of 2.2 Hz. This allowed us to determine in which time intervals the left hoof was above the right hoof and vice versa, ultimately enabling the classification of left and right strides.
Next, the lengths of the intervals were used to compute the stride frequency, which in turn was used to set the bounds of the 10th order Butterworth band-pass filter applied to the VDS of the pelvis and head. Specifically, we set the lower bound to 0.75 times the stride frequency, to not alter frequency content related to the movement asymmetry [34]. Similarly, the upper bound was set to 2.42 times the stride frequency to not attenuate the frequency content related to the symmetric movement, while omitting higher frequency content and noise.
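For illustration, this two-stage filtering can be sketched in Python with SciPy. This is a minimal sketch under our own assumptions, not the implementation used in the study: the function names are hypothetical, zero-phase (forward-backward) filtering is assumed, and the stride frequency is estimated simply from the alternation of the filtered left and right hoof signals.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt


def bandpass(signal, fs, low_hz, high_hz, order):
    """Zero-phase Butterworth band-pass filter using second-order sections."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)


def estimate_stride_frequency(left_hoof_vds, right_hoof_vds, fs):
    """Estimate the mean stride frequency (Hz) from the hoof VDS.

    The hoof signals are pre-filtered with a 7th-order Butterworth band-pass
    (0.6-2.2 Hz); the intervals where the left hoof is above the right hoof
    (and vice versa) then delimit half-strides.
    """
    left = bandpass(left_hoof_vds, fs, 0.6, 2.2, order=7)
    right = bandpass(right_hoof_vds, fs, 0.6, 2.2, order=7)
    left_above = left > right
    switches = np.flatnonzero(np.diff(left_above.astype(int)) != 0)
    stride_period_samples = 2.0 * np.mean(np.diff(switches))  # two half-cycles per stride
    return fs / stride_period_samples


def filter_vds(vds, fs, stride_freq):
    """Band-pass the head/pelvis VDS with stride-frequency-dependent cut-offs."""
    return bandpass(vds, fs, 0.75 * stride_freq, 2.42 * stride_freq, order=10)
```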

2.5. Signal Synchronization

Since the MC and SC recordings were triggered manually, the extracted signals were not adequately synchronized in time. To synchronize the vertical displacement signals, we computed cross-correlations to find the relative time shift $t_{shift}$ that solved the following maximization problem:
$$t_{shift} = \arg\max_{t} \left[ \left( y_{mc}^{pelvis} \star y_{sc}^{pelvis} \right)(t) + \left( y_{mc}^{head} \star y_{sc}^{head} \right)(t) \right].$$
Here $\star$ denotes the correlation operator. The signals were band-pass filtered according to Section 2.4 before synchronization.
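A minimal sketch of this synchronization step is given below. It assumes that the MC and SC signals have already been resampled to a common sampling rate fs and trimmed to equal length; the function name and these preprocessing choices are ours, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags


def find_time_shift(y_mc_head, y_sc_head, y_mc_pelvis, y_sc_pelvis, fs):
    """Return the relative time shift (seconds) maximising the summed
    cross-correlation of the head and pelvis vertical displacement signals."""
    corr = (correlate(y_mc_head, y_sc_head, mode="full")
            + correlate(y_mc_pelvis, y_sc_pelvis, mode="full"))
    lags = correlation_lags(len(y_mc_head), len(y_sc_head), mode="full")
    return lags[np.argmax(corr)] / fs
```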

2.6. Asymmetry Quantification

In this section, we introduce a number of definitions that are used in the remaining parts of the paper. First, we define a stride segment as a section of the VDS corresponding to the time interval of a full stride. To extract the stride segment, we utilize the extreme values (see the illustration in Figure 3). Tracing the vertical displacement of the horse’s head or pelvis while it trots yields a sine-shaped signal, as depicted in Figure 3. The valleys (local minima) of the signal are associated with the vertical forces generated during impact (the more force, the lower the valley position), while the peaks (local maxima) are associated with the relative timing of horizontal and vertical forces during the propulsive phase of the stride (less push-off rendering a lower peak position). Differences between consecutive peaks or valleys can be quantified into an asymmetry index [22], which can be used as an indicator of lameness severity. Depending on the measurement technique and due to variation in horse size, these values may need to be normalised to be comparable [35,36]. Therefore, normalisation of these values to the range of motion (R) is used in the SC-system to obtain values that are more independent of horse size.

2.6.1. Extraction of Valleys and Peaks

To find the extreme values within each stride segment (the VDS from one stride), consecutive data points were compared to find points at which the derivative of the VDS was zero. Let $y(t)$ be the VDS value at time $t$. We defined the peaks $p_i = y(t_{p_i})$, $i = 1, 2, \ldots$, as the local maximum values and the valleys $v_j = y(t_{v_j})$, $j = 1, 2, \ldots$, as the local minimum values. Further, we assumed that $t_{p_i} < t_{p_{i+1}}$ and $t_{v_j} < t_{v_{j+1}}$. We extracted sequences of consecutive peaks and valleys $p_i, p_{i+1}, v_j, v_{j+1}$ such that $t_{p_i} < t_{v_j} < t_{p_{i+1}} < t_{v_{j+1}} < t_{p_{i+2}}$.
In horses showing moderate to severe lameness, the changes in motion asymmetry can cause extreme asymmetries in $y(t)$. In these cases, local extreme values might be canceled out, interrupting the assumed stride pattern of two peaks and two valleys in sequence. Instead, a single-peak or single-valley signal pattern occurs. To handle this, we implemented a robust extreme value extraction method. The reasoning behind this method is that $y(t)$ contains two dominant harmonics [1]. The first harmonic corresponds to the stride frequency, thus contributing to the asymmetry of the signal. The second harmonic corresponds to twice the stride frequency and should dominate $y(t)$ if the horse is healthy. In our approach, we extracted extreme values based on the curve shape of the second harmonic. We first removed the asymmetric component of $y(t)$ by performing high-pass filtering with a frequency bound higher than the first harmonic. Next, we employed local extreme value extraction on the high-pass filtered signal. Finally, we refined the selection by selecting the extreme values from the original $y(t)$ within 50 ms of the estimated value. As a result, we were able to estimate normalised peak/valley differences at any degree of lameness.
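The robust extraction can be sketched as follows. The high-pass cut-off (here 1.5 times the stride frequency), the filter order and the function names are illustrative assumptions; only the overall idea, detecting extrema on the second harmonic and refining them on the original signal within 50 ms, follows the description above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks


def robust_extrema(y, fs, stride_freq, refine_ms=50):
    """Return peak and valley indices of a vertical displacement signal.

    The asymmetric first-harmonic component is removed by high-pass
    filtering above the stride frequency, extrema are located on the
    filtered signal, and each extremum is then refined on the original
    signal within +/- refine_ms.
    """
    sos = butter(4, 1.5 * stride_freq, btype="highpass", fs=fs, output="sos")
    y_hp = sosfiltfilt(sos, y)

    peaks_hp, _ = find_peaks(y_hp)
    valleys_hp, _ = find_peaks(-y_hp)

    win = int(round(refine_ms / 1000.0 * fs))

    def refine(idx, sign):
        lo, hi = max(idx - win, 0), min(idx + win + 1, len(y))
        return lo + int(np.argmax(sign * y[lo:hi]))

    peaks = np.array([refine(i, +1.0) for i in peaks_hp])
    valleys = np.array([refine(i, -1.0) for i in valleys_hp])
    return peaks, valleys
```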

2.6.2. Normalised Differences for Valleys and Peaks

From the extracted stride peaks and valleys we computed local extreme value differences per stride $i$, consisting of the following scalars,
$$\mathrm{MinDiff}_i = v_{i+1} - v_i$$
$$\mathrm{MaxDiff}_i = p_i - p_{i+1}.$$
These values provide information about the asymmetries between the right and left leg impact and push-off. However, since different horses have different amplitudes in their vertical displacement trajectories, these values are not directly comparable. In this work, we instead use the normalised extreme value differences (NEVd), which are more independent of the scale of the vertical displacement. The NEVd-values were computed using the following operations,
$$V_i = \frac{\mathrm{MinDiff}_i}{R_i}$$
$$P_i = \frac{\mathrm{MaxDiff}_i}{R_i}, \quad \text{where } R_i = \max(p_i, p_{i+1}) - \min(v_i, v_{i+1}).$$
Here, normalization was performed by division by the range of motion $R_i$. Thus, the NEVd-values measure asymmetry as a fraction of $R_i$.
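As a worked example of these definitions (the function name is ours, not part of the described system), consider a stride whose two peaks are 55 and 50 mm and whose two valleys are -45 and -33 mm relative to the mean position:

```python
def nevd(p1, p2, v1, v2):
    """Normalised extreme value differences for one stride, given the two
    peaks (p1, p2) and the two valleys (v1, v2) in temporal order."""
    min_diff = v2 - v1                       # MinDiff_i
    max_diff = p1 - p2                       # MaxDiff_i
    rom = max(p1, p2) - min(v1, v2)          # range of motion R_i
    return min_diff / rom, max_diff / rom


# MinDiff = 12 mm and MaxDiff = 5 mm over a 100 mm range of motion,
# giving V_i = 0.12 and P_i = 0.05.
V_i, P_i = nevd(p1=55.0, p2=50.0, v1=-45.0, v2=-33.0)
```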

2.6.3. Outlier Removal

To remove occasional strides with erroneous measurements from the analysis, we performed a series of outlier removal steps.
While the noise in the vertical displacement signals is partly suppressed by band-pass filtering, the signal quality becomes inadequate for asymmetry analysis when the noise is too prevalent. Therefore, we first removed strides where the stride segments contained a substantial amount of high-frequency noise. A stride segment was deemed to contain too much high frequency noise when the majority of the frequency content amplitudes were found above 10 Hz in at least a quarter of a stride interval.
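One plausible reading of this noise criterion is sketched below; the exact spectral estimate and thresholds used in the study are not specified, so the implementation details here are assumptions.

```python
import numpy as np


def too_noisy(stride_segment, fs, cutoff_hz=10.0):
    """Flag a stride segment dominated by high-frequency content.

    The segment is split into quarters; if, in any quarter, more than half
    of the total amplitude spectrum lies above cutoff_hz, the stride is
    rejected from the asymmetry analysis.
    """
    for quarter in np.array_split(np.asarray(stride_segment, dtype=float), 4):
        amplitudes = np.abs(np.fft.rfft(quarter - quarter.mean()))
        freqs = np.fft.rfftfreq(len(quarter), d=1.0 / fs)
        if amplitudes[freqs > cutoff_hz].sum() > 0.5 * amplitudes.sum():
            return True
    return False
```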
As a second step, we performed a linear discriminant analysis (LDA). In addition to $\mathrm{MinDiff}_i$ and $\mathrm{MaxDiff}_i$ defined in Section 2.6.2, we used the peak-valley differences $p_i - v_i$ and $p_{i+1} - v_{i+1}$ as features to represent each stride. We then removed from the analysis the NEVd-values that corresponded to strides considered outliers in the LDA.

2.7. System Comparisons

We compared the MC and SC systems on the data set of trotting horses described in Section 2.2. For each recorded sample, we used MC and SC to generate vertical displacement signals. From these, we extracted and compared stride segments and NEVd-values between the two systems. The following sections detail the implementation and setup of the comparison.

2.7.1. Comparison Metrics

To compare the extracted asymmetry indices between the MC and SC systems, the following deviations were calculated from the synchronized NEVd-values,
$$\Delta V_i = \mathrm{MinDiff}_i^{sc} - \mathrm{MinDiff}_i^{mc}$$
$$\Delta P_i = \mathrm{MaxDiff}_i^{sc} - \mathrm{MaxDiff}_i^{mc}.$$
To recover an estimate of geometric deviation, we multiplied $\Delta V_i$ and $\Delta P_i$ with the range of motion $R_{i,mc}$ computed from the NEVd-values of the MC signal.
In practice, lameness indication is deduced from the trial means of the NEVd-values, $\bar{V} = \frac{1}{N}\sum_{i}^{N} \mathrm{MinDiff}_i$ and $\bar{P} = \frac{1}{N}\sum_{i}^{N} \mathrm{MaxDiff}_i$, where $N$ is the number of strides in the trial after the outlier removal in Section 2.6.3. We computed the trial mean deviations as,
$$\Delta \bar{V} = \bar{V}^{sc} - \bar{V}^{mc}$$
$$\Delta \bar{P} = \bar{P}^{sc} - \bar{P}^{mc}.$$
We further scaled $\Delta \bar{V}$ and $\Delta \bar{P}$ with the mean range of motion $\bar{R}_{mc} = \frac{1}{N}\sum_{i}^{N} R_{i,mc}$ to estimate the geometric deviation.
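The per-stride and per-trial deviations, and their conversion to millimetres via the MC range of motion, can be summarised in a short sketch (variable and function names are ours):

```python
import numpy as np


def system_deviations(nevd_sc, nevd_mc, rom_mc):
    """Deviations between systems for one metric (V or P) in one trial.

    nevd_sc, nevd_mc : per-stride NEVd values from SC and MC (matched strides)
    rom_mc           : per-stride range of motion from the MC signal (mm)
    Returns the per-stride deviations, the trial-mean deviation, and the
    trial-mean deviation scaled to millimetres by the mean MC range of motion.
    """
    nevd_sc, nevd_mc = np.asarray(nevd_sc), np.asarray(nevd_mc)
    delta_per_stride = nevd_sc - nevd_mc                 # Delta V_i / Delta P_i
    delta_trial_mean = nevd_sc.mean() - nevd_mc.mean()   # Delta V-bar / Delta P-bar
    delta_trial_mean_mm = delta_trial_mean * np.mean(rom_mc)
    return delta_per_stride, delta_trial_mean, delta_trial_mean_mm
```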

2.7.2. Statistical Analysis

Bland-Altman analysis [37] was used to evaluate the statistical agreement between the MC and SC-systems for head and pelvis NEVd-values. The Bland-Altman analysis was subdivided into deviations for MinDiff and MaxDiff and was performed both on trial and stride level.
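For reference, the bias and 95% limits of agreement underlying such a Bland-Altman analysis can be computed as below; this is a generic sketch, not the exact statistical code used in the study.

```python
import numpy as np


def bland_altman(values_sc, values_mc):
    """Return pairwise means, differences, bias and 95% limits of agreement."""
    sc = np.asarray(values_sc, dtype=float)
    mc = np.asarray(values_mc, dtype=float)
    differences = sc - mc
    means = (sc + mc) / 2.0
    bias = differences.mean()
    half_width = 1.96 * differences.std(ddof=1)
    return means, differences, bias, (bias - half_width, bias + half_width)
```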
In addition to the comparison of the NEVd-values, we compared the shapes of the band-pass filtered vertical displacement signals using the root mean square deviation (RMSD),
$$\mathrm{RMSD} = \sqrt{\frac{\sum_{m=1}^{M_i} R_{i,sc \rightarrow mc} \cdot \left( y_{sc}(t_{m,sc}^{i}) - y_{mc}(t_{m,mc}^{i}) \right)^{2}}{M_i}}, \quad \text{where } R_{i,sc \rightarrow mc} = \frac{R_{i,mc}}{R_{i,sc}}.$$
Here, $M_i$ is the number of sample points used for the $i$th synchronized stride, and $t_{m,sc}^{i}$ and $t_{m,mc}^{i}$ are equally spaced points in time for sampling the vertical displacement signals; the sampling interval spans from the first peak ($p_i$) to the third peak ($p_{i+2}$) of the stride in each system. Note that we scaled the $y_{sc}$ samples to the geometric scale of $y_{mc}$. Further, since $y_{sc}$ and $y_{mc}$ have different frame rates, we aligned and re-sampled the signals with linear interpolation before computing the RMSD.
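A per-stride RMSD along these lines can be sketched as follows. The number of resampling points and the choice to rescale the SC curve by the ratio of ranges of motion before differencing are our assumptions; the study's exact resampling is not reproduced here.

```python
import numpy as np


def stride_rmsd(y_sc, t_sc, y_mc, t_mc, rom_sc, rom_mc, n_samples=100):
    """RMSD between the SC and MC displacement curves of one synchronized stride.

    Both curves are linearly resampled to n_samples equally spaced points over
    the stride, and the SC curve is rescaled to the geometric scale of MC.
    """
    grid_sc = np.linspace(t_sc[0], t_sc[-1], n_samples)
    grid_mc = np.linspace(t_mc[0], t_mc[-1], n_samples)
    y_sc_resampled = np.interp(grid_sc, t_sc, y_sc) * (rom_mc / rom_sc)
    y_mc_resampled = np.interp(grid_mc, t_mc, y_mc)
    return float(np.sqrt(np.mean((y_sc_resampled - y_mc_resampled) ** 2)))
```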

3. Results

In this section, we outline the results from the experiments described in Section 2.7. In total, the results below were generated from 23 of the 25 horses in the initial data set, after the exclusion of two horses with fewer than 10 synchronized strides. From the included horses we extracted a total of 655 stride observations of the head motion and 404 stride observations of the pelvic motion. Descriptive statistics of the head measurements from the 23 trials can be found in Table 1 and of the pelvis measurements in Table 2.
We split the comparisons into per-stride and per-trial comparisons. In the per-stride comparisons, we treated each stride as a sample and performed statistical analysis on the stride-based deviations $\Delta V_i$ and $\Delta P_i$. In the per-trial comparison, we treated each horse as a sample and performed statistical analysis on deviations computed from the mean NEVd-values, $\Delta \bar{V}_i$ and $\Delta \bar{P}_i$.

3.1. Per-Stride Comparisons

An example of the time-domain curves from the MC and SC systems for all strides in a recorded trial (horse 11) is displayed in Figure 4. The displayed stride segments of the two systems show high resemblance, resulting in similar conclusions on lameness diagnosis. We provide more examples from the experiment in Appendix A.
The Bland-Altman analysis for head and pelvis lameness metrics is illustrated in Figure 5 and Figure 6 respectively. The deviations are similar for both valley and peak differences and generally higher for head than for pelvis signals. Moreover, the correlations between the NEVd-values are high and the deviations are generally small, rarely exceeding 21 mm for the head signals and 14 mm for pelvis.
We further provide histograms over the deviations for head and pelvis in Figure 7. The histograms include both V and P-values. In addition, these plots show the empirical estimates of normal distributions computed from the stride samples. The distributions of the absolute values for stride mean residuals are presented in Figure 8.
Finally, we provide mean RMSD values in Table 3 to give an estimate of curve similarity. Note that the RMSD, as computed above, is sensitive to small time shifts. As the synchronization between the signals $y_{sc}(t)$ and $y_{mc}(t)$ is approximate, the RMSD does not only reflect curve similarity but also errors in the synchronization.

3.2. Per-Trial Comparisons

In this section, we provide the results from the per-trial comparison between MC and SC. In Table 3 we present statistics over the entire dataset from the per-trial mean NEVd-values. In this case, we combine $\bar{V}$ and $\bar{P}$; thus, each trial contributes two deviation values. From these, we compute the overall mean, maximum and minimum absolute deviations as,
$$\bar{D} = \frac{\sum_{m=1}^{M} |\Delta \bar{V}_m| + \sum_{m=1}^{M} |\Delta \bar{P}_m|}{2M}$$
$$\max D = \max\left( \{|\Delta \bar{V}_m|\}_{m=1}^{M} \cup \{|\Delta \bar{P}_m|\}_{m=1}^{M} \right)$$
$$\min D = \min\left( \{|\Delta \bar{V}_m|\}_{m=1}^{M} \cup \{|\Delta \bar{P}_m|\}_{m=1}^{M} \right),$$
where $M = 23$ is the number of trials, i.e., the number of horses in the dataset, and $\cup$ denotes the union of the sets $\{|\Delta \bar{V}_m|\}_{m=1}^{M}$ and $\{|\Delta \bar{P}_m|\}_{m=1}^{M}$.
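These dataset-level summaries amount to pooling the absolute trial-mean deviations, as in the following short sketch (names are illustrative):

```python
import numpy as np


def dataset_deviation_stats(delta_v_bar, delta_p_bar):
    """Mean, maximum and minimum absolute deviation over all trials,
    pooling the trial-mean V and P deviations."""
    pooled = np.abs(np.concatenate([delta_v_bar, delta_p_bar]))
    return pooled.mean(), pooled.max(), pooled.min()
```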
Similar to the per-stride comparison in Section 3.1, we use Bland-Altman plots [37] to inspect the statistical agreement. The plots for head and pelvis are shown in Figure 9 and Figure 10, respectively. Not unexpectedly, the deviations between systems are smaller for the per-trial mean NEVd-values than for the per-stride comparison in Section 3.1, rarely exceeding 6.4 mm. Similar to the per-stride comparison, the differences between $\bar{V}$ and $\bar{P}$ are small. However, the per-trial deviations show similar values for pelvis and head, which could be due to the fact that the two systems jointly observed more strides for the head signal than for the pelvis.
In addition, we provide histograms over all the deviations (both $\bar{V}$ and $\bar{P}$) for head and pelvis in Figure 11. These plots also show the empirical estimates of normal distributions computed from the stride samples.

4. Discussion

In this work, we demonstrated that deep neural networks and computer vision can be applied to reliably perform orthopaedic gait analysis for horses trotted in hand on the straight during lameness assessment. The recorded average per-trial errors of 2.17 mm (head) and 2.19 mm (pelvis) are well below the previously recorded between-measurement variation of 18 mm (head) and 6 mm (pelvis) for horses trotting in an MC-system with several repetitions over two consecutive days and a one-month follow-up [21]. The results thus indicate that the average per-trial errors of the SC-extracted variables compared to the MC-extracted variables were small enough not to be a major hindrance when used for objective lameness assessment.
From a clinical perspective, the ease-of-use of the SC-system studied here is a clear benefit, since it allows affordable, repeated observations of equine patients, which can help in understanding the considerable between-trial variation observed in gait asymmetry [21].
When comparing the results of this study to another validation study of an IMU-based system (also compared against an MC-system), the stride-by-stride limits of agreement for the pelvic variables were approximately twice as large for the SC-system described in this work [20]. It has to be acknowledged that comparing results from different samples can be a confounding factor in this case, but it still gives a general indication of how a computer vision based solution may compare to an IMU-based solution. Unfortunately, the limits of agreement for the IMU-system in the Bosch et al. study [20] versus our SC-system cannot be compared for the head variables, since these were not presented in the IMU-system validation study.
The reported accuracy of the SC-system can also be compared to a previous test-retest repeatability study of an IMU-system, where the 95% confidence interval was reported as approximately 6 mm for head asymmetries and 3 mm for pelvic asymmetries [38]. However, a later study comparing that IMU-system to a different IMU-system, also developed for detecting equine lameness, found that the limits of agreement between the two systems were in this same range [36]. Further, it was found that the system used by [38] consistently underestimated the amount of movement asymmetry compared to the other IMU-system, which had previously been shown to give values comparable to optical motion capture. It has been suggested that the confidence intervals reported by Keegan et al. [38] should be adjusted to 8 and 4 mm [13] for the other IMU-system, and this is likely a suitable adjustment also for an MC-system.
There were two outlier trials, with recorded per-trial mean deviations of 8.71 mm (head) and 6.54 mm (pelvis). Notably, these outliers occurred for horses with large asymmetries (see horse 2 in Table 1 and horse 4 in Table 2), and did not change the sign of the calculated variables or the clinical interpretation of the gait data. Hence, these deviations would not confuse which limb was affected by the asymmetry. We hypothesized that these errors might be due to difficulties in detecting and tracking the head and pelvis when they were occluded, e.g., by the horse lowering its head and hiding it behind the trunk, or lifting the tail and obscuring the pelvic region.
Another SC-system utilizing deep neural networks to detect and track trotting horses for the purpose of lameness assessment has been described previously [39]. Although their approach was similar to the method presented in this study, the authors did not quantify movement symmetry metrics directly. Instead, they focused on lame-limb classification. It is also worth noting that an SC-system has been developed and shown to be able to perform reliable gait analysis of human subjects [27]. The level of precision described is said to open up several potential applications in human medicine.
Is markerless more or less? A dichotomous answer cannot be given here. There are technical drawbacks to the computer vision techniques implemented in an SC-system aimed at objective equine lameness assessment, lower accuracy and the lack of 3D motion to name two. However, these drawbacks have to be weighed against the benefit of having a lightweight, portable and low-cost system available for data collection that allows repeated observations of the horse. Other systems, such as IMUs and MC, are typically more expensive, are sometimes limited to laboratory environments and require more interaction in terms of placing markers and sensors on the horse. Inevitably, this leads to fewer measurements being performed, and it is well known that small sample sizes are the standard in many equine biomechanics studies. This study has shown that a simple application on a smartphone can be a tool for flexible and reliable collection of kinematic asymmetry data from horses. By extension, this opens up opportunities for larger-scale biomechanics research studies in non-laboratory environments. Coupling this with current advances in machine learning, where computers efficiently learn from data to perform predictions, we suggest that SC-systems can be used to accelerate our understanding of horse locomotion and horse welfare.

Limitations

In the current study, all measurements were performed in the same clinical indoor environment on a limited number of horses (n = 23). The SC-system would likely be more challenged by a severe lack of light or if, for example, heavy rain obscured the visibility of the horse in the video. However, testing under such conditions was not within the scope of this study and would have been impossible to perform, given that the state-of-the-art MC-system is a permanent indoor installation. The smartphone was placed on a tripod during data collection, as handheld recordings would require a stabilisation algorithm to be applied to the video. As such, further research is required to evaluate the SC-system under handheld conditions. Also, the length of the runway was 30 m; hence, a greater distance between the camera and the horse has not been investigated. We did, however, analyse the error per stride index (i.e., with increasing distance from the camera) and did not find increasing deviations in the lameness metrics studied.
Several iPhone models with different camera specifications, such as image resolution and sampling rate, are available today. This study was limited to the use of an iPhone 12 Pro Max, where the resolution was set to 2160 × 3840 pixels and the frame rate to 60 Hz. Newer iPhone models come with even better camera specifications. Further research should investigate whether these models would improve the output of the SC-system even further.
The neural networks used in the SC-system were trained on image data of horses that did not have markers attached to the skin. Therefore, the skin-attached reflective markers used in this validation study could be suspected to partly obscure the anatomical regions of interest and potentially present a problem for the SC-system. Visual inspection of the tracking from the SC-system did, however, show very stable detection of the anatomical segments. This is confirmed by the comparison to the output of the MC-system.
This study only investigated vertical movement asymmetry of the head and pelvis on a straight line. Comparisons of 3D motion would be of interest in the future, in order to provide a more detailed analysis of horse locomotion, e.g., limb retraction and protraction angles studied from the lateral view or on a circle. However, 3D lifting from a 2D image is an inherently complicated computer vision task for which more advanced methods would be required, and so these kinematic variables were not investigated in this preliminary study.

5. Conclusions

We conclude that objective gait analysis for lameness assessment in horses can be reliably performed using a smartphone and computer vision analysis built on deep neural networks. The measurement deviation, when compared to a state-of-the-art motion capture system, is larger than that of IMU-based systems [20], but the error is clearly lower than the levels of between-trial variation observed in earlier studies [21]. The ease-of-use of the system makes repeated observations of a horse’s lameness more feasible, which can provide more objective data points for treatment evaluation.

Author Contributions

Conceptualisation, E.H., M.R., P.H.A. and H.K.; methodology, E.H., F.J.L., A.B. and C.R.; software, F.J.L., C.R., M.S., M.A. and A.B.; validation, F.J.L., E.H., C.R. and A.B.; formal analysis, F.J.L.; investigation, F.J.L., E.H.; resources, E.H., M.R.; data curation, F.J.L.; writing—original draft preparation, F.J.L., E.H. and C.R.; writing—review and editing, all authors; visualisation, F.J.L.; supervision, E.H. and H.K.; project administration, E.H.; funding acquisition, E.H., P.H.A. and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Swedish Research Council FORMAS grant number 2018-00737, Marie-Claire Cronstedts Stiftelse grant number 2018.4.2-2084 and CareNet grant number SLU ua 2020.4.1-4651.

Institutional Review Board Statement

Animal ethical approval was not applicable for this study according to Swedish legislation, since it involved privately owned horses that were not subjected to any invasive procedures, and the clinical decisions were not altered by their entering the study.

Informed Consent Statement

Written informed consent has been obtained from the animal owners to publish this paper.

Data Availability Statement

Raw stride-level data from all included horses (and all strides) are presented as Appendix A to the manuscript.

Acknowledgments

Thanks to the staff at the Equine Orthopaedic Clinic of the University Animal Hospital in Uppsala for support with the recordings.

Conflicts of Interest

The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Authors affiliated with the company Sleip AI (F.J.L., C.R., M.A., M.S. and E.H.) declare a conflict of interest, since the company provides a commercially available diagnostic tool for detecting lameness in horses from a smartphone; the computer vision technique developed by the company is validated in the current paper.

Abbreviations

The following abbreviations are used in this manuscript:
SC: Single-camera markerless system
MC: Multi-camera marker-based system
VDS: Vertical displacement signal
NEVd: Normalised extreme value differences
MaxDiff: Difference between local maxima within a stride
MinDiff: Difference between local minima within a stride
P: Normalised difference between local maxima of the VDS per stride
V: Normalised difference between local minima of the VDS per stride
$\bar{P}$: Trial mean P
$\bar{V}$: Trial mean V
$\sigma_P$: Trial standard deviation of P
$\sigma_V$: Trial standard deviation of V
$\Delta P$: Deviation between two corresponding P values from different systems
$\Delta V$: Deviation between two corresponding V values from different systems
$\Delta \bar{P}$: Deviation between two corresponding $\bar{P}$ values from different systems
$\Delta \bar{V}$: Deviation between two corresponding $\bar{V}$ values from different systems
$\bar{D}$: Mean absolute deviation over the dataset
max D: Maximum absolute deviation over the dataset
min D: Minimum absolute deviation over the dataset
R: Range of motion of the VDS per stride
LDA: Linear Discriminant Analysis
LoA: Limit of Agreement

Appendix A. All Stride Curves

Figure A1. Vertical displacement signals per stride for the head and pelvis for horse 1. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A2. Vertical displacement signals per stride for the head and pelvis for horse 2. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A3. Vertical displacement signals per stride for the head and pelvis for horse 3. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A4. Vertical displacement signals per stride for the head and pelvis for horse 4. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A5. Vertical displacement signals per stride for the head and pelvis for horse 5. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A6. Vertical displacement signals per stride for the head and pelvis for horse 6. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A7. Vertical displacement signals per stride for the head and pelvis for horse 7. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A8. Vertical displacement signals per stride for the head and pelvis for horse 9. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A9. Vertical displacement signals per stride for the head and pelvis for horse 10. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A10. Vertical displacement signals per stride for the head and pelvis for horse 11. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A11. Vertical displacement signals per stride for the head and pelvis for horse 12. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A12. Vertical displacement signals per stride for the head and pelvis for horse 13. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A13. Vertical displacement signals per stride for the head and pelvis for horse 14. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A14. Vertical displacement signals per stride for the head and pelvis for horse 15. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A15. Vertical displacement signals per stride for the head and pelvis for horse 16. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A16. Vertical displacement signals per stride for the head and pelvis for horse 17. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A17. Vertical displacement signals per stride for the head and pelvis for horse 18. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A18. Vertical displacement signals per stride for the head and pelvis for horse 19. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A19. Vertical displacement signals per stride for the head and pelvis for horse 20. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A20. Vertical displacement signals per stride for the head and pelvis for horse 21. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A21. Vertical displacement signals per stride for the head and pelvis for horse 22. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A22. Vertical displacement signals per stride for the head and pelvis for horse 23. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A23. Vertical displacement signals per stride for the head and pelvis for horse 24. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.
Figure A24. Vertical displacement signals per stride for the head and pelvis for horse 25. Each subplot contains the matched stride segments for the single-camera markerless system (SC) in green and the multi-camera marker-based system (MC) in red. The y-axis shows the vertical displacement in millimeters. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2.

Figure 1. Illustration of the experimental setup. We recorded horses trotting back and forth in a corridor with a multi-camera system and a single smartphone camera. The multi-camera system detected and tracked reflective markers attached to the horse. Marker positions were triangulated into 3D coordinates from which vertical displacement curves were extracted. The single-camera system used deep neural networks to predict the vertical displacement curves of the head and pelvis from the images in the smartphone video stream. We then compared the displacement curves from the two systems using normalised peak and valley differences.
Figure 2. Example images from data set (horse 21) taken from the video recorded by the markerless single-camera system (SC). Attached to the horse’s skin by double adhesive tape are the spherical reflective markers used by the multi-camera marker-based system (MC) for tracking head and pelvic motion. The markers on the poll and between the tubera sacrale were used.
Figure 3. Example of the vertical displacement signal from head or pelvis, with local minima denoted with v for valley and local maxima with p for peak. Metrics for lameness quantification are calculated as the difference between the two minima (MinDiff) and the two maxima (MaxDiff) per stride. A stride segment $y_{s_i}(t)$ was defined as the section of $y(t)$ starting at $t_{p_i}$ and ending at $t_{p_{i+2}}$.
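To make the MinDiff/MaxDiff definitions above concrete, the following short Python sketch shows how the two metrics could be extracted from a single stride-segmented vertical displacement curve. It is illustrative only and not the implementation used in this study; the function name, the use of scipy.signal.find_peaks and the sign convention (first minus second extremum) are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def stride_asymmetry(y):
    """Illustrative sketch: MinDiff and MaxDiff (in mm) for one stride-segmented
    vertical displacement curve y. A trot stride contains two displacement
    minima and two maxima; the metrics are the differences between them."""
    y = np.asarray(y, dtype=float)
    peaks, _ = find_peaks(y)        # candidate local maxima
    valleys, _ = find_peaks(-y)     # candidate local minima
    if len(peaks) < 2 or len(valleys) < 2:
        return None                 # stride cannot be evaluated
    # keep the two highest maxima and the two lowest minima, in temporal order
    p1, p2 = np.sort(peaks[np.argsort(y[peaks])[-2:]])
    v1, v2 = np.sort(valleys[np.argsort(y[valleys])[:2]])
    min_diff = y[v1] - y[v2]        # difference between the two minima (MinDiff)
    max_diff = y[p1] - y[p2]        # difference between the two maxima (MaxDiff)
    return min_diff, max_diff
```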
Figure 4. Example of the vertical displacement signal per stride for the head and pelvis for horse 11. Each subplot contains the matched stride segments for the markerless single-camera system (SC) in green and the multi-camera marker-based motion capture system (MC) in red. The y-axis shows the vertical displacement in millimetres. The four dots on each curve indicate the positions of the two peaks and two valleys extracted using the approach in Section 2.6.2. We observe that despite the high variability in curve shape between strides, there is a notable resemblance between the two systems.
Figure 5. Scatter plots of the head metrics obtained per stride (n = 655) observed by the multi-camera marker-based motion capture system (MC) plotted against the observations by the single-camera markerless system (SC) are shown in the left sub-panels. Agreement between the systems, with limits of agreement (LoA) displayed as orange horizontal lines, is presented in the Bland-Altman plots in the right sub-panels. In the top row, we show $\Delta V$ (MinDiff) and in the bottom row, we show $\Delta P$ (MaxDiff), defined in Section 2.7.1. For visibility, we have fixed the range of the y-axis, causing a few samples with large residuals to fall outside the plotted range. In the left plot, the out-of-range residuals are at 97.0 and 55.2; in the right plot, they are at 48.9, 81.5, 70.9, 56.0, 47.1, 71.5 and 44.4. While these samples have large errors, they have little impact on the overall statistics.
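As a reading aid for the Bland-Altman panels in Figures 5, 6, 9 and 10, the sketch below shows how a bias and 95% limits of agreement (mean difference ± 1.96 standard deviations) can be computed for matched metric values from the two systems. It is a generic illustration, not the code used to produce the figures, and the function name is hypothetical.

```python
import numpy as np

def bland_altman(sc_values, mc_values):
    """Sketch: Bland-Altman statistics for paired per-stride metrics (mm)
    from the single-camera (SC) and multi-camera (MC) systems."""
    sc = np.asarray(sc_values, dtype=float)
    mc = np.asarray(mc_values, dtype=float)
    diff = sc - mc                        # between-system deviation per stride
    bias = diff.mean()                    # systematic offset between the systems
    half_width = 1.96 * diff.std(ddof=1)  # half-width of the limits of agreement
    return bias, bias - half_width, bias + half_width
```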
Figure 6. Scatter plots of the pelvic metrics obtained per stride (n = 404) observed by the multi-camera marker-based motion capture system (MC) plotted against the observations by the single-camera markerless system (SC) are shown in the left sub-panels. Agreement between the systems, with limits of agreement (LoA) displayed as orange horizontal lines, is presented in the Bland-Altman plots in the right sub-panels. In the top row, we show $\Delta V$ (MinDiff) and in the bottom row, we show $\Delta P$ (MaxDiff), defined in Section 2.7.1.
Figure 7. Distributions of the per-stride deviations for the head (left) and pelvis (right), combining the peak and valley deviations $\Delta V$ and $\Delta P$ defined in Section 2.7.1. The dashed line displays the normal distribution estimated from the means and standard deviations of the between-system deviations.
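The dashed reference curves in Figures 7 and 11 correspond to a normal distribution parameterised by the sample mean and standard deviation of the deviations. A minimal sketch of such an overlay is given below, assuming matplotlib and scipy are available; it is not the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def plot_deviation_distribution(deviations, title):
    """Sketch: histogram of between-system deviations (mm) with an overlaid
    normal curve estimated from the sample mean and standard deviation."""
    d = np.asarray(deviations, dtype=float)
    plt.hist(d, bins=30, density=True, alpha=0.6)
    x = np.linspace(d.min(), d.max(), 200)
    plt.plot(x, norm.pdf(x, loc=d.mean(), scale=d.std(ddof=1)), linestyle="--")
    plt.xlabel("deviation (mm)")
    plt.title(title)
    plt.show()
```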
Figure 8. Distributions of the absolute values of the stride mean residuals for the head (left) and pelvis (right).
Figure 9. Trial-level scatter plots of the head metrics from the 23 included horses, measured by the multi-camera marker-based motion capture system (MC) versus the single-camera markerless system (SC), are presented in the left sub-panels. Agreement between the systems, with limits of agreement (LoA) displayed as orange horizontal lines, is presented in the Bland-Altman plots in the right sub-panels. In the top row, we show $\overline{\Delta V}$ (trial mean MinDiff) and in the bottom row, we show $\overline{\Delta P}$ (trial mean MaxDiff), defined in Section 2.7.1.
Figure 10. Trial-level scatter plots of the pelvic metrics from the 23 included horses, measured by the multi-camera marker-based motion capture system (MC) versus the single-camera markerless system (SC), are presented in the left sub-panels. Agreement between the systems, with limits of agreement (LoA) displayed as orange horizontal lines, is presented in the Bland-Altman plots in the right sub-panels. In the top row, we show $\overline{\Delta V}$ (trial mean MinDiff) and in the bottom row, we show $\overline{\Delta P}$ (trial mean MaxDiff), defined in Section 2.7.1.
Figure 11. Distributions of the per-trial deviations for the head (left) and pelvis (right), combining the peak and valley deviations $\overline{\Delta V}$ and $\overline{\Delta P}$ defined in Section 2.7.1. The dashed line shows the normal distribution estimated from the means and standard deviations of the deviations.
Table 1. Descriptive statistics for the head measurements from the 23 included horses, showing the number of matched strides per trial (N) and the trial mean deviations between the two systems for the valley values, $\overline{\Delta V}$ (MinDiff), and the peak values, $\overline{\Delta P}$ (MaxDiff). The actual trial means of MinDiff ($\bar{V}$) and MaxDiff ($\bar{P}$) are also presented per trial for the multi-camera marker-based (MC) and single-camera markerless (SC) systems, followed by their within-trial standard deviations ($\sigma_V$ and $\sigma_P$). The trial mean range of motion of the vertical displacement signal ($\bar{R}$) is given for the MC system. All values except N are in millimetres.
| Horse | N | $\overline{\Delta V}$ | $\overline{\Delta P}$ | $\bar{V}_{SC}$ | $\bar{V}_{MC}$ | $\bar{P}_{SC}$ | $\bar{P}_{MC}$ | $\sigma_{V,SC}$ | $\sigma_{V,MC}$ | $\sigma_{P,SC}$ | $\sigma_{P,MC}$ | $\bar{R}_{MC}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 16 | −0.4 | −4.8 | −1.5 | −1.1 | 13.1 | 17.9 | 15.2 | 15.4 | 18.1 | 16.5 | 66.6 |
| 2 | 23 | −2.1 | 8.7 | −9.4 | −7.3 | 43.0 | 34.3 | 12.3 | 10.4 | 19.7 | 20.8 | 82.2 |
| 3 | 29 | 2.3 | −1.9 | −1.6 | −3.9 | 9.5 | 11.4 | 17.7 | 15.0 | 13.6 | 14.1 | 70.4 |
| 4 | 38 | 4.1 | 3.4 | −39.8 | −44.0 | −0.8 | −4.2 | 14.3 | 15.7 | 14.8 | 16.8 | 71.2 |
| 5 | 28 | −0.3 | 2.2 | 70.0 | 70.3 | −17.5 | −19.8 | 17.4 | 15.2 | 17.2 | 14.5 | 109.6 |
| 6 | 26 | 4.4 | 5.7 | −39.2 | −43.6 | −3.9 | −9.6 | 16.8 | 19.1 | 15.8 | 15.8 | 68.5 |
| 7 | 19 | 3.3 | −3.1 | 5.4 | 2.1 | 3.3 | 6.3 | 13.5 | 14.2 | 19.9 | 21.1 | 77.5 |
| 8 | 22 | −0.6 | −3.7 | 7.1 | 7.7 | 8.8 | 12.5 | 15.7 | 11.9 | 11.0 | 10.4 | 73.6 |
| 9 | 29 | 1.0 | −5.0 | −39.7 | −40.7 | 15.7 | 20.6 | 9.1 | 10.6 | 9.5 | 10.0 | 70.8 |
| 10 | 36 | −0.6 | 0.6 | 1.8 | 2.4 | −19.1 | −19.8 | 22.3 | 22.0 | 20.7 | 21.4 | 95.2 |
| 11 | 27 | 0.0 | 0.4 | −57.8 | −57.9 | 13.0 | 12.5 | 11.5 | 13.1 | 16.4 | 15.2 | 90.1 |
| 12 | 28 | −2.5 | 1.9 | −11.7 | −9.2 | −18.5 | −20.4 | 14.3 | 13.5 | 17.5 | 14.9 | 71.0 |
| 13 | 22 | −0.0 | 1.6 | 62.8 | 62.8 | −3.7 | −5.3 | 8.9 | 8.2 | 10.8 | 10.3 | 79.8 |
| 14 | 22 | 0.3 | −0.9 | −1.6 | −1.9 | 8.7 | 9.6 | 9.7 | 13.8 | 6.1 | 10.6 | 39.0 |
| 15 | 29 | 1.5 | −1.8 | 27.0 | 25.5 | −14.1 | −12.4 | 13.8 | 13.1 | 11.2 | 11.6 | 75.2 |
| 16 | 19 | −6.8 | 0.2 | 22.8 | 29.6 | 10.9 | 10.7 | 19.1 | 26.2 | 18.6 | 28.1 | 75.5 |
| 17 | 34 | −0.2 | −3.7 | 3.7 | 3.9 | −4.2 | −0.5 | 18.9 | 17.5 | 19.6 | 18.7 | 95.0 |
| 18 | 24 | 1.8 | −3.1 | 14.9 | 13.1 | −13.1 | −10.1 | 8.3 | 7.9 | 13.6 | 11.4 | 57.9 |
| 19 | 35 | −0.4 | −2.1 | 0.2 | 0.6 | 24.2 | 26.3 | 6.9 | 6.4 | 8.6 | 6.0 | 51.0 |
| 20 | 41 | −0.5 | 1.7 | −21.6 | −21.0 | 7.0 | 5.3 | 11.9 | 12.6 | 9.1 | 8.5 | 71.4 |
| 21 | 16 | −0.4 | 1.2 | −23.5 | −23.0 | 5.8 | 4.6 | 8.4 | 7.7 | 6.7 | 7.4 | 43.0 |
| 22 | 36 | 2.2 | −1.8 | −13.0 | −15.2 | −1.5 | 0.3 | 8.5 | 8.2 | 12.3 | 10.5 | 50.9 |
| 23 | 56 | −4.2 | −0.1 | −18.2 | −14.0 | −16.3 | −16.2 | 18.7 | 17.6 | 18.8 | 18.9 | 74.7 |
| mean | 28.5 | 1.7 | 2.6 | 21.5 | 21.8 | 12.0 | 12.6 | 13.6 | 13.7 | 14.3 | 14.5 | 72.2 |
Table 2. Descriptive statistics for the pelvic measurements from the 23 included horses, showing the number of matched strides per trial (N) and the trial mean deviations between the two systems for the valley values, $\overline{\Delta V}$ (MinDiff), and the peak values, $\overline{\Delta P}$ (MaxDiff). The actual trial means of MinDiff ($\bar{V}$) and MaxDiff ($\bar{P}$) are also presented per trial for the multi-camera marker-based (MC) and single-camera markerless (SC) systems, followed by their within-trial standard deviations ($\sigma_V$ and $\sigma_P$). The trial mean range of motion of the vertical displacement signal ($\bar{R}$) is given for the MC system. All values except N are in millimetres.
| Horse | N | $\overline{\Delta V}$ | $\overline{\Delta P}$ | $\bar{V}_{SC}$ | $\bar{V}_{MC}$ | $\bar{P}_{SC}$ | $\bar{P}_{MC}$ | $\sigma_{V,SC}$ | $\sigma_{V,MC}$ | $\sigma_{P,SC}$ | $\sigma_{P,MC}$ | $\bar{R}_{MC}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 13 | 1.0 | −0.1 | −0.1 | −1.1 | 6.6 | 6.6 | 8.4 | 6.1 | 8.2 | 8.3 | 83.4 |
| 2 | 15 | 1.7 | 4.3 | −0.4 | −2.1 | 0.6 | −3.8 | 11.0 | 8.6 | 10.0 | 8.6 | 82.3 |
| 3 | 16 | −1.8 | 2.1 | −0.6 | 1.2 | −2.7 | −4.8 | 6.5 | 4.0 | 8.9 | 5.6 | 82.6 |
| 4 | 24 | 4.3 | 6.5 | −15.8 | −20.1 | 21.0 | 14.5 | 9.3 | 7.9 | 9.6 | 8.1 | 92.5 |
| 5 | 17 | 5.5 | −0.5 | 5.7 | 0.2 | 4.3 | 4.8 | 5.1 | 4.3 | 10.3 | 11.3 | 79.3 |
| 6 | 15 | 4.8 | 5.9 | −12.8 | −17.6 | 27.2 | 21.3 | 11.0 | 7.5 | 8.9 | 6.3 | 92.8 |
| 7 | 17 | 1.5 | 0.0 | −0.1 | −1.5 | −5.2 | −5.2 | 6.8 | 6.9 | 10.5 | 8.5 | 74.8 |
| 8 | 13 | 1.1 | 3.1 | 2.1 | 1.0 | −8.6 | −11.6 | 8.0 | 8.1 | 9.2 | 4.7 | 76.7 |
| 9 | 15 | −0.8 | 2.2 | −13.0 | −12.2 | −4.3 | −6.5 | 7.1 | 4.6 | 6.0 | 6.0 | 67.5 |
| 10 | 22 | 4.6 | 2.1 | 9.0 | 4.4 | 2.2 | 0.1 | 6.5 | 7.7 | 15.8 | 16.0 | 77.7 |
| 11 | 15 | −1.7 | 0.7 | −7.0 | −5.3 | −11.9 | −12.6 | 6.4 | 8.3 | 10.2 | 9.6 | 88.0 |
| 12 | 14 | −0.9 | 1.1 | −4.5 | −3.6 | 2.6 | 1.5 | 6.4 | 4.8 | 7.3 | 7.9 | 64.7 |
| 13 | 11 | 3.9 | 2.4 | −12.6 | −16.5 | 9.0 | 6.6 | 9.7 | 6.9 | 8.1 | 8.5 | 70.8 |
| 14 | 14 | 1.3 | 0.9 | −5.1 | −6.4 | 13.0 | 12.0 | 5.8 | 6.7 | 9.0 | 8.6 | 74.8 |
| 15 | 14 | −2.3 | −3.0 | 5.8 | 8.1 | −16.9 | −13.9 | 5.7 | 3.1 | 6.0 | 3.3 | 75.3 |
| 16 | 13 | 4.2 | −1.1 | 11.8 | 7.7 | −25.8 | −24.7 | 10.5 | 13.0 | 13.8 | 8.7 | 75.7 |
| 17 | 18 | −1.4 | −0.8 | −10.6 | −9.3 | −1.4 | −0.6 | 12.2 | 11.2 | 11.4 | 11.5 | 103.7 |
| 18 | 15 | −3.2 | −0.3 | −2.8 | 0.4 | −13.1 | −12.8 | 5.6 | 4.6 | 7.1 | 3.3 | 85.5 |
| 19 | 26 | 1.2 | −1.2 | −9.6 | −10.8 | 4.4 | 5.6 | 3.6 | 3.4 | 4.3 | 4.6 | 39.6 |
| 20 | 21 | −4.9 | 1.3 | −7.2 | −2.3 | 0.1 | −1.2 | 6.5 | 6.6 | 11.6 | 9.0 | 87.8 |
| 21 | 21 | 0.8 | 2.6 | −38.0 | −38.9 | 48.5 | 46.0 | 7.1 | 6.8 | 12.7 | 8.6 | 97.7 |
| 22 | 25 | −1.5 | 2.8 | −10.2 | −8.7 | −1.4 | −4.2 | 6.6 | 5.1 | 5.5 | 6.1 | 67.0 |
| 23 | 30 | 1.0 | 0.4 | −21.5 | −22.6 | 12.5 | 12.1 | 8.9 | 9.8 | 8.8 | 8.0 | 72.6 |
| mean | 17.6 | 2.4 | 2.0 | 9.0 | 8.8 | 10.6 | 10.1 | 7.6 | 6.8 | 9.3 | 7.9 | 78.8 |
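The trial-level quantities reported in Tables 1 and 2 can be derived directly from the matched per-stride metric values. The sketch below indicates one way to perform that aggregation; the function name and the computation of the trial deviation as the difference of trial means are assumptions made for illustration only.

```python
import numpy as np

def trial_summary(sc_values, mc_values):
    """Sketch: aggregate matched per-stride metric values (mm) from the
    single-camera (SC) and multi-camera (MC) systems into trial-level
    statistics resembling the columns of Tables 1 and 2."""
    sc = np.asarray(sc_values, dtype=float)
    mc = np.asarray(mc_values, dtype=float)
    return {
        "N": len(sc),                              # matched strides in the trial
        "trial_deviation": sc.mean() - mc.mean(),  # e.g. trial mean MinDiff deviation
        "mean_sc": sc.mean(),
        "mean_mc": mc.mean(),
        "sd_sc": sc.std(ddof=1),                   # within-trial standard deviation (SC)
        "sd_mc": mc.std(ddof=1),                   # within-trial standard deviation (MC)
    }
```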
Table 3. Results summary of the measurement deviations between the systems over the entire dataset. The top six rows give the combined per-trial deviations ($\bar{D}$), using the deviations for both the normalised MinDiff ($\overline{\Delta V}$) and MaxDiff ($\overline{\Delta P}$); for both head and pelvis, the overall absolute mean, maximum and minimum deviations are given. The bottom two rows give the mean root mean square deviation (RMSD) as a comparison of the shape of the vertical displacement signals.
| Per trial | |
|---|---|
| head $\bar{D}$ | 2.2 mm |
| pelvis $\bar{D}$ | 2.2 mm |
| head max $D$ | 8.7 mm |
| pelvis max $D$ | 6.5 mm |
| head min $D$ | 0.0 mm |
| pelvis min $D$ | 0.0 mm |
| Per stride | |
| head mean RMSD | 5.0 mm |
| pelvis mean RMSD | 3.5 mm |
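For the per-stride RMSD values in Table 3, the sketch below illustrates one way to compute a root mean square deviation between a pair of matched stride curves. Resampling both curves to a common number of points with linear interpolation is an assumption made here for illustration, not necessarily the procedure used in the study.

```python
import numpy as np

def stride_rmsd(y_sc, y_mc, n_points=100):
    """Sketch: RMSD (mm) between matched stride curves from the single-camera
    (y_sc) and multi-camera (y_mc) systems, resampled to a common length."""
    t = np.linspace(0.0, 1.0, n_points)
    sc = np.interp(t, np.linspace(0.0, 1.0, len(y_sc)), np.asarray(y_sc, dtype=float))
    mc = np.interp(t, np.linspace(0.0, 1.0, len(y_mc)), np.asarray(y_mc, dtype=float))
    return np.sqrt(np.mean((sc - mc) ** 2))
```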