Article

Camera- and Viewpoint-Agnostic Evaluation of Axial Postural Abnormalities in People with Parkinson’s Disease through Augmented Human Pose Estimation

1 Department of Engineering for Innovation Medicine, University of Verona, 37134 Verona, Italy
2 Department of Neuroscience “Rita Levi Montalcini”, University of Turin, 10124 Turin, Italy
3 Neurology 2 Unit, Azienda Ospedaliero-Universitaria Città della Salute e della Scienza di Torino, 10126 Turin, Italy
4 Neurology Unit, Movement Disorders Division, Department of Neurosciences Biomedicine and Movement Sciences, University of Verona, 37129 Verona, Italy
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3193; https://doi.org/10.3390/s23063193
Submission received: 30 January 2023 / Revised: 2 March 2023 / Accepted: 13 March 2023 / Published: 16 March 2023

Abstract

Axial postural abnormalities (aPA) are common features of Parkinson’s disease (PD) and manifest in over 20% of patients during the course of the disease. aPA form a spectrum of functional trunk misalignment, ranging from a typical Parkinsonian stooped posture to progressively greater degrees of spine deviation. Current research has not yet led to a sufficient understanding of the pathophysiology and management of aPA in PD, partially due to the lack of agreement on validated, user-friendly, automatic tools for measuring and analysing the differences in the degree of aPA according to patients’ therapeutic conditions and tasks. In this context, human pose estimation (HPE) software based on deep learning could be a valid support, as it automatically extrapolates the spatial coordinates of human skeleton keypoints from images or videos. Nevertheless, standard HPE platforms have two limitations that prevent their adoption in this clinical practice. First, standard HPE keypoints are inconsistent with the keypoints needed to assess aPA (degrees and fulcrum). Second, aPA assessment either requires advanced RGB-D sensors or, when based on the processing of RGB images, is most likely sensitive to the adopted camera and to the scene (e.g., sensor–subject distance, lighting, background–subject clothing contrast). This article presents a software tool that augments the human skeleton extrapolated by state-of-the-art HPE software from RGB pictures with exact bone points for posture evaluation, through computer vision post-processing primitives. The article shows the robustness and accuracy of the software on the processing of 76 RGB images, with different resolutions and sensor–subject distances, from 55 PD patients with different degrees of anterior and lateral trunk flexion.

1. Introduction

Parkinson’s disease (PD) is the second most common neurodegenerative disease and is characterized by non-motor and motor symptoms [1,2,3,4,5,6]. Among the latter, axial postural abnormalities (aPA) are a frequent complication associated with back pain, reduced mobility and postural instability, thus leading to a higher risk of falls and a reduced quality of life [7,8,9]. Clear definitions and cut-off values for axial postural abnormalities in people with PD and atypical Parkinsonisms were recently given to avoid heterogeneity of the reported results and lack of clarity in the literature, and to foster advances in diagnosis, management and prevention [10].
Among aPA, camptocormia (CC) and Pisa syndrome (PS) indicate reversible severe flexions of the trunk on the sagittal plane (with thoracic fulcrum—tCC: anterior flexion at the C7–T12 vertebrae > 45°; with lumbar fulcrum—lCC: anterior flexion at the L1–L5 vertebrae > 30° and hip flexion) and on the coronal plane (lateral flexion > 10°), respectively [10]. Their reliable evaluation and early recognition may help in tuning the pharmacological [11] and physical [12] therapies for their management [13].
The analysis of posture is normally performed with stereophotogrammetric systems, force platforms [14,15] and inertial sensors [16]. To the best of the authors’ knowledge, these approaches are not normally used to assess CC and PS. The only exception is given in [17], where the authors proposed a marker-based approach complemented with a ground reaction force analysis to investigate the effects of PS on standing posture and gait symmetry, with a particular focus on joint kinematics and weight distribution. However, no direct assessment of aPA was proposed.
aPA have been recently evaluated with Kinovea [18]. Kinovea is an open-source video annotation tool able to measure angles after virtual palpation of landmarks on RGB images/videos (www.kinovea.org, accessed on 2 September 2022). Following an equivalent approach, NeuroPostureApp© (www.neuroimaging.uni-kiel.de, accessed on 2 September 2022) provides clinicians with PS and CC measures, both with the lumbar and thoracic fulcrum [19].
The Task Force on Postural Abnormalities in Parkinsonism, within the International Movement Disorders Society, has recently established the consensus on nosology and cut-off values, and recommended the use of either the NeuroPostureApp©, or the wall goniometer [20] as state-of-the-art methods to evaluate aPA, with the wall goniometer potentially underestimating such measures [10].
The NeuroPostureApp© has been developed by Kiel University, following the definitions given in [19,21], and calls for an operator to collect a picture of the undressed subject and to virtually palpate landmarks on that picture. Then, NeuroPostureApp© calculates the following angles: (i) tCC, defined as the external angle between the line joining the fulcrum of the spine flexion and the fifth lumbar vertebra process (L5), and the line connecting the fulcrum of the spine flexion to the seventh cervical vertebra process (C7); (ii) lCC, defined as the external angle between the line joining L5 and the visible lateral malleolus, and the line connecting L5 to C7; (iii) PS, which is the external angle between the line joining the midpoint between the feet and L5, and the line connecting L5 to C7 [19]. The intra-subject test–retest and inter-operator reliability were found to be excellent for CC and good for PS evaluation [21]. However, virtual palpation of landmarks is strongly operator-dependent, calls for extensive training, and is thus time-consuming.
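For illustration only, the following minimal sketch (not part of NeuroPostureApp© or of the software presented here) shows how such an external angle can be computed from three annotated 2D points, assuming image coordinates in pixels with the y-axis pointing downwards; the example coordinates are hypothetical.

```python
import numpy as np

def external_angle(fulcrum, p_up, p_down):
    """External angle (degrees) at `fulcrum` between the lines towards `p_up` and `p_down`.

    For tCC, `fulcrum` is the fulcrum of the spine flexion, `p_up` is C7 and `p_down` is L5;
    the external angle is 180 degrees minus the interior angle at the fulcrum.
    """
    v1 = np.asarray(p_up, float) - np.asarray(fulcrum, float)
    v2 = np.asarray(p_down, float) - np.asarray(fulcrum, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical pixel coordinates (x, y) of the fulcrum, C7 and L5 on a sagittal picture.
print(external_angle((320, 400), (450, 260), (320, 600)))  # ~42.9 degrees, just below the 45-degree tCC cut-off
```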
Software based on human pose estimation (HPE) [22] could be a valid markerless alternative to virtual landmark recognition and palpation. HPE algorithms are based on convolutional neural networks (CNN) that automatically identify feature points of the human body, defined as keypoints, on images captured through standard digital cameras [23,24]. There is increasing interest from the scientific community in the application of HPE algorithms to study motion and posture, and many validation studies have been published [25,26].
An HPE approach has recently been used to assess aPA in people with PD [27]. Nevertheless, the keypoints normally identified by HPE algorithms are not sufficient to assess aPA as defined by the Movement Disorders Society criteria [10]. Specifically, the missing keypoints needed to measure CC with thoracic and lumbar fulcra, and PS, are: the last cervical vertebra (C7), the last lumbar vertebra (L5), the mid-point between the two ankles (MA), and the most distant point on the participant’s silhouette from the line joining C7 and L5 on the sagittal view (FC).
Zhang et al. proposed the use of a depth camera (an RGB-D sensor, the Microsoft Kinect v2) to extend the standard set of HPE keypoints by detecting the human silhouette [28]. Several solutions now exist to extrapolate depth information either directly (i.e., through RGB cameras coupled with matrix depth sensors) or indirectly (i.e., through stereo RGB cameras) [26]. Although these are validated and reliable solutions, there is a trade-off between their usability, portability and accuracy [29]. Moreover, these methods call for specific devices to be available, or for specialized users to be involved, possibly limiting their applicability to everyday use.
More recently, a software-based tool, called AutoPosturePD, was proposed to automatically evaluate PS, lCC and tCC from RGB pictures of people with PD, taken with single off-the-shelf cameras (i.e., with no need for depth information) and with no additional human input (e.g., virtual landmark palpation) [30]. AutoPosturePD measures PS, lCC and tCC as defined by the Task Force on Postural Abnormalities in Parkinsonism of the International Movement Disorders Society [10]. The authors presented an agreement analysis between the measures obtained with the NeuroPostureApp© (i.e., the gold standard) [19] and the newly proposed AutoPosturePD, with the aim of evaluating its accuracy in the diagnosis of aPA (reporting Bland–Altman plots, intra-class correlation coefficients, standard errors of measurement and Cohen’s kappa), and encompassing sensitivity and specificity in the diagnosis of PS and CC against the current gold standard [30]. AutoPosturePD was found to be a valid tool for the clinical assessment of PS, lCC and tCC in PD, supporting their diagnosis [30]. AutoPosturePD was used to measure the aPA of each participant and to classify whether they had passed the thresholds [10] defining a pathological condition, thus allowing a sensitivity and specificity analysis of the software, which yielded excellent results [30].
It is worth noting that AutoPosturePD automatically measures the aPA starting from images taken with a single off-the-shelf camera, with no additional information to be fed to the algorithm. Parameters that could prevent AutoPosturePD from accurately identifying landmarks include the subject’s anthropometry, the image resolution, the ratio between the subject image size and the total image size, and the hue–saturation values of the images.
However, the robustness of AutoPosturePD to participant anthropometry and picture characteristics, which would ensure its portability to different devices, had not been tested.
The aim of this work is to fill this gap by presenting a secondary analysis of the same dataset used in [30] and testing the robustness of the AutoPosturePD outcome measures to the above-mentioned parameters, thus supporting its portability to different devices and environmental settings (i.e., viewpoints, background colours, room lighting, etc.). This article also presents a more extensive and in-depth description of the software.

2. Materials and Methods

2.1. The AutoPosturePD Software

State-of-the-art HPE solutions perform inference on images (or video frames) to extrapolate a set of human body keypoints ($KPS$):

$$KPS = \{ kp_j^i : i = 1 \dots |VF|,\ j = 1 \dots |CNN\_kps| \}$$

where $kp_j^i$ is the j-th keypoint at the i-th frame, $|CNN\_kps|$ is the total number of keypoints detected per frame through the adopted convolutional neural network (CNN), and $|VF|$ is the number of processed video frames. All HPE platforms provide a common subset of canonical keypoints (i.e., estimates of the human joint centres or segment centroids), including the left and right shoulders ($LSH$, $RSH$), elbows ($LE$, $RE$), wrists ($LW$, $RW$), the pelvis ($P$), knees ($LK$, $RK$), ankles ($LA$, $RA$), and face points such as the nose ($N$), eyes ($LEye$, $REye$), and ears ($LEar$, $REar$).
Figure 1 shows the overview of the platform used to augment the set of canonical $KPS$ with two additional sets of keypoints, $FKPS$ and $SKPS$, as proposed in [30], for the assessment of PS and of lCC and tCC, respectively. OpenPose [31], an accurate state-of-the-art HPE platform [32], was used to extrapolate the canonical 2D human keypoints. We selected the $BODY\_25$ OpenPose model, trained on the COCO [33] and MPII [34] datasets, to extrapolate a set of 25 keypoints.
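As an illustrative sketch only (not the authors' implementation), the canonical keypoints can be obtained through the OpenPose Python bindings; the snippet assumes pyopenpose is built locally with the BODY_25 models available, the file paths are hypothetical, and the exact wrapper calls may differ slightly across OpenPose versions. The keypoint indices follow the BODY_25 convention.

```python
import cv2
import pyopenpose as op  # OpenPose Python bindings, built from the OpenPose repository

# Configure OpenPose with the BODY_25 model (the model folder path is an assumption).
params = {"model_folder": "openpose/models/", "model_pose": "BODY_25"}
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

# Run inference on a single RGB picture of the subject.
datum = op.Datum()
datum.cvInputData = cv2.imread("subject_sagittal.jpg")
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# datum.poseKeypoints has shape (n_people, 25, 3): (x, y, confidence) per keypoint.
kps = datum.poseKeypoints[0]
RSH, LSH = kps[2, :2], kps[5, :2]      # right/left shoulder
RH, LH = kps[9, :2], kps[12, :2]       # right/left hip
RK, LK = kps[10, :2], kps[13, :2]      # right/left knee
RA, LA = kps[11, :2], kps[14, :2]      # right/left ankle
REar, LEar = kps[17, :2], kps[18, :2]  # right/left ear
```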

2.1.1. FKPS for Frontal View Analysis—PS Assessment

We defined the subset $FKPS = \{C7, L5, MA\}$, where $C7$, $L5$, and $MA$ are the seventh cervical vertebra, the fifth lumbar vertebra and the mid-point between the two ankles, respectively. They are extrapolated geometrically from the canonical $KPS$, as follows, from a frontal-plane image of the subject. $C7$ is the most prominent point on the sagittal view of the neck along the spine [35] (see Figure 2). AutoPosturePD geometrically identifies this point as the intersection of the segments connecting the shoulder keypoints ($LSH$, $RSH$) to the ear keypoints ($LEar$, $REar$), as shown in Figure 2. $L5$ corresponds to the first spinal process under an imaginary line connecting the two iliac crests [35]. AutoPosturePD first identifies the middle point ($MH$) between the left hip ($LH$) and right hip ($RH$). Starting from $MH$, it draws a vertical segment and identifies $L5$ at a distance $d_{MH\text{-}L5}$:

$$d_{MH\text{-}L5} = K_1\% \left( \mathrm{avg}\left[ \overline{LH\,LK},\ \overline{RH\,RK} \right] \right)$$

This is a parametric percentage ($K_1 = 20$) of the average left and right leg length, estimated as the distance between the hip keypoint ($LH$ and $RH$ for the left and right side, respectively) and the knee keypoint ($LK$ and $RK$ for the left and right side, respectively). $K_1$ is a user-defined parameter that was empirically extrapolated from our experimental results to map $L5$ onto the fifth lumbar vertebra by taking advantage of the ground truth.
Finally, AutoPosturePD identifies $MA$ as the mid-point between the two ankle keypoints ($LA$, $RA$).
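The construction above can be summarised by the following minimal sketch; it is an illustration under stated assumptions rather than the authors' exact code. In particular, the pairing of the shoulder–ear diagonals used for the C7 intersection is an assumption, and image coordinates with the y-axis pointing downwards are assumed.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4 (assumed non-parallel)."""
    p1, p2, p3, p4 = (np.asarray(p, float) for p in (p1, p2, p3, p4))
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

def frontal_fkps(LSH, RSH, LEar, REar, LH, RH, LK, RK, LA, RA, K1=20.0):
    """Augmented frontal-view keypoints {C7, L5, MA} from the canonical HPE keypoints."""
    # C7: intersection of the shoulder-ear segments (crossing diagonals assumed here).
    C7 = line_intersection(LSH, REar, RSH, LEar)
    # L5: below the mid-hip point MH by K1% of the average hip-knee (leg) length.
    MH = (np.asarray(LH, float) + np.asarray(RH, float)) / 2.0
    avg_leg = (np.linalg.norm(np.subtract(LH, LK)) + np.linalg.norm(np.subtract(RH, RK))) / 2.0
    L5 = MH + np.array([0.0, (K1 / 100.0) * avg_leg])  # +y points downwards in image coordinates
    # MA: mid-point between the two ankle keypoints.
    MA = (np.asarray(LA, float) + np.asarray(RA, float)) / 2.0
    return C7, L5, MA
```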

2.1.2. SKPS for Sagittal View Analysis—lCC and tCC Assessment

We defined the subset $SKPS = \{C7, FC, L5, MA\}$, given on subjects’ images taken from the sagittal view, with $FC$ being the most distant point of the subject’s silhouette from the line joining C7 and L5. The points $C7$ and $L5$ also lie on the subject’s silhouette. Unlike [28], which relied on the depth information provided by an RGB-D sensor (i.e., the Microsoft Kinect v2), AutoPosturePD extrapolates the subject’s silhouette by processing the RGB pictures.
A match between the subject’s underwear and background colours, as well as the environment lighting, can strongly impact the accuracy of the subject edge extrapolation. To reduce accuracy loss, AutoPosturePD implements silhouette rendering and masking as a first step. This procedure applies a graph cut algorithm (Figure 3a) to extract three class variants [36]: certain foreground, probable background and certain background. The software iteratively processes the image and finds the best solution to map each pixel into one of the following classes:
  • Red—background: pixels outside the box ($Y_{width} \times Y_{height}$) created around the subject, where $Y_{height} = REar_y - RA_y$ and $Y_{width}$ is defined by the user. These pixels are not considered for the edge extrapolation, to reduce false positive pixels.
  • Green—foreground: pixels inside the bands connecting adjacent joints: ear with shoulder ($\Gamma_{REar,RSH}$), shoulder with hip ($\Gamma_{RSH,RH}$), hip with knee ($\Gamma_{RH,RK}$), and knee with ankle ($\Gamma_{RK,RA}$); these likely represent the subject’s trunk and limbs.
  • Yellow—probable background: pixels belonging to neither of the previous classes.
The foreground is identified through the following elements (a minimal code sketch of this masking step is given after the list):
  • the segment $\sigma_{ij} = \overline{P_i P_j}$ joining two keypoints $P_i$ and $P_j$, with $\{i, j\} = 1 \dots |CNN\_kps|$;
  • the segment thickness $\tau_{ij} \in \mathbb{R}_0^+$, which is upper bounded by the radius of the body segment, obtained geometrically through distances between the HPE keypoints;
  • the band $\Gamma_{ij} = (\sigma_{ij}, \tau_{ij})$, defined as the area covered by $\sigma_{ij}$ when isotropically expanded by $\tau_{ij}$.
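The following is a minimal sketch of the masking step built on OpenCV's GrabCut, a graph cut segmentation in the spirit of [36]; it is an assumption about how the three pixel classes can be encoded and seeded, not the authors' implementation, and the band thicknesses passed in `bands` are hypothetical.

```python
import cv2
import numpy as np

def silhouette_mask(img, bands, box):
    """Binary subject silhouette obtained via graph cut, seeded by keypoint bands.

    img:   BGR image (H x W x 3).
    bands: list of (p_i, p_j, thickness) tuples, one per body-segment band (the Gamma_ij above).
    box:   (x0, y0, x1, y1) rectangle around the subject; pixels outside it are certain background.
    """
    h, w = img.shape[:2]
    mask = np.full((h, w), cv2.GC_PR_BGD, np.uint8)        # yellow: probable background
    x0, y0, x1, y1 = box
    outside = np.ones((h, w), bool)
    outside[y0:y1, x0:x1] = False
    mask[outside] = cv2.GC_BGD                              # red: certain background
    for p_i, p_j, tau in bands:                             # green: certain foreground bands
        cv2.line(mask, tuple(map(int, p_i)), tuple(map(int, p_j)),
                 int(cv2.GC_FGD), thickness=int(tau))
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```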
From the segmented image, AutoPosturePD extrapolates $C7$ as follows (see Figure 3b). It first identifies the point $A$ on the segment $\overline{REar\,RSH}$ such that:

$$d_{RSH\text{-}A} = K_2\% \left( \overline{REar\,RSH} \right)$$

This is a parametric percentage ($K_2 = 40$) of the distance between the right ear and shoulder keypoints ($REar$ and $RSH$, respectively). $K_2$ was chosen as the value that empirically minimizes the error between the $C7$ extrapolated by the software and the ground truth. Starting from $A$, AutoPosturePD identifies $C7$ as the last point of the mask along the line perpendicular to the ear–shoulder segment and passing through $A$.
$L5$ is extrapolated in two phases. AutoPosturePD first extrapolates $L5$ with the same approach used for the frontal view. Then, starting from this estimate, it implements a search process to find the last segmented pixel of the subject silhouette. With the same value $K_1 = 20$ as for the frontal view, we empirically observed an estimation error that is negligible for most of the subjects. The software extrapolates $FC$ starting from the segment $\overline{C7\,L5}$ and moving perpendicularly backwards until reaching the silhouette. For the sagittal view, AutoPosturePD identifies $MA$ as coincident with the right ankle keypoint.
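A minimal sketch of the sagittal-view search described above is given below, assuming the silhouette is available as a binary mask (e.g., the output of the masking step sketched earlier) and that coordinates are in pixels with the y-axis pointing downwards; the helper names and the signs of the perpendicular directions are assumptions.

```python
import numpy as np

def last_mask_point(mask, start, direction, max_steps=2000):
    """Walk from `start` along `direction` and return the last pixel that is still inside the mask."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    p = np.asarray(start, float)
    last = p.copy()
    for _ in range(max_steps):
        p = p + d
        x, y = int(round(p[0])), int(round(p[1]))
        if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
            break
        if mask[y, x]:
            last = p.copy()
    return last

def sagittal_c7(mask, REar, RSH, K2=40.0):
    """C7: last mask point along the perpendicular to the ear-shoulder segment passing through A."""
    REar, RSH = np.asarray(REar, float), np.asarray(RSH, float)
    A = RSH + (K2 / 100.0) * (REar - RSH)       # point A at K2% of the RSH->REar distance from RSH
    seg = REar - RSH
    normal = np.array([-seg[1], seg[0]])        # perpendicular direction (sign is an assumption)
    return last_mask_point(mask, A, normal)

def sagittal_fc(mask, C7, L5, n_samples=200):
    """FC: the silhouette point, reached perpendicularly backwards from C7-L5, farthest from that line."""
    C7, L5 = np.asarray(C7, float), np.asarray(L5, float)
    seg = L5 - C7
    back = np.array([seg[1], -seg[0]]) / np.linalg.norm(seg)   # backwards normal (sign is an assumption)
    best, best_dist = C7, -1.0
    for t in np.linspace(0.0, 1.0, n_samples):                 # sample starting points along C7-L5
        edge = last_mask_point(mask, C7 + t * seg, back)
        rel = edge - C7
        dist = abs(seg[0] * rel[1] - seg[1] * rel[0]) / np.linalg.norm(seg)  # distance to the C7-L5 line
        if dist > best_dist:
            best, best_dist = edge, dist
    return best
```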

2.2. Participants and Ethics Statement

A total of 76 pictures were collected from 55 PD outpatients, from sagittal and frontal views when clinically relevant. The PD participants (39 males; age: 71 ± 9 years; BMI: 25.07 ± 3.37 kg/m2) were enrolled at the Neurology Unit, University Hospital of Verona (Verona, Italy), and at the Neurology 2 Unit, University Hospital “Città della Salute e della Scienza” (Turin, Italy) [30]. Further clinical and demographic data are provided in [30].
The study was approved by the Ethics Committee for clinical trials of Verona and Rovigo (protocol code 1655CESC, 14 March 2018) and all participants provided their written informed consent prior to participating in the study.

2.3. Procedure

Participants were asked to stand barefoot, as still as possible, wearing only their underwear, in front of a neutral wall (i.e., a background not corrupted by other images or elements). Considering the participants’ clinical evaluation, a total of 76 pictures were taken: 25 recording the frontal plane of participants, used to detect and measure PS; and 51 recording the sagittal plane of participants, used to detect and measure lCC and tCC [10]. Pictures were taken with the lens at the level of the patients’ hips and from either a strict posterior view (for PS evaluation) or a lateral view (for lCC and tCC evaluation) [19].
Each picture was analysed both with the NeuroPostureApp© [19], whose results were taken as the ground truth, and with AutoPosturePD.

2.4. Statistical Analysis

The validity of the measures obtained with AutoPosturePD against those taken as the ground truth (i.e., those obtained with the NeuroPostureApp© [19]) was tested in [30] and is beyond the aim of the present research.
Given the described processing procedure, it is worth highlighting that: (i) the images used to test and develop AutoPosturePD were taken with various off-the-shelf devices and, thus, at various resolutions and hue–saturation–variance; (ii) operators were not instructed to take pictures at specific distances from the targeted person with PD; and (iii) the intrinsic variability of participants’ anthropometry (i.e., the body mass index—BMI) could serve as an additional confounding factor for the algorithm.
To check the sensitivity of the AutoPosturePD measures to all these quantities, the Pearson’s correlation coefficient (R) and the significance of the correlation (p < 0.05) were computed between the AutoPosturePD measured angle and: (i) the participants’ body mass index (BMI); (ii) the image size in pixels (width, height and area); (iii) the participants’ cover factor, obtained as the ratio between the subject image size and the total image size in pixels (width, height and area); and (iv) the colour characteristics of the analysed picture (hue–saturation–variance, HSV).
Moreover, to also check the sensitivity of the AutoPosturePD measurement error to the same quantities and strengthen the validation study proposed in [30], the Pearson’s correlation coefficient (R) and the significance of the correlation (p < 0.05) were computed between the measurement error (i.e., the difference between the AutoPosturePD and NeuroPostureApp© measurements) and: (i) the participants’ body mass index (BMI); (ii) the image size in pixels (width, height and area); (iii) the participants’ cover factor, obtained as described above; and (iv) the colour characteristics of the analysed picture (HSV).
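For illustration, a minimal sketch of such a sensitivity check is given below, written in Python rather than in the R environment actually used (see below); the arrays are hypothetical placeholders for the per-picture measures and descriptors.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical arrays, one entry per analysed picture.
lcc_angle = np.array([31.2, 18.5, 42.0, 27.3, 35.8])        # AutoPosturePD lCC measures (degrees)
cover_factor_w = np.array([0.21, 0.35, 0.28, 0.40, 0.33])   # subject width / image width

r, p = pearsonr(lcc_angle, cover_factor_w)
print(f"R = {r:.2f}, p = {p:.4f}")                           # correlation deemed significant if p < 0.05
```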
Statistical analyses were performed in RStudio (version 2022.12.0+353, Boston, MA, USA).

3. Results

Correlations of the AutoPosturePD measures with participants’ anthropometry were not significant considering all the aPA (−0.23 < R < 0.0065 and p > 0.30; see Figure 4). Similar results were obtained when looking at the correlation of the measured angles with the picture characteristics: no significant correlations with the colour characteristics (−0.17 < R < 0.068 and p > 0.39; Figure 5) or with the image size (−0.21 < R < 0.45 and p > 0.066; Figure 6). Significant correlations were instead obtained for: tCC measures with respect to image width (R = 0.46 and p = 0.026; Figure 6a); and lCC measures with respect to the width cover factor (R = 0.51 and p = 0.0084; Figure 7a), height cover factor (R = 0.43 and p = 0.027; Figure 7b), and area cover factor (R = 0.43 and p = 0.03; Figure 7c).
Correlations of the AutoPosturePD measurement error with participants’ anthropometry and with the picture characteristics (i.e., image size, participants’ cover factor, and colour characteristics) were not significant considering all the aPA (−0.21 < R < 0.19 and p > 0.29). See Figure 8, Figure 9, Figure 10 and Figure 11 for details. Exceptions were obtained for the lCC abnormality with respect to: image height (R = 0.46 and p = 0.019, Figure 10b); width cover factor (R = 0.43 and p = 0.028, Figure 11a); and area cover factor (R = 0.44 and p = 0.024, Figure 11c).

4. Discussion

Early and reliable detection of axial postural abnormalities (aPA) in people with PD, such as camptocormia and Pisa syndrome, is clinically relevant for their prompt management and treatment [10]. A previous study presented a novel low-cost solution, called AutoPosturePD, for the automatic and reliable evaluation of camptocormia with lumbar and thoracic fulcrum (lCC and tCC, respectively) and of Pisa syndrome (PS) [30]. The proposed algorithm automatically builds a set of keypoints through silhouette extraction [36] and geometrical post-processing of images of people with PD taken with off-the-shelf RGB cameras and initially processed with a state-of-the-art HPE platform [31,32]. The strengths of AutoPosturePD are: (i) it does not only consider the canonical keypoints obtained with HPE algorithms [27], which are not sufficient to estimate aPA when dealing with people with PD [10]; (ii) it does not call for any external reference, as in [27]; (iii) it calls for 2D, rather than 3D, images, as in [28], avoiding the need for specific instruments to be purchased and managed; and (iv) it can potentially be used retrospectively on pictures taken beyond the clinic, avoiding the need for patients to travel to clinical facilities. Moreover, the results obtained with the present study demonstrated that AutoPosturePD is robust to participants’ anthropometry (i.e., BMI) and to picture characteristics (i.e., image size, the ratio between the pixels covered by the participant and the total image size, and hue–saturation–variance), ensuring its portability to different devices and different environmental settings (i.e., viewpoints, background colours, room lighting, etc.).
AutoPosturePD measures have been proven to be in agreement with those taken as the ground truth (i.e., those obtained with the NeuroPostureApp© [19]), encouraging the use of this novel tool to evaluate PS, lCC and tCC [30]. Indeed, the correlation and agreement between the two measures were very good [30], with systematic bias and limits of agreement lower than the minimal detectable changes (MDC) for the same measures obtained with the NeuroPostureApp© ($MDC_{lCC}$ = 3.7°; $MDC_{tCC}$ = 6.7°; $MDC_{PS}$ = 2.1°) [21], or with other conventional methods (X-ray images: $MDC_{PS}$ = 5° [37]; bubble inclinometer: $MDC_{CC}$ = 13.7° [38]).
It is worth noting that the measures taken as the ground truth are strongly operator-dependent. Indeed, the operator is asked to virtually palpate a few landmarks on a picture: the fulcrum of the spine flexion (FC), the most prominent process of the fifth lumbar vertebra (L5), the most prominent process of the seventh cervical vertebra (C7), and either the lateral malleoli or the mid-point between the feet (MA), depending on the sought measure (lCC or PS, respectively). An inaccuracy in the palpation of these points would lead to a measurement error that could hinder both the correct quantification of the axial postural abnormality and the assessment of whether AutoPosturePD performs worse or better than NeuroPostureApp©. The AutoPosturePD approach can most likely overcome these limitations, as it is based on automatic image processing to detect the keypoints used to measure the PS, lCC and tCC angles. Factors that could hamper good keypoint identification could be associated with the subjects’ anthropometry and the images’ characteristics (i.e., the image size, the ratio between subject and image sizes, and the colour characteristics). The presented results demonstrated that the AutoPosturePD measurement error is nonetheless robust to all the aforementioned variables (Figure 8 and Figure 9). Significant but weak correlations were obtained for lCC measures with respect to image height (R = 0.46 and p = 0.019, Figure 10b), width cover factor (R = 0.43 and p = 0.028, Figure 11a), and area cover factor (R = 0.44 and p = 0.024, Figure 11c). Similarly, the analysis performed on the AutoPosturePD outcomes demonstrated the robustness of the PS, lCC and tCC measurements to participants’ anthropometry (Figure 4) and image characteristics (Figure 4, Figure 5, Figure 6 and Figure 7). Weak to moderate correlations were instead obtained for PS with respect to the subject/image height cover factor (R = 0.46 and p = 0.019; Figure 7b), and for lCC measures with respect to the subject/image width cover factor (0.43 < R < 0.51 and p < 0.03; Figure 7). These results suggest that the wider the subject appears in the image, the better aPA could be evaluated.
The presented findings lead to the conclusion that any picture taken with the RGB camera of a commercial smartphone could be sufficient to run this novel tool and still obtain reliable results on the evaluation of aPA in people with PD.
A few limitations of this study should be acknowledged. Among the 55 PD participants enrolled for the development of AutoPosturePD, 4 had both PS and tCC, 1 had both PS and lCC, 12 had both tCC and lCC, and 2 had the coexistence of PS, tCC and lCC [30]. The authors also performed a sensitivity and specificity analysis of AutoPosturePD in detecting aPA, considering those with no PS as controls for the PS classification, and those with no lCC and tCC as controls for the lCC and tCC classifications, respectively [30]. The results showed excellent classification performance of AutoPosturePD, but the high heterogeneity and small sample size call for a more substantial analysis in which a proper control group is included (i.e., participants with no diagnosed axial postural abnormalities). This analysis, together with a repeatability and reproducibility analysis, is mandatory for the introduction of this tool into clinical practice. The repeatability analysis (i.e., the agreement of measures obtained with the same methodology applied by the same operator and device on the same subject [39]) must be performed by feeding AutoPosturePD with different pictures of the same participant taken by the same operator. A multi-centre clinical trial would also be useful to conduct a reproducibility analysis (i.e., the agreement of measures obtained with the same methodology applied by different operators and devices [39]).
The closeness and agreement of the measurements with those taken as the ground truth and obtained with the NeuroPostureApp© [19] are promising. However, although AutoPosturePD sensitivity to the subject/image cover factor was considered, a proper sensitivity analysis testing the effect of the camera–subject distance on the outcomes (e.g., with a repeated-measures design) was not performed. The dependence of the lCC measurements on the ratio between the pixels covered by the subject and the total number of pixels (image width, height and area), despite being weak, suggests that a deeper understanding of this aspect is worth pursuing, potentially leading to the implementation of a guiding frame to properly take pictures of patients with PD for the evaluation of sagittal axial postural deformities with AutoPosturePD.
Future works should consider designing a multi-centre clinical trial, enrolling a larger number of participants with PD and including an appropriate control group. Considering a population of people with atypical Parkinsonisms and aPA not associated with PD would also foster the validation of AutoPosturePD for use in diagnosing other movement disorders. As part of this clinical trial, the repeatability and reproducibility of the measurements should be tested, together with a sensitivity analysis of AutoPosturePD to define an appropriate camera–subject distance. The obtained results, both those presented here and in previous research [30], could lead to the development of a portable app easily available to clinicians. Moreover, the potential of this approach could be relevant to many other applications involving spinal postural abnormalities, such as the non-invasive screening of idiopathic scoliosis.

5. Conclusions

AutoPosturePD is a novel low-cost software-based automatic and portable tool for the evaluation of axial postural abnormalities in people with Parkinson’s disease, which relies on the processing of images taken with off-the-shelf RGB cameras. This tool provides clinicians with reliable measurements of axial postural abnormalities, robust to differing operator expertise, image characteristics and subjects’ anthropometry. Its use could foster the diagnosis, management, and prevention of axial postural abnormalities.

Author Contributions

Conceptualization, All; methodology, S.A., R.D.M. and N.B.; software, S.A.; validation, S.A., R.D.M. and N.B.; formal analysis, S.A. and R.D.M.; investigation, C.A.A., S.C., C.G., G.I., L.L. and M.T.; resources, M.T. and N.B.; data curation, S.A., C.A.A., S.C., R.D.M., C.G. and G.I.; writing—original draft preparation, R.D.M. and N.B.; writing—review and editing, All; visualization, R.D.M. and N.B.; supervision, N.B.; project administration, M.T. and N.B.; funding acquisition, M.T. and N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Brain Research Foundation Verona ONLUS.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Ethics Committee for clinical trials of Verona and Rovigo (protocol code 1655CESC, 14 March 2018).

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data collected and used for this study are available upon reasonable request to the corresponding author.

Acknowledgments

The authors gratefully acknowledge the support provided by Sonia E Moscatelli during the revision process for language editing and polishing.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
aPA   Axial postural abnormalities
C7    Seventh cervical vertebra
CC    Camptocormia
CNN   Convolutional neural networks
HPE   Human pose estimation
KPS   Keypoints
L5    Fifth lumbar vertebra
LA    Left ankle joint centre
lCC   Lumbar camptocormia
LE    Left elbow joint centre
LH    Left hip joint centre
LK    Left knee joint centre
LSH   Left shoulder joint centre
LW    Left wrist joint centre
MA    Mid-point of the two ankles
PD    Parkinson’s disease
PS    Pisa syndrome
RA    Right ankle joint centre
RE    Right elbow joint centre
RH    Right hip joint centre
RK    Right knee joint centre
RSH   Right shoulder joint centre
RW    Right wrist joint centre
tCC   Thoracic camptocormia
VF    Video-frames

References

  1. Horak, F.B.; Mancini, M. Objective Biomarkers of Balance and Gait for Parkinson’s Disease Using Body-worn Sensors. Mov. Disord. 2013, 28, 1544–1551.
  2. Latt, M.D.; Lord, S.R.; Morris, J.G.; Fung, V.S. Clinical and physiological assessments for elucidating falls risk in Parkinson’s disease. Mov. Disord. 2009, 24, 1280–1289.
  3. Nutt, J.G.; Wooten, G.F. Diagnosis and initial management of Parkinson’s disease. N. Engl. J. Med. 2005, 353, 1021–1027.
  4. Jankovic, J. Parkinson’s disease: Clinical features and diagnosis. J. Neurol. Neurosurg. Psychiatry 2008, 79, 368–376.
  5. Peterson, D.S.; King, L.A.; Cohen, R.G.; Horak, F.B. Cognitive Contributions to Freezing of Gait in Parkinson Disease: Implications for Physical Rehabilitation. Phys. Ther. 2016, 96, 659–670.
  6. Mancini, M.; Curtze, C.; Stuart, S.; El-Gohary, M.; McNames, J.; Nutt, J.G.; Horak, F.B. The impact of freezing of gait on balance perception and mobility in community-living with Parkinson’s disease. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 3040–3043.
  7. Doherty, K.M.; van de Warrenburg, B.P.; Peralta, M.C.; Silveira-Moriyama, L.; Azulay, J.P.; Gershanik, O.S.; Bloem, B.R. Postural deformities in Parkinson’s disease. Lancet Neurol. 2011, 10, 538–549.
  8. Margraf, N.G.; Granert, O.; Hampel, J.; Wrede, A.; Schulz-Schaeffer, W.J.; Deuschl, G. Clinical definition of camptocormia in Parkinson’s disease. Mov. Disord. Clin. Pract. 2017, 4, 349–357.
  9. Tinazzi, M.; Gandolfi, M.; Ceravolo, R.; Capecci, M.; Andrenelli, E.; Ceravolo, M.G.; Bonanni, L.; Onofrj, M.; Vitale, M.; Catalan, M.; et al. Postural abnormalities in Parkinson’s disease: An epidemiological and clinical multicenter study. Mov. Disord. Clin. Pract. 2019, 6, 576–585.
  10. Tinazzi, M.; Geroin, C.; Bhidayasiri, R.; Bloem, B.R.; Capato, T.; Djaldetti, R.; Doherty, K.; Fasano, A.; Tibar, H.; Lopiano, L.; et al. Task Force Consensus on Nosology and Cut-Off Values for Axial Postural Abnormalities in Parkinsonism. Mov. Disord. Clin. Pract. 2022, 9, 594–603.
  11. Artusi, C.A.; Bortolani, S.; Merola, A.; Zibetti, M.; Busso, M.; De Mercanti, S.; Arnoffi, P.; Martinetto, S.; Gaidolfi, E.; Veltri, A.; et al. Botulinum toxin for Pisa syndrome: An MRI-, ultrasound-and electromyography-guided pilot study. Park. Relat. Disord. 2019, 62, 231–235.
  12. Gandolfi, M.; Tinazzi, M.; Magrinelli, F.; Busselli, G.; Dimitrova, E.; Polo, N.; Manganotti, P.; Fasano, A.; Smania, N.; Geroin, C. Four-week trunk-specific exercise program decreases forward trunk flexion in Parkinson’s disease: A single-blinded, randomized controlled trial. Park. Relat. Disord. 2019, 64, 268–274.
  13. Tinazzi, M.; Geroin, C.; Gandolfi, M.; Smania, N.; Tamburin, S.; Morgante, F.; Fasano, A. Pisa syndrome in Parkinson’s disease: An integrated approach from pathophysiology to management. Mov. Disord. 2016, 31, 1785–1795.
  14. Buckley, C.; Alcock, L.; McArdle, R.; Rehman, R.Z.U.; Del Din, S.; Mazzà, C.; Yarnall, A.J.; Rochester, L. The role of movement analysis in diagnosing and monitoring neurodegenerative conditions: Insights from gait and postural control. Brain Sci. 2019, 9, 34.
  15. Quijoux, F.; Vienne-Jumeau, A.; Bertin-Hugault, F.; Zawieja, P.; Lefevre, M.; Vidal, P.P.; Ricard, D. Center of pressure displacement characteristics differentiate fall risk in older people: A systematic review with meta-analysis. Ageing Res. Rev. 2020, 62, 101117.
  16. Simpson, L.; Maharaj, M.M.; Mobbs, R.J. The role of wearables in spinal posture analysis: A systematic review. BMC Musculoskelet. Disord. 2019, 20, 55.
  17. Panero, E.; Dimanico, U.; Artusi, C.A.; Gastaldi, L. Standardized biomechanical investigation of posture and gait in pisa syndrome disease. Symmetry 2021, 13, 2237.
  18. Fabbri, M.; Pongmala, C.; Artusi, C.A.; Imbalzano, G.; Romagnolo, A.; Lopiano, L.; Zibetti, M. Video analysis of long-term effects of levodopa-carbidopa intestinal gel on gait and posture in advanced Parkinson’s disease. Neurol. Sci. 2020, 41, 1927–1930.
  19. Margraf, N.G.; Wolke, R.; Granert, O.; Berardelli, A.; Bloem, B.R.; Djaldetti, R.; Espay, A.J.; Fasano, A.; Furusawa, Y.; Giladi, N.; et al. Consensus for the measurement of the camptocormia angle in the standing patient. Park. Relat. Disord. 2018, 52, 1–5.
  20. Tinazzi, M.; Gandolfi, M.; Artusi, C.A.; Lanzafame, R.; Zanolin, E.; Ceravolo, R.; Capecci, M.; Andrenelli, E.; Ceravolo, M.G.; Bonanni, L.; et al. Validity of the wall goniometer as a screening tool to detect postural abnormalities in Parkinson’s disease. Park. Relat. Disord. 2019, 69, 159–165.
  21. Schlenstedt, C.; Boße, K.; Gavriliuc, O.; Wolke, R.; Granert, O.; Deuschl, G.; Margraf, N.G. Quantitative assessment of posture in healthy controls and patients with Parkinson’s disease. Park. Relat. Disord. 2020, 76, 85–90.
  22. Hammadi, Y.; Grondin, F.; Ferland, F.; Lebel, L. Evaluation of Various State of the Art Head Pose Estimation Algorithms for Clinical Scenarios. Sensors 2022, 22, 6850.
  23. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299.
  24. Moro, M.; Marchesi, G.; Hesse, F.; Odone, F.; Casadio, M. Markerless vs. Marker-Based Gait Analysis: A Proof of Concept Study. Sensors 2022, 22, 2011.
  25. Lam, W.W.; Fong, K.N. The application of markerless motion capture (MMC) technology in rehabilitation programs: A systematic review and meta-analysis. Virtual Real. 2022, 1–16.
  26. Scott, B.; Seyres, M.; Philp, F.; Chadwick, E.K.; Blana, D. Healthcare applications of single camera markerless motion capture: A scoping review. Sport. Med. Rehabil. 2022, 10, e13517.
  27. Shin, J.H.; Woo, K.A.; Lee, C.Y.; Jeon, S.H.; Kim, H.J.; Jeon, B. Automatic Measurement of Postural Abnormalities with a Pose Estimation Algorithm in Parkinson’s Disease. J. Mov. Disord. 2022, 15, 140–145.
  28. Zhang, Z.; Hong, R.; Lin, A.; Su, X.; Jin, Y.; Gao, Y.; Peng, K.; Li, Y.; Zhang, T.; Zhi, H.; et al. Automated and accurate assessment for postural abnormalities in patients with Parkinson’s disease based on Kinect and machine learning. J. Neuroeng. Rehabil. 2021, 18, 169.
  29. Kanko, R.M.; Laende, E.K.; Strutzenberger, G.; Brown, M.; Selbie, W.S.; DePaul, V.; Scott, S.H.; Deluzio, K.J. Assessment of spatiotemporal gait parameters using a deep learning algorithm-based markerless motion capture system. J. Biomech. 2021, 122, 110414.
  30. Artusi, C.A.; Geroin, C.; Imbalzano, G.; Camozzi, S.; Aldegheri, S.; Lopiano, L.; Tinazzi, M.; Bombieri, N. Assessment of Axial Postural Abnormalities in Parkinsonism: Automatic Picture Analysis Software. Mov. Disord. Clin. Pract. 2023; early view.
  31. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 172–186.
  32. Mehdizadeh, S.; Nabavi, H.; Sabo, A.; Arora, T.; Iaboni, A.; Taati, B. Concurrent validity of human pose tracking in video for measuring gait parameters in older adults: A preliminary analysis with multiple trackers, viewing angles, and walking directions. J. Neuroeng. Rehabil. 2021, 18, 139.
  33. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: New York, NY, USA, 2014; pp. 740–755.
  34. Andriluka, M.; Pishchulin, L.; Gehler, P.; Schiele, B. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014.
  35. Robinson, R.; Robinson, H.S.; Bjørke, G.; Kvale, A. Reliability and validity of a palpation technique for identifying the spinous processes of C7 and L5. Man. Ther. 2009, 14, 409–414.
  36. Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239.
  37. Etoom, M.; Alwardat, M.; Ala’S, A.; Lena, F.; Fabbrizo, R.; Modugno, N.; Centonze, D. Therapeutic interventions for Pisa syndrome in idiopathic Parkinson’s disease. A Scoping Systematic Review. Clin. Neurol. Neurosurg. 2020, 198, 106242.
  38. Nair, P.; Bohannon, R.W.; Devaney, L.; Maloney, C.; Romano, A. Reliability and validity of nonradiologic measures of forward flexed posture in Parkinson disease. Arch. Phys. Med. Rehabil. 2017, 98, 508–516.
  39. BIPM; IEC; IFCC; ILAC; ISO; IUPAC; IUPAP; OIML. International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM), 3rd ed.; JCGM 200:2012; Joint Committee for Guides in Metrology: Paris, France, 2012.
Figure 1. Overview of the measurement pipeline.
Figure 2. Geometrical extrapolation of F-KPS keypoints.
Figure 3. Silhouette rendering and masking (a) and geometrical extrapolation of S-KPS keypoints (b).
Figure 4. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the AutoPosturePD measured angle against the participants’ body mass index (BMI, kg/m2): anterior trunk flexion with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 5. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the AutoPosturePD measured angle against the picture hue–saturation–variance (HSV): camptocormia with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 6. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the AutoPosturePD measured angle against the image size (image width in panel (a); image height in panel (b); and total image area in panel (c)): camptocormia with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 7. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the AutoPosturePD measured angle against the participants’ cover factor (ratio between participants’ width and image width in panel (a); ratio between participants’ height and image height in panel (b); and ratio between total participants’ area and image area in panel (c)): camptocormia with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 8. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the error (i.e., the difference between the AutoPosturePD and the NeuroPostureApp© measurements) against the participants’ body mass index (BMI, kg/m2): anterior trunk flexion with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 9. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the error (i.e., the difference between the AutoPosturePD and the NeuroPostureApp© measurements) against the picture hue–saturation–variance (HSV): camptocormia with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 10. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the error (i.e., the difference between the AutoPosturePD and the NeuroPostureApp© measurements) against the image size (image width in panel (a); image height in panel (b); and total image area in panel (c)): camptocormia with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
Figure 11. Scatter plot with Pearson’s correlation coefficient (R) and relevant significance (p value) of the error (i.e., the difference between the AutoPosturePD and the NeuroPostureApp© measurements) against the participants’ cover factor (ratio between participants’ width and image width in panel (a); ratio between participants’ height and image height in panel (b); and ratio between total participants’ area and image area in panel (c)): camptocormia with lumbar fulcrum (lCC) in red; camptocormia with thoracic fulcrum (tCC) in green; and Pisa syndrome (PS) in blue.
