Article

Deep Learning in Left and Right Footprint Image Detection Based on Plantar Pressure

Peter Ardhianto, Ben-Yi Liau, Yih-Kuen Jan, Jen-Yung Tsai, Fityanul Akhyar, Chih-Yang Lin, Raden Bagus Reinaldy Subiakto and Chi-Wen Lung
1 Department of Visual Communication Design, Soegijapranata Catholic University, Semarang 50234, Indonesia
2 Department of Biomedical Engineering, Hungkuang University, Taichung 433304, Taiwan
3 Rehabilitation Engineering Lab, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
4 Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
5 Computational Science and Engineering, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
6 Department of Digital Media Design, Asia University, Taichung 413305, Taiwan
7 School of Electrical Engineering, Telkom University, Bandung 40257, Indonesia
8 Department of Electrical Engineering, Yuan Ze University, Chung-Li 32003, Taiwan
9 Department of Mathematics, Airlangga University, Surabaya 60115, Indonesia
10 Department of Creative Product Design, Asia University, Taichung 413305, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(17), 8885; https://doi.org/10.3390/app12178885
Submission received: 28 July 2022 / Revised: 22 August 2022 / Accepted: 1 September 2022 / Published: 5 September 2022

Featured Application

In this study, left and right footprint images were predicted based on deep learning object detection models. YOLOv4 was the most balanced deep learning model for detecting the left and right feet from footprint images. Across the different object detection models, the right foot was detected with higher accuracy than the left foot.

Abstract

People with cerebral palsy (CP) suffer primarily from lower-limb impairments. These impairments contribute to abnormal performance of functional activities and ambulation. Footprints, such as plantar pressure images, are usually used to assess functional performance in people with spastic CP. Detecting the left and right feet from footprints in people with CP is a challenge due to abnormal foot progression angles and abnormal footprint patterns. Identifying left and right foot profiles in people with CP is essential to provide information on foot orthoses, walking problems, gait pattern indices, and determination of the dominant limb. Deep learning with object detection can localize and classify objects more precisely on the abnormal foot progression angles and complex footprints associated with spastic CP. This study proposes a new object detection approach to automatically determine left and right footprints. The footprint images successfully represented the left and right feet with high accuracy in object detection. YOLOv4 detected the left and right feet from footprint images more successfully than the other object detection models, reaching over 99.00% on various performance metrics. Furthermore, detection of the right foot (most people's dominant leg) was more accurate than that of the left foot (most people's non-dominant leg) across the different object detection models.

1. Introduction

Cerebral palsy (CP) is a leading cause of physical disability in children. The spastic type of CP is currently the most common form, accounting for 50 to 60% of all cases [1,2]. People with spastic CP suffer primarily from impairments such as increased tone, muscle weakness, diminished selectivity, and joint contractures. These impairments contribute to poor performance of functional activities in people with spastic CP [3]. Spastic CP also affects ambulation ability, movement, and posture, with accompanying activity restrictions that significantly burden people with CP, their families, and society [4].
The impairment in neuromuscular control of the lower limb affects balance and ambulation ability in people with spastic CP. Ankle–foot orthosis is the primary intervention for improving balance and correcting asymmetrical gait [5]. The use of an orthosis reduces the range of motion of hip internal rotation and improves postural stability during standing and asymmetrical weight distribution over the foot, which results in better balance in people with spastic CP [6]. After orthosis treatment management is provided to people with spastic CP, a footprint can be used to evaluate balance and locomotion [7].
Identifying the dominant leg from the footprint can help evaluate how orthosis treatment improves postural control in spastic CP patients [8]. In the dynamic evaluation, foot supination occurs when the dominant leg transitions from heel contact to mid-stance during the gait cycle, which is related to the foot balance index in orthosis treatment [9]. In people with spastic CP, the dominant leg was assumed to be ipsilateral [10]. Hence, proprioception error is related to the non-dominant leg, which may induce complex footprints [11]. Therefore, assessing the complex footprints of CP patients would be beneficial in managing orthosis treatment by providing essential information on the dominant or non-dominant leg [12].
Precise detection of the left and right feet from complex footprints can help track positive changes in energy expenditure of the dominant or non-dominant leg and monitor the association between the maximum step length test and walking efficiency in CP patients [13]. However, CP footprints have limitations arising from complex footprint features, which can make determining the left and right feet difficult in clinics [14]. For example, scissor gait and toe walking are prominent among CP patients [15]. Scissor gait, usually walking with crossed legs due to spastic paraplegia or excessive contraction of the hip adductor muscles [16], may lead to an abnormal foot progression angle in footprint images [17]. An abnormal foot progression angle affects the recognition of left and right footprints [18]. The other example is toe walking, a bilateral gait abnormality in which a normal heel strike is absent and weight-bearing occurs through the forefoot, caused by limb-length discrepancy, spastic equinus, or Achilles tendon contracture [19]. Determining the left and right feet from footprint images under toe walking conditions may be difficult if the full foot pressure is not recorded, especially when the heel region is absent from the footprint distribution. Given these scissor gait and toe walking problems, determining the left and right feet under abnormal foot progression angles and complex footprints in people with spastic CP is challenging.
Deep learning has been applied to footprint image features for automatic foot identification [20]. Object detection is a popular deep learning method that trains on images and directly optimizes performance when making predictions [21]. Object detection can achieve good accuracy from the image features of footprints, even under the limitations of CP footprints such as abnormal foot progression angles and complex patterns, because these models use hierarchical feature extraction: multi-layered representations from pixels to high-level semantic features are learned through a sequential structure [22].
Furthermore, the object detection method can optimize auto-identification by combining classification and bounding box regression in a multi-task learning manner. As a result, automatic identification of the left and right feet of CP patients can be beneficial, reducing clinical workloads, increasing cost-efficacy, standardizing treatment, and improving patient care [23].
Based on the above description, CP primarily involves lower-limb impairments, causing abnormal performance of activities of daily living and ambulation. Orthoses can effectively manage lower-limb impairments. Clinical deficiencies, such as proprioception error in the dominant or non-dominant leg in spastic CP, might cause complex footprints. Identifying the dominant leg can help evaluate the foot balance index related to postural control. Detecting the left and right feet can provide information on positive changes in energy expenditure of the dominant or non-dominant leg in CP patients. Determining the left and right feet might also improve the efficacy of dominant-leg orthosis in CP treatment. Unusual footprints may impede left and right foot identification in spastic CP, caused by complex footprints resulting from foot problems such as scissor gait with abnormal foot progression angle and toe walking.
Deep learning based on object detection can perform many tasks such as localizing, tracking, and image discrimination in defective footprints. Thus, initial research is essential to analyze healthy people’s foot profiles in left and right detection, providing a basis for understanding incomplete foot profiles in specific cases of CP. This study is a primary investigation in discovering new information on deep learning performance of defective footprint detection in CP. In addition, this study may have implications for providing new solutions for assessing the efficacy of ankle–foot orthosis in people with spastic CP.

2. Materials and Methods

In this study, we used plantar pressure images to represent the footprints. The data used to develop this study were gathered from the AIdea website of the Industrial Technology Research Institute (ITRI) of Taiwan (https://aidea-web.tw, accessed on 21 February 2021). We randomly split 974 plantar pressure images into 70% (681 images) for the training set and 30% (293 images) for the validation set. Each image is 120 × 400 pixels. Furthermore, a senior expert in plantar pressure images determined the left and right feet, which were set as the ground truth.
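For reproducibility, the 70/30 split can be written as a short Python script. This is a minimal sketch under stated assumptions: the folder name, file format, and random seed are hypothetical, and the output file lists follow the convention that YOLO-style training scripts typically consume.

```python
import random
from pathlib import Path

IMAGE_DIR = Path("plantar_pressure_images")  # hypothetical folder of 974 PNGs
TRAIN_RATIO = 0.7

images = sorted(IMAGE_DIR.glob("*.png"))
random.seed(42)          # arbitrary fixed seed so the split is reproducible
random.shuffle(images)

n_train = int(len(images) * TRAIN_RATIO)      # 681 of 974 images
train_set, val_set = images[:n_train], images[n_train:]  # 681 / 293 images

# File lists in the form YOLO-style training scripts typically consume.
Path("train.txt").write_text("\n".join(map(str, train_set)))
Path("val.txt").write_text("\n".join(map(str, val_set)))
```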
Based on our dataset of image features, this study compared the performance of various object detection models: Residual Neural Network with 50 layers (ResNet-50), Dense Convolutional Network (DenseNet), and You Only Look Once (YOLO) v3, v4, v5s, v5m, v5l, and v5x. As the most popular model for plantar pressure images, ResNet-50 uses deeper networks that support feature extraction and has achieved high accuracy in recognizing plantar pressure images under different conditions [24] (Figure 1A). DenseNet is a deep neural network that passes input features from each convolutional layer to the following layers and is also widely used for plantar pressure images [25] (Figure 1B). For image classification, Dewi et al. [26] showed that multiscale feature maps can be effectively combined, with DenseNet and ResNet-50 serving as classifiers for YOLOv2 detection, to prevent performance loss. In this study, we used DarkNet-19 as the backbone to support DenseNet and ResNet-50 as classifiers with a YOLOv2 detector.
YOLO is a state-of-the-art deep learning framework for real-time object recognition with single-stage detection. YOLO supports real-time object detection and precise bounding box prediction on footprint images [18]. YOLO has several versions, including the YOLOv1, v2, v3, v4, and v5 series. However, the currently more popular object detection algorithms are the YOLOv3, YOLOv4, and YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x) models [21].
YOLOv3 benefits from its three detection heads, which make predictions more precise [27] (Figure 2A). YOLOv4, with its spatial pyramid pooling and path aggregation network, may improve feature extraction to localize and classify objects [28] (Figure 2B). The latest YOLO series is the YOLOv5 algorithm, divided into four models named YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x according to the depth and width of the network [21]. Unlike other versions, YOLOv5 uses an adaptive anchor strategy, and its backbone uses a focus structure and cross-stage partial connections (CSP). In addition, the YOLOv5 series has an advantage in run speed that benefits real-time detection in clinical workloads [29] (Figure 2C).
Furthermore, this study used manual labeling of the left and right feet based on the foot pressure profiles. After manual labeling, we input the data into the ResNet-50, DenseNet, YOLOv3, YOLOv4, and YOLOv5 series models to identify a suitable model for detecting foot profiles (Figure 3). Detection was implemented in Windows 10 using Python 3.7.6 (YOLOv3, YOLOv4, DenseNet, and ResNet-50) and PyTorch 1.7.0 (YOLOv5 series). The models' parameters were as follows: the batch size was 8; the momentum and weight decay were 0.937 and 0.0005, respectively; the initial learning rate was 0.01; and the number of epochs was 100. All experiments were carried out on a computer configured with an Intel Core i7-10700 CPU, 32 GB RAM, and an NVIDIA RTX 3080 GPU.
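As an illustration of the YOLOv5 part of this setup, the sketch below writes a hypothetical two-class dataset config and launches the public ultralytics/yolov5 training script with the reported batch size and epoch count. The input size and file names are assumptions; the reported momentum (0.937), weight decay (0.0005), and initial learning rate (0.01) match the YOLOv5 defaults, so no custom hyperparameter file is passed.

```python
import subprocess
from pathlib import Path

# Hypothetical dataset config for the two classes (left and right foot),
# pointing at the train/val file lists produced earlier.
Path("footprint.yaml").write_text(
    "train: train.txt\n"
    "val: val.txt\n"
    "nc: 2\n"
    "names: ['left', 'right']\n"
)

# Launch the ultralytics/yolov5 training script with the reported batch
# size and epoch count.
subprocess.run([
    "python", "train.py",
    "--img", "416",             # network input size: an assumption here
    "--batch", "8",             # batch size 8, as reported
    "--epochs", "100",          # 100 epochs, as reported
    "--data", "footprint.yaml",
    "--weights", "yolov5s.pt",  # pretrained small variant as one example
], check=True)
```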

2.1. Labeling Strategy

To determine the left and right feet, a senior expert in plantar pressure images with more than 15 years of experience in footprint imaging supervised the process (487 left foot images and 487 right foot images). The abnormal foot progression angle in a plantar pressure image may complicate the recognition of the left and right feet. Furthermore, defective footprint features may introduce complications due to an incomplete footprint pattern, particularly in the plantar region, which challenges the prediction of the left and right feet [30]. Nevertheless, object detection has achieved good results in medical images [22]. For the plantar pressure images, this study used bounding box annotation to determine the left and right feet across the different object detection models to achieve better accuracy [18,24]. The bounding box is used to localize and classify the left and right feet based on the manual labeling to obtain a prediction.
The labeling process depends on the footprint features; labeling the maximum area of the plantar pressure image is recommended to make predictions more precise [29] (Figure 4A). However, abnormal foot progression angles and defective footprints proved challenging. Some of the plantar pressure images showed an abnormal foot progression angle, in which the direction of foot placement was more or less than 15° [18]. On the other hand, defective footprint patterns cannot provide the full pressure distribution, and sometimes the plantar pressure images cannot capture full footprint patterns during locomotion [31]. For example, plantar pressure images with pressure in the rearfoot and the metatarsal region of the forefoot have limitations such as an incomplete toe or forefoot area (Figure 4B), and an abnormal foot progression angle may appear with high pressure on the forefoot and midfoot and less pressure on the rearfoot (Figure 4C). Defective footprint images with high pressure on the heels lost pressure in the midfoot and forefoot (Figure 4D).
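Concretely, each manual label can be stored in the normalized text format that YOLO-style detectors consume. The sketch below is one way to do this for the 120 × 400 pixel images used here; the class index and example coordinates are hypothetical.

```python
def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w=120, img_h=400):
    """Convert a pixel-space bounding box on a 120 x 400 plantar pressure
    image into the normalized YOLO format: class x_center y_center w h."""
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Hypothetical left-foot (class 0) box covering the maximum pressure area.
print(to_yolo_label(0, 10, 30, 110, 370))
# -> "0 0.500000 0.500000 0.833333 0.850000"
```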

2.2. Deep Learning Performance

Average precision can be used as a comprehensive evaluation index that balances the effects of precision and recall and evaluates a model more thoroughly. The area under the precision–recall curve is the average precision, and a larger value indicates better model performance [20].
This paper reports the models' performance by average precision, recall, and precision, with the confidence threshold set to 0.5 [18]. The chosen evaluation metrics are precision (P), recall (R), average precision (AP), mean average precision (mAP), and F1-score. Moreover, mAP is an essential index for evaluating a model because it reflects the overall performance of the network and avoids strong performance in some categories masking weak performance in others. Furthermore, we use a simple average F1-score together with the average precision value to show each method's performance on the complete dataset, for convenience and intuitiveness of calculation. Finally, after obtaining all the metric values for left and right foot detection, we compared P, R, AP, mAP, and F1-score to identify a stable model based on our manual labeling.
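These metrics follow their standard definitions: P = TP/(TP + FP), R = TP/(TP + FN), F1 is their harmonic mean, AP is the area under the precision–recall curve, and mAP averages AP over the two classes. The sketch below illustrates the computation; the detection counts and precision–recall points are hypothetical.

```python
import numpy as np

def precision_recall_f1(tp, fp, fn):
    """P, R, and F1 from detection counts at a fixed confidence threshold
    (0.5 in this study)."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

def average_precision(recall, precision):
    """AP as the area under the precision-recall curve (all-point
    interpolation, as in VOC-style evaluation)."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # monotone precision envelope
    idx = np.where(r[1:] != r[:-1])[0]         # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Hypothetical per-class PR points; mAP is the mean of the per-class APs
# (left and right foot are the two classes here).
ap_left = average_precision([0.5, 1.0], [1.0, 0.9])
ap_right = average_precision([0.5, 1.0], [1.0, 0.8])
map_value = np.mean([ap_left, ap_right])
print(precision_recall_f1(95, 1, 0), map_value)
```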

3. Results

3.1. Experimental Results

In this study, we randomly split the 974 plantar pressure images into 70% for the training set and 30% for the validation set. According to Table 1, Figure 5, and Figure 6, all evaluated models detected and classified the foot profiles with over 60% accuracy without overfitting. YOLOv4 had the best comprehensive performance of all the models, exhibiting consistently high accuracy across evaluations, including mAP (99.45%), P (99.00%), R (100.00%), and F1-score (99.00%). Meanwhile, ResNet-50 showed the lowest performance on all evaluation metrics, including mAP (66.00%), P (66.00%), R (50.00%), and F1-score (57.00%). Moreover, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x obtained mAP values of 91–93%, lower than those of YOLOv3 and YOLOv4 (99%).
The average AP across the different object detection models was 89.01% on the left foot and 91.86% on the right foot. The average AP across the YOLO models was 93.35% on the left foot and 95.93% on the right foot. The right foot showed higher AP than the left foot in all YOLO series models, as well as in the DenseNet and ResNet-50 models. Furthermore, the YOLOv3 model achieved the highest AP: 99.80% and 99.81% on the left and right foot, respectively. In contrast, the ResNet-50 model had the lowest AP: 61.88% and 70.13% on the left and right foot, respectively.
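These averages follow directly from the per-class AP values in Table 1; the short script below makes the arithmetic explicit.

```python
# Per-class AP values (left, right) transcribed from Table 1, in percent.
ap = {
    "YOLOv3": (99.80, 99.81), "YOLOv4": (99.62, 99.28),
    "YOLOv5s": (89.80, 93.70), "YOLOv5m": (89.30, 95.10),
    "YOLOv5l": (91.40, 94.60), "YOLOv5x": (90.20, 93.10),
    "DenseNet": (90.06, 89.17), "ResNet-50": (61.88, 70.13),
}

mean = lambda xs: sum(xs) / len(xs)
yolo = {k: v for k, v in ap.items() if k.startswith("YOLO")}

# All eight models: 89.01 (left) and 91.86 (right).
print(f"{mean([l for l, r in ap.values()]):.2f} {mean([r for l, r in ap.values()]):.2f}")
# YOLO series only: 93.35 (left) and 95.93 (right).
print(f"{mean([l for l, r in yolo.values()]):.2f} {mean([r for l, r in yolo.values()]):.2f}")
```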

3.2. Testing Samples

We tested images from the validation set as prediction samples to evaluate the performance of the YOLOv4 network. However, unusual distributions in the plantar pressure images decreased the accuracy. Four frequent examples were: defective plantar pressure images in the midfoot (Figure 7A); complex plantar pressure on the forefoot and midfoot (Figure 7B); complex plantar pressure with less pressure across the full foot (Figure 7C); and plantar pressure on the forefoot with an abnormal foot progression angle (Figure 7D).
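For reference, inference with a trained YOLOv4 model on such validation images can be run through OpenCV's Darknet interface. This is a minimal sketch, not the authors' pipeline; the file names, input size, and class order are assumptions.

```python
import cv2

# Config/weight/image file names, input size, and class order are assumptions
# for a two-class YOLOv4 model trained as described above.
model = cv2.dnn_DetectionModel("yolov4-footprint.cfg", "yolov4-footprint.weights")
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("sample_plantar_pressure.png")
classes, confidences, boxes = model.detect(image, confThreshold=0.5)

names = ["left", "right"]
for cls, conf, box in zip(classes.flatten(), confidences.flatten(), boxes):
    print(f"{names[int(cls)]}: {conf:.2f} at {tuple(box)}")  # box = (x, y, w, h)
```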

4. Discussion

The performance evaluation of the different object detection models showed excellent results in using plantar pressure images to identify the left and right footprints. YOLOv4 performed well on various metrics such as mAP, P, R, and F1-score (Table 1 and Figure 6). Furthermore, right foot detection achieved a higher average AP than left foot detection (Figure 5).
The YOLOv4 model showed good performance in detecting the left and right feet. Referring to Table 1, the results of YOLOv4 were balanced at over 99.00% across performance evaluations (mAP, P, R, and F1-score). YOLOv4 showed an R value of 100.00%, indicating that the detections matched the ground truth. Our results are consistent with Jiang et al. [32], whose review found that YOLOv4 systematically tried optimizations across the entire detection pipeline and adopted the best-performing combination of techniques, improving overall performance. As far as we know, YOLOv4 employs a CSPDarkNet-53 backbone and spatial pyramid pooling (SPP) connected to a YOLOv3 head [33]. YOLOv4 has the advantage of high detection accuracy and precise positioning of the bounding box [18].
The YOLOv4 results indicate that the model may have advantages in multi-color-channel imaging detection [34]. Compared with YOLOv4, YOLOv3 was more accurate on grayscale medical images [22]. Furthermore, YOLOv5 is beneficial for sequential imaging such as video [35]. For our aim of identifying the left and right feet, YOLOv4 performed better than YOLOv3 and YOLOv5.
YOLOv4 thus has advantages in localizing and classifying multi-object, multi-color-channel image features. In our study, YOLOv4 showed high performance in plantar pressure image detection, which may result from the multi-color features of plantar pressure images. Our results are similar to those of Dewi et al. [36], who investigated state-of-the-art object detection systems for traffic sign images using YOLOv3, YOLOv4, ResNet-50, and DenseNet and found that the YOLOv4 model exhibited higher accuracy due to multi-color imaging. These performance advantages of YOLOv4 were beneficial for predicting the left and right feet in plantar pressure images.
On the other hand, DenseNet and ResNet-50 have limitations in object detection. DenseNet is more powerful in medical image segmentation [37]. However, plantar pressure images are not typical medical images, and when DenseNet is used for object detection, its simple transition layers reduce dimensionality, making it difficult for a single receptive field to capture the multilevel features of the dense blocks [38]. Meanwhile, ResNet-50 is effective in medical image categorization with residual networks when the objects in an image are of similar size [39]. In plantar pressure object detection, object sizes are more unusual, and the loss of image features may challenge ResNet-50 networks. Therefore, DenseNet and ResNet-50 may not be suitable models for plantar pressure image detection.
The abnormal foot progression angle and complex footprint images complicated the identification of the left and right feet from plantar pressure images [18]. Furthermore, bounding box annotation of plantar pressure images is needed to specify the location of the object, which helps in recognizing foot features [40].
In this study, detection of the left and right feet was conducted by deep learning. Chen et al. used deep learning to determine flatfoot in the left and right feet using 413 right foot samples and 422 left foot samples, achieving over 80.00% classification accuracy for both feet (Table 2). Chen et al. and Dose et al. reported higher accuracy for the left foot than for the right [41,42], whereas Nadeem et al. and our results showed lower accuracy for the left foot than for the right [43].
According to the Rein et al. study, the right leg was dominant in more participants (80%) than the left leg (20%) [44]. In our results, the left and right feet had different AP values, possibly due to the side difference between the dominant and non-dominant legs. The dominant leg may induce higher accuracy for the right foot. The dominant leg contributes more significantly to producing the plantar pressure distribution pattern for forward propulsion, while the non-dominant leg contributes to support [45]. We conclude that the dominant leg bearing the main body weight may affect the pressure and lead to loss of foot features in plantar pressure images of the non-dominant leg (left foot). This effect may be amplified by short walking times, which rely more on the dominant leg (right foot) and produce more pressure in the footprint images, making left and right foot determination more accurate than with long-duration walking [46]. The loss of foot features in plantar pressure images may affect left foot detection in the YOLO series.
We compared our results with other studies that used deep learning to detect the left and right feet. Our study used plantar pressure images to identify the left and right feet, with object detection and bounding box annotation used to localize and classify them. Several other deep learning models have been used to identify the left and right feet. Nadeem et al. (2020) used an artificial neural network (ANN) model to classify human body part movements, including determining the left and right feet [43]. Dose et al. (2018) used a convolutional neural network (CNN) to classify left and right foot movement from electroencephalogram signals; heterogeneous data with many variations introduced complications [41]. Chen et al. (2019) used a triple generalized-inverse neural network (TGINN) to classify the left and right feet under flatfoot conditions using a smart insole [42] (Table 2).
However, the above three studies did not use object detection methods, and their performance was roughly 0–16% below that of our object detection results. This was the main reason for selecting an object detection method, which can localize and classify the image features of plantar pressure images and, as a result, achieve higher accuracy (over 93%) with small datasets. Furthermore, object detection is well suited to determining where objects are located in a given image and which category each object belongs to [47].
We used a small dataset with multi-color channels in this study. The data acquisition tools may affect the accuracy of left and right foot detection. According to the comparison of studies in Table 2, our study showed high accuracy in left (93.35%) and right (95.93%) foot detection using plantar pressure images as the data acquisition tool. Furthermore, based on a video-recorded dataset, the ANN model obtained over 90.00% accuracy with 2391 sequential images. In addition, the smallest dataset (835 images) of left and right feet, collected with smart insoles, was predicted using TGINN and achieved over 80.00% accuracy.
Conversely, left and right foot classification from electroencephalogram signals achieved around 78.00% precision. However, the left and right foot detection results show different values, possibly due to the side difference between the dominant and non-dominant foot. In addition, plantar pressure as a data acquisition tool can achieve high accuracy even with few images in the dataset, owing to the multi-color channels in the image features [34].
However, there are several limitations to the current work that suggest directions for future improvement. The first limitation is the different sizes of footprint patterns in plantar pressure images [48]. Second, the diversity of data acquisition conditions, especially testing across the complex target ages in spastic CP, still needs to be studied [49]. Finally, using meta-learning or few-shot classification [50] and applying fusion of multi-modal physiological data in prediction may address the different footprint sizes and complex target age limitations in future work [51].

5. Conclusions

This study aimed to determine the left and right feet in people with spastic CP using different object detection models to obtain a suitable detector based on the footprint image dataset. Our results showed that the object detection models achieved different performances; YOLOv4 outperformed the others in detecting the left and right feet through plantar pressure images, reaching over 99.00% in various performance evaluations. In addition, the right foot obtained a higher accuracy prediction than the left foot. Therefore, the auto-detection of left and right feet may have implications for discovering new information about footprint images with defective features in people with CP and provide new solutions through beneficial information to manage the orthosis treatment strategy in people with spastic CP.

Author Contributions

Conceptualization, P.A. and C.-W.L.; methodology, J.-Y.T., C.-Y.L. and F.A.; writing—original draft preparation, P.A. and R.B.R.S.; writing—review and editing, C.-W.L.; supervision, B.-Y.L. and Y.-K.J. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by grants from the Ministry of Science and Technology of the Republic of China (MOST 110-2221-E-468-005, MOST 110-2637-E-241-002, and MOST 110-2221-E-155-039-MY3). The funding agency was not involved in data collection, data analysis, or data interpretation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

Data collection and sharing for this project were obtained from SHUI-MU International Co., Ltd., Taiwan, which is available on the AIdea platform (https://aidea-web.tw) accessed on 21 February 2021, provided by Industrial Technology Research Institute (ITRI) of Taiwan. The authors wish to express gratitude to Fahni Haris, Wei-Cheng Shen, Yori Pusparani, and Jifeng Wang for their assistance.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Skoutelis, V.C.; Kanellopoulos, A.D.; Kontogeorgakos, V.A.; Dinopoulos, A.; Papagelopoulos, P.J. The orthopaedic aspect of spastic cerebral palsy. J. Orthop. 2020, 22, 553–558. [Google Scholar] [CrossRef] [PubMed]
  2. Fu, W.-J.; Jin, B.-X.; Zhao, Y.; Liu, Z.-H. Clinical study on acupuncture combined with low-frequency electric stimulation for scissor gait in children with spastic cerebral palsy. J. Acupunct. Tuina Sci. 2015, 13, 150–155. [Google Scholar] [CrossRef]
  3. Papageorgiou, E.; Simon-Martinez, C.; Molenaers, G.; Ortibus, E.; Van Campenhout, A.; Desloovere, K. Are spasticity, weakness, selectivity, and passive range of motion related to gait deviations in children with spastic cerebral palsy? A statistical parametric mapping study. PLoS ONE 2019, 14, e0223363. [Google Scholar] [CrossRef] [PubMed]
  4. Crowgey, E.L.; Marsh, A.G.; Robinson, K.G.; Yeager, S.K.; Akins, R.E. Epigenetic machine learning: Utilizing DNA methylation patterns to predict spastic cerebral palsy. BMC Bioinform. 2018, 19, 225. [Google Scholar] [CrossRef]
  5. Rossi, S.; Colazza, A.; Petrarca, M.; Castelli, E.; Cappa, P.; Krebs, H.I. Feasibility study of a wearable exoskeleton for children: Is the gait altered by adding masses on lower limbs? PLoS ONE 2013, 8, e73139. [Google Scholar] [CrossRef] [PubMed]
  6. Pornsuree, O.; Ehara, Y.; Yamamoto, S.; Piyavit, S.; Sakai, K. Effect of foot orthoses on the standing balance of a child with spastic diplegic cerebral palsy: A case report. Niigata J. Health Welf. 2011, 11, 43–55. [Google Scholar]
  7. Miller, F. Pedobarograph Foot Evaluations in Children with Cerebral Palsy. Cerebral Palsy 2020, 1373–1380. [Google Scholar] [CrossRef]
  8. Son, M.S.; Jung, D.H.; You, J.S.H.; Yi, C.H.; Jeon, H.S.; Cha, Y.J. Effects of dynamic neuromuscular stabilization on diaphragm movement, postural control, balance and gait performance in cerebral palsy. NeuroRehabilitation 2017, 41, 739–746. [Google Scholar] [CrossRef]
  9. Chang, W.-D.; Chang, N.-J.; Lin, H.-Y.; Lai, P.-T. Changes of plantar pressure and gait parameters in children with mild cerebral palsy who used a customized external strap orthosis: A crossover study. BioMed Res. Int. 2015, 2015, 813942. [Google Scholar] [CrossRef]
  10. Wingert, J.R.; Burton, H.; Sinclair, R.J.; Brunstrom, J.E.; Damiano, D.L. Tactile sensory abilities in cerebral palsy: Deficits in roughness and object discrimination. Dev. Med. Child Neurol. 2008, 50, 832–838. [Google Scholar] [CrossRef] [PubMed]
  11. Galamb, K.; Szilágyi, B.; Magyar, O.; Hortobágyi, T.; Nagatomi, R.; Váczi, M.; Négyesi, J. Effects of side-dominance on knee joint proprioceptive target-matching asymmetries. Physiol. Int. 2018, 105, 257–265. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Figueiredo, E.M.; Ferreira, G.B.; Moreira, R.C.M.; Kirkwood, R.N.; Fetters, L. Efficacy of ankle-foot orthoses on gait of children with cerebral palsy: Systematic review of literature. Pediatric Phys. Ther. 2008, 20, 207–223. [Google Scholar] [CrossRef] [PubMed]
  13. Bennett, D.; Walsh, M.; O’Sullivan, R.; Gallagher, J.; O’Brien, T.; Newman, C.J. Use of a dynamic foot pressure index to monitor the effects of treatment for equinus gait in children with cerebral palsy. J. Pediatric Orthop. 2007, 27, 288–294. [Google Scholar] [CrossRef] [PubMed]
  14. Beyaert, C.; Pierret, J.; Vasa, R.; Paysant, J.; Caudron, S. Toe walking in children with cerebral palsy: A possible functional role for the plantar flexors. J. Neurophysiol. 2020, 124, 1257–1269. [Google Scholar] [CrossRef]
  15. Krigger, K.W. Cerebral palsy: An overview. Am. Fam. Physician 2006, 73, 91–100. [Google Scholar]
  16. Son, I.; Lee, G. Immediate Effects of the Dorsiflexion Angle of Hinged Ankle-foot Orthosis on the Spatiotemporal Gait Parameters of Children With Spastic Cerebral Palsy. Res. Sq. 2021, PPR354197. [Google Scholar] [CrossRef]
  17. Nonnekes, J.; Růžička, E.; Nieuwboer, A.; Hallett, M.; Fasano, A.; Bloem, B.R. Compensation strategies for gait impairments in Parkinson disease: A review. JAMA Neurol. 2019, 76, 718–725. [Google Scholar] [CrossRef]
  18. Ardhianto, P.; Subiakto, R.B.R.; Lin, C.-Y.; Jan, Y.-K.; Liau, B.-Y.; Tsai, J.-Y.; Akbari, V.B.H.; Lung, C.-W. A Deep Learning Method for Foot Progression Angle Detection in Plantar Pressure Images. Sensors 2022, 22, 2786. [Google Scholar] [CrossRef]
  19. DeHeer, P.A. Pediatric Equinus Deformity. In The Pediatric Foot and Ankle; Springer: Berlin/Heidelberg, Germany, 2020; pp. 147–162. [Google Scholar]
  20. Ardhianto, P.; Tsai, J.-Y.; Lin, C.-Y.; Liau, B.-Y.; Jan, Y.-K.; Akbari, V.-B.-H.; Lung, C.-W. A Review of the Challenges in Deep Learning for Skeletal and Smooth Muscle Ultrasound Images. Appl. Sci. 2021, 11, 4021. [Google Scholar] [CrossRef]
  21. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  22. Tsai, J.-Y.; Hung, I.Y.-J.; Guo, Y.L.; Jan, Y.-K.; Lin, C.-Y.; Shih, T.T.-F.; Chen, B.-B.; Lung, C.-W. Lumbar disc herniation automatic detection in magnetic resonance imaging based on deep learning. Front. Bioeng. Biotechnol. 2021, 691. [Google Scholar] [CrossRef]
  23. Schwalbe, N.; Wahl, B. Artificial intelligence and the future of global health. Lancet 2020, 395, 1579–1586. [Google Scholar] [CrossRef]
  24. Chen, H.-C.; Jan, Y.-K.; Liau, B.-Y.; Lin, C.-Y.; Tsai, J.-Y.; Li, C.-T.; Lung, C.-W. Using Deep Learning Methods to Predict Walking Intensity from Plantar Pressure Images. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, New York, NY, USA, 25–29 July 2021; pp. 270–277. [Google Scholar]
  25. Kaya, M.; Karakuş, S.; Tuncer, S.A. Detection of ataxia with hybrid convolutional neural network using static plantar pressure distribution model in patients with multiple sclerosis. Comput. Methods Programs Biomed. 2022, 214, 106525. [Google Scholar] [CrossRef]
  26. Dewi, C.; Juli Christanto, H. Combination of Deep Cross-Stage Partial Network and Spatial Pyramid Pooling for Automatic Hand Detection. Big Data Cogn. Comput. 2022, 6, 85. [Google Scholar] [CrossRef]
  27. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  28. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  29. Yao, J.; Qi, J.; Zhang, J.; Shao, H.; Yang, J.; Li, X. A real-time detection algorithm for Kiwifruit defects based on YOLOv5. Electronics 2021, 10, 1711. [Google Scholar] [CrossRef]
  30. De Carolis, B.; Ladogana, F.; Macchiarulo, N. Yolo trashnet: Garbage detection in video streams. In Proceedings of the 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Bari, Italy, 27–29 May 2020; pp. 1–7. [Google Scholar]
  31. Leunkeu, A.N.; Lelard, T.; Shephard, R.J.; Doutrellot, P.-L.; Ahmaidi, S. Gait cycle and plantar pressure distribution in children with cerebral palsy: Clinically useful outcome measures for a management and rehabilitation. NeuroRehabilitation 2014, 35, 657–663. [Google Scholar] [CrossRef]
  32. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  33. Naik, U.P.; Rajesh, V.; Kumar, R. Implementation of YOLOv4 Algorithm for Multiple Object Detection in Image and Video Dataset using Deep Learning and Artificial Intelligence for Urban Traffic Video Surveillance Application. In Proceedings of the 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT), Erode, India, 15–17 September 2021; pp. 1–6. [Google Scholar]
  34. Xu, C.; Zhang, Y.; Fan, X.; Lan, X.; Ye, X.; Wu, T. An efficient fluorescence in situ hybridization (FISH)-based circulating genetically abnormal cells (CACs) identification method based on Multi-scale MobileNet-YOLO-V4. Quant. Imaging Med. Surg. 2022, 12, 2961–2976. [Google Scholar] [CrossRef]
  35. Liu, Y.; Lu, B.; Peng, J.; Zhang, Z. Research on the use of YOLOv5 object detection algorithm in mask wearing recognition. World Sci. Res. J. 2020, 6, 276–284. [Google Scholar]
  36. Dewi, C.; Chen, R.-C.; Liu, Y.-T.; Liu, Y.-S.; Jiang, L.-Q. Taiwan stop sign recognition with customize anchor. In Proceedings of the 12th International Conference on Computer Modeling and Simulation, New York, NY, USA, 22–24 June 2020; pp. 51–55. [Google Scholar]
  37. Stawiaski, J. A pretrained densenet encoder for brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 105–115. [Google Scholar]
  38. Zhou, T.; Ye, X.; Lu, H.; Zheng, X.; Qiu, S.; Liu, Y. Dense Convolutional Network and Its Application in Medical Image Analysis. BioMed Res. Int. 2022, 2022, 2384830. [Google Scholar] [CrossRef]
  39. Ayyachamy, S.; Alex, V.; Khened, M.; Krishnamurthi, G. Medical image retrieval using Resnet-18. In Proceedings of the Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, San Diego, CA, USA, 17–18 February 2019; pp. 233–241. [Google Scholar]
  40. Xu, H.; Yao, L.; Zhang, W.; Liang, X.; Li, Z. Auto-fpn: Automatic network architecture adaptation for object detection beyond classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 6649–6658. [Google Scholar]
  41. Dose, H.; Møller, J.S.; Iversen, H.K.; Puthusserypady, S. An end-to-end deep learning approach to MI-EEG signal classification for BCIs. Expert Syst. Appl. 2018, 114, 532–542. [Google Scholar] [CrossRef]
  42. Chen, L.; Jin, L.; Li, Y.; Liu, M.; Liao, B.; Yi, C.; Sun, Z. Triple Generalized-Inverse Neural Network for Diagnosis of Flat Foot. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; pp. 8594–8599. [Google Scholar]
  43. Nadeem, A.; Jalal, A.; Kim, K. Human actions tracking and recognition based on body parts detection via Artificial neural network. In Proceedings of the 2020 3rd International conference on advancements in computational sciences (ICACS), Lahore, Pakistan, 17–19 February 2020; pp. 1–6. [Google Scholar]
  44. Rein, S.; Fabian, T.; Zwipp, H.; Mittag-Bonsch, M.; Weindel, S. Influence of age, body mass index and leg dominance on functional ankle stability. Foot Ankle Int. 2010, 31, 423–432. [Google Scholar] [CrossRef]
  45. Wafai, L.; Zayegh, A.; Woulfe, J.; Aziz, S.M.; Begg, R. Identification of foot pathologies based on plantar pressure asymmetry. Sensors 2015, 15, 20392–20408. [Google Scholar] [CrossRef]
  46. Chen, H.-C.; Liau, B.-Y.; Lin, C.-Y.; Akbari, V.B.H.; Lung, C.-W.; Jan, Y.-K. Estimation of Various Walking Intensities Based on Wearable Plantar Pressure Sensors Using Artificial Neural Networks. Sensors 2021, 21, 6513. [Google Scholar] [CrossRef]
  47. Zhao, Z.-Q.; Zheng, P.; Xu, S.-t.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  48. Xu, G.; Huang, H.; Liu, C.; Wang, Z.; Li, W.; Liu, S. A model for medical diagnosis based on plantar pressure. In Proceedings of the 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), Bangalore, India, 27–30 December 2017; pp. 1–6. [Google Scholar]
  49. Wang, Z.; Wu, Y.; Yang, L.; Thirunavukarasu, A.; Evison, C.; Zhao, Y. Fast personal protective equipment detection for real construction sites using deep learning approaches. Sensors 2021, 21, 3478. [Google Scholar] [CrossRef]
  50. Elsken, T.; Staffler, B.; Metzen, J.H.; Hutter, F. Meta-learning of neural architectures for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12365–12375. [Google Scholar]
  51. Tadesse, G.A.; Javed, H.; Thanh, N.L.N.; Thi, H.D.H.; Thwaites, L.; Clifton, D.A.; Zhu, T. Multi-modal diagnosis of infectious diseases in the developing world. IEEE J. Biomed. Health Inform. 2020, 24, 2131–2141. [Google Scholar] [CrossRef]
Figure 1. Illustration of object detection Network Architecture; (A) Residual Neural Network 50 (ResNet-50) network architecture. (B) Dense Convolutional Network (DenseNet) network architecture.
Figure 2. YOLO series Network Architecture; (A) YOLOv3 Network Architecture. (B) YOLOv4 Network Architecture. (C) YOLOv5 Network Architecture. Note: YOLO, You Only Look Once.
Figure 3. The illustration of foot profile box for left and right feet in plantar pressure image detection with the ResNet-50, DenseNet, YOLOv3, YOLOv4, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x models.
Figure 4. The examples of labeling the maximum area of plantar pressure image dataset with a different condition; (A) The footprint images with pressure in the full foot. (B) Complex footprint images with pressure in the rearfoot and midfoot but loss of pressure in the forefoot. (C) The abnormal foot progression angle with high pressure on the forefoot and midfoot and less pressure on the rearfoot. (D) Defective footprint images with high pressure on the heels, lost pressure on the midfoot and forefoot.
Figure 5. The average precision of left and right foot detection in different object detection models.
Figure 6. YOLO series comparison results in various metrics. YOLOv4 exhibits persistent accuracy gains across various metrics.
Figure 7. The prediction sample of detection in YOLOv4: (A) the defective plantar pressure images on the midfoot. (B) The complex plantar pressure on the forefoot and midfoot. (C) The complex plantar pressure with less pressure at full foot. (D) The plantar pressure image on the forefoot with abnormal foot progression angle.
Table 1. Performance evaluation of object detection models.

| Models | AP (Left) | AP (Right) | mAP | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| YOLOv3 | 99.80% | 99.81% | 99.80% | 94.00% | 99.00% | 97.00% |
| YOLOv4 | 99.62% | 99.28% | 99.45% | 99.00% | 100.00% | 99.00% |
| YOLOv5s | 89.80% | 93.70% | 91.70% | 99.58% | 99.64% | 100.00% |
| YOLOv5m | 89.30% | 95.10% | 92.18% | 99.62% | 99.64% | 100.00% |
| YOLOv5l | 91.40% | 94.60% | 92.98% | 99.42% | 99.64% | 100.00% |
| YOLOv5x | 90.20% | 93.10% | 91.66% | 91.84% | 87.92% | 89.00% |
| DenseNet | 90.06% | 89.17% | 89.62% | 56.00% | 97.00% | 71.00% |
| ResNet-50 | 61.88% | 70.13% | 66.00% | 66.00% | 50.00% | 57.00% |

Note: AP, Average Precision; mAP, Mean Average Precision.
Table 2. Comparison of studies on left and right foot prediction using deep learning approaches.

| References | Data Acquisition Tools | Model | Dataset | Left | Right |
|---|---|---|---|---|---|
| Dose et al. (2018) [41] | Electroencephalogram | CNN | 64 EEG channels | 79.90% | 78.60% |
| Chen et al. (2019) [42] | Smart insole | TGINN | 835 images | 82.35% | 81.93% |
| Nadeem et al. (2020) [43] | Video frames | ANN | 2391 images | 94.00% | 95.00% |
| Our study | Plantar pressure | YOLO | 974 images | 93.35% | 95.93% |

Note: ANN, Artificial Neural Network; CNN, Convolutional Neural Network; EEG, Electroencephalogram; TGINN, Triple Generalized-Inverse Neural Network; YOLO, You Only Look Once; our study values are averaged over the YOLO series.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
