Article

The Prediction of Body Mass Index from Negative Affectivity through Machine Learning: A Confirmatory Study

Giovanni Delnevo, Giacomo Mancini, Marco Roccetti, Paola Salomoni, Elena Trombini and Federica Andrei
1 Department of Computer Science and Engineering, University of Bologna, 40127 Bologna, Italy
2 Department of Education, University of Bologna, 40127 Bologna, Italy
3 Department of Psychology, University of Bologna, 40127 Bologna, Italy
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(7), 2361; https://doi.org/10.3390/s21072361
Submission received: 17 February 2021 / Revised: 17 March 2021 / Accepted: 26 March 2021 / Published: 29 March 2021

Abstract: This study investigates the relationship between affect-related psychological variables and Body Mass Index (BMI). We utilized a novel method based on machine learning (ML) algorithms that forecast unobserved BMI values from psychological variables, such as depression, used as predictors. We employed various machine learning algorithms, including gradient boosting and random forest, with psychological variables from 221 subjects to predict both the BMI values and the BMI status (normal, overweight, and obese) of those subjects. We found that the psychological variables in use allow one to predict both the BMI values (with a mean absolute error of 5.27–5.50) and the BMI status with an F1-score above 0.80. Further, our study confirmed that negative psychological variables, such as depression, are particularly effective predictors of BMI compared with the positive ones.

1. Introduction

Obesity constitutes a major public health concern globally, generating considerable direct and indirect costs and affecting over one-third of the world’s population [1]. Obesity is recognized as a complex, multifactorial disease, determined by a combination of factors and impacting both physical and psychological health [2]. However, existing research generally treats this condition mainly as the result of behavioral factors, namely an excessive caloric intake relative to metabolic energy expenditure [3], and genetic influences, such as single-gene mutations [4]. The role of other relevant determinants, including psychological ones, therefore tends to be neglected, although these variables clearly also contribute to weight gain and weight-related pathologies. Nevertheless, particularly in the field of psychology, researchers have emphasized a mutual association between overweight, obesity, and high levels of negative affectivity, operationalized mainly as depression. For example, adults diagnosed with obesity report higher depression and anxiety levels compared to normal-weight individuals [5,6], and negative affect has emerged as an important factor in the maintenance of eating pathology [7]. At the same time, the literature shows inconsistencies with regard to the strength and causal direction of such associations [8].
These mixed results might be related to several factors, including specific methodological issues, viz., the variables being measured, the assessment tools, and the strategies for data management. In fact, the tendency to employ a limited conceptualization of obesity, together with the general application of conventional regression analyses (e.g., linear and logistic regressions) to test empirical assumptions, reinforces existing difficulties in predicting and treating obesity. The use of regressions has certainly helped to identify risk factors for medical outcomes; however, in the case of a multidimensional, lifestyle-related condition such as obesity, these methods have made less progress [9].
One of the main approaches that may help to reduce these research flaws and to improve scientific knowledge is the use of artificial intelligence (AI). In health-related disciplines, there is currently an increasing interest in the use of AI, particularly when the primary task is identifying clinically useful patterns in high-dimensional data sets. For example, several studies have employed AI to classify a number of medical parameters that could efficiently predict obesity and body mass index (BMI; weight in kilograms divided by the square of the height in meters) [10], while a recent systematic review surveyed the application of machine learning (ML) algorithms to childhood obesity care [11].
Detection and diagnosis of diseases through AI, in particular ML, is indeed an ongoing and prominent topic in the scientific literature [12]. Interest in its potential has increased, even though possible unintended consequences of its application in clinical practice are well known, including an overreliance on the capabilities of automation, which reduces the skills of physicians, as well as relying more on the data than on the clinical context [11]. Several medical investigations have employed ML approaches to develop advanced remote healthcare systems to monitor long-term patients with BMI-related chronic illnesses [13,14,15,16]. Specifically, while a number of these studies attempted to predict BMI from voice signals [15], face images [16,17], or face points extracted with a Kinect [18], other studies focused on blood and biochemical indexes [19,20,21]. However, to our knowledge, there are currently no studies analyzing the relationship between psychological functioning and BMI values through ML techniques.
Here, we aim to address this gap by further exploring the relationship between affect-related psychological variables and BMI through ML algorithms. Specifically, we applied ML to infer the predictive value of psychological functioning over BMI using data from a study [22] that demonstrated, employing correlational analysis, that depression levels may be useful to discriminate among BMI levels (normal weight, overweight, and all obesity classes). The main contributions of this work are twofold. Firstly, this study attempts to reproduce the results obtained on the relationship between affect-related psychological variables and BMI [22] by using ML techniques. Computational reproducibility is the ability to repeat an analysis of a given data set and obtain sufficiently similar results [23,24]. Not only is reproducibility critical for ML research [25], but it also constitutes a necessary requirement for science in general, given the constantly increasing need to subject study findings to more intensive scrutiny [26]. Secondly, this study aims to test whether psychological variables can be used as predictors to forecast unobserved BMI values [27]. The main objective of this study is therefore to identify risk and/or protective factors, conceptualized as negative and positive affectivity respectively, for overweight and obesity. Depending on the evidence for causality, these factors can be useful for screening patients at risk in a broader population, as well as for the development of therapeutic interventions.

2. Materials and Methods

This section details the research questions underlying this study, illustrates the dataset and the machine learning algorithms used, and then describes the employed approach together with the adopted evaluation metrics.

2.1. Research Questions

As anticipated in the Introduction, the purpose of this study is to examine in depth the relationship between risk and protective factors, in the form of negative and positive affectivity, and overweight and obesity through the use of machine learning algorithms. In particular, the research questions that drove our study are the following:
(1) Is it possible to predict the BMI value (or the BMI class) using psychological variables?
(2) Which psychological variables, the positive or the negative ones, allow one to better predict BMI?
(3) Among them, which one has the greatest influence on the prediction capability?
To answer these research questions, we followed the steps outlined below. Firstly, we used all the psychological variables as input to predict BMI. Secondly, we considered the positive and the negative variables separately; this allowed us to understand which set better predicts BMI. Finally, we evaluated the better-performing set with a leave-one-variable-out approach, to understand whether any single variable is more strongly related to BMI than the others.

2.2. Dataset Description

The dataset is composed of psychological variables measured in adults seeking treatment for obesity and in a control group. A detailed description of both the participants and the data collection procedure is available in a recently published article [22]. The dataset comprised a set of both positive and negative psychological variables relative to 320 subjects. Positive variables were those psychological factors that may play a protective role against obesity and included: trait emotional intelligence (trait EI), measured with the Trait Emotional Intelligence Questionnaire–Short Form [27]; cognitive reappraisal as an emotion regulation strategy, measured with the Emotion Regulation Questionnaire [28]; and happiness, measured with the Oxford Happiness Inventory [29]. Negative variables were instead potential risk factors for the development and maintenance of obesity and included: expressive suppression as an emotion regulation strategy; binge eating, assessed with the Binge Eating Scale [30]; depression, assessed with the Beck Depression Inventory [31]; and trait and state anxiety, assessed with the State-Trait Anxiety Inventory-Y [32]. Each questionnaire returns an integer score, so, for each subject, there are seven different values representing the psychological state of the subject. In addition, the dataset contains the BMI measured for each participant and the BMI category computed according to the ranges defined by the World Health Organization [33]. The subjects were organized into three groups: normal weight, overweight, and obese adults (see Table 1).
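For illustration, the sketch below shows how the categorical BMI variable could be derived from the continuous one using the standard WHO cut-offs of 25 and 30; the file and column names are hypothetical and not those of the original dataset.
```python
# Minimal sketch of deriving the BMI class from the measured BMI value.
# File and column names are hypothetical; the cut-offs are the standard WHO ones.
import pandas as pd

def bmi_to_class(bmi: float) -> str:
    if bmi < 25.0:
        return "Normal Weight"   # the study uses three groups only
    elif bmi < 30.0:
        return "Overweight"
    return "Obesity"             # all obesity classes are collapsed into one group

df = pd.read_csv("obesity_psych_dataset.csv")    # 320 subjects: questionnaire scores + BMI
df["bmi_class"] = df["bmi"].apply(bmi_to_class)
```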

2.3. Machine Learning Algorithms

BMI was considered as both a continuous and a categorical variable. We took advantage of several algorithms, with the aim of understanding which ones work best in this specific context. In particular, we evaluated: K-nearest neighbor (KNN) [34], classification and regression tree (CART) [35], support vector machine (SVM) [36], multi-layer perceptron (MLP) [37], AdaBoost with decision trees (AB) [38], gradient boosting (GB) [39], random forest (RF) [40], and extra tree (ET) [41].
All algorithms were used for both the classification and the regression problem. For the regression analysis, we employed Lasso [42] and Elastic Net Regression [43] as additional algorithms.
We employed the Scikit-learn machine learning library in all our experiments. For all the algorithms, we used the default parameters, with the only exception of the random state, which we fixed whenever possible to ensure the reproducibility of the results.
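A setup along these lines might look as follows; this is a sketch, and the seed value is an assumption, since the paper only states that a random state was provided where possible.
```python
# Sketch of the estimator setup: scikit-learn defaults, with random_state
# fixed wherever the estimator accepts it.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, ExtraTreesClassifier)

SEED = 42  # hypothetical seed

classifiers = {
    "KNN": KNeighborsClassifier(),                    # no random_state parameter
    "CART": DecisionTreeClassifier(random_state=SEED),
    "SVM": SVC(random_state=SEED),
    "MLP": MLPClassifier(random_state=SEED),
    "AB": AdaBoostClassifier(random_state=SEED),
    "GB": GradientBoostingClassifier(random_state=SEED),
    "RF": RandomForestClassifier(random_state=SEED),
    "ET": ExtraTreesClassifier(random_state=SEED),
}
```
The regression counterparts (e.g., GradientBoostingRegressor, plus Lasso and ElasticNet) would be instantiated in the same way.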

2.4. Approach and Evaluation Criteria

First of all, the dataset was preprocessed to deal with missing values. Since the subjects with missing values were missing not just one but many of them (five to seven psychological variables out of a total of seven), we decided to simply remove those subjects from the dataset. In this way, the number of subjects went from 320 to 221. The dataset was then divided into two parts: one for the training phase, composed of 80% of the subjects (i.e., 176), and one for the testing phase, composed of the remaining 20% (i.e., 45). Then, for each of the three steps described in Section 2.1, we employed stratified k-fold cross-validation in the training phase, a technique used to reduce the bias deriving from random sampling [44]. We chose four folds, since the size of the dataset does not allow the use of the typical ten-fold cross-validation. In fact, by dividing the training set into 10 folds, the overweight class in the validation fold would count only 2 elements at each iteration. Using 4 folds instead, the overweight class in the validation fold counts 5 elements at each iteration, a number that is also consistent with the size of that class in the test set. Before each training phase, the data were scaled by subtracting the average value and dividing by the standard deviation.
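A minimal sketch of this preprocessing is given below; it reuses the df and SEED placeholders from the earlier sketches, and it assumes the 80/20 split was stratified by BMI class, which the paper does not state explicitly.
```python
# Sketch: remove subjects with missing questionnaire scores, split 80/20,
# and standardize features inside each cross-validation fold.
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.preprocessing import StandardScaler

df = df.dropna()                                    # 320 -> 221 subjects
psych_columns = [c for c in df.columns if c not in ("bmi", "bmi_class")]
X = df[psych_columns].to_numpy()
y = df["bmi_class"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=SEED)

cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=SEED)
for train_idx, val_idx in cv.split(X_train, y_train):
    scaler = StandardScaler().fit(X_train[train_idx])   # fit on training folds only
    X_tr = scaler.transform(X_train[train_idx])
    X_val = scaler.transform(X_train[val_idx])
    # ...fit and validate each estimator here
```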
With regard to the classification, Table 1 shows that the dataset suffers the problem of imbalance among the three classes. In fact, the subjects of the class Obesity are more than five times the ones of the class Overweight. This imbalance could lead to predictions that are more accurate on the majority class than on a minority class, resulting in a bias in favour of the majority class. To deal with this problem, we took advantage of a resampling technique with the aim of over-sampling the minority classes. In particular, we employed the synthetic minority over-sampling technique (SMOTE) [45]. This technique exploits K-nearest neighbour in the feature space to generate synthetic examples of the minority class. In this way, during training, the number of examples for each class will be always the same.
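One way to wire SMOTE into training so that it touches only the training folds is sketched below; the use of the imbalanced-learn library is an assumption about tooling, since the paper does not name the SMOTE implementation used, and SEED and the training split come from the previous sketches.
```python
# Sketch: scaling + SMOTE + classifier chained in a pipeline, so that the
# synthetic minority examples are generated from the training folds only.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier

model = ImbPipeline(steps=[
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=SEED)),          # k-NN based over-sampling
    ("clf", GradientBoostingClassifier(random_state=SEED)),
])
model.fit(X_train, y_train)                       # SMOTE is skipped at predict time
```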
A final consideration concerns the evaluation metrics. To assess the performance of the classifiers, we employed a global metric, the F1-score (the harmonic mean of precision and recall), together with two class-specific metrics, sensitivity and specificity, which measure the ability of the classifiers to predict true positives and true negatives. The prediction accuracy of our regressors was evaluated with two different measures: the mean absolute error (MAE) and the Pearson correlation coefficient (PCC). The MAE measures the prediction error (i.e., the average deviation between the real BMI values and the predicted ones). The PCC quantifies the degree of linear association between real and predicted BMI values. The reason for coupling MAE and PCC is that, when the values are all distributed near the average, a naive regressor that always predicts the mean value achieves a good MAE. In such a case, however, the PCC is low, which highlights, and thus allows us to avoid, this problem.
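These metrics can be computed directly with scikit-learn and SciPy, as in the sketch below; the prediction arrays (y_pred_class, y_pred_bmi, y_test_bmi) are placeholders, and the averaging scheme for the F1-score is an assumption, since the paper reports a single global value.
```python
# Sketch of the evaluation metrics used for the two tasks.
from sklearn.metrics import f1_score, mean_absolute_error
from scipy.stats import pearsonr

# Classification: one global F1-score over the three BMI classes.
f1 = f1_score(y_test, y_pred_class, average="weighted")

# Regression: MAE for the average deviation, PCC to catch a regressor that
# merely predicts values close to the mean BMI.
mae = mean_absolute_error(y_test_bmi, y_pred_bmi)
pcc, _ = pearsonr(y_test_bmi, y_pred_bmi)
```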

3. Results

3.1. BMI Prediction Using Psychological Variables

To answer the first research question, we investigated the use of psychological variables to predict BMI values and classes. We conducted a first analysis employing all the algorithms described in the previous Section, taking advantage of the 4-fold cross-validation. In Figure 1, we report the F1-scores obtained while predicting BMI classes with all the psychological variables available. In general, the algorithms are able to predict the BMI classes. As shown, the best performance was achieved by the extra tree classifier, with an average F1-score of 0.84. However, MLP, GB, and RF were also able to achieve average F1-scores greater than 0.8.
After this initial assessment, we conducted a tuning phase on the most promising algorithms: MLP, GB, RF, and ET. We varied the main hyperparameters of each algorithm employing a grid search approach, hence considering all the hyperparameter combinations. The full list of values considered for each parameter and algorithm is reported in Table 2. For each algorithm, the best combination is highlighted in bold in the Table.
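A grid search over the Table 2 ranges for one of the shortlisted classifiers (gradient boosting) could be set up as in the sketch below; the scoring choice is an assumption, the criterion parameter is omitted, and cv and SEED come from the earlier sketches.
```python
# Sketch of the tuning phase for the gradient boosting classifier,
# using an exhaustive grid search over (most of) the Table 2 ranges.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier

param_grid = {
    "learning_rate": [0.01, 0.05, 0.1, 0.2],
    "min_samples_leaf": [1, 3, 5],
    "min_samples_split": [2, 4, 6],
    "max_depth": [3, 5, 8],
    "max_features": ["log2", "sqrt"],
    "subsample": [0.5, 0.75, 1.0],
    "n_estimators": [50, 100, 200, 500],
}
search = GridSearchCV(GradientBoostingClassifier(random_state=SEED),
                      param_grid, scoring="f1_weighted", cv=cv, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```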
After finding the hyperparameters that enable the algorithms to perform best on the training set, we evaluated them on the test set. Table 3 reports the F1-scores obtained on the training set, again employing 4-fold cross-validation, and on the test set. As shown, on the test set, both GB and ET were able to reach F1-scores equal to 0.82. Such results highlight that it is possible to predict the BMI class from the aforementioned psychological variables with good accuracy.
We then tackled the regression analysis. As in the classification analysis, we first analyzed the performance of all the algorithms. Table 4 reports both the mean absolute error and the Pearson correlation coefficient obtained by each algorithm. As shown, the best performance was achieved by Lasso and Elastic Net, with a MAE equal to 4.35 and PCCs of 0.81 and 0.80, respectively, indicating a strong correlation between predictions and real values. Slightly worse results were obtained by KNN, GB, RF, and ET.
In this case too, we conducted a tuning phase, varying the hyperparameters of LASSO, EN, KNN, GB, RF, and ET. The full list of values considered for each parameter and algorithm is reported in Table 5. As in the previous case, we employed a grid search approach. For each algorithm, the best combination is highlighted in bold in the Table.
Table 6 reports the MAE and the PCC on both the training and the test set. As shown, the best performance on both sets is achieved by gradient boosting, with average errors of 4.14 and 5.27, respectively. In both cases, there is a strong correlation between the predictions and the real values, as highlighted by the PCC. The worst performances on the test set were instead achieved by Lasso and EN, which were the ones that initially performed best. Overall, even when tackling the problem as a regression one, we were able to predict BMI values starting from psychological variables.

3.2. Evaluation of the Impact of Positive and Negative Psychological Variables on Prediction

To answer the second research question, we contrasted the performance of the various machine learning algorithms when trained on positive and negative psychological variables separately. We started with the classification problem. We employed the same approach as in the previous Section, using the same split for the cross-validation and the same parameters for the algorithms. Table 7 reports the F1-scores obtained by the algorithms when trained with positive (Positive Variables column) and negative (Negative Variables column) psychological variables.
As shown, there is little difference between the performance obtained when training the algorithms with all the psychological variables and that obtained when training them with only the negative psychological variables. The same cannot be said of the algorithms trained with the positive psychological variables: in that case, performance drops significantly. These results indicate that, unlike the negative ones, the positive psychological variables contribute little to the prediction of BMI.
We then replicated the experiments on the regression analysis. In this case too, we used the same split for the cross-validation and the same parameters for the algorithms. Table 8 reports both the mean absolute error and the Pearson correlation coefficient for the algorithms trained with positive (Positive Variables column) and negative (Negative Variables column) psychological variables. The reported results confirm those obtained with the classification algorithms: the algorithms trained with negative psychological variables perform similarly to those trained with all the psychological variables. By contrast, in many cases, the MAE of the algorithms trained on positive variables is almost twice the MAE of the algorithms trained on negative ones.
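The comparison amounts to re-running the cross-validation on two column subsets. The sketch below illustrates this for the classification task; model and cv come from the earlier sketches, train_df denotes the 80% training portion of the DataFrame, the column names are illustrative, and the four negative variables follow the list in Section 3.3.
```python
# Sketch: cross-validate the same pipeline on the positive and on the
# negative variable subsets separately (column names are illustrative).
from sklearn.model_selection import cross_val_score

positive_cols = ["trait_ei", "cognitive_reappraisal", "happiness"]
negative_cols = ["depression", "trait_anxiety", "binge_eating", "expressive_suppression"]

for name, cols in [("positive", positive_cols), ("negative", negative_cols)]:
    scores = cross_val_score(model, train_df[cols], train_df["bmi_class"],
                             cv=cv, scoring="f1_weighted")
    print(f"{name} variables: mean F1 = {scores.mean():.2f}")
```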

3.3. Evaluation of the Impact of the Single Negative Psychological Variables on Prediction

Finally, to answer the third research question, we trained our machine learning algorithms removing, in turn, each negative psychological variable: depression (DE), trait anxiety (TA), binge eating (BE), and expressive suppression (ES). In this way, we can understand which variable has the greatest impact on the predictive capabilities of the algorithms. We first tackled the classification problem, focusing only on the training set and employing the 4-fold cross-validation. Table 9 reports the F1-scores obtained by the algorithms when trained without one of the psychological variables. From the results, it is clear that the psychological variable with the greatest impact on the predictive capabilities of the algorithms is depression. In fact, removing this variable leads to a deterioration in performance of 0.2 on average (column No DE). Removing any other variable, instead, does not significantly affect performance, as shown by the values reported in columns No TA, No BE, and No ES.
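This leave-one-variable-out analysis can be expressed as a short loop over the negative variables, again reusing the placeholder objects (model, cv, train_df, negative_cols) from the previous sketches.
```python
# Sketch of the leave-one-variable-out analysis: drop each negative variable
# in turn and compare the cross-validated F1-scores.
from sklearn.model_selection import cross_val_score

for dropped in negative_cols:
    kept = [c for c in negative_cols if c != dropped]
    scores = cross_val_score(model, train_df[kept], train_df["bmi_class"],
                             cv=cv, scoring="f1_weighted")
    print(f"without {dropped}: mean F1 = {scores.mean():.2f}")
```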
We then repeated the same analysis for the prediction of BMI values. In this case too, we analysed only the training set, using the 4-fold cross-validation. Table 10 reports both the mean absolute error and the Pearson correlation coefficient for the algorithms trained, in turn, without one of the negative psychological variables. The results confirm those obtained with the classification algorithms: removing the depression variable has a greater impact than the removal of any other variable. In fact, the MAE worsens by about 2 points on average.

4. Discussion

The current study aimed at exploring whether BMI values can be predicted from psychological parameters by using ML techniques. ML techniques represent a powerful set of algorithms that can derive useful knowledge for the medical field in general, and for obesity more specifically, as they can help us to improve our understanding of this pathology and our capacity to predict it with greater precision [46]. Risk prediction of adverse health conditions and events is a primary goal of much health research, and this study aimed to provide evidence about the role of psychological factors as either risk (negative affectivity) or protective (positive affectivity) determinants of BMI levels through non-conventional statistical techniques.
Several ML algorithms were used to test theoretical models about the relationship between psychological variables and BMI. First of all, we can highlight that, regardless of how BMI is conceptualized (i.e., as a continuous value or as a categorical variable), the results are essentially the same. For this reason, in presenting the answers to the research questions, we do not differentiate between the two types of problem. From the results presented in Section 3.1, it is clear that the answer to the first research question is affirmative: using affect-related variables, it is possible to predict BMI with a good level of accuracy. To answer the second research question, we instead used positive and negative affect-related variables as input separately. The results reported in Section 3.2 showed that BMI is better predicted by the set of negative affect-related variables, such as depression, anxiety, and emotion suppression, whereas variables with more positive contents, such as happiness and emotion regulation, did not seem to play a predictive role over BMI. Hence, in the third step of our experiments, we considered only the negative affect-related variables, leaving out one variable in turn to answer the last research question. Among the psychological variables that we considered, depression seemed to have the strongest predictive power: from the results presented in Section 3.3, it is clear that the removal of depression generally leads to a significant lowering of the predictive capabilities of the machine learning algorithms, which does not happen for the other variables. Such a finding reinforces already published results that have highlighted the role of depression [22]. These results add to the literature on ML and obesity by focusing on relevant psychological parameters for the prediction of BMI, and suggest that affective variables, particularly depression, should be considered in the preventive and treatment care of BMI-related problems, especially in the case of elevated BMI and obesity.
To our knowledge, no prior investigation has used ML techniques to test the predictive effects of emotional and affective variables over BMI values. Previously published studies employing ML have instead taken into account physiological parameters such as voice signals [15] and face images [16,17]. Further research should combine these findings by taking into account both medical and psychological parameters simultaneously, which would help to verify and compare the predictive role of these variables.
We must address the limitations of the current study. Firstly, it did not employ newly collected data, which limits our inferences; on the other hand, this gave us a basis for comparison and allowed us to test the reproducibility of previous findings [23,24,25,26]. Secondly, this study suffers from a number of methodological flaws, such as the cross-sectional study design and a prevalence of self-report measures (with the exception of the BMI values, which were directly assessed by the medical staff), as already discussed in [22]. These issues should be addressed in future studies. Thirdly, from a technical perspective, the main limitation of this work is surely the restricted number of subjects. Increasing the size of the dataset, possibly in a balanced way, would help to strengthen the obtained results. It would also allow the use of more powerful, yet data-hungry, algorithms, such as deep neural networks. Lastly, aside from BMI, the present study took into account psychological and demographic variables only. Given the multifactorial nature of weight-related disorders, future studies need to include relevant medical and ‘lifestyle’ variables which may contribute to the explanation of the present results (e.g., actual caloric intake, weekly exercise, social support).
In conclusion, despite these limitations, the present findings provide statistically strong evidence for the possibility of predicting BMI values by means of a set of psychological variables with negative contents. In particular, this is one of the first studies investigating the predictive role of psychological factors over a condition such as obesity through ML algorithms [47]. These data highlight the importance of considering the affective component of the individual’s experience for a better and more complete understanding of weight-related disorders, as it can inform psychological interventions and treatment approaches, as well as improve preventive and therapeutic strategies. Moreover, the use of ML has several advantages: it can outperform traditional statistical approaches, it can be used to compare the impact of several variables on the prediction of the chosen outcome, and it can handle many kinds of variables. However, in order to strengthen these findings, future research aimed at overcoming the limitations of the present study is required.

Author Contributions

Conceptualization, E.T., G.M. and P.S.; methodology, G.D. and F.A.; software, G.D.; validation, M.R., E.T. and G.M.; data curation, G.M.; writing—original draft preparation, G.D., F.A. and G.M.; writing—review and editing, M.R., P.S. and E.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chooi, Y.C.; Ding, C.; Magkos, F. The epidemiology of obesity. Metabolism 2019, 92, 6–10. [Google Scholar] [CrossRef] [Green Version]
  2. Dixon, J.B. The effect of obesity on health outcomes. Mol. Cell. Endocrinol. 2010, 316, 104–108. [Google Scholar] [CrossRef]
  3. Hill, J.O.; Wyatt, H.R.; Peters, J.C. Energy balance and obesity. Circulation 2012, 126, 126–132. [Google Scholar] [CrossRef]
  4. Bray, G.; Bouchard, C. Handbook of Obesity-Volume 2: Clinical Applications; CRC Press: Boca Raton, FL, USA, 2014; Volume 2. [Google Scholar]
  5. Gariepy, G.; Nitka, D.; Schmitz, N. The association between obesity and anxiety disorders in the population: A systematic review and meta-analysis. Int. J. Obes. 2010, 34, 407–419. [Google Scholar] [CrossRef] [Green Version]
  6. Luppino, F.S.; de Wit, L.M.; Bouvy, P.F.; Stijnen, T.; Cuijpers, P.; Penninx, B.W.; Zitman, F.G. Overweight, obesity, and depression: A systematic review and meta-analysis of longitudinal studies. Arch. Gen. Psychiatry 2010, 67, 220–229. [Google Scholar] [CrossRef]
  7. Stice, E. Risk and Maintenance Factors for Eating Pathology: A Meta-Analytic Review. Psychol. Bull. 2002, 128, 825–848. [Google Scholar] [CrossRef] [Green Version]
  8. Grundy, A.; Cotterchio, M.; Kirsh, V.A.; Kreiger, N. Associations between anxiety, depression, antidepressant medication, obesity and weight gain among Canadian women. PLoS ONE 2014, 9, e99780. [Google Scholar] [CrossRef]
  9. Selya, A.S.; Anshutz, D. Machine Learning for the Classification of Obesity from Dietary and Physical Activity Patterns. In Advanced Data Analytics in Health; Smart Innovation, Systems and Technologies; Giabbanelli, P., Mago, V., Papageorgiou, E., Eds.; Springer: Cham, Switzerland, 2018; Volume 93. [Google Scholar] [CrossRef]
  10. Bouharati, S.; Bounechada, M.; Djoudi, A.; Harzallah, D.; Alleg, F.; Benamrani, H. Prevention of obesity using artificial intelligence techniques. Int. J. Sci. Eng. Investig. 2012, 1, 146–150. [Google Scholar]
  11. Triantafyllidis, A.; Polychronidou, E.; Alexiadis, A.; Rocha, C.L.; Oliveira, D.N.; da Silva, A.S.; Freire, A.L.; Macedo, C.; Sousa, I.F.; Werbet, E.; et al. Computerized decision support and machine learning applications for the prevention and treatment of childhood obesity: A systematic review of the literature. Artif. Intell. Med. 2020, 104, 101844. [Google Scholar] [CrossRef]
  12. Chen, J.H.; Asch, S.M. Machine learning and prediction in medicine—Beyond the peak of inflated expectations. N. Engl. J. Med. 2017, 376, 2507. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Koza, J.R.; Bennett, F.H.; Andre, D.; Keane, M.A. Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming. In Artificial Intelligence in Design’96; Gero, J.S., Sudweeks, F., Eds.; Springer: Dordrecht, The Netherlands, 1996. [Google Scholar] [CrossRef]
  14. Cabitza, F.; Rasoini, R.; Gensini, G.F. Unintended consequences of machine learning in medicine. JAMA 2017, 318, 517–518. [Google Scholar] [CrossRef] [PubMed]
  15. Lee, B.J.; Kim, K.H.; Ku, B.; Jang, J.S.; Kim, J.Y. Prediction of body mass index status from voice signals based on machine learning for automated medical applications. Artif. Intell. Med. 2013, 58, 51–61. [Google Scholar] [CrossRef]
  16. Jiang, M.; Shang, Y.; Guo, G. On visual BMI analysis from facial images. Image Vis. Comput. 2019, 89, 183–196. [Google Scholar] [CrossRef]
  17. Dantcheva, A.; Bremond, F.; Bilinski, P. Show me your face and I will tell you your height, weight and body mass index. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3555–3560. [Google Scholar]
  18. Tai, C.H.; Lin, D.T. A framework for healthcare everywhere: BMI prediction using kinect and data mining techniques on mobiles. In Proceedings of the 2015 16th IEEE International Conference on Mobile Data Management, Pittsburgh, PA, USA, 15–18 June 2015; pp. 126–129. [Google Scholar]
  19. Chen, H.; Yang, B.; Liu, D.; Liu, W.; Liu, Y.; Zhang, X.; Hu, L. Using blood indexes to predict overweight statuses: An extreme learning machine-based approach. PLoS ONE 2015, 10, e0143003. [Google Scholar] [CrossRef]
  20. Recenti, M.; Ricciardi, C.; Gìslason, M.; Edmunds, K.; Carraro, U.; Gargiulo, P. Machine Learning Algorithms Predict Body Mass Index Using Nonlinear Trimodal Regression Analysis from Computed Tomography Scans. In Proceedings of the XV Mediterranean Conference on Medical and Biological Engineering and Computing—MEDICON 2019; Henriques, J., Neves, N., de Carvalho, P., Eds.; Springer: Cham, Switzerland, 2020; Volume 76. [Google Scholar]
  21. Gross, T.J.; Araujo, R.B.; Vale, F.A.C.; Bessani, M.; Maciel, C.D. Dependence between cognitive impairment and metabolic syndrome applied to a Brazilian elderly dataset. Artif. Intell. Med. 2018, 90, 53–60. [Google Scholar] [CrossRef]
  22. Andrei, F.; Nuccitelli, C.; Mancini, G.; Reggiani, G.M.; Trombini, E. Emotional intelligence, emotion regulation and affectivity in adults seeking treatment for obesity. Psychiatry Res. 2018, 269, 191–198. [Google Scholar] [CrossRef]
  23. Stodden, V.; Miguez, S. Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research. J. Open Res. Softw. 2014, 2, e21. [Google Scholar] [CrossRef]
  24. Piccolo, S.R.; Frampton, M.B. Tools and techniques for computational reproducibility. Gigascience 2016, 5, 30. [Google Scholar] [CrossRef] [Green Version]
  25. McDermott, M.; Wang, S.; Marinsek, N.; Ranganath, R.; Ghassemi, M.; Foschini, L. Reproducibility in machine learning for health. arXiv 2019, arXiv:1907.01463. [Google Scholar]
  26. Stupple, A.; Singerman, D.; Celi, L.A. The reproducibility crisis in the age of digital medicine. NPJ Digit. Med. 2019, 2, 1–3. [Google Scholar] [CrossRef] [Green Version]
  27. Petrides, K.V.; Furnham, A. The role of trait emotional intelligence in a gender-specific model of organizational variables. J. Appl. Soc. Psychol. 2006, 36, 552–569. [Google Scholar] [CrossRef]
  28. Gross, J.J.; John, O.P. Individual differences in two emotion regulation processes: Implications for affect, relationships, and well-being. J. Pers. Soc. Psychol. 2003, 85, 348. [Google Scholar] [CrossRef]
  29. Argyle, M.; Martin, M.; Crossland, J. Happiness as a function of personality and social encounters. In Recent Advances in Social Psychology: An International Perspective; Forgas, J.P., Innes, J.M., Eds.; Elsevier: Amsterdam, The Netherlands; North-Holland Publishers: Amsterdam, The Netherlands, 1989; pp. 189–203. [Google Scholar]
  30. Gormally, J.; Black, S.; Daston, S.; Rardin, D. The assessment of binge eating severity among obese persons. Addict. Behav. 1982, 7, 47–55. [Google Scholar] [CrossRef]
  31. Beck, A.T.; Steer, R.A. Manual for the Revised Beck Depression Inventory; Psychological Corporation: San Antonio, TX, USA, 1987. [Google Scholar]
  32. Spielberger, C.D. Manual for the State-Trait Anxiety Inventory; Consulting Psychologists Press Inc.: Palo Alto, CA, USA, 1983. [Google Scholar]
  33. World Health Organization. Physical Status: The Use of and Interpretation of Anthropometry; Report of a WHO Expert Committee; World Health Organization: Geneva, Switzerland, 1995. [Google Scholar]
  34. Gazalba, I.; Reza, N.G.I. Comparative analysis of k-nearest neighbor and modified k-nearest neighbor algorithm for data classification. In Proceedings of the 2017 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 1–2 November 2017; pp. 294–298. [Google Scholar]
  35. Chen, W.; Xie, X.; Wang, J.; Pradhan, B.; Hong, H.; Bui, D.T.; Duan, Z.; Ma, J. A comparative study of logistic model tree, random forest, and classification and regression tree models for spatial prediction of landslide susceptibility. Catena 2017, 151, 147–160. [Google Scholar] [CrossRef] [Green Version]
  36. Pisner, D.A.; Schnyer, D.M. Support vector machine. In Machine Learning; Academic Press: Cambridge, MA, USA, 2020; pp. 101–121. [Google Scholar]
  37. Lorencin, I.; Anđelić, N.; Španjol, J.; Car, Z. Using multi-layer perceptron with Laplacian edge detector for bladder cancer diagnosis. Artif. Intell. Med. 2020, 102, 101746. [Google Scholar] [CrossRef] [PubMed]
  38. Randhawa, K.; Loo, C.K.; Seera, M.; Lim, C.P.; Nandi, A.K. Credit card fraud detection using AdaBoost and majority voting. IEEE Access 2018, 6, 14277–14284. [Google Scholar] [CrossRef]
  39. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3146–3154. [Google Scholar]
  40. Probst, P.; Wright, M.N.; Boulesteix, A.L. Hyperparameters and tuning strategies for random forest. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1301. [Google Scholar] [CrossRef] [Green Version]
  41. Sharaff, A.; Gupta, H. Extra-tree classifier with metaheuristics approach for email classification. In Advances in Computer Communication and Computational Sciences; Springer: Singapore, 2019; pp. 189–197. [Google Scholar]
  42. Ranstam, J.; Cook, J.A. LASSO regression. J. Br. Surg. 2018, 105, 1348. [Google Scholar] [CrossRef]
  43. Zhang, Z.; Lai, Z.; Xu, Y.; Shao, L.; Wu, J.; Xie, G.S. Discriminative elastic-net regularized linear regression. IEEE Trans. Image Process. 2017, 26, 1466–1481. [Google Scholar] [CrossRef] [Green Version]
  44. Kohavi, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1995; Volume 14, pp. 1137–1145. [Google Scholar]
  45. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  46. Nuttall, F.Q. Body mass index: Obesity, BMI, and health: A critical review. Nutr. Today 2015, 50, 117. [Google Scholar] [CrossRef] [Green Version]
  47. Dunstan, J.; Aguirre, M.; Bastias, M.; Nau, C.; Glass, C.; Tobar, F. Predicting nationwide obesity from food sales using machine learning. Health Inform. J. 2020, 26, 653–663. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Classification: F1-scores of the cross-validation on the training set.
Table 1. Number of subjects for each BMI class.

BMI Class | Number of Subjects
Normal Weight | 60
Overweight | 25
Obesity | 136
Table 2. Hyper-parameters and values tested during tuning for the classification.

Algorithm | Parameter | Values
MLP | Activation Function | identity, logistic, tanh, relu
MLP | Solver | lbfgs, sgd, adam
MLP | Max Iterations | 200, 500, 1000
MLP | Alpha | 0.1, 0.01, 0.001, 0.0001
MLP | Hidden Layer Size | 50, 100, 150, 200
RF | Min Samples Leaf | 1, 3, 5
RF | Min Samples Split | 2, 4, 6
RF | Max Depth | 3, 5, 8
RF | Max Features | log2, sqrt
RF | Criterion | gini, entropy
RF | Bootstrap | true, false
RF | Number of Estimators | 50, 100, 200, 500
GB | Learning Rate | 0.01, 0.05, 0.1, 0.2
GB | Min Samples Leaf | 1, 3, 5
GB | Min Samples Split | 2, 4, 6
GB | Max Depth | 3, 5, 8
GB | Max Features | log2, sqrt
GB | Criterion | friedman mse, mae
GB | Subsample | 0.5, 0.75, 1
GB | Number of Estimators | 50, 100, 200, 500
ET | Min Samples Leaf | 1, 3, 5
ET | Min Samples Split | 2, 4, 6
ET | Max Depth | 3, 5, 8
ET | Max Features | log2, sqrt
ET | Criterion | gini, entropy
ET | Number of Estimators | 50, 100, 200, 500
Table 3. Classification: F1-score, specificity/sensitivity on the test set, after tuning (the F1-score is a single, global value per algorithm).

Algorithm | Class | Sensitivity | Specificity | F1-Score
MLP | Normal Weight | 0.67 | 0.91 | 0.81
MLP | Overweight | 0.60 | 0.90 | –
MLP | Obesity | 0.89 | 0.88 | –
GB | Normal Weight | 0.83 | 0.91 | 0.85
GB | Overweight | 0.60 | 0.93 | –
GB | Obesity | 0.89 | 0.94 | –
RF | Normal Weight | 0.92 | 0.91 | 0.89
RF | Overweight | 0.60 | 0.97 | –
RF | Obesity | 0.93 | 0.94 | –
ET | Normal Weight | 0.75 | 0.88 | 0.82
ET | Overweight | 0.60 | 0.97 | –
ET | Obesity | 0.89 | 0.82 | –
Table 4. Regression: Mean Absolute Error and Pearson Correlation Coefficient of the cross-validation on the training set.

Algorithm | MAE | PCC
LASSO | 4.35 | 0.81
EN | 4.35 | 0.80
CART | 5.93 | 0.63
KNN | 4.37 | 0.79
SVR | 5.33 | 0.75
MLP | 9.44 | 0.50
AB | 4.62 | 0.76
GB | 4.58 | 0.76
RF | 4.65 | 0.77
Table 5. Hyper-parameters and values tested during tuning for the regression.

Algorithm | Parameter | Values
LASSO | Alpha | 1.0, 0.75, 0.5, 0.25
EN | Alpha | 1.0, 0.75, 0.5, 0.25
KNN | N Neighbors | 3, 7, 11, 15, 21
KNN | Leaf Size | 1, 2, 3, 5
KNN | Weights | uniform, distance
KNN | Algorithm | auto, ball tree, kd tree, brute
RF | Min Samples Leaf | 1, 3, 5
RF | Min Samples Split | 2, 4, 6
RF | Max Depth | 3, 5, 8
RF | Max Features | log2, sqrt
RF | Criterion | mse, mae
RF | Bootstrap | true, false
RF | Number of Estimators | 50, 100, 200, 500
GB | Learning Rate | 0.01, 0.05, 0.1, 0.2
GB | Min Samples Leaf | 1, 3, 5
GB | Min Samples Split | 2, 4, 6
GB | Max Depth | 3, 5, 8
GB | Max Features | log2, sqrt
GB | Criterion | friedman mse, mae
GB | Subsample | 0.5, 0.75, 1
GB | Number of Estimators | 50, 100, 200, 500
ET | Min Samples Leaf | 1, 3, 5
ET | Min Samples Split | 2, 4, 6
ET | Max Depth | 3, 5, 8
ET | Max Features | log2, sqrt
ET | Criterion | mse, mae
ET | Number of Estimators | 50, 100, 200, 500
Table 6. Regression: Mean Absolute Error and Pearson Correlation Coefficient on the training and test set, after the tuning phase.

Algorithm | 4-Fold CV MAE | 4-Fold CV PCC | Test MAE | Test PCC
LASSO | 4.35 | 0.81 | 6.00 | 0.72
EN | 4.35 | 0.80 | 6.52 | 0.70
KNN | 4.31 | 0.76 | 5.50 | 0.76
GB | 4.14 | 0.79 | 5.27 | 0.75
RF | 4.26 | 0.79 | 5.31 | 0.78
ET | 4.41 | 0.78 | 5.57 | 0.76
Table 7. Classification: positive vs. negative variables, F1-scores of the cross-validation on the training set (the F1-score is a single, global value per algorithm, reported on the Normal Weight row).

Algorithm | Class | Positive Variables (Sen/Spec/F1) | Negative Variables (Sen/Spec/F1)
KNN | Normal Weight | 0.38/0.66/0.43 | 0.67/0.90/0.79
KNN | Overweight | 0.35/0.69/– | 0.45/0.83/–
KNN | Obesity | 0.41/0.78/– | 0.86/0.96/–
CART | Normal Weight | 0.33/0.67/0.44 | 0.67/0.89/0.77
CART | Overweight | 0.15/0.80/– | 0.30/0.85/–
CART | Obesity | 0.50/0.74/– | 0.87/0.91/–
SVC | Normal Weight | 0.48/0.73/0.51 | 0.79/0.91/0.86
SVC | Overweight | 0.30/0.74/– | 0.45/0.93/–
SVC | Obesity | 0.51/0.74/– | 0.96/0.96/–
MLP | Normal Weight | 0.50/0.69/0.52 | 0.71/0.90/0.82
MLP | Overweight | 0.25/0.78/– | 0.40/0.90/–
MLP | Obesity | 0.53/0.77/– | 0.94/0.94/–
AB | Normal Weight | 0.42/0.68/0.49 | 0.69/0.87/0.70
AB | Overweight | 0.15/0.83/– | 0.55/0.76/–
AB | Obesity | 0.56/0.63/– | 0.68/0.94/–
GB | Normal Weight | 0.40/0.75/0.51 | 0.69/0.91/0.81
GB | Overweight | 0.25/0.81/– | 0.35/0.92/–
GB | Obesity | 0.57/0.59/– | 0.96/0.88/–
RF | Normal Weight | 0.38/0.72/0.47 | 0.75/0.89/0.82
RF | Overweight | 0.20/0.80/– | 0.35/0.92/–
RF | Obesity | 0.52/0.56/– | 0.94/0.93/–
ET | Normal Weight | 0.38/0.73/0.49 | 0.75/0.88/0.81
ET | Overweight | 0.20/0.78/– | 0.30/0.91/–
ET | Obesity | 0.55/0.62/– | 0.93/0.93/–
Table 8. Regression: positive vs. negative variables, Mean Absolute Error and Pearson Correlation Coefficient of the cross-validation on the training set.

Algorithm | Positive Variables MAE | Positive Variables PCC | Negative Variables MAE | Negative Variables PCC
LASSO | 8.05 | 0.16 | 4.41 | 0.83
EN | 8.03 | 0.16 | 4.40 | 0.83
CART | 9.96 | 0.23 | 6.12 | 0.69
KNN | 8.47 | 0.10 | 4.37 | 0.80
SVR | 8.19 | 0.15 | 4.69 | 0.81
MLP | 9.88 | 0.08 | 7.74 | 0.62
AB | 8.01 | 0.17 | 4.35 | 0.82
GB | 8.04 | 0.28 | 4.34 | 0.82
RF | 8.13 | 0.30 | 4.34 | 0.83
ET | 8.14 | 0.33 | 4.18 | 0.84
Table 9. Classification: F1-scores of the cross-validation on the training set, removing in turn one negative psychological variable (the F1-score is a single, global value per algorithm, reported on the N.W. row).

Algorithm | Class | No DE (Sen/Spec/F1) | No TA (Sen/Spec/F1) | No BE (Sen/Spec/F1) | No ES (Sen/Spec/F1)
KNN | N.W. | 0.60/0.77/0.59 | 0.63/0.91/0.80 | 0.63/0.85/0.74 | 0.56/0.92/0.78
KNN | Over. | 0.30/0.76/– | 0.55/0.84/– | 0.20/0.83/– | 0.45/0.82/–
KNN | Obes. | 0.58/0.82/– | 0.89/0.96/– | 0.86/0.94/– | 0.92/0.96/–
CART | N.W. | 0.46/0.80/0.56 | 0.65/0.88/0.75 | 0.56/0.91/0.75 | 0.75/0.88/0.78
CART | Over. | 0.25/0.83/– | 0.30/0.86/– | 0.35/0.85/– | 0.30/0.90/–
CART | Obes. | 0.66/0.62/– | 0.86/0.87/– | 0.90/0.85/– | 0.88/0.88/–
SVC | N.W. | 0.65/0.81/0.63 | 0.75/0.92/0.85 | 0.54/0.88/0.77 | 0.73/0.91/0.84
SVC | Over. | 0.35/0.80/– | 0.50/0.91/– | 0.35/0.85/– | 0.45/0.91/–
SVC | Obes. | 0.63/0.79/– | 0.95/0.96/– | 0.94/0.97/– | 0.95/0.94/–
MLP | N.W. | 0.65/0.78/0.64 | 0.69/0.91/0.82 | 0.75/0.88/0.81 | 0.69/0.91/0.82
MLP | Over. | 0.30/0.87/– | 0.50/0.89/– | 0.30/0.90/– | 0.45/0.89/–
MLP | Obes. | 0.68/0.75/– | 0.93/0.94/– | 0.93/0.96/– | 0.94/0.94/–
AB | N.W. | 0.52/0.81/0.57 | 0.77/0.87/0.75 | 0.73/0.88/0.72 | 0.75/0.85/0.76
AB | Over. | 0.30/0.80/– | 0.30/0.83/– | 0.30/0.78/– | 0.35/0.85/–
AB | Obes. | 0.61/0.66/– | 0.79/0.93/– | 0.75/0.96/– | 0.80/0.94/–
GB | N.W. | 0.54/0.83/0.62 | 0.67/0.89/0.79 | 0.67/0.88/0.79 | 0.75/0.91/0.82
GB | Over. | 0.25/0.86/– | 0.30/0.90/– | 0.30/0.90/– | 0.40/0.90/–
GB | Obes. | 0.70/0.63/– | 0.94/0.90/– | 0.94/0.90/– | 0.93/0.94/–
RF | N.W. | 0.54/0.80/0.61 | 0.69/0.88/0.79 | 0.65/0.89/0.79 | 0.79/0.89/0.82
RF | Over. | 0.25/0.85/– | 0.30/0.90/– | 0.35/0.90/– | 0.35/0.92/–
RF | Obes. | 0.69/0.68/– | 0.92/0.90/– | 0.94/0.90/– | 0.93/0.93/–
ET | N.W. | 0.58/0.81/0.62 | 0.71/0.90/0.81 | 0.60/0.88/0.78 | 0.81/0.91/0.83
ET | Over. | 0.20/0.85/– | 0.35/0.91/– | 0.35/0.90/– | 0.35/0.92/–
ET | Obes. | 0.69/0.69/– | 0.94/0.90/– | 0.94/0.90/– | 0.94/0.93/–
Table 10. Regression: Mean Absolute Error and Pearson Correlation Coefficient of the cross-validation on the training set, removing in turn one negative psychological variable.

Algorithm | No DE (MAE/PCC) | No TA (MAE/PCC) | No BE (MAE/PCC) | No ES (MAE/PCC)
LASSO | 6.83/0.58 | 5.01/0.78 | 4.59/0.82 | 4.47/0.82
EN | 6.83/0.58 | 5.00/0.78 | 4.58/0.82 | 4.46/0.82
CART | 9.07/0.30 | 5.92/0.69 | 5.80/0.72 | 6.39/0.67
KNN | 6.68/0.57 | 4.86/0.78 | 4.41/0.83 | 4.60/0.80
SVR | 6.92/0.56 | 4.86/0.79 | 4.82/0.81 | 4.46/0.81
MLP | 8.18/0.44 | 7.26/0.61 | 7.09/0.66 | 8.02/0.62
AB | 7.25/0.54 | 4.58/0.80 | 4.56/0.82 | 4.63/0.80
GB | 6.69/0.55 | 4.60/0.79 | 4.41/0.82 | 4.66/0.79
RF | 6.77/0.53 | 4.54/0.81 | 4.54/0.82 | 4.75/0.79
ET | 7.09/0.51 | 4.45/0.82 | 4.55/0.83 | 4.67/0.80
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
