Article

Health to Eat: A Smart Plate with Food Recognition, Classification, and Weight Measurement for Type-2 Diabetic Mellitus Patients’ Nutrition Control

1 Department of Electronics, Information and Communication Engineering, Kangwon National University, Samcheok-si 25913, Republic of Korea
2 Department of Computer Engineering, Kangwon National University, Samcheok-si 25913, Republic of Korea
3 Department of Liberal Studies, Kangwon National University, Samcheok-si 25913, Republic of Korea
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(3), 1656; https://doi.org/10.3390/s23031656
Submission received: 8 December 2022 / Revised: 30 January 2023 / Accepted: 31 January 2023 / Published: 2 February 2023

Abstract: The management of type 2 diabetes mellitus (T2DM) is generally not focused on pharmacological therapy alone. Medical nutrition therapy is often neglected by patients for several reasons, such as difficulty determining the right nutritional pattern, difficulty regulating daily eating patterns, or simply not heeding the dietary recommendations given by doctors. Managing nutritional therapy is one of the important efforts diabetic patients can make to prevent the disease from becoming more complex. A diet with proper nutrition helps patients manage healthy eating. The Smart Plate Health to Eat is a technological innovation that helps patients and users determine the type, weight, and nutrient content of foods. This study covered 50 types of food with a total of 30,800 food images, using the YOLOv5s algorithm; food identification, weight measurement, and nutrition analysis were carried out with a Chenbo load cell weight sensor (1 kg), an HX711 weight weighing A/D module pressure sensor, and an IMX219-160 camera module (Waveshare). The results showed identification accuracies of 58% for rice, 60% for braised quail eggs in soy sauce, 62% for spicy beef soup, and 31% for dried radish, with 100% accuracy for weight and nutrition measurement.

1. Introduction

Nutrition is an important component of continued growth. During their growth and development period, children require nutrients such as protein, carbohydrates, fats, minerals, vitamins, and water; if these requirements are not met, further growth and development may be hampered [1]. Nutrients serve as basic material for the formation and repair of body cell tissue; as protectors and regulators of body temperature; and as a source of energy for organ function, motion, and physical activity. Nutrients are the elements the body requires for its processes and functions [2]. Energy is obtained from a variety of nutrients, including carbohydrates, proteins, fats, water, vitamins, and minerals [3]. If a person’s nutritional needs are not met, growth and development can be hampered. Many of today’s nutritional problems are the result of poor eating habits, in which people fail to pay attention to the diversity of food consumed, the body’s energy needs, and a balanced proportion of food. Food is one of life’s most basic needs [4]: by eating, the body obtains the energy it requires for activity and metabolism, keeping the body healthy and the metabolism running smoothly [5].
Each type of food consumed has a different calorie content, and each individual consumes a different amount of food. Many people nowadays eat far too much [6]. Negative emotions, exposure to delicious food, an inability to restrain food intake, not feeling full, cravings, and even outright food addiction are all reasons for this excessive consumption [7]. People suffering from diseases such as coronary heart disease and gout must make careful food choices. Obesity, in addition to these diseases, is a problem for some people, and it is a disease that can be avoided in a variety of ways [8], one of which is recording and selecting daily calorie consumption and conducting a dietary assessment. If foods are consumed uncontrollably, excess calories accumulate in the body, leading to obesity [9]. Self-control in food consumption, including measuring the calories of the food to be consumed, is required to avoid this. Controlling one’s diet can lower the risk of obesity and related diseases, particularly diabetes (Table 1) [10].
The most common types of diabetes mellitus are type 1 diabetes (insulin-dependent diabetes mellitus), type 2 diabetes (non-insulin-dependent diabetes mellitus), other specific types of diabetes mellitus, and gestational diabetes mellitus. Type 2 diabetes is a metabolic disorder characterized by hyperglycemia caused by insulin resistance and/or insulin deficiency [11]. T2DM patients (Table 2) require diabetes management to control their blood glucose levels properly and consistently. If type 2 DM patients do not control their blood sugar properly, it can rise and fall unstably, which can lead to complications. Diabetes mellitus control is accomplished through basic management principles, such as modifying an unhealthy lifestyle through diet, physical exercise, and adherence to antidiabetic drug consumption [12].
As a result, an algorithm is required that makes it easier for a system to recognize, compare, and study foods automatically from image data, so that differences between the foods to be consumed can be discovered [13]. Information technology is undeniably evolving rapidly and tends to help humans complete tasks more quickly and efficiently [14]. In this all-digital era, it is entirely possible to build computation that processes information from an image for automatic object recognition [15]. An image processing system is one in which an image is used as both the input and the output of the process; the goal of image processing is to improve an image’s quality so that it can be easily interpreted by humans or machines [16]. Because of its significant capabilities in modeling complex data such as images and sounds, deep learning has become one of the hottest topics in machine learning [17], and the convolutional neural network (CNN) is currently the deep learning method with the most significant results in image recognition [18].
The CNN, or convolutional neural network, is a deep learning method that has been widely applied to image data [12]. In image classification, the CNN method has surpassed machine learning methods such as the SVM and currently produces the most significant results in image recognition. A CNN works in a way that resembles the function of the human brain: the computer is given image data to study [19], trained to recognize each visual element in the image, and learns each image pattern, so that it can identify the image [20]. CNNs have recently advanced to become a sophisticated technique for image classification tasks [21]. A CNN can learn hierarchical features for image classification, with higher layers learning more complex features, which yields higher classification accuracy. The CNN is considered the most effective model for solving object detection and recognition problems [22]. In 2012, CNN research achieved digital image recognition on certain datasets with an accuracy that rivaled that of humans [23].
In addition to the CNN, there is the “You Only Look Once” (YOLO) algorithm. YOLO takes a different approach from earlier algorithms: a single neural network processes the entire image [24]. The network divides the image into regions and predicts bounding boxes and probabilities; each bounding box is weighted by the probability that it contains an object [25]. Detection is more complex than classification: classification can recognize objects but cannot tell where they are in the image [26], and if the image contains more than one object, classification fails. YOLO is a real-time smart detection neural network with a simple architecture, namely a convolutional neural network [27]. This network uses only standard layer types: convolution with 3 × 3 kernels and max-pooling with 2 × 2 kernels [28]. The final convolutional layer employs 1 × 1 kernels to reduce the data to a 13 × 13 × 125 format. This 13 × 13 should look familiar: it is the size of the grid into which the image is divided. The 125 is the number of channels for each grid cell [29], containing the data for the bounding boxes and class predictions: each grid cell predicts 5 bounding boxes, and each box is described by 25 data elements [30].
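To make the arithmetic behind the 125 channels explicit, here is the breakdown under the 20-class PASCAL VOC configuration used in the original YOLO papers (the class count is our assumption; it is not stated above): each of the 5 boxes per cell carries 4 box coordinates, 1 objectness score, and 20 class probabilities, so

$$5 \times (4 + 1 + 20) = 5 \times 25 = 125 \ \text{channels per } 13 \times 13 \ \text{grid cell}.$$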
In this paper, we propose “Health to Eat: A Smart Plate with Real-Time Food Recognition, Classification, and Weight Measurement for Diabetic Patients’ Nutrition Control”, a system that supports people with and without diabetes in finding out information about the food they consume according to the nutritional standards they want to follow. Our research has a limitation: it is preliminary (first-stage) research, and we use Korean food as the case study for our smart plate. This research is related to our previous work, in which we developed a mobile-based diabetes application [31,32] that assists diabetic and non-diabetic users in controlling their health activities through a glucometer and exercise devices such as a treadmill and a connected gym cycle, all integrated into the mobile-based diabetes application. In the future, we hope to combine the smart plate with the mobile-based diabetes application, so that data from food consumption analysis can be processed into information for users and into recommendations that doctors can use in managing diabetes.

2. Materials and Methods

2.1. Research Approach

Based on the research approach flow design (Figure 1), five stages were carried out in this study:
  • Identification
    The identification stage is the first stage in this study. The first step is to choose a research topic; this research was motivated by a lack of knowledge about food classification based on food names. Next, the problem is identified and formulated: the researcher must know the main issues to be solved and what underpins the research. Objectives are then determined; they must conform to the problem formulation and address the existing issues related to the topic. Finally, problem boundaries are established and a research methodology is designed so that the research stays directed toward its goals. Designing the methodology helps researchers prepare the details the research requires, such as the software used, the data used, how the data should be processed, and how the stages of analysis can be explained coherently and clearly.
  • Information Gathering
    The information gathering stage involves studying the literature. Every researcher must complete this stage, given the need for rich references in the field under investigation. Reviewing prior research that is similar helps researchers develop ideas and find new things to explore.
  • Data Validation and Evaluation
    The third stage involves data validation and evaluation. Preprocessing improves the data after they have been collected for the predetermined topic: the data are corrected and prepared so that they can be processed by the algorithm.
  • Hardware and Software Development
    At this stage, the smart plate hardware is designed and developed with the components that were analyzed for integration into a smart plate. On the software side, the image processing pipeline is designed and developed: images captured by the camera are processed, and the data are visualized as food analysis results for the smart plate.
  • Testing
    The testing phase ensures the functionality of the designed and developed hardware and software. At this point, researchers put the system to the test by placing various types of food on the smart plate.

2.2. Artificial Intelligence

Artificial intelligence (AI) is a technique used to solve problems by imitating the intelligence of living and non-living things. The study of how to make computers do things that humans currently do better is known as artificial intelligence. Artificial intelligence (AI) seeks to discover or model human thought processes in order to create machines that can mimic human behavior [33].
Artificial intelligence is the study, application, and teaching of computer programming to perform tasks that humans consider intelligent. As a result, it is hoped that computers will be able to mimic some of the functions of the human brain, such as thinking and learning. This artificial intelligence system can be trained or learned on the computer, a process known as “machine learning” [34].
Deep learning is a type of artificial neural network algorithm that takes metadata as input and processes them through a series of hidden layers of nonlinear transformations to compute the output value. A distinct feature of deep learning algorithms is automatic feature extraction: the algorithm automatically captures the relevant features needed to solve a problem. This is important in artificial intelligence because it reduces the programming burden of selecting explicit features. Such algorithms can be used for image recognition, speech recognition, text classification, and other applications, under supervised, unsupervised, or semi-supervised learning [35].
Artificial neural networks, which underlie deep learning, mimic the operation of biological neural networks. The algorithm employs hidden-layer neurons to translate input data from the input layer to the target at the output layer. As the number of hidden layers increases, the algorithm becomes more complex and abstract. Deep learning neural networks are constructed by ascending from a simple hierarchy of a few layers to many layers (multi-layer); accordingly, deep learning can solve complex problems that require a large number of non-linear transformation layers. Machine learning is an artificial intelligence approach widely used to replace or imitate human behavior in order to solve problems or perform automation; as the name suggests, it attempts to mimic how humans and other intelligent creatures learn and generalize, with at least two major applications: classification and prediction. CNNs, or convolutional neural networks, are the most popular neural network technique. A CNN can process multidimensional data such as video and images. Its operation is nearly identical to that of neural networks in general; the significant difference is that each unit in a CNN layer performs convolution with a two-dimensional or higher-dimensional kernel [36].
In a CNN, the kernel combines spatial features and has a spatial form similar to the input medium. The CNN then shares parameters to reduce the number of variables, making it easier to learn. The term “convolutional neural network” refers to the network’s use of the mathematical operation known as convolution. The CNN is trained to examine an object’s features in order to predict it [37]. The network consists of two main parts:
  • Feature learning (feature extraction layer); this section contains layers that receive the image input directly at the start and process it into multidimensional array output. It consists of two layer types, a convolution layer and a pooling layer, and each layer produces feature maps, numbers that represent the image, which are then forwarded to the classification layer section.
  • Classification layer; this section is made up of several layers, each of which contains neurons fully connected to the next layer. It receives the output of the feature learning section, flattens it, and passes it through several fully connected hidden layers to produce output in the form of classification scores for each class (see the sketch after this list).
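To make this two-part structure concrete, the following is a minimal sketch in PyTorch (our illustration only; the study itself uses YOLOv5s, and the layer sizes and the 50-class output here are assumptions chosen to match the number of foods in this paper):

```python
import torch
import torch.nn as nn

class TinyFoodCNN(nn.Module):
    """Toy CNN mirroring the two-part structure described above."""
    def __init__(self, num_classes: int = 50):
        super().__init__()
        # Feature learning: convolution + pooling produce feature maps.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x3 kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 2x2 max-pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classification: flatten, then fully connected layers.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 128),  # assumes 224x224 RGB input
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one dummy 224x224 RGB image -> 50 class scores.
scores = TinyFoodCNN()(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 50])
```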

2.3. YOLO Algorithm

The YOLO algorithm is a real-time object detection algorithm that is under active development and has recently gained popularity. Most earlier detection systems performed detection by applying a model to the image at multiple locations and scales and scoring each region: a repurposed classifier or localizer was used as the detection system, and the region with the highest score was taken as a detection. The You Only Look Once (YOLO) algorithm instead detects objects in real time [38]. Before training, an annotation process is required to form the dataset: each annotation records a class name, the object’s X and Y coordinates, and the width and height of the bounding box [39].
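For reference, YOLO-family tools (including YOLOv5) store these annotations as one plain-text line per object: a class index followed by the box’s x-center, y-center, width, and height, each normalized to [0, 1] by the image size. The two lines below are made-up examples (e.g., class 0 might map to rice in a dataset like this one):

```
0 0.512 0.430 0.310 0.280
17 0.250 0.660 0.180 0.150
```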
To detect objects in an image, YOLO employs an artificial neural network (ANN) approach. The network segments the image into a grid and predicts bounding boxes and probabilities for each region; the bounding boxes are then weighted by the predicted probabilities. YOLO has several advantages over classifier-oriented systems: it sees the entire image at test time, so its predictions are informed by the global context of the image. YOLO employs a convolutional-neural-network-like architecture, using only convolution and pooling layers; the final convolution layer is adjusted according to the number of classes and prediction boxes desired. Convolutional neural networks, also known as CNNs or ConvNets, are a type of deep feed-forward artificial neural network widely used in image analysis, capable of detecting and recognizing objects in images. In general, a CNN is not much different from a traditional neural network: it is made up of neurons with weights, biases, and activation functions, arranged in an input layer, an output layer, and several hidden layers. The YOLO architecture uses only two layer types: convolution layers and pooling layers [40].
The fifth-generation object detection model, YOLOv5, was released in April 2020. In general, its architecture is not significantly different from the previous YOLO generation, but YOLOv5 is written in Python rather than C, as was the case in previous versions, which makes installation and integration on IoT devices easier [41].
Furthermore, the PyTorch community is larger than the Darknet community, implying that PyTorch will receive more contributions and has greater growth potential. Performance comparisons between YOLOv4 and YOLOv5 are difficult to make because they are written in different programming languages on different frameworks; over time, however, YOLOv5 proved more performant than YOLOv4 in some cases, earning the computer vision community’s trust alongside YOLOv4. There are several variants of YOLOv5, each with its own detection speed and mAP performance [42] (Table 3).

2.4. Smart Plate Procedure

YOLO (You Only Look Once) is an object detection network. YOLOv5s is used in this study; it is tasked with detecting objects, determining where in the image certain objects are present, and classifying those objects. Simply put, an image is used as input, and the output is a vector of bounding boxes and class predictions (Figure 2).
The food images used in this study were taken from the dataset: 30,800 images in jpg format covering fifty different types of food (Table 4). Pre-processing of the data, consisting of labeling and resizing, is then carried out. Image labeling is the initial stage, in which each image in the dataset is labeled to store image information; a bounding box and a class name are assigned to each image object. Next, the images are resized to improve the performance of the YOLOv5s model in object recognition.
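As an illustration of the resizing step (a sketch under our assumptions: the paper does not state the target size, so the common YOLOv5 input of 640 × 640 is used, and the folder names are hypothetical), the dataset could be batch-resized with Pillow:

```python
from pathlib import Path
from PIL import Image

SRC = Path("dataset/raw")       # hypothetical input folder of jpg images
DST = Path("dataset/resized")   # hypothetical output folder
SIZE = (640, 640)               # common YOLOv5 input size; the paper's target is not stated
DST.mkdir(parents=True, exist_ok=True)

for jpg in SRC.glob("*.jpg"):
    img = Image.open(jpg).convert("RGB")
    # A uniform input size helps the detector train and infer consistently.
    img.resize(SIZE, Image.Resampling.BILINEAR).save(DST / jpg.name)
```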
The neural network used in this study for the YOLOv5s model consists of convolutional layers with 3 × 3 kernels and max-pooling layers with 2 × 2 kernels. The final convolutional layer employs a 1 × 1 kernel to shrink the data to a 13 × 13 × 40 form: the grid size is 13 × 13, and the filter formula yields 40 channels.
Real-time testing is carried out with the IMX219-160 camera module (Waveshare). Tests are run to determine the object-detection accuracy of the newly trained model.
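The paper does not publish its capture-and-detect loop, so the following is only a sketch of what real-time testing of this kind can look like, using the public Ultralytics torch.hub entry point. The weights path best.pt and the camera index are hypothetical, and a CSI camera on a Raspberry Pi may need a GStreamer pipeline string instead of index 0:

```python
import cv2
import torch

# Load a custom-trained YOLOv5s model; the weights path "best.pt" is hypothetical.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

cap = cv2.VideoCapture(0)  # camera index 0; a CSI camera may need a GStreamer pipeline
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB input
    results = model(rgb)
    # Each detection row: x1, y1, x2, y2, confidence, class index.
    for *box, conf, cls in results.xyxy[0].tolist():
        print(model.names[int(cls)], f"{conf:.2f}", box)
    if cv2.waitKey(1) == 27:  # press Esc to stop
        break
cap.release()
```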

3. Results

3.1. System Development

Based on these concerns, the researchers proposed a study measuring food calories with a Chenbo load cell weight sensor (1 kg) and an HX711 weight weighing A/D module pressure sensor. The IMX219-160 camera module (Waveshare) and YOLOv5s are used to detect the type of food, while the load measuring sensor, specifically the loadcell sensor, calculates the weight of the food. The YOLOv5s algorithm identifies the type of food; as a classification method trained on data, it produces accurate results when a large amount of training data is used. The training used up to 30,800 data points from 50 different types of food across seven food categories (Table 4).
The results of food identification and weight measurement are used to calculate the number of calories in the food. In this study, a method for classifying food types is combined with the YOLOv5s algorithm, which classifies image data. Python 3.10.8 with the Keras package supports the processing of the YOLOv5s algorithm. The food images are classified using the YOLOv5s algorithm, which performs convolution operations on the data to form a pattern, and the hardware and software are integrated (Figure 3). MariaDB processes the SQL data concurrently; it connects clients via TCP/IP, named pipes, or the built-in UNIX socket. Flask, on the other hand, serves as the application framework as well as the web display: using Flask and the Python programming language, developers can easily create a structured web application and manage its behavior. The results of the system’s analysis are displayed on a web page.
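As a sketch of how a Flask endpoint could expose the analysis results to the web display (the route name and the get_latest_analysis helper are hypothetical, not taken from the authors’ code):

```python
from flask import Flask, jsonify

app = Flask(__name__)

def get_latest_analysis():
    # Hypothetical helper: would query MariaDB for the newest plate reading.
    return {"food": "rice", "weight_g": 190, "kcal": 247.0}

@app.route("/plate/latest")
def latest():
    # Serve the most recent food analysis as JSON for the web page.
    return jsonify(get_latest_analysis())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```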

3.2. Hardware and Software Design and Development

Figure 4, Figure 5, Figure 6 and Figure 7 depict the hardware design, which consists of a tool design and a system circuit design. Several electronic components are used to build the system: the Raspberry Pi 4 Model B, the Chenbo load cell weight sensor (1 kg), the HX711 weight weighing A/D module pressure sensor, and the IMX219-160 camera module (Waveshare). The camera module captures images of the tested samples, the loadcell sensor collects weight data, and the HX711 module is an ADC that converts the analog signal from the loadcell into a digital signal that the Raspberry Pi can process. The Raspberry Pi 4 Model B is a single-board computer that processes the data obtained from the system input, which are then displayed as the final result of the system process.
Figure 7 depicts the implementation of the system that was created. The device is shaped like a box, with a square board on top and an inverted angled board on the side. To take pictures, the camera module is mounted on the end of the inverted angled board, with the camera facing upwards. The loadcell sensors are installed between the box’s sides and the square board on which the food sample to be tested is placed. Inside are a Raspberry Pi 4 Model B and the IMX219-160 camera module (Waveshare). All of these components are linked to the Raspberry Pi 4 Model B directly via the GPIO pins and the CSI socket.
Four Chenbo load cell weight sensors (1 kg), the HX711 weight weighing A/D module pressure sensors, and the IMX219-160 camera module (Waveshare) are connected via jumper cables. The IMX219-160 camera module (Waveshare) is linked to the Raspberry Pi 4 Model B via CSI (Camera Serial Interface). Each loadcell sensor is linked to an HX711 module, and the HX711 modules are in turn linked to the Raspberry Pi 4 Model B.
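Reading a load cell through the HX711 from Python might look like the sketch below. This assumes the community hx711 package for the Raspberry Pi (whose API varies between forks); the pin numbers and the calibration factor are placeholders, not values from the paper:

```python
import RPi.GPIO as GPIO
from hx711 import HX711  # community package; the API varies between forks

GPIO.setmode(GPIO.BCM)
hx = HX711(dout_pin=5, pd_sck_pin=6)  # pin numbers are our assumption

CALIBRATION = 218.5  # placeholder: raw counts per gram, measured with a known weight
hx.zero()            # tare with the empty plate on the load cell

raw = hx.get_raw_data_mean(readings=10)  # average of 10 raw ADC readings
grams = raw / CALIBRATION
print(f"weight: {grams:.1f} g")

GPIO.cleanup()
```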
The overall hardware (Raspberry Pi 4, IMX219-160 camera module (Waveshare), Chenbo load cell weight sensors (1 kg), and HX711 weight weighing A/D module pressure sensors) is connected via socket to the software (server PC, YOLOv5s, MariaDB, Flask). Beginning with weight data retrieval, image capture, and image data extraction, the YOLOv5s method processes the data until the system’s output is displayed through the website.

3.3. Experimental Results

The system flow is as follows. First, the system’s libraries are initialized, and the loadcell sensor is calibrated. Pressing “enter” on the keyboard starts the system. The loadcell sensor then determines the weight of the food sample, and the system uses the camera to capture an image, which is processed to extract color data. Food identification is then based on the extracted image data and the existing training data, using the YOLOv5s method. Finally, the system calculates food calories from the identification result and the weight measurement (Table 5 and Table 6).
A formula for calculating food calories, together with sample calorie data for each food type, is used. This test determines whether the system can identify food and calculate its caloric value. Figure 7 and Figure 8 depict the system results as they appear on the website page. Figure 9 depicts the system’s output: the first display shows the type of food and its weight, where the food type comes from the identification process and the weight comes from the loadcell sensor readings. Figure 8 shows how the two appear on the website page; the second display, in the second line, shows the number of food calories calculated from the identification and the loadcell reading. Table 7, Table 8 and Table 9 show the accuracy of food identification using the YOLOv5s method, the weight analysis, and the food nutrients in the experimental foods.
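The calorie calculation follows directly from the per-100 g values in Table 9 (the explicit formula below is our reading of the tables, not an equation printed in the paper). For a measured weight $w$ in grams:

$$\text{kcal} = \frac{w}{100} \times \text{kcal}_{100\,\text{g}}, \qquad \text{e.g., } \frac{190}{100} \times 130.0 = 247\ \text{kcal for the rice serving in Table 8}.$$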

3.4. Evaluation System

The total number of foods tested in this study was 50, and the total number of food images was 30,800. The tests were conducted on four different foods: rice, braised quail eggs in soy sauce, spicy beef soup, and dried radish. The researchers tested the accuracy of image detection, weight, and nutrition three times with the same four food menus. The first test used a total of 800 images focused on the four food menus tested; the second test increased the total number of foods to 50, with 10,000 images; and the third test kept the same 50 foods but increased the number of images to 20,000 (Figure 10).
Researchers obtained results for the detection of food types across the three testing processes, with the highest accuracy values per type of food being rice (58%), braised quail eggs in soy sauce (60%), spicy beef soup (62%), and dried radish (31%). In terms of weight and nutrition analysis, the system performed admirably, with 100% accuracy rates for rice, braised quail eggs in soy sauce, spicy beef soup, and dried radish.

4. Summary and Conclusions

Based on the findings of the analysis and testing, it can be concluded that the system can calculate the number of food calories using the YOLOv5s method and loadcell sensor readings. The identification process was carried out with the YOLOv5s method, based on training data of up to 30,800 images covering 50 different types of food (Table 4). With k = 3 (the three test runs), the highest accuracy value obtained with the YOLOv5s model was 62%, for the spicy beef soup. The loadcell sensor readings demonstrated zero percent error, indicating that the loadcell works with high accuracy while also providing good nutritional values.
From the results of the first, second, and third tests, researchers can conclude that there are variations in accuracy, particularly in the detection of food types. We interpret this as being influenced by the increasing number of image types entered into the system, as well as by the appearance of the food, because many Korean dishes look alike. This affects the system’s ability to provide high accuracy values; as a result, the accuracy varies across the first, second, and third tests. Our current research on the design of a smart plate has limitations.
This study is the first in a series on disease management for people with type 2 diabetes. We focused on showing that the proposed concepts can be applied on a small scale (in the amount and type of food), even with a simple model. This research will be expanded in conjunction with our other work, specifically the development of a diabetes mobile application, with the ultimate goal of integrating all of these systems into a single diabetes healthcare management system that benefits both diabetic and non-diabetic patients.

5. Future Work

Our project’s mobile-based diabetes application will be integrated with the smart plate in the future. Patients and users, with or without diabetes, will then be able to easily control their health, from diet and activity to medication and even glucometer use, through the smart plate and the mobile-based diabetes application.

Author Contributions

S.R.J.: software developer, evaluate project, methodology, investigation, resources, supervision, evaluate functionality, hardware design, testing smart plate. S.S.: software developer, evaluate functionality, software and hardware integration. J.-H.L.: conceptualization, funding acquisition, resources, supervision, writing—original draft, writing—review and editing. S.K.K.: conceptualization, funding acquisition, resources, supervision, writing—original draft, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by “Regional Innovation Strategy (RIS)” through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (MOE) (2022RIS-005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thuita, A.W.; Kiage, B.N.; Onyango, A.N.; Makokha, A.O. Effect of a nutrition education programme on the metabolic syndrome in type 2 diabetes mellitus patients at a level 5 Hospital in Kenya: “a randomized controlled trial”. BMC Nutr. 2020, 6, 30. [Google Scholar] [CrossRef] [PubMed]
  2. Shu, L.; Shen, X.-M.; Li, C.; Zhang, X.-Y.; Zheng, P.-F. Dietary patterns are associated with type 2 diabetes mellitus among middle-aged adults in Zhejiang Province, China. Nutr. J. 2017, 16, 81. [Google Scholar] [CrossRef] [PubMed]
  3. García, M.; Porras, Y.; Richmond, D.; Jensen, M.; Madrigal, M.; Zúñiga, G. Designing a Mobile Application to Support Type 2 Diabetes Mellitus Care in Costa Rica: A Qualitative Exploratory Study. J. Acad. Nutr. Diet. 2016, 116, A75. [Google Scholar] [CrossRef]
  4. Petroni, M.L.; Brodosi, L.; Marchignoli, F.; Sasdelli, A.S.; Caraceni, P.; Marchesini, G.; Ravaioli, F. Nutrition in Patients with Type 2 Diabetes: Present Knowledge and Remaining Challenges. Nutrients 2021, 13, 2748. [Google Scholar] [CrossRef] [PubMed]
  5. Shen, Z.; Shehzad, A.; Chen, S.; Sun, H.; Liu, J. Machine Learning Based Approach on Food Recognition and Nutrition Estimation. Procedia Comput. Sci. 2020, 174, 448–453. [Google Scholar] [CrossRef]
  6. Guo, Y.; Huang, Z.; Sang, D.; Gao, Q.; Li, Q. The Role of Nutrition in the Prevention and Intervention of Type 2 Diabetes. Front. Bioeng. Biotechnol. 2020, 8. [Google Scholar] [CrossRef]
  7. Mssallem, M.; Qarni, A.; Jamaan, M. Dietary pattern of patients with type 2 diabetes mellitus including date consumption. J. Public Health Theory Pract. 2022, 30, 301–307. [Google Scholar] [CrossRef]
  8. Veit, M.; Asten, R.V.; Olie, A.; Prinz, P. The role of dietary sugars, overweight, and obesity in type 2 diabetes mellitus: A narrative review. Eur. J. Clin. Nutr. 2022, 76, 1497–1501. [Google Scholar] [CrossRef]
  9. Forouhi, N.G.; Misra, A.; Mohan, V.; Taylor, R.; Yancy, W. Dietary and nutritional approaches for prevention and management of type 2 diabetes. BMJ 2018, 361, k2234. [Google Scholar] [CrossRef]
  10. Mohanty, S.P.; Singhal, G.; Scuccimarra, E.A.; Kebaili, D.; Héritier, H.; Boulanger, V.; Salathé, M. The Food Recognition Benchmark: Using Deep Learning to Recognize Food in Images. Front. Nutr. 2022, 9, 875143. [Google Scholar] [CrossRef]
  11. Jeffrey, B.; Bagala, M.; Creighton, A.; Leavey, T.; Nicholls, S.; Wood, C.; Longman, J.; Barker, J.; Pit, S. Mobile phone applications and their use in the self-management of Type 2 Diabetes Mellitus: A qualitative study among app users and non-app users. Diabetol. Metab. Syndr. 2019, 11, 84. [Google Scholar] [CrossRef] [PubMed]
  12. Adu, M.D.; Malabu, U.H.; Aduli, A.E.M.; Aduli, B.S.M. Mobile application intervention to promote self-management in insulin-requiring type 1 and type 2 diabetes individuals: Protocol for a mixed methods study and non-blinded randomized controlled trial. Diabetes Metab. Syndr. Obes. 2019, 12, 789–800. [Google Scholar] [CrossRef] [PubMed]
  13. Lawal, O.M.; Huamin, Z.; Fan, Z. Ablation studies on YOLOFruit detection algorithm for fruit harvesting robot using deep learning. IOP Conf. Ser. Earth Environ. Sci. 2021, 922, 012001. [Google Scholar] [CrossRef]
  14. Al-Salmi, N.; Cook, P.; D’Souza, M.S. Diet Adherence among Adults with Type 2 Diabetes Mellitus: A Concept Analysis. Oman Med. J. 2022, 37, e361. [Google Scholar] [CrossRef]
  15. Srivastava, S.; Divekar, A.V.; Anilkumar, C.; Naik, I.; Kulkarni, V.; Pattabiraman, V. Comparative analysis of deep learning image detection algorithms. J. Big Data 2021, 8, 66. [Google Scholar] [CrossRef]
  16. Sińska, B.I.; Dłużniak-Gołaska, K.; Jaworski, M.; Panczyk, M.; Duda-Zalewska, A.; Traczyk, I.; Religioni, U.; Kucharska, A. Undertaking Healthy Nutrition Behaviors by Patients with Type 1 Diabetes as an Important Element of Self-Care. Int. J. Environ. Res. Public Health 2022, 19, 13173. [Google Scholar] [CrossRef] [PubMed]
  17. Ansari, M.Y.; Yang, Y.; Balakrishnan, S.; Abinahed, J.; Al-Ansari, A.; Warfa, M.; Almokdad, O.; Barah, A.; Omer, A.; Singh, A.V.; et al. A lightweight neural network with multiscale feature enhancement for liver CT segmentation. Sci. Rep. 2022, 12, 14153. [Google Scholar] [CrossRef]
  18. Lim, C.H.; Goh, K.M.; Lim, L.L. Explainable Artificial Intelligence in Oriental Food Recognition using Convolutional Neural Network. In Proceedings of the 2021 IEEE 11th International Conference on System Engineering and Technology (ICSET), Shah Alam, Malaysia, 6 November 2021; pp. 218–223. [Google Scholar] [CrossRef]
  19. Ansari, M.Y.; Chandrasekar, V.; Singh, A.V.; Dakua, S.P. Re-routing drugs to blood brain barrier: A comprehensive analysis of Machine Learning approaches with fingerprint amalgamation and data balancing. IEEE Access 2022. [Google Scholar] [CrossRef]
  20. Kalivaraprasad, B.; Prasad, M.; Vamsi, R.; Tejasri, U.; Santhoshi, M.N.; Pramod Kumar, A. Analysis of food recognition and calorie estimation using AI. AIP Conf. Proc. 2021, 2407, 020020. [Google Scholar] [CrossRef]
  21. Braber, N.D.; Hutten, M.M.R.V.; Oosterwijk, M.M.; Gant, C.M.; Hagedoorn, I.J.M.; Beijnum, B.J.F.V.; Hermens, H.J.; Laverman, G.D. Requirements of an Application to Monitor Diet, Physical Activity and Glucose Values in Patients with Type 2 Diabetes: The Diameter. Nutrients 2019, 11, 409. [Google Scholar] [CrossRef]
  22. Rajput, S.A.; Ashraff, S.; Siddiqui, M. Diet and Management of Type II Diabetes Mellitus in the United Kingdom: A Narrative Review. Diabetology 2022, 3, 72–78. [Google Scholar] [CrossRef]
  23. Agbai, C.M. Application of artificial intelligence (AI) in food industry. GSC Biol. Pharm. Sci. 2020, 13, 171–178. [Google Scholar] [CrossRef]
  24. Mantau, A.J.; Widayat, I.W.; Leu, J.-S.; Köppen, M. A Human-Detection Method Based on YOLOv5 and Transfer Learning Using Thermal Image Data from UAV Perspective for Surveillance System. Drones 2022, 6, 290. [Google Scholar] [CrossRef]
  25. Namgung, K.; Kim, T.-H.; Hong, Y.-S. Menu Recommendation System Using Smart Plates for Well-balanced Diet Habits of Young Children. Wirel. Commun. Mob. Comput. 2019, 2019, 7971381. [Google Scholar] [CrossRef]
  26. Önler, E. Real Time Pest Detection Using YOLOv5. Int. J. Agric. Nat. Sci. 2021, 14, 232–246. [Google Scholar]
  27. Samad, S.; Ahmed, F.; Naher, S.; Kabir, M.A.; Das, A.; Amin, S.; Islam, S.M.S. Smartphone apps for tracking food consumption and recommendations: Evaluating artificial intelligence-based functionalities, features and quality of current apps. Intell. Syst. Appl. 2022, 15, 200103. [Google Scholar] [CrossRef]
  28. Jubayer, F.; Alam Soeb, J.; Mojumder, A.N.; Paul, M.K.; Barua, P.; Kayshar, S.; Akter, S.S.; Rahman, M.; Islam, A. Detection of mold on the food surface using YOLOv5. Curr. Res. Food Sci. 2021, 4, 724–728. [Google Scholar] [CrossRef] [PubMed]
  29. Anthimopoulos, M.M.; Gianola, L.; Scarnato, L.; Diem, P.; Mougiakakou, S.G. A Food Recognition System for Diabetic Patients Based on an Optimized Bag-of-Features Model. IEEE J. Biomed. Health Inform. 2014, 18, 1261–1271. [Google Scholar] [CrossRef]
  30. Dai, G.; Hu, L.; Fan, J.; Yan, S.; Li, R. A Deep Learning-Based Object Detection Scheme by Improving YOLOv5 for Sprouted Potatoes Datasets. IEEE Access 2022, 10, 85416–85428. [Google Scholar] [CrossRef]
  31. Park, J.-C.; Kim, S.; Lee, J.-H. Self-Care IoT Platform for Diabetic Mellitus. Appl. Sci. 2021, 11, 2006. [Google Scholar] [CrossRef]
  32. Lee, J.-H.; Park, J.-C.; Kim, S.-B. Therapeutic Exercise Platform for Type-2 Diabetic Mellitus. Electronics 2021, 10, 1820. [Google Scholar] [CrossRef]
  33. Sheng, G.; Sun, S.; Liu, C.; Yang, Y. Food recognition via an efficient neural network with transformer grouping. Int. J. Intell. Syst. 2022, 37, 11465–11481. [Google Scholar] [CrossRef]
  34. Tagi, M.; Tajiri, M.; Hamada, Y.; Wakata, Y.; Shan, X.; Ozaki, K.; Kubota, M.; Amano, S.; Sakaue, H.; Suzuki, Y.; et al. Accuracy of an Artificial Intelligence–Based Model for Estimating Leftover Liquid Food in Hospitals: Validation Study. JMIR Form. Res. 2022, 6, e35991. [Google Scholar] [CrossRef] [PubMed]
  35. Chen, T.C.; Yu, S.Y. The review of food safety inspection system based on artificial intelligence, image processing, and robotic. J. Food Sci. Technol. 2022, 42, 1–7. [Google Scholar] [CrossRef]
  36. Liu, Y.-C.; Onthoni, D.D.; Mohapatra, S.; Irianti, D.; Sahoo, P.K. Deep-Learning-Assisted Multi-Dish Food Recognition Application for Dietary Intake Reporting. Electronics 2022, 11, 1626. [Google Scholar] [CrossRef]
  37. Zang, H.; Wang, Y.; Ru, L.; Zhou, M.; Chen, D.; Zhao, Q.; Zhang, J.; Li, G.; Zheng, G. Detection method of wheat spike improved YOLOv5s based on the attention mechanism. Front. Plant Sci. 2022, 13, 993244. [Google Scholar] [CrossRef]
  38. Ahmad, I.; Yang, Y.; Yue, Y.; Ye, C.; Hassan, M.; Cheng, X.; Wu, Y.; Zhang, Y. Deep Learning Based Detector YOLOv5 for Identifying Insect Pests. Appl. Sci. 2022, 12, 10167. [Google Scholar] [CrossRef]
  39. Zhang, K.; Wang, C.; Yu, X.; Zheng, A.; Gao, M.; Pan, Z.; Chen, G.; Shen, Z. Research on mine vehicle tracking and detection technology based on YOLOv5. Syst. Sci. Control Eng. 2022, 10, 347–366. [Google Scholar] [CrossRef]
  40. Cao, M.; Fu, H.; Zhu, J.; Cai, C. Lightweight tea bud recognition network integrating GhostNet and YOLOv5. Math. Biosci. Eng. 2022, 19, 12897–12914. [Google Scholar] [CrossRef] [PubMed]
  41. Yang, R.; Hu, Y.; Yao, Y.; Gao, M.; Liu, R. Fruit Target Detection Based on BCo-YOLOv5 Model. Mob. Inf. Syst. 2022, 2022, 8457173. [Google Scholar] [CrossRef]
  42. Doan, T.-N. An Efficient System for Real-time Mobile Smart Device-based Insect Detection. Int. J. Adv. Comput. Sci. Appl. 2022, 13. [Google Scholar] [CrossRef]
Figure 1. Research approach.
Figure 2. Flowchart of a smart plate.
Figure 3. Hardware and software integration.
Figure 4. (a) Side-view design of the smart plate; (b) top-view design of the smart plate.
Figure 5. 3D design of smart plate components.
Figure 6. (a) Camera, (b) load cell.
Figure 7. Smart plate.
Figure 8. Data visualization.
Figure 9. (a) Capturing image, (b) zoomed-in image.
Figure 10. Evaluation results of the system.
Table 1. Risk factors.

| No | Factor | Description |
|---|---|---|
| 1 | Obesity (overweight) | There is a significant link between obesity and blood sugar levels; a degree of obesity with body mass index (BMI) > 23 can lead to an increase in blood glucose levels of up to 200 mg%. |
| 2 | Hypertension | An increase in blood pressure beyond the normal range in hypertensive patients is closely associated with improper retention of salt and water, or with increased pressure in the peripheral vascular system. |
| 3 | Dyslipidemia | Dyslipidemia is a condition characterized by elevated blood fat levels (triglycerides > 250 mg/dL). There is a relationship between increased plasma insulin and low high-density lipoprotein (HDL) (<35 mg/dL). |
| 4 | Age | Individuals aged >40 years are susceptible to DM, although it is possible for individuals aged <40 years to avoid DM. The increase in blood glucose occurs at around 45 years of age, and the frequency increases with age. |
| 5 | Genetics | Type 2 DM is thought to be associated with familial aggregation. The empirical risk of type 2 DM increases two- to six-fold if a parent or family member suffers from type 2 DM. |
| 6 | Alcohol and cigarettes | An individual’s lifestyle is associated with an increased frequency of type 2 DM. Most of this increase is associated with rising obesity and decreased physical activity, while other factors associated with the shift from a traditional to a westernized environment, including changes in cigarette and alcohol consumption, also play a role. Alcohol interferes with blood sugar metabolism, especially in people with type 2 DM, complicating regulation and raising blood sugar. |
Table 2. List of abbreviations.

| No | Abbreviation | Meaning |
|---|---|---|
| 1 | T2DM | Type 2 diabetes mellitus |
| 2 | DM | Diabetes mellitus |
| 3 | AI | Artificial intelligence |
| 4 | CNN | Convolutional neural network |
| 5 | YOLO | You Only Look Once |
| 6 | ANN | Artificial neural network |
Table 3. Image identification accuracy test (YOLOv5 variants).

| No | Model | Size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | Params (M) | FLOPs @640 (B) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | YOLOv5n | 640 | 28.4 | 46.0 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
| 2 | YOLOv5s | 640 | 37.2 | 56.0 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| 3 | YOLOv5m | 640 | 45.2 | 63.9 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| 4 | YOLOv5l | 640 | 48.8 | 67.2 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| 5 | YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
Table 4. List of foods.

| No | Category | Name |
|---|---|---|
| 1 | Rice | Rice |
| 2 | Soup | Porridge |
| 3 | Soup | Vegetable Porridge |
| 4 | Meat | Chicken Skewers |
| 5 | Meat | Pork Galbi |
| 6 | Meat | Smoked Duck |
| 7 | Meat | Chicken Wings |
| 8 | Vegetable | Grilled Deodeok |
| 9 | Pancake | Seafood and Green Onion Pancake |
| 10 | Meat | Pan-Fried Battered Meatballs |
| 11 | Pancake | Pan-Fried Battered Summer Squash |
| 12 | Meat | Omelet Roll |
| 13 | Fish | Stir-Fried Anchovies |
| 14 | Fish | Stir-Fried Fishcake |
| 15 | Meat | Stir-Fried Sausages |
| 16 | Rice | Tteokbokki |
| 17 | Fried | Braised Quail Eggs in Soy Sauce |
| 18 | Fish | Deep-Fried Loach |
| 19 | Fish | Deep-Fried Filefish Jerky |
| 20 | Meat | Deep-Fried Chicken |
| 21 | Meat | Pork Cutlet |
| 22 | Meat | Deep-Fried Chicken Gizzards |
| 23 | Fried | Deep-Fried Potatoes |
| 24 | Vegetable | Deep-Fried Laver Roll Stuffed with Glass Noodles |
| 25 | Vegetable | Deep-Fried Vegetables |
| 26 | Vegetable | Pickled Radish Salad |
| 27 | Vegetable | Dried Radish |
| 28 | Vegetable | Bean Sprout |
| 29 | Vegetable | Julienne Radish Fresh Salad |
| 30 | Fish | Sea Snail Salad |
| 31 | Vegetable | Laver Salad |
| 32 | Vegetable | Japchae |
| 33 | Vegetable | Diced Radish Kimchi |
| 34 | Fish | Soy Sauce Marinated Crab |
| 35 | Vegetable | Garlic Stem Salad |
| 36 | Vegetable | Pickled Perilla Leaf |
| 37 | Vegetable | Pickled Radish |
| 38 | Fish | Sliced Raw Salmon |
| 39 | Rice | Rice Cake Stick |
| 40 | Rice | Rice Cake with Honey Filling |
| 41 | Rice | Steamed Rice Cake |
| 42 | Rice | Buckwheat Crepe |
| 43 | Rice | Rainbow Rice Cake |
| 44 | Rice | Snow White Rice Cake |
| 45 | Rice | Half-Moon Rice Cake |
| 46 | Rice | Bean-Powder-Coated Rice Cake |
| 47 | Rice | Honey Cookie |
| 48 | Rice | Fried Rice Sweet |
| 49 | Rice | Sweet Rice Puffs |
| 50 | Soup | Spicy Beef Soup |
Table 5. Total of food images.

| No | Experiment | Total |
|---|---|---|
| 1 | First experiment: 4 types of food with 200 images each | 800 images |
| 2 | Second experiment: 50 types of food with 200 images each | 10,000 images |
| 3 | Third experiment: 50 types of food with 400 images each | 20,000 images |
| | Total images across the three experiments | 30,800 images |
Table 6. List of experimental foods.

| No | Category | Name |
|---|---|---|
| 1 | Rice | Rice (쌀밥) |
| 2 | Stewed foods | Braised Quail Eggs in Soy Sauce (메추리알) |
| 3 | Soups | Spicy Beef Soup (육개장) |
| 4 | Vegetable | Dried Radish (무말랭이) |
Table 7. Image identification accuracy test.

| No | Experiment | Food 1 (%) | Food 2 (%) | Food 3 (%) | Food 4 (%) |
|---|---|---|---|---|---|
| 1 | 4 types of food, 200 images each | 49 | 39 | 62 | 20 |
| 2 | 50 types of food, 200 images each | 50 | 55 | 34 | 25 |
| 3 | 50 types of food, 400 images each | 58 | 60 | 33 | 31 |

Foods 1–4 correspond to the experimental foods in Table 6 (rice, braised quail eggs in soy sauce, spicy beef soup, and dried radish, respectively).
Table 8. Weight analysis.

| No | Experiment | Plate A: Rice | Plate B: Braised Quail Eggs in Soy Sauce | Plate C: Spicy Beef Soup | Plate D: Dried Radish |
|---|---|---|---|---|---|
| 1 | Amount of food served (g) | 190 | 108 | 58 | 123 |
| 2 | Amount of food served (g) | 190 | 108 | 58 | 123 |
| 3 | Amount of food served (g) | 190 | 108 | 58 | 123 |
| | Relative error ratio (%) (Error Ratio = (Actual Weight − Measured Weight) ÷ Actual Weight) | 0% | 0% | 0% | 0% |
Table 9. Food nutrients in experimental foods (per 100 g).

| No | Name | Carbohydrate (g) | Protein (g) | Fat (g) | Kcal |
|---|---|---|---|---|---|
| 1 | Rice | 28.0 | 2.7 | 0.3 | 130.0 |
| 2 | Braised Quail Eggs in Soy Sauce | 6.3 | 14.25 | 13.41 | 203.0 |
| 3 | Spicy Beef Soup | 1.49 | 3.68 | 0.99 | 30.0 |
| 4 | Dried Radish | 7.31 | 0.4 | 0.0 | 31.0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
