Article

An Application of Artificial Neural Network for Predicting Threshing Performance in a Flexible Threshing Device

1 College of Mechanical and Electrical Engineering, Hunan Agricultural University, Changsha 410102, China
2 Institute of Bast Fiber Crops, Chinese Academy of Agricultural Sciences, Changsha 410205, China
3 Hunan Key Laboratory of Intelligent Agricultural Machinery Corporation, Changsha 410102, China
4 Changsha Zichen Technology Development Co., Ltd., Changsha 410221, China
* Author to whom correspondence should be addressed.
Agriculture 2023, 13(4), 788; https://doi.org/10.3390/agriculture13040788
Submission received: 19 March 2023 / Revised: 23 March 2023 / Accepted: 27 March 2023 / Published: 29 March 2023
(This article belongs to the Special Issue 'Eyes', 'Brain', 'Feet' and 'Hands' of Efficient Harvesting Machinery)

Abstract
Rice is a widely cultivated food crop worldwide, and threshing is one of the most important operations of combine harvesters in grain production. It is a complex, nonlinear, multi-parameter physical process. The flexible threshing device has unique advantages in reducing the grain damage rate and has long been a major concern in engineering design. Using the measured test database of the flexible threshing test bench, the rotation speed of the threshing cylinder (RS), threshing clearance of the concave sieve (TC), separation clearance of the concave sieve (SC), and feeding quantity (FQ) were used as the input layer, while the crushing rate (YP), impurity rate of the threshed material (YZ), and loss rate (YS) were used as the output layer. A 4-5-3-3 artificial neural network (ANN) model with a backpropagation learning algorithm was developed to predict the threshing performance of the flexible threshing device, and the degree to which the inputs affect the outputs was then explored. The results showed that the correlation coefficient R of the validation set of the threshing performance model reached 0.980, while the root mean square error (RMSE) and mean absolute error (MAE) were below 0.139 and 0.153, respectively. The built neural network model predicted the performance of the flexible threshing device well: the regression determination coefficient R2 between the predicted and experimental data was 0.953. These results reveal that measured data combined with the ANN method are an effective approach for predicting the threshing performance of the flexible threshing device in rice. Moreover, the sensitivity analysis showed that RS, TC, and SC were crucial factors influencing the performance of the flexible threshing device, with an average relative importance of 15.00%, 14.89%, and 14.32%, respectively. FQ had the least effect on threshing performance, with an average relative importance of 11.65%.
Our findings can be leveraged to optimize the threshing performance of future flexible threshing devices.

1. Introduction

Rice is one of the four main staple food crops in China, with a perennial planting area of 30 million hectares [1]. Mechanized rice production relies heavily on the harvest process as an essential step. Threshing is a key link in the rice harvesting process; it is a complex, nonlinear, and uncertain process with many influencing parameters [2,3]. The impact of threshing on rice determines how much grain is lost during the harvest and processing stages. Double-cropping rice in southern China has a short harvesting window. The performance parameters of the threshing and separation device, the core working component, directly affect the operation quality of the rice combine harvester. The longitudinal axial threshing device is characterized by a long threshing time, a smooth threshing process, good adaptability, and a relatively soft threshing action, and it is broadly used in combine harvesters [4]. Researchers in agricultural mechanization are interested in the flexible threshing tooth because of its lower impact force and lower grain damage rate compared with its rigid counterpart [5], which makes it suitable for increasing the overall benefit in grain production [6]. Several scholars have studied the application of flexible materials in agricultural engineering. In 1972, Duane L et al. [7] designed a self-made collision test device to analyze the effects of corn grain velocity, collision surface material, collision angle, and other parameters on the extent of grain collision damage. The study found that when the impact surface was polyurethane, the damage degree of the grain was one-fifth of that with a steel surface and one-sixth of that with a concrete surface. This pioneering study on the effect of flexible materials on grain demonstrated their benefit in reducing grain damage. Shi Qingxiang et al.
[8] performed a comparative study on flexible and rigid threshing elements, demonstrating that threshing with teeth made of flexible materials can extend the threshing time and reduce grain breakage, confirming the feasibility of flexible threshing. Xie Fangping et al. [9] utilized polyurethane plastic cylindrical strips as the teeth of flexible threshing rods to conduct a dynamic analysis of the threshing of flexible rod teeth. They found that the indexes of flexible threshing, for instance, the non-removal rate and impurity rate, were similar to those of rigid rod tooth threshing, while the crushing rate was significantly lower. Ren Xuguang et al. [10] analyzed the threshing process of rice using the law of conservation of energy and noted that threshing is promoted when the flexible teeth periodically strike the rice ear and a resonance response occurs. Su Yuan et al. [11] replaced the conventional Q235 carbon steel teeth with nitrile rubber composite nail teeth and polyurethane rubber nail teeth. The test found better grain removal performance for the nitrile rubber composite nail teeth than for the polyurethane rubber nail teeth and the traditional carbon steel nail teeth. Geng Duanyang et al. [12] designed a cross-axial flow flexible corn threshing device. To realize flexible, low-damage threshing of corn ears, the threshing element combined a structure of flexible nail teeth and an elastic short grain rod. Li Yibo et al. [13] performed a bench test to explore the effect of composite nail teeth with different outer materials on the threshing performance and self-wear resistance of corn ears.
The results showed that the rubber composite nail teeth had the best comprehensive threshing and wear-resistance performance: the breakage rate of maize was lower than that of traditional carbon steel nail teeth, while the non-threshing rate was similar, thus meeting the technical specifications for threshing quality evaluation of maize harvesters. Fu Jun et al. [14] established a rigid-flexible coupled wheat threshing arch tooth. Under similar operating conditions, the damage rate of the rigid-flexible coupled arch tooth was significantly lower than that of the standard arch tooth, with a notable loss-reduction and threshing effect. Qian Zhenjie et al. [15] introduced an increase-and-decrease constraint strategy to establish a multi-friction dynamic model of flexible threshing teeth acting on grains. They observed that the continuous normal striking force and repeated minor tangential kneading force of flexible teeth combined to reduce the grain damage rate. Reports on the longitudinal axial flow threshing cylinder with a hollow core and flexible rod teeth used in rice threshing are limited. Flexible threshing can reduce the crushing rate of rice grains; thus, developing a comprehensive and accurate evaluation model of flexible threshing has important theoretical value and practical significance.
In recent years, the artificial neural network (ANN) has achieved strong performance and high accuracy in predicting laboratory data because of its capacity to describe nonlinear systems. As a result, it is widely applied in mathematics, engineering, medicine, economics, the environment, and agriculture [16], particularly where traditional modeling methods have failed [17]. Artificial neural network technology has been utilized in harvester systems by some researchers [18,19]. Nonetheless, few studies have applied artificial neural networks to the threshing performance of a flexible threshing device. Due to the uncertainty of the threshing conditions and the complexity of the factors affecting the threshing device, threshing performance prediction is a nonlinear problem affected by multiple factors. The BP neural network is a nonlinear dynamic system [20,21] with powerful nonlinear mapping [22] and generalization capacity, and it can identify complex relationships among data [23]. Herein, the parameters [24] affecting the performance of the threshing device and the threshing performance indicators [25,26] were based on the parameters reported by several studies.
In the laboratory-based flexible threshing bench test, the rotational speed of the threshing cylinder, the threshing clearance of the concave sieve, the separating clearance of the concave sieve, and the feeding quantity were selected as the inputs of the BP neural network model. The neural network model was established between these inputs and the threshing characteristics of the crushing rate, impurity rate of threshed material, and entrainment loss rate, and the threshing performance index was then predicted under different parameters. The objectives of this study were: (1) determining the feasibility of artificial neural network technology for predicting the threshing performance of the flexible threshing device and providing executable procedures for an artificial neural network model for practical application; (2) investigating the effect of the artificial neural network geometry and some internal parameters on model performance; (3) exploring the relative significance of the factors influencing threshing performance through sensitivity analysis.

2. Materials and Methods

2.1. Test Materials and Equipment

The plots with basically similar crop growth rates were selected as the experimental sampling area. The rice variety tested was Xiangzaoxian 24. Table 1 shows the main material characteristics of the rice. The rice flexible threshing test was conducted in the Agricultural Machinery Engineering Training Center of Hunan Agricultural University from July 11 to 18, 2022. Figure 1 shows the test equipment, and Table 2 shows the equipment parameters.

2.2. Test Method

The test was conducted following GB/T 5262—2008 and GB/T 5982—2005.
The grain and stem water contents were measured, and the thousand-grain weight was determined using the national standard method, in accordance with GB 5519-85, "Determination method of thousand-grain weight for grain and oilseeds".
The plots with similar crop growth were selected as the sample areas. Rice plants were fed manually and uniformly into the longitudinal axial flow threshing cylinder. In the multi-factor experiment, the material of each group weighed 10 kg. Three parallel tests were performed for each parameter combination, and the average value was taken. The performance evaluation indexes of the system were the crushing rate, impurity rate of threshed material (impurity rate for short), and entrainment loss rate. The threshed mixture was collected in the receiving box located under the flexible threshing device, while the mixture released from the end of the cylinder was collected on a tarpaulin attached to it. After each parallel test, the crushing rate and impurity rate of the threshing system were calculated from the mixture discharged into the receiving box, and the mixture discharged onto the tarpaulin at the end of the cylinder was analyzed to determine the entrainment loss rate. The calculation formulas of the crushing, impurity, and entrainment loss rates are, respectively:
$Y_P = \frac{W_P}{W_X} \times 100\%$

$Y_Z = \frac{W_{XZ}}{W_{Xh}} \times 100\%$

$Y_S = \frac{W_W}{W} \times 100\%$
where $Y_P$ is the crushing rate, %; $W_P$ is the mass of crushed grains in the sample, g; $W_X$ is the total grain mass in the sample, g; $Y_Z$ is the impurity rate of threshed material, %; $W_{XZ}$ is the impurity mass in the extruded sample, g; $W_{Xh}$ is the total mass of the extruded sample, g; $Y_S$ is the entrainment loss rate, %; $W_W$ is the grain mass discharged from the tail of the drum, g; and $W$ is the grain mass of each test batch, g.
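As an illustration, the three evaluation indexes can be computed directly from the weighed sample masses. The function names below are our own; the paper does not provide code:

```python
def crushing_rate(w_p, w_x):
    """Y_P: mass of crushed grains divided by total grain mass in the sample (%)."""
    return w_p / w_x * 100.0

def impurity_rate(w_xz, w_xh):
    """Y_Z: impurity mass divided by total mass of the extruded sample (%)."""
    return w_xz / w_xh * 100.0

def entrainment_loss_rate(w_w, w):
    """Y_S: grain mass discharged from the drum tail divided by the batch grain mass (%)."""
    return w_w / w * 100.0
```

For example, 5 g of crushed grains in a 500 g grain sample gives a crushing rate of 1%.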

2.3. Building the ANN Model

2.3.1. Development of Neural Network Model

One of the most commonly used neural network models is the BP neural network, which utilizes the BP algorithm and can closely approximate even highly complex nonlinear relationships. Information is dispersed and stored in the neurons of the network, and computation is extremely fast due to parallel processing. Since neural networks are self-learning and adaptive, they can deal with uncertain or unknown systems, and they excel at simultaneously processing quantitative and qualitative information. They can coordinate a wide range of input relationships and are, thus, well suited to fusion and multimedia applications. A well-trained artificial neural network can function as a predictive model for a specific application; it is a data processing system inspired by biological neural systems. The predictive power of an ANN is derived from training on experimental data, which is then validated using independent data, and ANNs can relearn and adapt to improve their performance as new data become available [27]. The structure and operation of ANNs have been described by numerous authors [28]. Feedforward neural network modeling for prediction is designed to capture the correlation between historical model inputs and their corresponding outputs. This is accomplished by repeatedly feeding the model examples of input/output relationships and adjusting the model coefficients (i.e., connection weights) to minimize the error function between the historical outputs and the model-predicted outputs.
This article follows the artificial neural network modeling procedure described by Maier and Dandy [29]: determining model inputs and outputs, dividing and preprocessing the available data, selecting an appropriate network architecture, optimizing the connection weights (training), setting stopping criteria, and validating the model. A typical algorithm flow diagram is shown in Figure 2. In this work, all calculations and programming were executed in MATLAB (R2016a, 9.0.0.341360). The data used to calibrate and validate the neural network model were obtained from bench measurements of the flexible threshing experiment device and the corresponding information on the feeding quantity and material characteristics. The data cover a wide range of variation in operating parameters and threshing properties; the database comprises a total of 25 individual cases. The statistics of the input and output parameters used for the artificial neural networks are shown in Table 3, and Figure 3 shows the database of all the threshing performance metrics for the ANN.

2.3.2. Model Inputs and Outputs

A thorough comprehension of the determinants of threshing performance is required to obtain accurate threshing performance predictions. The rotational speed of the cylinder is closely associated with the performance of the thresher because high speed results in grain cracking, while low speed leaves grain unthreshed. Moreover, the threshing clearance and separating clearance of the concave sieve significantly influence the threshing performance, and the feeding quantity is closely correlated with the threshing characteristics [30].
The primary factors affecting threshing performance thus include the rotational speed of the cylinder, the threshing clearance of the concave sieve, the separating clearance of the concave sieve, and the feeding quantity. Other factors, such as total threshing power consumption and grain moisture content, contribute to a lesser degree and were therefore considered secondary. Grain moisture content was excluded in this work since the tests were conducted under specific moisture content conditions during the harvest period.
The aforementioned factors, i.e., the rotational speed of cylinder (RS), threshing clearance of concave sieve (TC), separating clearance of concave sieve (SC), and feeding quantity (FQ), were introduced to the ANN as the model input variables. On the other hand, the crushing rate (YP), impurity rate of threshed materials (YZ), and entrainment loss rate (YS) were the output variables. Sensitivity analysis was conducted on the trained network to identify the input variables with the most significant impact on threshing performance predictions.
Sensitivity analysis (Figure 4) was based on the validation set, with RS, TC, SC, and FQ as the input variables. For each normalized input variable, its value was varied and fed to the trained network; the maximum and minimum output values were recorded; the difference between them was computed and divided by the maximum value; and, finally, the mean of these ratios was taken as the sensitivity of that variable. The sensitivities were then compared to establish the influence of each input variable on the output variables. The sensitivity analysis results will be discussed later.
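The procedure above can be sketched as follows. This is an illustrative Python version (the original analysis was run in MATLAB), with `net` standing in for the trained network:

```python
import numpy as np

def sensitivity(net, base_input, var_index, var_values):
    """Vary one input over its tested levels while the others stay at their
    base values, run the trained network, score each output as
    (max - min) / max, and average the ratios over the outputs.
    `net` is a stand-in for the trained model (any callable here)."""
    outputs = []
    for v in var_values:
        x = np.array(base_input, dtype=float)
        x[var_index] = v
        outputs.append(net(x))
    outputs = np.array(outputs)              # shape: (n_levels, n_outputs)
    hi, lo = outputs.max(axis=0), outputs.min(axis=0)
    return float(np.mean((hi - lo) / hi))
```

Comparing the resulting scores across RS, TC, SC, and FQ ranks the inputs by their influence on the outputs.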

2.3.3. Data Division and Preprocessing

The database was randomly divided into three sets, i.e., training, testing, and validation. A training set was used to construct the neural network model, whereas an independent validation set was used to estimate model performance in the deployed environment [31]. In total, 60% of the data were used for training, 20% for testing, and 20% for validation. Table 4 shows the orthogonal test for different levels as well as the data ranges used for the ANN model variables.
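A minimal sketch of the 60/20/20 random division (illustrative Python; the paper's split was performed in MATLAB):

```python
import numpy as np

def split_data(n_cases, seed=0):
    """Randomly divide case indices 60/20/20 into training, testing,
    and validation index arrays (25 cases in the paper's database)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_cases)
    n_train = int(round(0.6 * n_cases))
    n_test = int(round(0.2 * n_cases))
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]
```

With 25 cases, this yields 15 training, 5 testing, and 5 validation cases.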
Notably, it is critical to preprocess the data into an appropriate format before applying them to the ANN. Preprocessing by scaling ensures that all variables receive equal attention during training. The output variables must be scaled to be commensurate with the limits of the transfer functions used in the network; although scaling the input variables is not strictly necessary, it is often recommended [32]. Here, the input and output variables were scaled between −1.0 and 1.0, consistent with the tansig transfer function in the hidden layers and the linear (purelin) transfer function used in the output layer.
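The scaling to [−1, 1] is a standard min-max mapping; a sketch with its inverse for converting predictions back to original units:

```python
def scale_minmax(x, x_min, x_max):
    """Linearly map x from [x_min, x_max] to [-1, 1]."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def unscale_minmax(s, x_min, x_max):
    """Inverse map from [-1, 1] back to the original units."""
    return (s + 1.0) / 2.0 * (x_max - x_min) + x_min
```

For instance, a cylinder speed of 750 r/min within a hypothetical tested range of 500 to 1000 r/min maps to 0.0.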

2.3.4. Model Architecture

Determining the network architecture is one of the most crucial and challenging tasks in the development of ANN models because it requires selecting the number of hidden layers and the number of nodes in each of them.
The number of model inputs and outputs restricts the number of nodes in the input and output layers. The input layer of the ANN model developed in this work had four nodes, one for each of the model inputs (i.e., a rotational speed of cylinder (RS), threshing clearance of concave sieve (TC), separating clearance of concave sieve (SC), feeding quantity (FQ)). On the other hand, the output layer had three nodes (i.e., crushing rate (YP), impurity rate of threshed materials (YZ), and entrainment loss rate (YS)) representing the measured value of threshing performance.
Figure 5 shows the basic elements of an artificial neuron, which mainly comprise weights, a bias, and an activation function. The BP neural network is the most popular and widely used artificial neural network architecture [33]. It involves an input layer, one or more hidden layers, and an output layer. Evidence suggests that a network with thresholds, at least one S-shaped (sigmoid) hidden layer, and a linear output layer can approximate any continuous function [34]. Mathematical expressions and interpretations of artificial neural networks can be found in reference [35].
The activation function introduces nonlinearity into the neural network, making it more powerful than a linear transformation. The Levenberg–Marquardt algorithm is the most commonly used training algorithm for multi-layer perceptrons; it combines gradient descent with the Gauss–Newton method [36] to reduce the error on the training patterns. The network was built using the Levenberg–Marquardt backpropagation technique, with tansig, a common nonlinear activation function, used for the nodes in the hidden layers. Figure 6 depicts the architecture of the artificial neural network system described in this paper: $W$ is a weight matrix for the hidden and output layers, and $N_{ij}$ is a node that computes a weighted sum of its inputs and passes the sum through a soft nonlinearity or activation function.
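A forward-pass sketch of the 4-5-3-3 structure, assuming tanh (tansig) on the two hidden layers and a linear (purelin) output layer. The weights below are placeholders, not the trained values:

```python
import numpy as np

def forward(x, layers):
    """Forward pass through the 4-5-3-3 network.
    `layers` is a list of (W, b) pairs for the 5-, 3-, and 3-node layers;
    tanh (tansig) is applied on the hidden layers, the output is linear (purelin)."""
    a = np.asarray(x, dtype=float)
    for W, b in layers[:-1]:        # hidden layers: weighted sum + tanh
        a = np.tanh(W @ a + b)
    W_out, b_out = layers[-1]       # output layer: purely linear
    return W_out @ a + b_out
```

In practice the (W, b) pairs are the connection weights optimized during training; the sketch only shows how a scaled input vector of RS, TC, SC, and FQ propagates to the three predicted rates.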

2.3.5. Weight Optimization

“Training” or “learning” is the process of optimizing the connection weights. The goal is to identify a global solution to what is typically a highly nonlinear optimization problem. The method most commonly used for finding the optimum weight combination for feedforward neural networks is the backpropagation algorithm [37], which is based on first-order gradient descent. Feedforward networks trained with the backpropagation algorithm have been applied successfully to numerous agricultural engineering problems [38,39], hence, also used in this work.

2.3.6. Stopping Criteria and ANN Model Validation

Training of the neural network stops when either of the following conditions is met: (1) the accuracy requirement is satisfied; or (2) the maximum number of iterations is reached.
Backpropagation works by minimizing a cost function; the mean squared error (MSE) is the most common cost function.
Validation data were used to evaluate the performance of the trained model once the training phase was completed. Additionally, the validation set was used to determine the optimum number of hidden layer nodes and the optimum internal parameters (learning rate, momentum, and initial weights). The MSE was used to validate the performance of the ANN for different numbers of hidden layer nodes according to Equation (4).
$MSE = \frac{\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}{m}$
The evaluation metrics of root mean square error (RMSE), correlation coefficient (R), and mean absolute error (MAE) were utilized to assess the performance of the models by comparing the target and output values of the networks.
$RMSE = \sqrt{\frac{\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}{m}}$

$R = \frac{\sum_{i=1}^{m}\left(y_i - \bar{y}\right)\left(\hat{y}_i - \bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{m}\left(y_i - \bar{y}\right)^2 \sum_{i=1}^{m}\left(\hat{y}_i - \bar{\hat{y}}\right)^2}}$

$MAE = \frac{1}{m}\sum_{i=1}^{m}\left|y_i - \hat{y}_i\right|$
The RMSE, R, and MAE values were calculated in all stages: training, validation, and testing. Here, $y_i$ and $\hat{y}_i$ are the observed and predicted values, $\bar{y}$ and $\bar{\hat{y}}$ are the average observed and predicted values, and $m$ is the total number of points in each dataset. These metrics aid in selecting the best structure and network and indicate how closely the model fits the data.
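The three metrics can be computed as follows (illustrative Python; `np.corrcoef` yields the same Pearson correlation coefficient R as the formula above):

```python
import numpy as np

def metrics(y, y_hat):
    """RMSE, correlation coefficient R, and MAE between
    observed values y and predicted values y_hat."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))    # root mean square error
    r = np.corrcoef(y, y_hat)[0, 1]              # Pearson correlation
    mae = np.mean(np.abs(y - y_hat))             # mean absolute error
    return rmse, r, mae
```

Note that R measures linear association only; a constant offset between predictions and observations leaves R at 1 while RMSE and MAE grow.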
After model construction, the variable parameters of the experimental trials were entered as new inputs to the model, and the actual results were compared with the model outputs. Microsoft Excel 2016 was used to analyze the correlation coefficient between the actual results and the output of the neural network model.

3. Results

3.1. Evaluation of the Number of Hidden Layer Nodes

The BP network can have a varied number of nodes in the hidden layer, and the hidden nodes affect the error of the connected output neurons [41]. If the number of neurons in the hidden layer is too small, the network's ability to learn is limited and its fault tolerance decreases, so more training is needed. If there are too many neurons, the number of network iterations increases, extending the training time and reducing the generalization capacity of the network, which decreases predictive ability. The optimal number of nodes therefore needs to be explored to confirm the effect of different node counts on network performance. In practice, the number of hidden layer nodes is selected by first determining the approximate range from an empirical formula and then using a step-wise test strategy: networks with different numbers of neurons are trained and compared, and the node count with the smallest error is kept. The candidate number of hidden layer nodes can be derived from the following formula [42,43]:
$l = \sqrt{m + n} + a$
where $l$ represents the number of neurons in the hidden layer, $n$ denotes the number of neurons in the input layer, $m$ is the number of neurons in the output layer, and $a$ is a constant with $1 < a < 10$. According to this formula, the value range of the hidden layer nodes of the network was 4–12, and the performance of the artificial neural network under different numbers of nodes is shown in Figure 7. When the number of hidden layer nodes was 5, the minimum MSE was 0.00080796, indicating superior model performance.
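The candidate range can be checked numerically: with m + n = 7, sqrt(7) is about 2.65, and the open interval 1 < a < 10 gives integer node counts from 4 to 12, matching the range stated above. A small sketch:

```python
import math

def hidden_node_candidates(n_in, n_out):
    """Integer node counts l satisfying l = sqrt(m + n) + a with 1 < a < 10.
    For 4 inputs and 3 outputs, this spans 4 to 12 nodes."""
    base = math.sqrt(n_in + n_out)
    lo = math.floor(base + 1) + 1   # smallest integer strictly above base + 1
    hi = math.ceil(base + 10) - 1   # largest integer strictly below base + 10
    return list(range(lo, hi + 1))
```

Each candidate count is then trained and compared, and the one with the smallest validation MSE (here, five nodes) is retained.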
Table 5 summarizes the predictive performance of the optimal neural network. The findings showed a validation-set R of 0.979, an RMSE of 0.138, and an MAE of 0.153; the ANN model with a 4-5-3-3 structure performed effectively. Table 5 further shows that the results of the model were generally consistent with those obtained during training and testing, indicating that the model can generalize within the range of data used for training.
As shown in Figure 8, the error curves of the training, validation, and test samples were well correlated, and the curves decreased slowly, indicating that the network was trained on the training data. During initial fitting, the MSE on the validation data decreases steadily; once the network begins to overfit the training data, the validation MSE rises again. In the default setting, training ends when the validation error increases for six consecutive iterations, and the best performance is taken from the epoch with the lowest validation error (circled in the figure). The best artificial neural network parameters finally obtained are shown in Table 6. Figure 9 shows the training state of the model training phase.
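The stopping rule described (training halts once the validation error has failed to improve for six consecutive checks, and the weights from the lowest-error epoch are kept) can be sketched as:

```python
def early_stop(val_errors, max_fail=6):
    """Return the index of the best (lowest) validation error, scanning the
    per-epoch errors and stopping once the error has failed to improve for
    `max_fail` consecutive checks, mirroring the rule described above."""
    best_i, fails = 0, 0
    for i in range(1, len(val_errors)):
        if val_errors[i] < val_errors[best_i]:
            best_i, fails = i, 0    # new best: reset the failure counter
        else:
            fails += 1
            if fails >= max_fail:
                break               # six consecutive non-improvements: stop
    return best_i
```

The returned index corresponds to the circled minimum of the validation curve in Figure 8.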

3.2. Evaluation of Prediction Results

The regression curves for assessing the accuracy of the ANN estimation are shown in Figure 10. Estimates of the threshing performance of the ANN were evaluated by regression analysis between the predicted and experimental data. To validate the ANN model, we applied estimation and regression methods; the regression determination coefficient for the threshing characteristics was calculated as 0.9525. Figure 10 displays the best regression curve obtained over multiple training runs.

3.3. Sensitivity Analysis

Sensitivity analysis was performed to examine the sensitivity of the various factors influencing the threshing characteristics, and Table 7 shows the effects of the various input factors. The sensitivity values of the different input variables reflect the strength of their effect on the output variables. RS affected the predicted threshing performance across all network configurations, whereas the relative importance of the remaining input variables varied with changes in the input variables. RS was the most important input in all trials, followed by TC, SC, and FQ. The sensitivity analysis revealed that RS, TC, and SC were the most vital factors affecting threshing performance, with an average relative importance of 15.00%, 14.89%, and 14.32%, respectively. The results further showed that FQ had a minimal effect on threshing performance, with an average relative importance of 11.65%.

4. Discussion

Threshing is one of the most critical operations of combine harvesters during grain production; it is a complex, nonlinear, multi-parameter physical process. The working performance of the threshing device has a significant influence on the separation, cleaning, and other components, as well as on the working quality of the whole machine, and has always been one of the main concerns of engineering design. A flexible threshing device has the advantage of reducing the crushing rate of rice grain. Therefore, a comprehensive and accurate evaluation model of flexible threshing performance has important theoretical value and practical significance. In this study, a BP artificial neural network was used to model the threshing performance based on four factors: RS, TC, SC, and FQ. Determining the optimal network architecture depends on the number of hidden layers and neurons; by evaluating different numbers of hidden layer nodes, the optimum network geometry was found to be 4-5-3-3. The performance of the ANN model was verified by comparing the predicted dataset with the experimental results (measured data). The sensitivity analysis of the ANN model indicated that RS, TC, and SC contributed most to the threshing performance attributes, whereas FQ contributed least. These results can guide the optimal design of a flexible threshing cylinder to achieve the maximum performance of the device.

5. Conclusions

This study analyzed different numbers of hidden layer nodes and found that with five hidden layer nodes, the minimum MSE was 0.00080796, indicating that the model performed well. The results indicated that backpropagation neural networks can predict the threshing performance of the flexible threshing device with an acceptable degree of accuracy (R = 0.980, RMSE = 0.138, MAE = 0.153). The built neural network model predicted the performance of the flexible threshing device well: the regression determination coefficient R2 between the predicted and experimental data was 0.953, indicating good agreement between the predicted and experimental data. The ANN method is thus an effective approach for predicting the threshing performance of flexible threshing devices in rice, and the established model exhibited stable prediction of the threshing performance during operation. The sensitivity analysis revealed that RS, TC, and SC are important factors affecting the performance of the flexible threshing device, with an average relative importance of 15.00%, 14.89%, and 14.32%, respectively. FQ had the least impact on threshing performance, with an average relative importance of 11.65%. These results can guide the optimal design of flexible threshing cylinders and improve the performance of the flexible threshing device.

Author Contributions

Conceptualization, L.M. and F.X.; methodology, L.M.; software, L.M.; validation, L.M., F.X. and D.L.; formal analysis, L.M.; investigation, D.L.; resources, X.W. and Z.Z.; data curation, X.W. and D.L.; writing—original draft preparation, L.M.; writing—review and editing, L.M., F.X. and Z.Z.; visualization, L.M.; supervision, F.X.; project administration, D.L.; funding acquisition, F.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Hunan High-Tech Industry Technology Leading Plan Project (Science and Technology Research category) (2020NK2002); Hunan Agricultural Machinery Equipment and Technology Innovation Research and Development Project (Xiangcai Agricultural Index (2021) No.47); and Hunan Agricultural Machinery Equipment and Technology Innovation Research and Development Project (Xiangcai Agricultural Index [2020] No.107).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Han, B.; Meng, F.C.; Liang, L.N.; Yang, Y.N.; Wang, B. Numerical simulation and experiment on performance of supplying seeds mechanism of directional precision seeding device for japonica rice. Trans. CSAE 2016, 32, 22–29. (In Chinese) [Google Scholar]
  2. Craessaerts, G.; Saeys, W.; Missotten, B.; De Baerdemaeker, J. A genetic input selection methodology for identification of the cleaning process on a combine harvester. Part I: Selection of relevant input variables for identification of the sieve losses. Biosyst. Eng. 2007, 98, 166–175. [Google Scholar] [CrossRef]
  3. Maertens, K.; De Baerdemaeker, J. Design of a virtual combine harvester. Math. Comput. Simul. 2004, 65, 49–57. [Google Scholar] [CrossRef]
  4. Liang, Z.W.; Li, Y.M.; Baerdemaeker, J.; Xu, L.Z.; Saeys, W. Development and testing of a multi-duct cleaning device for tangential- longitudinal flow rice combine harvesters. Biosyst. Eng. 2019, 182, 95–106. [Google Scholar] [CrossRef]
  5. Shi, Q.X.; Liu, S.D.; Ji, J.T.; Fu, R.X.; Ni, C.G. Research on seed-control feed and soft threshing for rice. Trans. Chin. Soc. Agric. Mach. 1996, 27, 41–46. (In Chinese) [Google Scholar]
  6. Qian, Z.J.; Jin, C.Q.; Zhang, D. Multiple frictional impact dynamics of threshing process between flexible tooth and grain kernel. Comput. Electron. Agric. 2017, 141, 276–285. [Google Scholar] [CrossRef]
  7. Keller, D.L. Corn kernel damage due to high velocity impact. Trans. ASAE 1972, 15, 330–332. [Google Scholar] [CrossRef]
  8. Shi, Q.X.; Liu, S.D.; Ji, J.T.; Fu, R.X.; Ni, C.G. Studies on the mechanism of speed-controlled feeding and soft threshing. Trans. CSAE 1996, 12, 177–180. (In Chinese) [Google Scholar]
  9. Xie, F.P.; Luo, X.W.; Lu, X.Y.; Sun, S.L.; Ren, S.G.; Tang, C.Z. Threshing principle of flexible pole-teeth roller for paddy rice. Trans. CSAE 2009, 25, 110–114. (In Chinese) [Google Scholar]
  10. Ren, S.G.; Xie, F.P.; Luo, X.W.; Sun, S.L. Analysis and test of power consumption in paddy threshing using flexible and rigid teeth. Trans. CSAE 2013, 29, 12–18. (In Chinese) [Google Scholar]
  11. Su, Y.; Liu, H.; Xu, Y.; Cui, T.; Qu, Z.; Zhang, D.X. Optimization and experiment of spike-tooth elements of axial flow corn threshing device. Trans. Chin. Soc. Agric. Mach. 2018, 49, 258–265. (In Chinese) [Google Scholar]
  12. Geng, D.Y.; He, K.; Wang, Q.; Jin, C.Q.; Zhang, G.H.; Liu, X.F. Design and experiment on transverse axial flow flexible threshing device for corn. Trans. Chin. Soc. Agric. Mach. 2019, 50, 101–108. (In Chinese) [Google Scholar]
  13. Li, Y.B.; Jiang, J.J.; Xu, Y.; Cui, T.; Su, Y.; Qiao, M.M. Preparation and threshing performance tests of rubber composite nail teeth under maize with high moisture content. Trans. Chin. Soc. Agric. Mach. 2020, 51, 158–167. (In Chinese) [Google Scholar]
  14. Fu, J.; Zhang, Y.C.; Cheng, C.; Chen, Z.; Tang, X.L.; Ren, L.Q. Design and experiment of bow tooth of rigid flexible coupling for wheat threshing. J. Jilin Univ. (Eng. Technol. Ed.) 2020, 50, 730–738. (In Chinese) [Google Scholar]
  15. Qin, Z.J.; Jin, C.Q.; Yuang, W.S.; Ni, Y.L.; Zhang, G.Y. Frictional impact dynamics model of threshing process between flexible teeth and grains. J. Jilin Univ. (Eng. Technol. Ed.) 2021, 51, 1121–1130. (In Chinese) [Google Scholar]
  16. Safa, M.; Samarasinghe, S. Determination and modelling of energy consumption in wheat production using neural networks: “A case study in Canterbury province, New Zealand”. Energy 2011, 36, 5140–5147. [Google Scholar] [CrossRef]
  17. Hertz, J. Introduction to the theory of neural computation. Am. J. Phys. 1994, 62, 668. [Google Scholar] [CrossRef]
  18. Mirzazadeh, A.; Abdollahpour, H.; Mahmoudi, A.; Bukat, A.R. Intelligent modeling of material separation in combine harvester’s thresher by ANN. Int. J. Agric. Crop Sci. 2012, 4, 1767–1777. [Google Scholar]
  19. Gundoshmian, T.M.; Ardabili, S.; Mosavi, A.; Várkonyi-Kóczy, A.R. Prediction of combine harvester performance using hybrid machine learning modeling and response surface methodology. In Engineering for Sustainable Future, Proceedings of the 18th International Conference on Global Research and Education Inter-Academia–2019, Budapest, Hungary, 4–7 September 2019; Springer International Publishing: New York, NY, USA, 2020; pp. 345–360. [Google Scholar]
  20. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–379. [Google Scholar] [CrossRef]
  21. Therneau, T.M.; Grambsch, P.M.; Fleming, T.R. Martingale-based residuals for survival models. Biometrika 1990, 77, 147–160. [Google Scholar] [CrossRef]
  22. Jahirul, M.; Rahman, S.; Masjuki, H.; Kalam, M.; Rashid, M. Application of artificial neural networks (ANN) for prediction the performance of a dual fuel internal combustion engine. HKIE Trans. 2009, 16, 14–20. [Google Scholar] [CrossRef]
  23. Cirak, B.; Demirtas, S. An application of artificial neural network for predicting engine torque in a biodiesel engine. Am. J. Energy Res. 2014, 2, 74–80. [Google Scholar] [CrossRef]
  24. Shpokas, L. Research of Grain Damage Caused by High-Performance Combine Harvesters. Mot. Power Ind. Agric. 2007, 9, 168–177. [Google Scholar]
  25. Maertens, K.; Ramon, H.; De Baerdemaeker, J. An on-the-go monitoring algorithm for separation processes in combine harvesters. Comput. Electron. Agric. 2004, 43, 197–207. [Google Scholar] [CrossRef]
  26. Miu, P.I.; Kutzbach, H.D. Modeling and simulation of grain threshing and separation in threshing units-Part I. Comput. Electron. Agric. 2008, 60, 96–104. [Google Scholar] [CrossRef]
  27. Hertz, J.A.; Krogh, A.S.; Palmer, R.G. Introduction to the Theory of Neural Computation; Addison-Wesley Publishing Company: Boston, MA, USA, 1991; pp. 115–156. [Google Scholar]
  28. Fausett, L.V. Fundamentals of Neural Networks: Architectures, Algorithms, and Applications; Prentice Hall: Upper Saddle River, NJ, USA, 1994. [Google Scholar]
  29. Maier, H.R.; Dandy, G.C. Applications of Artificial Neural Networks to Forecasting of Surface Water Quality Variables: Issues, Applications and Challenges; Artificial Neural Networks in, Hydrology; Govindaraju, R.S., Rao, A.R., Eds.; Kluwer: Dordrecht, The Netherlands, 2000; pp. 287–309. [Google Scholar]
  30. Špokas, L.; Steponavičius, D.; Petkevičius, S. Impact of technological parameters of threshing apparatus on grain damage. Agron. Res. 2008, 6, 367–376. [Google Scholar]
  31. Twomey, J.M.; Smith, A.E. Validation and Verification Artificial Neural Networks for Civil Engineers: Fundamentals and Applications; Kartam, N., Flood, I., Garrett, J.H., Eds.; ASCE: New York, NY, USA, 1997; pp. 44–64. [Google Scholar]
  32. Masters, T. Practical Neural Network Recipes in C++; Academic Press Professional, Inc.: San Diego, CA, USA, 1993. [Google Scholar]
  33. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  34. Han, L.Q. Theory, Design, and Application of Artificial Neural Networks, 2nd ed.; Chemical Industry Press: Beijing, China, 2007. [Google Scholar]
  35. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall International Inc.: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  36. Eyercioglu, O.; Kanca, E.; Pala, M.; Ozbay, E. Prediction of martensite and austenite start temperatures of the Fe-based shape memory alloys by artificial neural networks. J. Mater. Process. Technol. 2008, 200, 146–152. [Google Scholar] [CrossRef]
  37. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representation by Error Propagation; Parallel Distributed Processing; Rumelhart, D.E., McClelland, J.L., Eds.; MIT Press: Cambridge, MA, USA, 1986; Volume 1, Chapter 8. [Google Scholar]
  38. Zhao, Z.; Qin, F.; Tian, C.J.; Yang, S.X. Monitoring method of total seed mass in a vibrating tray using artificial neural network. Sensors 2018, 18, 3659. [Google Scholar] [CrossRef]
  39. Kosari-Moghaddam, A.; Rohani, A.; Kosari-Moghaddam, L.; Esmaeipour-Troujeni, M. Developing a Radial Basis Function Neural Networks to Predict the Working Days for Tillage Operation in Crop Production. Int. J. Agric. Manag. Dev. (IJAMD) 2018, 9, 119–133. [Google Scholar]
  40. Al-Dosary, N.M.N.; Aboukarima, A.M.; Al-Hamed, S.A. Evaluation of Artificial Neural Network to Model Performance Attributes of a Mechanization Unit (Tractor-Chisel Plow) under Different Working Variables. Agriculture 2022, 12, 840. [Google Scholar] [CrossRef]
  41. Jinchuan, K.; Xinzhe, L. Empirical analysis of optimal hidden neurons in neural network modeling for stock prediction. In Proceedings of the Pacific-Asia Workshop on Computational Intelligence and Industrial Application, Wuhan, China, 19–20 December 2008; pp. 828–832. [Google Scholar]
  42. Feisi Technology Product Research and Development Center. Neural Network Theory and the MATLAB7 Implementation; Electronic Industry Press: Beijing, China, 2005. [Google Scholar]
  43. Fu, H.X.; Zhao, H. MATLAB Neural Network Application Design; Machinery Press: Beijing, China, 2010; pp. 83–97. [Google Scholar]
Figure 1. Flexible threshing experiment device. 1. Feeding device 2. Threshing device 3. Threshing cylinder 4. Flexible threshing teeth.
Figure 2. The artificial neural network algorithm flow chart.
Figure 3. The threshing performance index database for the artificial neural network.
Figure 4. The sensitivity analysis flow.
Figure 5. Schematic diagram of an artificial neural network.
Figure 6. The artificial neural network architecture of threshing performance.
Figure 7. Performance of artificial neural network models with different hidden layer nodes (learning rate = 0.1 and training goal = 0.001).
Figure 8. The neural network training performance (epoch 18, validation stop).
Figure 9. The neural network training state (epoch 18, validation stop).
Figure 10. The predicted and the measured values of threshing performance.
Table 1. Main physical characteristic parameters of harvesting rice.

| Rice Variety | Plant Height /mm | Panicle Length /mm | Middle Stem Diameter /mm | Middle Stem Wall Thickness /mm | Shoots per Ear | Grains per Ear | Thousand-Grain Mass /g | Stem Moisture Content /% | Grain Moisture Content /% | Yield per Unit Area /kg·hm−2 | Ratio of Grass to Grain |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Xiangzaoxian No. 24 | 833 ± 64 | 182 ± 12 | 32.18 ± 0.3 | 0.4 ± 0.1 | 12 ± 1.6 | 110 ± 22.9 | 30.02 ± 1.0 | 55.68 ± 4.8 | 22.42 ± 0.8 | 6230 | 1:(0.83 ± 0.1) |
Table 2. Parameter table of flexible threshing device.

| Parameters | Values |
|---|---|
| Total length of the cylinder /mm | 1935 |
| Threshing cylinder diameter /mm | 620 |
| Cylinder speed /(r·min−1) | 400–1500 |
| Threshing clearance of concave sieve /mm | 0–60 |
| Separating clearance of concave sieve /mm | 0–60 |
| Feeding rate /(kg·s−1) | 0.5–5 |
Table 3. Statistical criteria of the input parameters and performance attributes (output parameters) used in the ANN model.

| Dataset | Parameters | Minimum | Maximum | Average | Standard Deviation | Median | Variance |
|---|---|---|---|---|---|---|---|
| Training set | inputs | 1 | 800 | 190.4500 | 300.8486 | 25 | 9.0154 × 10^4 |
| Training set | outputs | 0.0490 | 1.1960 | 0.4142 | 0.4074 | 0.2070 | 0.1660 |
| Validation set | inputs | 2 | 800 | 183.3750 | 291.1664 | 30 | 8.4778 × 10^4 |
| Validation set | outputs | 0.0490 | 1.1960 | 0.3984 | 0.4231 | 0.1920 | 0.1790 |
| Testing set | inputs | 1.5 | 800 | 197.7750 | 311.6720 | 35 | 9.7139 × 10^4 |
| Testing set | outputs | 0.0409 | 0.9970 | 0.3995 | 0.4093 | 0.1920 | 0.1675 |
Table 4. Orthogonal test factor levels used for the artificial neural network model variables.

| Levels | Rotational Speed of Cylinder, RS (r/min) | Threshing Clearance of Concave Sieve, TC (mm) | Separating Clearance of Concave Sieve, SC (mm) | Feeding Quantity, FQ (kg/s) |
|---|---|---|---|---|
| 1 | 600 | 15 | 15 | 1.0 |
| 2 | 650 | 20 | 25 | 1.5 |
| 3 | 700 | 25 | 35 | 2.0 |
| 4 | 750 | 30 | 45 | 2.5 |
| 5 | 800 | 35 | 55 | 3.0 |
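The factor levels in Table 4 define a 4-factor, 5-level experimental space: a full factorial would require 5^4 = 625 runs, which is why an orthogonal design samples a balanced subset. A sketch of enumerating the level grid (the paper's actual orthogonal array is not reproduced here, so this only shows the full grid the array is drawn from):

```python
from itertools import product

# Factor levels from Table 4
RS = [600, 650, 700, 750, 800]   # rotational speed of cylinder, r/min
TC = [15, 20, 25, 30, 35]        # threshing clearance of concave sieve, mm
SC = [15, 25, 35, 45, 55]        # separating clearance of concave sieve, mm
FQ = [1.0, 1.5, 2.0, 2.5, 3.0]   # feeding quantity, kg/s

# Every (RS, TC, SC, FQ) combination; an orthogonal design tests only a
# balanced fraction of these 625 runs.
full_factorial = list(product(RS, TC, SC, FQ))
```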
Table 5. Artificial neural network results.

| Dataset | R | RMSE | MAE |
|---|---|---|---|
| Training set | 0.97596 | 0.079148 | 0.14100 |
| Validation set | 0.97981 | 0.13823 | 0.15260 |
| Testing set | 0.99041 | 0.086466 | 0.13543 |
Table 6. Optimum ANN parameters for the design of the model.

| Sr. No. | Parameter | Description |
|---|---|---|
| 1 | No. of input nodes | Varying from 1 to 25 in the cascaded training procedure |
| 2 | No. of output nodes | 3 |
| 3 | No. of hidden layers | 2 |
| 4 | No. of neurons in the hidden layers (Hn) | 5–3 |
| 5 | Training rule | Levenberg-Marquardt (LM) |
| 6 | Activation function | Sigmoid |
| 7 | Network type | Feed-forward (FF) |
| 8 | Training method | Backpropagation algorithm |
Table 7. Sensitivity analyses of the relative importance of artificial neural network input variables.

| Trial No. | RS | TC | SC | FQ |
|---|---|---|---|---|
| 1 | 0.2417 | 0.1083 | 0.1208 | 0.1013 |
| 2 | 0.1831 | 0.1575 | 0.1468 | 0.0860 |
| 3 | 0.1426 | 0.1278 | 0.1892 | 0.0828 |
| 4 | 0.0507 | 0.2191 | 0.1274 | 0.1805 |
| Average | 0.1500 | 0.1489 | 0.1432 | 0.1165 |
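Relative-importance figures like those in Table 7 are commonly obtained with Garson's algorithm, which partitions the trained connection weights among the inputs. The paper does not spell out its exact procedure, so the following is a hedged illustration for a single-hidden-layer case with made-up weights, not the authors' computation.

```python
import numpy as np

def garson(W_ih, W_ho):
    """Garson relative importance of inputs from trained weights.

    W_ih: input-to-hidden weights, shape (n_inputs, n_hidden)
    W_ho: hidden-to-output weights, shape (n_hidden, n_outputs)
    Returns importances that sum to 1.
    """
    # absolute contribution of each input routed through each hidden neuron,
    # weighted by that neuron's total absolute outgoing weight
    c = np.abs(W_ih) * np.abs(W_ho).sum(axis=1)   # (n_inputs, n_hidden)
    c = c / c.sum(axis=0)                         # share within each neuron
    ri = c.sum(axis=1)                            # accumulate over neurons
    return ri / ri.sum()

# Made-up weights standing in for a trained 4-input, 5-hidden-neuron network
# (inputs ordered RS, TC, SC, FQ; outputs YP, YZ, YS)
rng = np.random.default_rng(1)
W_ih = rng.normal(size=(4, 5))
W_ho = rng.normal(size=(5, 3))
importance = garson(W_ih, W_ho)
```

Averaging such per-output importances across the three performance indices yields one importance figure per input, analogous to the "Average" row of Table 7.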
