Article

Potential for Prediction of Water Saturation Distribution in Reservoirs Utilizing Machine Learning Methods

1 School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 National & Local Joint Engineering Lab for Big Data Analysis and Computer Technology, Beijing 100190, China
3 Research Institute of Petroleum Exploration and Development, PetroChina, Beijing 100083, China
4 Department of Petroleum Engineering, Texas A&M University at Qatar, Doha 999043, Qatar
5 Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Energies 2019, 12(19), 3597; https://doi.org/10.3390/en12193597
Submission received: 12 August 2019 / Revised: 13 September 2019 / Accepted: 16 September 2019 / Published: 20 September 2019

Abstract

Machine learning technology is becoming increasingly prevalent in the petroleum industry, especially for reservoir characterization and drilling problems. The aim of this study is to present an alternative way to predict water saturation distribution in reservoirs with a machine learning method. We utilized a Long Short-Term Memory (LSTM) network to build a prediction model for water saturation distribution. A dataset derived from the monitoring and simulation of an actual reservoir was used for model training and testing. After training, the model was validated and used to forecast water saturation distribution, pressure distribution, and oil production. We also compared LSTM with two other popular machine learning methods, the standard Recurrent Neural Network (RNN) and the Gated Recurrent Unit (GRU). The results show that the LSTM method performs well on water saturation prediction, with an overall AARD below 14.82%, and achieves better calculation accuracy than GRU and standard RNN. This study presents an alternative way to obtain a quick and robust prediction of the water saturation distribution in a reservoir.

1. Introduction

In the oilfield development process, due to reservoir heterogeneity and varying production modes, large amounts of remaining oil are often left in reservoirs. As a result, predicting the remaining oil or water saturation distribution is of great significance for future oilfield development [1]. Accurate prediction of the water saturation distribution in a reservoir facilitates tapping the remaining oil.
There are currently two main traditional approaches for analyzing and predicting water saturation. On the one hand, some researchers calculate the water saturation in the reservoir from petrophysical models. Archie [2] used formation resistivity and porosity to calculate the water saturation Sw. Other researchers adopted saturation–height functions, such as the Leverett J function [3] and the Heseldin method [4], to formulate capillary pressure data, from which water saturation can be obtained. These methods are relatively time-consuming, and their sampling range can hardly capture reservoir heterogeneity. On the other hand, numerical simulation has been widely used in the petroleum industry. A variety of simulation techniques have evolved for different development conditions and reservoir types [5,6,7,8,9]. Reservoir simulation currently plays a dominant role in formulating reservoir development plans and improving oil recovery [10]. Against this background, many researchers have studied water saturation distribution through numerical simulation for different types of problems, such as water imbibition [11,12,13], the performance of EOR (enhanced oil recovery) methods [14,15], and unconventional reservoir exploitation [16,17]. However, history matching and forecast calculation take a long time, and the prediction cost is relatively high.
In recent years, data-driven approaches and artificial intelligence technology have proven advantageous for quick exploitation decisions, with or without a physical model. Taking the prediction of remaining oil saturation in a reservoir as an example, machine learning methods show good ability to solve this problem. By combining machine learning results with numerical simulation, reservoir engineers can propose rational scenarios for tapping remaining oil and for the long-term healthy development of oilfields. In previous studies, a number of machine learning methods have been applied in the petroleum industry to analyze data, find patterns, and predict target variables [18]. These applications usually serve one of two purposes. The first is to figure out the relationship between petrophysics and reservoir properties. Silpngarmlers et al. [19] used the BP neural network method to learn relative permeability curve data from a number of papers and experiments, so as to develop liquid/liquid and liquid/gas two-phase relative permeability predictors. Talebi et al. [20] utilized two improved algorithms, the Multilayer Perceptron (MLP) neural network and the Radial Basis Function (RBF) network, for efficient estimation of the saturation pressure of reservoir oil. Nouri-Taleghani et al. [21] used three machine learning methods separately to predict fracture density with a full set of log data as inputs. Masoudi et al. [22] adopted a Bayesian Network and the K2 algorithm to find interrelationships between petrophysical parameters and optimum production features. Hegde et al. [23] conducted real-time rate of penetration (ROP) optimization in drilling utilizing data-driven models. Tian and Horne [24] applied three machine learning methods, linear regression, convolution kernel, and ridge regression, to interpret flow-rate, pressure, and temperature data from permanent downhole gauges. The second purpose is to forecast future well production performance. Gupta et al. [25] forecasted gas production in unconventional resources using data mining and time series analysis. Schuetter et al. [26] adopted simple regression, random forest (RF), support-vector regression (SVR), gradient-boosting machine (GBM), and multidimensional Kriging to predict oil production in an unconventional shale reservoir, focusing on the establishment of robust predictive models. Ma et al. [27] predicted oil production using a novel multivariate nonlinear model based on the traditional Arps decline model and a kernel method. Kamari et al. [28] used the least-squares support vector machine method (LS-SVM) to develop a robust model for predicting surfactant-polymer flooding performance. Gomez [29] and Mohaghegh [30] successfully predicted well performance and reservoir characteristics, including water saturation distribution, with data-driven reservoir modeling, although without a detailed description of the implementation. These attempts indicate that data-driven methods are feasible and effective for characterizing reservoirs and facilitating production. In practice, a typical machine learning approach should be selected or modified appropriately for the production problem at hand to produce the best results, and a specific description of the machine learning implementation should benefit reservoir engineers in the petroleum industry.
Combined with a specific machine learning approach, the aim of this study is to predict the water saturation distribution in future production, which is a typical time-series problem. The Recurrent Neural Network (RNN) is well suited to such problems: the output of the previous moment forms part of the input of the next time step, which is appropriate for short-term prediction. However, the long-term dependency problem of the RNN limits its further development. To overcome this drawback, the Long Short-Term Memory (LSTM) method was introduced. As a variant of the RNN, LSTM has gradually become a research hotspot in the field of machine learning in recent years, with significant results in language translation, speech recognition, and machine reading [31,32,33,34,35]. LSTM solves the long-term dependency problem through its unique data preservation mechanism, which can remember long-term data properties. This characteristic gives it great advantages in dealing with large-scale multi-dimensional data and time-series problems. In the petroleum industry, LSTM has been used to perform secondary generation of well logging data [36], forecast the production decline of multiple wells [37], and predict the rate of penetration (ROP) from recorded drilling data [38]. However, this method has not yet been used to predict the water saturation distribution in a reservoir. Given the huge amount of data and the time dependence of water saturation, LSTM is a comparatively appropriate machine learning method for water saturation prediction.
This paper is organized in four parts. First, a water saturation prediction model was established utilizing an LSTM neural network, covering data processing, model training, and testing. Then, the model was calibrated and validated quantitatively for further prediction. After that, the prediction and analysis of water saturation distribution were carried out to tap remaining oil. Finally, a comparison of accuracy and computational time between different machine learning methods is presented. This study provides an alternative way to obtain a quick and robust prediction of the water saturation distribution in a reservoir without empirical correlations or numerical simulation.

2. Methodology

2.1. Data Model for Water Saturation Prediction with LSTM

Neural networks take many different forms. Most are designed to process static data, that is, samples that are independent and have no connection with each other. However, practical cases involving time-series problems require a network that not only computes on a single sample but can also remember characteristics, much like a human. The Recurrent Neural Network (RNN) method handles such serialized problems well, but the standard RNN has an obvious weakness, the long-term dependency problem [39]. To overcome this defect, researchers developed the Long Short-Term Memory (LSTM) neural network.
LSTM is an RNN variant, optimized on the basis of the standard RNN, that is able to learn long-term information. It was first introduced by Hochreiter and Schmidhuber in 1997 [40], has been widely used in different fields in recent years, and has achieved good performance on a range of research problems. The core idea of LSTM is to add three functional gates to the standard structure. The repeating network structure of LSTM is shown in Figure 1, where x_t is the input data of a single step, h_t is the output data of this step, and h_{t-1} represents the output data of the last step, which is also fed back into the network.
Compared with the standard RNN structure, LSTM has an extra line called the cell state C at the top of Figure 1, which can store information from the past, such as months of production history and water saturation distribution. It runs straight down the entire chain, like the storyline of a book, and after each calculation the information in the cell state is updated.
The first key functional gate is the "forget gate layer". This gate determines which part of the information should be discarded from the cell state. After analyzing the input parameter x_t and the output of the last time step h_{t-1}, the forget gate layer produces a value f_t between 0 and 1 to apply to the cell state C_{t-1}, using the sigmoid function σ of Equation (1). A value of 1 means "completely keep this" and 0 means "completely discard this", as shown in Equation (2) [41]:
$$\sigma(x) = \frac{1}{1 + e^{-x}} \tag{1}$$
$$f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right) \tag{2}$$
Another functional gate is the "input gate layer", which determines what new information from the current time step's input needs to be stored in the cell state, as follows [41]:
$$i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right) \tag{3}$$
$$\tilde{C}_t = \mathrm{AF}\left(W_C \cdot [h_{t-1}, x_t] + b_C\right) \tag{4}$$
Equation (3) plays a similar role to Equation (2): it yields a value between 0 and 1 that determines how much information is added to the cell state. An activation function (AF) is used to process the input data x_t and h_{t-1} in Equation (4). Combining these two factors, i_t and C̃_t, determines the specific information stored in the cell state.
The old state C_{t-1} is multiplied by f_t to forget the information the network decided to discard earlier, and i_t ∗ C̃_t is then added. The result is the updated cell state C_t of this time step, as follows [41]:
$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t \tag{5}$$
Finally, the last gate, which controls the output at every step, is the "output gate layer". It generates the output information h_t, which can include the water saturation we want, based on the updated cell state C_t, the input data x_t, and h_{t-1}, as follows [41]:
$$h_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right) * \mathrm{AF}(C_t) \tag{6}$$
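For concreteness, Equations (1)–(6) can be collected into a single forward step. The following minimal NumPy sketch implements one LSTM cell update, assuming AF is the hyperbolic tangent (the common choice); the weights and dimensions are illustrative placeholders, not those of the trained model in this paper.

```python
import numpy as np

def sigmoid(x):
    # Equation (1): logistic function squashing values into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM cell update following Equations (2)-(6).

    W and b hold the stacked weights/biases of the four gates:
    forget (f), input (i), candidate (C), output (o).
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # Eq. (2): forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])       # Eq. (3): input gate
    C_hat = np.tanh(W["C"] @ z + b["C"])     # Eq. (4): candidate state, AF = tanh
    C_t = f_t * C_prev + i_t * C_hat         # Eq. (5): updated cell state
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate
    h_t = o_t * np.tanh(C_t)                 # Eq. (6): step output
    return h_t, C_t

# Illustrative dimensions only: 4 input features, hidden size 3
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
W = {k: rng.standard_normal((n_h, n_h + n_in)) for k in "fiCo"}
b = {k: np.zeros(n_h) for k in "fiCo"}
h, C = np.zeros(n_h), np.zeros(n_h)
h, C = lstm_step(rng.standard_normal(n_in), h, C, W, b)
```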

2.2. Training Procedure

2.2.1. Data Acquisition and Processing

This paper selects a block in the Middle East as the research object, with a burial depth of about 2400 m, an average permeability of 57.4 mD, an average porosity of 17.1%, and an initial formation pressure of 26.5 MPa. The exploitation conditions are relatively ideal. There are 12 functional wells in this block, all vertical. Nine are production wells, numbered R-025, R-074, R-205, R-278, R-370, R-408, R-418, R-420, and R-526. The other three are water injection wells, numbered R-213, R-287, and R-366. The well locations in the block are shown in Figure 2. The block began production on August 1, 2004. In the beginning, production relied on formation energy; then, from the third year of production, water injection was introduced to support oil well production. The daily oil production per well ranged from 100 m3/day to 1000 m3/day.
Monitoring data can be obtained at the wells; however, the water saturation distribution within the reservoir can only be acquired by numerical simulation based on well history data. As a result, the dataset for machine learning combines monitoring and simulation data, which also reflects a combination of data science and physical law. The trained data model can then predict water saturation in the same block much faster than numerical simulation based on a physical model. Based on the block geological model and well history information, the entire dataset was obtained by numerically simulating 148 months of production in this block, starting from August 1, 2004. In the history matching, the reservoir was divided into 25 × 24 grid blocks in the X-Y plane and 34 layers in the longitudinal direction. The permeability distribution, porosity distribution, and initial water saturation distribution of the 22nd layer, one of the main oil-producing layers in the reservoir, are shown in Figure 2.
In order for the model to predict the distribution of water saturation efficiently, reasonable input and output parameters should be assigned in advance. The input/output parameters in this study and their data formats for one month are shown in Table 1 and Table 2.
Data normalization is the basic step of data processing. In an artificial neural network with multiple input parameters, different inputs can have different units and scales. If the values of different parameters differ greatly, interference between the parameters becomes significant. In order to eliminate this influence and improve the computational efficiency of the model, the data were standardized in advance as follows [42]:
$$X' = \frac{X - \mu}{\delta}, \tag{7}$$
where X is the initial value, X′ is its normalized value, and μ and δ are the mean and standard deviation, respectively.
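A minimal sketch of this standardization step, assuming (as the paper does not state explicitly) that the mean and standard deviation are computed per input parameter on the training data and reused for the test data:

```python
import numpy as np

def standardize(X_train, X_test):
    """Z-score normalization X' = (X - mu) / delta, per feature column.

    Statistics come from the training data only, so no future
    information leaks into training.
    """
    mu = X_train.mean(axis=0)
    delta = X_train.std(axis=0)
    delta[delta == 0] = 1.0          # guard against constant features
    return (X_train - mu) / delta, (X_test - mu) / delta

# Example with random placeholder data (12 monthly rows, 5 features)
rng = np.random.default_rng(1)
train, test = rng.normal(5, 2, (12, 5)), rng.normal(5, 2, (6, 5))
train_n, test_n = standardize(train, test)
```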

2.2.2. Machine Learning Training Design

After obtaining the dataset, it is necessary to consider its division into training, validation, and test sets [43]. Previous studies show that the dataset partition has a certain influence on the training results and that reasonable partitioning can effectively improve the accuracy of the model [28]. The dataset in this paper was divided as shown in Table 3.
The flow-process diagram for the usage of the datasets is shown in Figure 3. First, the 148-month history dataset was obtained after data processing. The dataset was then divided into the 1st–100th months for the training and validation sets and the 101st–148th months for the test set. The training set is used for LSTM model training. By learning the information in the training set, the LSTM model can predict the results of the 101st–148th months. Finally, the LSTM prediction results can be compared with the test set data, and the prediction evaluated through this comparison.
In this paper, the deep learning framework Keras was utilized to build the LSTM model. Before an LSTM model is built, the length of the input data sequence should be defined; it can be called the "moving time window" or "sliding window". In this model, the length of the moving time window was 10 months. The principle of the LSTM model calculation process is illustrated in Figure 4. At the beginning of the calculation, 10 months of input data are supplied, from which the model predicts the 11th month's data. The 11th month's prediction results are then added back into the moving time window (together with the other input data of the 11th month). In this way, the 12th month's prediction can be produced from the updated window data (2nd month to 11th month). By continuously cycling this step, the whole calculation is eventually completed.
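The cycling step described above can be expressed as a short loop. The sketch below is a hypothetical illustration around a trained Keras-style model: the function and variable names are ours, and it assumes each monthly input vector consists of the model's predicted outputs followed by known control inputs such as the injection schedule.

```python
import numpy as np

def rollout(model, last_window, future_controls):
    """Autoregressive prediction with a moving time window.

    last_window:     the final `window` months of normalized input
                     vectors, shape (window, n_features).
    future_controls: known future control inputs per month, shape
                     (n_steps, n_features - n_outputs).
    """
    window_data = last_window.copy()
    predictions = []
    for controls in future_controls:
        # Predict the next month from the current 10-month window
        y = model.predict(window_data[np.newaxis, ...], verbose=0)[0]
        predictions.append(y)
        # Feed the prediction back in and slide the window forward
        next_row = np.concatenate([y, controls])
        window_data = np.vstack([window_data[1:], next_row])
    return np.array(predictions)
```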
In the training process, a regularization procedure is needed to prevent overfitting and gradient vanishing problems. Random dropout is one method of solving the overfitting problem: during learning, the weights of some randomly chosen nodes in the hidden layer are set to zero. In this way, the model does not rely too heavily on particular local features, and the structural risk is reduced [44,45,46].
In addition to dropout, the activation function is also a key factor in the model. Its purpose is to add nonlinearity between the input and output of the neural network. Common activation functions include Sigmoid, Tanh, and ReLU. The function curves and mathematical equations of these three activation functions are shown in Figure 5.
As Figure 5 shows, Sigmoid compresses values into the range (0, 1). However, because of its soft saturation characteristic, Sigmoid easily produces the gradient vanishing problem, which hampers training. Tanh compresses values into the range (−1, 1). This solves the problem that Sigmoid's output is not zero-centered, and Tanh converges faster than Sigmoid, but Tanh does not solve the saturation problem either. In summary, while Sigmoid neurons are more biologically plausible than Tanh neurons, the latter work better for training multi-layer neural networks [47].
In this paper, the rectified linear unit (ReLU) was used as the activation function. Its advantage is that its functional form is simple (Sigmoid and Tanh involve exponential operations when differentiated, whereas ReLU requires almost no computation), and it converges much faster than Sigmoid and Tanh [47,48].
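Assembled in Keras, the network described in this section (input, dropout, two hidden LSTM layers with ReLU activation, dropout, output) can be sketched as follows. The sizes here are placeholders; the actual configuration is listed in Table 4, and the mean squared error loss is our assumption, as the paper does not name its loss function.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.optimizers import Adam

# Placeholder sizes; the real model uses the values in Table 4
# (window = 10 months, 81,611 input neurons, 40,809 output neurons,
# 300 units per hidden layer, dropout rate 0.3, learning rate 0.01).
window, n_in, n_out, units = 10, 64, 16, 32

model = Sequential([
    Dropout(0.3, input_shape=(window, n_in)),               # input + dropout
    LSTM(units, activation="relu", return_sequences=True),  # hidden layer 1
    Dropout(0.3),
    LSTM(units, activation="relu"),                         # hidden layer 2
    Dense(n_out),                                           # outputs
])
# Loss function assumed: mean squared error
model.compile(optimizer=Adam(learning_rate=0.01), loss="mse")
model.summary()
```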

2.2.3. Model Evaluation Criteria

When the calculation is completed, the model needs to be evaluated with statistical indicators for verification. This paper used several error criteria to study the accuracy of the model results: the average relative deviation (ARD), average absolute relative deviation (AARD), coefficient of determination (R2), and root mean square error (RMSE), as follows [49,50]:
$$\mathrm{ARD}\% = \frac{100}{N} \sum_{i=1}^{N} \left( \frac{X_i^{\mathrm{data}} - X_i^{\mathrm{model}}}{X_i^{\mathrm{data}}} \right) \tag{8}$$
$$\mathrm{AARD}\% = \frac{100}{N} \sum_{i=1}^{N} \left| \frac{X_i^{\mathrm{data}} - X_i^{\mathrm{model}}}{X_i^{\mathrm{data}}} \right| \tag{9}$$
$$R^2 = 1 - \frac{\sum_{i=1}^{N} \left( X_i^{\mathrm{data}} - X_i^{\mathrm{model}} \right)^2}{\sum_{i=1}^{N} \left( X_i^{\mathrm{data}} - \overline{X^{\mathrm{data}}} \right)^2} \tag{10}$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( X_i^{\mathrm{data}} - X_i^{\mathrm{model}} \right)^2} \tag{11}$$
where N represents the total number of data points in each set; for the evaluation of water saturation prediction, for example, N = 48 (months) × 25 × 24 × 34 (water saturation data points for a single month). X_i^data is the data value from each set, and X_i^model is the corresponding value calculated by the LSTM model.
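The four criteria can be computed in a few lines; the following NumPy sketch is a direct transcription of Equations (8)–(11), with small placeholder arrays standing in for the real prediction and test data.

```python
import numpy as np

def evaluate(data, model):
    """ARD%, AARD%, R^2 and RMSE between observed and predicted values."""
    data, model = np.asarray(data, float), np.asarray(model, float)
    rel = (data - model) / data
    ard = 100.0 * rel.mean()                          # Eq. (8)
    aard = 100.0 * np.abs(rel).mean()                 # Eq. (9)
    ss_res = np.sum((data - model) ** 2)
    ss_tot = np.sum((data - data.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                        # Eq. (10)
    rmse = np.sqrt(((data - model) ** 2).mean())      # Eq. (11)
    return {"ARD%": ard, "AARD%": aard, "R2": r2, "RMSE": rmse}

# Example: compare a few predicted saturations to observed values
obs = np.array([0.35, 0.42, 0.58, 0.61])
pred = np.array([0.33, 0.45, 0.55, 0.64])
print(evaluate(obs, pred))
```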

3. Results and Discussion

3.1. LSTM Model Calibration

In this section, LSTM model calibration experiments were carried out to determine optimal model parameter values. Three model parameters were selected for study and discussion: the length of the moving time window (default value 10), the dropout rate (default value 0.1), and the number of hidden layers (default value 1).
First, we trained the same network architecture with different lengths of the moving time window, varying from 5 to 25 months. With other parameters fixed, the AARD of the water saturation distribution prediction (48 months) for different window sizes is shown in Figure 6.
It can be seen from Figure 6 that, as the window length increases, the AARD curve first decreases dramatically. When the window length reaches 10 months, the AARD curve flattens out at around 12%. As the window length increases further, the AARD increases slightly. The window length is significant for an LSTM model: if it is too small, the information in the window cannot support good prediction performance; conversely, if it is too large, important information at specific time nodes can be weakened. For different problems and different kinds of data, the optimal window length can only be found through repeated experiments.
Then, the same network architecture was trained with dropout rates varying from 0.1 to 0.9. With other parameters fixed, the AARD of the prediction results (48 months) for different dropout rates is shown in Figure 7.
It can be seen from Figure 7 that, as the dropout rate increases, the prediction error first drops dramatically. The AARD stabilizes when the dropout rate ranges from 0.3 to 0.7 and then increases as the dropout rate approaches 0.9. In other words, there is an optimal dropout rate for this model, in the range 0.3–0.7.
The number of hidden layers was also studied. The same network architecture was trained with 1 to 4 hidden layers. With other parameters fixed, the AARD of the prediction results (48 months) for different numbers of hidden layers is shown in Figure 8.
From Figure 8, it can be seen that the number of hidden layers has a positive relationship with the model's prediction performance in this study: as the number of hidden layers increases, the prediction error goes down. The decrease is pronounced from 1 to 2 layers (the AARD of water saturation decreases from 20.22% to 12.46%) and relatively slight from 2 to 4 layers (from 12.46% to 11.07%). In theory, we could keep increasing the number of hidden layers to obtain better prediction results; however, more hidden layers mean longer computing time and can lead to overfitting. Hence, 2 hidden layers are recommended for this model.

3.2. Model Validation

Following the model training procedure described in Section 2 and the calibration results in Section 3.1, an LSTM model was developed. The values of all the necessary parameters are shown in Table 4.
In the training process, a loss function is usually adopted to measure the model's ability to fit the training data. By monitoring the decreasing trend of the loss curve, it can be determined whether the model has achieved its best performance. Figure 9 shows the loss decline curves during training for pressure distribution, water saturation distribution, and oil rate. The model converges on pressure after 104 epochs, on water saturation after 179 epochs, and on oil rate after 32 epochs. The calculation continued until the maximum number of epochs (500) was reached.
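In Keras, this loss monitoring is obtained directly from the fit() call. The sketch below, with placeholder data and layer sizes rather than the paper's configuration, shows training to the 500-epoch limit and reading back the loss curve plotted in Figure 9:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Placeholder data: 90 training windows of 10 months x 8 features -> 4 outputs
rng = np.random.default_rng(0)
X, y = rng.normal(size=(90, 10, 8)), rng.normal(size=(90, 4))

model = Sequential([LSTM(32, input_shape=(10, 8)), Dense(4)])
model.compile(optimizer="adam", loss="mse")

# Train to the maximum 500 epochs; Keras records the loss per epoch
history = model.fit(X, y, validation_split=0.15, epochs=500,
                    batch_size=16, verbose=0)

# history.history["loss"] is the curve of Figure 9; a flat tail
# indicates the model converged before the epoch limit
print("final training loss:", history.history["loss"][-1])
```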
To verify the model, it is necessary to analyze the accuracy of its calculation results. The evaluation criteria of Section 2.2.3 were used to quantitatively evaluate the LSTM model. The analysis of the pressure and water saturation results is shown in Table 5.
The results in Table 5 indicate that the model performs well in both the training and testing processes. Considering the large amount of data, the overall AARD of the model is still controlled below 15%, the ARD is below 6.5%, and the accuracy of the model is over 80%. In other words, Table 5 indicates that acceptable results were obtained and the prediction model is dependable. Moreover, the results clearly show that the LSTM recurrent neural network is capable of efficiently predicting the water saturation distribution in a multi-layer reservoir.
Oil production is a significant output parameter. The comparison between the production curve from the well record and the LSTM prediction for production well R-025 over the prediction period is shown in Figure 10. It indicates that the prediction results for oil production are acceptable, with good precision.
To display the prediction results intuitively, the output was split by parameter and reprocessed into the initial data format, then imported into MATLAB for post-processing. Images of the pressure and water saturation distributions for different layers at different times can then be produced. The 22nd layer at the 48th month of prediction (148th month in the whole time sequence) was selected as an example. The comparison of the pressure distribution between the test set data and the LSTM prediction is shown in Figure 11.
It can be seen from Figure 11 that the pressure distribution predicted by the LSTM model is clearly similar to that of the test set, which indicates that the model learned the pressure fluctuation near the wellbore well. To study the prediction accuracy quantitatively, cross plots of the pressure data are shown in Figure 11c. The abscissa represents the data in Figure 11a and the ordinate the results predicted by the LSTM network model in Figure 11b. The red line stands for the line y = x, the ideal condition in which the prediction results are completely consistent with the test set data. The blue points represent the 600 data points (25 × 24) from the LSTM. The accuracy of the model prediction can be judged by the degree of coincidence between the blue data points and the red line. According to Figure 11c, the model has good calculation accuracy for a single layer at each time node in terms of pressure distribution.
The comparison results for water saturation are shown in Figure 12. They show that the LSTM model predicts the change of the water saturation distribution in the reservoir under water-injection conditions well, and the predictions respond strongly to the saturation changes near the injection wells. In other words, the prediction results of the water saturation distribution are credible, and the LSTM model has proved capable of dealing with the water saturation prediction problem.

3.3. Water Saturation Prediction Analysis

After validation, a detailed analysis of the calculation results was conducted. First, compared with numerical simulation, data-driven methods have an advantage in computing time. The computational times of LSTM and of the numerical history matching that generated part of the training and test sets in this study are shown in Table 6.
Table 6 shows that the LSTM needs a noticeable amount of time for model training and validation (4 min 26 s). However, once the model has been built, prediction is tremendously fast, taking only a few seconds. Although the machine learning method appears much faster than numerical simulation, the simulation still has to be run to produce the dataset before the machine learning work begins, which weakens this advantage in computational time. Conversely, if the dataset came entirely from monitoring data rather than numerical simulation, the advantage would fully manifest.
Besides computing time, the predicted water saturation distribution itself warrants further analysis. From the distribution of water saturation, the location of the remaining oil in the reservoir and the future exploitation potential can be identified. The water saturation distributions of three main oil-producing layers (the 12th, 14th, and 27th layers) at the 48th month of prediction (148th month in the whole time sequence) are shown in Figure 13.
Figure 13 shows that the 12th and 14th layers of the reservoir have a higher water saturation level and a greater degree of development. In contrast, large areas of the 27th layer have not been thoroughly exploited. In general, the reserves tapped in the block are not large, and each layer still has considerable undeveloped area, as marked by the red circles in Figure 13. Oil saturation remains high near wells R-418, R-074, and R-025, so it is necessary to formulate an exploitation plan for these regions in future development. For example, in the regions inside the red circles, well-pattern infilling could be carried out; it is reasonable to shorten the well spacing from 1000 m (at present) to 200 m. In addition, R-420 could be a good candidate for conversion to an injection well, because a large amount of oil remains near R-420 according to Figure 13c.

3.4. Comparison Between Standard RNN, LSTM and GRU

To further study the performance of the LSTM neural network, this paper compared the prediction results of three different machine learning methods: the standard Recurrent Neural Network (standard RNN), the Gated Recurrent Unit (GRU), and LSTM. GRU was introduced in 2014 [51]. It is a variant of the LSTM recurrent network [52], and the two have many structural similarities. GRU combines the forget and input gates into a single "update gate" and merges the cell state and hidden state. This saves computational resources and reduces training difficulty while achieving a training effect close to that of LSTM, so GRU is also gradually coming into wide use [53,54,55].
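Because Keras exposes all three methods as interchangeable recurrent layer classes, the comparison reduces to swapping a single layer type. The following sketch (with placeholder sizes, not the paper's configuration) illustrates building the three otherwise identical variants:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, GRU, LSTM, Dense, Dropout

def build(cell, window=10, n_in=64, n_out=16, units=32):
    """Same architecture throughout; only the recurrent cell type differs."""
    return Sequential([
        cell(units, return_sequences=True, input_shape=(window, n_in)),
        Dropout(0.3),
        cell(units),
        Dense(n_out),
    ])

# Three models that differ only in the recurrent cell
models = {"standard RNN": build(SimpleRNN),
          "GRU": build(GRU),
          "LSTM": build(LSTM)}
for name, m in models.items():
    m.compile(optimizer="adam", loss="mse")
```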
The AARD of the prediction results for the three machine learning methods is shown in Figure 14.
Figure 14 shows that the prediction performances of the different neural network methods are distinct. In general, LSTM performs best, followed by GRU, while the standard RNN performs worst on this problem. With three hidden layers, the LSTM's pressure AARD is 14.05% and its water saturation AARD is 11.26%, which are acceptable. The accuracy of GRU is slightly lower: with three hidden layers, the pressure and water saturation AARDs of the GRU predictions are 15.32% and 18.23%, respectively, so GRU could be another choice besides LSTM for this problem. In contrast, the standard RNN performs relatively badly; its AARDs for pressure and water saturation are both over 24%, which indicates that the standard RNN is not appropriate for the water saturation distribution prediction problem in this paper.

4. Conclusions

This study presented an alternative to numerical simulation for quick and robust prediction of the water saturation distribution in a reservoir. A reservoir water saturation distribution predictor was developed utilizing a Long Short-Term Memory recurrent neural network. An actual reservoir dataset derived from monitoring and simulation was utilized for model training and testing. With the trained model, the water saturation distribution, pressure distribution, and oil production can be forecast faster than with numerical simulation. From the analysis of the calculation results, the following conclusions can be drawn:
1. The LSTM method performs well on water saturation prediction. Even with a large data volume, the overall AARD is controlled below 14.82%, proving the model valid and reliable.
2. On the basis of model validation, the model can predict the water saturation distribution and suggest measures for tapping the remaining oil in future production.
3. Compared with numerical simulation, machine learning methods hold a great advantage in computation time, which could be further exploited in the future.
4. Different machine learning methods perform distinctly in water saturation prediction. The accuracy of LSTM is better than that of GRU and standard RNN; GRU is an alternative choice, although its accuracy is slightly lower than LSTM's.

Author Contributions

Conceptualization, H.S. and Q.Z.; methodology, S.D.; software, Y.Z.; validation, Q.Z., Y.W.; formal analysis, S.D.; investigation, Q.Z.; resources, C.W.; writing—original draft preparation, Q.Z.; writing—review and editing, Q.Z.; supervision, H.S.; project administration, H.S.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities of China (Grant No. FRF-TP-19-005B1) and National Natural Science Foundation of China (Grant No.51974357).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yue, M.; Zhu, W.; Han, H.; Song, H.; Long, Y.; Lou, Y. Experimental research on remaining oil distribution and recovery performances after nano-micron polymer particles injection by direct visualization. Fuel 2018, 212, 506–514. [Google Scholar] [CrossRef]
  2. Archie, G.E. Electrical resistivity an aid in core-analysis interpretation. AAPG Bull. 1947, 31, 350–366. [Google Scholar]
  3. Leverett, M. Capillary behavior in porous solids. Trans. AIME 1941, 142, 152–169. [Google Scholar] [CrossRef]
  4. Heseldin, G.M. A method of averaging capillary pressure curves. In Proceedings of the SPWLA Fifteenth Annual Logging Symposium, McAllen, TX, USA, 2–5 June 1974. [Google Scholar]
  5. Ertekin, T.; Abou-Kassem, J.H.; King, G.R. (Eds.) Basic Applied Reservoir Simulation; SPE: Richardson, TX, USA, 2001. [Google Scholar]
  6. Karimi-Fard, M.; Durlofsky, L.J.; Aziz, K. An efficient discrete fracture model applicable for general purpose reservoir simulators. In Proceedings of the SPE Reservoir Simulation Symposium, Houston, TX, USA, 3–5 February 2003. [Google Scholar]
  7. Chen, Z.; Huan, G.; Ma, Y. (Eds.) Computational Methods for Multiphase Flows in Porous Media; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2006. [Google Scholar]
  8. Chen, Z. (Ed.) Reservoir Simulation: Mathematical Techniques in Oil Recovery; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2007. [Google Scholar]
  9. Chen, Z. (Ed.) The Finite Element Method: Its Fundamentals and Applications In Engineering; World Scientific: Hackensack, NJ, USA, 2011. [Google Scholar]
  10. Edwards, D.; Gunasekera, D.; Morris, J. Reservoir simulation: Keeping pace with oilfield complexity. Oilfield Rev. 2012, 5, 4–15. [Google Scholar]
  11. Tuero, F.; Crotti, M.M.; Labayen, I. Water Imbibition EOR Proposal for Shale Oil Scenarios. In Proceedings of the SPE Latin America and Caribbean Petroleum Engineering Conference, Buenos Aires, Argentina, 17–19 May 2017. [Google Scholar]
  12. Almulhim, A.; Alharthy, N.; Tutuncu, A.N.; Kazemi, H. Impact of Imbibition Mechanism on Flowback Behavior: A Numerical Study. In Proceedings of the Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, UAE, 10–13 November 2014. [Google Scholar]
  13. Zhang, F.; Saputra, I.W.; Niu, G.; Adel, I.A.; Xu, L.; Schechter, D.S. Upscaling Laboratory Result of Surfactant-Assisted Spontaneous Imbibition to the Field Scale through Scaling Group Analysis, Numerical Simulation, and Discrete Fracture Network Model. In Proceedings of the SPE Improved Oil Recovery Conference, Tulsa, OK, USA, 14–18 April 2018. [Google Scholar]
  14. Yuan, C.; Delshad, M.; Wheeler, M.F. Parallel Simulations of Commercial-Scale Polymer Floods. In Proceedings of the SPE Western Regional Meeting, Anaheim, CA, USA, 27–29 May 2010. [Google Scholar]
  15. Ahmed, S.A.; Hayder, E.M.; Baddourah, M.; Abd Karim, M.G.; Hidayat, W. Using Streamline and Reservoir Simulation to Improve Water Flood Management. In Proceedings of the SPE Middle East Oil and Gas Show and Conference, Manama, Bahrain, 25–28 September 2011. [Google Scholar]
  16. Liu, H.; Wu, S. The Numerical Simulation for Multi-Stage Fractured Horizontal Well in Low permeability reservoirs Based on Modified Darcy’s Equation. In Proceedings of the SPE/IATMI Asia Pacific Oil & Gas Conference and Exhibition, Nusa Dua, Bali, Indonesia, 20–22 October 2015. [Google Scholar]
  17. Liu, Y.; Leung, J.Y.; Chalaturnyk, R.; Virues, C.J.J. Fracturing Fluid Distribution in Shale Gas Reservoirs Due to Fracture Closure, Proppant Distribution and Gravity Segregation. In Proceedings of the SPE Unconventional Resources Conference, Calgary, AB, Canada, 15–16 February 2017. [Google Scholar]
  18. Adedigba, S.A.; Khan, F.; Yang, M. Dynamic failure analysis of process systems using neural networks. Process. Saf. Environ. Protect. 2017, 111, 529–543. [Google Scholar] [CrossRef]
  19. Silpngarmlers, N.; Guler, B.; Ertekin, T.; Grader, A.S. Development and testing of two-phase relative permeability predictors using artificial neural networks. In Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Buenos Aires, Argentina, 25–28 March 2001. [Google Scholar]
  20. Talebi, R.; Ghiasi, M.M.; Talebi, H.; Mohammadyian, M.; Zendehboudi, S.; Arabloo, M.; Bahadori, A. Application of soft computing approaches for modeling saturation pressure of reservoir oils. J. Nat. Gas Sci. Eng. 2014, 20, 8–15. [Google Scholar] [CrossRef]
  21. Nouri-Taleghani, M.; Mahmoudifar, M.; Shokrollahi, A.; Tatar, A.; Karimi-Khaledi, M. Fracture density determination using a novel hybrid computational scheme: A case study on an Iranian Marun oil field reservoir. J. Geophys. Eng. 2015, 12, 188–198. [Google Scholar] [CrossRef]
  22. Masoudi, P.; Asgarinezhad, Y.; Tokhmechi, B. Feature selection for reservoir characterisation by Bayesian network. Arab. J. Geosci. 2015, 8, 3031–3043. [Google Scholar] [CrossRef]
  23. Hegde, C.; Daigle, H.; Gray, K.E. (Eds.) Performance Comparison of Algorithms for Real-Time Rate-of-Penetration Optimization in Drilling Using Data-Driven Models; Society of Petroleum Engineers: Richardson, TX, USA, 2018. [Google Scholar]
  24. Tian, C.; Horne, R. Applying Machine-Learning Techniques to Interpret Flow-Rate, Pressure, and Temperature Data from Permanent Downhole Gauges. In Proceedings of the SPE Western Regional Meeting, Garden Grove, CA, USA, 27–30 April 2019. [Google Scholar]
  25. Gupta, S.; Fuehrer, F.; Jeyachandra, B.C. Production Forecasting in Unconventional Resources using Data Mining and Time Series Analysis. In Proceedings of the SPE/CSUR Unconventional Resources Conference—Canada, Calgary, AB, Canada, 30 September–2 October 2014. [Google Scholar]
  26. Schuetter, J.; Mishra, S.; Zhong, M.; LaFollette, R. A Data-Analytics Tutorial: Building Predictive Models for Oil Production in an Unconventional Shale Reservoir. SPE J. 2018, 23, 1075–1089. [Google Scholar] [CrossRef]
  27. Ma, X.; Liu, Z. Predicting the oil production using the novel multivariate nonlinear model based on Arps decline model and kernel method. Neural Comput. Appl. 2018, 29, 579–591. [Google Scholar] [CrossRef]
  28. Kamari, A.; Gharagheizi, F.; Shokrollahi, A.; Arabloo, M.; Mohammadi, A.H. Integrating a robust model for predicting surfactant–polymer flooding performance. J. Petrol. Sci. Eng. 2016, 137, 87–96. [Google Scholar] [CrossRef]
  29. Gomez, Y.; Khazaeni, Y.; Mohaghegh, S.D.; Gaskari, R. Top-Down Intelligent Reservoir Modeling. In Proceedings of the SPE Annual Technical Conference and Exhibition, New Orleans, LA, USA, 4–7 October 2009. [Google Scholar]
  30. Mohaghegh, S.D. Top-Down Modeling: A Shift in Building Full-Field Models for Mature Fields. J. Pet. Technol. 2016, 68, 22–23. [Google Scholar] [CrossRef]
  31. Mikolov, T.; Karafiát, M.; Burget, L.; Černocký, J.; Khudanpur, S. Recurrent neural network based language model. In Proceedings of the Eleventh Annual Conference of the International Speech Communication Association, Makuhari, Japan, 26–30 September 2010. [Google Scholar]
32. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems; NeurIPS: San Diego, CA, USA, 2014; pp. 3104–3112. [Google Scholar]
  33. Graves, A.; Jaitly, N. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1764–1772. [Google Scholar]
  34. Li, X.; Wu, X. Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, QLD, Australia, 19–24 April 2015; pp. 4520–4524. [Google Scholar]
  35. Cheng, J.; Dong, L.; Lapata, M. Long short-term memory-networks for machine reading. arXiv 2016, arXiv:1601.06733. [Google Scholar]
  36. Zhang, D.; Chen, Y.; Meng, J. Synthetic well logs generation via Recurrent Neural Networks. Petrol. Explor. Dev. 2018, 45, 629–639. [Google Scholar] [CrossRef]
  37. Sun, J.; Ma, X.; Kazi, M. Comparison of Decline Curve Analysis DCA with Recursive Neural Networks RNN for Production Forecast of Multiple Wells. In Proceedings of the SPE Western Regional Meeting, Garden Grove, CA, USA, 22–26 April 2018. [Google Scholar]
  38. Han, J.; Sun, Y.; Zhang, S. A Data Driven Approach of ROP Prediction and Drilling Performance Estimation. In Proceedings of the International Petroleum Technology Conference, Beijing, China, 26–28 March 2019. [Google Scholar]
39. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [Google Scholar] [CrossRef]
  40. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  41. Colah. Understanding LSTM Networks. 2015. Available online: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ (accessed on 27 August 2015).
  42. Karambeigi, M.S.; Zabihi, R.; Hekmat, Z. Neuro-simulation modeling of chemical flooding. J. Petrol. Sci. Eng. 2011, 78, 208–219. [Google Scholar] [CrossRef]
  43. Al-Mudhafar, W.J. Incorporation of Bootstrapping and Cross-Validation for Efficient Multivariate Facies and Petrophysical Modeling. In Proceedings of the SPE Low Perm Symposium, Denver, CO, USA, 5–6 May 2016. [Google Scholar]
  44. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580. [Google Scholar]
  45. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  46. Bouthillier, X.; Konda, K.; Vincent, P.; Memisevic, R. Dropout as data augmentation. arXiv 2015, arXiv:1506.08700. [Google Scholar]
  47. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323. [Google Scholar]
  48. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, Nevada, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  49. Ahmed, O.S.; Adeniran, A.A.; Samsuri, A. Computational intelligence based prediction of drilling rate of penetration: A comparative study. J. Petrol. Sci. Eng. 2019, 172, 1–12. [Google Scholar] [CrossRef]
  50. Soares, C.; Gray, K. Real-time predictive capabilities of analytical and machine learning rate of penetration (ROP) models. J. Petrol. Sci. Eng. 2019, 172, 934–959. [Google Scholar] [CrossRef]
  51. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078. [Google Scholar]
  52. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. 2017, 28, 2222–2232. [Google Scholar] [CrossRef] [PubMed]
  53. Tang, Y.; Huang, Y.; Wu, Z.; Meng, H.; Xu, M.; Cai, L. Question detection from acoustic features using recurrent neural network with gated recurrent unit. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, Shanghai, China, 20–25 March 2016; pp. 6125–6129. [Google Scholar]
  54. Rana, R. Gated recurrent unit (gru) for emotion classification from noisy speech. arXiv 2016, arXiv:1612.07778. [Google Scholar]
  55. Dey, R.; Salemt, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the IEEE 60th International Midwest Symposium on Circuits and Systems, Medford, MA, USA, 6–9 August 2017; pp. 1597–1600. [Google Scholar]
Figure 1. Illustration of the repeating network structure of Long Short-Term Memory (LSTM).
Figure 2. Reservoir properties of a representative layer in the target reservoir in the X-Y plane: (a) Permeability distribution of the 22nd layer; (b) Porosity distribution of the 22nd layer; (c) Initial water saturation distribution of the 22nd layer.
Figure 3. Flow-process diagram for usage of datasets.
Figure 4. Illustration of the LSTM calculation process.
Figure 5. Diagram of different activation functions and their characteristics: (a) Sigmoid; (b) Tanh; (c) ReLU.
Figure 6. Effect of moving window length on prediction performance: (a) error curve of water saturation; (b) error curve of pressure; (c) error curve of oil rate.
Figure 7. Effect of dropout rate on prediction performance: (a) error curve of water saturation; (b) error curve of pressure; (c) error curve of oil rate.
Figure 8. Effect of hidden layer number on prediction performance: (a) error curve of water saturation; (b) error curve of pressure; (c) error curve of oil rate.
Figure 9. Loss decline curves in the training process for pressure, water saturation, and oil production: (a) Loss decline curve for pressure; (b) Loss decline curve for water saturation; (c) Loss decline curve for oil production.
Figure 10. Comparison between actual well record and LSTM prediction in the duration of prediction (101st–148th month in whole time sequence) for oil production.
Figure 11. Comparison between Test set data and LSTM results at 48th month of prediction (148th month in whole time sequence) for pressure distribution: (a) Pressure distribution from Test set; (b) Pressure distribution from LSTM prediction; (c) Cross plot of pressure distribution for 22nd layer; (d) Cross plot of pressure distribution for all layers.
Figure 12. Comparison between Test set data and LSTM results at 48th month of prediction (148th month in whole time sequence) for water saturation distribution: (a) Water saturation distribution from Test set; (b) Water saturation distribution from LSTM prediction; (c) Cross plot of water saturation distribution for 22nd layer; (d) Cross plot of water saturation distribution for all layers.
Figure 13. Remaining oil distribution in different layers at 48th month of prediction (148th month in whole time sequence): (a) Water saturation distribution in 12th layer; (b) Water saturation distribution in 14th layer; (c) Water saturation distribution in 27th layer.
Figure 14. Comparison of average absolute relative deviation (AARD) results between standard Recurrent Neural Network (RNN), LSTM, and Gated Recurrent Unit (GRU): (a) AARD comparison in terms of pressure; (b) AARD comparison in terms of water saturation.
Table 1. Input parameters information in the model.

Parameters | Data Format for One Month
Porosity distribution | 25 × 24 × 34
Permeability distribution | 25 × 24 × 34
Pressure distribution | 25 × 24 × 34
Water saturation distribution | 25 × 24 × 34
Oil production of 9 wells | 9
Water injection of 3 wells | 3
Table 2. Output parameters information in the model.

Parameters | Data Format for One Month
Pressure distribution | 25 × 24 × 34
Water saturation distribution | 25 × 24 × 34
Oil production of 9 wells | 9
Table 3. Dataset partition for machine learning.

Dataset | Size | Unit
Training set | 85 | month
Validation set | 15 | month
Test set | 48 | month
Table 4. Parameters for model training utilizing LSTM network.

Parameters | Value | Unit
Learning rate | 0.01 | #
Window length | 10 | month
Batch size | 128 | #
Number of hidden layers | 2 | #
Hidden layer dimension | 300 | #
Dropout rate | 0.3 | #
Input layer neurons | 81,611 | #
Output layer neurons | 9 for oil rate; 20,400 for pressure distribution; 20,400 for water saturation distribution | #
Total prediction time | 48 | month
Epoch | 500 | #
Sequence of layers | Input-dropout-hidden layer-dropout-hidden layer-output |

The symbol # means dimensionless.
Table 5. The results of statistical error analysis performed in this study for the developed LSTM model.

Pressure distribution
Parameter | Training set & Validation set | Testing set | Total
R2 | 0.954 | 0.913 | 0.941
Average relative deviation, % | 3.27 | 4.11 | 3.46
Average absolute relative deviation, % | 10.85 | 14.98 | 11.97
RMSE, kPa | 2636.49 | 3524.98 | 2947.28

Water saturation distribution
Parameter | Training set & Validation set | Testing set | Total
R2 | 0.939 | 0.923 | 0.933
Average relative deviation, % | 5.52 | 5.89 | 5.75
Average absolute relative deviation, % | 11.35 | 12.41 | 11.81
RMSE, # | 0.0313 | 0.0343 | 0.0323

Production rate
Parameter | Training set & Validation set | Testing set | Total
R2 | 0.972 | 0.961 | 0.965
Average relative deviation, % | 4.72 | 4.99 | 4.79
Average absolute relative deviation, % | 5.89 | 6.28 | 6.01
RMSE, # | 14.6 | 16.8 | 15.2

The symbol # means dimensionless.
Table 6. Comparison of computational time between LSTM and numerical history matching.

Method | Calculation Time with 100 Months | Prediction Time with 48 Months
LSTM | 4 min 26 s | 3 s
Numerical simulation | 12 min 11 s | 3 min 24 s
