Article

Suitability of Different Machine Learning Outlier Detection Algorithms to Improve Shale Gas Production Data for Effective Decline Curve Analysis

by Taha Yehia, Ali Wahba, Sondos Mostafa and Omar Mahmoud
1 Department of Petroleum Engineering, Faculty of Engineering and Technology, Future University in Egypt (FUE), Cairo 11835, Egypt
2 Department of Petroleum Engineering, Faculty of Petroleum and Mining Engineering, Suez University, Suez 43512, Egypt
* Author to whom correspondence should be addressed.
Energies 2022, 15(23), 8835; https://doi.org/10.3390/en15238835
Submission received: 14 October 2022 / Revised: 13 November 2022 / Accepted: 18 November 2022 / Published: 23 November 2022

Abstract

Shale gas reservoirs have huge amounts of reserves. Economically evaluating these reserves is challenging due to complex driving mechanisms, complex drilling and completion configurations, and the complexity of controlling the producing conditions. Decline Curve Analysis (DCA) is historically considered the easiest method for production prediction of unconventional reservoirs as it only requires production history. Besides uncertainties in selecting a suitable DCA model to match the production behavior of the shale gas wells, the production data are usually noisy because of the changing choke size used to control the bottom hole flowing pressure and the multiple shut-ins to remove the associated water. Removing this noise from the data is important for effective DCA prediction. In this study, 12 machine learning outlier detection algorithms were investigated to determine the ones most suitable for improving the quality of production data. Five of them were found to be unsuitable, as they remove complete portions of the production data rather than scattered data points. The other seven algorithms were investigated in depth, assuming that 20% of the production data are outliers. During the work, eight DCA models were studied and applied, and recommendations were made regarding their sensitivity to noise. The results showed that the cluster-based local outlier factor (CBLOF), k-nearest neighbor (KNN), and angle-based outlier detection (ABOD) algorithms are the most effective algorithms for improving the data quality for DCA, while the stochastic outlier selection (SOS) and subspace outlier detection (SOD) algorithms were found to be the least effective. Additionally, DCA models such as the Arps, Duong, and Wang models were found to be less sensitive to removing noise, even with different algorithms, while the power-law exponential (PLE), logistic growth model (LGM), and stretched exponential production decline (SEPD) models showed more sensitivity to removing the noise, with varying performance under different outlier-removal algorithms. This work introduces the best combination of DCA models and outlier-detection algorithms, which could be used to reduce the uncertainties related to production forecasting and reserve estimation of shale gas reservoirs.

1. Introduction

The exploration and development of unconventional resources have increased considerably in recent decades. The essential elements for the massive exploitation of such resources are technical improvements and the use of effective procedures, such as multi-fractured horizontal wells. However, the complexity and variation of shale formations and the associated production features, as well as the lack of appropriate understanding of their physical governing flow factors, make the analysis and forecasting of production data more challenging [1]. The production of shale gas by hydraulically fractured horizontal wells requires the use of various fracture fluids. When the well begins to flow, the fracturing fluid is the first fluid to be produced, which is called the flow-back period. Depending on the reservoir features and well completion, several flow regimes can follow the flow-back period, such as bilinear, linear, radial, pseudo-radial, and pseudo-steady-state regimes [2,3].
Due to the aforementioned complexities, Decline Curve Analysis (DCA) is considered a simple and reliable tool for estimating the ultimate recovery (EUR) and forecasting the production of shale gas reservoirs. However, various uncertainties are associated with such analysis. For example, applying the well-known Arps DCA model overestimates the EUR of shale gas, as it assumes boundary-dominated flow (BDF) [4,5]. Therefore, different DCA models with major differences in model structure and number of fitting parameters have been developed to match the shale gas declining behavior [6,7,8,9,10,11]. The fitting parameters of DCA models can be determined graphically or by regression, with regression of the production data being the most common and easiest method [12]. The data size and quality and the regression technique itself affect the goodness of fitting and the prediction reliability of the model [13]. Some techniques improve the fit by multi-segment fitting [14]; however, the goal of our research was to quantify and minimize the uncertainties related to the models themselves and to the techniques used to remove the noise. Figure 1 summarizes the uncertainties related to DCA, and Table 1 includes the DCA models used in this research.
In this study, we focused on quantifying and minimizing the uncertainties related to data quality and model selection. A total of 12 machine learning (ML) algorithms for outlier detection (OD) were used to detect and remove the outliers from the production data of three shale gas wells. It was assumed that 20% of the production data are outliers. The task of each algorithm was to determine the 20% of the data with the highest potential of being outliers. After that, eight popular DCA models for shale gas were compared before and after removing the outliers, with each algorithm based on the goodness of fitting and the reliability of prediction. The best combination of DCA models and OD algorithms was addressed for effective production forecasting applications.

2. Outlier Detection Algorithms

Hawkins defines an outlier as “an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism” [26]. Removing such anomalous observations from the data is challenging. The research issues related to OD are: (1) algorithm-related issues, such as the method of detection and the learning scheme involved; (2) data-dependent issues, such as the type of data attributes, size, and dimensionality; and (3) application-specific issues, such as the nature of the application and the mode of analysis [27]. This study focused on algorithm-related issues.
For production data analysis in shale gas wells, the main reasons for noise in the data are: the flow-back period after hydraulic fracturing jobs, controlling the bottom hole flowing pressure by changing the choke size, multiple shut-ins to remove the associated water, and the transition from one flow regime to another [28]. The presence of outliers affects the identification of the flow regimes and leads to incorrect estimation of the EUR [29]. Detecting and removing the outliers improves the goodness of fitting and the reliability of the prediction.
The local outlier factor (LOF) algorithm was used to detect the outliers in unconventional wells [30]. Although the algorithm was effective in removing the scattered points, it was not compared to other algorithms. Recently, LOF was compared with other algorithms (angle-based outlier detection (ABOD), one-class support vector machine (OCSVM), k-nearest neighbor (KNN), and isolation forest (IF)) based on synthetic data with random and predetermined scattered points [31]. ABOD was found to be the best algorithm and was subsequently used with different thresholds to detect and remove the outliers from field cases. However, because the synthetic data are very smooth, detecting the artificial noise was not challenging for these algorithms. Additionally, the randomly generated artificial noise is not representative of all field cases. Moreover, the performances of the DCA models were not compared to each other before and after the removal of the outliers by those algorithms.
In another study, ABOD was used to remove the outliers from shale gas production data [32]. A total of 16 DCA models were compared to each other before and after removing the outliers with three different thresholds of 10%, 15%, and 20%, and several variations were detected in the models’ sensitivity to removing these outliers. Although actual data were used in that study, only one well with moderate noise and only one algorithm were investigated.
In this section, the bases of the 12 OD algorithms used in this research are described briefly. These algorithms are the minimum covariance determinant (MCD), OCSVM, principal component analysis (PCA), ABOD, stochastic outlier selection (SOS), LOF, KNN, connectivity-based outlier factor (COF), cluster-based local outlier factor (CBLOF), subspace outlier detection (SOD), histogram-based outlier detector (HBOD), and IF. Figure 2 classifies and summarizes the bases of the used algorithms.
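The paper does not name the software used to run these algorithms. As an illustration only, the sketch below assumes the open-source PyOD library and a simple two-column feature matrix (time, rate); the function name clean_production_data is hypothetical, and the default contamination of 20% mirrors the outlier fraction assumed throughout this study.

```python
# Hedged sketch (assumption: the PyOD library; the study does not state its implementation).
# Every detector is run with contamination=0.2, i.e., 20% of the points are labeled outliers.
import numpy as np
from pyod.models.mcd import MCD
from pyod.models.ocsvm import OCSVM
from pyod.models.pca import PCA
from pyod.models.abod import ABOD
from pyod.models.sos import SOS
from pyod.models.lof import LOF
from pyod.models.knn import KNN
from pyod.models.cof import COF
from pyod.models.cblof import CBLOF
from pyod.models.sod import SOD
from pyod.models.hbos import HBOS
from pyod.models.iforest import IForest

def clean_production_data(t, q, contamination=0.2):
    """Return a dict of boolean inlier masks (True = keep), one per OD algorithm."""
    X = np.column_stack([t, q])            # 2-D feature matrix: time and gas flow rate
    detectors = {
        "MCD": MCD(contamination=contamination),
        "OCSVM": OCSVM(contamination=contamination),
        "PCA": PCA(contamination=contamination),
        "ABOD": ABOD(contamination=contamination),
        "SOS": SOS(contamination=contamination),
        "LOF": LOF(contamination=contamination),
        "KNN": KNN(contamination=contamination),
        "COF": COF(contamination=contamination),
        "CBLOF": CBLOF(contamination=contamination),
        "SOD": SOD(contamination=contamination),
        "HBOS": HBOS(contamination=contamination),
        "IF": IForest(contamination=contamination),
    }
    masks = {}
    for name, det in detectors.items():
        det.fit(X)                          # unsupervised fit on the production data
        masks[name] = det.labels_ == 0      # PyOD convention: 0 = inlier, 1 = outlier
    return masks
```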

2.1. MCD

The MCD estimator is one of the first affine equivariant and highly robust estimators of multivariate location and scatter. Numerous robust multivariate methods, such as PCA, factor analysis, and multiple regression, have also been developed using it [33]. MCD is based on the robust distance (RD) and the Mahalanobis distance (MD) to distinguish outliers, as shown in Equations (1) and (2). These distances show how far an observation x_i is from the center of the data cloud; because the classical estimates of location and scatter are themselves distorted by the outliers, the RD of an outlying point is typically much larger than its MD. Figure 3 shows how the outliers can be determined by applying a certain threshold on the residuals of the linearly correlated variables using the two distances.
RD(x) = d(x, \hat{\mu}_{\mathrm{MCD}}, \hat{\Sigma}_{\mathrm{MCD}}) = \sqrt{(x - \hat{\mu}_{\mathrm{MCD}})^{T}\, \hat{\Sigma}_{\mathrm{MCD}}^{-1}\, (x - \hat{\mu}_{\mathrm{MCD}})},   (1)
MD(x) = d(x, \bar{x}, \mathrm{Cov}(X)) = \sqrt{(x - \bar{x})^{T}\, \mathrm{Cov}(X)^{-1}\, (x - \bar{x})},   (2)
where x is the observation, \hat{\mu}_{\mathrm{MCD}} is the MCD estimate of location, \hat{\Sigma}_{\mathrm{MCD}} is the MCD covariance estimate, \bar{x} is the mean of the observations, and Cov(X) is the covariance matrix of the observations.
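To make Equations (1) and (2) concrete, the following sketch (an illustration only, assuming scikit-learn's MinCovDet and EmpiricalCovariance estimators rather than any code from this study) computes both distances and flags the 20% of points with the largest robust distance.

```python
# Hedged sketch: robust (RD) vs. classical Mahalanobis (MD) distances with scikit-learn.
import numpy as np
from sklearn.covariance import MinCovDet, EmpiricalCovariance

def mcd_outliers(X, contamination=0.2):
    rd2 = MinCovDet(random_state=0).fit(X).mahalanobis(X)   # squared robust distances, Eq. (1)
    md2 = EmpiricalCovariance().fit(X).mahalanobis(X)       # squared classical distances, Eq. (2)
    rd, md = np.sqrt(rd2), np.sqrt(md2)
    cutoff = np.quantile(rd, 1.0 - contamination)           # keep the 80% with the smallest RD
    return rd > cutoff, rd, md                               # boolean outlier mask plus distances
```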

2.2. PCA

PCA is most typically utilized when several variables are highly correlated and it is desirable to reduce their number to an independent set. Unlike the MCD, and instead of using the variance as a measure of deviation, the PCA algorithm is based on finding the projection of the data that has the maximal deviation. First, projection pursuit is applied to reduce the dimensions of the data; the minimum covariance determinant estimator is then applied to this lower-dimensional data space. PCA is based on the orthogonal and score distances to distinguish outliers [34]. Inliers and outliers are distinguished by constructing an outlier map on which the score distance is plotted on the x-axis and the orthogonal distance is plotted on the y-axis, as shown in Figure 4. PCA is only useful if the observed variables are linearly correlated; if there is no correlation, PCA will fail to capture adequate variance with few components [35].
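As an illustration of the orthogonal-distance idea, the sketch below (an assumption, using scikit-learn's standard PCA rather than the exact robust procedure described above) scores each point by its reconstruction (orthogonal) distance to the principal-component subspace and flags the top 20%.

```python
# Hedged sketch: PCA-based outlier scoring via the orthogonal (reconstruction) distance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_outliers(X, n_components=1, contamination=0.2):
    Xs = StandardScaler().fit_transform(X)               # PCA is sensitive to feature scales
    pca = PCA(n_components=n_components).fit(Xs)
    X_proj = pca.inverse_transform(pca.transform(Xs))    # projection onto the PC subspace
    orth_dist = np.linalg.norm(Xs - X_proj, axis=1)      # orthogonal distance to the subspace
    cutoff = np.quantile(orth_dist, 1.0 - contamination)
    return orth_dist > cutoff
```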

2.3. OCSVM

The OCSVM technique is a variant of the support vector machine (SVM) that uses a single class rather than multiple classes separated by hyperplanes or hyperspheres [36]. The OCSVM technique estimates the density of the inliers and designates as outliers the data points that lie beyond the range of the density function. It presumes that the given data follow a specific probability distribution. It then learns to estimate a small subset (S) of the input dataset such that the probability of a point falling within S lies between 0 and 1. Finally, it creates a boundary surrounding the typical data points, and any point beyond it is categorized as an “outlier”. The decision function is then calculated using Lagrange techniques and the kernel function, which generates a hyperplane in feature space with the greatest distance from the origin and isolates all data points from the origin. OCSVM is a non-linear programming problem, as shown in Equation (3); it requires significant computational effort, is sensitive to overfitting, and only works well for scenarios with a small number of outliers. Figure 5 illustrates the OCSVM model used to distinguish between outliers and inliers.
\min_{w \in G,\; \xi_i \in \mathbb{R},\; b \in \mathbb{R}} \left\{ \frac{1}{2}\|w\|^{2} + \frac{1}{\nu N} \sum_{i=1}^{N} \xi_i - b \right\},   (3)
subject to w \cdot \Phi(x_i) \ge b - \xi_i and \xi_i \ge 0 for all i = 1, \ldots, N, with \nu \in [0, 1],
where w^{T}z + b = 0 is the hyperplane, with w \in G (the feature space) and b \in \mathbb{R}; \Phi(x_i) is a nonlinear function that maps the points x_i into the feature space; \xi_i is a slack variable added to permit small deviations from the hyperplane; and \nu is a parameter that defines the lower bound on the number of training instances used as support vectors and the upper bound on the fraction of outliers.
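A minimal sketch of Equation (3) in practice, assuming scikit-learn's OneClassSVM (not code from this study); the parameter nu plays the role of ν and is set to the 20% outlier fraction assumed in this work.

```python
# Hedged sketch: one-class SVM with scikit-learn; nu bounds the fraction of outliers (Eq. (3)).
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def ocsvm_outliers(X, nu=0.2):
    Xs = StandardScaler().fit_transform(X)                  # RBF kernels need comparable scales
    labels = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit_predict(Xs)
    return labels == -1                                     # -1 = outlier, +1 = inlier
```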

2.4. ABOD

The ABOD method classifies a data point as an outlier or inlier based on the value of the angles between that point and any random pair of points in the dataset [37]. For simplicity, the KNN method is used to measure the angle between the point and a pre-determined number (k) of the nearest points rather than measuring the angle between each point in the dataset and every random pair of points in the dataset, as shown in Figure 6. Thus, if we have three points A, B, and C, then we can calculate the angle between them using Equation (4), and the angle-based outlier factor (ABOF) is then calculated for (A) using Equation (5) [37].
\cos\Theta = \frac{\overline{AB} \cdot \overline{AC}}{\|\overline{AB}\|\,\|\overline{AC}\|},   (4)
ABOF(A) = \mathrm{VAR}_{B,C \in N_k(A)} \left( \frac{\overline{AB} \cdot \overline{AC}}{\|\overline{AB}\|^{2}\,\|\overline{AC}\|^{2}} \right),   (5)
where \overline{AB} \cdot \overline{AC} is the dot product of the vectors \overline{AB} and \overline{AC}, and \|\overline{AB}\|, \|\overline{AC}\| are the lengths of the respective vectors.
As the ABOF decreases, the likelihood of the corresponding data point being an outlier increases. Therefore, by ranking the data points by ABOF and applying a threshold, points can be categorized as outliers. In Figure 7, the data point at the vertex of the angle θ1 is regarded as an anomaly when compared to the other points. In other words, a point is classified as an outlier if its ABOF is below the ABOF threshold.
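The following sketch computes the ABOF of Equation (5) directly over the k nearest neighbors of each point. It is an illustration only (NumPy and scikit-learn are assumed, and the function name is hypothetical), not the implementation used in this study.

```python
# Hedged sketch: angle-based outlier factor (Eq. (5)) over each point's k nearest neighbors.
import numpy as np
from itertools import combinations
from sklearn.neighbors import NearestNeighbors

def abof_scores(X, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)    # +1 because each point is its own neighbor
    _, idx = nn.kneighbors(X)
    scores = np.empty(len(X))
    for i, neighbors in enumerate(idx):
        factors = []
        for b, c in combinations(neighbors[1:], 2):     # all neighbor pairs (B, C)
            ab, ac = X[b] - X[i], X[c] - X[i]
            denom = (ab @ ab) * (ac @ ac)
            if denom > 0:
                factors.append((ab @ ac) / denom)       # weighted cosine term of Eq. (5)
        scores[i] = np.var(factors) if factors else 0.0 # low variance of angles -> likely outlier
    return scores      # flag the points with the smallest ABOF as outliers
```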

2.5. SOS

SOS is an unsupervised outlier-selection technique that generates an outlier probability for each data point. Based on the dissimilarity measurements (for example the Euclidean distance) and probability, SOS computes the probability of each data point being an outlier. Each data point is associated with a variance. The variance is determined by the density of the surrounding neighborhood. A smaller variance is implied by a larger density. Indeed, the variance is designed so that each data point has the same number of neighbors. This quantity is regulated by SOS’s only parameter, perplexity. The k in k-nearest neighbor algorithms can be interpreted as perplexity. The difference is that in SOS, being a neighbor is a probabilistic attribute rather than a binary one. Equation (6) shows the intuition behind SOS.
p(C_O \mid x_j) = \prod_{i \neq j} (1 - p_{ji}),   (6)
where p(C_O \mid x_j) is the probability of observation x_j being an outlier, and p_{ji} is the affinity that data point x_i has with data point x_j, which decays exponentially with their dissimilarity.
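A minimal SOS sketch, assuming the PyOD implementation (not code from this study); perplexity is the algorithm's only tuning parameter, and decision_scores_ holds the outlier probabilities of Equation (6).

```python
# Hedged sketch: stochastic outlier selection via PyOD; perplexity acts as a "soft" k.
from pyod.models.sos import SOS

def sos_outliers(X, perplexity=10, contamination=0.2):
    det = SOS(perplexity=perplexity, contamination=contamination)
    det.fit(X)
    # decision_scores_ are the outlier probabilities p(C_O | x_j) of Eq. (6)
    return det.labels_ == 1, det.decision_scores_
```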

2.6. HBOD

HBOD is a non-parametric, univariate statistical algorithm applied to every single feature. If multivariate data are to be analyzed, the single features are scored separately and the scores are then combined. Like Naive Bayes, it assumes feature independence; being univariate, it cannot model feature dependencies [38]. HBOD constructs static or dynamic bin-width histograms, and the density is estimated using the frequency (relative amount) of samples falling into each bin (the height of the bins) [39]. To be computationally fast, the histograms are normalized to [0, 1] for every single feature. HBOD works well for global anomaly detection tasks but poorly for local ones. Figure 8 describes how the HBOD algorithm identifies the outliers.
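The bin-frequency scoring can be sketched in a few lines of NumPy; this is an illustration of the idea only (the function name and bin count are arbitrary assumptions), not the implementation used here.

```python
# Hedged sketch: a univariate histogram score, summed over features (the HBOD idea).
import numpy as np

def hbod_scores(X, n_bins=10):
    X = np.atleast_2d(X)
    scores = np.zeros(X.shape[0])
    for j in range(X.shape[1]):
        hist, edges = np.histogram(X[:, j], bins=n_bins, density=True)
        hist = hist / hist.max()                               # normalize bin heights to [0, 1]
        bins = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
        scores += np.log(1.0 / np.maximum(hist[bins], 1e-12))  # rare bins -> high outlier score
    return scores
```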

2.7. KNN

Knorr et al. [40] introduced a KNN-based approach to categorize outliers according to the separation between data points and their neighbors. All the points within a specific distance threshold (δ) from a random data point in a dataset are set to be that point’s neighbors. If a data point has fewer than k neighbors within (δ), it is identified as an outlier, since inlier points lie in densely populated neighborhoods. The KNN classifier initially detects the k points in the data that are nearest to a given data point x_0. It then determines the Euclidean k-distance (the separation between x_0 and its kth nearest neighbor) and identifies the point as an outlier if this distance is greater than (δ). Due to the pairwise computation of distances between all points, this approach is computationally expensive, on the order of O(n²). Figure 9 shows the computation of the distance threshold and the kth nearest neighbors for the points x_1 and x_2 when k = 3, both of which are identified as outliers.
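A compact sketch of the kth-neighbor-distance rule, assuming scikit-learn's NearestNeighbors (not the study's own code); instead of a fixed δ, the threshold is taken as the quantile corresponding to the 20% outlier fraction assumed in this study.

```python
# Hedged sketch: distance-to-kth-neighbor scoring; the top `contamination` fraction is flagged.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_outliers(X, k=5, contamination=0.2):
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    kth_dist = dist[:, -1]                               # distance to the kth true neighbor (self excluded)
    delta = np.quantile(kth_dist, 1.0 - contamination)   # data-driven distance threshold (δ)
    return kth_dist > delta, kth_dist
```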

2.8. LOF

Breunig et al. [41] introduced the local density-based OD (LDBOD) algorithm. LDBOD suggests that there is no hard split of the data points into outliers and inliers. Instead, each data point is assigned an outlier factor that expresses how much of an outlier that point is. It categorizes outliers according to the local outlier factor (LOF), considering the local reachability density of the data points. An inlier has a substantially higher local density than an outlier.
The LDBOD algorithm begins by calculating the k-distance of a point p, i.e., the distance from the object p to its kth nearest neighbor. It then defines N_k(p), the k-distance neighborhood of p, which contains all the objects within the k-distance sphere. Finally, for each data sample, it computes the reachability distance of an item p from o, as shown in Equation (7).
\mathrm{reach\text{-}dist}_k(p, o) = \max\{\, k\text{-}\mathrm{distance}(o),\; d(p, o) \,\},   (7)
The local reachability density of data is calculated using reachability distance as shown in Equation (8) where l r d k ( p ) is the inverse of the average reachability distance calculated using p’s k-nearest neighbors.
lrd_k(p) = 1 \Big/ \left( \frac{\sum_{o \in N_k(p)} \mathrm{reach\text{-}dist}_k(p, o)}{|N_k(p)|} \right),   (8)
Finally, the LOF of the data sample is calculated as shown in Equation (9); it indicates the extent to which p is considered an outlier. An LOF value close to one indicates that the point is not an outlier; the point only becomes an outlier when the density of its k-nearest neighbors is significantly larger than its own, i.e., when lrd_k(p) is much lower than that of its neighbors.
LOF_k(p) = \frac{\sum_{o \in N_k(p)} \dfrac{lrd_k(o)}{lrd_k(p)}}{|N_k(p)|},   (9)
Figure 10 depicts the idea of k-distance(p) for a point p (blue) and the neighborhood of p for k = 3 as a shaded blue area. For a point a inside this neighborhood, reach-dist_k(a,p) equals k-dist(p), following Equation (7), whereas r2 represents reach-dist_k(b,p) for a point b located outside the k-distance sphere (orange).
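For reference, a minimal LOF sketch assuming scikit-learn's LocalOutlierFactor (not the study's own code); the returned scores correspond to the LOF values of Equation (9).

```python
# Hedged sketch: LOF with scikit-learn; scores near 1 indicate inliers (see Eq. (9)).
from sklearn.neighbors import LocalOutlierFactor

def lof_outliers(X, k=20, contamination=0.2):
    lof = LocalOutlierFactor(n_neighbors=k, contamination=contamination)
    labels = lof.fit_predict(X)                           # +1 = inlier, -1 = outlier
    return labels == -1, -lof.negative_outlier_factor_    # second item is the LOF value itself
```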

2.9. COF

COF is similar to LOF; however, the density estimation for the records is done differently [42]. LOF implicitly assumes that the data are dispersed in a spherical manner around the instance; if this assumption is violated, for example when the features have a direct linear association, the density estimation becomes erroneous. COF compensates for this deficiency by estimating the neighborhood’s local density using a shortest-path technique known as the chaining distance [43]. This chaining distance is the minimum of the sum of all distances connecting all k neighbors and the instance. This density estimation technique works significantly better in basic situations where the features are clearly correlated [44].
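A minimal COF sketch, assuming the PyOD implementation (not code from this study); n_neighbors controls the size of the chaining-distance neighborhood.

```python
# Hedged sketch: connectivity-based outlier factor via PyOD.
from pyod.models.cof import COF

def cof_outliers(X, k=20, contamination=0.2):
    det = COF(n_neighbors=k, contamination=contamination)
    det.fit(X)
    return det.labels_ == 1, det.decision_scores_
```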

2.10. CBLOF

The CBLOF technique assumes that typical data items belong to large and dense clusters, whereas outliers belong to small or sparse clusters, or to none at all. Therefore, outliers are detected by extracting the relationship between objects and clusters, as shown in Figure 11. Dividing a dataset into subsets in which objects are similar to each other can be done using partition methods, hierarchical methods, density-based methods, or grid-based methods [45].
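A minimal CBLOF sketch, assuming the PyOD implementation (not code from this study); alpha and beta are the coefficients that separate large from small clusters, and their values below are PyOD defaults, not parameters reported in this paper.

```python
# Hedged sketch: cluster-based outlier scoring via PyOD's CBLOF.
from pyod.models.cblof import CBLOF

def cblof_outliers(X, n_clusters=8, contamination=0.2):
    det = CBLOF(n_clusters=n_clusters, alpha=0.9, beta=5, contamination=contamination)
    det.fit(X)                      # points in small/sparse clusters receive high scores
    return det.labels_ == 1, det.decision_scores_
```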

2.11. SOD

SOD is a bottom-up technique for detecting outlier clusters in any m-dimensional subspace [46]. Figure 12 shows how this algorithm works [47]. First, it computes the outlier score for all points in each dimension using the Chebyshev (L∞ norm) distance to properly rank the outliers; the score for an outlying point must be high in each dimension of the subspace. The scores are then aggregated to obtain the final outlier score for the points in the dataset. A filter threshold is included to remove high-dimensional noise during aggregation [48]. Unlike the MCD and PCA algorithms (correlation-based), SOD is considered an objective-based algorithm for high-dimensional data sets. One significant shortcoming of the correlation-based techniques is that the correlation is independent of the specific subspace dimensions; as a result, some of the data set dimensions may be missing from the search result [49].
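A minimal SOD sketch, assuming the PyOD implementation (not code from this study); ref_set specifies how many shared neighbors define the reference set for each point's relevant subspace, and the values shown are PyOD defaults.

```python
# Hedged sketch: subspace outlier detection via PyOD.
from pyod.models.sod import SOD

def sod_outliers(X, k=20, ref_set=10, contamination=0.2):
    det = SOD(n_neighbors=k, ref_set=ref_set, alpha=0.8, contamination=contamination)
    det.fit(X)
    return det.labels_ == 1, det.decision_scores_
```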

2.12. IF

Liu et al. [50] introduced the IF algorithm. IF mainly depends on isolation trees that have only two outcomes (outlier/not outlier). It defines outliers by comparing their number and features to those of the inliers. Remarkable computing efficiency and a low propensity for overfitting are the key benefits of IF. It constructs a structure tree known as an iTree (i.e., isolation tree), which isolates outliers close to its root. The number of splits required to isolate a point in a terminal node equals the path length from the root node to that terminal node. An ensemble of trees is then built, from which the average path length of each data point is calculated and converted into a score, as shown in Equation (10). Inliers end up on deeper branches than outliers because of their significantly longer path lengths. A score close to 1 denotes an anomaly, a score significantly lower than 0.5 denotes a normal observation, and a score of around 0.5 indicates that no clear aberration can be identified. According to Figure 13, an inlier often requires more splits than an outlier when subjected to random partitioning.
s(x, n) = 2^{-E(h(x))/c(n)},   (10)
where h(x) is the path length of observation x, c(n) is the average path length of an unsuccessful search in a binary search tree, and n is the number of external nodes.
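A minimal Isolation Forest sketch, assuming scikit-learn's IsolationForest (not the study's own code); note that scikit-learn reports a shifted and negated version of the score in Equation (10), with lower values indicating more anomalous points.

```python
# Hedged sketch: Isolation Forest with scikit-learn; shorter average paths mean more anomalous.
from sklearn.ensemble import IsolationForest

def iforest_outliers(X, contamination=0.2, n_estimators=100):
    clf = IsolationForest(n_estimators=n_estimators, contamination=contamination, random_state=0)
    labels = clf.fit_predict(X)                 # +1 = inlier, -1 = outlier
    # score_samples is a shifted/negated version of the 0-1 score in Eq. (10)
    return labels == -1, clf.score_samples(X)
```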

3. Data and Methodology

In this study, actual data from three shale gas wells were used: Well_12, Well_29, and Well_40. These data were released on the official website of the Society of Petroleum Engineers (SPE) and are dedicated to research purposes [51,52,53]. Figure 14 shows the actual flow rate of the three wells before removing any noise.
The noise was removed using the 12 OD algorithms. It was assumed that 20% of the data are outliers, and each algorithm identified which data points should be removed according to its own basis for detecting outliers. After that, the 8 DCA models were applied before and after removing the outliers, and the results were compared. Figure 15 illustrates the methodology followed in this research.
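As a hedged sketch of this workflow (Figure 15), the code below fits one of the Table 1 models, the hyperbolic Arps model, with SciPy's curve_fit before and after dropping the points flagged by any OD algorithm, and reports R2 and RMSE; the function names, initial guesses, and parameter bounds are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch: fit a DCA model (hyperbolic Arps shown as an example) on raw vs. cleaned data.
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, Di, b):
    # q(t) = qi / (1 + b*Di*t)^(1/b), Table 1
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def fit_and_score(t, q, inlier_mask=None):
    if inlier_mask is not None:                   # drop flagged outliers before regression
        t, q = t[inlier_mask], q[inlier_mask]
    p0 = [q.max(), 1e-2, 0.5]                     # rough initial guess for qi, Di, b (assumption)
    popt, _ = curve_fit(arps_hyperbolic, t, q, p0=p0,
                        bounds=([0, 0, 1e-3], [np.inf, np.inf, 2.0]))
    q_hat = arps_hyperbolic(t, *popt)
    rmse = float(np.sqrt(np.mean((q - q_hat) ** 2)))
    r2 = 1.0 - np.sum((q - q_hat) ** 2) / np.sum((q - q.mean()) ** 2)
    return popt, r2, rmse
```

The same before/after comparison of R2 and RMSE would be repeated for each DCA model and each OD algorithm, as summarized later in Figures 26-28.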

4. Results and Interpretations

4.1. Results of Applying OD Algorithms

By removing the outliers using the twelve OD algorithms, it was found that five algorithms are generally not suitable for improving the data quality before applying DCA. These algorithms are MCD, OCSVM, PCA, IF, and HBOD. Using them led to the complete removal of core data and the sweeping away of whole trends, which constitute the main production history available for regression. For example, Figure 16 shows the production data of well_12 after removing the outliers and the data removed by the MCD, OCSVM, PCA, IF, and HBOD algorithms. Figure 17 shows the same behavior of these algorithms, but for well_29. This behavior of removing core data continued with well_40.
MCD, OCSVM, and PCA have a linear basis for detecting outliers, and their application is effective when the variables are linearly correlated; however, the flow rate is not linearly correlated with time in shale gas production data. IF is based on partitioning the data and detecting the few points that differ most as outliers, which is why it removed the main part of the production history that follows a very smooth trend. On the other hand, HBOD is based on the frequency of the data points, which is why it removed the smooth early-time data (i.e., with low frequency within a range) in well_12 and well_29, as shown in Figure 16(5) and Figure 17(5). Therefore, these five algorithms are not suitable for improving the production data quality before applying DCA.
Unlike the five algorithms, the other seven algorithms detect the outliers on different bases, while they do not remove general trends or main parts of the production history. As an example, Figure 18 shows the production data of well_40 after removing the outliers and the removed data by the KNN, ABOD, CBLOF, COF, LOF, SOS, and SOD algorithms. All of these algorithms were found to be effective in detecting the scattered data points. However, small differences exist between them.

4.2. Results of Applying DCA Models after Removing Outliers

The eight different DCA models were then applied and compared before and after removing the outliers using the selected seven OD algorithms to determine the best combination of OD algorithm and DCA model for effective fitting and reliable prediction. For the Arps and Duong decline models, it was found that, regardless of the OD algorithm used, the predictions were essentially unaffected by removing the noise, as shown in Figure 19 and Figure 20.
Regarding the PLE and SEPD models, both gave exactly the same results because, when the D∞ parameter in the PLE equation equals zero, the PLE equation becomes the same as that of SEPD. Figure 21 shows the results of both the PLE and SEPD models. For well_12, almost all OD algorithms affect the prediction slightly and cause a small overestimation of the flow rate, except for CBLOF, which causes a higher overestimation. For wells_29 and 40, the SOS algorithm causes almost no change in the prediction, while the other algorithms give some overestimation.
For the HEHD model, well_29 and well_40 have almost no change in prediction. However, for the data of well_12, the CBLOF, KNN, and ABOD algorithms show overestimations because the history match of the data was not good before removing the noise; however, removing noise improved the matching, as shown in Figure 22.
Applying the LGM model to well_12 yielded almost the same overestimation for all algorithms except for CBLOF, which showed a higher overestimation. For well_29, almost all the algorithms showed overestimation: KNN, CBLOF, and LOF yielded the same highest results, ABOD and COF gave the same but less overestimated results, and SOD showed no change in prediction. For well_40, SOD again showed no change in prediction, while the other algorithms gave different degrees of overestimation, as shown in Figure 23.
Figure 24 shows the results of applying the VEDM model. For well_12, CBLOF and ABOD underestimate the prediction, while the other algorithms yield overestimation. With the data of well_29 and well_40, the SOD algorithm showed no change in prediction, whereas the other algorithms showed overestimation.
As shown in Figure 25, for Wang’s model, the seven OD algorithms show almost no change in prediction for Well_12 except for CBLOF, which gives underestimation. Well_29 and Well_40 almost show no change in prediction with all algorithms.
Obviously, the goodness of fitting and the reliability of prediction varied using different OD algorithms. Figure 26 shows the fitting correlation coefficient (R2) and the root mean square error (RMSE) for Well_12 after applying the eight DCA models before and after removing the noise using the seven selected OD algorithms. The results show that the CBLOF, KNN, and ABOD algorithms have the highest R2 and lowest RMSE, while the SOD and COF algorithms have the lowest R2 and the highest RMSE with all DCA models except the HEHD and VEDM models. For these two models, the KNN and ABOD algorithms have the highest R2 and lowest RMSE, while the SOD and CBLOF algorithms have the lowest R2 and highest RMSE.
For Well_29, the results show that the CBLOF and KNN algorithms have the highest R2 and lowest RMSE, while the SOD and SOS algorithms have the lowest R2 and highest RMSE, as shown in Figure 27.
For well_40, shown in Figure 28, there was a significant improvement in the goodness of fitting after removing the noise with all algorithms except the SOD. Because most of the removed data points were scattered and isolated, it was easy for most algorithms to recognize them, especially for the LOF algorithm based on its working mechanism.

5. Limitations of This Work

This work focused only on twelve algorithms and eight DCA models, which can be enriched in other studies. The threshold used in this study (i.e., 20%) was a recommendation from previous work and should be critically analyzed. None of the used OD algorithms were tuned to optimize their parameters. We believe that tuning the algorithms’ parameters could improve their performance. Additionally, the excluded algorithms (i.e., linear-based) might be suitable and effective if their parameters are tuned and optimized. It is highly recommended to extend this work to different production data with different noise levels and production modes.

6. Conclusions

This study investigated the suitability of 12 different ML algorithms for OD to improve the shale gas production data for effective DCA. The methodology assumed that 20% of the production data were outliers, and each algorithm identified these outliers according to their working mechanisms. After that, eight DCA models were applied before and after removing this noise on actual production data of three shale gas wells to address the best combination of OD algorithms and DCA models that could be more effective in reducing the uncertainties related to production prediction and reserve estimation. The following conclusions can be drawn:
  • Although most OD algorithms are generic, not all of them are suitable for improving the production data before applying DCA, such as the linear-based algorithms (MCD, OCSVM, and PCA), IF, and HBOD. The reason is that these algorithms flag complete portions of the production data as outliers, which makes the application of DCA difficult.
  • CBLOF, KNN, and ABOD are the most effective algorithms for improving the data quality before applying DCA. These algorithms were found to smooth the production profile by detecting the most scattered data points without affecting any trend within the data.
  • The LOF is especially suitable for production profiles with scattered isolated data points; however, it could affect the trends within the production profile in case of high assumed threshold values.
  • The SOS and SOD are the least effective algorithms, although they preserve the declining trend of the production profile. Unlike other algorithms, not all the scattered data points were detected as outliers by these two algorithms. This behavior made the goodness of fitting after applying the DCA models almost the same as before removing the noise.
  • DCA models are based on fitting the production history before extending it for prediction. Improving the production data improves their goodness of fitting and reliability of prediction. However, some models, such as Arps, Duong, and Wang, are less sensitive to removing the noise than others, whichever removal algorithm is applied. On the other hand, the SEPD, PLE, and LGM models are more sensitive to removing the outliers, and their production forecasts varied greatly using different OD algorithms.
  • The assumed threshold when using the OD algorithms should be optimized based on the noise level within the production data. When selecting a certain algorithm, different thresholds could be assumed and applied until no significant differences in the goodness of fitting appear, and the lowest such threshold value should then be used.
  • Due to the different assumptions and the model structure of each DCA model, it is highly recommended to use more than one model to evaluate the reserve of the shale gas wells.

Author Contributions

Conceptualization, T.Y., A.W., S.M. and O.M.; methodology, T.Y. and A.W.; software, T.Y. and A.W.; validation, T.Y., A.W. and S.M.; formal analysis, T.Y.; investigation, T.Y. and S.M.; writing—original draft preparation, T.Y. and S.M.; writing—review and editing, S.M. and O.M.; visualization, O.M.; supervision, O.M.; project administration, T.Y.; funding acquisition, O.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ABOD: Angle-Based Outlier Detection
ABOF: Angle-Based Outlier Factor
BDF: Boundary-Dominated Flow
CBLOF: Cluster-Based Local Outlier Factor
COF: Connectivity-Based Outlier Factor
DCA: Decline Curve Analysis
EUR: Estimated Ultimate Recovery
HBOD: Histogram-Based Outlier Detector
HEHD: Hyperbolic–Exponential Hybrid Decline
IF: Isolation Forest
KNN: k-Nearest Neighbors
LDBOD: Local Density-Based Outlier Detector
LGM: Logistic Growth Model
LOF: Local Outlier Factor
MCD: Minimum Covariance Determinant
ML: Machine Learning
MD: Mahalanobis Distance
OCSVM: One-Class Support Vector Machine
OD: Outlier Detection
PCA: Principal Component Analysis
PLE: Power-Law Exponential
RD: Robust Distance
SEPD: Stretched Exponential Production Decline
SOD: Subspace Outlier Detection
SOS: Stochastic Outlier Selection
SVM: Support Vector Machine
VDMA: Variable Decline Modified Arps
b: Decline-Curve Exponent
D: Decline Rate (Day−1)
Di: Initial Decline Rate (Day−1)
D∞: Decline Rate at Infinite Time (Day−1)
Gp: Gas Cumulative Production (Mscf)
q: Gas Flow Rate (Mscf/D)
qi: Initial Gas Flow Rate (Mscf/D)
t: Time (Day)
n: Time Exponent in Decline Curve Analysis Models
τ: Characteristic Time Parameter (Day−1)
m: Exponent Regression Parameter
a: Regression Parameter
\hat{\mu}_{MCD}: MCD Estimate of Location
\hat{\Sigma}_{MCD}: MCD Covariance Estimate
Cov(X): Observations Covariance Matrix
w, G: Hyperplane Vector and Feature Space
Φ(x_i): Nonlinear Function that Transforms the Points to the Hyperplane
ξ_i: Slack Variable that Allows Minor Deviations from the Hyperplane
ν: Parameter Characterizing the Upper Bound on the Fraction of Outliers and the Lower Bound on the Number of Training Examples Used as Support Vectors
p(C_O|x_j): Probability of Observation x_j Being an Outlier
p_{ji}: Affinity of Data Point x_i with Data Point x_j, Which Decays Exponentially with Their Dissimilarity
lrd_k(p): Inverse of the Average Reachability Distance Calculated Using p’s k-Nearest Neighbors
h(x): Path Length of Observation x
c(n): Average Path Length of an Unsuccessful Search in a Binary Search Tree
n: Number of External Nodes in the Outlier Detection Algorithm

References

  1. Ibrahim, M.; Mahmoud, O.; Pieprzica, C. A New Look at Reserves Estimation of Unconventional Gas Reservoirs; OnePetro: Richardson, TX, USA, 2018. [Google Scholar]
  2. Mahmoud, O.; Ibrahim, M.; Pieprzica, C.; Larsen, S. EUR Prediction for Unconventional Reservoirs: State of the Art and Field Case; OnePetro: Richardson, TX, USA, 2018. [Google Scholar]
  3. Wahba, A.; Khattab, H.; Gawish, A. A Study of Modern Decline Curve Analysis Models Based on Flow Regime Identification. JUSST 2022, 24, 26. [Google Scholar] [CrossRef]
  4. Mahmoud, O.; Elnekhaily, S.; Hegazy, G. Estimating Ultimate Recoveries of Unconventional Reservoirs: Knowledge Gained from the Developments Worldwide and Egyptian Challenges. Int. J. Ind. Sustain. Dev. 2020, 1, 60–70. [Google Scholar] [CrossRef] [Green Version]
  5. Mostafa, S.; Hamid, K.; Tantawi, M. Studying Modern Decline Curve Analysis Models for Unconventional Reservoirs to Predict Performance of Shale Gas Reservoirs. JUSST 2021, 23, 36. [Google Scholar] [CrossRef]
  6. Liang, H.-B.; Zhang, L.-H.; Zhao, Y.-L.; Zhang, B.-N.; Chang, C.; Chen, M.; Bai, M.-X. Empirical Methods of Decline-Curve Analysis for Shale Gas Reservoirs: Review, Evaluation, and Application. J. Nat. Gas Sci. Eng. 2020, 83, 103531. [Google Scholar] [CrossRef]
  7. Hazlett, R.D.; Farooq, U.; Babu, D.K. A Complement to Decline Curve Analysis. SPE J. 2021, 26, 2468–2478. [Google Scholar] [CrossRef]
  8. Molina, O.; Santos, L.; Herrero, F.; Monaco, A.; Schultz, D. Is Decline Curve Analysis the Right Tool for Production Forecasting in Unconventional Reservoirs? In Proceedings of the SPE Annual Technical Conference and Exhibition, Dubai, United Arab Emirates, 15–23 September 2021; SPE: Richardson, TX, USA, 2021; p. D031S060R001. [Google Scholar]
  9. Xu, Y.; Liu, X.; Hu, Z.; Nan, S.; Duan, X.; Chang, J. Production Effect Evaluation of Shale Gas Fractured Horizontal Well under Variable Production and Variable Pressure. J. Nat. Gas Sci. Eng. 2021, 97, 104344. [Google Scholar] [CrossRef]
  10. Niu, W.; Lu, J.; Sun, Y. An Improved Empirical Model for Rapid and Accurate Production Prediction of Shale Gas Wells. J. Pet. Sci. Eng. 2022, 208, 109800. [Google Scholar] [CrossRef]
  11. Alimohammadi, H.; Sadeghi, M.; Chen, S.N. A Novel Procedure for Analyzing Production Decline in Unconventional Reservoirs Using Probability Density Functions. In Proceedings of the SPE Canadian Energy Technology Conference, Calgary, AB, Canada, 11–16 March 2022; SPE: Richardson, TX, USA, 2022; p. D011S012R002. [Google Scholar]
  12. Wahba, A.; Khattab, H.; Tantawy, M.; Gawish, A. Modern Decline Curve Analysis of Unconventional Reservoirs: A Comparative Study Using Actual Data. J. Pet. Min. Eng. 2022. online ahead of print. [Google Scholar] [CrossRef]
  13. Joshi, K.G.; Awoleke, O.O.; Mohabbat, A. Uncertainty Quantification of Gas Production in the Barnett Shale Using Time Series Analysis; OnePetro: Richardson, TX, USA, 2018. [Google Scholar]
  14. Tugan, M.F.; Weijermars, R. Improved EUR Prediction for Multi-Fractured Hydrocarbon Wells Based on 3-Segment DCA: Implications for Production Forecasting of Parent and Child Wells. J. Pet. Sci. Eng. 2020, 187, 106692. [Google Scholar] [CrossRef]
  15. Arps, J.J. Analysis of Decline Curves. Trans. AIME 1945, 160, 228–247. [Google Scholar] [CrossRef]
  16. Ilk, D.; Rushing, J.A.; Perego, A.D.; Blasingame, T.A. Exponential vs. Hyperbolic Decline in Tight Gas Sands: Understanding the Origin and Implications for Reserve Estimates Using Arps’ Decline Curves. In Proceedings of the SPE Annual Technical Conference and Exhibition, Denver, CO, USA, 21 September 2008; SPE: Richardson, TX, USA, 2008; p. SPE-116731-MS. [Google Scholar]
  17. Ilk, D.; Perego, A.D.; Rushing, J.A.; Blasingame, T.A. Integrating Multiple Production Analysis Techniques to Assess Tight Gas Sand Reserves: Defining a New Paradigm for Industry Best Practices. In Proceedings of the IPC/SPE Gas Technology Symposium 2008 Joint Conference, Calgary, AB, Canada, 16 June 2008; SPE: Richardson, TX, USA, 2008; p. SPE-114947-MS. [Google Scholar]
  18. Valko, P.P. Assigning Value to Stimulation in the Barnett Shale: A Simultaneous Analysis of 7000 plus Production Hystories and Well Completion Records; OnePetro: Richardson, TX, USA, 2009. [Google Scholar]
  19. Valkó, P.P.; Lee, W.J. A Better Way to Forecast Production from Unconventional Gas Wells. In Proceedings of the SPE Annual Technical Conference and Exhibition, Florence, Italy, 19 September 2010; SPE: Richardson, TX, USA, 2010; p. SPE-134231-MS. [Google Scholar]
  20. Duong, A.N. An Unconventional Rate Decline Approach for Tight and Fracture-Dominated Gas Wells. In Proceedings of the Canadian Unconventional Resources and International Petroleum Conference, Calgary, AB, Canada, 19 October 2010; SPE: Richardson, TX, USA, 2010; p. SPE-137748-MS. [Google Scholar]
  21. Duong, A.N. Rate-Decline Analysis for Fracture-Dominated Shale Reservoirs. SPE Reserv. Eval. Eng. 2011, 14, 377–387. [Google Scholar] [CrossRef] [Green Version]
  22. Clark, A.J.; Lake, L.W.; Patzek, T.W. Production Forecasting with Logistic Growth Models. In Proceedings of the SPE Annual Technical Conference and Exhibition, Denver, CO, USA, 30 October 2011; SPE: Richardson, TX, USA, 2011; p. SPE-144790-MS. [Google Scholar]
  23. Zhang, H.; Cocco, M.; Rietz, D.; Cagle, A.; Lee, J. An Empirical Extended Exponential Decline Curve for Shale Reservoirs. In Proceedings of the SPE Annual Technical Conference and Exhibition, Houston, TX, USA, 28–30 September 2015; SPE: Richardson, TX, USA, 2015; p. D031S031R007. [Google Scholar]
  24. Wang, K.; Li, H.; Wang, J.; Jiang, B.; Bu, C.; Zhang, Q.; Luo, W. Predicting Production and Estimated Ultimate Recoveries for Shale Gas Wells: A New Methodology Approach. Appl. Energy 2017, 206, 1416–1431. [Google Scholar] [CrossRef]
  25. Gupta, I.; Rai, C.; Sondergeld, C.; Devegowda, D. Variable Exponential Decline: Modified Arps to Characterize Unconventional-Shale Production Performance. SPE Reserv. Eval. Eng. 2018, 21, 1045–1057. [Google Scholar] [CrossRef]
  26. Hawkins, D.M. Identification of Outliers; Springer: Dordrecht, The Netherlands, 1980. [Google Scholar]
  27. Suri, N.N.R.R.; Murty, N.M.; Athithan, G. Outlier Detection: Techniques and Applications: A Data Mining Perspective; Springer: Berlin/Heidelberg, Germany, 2019; ISBN 978-3-030-05127-3. [Google Scholar]
  28. Ahmed, T. Analysis of Decline and Type Curves. In Reservoir Engineering Handbook; Elsevier: Amsterdam, The Netherlands, 2019; pp. 1227–1310. ISBN 978-0-12-813649-2. [Google Scholar]
  29. Yehia, T.; Khattab, H.; Tantawy, M.; Mahgoub, I. Improving the Shale Gas Production Data Using the Angular- Based Outlier Detector Machine Learning Algorithm. JUSST 2022, 24, 152–172. [Google Scholar] [CrossRef]
  30. Chaudhary, N.L.; Lee, W.J. Detecting and Removing Outliers in Production Data to Enhance Production Forecasting; OnePetro: Richardson, TX, USA, 2016. [Google Scholar]
  31. Jha, H.S.; Khanal, A.; Seikh, H.M.D.; Lee, W.J. A Comparative Study on Outlier Detection Techniques for Noisy Production Data from Unconventional Shale Reservoirs. J. Nat. Gas Sci. Eng. 2022, 105, 104720. [Google Scholar] [CrossRef]
  32. Yehia, T.; Khattab, H.; Tantawy, M.; Mahgoub, I. Removing the Outlier from the Production Data for the Decline Curve Analysis of Shale Gas Reservoirs: A Comparative Study Using Machine Learning. ACS Omega 2022. online ahead of print. [Google Scholar] [CrossRef]
  33. Simpson, D.G. Introduction to Rousseeuw (1984) Least Median of Squares Regression. In Breakthroughs in Statistics; Kotz, S., Johnson, N.L., Eds.; Springer Series in Statistics; Springer: New York, NY, USA, 1997; pp. 433–461. ISBN 978-1-4612-0667-5. [Google Scholar]
  34. Kotu, V.; Deshpande, B. Chapter 13—Anomaly Detection. In Data Science, 2nd ed.; Kotu, V., Deshpande, B., Eds.; Morgan Kaufmann: Amsterdam, The Netherlands, 2019; pp. 447–465. ISBN 978-0-12-814761-0. [Google Scholar]
  35. Rousseeuw, P.J.; Hubert, M. Anomaly Detection by Robust Statistics. WIREs Data Min. Knowl. Discov. 2018, 8, e1236. [Google Scholar] [CrossRef] [Green Version]
  36. Schölkopf, B.; Williamson, R.C.; Smola, A.; Shawe-Taylor, J.; Platt, J. Support Vector Method for Novelty Detection. In Proceedings of the Advances in Neural Information Processing Systems; Solla, S., Leen, T., Müller, K., Eds.; MIT Press: Cambridge, MA, USA, 1999; Volume 12. [Google Scholar]
  37. Kriegel, H.-P.; Schubert, M.; Zimek, A. Angle-Based Outlier Detection in High-Dimensional Data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, USA, 24–27 August 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 444–452. [Google Scholar]
  38. Kim, Y.; Lau, W.C.; Chuah, M.C.; Chao, H.J. Packetscore: Statistics-Based Overload Control against Distributed Denial-of-Service Attacks. In Proceedings of the IEEE INFOCOM 2004, Hong Kong, China, 7–11 March 2004; Volume 4, pp. 2594–2604. [Google Scholar]
  39. Goldstein, M.; Dengel, A. Histogram-Based Outlier Score (HBOS): A Fast Unsupervised Anomaly Detection Algorithm; German Research Center for Artificial Intelligence (DFKI): Kaiserslautern, Germany, 2012. [Google Scholar]
  40. Knorr, E.M.; Ng, R.T.; Tucakov, V. Distance-Based Outliers: Algorithms and Applications. VLDB J. 2000, 8, 237–253. [Google Scholar] [CrossRef]
  41. Breunig, M.M.; Kriegel, H.-P.; Ng, R.T.; Sander, J. LOF: Identifying Density-Based Local Outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, TX, USA, 15–18 May 2000; Association for Computing Machinery: New York, NY, USA, 2000; pp. 93–104. [Google Scholar]
  42. Tang, J.; Chen, Z.; Fu, A.W.; Cheung, D.W. Enhancing Effectiveness of Outlier Detections for Low Density Patterns. In Proceedings of the Advances in Knowledge Discovery and Data Mining; Chen, M.-S., Yu, P.S., Liu, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2002; pp. 535–548. [Google Scholar]
  43. Goldstein, M.; Uchida, S. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data. PLoS ONE 2016, 11, e0152173. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Wang, Y.; Li, K.; Gan, S. A Kernel Connectivity-Based Outlier Factor Algorithm for Rare Data Detection in a Baking Process. IFAC-PapersOnLine 2018, 51, 297–302. [Google Scholar] [CrossRef]
  45. Jiang, S.; An, Q. Clustering-Based Outlier Detection Method. In Proceedings of the 2008 Fifth International Conference on Fuzzy Systems and Knowledge Discovery, Jinan, China, 18–20 October 2008; Volume 2, pp. 429–433. [Google Scholar]
  46. Nguyen, M.Q.; Mark, L.; Omiecinski, E. Subspace Outlier Detection in Data with Mixture of Variances and Noise; Georgia Institute of Technology: Atlanta, GA, USA, 2008. [Google Scholar]
  47. Muller, E.; Schiffer, M.; Seidl, T. Statistical Selection of Relevant Subspace Projections for Outlier Ranking. In Proceedings of the 2011 IEEE 27th International Conference on Data Engineering, Hannover, Germany, 11–16 April 2011; p. 445. [Google Scholar]
  48. Riahi-Madvar, M.; Nasersharif, B.; Azirani, A.A. Subspace Outlier Detection in High Dimensional Data Using Ensemble of PCA-Based Subspaces. In Proceedings of the 2021 26th International Computer Conference, Computer Society of Iran (CSICC), Tehran, Iran, 3–4 March 2021; pp. 1–5. [Google Scholar]
  49. Trittenbach, H.; Böhm, K. Dimension-Based Subspace Search for Outlier Detection. Int. J. Data Sci. Anal. 2019, 7, 87–101. [Google Scholar] [CrossRef]
  50. Liu, F.T.; Ting, K.M.; Zhou, Z.-H. Isolation Forest. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008; pp. 413–422. [Google Scholar]
  51. SPE Data Repository: Data Set: {1}, Well Number: {12}. Available online: https://www.spe.org/datasets/dataset_1/spreadsheets/dataset_1_well_12.xlsx (accessed on 1 August 2022).
  52. SPE Data Repository: Data Set: {1}, Well Number: {29}. Available online: https://www.spe.org/datasets/dataset_1/spreadsheets/dataset_1_well_29.xlsx (accessed on 1 August 2022).
  53. SPE Data Repository: Data Set: {1}, Well Number: {40}. Available online: https://www.spe.org/datasets/dataset_1/spreadsheets/dataset_1_well_40.xlsx (accessed on 1 August 2022).
Figure 1. Summary of the uncertainties related to DCA.
Figure 2. A summary of the bases of anomaly detection algorithms.
Figure 3. How the outliers are detected using the MCD algorithm.
Figure 4. How the outliers are detected using the PCA algorithm.
Figure 5. Conceptual model showing the application of OCSVM for outlier detection.
Figure 6. kth nearest neighbors for point A, k = 3.
Figure 7. An outlier has low variance in its angles with other data points compared to an inlier.
Figure 8. How the HBOD algorithm works.
Figure 9. kth nearest neighbors when k = 3.
Figure 10. A conceptual diagram demonstrating reach-dist_k(a,p) for k = 3 and k-dist(p), the k-distance neighborhood of p.
Figure 11. How the CBLOF algorithm works.
Figure 12. How the SOD algorithm works.
Figure 13. An inlier (x_i) in (a) needs more splits than an outlier (x_0) in (b) to isolate; (c) shows each point’s average path length.
Figure 14. The actual flow rate of (a) well_12, (b) well_29, (c) well_40.
Figure 15. A scheme of the methodology used in this research.
Figure 16. (a) Removed outliers and (b) production data after removing the outliers of Well_12 using (1) MCD, (2) OCSVM, (3) PCA, (4) IF, and (5) HBOD algorithms.
Figure 17. (a) Removed outliers and (b) production data after removing the outliers of Well_29 using (1) MCD, (2) OCSVM, (3) PCA, (4) IF, and (5) HBOD algorithms.
Figure 18. (a) Removed outliers and (b) production data after removing the outliers of Well_40 using (1) KNN, (2) ABOD, (3) CBLOF, (4) COF, (5) LOF, (6) SOS, and (7) SOD algorithms.
Figure 19. Applying the Arps decline model before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 20. Applying the Duong decline model before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 21. Applying the SEPD and PLE decline models before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 22. Applying the HEHD decline model before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 23. Applying the LGM decline model before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 24. Applying the VEDM decline model before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 25. Applying the Wang decline model before and after removing noise using the seven OD algorithms for (a) Well_12, (b) Well_29, and (c) Well_40.
Figure 26. Well_12; R2 and RMSE after applying the eight DCA models before and after removing the noise using the seven algorithms.
Figure 27. Well_29; R2 and RMSE after applying the eight DCA models before and after removing the noise using the seven algorithms.
Figure 28. Well_40; R2 and RMSE after applying the eight DCA models before and after removing the noise using the seven algorithms.
Table 1. Formulas of the DCA models used in this research.

Model | q versus t * | Reference
Hyperbolic Arps (1945) | q = \dfrac{q_i}{(1 + b_A D_i t)^{1/b_A}} | [15]
Power-Law Exponential (PLE) (2008) | q = q_i \exp\left[-D_\infty t - D_i t^{\,n_{\mathrm{PLE}}}\right], \; D_\infty \ge 0 | [16,17]
Stretched Exponential Production Decline (SEPD) (2010) | q = q_i \exp\left[-(t/\tau_{\mathrm{SEPD}})^{\,n_{\mathrm{SEPD}}}\right] | [18,19]
Duong (2010, 2011) | q = q_i t^{-m_D} \exp\left[\dfrac{a_D}{1 - m_D}\left(t^{\,1 - m_D} - 1\right)\right] | [20,21]
Logistic Growth Model (LGM) (2011) | q = \dfrac{q_i\, n_{\mathrm{LGM}}\, a_{\mathrm{LGM}}\, t^{\,n_{\mathrm{LGM}} - 1}}{(a_{\mathrm{LGM}} + t^{\,n_{\mathrm{LGM}}})^2} | [22]
Hyperbolic–Exponential Hybrid Decline (HEHD) (2016) | q = q_i \exp(-D_\infty t)\,(1 + m_{\mathrm{HEHD}} D_i t)^{-\left(1 - D_\infty/D_i\right)/m_{\mathrm{HEHD}}} | [23]
Wang (2017) | q = q_i \exp\left[-\lambda_W (\ln t)^2\right] | [24]
Variable Decline Modified Arps (VEDM) (2018) | q = q_i \exp\left[-D_i t^{\,(1 - n_{\mathrm{VDMA}})}\right] | [25]
* Letters with the subscript of the model’s initials are the fitting parameters.