Article

The Efficacy and Superiority of the Expert Systems in Reservoir Engineering Decision Making Processes

Petroleum and Natural Gas Engineering Program, Department of Energy and Mineral Engineering, The Pennsylvania State University, University Park, PA 16802, USA
Appl. Sci. 2021, 11(14), 6347; https://doi.org/10.3390/app11146347
Submission received: 27 May 2021 / Revised: 2 July 2021 / Accepted: 5 July 2021 / Published: 9 July 2021

Abstract

In the process of making a critical decision in reservoir engineering, we often find ourselves in a quandary. As in any other scientific or technical field, when a critical decision must be made at a juncture, we cannot proceed on gut feeling alone; we must first determine what knowledge and information are lacking. In generating that missing knowledge and understanding, the depth and the speed of the search emerge as two critical parameters. In other words, a shallow search conducted over a short period will usually not produce the missing information and knowledge and, quite possibly, will provide misguidance instead. When a large volume of sources of information is reviewed and the missing knowledge is generated using unbiased deductive methodologies, one can make an informed decision based on facts rather than intuition. Achieving such a result requires fast algorithmic protocols, so that the breadth of the search domain is not sacrificed and the desired solution can still be generated. In this paper, it is shown how desirable reservoir engineering decisions can be reached in a timely manner by choosing the most appealing course of action. It is true that in reservoir engineering applications the decision-making process may involve a blend of intuition with scientific and rational thinking, critical factors such as blind spots, and the use of conventional methodologies, all of which make decision-making hard to fully operationalize. Fortunately, there are mathematical and computational tools that help scientists and engineers consistently make correct decisions, which include gathering as much information as possible and considering all possible alternatives (as in combinatorial analysis protocols). The tool (model) proposed in this paper for making critical reservoir engineering decisions is a new computational platform/protocol that exploits the advantages both of mathematically developed formulations and of models built on the data and information collected. It is furthermore shown that the analyses conducted, and the critical decisions reached, represent more thorough and far-reaching solutions structured with less computational overhead, thereby further increasing the quality of the decision.

1. Introduction

Reservoir engineering is one of the more important branches of petroleum engineering. While reservoir engineering studies the existing drive mechanisms and their efficiencies, its overall objective is the optimization of the processes encountered in the extraction of hydrocarbons. Most reservoir engineering principles are bounded by the geological characteristics of the hydrocarbon reservoirs. The very first step in reservoir engineering involves estimating the location of the external boundaries and hence the size of the reservoir. The next step is to estimate the amounts of hydrocarbons that exist in the reservoir. The third step encompasses optimization of the recovery processes to make the entire operation economically sound and feasible. In all these steps, reservoir engineers and geoscientists utilize various numerical and analytical tools, including stochastic methodologies, first to characterize the reservoir in terms of its boundaries and intrinsic properties such as porosity, permeability, thickness, and saturation distributions. During this first step, other macroscopic features of the reservoir are also examined, including its compartmentalization characteristics, the presence of sealing or non-sealing faults, whether the reservoir is layered, and whether it is naturally fractured. Conclusions reached in this first step will often bring up more questions about the reservoir, including the principal flow directions, the micro-pore and macro-pore characteristics of dual-porosity reservoirs, and the existence or nonexistence of crossflow in layered systems.
Lack of data, especially during this first step of reservoir analysis, presents some of the most critical issues that operators face. What makes the problem even more complex and demanding is the need for high levels of accuracy in the data structure, so that it can be used confidently to answer the questions that surface during the second and third steps. Most of the time, the studies in the third step are conducted in a dynamic state, whereas the studies conducted during the first and second steps are often carried out in a static state. Therefore, in the third step it is of paramount importance to have a good understanding of the reservoir mechanisms and processes that are active. It should be clearly recognized that studies conducted during these three steps must be crafted in an integrated manner, so that internal checks and balances are in place to ensure that the findings of each step are in complete circular (recursive) agreement with each other.
Such a recursive agreement requirement of reservoir engineering studies implies that there may be several combinations of reservoir and project design parameters indicating that the solution sought is around a local extremum. At that stage, it will be necessary to implement Monte Carlo experiments, a broad class of computational algorithms that rely on repeated random sampling to assign uncertainty/certainty levels to numerical results. When the search domain is expanded, or the number of scenarios investigated is increased, it most often becomes necessary to conduct a prohibitively large number of simulation studies to cover the entire search domain. At this juncture, it is important to note that a fast proxy model, used to reduce the computational footprint, will receive its training data set from a high-fidelity model during the development stage.
Therefore, in order to achieve a strong coupling between a high-fidelity model and the proxy model, an effective handshaking protocol must be in place. Three examples, which highlight the efficacy of such a hybrid platform, are introduced in later sections.

2. Decision-Making and Decision Quality

In our daily lives, we constantly make decisions. In psychology, decision-making is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options [1]. Decision-making can be regarded as a problem-solving activity yielding a solution deemed to be optimal, or at least satisfactory [2]. While some of these decisions are good decisions, others may result in failures. Moreover, the stage of project implementation at which the decision is made is also critical. Sometimes, although the decision is good, its timing can be off, and failure of the project outcome becomes inevitable. Figure 1 below shows four possible quadrants in which the outcomes of the decisions that have been made can be located. The upper-right quadrant shows the region in which the good decisions are mapped out. While a good decision risks little for the opportunity gained, a bad decision can risk a great deal for the opportunity to gain only a little. The lower-left quadrant, on the other hand, represents the decisions that lead to complete failures. When we cannot objectively assess the risks and when we have a poor perception of time, the project outcome is destined to fail miserably. Between these two diametrically opposite quadrants, a decision-making tool (which can also be a model) is needed to establish a balance. It is obvious that the more information the model must consider, the longer it will take to reach a decision. Again, fast algorithmic protocols such as artificial expert systems can help by, on the one hand, generating and processing a larger volume of information to improve accuracy and, on the other, because of their high speed, making decisions in a timely manner.
As a process, decision-making involves making choices by identifying a decision, collecting information, and assessing the viability of possible alternative resolutions. In searching for alternative solutions, one should use lateral thinking skills so that new paths or courses of action can be constructed. An informed and effective analysis of the results for different alternatives (scenarios) should provide guidance towards a solution, identified as the one with the higher potential for reaching the desired goal. As a final step of the decision process, it will be necessary to examine the results of the decision and conclude whether the decision has met the identified need. If questions about the validity of the results remain, it may become necessary to reiterate some of the steps of the decision-making process, either to modify the previous decision or to make a completely new one (e.g., by exploring new alternatives).
The decision quality concept defines the framework of a good decision. It is generally accepted as an extension of decision analysis, providing a set of perceptions and tools that produce more clarity about the best choice in an uncertain and dynamic environment. Therefore, “decision quality” represents the quality of a decision at the moment the decision is made [3]. Decision quality also describes the process that leads to a high-quality decision [4]. It is important to reiterate that the quality of a decision depends on the quality and quantity of the information used to inform it. Quality in information is achieved when the information is meaningful, reliable, and properly reflects all uncertainties. Therefore, a properly implemented decision quality process enables achieving the goal under uncertain and complex scenarios. Figure 2 shows possible decision outcomes mapped onto decision quality and onto risk and uncertainty management.

3. Significance of Data-Driven Decisions

It is only fair to ask why we need data-driven decisions in reservoir engineering. The most direct answer is the inherently complex nature of reservoir systems. From the simplest to the most difficult, decisions made in reservoir engineering applications are thoroughly based on data. In common practice, discussions of data-driven decisions usually assume that the data is already available. Unfortunately, this is not a realistic assumption for reservoir engineering applications. For example, in terms of property distributions, one can generate voluminous data sets that will help in studying the performance of a reservoir for different scenarios. As the number of scenarios (data sets) is increased, the decision reached is expected to be more comprehensive and more instructive in terms of the key performance indicators (KPIs). It is well recognized that every reservoir engineering problem is an ill-posed one that needs to be converted to a well-posed problem; in other words, in reservoir engineering analysis problems, the number of unknowns far exceeds the number of available equations. Therefore, it will always be necessary to convert this ill-posed problem to a well-posed one before attempting to obtain a solution. Then, within inherently imposed bounds of the domain, many scenarios can be generated and viewed in a successive manner. The generated results are then sorted using a formal analytical protocol (such as Monte Carlo simulation analysis) to quantify the uncertainties. In order to increase the precision of such an analysis, it is necessary to examine a sufficiently large number of scenarios. To satisfy this precision condition more effectively, it is desirable to use proxy (AI-based) models so that the computational overhead can be reduced by orders of magnitude compared to that of the numerical models. It should also be clearly recognized that, as the number of scenarios investigated increases, the solution generated will approach the global extremum, so that the margin for non-uniqueness in the solution is drastically compressed.
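To make this screening idea concrete, the short Python sketch below (with hypothetical parameter names and a simple placeholder function standing in for a trained proxy model) draws tens of thousands of random scenarios within imposed bounds, evaluates the proxy for a key performance indicator, and reports Monte Carlo percentile brackets on the result:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical parameter bounds defining the search domain (illustrative only).
bounds = {
    "porosity":        (0.08, 0.30),    # fraction
    "permeability_md": (1.0, 500.0),    # millidarcy
    "thickness_ft":    (20.0, 200.0),   # feet
    "injection_rate":  (500.0, 5000.0)  # bbl/day, a project design parameter
}

def sample_scenarios(n):
    """Draw n random scenarios uniformly within the imposed bounds."""
    return {k: rng.uniform(lo, hi, n) for k, (lo, hi) in bounds.items()}

def proxy_recovery_factor(s):
    """Placeholder for a trained proxy (ANN) predicting a KPI.
    In practice this would be a model trained on high-fidelity runs."""
    return (0.1 + 0.4 * s["porosity"]
            + 0.05 * np.log10(s["permeability_md"]) / 3.0
            + 1e-5 * s["injection_rate"])

scenarios = sample_scenarios(20_000)      # cheap: the proxy evaluates all of them
rf = proxy_recovery_factor(scenarios)
p10, p50, p90 = np.percentile(rf, [10, 50, 90])
print(f"Recovery factor  P10={p10:.3f}  P50={p50:.3f}  P90={p90:.3f}")
```

Because the proxy is cheap to evaluate, widening the bounds or adding scenarios costs little, which is precisely the advantage argued for above.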

4. Decision Making in Reservoir Engineering While Facing Uncertainties

In reservoir management problems, the most critical problem is the geological uncertainty. In today’s practices of the upstream petroleum engineering technologies, reservoir engineering principles and fundamentals are instrumental in making sound decisions towards the development and production of a hydrocarbon reservoir. Most of the time these decisions need to be made in the presence of limited data.
Fundamentally, the basic relation for reservoir evaluation for a given process using a deterministic protocol can be expressed as [5]:
q(t) [or p(t)] = Ω(D, R)        (1)
In Equation (1), the following terms are identified:
  • q(t): Production/injection rate at point x, y, z at time t
  • p(t): Pressure at point x, y, z at time t
  • Ω(D, R): Mathematical flow model with the relevant “built in” physics and thermodynamics
  • D: Flow domain characteristics (e.g., fluid types, spatial and directional dependencies)
  • R: Some predefined production/injection mechanism, and/or recovery process including the project design parameters (e.g., well geometry).
Equation (1) may confront a reservoir engineer in three different forms. If Equation (1) is used to solve for q(t) [or p(t)] using the available information on Ω(D, R) as it appears on the right-hand side, then this solution is known as the “forward solution”. The accuracy of the forward solution depends on how accurately the flow mechanisms are represented in terms of physics and thermodynamics, as well as on an accurate description of the spatial distribution of the domain characteristics, the design parameters imposed on the reservoir system in terms of boundary and initial conditions, and the key features of the project implemented. Equation (1) can also be rearranged to solve for D, which is known as “history matching”, so that a reservoir can be characterized. This inverse form of the solution of Equation (1), as expected, inherently suffers from non-uniqueness. The second inverse solution involves the design of the project parameters: if a desired q(t) is specified in Equation (1), then the same equation can be rearranged to solve for the project design parameters that will effectively generate the expected q(t). Like any inverse solution, this second inverse solution is also prone to non-uniqueness problems. In solving Equation (1), if the right-hand side, Ω(D, R), is fully known, solving for the left-hand side is straightforward, and such a solution protocol is known as a deterministic process. The main difficulty faced here is how certain one is of the entries on the right-hand side, namely Ω, D, and R. If Ω, D, and R are only partially known, several scenarios are created for their unknown entries, and the problem is solved for each scenario, then the solution protocol is called stochastic. The “hybrid modeling” discussed later incorporates deterministic, stochastic, and proxy modeling approaches in a synergistic manner.
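The distinction between the forward solution and the first inverse solution (history matching) can be sketched in a few lines of Python. The forward model below is a toy stand-in for Ω(D, R), not the author's formulation; the inverse step recovers D by minimizing the misfit between the model output and the observed rates:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(1.0, 365.0, 50)  # days

def forward_model(D, R, t):
    """Toy stand-in for q(t) = Omega(D, R): an exponential decline whose
    initial rate and decline speed depend on domain (D) and design (R) inputs."""
    perm, thickness = D
    rate_target, = R
    qi = 0.05 * perm * thickness + 0.1 * rate_target
    decline = 5e-3 / np.sqrt(perm)
    return qi * np.exp(-decline * t)

# --- Forward solution: D and R are known, predict q(t) ---
D_true, R = (150.0, 40.0), (1000.0,)
q_obs = forward_model(D_true, R, t) * (1 + 0.02 * np.random.default_rng(0).standard_normal(t.size))

# --- Inverse solution (history matching): recover D from the observed q(t) ---
def misfit(D):
    return np.sum((forward_model(D, R, t) - q_obs) ** 2)

result = minimize(misfit, x0=[50.0, 20.0], method="Nelder-Mead")
print("Estimated D (permeability, thickness):", result.x)
```

Starting the minimization from different initial guesses may return different, yet comparably well-fitting, parameter combinations, a small-scale illustration of the non-uniqueness noted above.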
There are four main sources of uncertainty in reservoir characterization: the quantity and quality of the existing data, geological understanding, scaling-up, and mathematical representation. Unfortunately, of these four sources, only one, the uncertainty originating from mathematical representation, is quantifiable. Yet, to reduce the uncertainties, we tend to purchase more seismic lines to be more certain of the reservoir volume, collect more core samples to be more certain of the average porosity and permeability values, conduct more petrophysical analyses to be more certain of the saturations, acquire more gamma logs to be more certain of the net-to-gross thickness ratio, and run more simulation studies to be more certain of the recovery factor. Table 1 shows the data categories that one needs in order to have a good grip on the solution of Equation (1).
In the next section, a general outline of a hybrid computational platform that is assembled to obtain the forward and two inverse forms of the solutions to Equation (1) will be discussed.

5. A Hybrid Computational Platform

The forward and the two associated inverse forms of Equation (1) can be solved using conventional formulations and applicable solution protocols. However, these solutions typically prove to be prohibitively expensive and slow in terms of personnel requirements and computational overhead. By coupling proxy-based algorithms with conventional numerical and/or analytical solution procedures on a common platform, it becomes possible to generate rapid and much less expensive solutions with higher levels of accuracy in identifying the uncertainty. Figure 3 shows a schematic representation of the computational structure of the platform discussed here.
The basic tenet of the computational platform shown in Figure 3 is to capitalize on the advantages of both high-fidelity models and artificial intelligence-based models by coupling them in computations in a synchronous manner. As a result of such an effective integration, it becomes possible to achieve high-accuracy, low-cost, low-energy, and high-speed solutions that are tractable. The seamless integration of these two radically different modeling technologies with the help of well-established handshaking protocols adds a uniquely assembled, powerful tool to the capabilities of reservoir engineers and geoscientists. On the left-hand side of Figure 3, a suite of high-fidelity reservoir models is shown. These models are developed with the following functionalities:
  • Rectangular and radial-cylindrical grid systems: Both rectangular and radial-cylindrical grid systems are incorporated and supported, so that the models are adaptable to problems with varying physical boundaries and boundary conditions.
  • Black oil model: A black oil model with single-, two-, and three-phase fluid flow conditions and a variable bubble point formulation.
  • Compositional model: This model is used to represent the multi-phase compositional fluid flow with advanced flash calculation techniques (VLE and VLLE computations).
  • Shale gas/Coalbed Methane (CBM) model: A dual-porosity compositional shale gas/CBM model is developed as part of this module.
  • EOR process models: These models include thermal EOR models, chemical EOR models, and miscible gas injection models.
On the right-hand side of Figure 3, a library of artificial-neural-network-based expert systems is shown. These modules can be considered three separate toolboxes. Figure 4, Figure 5 and Figure 6 show the expert tools included in the enhanced oil recovery process, well test analysis, and general reservoir engineering toolboxes, respectively. It should be noted that the tools highlighted in each of these figures should not be considered the final catalogue of each toolbox, as more tools can be added as they become available.
All the tools assembled honor the information exchange protocols of the hybrid platform discussed earlier.
Making the decision on the suitability of an EOR process for a specific field is generally accepted as the first step, and one of paramount importance. There is typically a multitude of options in making this important decision. The expert tools highlighted in Figure 4 are designed to support the best decision on an EOR process; when some representative reservoir parameters are assigned, they can generate not only a checkmark, as a rule-of-thumb approach would, but also expected recovery performances in quantified form. Thus, during the screening stage, it will be possible to visit several EOR processes and make an informed decision about the suitability of the most promising one. Once such a decision is in place for a specific EOR methodology, in the next step a large number (tens of thousands) of proxy model runs can be conducted by varying reservoir properties and project design parameters to generate certainty brackets on the project performance indicators. If needed, the information on the optimized project design parameters, together with the reservoir parameters used, is passed to the relevant high-fidelity model located on the left-hand side of Figure 3, and the high-fidelity model is run to verify the results from the proxy model. As explained earlier, the coupled approach helps the EOR project design engineer by narrowing the search window extensively, so that most of the unnecessary and peripheral work is eliminated. Furthermore, by rapidly studying tens of thousands of scenarios on the implementation of the project, it will be possible to quantify the overall certainty of the results more precisely.
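The two-stage screening logic described above can be sketched as follows; both functions are hypothetical stand-ins (a real toolbox would call the trained EOR proxy and the numerical simulator), but the structure, a cheap proxy sweep followed by high-fidelity verification of only the top candidates, is the point being illustrated:

```python
import numpy as np

rng = np.random.default_rng(7)

def proxy_eor_response(params):
    """Stand-in for a trained EOR proxy: returns an incremental-recovery estimate
    for (reservoir, design) parameter vectors. A real toolbox would call the ANN."""
    porosity, viscosity_cp, slug_size = params.T
    return 0.3 * porosity - 0.02 * np.log10(viscosity_cp) + 0.1 * slug_size

def high_fidelity_run(params):
    """Placeholder for the (expensive) numerical simulator used for verification."""
    return proxy_eor_response(params[None, :])[0] + rng.normal(0.0, 0.005)

# Stage 1: the proxy sweeps tens of thousands of scenarios cheaply.
n = 50_000
scenarios = np.column_stack([
    rng.uniform(0.10, 0.30, n),    # porosity
    rng.uniform(1.0, 1000.0, n),   # oil viscosity, cp
    rng.uniform(0.05, 0.50, n),    # chemical slug size, pore volumes
])
scores = proxy_eor_response(scenarios)

# Stage 2: only the most promising candidates are handed to the high-fidelity model.
top = np.argsort(scores)[-5:]
for idx in top:
    verified = high_fidelity_run(scenarios[idx])
    print(f"scenario {idx}: proxy={scores[idx]:.4f}  high-fidelity={verified:.4f}")
```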
Figure 5 shows several well test analysis expert systems assembled in a second toolbox. These tools are designed for the pressure transient analysis (PTA) of hydrocarbon reservoirs with radically different macroscopic and microscopic characteristics. Some of these PTA expert systems are designed for complex well structures such as multilateral wells, slanted wells, and wells completed with multi-stage hydraulic fractures. Well test analysis is a class of systems analysis. In a systems analysis application, the system under observation is perturbed (e.g., the well is shut in or put on production). The reservoir system responds to this perturbation, and the resulting signals (pressures) are measured as a time series. Then, using the existing analytical solutions in an inverse mode, some key reservoir characteristics are determined. Models used in pressure transient analysis are always built in the same manner. The basic models assume homogeneous and isotropic reservoir characteristics. There are also analytical models built for double-porosity, multilayer (with or without crossflow), and composite reservoirs. In well test analysis, perhaps the most critical step is the identification of the model to be used in the interpretation of the well test data. The supervisory model placed at the top of Figure 5 reviews the time series of pressure data and looks for specific signatures (markers) so that it can decide which expert system from the toolbox should be called in to conduct the pressure transient analysis. In generating the training data for the expert tools shown in Figure 5, either available analytical models or numerical models are used. Then, the specialized expert systems are exposed to the training data to learn and discover the signatures recorded in the data sets. The expert systems shown in Figure 5 have the capability to shed more light on the reservoir characteristics than the classical analysis models. For example, when the expert system analyzes a faulted reservoir system, it can determine the sealing characteristics of the fault and the directional permeability values, whereas a classical model for faulted reservoirs works only for isotropic systems with completely sealing faults.
The third toolbox, shown in Figure 6, is designed for generalized reservoir and production engineering methodologies and, perhaps more importantly, for integrated engineering and geosciences studies. This toolbox includes expert systems that can be used to integrate geological, geophysical, and production data in order to find the most promising infill well locations in brown and semi-brown fields and to perform history matching studies. Furthermore, some expert systems that can be used to judge the spatial and temporal variations of reservoir properties, such as relative permeability and capillary pressure, are also included. Introducing a dynamic characterization of relative permeability and capillary pressure in conventional reservoir simulation studies will increase the accuracy of the results significantly. Again, the computational platform, as sketched in Figure 3, can incorporate expert systems in a “built-in” form into the reservoir models that are based on hard-computing protocols.

6. Examples

In this section, some examples pertaining to different expert systems that appear in the three toolboxes (Figure 4, Figure 5 and Figure 6) are presented. The final topology of a network is typically identified by the number of input and output neurons, the number of intermediate layers, and the number of neurons on each intermediate layer. Another important component of a network is the learning algorithm used in assembling it. Searching all possible combinations of these options to find the optimum network structure is an arduous task. To overcome this difficulty and optimize the network topology, a parallel-processing workflow is typically utilized [7]. This protocol is outlined in Figure 7. The overall idea of the workflow is to randomly generate many network architectures. With the help of parallel processing, multiple trainings on the various architectures can be processed in a synchronous manner. The use of such a workflow is attractive not only because it converges to an optimized structure but also because it decreases the computational overhead.
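A minimal sketch of such a workflow is given below, assuming scikit-learn is available and using randomly generated stand-in data; candidate topologies are sampled at random and trained concurrently, and the architecture with the best validation score is retained:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(size=(2000, 10))                       # stand-in training inputs
y = X @ rng.uniform(size=10) + 0.1 * rng.standard_normal(2000)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

def evaluate_topology(hidden_layers):
    """Train one candidate architecture and return its validation score."""
    net = MLPRegressor(hidden_layer_sizes=hidden_layers, max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return hidden_layers, net.score(X_val, y_val)

# Randomly generated candidate topologies (1-3 hidden layers, 10-150 neurons each).
candidates = [tuple(int(n) for n in rng.integers(10, 150, rng.integers(1, 4)))
              for _ in range(16)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate_topology, candidates))
    best = max(results, key=lambda r: r[1])
    print("Best topology:", best[0], "validation R2:", round(best[1], 3))
```

In the published workflow, the candidate architectures and learning-algorithm options would be drawn from the catalogue described in [7]; here they are random placeholders used only to show the parallel search pattern.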
After identifying the most suitable ANN architecture, the next step involves the training of the network using the available data. Often, the entire dataset is divided into a training set, a validation set, and a testing set in a random manner. Typically, 80% of the data is used for training and 10% is used for validation purposes, with the remaining 10% of the dataset being used for testing purposes.
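A minimal sketch of the 80/10/10 random partition, with placeholder arrays standing in for the actual training patterns, might look like this:

```python
import numpy as np

def split_80_10_10(X, y, seed=0):
    """Randomly partition a dataset into 80% training, 10% validation, 10% testing."""
    idx = np.random.default_rng(seed).permutation(len(X))
    n_train = int(0.8 * len(X))
    n_val = int(0.9 * len(X))
    tr, va, te = idx[:n_train], idx[n_train:n_val], idx[n_val:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

X = np.arange(1000).reshape(-1, 1).astype(float)   # placeholder inputs
y = X.ravel() * 2.0                                # placeholder targets
train, val, test = split_80_10_10(X, y)
print(len(train[0]), len(val[0]), len(test[0]))    # 800 100 100
```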

6.1. Example 1—Design of Cyclic Steam Stimulation Process

Cyclic steam stimulation (CSS) has enjoyed great success as an enhanced oil recovery (EOR) process in its broad application to heavy oil reservoirs. It is widely implemented for its attractive economic efficacy and rapid project response [8]. A typical CSS project consists of multiple repetitive cycles of (i) a steam injection period, (ii) a soaking period, and (iii) a production period. During each of these three phases of the CSS process, a single well serving as both injector and producer is utilized. During the injection stage, superheated steam is injected through the well. The injection period may last 2 to 10 days. Then, the well is shut in (typically for 5 to 7 days). The purpose of this soaking period is to give the injected steam an opportunity to spread evenly throughout the reservoir and decrease the viscosity of the heavy, viscous oil. In the third phase of the process, the same well is put on production (typically for several weeks to several months) so that the heated oil with reduced viscosity can flow more easily towards the wellbore. These three stages of injection, soaking, and production are repeated in a cyclic manner as long as the production rates at the end of each cycle remain economical. From this short discussion, it should be clear that in designing the process it is important for the design engineer to plan the duration of each stage and the total volume of steam to be injected. It is not good practice to design a CSS project with fixed time periods for the injection, soaking, and production stages. The disadvantage of such a pre-fixed schedule is that the oil production rate may drop to an extremely low value before the cycle switches, because the energy introduced to the reservoir is not enough to sustain a production period as long as designed; conversely, the timing of the cycle switch could prove to be premature while the oil production rate is still high. This highly nonlinear behavior of the system makes it more challenging for the expert system to learn. To address this issue, Sun and Ertekin suggested the development of a supervisory ANN, which can be used as a classification tool based on the number of cycles [7]. In other words, a catalogue of sub-ANN models is developed, each for a fixed number of cycles, to predict the oil production rate profiles for projects with different numbers of CSS cycles. In the first phase of the design process, the supervisory ANN predicts the number of cycles that will be necessary for the specific reservoir being studied. Based on the predicted number of cycles, the corresponding sub-ANN is used to study and understand the performance of the same reservoir. The general workflow of the expert system is shown in Figure 8.
The supervisory ANN is a fully connected model with three intermediate layers of 78, 83, and 68 neurons, respectively. The input layer, with 30 neurons, receives input parameters involving spatial properties, initial conditions, fluid properties, relative permeability coefficients, and project design parameters, including steam quality, steam injection rate, steam temperature, production well bottom-hole pressure, well drainage radius, injection duration, soaking duration, and cycle switching rate. As explained above, the output layer, with one neuron, simply predicts the number of cycles. Once the number of cycles is obtained from the supervisory ANN, the corresponding sub-ANN developed for that specific number of cycles predicts the oil production rate, as shown in Figure 9.
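The two-stage inference described above (supervisory classification followed by a call to the matching sub-ANN) can be sketched as follows; the data, the 120-point rate profile, and the sub-network sizes are placeholders, while the supervisory hidden-layer sizes follow the 78/83/68 description above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(11)

# Stand-in training data: 30 input features (reservoir, fluid, and CSS design
# parameters) and a cycle-count label; real training sets come from simulator runs.
X = rng.uniform(size=(2000, 30))
n_cycles = rng.integers(3, 9, size=2000)

# Supervisory ANN: three intermediate layers of 78, 83, and 68 neurons.
supervisor = MLPClassifier(hidden_layer_sizes=(78, 83, 68), max_iter=300, random_state=0)
supervisor.fit(X, n_cycles)

# Catalogue of sub-ANNs, one per cycle count, each predicting a 120-point rate profile.
sub_models = {}
for c in np.unique(n_cycles):
    mask = n_cycles == c
    profiles = rng.uniform(size=(mask.sum(), 120))      # placeholder rate profiles
    m = MLPRegressor(hidden_layer_sizes=(60, 60), max_iter=200, random_state=0)
    m.fit(X[mask], profiles)
    sub_models[int(c)] = m

def predict_css_performance(reservoir_and_design):
    """Two-stage inference: classify the expected cycle count, then call the
    matching sub-ANN to predict the oil production rate profile."""
    x = np.atleast_2d(reservoir_and_design)
    c = int(supervisor.predict(x)[0])
    return c, sub_models[c].predict(x)[0]

cycles, rate_profile = predict_css_performance(rng.uniform(size=30))
print(f"Predicted {cycles} cycles; first 5 rate points: {rate_profile[:5]}")
```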

6.2. Example 2—Characterization of a Fault Plane from PTA Data

In this section, a proxy model developed as a powerful tool to be deployed in the analysis of the pressure transient data collected in an anisotropic and faulted reservoir is described [9]. The principal tasks assigned to the proxy model include determination of the permeability values in principal flow directions, porosity of the reservoir, distance to the fault, sealing characteristics of the fault, and orientation of the fault plane with respect to the principal flow directions. The training data for the proxy model is generated using a two-dimensional, single-phase, slightly compressible numerical model. The pressure transient data sets generated for a large variety of combinations of input parameters are shown to the proxy model in a systematic manner. In generating the pressure transient data, the following assumptions are made:
  • Single-well producing at a constant flow rate,
  • Infinitely large reservoir,
  • Single-phase, slightly compressible fluid,
  • Homogeneous formation thickness, porosity, and anisotropic permeability distributions,
  • Fully or partially sealing fault plane (expressed in percentage),
  • Infinitely long fault plane with no width.
In order to accommodate the infinitely large reservoir assumption in the numerical model, it was ensured that the pressure transients had not reached the outer physical boundaries during the collection of the well test data. Table 2 shows the principal reservoir variables placed on the input and output layers.
The pressure transient data, placed together with the other reservoir variables, include nine pressure and time values chosen randomly from the pressure transient record. These nine pressure-time pairs exposed to the network must include pressure values recorded after the pressure transients reach the fault plane (in other words, after the presence of the fault is felt at the wellbore). Furthermore, several functional links are added to the input and output layers. The resulting optimum ANN structure is found to have six layers, including the input and output layers. The first and second intermediate layers have 175 and 100 neurons, respectively; the third intermediate layer has 60 neurons, and the fourth has 30 neurons. The hyperbolic tangent sigmoid (tansig) and log-sigmoid (logsig) transfer functions in the intermediate layers and the linear transfer function (purelin) in the output layer appeared to be the most appropriate for this class of problems. The existing high-level nonlinearities were the main reason for adding several functional links to the input and output layers and for using four intermediate layers for deep machine learning. Figure 10 shows the results of 30 test runs (predictions) and comparisons with the actual characteristics used in generating the pressure transient data used in the analysis. In developing the network described here, it was observed that the inclusion of appropriate functional links in the input and output layers was crucial in generating a powerful network with testing results as displayed in Figure 10.
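A rough sketch of the described topology and of the functional-link idea is shown below; the engineered features, the 36-input/6-output layout, and the untrained random weights are illustrative assumptions, not the network reported in [9]:

```python
import numpy as np

def add_functional_links(t, p):
    """Hypothetical functional links for PTA inputs: augment the nine (t, p) pairs
    with engineered features such as log(t) and a pressure-derivative proxy."""
    t, p = np.asarray(t, float), np.asarray(p, float)
    dp_dlnt = np.gradient(p, np.log(t))           # classical PTA diagnostic quantity
    return np.concatenate([t, p, np.log10(t), dp_dlnt])

# Hidden-layer sizes as described: 175, 100, 60, and 30 neurons.
sizes = [36, 175, 100, 60, 30, 6]                 # 36 inputs, 6 outputs (illustrative)
rng = np.random.default_rng(1)
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases  = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """tanh ('tansig') and sigmoid ('logsig') in the hidden layers, linear output."""
    acts = [np.tanh, np.tanh,
            lambda z: 1 / (1 + np.exp(-z)), lambda z: 1 / (1 + np.exp(-z)),
            lambda z: z]
    for W, b, f in zip(weights, biases, acts):
        x = f(x @ W + b)
    return x

t = np.logspace(-2, 2, 9)                         # nine sampled times, hours
p = 3000 - 80 * np.log10(t * 50 + 1)              # synthetic drawdown response, psi
x = add_functional_links(t, p)
print(forward(x))                                 # untrained placeholder outputs
```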

6.3. Example 3—Integration of Seismic, Well-Log and Production Data to Design an Infill Drilling Program

Seismic data, petrophysical data (including well logs and core analysis data), and field production history are the principal sources of information used in reservoir characterization. While the use of each of these data sets individually helps in answering several questions about the characterization of the reservoir rock, integrating these different data resources in a purposeful way opens new vistas, so that effective workflows for locating sweet spots in infill drilling operations can be adopted. This challenging problem becomes even more perplexing when it needs to be addressed in the presence of a complex reservoir architecture. In this section, we show how this challenging problem is brought to a resolution with the aid of an integrated model built using artificial neural networks [10]. This example, once again, demonstrates how an expert system can become a powerful tool in decision-making processes.
The oil field focused on in this study is in North America and is part of the Wilcox formation. Figure 11 shows the reservoir boundaries and the locations of the 39 existing wells. An extensive seismic survey was carried out throughout the reservoir.
Table 3 summarizes the types of well logs available and the number of wells with production histories. As can be seen in Table 3, a full suite of well logs was not available for each well. Therefore, it would be necessary to generate synthetic well logs not only at every intersection of the seismic lines but also at some actual well locations where certain types of well logs are missing.
In this work, the available production data spans a period of three years. A typical well production history is decomposed into a plateau period and a decline period. To reduce the volume of the data, a curve-fitting scheme is applied to express the production data collected during the decline period in a hyperbolic-decline form. For each well, a plateau flow rate is calculated as the arithmetic average of the flow rates, which is then used as one of the fitting parameters for the plateau region. The total duration of the plateau period is also recorded. The production rate during the decline period is fitted using a three-parameter hyperbolic-decline form, as shown in Figure 12.
Following the protocol described in Figure 12, the production history of a typical well can be expressed simply by five parameters. These consist of two parameters for the plateau region (the average flow rate during the plateau and the duration of the plateau period) and three parameters for the decline region (the initial flow rate at the beginning of the decline period and the hyperbolic decline constants a and b). In this way, in order to predict the expected production at any location within the reservoir, it is sufficient to find the values of these five parameters.
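As an illustration of this parameterization, the sketch below fits a three-parameter Arps-type hyperbolic form to synthetic decline data with scipy; the exact functional form and parameter symbols used in the original study may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_decline(t, qi, a, b):
    """Arps-type hyperbolic decline, q(t) = qi * (1 + a*b*t)^(-1/b).
    One common three-parameter choice; the paper's exact form may differ."""
    return qi * (1.0 + a * b * t) ** (-1.0 / b)

# Synthetic monthly rates for the decline period of one well.
t_decline = np.arange(0, 30)                               # months since decline onset
q_true = hyperbolic_decline(t_decline, qi=1200.0, a=0.08, b=0.6)
q_obs = q_true * (1 + 0.03 * np.random.default_rng(5).standard_normal(q_true.size))

plateau_rate = 1250.0        # arithmetic average of the plateau-period rates (bbl/month)
plateau_months = 6           # recorded duration of the plateau period

(qi_fit, a_fit, b_fit), _ = curve_fit(hyperbolic_decline, t_decline, q_obs,
                                      p0=[q_obs[0], 0.05, 0.5])

# The five parameters that summarize one well's production history:
print(dict(plateau_rate=plateau_rate, plateau_months=plateau_months,
           qi=round(qi_fit, 1), a=round(a_fit, 4), b=round(b_fit, 3)))
```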
In this work, a two-stage tool-development protocol is implemented. This protocol fully utilizes the available field data and provides fast and accurate predictions for the purposes of reservoir characterization and field development optimization. The first tool addresses the need for synthetic well logs at each intersection of seismic lines throughout the field. In other words, this tool can generate 5 different types of synthetic well logs at any location within the seismic survey boundaries. In the development stage of this tool, seismic attributes extracted from 3D seismic data, 5 different types of well logs, and well coordinates are utilized. The developed synthetic well log tool is capable of generating well logs for vertical wells at specified locations. However, the very same tool can be used in generating logs for wells with complex architecture such as slanted wells and horizontal wells when the wellbore trajectory is defined. Figure 13 shows schematically how the first tool works.
The second tool is developed to predict the oil flow rates and cumulative oil production profiles as a time series at any desired well location. It should be remembered that the final goal of the second tool is to predict the five production-profile related parameters as described in Figure 12. Once these five parameters are identified, it is a straightforward procedure to construct the predicted production profiles at the desired locations. Figure 14 highlights the significant parts of the workflow established for the second stage of the analysis and construction of potential hydrocarbon productivity distribution over the domain of interest.
Figure 15 shows the predicted oil flow rates and cumulative production profiles for the average, best, and worst cases. In these comparisons, the predicted production profiles are compared against the production data from the actual wells. The average absolute error reported is 16.66% for the average-quality case (Figure 15a), 9.86% for the best-quality case (Figure 15b), and 18.59% for the worst case (Figure 15c). These testing cases represent results for wells that were not shown to the expert system during the training phase. It is observed that promisingly high-quality matches were obtained both for the oil production rates and for the cumulative oil production. It should be noted that the cumulative production time series is not obtained by integrating the area under the production rate profile but is determined directly by the expert system. This approach provides an additional internal check on the predictions. The average absolute error for cumulative production as a function of time is found to be slightly larger than 5%.
After validating and successfully testing the tools developed in the first and second phases, the remaining objective is to establish a heat map displaying the expected productivity distribution over the entire field. By doing so, it becomes possible to identify previously unnoticed sweet spots for infill drilling purposes. This is schematically illustrated in the last phase of the second-stage tool development shown in Figure 14. After sweeping the entire field, the heat maps generated are shown in Figure 16. The upper left panel is a heat map showing the expected flow rates in barrels per month three years after commencing production. On this panel, in the central section, the elongated area with bright colors shows the location of the existing producing wells (compare with the panel at the bottom). Additionally, in the same panel, the sweet spots identified as a result of this study are marked with red ovals. The heat map in the upper right panel displays the expected cumulative production at the end of three years. In both heat maps, some of the permeability channels identified are also highlighted with red arrows.
The work summarized in this section offers a promising innovative protocol to characterize semi-brown to brown oil fields in terms of their expected productivities. Once again, what makes the entire protocol more attractive are the high accuracy levels in predictions that are achieved at high computational speeds.

7. Taking One Step Further down the Road on the Hybrid Computational Platform

Thus far, all the implementations on the hybrid computational platform discussed in this article have consisted of an asynchronous, external handshaking between hard-computing and soft-computing modules. The ‘external handshaking’ between the hard-computing and soft-computing protocols shown in Figure 3 involves the two-way information/knowledge transfer between the high-fidelity and the proxy models. This section describes, through some examples, how a proxy model can be built within a numerical model to accelerate computations synchronously.
In a numerical representation of multi-phase and multi-component fluid flow dynamics in porous media, in order to capture the effects of the varying composition of phases in the reservoir and the wellbore domains accurately, it is necessary to use compositional formulations. In the execution of such formalisms, most of the computational overhead is encountered in vapor-liquid-equilibria calculations. In order to decrease the time spent on flash calculations, it is recommended to use a neuro-simulation methodology such that a capable artificial expert system can predict the pressure profile along a production tubing containing flow of pure hydrocarbon components [11]. This expert system is designed to be operational within a range of flow rates, well depths, pipe diameters, inlet fluid compositions, wellhead pressures and geothermal gradients. The data for training the expert system is generated via a numerical simulator that computes wellbore hydraulics. In this work, a two-phase drift-flux model accounts for the variations in gaseous and oleic phase velocities under various flow regimes. In solving the wellbore hydraulics equations, compositional mass balance and momentum equations are solved simultaneously using a fully implicit scheme [11].
As expected, the numerical wellbore hydraulics model is computationally expensive and takes significant time, ranging from minutes to hours, to predict the pressure distribution of the stabilized wellbore system. The complexity of the computations increases when a fine grid is overlaid on the production tubing to control numerical dispersion. This strategy results in a much larger number of blocks, especially in the case of ultra-deep wells. The increase in the number of blocks in the wellbore hydraulics calculations necessitates more flash calculations. Therefore, a different model utilizing an artificial neural network protocol as a classification and regression tool is suggested to carry out the flash calculations in the wellbore in a much more expeditious manner. In this example, in structuring the artificial neural network, the total number of components was set to seven. These seven components comprise methane, ethane, n-propane, iso-butane, and iso-pentane, plus two pseudo-components, C6+ (C6 through C19) and C20+ (C20 through C45). Table 4 displays a summary of the input parameters used in the ANN-based wellbore hydraulics model. The model has 43 input parameters, as shown in Table 4, and one output (flowing bottom-hole pressure). A total of almost 80,000 data sets have been generated. As observed in Table 4, input number 5 represents the depth, as a fraction of the total depth, of the point at which one intends to predict the wellbore pressure. After obtaining raw pressure data from each wellbore simulation, the pressures were interpolated at intervals representing each tenth of the total depth of the well. Figure 17 shows the architecture of the ANN-based wellbore hydraulics model.
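The depth-fraction preprocessing step can be sketched as follows, with a synthetic pressure profile standing in for the raw simulator output:

```python
import numpy as np

def pressures_at_depth_fractions(depth_ft, pressure_psi, total_depth_ft):
    """Interpolate a simulated wellbore pressure profile at each tenth of total depth,
    matching the depth-fraction input (input 5 in Table 4) used by the ANN model."""
    fractions = np.linspace(0.1, 1.0, 10)
    return fractions, np.interp(fractions * total_depth_ft, depth_ft, pressure_psi)

# Synthetic raw output of one wellbore-hydraulics simulation (fine grid in depth).
total_depth = 12_000.0                                  # ft
depth = np.linspace(0.0, total_depth, 400)
pressure = 500.0 + 0.32 * depth                         # simple pressure gradient, psi

fracs, p_at_fracs = pressures_at_depth_fractions(depth, pressure, total_depth)
for f, p in zip(fracs, p_at_fracs):
    print(f"depth fraction {f:.1f}: {p:8.1f} psi")
```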
Further computational experiments with the ANN-based wellbore hydraulics model indicate that the developed proxy model is fast and robust. Table 5 compares the wellbore hydraulics model against the numerical model and against field data with respect to the level of accuracy achieved. A computational performance comparison of the full numerical model (numerical reservoir model coupled with the numerical wellbore hydraulics model) against the neuro-simulation model (numerical reservoir model coupled with the ANN-based wellbore hydraulics model) was also made. The full numerical wellbore model was run for 24 h with the input variables randomly chosen in a protocol like the one followed to generate data for ANN training. The numerical model was able to study 91 cases within this time span. These 91 cases were then re-simulated on the same computer using the ANN model. Although the ANN can take inputs simultaneously in the form of a 43 × 91 matrix and predict the result as a 1 × 91 vector, an iterative loop was used to run the neuro-simulation model. This was done in order to account for the ANN call time, which will be a factor in each iteration of the coupled reservoir-wellbore simulation. It was observed that the total time taken by the ANN toolbox to re-simulate the 91 runs was 2.13 s. Since the numerical model was run for 86,400 s, the proposed ANN model outperformed the numerical model speed-wise by more than a factor of 40,000. The ANN model did not encounter any stability issues that might lead to time-step cuts or failure of the program. This speed and robustness of the ANN model provide a significant advantage in coupled reservoir-wellbore simulation studies. This comparison was performed on a computer with a 2.20 GHz processor and 8 GB of RAM [11].
Further computational time experiments were conducted for gas-lift operations. Table 6 shows the computational time comparison for the full numerical model (numerical reservoir model linked to the numerical wellbore hydraulics model), the numerical-ANN coupled model (numerical reservoir model linked to the ANN-based wellbore hydraulics model), and the full ANN-based model (ANN-based reservoir model linked to the ANN-based wellbore hydraulics model). Once again, the time comparisons are remarkably different. The numerical-ANN coupled gas-lift model provides a speed-up by a factor of about 160 when compared to the full numerical model. Furthermore, the full ANN-based gas-lift model provides a speed-up of about six orders of magnitude when compared to the full numerical model.
The ideas and observations presented in these experiments suggest that similar coupled protocols in other reservoir engineering applications can be attractive, not only from the perspective of gaining computational speed but also for capturing the reservoir heterogeneities more accurately. Along these lines, two possible implementations immediately come to mind. One implementation is the use of ANN-based relative permeability and capillary pressure models in conventional simulators. In other words, during the computations at any reservoir block, at any time step, and at any iteration level, rather than calling the same subroutine that returns the relative permeability values only as a function of saturation, it will be much more accurate and representative to receive the response from an ANN-based property model capable of tracing and updating the requested petrophysical property not only as a function of saturation but also as a function of other independent spatial and temporal properties, such as permeability, porosity, interfacial tension, or capillary number (the latter two collectively determine the final saturation values).
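The sketch below contrasts a conventional saturation-only table lookup with a stand-in for such an ANN-based property model (the correlation inside ann_kr is purely hypothetical); two blocks at the same saturation but with different rock properties then receive different relative permeability values:

```python
import numpy as np

def table_kr(sw):
    """Conventional approach: relative permeability from a saturation-only table."""
    sw_table = np.linspace(0.2, 0.8, 13)
    kro_table = np.maximum(0.0, (0.8 - sw_table) / 0.6) ** 2
    return np.interp(sw, sw_table, kro_table)

def ann_kr(sw, permeability_md, porosity, capillary_number):
    """Stand-in for an ANN-based property model that returns kr as a function of
    saturation *and* other spatial/temporal properties (hypothetical correlation)."""
    base = table_kr(sw)
    return base * (1.0 + 0.05 * np.log10(permeability_md / 100.0)
                       + 0.2 * (porosity - 0.2)
                       + 0.1 * np.log10(max(capillary_number, 1e-9) / 1e-6))

# Inside a simulator, each block at each time step and iteration would query the model:
for block in [dict(sw=0.45, k=250.0, phi=0.22, Nc=2e-6),
              dict(sw=0.45, k=20.0,  phi=0.12, Nc=5e-7)]:
    print(table_kr(block["sw"]), ann_kr(block["sw"], block["k"], block["phi"], block["Nc"]))
```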
As another example of the integration of ANN-based proxy models with numerical models, the multi-component, multi-phase simulation of coalbed methane (CBM) reservoirs can be considered. For example, consider multi-purpose CO2 injection into a CBM reservoir for enhanced methane recovery and CO2 sequestration. In compositional modeling of such an implementation, one of the challenges that surfaces is the construction of multi-component Langmuir adsorption isotherms from the individual pure-component adsorption isotherms. The thermodynamics of multi-component isotherms is analogous to vapor-liquid-equilibria calculations, as established by Myers and Prausnitz [14]. However, the use of ideal adsorbed solution (IAS) theory still requires expensive flash calculations. In deploying the IAS theory, again at any time step, in any reservoir block, and at any iteration level, the computationally expensive flash calculations need to be executed to construct and reconstruct the multi-component adsorption isotherms. Accordingly, a relatively simple two-component (CH4 and CO2) ideal adsorbate construction of the multi-component adsorption isotherms can be accomplished with the help of an ANN-based model. Such a proxy model can then construct and update the multi-component adsorption isotherms in an extremely fast manner, as requested by the transport equations. Obviously, implementation of the VLE computations at the reservoir block level with the help of the same proxy model will accelerate the overall solution even further.
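For orientation, the sketch below constructs a binary CH4/CO2 isotherm from pure-component Langmuir parameters using the extended Langmuir model, a simpler surrogate for the IAS construction that such a proxy would be trained to reproduce; the parameter values are illustrative, not field data:

```python
# Illustrative pure-component Langmuir parameters (not field-specific values).
V_L = {"CH4": 350.0, "CO2": 700.0}      # Langmuir volumes, scf/ton
p_L = {"CH4": 600.0, "CO2": 300.0}      # Langmuir pressures, psia

def extended_langmuir(p_total, y):
    """Binary adsorption from pure isotherms via the extended Langmuir model:
    q_i = V_L,i * (p_i / p_L,i) / (1 + sum_j p_j / p_L,j)."""
    comps = ("CH4", "CO2")
    partial = {c: y[c] * p_total for c in comps}
    denom = 1.0 + sum(partial[c] / p_L[c] for c in comps)
    return {c: V_L[c] * (partial[c] / p_L[c]) / denom for c in comps}

# Example: adsorbed volumes as CO2 displaces CH4 at 1000 psia reservoir pressure.
for y_co2 in (0.0, 0.25, 0.5, 0.75):
    q = extended_langmuir(1000.0, {"CH4": 1 - y_co2, "CO2": y_co2})
    print(f"y_CO2={y_co2:4.2f}  q_CH4={q['CH4']:6.1f}  q_CO2={q['CO2']:6.1f} scf/ton")
```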

8. Concluding Remarks

The overall goal of the ideas presented in this paper is to achieve field scale optimization in reservoir engineering applications. In order to achieve this goal, it will be necessary to integrate every piece of information that is available from the field/reservoir characteristics to field production strategies, from implementing competing completion strategies to artificial lift techniques, from implementation of enhanced recovery techniques to infill drilling strategies in a continuous and seamless manner. Consideration of such an integrated approach will require the deployment of several automated modules that are connected to each other in parallel or in series and exploring the existing synergy that can be found in several different facets of the process as shown in Figure 18.
The integrated model shown in Figure 18 will have its boundary conditions at the reservoir boundaries, surface facilities, and the interfaces between reservoir, wellbore, and surface facilities.
The available manpower in most of our daily operations often constrains us from making comprehensive, multi-dimensional analyses to find the optimum solution. Every day, field operations across the industry generate mountains of bytes of new information. This information may be recorded, but unfortunately it is typically not utilized extensively. Use of the protocols proposed in this paper should enable engineers and scientists, on the one hand, to maximize their contributions by not spending time on issues that are peripheral to the problem and, on the other hand, to aid them with dependable tools whose functionalities consider every aspect of the problem effectively in a decision-making process. In this article, some examples of how artificial expert systems can effectively be utilized in addressing some long-standing reservoir engineering problems are discussed. It is hoped that, by virtue of artificial intelligence techniques, one will be able to assemble expert tools that will enable reservoir engineers to implement better controls on decision systems, optimization studies, and information management, and to make smart inferences concerning operations.
A technological discontinuity might be defined as a “breakthrough innovation” that advances the technological state-of-the-art that characterizes an industry by an order of magnitude [15]. Technological discontinuities are based on new technologies whose technical limits are inherently greater than those of the previously dominant technology along economically relevant dimensions of merit [16]. With this understanding, we can characterize a technological discontinuity as the introduction of a disruptive know-how that sweeps away the system of habits it replaces because it has attributes that are recognizably superior. We should also acknowledge that, in developing and assembling tools as described in this paper, there will be challenges to examine. The activation energy barrier to implementing expert-system-based solutions is typically not the cost, the lack of benefits, or the technical risk, but other issues such as resistance to change. It is recognized that new technologies may require changes to established workflows that practitioners may resist; however, it is safe to assume that practitioners are not resisting specific technologies but rather the creation of a discontinuity in technology adaptation. It should further be recognized that it is this very same discontinuity (disruption) that sets the betterment of our understanding and thinking on a new trajectory in technology adaptation. Therefore, it is naturally expected that innovative protocols like the ones described in this paper will continually find immediate applications.
As a final point, it will be vital to remind the readers of this forum of the importance of “Pasteur’s Quadrant”, as shown in Figure 19.
Donald Stokes, in his 1997 book entitled Pasteur’s Quadrant: Basic Science and Technological Innovation [17], states that “he was struck by how often a gifted scientist/engineer would talk about the goals of research—especially the relationship between the quest of fundamental understanding on the one hand and consideration of use on the other—in a way that seemed odd. Odd and unhelpful, since the preceptors’ view of this relationship and of the relationship between the categories of basic and applied research derived from these goals kept them from seeing things they needed to see”. Figure 19, adapted from Stokes, schematically shows Pasteur’s Quadrant’s invitation to scholars and scientists at the forefront of their academic and industrial pursuits to move in and pursue a fundamental understanding of phenomena with the goal of tackling critically important real-world problems. Therefore, it is naturally expected that innovative protocols in reservoir modeling similar to the ones described in this paper will continually evolve and find immediate applications, with the overall goal of responding to the calls of Pasteur’s Quadrant. In other words, applying the Pasteur’s Quadrant concept to the available reservoir modeling options inspires similar thought-development processes in reservoir model development activities, as shown in Figure 20.

Funding

No external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article and in the theses listed in Appendix A.

Acknowledgments

The ideas presented in this paper were developed and implemented by the author of this paper together with close to 80 graduate students between 1993 and 2020 at Penn State University’s Petroleum and Natural Gas Engineering program. The author of the paper would like to express his most genuine gratitude to these students who have worked diligently and exhibited their lateral thinking abilities and robust work ethics while working on their M.S. thesis and Ph.D. dissertation projects (for the names of the researchers and their thesis titles for future references, please see Appendix A). The author would also like to thank Penn State University for providing the necessary computational facilities over the course of many years to support the research efforts described in this paper.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

  • Doraisamy, H.: “Methods of Neuro-Simulation for Field Development”, M.S. Thesis, The Pennsylvania State University (05/1998).
  • Centilmen, A.: “Applications of Neural-Networks in Multi-Well Field Development”, M.S. Thesis, The Pennsylvania State University (08/1999).
  • Guler, B.: “Development of a Water/Oil Relative Permeability Predictor Using Artificial Neural Networks”, M.S. Thesis, The Pennsylvania State University (08/1999).
  • Dakshindas, S. S.: “Virtual Well Testing”, M.S. Thesis, The Pennsylvania State University (08/1999).
  • Bhat, M.: “Characterization of Sealing Faults from Pressure Transient Data: An Artificial Neural Network Approach”, M.S. Thesis, The Pennsylvania State University (12/2001).
  • Silpngarmlers, N.: “Development of Generalized Two-Phase (Oil/Gas) and Three-Phase Relative Permeability Predictors Using Artificial Neural Networks”, Ph.D. Thesis, The Pennsylvania State University (08/2002).
  • Aydinoglu, G.: “Characterization of Partially Sealing Faults from Pressure Transient Data: An Artificial Neural Network Approach”, M.S. Thesis, The Pennsylvania State University (08/2002).
  • Dong, X.: “Characterization of Coalbed Methane Reservoirs from Pressure Transient Data: An Artificial Neural Network Approach”, M.S. Thesis, The Pennsylvania State University (08/2003).
  • Al-Ajmi, M.: “The Development of an Artificial Neural Network as a Pressure Transient Analysis Tool for Applications in Double-Porosity Reservoirs”, M.S. Thesis, The Pennsylvania State University (12/2003).
  • Gammiero, A. J.: “An Artificial Neural Network Based Screening Model for CO2 Flooding Recovery Predictions”, M.S. Thesis, The Pennsylvania State University (08/2004).
  • Khattirat, K.: “The Development of an Artificial Neural Network as a Pressure Transient Analysis Tool for Application in Hydraulically Fractured Reservoirs”, M.S. Thesis, The Pennsylvania State University (12/2004).
  • Gokcesu, U.: “Generic Field Development Schemes Using Virtual Intelligence Based Protocols”, M.S. Thesis, The Pennsylvania State University (12/2005).
  • Henry-Chow, K.O.: “An Artificial Neural Network Screening Model for Waterflooding Recovery Predictions”, M.S. Thesis, The Pennsylvania State University (12/2005).
  • Moreno, M.: “Estimation of Vertical Permeability from Pressure Transient Data in Partially Penetrated Reservoirs: An Artificial Neural Network Approach”, M.S. Thesis, The Pennsylvania State University (12/2005).
  • Tarman, M.: “Development of an Artificial Neural Network as a Pressure Transient Analysis Tool for Multi-Layered Reservoirs with Cross Flow”, M.S. Thesis, The Pennsylvania State University (12/2005).
  • Ramgulam, A.: “Utilization of Artificial Neural Networks in the Optimization of History Matching”, M.S. Thesis, The Pennsylvania State University (05/2006).
  • Senturk, D.R.: “Steamflood Recovery Prediction by the Application of Neural Networks”, M.S. Thesis, The Pennsylvania State University (08/2006).
  • Thararoop, P.: “A Neural Network Approach to Predict Well Performance in Conjunction with Infill Drilling Strategies”, M.S. Thesis, The Pennsylvania State University (05/2007).
  • Minakowski, C. H. P.: “An Artificial Neural Network Based Tool-Box for Screening and Designing Improved Oil Recovery Methods”, Ph.D. Thesis, The Pennsylvania State University (05/2008).
  • AlAbbad, M. A.: “Use of Artificial Intelligence in Predicting Capillary Pressure Characteristics of Saudi Arabian Oil Fields”, M.S. Thesis, The Pennsylvania State University (05/2008).
  • Seren, D.: “Prediction of Flowing Frictional Pressure Drop in Deviated Gas Condensate Wells through Utilization of Artificial Neural Networks”, M.S. Thesis, The Pennsylvania State University (05/2008).
  • Artun, F. E.: “Optimized Design of Cyclic Pressure Pulsing in Naturally Fractured Reservoirs Using Neural-Network Based Proxies”, Ph.D. Thesis, The Pennsylvania State University (08/2008).
  • Srinivasan, K.: “Development and Testing of an Expert System for Coalbed Methane Reservoirs Using Artificial Neural Networks”, M.S. Thesis, The Pennsylvania State University (08/2008).
  • Chidambaram, P. “Development and Testing of an Artificial Neural Network Based History Matching Protocol to Characterize Reservoir Properties”, Ph.D. Thesis, The Pennsylvania State University (05/2009).
  • Bansal, Y.: “Conducting In-Situ Combustion Tube Experiments Using Artificial Neural Networks”, M.S. Thesis, The Pennsylvania State University (05/2009).
  • Ma, J.: “Design of an Effective Water Alternating Gas (WAG) Injection Process Using Artificial Expert Systems”, M.S. Thesis, The Pennsylvania State University (05/2010).
  • Kulga, I. B.: “Development of an Artificial Neural Network for Hydraulically Fractured Horizontal Wells in Tight Gas Sands”, M.S. Thesis, The Pennsylvania State University (05/2010).
  • Gorucu, S. E.: “Optimization of the Design of Transverse Hydraulic Fractures in Horizontal Wells Placed in Dual-Porosity Tight Gas Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2010).
  • Shihab, R.: “Development and Testing of an Expert System Using Artificial Neural Networks for the Forward In-Situ Combustion Process”, M.S. Thesis, The Pennsylvania State University (08/2011).
  • Siripatrachai, N.: “Alternative Gridding Schemes for Modeling of Multi-Stage Hydraulically Fractured Horizontal Wells Completed in Shale Gas Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2011).
  • Sharma, S.: “Development of an Artificial Expert System for Estimating the Rate of Growth of Gas Cone”, M.S. Thesis, The Pennsylvania State University (08/2011).
  • Alrumah, M.: “A Study on the Analysis of the Formation of High-Water Saturation Zones Around Well Perforations”, Ph.D. Thesis, The Pennsylvania State University (12/2011).
  • Bansal, Y.: “Forecasting the Production Performance of Wells Located in Tight Oil Plays Using Artificial Expert Systems”, Ph.D. Thesis, The Pennsylvania State University (12/2011).
  • Chintalapati, S. P. B.: “Evaluation of Performance of Cyclic Steam Injection in Naturally Fractured Reservoirs — an Artificial Neural Network Application”, M.S. Thesis, The Pennsylvania State University (12/2011).
  • Nejad, A. M.: “Development of Expert Reservoir Characterization Tools for Unconventional Oil Reservoirs”, Ph.D. Thesis, The Pennsylvania State University (05/2012).
  • Bodipat, K.: “Numerical Model Representation of Multi-stage Hydraulically Fractured Horizontal Wells Located in Shale Gas Reservoirs Using Neural Networks”, M.S. Thesis, The Pennsylvania State University (05/2012).
  • Enab, K.: “Artificial Neural Network Based Design Tool for Dual Lateral Well Applications”, M.S. Thesis, The Pennsylvania State University (08/2012).
  • AlAbbad, M. A.: “Well Testing Using Artificial Expert Systems: Applications and Limitations”, Ph.D. Thesis, The Pennsylvania State University (08/2012).
  • Toktabolat, Z.: “Characterization of Sealing and Partially Communicating Faults in Dual-Porosity Gas Reservoirs Using Artificial Neural Networks”, M.S. Thesis, The Pennsylvania State University (08/2012).
  • Rajput, V. H.: “A Production Performance and Design Tool for Coalbed Methane Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2012).
  • Wang, H.: “Production Performance Analysis of Multi-Stage Hydraulic Fracture Designs in Tight Sands”, M.S. Thesis, The Pennsylvania State University (08/2012).
  • Cengiz, U.: “Development and Testing of an Artificial Expert System to Design Perforation Parameters”, M.S. Thesis, The Pennsylvania State University (08/2012).
  • Hua, L.: “Development of an Expert System to Identify Phase Equilibria and Enhanced Oil Recovery Characteristics of Crude Oils”, M.S. Thesis, The Pennsylvania State University (08/2012).
  • Sun, Q.: “Engineering Design Considerations to maximize Carbon Dioxide Injectivity in Deep Saline Formations”, M.S. Thesis, The Pennsylvania State University (05/2013).
  • Almousa, T. S.: “Development and Utilization of Integrated Artificial Expert Systems for Designing Multi-Lateral Well Configurations, Estimating Reservoir Properties and Forecasting Reservoir Performance”, Ph.D. Thesis, The Pennsylvania State University (08/2013).
  • Zhou, Q.: “Development and Application of an Artificial Expert System for the Pressure Transient Analysis of Dual Lateral Well Configurations”, M.S. Thesis, The Pennsylvania State University (08/2013).
  • Sengel, A.: “Development of Artificial Neural Networks for Steam Assisted Gravity Drainage (SAGD) Recovery Method”, M.S. Thesis, The Pennsylvania State University (08/2013).
  • Kistak, N.: “Development of an Artificial Neural Network for Dual Lateral Horizontal Wells in Gas Reservoirs”, M.S. Thesis, The Pennsylvania State University (12/2013).
  • Arpaci, B.: “Development of an Artificial Neural Network for Cyclic Steam Stimulation Method in Naturally Fractured Reservoirs”, M.S. Thesis, The Pennsylvania State University (05/2014).
  • Oz, S.: “Development of Artificial Neural Networks for Hydraulically Fractured Horizontal Wells in Faulted Shale Gas Reservoirs”, M.S. Thesis, The Pennsylvania State University (05/2014).
  • Cox, J.: “Development of an Artificial Neural Network as a Pressure Transient Analysis Tool with Application in Multi-Lateral Wells in Tight Gas, Dual Porosity Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2014).
  • Khamseen, B. N.: “Applications of Artificial Expert Systems in the Analysis of Unexpected Spatial and Temporal Changes in Reservoir Production Behavior”, Ph.D. Thesis, The Pennsylvania State University (08/2014).
  • Bukhari, A.: “Optimizing Corporate Decisions for Dominant Hydrocarbon Producers under Uncertainty”, Ph.D. Thesis, The Pennsylvania State University (12/2014).
  • Enyioha, C.: “An Investigation of the Efficacy of Advanced Well Structures in Unconventional Multi-Phase Reservoirs”, Ph.D. Thesis, The Pennsylvania State University (05/2015).
  • M-Amin, J.: “Development of an Artificial Neural Network Based Expert System for Rate Transient Analysis Tool in Multilayered Reservoirs with or without Crossflow”, M.S. Thesis, The Pennsylvania State University (05/2015).
  • Lu, J.: “Rate Transient Analysis of Dual-Lateral Wells in Naturally Fractured Reservoirs Using Artificial Intelligence Technologies”, M.S. Thesis, The Pennsylvania State University (05/2015).
  • Bae, C. E.: “Prediction of Water-Cone Formation in a Naturally fractured Reservoir with an Aquifer Drive: An Artificial Expert System Application”, M.S. Thesis, The Pennsylvania State University (08/2015).
  • Zhang, Yi.: “An Optimization Protocol Applicable to Pattern-Based Field Development Studies”, M.S. Thesis, The Pennsylvania State University (08/2015).
  • Alqahtani, M.: “Shale Gas Reservoir Development Strategies via Advanced Well Architectures”, Ph.D. Thesis, The Pennsylvania State University (08/2015).
  • Ozdemir, I.: “Synthetic Well Log Generation for Complex Well Architectures using Artificial Intelligence Based Tools”, M.S. Thesis, The Pennsylvania State University (08/2015).
  • Al-Ghazal, M.: “Development and Testing of Artificial Neural Network Based Models for Water Flooding and Polymer Gel Flooding in Naturally Fractured Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2015).
  • Ketineni, S.: “Structuring an Integrative Approach for Field Development Planning Using Artificial Intelligence and its Application to Tombua Landana Asset in Angola”, Ph.D. Thesis, The Pennsylvania State University (12/2015).
  • Hamam, H.: “Continuous CO2 Injection Design in Naturally Fractured Reservoirs Using Neural Network Based Proxy Models”, Ph.D. Thesis, The Pennsylvania State University (08/2016).
  • Lai, I.: “Development of an Artificial Neural Network Model for Designing Waterflooding Projects in Three-Phase Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2016).
  • Alquisom, M.: “Development of an Artificial Neural Network Based Expert System to Determine the Location of Horizontal Well in a Three-Phase Reservoir with Simultaneous Gas Cap and Bottom Water Drive”, M.S. Thesis, The Pennsylvania State University (08/2016).
  • Ersahin, A.: “An Artificial Neural Network Approach for Evaluating the Performance of Cyclic Steam Injection in Naturally Fractured Heavy Oil Reservoirs”, M.S. Thesis, The Pennsylvania State University (12/2016).
  • Zhang, Z.: “Predicting Petrophysical Properties from Rate-Transient Data: An Artificial Intelligence Application”, Ph.D. Thesis, The Pennsylvania State University (08/2017).
  • Yavuz, M.Z.: “An Artificial Neural Network Implementation for Evaluating the Performance of Cyclic CO2 Injection in Naturally Fractured Black Oil Reservoirs”, M.S. Thesis, The Pennsylvania State University (08/2017).
  • Shang, B.: “Design of Brine Disposal Wells in Depleted Gas Reservoirs via Artificial Neural Network Models”, M.S. Thesis, The Pennsylvania State University (08/2017).
  • Putcha, V.B.S.: “Integration of Numerical and Machine Learning Protocols for Coupled Reservoir-Wellbore Models, A Study for Gas Lift Optimization”, Ph.D. Thesis, The Pennsylvania State University (08/2017).
  • Zhang, Y.: “Characterization of Tight Gas Reservoirs with Stimulated Reservoir Volume: An Artificial Intelligence Application”, M.S. Thesis, The Pennsylvania State University (08/2017).
  • Affane, C. R. N.: “Development of Artificial Neural Networks Applicable to Single-Phase Unconventional Gas Reservoirs with Slanted Wells”, M.S. Thesis, The Pennsylvania State University (08/2017).
  • Zhang, J.: “Development of Automated Neuro-Simulation Protocols for Pressure and Rate Transient Analysis Applications”, Ph.D. Thesis, The Pennsylvania State University (08/2017).
  • Sun, Q.: “Artificial-Neural-Network Based Toolbox for Screening and Optimization of Enhanced Oil Recovery Projects”, Ph.D. Thesis, The Pennsylvania State University (09/2017).
  • Rana, S.: “Development of an Assisted History Matching Tool Using Gaussian Process Based Proxy Models and Variogram Based Sensitivity Analysis”, Ph.D. Thesis, The Pennsylvania State University (11/2017).
  • Da, L.: “Screening and Design Criteria for Slanted Wells”, M.S. Thesis, The Pennsylvania State University (12/2017).
  • Enab, K.: “Artificial Neural Network Based Design Protocol for WAG Implementation with CO2 Injection Using Fishbone Wells in Low Permeability Oil Reservoirs”, Ph.D. Thesis, The Pennsylvania State University (12/2017).
  • Zhong, X.: “Pressure Transient Analysis of Shale Gas Reservoirs with Horizontal Boreholes: An Artificial Intelligence Based Approach”, M.S. Thesis, The Pennsylvania State University (08/2018).
  • Abdullah, M.B.: “Development and Application of an Artificial-Neural-Network-Based Analysis and Design Tool for Chemical Enhanced Oil Recovery: (Alkaline-Surfactant-Polymer) Field Implementations”, M.S. Thesis, The Pennsylvania State University (05/2019).

References

  1. Simon, H.A. The New Science of Management Decision; Prentice-Hall: Hoboken, NJ, USA, 1977.
  2. Frensch, P.A.; Funke, J. Complex Problem Solving: The European Perspective; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1995.
  3. Howard, R.A. Decision Analysis: Practice and Promise. Manag. Sci. 1988, 34, 679–695.
  4. Neal, L.; Spetzler, C. An Organization-Wide Approach to Good Decision Making. Harvard Business Review, 27 May 2015.
  5. Ertekin, T.; Ayala, L. Reservoir Engineering Models: Analytical and Numerical Approaches; McGraw-Hill Education: New York, NY, USA, 2018.
  6. Ertekin, T.; Sun, Q. Artificial Intelligence Applications in Reservoir Engineering: A Status Check. Energies 2019, 12, 2897.
  7. Sun, Q.; Ertekin, T. The Development of Artificial Neural Network Based Universal Proxies to Study Steam Assisted Gravity Drainage (SAGD) and Cyclic Steam Injection Processes. In Proceedings of the SPE Western Regional Meeting, Garden Grove, CA, USA, 27–30 April 2015.
  8. Alvarez, J.; Han, S. Current Overview of Cyclic Steam Injection Process. J. Pet. Sci. Res. 2013, 2, 116–127.
  9. Aydinoglu, G.; Bhat, G.M.; Ertekin, T. Characterization of Partially Sealing Faults from Pressure Transient Data: An Artificial Neural Network Approach. SPE Paper No. 78715. In Proceedings of the 2002 SPE Eastern Regional Meeting, Lexington, KY, USA, 23–25 October 2002.
  10. Ozdemir, I.; Sun, Q.; Ertekin, T. Structuring an Integrated Reservoir Characterization and Field Development Protocol Utilizing Artificial Intelligence. In Proceedings of the 26th ITU Petroleum and Natural Gas Symposium and Exhibition, Istanbul, Turkey, 20–21 June 2016.
  11. Putcha, V.B.; Ertekin, T. A Fast and Robust Compositional, Multi-Phase, Non-Isothermal Wellbore Hydraulics Model for Vertical Wells. SPE 187072. In Proceedings of the SPE Annual Technical Conference, San Antonio, TX, USA, 9–11 October 2017.
  12. Hasan, A.R.; Kabir, C.S. Two-phase flow in vertical and inclined annuli. Int. J. Multiph. Flow 1992, 18, 279–293.
  13. Ansari, A.M.; Sylvester, N.D.; Sarica, C.; Shoham, O.; Brill, J.P. A Comprehensive Mechanistic Model for Upward Two-Phase Flow in Wellbores. SPE Prod. Facil. J. 1994, 9, 143–151.
  14. Myers, A.L.; Prausnitz, J.M. Thermodynamics of mixed-gas adsorption. AIChE J. 1965, 11, 121–127.
  15. Schumpeter, J. Capitalism, Socialism and Democracy; Harper & Brothers: New York, NY, USA; London, UK, 1942.
  16. Anderson, P.; Tushman, M.L. Technological Discontinuities and Dominant Designs: A Cyclical Model of Technological Change. Adm. Sci. Q. 1990, 35, 609–633.
  17. Stokes, D.E. Pasteur’s Quadrant: Basic Science and Technological Innovation; Brookings Institution Press: Washington, DC, USA, 1997.

Short Biography of Author

Turgay Ertekin is a Professor Emeritus of Petroleum and Natural Gas Engineering at Penn State University. He received his B.Sc. and M.Sc. degrees from the Middle East Technical University in Ankara, Turkey, and his Ph.D. from Pennsylvania State University in the USA. Prior to his retirement, he held research assistant, faculty, and administrative positions at Penn State over a period of four decades. His research activities and publications revolve around fluid flow dynamics in porous media, well test analysis, numerical reservoir simulation, and artificial intelligence applications in reservoir engineering. He is an Honorary Member of the Society of Petroleum Engineers, International.
Figure 1. Possible end results of a decision in view of timing of the decision and thoroughness of the decision process.
Figure 2. Due to existing uncertainties in deciding on a choice, a high-quality decision can still result in poor outcome and vice versa.
Figure 3. The overall architecture of a computational platform on which the computational interactions of deterministic and proxy models are accomplished (Notes: VLE—vapor/liquid equilibria; VLLE—vapor/liquid/liquid equilibria; CBM—coalbed methane; EOR—enhanced oil recovery).
Figure 4. Enhanced oil/gas recovery related project process screening and design toolbox.
Figure 5. Pressure transient analysis toolbox.
Figure 6. General reservoir engineering applications toolbox.
Figure 7. Suggested protocol for finding an optimized ANN architecture.
Figure 8. Division of work between the supervisory ANN and sub-ANN models of the CSS process [7].
Figure 9. A blind testing case results from the sub-ANN trained for six cycles [7].
Figure 10. Comparison of the predicted and actual characteristics of a faulted reservoir.
Figure 11. Reservoir boundaries and existing well locations of the reservoir in Wilcox formation.
Figure 12. Curve fitting scheme applied to production histories.
Figure 13. Schematic representation of the workflow for the first tool.
Figure 14. Schematic representation of the workflow for the second tool.
Figure 15. (a) Average quality case: average error in predicting curve fitting parameters is ~16.66%. (b) Best quality case: average error in predicting curve fitting parameters is ~9.86%. (c) Worst quality case: average error in predicting curve fitting parameters is ~18.59%.
Figure 16. The sweet spots and permeability channels identified as a result of this study. The third insert in the bottom shows the wells that were put on production during the initial development stage of the field.
Figure 17. Architecture of the ANN-based wellbore hydraulics model.
Figure 18. A fully integrated “reservoir—wellbore—surface” hybrid model.
Figure 19. Importance of Pasteur’s Quadrant.
Figure 20. Choices in reservoir model development.
Table 1. Data categories and their corresponding 1st and 2nd tier components [6].

Data Category | 1st Tier Components | 2nd Tier Components
Reservoir characteristics (intrinsic) | Geophysical data | Seismic surveys; well logs
Reservoir characteristics (intrinsic) | Petrophysical data | Permeability distribution; porosity distribution; net pay thickness; formation depth; reservoir pressure; reservoir temperature; fluid contact
Reservoir characteristics (intrinsic) | Fluid properties | Fluid composition; PVT data
Reservoir characteristics (intrinsic) | Rock/fluid interaction characteristics | Relative permeability data; capillary pressure data
Project design parameters (extrinsic) | Field development data | Well specifications; well architecture; well pattern; well spacing; process (EOR) project design parameters
Field response functions | Well data | Rate (production/injection) and pressure data
Field response functions | Project economics | -
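As an illustration of how the two-tier taxonomy of Table 1 can be carried into software when assembling training records for proxy models, the sketch below encodes the categories and their components as a nested dictionary and flattens them into a checklist. This is a minimal sketch; the structure and all identifiers are illustrative assumptions, not an interface defined in the paper.

```python
# A minimal sketch (not from the paper): the two-tier data taxonomy of Table 1
# encoded as a nested dictionary, with a helper that flattens it into a
# checklist of required inputs. All identifiers are illustrative assumptions.

RESERVOIR_DATA_TAXONOMY = {
    "reservoir characteristics (intrinsic)": {
        "geophysical data": ["seismic surveys", "well logs"],
        "petrophysical data": [
            "permeability distribution", "porosity distribution",
            "net pay thickness", "formation depth", "reservoir pressure",
            "reservoir temperature", "fluid contact",
        ],
        "fluid properties": ["fluid composition", "PVT data"],
        "rock/fluid interaction characteristics": [
            "relative permeability data", "capillary pressure data",
        ],
    },
    "project design parameters (extrinsic)": {
        "field development data": [
            "well specifications", "well architecture",
            "well pattern", "well spacing",
            "process (EOR) project design parameters",
        ],
    },
    "field response functions": {
        "well data": ["rate (production/injection)", "pressure data"],
        "project economics": [],
    },
}

def flatten(taxonomy):
    """Yield (category, component, item) triples for building a data checklist."""
    for category, components in taxonomy.items():
        for component, items in components.items():
            for item in items:
                yield category, component, item

if __name__ == "__main__":
    for category, component, item in flatten(RESERVOIR_DATA_TAXONOMY):
        print(f"{category} / {component} / {item}")
```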
Table 2. Ranges of variables placed on the input (A) and output (B) layers [9].

(A) Reservoir Variables Placed on the Input Layer
Variable | Minimum Value | Maximum Value | Units
Fluid viscosity | 1 | 50 | cp
Fluid compressibility | 0.000001 | 0.0001 | psi−1
Initial reservoir pressure | 500 | 8000 | psi
Flow rate | 20 | 2000 | STB/D
Net pay thickness | 10 | 200 | ft

(B) Reservoir Variables Placed on the Output Layer
Variable | Minimum Value | Maximum Value | Units
Permeability (kx and ky) | 2 | 1300 | md
Porosity | 10% | 50% | -
Distance to the fault plane | 98 | 1060 | ft
Fault leaking capacity | 0% | 38% | -
Fault orientation w.r.t. principal flow directions | 10 | 90 | degrees
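The paper does not spell out here how these variables are preprocessed before training; a common practice with ANN proxies is min-max scaling against the training ranges. The sketch below illustrates that idea under that assumption, using the input-layer ranges of Table 2(A); the dictionary keys and function names are illustrative, not from the source.

```python
import numpy as np

# A minimal sketch, assuming standard min-max scaling (the exact preprocessing
# used in [9] is not stated here): map each input-layer variable of Table 2(A)
# onto [0, 1] using its tabulated range before presenting it to the network,
# and invert the mapping for quantities predicted on the output layer.

INPUT_RANGES = {                          # (min, max) taken from Table 2(A)
    "fluid_viscosity_cp":          (1.0, 50.0),
    "fluid_compressibility_psi-1": (1.0e-6, 1.0e-4),
    "initial_pressure_psi":        (500.0, 8000.0),
    "flow_rate_stb_d":             (20.0, 2000.0),
    "net_pay_thickness_ft":        (10.0, 200.0),
}

def scale(record, ranges):
    """Normalize a dict of raw inputs to [0, 1] using the training ranges."""
    return np.array([(record[name] - lo) / (hi - lo)
                     for name, (lo, hi) in ranges.items()])

def unscale(name, value, ranges):
    """Map a normalized network output back to engineering units."""
    lo, hi = ranges[name]
    return lo + value * (hi - lo)

example = {"fluid_viscosity_cp": 3.0, "fluid_compressibility_psi-1": 1.0e-5,
           "initial_pressure_psi": 4500.0, "flow_rate_stb_d": 750.0,
           "net_pay_thickness_ft": 85.0}
print(scale(example, INPUT_RANGES))                        # feature vector for the input layer
print(unscale("initial_pressure_psi", 0.5, INPUT_RANGES))  # 4250.0 psi
```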
Table 3. Availability of well logs and historical production data.

Data Set | Number of Wells
GAMMA RAY | 18
INDUCTION RESISTIVITY | 35
INDUCTION CONDUCTIVITY | 29
SHORT NORMAL RESISTIVITY | 24
SPONTANEOUS POTENTIAL | 27
HISTORICAL PRODUCTION | 25
Table 4. Input parameters used in the construction of the ANN-based wellbore model.

Inputs | Units | Minimum | Maximum

Well parameters:
Wellhead pressure | psia | 100 | 2000
Tubing diameter | inches | 1 | 4
Pipe roughness | ft | 1.00 × 10−4 | 2.00 × 10−3
Total depth | ft | 200 | 16,000
Fractional depth | fraction | 0 | 1
Temperature at depth | °F | 60 | 340
Wellhead temperature | °F | 60 | 180
Temperature gradient | °F/ft | 0.005 | 0.02

Reservoir fluid properties:
Molar feed into well | lb-moles/s | 0.00125 | 0.5
Initial water mole fraction | fraction | 0 | 1
Water cut | percentage | 0 | 98.7
Oil viscosity | lb-moles/ft-s | 1.7 × 10−5 | 1.5 × 10−3
Gas viscosity | lb-moles/ft-s | 3.4 × 10−6 | 9.9 × 10−5
Water viscosity | lb-moles/ft-s | 4.7 × 10−4 | 8.1 × 10−4
Oil specific gravity | fraction | 0.49 | 0.94
Gas specific gravity | fraction | 0.47 | 0.93
Water specific gravity | fraction | 1 | 1.05
Oil flow rate | STB/D | 10 | 32,767
Gas flow rate | MMSCF/D | 0 | 15.6
Water flow rate | STB/D | 0 | 2180
Gas-oil ratio | SCF/STB | 0 | 168,068
Gas-liquid ratio | SCF/STB | 0 | 94,868

Composition of feed:
Mole fraction of C1 | fraction | 0.20 | 0.94
Mole fraction of C2 | fraction | 2.4 × 10−5 | 0.59
Mole fraction of C3 | fraction | 1.9 × 10−6 | 0.56
Mole fraction of C4 | fraction | 9.3 × 10−6 | 0.65
Mole fraction of C5 | fraction | 6.5 × 10−7 | 0.54
Mole fraction of C6+ | fraction | 2.9 × 10−6 | 0.62
Mole fraction of C20+ | fraction | 1.7 × 10−5 | 0.63

Thermodynamic properties of C6+ and C20+:
Critical temperature of C6+ | °R | 914 | 1409
Critical temperature of C20+ | °R | 1428 | 1724
Critical pressure of C6+ | psia | 211 | 477
Critical pressure of C20+ | psia | 105 | 203
Accentricity factor of C6+ | unitless | 0.28 | 0.82
Accentricity factor of C20+ | unitless | 0.86 | 1.33
Molecular weight of C6+ | lb/lb-moles | 86 | 275
Molecular weight of C20+ | lb/lb-moles | 291 | 539
Volume shift parameter of C6+ | unitless | −0.059 | 0.142
Volume shift parameter of C20+ | unitless | 0.139 | 0.358
Critical volume of C6+ | cubic ft/lb-mole | 5.5 | 16.5
Critical volume of C20+ | cubic ft/lb-mole | 17.2 | 31.3
Parachor of C6+ | unitless | 250.1 | 710.5
Parachor of C20+ | unitless | 742.2 | 1090.4
Table 5. Validation of the wellbore hydraulics model with the field data [11].

Depth (ft) | ANN (psig) | Numerical (psig) | Field Data (psig) | Rel. Dev. Field vs. ANN (%) | Rel. Dev. Field vs. Numerical (%) | Hasan and Kabir (1992) [12] (psig) | Ansari et al. (1994) [13] (psig) | Rel. Dev. Field vs. Hasan and Kabir (%) | Rel. Dev. Field vs. Ansari (%)
0 | 505 | 505 | 505 | 0.0 | 0.0 | 505 | 505 | 0.0 | 0.0
400 | 582 | 595 | 587 | 0.9 | 1.3 | 593 | 586 | 1.0 | 0.2
650 | 634 | 655 | 647 | 2.0 | 1.2 | 654 | 641 | 1.1 | 0.9
1150 | 753 | 781 | 777 | 3.1 | 0.5 | 781 | 758 | 0.5 | 2.4
1650 | 889 | 917 | 920 | 3.4 | 0.3 | 918 | 885 | 0.2 | 3.8
2150 | 1042 | 1062 | 1074 | 3.0 | 1.1 | 1063 | 1021 | 1.0 | 4.9
2650 | 1208 | 1215 | 1237 | 2.4 | 1.8 | 1212 | 1165 | 2.0 | 5.8
3150 | 1384 | 1373 | 1407 | 1.6 | 2.4 | 1369 | 1316 | 2.7 | 6.5
3650 | 1568 | 1537 | 1582 | 0.9 | 2.8 | 1530 | 1473 | 3.3 | 6.9
4150 | 1756 | 1706 | 1850 | 5.1 | 7.8 | 1695 | 1634 | 8.4 | 11.7
4650 | 1945 | 1878 | 1960 | 0.8 | 4.2 | 1864 | 1799 | 4.9 | 8.2
5151 | 2135 | 2052 | 2105 | 1.4 | 2.5 | 2034 | 1968 | 3.4 | 6.5
Average | | | | 2.2 | 2.4 | | | 2.6 | 5.3
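The deviation columns of Table 5 follow directly from the definition of relative deviation between a model-predicted pressure and the field measurement at the same depth; the reported averages are consistent with averaging over the subsurface points only (the wellhead point, where every model honors the specified wellhead pressure, contributes zero deviation). A minimal sketch of that arithmetic, using the ANN column as an example and values transcribed from the table:

```python
# A minimal sketch of the arithmetic behind the deviation columns of Table 5,
# using the ANN column as an example. Pressures are transcribed from the table.

field = [505, 587, 647, 777, 920, 1074, 1237, 1407, 1582, 1850, 1960, 2105]  # psig
ann   = [505, 582, 634, 753, 889, 1042, 1208, 1384, 1568, 1756, 1945, 2135]  # psig

def relative_deviation_pct(predicted, measured):
    return abs(predicted - measured) / measured * 100.0

deviations = [relative_deviation_pct(p, m) for p, m in zip(ann, field)]

# Reproduces the "field data vs. ANN" column to within the rounding of the
# tabulated pressures, and the ~2.2% average when the wellhead point (depth 0,
# zero deviation by construction) is excluded.
print([round(d, 1) for d in deviations])
print(round(sum(deviations[1:]) / len(deviations[1:]), 1))
```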
Table 6. Computational time comparison between three different gas lift models.

Gas Lift Injection Rate (MSCFD) | Well Abandonment Time (days) | Full Numerical Model 1 (s per day of simulation) | Numerical-ANN Coupled Model 2 (s per day of simulation) | Full ANN-Based Model 3 (s per day of simulation)
100 | 427.29 | 381.9 | 2.8 | 1.98 × 10−4
500 | 497.29 | 501.0 | 2.7 | 1.98 × 10−4
1000 | 547.29 | 469.8 | 2.4 | 1.98 × 10−4
1500 | 527.29 | 476.5 | 2.9 | 1.98 × 10−4
1700 | 527.29 | 484.9 | 2.9 | 1.98 × 10−4
2000 | 517.29 | 469.9 | 2.9 | 1.98 × 10−4
2500 | 517.29 | 441.1 | 2.8 | 1.98 × 10−4
2800 | 537.29 | 413.3 | 2.8 | 1.98 × 10−4
3000 | 527.29 | 465.8 | 3 | 1.98 × 10−4
Average | | 456.0 | 2.8 | 1.98 × 10−4
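To put the timings of Table 6 in perspective, the average figures imply that the numerical-ANN coupled model runs roughly two orders of magnitude faster than the full numerical model, while the fully ANN-based proxy is more than six orders of magnitude faster. A minimal sketch of that arithmetic, with the values taken from the table's average row:

```python
# A minimal sketch of the speedups implied by the average row of Table 6.

t_full_numerical = 456.0     # s per day of simulation, Model 1 (average)
t_coupled        = 2.8       # s per day of simulation, Model 2 (average)
t_full_ann       = 1.98e-4   # s per day of simulation, Model 3 (average)

print(f"Coupled numerical-ANN vs. full numerical: ~{t_full_numerical / t_coupled:,.0f}x")
print(f"Full ANN-based proxy vs. full numerical:  ~{t_full_numerical / t_full_ann:,.0f}x")
```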