Article

Multistage Sample Average Approximation for Harvest Scheduling under Climate Uncertainty

by Martin B. Bagaram 1,2,* and Sándor F. Tóth 1
1 School of Environmental and Forest Sciences, University of Washington, Box 352100, Seattle, WA 98195, USA
2 Industrial and Systems Engineering, University of Washington, Seattle, WA 98195, USA
* Author to whom correspondence should be addressed.
Forests 2020, 11(11), 1230; https://doi.org/10.3390/f11111230
Submission received: 12 October 2020 / Revised: 1 November 2020 / Accepted: 21 November 2020 / Published: 23 November 2020
(This article belongs to the Section Forest Ecology and Management)

Abstract: Forest planners have traditionally used expected growth and yield coefficients to predict future merchantable timber volumes. However, because climate change affects forest growth, planning methods that rely on the expected value of forest growth can lead to sub-optimal harvest decisions. In this paper, we formulate harvest planning under growth uncertainty due to climate change as a multistage stochastic optimization problem and use sample average approximation (SAA) to identify the best set of forest units to harvest in the first period even though knowledge of the future climate is limited. The objective of the harvest planning model is to maximize the expected net present value (NPV), accounting for the uncertainty in forest growth and thus in timber harvest revenues. The proposed model was tested on a small forest with 89 stands, and the numerical results showed that the approach yields solutions that are superior in terms of net present value, and more robust across different growth scenarios, than the approach using expected growth and yield. The SAA method requires generating samples from the distribution of the random parameter. Our results suggest that a sampling scheme that concentrates samples in the distant future stages is preferable to one with large sample sizes in the near future stages. Finally, we demonstrate that, depending on the magnitude of forest growth change, ignoring this uncertainty can negatively affect the sustainability of forest resources.

1. Introduction

Climate change is arguably one of the most challenging issues that contemporary forest planners need to address. In general, forest practitioners need to provide a long-term forest harvest scheduling plan that takes into account the preferences of stakeholders while complying with environmental, regional and ecological restrictions. Forest planning typically aims at scheduling which forest units should receive a specific treatment, such as harvesting or thinning, in each period in order to achieve the management objectives. This planning, also known as harvest scheduling, spans several decades and is subject to many sources of uncertainty due to factors that include natural hazards (fire, windthrow, insects, etc.) and technological limitations such as forest inventory errors, growth prediction errors, and poor foresight of the market price of forest products [1,2]. It is important to incorporate all or some of these uncertainties in forest planning.
The need to incorporate uncertainty in forest harvest scheduling was acknowledged several decades ago [3]. Subsequently, Kooten et al. [4] urged that forest growth uncertainty be considered in planning, asserting that the cost of ignoring such uncertainty at the forest scale can be detrimental. When the uncertainty is ignored, decisions are made assuming that future growth will equal the expected (average) growth. However, the actual growth rarely matches the average, and many harvest decisions made in the early periods of the planning horizon therefore turn out to be sub-optimal. This leads to a failure to meet planning objectives and to satisfy many of the aforementioned restrictions. Owing to the computational challenges that the introduction of uncertainty into harvest modeling involves, Pukkala [5] recommended that each forest harvest scheduling plan be accompanied by an estimate of its uncertainty or reliability. Notwithstanding these exhortations to incorporate uncertainty in forest planning, harvest scheduling models have for a long time ignored uncertainty or were limited to reporting the sensitivity of harvesting plans to changes in one or more input parameters. Only recently has there been a prolific number of papers addressing the integration of uncertainty in harvest planning. The most commonly addressed uncertainties are wood price and wood demand [6,7,8,9,10]. More recently, there has been interest in incorporating climate uncertainty in forest planning. The first study we are aware of that explicitly addresses the issue was conducted by Garcia-Gonzalo et al. [11]. The authors assessed how climate change affected the management decisions of a Eucalyptus forest in Portugal over a planning horizon of 15 years. Similar studies were conducted by Álvarez-Miranda et al. [12] and Garcia-Gonzalo et al. [13] using the same dataset. In addition to optimization methods for incorporating climate change in forest harvest scheduling, some authors have invested in developing decision support systems (DSSs). For instance, Rammer et al. [14] developed a vulnerability assessment toolbox that allows managers to generate management plans while dealing with the vulnerability of the forest to climate change. Similarly, Garcia-Gonzalo et al. [15] developed a DSS that helps forest managers address climate uncertainty by providing strategic forest plans under climate change. However, these DSSs used deterministic approaches, providing management plans for each scenario independently.
In this paper, we model harvest planning as a stochastic optimization problem and use sample average approximation (SAA) to identify the best set of actions one can take to meet the management objectives; in other words, we use SAA to solve harvest planning models. Stochastic optimization is a modeling framework for mathematical models dealing with uncertainty. In multistage stochastic programming, decisions are made sequentially at stages and the uncertainty unfolds over periods. The decision maker must implement a decision at the beginning of the planning horizon (the first-stage decision) without knowing what the value of the uncertain parameter will be. After a period, in which the uncertainty is revealed, the decision maker can take recourse actions at subsequent stages. It is therefore important to make good decisions in the early stages of planning without knowing the magnitude of the uncertainty that will unfold. Specifically, in strategic harvest scheduling, the decision maker needs to prioritize the set of forest units (stands) that should be harvested here and now, during the ongoing decade or period. After that period, the decision maker will have the opportunity to revisit the management decisions in the following periods. In this setting, the goal of harvest scheduling is to decide the set of actions that managers should apply immediately, “here and now" (the first-stage decisions).
SAA is a Monte Carlo simulation-based approach for solving stochastic optimization problems. It consists of repeatedly drawing samples from the distribution of the random parameter and solving the resulting sample-average deterministic optimization problem. Each sample might lead to a different solution. However, if the sample size is large enough, the sample objective function value approximates the true optimal value. Mak et al. [16] showed that, for minimization problems, the expectation of the samples' objective values is a lower bound on the true optimal value of the stochastic optimization problem and that this bound increases monotonically as the sample size increases. SAA has been successfully applied in several fields such as portfolio selection [17], supply chain network design [18,19], facility location [20] and personnel assignment [21]. Notwithstanding all these applications, and the fact that SAA is well suited for problems where the objective function is difficult to evaluate, such as harvest planning under climate uncertainty [17], to the best of our knowledge SAA has never been applied in the forestry domain. In addition, unlike stochastic programming, which requires knowledge of the probability distribution of the samples, SAA, known as a data-driven optimization method, considers the samples to be equiprobable. The idea is that if the sample size is large enough, then the sample statistics represent those of the actual population the sample is drawn from; hence, there is no need to know the actual distribution of the uncertainty. This is particularly important for harvest scheduling under climate uncertainty because climate change is forecast as possible futures without any probabilities attached. Moreover, although the method performs well for two-stage stochastic problems [22], it is unclear how it performs on a multistage harvest scheduling problem.
The objective of this paper is to fill the gap in the literature on SAA and multistage stochastic harvest scheduling models. For multistage stochastic optimization problems, each sample of the uncertainty is known as a scenario, and the number of scenarios to consider grows exponentially with the number of stages. Forest harvest scheduling models are typically characterized by long planning horizons with many stages. The contributions of this paper are as follows:
  • Introduce a method to handle climate change uncertainty in forest harvest scheduling that supports sound decisions in the early stages of planning. Poor decisions in early stages are significantly more detrimental to many businesses than decisions in later stages; hence, later-stage decisions can be considered secondary.
  • Propose a method to generate the set of scenarios that reduces the sample size and keeps the optimization model tractable. If we generate possible scenarios of forest growth in an i.i.d. (independent and identically distributed) fashion, the sample size needed before the SAA solution converges to optimality might be very large, which can make the problem computationally intractable. We propose a sampling scheme that concentrates replications in the distant stages.
We will show that stochastic harvest scheduling has an advantage over the expected value approach (the deterministic model presented in Appendix A) because it allows the decision maker to implement policies that perform well across all foreseen forest growth changes, by providing the management decisions that would be implemented both when the uncertainty is considered and when it is not. The rest of this document is structured as follows. In Section 2, we present the materials and methods used in this research, including the SAA method and the scenario generation scheme. Section 3 presents our findings. Finally, in Section 4, we discuss the results and outline future research.

2. Materials and Methods

2.1. Problem Description and Formulation

We present in this section the multistage stochastic harvest scheduling problem with forest growth uncertainty due to climate change. Let $t \in T = \{1, \ldots, T\}$ be the set of stages in which forest units are eligible for harvest in the future. We reserve $t = 0$ for the time “now” (the first-stage decision), where there is no uncertainty about forest growth. We define $\xi := \{\xi_1, \ldots, \xi_T\}$ to be the random vector characterizing forest growth change due to climate change; hence, $\xi_t$ is the random parameter of $\xi$ at time $t$. Each $\xi_t$ has a support $\Xi_t$, which is the range of values the random parameter $\xi_t$ can take. The meaning of the parameters, variables and sets is given in Table 1.
The objective of the decision maker is to maximize the expected net present value (NPV) from timber harvest. This can be formulated in a generic form as (1) and (2):
$$\max \ \sum_{s \in S} r_s x_s + \mathbb{E}\left[ R(x, \xi) \right] \quad (1)$$
subject to
$$x \in \{0, 1\}^{|S|}, \quad (2)$$
where the expectation is taken with respect to $\xi$, and $R(x, \xi_i)$ is the optimal value of the harvest schedule given the first-stage harvest ($x$) and the occurrence of one forest growth scenario $\xi_i$. In this formulation, the decision variables are only the first-stage variables, which means that the decision maker mainly cares about how to select the stands to harvest in the first period so that the expected NPV is maximized. The value of $R(x, \xi_i)$ is obtained by solving the following optimization problem:
$$\max \ \sum_{s \in S} \left[ \sum_{t \in T} r_{st}^{\xi_i} y_{st}^{\xi_i} + r_{s0}^{\xi_i} w_s^{\xi_i} \right] \quad (3)$$
subject to
$$x_s + w_s^{\xi_i} + \sum_{t=1}^{T} y_{st}^{\xi_i} = 1 \quad s = 1, \ldots, |S| \quad (4)$$
$$\sum_{s} v_s x_s = H_0 \quad (5)$$
$$\sum_{s} v_{st}^{\xi_i} y_{st}^{\xi_i} = H_t^i \quad t = 1, \ldots, T \quad (6)$$
$$H_t^i \geq \alpha H_{t-1}^i \quad t = 1, \ldots, T \quad (7)$$
$$H_t^i \leq \beta H_{t-1}^i \quad t = 1, \ldots, T \quad (8)$$
$$H_t^i \geq \gamma H_{t-2}^i \quad t = 2, \ldots, T \quad (9)$$
$$H_t^i \leq \lambda H_{t-2}^i \quad t = 2, \ldots, T \quad (10)$$
$$\sum_{s \in S} a_s \left[ \sum_{t=1}^{T} age_{st} \left( x_s + y_{st}^{\xi_i} \right) + age_{s0} w_s^{\xi_i} \right] \geq \sum_{s} a_s \, age_s \quad (11)$$
$$w^{\xi_i} \in \{0,1\}^{|S|}, \quad y^{\xi_i} \in \{0,1\}^{|S| \times T}, \quad H_t^i \geq 0 \quad t \in \{0, 1, \ldots, T\}. \quad (12)$$
The objective function (3) maximizes the net present value from the stages subsequent to the current harvest ($x$). This objective function includes the cost of harvest and re-planting for each forest unit. In addition, it accounts for the value of the stands that are not harvested during the planning horizon, because those stands have a monetary value. Constraint set (4) imposes that if a stand is harvested now, it cannot be harvested in subsequent years; in other words, a forest unit can be harvested only once during the whole planning horizon. The $w_s$ variables account for the stands that are not scheduled for harvest at any point in the planning horizon. Constraint sets (5) and (6) compute the volume of wood harvested now and in the future periods of the planning horizon, respectively. The volume computed in each period depends on the scenario $\xi_i$. Constraint sets (7) and (8) impose that the volume fluctuation between two consecutive stages stay within given lower and upper bounds, respectively. These constraints are also known as even-flow constraints, since they ensure that the volume of timber harvested is distributed almost evenly over time. Even with constraint sets (7) and (8), there is a possibility that the harvested volume declines over time; the volume at $t = 4$ might, for instance, be much lower than the volume at $t = 1$. To attenuate this effect, we impose supplementary wood flow constraints, constraint sets (9) and (10), which impose flow restrictions between two non-consecutive stages. Constraint set (11) states that the age of the forest at the end of the planning horizon should be greater than or equal to its current age. This constraint is a proxy for sustainability; it ensures that forest resources are not depleted during the planning horizon. Finally, the variable definitions are given in (12).
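To make the recourse problem concrete, the following sketch builds and solves $R(x, \xi_i)$ for a single growth scenario. It is a minimal illustration, not the code used in the study: the study solved the models with CPLEX 12.8 through Python, whereas this sketch uses the open-source PuLP modeler with its bundled CBC solver, and all data containers (`revenue`, `vol_future`, etc.) are hypothetical placeholders keyed by stand and period.

```python
import pulp

def recourse_value(x_now, stands, periods, revenue, end_value, vol_now, vol_future,
                   area, age_if_cut, age_if_kept, age_current,
                   alpha=0.85, beta=1.15, gamma=0.85, lam=1.15):
    """Solve R(x, xi_i), the recourse problem (3)-(12), for one growth scenario.

    x_now[s]         : 0/1 first-stage harvest decision for stand s (fixed here)
    revenue[s, t]    : discounted revenue if stand s is harvested in period t under the scenario
    end_value[s]     : discounted value of stand s if left unharvested
    vol_now[s]       : harvest volume of stand s in the first period
    vol_future[s, t] : scenario-dependent harvest volume of stand s in period t
    """
    m = pulp.LpProblem("recourse", pulp.LpMaximize)
    y = pulp.LpVariable.dicts("y", [(s, t) for s in stands for t in periods], cat="Binary")
    w = pulp.LpVariable.dicts("w", stands, cat="Binary")
    H = pulp.LpVariable.dicts("H", [0] + list(periods), lowBound=0)

    # (3) revenue of future harvests plus value of the unharvested stands
    m += (pulp.lpSum(revenue[s, t] * y[s, t] for s in stands for t in periods)
          + pulp.lpSum(end_value[s] * w[s] for s in stands))

    for s in stands:                                   # (4) each stand harvested at most once
        m += x_now[s] + w[s] + pulp.lpSum(y[s, t] for t in periods) == 1
    m += pulp.lpSum(vol_now[s] * x_now[s] for s in stands) == H[0]          # (5)
    for t in periods:                                  # (6) scenario-dependent harvest volumes
        m += pulp.lpSum(vol_future[s, t] * y[s, t] for s in stands) == H[t]
    for t in periods:                                  # (7)-(8) even flow between consecutive periods
        m += H[t] >= alpha * H[t - 1]
        m += H[t] <= beta * H[t - 1]
        if t >= 2:                                     # (9)-(10) flow between non-consecutive periods
            m += H[t] >= gamma * H[t - 2]
            m += H[t] <= lam * H[t - 2]
    # (11) area-weighted ending age must not fall below the current area-weighted age
    m += pulp.lpSum(area[s] * (pulp.lpSum(age_if_cut[s, t] * (x_now[s] + y[s, t]) for t in periods)
                               + age_if_kept[s] * w[s]) for s in stands) \
         >= sum(area[s] * age_current[s] for s in stands)

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(m.objective)
```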

2.2. Climate Change Data

In this paper, the uncertainty in forest growth stems from climate change. In this section, we present how climate change was translated into forest growth changes suitable for the harvest planning model. Climate change in this project refers to the change in forest growth; hence, the change can be positive or negative. The change is small in the near future (stage 1) compared to the distant future (stage 4), and there is a ten-year difference between two consecutive stages. We consider the case of the Pacific Northwest, where most models forecast that climate change will lead to an increase in forest growth. We report in Table 2 the growth changes used for the analysis. These growth change data are based on ([23], Table 3). Forest growth is age-dependent, and this is reflected in the growth change. In addition, this growth change modeling reflects a Geometric Brownian Motion process, in which the absolute increments of growth are not independent from one period to another although the percentages of change are. The Lower and Upper columns of the table correspond to the lowest and highest possible growth change, respectively, at each stage. Formally, [Lower, Upper] at stage $t$ represents the support $\Xi_t$ of the random parameter $\xi_t$.
To assess the performance of the proposed modeling framework, we change the lower bound on the growth change by multiplying it by a factor $\epsilon$. This allows us to assess the performance of the proposed method for climate change scenarios that forecast a decrease in forest growth. We assessed values of $\epsilon$ equal to 1, 20 and 40. Only $\epsilon = 40$ corresponds to forest decline, whereas $\epsilon = 20$ corresponds to a decrease in forest growth.
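To illustrate how the growth-change data enter the model, the short snippet below scales the lower bounds of Table 2 by $\epsilon$ and applies a sampled percentage change to a baseline yield coefficient. The multiplicative yield adjustment is one plausible reading of the transformation; the paper does not spell out its preprocessing, so the function names and the exact formula are assumptions made for illustration only.

```python
# Stage supports [Lower_t, Upper_t] in percent, from Table 2.
GROWTH_SUPPORTS = {1: (-1.2, 11.1), 2: (-2.4, 22.2), 3: (-3.6, 33.3), 4: (-4.8, 44.4)}

def scaled_supports(epsilon=1.0):
    """Multiply the lower bound of each stage by epsilon (1, 20 or 40 in the experiments)."""
    return {t: (lo * epsilon, hi) for t, (lo, hi) in GROWTH_SUPPORTS.items()}

def adjusted_yield(base_yield, change_pct):
    """Apply a sampled growth change (in %) to a baseline yield coefficient.

    Assumption: the scenario yield equals the expected yield times (1 + change/100).
    This is illustrative, not the authors' exact growth-projection code.
    """
    return base_yield * (1.0 + change_pct / 100.0)
```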

2.3. Multistage Sampling

We have presented both the optimization model we intend to solve in this paper and the uncertain parameter. In this section, we describe how we sample from the distribution of the uncertain parameter. To this end, let $N = \{N_1, \ldots, N_T\}$ be a sequence of positive integers. At the first stage (stage 1), we generate $N_1$ replications drawn from $\Xi_1$, the support of the random parameter $\xi_1$. To minimize the number of replications necessary, we generate the $N_1$ replications by dividing $\Xi_1$ into $N_1$ intervals and sampling one realization of $\xi_1$ uniformly from each interval. We repeat this procedure for stage 2 and so forth for the following stages. At the end, we connect the realization at $t = 0$ to all realizations at $t = 1$, connect all realizations at $t = 1$ to all realizations at $t = 2$, and continue until the last stage. The result of this procedure is a scenario tree with a total number of scenarios, or sample size, of $n = \prod_{t=1}^{T} N_t$. Each path of this scenario tree can be viewed as a scenario with probability $1/n$. Varying the values of $N_t$ yields different sample sizes, which can be solved using SAA as described in the following section. It is not clear, however, whether a sampling scheme with $N_1 \geq N_2 \geq \cdots \geq N_T$ is preferable to one with $N_1 \leq N_2 \leq \cdots \leq N_T$. To answer this question, we tested the solution obtained with a sampling scheme with $N_1 = N_2 = \cdots = N_T$ in Section 3.1 and evaluate in Section 3.2 the solution behavior when we use the two other sampling schemes.
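The sketch below illustrates the sampling scheme described above: each stage support $\Xi_t$ is split into $N_t$ equal intervals, one realization is drawn uniformly from each interval, and the stage-wise realizations are combined into equiprobable scenario paths by a Cartesian product. This is an illustrative reconstruction of the procedure, not the authors' code.

```python
import itertools
import random

def sample_stage(support, n_t, rng):
    """Draw n_t realizations from the support [lo, hi] of one stage by splitting it
    into n_t equal intervals and sampling one point uniformly in each interval."""
    lo, hi = support
    width = (hi - lo) / n_t
    return [rng.uniform(lo + k * width, lo + (k + 1) * width) for k in range(n_t)]

def build_scenarios(supports, sizes, seed=0):
    """Build the scenario tree of Section 2.3 as a flat list of equiprobable paths.

    supports : {t: (Lower_t, Upper_t)}, e.g. the (possibly epsilon-scaled) rows of Table 2
    sizes    : {t: N_t}, e.g. {1: 3, 2: 3, 3: 4, 4: 4} for the sampling scheme 3344
    Returns (scenarios, probability), where each scenario is a tuple (xi_1, ..., xi_T).
    """
    rng = random.Random(seed)
    per_stage = [sample_stage(supports[t], sizes[t], rng) for t in sorted(supports)]
    scenarios = list(itertools.product(*per_stage))   # connect every stage-t realization to all of stage t+1
    return scenarios, 1.0 / len(scenarios)

# Example: the 5555 scheme gives 5 * 5 * 5 * 5 = 625 equiprobable scenarios.
# scenarios, p = build_scenarios({1: (-1.2, 11.1), 2: (-2.4, 22.2), 3: (-3.6, 33.3), 4: (-4.8, 44.4)},
#                                {1: 5, 2: 5, 3: 5, 4: 5})
```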

2.4. Solution Method

We used SAA to solve the stochastic harvest scheduling model, since the main challenge for the decision maker is to find the set of stands that are suitable for harvest in the first period. The SAA method is a Monte Carlo simulation-based approach for solving stochastic optimization problems. A random sample is generated with a uniform probability distribution, and the expected value function of the stochastic problem is then approximated by the corresponding sample average function. The sample average approximation problem specified by the generated sample is then solved by a deterministic optimization technique, which in this study is mixed integer programming. This procedure is repeated with increasing sample sizes to obtain solutions that are close enough to the true optimal solution.
To formally introduce the SAA method we used to solve the harvest scheduling problem, we reformulate the problem in (1) and (2) in a compact form, purely for convenience. We can rewrite the problem as
$$\max_{x \in X} f(x, \xi) := \max_{x \in X} \mathbb{E}\left[ \sum_{s \in S} r_s x_s + R(x, \xi) \right], \quad (13)$$
where $X$ is the feasible set (constraints (2) and (4) to (12)). We can write (13) because the term $\sum_{s \in S} r_s x_s$ involves no uncertainty.
Let $\xi_1, \ldots, \xi_n$ be a sample of size $n$ drawn from the distribution of $\xi$, generated using the method described in Section 2.3. We can write the sample average objective function as
$$f_n(x) := \frac{1}{n} \sum_{i=1}^{n} f(x, \xi_i), \quad (14)$$
where
$$f(x, \xi_i) = \sum_{s \in S} r_s x_s + \sum_{s \in S} \left[ \sum_{t \in T} r_{st}^{\xi_i} y_{st}^{\xi_i} + r_{s0}^{\xi_i} w_s^{\xi_i} \right]. \quad (15)$$
The idea of SAA is to solve (16) instead of (13):
$$z_n = \max_{x \in X} f_n(x). \quad (16)$$
We now describe the steps for solving a stochastic optimization problem using SAA as outlined in Figure 1.
Steps a and b: Generate a large sample of size $N$ and solve the corresponding problem (16) to obtain a first-stage candidate solution $\hat{x}$. The choice of $N$ depends on the largest model that can be solved in a reasonable time; the fact that this problem is solved only once partially motivates choosing the largest possible sample size.
Step c: Generate $M$ samples of size $n$. The idea is to start with small values of $n$ and increase the sample size progressively. Kleywegt et al. [24] describe a procedure for choosing the value of $M$; in a nutshell, $M$ should be chosen so that different statistics such as the mean, variance and confidence interval can be computed.
Step d: Solve the SAA problem of each sample $m$ to get $z_n^m = \max_{x \in X} \frac{1}{n} \sum_{i=1}^{n} f(x, \xi_i)$ and the upper bound $\bar{z}_n^M = \frac{1}{M} \sum_{m=1}^{M} z_n^m$. $\bar{z}_n^M$ is an upper bound because it is the average of the upper bounds $z_n^m$, each obtained by using only a subset of scenarios, and the solution is therefore more optimistic.
Step e: For each sample $m$, fix the first-stage variables to the $\hat{x}$ obtained from steps a and b to get a lower bound on the optimal solution, $\hat{z}_n^m(\hat{x}) = \max \frac{1}{n} \sum_{i=1}^{n} f(\hat{x}, \xi_i)$.
Step f: Compute the average optimality gap as $\bar{G}(\hat{x}) = \bar{z}_n^M - \frac{1}{M} \sum_{m=1}^{M} \hat{z}_n^m(\hat{x})$. We can also compute, for each sample $m$, the optimality gap as
$$G_n^m(\hat{x}) = \max_{x \in X} \frac{1}{n} \sum_{i=1}^{n} f(x, \xi_i) - \frac{1}{n} \sum_{i=1}^{n} f(\hat{x}, \xi_i),$$
where $f(\hat{x}, \xi_i)$ is obtained by fixing the first-stage variables to the $\hat{x}$ from step a. Notice that $G_n^m(\hat{x}) \geq 0$. We can then compute the average optimality gap $\bar{G}(\hat{x}) = \frac{1}{M} \sum_{m=1}^{M} G_n^m(\hat{x})$ and its variance $s_{\bar{G}}^2 = \frac{1}{M(M-1)} \sum_{m=1}^{M} \left( G_n^m(\hat{x}) - \bar{G}(\hat{x}) \right)^2$. Mak et al. [16] proved that the optimality gap depends on the sample size $n$: as $n \to \infty$, the optimality gap converges to zero and the SAA objective function value converges to the true optimal objective value with probability one. In other words, the sample size is large enough if we have a small optimality gap.
Step g: If the optimality gap computed in step f is not sufficiently small, then increase the sample size n and go back to step c. The decision of whether the optimality gap is sufficiently small and its variance is sufficiently low is domain specific.
Step h: If the optimality gap is small enough, then we have obtained the optimal solution and we can compute the confidence interval on the optimality gap as $\mu_{\hat{x}} \in \left[ 0, \ \bar{G}(\hat{x}) + t_{M-1, \alpha} \, s_{\bar{G}} \right]$ with a confidence level of $100(1 - \alpha)\%$, where $\alpha$ is the Type I error.
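As a concrete illustration of steps d through h, the sketch below computes the average optimality gap, its variance, and the one-sided confidence bound from the per-replication objective values. It assumes the $M$ replications have already been solved by the MIP solver and that their objective values are collected in two lists; the function name and the SciPy dependency are choices made here for illustration, not part of the original implementation.

```python
import math
from statistics import mean
from scipy.stats import t as student_t

def gap_statistics(upper_vals, lower_vals, alpha=0.05):
    """Summarize the M SAA replications (steps d-h).

    upper_vals[m] : z_n^m, optimal value of replication m with the first stage free (step d)
    lower_vals[m] : z_hat_n^m(x_hat), value of replication m with the first stage fixed to x_hat (step e)
    Returns the average gap G_bar, its sample variance s^2, and the upper end of the
    one-sided 100(1 - alpha)% confidence interval [0, G_bar + t_{M-1,alpha} * s].
    """
    M = len(upper_vals)
    gaps = [u - l for u, l in zip(upper_vals, lower_vals)]        # G_n^m(x_hat) >= 0
    g_bar = mean(gaps)
    var_g = sum((g - g_bar) ** 2 for g in gaps) / (M * (M - 1))   # variance of the average gap
    ci_upper = g_bar + student_t.ppf(1 - alpha, M - 1) * math.sqrt(var_g)
    return g_bar, var_g, ci_upper
```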

2.5. Importance of SAA

To assess the advantage of solving the stochastic optimization problem with SAA instead of the deterministic model that only considers the average scenario (the model that ignores climate change uncertainty), we compute the so-called value of the stochastic solution ($VSS$) [25]. Let $\bar{x}$ be the first-stage solution we would obtain if we implemented the average-growth solution, and let $x^*$ be the optimal solution of the stochastic model obtained from SAA. We denote by $z_i$ and $\bar{z}_i$ the NPVs of scenario $\xi_i$ when the first-stage variables are fixed to $x^*$ and $\bar{x}$, respectively. Finally, let $z(\bar{x}) = \frac{1}{|\Omega|} \sum_{\xi_i \in \Omega} \bar{z}_i$ and $z(x^*) = \frac{1}{|\Omega|} \sum_{\xi_i \in \Omega} z_i$. We compute $VSS$ as $VSS = z(x^*) - z(\bar{x})$. In relative terms, we express $VSS$ in basis points (bps; 1 basis point is equivalent to 0.01% and is commonly used in economics and finance) as
$$VSS(\text{bps}) = \frac{VSS}{z(\bar{x})} \times 10{,}000.$$
In cases where some scenarios are infeasible after fixing the first stage to $\bar{x}$, we report the number of infeasible scenarios.
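A minimal sketch of the $VSS$ computation is given below. The paper does not state how scenarios that are infeasible under $\bar{x}$ enter the averages, so dropping them from both averages is an assumption made purely for illustration, and all names are hypothetical.

```python
def value_of_stochastic_solution(npv_stochastic, npv_deterministic):
    """Compute VSS = z(x*) - z(x_bar) and express it in basis points (Section 2.5).

    npv_stochastic[i]    : z_i, NPV of scenario xi_i with the first stage fixed to the SAA solution x*
    npv_deterministic[i] : z_bar_i, NPV of scenario xi_i with the first stage fixed to the
                           expected-growth solution x_bar, or None if the scenario is infeasible
    Infeasible scenarios are dropped from both averages here (an assumption); the paper reports their count.
    """
    pairs = [(zi, zbi) for zi, zbi in zip(npv_stochastic, npv_deterministic) if zbi is not None]
    n_infeasible = len(npv_deterministic) - len(pairs)
    z_star = sum(zi for zi, _ in pairs) / len(pairs)
    z_bar = sum(zbi for _, zbi in pairs) / len(pairs)
    vss = z_star - z_bar
    return vss, vss / z_bar * 10_000, n_infeasible     # absolute VSS, VSS in bps, infeasible count
```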

2.6. Experiment

We present in this section the values of several parameters introduced in the methods in order to conduct a numerical experiment. The numerical experiment was run on a Windows desktop with an 8-core AMD processor at 4 GHz and 8 GB of RAM. We used CPLEX 12.8 to solve the stochastic model, with Python 3 as the modeling language. We solved each model to a 0.5% mixed integer programming (MIP) optimality gap. For each model, we generated 30 independent replications ($M = 30$). We used bootstrapping to compute 95% confidence intervals for the different metrics. $N$ was set to 625 scenarios, and we varied the sample size $n$ from 16 to 625. We tested the model on the Phyllis Leeper forest with 89 stands (http://ifmlab.for.unb.ca/fmos/datasets/PhyllisLeeper/). The model parameters $\alpha$ and $\gamma$ were set to 0.85, whereas $\beta$ and $\lambda$ were set to 1.15. The planning horizon was 50 years, divided into 5 planning periods of 10 years each.
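The exact bootstrap variant is not specified in the paper; the following sketch shows a plain percentile bootstrap over the $M = 30$ replication values as one reasonable way to obtain the 95% confidence intervals reported in the figures.

```python
import random
from statistics import mean

def bootstrap_ci(values, n_boot=10_000, level=0.95, seed=0):
    """Percentile-bootstrap confidence interval for the mean of a metric
    (e.g. NPV or optimality gap) over the M = 30 SAA replications."""
    rng = random.Random(seed)
    boot_means = sorted(mean(rng.choices(values, k=len(values))) for _ in range(n_boot))
    lo = boot_means[int((1 - level) / 2 * n_boot)]
    hi = boot_means[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```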

3. Results

3.1. Effect of Sample Size on the Optimality Gap

The results of the experiments are summarized in Figure 2. The sample size is the number of scenarios. These results were obtained by setting $N_1 = N_2 = N_3 = N_4$. The solutions show that as the sample size increases, the NPV decreases, converging toward the optimal objective function value. In addition, the confidence intervals become tighter as the sample size increases. A similar behavior is observed for the optimality gap. However, larger sample sizes lead to a significant increase in solution time. We conclude that solving problems with up to 256 scenarios is sufficient to capture the variability and obtain solutions that are close enough to the true optimal solution, since increasing the sample size to 625 did not significantly reduce the optimality gap or its variance. Furthermore, although a larger sample size reduces the optimality gap and its variance, it comes at the cost of solution time, which increased exponentially with the sample size.

3.2. Effect of Sampling Scheme on the Optimality Gap

In this section, we test the effect of the scenario generation scheme on the optimality gap. The schemes used to generate scenarios for the four stages are presented in Figure 3. In the figure, each sampling scheme is represented by four digits corresponding to $N_1$, $N_2$, $N_3$ and $N_4$. For instance, 1155 means that $N_1 = 1$, $N_2 = 1$, $N_3 = 5$ and $N_4 = 5$, leading to $1 \times 1 \times 5 \times 5 = 25$ scenarios. This means that the schemes 1555 and 5551 lead to the same number of scenarios (125). As we can see from Figure 3, the optimality gap and its variability are larger when $N_1 = 1$ than when $N_1 > 1$. The lowest optimality gap is obtained with $N_1 = 3$. In general, having higher values of $N_t$ for higher $t$ appears to be favorable.

3.3. Advantage of SAA in Stochastic Harvest Scheduling over the Deterministic Approach

The stochastic optimization solution yields superior solutions in terms of NPV. In addition, it provides solutions that are robust (feasible) for a set of climate change scenarios under which the deterministic solution, if implemented here and now, would be infeasible (Table 3). As the uncertainty in the magnitude of forest growth change increases, the value of the stochastic solution increases as well. When we considered $\epsilon = 20$, three out of 200 scenarios were infeasible.

4. Discussion and Conclusions

Climate change is a serious issue in forest management planning. In a study conducted in Norway, the majority of forest managers acknowledged the importance of addressing climate change in forest planning [26]. Several researchers conducted similar studies in different ecosystems and reached analogous conclusions [27,28]. To ensure the sustainability of forest resources, forest managers need to incorporate forest growth uncertainty into harvest scheduling models. In this work, we formulated a stochastic harvest scheduling model with forest growth uncertainty due to climate change and solved the model using sample average approximation (SAA). We tested the model and solution method using climate change data transformed into forest growth changes. We compared the robustness of the stochastic solution to that of the deterministic one by randomly generating a set of scenarios and comparing the expected NPV obtained when implementing the stochastic solution with the NPV obtained when using the deterministic solution. The numerical results showed that SAA yields stochastic solutions that are close enough to the true optimal solution when the sample size is large enough. However, a large sample size leads to an exponential increase in solution time. This pattern of computational complexity growing exponentially with the sample size was previously noted by Kleywegt et al. [24].
One of the main limitations of the proposed method is the computation required in step d of the algorithm presented in Figure 1. As discussed in the previous paragraph, increasing the sample size leads to an exponential increase in solution time. It is therefore important to have a strategy that reduces the sample size while still allowing the SAA solutions to converge to the true optimal solutions. Our tests suggest that, with an appropriate sampling scheme, it is possible to reach convergence with smaller sample sizes. For instance, the sampling schemes 2356 and 3344, yielding sample sizes of 180 and 144, respectively, have smaller optimality gaps and variances than the sample size of 625 obtained from the 5555 scheme. In conclusion, when we adopt a sampling strategy that sufficiently explores the first stage and generates large samples for future stages, we can limit the number of scenarios necessary for the SAA solution to converge to the true optimal solution. This computational challenge is common to stochastic programs in forestry [29].
The proposed model not only allows managers to make intelligent decisions now, but also supports the preservation of forest resources that take time to replenish once depleted. Indeed, the stochastic solution is robust to different growth scenarios, whereas the deterministic one is infeasible for many of the tested growth scenarios. These results are in line with those obtained by Álvarez-Miranda et al. [12] and Garcia-Gonzalo et al. [13]. The infeasibility of the deterministic solution stems mainly from the violation of the wood flow constraints caused by intensive harvesting in early periods.
In addition, the solution method proposed in this paper is easy for many forest managers to develop and implement. Unlike the methods used in [11,13], which require knowledge of the probability distribution of the random parameter, SAA does not require such information. The method relies on the fact that if the sample size is large enough, then the sample statistics approximate those of the actual population. The method is also suitable for many applications where the objective function cannot be computed in closed form, such as so-called black-box optimization problems [30] and the stochastic knapsack problem [24]. In forestry, this method is well suited for harvest scheduling problems with wood price and demand uncertainties, because samples can be drawn from historical demand and price data without the need to model prices as done in Alonso-Ayuso et al. [10] and Rios et al. [31]. Compared to stochastic programming, which requires the so-called non-anticipativity constraints [13,32], the SAA model is relatively smaller in terms of the number of constraints (and possibly the number of variables, depending on the formulation), since it does not require such constraints.
Regarding the climate change data, we used climate change forecasts produced with the statistical models developed in Latta et al. [33]. Unlike the data from Garcia-Gonzalo et al. [11], Álvarez-Miranda et al. [12] and Garcia-Gonzalo et al. [13], which originated from process-based modeling, statistical models of forest growth under climate change are much more common (e.g., Elli et al. [34]). Hence, the method developed in this study can easily be extended to other forest systems. Moreover, it is straightforward to translate forecasts of precipitation, air moisture and temperature into forest growth, compared to process-based models, which require expertise in plant biology.
Finally, this research can be extended by considering other sources of uncertainty, such as the price of wood or the demand for forest products. In this study, we assumed that climate change does not lead to species transition. In some cases, one might need to consider species shifts caused by climate change. It would be interesting to include this information in the decision-making process and provide forest practitioners with a range of options when implementing harvest scheduling plans and choosing species for regeneration.

Author Contributions

Conceptualization: M.B.B.; methodology: M.B.B.; validation: S.F.T.; formal analysis: M.B.B.; resources: S.F.T.; writing—original draft preparation: M.B.B.; writing—review and editing: S.F.T. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the Precision Forestry Cooperative of the School of Environmental and Forest Sciences, University of Washington.

Acknowledgments

We thank the two referees who helped improve the quality of the original draft. We also thank Bella Tsachidou, who volunteered to proofread the manuscript and catch typos.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Deterministic Harvest Scheduling Model

The objective function (A1) maximizes the net present value from forest harvests. This objective function includes the cost of harvest and re-planting for each forest unit. In addition, it accounts for the value of the stands that are not harvested during the planning horizon, because those stands have a monetary value. Constraint set (A2) imposes that if a stand is harvested now, it cannot be harvested in subsequent years; in other words, a forest unit can be harvested only once during the whole planning horizon. The $w_s$ variables account for the stands that are not scheduled for harvest at any point in the planning horizon. Constraint set (A3) computes the volume of wood harvested in each period of the planning horizon. Constraint sets (A4) and (A5) impose that the volume fluctuation between two consecutive stages stay within fixed lower and upper bounds, respectively. These constraints are also known as even-flow constraints, since they ensure that the exploitation of forest resources is evenly distributed over time. Even with constraint sets (A4) and (A5), there is a possibility that the harvested volume declines over time; the volume at $t = 4$ might, for instance, be much lower than the volume at $t = 0$. To attenuate this effect, we impose supplementary wood flow constraints, constraint sets (A6) and (A7), which impose flow restrictions between two non-consecutive stages. Constraint set (A8) states that the age of the forest at the end of the planning horizon should be greater than or equal to its current age. This constraint is a proxy for sustainability; it ensures that forest resources are not depleted during the planning horizon. Finally, the variable definitions are given in (A9).
$$\max \ \sum_{s=1}^{|S|} \left[ \sum_{t=0}^{T} r_{st} x_{st} + r_{s0} w_s \right] \quad (A1)$$
subject to
$$w_s + \sum_{t=0}^{T} x_{st} = 1 \quad s = 1, \ldots, |S| \quad (A2)$$
$$\sum_{s} v_{st} x_{st} = H_t \quad t = 0, \ldots, T \quad (A3)$$
$$H_t \geq \alpha H_{t-1} \quad t = 1, \ldots, T \quad (A4)$$
$$H_t \leq \beta H_{t-1} \quad t = 1, \ldots, T \quad (A5)$$
$$H_t \geq \gamma H_{t-2} \quad t = 2, \ldots, T \quad (A6)$$
$$H_t \leq \lambda H_{t-2} \quad t = 2, \ldots, T \quad (A7)$$
$$\sum_{s \in S} a_s \left[ \sum_{t=0}^{T} age_{st} x_{st} + age_{s0} w_s \right] \geq \sum_{s} a_s \, age_s \quad (A8)$$
$$w_s \in \{0, 1\} \ \forall s, \quad x \in \{0, 1\}^{|S| \times (T+1)}, \quad H_t \geq 0 \ \forall t \quad (A9)$$

References

  1. Mäkinen, A.; Borges, J.G. Assessing uncertainty and risk in forest planning and decision support systems: Review of classical methods. For. Syst. 2013, 22, 282–303.
  2. Ross, K.L.; Tóth, S.F. A model for managing edge effects in harvest scheduling using spatial optimization. Scand. J. For. Res. 2016, 31, 646–654.
  3. Dixon, B.L.; Howitt, R.E. Resource Production Under Uncertainty: A Stochastic Control Approach to Timber Harvest Scheduling. Am. J. Agric. Econ. 1980, 62, 499–507.
  4. Kooten, G.C.V.; Kooten, R.E.; Brown, L.G. Modeling the effect of uncertainty on timber harvest: A suggested approach and empirical example. J. Agric. Resour. Econ. 1992, 17, 162–172.
  5. Pukkala, T. Multiple risks in multi-objective forest planning: Integration and importance. For. Ecol. Manag. 1998, 111, 265–284.
  6. Alonso-Ayuso, A.; Escudero, L.F.; Guignard, M.; Quinteros, M.; Weintraub, A. Forestry management under uncertainty. Ann. Oper. Res. 2011, 190, 17–39.
  7. Veliz, F.B.; Watson, J.P.; Weintraub, A.; Wets, R.J.B.; Woodruff, D.L. Stochastic optimization models in forest planning: A progressive hedging solution approach. Ann. Oper. Res. 2014, 259–274.
  8. Piazza, A.; Pagnoncelli, B.K. The optimal harvesting problem under price uncertainty. Ann. Oper. Res. 2014, 217, 425–445.
  9. Pagnoncelli, B.K.; Piazza, A. The optimal harvesting problem under price uncertainty: The risk averse case. Ann. Oper. Res. 2017, 258, 479–502.
  10. Alonso-Ayuso, A.; Escudero, L.F.; Guignard, M.; Weintraub, A. Risk management for forestry planning under uncertainty in demand and prices. Eur. J. Oper. Res. 2018, 267, 1051–1074.
  11. Garcia-Gonzalo, J.; Pais, C.; Bachmatiuk, J.; Weintraub, A. Accounting for climate change in a forest planning stochastic optimization model. Can. J. For. Res. 2016, 46, 1111–1121.
  12. Álvarez-Miranda, E.; Garcia-Gonzalo, J.; Ulloa-Fierro, F.; Weintraub, A.; Barreiro, S. A multicriteria optimization model for sustainable forest management under climate change uncertainty: An application in Portugal. Eur. J. Oper. Res. 2018, 269, 79–98.
  13. Garcia-Gonzalo, J.; Pais, C.; Bachmatiuk, J.; Barreiro, S.; Weintraub, A. A Progressive Hedging Approach to Solve Harvest Scheduling Problem under Climate Change. Forests 2020, 11, 224.
  14. Rammer, W.; Schauflinger, C.; Vacik, H.; Palma, J.H.; Garcia-Gonzalo, J.; Borges, J.G.; Lexer, M.J. A web-based ToolBox approach to support adaptive forest management under climate change. Scand. J. For. Res. 2014, 29, 96–107.
  15. Garcia-Gonzalo, J.; Borges, J.G.; Palma, J.H.N.; Zubizarreta-Gerendiain, A. A decision support system for management planning of Eucalyptus plantations facing climate change. Ann. For. Sci. 2014, 71, 187–199.
  16. Mak, W.K.; Morton, D.P.; Wood, R.K. Monte Carlo bounding techniques for determining solution quality in stochastic programs. Oper. Res. Lett. 1999, 24, 47–56.
  17. Wang, W.; Ahmed, S. Sample average approximation of expected value constrained stochastic programs. Oper. Res. Lett. 2008, 36, 515–519.
  18. Schütz, P.; Tomasgard, A.; Ahmed, S. Supply chain design under uncertainty using sample average approximation and dual decomposition. Eur. J. Oper. Res. 2009, 199, 409–419.
  19. Chunlin, D.; Liu, Y. Sample Average Approximation Method for Chance Constrained Stochastic Programming in Transportation Model of Emergency Management. Syst. Eng. Procedia 2012, 5, 137–143.
  20. Emelogu, A.; Chowdhury, S.; Marufuzzaman, M.; Bian, L.; Eksioglu, B. An enhanced sample average approximation method for stochastic optimization. Int. J. Prod. Econ. 2016, 182, 230–252.
  21. Pour, A.G.; Naji-Azimi, Z.; Salari, M. Sample average approximation method for a new stochastic personnel assignment problem. Comput. Ind. Eng. 2017, 113, 135–143.
  22. Mohammadi Bidhandi, H.; Patrick, J. Accelerated sample average approximation method for two-stage stochastic programming with binary first-stage variables. Appl. Math. Model. 2017, 41, 582–595.
  23. Latta, G.; Temesgen, H.; Adams, D.; Barrett, T. Analysis of potential impacts of climate change on forests of the United States Pacific Northwest. For. Ecol. Manag. 2010, 259, 720–729.
  24. Kleywegt, A.J.; Shapiro, A.; Homem-De-Mello, T. The sample average approximation method for stochastic discrete optimization. SIAM J. Optim. 2001, 12, 479–502.
  25. Birge, J.R. The value of the stochastic solution in stochastic linear programs with fixed recourse. Math. Program. 1982, 24, 314–325.
  26. Heltorp, K.M.A.; Kangas, A.; Hoen, H.F. Do forest decision-makers in Southeastern Norway adapt forest management to climate change? Scand. J. For. Res. 2018, 33, 278–290.
  27. Scheller, R.M.; Parajuli, R. Forest management for climate change in New England and the Klamath Ecoregions: Motivations, practices, and barriers. Forests 2018, 9, 626.
  28. Liu, K.; He, H.; Xu, W.; Du, H.; Zong, S.; Huang, C.; Wu, M.; Tan, X.; Cong, Y. Responses of korean pine to proactive managements under climate change. Forests 2020, 11, 263.
  29. Eriksson, L.O. Planning under uncertainty at the forest level: A systems approach. Scand. J. For. Res. 2006, 21, 111–117.
  30. Kim, S.; Ryu, J.H. The sample average approximation method for multi-objective stochastic optimization. In Proceedings of the 2011 Winter Simulation Conference (WSC), Phoenix, AZ, USA, 11–14 December 2011; pp. 4021–4032.
  31. Rios, I.; Weintraub, A.; Wets, R.J. Building a stochastic programming model from scratch: A harvesting management example. Quant. Financ. 2016, 16, 189–199.
  32. Bagaram, M.B.; Jaross, W.; Weintraub, A. A Parallelized Variable Fixing Process for Solving Multistage Stochastic Programs with Progressive Hedging: An Application to Harvest Scheduling in the Face of Climate Change. Preprint 2020.
  33. Latta, G.; Temesgen, H.; Barrett, T.M. Mapping and imputing potential productivity of Pacific Northwest forests using climate variables. Can. J. For. Res. 2009, 39, 1197–1207.
  34. Elli, E.F.; Sentelhas, P.C.; Bender, F.D. Impacts and uncertainties of climate change projections on Eucalyptus plantations productivity across Brazil. For. Ecol. Manag. 2020, 474, 118365.
Figure 1. Flow chart of sample average approximation (SAA).
Figure 2. Performance measurement of SAA for harvest scheduling. Error bars indicate 95% confidence interval.
Figure 3. Change of optimality gap values due to the sample size and the configuration of the sampling scheme using severe climate change data.
Table 1. Variables, sets and parameters used in the problem formulation.
Sets
  $S$: set of stands or forest management units ($s \in S$)
  $T$: set of time periods in which the decisions are implemented, or stages at which the decisions are taken ($t \in T$)
  $\Omega$: set of scenarios ($\xi_i \in \Omega$)
Variables
  $x_s$: binary variable; 1 if management unit $s$ is scheduled to be harvested in the first period (now), 0 otherwise
  $w_s^{\xi_i}$: binary variable; 1 if management unit $s$ should not be harvested during the whole planning horizon under scenario $\xi_i$, 0 otherwise
  $y_{st}^{\xi_i}$: binary variable; 1 if stand $s$ should be harvested in period $t$ under scenario $\xi_i$, 0 otherwise
  $H_t^i$: volume harvested in period $t$ under scenario $\xi_i$ (m³); the superscript $i$ is omitted for $t = 0$
Parameters
  $r_s$: discounted net harvest revenue from stand $s$ if the stand is harvested now ($)
  $r_{st}^{\xi_i}$: net harvest revenue from stand $s$ if the stand is harvested in period $t$ under scenario $\xi_i$ ($)
  $r_{s0}^{\xi_i}$: discounted value of the stand at the end of the planning horizon if it is not harvested during the whole planning horizon under scenario $\xi_i$ ($)
  $v_s$: merchantable yield of stand $s$ in the first period (m³/ha)
  $v_{st}^{\xi_i}$: projected merchantable yield of stand $s$ if harvested in period $t$ under scenario $\xi_i$ (m³/ha)
  $a_s$: area of stand $s$ (ha)
  $age_{st}$: age of stand $s$ at the end of the planning horizon if harvested in period $t$ (yr)
  $age_s$: current age of stand $s$ (yr)
  $age_{s0}$: age of stand $s$ at the end of the planning horizon if not harvested during the planning horizon (yr)
  $\alpha$: acceptable lower bound on the fluctuation of harvested volume from one period to the next
  $\beta$: acceptable upper bound on the fluctuation of harvested volume from one period to the next
  $\gamma$: acceptable lower bound on the fluctuation of harvested volume between two non-consecutive periods
  $\lambda$: acceptable upper bound on the fluctuation of harvested volume between two non-consecutive periods
Table 2. Forest growth change (%) due to climate change.

Stage    Lower    Upper
1        −1.2     11.1
2        −2.4     22.2
3        −3.6     33.3
4        −4.8     44.4
Table 3. Value of the stochastic solution.

|Ω|    ϵ     z(x̄)         z(x*)         Infeasible Scen.    VSS          VSS (bps)
256    1     7,775,108     7,782,679     0                   7,571        9.96
200    20    7,160,950     7,256,069     3                   95,119       132.83
200    40    4,940,204     6,737,368     60                  1,797,163    3637.83
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
