Article

A Deficiency of the Weighted Sample Average Approximation (wSAA) Framework: Unveiling the Gap between Data-Driven Policies and Oracles

Faculty of Business, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(14), 8355; https://doi.org/10.3390/app13148355
Submission received: 17 June 2023 / Revised: 16 July 2023 / Accepted: 18 July 2023 / Published: 19 July 2023
(This article belongs to the Section Transportation and Future Mobility)

Abstract

This paper critically examines the weighted sample average approximation (wSAA) framework, a widely used approach in prescriptive analytics for managing uncertain optimization problems with non-linear objectives. Our research pinpoints a key deficiency of the wSAA framework: when data samples are limited, the minimum relative regret—the discrepancy between the expected optimal profit realized by an oracle aware of the genuine distribution and the maximum expected out-of-sample profit garnered by the data-driven policy, normalized by the former profit—can approach one. To validate this assertion, we scrutinize two distinct contextual stochastic optimization problems—the production decision-making problem and the ship maintenance optimization problem—within the wSAA framework. Our study thus exposes a potential deficiency of the wSAA framework: its decision performance can deviate markedly from the full-information optimal solution under limited data samples. This finding offers valuable insights to both researchers and practitioners employing the wSAA framework.

1. Introduction

In practical optimization problems, uncertainty is a pervasive feature—encountered in diverse scenarios such as uncertain travel durations in routing problems and unpredictable future demand in revenue management [1]. Two distinct methodologies exist for tackling optimization under uncertainty, primarily differentiated by their reliance on data [2].
The first category, which includes stochastic optimization [3] and robust optimization [4,5], does not treat data as a primary component. These methods typically predefine distributions for uncertain parameters without using empirical data, which can lead to inaccuracies in distribution specifications and subsequently, suboptimal decisions [6]. On the other hand, the rapid advancement of internet technologies, facilitating the collection and storage of vast quantities of data, has given birth to a second category—using data to characterize uncertainty.
This second category further branches out based on the nature of the data utilized: historical data of uncertain parameters or auxiliary data (contextual covariates) that assist in predicting these uncertain parameters. One subset of this category models scenarios or distributions of uncertain parameters using historical data, without taking into account any useful auxiliary data. Techniques such as the sample average approximation (SAA) method [7] and the data-driven distributionally robust optimization framework exemplify this subcategory [8,9].
The other subset capitalizes on machine learning (ML) techniques to predict uncertain parameters, integrating both historical data and relevant auxiliary data. This integration of ML techniques with mathematical optimization models is termed prescriptive analytics. The most commonly used method in this domain is the two-stage predict-then-optimize (PTO) framework. Here, ML models produce point predictions from the available auxiliary data, effectively transforming uncertain problems into simpler, deterministic ones. However, the PTO framework may not be apt for optimization problems whose objective functions are not linear in the uncertain parameters, as the corresponding stochastic problems may involve complex integrals with respect to the conditional distribution [1,6,10,11]. In such instances, estimate-then-optimize (ETO) methods often yield superior decisions compared to PTO methods. ETO methods first estimate the distributions of uncertain parameters and then solve the resulting contextual stochastic optimization (CSO) problems. A prevalent non-parametric ETO method is the weighted SAA (wSAA) method, which estimates the conditional distribution by assigning weights to historical data, with the weights determined by the covariates [6,10].
This paper scrutinizes the decision-making performance of the wSAA framework based on the minimum relative regret metric, defined as the difference between the expected optimal profit (benefit) of an oracle possessing full information (the actual distribution) and the maximum expected out-of-sample profit realized by the data-driven policy, normalized by the former profit.

Related Literature and Contributions

Our work contributes to two key areas of literature. Firstly, it relates to prescriptive analytics frameworks, in which decision-makers leverage contextual data about uncertainty to make informed decisions [1,2,11,12,13,14]. Several advanced frameworks have been developed, including the smart predict-then-optimize framework [13,15], the wSAA framework [10,14], the empirical risk minimization framework [12,14], and the kernel optimization framework [2,12,14,16].
Among these frameworks, our focus lies on the wSAA framework, which predicts the distributions of uncertain parameters by assigning weights to empirical data [10,12]. One of its strengths lies in its ability to reveal all the information needed to solve the stochastic problem when a perfect prediction of the underlying conditional distribution can be achieved [6]. Bertsimas and Kallus [10] construct weights based on various ML models, such as kernel density estimation, k-nearest-neighbors (kNN) regression, and tree-based methods (e.g., classification and regression trees and random forests). They demonstrate that the resulting wSAA problem can be solved in polynomial time under certain conditions (Theorem 2 in Bertsimas and Kallus [10]). They also establish the asymptotic optimality of this framework—that is, the data-driven decision converges to the full-information optimal solution as the data size grows to infinity. Ban and Rudin [12] propose a kernel optimization method to solve the feature-based newsvendor problem, in which they assign weights to each historical sample by the Nadaraya–Watson estimator [17,18]. They also show generalization bounds—how well the performance of a prediction model, fitted on finite training data, generalizes out of sample [6,19]. Kallus and Mao [11] further learn effective forest-based CSO policies that integrate prediction and optimization, in which they also reweight each historical sample and solve the weighted sample analogue of the original CSO problem. For detailed reviews of prescriptive analytics frameworks for CSO problems, we refer to Tian et al. [1] and Qi and Shen [6].
Secondly, our work contributes to the literature that explores the influence of data size on data-driven decisions. Much of this literature focuses solely on previous samples of uncertainty, without considering contextual covariates. Our main idea is closely aligned with a recent study by Besbes and Mouchtaki [20], which presents a surprising finding: the worst-case relative regret of the SAA method is not monotonically related to the number of available samples, implying that an influx of information could potentially harm decision quality. In contrast, our work examines the decision performance of the wSAA framework when the data size is limited.
To achieve this aim, we construct two CSO problems with non-linear maximization objectives. We examine the minimum relative regret—the difference between the expected optimal profit of the oracle and the maximum expected out-of-sample profit realized by the data-driven policy, normalized by the former profit—of the wSAA framework with limited data samples. Our analysis reveals the following intriguing finding:
  • The minimum relative regret of the wSAA framework can approach one with limited data samples.
The value of one in this context indicates a total divergence between the expected profit from the oracle and the maximum expected profit obtained by the data-driven policy under the wSAA framework. That is to say, when the relative regret equals one, the data-driven policy under the wSAA framework performs as poorly as possible compared to the oracle. This interpretation underlines a considerable disparity between the decision-making efficacy of the wSAA framework and that of an oracle. Consequently, the primary contribution of our work lies in highlighting that the wSAA framework may not always effectively manage CSO problems when data samples are limited. We illustrate this observation through two straightforward examples, paving the way for future research on attaining optimal decision performance of the wSAA framework as a function of data size.
The remainder of this paper is organized as follows. In Section 2, we introduce the wSAA framework mathematically. In Section 3, we present our findings through two CSO problems. Section 4 concludes and outlines future research directions.

2. Methodology

Consider a stochastic optimization problem with a given profit function $c(y, z)$, where $y \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}$ denotes the random variables that affect the value of the profit function, and $z \in \mathcal{Z} \subseteq \mathbb{R}^{d_z}$ denotes the decision variables, subject to a set of constraints that define the feasible region within which solutions can exist. Let $x \in \mathcal{X} \subseteq \mathbb{R}^{d_x}$ denote the contextual features related to the distribution of $y$. Suppose we have a new observation of the contextual data, denoted as $x_0 \in \mathcal{X}$. The CSO problem can be mathematically formulated as:
$$z^*(x_0) \in \arg\max_{z \in \mathcal{Z}} \mathbb{E}_{y}\left[\, c(y, z) \mid x = x_0 \,\right], \tag{1}$$
where $z^*(x_0)$ denotes the optimal decision of the oracle under the given context $x_0$.
A historical dataset $\{(x_i, y_i)\}_{i=1}^{N}$ is available to solve the optimization problem (1), where $x_i$ and $y_i$ represent the historical auxiliary data of $x$ and the historical realization of $y$ for the $i$th sample, respectively. To solve (1), we can adopt various ML techniques; that is, we leverage the data on $x$ to predict the value of $y$ given $x = x_0$. Under mild conditions, and with infinite data, we can obtain the point prediction $\hat{y} = \mathbb{E}[y \mid x = x_0]$. If $c(y, z)$ is linear in $y$, we can see from
$$\mathbb{E}\left[\, c(y, z) \mid x = x_0 \,\right] = c\left( \mathbb{E}[y \mid x = x_0],\, z \right) \tag{2}$$
that the point prediction $\hat{y}$ can be incorporated into the linear objective function to derive decisions. However, (2) does not hold when $c(y, z)$ is not linear in $y$. In this case, we need to estimate the conditional distribution of $y$ given $x = x_0$. Several approaches can estimate this conditional distribution using $\{(x_i, y_i)\}_{i=1}^{N}$ and approximate (1). This paper focuses on one such approach, termed the wSAA framework, which is motivated by a family of non-parametric ML models [10,14], such as kNN and decision trees [10]. The wSAA framework approximates (1) as follows:
$$\hat{z}^{\,\text{wSAA}}_{N}(x_0) \in \arg\max_{z \in \mathcal{Z}} \sum_{i=1}^{N} w_i(x_0)\, c(y_i, z), \tag{3}$$
where $\hat{z}^{\,\text{wSAA}}_{N}(x_0)$ denotes the predictive prescription and $w_i(x_0)$ are known as weight functions [10] that measure the distances between the new observation $x_0$ and the historical covariates $x_i$ ($i \in \{1, \dots, N\}$); there are various ways to determine $w_i(x_0)$. As long as the weights are nonnegative and sum to one, they can be seen as the estimated conditional distribution of $y$ given $x = x_0$ [10]. Notably, a point prediction can be seen as a specific case of a distributional prediction in which the predicted distribution places all its mass at a single point.
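As a concrete illustration of why (2) fails for non-linear objectives—and hence why a distributional rather than a point prediction is needed—consider a newsvendor-style profit $c(y, z) = 2\min\{y, z\} - z$. The following sketch is a hypothetical example with uniform demand (not taken from this paper), comparing the true expected profit with the profit evaluated at the point prediction:

```python
import numpy as np

# With the non-linear profit c(y, z) = 2*min(y, z) - z, plugging the point
# prediction E[y] into c overstates the true expected profit E[c(y, z)].
rng = np.random.default_rng(0)
y = rng.uniform(0, 20, size=100_000)   # hypothetical demand, mean 10
z = 10.0                               # a candidate production quantity

true_expected = np.mean(2 * np.minimum(y, z) - z)   # E[c(y, z)], about 5
plug_in = 2 * min(y.mean(), z) - z                  # c(E[y], z), about 10
print(true_expected, plug_in)
```

For objectives linear in $y$ the two quantities coincide, which is exactly the content of (2); the gap above is what motivates estimating the conditional distribution instead of a single point.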
For instance, for the kNN method, $w_i(x_0)$ can be defined as:
$$w^{\,k\text{NN}}_{i}(x_0) = \frac{1}{k}\, \mathbb{I}\left[\, x_i \text{ is a } k\text{NN of } x_0 \,\right]. \tag{4}$$
Accordingly, (1) can be reformulated as:
$$\hat{z}^{\,k\text{NN}}_{N}(x_0) \in \arg\max_{z \in \mathcal{Z}} \sum_{i \in \mathcal{N}_k(x_0)} \frac{1}{k}\, c(y_i, z), \tag{5}$$
where $\mathcal{N}_k(x_0) = \left\{\, i = 1, \dots, N : \sum_{j=1}^{N} \mathbb{I}\left[ \| x_0 - x_i \| \le \| x_0 - x_j \| \right] \le k \,\right\}$ is the set of the $k$ historical data samples closest to $x_0$. Note that $k$ is a hyperparameter in kNN regression, and there are multiple ways to measure the distance between data points in kNN, such as the Euclidean, Manhattan, and Minkowski distances.
Our interest lies in the performance of the data-driven decision under the wSAA framework. To this end, we assess the performance of a data-driven decision for an out-of-sample $x_0$ through the minimum relative regret, defined as
$$R^{\,\text{wSAA}}_{N}(x_0) := \frac{\mathbb{E}\left[ c\left( y, z^*(x_0) \right) \right] - \mathbb{E}\left[ c\left( y, \hat{z}^{\,\text{wSAA}}_{N}(x_0) \right) \right]}{\mathbb{E}\left[ c\left( y, z^*(x_0) \right) \right]}, \tag{6}$$
where $y$ remains a random variable given $x_0$, and the expectation is taken over $y$ given $x_0$. Here, $\mathbb{E}\left[ c\left( y, z^*(x_0) \right) \right]$ denotes the expected profit of the oracle, and $\mathbb{E}\left[ c\left( y, \hat{z}^{\,\text{wSAA}}_{N}(x_0) \right) \right]$ denotes the maximum expected profit obtained by the wSAA decision. If the minimum relative regret approaches 1, it indicates a total divergence between the expected profit of the oracle and the profit obtained by the data-driven decision under the wSAA framework.
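In code, the metric in (6) is a one-line computation (the names below are our own):

```python
def relative_regret(oracle_profit, policy_profit):
    """Minimum relative regret per (6): (oracle - policy) / oracle.
    0 means the data-driven policy matches the oracle; values near 1 mean
    it captures almost none of the oracle's expected profit."""
    return (oracle_profit - policy_profit) / oracle_profit

# Example: a policy earning 95 against an oracle earning 100 has regret 0.05.
r = relative_regret(100.0, 95.0)
```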
Notably, the wSAA framework tends to discard data that is not close to the new observation, thereby requiring a substantial amount of data, particularly data that is close to the current observation, to accurately estimate the distributions of the uncertain variables and subsequently maximize the given profit function [2]. This suggests that the wSAA framework may struggle when extrapolating to new observations that lie outside the range of known data samples. To comprehensively characterize this minimum relative regret, we design two distinct CSO problems. One problem features a new observation that is outside the range of known data samples, and the other features a new observation that is within the range of known data samples.

3. The Minimum Relative Regret of the wSAA Framework

In this section, we consider two CSO problems: the production decision-making problem (PDP), where the new observation is outside the range of the known data samples, and the ship maintenance problem (SMP), where the new observation is within that range. We use these two examples to demonstrate that the minimum relative regret of the wSAA framework can approach one when data samples are limited, regardless of whether the new observation falls outside or within the range of the known data samples.

3.1. The Analysis of the PDP

In this section, we analyze the PDP. Consider the demand function $D: \mathbb{R}_+ \to \mathbb{R}_+$, $D(x) = 10x$. In this context, $x$ represents a feature related to the demand, and $D(x)$ corresponds to the actual demand in terms of quantity. Historical data are given as two points in this demand space, $(x_1 = 1,\, y_1 = 10)$ and $(x_2 = 2,\, y_2 = 20)$, where $y_1$ and $y_2$ denote the historical realizations of demand. We consider a unit production cost of 1, a selling price per unit of 2, and a to-be-determined production quantity $z$. Now, we encounter a new context $x_0$, which equals a very large positive value $M$ and thus lies outside the range of the known data samples. Assuming we only have access to the two historical data points and do not know the underlying relationship (distribution) of demand given the feature, we must make the production decision. Therefore, we formulate the CSO problem with the aim of maximizing the profit (revenue minus cost) as follows:
$$\max_{z \ge 0} \mathbb{E}\left[\, c(\tilde{y}, z) \mid x = x_0 \,\right] = \max_{z \ge 0} \mathbb{E}\left[\, 2\min\{\tilde{y}, z\} - z \mid x = x_0 \,\right], \tag{7}$$
where $\tilde{y}$ denotes the uncertain demand given $x_0$.
To solve this problem, we apply the wSAA framework, which assigns varying weights to samples 1 and 2. These weights are determined by the similarity between the contextual data of the new observation and that of each historical sample, and are normalized to sum to one. We denote the weight of sample 1 by $\pi$ and the weight of sample 2 by $1 - \pi$. Therefore, we need to solve the following wSAA counterpart of (7):
$$\max_{z \ge 0}\; \pi \underbrace{\left[\, 2\min\{y_1, z\} - z \,\right]}_{\text{decision problem of } y_1} + (1 - \pi) \underbrace{\left[\, 2\min\{y_2, z\} - z \,\right]}_{\text{decision problem of } y_2}, \tag{8}$$
where the data-driven optimal decision is $\hat{z}^{\,\text{wSAA}}_{2,\text{PDP}}(x_0) = y_2 = 20$ when $\pi < 1/2$, $\hat{z}^{\,\text{wSAA}}_{2,\text{PDP}}(x_0) = y_1 = 10$ when $\pi > 1/2$, and any $z \in [10, 20]$ when $\pi = 1/2$. Because $\pi$ can take any value in $[0, 1]$ under different weighting schemes, every resulting data-driven optimal solution lies in the interval $[10, 20]$; that is, it can be written as $20 - 10\pi$ for some $\pi \in [0, 1]$. Given the underlying relationship $\tilde{y} = D(x) = 10x$, an upper bound on the expected profits of all these data-driven optimal solutions can be obtained as follows:
$$\max_{\pi \in [0, 1]} \mathbb{E}_{\tilde{y} = 10x}\left[\, 2\min\{\tilde{y},\, 20 - 10\pi\} - (20 - 10\pi) \,\middle|\, x = x_0 \,\right] = \max_{\pi \in [0, 1]} \left[\, 2\min\{10M,\, 20 - 10\pi\} - (20 - 10\pi) \,\right] = 20, \tag{9}$$
attained at $\pi^* = 0$ with $\hat{z}^{\,\text{wSAA},*}_{2,\text{PDP}}(x_0) = 20$; no weighting scheme can achieve an expected profit higher than 20. This result signifies that the weighting scheme achieving the highest expected profit assigns sample 1 a weight of 0 and sample 2 a weight of 1.
However, if we knew the underlying relationship $\tilde{y} = 10x$, then, given the CSO problem (7), we would solve
$$\max_{z \ge 0} \mathbb{E}_{\tilde{y} = 10x}\left[\, 2\min\{\tilde{y}, z\} - z \mid x = x_0 \,\right], \tag{10}$$
whose optimal solution, for the oracle with knowledge of the demand function, is $z^*(x_0) = D(M) = 10M$, with a corresponding expected profit of $10M$. Therefore, the minimum relative regret of the PDP is
$$R^{\,\text{wSAA}}_{2,\text{PDP}}(x_0) := \frac{10M - 20}{10M} \to 1, \quad \text{as } M \to \infty. \tag{11}$$
This analysis clearly shows that as $M$ grows large, the minimum relative regret $R^{\,\text{wSAA}}_{2,\text{PDP}}(x_0)$ tends towards one, highlighting the inherent limitations of the wSAA framework when data samples are limited. We next use another problem to show that the minimum relative regret of the wSAA framework tends towards one even when the new observation lies within the range of the known data samples.
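The PDP arithmetic above can be verified numerically. The sketch below (with $M = 10^6$ chosen purely for illustration) searches over all wSAA decisions of the form $20 - 10\pi$ and compares the best achievable profit with the oracle's:

```python
import numpy as np

def profit(y, z):
    """Newsvendor-style profit with price 2 and unit cost 1."""
    return 2.0 * min(y, z) - z

M = 1e6                    # the new context x0 = M, far outside the data
true_demand = 10.0 * M     # the oracle knows D(x0) = 10 * x0

# Best expected profit over every wSAA decision z_hat = 20 - 10*pi, pi in [0, 1]:
best_wsaa = max(profit(true_demand, 20.0 - 10.0 * pi)
                for pi in np.linspace(0.0, 1.0, 101))

oracle = profit(true_demand, true_demand)   # produce z* = 10M, earn 10M
regret = (oracle - best_wsaa) / oracle
```

Since every wSAA decision is at most 20 while the realized demand is 10M, the best data-driven profit is capped at 20, and the regret approaches one as M grows.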

3.2. The Analysis of the SMP

This section analyzes the SMP. A shipping company considers purchasing preventive maintenance services for a ship at a cost denoted by $c$ (set to $c = 100$ for this discussion). For a ship, the generated income relies heavily on its operational condition and regular maintenance: a well-kept vessel allows for uninterrupted service and consistent revenue, whereas a neglected ship might face service disruptions, reducing its earning potential. The ship’s state is represented by its deficiency number, denoted by $\tilde{y}$, which ranges from 0 to $Q$. Specifically, the income, which falls in the range of 0 to $K$, is assumed to be Lipschitz continuous with respect to the number of deficiencies, regardless of the maintenance decision, as depicted in Figure 1 (with $Q = 20$ and $K = 1000$).
Figure 1 indicates that a ship’s state significantly impacts its income. A pristine ship, with zero deficiencies, can generate a maximum income of $K = 1000$, irrespective of maintenance. As deficiencies rise to $0.5Q = 10$, an unmaintained ship becomes inoperable and profitless. However, maintenance at this stage rectifies all issues, restoring maximum income generation. When deficiencies reach the peak $Q = 20$, the ship becomes irreparably inoperable, making maintenance futile and income unattainable. Importantly, the cost of preventive maintenance is less than the maximum potential income. Therefore, when deficiencies reach $0.5Q = 10$, maintenance becomes a financially prudent decision.
We assume that a ship’s deficiency number, $\tilde{y}$, is a function of its age, denoted by $x$, with the relationship defined as $\tilde{y} = x$. As such, an ageing ship accumulates more deficiencies. A brand-new ship ($x = 0$) is free of deficiencies and does not need maintenance, while a ship at the end of its lifespan ($x = Q$) is highly deficient, and failures may occur regardless of maintenance attempts.
Assume a ship has aged to $x_0 = 0.5Q$; uncertainty arises regarding the relationship between age and deficiency number, that is, we do not know their underlying relationship. However, applying the wSAA framework to two known data points, $(x_1 = 0,\, y_1 = 0)$ and $(x_2 = Q,\, y_2 = Q)$, can guide the maintenance purchasing decision. In this case, the new observation falls within the range of the known data samples. As before, the wSAA framework assigns weight $\pi$ to sample 1 and $1 - \pi$ to sample 2 ($\pi \in [0, 1]$). We denote by $z$ the decision variable, with 1 signifying the purchase of the maintenance service and 0 the opposite. By reweighting $y_1$ (weight $\pi$) and $y_2$ (weight $1 - \pi$) and fixing $\pi$, we solve the following wSAA problem with the objective of maximizing profit (income minus maintenance cost):
$$\max_{z \in \{0, 1\}}\; \pi \underbrace{\left(\, (1 - z)K + z(K - c) \,\right)}_{\text{decision problem when } y_1 = 0} + (1 - \pi) \underbrace{\left(\, (1 - z) \cdot 0 + z(0 - c) \,\right)}_{\text{decision problem when } y_2 = Q}, \tag{12}$$
where the data-driven optimal decision is $\hat{z}^{\,\text{wSAA}}_{2,\text{SMP}}(x_0) = 0$, which holds regardless of the value of $\pi$. Given the underlying relationship $\tilde{y} = x$, the expected number of deficiencies is $\mathbb{E}[\tilde{y} \mid x_0 = 0.5Q] = 0.5Q$. Therefore, without maintenance ($\hat{z}^{\,\text{wSAA}}_{2,\text{SMP}}(x_0) = 0$) and referring to Figure 1, the expected profit from this data-driven decision is zero.
However, with knowledge of the underlying relationship $\tilde{y} = x$, and given the expected deficiency number $\mathbb{E}[\tilde{y} \mid x_0 = 0.5Q] = 0.5Q$, the fully informed oracle (see Figure 1) would recommend $z^*(x_0) = 1$, yielding an expected profit of $K - c$. Consequently, the minimum relative regret of the SMP becomes:
$$R^{\,\text{wSAA}}_{2,\text{SMP}}(x_0) := \frac{(K - c) - 0}{K - c} = 1. \tag{13}$$
This analysis again shows that the minimum relative regret $R^{\,\text{wSAA}}_{2,\text{SMP}}(x_0)$ equals one even when the new observation lies within the range of the known data samples, further highlighting the inherent limitations of the wSAA framework when data samples are limited.
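The SMP computation can likewise be checked with a short script. The piecewise-linear income function below is our own reading of Figure 1 (income $K$ at zero deficiencies, falling to 0 at $0.5Q$ without maintenance and at $Q$ with maintenance):

```python
c, K, Q = 100.0, 1000.0, 20.0   # maintenance cost, max income, max deficiencies

def income(y, z):
    """Piecewise-linear income from Figure 1 (our reconstruction)."""
    if z == 0:                                   # unmaintained: income hits 0 at y = Q/2
        return max(0.0, K * (1.0 - 2.0 * y / Q))
    return min(K, K * (2.0 - 2.0 * y / Q))       # maintained: income hits 0 only at y = Q

def profit(y, z):
    return income(y, z) - z * c

# wSAA objective (12) with weight pi on (x1=0, y1=0) and 1-pi on (x2=Q, y2=Q):
def wsaa_obj(z, pi):
    return pi * profit(0.0, z) + (1.0 - pi) * profit(Q, z)

# For every pi, skipping maintenance wins by exactly c, so z_hat = 0.
assert all(wsaa_obj(0, pi) - wsaa_obj(1, pi) == c for pi in (0.0, 0.25, 0.5, 0.75, 1.0))

# At x0 = Q/2 the true deficiency number is Q/2: the oracle maintains, wSAA does not.
oracle = max(profit(Q / 2, z) for z in (0, 1))   # = K - c
policy = profit(Q / 2, 0)                        # = 0
regret = (oracle - policy) / oracle              # = 1
```

The wSAA objective is blind to the new context $x_0 = 0.5Q$ because neither historical sample resembles it, which is exactly why the regret reaches one here.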

4. Conclusions and Future Research Directions

Our study provides a focused analysis of specific examples within the wSAA framework, which is commonly used in prescriptive analytics for dealing with uncertain optimization problems featuring non-linear objectives. We highlight a potential deficiency of the wSAA framework: when data samples are limited, the minimum relative regret—an indicator of the deviation between a data-driven policy and a full-information oracle—can approach one. This is illustrated through the analysis of two CSO problems: the PDP and the SMP. These analyses consider the wSAA framework’s capacities for both extrapolation and interpolation. The primary implication of our finding is that the wSAA framework may not perform well when data are limited. This insight is of value to both researchers and practitioners who utilize the wSAA framework, providing them with an important perspective on its limitations and potential areas for improvement.
The findings of this study open up several avenues for future research. The first and most immediate would be to examine ways to mitigate the high relative regret identified in this study, especially when working with limited data samples. This could involve developing new frameworks or improving upon the existing wSAA model. Another direction could be to analyze whether the same deficiency applies to other prescriptive analytics approaches. Such research could contribute to a broader understanding of how these models perform under limited data and inform the development of more robust strategies. Lastly, future research could apply our findings to more specific, practical scenarios, examining how the limitations of the wSAA framework impact decision-making in various industries. This could lead to industry-specific recommendations for enhancing the utility of prescriptive analytics approaches.

Author Contributions

Conceptualization, S.W.; methodology, S.W. and X.T.; software, X.T.; validation, S.W. and X.T.; formal analysis, S.W. and X.T.; investigation, S.W. and X.T.; writing—original draft preparation, S.W. and X.T.; writing—review and editing, S.W. and X.T.; visualization, X.T.; supervision, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SAA: sample average approximation
ML: machine learning
wSAA: weighted sample average approximation
CSO: contextual stochastic optimization
PTO: predict-then-optimize
ETO: estimate-then-optimize
SPO: smart predict-then-optimize
kNN: k-nearest neighbor
PDP: production decision-making problem
SMP: ship maintenance problem

References

  1. Tian, X.; Yan, R.; Wang, S.; Liu, Y.; Zhen, L. Tutorial on prescriptive analytics for logistics: What to predict and how to predict. Electron. Res. Arch. 2023, 31, 2265–2285.
  2. Bertsimas, D.; Koduri, N. Data-driven optimization: A reproducing kernel Hilbert space approach. Oper. Res. 2021, 70, 454–471.
  3. Birge, J.; Louveaux, F. Introduction to Stochastic Programming; Springer: New York, NY, USA, 2011.
  4. Ben-Tal, A.; El Ghaoui, L.; Nemirovski, A. Robust Optimization; Princeton University Press: Princeton, NJ, USA, 2009.
  5. Bertsimas, D.; Brown, D.; Caramanis, C. Theory and applications of robust optimization. SIAM Rev. 2011, 53, 464–501.
  6. Qi, M.; Shen, Z. Integrating prediction/estimation and optimization with applications in operations management. In Tutorials in Operations Research: Emerging and Impactful Topics in Operations; INFORMS: Catonsville, MD, USA, 2022; pp. 36–58.
  7. Kleywegt, A.; Shapiro, A.; Homem-de-Mello, T. The sample average approximation method for stochastic discrete optimization. SIAM J. Optim. 2002, 12, 479–502.
  8. Bertsimas, D.; Gupta, V.; Kallus, N. Data-driven robust optimization. Math. Program. 2018, 167, 235–292.
  9. Delage, E.; Ye, Y. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 2010, 58, 595–612.
  10. Bertsimas, D.; Kallus, N. From predictive to prescriptive analytics. Manag. Sci. 2020, 66, 1025–1044.
  11. Kallus, N.; Mao, X. Stochastic optimization forests. Manag. Sci. 2023, 69, 1975–1994.
  12. Ban, G.; Rudin, C. The big data newsvendor: Practical insights from machine learning. Oper. Res. 2019, 67, 90–108.
  13. Elmachtoub, A.; Grigas, P. Smart “predict, then optimize”. Manag. Sci. 2022, 68, 9–26.
  14. Notz, P.; Pibernik, R. Prescriptive analytics for flexible capacity management. Manag. Sci. 2022, 68, 1756–1775.
  15. Tian, X.; Yan, R.; Liu, Y.; Wang, S. A smart predict-then-optimize method for targeted and cost-effective maritime transportation. Transp. Res. Part B Methodol. 2023, 172, 32–52.
  16. Chan, T.; Shen, Z.; Siddiq, A. Robust defibrillator deployment under cardiac arrest location uncertainty via row-and-column generation. Oper. Res. 2018, 66, 358–379.
  17. Nadaraya, E. On estimating regression. Theory Probab. Appl. 1964, 9, 141–142.
  18. Watson, G. Smooth regression analysis. Sankhyā Indian J. Stat. Ser. A 1964, 26, 359–372.
  19. El Balghiti, O.; Elmachtoub, A.N.; Grigas, P.; Tewari, A. Generalization bounds in the predict-then-optimize framework. Math. Oper. Res. 2023, in press.
  20. Besbes, O.; Mouchtaki, O. How big should your data really be? Data-driven newsvendor: Learning one sample at a time. Manag. Sci. 2023, in press.
Figure 1. Income with respect to the deficiency number with and without maintenance.

Share and Cite

MDPI and ACS Style

Wang, S.; Tian, X. A Deficiency of the Weighted Sample Average Approximation (wSAA) Framework: Unveiling the Gap between Data-Driven Policies and Oracles. Appl. Sci. 2023, 13, 8355. https://doi.org/10.3390/app13148355

