# Convergence of Subtangent-Based Relaxations of Nonlinear Programs


## Abstract


## 1. Introduction

## 2. Background

#### 2.1. Branch-and-Bound Optimization Using Convex Relaxations

**Definition 1.** Let $Q\subset {\mathbb{R}}^{n}$ be a nonempty compact set, let $f:Q\to \mathbb{R}$ be a continuous function, and define $\mathbb{I}Q:={\mathbb{IR}}^{n}\cap Q$. Suppose that, for each interval $W\in \mathbb{I}Q$, a function ${\underline{f}}_{W}^{C}:W\to \mathbb{R}$ is convex and underestimates f on W. Then, the collection ${\{{\underline{f}}_{W}^{C}\}}_{W\in \mathbb{I}Q}$ is a scheme of convex relaxations of f on Q.

#### 2.2. Convergence of Convex Relaxation Schemes

**Definition 2.**
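Although the precise statement is not reproduced above, the standard notion of convergence order for relaxation schemes from the McCormick-relaxation literature (e.g., Bompadre and Mitsos, 2012) can be sketched as follows; this is an assumed placeholder form, not verbatim from the paper, with $w(W)$ denoting the width of the interval $W$:

```latex
% Assumed standard form (placeholder, not verbatim from the paper):
% a scheme \{\underline{f}^{C}_{W}\}_{W \in \mathbb{I}Q} has pointwise
% convergence of order \gamma > 0 on Q if there exists \tau \ge 0 with
\sup_{\mathbf{x}\in W}\bigl(f(\mathbf{x}) - \underline{f}^{C}_{W}(\mathbf{x})\bigr)
  \;\le\; \tau \, w(W)^{\gamma}
  \quad \text{for every } W \in \mathbb{I}Q,
% where w(W) := \max_{i} (\overline{w}_i - \underline{w}_i).
```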

#### 2.3. Subtangents of Convex Relaxations

**Definition 3.** Given a convex set $Z\subset {\mathbb{R}}^{n}$ and a convex function $\varphi :Z\to \mathbb{R}$, a vector $\mathbf{s}\in {\mathbb{R}}^{n}$ is a subgradient of $\varphi$ at $\mathit{\zeta}\in Z$ if

$$\varphi \left(\mathbf{z}\right)\ge \varphi \left(\mathit{\zeta}\right)+{\mathbf{s}}^{\mathrm{T}}\left(\mathbf{z}-\mathit{\zeta}\right)\quad \text{for each}\ \mathbf{z}\in Z.$$

The set of all subgradients of $\varphi$ at $\mathit{\zeta}$ is the subdifferential $\partial \varphi \left(\mathit{\zeta}\right)$.

**Definition 4.** For any interval $W:=[\underline{\mathbf{w}},\overline{\mathbf{w}}]\in {\mathbb{IR}}^{n}$, denote the midpoint of W as ${\mathbf{w}}^{\mathrm{mid}}:=\frac{1}{2}(\underline{\mathbf{w}}+\overline{\mathbf{w}})\in W$. For any $\alpha \in [0,+\infty )$, define a centrally-scaled interval of W as

$$\left[{\mathbf{w}}^{\mathrm{mid}}-\tfrac{\alpha}{2}\left(\overline{\mathbf{w}}-\underline{\mathbf{w}}\right),\ {\mathbf{w}}^{\mathrm{mid}}+\tfrac{\alpha}{2}\left(\overline{\mathbf{w}}-\underline{\mathbf{w}}\right)\right],$$

i.e., the interval with the same midpoint as W whose componentwise width is scaled by $\alpha$.
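As a concrete illustration of Definition 4, the following Python sketch (illustrative only; the paper's own implementation is in Julia, and the function name here is hypothetical) computes a centrally-scaled interval, keeping the midpoint of $W$ and shrinking each component's width by the factor $\alpha$:

```python
import numpy as np

def centrally_scaled_interval(lo, hi, alpha):
    """Return the interval with the same midpoint as [lo, hi] but with
    componentwise width scaled by alpha (0 <= alpha <= 1 shrinks it)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    mid = 0.5 * (lo + hi)           # w^mid in Definition 4
    half = 0.5 * alpha * (hi - lo)  # half-width after scaling
    return mid - half, mid + half

# For W = [0,4] x [-2,2] and alpha = 0.5, the midpoint (2, 0) is
# preserved while the widths shrink from (4, 4) to (2, 2).
lo_s, hi_s = centrally_scaled_interval([0.0, -2.0], [4.0, 2.0], 0.5)
```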

**Assumption 1.**

**Theorem 1.** Suppose that Assumption 1 holds, and choose some $\alpha \in [0,1)$. For each $W\in \mathbb{I}Q$, choose some ${\mathit{\zeta}}_{W}$ in the centrally-scaled interval of W with scaling factor $\alpha$ (Definition 4) and a subgradient ${\mathit{\sigma}}_{W}\in \partial {\underline{f}}_{W}^{C}\left({\mathit{\zeta}}_{W}\right)$, and consider a subtangent ${\underline{f}}_{\mathrm{sub},W}^{C}:W\to \mathbb{R}$ of f on W with

$${\underline{f}}_{\mathrm{sub},W}^{C}\left(\mathbf{x}\right):={\underline{f}}_{W}^{C}\left({\mathit{\zeta}}_{W}\right)+{\mathit{\sigma}}_{W}^{\mathrm{T}}\left(\mathbf{x}-{\mathit{\zeta}}_{W}\right).$$
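The construction underlying Theorem 1 can be sketched in a few lines: given a convex relaxation and one of its subgradients at a point $\mathit{\zeta}_W$, the subtangent is the affine function tangent from below at $\mathit{\zeta}_W$. The Python sketch below is illustrative (the paper's implementation is in Julia, and the helper name is hypothetical):

```python
import numpy as np

def subtangent(relax, subgrad, zeta):
    """Build the affine subtangent x -> relax(zeta) + subgrad . (x - zeta)
    of a convex relaxation `relax` at `zeta`; `subgrad` must be a
    subgradient of `relax` at `zeta`, so the result underestimates it."""
    zeta = np.asarray(zeta, float)
    s = np.asarray(subgrad, float)
    c = relax(zeta)
    return lambda x: c + s @ (np.asarray(x, float) - zeta)

# Toy check: relax(x) = x1^2 + x2^2 is smooth, so its only subgradient
# at zeta is the gradient 2*zeta.
relax = lambda x: float(np.dot(x, x))
zeta = np.array([1.0, 1.0])
t = subtangent(relax, 2 * zeta, zeta)
assert abs(t(zeta) - relax(zeta)) < 1e-12       # tangency at zeta
assert t([0.0, 0.0]) <= relax([0.0, 0.0])       # underestimation
```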

## 3. New Convergence Results

#### 3.1. Relaxations of Functions

**Assumption 2.**

**Definition 5.**

**Theorem 2.**

**Proof.**

#### 3.2. Relaxations of Constrained Optimization Problems

**Assumption 3.**

**Definition 6.**

**Example 1.**

**Assumption 4.**

1. The NLP (2) satisfies the linear-independence constraint qualification (LICQ). That is, with $I\left(\mathbf{y}\right)$ denoting the subset of $\{1,\dots ,m\}$ for which ${g}_{i}\left(\mathbf{y}\right)=0$, the gradients $\nabla {g}_{i}\left(\mathbf{y}\right)$ for $i\in I\left(\mathbf{y}\right)$ are linearly independent. Hence, as shown by Rockafellar [46], there exist unique multipliers ${\mu}_{i}\left(W\right)\ge 0$ (depending on W but not $\mathbf{y}$) for each $i\in \{1,\dots ,m\}$ for which
   - $0=\nabla f\left(\mathbf{y}\right)+{\sum}_{i=1}^{m}{\mu}_{i}\left(W\right)\,\nabla {g}_{i}\left(\mathbf{y}\right)$, and
   - ${\mu}_{i}\left(W\right)=0$ for each $i\notin I\left(\mathbf{y}\right)$.
2. ${\mu}_{i}\left(W\right)>0$ for each $i\in I\left(\mathbf{y}\right)$.
3. Any vector $\mathbf{w}\in {\mathbb{R}}^{n}$ that satisfies both of the following conditions:
   - ${\mathbf{w}}^{\mathrm{T}}\nabla {g}_{i}\left(\mathbf{y}\right)=0$ for each $i\in I\left(\mathbf{y}\right)$, and
   - ${\mathbf{w}}^{\mathrm{T}}\left({\nabla}^{2}f\left(\mathbf{y}\right)+{\sum}_{i=1}^{m}{\mu}_{i}\left(W\right)\,{\nabla}^{2}{g}_{i}\left(\mathbf{y}\right)\right)\mathbf{w}=0$

   is also an element of the linear space tangent to $M\left(W\right)$ at $\mathbf{y}$.
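The stationarity and complementarity conditions in item 1 above can be checked numerically for a toy NLP. The following Python sketch (illustrative only, not one of the paper's test instances) verifies them for $\min\ x_1^2 + x_2^2$ subject to $g(\mathbf{x}) = 1 - x_1 - x_2 \le 0$, whose minimizer $\mathbf{y} = (0.5, 0.5)$ has the unique multiplier $\mu = 1$:

```python
import numpy as np

y = np.array([0.5, 0.5])        # candidate minimizer
mu = 1.0                        # unique KKT multiplier (LICQ holds here)

grad_f = 2 * y                  # gradient of f(x) = x1^2 + x2^2
grad_g = np.array([-1.0, -1.0]) # gradient of g(x) = 1 - x1 - x2
g_val = 1.0 - y.sum()           # constraint value: 0, so g is active

# Stationarity: 0 = grad f(y) + mu * grad g(y)
stationarity_residual = grad_f + mu * grad_g
assert np.allclose(stationarity_residual, 0.0)
# Complementarity: mu > 0 is consistent only because g is active (g(y) = 0)
assert np.isclose(g_val, 0.0) and mu > 0
```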

**Theorem 3.**

**Proof.**

#### 3.3. Constructing Subgradient-Cutting Mappings

## 4. Comparison with Established Relaxation Methods

## 5. Implementation and Examples

#### 5.1. Implementation in Julia

#### 5.2. Convergence Illustration

**Example 2.**

#### 5.3. Optimization Test Problems

**Example 3.** `bearing`, which has 14 continuous variables, 10 equality constraints, and 3 inequality constraints, and is as follows.

**Example 4.** `ex6_2_11`, with 4 continuous variables and 2 equality constraints and with bound constraints as provided by the library. This instance is shown below.

**Example 5.** `process`, shown as follows, with bound constraints as provided by the library.

#### 5.4. Application: Power Scheduling

## 6. Conclusions and Future Work

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Li, M.; Mu, H.; Li, N.; Ma, B. Optimal design and operation strategy for integrated evaluation of CCHP (combined cooling heating and power) system. Energy **2016**, 99, 202–220.
2. Tan, Z.F.; Zhang, H.J.; Shi, Q.S.; Song, Y.H.; Ju, L.W. Multi-objective operation optimization and evaluation of large-scale NG distributed energy system driven by gas-steam combined cycle in China. Energy Build. **2014**, 76, 572–587.
3. Wang, J.J.; Jing, Y.X.; Zhang, C.F. Optimization of capacity and operation for CCHP system by genetic algorithm. Appl. Energy **2010**, 87, 1325–1335.
4. Vo, D.N.; Ongsakul, W. Improved merit order and augmented Lagrange Hopfield network for short term hydrothermal scheduling. Energy Convers. Manag. **2009**, 50, 3015–3023.
5. Nguyen, T.T.; Vo, D.N.; Dinh, B.H. An effectively adaptive selective cuckoo search algorithm for solving three complicated short-term hydrothermal scheduling problems. Energy **2018**, 155, 930–956.
6. Dai, Y.; Wang, J.; Gao, L. Parametric optimization and comparative study of Organic Rankine Cycle (ORC) for low grade waste heat recovery. Energy Convers. Manag. **2009**, 50, 576–582.
7. Yu, H.; Eason, J.; Biegler, L.T.; Feng, X. Simultaneous heat integration and techno-economic optimization of Organic Rankine Cycle (ORC) for multiple waste heat stream recovery. Energy **2017**, 119, 322–333.
8. Yang, H.; Lu, L.; Zhou, W. A novel optimization sizing model for hybrid solar-wind power generation system. Sol. Energy **2007**, 81, 76–84.
9. Mouret, S.; Grossmann, I.E.; Pestiaux, P. A novel priority-slot based continuous-time formulation for crude-oil scheduling problems. Ind. Eng. Chem. Res. **2009**, 48, 8515–8528.
10. Zavala-Río, A.; Femat, R.; Santiesteban-Cos, R. An analytical study on the logarithmic mean temperature difference. Rev. Mexicana Ingeniería Química **2005**, 4, 406–411. (In English)
11. Demissie, A.; Zhu, W.; Belachew, C.T. A multi-objective optimization model for gas pipeline operations. Comput. Chem. Eng. **2017**, 100, 94–103.
12. Glotić, A.; Glotić, A.; Kitak, P.; Pihler, J.; Tičar, I. Optimization of hydro energy storage plants by using differential evolution algorithm. Energy **2014**, 77, 97–107.
13. Lopez-Echeverry, J.S.; Reif-Acherman, S.; Araujo-Lopez, E. Peng-Robinson equation of state: 40 years through cubics. Fluid Phase Equilib. **2017**, 447, 39–71.
14. Najafi, H.; Najafi, B.; Hoseinpoori, P. Energy and cost optimization of a plate and fin heat exchanger using genetic algorithm. Appl. Therm. Eng. **2011**, 31, 1839–1847.
15. Rios, L.M.; Sahinidis, N.V. Derivative-free optimization: A review of algorithms and comparison of software implementations. J. Glob. Optim. **2013**, 56, 1247–1293.
16. Holland, J.H. Adaptation in Natural and Artificial Systems; The University of Michigan Press: Ann Arbor, MI, USA, 1975.
17. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. **1997**, 11, 341–359.
18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 27 November–1 December 1995.
19. Gau, C.Y.; Stadtherr, M.A. Deterministic global optimization for error-in-variables parameter estimation. AIChE J. **2002**, 48, 1192–1197.
20. Vidigal, L.M.; Director, S.W. A design centering algorithm for nonconvex regions of acceptability. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. **1982**, 1, 13–24.
21. El Ghaoui, L.; Oks, M.; Oustry, F. Worst-case value-at-risk and robust portfolio optimization: A conic programming approach. Oper. Res. **2003**, 51, 543–556.
22. Paté-Cornell, M.E. Uncertainties in risk analysis: Six levels of treatment. Reliab. Eng. Syst. Saf. **1996**, 54, 95–111.
23. Huang, H.; Adjiman, C.S.; Shah, N. Quantitative framework for reliable safety analysis. AIChE J. **2002**, 48, 78–96.
24. Falk, J.E.; Soland, R.M. An algorithm for separable nonconvex programming problems. Manag. Sci. **1969**, 15, 550–569.
25. Du, K.; Kearfott, R.B. The cluster problem in multivariate global optimization. J. Glob. Optim. **1994**, 5, 253–265.
26. Bompadre, A.; Mitsos, A. Convergence rate of McCormick relaxations. J. Glob. Optim. **2012**, 52, 1–28.
27. Wechsung, A.; Schaber, S.D.; Barton, P.I. The cluster problem revisited. J. Glob. Optim. **2014**, 58, 429–438.
28. Adjiman, C.; Dallwig, S.; Floudas, C.; Neumaier, A. A global optimization method, αBB, for general twice-differentiable constrained NLPs: I. Theoretical advances. Comput. Chem. Eng. **1998**, 22, 1137–1158.
29. Adjiman, C.; Androulakis, I.; Floudas, C. A global optimization method, αBB, for general twice-differentiable constrained NLPs: II. Implementation and computational results. Comput. Chem. Eng. **1998**, 22, 1159–1179.
30. Mitsos, A.; Chachuat, B.; Barton, P.I. McCormick-based relaxations of algorithms. SIAM J. Optim. **2009**, 20, 573–601.
31. McCormick, G.P. Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Math. Program. **1976**, 10, 147–175.
32. Tsoukalas, A.; Mitsos, A. Multivariate McCormick relaxations. J. Glob. Optim. **2014**, 59, 633–662.
33. Scott, J.K.; Stuber, M.D.; Barton, P.I. Generalized McCormick relaxations. J. Glob. Optim. **2011**, 51, 569–606.
34. Najman, J.; Mitsos, A. Tighter McCormick relaxations through subgradient propagation. arXiv **2017**, arXiv:1710.09188.
35. Najman, J.; Mitsos, A. Convergence analysis of multivariate McCormick relaxations. J. Glob. Optim. **2015**, 66, 597–628.
36. Khan, K.A. Subtangent-based approaches for dynamic set propagation. In Proceedings of the 57th IEEE Conference on Decision and Control, Miami Beach, FL, USA, 17 December 2018.
37. Rote, G. The convergence rate of the sandwich algorithm for approximating convex functions. Computing **1992**, 48, 337–361.
38. Khan, K.A.; Watson, H.A.J.; Barton, P.I. Differentiable McCormick relaxations. J. Glob. Optim. **2017**, 67, 687–729.
39. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A fresh approach to numerical computing. SIAM Rev. **2017**, 59, 65–98.
40. Duran, M.A.; Grossmann, I.E. An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Math. Program. **1986**, 36, 307–339.
41. Fletcher, R.; Leyffer, S. Solving mixed integer nonlinear programs by outer approximation. Math. Program. **1994**, 66, 327–349.
42. Tawarmalani, M.; Sahinidis, N.V. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. Math. Program. **2004**, 99, 563–591.
43. Stuber, M.D. A differentiable model for optimizing hybridization of industrial process heat systems with concentrating solar thermal power. Processes **2018**, 6, 76.
44. Horst, R.; Tuy, H. Global Optimization: Deterministic Approaches; Springer: Berlin, Germany, 1993.
45. Smith, M.B.; Pantelides, C.C. Global optimisation of nonconvex MINLPs. Comput. Chem. Eng. **1997**, 21, S791–S796.
46. Rockafellar, R.T. Convex Analysis; Princeton Landmarks in Mathematics Series; Princeton University Press: Princeton, NJ, USA, 1970.
47. Shapiro, A. Perturbation theory of nonlinear programs when the set of optimal solutions is not a singleton. Appl. Math. Optim. **1988**, 18, 215–229.
48. Filippov, A.F. Differential Equations with Discontinuous Righthand Sides; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988.
49. Audet, C.; Hare, W. Derivative-Free and Blackbox Optimization; Springer Series in Operations Research and Financial Engineering; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
50. Griewank, A.; Walther, A. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, 2nd ed.; Other Titles in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2008.
51. Beckers, M.; Mosenkis, V.; Naumann, U. Adjoint mode computation of subgradients for McCormick relaxations. In Recent Advances in Algorithmic Differentiation; Forth, S., Hovland, P., Phipps, E., Utke, J., Walther, A., Eds.; Springer: Berlin, Germany, 2012; pp. 103–113.
52. Tawarmalani, M.; Sahinidis, N.V. A polyhedral branch-and-cut approach to global optimization. Math. Program. **2005**, 103, 225–249.
53. Wilhelm, M.; Stuber, M.D. Easy Advanced Global Optimization (EAGO): An open-source platform for robust and global optimization in Julia. In Proceedings of the AIChE Annual Meeting 2017, Minneapolis, MN, USA, 31 October 2017.
54. Wilhelm, M.; Stuber, M.D. EAGO: Easy Advanced Global Optimization Julia Package. Available online: https://github.com/PSORLab/EAGO.jl (accessed on 1 May 2018).
55. Dunning, I.; Huchette, J.; Lubin, M. JuMP: A modeling language for mathematical optimization. SIAM Rev. **2017**, 59, 295–320.
56. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. **2006**, 106, 25–57.
57. MINLPLib: A Library of Mixed-Integer and Continuous Nonlinear Programming Instances. 2019. Available online: http://www.minlplib.org/instances.html (accessed on 1 March 2019).
58. Ryoo, H.S.; Sahinidis, N.V. A branch-and-reduce approach to global optimization. J. Glob. Optim. **1996**, 8, 107–138.
59. Sahinidis, N.V. BARON: A general purpose global optimization software package. J. Glob. Optim. **1996**, 8, 201–205.
60. Hock, W.; Schittkowski, K. Test Examples for Nonlinear Programming Codes; Lecture Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1981.
61. Andrei, N. Nonlinear Optimization Applications Using the GAMS Technology; Springer Optimization and Its Applications; Springer: New York, NY, USA, 2013.
62. Misener, R.; Floudas, C.A. ANTIGONE: Algorithms for Continuous/Integer Global Optimization of Nonlinear Equations. J. Glob. Optim. **2014**, 59, 503–526.

**Figure 1.** A plot of the function f in Example 1 (dashed) and its associated subgradient-cutting relaxations (solid) on intervals of the form $[0.5-\epsilon ,0.5+\epsilon ]$ for $0<\epsilon <0.2$.

**Figure 2.** A log-log plot of ${d}_{f}:={\sup}_{x\in X\left(\epsilon \right)}(f\left(x\right)-{\varphi}_{X\left(\epsilon \right)}\left(x\right))$ vs. $w:=2\epsilon$ (circles) and a reference line (dotted) with a slope of 2, for Example 1.
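Reading a convergence order off a log-log plot like Figure 2 amounts to fitting the slope of $\log d_f$ against $\log w$. The following Python sketch (with synthetic data and a hypothetical helper name; the actual values from the example are not reproduced here) shows the calculation:

```python
import numpy as np

def estimated_order(widths, gaps):
    """Least-squares slope of log(gap) vs. log(width); for a scheme with
    d_f ~ tau * w^gamma this recovers the convergence order gamma."""
    slope, _ = np.polyfit(np.log(widths), np.log(gaps), 1)
    return slope

# Synthetic data obeying d_f = 0.3 * w^2, i.e. second-order convergence,
# so the fitted slope should be (numerically) 2.
w = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
d = 0.3 * w**2
gamma = estimated_order(w, d)
```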

| Component | Lower Bound | Upper Bound |
|---|---|---|
| $x_1$ | 1 | 6 |
| $x_2$ | 1 | 6 |
| $x_3$ | 1 | 6 |
| $x_4$ | 1 | 6 |
| $x_6$ | 990 | 1000 |
| $x_7$ | 0.0001 | 2 |
| $x_8$ | 0.0001 | 2 |
| $x_9$ | 1 | 2 |
| $x_{10}$ | 50 | 50 |
| $x_{11}$ | 550 | 600 |
| $x_{12}$ | 1 | 3 |
| $x_{13}$ | 0.0001 | 3 |
| $x_{14}$ | 0.001 | 10 |

**Table 2.** The impact of the number of linearization points $(1+|B_W|)$ on our implementation's branch-and-bound solution time for (9).

| # Linearization Points | Lower-Bounding Time (s) * | Upper-Bounding Time (s) * | Total Time (s) * | # Iterations * |
|---|---|---|---|---|
| 1 | 73.94 | 654.10 | 733.47 | 15,309 |
| 2 | 17.39 | 93.66 | 111.84 | 2217 |
| 4 | 10.25 | 26.13 | 36.68 | 737 |
| 8 | 12.54 | 17.47 | 30.26 | 484 |
| 16 | 20.34 | 13.09 | 33.68 | 402 |
| 32 | 35.76 | 13.38 | 49.46 | 364 |
| 64 | 66.38 | 10.35 | 77.17 | 332 |

**Table 3.**The solution statistics for our implementation when applied to (9).

| # lin. pts. † | N-M ‡ | Lower-Bounding Time (s) * | Upper-Bounding Time (s) * | Total Time (s) * | # Iterations * |
|---|---|---|---|---|---|
| 1 | Yes | 70.56 | 596.16 | 672.04 | 15,309 |
| 1 | No | 53.15 | 581.41 | 639.77 | 15,309 |
| 2 | Yes | 17.39 | 93.66 | 111.84 | 2217 |
| 2 | No | 16.96 | 125.41 | 140.77 | 2923 |
| 4 | Yes | 10.25 | 26.13 | 36.68 | 737 |
| 4 | No | 11.33 | 55.91 | 67.71 | 1194 |

**Table 4.**The results of BARON v18.5.8 in GAMS when applied to (9), with 1000 s of solver time allocated in each case.

| Range Reduction Used? | No | Yes |
|---|---|---|
| Wall clock time (s) | 1020.00 | 1030.00 |
| Total CPU time (s) | 1000.00 | 1000.00 |
| Total no. of BaR * iterations | 282,404 | 467,659 |
| Best solution found at node | 184,858 | 1 |
| Max. no. of nodes in memory | 40,177 | 53,517 |
| Best upper bound identified | 1.95 | 1.91 |
| Best lower bound identified | 1.02 | 1.65 |

**Table 5.**The solution statistics for our implementation when applied to (10).

| # lin. pts. † | N-M ‡ | Lower-Bounding Time (s) * | Upper-Bounding Time (s) * | Total Time (s) * | # Iterations * |
|---|---|---|---|---|---|
| 1 | Yes | 573.26 | 2356.06 | 3133.24 | 470,291 |
| 1 | No | 400.54 | 2028.58 | 2600.11 | 479,501 |
| 2 | Yes | 323.64 | 882.06 | 1278.40 | 210,928 |
| 2 | No | 360.95 | 1393.27 | 1872.82 | 310,279 |
| 4 | Yes | 251.70 | 513.98 | 806.21 | 136,107 |
| 4 | No | 322.95 | 1003.76 | 1404.62 | 245,125 |

**Table 6.**The solution statistics for (10), using BARON with range reduction.

| Wall clock time (s) | 13.00 |
|---|---|
| Total CPU time used (s) | 12.48 |
| Total no. of BaR * iterations | 12,475 |
| Max. no. of nodes in memory | 357 |

**Table 7.**The solution statistics for our implementation applied to (11).

| # lin. pts. † | N-M ‡ | Lower-Bounding Time (s) * | Upper-Bounding Time (s) * | Total Time (s) * | # Iterations * |
|---|---|---|---|---|---|
| 1 | Yes | 10.06 | 33.71 | 45.69 | 4133 |
| 1 | No | 17.63 | 95.34 | 116.28 | 8939 |
| 2 | Yes | 7.20 | 13.32 | 21.67 | 2312 |
| 2 | No | 13.66 | 37.72 | 53.54 | 4662 |
| 4 | Yes | 6.87 | 7.65 | 15.45 | 1394 |
| 4 | No | 13.27 | 19.32 | 34.13 | 3044 |
| 8 | Yes | 8.39 | 5.13 | 14.28 | 1064 |
| 8 | No | 18.53 | 16.36 | 36.30 | 2769 |

**Table 8.**The solution statistics for (11) using BARON, with 1000 s of solver time allocated in each case.

| Range Reduction Used? | No | Yes |
|---|---|---|
| Wall clock time (s) | 1002.00 | 1.00 |
| Total CPU time (s) | 1000.00 | 0.44 |
| Total no. of BaR * iterations | 330,033 | 563 |
| Best solution found at node | 298,558 | 9 |
| Max. no. of nodes in memory | 40,307 | 51 |
| Best upper bound determined | −1161.34 | −1161.34 |
| Best lower bound determined | −3085.43 | −1162.37 |

**Table 9.**The bound constraints for decision variables in (12).

| Component | Lower Bound | Upper Bound |
|---|---|---|
| $x_1$ | 0.5 | 1.2 |
| $x_2$ | 0.5 | 1.2 |
| $x_3$ | 0 | 0.3 |
| $x_4$ | 0 | 0.3 |
| $x_5$ | 0.90909 | 1.0909 |
| $x_6$ | 0.90909 | 1.0909 |
| $x_7$ | 0.90909 | 1.0909 |
| $x_8$ | $-0.5$ | 0.3 |
| $x_9$ | $-0.5$ | 0.3 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Cao, H.; Song, Y.; Khan, K.A. Convergence of Subtangent-Based Relaxations of Nonlinear Programs. *Processes* **2019**, *7*, 221. https://doi.org/10.3390/pr7040221