Letters

Practical algorithms for value-at-risk portfolio optimization problems

Pages 1-9 | Received 11 Feb 2014, Accepted 14 Nov 2014, Published online: 22 Jan 2015

Abstract

This article compares algorithms for solving portfolio optimization problems involving value-at-risk (VaR). These problems can be formulated as mixed integer programs (MIPs) or as chance-constrained mathematical programs (CCMPs). We propose improvements to their state-of-the-art MIP formulations. We also specialize an algorithm for solving general CCMPs, featuring practical interpretations. We present numerical experiments on practical-scale VaR problems using various algorithms and provide practical advice for solving these problems.

1. Introduction

Value-at-risk (VaR) and conditional value-at-risk (CVaR) are both risk measures that describe tail risks of financial portfolios. VaR can be defined as a percentile of the loss distribution and CVaR as the average loss for losses exceeding the corresponding VaR. Readers are encouraged to refer to Rockafellar and Uryasev (2002) for rigorous discussions of VaR and CVaR for general distributions. The study of coherent risk measures (Artzner et al. 1999) and the development of a convex CVaR optimization framework (see Krokhmal et al. 2002, Rockafellar and Uryasev 2002, and references therein) show advantages of CVaR over VaR. In particular, scenario-based CVaR problems can be reformulated as linear programs (LPs). Even when the number of assets and the number of scenarios are large, there are special formulations and algorithms that can solve CVaR problems more efficiently than the originally proposed LP reformulation (see Hong and Liu 2009, Lim et al. 2010, Ogryczak and Śliwiński 2011, for example). Nevertheless, since its introduction as a risk measure, VaR has played an important role in both practical applications and academic research (see Jorion 1997, Pritsker 1997, for example). In particular, VaR has been written into industry regulations such as Basel Accord II (BCBS 2004) as a capital adequacy requirement. Despite the aforementioned favorable developments for CVaR, VaR retains a similar status in Basel Accord III (BCBS 2010), which is scheduled to be implemented from 2013 to 2018. In addition to its practical importance, VaR also attracts significant academic research attention. For example, Kou and Peng (2012) provide counterarguments against common criticisms of VaR, and Kou et al. (2013) recommend a new risk measure for regulatory purposes, the tail conditional median, which is essentially VaR at a higher confidence level. As a consequence, VaR optimization is both a valuable tool in practice and an interesting area of research.

VaR problems can be solved heuristically or optimally. Readers are encouraged to refer to Gilli and Këllezi (2002), Larsen et al. (2002), and Hong et al. (2014) for various heuristics for VaR problems. In this article we explore ways to solve VaR problems to global optimality. Since VaR is defined as a percentile of the portfolio loss distribution, optimization problems with a VaR constraint or objective can be naturally formulated as chance-constrained mathematical programs (CCMPs). A common approach to solving CCMPs is to reformulate them into mixed integer programs (MIPs). Two MIP formulations for VaR portfolio optimization problems are proposed by Benati and Rizzi (2007) and are commented on by Lin (2009). Moreover, there have been advances in recent years for solving general CCMPs directly (see Qiu et al. 2012, Song et al. 2012, Luedtke 2013, for example). We examine both the traditional approach and the new advances and compare their effectiveness in solving practical VaR problems.

This paper is organized as follows. Section 2 reviews the definition of VaR and presents common MIP formulations for VaR portfolio optimization problems. Section 3 proposes some improvements for the MIP formulations. The application of a decomposition algorithm for general CCMPs to VaR problems is shown in Section 4. Section 5 summarizes results of numerical experiments. Section 6 concludes with practical advice for solving large-scale VaR problems.

2. VaR problems and MIP formulations

Given a universe of $n$ assets and a set of $m$ scenarios, denote the loss (or the negative return) incurred in scenario $i$ per unit of asset $j$ by $l_{ij}$. We denote the loss matrix by $L = (l_{ij})$ for $i = 1, \dots, m$, $j = 1, \dots, n$. Each row of the loss matrix, $L_i$, represents the assets' losses in scenario $i$, $i = 1, \dots, m$. We denote the probability of realizing scenario $i$ by $p_i$, and the decision variable by $x \in X \subseteq \mathbb{R}^n$, where $X$ is the set of all feasible portfolios. We assume that the loss of asset $j$ in any scenario is proportional to $x_j$, $j = 1, \dots, n$. Then the portfolio loss random variable is represented by the vector $l = Lx$, where $l_i = L_i x$ represents the portfolio loss in the $i$th scenario.

Given $\alpha \in (0, 1)$, VaR at confidence level $\alpha$, or $\alpha$-VaR, of the portfolio loss random variable $l$ is defined as its $\alpha$-percentile. Rigorous definitions of $\alpha$-VaR for general distributions can be found in Rockafellar and Uryasev (2002). In our setting the $\alpha$-VaR is computed as follows.

Proposition 2.1.

Assume that for each portfolio $x \in X$ the distribution of the portfolio loss is concentrated in finitely many points, that is, the vector $l = Lx$ has finite dimension. Let those losses be ordered as $l_{(1)} \le l_{(2)} \le \dots \le l_{(m)}$ and let the corresponding probabilities be $p_{(1)}, p_{(2)}, \dots, p_{(m)}$. The $\alpha$-VaR of the loss is given by $\mathrm{VaR}_\alpha(l) = l_{(i_\alpha)}$, where $i_\alpha$ is the unique index such that $\sum_{i=1}^{i_\alpha} p_{(i)} \ge \alpha > \sum_{i=1}^{i_\alpha - 1} p_{(i)}$.

To simplify exposition we assume that all scenarios are equally probable, that is, $p_i = 1/m$ for $i = 1, \dots, m$. Then the unique index in Proposition 2.1 is given by $i_\alpha = \lceil \alpha m \rceil$; we write $q := m - i_\alpha$ for the maximum number of scenarios whose losses may exceed the $\alpha$-VaR. The extension to general discrete distributions is straightforward. We are interested in finding a portfolio with minimum $\alpha$-VaR of portfolio losses among all feasible portfolios, that is,
(1) $\min_{x \in X} \ \mathrm{VaR}_\alpha(Lx).$
The set of feasible portfolios $X$ is described by a given set of portfolio constraints. Examples of portfolio constraints considered in practice are a budget constraint ($\sum_{j=1}^{n} x_j = 1$), no short-selling constraints ($x_j \ge 0$, $j = 1, \dots, n$), etc. One can formulate (1) as a mathematical program using implication constraints, which requires additional binary variables and an auxiliary variable. In particular, by introducing binary variables $y_i \in \{0, 1\}$, $i = 1, \dots, m$, and an auxiliary variable $z$, we can reformulate (1) into the following mathematical program with implication constraints:
(2a) $\min_{x, y, z} \ z$
(2b) $y_i = 0 \;\Rightarrow\; L_i x \le z, \quad i = 1, \dots, m$
(2c) $\sum_{i=1}^{m} y_i \le q$
(2d) $x \in X, \quad y \in \{0, 1\}^m.$
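Proposition 2.1 reduces the $\alpha$-VaR computation for a fixed portfolio to sorting the scenario losses. As a small illustration (not part of the original article; the loss data below are synthetic and the function name is ours), the following sketch computes the empirical $\alpha$-VaR of a given portfolio under equally probable scenarios.

```python
import numpy as np

def empirical_var(L, x, alpha):
    """alpha-VaR of the portfolio loss l = Lx under m equally probable scenarios.

    Following Proposition 2.1 with p_i = 1/m, the alpha-VaR is the
    i_alpha-th smallest portfolio loss, where i_alpha = ceil(alpha * m).
    """
    losses = L @ x                       # portfolio loss in each scenario
    m = losses.shape[0]
    i_alpha = int(np.ceil(alpha * m))    # index from Proposition 2.1
    return np.sort(losses)[i_alpha - 1]  # 1-based index -> 0-based position

# Small synthetic example: 1000 scenarios, 5 assets, equal weights.
rng = np.random.default_rng(0)
L = rng.lognormal(mean=-1.0, sigma=1.0, size=(1000, 5))  # non-negative losses
x = np.full(5, 0.2)
print(empirical_var(L, x, alpha=0.95))
```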

Constraints (2b) and (2c) ensure that at most $q$ of the portfolio losses $L_1 x, \dots, L_m x$ may exceed $z$, so for any feasible solution $(x, y, z)$ the value of $z$ is an upper bound for $\mathrm{VaR}_\alpha(Lx)$. Moreover, since $z$ is otherwise unrestricted and (2) is a minimization problem, if $(x^*, y^*, z^*)$ is an optimal solution, then $z^*$ must be the least upper bound for $\mathrm{VaR}_\alpha(Lx^*)$, which is $\mathrm{VaR}_\alpha(Lx^*)$ itself. In particular, $z^*$ equals the $(m - i_\alpha + 1)$th largest portfolio loss, that is, $z^* = L_i x^*$ for some scenario $i$. If the probability of realizing each scenario is different, that is, $p_i \ne 1/m$ for some $i$, the only change needed is replacing (2c) by $\sum_{i=1}^{m} p_i y_i \le 1 - \alpha$.

A common way to solve (1) is to reformulate (2) into a MIP using the so-called 'big-M' constraints given by
(3) $L_i x - z \le M_i y_i, \quad i = 1, \dots, m.$
If one replaces (2b) with (3), then the MIP formulation for (1) is obtained, provided that the coefficient vector $M = (M_1, \dots, M_m)$ is sufficiently large. It is clear that, for any feasible solution, (3) enforces $L_i x \le z$ if $y_i = 0$. If $y_i = 1$, then (2b) does not impose any restriction on the variables $x$ and $z$. Thus if $M_i$ is large enough to ensure the constraint is not active when $y_i = 1$, then (3) is a valid reformulation of (2b). However, big-Ms that are too large result in poor continuous relaxations in branch-and-bound methods and hence hinder computational performance. Suitable choices of $M_i$, $i = 1, \dots, m$, for constraints (3) are the main focus of the next section.
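To make the big-M reformulation concrete, here is a minimal sketch of formulation (2a)-(2d) with the big-M constraints (3), written with the open-source PuLP modeller rather than CPLEX, which the paper uses. The feasible set $X = \{x : cx \ge r_{\min},\ 0 \le x \le 1\}$, the solver choice, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import pulp

def var_min_mip(L, c, r_min, alpha, bigM):
    """Big-M MIP (3) for the VaR-minimization problem (1).

    X = {x : c @ x >= r_min, 0 <= x <= 1} is an illustrative feasible set;
    bigM[i] must be a valid big-M in the sense discussed in Section 3.
    """
    m, n = L.shape
    q = m - int(np.ceil(alpha * m))          # max number of scenarios above z

    prob = pulp.LpProblem("VaR_min", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{j}", lowBound=0, upBound=1) for j in range(n)]
    y = [pulp.LpVariable(f"y_{i}", cat="Binary") for i in range(m)]
    z = pulp.LpVariable("z")                 # auxiliary VaR variable

    prob += z                                # (2a) minimize z
    for i in range(m):                       # (3)  L_i x - z <= M_i y_i
        prob += pulp.lpSum(L[i, j] * x[j] for j in range(n)) - z <= bigM[i] * y[i]
    prob += pulp.lpSum(y) <= q               # (2c) at most q scenarios above z
    prob += pulp.lpSum(c[j] * x[j] for j in range(n)) >= r_min  # x in X

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([v.value() for v in x]), z.value()
```

With the natural big-Ms discussed in Section 3 (for example, row sums of a non-negative loss matrix) this sketch corresponds to the NaturalM variant compared in Section 5.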

3. Big-Ms tailored for VaR problems

In general, finding suitable big-Ms for MIP formulations can be a challenging task. For portfolio optimization problems such as (1) there is often a natural way of bounding the portfolio losses using bounds on the individual assets' losses, which helps in calculating big-Ms. For example, to deactivate constraint (3) when $y_i = 1$, $M_i$ must be at least as large as $L_i x - z$ at an optimal solution; without knowing the optimal solution, an upper bound on the difference of scenario losses is needed. Knowing bounds $\underline{l} \le L_i x \le \bar{l}$ on the portfolio losses for some $\underline{l}$ and $\bar{l}$ and all $x \in X$, then $M_i = \bar{l} - \underline{l}$, $i = 1, \dots, m$, is valid for (3).

Due to limited liability, common stocks have a maximum loss of 100% of the invested amount. The big-Ms considered by Benati and Rizzi (2007) are based on this bound. Similarly, one can derive natural bounds for an insurance portfolio or any long-only credit portfolio because individual losses in these cases are non-negative. Prior to the developments in chance-constrained mathematical programming in the last few years (see Qiu et al. 2012, Luedtke 2013, among others), this natural way of calculating big-Ms was the state-of-the-art approach for solving VaR problems as MIPs. We present a different choice of big-Ms that is tailored to VaR problems. This choice entails additional computational costs, but we expect it to improve the overall computational efficiency. The following discussions about big-Ms not only reveal a choice of big-Ms for VaR problems but also provide meaningful interpretations for a specialized decomposition algorithm, which is presented in the next section.

Definition 3.1.

For any two scenarios $i, j \in \{1, \dots, m\}$, define the relative excessive loss (REL) of $i$ relative to $j$ as
(4) $\delta_{ij} := \max_{x \in X}\,(L_i x - L_j x).$

Consider the following pivoting procedure: for a scenario $i \in \{1, \dots, m\}$, called the pivot scenario, we calculate $\delta_{ij}$ for all $j = 1, \dots, m$ (note that $\delta_{ii} = 0$), and then sort them to obtain a permutation $(\sigma_1, \sigma_2, \dots, \sigma_m)$ of $\{1, \dots, m\}$ such that
(5) $\delta_{i\sigma_1} \le \delta_{i\sigma_2} \le \dots \le \delta_{i\sigma_m}.$
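The RELs (4) and the pivoting procedure (5) can be prototyped with small LPs. The sketch below assumes the illustrative feasible set $X = \{x : cx \ge r_{\min},\ 0 \le x \le 1\}$ used in the earlier sketches and solves each REL subproblem with scipy's linprog; it also reports the $(q+1)$th smallest REL, which is used as a tightened big-M later in this section (cf. (7)). Function and parameter names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def rel(L, c, r_min, i, j):
    """delta_ij = max_{x in X} (L_i - L_j) x, with X = {c@x >= r_min, 0<=x<=1}."""
    d = L[i] - L[j]
    # linprog minimizes, so negate the objective; c@x >= r_min becomes -c@x <= -r_min.
    res = linprog(-d, A_ub=-c[None, :], b_ub=[-r_min],
                  bounds=[(0.0, 1.0)] * L.shape[1], method="highs")
    return -res.fun

def pivot(L, c, r_min, i, q):
    """Sort the RELs of pivot scenario i in non-decreasing order, cf. (5).

    Returns the permutation sigma, the REL values, and the (q+1)th smallest REL.
    """
    m = L.shape[0]
    deltas = np.array([rel(L, c, r_min, i, j) for j in range(m)])  # delta_ii = 0
    sigma = np.argsort(deltas)            # non-decreasing RELs
    M_i = deltas[sigma[q]]                # (q+1)th smallest value (0-based index q)
    return sigma, deltas, M_i
```

Calling pivot once per scenario gives all the quantities needed for the tightened big-Ms proposed below, at the cost of many small LPs; Section 3 discusses when these LPs can be replaced by a faster specialized routine.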

The following proposition shows how the RELs and the pivoting procedure relate to the feasible set of (2). A valid choice of big-Ms for constraints (3) is then presented based on these relationships.

Proposition 3.2.

Given any pivot scenario $i \in \{1, \dots, m\}$, the following constraints are valid for any feasible solution $(x, y, z)$ to problem (2):
(6a) $L_i x - z \le \delta_{i\sigma_{q+1}}$
(6b) $y_j = 0 \;\Rightarrow\; L_i x - z \le \delta_{ij}, \quad j = 1, \dots, m$
(6c) $y_{\sigma_k} = 0 \;\Rightarrow\; L_i x - z \le \delta_{i\sigma_k} \le \delta_{i\sigma_{q+1}}, \quad k = 1, \dots, q+1.$

A formal proof of Proposition 3.2 is given in Appendix 1. We present here the intuitive interpretations of constraints (6). By definition of the $\alpha$-VaR in our setting, there can be at most $q$ scenarios with positive excessive losses over the $\alpha$-VaR. However, for any $x \in X$ and any pivot scenario $i$, by definition of the RELs and the pivoting procedure there are at least $q+1$ scenarios whose losses are greater than or equal to $L_i x - \delta_{i\sigma_{q+1}}$. Therefore the $\alpha$-VaR must be at least $L_i x - \delta_{i\sigma_{q+1}}$, which validates constraint (6a). If $y_j = 0$, the $j$th scenario loss is less than or equal to the $\alpha$-VaR. Therefore the difference between the $i$th scenario loss and the $\alpha$-VaR cannot exceed that between the $i$th scenario's and the $j$th scenario's loss, whose maximum is $\delta_{ij}$. The above interpretations remain valid for any upper bound of the $\alpha$-VaR instead of the $\alpha$-VaR itself. For any feasible solution $(x, y, z)$ to (2), $z$ is an upper bound of the $\alpha$-VaR for portfolio $x$, thus constraints (6a) and (6b) are valid. Based on (6a) we propose the following big-Ms.

Proposition 3.3.

With the following big-Ms, (3) are valid constraints for (2):
(7) $M_i = \delta_{i\sigma_{q+1}} = \max_{x \in X}\,(L_i x - L_{\sigma_{q+1}} x), \quad i = 1, \dots, m,$
where the second equality holds because of the definition of the RELs in (4).

Proof.

For any scenario $i$, if $y_i = 0$ then (3) requires that $L_i x \le z$, as desired. If $y_i = 1$, then (3) becomes (6a). Because (6a) is valid for any feasible solution to (2), the big-M constraint (3) is not active in this case. Therefore, (7) is valid because this choice of big-M makes (3) a valid reformulation of (2b).

In general, computing (7) requires solving $m(m-1)$ optimization problems of the form (4). Efficient algorithms for solving these problems, if available, should always be considered. For example, if $X$ has one linear constraint and bound constraints, then (4) is a continuous knapsack problem with one constraint, which can be solved efficiently with an $O(n \log n)$ algorithm. Suppose $X$ is described by a small number $N$ of constraints. In general, (4) has $n$ variables and $N$ constraints, which is usually a small optimization problem. Qiu et al. (2012) propose an iterative algorithm to tighten the big-Ms. In that proposal each iteration of the algorithm requires solving $m$ optimization problems with $O(n + m)$ variables and constraints, which may be computationally expensive in financial applications where $m$ is large and $m \gg n$.
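When $X$ consists of a single linear constraint $cx \ge r_{\min}$ with $c > 0$ plus the bounds $0 \le x \le 1$ (as in the experiments of Section 5), the REL subproblem (4) is a continuous covering-knapsack LP, and an $O(n \log n)$ routine of the kind mentioned above can be realized by a ratio-based greedy after one sort. The following sketch is one possible implementation under that assumption; it is not taken from the paper and all names are illustrative.

```python
import numpy as np

def rel_greedy(d, c, r_min):
    """max d@x  s.t.  c@x >= r_min, 0 <= x <= 1, assuming c > 0.

    Variables with d_j > 0 are set to 1 outright (they improve the objective
    and the coverage).  If the coverage c@x >= r_min is still not met, the
    remaining variables are raised in order of least objective loss per unit
    of coverage, i.e. by the ratio (-d_j)/c_j, with at most one fractional
    variable, as in the classical continuous knapsack greedy.
    """
    d, c = np.asarray(d, float), np.asarray(c, float)
    x = np.where(d > 0, 1.0, 0.0)
    shortfall = r_min - c @ x
    if shortfall <= 0:
        return x
    rest = np.where(d <= 0)[0]
    rest = rest[np.argsort(-d[rest] / c[rest])]   # smallest loss per coverage first
    for j in rest:
        if shortfall <= 0:
            break
        x[j] = min(1.0, shortfall / c[j])
        shortfall -= c[j] * x[j]
    if shortfall > 1e-9:
        raise ValueError("subproblem infeasible: sum(c) < r_min")
    return x
```

For a pair of scenarios $(i, j)$, the REL value is then obtained as `(L[i] - L[j]) @ rel_greedy(L[i] - L[j], c, r_min)`, which replaces the generic LP solve in the earlier pivoting sketch.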

A decomposition algorithm is presented in the next section. We will see the similarities between constraints (6b) and (6c) and the valid inequalities used in the decomposition algorithm in the following discussions.

4. A branch-and-cut decomposition algorithm for VaR problems

Note that (1) can be viewed as a special case of a CCMP because it can be cast as
(8) $\min\,\{\, z : \mathbb{P}(\ell(x) \le z) \ge \alpha, \ x \in X, \ z \in \mathbb{R} \,\},$
where $\ell(x)$ denotes the (discretely distributed) loss of portfolio $x$. In this section, we apply the decomposition algorithm proposed by Luedtke (2013) for general CCMPs to solve (8). Some components of the algorithm can be simplified due to the VaR problems' special structure.

The outer loop of the decomposition algorithm proposed by Luedtke (2013) is similar to a branch-and-bound method. A generic branch-and-bound method for solving MIPs can be summarized as follows. At the beginning, a continuous relaxation of a master problem is solved in which all integrality constraints on variables are relaxed. This initial master problem is commonly referred to as the root node. Two child nodes are then created, which inherit the master problem and one of the constraints $v \le \lfloor \hat{v} \rfloor$ or $v \ge \lceil \hat{v} \rceil$, where $v$ is an integral variable that has a fractional optimal value $\hat{v}$ at the current node and $\lfloor \hat{v} \rfloor$ (respectively, $\lceil \hat{v} \rceil$) is the largest (respectively, smallest) integer that is less than (respectively, greater than) $\hat{v}$. Based on the optimal solution of a child node, new child nodes are created. In general, a node is fathomed when it is infeasible, when all discrete variables happen to have integer optimal values, or when its optimal objective is worse than the best objective value among all previously found integral solutions. A fathomed node is not processed any further. A node is open if it has not been fathomed yet. The branch-and-bound method terminates when there are no open nodes.

As stated by Luedtke (2013), the only important difference between the decomposition algorithm and a standard branch-and-bound algorithm is how nodes are processed. The decomposition algorithm generates valid inequalities based on the optimal solution to the current node. The current node is re-solved whenever a valid inequality is violated and added to it. The initial master problem for the decomposition algorithm may not be a complete description of the original chance-constrained problem, so it is possible to generate valid inequalities even when the solution to the current node is integral. For ease of discussion, if the solution to the current node is integral, then the valid inequalities generated with respect to this solution are referred to as lazy constraints. Otherwise, when the solution to the current node is fractional, the valid inequalities generated are called user cuts. This terminology is consistent with that used in the CPLEX documentation (IBM ILOG 2013).

Applying the decomposition algorithm proposed by Luedtke (2013) to solve (8) requires solving a master problem of the following form:
(9a) $\min_{x, y, z} \ z$
(9b) $(x, y, z) \in C$
(9c) $\sum_{i=1}^{m} y_i \le q$
(9d) $x \in X, \quad y \in \{0, 1\}^m,$
where $C$ is a polyhedron that contains the feasible region of (2), and $x$, $y$ and $z$ are the same variables as in (2). Note that the description of $C$ may not contain all implication constraints at the beginning, but it is augmented while processing the branch-and-bound tree so that the correctness of the algorithm is guaranteed.

The following components are needed to fully specify our decomposition algorithm:

  (i) Given a solution to the current node, identify pivot scenarios from which violated valid inequalities may be obtained.

  (ii) Solve the subproblems that are required to generate valid inequalities.

  (iii) Identify a good violated inequality from the set of inequalities generated in step (ii).

Component (i) can be specified in a straightforward manner. As shown by Luedtke (2013), given a solution to the current node, any scenario whose implication constraint is violated can be considered to generate valid inequalities. For our problem specifically, we consider a scenario $i$ as a pivot scenario if $\hat{y}_i = 0$ and $L_i \hat{x} > \hat{z}$, where $(\hat{x}, \hat{y}, \hat{z})$ denotes the solution to the current node. There may be multiple scenarios that satisfy this condition. Luedtke (2013) shows that if the candidate solution is not feasible for (2), then for any of these scenarios at least one valid inequality, which will be presented below, is violated and can be added to $C$.

To specify component (ii), suppose scenario $i$ is chosen to be a pivot scenario. If one follows the steps of the general decomposition algorithm in the original proposal (Luedtke 2013), then the required single-scenario subproblems for scenario $i$ are $\max\{L_i x - z : L_j x \le z, \ x \in X\}$ for all $j = 1, \dots, m$. We suggest the following simplification of these subproblems:
(10) $\max\{L_i x - z : L_j x \le z, \ x \in X\} = \max_{x \in X}\,(L_i x - L_j x) = \delta_{ij},$
because at an optimal solution $z = L_j x$ and $z$ appears in only one constraint, $L_j x \le z$. Our suggestion helps to remove one variable and one constraint from every subproblem. This reduction not only helps to solve the subproblems more efficiently but also lends the subproblems practical interpretations in the context of VaR optimization, as discussed in Section 3.

To condense notation, the dependence of the permutation $\sigma$ on the pivot scenario $i$ is not written explicitly in the following discussion. After pivoting on scenario $i$ (i.e. calculating its RELs and sorting them) the following inequalities
(11) $L_i x - z \le \delta_{i\sigma_k} + (\delta_{i\sigma_{q+1}} - \delta_{i\sigma_k})\, y_{\sigma_k}, \quad k = 1, \dots, q,$
are valid for the feasible set of (2). Inequalities (11) can be viewed as an application of Lemma 1 of Luedtke (2013) to the feasible set of (2). Alternatively, they can also be seen as a way to formulate both (6a) and (6b) in one inequality for the scenarios $\sigma_1, \dots, \sigma_q$. In particular, if $y_{\sigma_k} = 1$, then (11) resembles (6a). If $y_{\sigma_k} = 0$, then (11) resembles (6b). The following theorem presents a family of valid inequalities obtained by mixing (11); these are the valid inequalities generated in component (ii).

Theorem 4.1.

For any pivot scenario $i$, let $R = \{t_1, \dots, t_a\} \subseteq \{\sigma_1, \dots, \sigma_q\}$ be any non-empty index set such that $\delta_{it_1} \le \delta_{it_2} \le \dots \le \delta_{it_a}$, where we set $t_{a+1} := \sigma_{q+1}$. Then the inequalities
(12) $L_i x - z \le \delta_{it_1} + \sum_{l=1}^{a} (\delta_{it_{l+1}} - \delta_{it_l})\, y_{t_l}$
are valid for the feasible set of (2).

Theorem 4.1 is a special case of Theorem 3 of Luedtke (2013) and readers are encouraged to refer to his discussion on the strength of these inequalities. It is beneficial to have $\sigma_1 \in R$ in (12) because in that case the inequalities are facet-defining in a certain sense (Luedtke 2013). Moreover, setting $R = \{\sigma_1, \sigma_k\}$ for some $k \in \{2, \dots, q\}$ yields the inequality
(13) $L_i x - z \le \delta_{i\sigma_1} + (\delta_{i\sigma_k} - \delta_{i\sigma_1})\, y_{\sigma_1} + (\delta_{i\sigma_{q+1}} - \delta_{i\sigma_k})\, y_{\sigma_k},$
which dominates inequality (11) for this particular $k$. We also observe that, when $i \in \{\sigma_1, \dots, \sigma_q\}$, setting $R = \{i\}$ (recall that $\delta_{ii} = 0$) makes (12) resemble the form of the big-M constraints (3) with big-Ms defined in (7). Therefore the specialized decomposition algorithm is able to generate big-M constraints when needed.

It is shown by Luedtke (2013) that constraints (12) are both a necessary and sufficient description of the implication constraints (2b). The discussion on big-Ms in Section 3 sheds some light on the interpretation of (12): they are a compact way of expressing the necessary conditions (6c). Specifically, any $(x, y, z)$ with binary $y$ that satisfies (12) also satisfies (6c).

Lastly, we specify component (iii). For each pivot scenario $i$ there are exponentially many ($2^q - 1$, to be precise) valid inequalities of the form (12). Separating a 'good' valid inequality from this exponential family is the next task. An $O(q \log q)$ separation algorithm is proposed by Günlük and Pochet (2001) and is employed in Luedtke (2013). We show that the complexity of the separation algorithm proposed by Günlük and Pochet (2001) improves from $O(q \log q)$ to $O(q)$ when it is implemented as a subroutine of the decomposition algorithm, because the RELs have already been sorted by the pivoting procedure. This is true not only for our VaR problem but also in general when implementing the decomposition algorithm of Luedtke (2013) to solve CCMPs.

We first formulate the separation problem mathematically. Adding $\delta_{i\sigma_{q+1}} - \delta_{it_1}$ to both sides of (12) we obtain $L_i x - z + \delta_{i\sigma_{q+1}} - \delta_{it_1} \le \delta_{i\sigma_{q+1}} + \sum_{l=1}^{a} (\delta_{it_{l+1}} - \delta_{it_l})\, y_{t_l}$. Then, replacing $\delta_{i\sigma_{q+1}} - \delta_{it_1}$ with the telescopic sum $\sum_{l=1}^{a} (\delta_{it_{l+1}} - \delta_{it_l})$ and grouping the like terms, we arrive at the following inequality:
(14) $L_i x - z \le \delta_{i\sigma_{q+1}} - \sum_{l=1}^{a} (\delta_{it_{l+1}} - \delta_{it_l})(1 - y_{t_l}).$
Although (14) is algebraically equivalent to (12), its left-hand side is independent of the choice of the index set $R$. Given a candidate solution $(\hat{x}, \hat{y}, \hat{z})$ to the current node, where $\hat{y}$ may be fractional, one can minimize the right-hand side of (14) over $R$ to identify a most violated inequality in (12), if any, or determine that none exists. The minimization problem to be solved is as follows:
(15a) $\min_{R} \ \delta_{i\sigma_{q+1}} - \sum_{l=1}^{a} (\delta_{it_{l+1}} - \delta_{it_l})(1 - \hat{y}_{t_l})$
(15b) $\text{s.t.} \quad R = \{t_1, \dots, t_a\} \subseteq \{\sigma_1, \dots, \sigma_q\}, \quad R \ne \emptyset,$
(15c) $\delta_{it_1} \le \delta_{it_2} \le \dots \le \delta_{it_a}, \quad t_{a+1} := \sigma_{q+1}.$

Next we examine optimal solutions of (15). Observe that, for any feasible $R$, the index set $R \cup \{\sigma_1\}$ is also feasible and has objective value no greater than that of $R$. Therefore there is at least one optimal solution in which $\sigma_1$ is a member. Following the arguments in Günlük and Pochet (2001) we can find an optimal index set $R^* = \{t_1, \dots, t_a\}$ which satisfies the conditions $t_1 = \sigma_1$ and $\hat{y}_{t_1} > \hat{y}_{t_2} > \dots > \hat{y}_{t_a}$. An algorithm for identifying such an index set is shown in lines 6 to 9 of Algorithm 2 in Appendix 2.
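Under the reconstruction of (12), (14) and (15) given above, such an index set can be found in a single pass over the pivot scenario's sorted RELs: scan $\sigma_1, \dots, \sigma_q$ in order of non-decreasing REL and keep an index whenever its $\hat{y}$ value is strictly below the smallest $\hat{y}$ kept so far. The sketch below is an illustrative $O(q)$ separation routine in that spirit; it is not the paper's Algorithm 2, and its inputs follow the earlier pivoting sketch.

```python
import numpy as np

def separate_pivot(loss_i_xhat, zhat, yhat, sigma, deltas, q, tol=1e-6):
    """Most violated mixing-type inequality (12) for one pivot scenario, in O(q).

    loss_i_xhat  : value of L_i @ xhat for the pivot scenario i
    yhat         : (possibly fractional) values of the binary variables
    sigma, deltas: output of the pivoting procedure (non-decreasing RELs)
    Returns the selected index set R and the violation, or (None, 0.0) if no
    inequality of the reconstructed form (12) is violated.
    """
    R, run_min = [], np.inf
    for k in range(q):                      # scan sigma_1, ..., sigma_q in order
        s = sigma[k]
        if yhat[s] < run_min - tol:         # keep running minima of yhat
            R.append(s)
            run_min = yhat[s]
    # Evaluate the right-hand side of (14) for the selected set R.
    rhs = deltas[sigma[q]]                  # delta_{i, sigma_{q+1}}
    uppers = [deltas[t] for t in R[1:]] + [deltas[sigma[q]]]
    for t, nxt in zip(R, uppers):
        rhs -= (nxt - deltas[t]) * (1.0 - yhat[t])
    violation = (loss_i_xhat - zhat) - rhs
    return (R, violation) if violation > tol else (None, 0.0)
```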

We state a basic version of the specialized algorithm for solving VaR problems in Algorithm 1 in the appendix. The algorithm is a specialization of the one proposed by Luedtke (2013) with a detailed separation subroutine, Algorithm 2. The general mechanism of the algorithm can be found in its original proposal and is omitted to avoid repetition. The user cuts added (Line 17 of Algorithm 1) do not affect the correctness of the algorithm but may be used to improve computational efficiency. For example, one can include user cuts only at every 50th node in the branch-and-bound tree, or include only user cuts that have significant violations. In addition, this decomposition algorithm can be used in conjunction with the special branching scheme proposed by Qiu et al. (2012). The special branching scheme adds a cut of the form $L_i x \le z$ to nodes in which $y_i$ has been fixed to zero by branching, and to all of their child nodes. Such local cuts enforce the implication $y_i = 0 \Rightarrow L_i x \le z$, which does not affect the correctness of the decomposition algorithm. Readers are encouraged to refer to Qiu et al. (2012) for details of the special branching scheme.

5. Numerical experiments

Numerical experiments are conducted using CPLEX 12.5.1 on a desktop running Ubuntu 12.04.4 LTS with an 8-core 3.4 GHz Intel Core i7 processor, 32 GB RAM and 32 GB swap space. We set the number of threads to one for fair comparison. Some advanced CPLEX callbacks are necessary to implement the decomposition algorithm and the special branching scheme described at the end of Section 4. However, some advanced CPLEX features, such as dynamic search, are disabled when advanced callbacks are used. Unless stated otherwise, advanced CPLEX features are disabled for fair comparison.

We considered VaR-minimization problems of the form (1) for reinsurance portfolios. An anonymous reinsurance company generously provided us with simulated reinsurance loss and premium data. The data were simulated to represent realistic risks and returns of real business. The characteristics of these loss matrices include non-negativity, sparsity (only a small fraction of the entries are non-zero), positive skewness, and high kurtosis. It is shown by Qiu et al. (2012) that sparsity is favorable for computational efficiency. The premiums $c_j$, $j = 1, \dots, n$, are the market premiums of the risk contracts. The decision variable $x_j \in [0, 1]$, $j = 1, \dots, n$, represents the fraction of risk contract $j$ that the reinsurer is willing to take. The portfolio premium is then $cx$. A reward constraint of the form $cx \ge \underline{c}$ imposes a minimum level of portfolio premium. In our experiments we fix the confidence level $\alpha$ and the minimum required premium $\underline{c}$.

In our first set of experiments we compare the following algorithms:

  1. NaturalM: MIP formulation with a natural choice of big-Ms. Since the reinsurance losses satisfy $l_{ij} \ge 0$ and the decision variables satisfy $0 \le x_j \le 1$, the portfolio loss in scenario $i$ is bounded by $0 \le L_i x \le \sum_{j=1}^{n} l_{ij}$, $i = 1, \dots, m$. The discussion at the beginning of Section 3 explains that $M_i = \sum_{j=1}^{n} l_{ij}$, $i = 1, \dots, m$, is a natural choice of big-Ms.

  2. TightM: MIP formulation with the big-Ms defined in (7). In our experiments $X$ contains only one linear constraint and bound constraints. Therefore, the REL subproblems can be solved with an $O(n \log n)$ algorithm. (A small numerical comparison of the natural and the tightened big-Ms is sketched after this list.)

  3. Decomp: Decomposition algorithm with lazy constraints only. In the initial master problem the cut set $C$ is empty. This is a minimal form of the decomposition algorithm because no user cut is added. When a strong valid inequality of the form (12) is violated by an integer solution to the current node, we add not only this inequality but also any inequality of the form (13) that is violated. As observed by Luedtke (2013), including inequalities (13) is a simple way to add additional sparse valid inequalities in one round.

  4. Decomp+: Decomposition algorithm with lazy constraints as well as user cuts at the root node. At the root node we continue generating valid inequalities with violation greater than 0.01 until none can be found.

  5. TightMB: TightM with special branching scheme implemented, as discussed at the end of Section 4.

  6. DecompB: Decomp with special branching scheme implemented.
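As a rough illustration of why the tightened big-Ms (7) help, the following sketch generates synthetic sparse non-negative losses, uses the illustrative feasible set from the earlier sketches, and compares the natural big-Ms (row sums of the loss matrix) with the tightened ones; the data, parameters and resulting numbers are purely illustrative and are not the paper's instances.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, alpha = 100, 10, 0.95
L = rng.exponential(scale=1.0, size=(m, n)) * (rng.random((m, n)) < 0.2)  # sparse losses
c = rng.uniform(0.5, 1.5, size=n)               # premiums of the risk contracts
r_min = 0.3 * c.sum()                           # reward (minimum premium) level
q = m - int(np.ceil(alpha * m))                 # scenarios allowed above z
box = [(0.0, 1.0)] * n

def rel(d):
    # delta = max d@x  s.t.  c@x >= r_min, 0 <= x <= 1  (one small LP per pair)
    res = linprog(-d, A_ub=-c[None, :], b_ub=[-r_min], bounds=box, method="highs")
    return -res.fun

natural_M = L.sum(axis=1)                       # NaturalM: 0 <= L_i x <= sum_j l_ij
tight_M = np.array([np.sort([rel(L[i] - L[j]) for j in range(m)])[q] for i in range(m)])
print(f"average big-M: natural {natural_M.mean():.2f}, tight {tight_M.mean():.2f}")
```

With non-negative losses and $0 \le x \le 1$, each REL is bounded above by the corresponding row sum, so the tightened values are never larger than the natural ones and are typically much smaller.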

We minimize the $\alpha$-VaR for 10 instances of reinsurance problems with 5000 scenarios and 25 risk contracts. A time limit of 3 hours is set for each instance, which is a realistic computational budget. We are interested in both the CPU time if an instance is solved and the final optimality gap if an instance is halted due to the time limit.

Table 1 summarizes the CPU times of the 10 instances using the different algorithms. We define the optimality gap as $(z_{\mathrm{inc}} - z_{\mathrm{lb}})/|z_{\mathrm{inc}}|$, where $z_{\mathrm{inc}}$ is the objective value of the best incumbent found up to the current node and $z_{\mathrm{lb}}$ is the best lower bound. Table 2 summarizes the optimality gaps after 3 hours for the instances that are not solved within the time limit. NaturalM serves as a benchmark for the comparison because it is the state-of-the-art method. We see from tables 1 and 2 that TightM, TightMB, Decomp, and Decomp+ all perform better than NaturalM: more instances are solved within the time limit, it takes less CPU time to solve an instance, and there is a smaller optimality gap when a problem is halted. It takes only about 20 seconds to solve the subproblems required to calculate the tight big-Ms; recognizing the knapsack problem structure enables us to calculate the tight big-Ms this efficiently. Our experiments show that TightM and TightMB are the fastest algorithms on the instances solved within the time limit. On average, TightM and TightMB can solve an instance more than 20 times faster than NaturalM does. However, since this observation takes into account only problems that are solved within an arbitrary time limit, it may overstate the improvement of TightM and TightMB over NaturalM. For the instances that are halted, either TightM, TightMB, or Decomp+ has the smallest optimality gap. In 1 out of 10 instances Decomp takes less CPU time than TightM does, and Decomp+ has a smaller optimality gap than TightM does.

Table 1. CPU time (in seconds) for solving $\alpha$-VaR minimization problems with 5000 scenarios and 25 risk contracts (including time for calculating big-Ms).

Table 2. Optimality gap after 3 hours for solving $\alpha$-VaR minimization problems with 5000 scenarios and 25 risk contracts.

The addition of the special branching scheme to TightM yields little additional improvement in either CPU time or optimality gap, as evidenced by comparing the columns TightM and TightMB in tables 1 and 2. We also observe a significant deterioration in CPU time and optimality gap when the special branching scheme is implemented in conjunction with Decomp, as seen by comparing the columns Decomp and DecompB in tables 1 and 2. Even when DecompB manages to solve an instance, its CPU time is much longer than that of Decomp. We currently have no explanation for why the special branching scheme degrades the decomposition algorithm so significantly.

For NaturalM and TightM, the advanced CPLEX features were disabled for the sake of fair comparison. In practice, however, these features should be enabled for NaturalM and TightM. The second to fifth columns of table 3 summarize the CPU time and final optimality gap for the same 10 instances when the CPLEX features are enabled (CPLEX default settings). We see that enabling the CPLEX features significantly improves performance for both NaturalM and TightM: more problems are solved within the given time limit and problems are solved much faster. Under the CPLEX default settings TightM can be 2–14 times faster than NaturalM. When a problem is halted, TightM always has a smaller optimality gap than NaturalM. With TightM, 9 out of 10 instances have small final optimality gaps. Relative to NaturalM with the CPLEX features disabled, the computational enhancement from calculating the tight big-Ms is more significant than that from enabling the CPLEX features: comparing the second column in table 3 with the TightM column in table 1, we see that TightM with the CPLEX features disabled takes less CPU time to solve an instance than NaturalM with the features enabled.

Table 3. CPU time (in seconds, including big-M calculation time) and optimality gap for solving $\alpha$-VaR minimization problems with 25 risk contracts.

The algorithms can find a high-quality solution quickly but devote most of the computation time to proving optimality. We define the incumbent gap as $(z_{\mathrm{inc}} - z^{*})/|z^{*}|$, where $z_{\mathrm{inc}}$ and $z^{*}$ are the objective values of the current best incumbent and of the optimal solution, respectively. Table 4 shows incumbent gaps for the various algorithms after 1 minute and after 10 minutes. Although solving VaR-minimization problems of the size we considered may take hours or even days, as shown in table 3, the algorithms can find a high-quality feasible solution in a few minutes. For example, using TightM with the CPLEX features enabled (see table 4), incumbents within a small percentage of the optimal solutions are found within 1 minute for all 10 instances. Moreover, after 10 minutes TightM with the CPLEX features enabled had found incumbents very close to the optimal solutions for 4 instances, yet 3 of them took more than 12 hours to eventually be solved to optimality.

Table 4. Percentage gaps between the best incumbent and the optimal solution for solving $\alpha$-VaR minimization problems with 5000 scenarios and 25 risk contracts.

In the second set of experiments we aim to provide practical advice for solving large-scale problems. All advanced CPLEX features are enabled. We considered 10 instances of reinsurance problems with 10 000 scenarios and 25 risk contracts. A 0.995-VaR minimization problem of this size contains 50 tail scenarios, that is, $q = m - i_\alpha = 50$. The number of scenarios and the number of tail scenarios are similar to those of problems considered in practice. In this case we set a time limit of 12 hours to replicate real-life situations where an overnight computational budget is allowed. The results are summarized in the sixth to ninth columns of table 3. Out of the 10 large-scale problems, TightM solved 5 instances within the time limit and reached a small final optimality gap for 7 instances. NaturalM, however, solved only 2 instances within the time limit and reached a comparably small optimality gap for only 3 instances. For these large instances TightM is 7–18 times faster than NaturalM when an instance is solved within the time limit. When an instance is halted, TightM always has a smaller optimality gap than NaturalM.

6. Conclusion

We examined several algorithms for solving VaR-minimization problems. Our experiments suggest that calculating tight big-Ms improves overall performance. When CPLEX's advanced features are disabled, variations of the specialized decomposition algorithm perform comparably to the MIP formulation with tight big-Ms. We would expect significant further improvements in computational performance if some of the advanced CPLEX features could be enabled for the decomposition algorithm. The efficacy of these methods on VaR-constrained problems remains to be investigated. At the current stage, we recommend TightM as a practical method for solving VaR problems.

Acknowledgments

The authors thank Dr James Luedtke and Dr Simge Küçükyavuz for offering insights into this work. Mingbin Feng and Andreas Wächter were supported partially by the NSF grant DMS-1216920.

References

  • Artzner, P., Delbaen, F., Eber, J.M. and Heath, D., Coherent measures of risk. Math. Financ., 1999, 9, 203–228. doi: 10.1111/1467-9965.00068
  • Basel Committee on Banking Supervision, Basel II: International convergence of capital measurement and capital standards. A revised framework. Bank for International Settlements, 2004.
  • Basel Committee on Banking Supervision, Basel III: A global regulatory framework for more resilient banks and banking systems. Bank for International Settlements, 2010.
  • Benati, S. and Rizzi, R., A mixed integer linear programming formulation of the optimal mean/value-at-risk portfolio problem. Eur. J. Oper. Res., 2007, 176, 423–434. doi: 10.1016/j.ejor.2005.07.020
  • Gilli, M. and Këllezi, E., A global optimization heuristic for portfolio choice with VaR and expected shortfall. In Computational Methods in Decision-making, Economics and Finance, Applied Optimization Series, edited by E.J. Kontoghiorghes, B. Rustem, and S. Siokos, pp. 167–183, 2002 (Springer: New York).
  • Günlük, O. and Pochet, Y., Mixing mixed-integer inequalities. Math. Program., 2001, 90, 429–457. doi: 10.1007/PL00011430
  • Hong, L.J., Hu, Z. and Zhang, L., Conditional value-at-risk approximation to value-at-risk constrained programs: A remedy via Monte Carlo. INFORMS J. Comput., 2014, 26, 385–400. doi: 10.1287/ijoc.2013.0572
  • Hong, L.J. and Liu, G., Simulating sensitivities of conditional value at risk. Manage. Sci., 2009, 55, 281–293. doi: 10.1287/mnsc.1080.0901
  • IBM ILOG, IBM ILOG CPLEX Optimization Studio. Getting Started with CPLEX., 2013, Version 12 Release 5.
  • Jorion, P., Value at Risk: The New Benchmark for Controlling Market Risk, 1997 (McGraw-Hill: New York).
  • Kou, S. and Peng, X., Comments on the consultative document 'Fundamental review of the trading book' released by the Bank for International Settlements on May 3rd, 2012.
  • Kou, S., Peng, X. and Heyde, C.C., External risk measures and Basel accords. Math. Oper. Res., 2013, 38, 393–417. doi: 10.1287/moor.1120.0577
  • Krokhmal, P., Palmquist, J. and Uryasev, S., Portfolio optimization with conditional value-at-risk objective and constraints. J. Risk, 2002, 4, 43–68.
  • Larsen, N., Mausser, H. and Uryasev, S., Algorithms for Optimization of Value-at-Risk. In Financial Engineering, e-Commerce and Supply Chain, Applied Optimization Series, edited by P. Pardalos, and V. Tsitsiringos, pp. 19–46, 2002 (Springer: New York).
  • Lim, C., Sherali, H.D. and Uryasev, S., Portfolio optimization by minimizing conditional value-at-risk via nondifferentiable optimization. Comput. Optim. Appl., 2010, 46, 391–415. doi: 10.1007/s10589-008-9196-3
  • Lin, C.-C., Comments on ‘A mixed integer linear programming formulation of the optimal mean/value-at-risk portfolio problem’. Eur. J. Oper. Res., 2009, 194, 339–341. doi: 10.1016/j.ejor.2008.01.041
  • Luedtke, J., A branch-and-cut decomposition algorithm for solving chance-constrained mathematical programs with finite support. Math. Program., 2013, 146, 219–244. doi: 10.1007/s10107-013-0684-6
  • Ogryczak, W. and Śliwiński, T., On solving the dual for portfolio selection by optimizing conditional value at risk. Comput. Optim. Appl., 2011, 50, 591–595. doi: 10.1007/s10589-010-9321-y
  • Pritsker, M., Evaluating value at risk methodologies: Accuracy versus computational time. J. Financ. Serv. Res., 1997, 12, 201–242. doi: 10.1023/A:1007978820465
  • Qiu, F., Ahmed, S., Dey, S.S. and Wolsey, L.A., Covering linear programming with violations. Working paper, 2012.
  • Rockafellar, R.T. and Uryasev, S., Conditional value-at-risk for general loss distributions. J. Bank. Financ., 2002, 26, 1443–1471. doi: 10.1016/S0378-4266(02)00271-6
  • Song, Y., Luedtke, J. and Küçükyavuz, S., Chance-constrained binary packing problems. Working paper, 2012.

Appendix 1. Proof for Proposition 3.2

Proof.

Let $i$ be a pivot scenario and let $(x, y, z)$ be any feasible solution to (2). Inequality (6a) follows from (2b), (2c) and the definition of the RELs. Since (2b) implies $L_j x \le z$ for every $j$ with $y_j = 0$ and (2c) implies $\sum_{j=1}^{m} y_j \le q$, at most $q$ scenarios can have losses exceeding $z$. By the definition of the RELs and the ordering (5) we know that $L_{\sigma_k} x \ge L_i x - \delta_{i\sigma_k} \ge L_i x - \delta_{i\sigma_{q+1}}$ for $k = 1, \dots, q+1$. Thus there are at least $q+1$ scenarios whose losses are greater than or equal to $L_i x - \delta_{i\sigma_{q+1}}$. Assume to the contrary that (6a) is violated, that is, $z < L_i x - \delta_{i\sigma_{q+1}}$. Then more than $q$ scenarios have losses exceeding $z$, which is a contradiction. Therefore (6a) is valid for all feasible solutions to (2).

The implication constraint (6b) results from the definition of the RELs and the implication constraint (2b). By (2b) we know that $L_j x \le z$ for all $j$ with $y_j = 0$. So $y_j = 0$ implies that $L_i x - z \le L_i x - L_j x \le \delta_{ij}$, which validates the implication constraint (6b).

Constraint (6c) is a consequence of (6a), (6b) and the ordering (5). This completes the proof of Proposition 3.2.

Appendix 2. Specialized decomposition algorithm for VaR problems