
Enhancing an existing algorithm for small-cardinality constrained portfolio optimisation

Pages 967-981 | Received 29 Jun 2022, Accepted 24 May 2023, Published online: 22 Jun 2023

Abstract

The efficient frontier (EF) allows an investor to (in theory) maximise their return for a given level of risk. Portfolios on the EF may contain many assets, making their management difficult and possibly expensive. An investor may wish to impose an upper bound on the number of assets in their portfolio, leading to so-called cardinality constrained efficient frontiers (CCEFs). Recently, a new algorithm was developed to find CCEFs for small cardinalities. Relative to other algorithms for this problem, this algorithm is very intuitive, and its authors demonstrated that it performs nearly at the state of the art. However, we have found that the algorithm seems to struggle in certain situations, particularly when faced with both bonds and equities. While preserving its intuitiveness, we modified the algorithm to improve its CCEFs. This improvement comes with longer runtimes, but we think many practitioners will prefer the modified algorithm. Some practitioners may prefer other algorithms, because of the runtimes or because some points on our CCEFs still fall short of optimality. However, in addition to being intuitive, our modified algorithms (and the original version) find low-risk points on the CCEF that a state-of-the-art algorithm does not.

1. Introduction

Portfolio construction is a highly personalised task. Depending on one’s time horizon, investment goals, and other factors, an individual may be willing to take a relatively large amount of risk, with the expectation that this risk is accompanied by larger expected returns, or that individual may be willing to take only a very small amount of risk, knowing that their expected returns will be smaller. The trade-off between risk and return is characterised nicely by the efficient frontier (EF), the set of portfolios that maximise return for a given level of risk, introduced in Markowitz’s seminal work on mean-variance portfolio optimisation (Markowitz, 1952). In theory, a rational investor’s portfolio should lie somewhere on the EF; if it does not, they are failing to maximise their return for their level of risk. In practice, a portfolio along the EF may not be optimal because the weights attributed to each asset are very sensitive to errors in the estimates of the returns, risks, and covariances of the assets (e.g., Michaud & Michaud, 2007). Nonetheless, mean-variance portfolio optimisation remains the backbone of many portfolio optimisation studies (e.g., Chen et al., 2021; Kalayci et al., 2019; van Staden et al., 2021).

Some studies have addressed the mean-variance portfolio optimisation problem with constraints that reflect practical considerations. A very common constraint in research studies (e.g., Chang et al., 2000) is to require that an investor takes only long positions (i.e., no short selling), as short selling is not permitted in many investment accounts (e.g., registered accounts in Canada). From here on, we consider the portfolio optimisation problem with this constraint in place, but for brevity we will no longer explicitly mention it. Another constraint, and the focus of this paper, is known as the cardinality constraint. It restricts an investor to owning only a limited number of assets. Although this is not a legal limitation, an investor may want to limit their holdings for several reasons, including transaction costs and the complexity of managing a portfolio with many holdings (e.g., Cesarone et al., 2013). The introduction of this constraint makes finding the EF much more challenging, as the problem becomes NP-hard (e.g., Moral-Escudero et al., 2006).

A number of studies have developed algorithms to approximate the cardinality constrained efficient frontier (CCEF) (e.g., Cesarone et al., 2013; Chang et al., 2000). Recently, Graham and Craven (2021) developed an algorithm for this problem, specifically designed for cases in which the cardinality is small (i.e., eight or less in all cases presented, and generally even smaller). The authors highlighted their algorithm’s intuitiveness and demonstrated its competitiveness (i.e., with respect to the risk-return tradeoff) with the results from Cesarone et al. (2013) for multiple benchmark datasets. The algorithm’s implementation in R is freely available,[1] poising the algorithm to potentially make an impact in practice.

Although the algorithm appears to be effective in many situations, we have found cases in which it fails to faithfully represent the true CCEF across the entire risk-return spectrum. In what follows, we describe why the algorithm’s performance is subpar in some cases and how it can be improved. Section 2 provides a background for our study, outlining Graham and Craven’s algorithm (GC algorithm), the datasets considered in this study, and the identification of a problem with the algorithm; Sections 3 and 4 outline two causes of the algorithm’s poor performance; Section 5 provides initial results comparing the performance of the algorithm in its current state to its performance with some modifications; Section 6 addresses practical considerations; Section 7 compares the algorithms in more detail; Section 8 compares the performance of the algorithm with modifications to the performance of the algorithm in Cesarone et al. (2013); Section 9 discusses our findings; and Section 10 concludes the paper.

2. Background

2.1. Definitions and notation

We suppose that there are n assets available in which we can invest. The mean vector and covariance matrix of the returns on those assets are denoted by μ and Σ, respectively (here μ is a vector of length n and Σ is an n×n matrix). Portfolios are represented by vectors x of length n, whose components sum to one. In what follows we take all vectors to be column vectors.

The mean and variance (i.e., risk) of the return on the portfolio x are given by R = R(x) = μᵀx and V = V(x) = xᵀΣx. The portfolio x is said to be efficient if R(x′) ≤ R(x) whenever x′ is such that V(x′) ≤ V(x). Intuitively, efficient portfolios minimise portfolio risk for a given level of (expected) return.
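Both quantities are cheap to evaluate, being a linear and a quadratic form in x. As a concrete illustration, here is a minimal sketch in R (the language of the publicly available implementation); the three-asset μ, Σ, and x below are hypothetical values, not data from any of the datasets used in this paper.

    # Portfolio return R(x) = mu'x and risk V(x) = x' Sigma x.
    portfolio_return <- function(x, mu) as.numeric(t(x) %*% mu)
    portfolio_risk   <- function(x, Sigma) as.numeric(t(x) %*% Sigma %*% x)

    # Three hypothetical assets (illustrative values only).
    mu    <- c(0.08, 0.05, 0.03)
    Sigma <- matrix(c(0.04, 0.01, 0.00,
                      0.01, 0.02, 0.00,
                      0.00, 0.00, 0.01), nrow = 3)
    x <- c(0.5, 0.3, 0.2)       # long-only weights summing to one
    portfolio_return(x, mu)     # R(x) = 0.061
    portfolio_risk(x, Sigma)    # V(x)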

In general, EFs can be parametrised via a (hypothetical) risk aversion parameter λ that ranges over the unit interval [0,1]. The case λ=0 corresponds to a risk-neutral investor who seeks to maximise return at all costs and the case λ=1 corresponds to an extremely risk-averse individual who seeks to minimise volatility at all costs. Intermediate values of λ correspond to less extreme degrees of risk aversion, with the value of λ dictating the relative importance a given individual places on risk versus return. The parameter λ is an artificial (but intuitively natural) device for identifying efficient portfolios and constructing EFs.

To construct an efficient portfolio, one fixes a value of λ and minimises (with respect to x) the objective function λV(x) − (1−λ)R(x) subject to the constraint ∑ᵢ xᵢ = 1, as well as any other desired constraints. For example, to exclude short positions we would impose the n additional constraints xᵢ ≥ 0 for every i. For a given value of λ, we let x(λ) denote the solution to the indicated constrained minimisation problem, and let Rλ = μᵀx(λ) and Vλ = x(λ)ᵀΣx(λ) denote the expected return and risk of the corresponding portfolio. Although it may not be immediately obvious to all readers, x(λ) is in general an efficient portfolio, so that (Vλ, Rλ) is a point on the EF. To construct the entire frontier, one simply solves the constrained minimisation problem for a grid of λ values in [0,1], computes and stores the corresponding points (Vλ, Rλ), and then plots the results.
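To make the construction concrete, the following sketch traces an EF with the quadprog package. This is our own illustration, not the authors' implementation, and it reuses the hypothetical mu and Sigma from the sketch above. solve.QP minimises (1/2)xᵀDx − dᵀx subject to AᵀX ≥ b with the first meq constraints treated as equalities, so we set D = 2λΣ and d = (1−λ)μ.

    library(quadprog)

    # One efficient portfolio for a given lambda, under sum(x) = 1 and x >= 0.
    efficient_point <- function(lambda, mu, Sigma) {
      n    <- length(mu)
      Dmat <- 2 * lambda * Sigma            # quadratic term: lambda * V(x)
      dvec <- (1 - lambda) * mu             # linear term: (1 - lambda) * R(x)
      Amat <- cbind(rep(1, n), diag(n))     # first column: sum(x) = 1; rest: x >= 0
      bvec <- c(1, rep(0, n))
      x    <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution
      c(V = as.numeric(t(x) %*% Sigma %*% x), R = as.numeric(mu %*% x))
    }

    # Trace the frontier over a grid of lambda values. lambda = 0 is excluded
    # because solve.QP needs a positive definite Dmat; that endpoint is simply
    # 100% in the highest-return asset.
    lambdas  <- seq(0.01, 1, length.out = 100)
    frontier <- t(sapply(lambdas, efficient_point, mu = mu, Sigma = Sigma))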

The geometric properties of the efficient frontier depend heavily on the nature of the additional constraints. If there are no additional constraints, then points (V, R) on the EF obey a quadratic relationship. If we preclude short selling, the EF becomes “piecewise quadratic” (Best & Grauer, 1990). In this sense, the constrained EF is locally, but not globally, quadratic. At the points along the constrained EF where an asset’s allocation decreases from a positive allocation to zero, the EF is not differentiable and is said to have a kink. These kinks can vary in size, such that the EF may still be approximately globally quadratic or may deviate substantially from a quadratic shape.

2.2. Graham and Craven’s algorithm for cardinality constrained portfolio optimisation

Many algorithms have been used to develop CCEFs. Several of these are heuristics that involve randomness (e.g., Chang et al., 2000; Lwin et al., 2014) and are believed to work more effectively (i.e., find a good solution with shorter runtimes) for larger cardinalities (e.g., Graham & Craven, 2021). In contrast, the GC algorithm is purported to be exact, deterministic, and more effective for small cardinalities. We note, however, that an exact algorithm typically refers to an algorithm that is guaranteed to solve a problem optimally instead of approximately (e.g., Cesarone et al., 2013). As we have mentioned and will later show, the algorithm is not exact under this definition. Graham and Craven (2021) observed that their algorithm does not always perform optimally, so this seems to be an issue of terminology rather than a misunderstanding of the performance of the algorithm.

With n representing the total number of assets available and K representing the maximum number of assets that can be in a portfolio (i.e., the maximum cardinality), the GC algorithm begins by determining which of the (n choose K) sub-EFs (i.e., EFs for each subset of K assets) are dominated by a CCEF composed of other assets. A sub-EF is dominated if it lies below and to the right of another EF (see the left side of Figure 1).[2] To efficiently identify dominated sub-EFs, the GC algorithm begins by ordering the assets according to their return, from largest to smallest. Beginning with the portfolio composed of the first K assets, then the first K−1 assets and asset K+1, and so on, the algorithm assesses whether a sub-EF is dominated by the current CCEF, which is composed of points from sub-EFs that were previously classified as non-dominated. The points included in the current CCEF are those that minimise the objective function described in Section 2.1. Rather than using the entire sub-EF to determine if it is dominated, which would be very time-consuming, only three points are used: the point with maximum return (i.e., (Vmax, Rmax)), the point with minimum risk (i.e., (Vmin, Rmin)), and a third point computed from the coordinates of the first two under the assumption that the sub-EF is globally quadratic.
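A minimal sketch of this sweep follows, under the assumptions of the sketches in Section 2.1 (it reuses the hypothetical mu and Sigma and the efficient_point helper, and is not the authors' code). Conveniently, R's combn enumerates subsets in exactly the order described: the first K assets, then the first K−1 assets with asset K+1, and so on.

    # Enumerate K-subsets of assets ordered by decreasing return and compute
    # the three summary points used in the dominance check.
    K <- 2
    by_return <- order(mu, decreasing = TRUE)
    subsets   <- combn(by_return, K, simplify = FALSE)

    three_points <- function(idx, mu, Sigma) {
      mu_s    <- mu[idx]
      Sigma_s <- Sigma[idx, idx, drop = FALSE]
      # Maximum-return point: 100% in the subset's highest-return asset.
      i_max <- which.max(mu_s)
      Rmax  <- mu_s[i_max]; Vmax <- Sigma_s[i_max, i_max]
      # Minimum-risk point: lambda = 1 in the objective of Section 2.1.
      p_min <- efficient_point(1, mu_s, Sigma_s)
      Vmin  <- p_min[["V"]]; Rmin <- p_min[["R"]]
      # Approximate mid point (quadratic assumption; see Section 3).
      Rmid     <- 0.5 * (Rmin + Rmax)
      Vmid_hat <- 0.75 * Vmin + 0.25 * Vmax
      rbind(c(Vmin, Rmin), c(Vmid_hat, Rmid), c(Vmax, Rmax))
    }

    pts <- lapply(subsets, three_points, mu = mu, Sigma = Sigma)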

Figure 1. A visual description of curve dominance. On the left, C1 dominates C2; on the right, neither curve dominates the other. This figure has been taken from Graham and Craven (2021) and has been reprinted by permission of the publisher (Taylor & Francis Ltd, http://www.tandfonline.com).


The way in which these three points are used to determine if a sub-EF is dominated is described in detail in Section 3. The process involves checking if each of the three points on a sub-EF is dominated. If all three are dominated, then the sub-EF is dominated; otherwise, it is not. The number of non-dominated sub-EFs is generally a small fraction of (n choose K). For the non-dominated sub-EFs, several points (e.g., 200) are computed, so a substantial amount of time is saved by computing only three points for the other sub-EFs. Next, Graham and Craven’s (2021) pooling/sifting algorithm is used to eliminate dominated points. The non-dominated points remaining at the end of this process form the CCEF.

To our knowledge, the most direct competitor to the GC algorithm is the algorithm of Cesarone et al. (2013). Like Graham and Craven’s, this algorithm is deterministic and effective for small cardinalities. It also appears to operate much faster than the GC algorithm (see Table 1 of Cesarone et al. (2013) and Tables 2 and 3 of Graham and Craven (2021)). However, there are two advantages of the GC algorithm that may lend it more practical appeal. First, code for the algorithm’s implementation in R is available to the public. Second, the algorithm is much more intuitive than that of Cesarone et al. (2013), as it involves directly considering all (n choose K) sub-portfolios. Especially if longer runtimes are not an issue, a practitioner may wish to use the GC algorithm because they have a better understanding of it. For these reasons, we believe it is important to address the shortcomings of the algorithm in its current state.

2.3. Data

In our analysis we consider the five datasets from Beasley’s OR-Library[3] (Beasley, 1990) that have become a benchmark for cardinality constrained portfolio optimisation (e.g., Cesarone et al., 2013; Chang et al., 2000; Graham & Craven, 2021). These datasets provide the mean and risk (i.e., variance) of the weekly log returns[4] of 31, 85, 89, 98, and 225 assets from stock market indices around the world (i.e., Hang Seng, DAX 100, FTSE 100, S&P 100, and Nikkei 225) over the period spanning from March 1992 to September 1997, as well as the correlations between the log returns of the assets. We note that the risk was computed with a denominator of nt, where nt is the number of return observations, as opposed to the alternative of nt − 1.

We also created two of our own datasets consisting of adjusted close prices for some exchange traded funds (ETFs) whose data were obtained from Yahoo Finance. These two datasets contain ETFs offered by BlackRock, Inc. and The Vanguard Group, Inc., so we refer to them as the BlackRock dataset and the Vanguard dataset, respectively. The BlackRock dataset contains daily adjusted closing prices from November 10, 2006 to December 3, 2021 for 15 ETFs, which we broadly categorised as 10 equity ETFs and five bond ETFs. Likewise, the Vanguard dataset consists of daily adjusted closing prices for 36 ETFs, including 32 equity ETFs and four bond ETFs. This dataset spans from January 2, 2008 to December 20, 2021. More details about the ETFs included in these datasets are available in the online supplementary material. When computing risk for ETFs in these datasets, we used a denominator of nt − 1, where nt is the number of days, to obtain an unbiased estimate.
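The two conventions differ by a factor of nt/(nt − 1), which is negligible for long series but worth knowing when reproducing published risk figures. A minimal illustration in R (the returns below are made up):

    r <- c(0.010, -0.020, 0.015, 0.003)   # illustrative per-period log returns
    n <- length(r)
    var_nt  <- mean((r - mean(r))^2)      # denominator n_t (OR-Library convention)
    var_nt1 <- var(r)                     # R's var() divides by n_t - 1 (unbiased)
    all.equal(var_nt * n / (n - 1), var_nt1)   # TRUE: they differ by n_t/(n_t - 1)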

2.4. Problem identification

Our original objective when considering the GC algorithm was to explore existing ways in which we could find efficient cardinality constrained portfolios. Our interest was in the context of a collaboration with a large wealth management company, so the resulting portfolio selection problem required that we consider assets from both bond and equity classes. The results of using the GC algorithm on the BlackRock dataset are shown in Figure 2(a), which also includes the EF. Figure 2(b) shows the weights associated with each ETF for portfolios along the EF. Notably, excluding assets with very small weights (i.e., less than 3%), the portfolios along the EF are always composed of five or fewer assets. Thus, the CCEF with K=5 should nearly overlap the EF, but we can see in Figure 2(a) that this is not the case, indicating a problem with the algorithm.

Figure 2. Efficient frontiers (EFs) for the BlackRock dataset. a) shows the cardinality constrained efficient frontier (CCEF) for various cardinalities, as well as the EF for reference. b) shows the weights applied to the funds for portfolios along the EF.


3. Quadratic approximation for third point on the sub-efficient frontier

To determine if a given sub-EF is dominated by the current CCEF, the GC algorithm employs a clever geometric device. Recall that we let (Vmin, Rmin) and (Vmax, Rmax) denote the minimum and maximum points, respectively, on the sub-EF. We also set Rmid = 0.5(Rmin + Rmax) and let (Vmid, Rmid) denote the point on the sub-EF whose expected return lies halfway between Rmin and Rmax. For each pair of adjacent points on the current CCEF, the algorithm implicitly considers three triangles: one formed by those points and (Vmin, Rmin), one formed by those points and (Vmax, Rmax), and one formed by those points and (Vmid, Rmid). Using matrix algebra, the algorithm then determines if each of these triangles has positive or negative area. Since this is not explicitly described in Graham and Craven (2021), we provide a thorough description of this process in the supplementary material for end users of the algorithm who may not have a substantial (or recent) background in matrix algebra. If any of those triangles has a negative area, the corresponding sub-EF is not dominated by the current CCEF and points on that sub-EF are eventually added to the current CCEF. Otherwise, the sub-EF is classified as dominated and none of its points are added to the current CCEF.
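The orientation test itself reduces to the sign of a 2×2 determinant. The sketch below is our own rendering of this geometric device (not the authors' code), with points stored as (risk, return) pairs. Note that which sign corresponds to the dominated side depends on the order in which the vertices are taken, so the GC implementation's convention may be the mirror image of this one.

    # Signed area of the triangle (p1, p2, p3): half the cross product of
    # (p2 - p1) and (p3 - p1). Its sign says on which side of the segment
    # p1 -> p2 the point p3 lies.
    signed_area <- function(p1, p2, p3) {
      0.5 * ((p2[1] - p1[1]) * (p3[2] - p1[2]) -
             (p3[1] - p1[1]) * (p2[2] - p1[2]))
    }

    # With adjacent CCEF points (1, 1) and (2, 2), the point (2, 1) lies
    # below/right of the segment (the dominated side under this ordering).
    signed_area(c(1, 1), c(2, 2), c(2, 1))   # -0.5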

The computational cost associated with identifying the maximum point is trivial, since the corresponding portfolio is simply 100% in the asset with the highest return. The computational cost associated with the minimum and intermediate points, however, is non-trivial, since one must first numerically determine the corresponding portfolios. Although this is a straightforward quadratic programming problem, repeated solutions do require a non-trivial amount of runtime. To reduce computational times, the GC algorithm employs a closed-form approximation to Vmid that essentially replaces the actual sub-EF with the quadratic curve connecting the minimum and maximum points. The abscissa of the indicated quadratic at the ordinate Rmid is demonstrably V̂mid = 0.75Vmin + 0.25Vmax, so replacing the actual sub-EF with the approximating quadratic is tantamount to approximating Vmid with V̂mid.[5] The implicit assumption here is that the actual sub-EF can be approximated reasonably well by a quadratic curve. To the extent that this is true, the approximation V̂mid will be accurate; otherwise, it may not be.
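For readers who want the algebra behind this value, the following short derivation assumes the approximating quadratic has its vertex at the minimum-risk point (so that V′(Rmin) = 0), which is the natural choice and reproduces the stated formula:

    V(R) = V_{\min} + a\,(R - R_{\min})^2, \qquad
    a = \frac{V_{\max} - V_{\min}}{(R_{\max} - R_{\min})^2},

    \hat{V}_{mid} = V\!\left(\tfrac{R_{\min} + R_{\max}}{2}\right)
                  = V_{\min} + \tfrac{1}{4}\,(V_{\max} - V_{\min})
                  = 0.75\,V_{\min} + 0.25\,V_{\max}.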

Graham and Craven (2021) claim that the approximation is exact for K=2, but this is in fact only true if both assets are included in the minimum risk portfolio. If the minimum risk portfolio includes only one of the assets, then V̂mid underestimates Vmid. Consequently, it is possible for a portfolio that is dominated to appear as though it is not. However, in the cases we have observed, the approximation has still been reasonably close, suggesting that only a few portfolios would be classified incorrectly. In addition, the only possible negative consequence here is increased runtime.

Conversely, the assumption that the sub-EF is quadratic can lead to concluding that a portfolio is dominated when it is not. If this portfolio belongs to the CCEF, then its exclusion can lead to an inaccurate representation of the actual CCEF. Although the approximation can be very accurate in some situations (e.g., Figure 3(a)), several portfolios have an EF with a substantial kink caused by the no-short-selling constraint. As shown in Figure 3(b), these EFs can take a shape that is clearly not quadratic, causing the approximation for the third point to be very poor.

Figure 3. Examples of cases in which the approximation for the third point on the sub-efficient frontier (EF) is a) good and b) very poor.


A noteworthy distinction between the Beasley OR-Library datasets and our BlackRock dataset is the inclusion of bond ETFs. We originally hypothesised that the inclusion of different kinds of assets may result in cases in which the approximation is less effective. To test this theory, we measured the difference between Vmid and V̂mid for several portfolios. We constructed two sets of portfolios using ETFs in the BlackRock dataset. All portfolios had four ETFs, with the first set (m = 450) consisting of portfolios with two equity ETFs and two bond ETFs and the second set (m = 210) consisting of portfolios with only equity ETFs. As shown in Figure 4, the distributions of the approximation’s error are very different for the two sets of portfolios, with larger errors for the portfolios with both equity and bond ETFs.

Figure 4. The distribution of the error of the approximation for the third point on the sub-efficient frontier (EF) for two sets of portfolios using exchange traded funds (ETFs) from the BlackRock dataset. The error is the difference in risk (i.e., variance) for a point with return halfway between the minimum and maximum return for the portfolio. All portfolios are composed of four ETFs, with either two bond and two equity ETFs or four equity ETFs.


Since most investors have multi-class portfolios, a portfolio optimisation algorithm must perform well in a multi-class setting. Due to the sizeable errors of the approximation, the first modification of the GC algorithm that we recommend is computing Vmid exactly rather than approximating it. Although this is less efficient than approximating it, computing a single point along a sub-EF is not prohibitively expensive.
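A sketch of this first modification in R, again with quadprog (our illustration, not the authors' code): Vmid is obtained exactly by minimising variance subject to the target return Rmid, alongside the budget and no-short-selling constraints.

    # Exact V_mid: minimise x' Sigma x subject to sum(x) = 1, mu'x = R_mid,
    # and x >= 0 (the first two constraint columns are equalities, meq = 2).
    exact_Vmid <- function(Rmid, mu, Sigma) {
      n    <- length(mu)
      Amat <- cbind(rep(1, n), mu, diag(n))
      bvec <- c(1, Rmid, rep(0, n))
      x    <- solve.QP(2 * Sigma, rep(0, n), Amat, bvec, meq = 2)$solution
      as.numeric(t(x) %*% Sigma %*% x)
    }

    # Usage with the hypothetical assets from the Section 2.1 sketches:
    exact_Vmid(0.055, mu, Sigma)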

4. Gaps in the cardinality constrained efficient frontier

Due to the way it adds points to the existing CCEF, the GC algorithm can leave substantial gaps in the CCEF during its process of finding non-dominated sub-EFs. Adjacent points along a CCEF can be spaced very unevenly, with clusters of points that are very close together followed by points that are quite far apart (i.e., the gaps; see Figure 5). These gaps are eventually filled (to some degree) using Graham and Craven’s (2021) pooling/sifting algorithm. However, the pooling/sifting process occurs after determining which sub-EFs are non-dominated, and the gaps in the CCEF can play a role in non-dominated sub-EFs being classified as dominated. In Figure 5, we show the CCEF partway through (a slight modification of) the GC algorithm when working with Beasley’s Portfolio 4 with a cardinality of three. The red points are points on the sub-EF being considered at this stage in the algorithm. Under the algorithm’s current construction, they are dominated by the current CCEF. Indeed, we can see they fall below the line connecting all the points along the CCEF. However, the line connecting the points can provide an optimistic outlook of the CCEF at times. As we saw in Figure 3(b), the existence of kinks in the CCEF makes it possible for a straight line connecting two points on the CCEF to dominate the CCEF within the risk-return spectrum covered by those two points. Thus, a sub-EF that is not dominated by the actual CCEF can be classified as dominated based on this representation of the CCEF, leading to a final CCEF that does not come close to maximising return for a given level of risk.

Figure 5. An example of the representation of a cardinality constrained efficient frontier (CCEF) midway through running a slight modification of Graham and Craven’s algorithm. The lines connect adjacent points along the CCEF. Red points are points from a sub-efficient frontier (EF) that were considered dominated by the current CCEF even though the points of the CCEF do not dominate them.


To address this issue, we propose another modification to the GC algorithm. As we loop through pairs of adjacent points on the CCEF to determine if any of the three points on a sub-EF are non-dominated by the curve, we also consider if the points on the sub-EF are dominated by the points on the CCEF. Let Pa and Pb be the two points under consideration from the CCEF, with Pb being the point with the larger return and risk. If any of the three points on the sub-EF have a return greater than the return of Pa and a risk less than the risk of Pb, that sub-EF is flagged. After iterating through all sub-EFs, we then re-run the GC algorithm (using Vmid instead of V̂mid) on only the flagged sub-EFs, continuing this process until none are flagged or the process has iterated 10 times, including the initial run. We chose (somewhat arbitrarily) to set a maximum of 10 to avoid prohibitively long runtimes in exchange for presumably marginal (or non-existent) gains at that stage. Finally, all the points from the sub-EFs are merged and fed into Graham and Craven’s (2021) pooling/sifting algorithm.[6] Note that we do not simply treat flagged sub-EFs as non-dominated sub-EFs after the first iteration because considering all flagged sub-EFs in the sifting/pooling process can be prohibitively time-consuming.
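In code, the flagging test is a pair of comparisons. A minimal sketch under our notation, where points are (risk, return) pairs (this is our formulation, not the authors' implementation):

    # A sub-EF is flagged if any of its three summary points beats Pa's
    # return at less than Pb's risk, where Pa and Pb are adjacent CCEF
    # points and Pb has the larger risk and return.
    flag_point <- function(p, Pa, Pb) p[2] > Pa[2] && p[1] < Pb[1]
    sub_ef_flagged <- function(points, Pa, Pb) {
      any(apply(points, 1, flag_point, Pa = Pa, Pb = Pb))
    }

    # Example: the first point returns more than Pa with less risk than Pb.
    Pa <- c(1.0, 2.0); Pb <- c(2.0, 3.0)
    sub_ef_flagged(rbind(c(1.5, 2.5), c(2.5, 3.5)), Pa, Pb)   # TRUE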

5. Initial comparison of original and modified algorithm

After implementing our modifications to the GC algorithm, we compared the CCEFs from the original algorithm to the CCEFs from our modified algorithm for various cardinalities for all seven datasets we considered. We computed 200 points for each non-dominated sub-EF and set the tolerance parameter specified in Graham and Craven (2021) for determining if a sub-EF is dominated to 10^−18. For Beasley’s datasets, we used the same range of values of λ as Graham and Craven (see Table 1 of Graham and Craven (2021)) and for our additional datasets, λ ranged from 0.01 to 1, with equidistant intervals. We considered cardinalities from two to five for all datasets except Portfolio 5, which was too large to consider a cardinality of five. The algorithms were run on SHARCNET.[7] Figure 6 shows examples of two cases in which our modifications improve the resulting CCEFs: Beasley’s Portfolio 4 with a cardinality of four and the BlackRock dataset with a cardinality of five. Because we annualise our risks and returns, our plot for Beasley’s Portfolio 4 does not exactly match those in previous studies (e.g., Cesarone et al., 2013; Graham & Craven, 2021). We note the rate of return may seem exceptionally high, but the stock market performed very well during the period associated with this dataset and the portfolios are constructed of individual stocks, which can generate extremely large returns.
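For reference, the annualisation is a simple linear scaling: for log returns, both the mean and the variance of the annual rate are the per-period values multiplied by the number of periods per year. A one-line sketch with illustrative numbers:

    periods_per_year <- 52        # weekly data; use 252 for daily data
    weekly_mean <- 0.003          # illustrative per-week mean log return
    weekly_var  <- 0.0004         # illustrative per-week variance
    annual_return <- periods_per_year * weekly_mean   # continuously compounded
    annual_risk   <- periods_per_year * weekly_var    # variance of annual rate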

Figure 6. Comparisons of the cardinality constrained efficient frontier (CCEF) produced by the original algorithm and our modified version of it prior to incorporating practical considerations for Beasley’s Portfolio 4 and our BlackRock dataset. The return is the annual continuously compounded rate of return and the risk is the variance of this annual rate. Annual rates were computed assuming either 52 trading weeks or 252 trading days in a year.


Here, we have shown how our modifications can improve results, in some cases drastically. Further analysis of the benefits of our modifications is provided in Section 7, accompanied by results from the algorithm with additional modifications described in the next section.

6. Practical considerations

When implementing the algorithm with the two modifications we have proposed thus far, we found that runtimes can become unacceptably long. Consequently, we decided to consider further adjustments that may make the use of the algorithm in practice more feasible. We found that some cases had many flagged sub-EFs, leading to a large number of sub-EFs being carried forward to the sifting stage. Presumably, the value of flagging sub-EFs is greater when the gap between consecutive points on the current CCEF is large. Thus, one additional adjustment is to require a substantial gap between points along the CCEF before flagging sub-EFs for further consideration as discussed in Section 4. We opted to require a gap such that the changes in risk and return between adjacent points are both at least 5% of the range of the risk and return, respectively.
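In code, the gap requirement is a simple pre-filter on each pair of adjacent CCEF points. A sketch under our notation; the 5% threshold is the value we used, exposed here as a parameter:

    # Only test for flagging against (Pa, Pb) when both the risk gap and the
    # return gap are at least `frac` of the frontier's total ranges.
    big_gap <- function(Pa, Pb, risk_range, return_range, frac = 0.05) {
      (Pb[1] - Pa[1]) >= frac * risk_range &&
        (Pb[2] - Pa[2]) >= frac * return_range
    }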

A second additional adjustment is the removal of the maximum return point from consideration, for determining both if a sub-EF is non-dominated and if a sub-EF should be flagged. Due to the order in which the sub-EFs are considered in the algorithm, our belief was that the number of cases in which a sub-EF improves the CCEF but is determined to be non-dominated only because of its maximum return point is very small (and possibly zero).

7. In-depth comparison of original and modified algorithms

In this section, we provide a more detailed comparison of the output from the original GC algorithm and our modified versions of it. We refer to our four variations of the original GC algorithm as follows: including the maximum return point and excluding the gap requirement (with max, without gap; this is our modified algorithm prior to the practical considerations), excluding the maximum return point and excluding the gap requirement (without max, without gap), including the maximum return point and including the gap requirement (with max, with gap), and excluding the maximum return point and including the gap requirement (without max, with gap). The process for running each of these algorithms is exactly as described in Section 5.

For the first dataset from Beasley’s OR-Library, we found virtually no difference between the CCEFs produced by each of the five algorithms for K = 2, 3, 4, 5. However, for the remaining four Beasley datasets we found instances in which a modified version of the GC algorithm outperformed the original version. By outperformed, we mean the CCEF from a modified version of the GC algorithm dominates the CCEF from the original algorithm. The without max, without gap algorithm produced CCEFs as good as the CCEFs of the other modified algorithms (i.e., including the maximum point did not help performance) for these remaining datasets except for Portfolio 4. Requiring a gap more commonly led to slightly worse CCEFs. In Figure 7, we present CCEFs using the without max, without gap modified algorithm and the original GC algorithm for various datasets and cardinalities. We have chosen to focus our results on this modification because we found it offered (in our opinion) the best mix of relatively short runtimes and good CCEFs. For those interested in CCEFs not shown, we have provided the results here.[8]

Figure 7. Comparisons of the cardinality constrained efficient frontier (CCEF) produced by the original algorithm and our modified without max, without gap algorithm for selected datasets from Beasley’s OR-Library. The return is the annual continuously compounded rate of return and risk is the variance of this annual rate. Annual rates were computed assuming 52 trading weeks in a year.


The same analysis was performed for the BlackRock and Vanguard datasets, again with annual rates. As with Beasley’s Portfolio 4, although CCEFs from the without max, without gap algorithm are better than the original, CCEFs related to both these datasets can be improved even more using the with max, without gap algorithm. For the BlackRock dataset, the additional gains from including the maximum return point are marginal (i.e., less than 6 bps of return), but the gains are more substantial for the Vanguard dataset. In Figure 8(a), we can see that the CCEF for the BlackRock dataset from the without max, without gap algorithm takes a shape similar to that of the EF (see Figure 2(a)) after the inclusion of only three assets. For the Vanguard dataset, Figure 8(b) shows the gains available from using each of the without max, without gap and with max, without gap algorithms instead of the original GC algorithm.

Figure 8. Comparisons of the cardinality constrained efficient frontier (CCEF) produced by the original algorithm and modified versions of the algorithm. a) compares the original algorithm to the without max, without gap algorithm for the BlackRock dataset and b) compares the original algorithm to two modified algorithms, the without max, without gap one and the with max, without gap algorithm, for the Vanguard dataset. For both panels, the return is the annual continuously compounded rate of return and risk is the variance of this annual rate. Annual rates were computed assuming 252 trading days in a year.


Visually, the differences in the CCEFs for the BlackRock and Vanguard datasets seem much larger than the differences in the CCEFs for Beasley’s datasets. To compare quantitatively, we computed the maximum gap between pairs of CCEFs. The maximum difference in return is the maximum additional return attainable for the same (or slightly lower) risk. These differences were computed by iterating, for each point on the CCEF outputted by the original algorithm, through the points along the CCEF outputted by the modified algorithm until a suitable point (e.g., one with the same or lower risk as the point on the original CCEF) was found, then computing the difference in return between those two points, and finally taking the maximum of all these differences. The maximum difference in risk was computed in a similar fashion and is defined as the maximum reduction in risk attainable for the same (or slightly higher) return. The results for selected CCEFs, comparing the modified without max, without gap algorithm to the original, are shown in Table 1 in annual terms.
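A sketch of this measure in R (our own formulation of the search just described; frontiers are matrices with columns (risk, return)):

    # Maximum additional return: for each point on the original CCEF, find
    # the best return on the modified CCEF at the same or lower risk, and
    # take the largest improvement over all original points.
    max_additional_return <- function(original, modified) {
      gains <- apply(original, 1, function(p) {
        ok <- modified[, 1] <= p[1]          # same or lower risk
        if (!any(ok)) return(NA_real_)
        max(modified[ok, 2]) - p[2]
      })
      max(gains, na.rm = TRUE)
    }

    # Tiny illustration with made-up frontiers:
    orig_ccef <- rbind(c(0.010, 0.050), c(0.020, 0.070))
    mod_ccef  <- rbind(c(0.009, 0.060), c(0.020, 0.075))
    max_additional_return(orig_ccef, mod_ccef)   # 0.01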

Table 1. The maximum additional return and reduction in risk available through using the modified without max, without gap algorithm instead of Graham and Craven’s original algorithm. Relative percent changes refer to the relative additional return (reduction in risk) at the point at which the maximum additional return (reduction in risk) occurs. Examples of the largest maximum additional return and largest maximum reduction in risk are shown for each dataset.

The results in Table 1 illustrate that a maximum additional return of at least 50 bps is not uncommon. For Portfolio 4 with a cardinality of three, the maximum additional return is approximately 200 bps, or 2%. To get a sense of the economic significance of these findings, it is important to consider the additional return in relative terms. The additional 2% improves an investor’s annual return from 25.3% to 27.3%, a relative percent change of only 7.89%, because of the huge returns for some assets in this dataset. For the BlackRock dataset with a cardinality of five, the relative percent change is 9.62%, despite a maximum additional return of only 53.36 bps. We say “only” here in comparison to 200 bps, but in practice 53.36 bps is material. For context, of the ETFs included in the BlackRock and Vanguard datasets, several have a management expense ratio of less than 20 bps and the largest is 61 bps.[9] For some datasets with very large returns this may not be material, but it certainly is for the BlackRock and Vanguard datasets.

The values for maximum reduction in risk shown in Table 1 range from 9.39 bps to 37.30 bps, with the largest reduction in risk obtained with Portfolio 4 with a cardinality of three. However, in terms of relative percent change, the BlackRock dataset dwarfs all others, with a relative percent change of nearly 30%. Note that, unlike the relative percent change pertaining to return, this relative percent change corresponds to a reduction of 30% of the original risk. The results for the Vanguard dataset are closest, with a relative percent change of 16.84%. A material change in risk is harder to define than a material change in return. For a long-term investor, annual risk may be a secondary consideration to annual return. Nonetheless, we provide a practical comparison so that a reader may determine if the reduction in risk is substantial for their purposes. Over the lifetime of our BlackRock dataset, the ETFs with the most comparable annual risk are the iShares Core Canadian Short Term Bond Index ETF (risk of 8.56 bps), the iShares Core Canadian Government Bond Index ETF (risk of 21.88 bps), and the iShares Core Canadian Corporate Bond Index ETF (risk of 37.77 bps).

The improved performance relative to the original GC algorithm comes at the expense of longer runtimes. This is unsurprising considering the modifications involve a more thorough procedure for determining if sub-EFs are non-dominated and the classification of more sub-EFs as non-dominated, which leads to considering more points in the sifting/pooling stage. A comparison of the number of sub-EFs and runtimes is shown in the supplementary material available online. In general, we do not believe that the modifications have prohibitively increased the runtimes such that the original algorithm would be preferred, although some runtimes (e.g., instances with many non-dominated sub-EFs) may be prohibitively expensive.

8. Comparison of modified algorithms and Cesarone et al.’s algorithm

In Graham and Craven (2021), the authors compared the CCEFs produced by their algorithm to the CCEFs produced in Cesarone et al. (2013). Here, we do the same with our modified versions of the GC algorithm, using the same quantitative measures as in Section 7. Table 2 displays the results of this analysis, with values reflecting the potential increase in return/decrease in risk available from using Cesarone et al.’s algorithm instead of our without max, without gap algorithm. Notably, our algorithm is still outperformed by Cesarone et al.’s in some cases. However, the magnitude of the gaps between the CCEFs is much smaller than the magnitudes presented in Table 1, especially in relative terms (i.e., in terms of relative percent change at the maximum).

Table 2. The maximum additional return and reduction in risk available through using Cesarone et al.’s algorithm instead of our without max, without gap algorithm. Relative percent changes refer to the relative additional return (reduction in risk) at the point at which the maximum additional return (reduction in risk) occurs. Examples of the largest maximum additional return and largest maximum reduction in risk are shown for each dataset.

Graham and Craven’s comparison of their algorithm and Cesarone et al.’s algorithm revealed that their algorithm found low-risk points on some CCEFs that had not been previously reported. The authors noted that “most of these are negative in return” and, indeed, all “new” points presented in their paper are negative in return. In Figure 9, we show the CCEFs of our without max, without gap algorithm and Cesarone et al.’s algorithm for Beasley’s Portfolio 2 with cardinalities of two and four. For both cardinalities, the CCEFs of the modified algorithm extend further than the CCEFs of Cesarone et al.’s algorithm. In these cases, the “new” points have positive return. In addition, with a cardinality of two, we can see that the CCEF from Cesarone et al.’s algorithm requires an investor to take slightly more risk to obtain returns of ∼33–35% and ∼14–15%. Although Figure 9 shows CCEFs produced with a modified algorithm, that CCEF is virtually unchanged from the one produced by the original GC algorithm, so this observation is not due to our modifications. However, it was not presented in Graham and Craven (2021). It is an important finding because, to our knowledge, the CCEF of Cesarone et al. (2013) was believed to be optimal. Cesarone et al. compared it to the output from CPLEX, an exact mixed integer quadratic programming solver, and found they were the same up to numerical precision. Given our understanding that CPLEX is exact, we were quite surprised to find this result. It may be worthwhile for future research to study why it is possible for what was thought to be an optimal CCEF to be outperformed over some parts of the curve.

Figure 9. Comparisons of the cardinality constrained efficient frontier (CCEF) produced by our without max, without gap algorithm and the algorithm of Cesarone et al. (2013) for Beasley’s Portfolio 4. The return is the annual continuously compounded rate of return and risk is the variance of this annual rate. Annual rates were computed assuming 52 trading weeks in a year.


Overall, the CCEFs produced by our modified algorithms and Cesarone et al.’s algorithm are very similar, and perhaps the greatest differences between the algorithms lie elsewhere. Cesarone et al.’s algorithm is much faster than both the GC algorithm and our modified versions of it (see Table 1 of Cesarone et al. (2013)). Although there is no maximum portfolio size for any of these algorithms in theory, in practice there are situations when Cesarone et al.’s algorithm may be the only option of the three. With 2191 candidate assets and considering portfolio sizes from 2–10, this algorithm was able to find a CCEF with 500 return values in under 18 hours. From our runs of the GC algorithm on SHARCNET, we deduced that the runtime of the GC algorithm (with the specifications described in Section 5) is approximately linear, requiring roughly one second of runtime for every 850 combinations. If searching for a portfolio of 10 assets from 2191 candidate assets, this would require 2.2 × 10^20 hours of runtime, which is completely infeasible. Our modified versions of the GC algorithm would take even longer. With a portfolio of fewer assets, the use of the GC algorithm can become feasible. For example, if searching for a portfolio of two assets from the 2191 candidate assets, we estimate the GC algorithm would require less than one hour. Likewise, if there are fewer candidate assets, finding a portfolio of 10 assets may be feasible.
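The arithmetic behind these estimates is easy to reproduce, assuming (as above) roughly 850 subset evaluations per second:

    choose(2191, 10) / 850 / 3600   # ~2.2e20 hours for K = 10
    choose(2191, 2)  / 850 / 60     # ~47 minutes for K = 2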

Relative to Cesarone et al.’s algorithm, the biggest advantages of the GC algorithm and our modified versions of it seem to be their intuitiveness and ease of implementation. Our modifications have made the algorithm more complicated, but we believe we have preserved the intuitiveness of the GC algorithm. For practitioners that value understanding how an algorithm arrived at its output, one of our modified algorithms may be preferred, especially if real time updates are not a priority. In addition, like Graham and Craven, we have made an R implementation of the modified algorithms available here,[10] facilitating easy use for practitioners.

9. Discussion

In this study, we have made detailed comparisons between algorithms for finding CCEFs: that of Graham and Craven (2021), our modified versions of their algorithm, and that of Cesarone et al. (2013). These algorithms differ in several ways, including the quality of the CCEFs they produce, their runtimes, and their intuitiveness. A practitioner’s preferred algorithm may differ depending on how they prioritise these aspects. For a practitioner concerned with the quality of the CCEFs and intuitiveness but not runtime, a suitable algorithm could iterate through all possible combinations of assets (without the “shortcuts” introduced by Graham and Craven) to create an exact CCEF. In fact, when the tolerance parameter of the GC algorithm is sufficiently small, this is essentially what the GC algorithm does; we found that with a sufficiently small tolerance, the GC algorithm seemed to classify every sub-EF as non-dominated. As mentioned in their paper, a suitable value for the tolerance depends on the dataset. It may also depend on the cardinality constraint. Of course, a smaller tolerance reduces the probability of excluding a sub-EF that contributes to the overall CCEF. Users may wish to tune this according to their priorities.

In the same vein, practitioners may want to tune several other parameters depending on their needs. One option from Graham and Craven’s (2021) work is changing the number of values of λ. Our modifications have led to a user having several more options for tuning the algorithm. Not only can they decide if they want to include the maximum return point or if they want to require a gap between points on the CCEF, but the size of the gap is also tunable and so is the maximum number of iterations of the algorithm. A user could choose to forgo the flagging process we have introduced altogether, and in some cases the resulting CCEFs will be equally good. We note that our first adjustment to the GC algorithm (computing an exact third point) generates a stable increase in runtime relative to the original, but the secondary flagging process results in highly variable increases in runtime, including potentially dramatic increases.

In general, our modifications designed to reduce runtimes worked as intended, reducing runtimes without substantially hurting performance. However, we did find some isolated cases in which the modifications had an unexpected effect. We describe these cases in the supplementary material. Even with these modifications, the runtimes of the algorithms can be very long and can grow considerably (e.g., from less than an hour to more than a day) just from increasing the cardinality constraint by one. We stress that these algorithms are designed for small cardinalities and that other algorithms (e.g., Cesarone et al.’s algorithm or CPLEX) may be more suitable for cardinalities as small as five, if not smaller. Also, the heuristics we developed in this study were designed through a trial-and-error process. That is, in some sense they were “tuned” to the datasets we have considered. If the parameters/heuristics we have used are used on other datasets, the level of performance may differ from that observed in this study.

The approach used in the GC algorithm (and by extension, our modifications to it) facilitates relatively easy incorporation of other constraints. In our work, we have included the constraint that short selling is not permitted, and others can be incorporated in a similar manner, at the stage of creating the individual sub-EFs. Of the additional constraints discussed in Lwin et al. (2014) and Jin et al. (2016), only the round lot constraint cannot be easily incorporated into the framework.

10. Conclusion

Cesarone et al.’s (2013) algorithm for finding CCEFs produces world-class results with relatively fast runtimes. However, its potential for widespread use is limited by its complexity and the lack of a publicly available implementation. In contrast, the GC algorithm is very intuitive and publicly available, setting it up to make an impact in practice. However, we have found that its performance is subpar in several cases of practical interest, in particular when both bonds and equities are available. By modifying the algorithm in various ways, we improved the CCEFs produced by the GC algorithm while preserving arguably its most important feature, its intuitiveness. We have also made the implementation of our modifications publicly available. The CCEFs of our modified algorithms are much more competitive with the CCEFs of Cesarone et al. (2013) but are still sometimes outperformed. Future work may consider further modifications of the GC algorithm designed to improve the CCEFs even more and/or modifications designed to shorten runtimes so that the algorithm can be applied to situations with larger cardinalities.

Supplemental material

Supplemental material for this article is available online.

Disclosure statement

The authors report there are no competing interests to declare.

Data availability statement

Some of the datasets used in this study are publicly available at http://people.brunel.ac.uk/∼mastjjb/jeb/orlib/files/. The datasets we created are available upon request.

Additional information

Funding

This work was supported by the Canadian Financial Wellness Lab, which receives funding from the NSERC Alliance Grant Program (Grant Number: ALLRP 566997-21).

Notes

[1] The code can be downloaded from https://github.com/MJCraven/SiftedQP.

[2] Although this is not shown in Figure 1, an EF can dominate another EF even if parts of the curves overlap.

[3] The datasets can be obtained from http://people.brunel.ac.uk/∼mastjjb/jeb/orlib/files/ and are titled port1.txt through port5.txt.

[4] Using log returns instead of simple returns for our calculations (which we also do for our own datasets, described in the next paragraph) means we are approximating the portfolio return rather than computing it exactly.

[5] Graham and Craven (2021) state that they use an approximated third point that differs from what we have described. However, the code they provide uses the approximation we have outlined, which is much more reasonable than what is written in the paper (we believe the description in the paper was written in error).

[6] In some cases, a sub-EF may consist of only a single point because of the risks and returns associated with the assets in the portfolio. Such cases result in an error when running the algorithm and should be discarded. This can be done by adding a condition that the maximum and minimum return of a sub-EF must differ by more than some threshold.

[7] This research was aided by support from Compute Ontario (www.computeontario.ca) and Compute Canada (www.computecanada.ca).

[9] These values are not continuously compounded. However, for values of this size the continuously compounded return is approximately the same.

References

  • Beasley, J. E. (1990). OR-Library: Distributing test problems by electronic mail. Journal of the Operational Research Society, 41(11), 1069–1072. https://doi.org/10.1057/jors.1990.166
  • Best, M. J., & Grauer, R. R. (1990). The efficient set mathematics when mean-variance problems are subject to general linear constraints. Journal of Economics and Business, 42(2), 105–120. https://doi.org/10.1016/0148-6195(90)90027-A
  • Cesarone, F., Scozzari, A., & Tardella, F. (2013). A new method for mean-variance portfolio optimization with cardinality constraints. Annals of Operations Research, 205(1), 213–234. https://doi.org/10.1007/s10479-012-1165-7
  • Chang, T. J., Meade, N., Beasley, J. E., & Sharaiha, Y. M. (2000). Heuristics for cardinality constrained portfolio optimisation. Computers & Operations Research, 27(13), 1271–1302. https://doi.org/10.1016/S0305-0548(99)00074-X
  • Chen, W., Zhang, H., Mehlawat, M. K., & Jia, L. (2021). Mean–variance portfolio optimization using machine learning-based stock price prediction. Applied Soft Computing, 100, 106943. https://doi.org/10.1016/j.asoc.2020.106943
  • Graham, D. I., & Craven, M. J. (2021). An exact algorithm for small-cardinality constrained portfolio optimisation. Journal of the Operational Research Society, 72(6), 1415–1431. https://doi.org/10.1080/01605682.2020.1718019
  • Jin, Y., Qu, R., & Atkin, J. (2016). Constrained portfolio optimisation: The state-of-the-art Markowitz models. In Proceedings of the 5th International Conference on Operations Research and Enterprise Systems (pp. 388–395). https://doi.org/10.5220/0005758303880395
  • Kalayci, C. B., Ertenlice, O., & Akbay, M. A. (2019). A comprehensive review of deterministic models and applications for mean-variance portfolio optimization. Expert Systems with Applications, 125, 345–368. https://doi.org/10.1016/j.eswa.2019.02.011
  • Lwin, K., Qu, R., & Kendall, G. (2014). A learning-guided multi-objective evolutionary algorithm for constrained portfolio optimization. Applied Soft Computing, 24, 757–772. https://doi.org/10.1016/j.asoc.2014.08.026
  • Markowitz, H. (1952). Portfolio selection. Journal of Finance, 7, 77–91. https://doi.org/10.1111/j.1540-6261.1952.tb01525.x
  • Michaud, R. O., & Michaud, R. (2007). Estimation error and portfolio optimization: A resampling solution. Available at SSRN 2658657.
  • Moral-Escudero, R., Ruiz-Torrubiano, R., & Suárez, A. (2006, July). Selection of optimal investment portfolios with cardinality constraints. In 2006 IEEE International Conference on Evolutionary Computation (pp. 2382–2388). IEEE.
  • van Staden, P. M., Dang, D. M., & Forsyth, P. A. (2021). The surprising robustness of dynamic mean-variance portfolio optimization to model misspecification errors. European Journal of Operational Research, 289(2), 774–792. https://doi.org/10.1016/j.ejor.2020.07.021