Mean-variance optimization is primarily the concern of institutional investors and quants, but this post may still interest individual investors who actively consider their personal asset allocation.
With month end, quarter end, and fiscal year end approaching, many institutional investment teams are in the late innings of portfolio strategy and asset allocation related endeavors. A panel on “The Quantitative Investment Process” at a quant finance conference I attended last week1 started with a discussion of mean-variance optimization (MVO). The panelists’ comments confirmed that MVO remains a commonly used framework for such exercises.
MVO has virtue as a discipline but is also the subject of much criticism
MVO and related asset allocation decisions can have broad implications for an entire investment team, yet the input parameters are viewed with great skepticism. Further, there is often such a long delay between analysis, stakeholder approval, and implementation that the whole exercise must be re-run because the market has moved enough to invalidate the conclusions. Finally, at many institutions it is simply a lot of work to pull together.
Fortunately, MVO is typically a baseline for asset allocation decisions, not necessarily the final asset allocation decision. Tweaks to MVO output can be made on the basis of considerations not adequately specified in the MVO inputs (to the extent they don’t violate constraints). Still, a bad baseline can negatively skew final asset allocation decisions, so it is key that your MVO generates the most accurate output possible.
“Garbage in, Garbage Out” (GIGO) is perhaps the foremost criticism of MVO
Most MVO processes seek to maximize expected return per unit of risk. The simplest such MVO process would put 100% of a portfolio in the asset with the highest expected return per unit of risk. Obviously, forward returns are probably impossible to forecast accurately on a consistent basis. Any portfolio so designed would therefore be woefully overexposed to downside (or underexposed to upside) at some point because of error in the expected return input. This dynamic is commonly summarized as “Garbage In, Garbage Out”, or GIGO. Fortunately, MVO calculation engines sophisticated enough to incorporate concentration constraints are broadly available.
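To make this concrete, here is a minimal sketch of a maximum-Sharpe MVO in which a simple concentration bound keeps the optimizer from piling into the best-looking assets. The expected returns, volatilities, risk-free rate, and 40% cap are all made-up illustrative figures, not recommendations:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns and (diagonal) covariance for four assets
mu = np.array([0.08, 0.06, 0.05, 0.03])
cov = np.diag([0.20, 0.15, 0.10, 0.05]) ** 2
rf = 0.02  # assumed risk-free rate

def neg_sharpe(w):
    # Negative Sharpe ratio: minimizing this maximizes return per unit of risk
    return -(w @ mu - rf) / np.sqrt(w @ cov @ w)

n = len(mu)
budget = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
caps = [(0.0, 0.40)] * n  # long-only, no asset above 40% -- the concentration cap

res = minimize(neg_sharpe, np.full(n, 1 / n), bounds=caps, constraints=budget)
weights = res.x
```

Without such bounds, the tangency solution tilts heavily toward whichever assets the (error-prone) expected return inputs make look best; the cap limits, but does not eliminate, that GIGO exposure.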
However, there is only so much that the MVO calculation engine can do to limit GIGO vulnerability. Bayesian modifications to MVO, such as Black-Litterman, which balance expected return inputs with Capital Asset Pricing Model (CAPM) implied inputs, can at best dilute GIGO risk. Thus, it is my sense that for many practitioners, the MVO process entails running fairly traditional MVO calculations across a range of parameters to gauge GIGO sensitivity net of constraints, and then focusing on the MVO’s output for a few carefully specified sets of key parameter estimates, such as expected return.
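One way to run that kind of sensitivity check is to re-optimize under perturbed expected-return inputs and see how much the allocations swing. A sketch, again with hypothetical figures (the 1% noise scale is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu = np.array([0.08, 0.06, 0.05, 0.03])       # baseline expected returns (hypothetical)
cov = np.diag([0.20, 0.15, 0.10, 0.05]) ** 2
n = len(mu)

def max_sharpe_weights(mu_est, rf=0.02):
    # Max-Sharpe allocation, long-only, 40% per-asset cap, fully invested
    res = minimize(
        lambda w: -(w @ mu_est - rf) / np.sqrt(w @ cov @ w),
        np.full(n, 1 / n),
        bounds=[(0.0, 0.40)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Re-run the optimization under noisy expected-return inputs; the spread of the
# resulting allocations is a rough read on GIGO sensitivity net of constraints
samples = np.array([max_sharpe_weights(mu + rng.normal(0, 0.01, n)) for _ in range(50)])
weight_stdev = samples.std(axis=0)  # how much each allocation swings with input error
```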
Assembling estimates of expected return across the assets considered in an MVO typically involves one or more of the following approaches: (1) CAPM-based or average historical returns, (2) econometric model based returns, and (3) polling exercises in which internal and/or external specialists for each asset in the MVO are polled on their expectations for the asset’s return over the time horizon of interest. These approaches can be fairly criticized by stakeholders as being, respectively, too simple, too complex, and vulnerable to (at the very least) the many pitfalls associated with any polling process. Further, measurement and related reporting of the historic out-of-sample accuracy of these approaches is typically inconsistent or non-existent. Hence, as stated earlier, despite substantial effort, GIGO concerns for MVO processes are hard to substantially mitigate.
The Vector Strength Histogram can facilitate consistent, efficient consideration of MVO expected return inputs and related discussions.
Efforts by stakeholders to assess the credibility of the expected return inputs can serve as an important constraint on GIGO risk to MVO processes. However, the breadth of each stakeholder’s knowledge and experience may not always match the breadth of the asset exposures considered in an MVO exercise. The Vector Strength Histogram was designed to serve as an efficient, consistent framework for such consideration and related discussions, and is relevant in both the Black-Litterman and more traditional MVO settings.
An image of the Vector Strength Histogram for SPY is provided below, as an illustration. See the several case studies on our blog for further illustration of how the Vector Strength Histogram uses visualization and narrative to aid understanding of how a given forward return (converted to price) could come to occur.

Machine learning based metrics such as the V-Score can bring incremental, measurable perspective to MVO expected return inputs.
Machine learning (ML) can bring objectivity and consistency to estimating parameters such as expected forward returns, a key input to most MVO exercises. However, the drivers behind ML-based metrics are usually opaque, limiting their contribution to MVO stakeholder discussions.
VecViz’s V-Score of expected forward relative returns offers an unusual degree of transparency for an ML-based metric. For each V-Score generated, VecViz publishes V-Score “closest matches” among top- and bottom-performing ticker-model dates (see the example below). VecViz also details the V-Score criteria inputs (which are patent pending). Finally, since it is based solely on ticker price history, the V-Score could add a new perspective to MVO processes currently reliant only on inputs from the aforementioned CAPM, econometric modelling, and polling based approaches, which tend to relate primarily to fundamentals and valuation2.

The chart below displays the V-Score’s rolling, out of sample historic correlation with 252d (1 year) forward returns (by model date, across ~150 tickers), alongside the same for Sigma based volatility and the logarithm thereof3.
The V-Score’s correlation to forward returns is far from perfect (or even “quite good”). However, it is measurable, and over the ~20-month period examined for which forward 252d returns are currently available, it was on average more correlated with forward returns, and with less variability, than Sigma or the logarithm of Sigma (with which average returns tend to correlate well).
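For readers who want to reproduce this style of analysis on their own signals, a rolling cross-sectional correlation against forward returns can be computed along the following lines. The data below are synthetic stand-ins (not V-Scores), with the date range, ticker count, and window sizes chosen only to mirror the shape of the chart:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic stand-ins: for each model date, a cross-section of ~150 tickers
# with a score that is weakly related to realized 252d forward returns
dates = pd.date_range("2022-01-20", periods=120, freq="W")
frames = []
for d in dates:
    fwd = rng.normal(0.05, 0.20, 150)             # forward returns across tickers
    score = 0.2 * fwd + rng.normal(0, 0.20, 150)  # a noisy, weakly related signal
    frames.append(pd.DataFrame({"date": d, "score": score, "fwd_ret": fwd}))
panel = pd.concat(frames, ignore_index=True)

# Cross-sectional correlation per model date, then a rolling mean to smooth it
by_date = panel.groupby("date")[["score", "fwd_ret"]].apply(
    lambda g: g["score"].corr(g["fwd_ret"])
)
rolling_corr = by_date.rolling(8).mean()
```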


Of course, the V-Score does not represent a forward return, and so it cannot be directly mapped to a Vector Strength Histogram. If you need a figure that represents a total return as your primary optimization objective, consider using the V-Score as a modifier of your expected return estimates. We have found, for example, that when used as a modifier it enhances the correlation of Vector Model EUB and EDB4 with forward returns. Also consider using a minimum average V-Score as an MVO constraint.
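As an illustration of the modifier idea, one could shrink each expected return estimate toward the cross-sectional mean by an amount that falls as the V-Score rises. The scheme and all figures below are hypothetical, not VecViz’s actual method:

```python
import numpy as np

# Hypothetical inputs: baseline expected return estimates and a per-asset
# V-Score placeholder (the real V-Score comes from VecViz; values here are
# made up and assumed to be scaled to [0, 1])
expected_ret = np.array([0.07, 0.05, 0.04])
v_score = np.array([0.8, 0.5, 0.2])

# One possible modifier scheme: shrink each estimate toward the cross-sectional
# mean, shrinking less where the V-Score (conviction) is higher
mean_ret = expected_ret.mean()
modified_ret = v_score * expected_ret + (1 - v_score) * mean_ret

# A minimum average V-Score could likewise enter an MVO as a linear
# constraint of the form: weights @ v_score >= threshold
```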
Vector Model VaR can provide valuable perspective for setting VaR constraints and calibrating stress tests.
If your MVO is subject to a fixed VaR constraint and it calculates VaR using Sigma based analytics, it will tend to make less risky allocations after periods of high volatility and more risky allocations after periods of low volatility. That pro-cyclical behavior can hurt performance.
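The pro-cyclical mechanics are easy to demonstrate: with a fixed VaR budget and a trailing-window Sigma VaR, exposure mechanically drops after volatility spikes and rises after calm stretches. A sketch on simulated returns (regime parameters, window, and budget are all arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily returns: a calm regime followed by a turbulent one
calm = rng.normal(0.0005, 0.005, 250)
turbulent = rng.normal(0.0005, 0.025, 250)
returns = np.concatenate([calm, turbulent])

# Sigma-based 95% VaR from a trailing 60-day window (normal approximation)
window, z95 = 60, 1.645
trailing_sigma = np.array(
    [returns[i - window:i].std() for i in range(window, len(returns))]
)
var95 = z95 * trailing_sigma

# Under a fixed VaR budget, exposure is forced down exactly when trailing
# volatility is high -- i.e., after the damage -- and back up after calm periods
var_budget = 0.02
exposure = np.minimum(1.0, var_budget / var95)
```

In this simulation, exposure sits near 100% through the calm regime and is cut roughly in half once the turbulent regime enters the trailing window.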
VecViz’s Vector Model VaR metrics do not rely on historic day to day price return volatility. Instead they are based on a patent pending score (Vector Strength) of support and resistance that applies machine learning to the topography of tops and bottoms, the channels that can be drawn from them, and overall chart shape. In so doing, the Vector Model provides a stochastic-esque perspective on volatility.
The performance impact upon volatility constrained investors of using Vector Model based VaR and OaR instead of Sigma based VaR (and -VaR, we suppose) is something we estimate via our Return on VaR Based Capital (ROVBC) and Return on OaR Based Capital (ROOBC) metrics. These ticker level metrics5 scale exposure to a ticker up or down based on how much risk the Vector Model sees relative to Sigma, subject to a collar on relative weight of 0.333x to 3.0x. See our FAQ for a more detailed definition.
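The scaling-and-collar logic reads roughly as follows. This is a sketch of one plausible reading with made-up VaR figures, not the actual ROVBC/ROOBC definition (see the FAQ for that):

```python
import numpy as np

# Made-up per-ticker VaR figures; the real ones come from VecViz's models
sigma_var = np.array([0.030, 0.045, 0.060])
vector_model_var = np.array([0.006, 0.045, 0.300])

# Where the Vector Model sees less risk than Sigma, scale exposure up, and
# vice versa, collared to the 0.333x-3.0x band described above
raw_scale = sigma_var / vector_model_var
collared_scale = np.clip(raw_scale, 1 / 3, 3.0)
```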
ROVBC and ROOBC for the Vector Model and Sigma at the 252d (1 year) horizon, for a sampling of the Vector Model’s out-of-sample testing period (1/20/22 through present, i.e., 9/18/24), are provided below for the 95th and 99th percentile metrics, alongside associated breakage rates. Explore this dashboard for yourself; it can be found at the bottom of our Dashboards page.


Vector Model VaR can still help inform the volatility related inputs to your MVO, despite the fact that VecViz has relatively limited ticker coverage and doesn’t provide correlation related metrics. At the more aggressive end of the spectrum, you could adjust the diagonal of your asset (or factor) covariance matrix to reflect the ratio of ticker level Vector Model VaR (or OaR) to ticker level Sigma VaR (or -VaR). Alternatively, or in addition, a simpler way to incorporate the Vector Model’s view of volatility would be to reference its VaR (or OaR, for long only tracking error) level for a macro oriented ETF it covers, such as SPY or TLT, when calibrating the stress test constraints included in your MVO analysis.
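One way to implement the diagonal adjustment is sketched below, with made-up figures throughout. It embeds an assumption on our part: each variance is rescaled by the squared VaR ratio (VaR scales with volatility, variance with its square) while the correlation matrix is held fixed, since rescaling the diagonal in isolation could break positive semi-definiteness:

```python
import numpy as np

# Hypothetical 3-asset covariance matrix and made-up per-ticker ratios of
# Vector Model VaR to Sigma VaR
cov = np.array([[0.040, 0.010, 0.000],
                [0.010, 0.020, 0.005],
                [0.000, 0.005, 0.090]])
var_ratio = np.array([1.5, 0.8, 1.2])

# Recover the implied correlation matrix, rescale the vols by the VaR ratio,
# then rebuild the covariance matrix around the adjusted vols
vols = np.sqrt(np.diag(cov))
corr = cov / np.outer(vols, vols)
new_vols = vols * var_ratio
adj_cov = corr * np.outer(new_vols, new_vols)
```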
Conclusion
MVO is a difficult process. Expected return forecasts and forward volatility estimates are important inputs that are probably impossible to get consistently right. VecViz has a limited coverage universe of ~150 tickers, but whether you run Black-Litterman or traditional MVO, it can probably help by (1) facilitating consideration of, and dialogue around, all such input estimates via its Vector Strength Histogram, (2) making the exclusively price history based, measurable, ML-powered perspective of the V-Score available to inform your expected return inputs, and (3) making the stochastic-esque, less pro-cyclical perspective of Vector Model VaR and OaR available to inform your MVO stress test constraints.
Appendix: V-Score and Sigma Correlation excluding MSTR


- https://www.rebellionresearch.com/cornell-financial-engineering-manhattan-rebellion-research-2024-future-of-finance-conference ↩︎
- Though ostensibly very different in technique, the V-Score has a fair bit in common with the methodology discussed in “(Re-)Imag(in)ing Price Trends”, published in the December 2023 issue of the Journal of Finance by Jingwen Jiang of the University of Chicago, Bryan Kelly of Yale, and Dacheng Xiu, also of the University of Chicago. I explore the comparison further in this blog entry. ↩︎
- Chart displays rolling correlation to 252d forward returns across all ~150 tickers for which VecViz has out-of-sample testing. See the “History of V-Score Forward Price Performance” dashboard on the Dashboards page of this site for the full list of tickers covered and a sampling of the ticker-model date detail. In the Appendix we present this chart excluding MSTR, whose rolling 252d return history includes values > 700%, making it an extremely influential point for a ticker that probably isn’t a large part of most investors’ portfolios. The V-Score’s correlation outperformance relative to Sigma and ln(Sigma) increases when MSTR is excluded. ↩︎
- EUB = “Expected Up Body” and EDB = “Expected Down Body”. See our FAQ for more detail. ↩︎
- At present we are not incorporating correlation between tickers in this table. All “Grand Total” figures are simple averages. ↩︎