We are always wary of the potential for decay in the performance of VecViz’s suite of machine learning-based analytics, especially since we have done zero retraining. Thus, we are somewhat surprised that 2025 was perhaps the strongest consecutive 12 months for individual metric performance in VecViz’s nearly four-year out-of-sample performance period.
Here we will discuss the performance of our analytics both in terms of their contribution to the performance of portfolios optimized subject to constraints, and in isolation with respect to their respective performance objectives. We will see that a leading portfolio performance contribution does not necessarily translate into leading individual metric scorecard performance, and explore why that was the case in 2025.
VecViz’s VaR was the key contributor in the optimized constrained portfolio context
A few months ago we introduced our framework for evaluating analytic metric performance via ablation1 in the context of constrained portfolio optimization. The period studied is only 7 months longer than 2025 itself, and, while we have updated it for performance through 12/31/2025, we have not yet isolated 2025-specific performance. However, the dispersion in results is wide enough to allow us to omit qualification from the heading above. Our VaR metric, denoted as “Vol_VV” in the table below, has done best, followed by our correlation metrics (“Correl (VE)_VV” and “Correl (FP)_VV”), with our regime-based expected return metric (“Ret_VV”), a variation of the “V-Score”, lagging behind.

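For readers new to the framework, the following is a minimal sketch of the ablation mechanic under stated assumptions: the optimizer, the per-metric signals, and the simulated returns are hypothetical stand-ins, not VecViz’s actual pipeline. Each metric is dropped one at a time and the change in portfolio performance is attributed to the omitted metric.

```python
# Minimal ablation sketch (hypothetical stand-ins, not VecViz's pipeline):
# drop one metric at a time from a placeholder optimizer and attribute the
# change in performance to the omitted metric.
import numpy as np

rng = np.random.default_rng(0)
METRICS = ["Ret_VV", "Vol_VV", "Correl (VE)_VV", "Correl (FP)_VV"]

n_days, n_assets = 252, 20
returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))   # simulated daily returns
signals = {m: rng.normal(size=n_assets) for m in METRICS}     # one stand-in signal per metric

def optimize_portfolio(signal_subset):
    """Placeholder optimizer: average the available signals into a score per
    asset, hold the above-median names equally weighted, and report the
    cumulative portfolio return."""
    score = np.mean([signals[m] for m in signal_subset], axis=0)
    weights = (score > np.median(score)).astype(float)
    weights /= weights.sum()
    return float((returns @ weights).sum())

baseline = optimize_portfolio(METRICS)
for metric in METRICS:
    ablated = optimize_portfolio([m for m in METRICS if m != metric])
    # A large drop versus the baseline implies the omitted metric was a key contributor.
    print(f"{metric:16s} contribution ≈ {baseline - ablated:+.4f}")
```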
Keep these results in mind when viewing the VaR metric-focused results below. VaR metrics contribute to performance in the context of expected risk-constrained portfolio optimization not just by helping to avoid losses, but also by not overly constraining portfolios when risk is low and opportunity is high.
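As a stylized illustration of that second channel (the risk budget and VaR figures below are invented, and the single-position sizing rule is an assumption, not VecViz’s optimizer), a VaR estimate that relaxes when realized risk falls permits more exposure, while an overly conservative one leaves opportunity on the table:

```python
# Stylized illustration (invented numbers): size exposure so that
# exposure * VaR stays within a fixed risk budget. A VaR model that relaxes
# in calm markets permits more exposure than one that stays conservative.
RISK_BUDGET = 0.02          # max tolerated 1-in-20 (95%) loss per period

def max_exposure(var_estimate, budget=RISK_BUDGET):
    """Largest position weight whose estimated VaR fits within the budget."""
    return min(1.0, budget / var_estimate)

for label, var in [("calm market", 0.015), ("stressed market", 0.06)]:
    print(f"{label:15s}: 95% VaR {var:.1%} -> max exposure {max_exposure(var):.0%}")
```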
On an individual metric basis the V-Score shines, while VaR is solid but a relative laggard
Our “Summary Results For All VecViz Analytics, Individual Metric Basis” report presents the performance criteria for each individual metric and the related performance. Then, further down the Reports page, there are detailed reports for each metric that provide ticker level and model date detail.
Over the 365 days ended 12/31/2025 the average VecViz metric met 60.6% of its objectives. This was solidly better than the 55.9% of objectives met on average across metrics for the entire 1/31/2022 – 12/31/2025 period considered (denoted as “All”).

The V-Score had the best performance for the period, meeting 68% of its objectives, despite being the core component of the VecViz expected return metric (“Ret_VV”) that was the weakest VecViz contributor to optimized portfolio performance. Also surprising is that the best performer on a portfolio contribution basis, Vol_VV, constitutes 50% of what is the weakest metric on a performance vs. metric objectives basis, VaR (“Vol_VV” is 99% VaR, but we evaluate both 95% and 99% VaR), which met 53% of its objectives. We will look closer at these two categories in the remainder of this blog.
V-Score: leading performer relative to individual metric ideal, despite lagging as an optimized portfolio contributor
Despite being the core of the regime-based expected return metric that was the weakest VecViz metric performer in the portfolio context (“Ret_VV” in the first table above), the V-Score was the strongest performer relative to its individual metric performance objectives over the last year.
V-Score performance metric objectives focus upon internal consistency of the correspondence between V-Score level groupings and Forward Returns, performance relative to the average ticker, and to a lesser extent, performance vs. the SPY. They are detailed below.
Average returns for the categories considered in the criteria detailed above are presented below. None of them closely replicates the exclusively momentum-focused “Ret_T252D”, the alternative expected return methodology to the regime-based variation of the V-Score that comprises “Ret_VV”.

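A minimal sketch of the first of the criteria described above, whether average forward returns line up monotonically with V-Score level groupings, might look like the following. The grouping labels, column names, and data are hypothetical, not VecViz’s scoring output.

```python
# Hypothetical internal-consistency check: do average forward returns line up
# monotonically with V-Score groupings? Groupings and data are invented.
import pandas as pd

data = pd.DataFrame({
    "vscore_group": ["Negative", "Neutral", "Positive", "Negative", "Positive", "Neutral"],
    "fwd_return_21d": [-0.021, 0.004, 0.018, -0.008, 0.025, 0.001],
})

order = ["Negative", "Neutral", "Positive"]
avg = data.groupby("vscore_group")["fwd_return_21d"].mean().reindex(order)
print(avg)

# The check passes if average forward return increases with the grouping rank.
print("internally consistent:", bool(avg.is_monotonic_increasing))
```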
Ticker level V-Score performance drivers vary somewhat by forward time horizon and, of course, by V-Score grouping. The top 10 contributors for the 21d forward time horizon and for the Positive and Negative V-Score groupings during 2025 are presented below, with positive category contributors on the left half and negative category return contributors on the right half.

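For readers who want to reproduce this kind of attribution on their own data, a minimal pandas sketch follows; the column names and the tiny data frame are assumptions, not VecViz’s schema.

```python
# Hypothetical ticker-level attribution: within each V-Score grouping,
# rank tickers by their average 21d forward return. Schema is assumed.
import pandas as pd

df = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"],
    "vscore_group": ["Positive", "Positive", "Positive", "Negative", "Negative", "Negative"],
    "fwd_return_21d": [0.03, 0.01, -0.005, -0.04, 0.02, -0.01],
})

contrib = (
    df.groupby(["vscore_group", "ticker"])["fwd_return_21d"]
      .mean()
      .rename("avg_fwd_return_21d")
      .reset_index()
)

# Top contributors per grouping (top 10 in practice; top 2 here for brevity).
top = (contrib.sort_values("avg_fwd_return_21d", ascending=False)
              .groupby("vscore_group")
              .head(2))
print(top)
```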
VaR, a laggard relative to a (demanding) individual metric ideal, despite leading in terms of optimized portfolio performance contribution
Despite being the leading contributor to optimized portfolio performance by a large margin, VecViz’s Vector Model VaR (95D and 99D combined) was the laggard on performance relative to metric ideal objectives. These objectives demand ergodicity2, a standard rarely attained in quant finance. They also demand not just higher ROVBC3, but alpha in ROVBC as well. In sum, they are pretty demanding.4
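To give a flavor of the kind of coverage testing behind those objectives (footnote 4 references Kupiec and Christoffersen tests), here is a minimal sketch of the Kupiec proportion-of-failures test; the breach count and sample size are invented for illustration.

```python
# Minimal sketch of the Kupiec proportion-of-failures (POF) test for VaR
# coverage. Inputs (breaches, observations) are invented for illustration.
from math import log
from scipy.stats import chi2

def kupiec_pof(breaches: int, obs: int, p: float = 0.01):
    """Likelihood-ratio test that the observed breach rate equals p
    (e.g. 1% for a 99% VaR)."""
    x, t = breaches, obs
    phat = x / t
    # Guard the degenerate cases where the log-likelihood is undefined.
    phat = min(max(phat, 1e-12), 1 - 1e-12)
    lr = -2 * ((t - x) * log(1 - p) + x * log(p)
               - (t - x) * log(1 - phat) - x * log(phat))
    return lr, chi2.sf(lr, df=1)   # test statistic and p-value

lr, pval = kupiec_pof(breaches=6, obs=252, p=0.01)
print(f"LR = {lr:.2f}, p-value = {pval:.3f}")  # small p-value -> coverage rejected
```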
All VaR performance objectives are calibrated relative to an equivalent VaR percentile generated from a “bell curve” based Sigma model. Note that Sigma here is enhanced by weighting daily returns with exponential time decay over a trailing two-year window. In the portfolio optimization with constraints ablation process discussed above, the “Vol_252D” metric also relies on a “bell curve” Sigma-style metric, but it is currently calculated over an equally weighted trailing 252D window. That difference might also drive some of the differential in performance in this metric-specific context vs. the portfolio optimization with constraints context.
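To make that difference concrete, here is a hedged sketch of the two “bell curve” volatility variants; the decay parameter, the Gaussian mapping to 99% VaR, and the simulated returns are illustrative assumptions, not VecViz’s calibration.

```python
# Illustrative comparison (not VecViz's calibration): an exponentially
# time-decayed volatility over a ~2-year window vs. an equally weighted
# trailing 252-day volatility, each mapped to a Gaussian 99% VaR.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.012, size=504)           # ~2 years of simulated daily returns

# Equally weighted trailing 252d sigma (the "Vol_252D"-style variant).
sigma_eq = returns[-252:].std(ddof=1)

# Exponentially weighted sigma over the full two-year window; lam is an
# assumed decay parameter, not a VecViz setting.
lam = 0.97
weights = lam ** np.arange(len(returns))[::-1]        # most recent return gets weight 1
weights /= weights.sum()
sigma_ew = np.sqrt(np.sum(weights * (returns - returns.mean()) ** 2))

z99 = 2.326                                           # one-sided 99% Gaussian quantile
print(f"equal-weight 252d 99% VaR: {z99 * sigma_eq:.4f}")
print(f"exp-weighted 2y   99% VaR: {z99 * sigma_ew:.4f}")
```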

Vector Model VaR handled the April 2025 “liberation day” sell-off quite well vs. “Sigma”

Average VaR Breakage Rates and ROVBC over time:
As depicted in the section above, the 99% VaR is more conservative relative to Sigma at the 21d horizon than the 95% VaR is, and that is reflected in the breakage rates and ROVBC (each stated here on a rolling 20d average basis, for ease of spotting significant differences).

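For reference, a hedged sketch of how such a breakage rate might be computed appears below; the VaR series and forward returns are simulated, and the 20d window simply mirrors the rolling average mentioned above.

```python
# Hypothetical sketch of a VaR breakage rate: the share of observations whose
# realized forward return falls below the negative of the VaR estimate,
# smoothed with a rolling 20d average. Data are simulated for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500
fwd_returns = pd.Series(rng.normal(0.0, 0.02, size=n))
var_99 = pd.Series(np.full(n, 0.045))                 # stand-in 99% VaR estimates

breaks = (fwd_returns < -var_99).astype(float)        # 1 when the loss exceeds VaR
rolling_breakage = breaks.rolling(20).mean()          # rolling 20d breakage rate

print(f"overall breakage rate: {breaks.mean():.2%} (target ~1% for 99% VaR)")
print(f"latest 20d breakage:   {rolling_breakage.iloc[-1]:.2%}")
```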
Ticker Level VaR Breakage Rate Comparison of Vector Model to Sigma:
There are a few tickers for which the Vector Model was too aggressive in its estimation of VaR (ZTS, UNH, CMCSA, AMC) at both the 95%tile and 99%tile. Overall, it was more conservative than Sigma at the 99%tile. More detail, again for the 21d forward horizon, below:

ROVBC Comparison of Vector Model to Sigma:
It could be anecdotal, but it seems that in 2025 Vector Model ROVBC exceeded Sigma ROVBC for large caps more so than it did for mid and small caps. More detail, again for the 21d forward horizon, below:

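The ROVBC comparison follows directly from the definition in footnote 3; a minimal sketch of the calculation, with invented inputs, is below.

```python
# Minimal sketch of ROVBC per the footnote 3 definition: the Vector Model
# return on VaR-based capital scales the price return by Sigma VaR / Vector
# Model VaR, capped at 3.0 and floored at 0.333. Inputs are invented.
def rovbc_vector_model(price_return, sigma_var, vector_var,
                       cap=3.0, floor=0.333):
    ratio = min(max(sigma_var / vector_var, floor), cap)
    return price_return * ratio

def rovbc_sigma(price_return):
    return price_return             # for Sigma, ROVBC is just the price return

# Example: a 2% return where the Vector Model required half the VaR capital.
print(rovbc_sigma(0.02))                                           # 0.02
print(rovbc_vector_model(0.02, sigma_var=0.06, vector_var=0.03))   # 0.04
```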
1. Ablation is a technique where components of a system are systematically removed one at a time to measure their specific impact.
2. Consistency across time by ticker and across tickers by date.
3. ROVBC = Return on VaR Based Capital. For Sigma, it is the price return of the ticker. For the Vector Model, it is the price return of the ticker multiplied by the ratio of Sigma VaR / Vector Model VaR, with a cap of 3.0 and a floor of 0.333.
4. Our VaR performance report also provides detailed results for Kupiec and Christoffersen tests for the entire out-of-sample period.
Conclusion:
As good as the V-Score was this year relative to its performance objectives, it lagged the simple momentum-based alternative in the optimized portfolio performance contribution context. Further, in the context of a portfolio optimized with constraints, the handling of the April 2025 sell-off by Vector Model VaR so exceeded that of the Sigma-based VaR alternative that it was the foremost contributor to optimized portfolio performance, despite being a laggard on the basis of performance relative to metric objectives. Unavoidable disparities in individual metric performance objectives are responsible for some part of the disconnect, as are differentials in the definition of “Sigma” and in the purity of the V-Score signal in the portfolio optimization context vs. the individual metric performance evaluation context.
If you have interest in receiving VecViz analytics such as the ones discussed above via API for delivery into OpenBB please reach out to coyner@vecviz.com. Thanks.