QUANTCONNECT COMMUNITY
The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by QuantConnect. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. QuantConnect makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. All investments involve risk, including loss of principal. You should consult with an investment professional before making any investment decisions.
McCartney Taylor
This is a remarkable strategy. While there are some very obviously optimized parameters in here that could easily lead it to be overfitted, it still has an extremely strong core and can be used as a point of departure for variants. I'm really surprised how few have cloned this; it should be studied. @Rowan Whiteside, my hat is off to you. Brilliant work here.
Eugene Kuzmin
Here are some issues I see:
1. bottom_fraction_history deque is never read. It's populated every rebalance, but never used anywhere. It looks like it was intended to smooth the breadth signal over 3 months before triggering risk-off, but the check uses the raw bottom_fraction directly. This means risk-off can be triggered by a single bad month.
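If the deque really was meant to smooth the breadth signal, the intended logic might look something like this sketch. The 3-month window comes from the description above and bottom_fraction_history is the name from the algorithm; the threshold value and the helper function are illustrative assumptions, not the actual code:

```python
from collections import deque

bottom_fraction_history = deque(maxlen=3)  # last 3 monthly readings
RISK_OFF_THRESHOLD = 0.45  # assumed level, for illustration only

def risk_off_triggered(bottom_fraction):
    """Average the breadth reading over the stored months so a single
    bad month cannot flip the strategy into risk-off on its own."""
    bottom_fraction_history.append(bottom_fraction)
    smoothed = sum(bottom_fraction_history) / len(bottom_fraction_history)
    return smoothed > RISK_OFF_THRESHOLD
```

With this shape, one bad reading of 0.60 after two calm months averages out to 0.30 and stays below the trigger, whereas the raw check would have fired immediately.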
2. Recovery threshold is asymmetric and fragile
If max_stress is exactly 0.45 (the trigger threshold), a 60% recovery only requires stress to drop to 0.18 — but the < 0.15 absolute clause can fire almost immediately after entering risk-off if the stress was marginal. The two conditions aren't mutually coherent.
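For concreteness, here is one possible reading of the two recovery clauses. The function and variable names are assumptions; only the 0.45 / 60% / 0.15 numbers come from the point above:

```python
def can_exit_risk_off(stress, max_stress):
    # Hypothetical reconstruction: exit risk-off once stress has retraced
    # 60% from its tracked peak, or once it is below 0.15 outright.
    return stress < max_stress * 0.40 or stress < 0.15
```

With max_stress at the 0.45 trigger level, the relative clause already allows an exit at stress < 0.18, which is looser than the 0.15 absolute clause; which clause binds depends entirely on how marginal the peak was, which is the incoherence described above.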
3. maximum_stretch is never reset. It accumulates the peak stretch over the entire backtest lifetime, so as time goes on, smoothed_stretch < peak_stretch * 0.80 becomes increasingly easy to trigger, progressively over-scaling down positions in older symbols. The exhaustion rule unintentionally becomes more aggressive over time.
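A minimal illustration of the lifetime-peak issue and one possible fix. The function names and the 0.95 decay factor are hypothetical; only the 0.80 exhaustion ratio comes from the point above:

```python
def update_peak_lifetime(peak, stretch):
    """Running max that never decays: old extremes are remembered forever,
    so the exhaustion test (stretch < 0.80 * peak) keeps loosening."""
    return max(peak, stretch)

def update_peak_decayed(peak, stretch, decay=0.95):
    """Alternative: decay the stored peak each rebalance so that
    long-past extremes gradually lose their influence."""
    return max(peak * decay, stretch)
```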
4. ADX filter excludes trending stocks
This keeps only ranging/weak-trend stocks and discards the strongest momentum candidates — likely the opposite of what's intended for a momentum strategy.
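As a sketch of the filter direction being described (the 35 cutoff is the one mentioned later in the thread; the tickers and ADX values are made up for illustration):

```python
ADX_CAP = 35  # names with ADX at or above this level are excluded

def passes_adx_filter(adx):
    """Keep only ranging / weak-trend names; strong trends are dropped."""
    return adx < ADX_CAP

# Made-up candidates: the strongest trender (CCC) is the one discarded.
candidates = {"AAA": 12.0, "BBB": 28.0, "CCC": 41.0}
kept = [symbol for symbol, adx in candidates.items() if passes_adx_filter(adx)]
```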
5. Band history is reset on recovery, but maximum_stretch is not. During recovery, band_history is cleared per symbol, but maximum_stretch_by_symbol is not. So the exhaustion rule retains memory of pre-crash peaks, while the band ceiling is reset to a fresh state — they're now on mismatched timescales.
6. Universe selection takes the top 100 per sector by market cap, but there is no sector cap on the final portfolio. The selection is sector-neutral at the universe level, but TargetPositionCount = 5 with no sector constraint on the final selection means all 5 picks could come from the same sector if that sector dominates momentum.
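One hedged sketch of how a sector cap on the final selection could work. The function name, the per-sector limit of 2, and the example data are all assumptions; only TargetPositionCount = 5 comes from the point above:

```python
def select_with_sector_cap(ranked, sectors, target=5, per_sector=2):
    """ranked: symbols sorted best-first; sectors: symbol -> sector name.
    Walk the ranked list, skipping any name whose sector is already full."""
    counts, picks = {}, []
    for symbol in ranked:
        sector = sectors[symbol]
        if counts.get(sector, 0) < per_sector:
            picks.append(symbol)
            counts[sector] = counts.get(sector, 0) + 1
        if len(picks) == target:
            break
    return picks
```

With this in place, even if one sector dominates the momentum ranking, at most `per_sector` of the final 5 slots can go to it.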
Eric Cheshier
Eugene Kuzmin have you backtested any of these issues, or is this just AI-assisted issue finding? I ask because I red-teamed it with AI as well and came up with similar conclusions. That said, I have been tinkering with this strategy for a little while and can share some tidbits:
The issue that I see with the strategy is the concentration in 1-2 momentum names for long time periods. Nevertheless, I am deploying my revised version of this strategy to IBKR paper trading posthaste. I think there is some special sauce in this one.
Eugene Kuzmin
Eric Cheshier Indeed, the analysis does involve AI to some extent. Testing with all the fixes applied shows a significant loss of alpha. I would appreciate it if the owner, assuming there is one, could comment on this.
What really concerns me is that it seems impossible to port the strategy to C# without losing the alpha.
On another note, as it currently stands, this is a bi-monthly strategy. In principle, all of the logic from OnDaily() could be moved into Rebalance(). You would just need to use historical data to calculate everything required.
Rowan Whiteside
Some of the flaws that you are pointing out are by design. If you run your own analysis on ADX over the time period, you will see that the stocks you are buying at the beginning of the month with ADX greater than 35 have already run, and the majority of the time they crush your returns. There could be nuances here based on time frame and level, but 35 seemed generic enough across tests.
Rowan Whiteside
I admit there are two things you pointed out that are flaws/issues I am reworking, but the rest are by design. The weighting mechanics were designed to squeeze the weak stocks out of the portfolio; that is what creates the outsized returns, which in turn also creates extreme reversals such as 2018-2019. I have been working through how to adjust this without sacrificing the outsized return potential.
Eugene Kuzmin
Rowan Whiteside, thank you! Also, while not a flaw, getting rid of OnData() in favor of Rebalance() would be appreciated.
Kryptonite
Rowan Whiteside Hi Rowan, thank you for this strategy.
It is very promising; however, I have noticed a major problem with rebalancing: depending on the start date, the strategy returns different stocks at rebalance.
For instance, if you set the start date to 2025, 2, 15, the strategy will return SATS 100% for 2026-04-30.
However, if you set the start date to 2026, 2, 15, the strategy will return MU: 10.0%, SATS: 10.0%, NBIS: 10.0%, FIX: 10.0%, DD: 10.0%, GLW: 10.0%, VRT: 10.0%, ALB: 10.0%, LRCX: 10.0%, UI: 10.0% for the same 2026-04-30.
It must have to do with how data is accumulated over time. I was trying to play around a bit, but couldn't figure out a fix yet.
If you come up with a solution, I would appreciate if you could provide it.
Thanks again for this strategy.
Eugene Kuzmin
Kryptonite it will be resolved once the necessary calculation moves from OnDaily to Rebalance.
Rowan Whiteside yes, so it is another flaw that would be really good to fix. I already have a fix if you'd like.
Rowan Whiteside
I will push the patch this afternoon.
The differences in selection are based on band history. Not sure y'all can visualize what is happening in the algo itself, but if the band history is not long enough you will get very different outcomes. Based on your example this is expected: the band stretch needs plenty of history to work with, whereas only two months is not really enough.
Rowan Whiteside
The first patch is out, I am going to work on cleaning up the code a bit more. Let me know if you have any other tweaks/suggestions.