Dual Momentum with Out Days


Inspired by T Smith's idea to implement Gary Antonacci's dual momentum approach to ETF
selection in the "IN OUT" strategy.

-The execution code has been completely changed to keep leverage under control and avoid
 insufficient buying power warnings.
-To calculate returns I used the industry-standard momentum with an exclusion period.
-Modified the components to be more in line with the strategy.
-The IN OUT part of the strategy has not changed except for some cosmetics
 to make it more readable for myself.

"DUAL MOMENTUM IN OUT" nearly doubled "IN OUT" Net Profit while maintaining risk metrics at the same level.


Here is my second version of "DUAL MOMENTUM-IN OUT".





Hi Vladimir

Good to see you again here... just an off-topic question. I arrived from Quantopian as well and am trying to learn by porting an algo from zipline. I did the tutorial, and the framework they are using looks far better than Quantopian's, but the warm-up and other stuff is a bit nerve-racking...

Do you think it would be difficult to move the algo above into the QuantConnect framework concept?

# Universe Model
# Alpha Model
# PortfolioConstruction Model
# RiskManagement Model
# Execution Model

I have already translated an algo with pre-set tickers, filtering, and ranking; if you are interested, I don't mind posting it here. It still has some issues with warm-up.

For me alone it's quite hard, but I know that you are a very experienced programmer.

Moving to the framework, it might be a lot easier to contribute, as one only needs to improve some modules rather than touch the whole code, and one can reuse a lot from other algos.


Hi, Carsten,

For zipline, you can skip warm-up and consolidation because they are built-in features.


Hi Carsten,

if you are interested, I don't mind posting it here

I have created the thread Transition from Quantopian to QuantConnect.
You can post it there.


Great work. By adjusting the tradeout scheduling and using hourly rather than daily resolution for our signal instruments, I've managed to reduce drawdown.


T Smith

Thanks for the improvement.
"Many a little makes a mickle".
There was a typo in the code on line 154.
I deleted the second "if".


In this backtest I used the original T Smith setup:

IN and OUT assets determined by momentum (IN: QQQ/IWF, OUT: TLT/IEF), 100 day simple return

Net Profit 2097.037%; Sharpe Ratio 1.743; PSR 97.247%; Drawdown 17.900%; Beta 0.026;

It also sufficiently beat the setup of Peter Guenther.

IN: QQQ:1, OUT: TLT:0.5 + IEF:0.5
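As a rough sketch of the selection step, relative momentum between each pair could look like the following (the helper functions and toy data are my own illustration, assumed rather than taken from the algorithm):

```python
import numpy as np

def simple_return(prices, period=100):
    # simple return over `period` bars: last close over the close `period` bars ago
    return prices[-1] / prices[-period - 1] - 1.0

def pick_by_momentum(candidates, period=100):
    # relative momentum: hold whichever candidate has the higher simple return
    return max(candidates, key=lambda name: simple_return(candidates[name], period))

# toy price histories (101 bars each) standing in for real ETF data
rng = np.random.default_rng(42)
qqq = np.cumprod(1 + rng.normal(0.001, 0.01, 101))
iwf = np.cumprod(1 + rng.normal(0.0005, 0.01, 101))

in_asset = pick_by_momentum({"QQQ": qqq, "IWF": iwf})  # the asset to hold while "IN"
```

The same comparison would run over TLT/IEF while the regime signal says "OUT".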

Great job, Vladimir and T Smith.


Hello Vovik,

Did you fix the issue with the original T Smith algo? It has many errors in the log, with invalid trades due to margin violations. I believe Vladimir has fixed those issues, but his version trades different instruments.


Yeah, Nathan Swenson, he is using Vlad's corrected trade logic to ensure sells are placed before buys and leverage doesn't go beyond 1. He is also using the updated scheduling and resolutions. This gets us out of the market quicker, but avoids over-trading/rebalancing whilst 'IN'.


Nathan Swenson,

In this backtest I have no margin violation warnings. 

Here is the full log.

Algorithm Initialization: Data for symbol DBB has been limited due to numerical precision issues in the factor file. The starting date has been set to 1/4/2007.
Algorithm Initialization: Data for symbol UUP has been limited due to numerical precision issues in the factor file. The starting date has been set to 2/19/2007.
Algorithm Initialization: Data for symbol DBB has been limited due to numerical precision issues in the factor file. The starting date has been set to 1/4/2007.
Algorithm Initialization: Data for symbol UUP has been limited due to numerical precision issues in the factor file. The starting date has been set to 2/19/2007.
Algorithm Initialization: Warning: when performing history requests, the start date will be adjusted if it is before the first known date for the symbol.
2008-01-01 00:00:00 :    Launching analysis for b3e5c2897bc5099bb64e69aa0ecedcca with LEAN Engine v2.
2007-08-28 10:00:00 :    Algorithm warming up...
2008-01-01 00:00:00 :    Algorithm finished warming up.
2020-12-11 16:00:00 :    Algorithm Id:(b3e5c2897bc5099bb64e69aa0ecedcca) completed in 631.42 seconds at 3k data points per second. Processing total of 2,197,238 data points.

Perhaps the problem is something else.


Vladimir regarding your comment on using more than one indicator to trigger the out of market. I have had a go at implementing it here:

Would appreciate your feedback!

Nathan Swenson this seems to solve the issue of SHY taking us out the market without another signal.


That's really great T Smith I am not a huge fan of FDN or QQQ but this strat looks great!  Quick question...

Are these the only changes needed if you don't want 5 signals for the IN/OUT? Say 4 or 3, for example...

        pctl = np.nanpercentile(mom, 5, axis=0)

        if self.no_signals > 5:


A great algorithm. We will test it on a large scale, and then leave you a review.


This strategy, like its predecessors in the other 2 or 3 “...In Out...” threads, is totally dependent on its up/down trend declarations. And the first question to ask should be: How reliable is this trend definition?

I see the strategy as a variant to a 60-day over a 252-day moving average crossover to bond switcher, but still responding to daily price variations. The advantage goes to being long for longer periods than being in bonds. And this translates - in an upmarket - to taking a higher average percent profit than when it loses a trade (about 2:1). And there, even with a hit rate in the 50s, you are bound to make some overall profits just for playing the game.


The above chart illustrates where I'm at presently. There was some leveraging (1.4x). Not excessive when compared to the 89.86% CAGR. The strategy could more than afford to pay the leveraging fees. 3x-leveraged ETFs were used which made the strategy behave the same as if leveraged by 4.2x. The initial capital was set at 100k as most here. Increasing the initial capital to 1 million did increase performance about 10-fold since the strategy is fully scalable.

To get there, I modified a few things. One of interest might be the following section:

mom['S_G'] = (mom[self.SLV] - mom[self.GLD])*0.0
mom['I_U'] = (mom[self.XLI] - mom[self.XLU])*0.0
mom['A_F'] = (mom[self.FXA] - mom[self.FXF])*0.0

which flatlined most of the momentum signals leaving only UUP. Why do this? It resulted in higher profits. It also said that those signal components were not needed or could be considered as irrelevant to the task at hand. There is a lot that could be said about this strategy's strengths and weaknesses.


Hello Vladimir and all,

I see your many contributions, and though I am still a complete newbie, thank you for sharing all these invaluable insights and your experience online for everyone to learn from and contribute to (me too hopefully, one day).

I will need to spend several hours to understand your code and the logic behind it (it would be great if there were comments in the code explaining the logic =D), but I have some questions around lookback bias in the meantime. This strategy is almost too good to be true: what are some of the items in here that may have led to over-optimization? I see Guy Fleury's expert critique on the MA crossover periods being defined as fixed values; what are some of the other things? Would you say that the asset selection could also be biased?

Thank you for your teachings!



Guy Fleury,

I am aware of your incredible ability to send any profitable strategy into the sky since we
discussed Andreas Clenow's momentum in 2017.

It's no secret, just simple math, that if you use 3x instruments and apply 1.4x leverage to
a strategy with a 30% CAGR, you should get much more than 150% CAGR.
You got 89.86% CAGR because these 3x instruments only started trading in 2010.
The decision to use 3x instruments and leverage greater than 1.0 is usually made
at the last stage of strategy development.
Before that, we should use 1x instruments and 1.0 leverage for comparability of results.

I can also see that you are starting to improve the strategy in the right direction by
cutting half the sources.
Hopefully I can see results where you apply pressure points and other tools from your


Hi Guy Fleury 

Great to see you in the discussion. I have played with using just the dollar as our signal; it decreased the Sharpe considerably, whilst bringing beta to 0.5.


Vladimir, totally agree. Best to build an algo with no leverage and just look to improve the Sharpe ratio. Leveraging later is then an option. Have you had much of a look into the concept of requiring more than 1 signal to exit the market?


T. Smith,

I don't have a solution right now, but I definitely want to either change the decision logic or reduce the number of sources.
Two months ago, this algorithm had only 4-6 sources and far fewer parameters.

I have played with using just dollar as our signal.

USD in this strategy has a negative sign.

mom[self.UUP] = mom[self.UUP] * (-1)


@Vladimir, your version of this trading strategy was already tested with 1.0x leverage. My task is to find out if it can scale up and do more, including to see if it can support leveraging in order to find the strategy's operational limits. Something I need to know before the end of the testing process. Otherwise, I will be wasting my time if those limits are found nearby, later, or much worse, after going live.

The way we test trading strategies, they have to generate all their profits from within, no matter what trading methods we use. There are no external funds added during a simulation. The tools we have are part of better stock selection, better timing, better gaming, better process amplification, better trade modulation, better protective measures, etc...

Leverage can be used over selected time periods where you anticipate for whatever reason that it will increase the potential return on your trade. It is like saying: you do not use it all the time, but can use it where you think it might count.

Nonetheless, it is a CAGR game, and basic math will matter. You can apply limited leveraging when your added alpha exceeds leveraging costs (here, the alpha being the excess return over the market average).

You want to know the strategy's limits before your “feasibility” simulation study ends. I have looked at many strategies that do not pass these basic tests. I usually lose interest quite rapidly and expedite their journey to file 13.

Here is a fun observation. After flatlining 3 of the signal components, we are left with a minus UUP. The thing is, if you reverse this signal to +UUP, you would expect the strategy to lose a lot of money. But if you do this, you get about the same profits, within 0.5% of each other in my 1 million initial capital scenario. Doesn't that put into question the "real" value of that signal? Note that you still get a little more with the negative signal (-UUP).

What kind of signal, whether negative or reversed, can produce about the same results? I would venture one where it does not matter which side it is on.

How about flatlining UUP, leaving the strategy with no switching signal and no trend definition? What would be the result? Such a move reduced overall profits by almost half (-49%), but still managed a +91.25% CAGR compared to the 98.73% and 98.49% CAGR with -UUP and +UUP respectively. Therefore, UUP's presence was worthwhile, while its predictive power was nil, since you made about as much whether UUP declared the trend up or down.


Guy Fleury,

This strategy yields good results, but I don't like just one word in the code in line 124,
and that word is any()

        if (extreme[self.SIGNALS + self.PAIR_LIST]).any():

Just imagine that you are the president of the United States.
You invited the top twelve generals of the country to help you make decisions whether to start
a war with country x or not.
Eleven generals said no, but one said yes.
What will be your decisions?
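Vladimir's objection contrasts any() with a consensus rule; the difference is a one-line change. A hypothetical sketch (the signal series below is invented for illustration, not the algorithm's actual data):

```python
import pandas as pd

# hypothetical snapshot: twelve "generals" (signals), only one of whom says danger
extreme = pd.Series([False] * 11 + [True],
                    index=[f"signal_{i}" for i in range(12)])

exit_on_any = bool(extreme.any())     # the original any(): one dissenter triggers the exit
exit_on_quorum = extreme.sum() >= 7   # alternative: require a majority (7 of 12) to agree
```

With a single alarmed signal, the any() rule exits the market while the quorum rule stays in.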


Getting out of the market early is a conservative move rather than an aggressive one such as going to war. I think the point of it is to take the least risk possible. The outer 1% of the distribution is a very high standard to meet, so I can understand why Peter made it that way. If you want multiple confirming signals, then you likely can't use 1% outliers.

The In and Out does very well with 3x leveraged funds because it is overly cautious, generally exiting the market too early, but safely for the most part. So while you don't get all of the move, you could perhaps take greater risk for the shorter period you are in. The "jitter" from only 1 signal appears valid, as the Out holdings have done well, at least in the in-sample data we've tested. That being said, it's difficult to watch this market zoom higher while sitting in the Out holdings since 10/6. In reality, this is our first real out-of-sample data and it's not looking good so far, but who knows what happens in the coming weeks. Everyone is predicting all-time highs. We shall see.


Removing SHY from 2020 makes 2020 (and possibly beyond) trade and perform normally, until such time as we can comment the SHY indicator back in.

If we are trying to "ace" the backtest, then keeping SHY in always looks good, until of course late 2020, when the algo stops trading in October.

But credit where credit is due... T Smith's multiple-signals (5 of 8) approach still did well with the Qual-Up universe approach from 2014-2018 as well. During those years, by commenting out SHY in the original IN/OUT algo, the Qual-Up stocks didn't do well; i.e., QQQ would have done well regardless during 2013-2018, because tech has been on a tear the whole past decade.


Here is the updated DUAL MOMENTUM IN OUT v2.1.

I have changed line 97 to:

    prices = self.History(symbol, period + excl, Resolution.Daily).close
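With period + excl bars of history, the exclusion-period momentum can be computed simply; the helper below is my own illustration of the idea, not the algorithm's actual code:

```python
import numpy as np

def momentum_excl(prices, excl=21):
    # given period + excl daily closes, measure the return up to `excl` bars ago,
    # skipping the most recent `excl` bars (the classic "skip-month" momentum)
    return prices[-excl - 1] / prices[0] - 1.0

closes = np.linspace(100.0, 150.0, 252 + 21)  # toy history of period + excl bars
m = momentum_excl(closes)                     # return excluding the last 21 days
```

The exclusion window is meant to sidestep short-term reversal effects, which is why the history request needs the extra excl bars.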


Thx, nice one.


@Vladimir, I go with Nathan's explanation. The strategy goes to the sideline at the first sign of trouble. You do not want to wait for a consensus since you are already dealing with ETFs.

Trading QQQ is like trading a market-average surrogate. It holds the same shares as the NASDAQ 100 index, as everyone knows. The top ten holdings (AAPL, MSFT, AMZN, TSLA, FB, GOOGL, GOOG, NVDA, PYPL, ADBE) account for 55% of QQQ. It should be viewed as one of the easiest stock selections you can make. Playing QQQ tends to dampen overall volatility, while using TQQQ puts volatility back into play at a higher level, with an expected beta of 3.0x. Therefore, you are playing QQQ on steroids, which evidently brings in higher risk. Hence the "extreme" caution, even if there is a cost to it.

This is not a game where we will fix things after we lose. So, we should first play safe whatever the performance level we are at. We might need to compromise like playing this strategy at a higher level but with other strategies in order to reduce overall volatility and drawdowns, or only use part of the available capital (say 10 to 20% as if on riskier assets).

In my previous post, the point was made that you could drop some of the signal components (3 out of 4) and it would increase overall performance. Well, here is another point of interest: self.INI_WAIT_DAYS. I see its use as a way to reduce whipsaws around the moving average crossovers. The original code has it at 15 trading days. No one questioned this as it was a reasonable assumption since there are indeed a lot of whipsaws near those crossovers. Removing it, for instance, making self.INI_WAIT_DAYS = 0, dropped performance considerably and thereby justified its use.

In my version of the program, if you set it to zero, you get a 62.68% CAGR. If you keep it at 15, you have a 97.84% CAGR. If you set it to 10, you get about the same result (97.82%). However, if you set it below 5, to something like 2 or 1, you improve the picture considerably. The economic reasoning is simple. The wait days operate on a fast-decaying function: e^(-0.5t). It might also suggest that whipsaws fade away rather quickly near the crossovers. Also, by reducing the wait days, you are increasing the number of days the strategy is fully invested.
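A quick look at Guy's e^(-0.5t) decay supports the short-wait conclusion:

```python
import math

# weight of a crossover whipsaw t days after the signal, per the e^(-0.5 t) decay above
weights = {t: math.exp(-0.5 * t) for t in range(6)}
# the weight drops below 10% of its initial value within five days,
# suggesting long waits (e.g. 15 days) add little beyond the first few
```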

The table below shows the evolution of the strategy where only the wait days are changed from 15 to 0. I think that the chart speaks for itself. Changing a single number in the program can have a tremendous long-term impact. Note that this is close to a 6000% improvement going from zero wait days to one. Nothing else in the program was changed for these tests.



Guy Fleury,

Looks like I saw a spreadsheet like the one above two months ago.
The only difference was Quantopian instead of QuantConnect.
But that non-optimized strategy had a completely different decision-making structure:
-Consensus of individual signals.
-Far fewer degrees of freedom.
-Three times fewer sources of information.
-Three times fewer variables.
-Static parameters.

Something like the one below.
In terms of total return, it exceeds the latest In_out_flex_v5 2020-12-16.

BTW: What will be your decisions?



Hi there,

some comments regarding the trigger for in or out:

    if (extreme[self.SIGNALS + self.PAIR_LIST]).any():

  • There are several "regime scouts" that watch out for danger. The beauty of this algo is that danger is memorized for some days.
  • They indicate danger if they observe something extremely unusual, i.e. less likely than "np.nanpercentile(mom, 1, axis=0)" = the lower 1% percentile of the data.
  • If there are "self.lookback = 252" observations, this roughly translates into fewer than 2.52 observations; in other words, the current signal observation is the worst or second worst within the time period. This is a bit digital, i.e. not smooth.
  • This makes the algo very sensitive not only to the 1% but also to the 252 observations, which should not be the case. So we need more observations, and then we can improve both the 1% threshold and the number of "regime scouts" that we want to use in our algo.
  • As these are only "regime scouts", which merely trigger our real algo logic (e.g. "long the 100 largest companies on Nasdaq" or anything else), I don't consider tuning these more or less independent scouts as overfitting.

What if we (a) kept calculating daily returns for our signals, but (b) did this every hour with a rolling 24-hour window? This should give us 24 times more observations, i.e. increase our resolution, allowing us to optimize the 1%, the lookback period, and the "any" threshold until we get some redundancy from our scouts. Keep exploring ;)...
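The scout mechanics described above can be reproduced standalone; the sketch below uses synthetic data and my own variable names, roughly mirroring the np.nanpercentile line quoted earlier:

```python
import numpy as np

rng = np.random.default_rng(7)
lookback = 252
mom = rng.normal(0.0, 0.01, size=(lookback, 4))  # synthetic: 252 daily obs x 4 scouts

pctl = np.nanpercentile(mom, 1, axis=0)  # lower 1% threshold per scout (~2.5 obs of 252)
extreme = mom[-1] <= pctl                # is today's reading in each scout's worst tail?
go_out = bool(extreme.any())             # the current rule: one alarmed scout is enough
```

With only 252 observations, the 1% threshold is effectively "worst or second-worst day of the year", which is why the result is so discrete; more observations (e.g. hourly) would smooth the threshold estimate.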


Here is the updated DUAL MOMENTUM IN OUT v2.2
-Based on In_out_flex_v5_try.
-Used exponential-like smoothing on lines 116-121.
-Line 120 is commented out.


@Vladimir, yes, as you say: "...the strategy had a completely different decision-making structure". You improved on that strategy design since... thanks.

Changing the number of wait days (self.INI_WAIT_DAYS) in the program is more like an administrative decision. The idea is not bad since we know there will be some whipsaws at crossover times. However, there was no need to wait more than one day or maybe two at the most.

It is not that surprising an observation. We want security, to be decisive, and not to be clobbered by added trading costs from whipsaw after whipsaw for days after an exit; and yet, this says do wait, but for at most one day and probably no more.

Such a small decision with such an impact. You change a single number in the program from 0 to 1 and it increases performance by 5910%!


@Vladimir, Do you have one like that for FOREX?



Do you have one like that for FOREX?

Not yet, but you can try yourself.


Some added notes. This trading strategy has shown that it could go quite far depending on some of its parameter settings, ETF selection, trading logic, and initial capital. Using 10k, 100k or 1 million is an administrative decision. The program will do its job either way since it is scalable (but up to a limit). It is a simple bond switcher based on QQQ, but it has interesting properties.

The max drawdown and overall volatility will be the same with either capital option. All you will be changing is the ongoing bet size. This will barely change the price at which a trade is executed, but it will change the traded quantity. Increasing the capital ten-fold will increase the bet size ten-fold and, in turn, increase profits (losses) ten-fold. However, going for 10 million as initial capital will tend to make the strategy unfeasible, since you might end up trading 175,000,000 shares of TQQQ on practically a weekly basis, which is more than the average daily volume. So, there are practical limits to the strategy which will need to be addressed.

With 100k you can push the strategy beyond 1 B and with 1 M you can pass the 10 B mark. Almost incredible. However, this is achieved by taking on more risk, using 3x-leveraged ETFs which are also leveraged at 1.4x. Thereby pushing on the machine way beyond what the original design was. Of note, changing the wait days (self.INI_WAIT_DAYS) to 1 had a tremendous impact on overall performance, a real game-changer, and yet, just another administrative decision.

I have not touched risk reduction procedures yet. This comes at a later stage in my testing process. It is expected that by installing protective measures the overall performance will be reduced to some extent. But, I will know that after those measures are added. Meanwhile, I have other tests to make.

My version of this program deals with a 3x-leveraged QQQ surrogate (TQQQ). It is playing an index tracker, but with 3x the average market beta, meaning it swings more than QQQ, which is itself an average market consensus equivalent.

In pushing further, the strategy reaches a performance plateau from which it starts breaking down. It does not blow up mind you. It simply trades less and less and thereby generates less and less suggesting not to go that far. But that should be expected. Knowing that the strategy has seen its own built-in structural limits, it is almost time to apply protective measures and scale it down to a more acceptable risk/reward level.

Here is my take. You PLAY the game for its long-term CAGR potential. Which trading methods will give you the highest return within your own trading constraints? Not somebody else's, but your own. We need to answer the question: will we accept 5% more on a temporary max drawdown for 5% more in CAGR? The decision has value and is based on the initial stake:

10k ∙ (1+0.30)^20 - 10k ∙ (1+0.25)^20 = 1,033,135

100k ∙ (1+0.30)^20 - 100k ∙ (1+0.25)^20 = 10,331,346

1M ∙ (1+0.30)^20 - 1M ∙ (1+0.25)^20 = 103,313,464

This should weigh in the evaluation of your acceptable risk/reward scenario. It can be a costly decision.
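The compounding figures above can be verified directly:

```python
# verifying the compounding arithmetic above: 5% extra CAGR over 20 years
for stake in (10_000, 100_000, 1_000_000):
    diff = stake * 1.30 ** 20 - stake * 1.25 ** 20
    print(f"{stake:>9,} -> {diff:,.0f}")
# matches the 1,033,135 / 10,331,346 / 103,313,464 figures quoted above
```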


Here is the updated DUAL MOMENTUM IN OUT v2.3

Based on In_out_flex_v5_disambiguate_v2.



Great idea, Vladimir! I was working on a similar update to the IN OUT algo, but you beat me to it! :P

Have you also tried decreasing the "decay" value for the SELF.WAIT_DAYS variable further, reducing the waiting to increase Sharpe and return? (My guess is probably yes, right?)



Vladimir, as you requested... :) It was a bit tricky; I'm just happy to get it running as a multi-AlphaModel. It's a super-simplified version, but you can easily upgrade it. In the end it has many more lines than the normal version. It was quite tricky to get the signal into the two AlphaModels; in the end I used ObjectStore. If someone has a simpler solution, e.g. with a global variable, please comment.




@Vladimir, I like the behavior and equity line of version 2.3. Remarkable, and great numbers. I will try to find some time to look at it since I think there are things I will learn in the process. Thanks for sharing.


Guys, I really fell in love with this strategy (I actually started following the thread on Quantopian) and so ran some additional backtests taking into consideration several 5-year periods. 

The strategy really shines during the 2008-2012 timeframe and then again in 2020. That's how it delivers a 30% annual return. Take any other period of time and it barely matches the returns of holding QQQ: I literally just finished a backtest between 1-1-2013 and 12-31-2019, and it's underperforming by nearly 10% overall.

If one substitutes QQQ and FDN with S&P 500 equivalents, the same behavior can be observed; actually, returns are even worse.

Am I the only one experiencing this?


Just tweaking the waiting variable with a bigger decay, as suggested above, to get slightly better returns and Sharpe :)


Vladimir, could you please check again; it should work now. The ObjectStore object was not created in Initialize, but it was on my disk yesterday from when I was figuring out how to implement it...


FYI, to make this more robust: these same arguments were brought up in the old Quantopian thread.

Not sure whether this is implemented here or not.

There should be a 3rd option, an ultimate out, where it just goes into cash, or gold could be added as a 3rd/4th asset to rotate into.

Given that, at some point in time, stocks and bonds might break down together.


Damiano Bolzoni, I don't understand why people only focus on returns and ignore things like drawdown. Yeah, you can buy and hold QQQ and possibly get the same returns, with 40-60% drawdowns.


Elsid Aliaj, you are totally right. People should take drawdowns into account too. So... do you know what the worst drawdown for QQQ was between 1-1-2013 and 12-31-2019?

I can give you a hint... far below 40%.


Some basic questions for any trading strategy: Did it survive? Did it at least beat market averages? Will it survive going forward? How far can it go? How much money can it “literally” print? We simulate over past data to gain some insight into what could be done to answer those questions.

The structure of a trading program is simple: it is a single do-while loop. While the clock is still ticking, initiate acyclical long(short) positions for whatever reason you might have, close them out later at a profit or loss and cumulate the results.

Most of the programs I see have scheduled trading times (like: self.Schedule.On(self.DateRules.EveryDay()...) or whatever. Such a command does not ask: is there a profit to be had? It just executes and comes back with the results, that they be good or bad. Meaning that initiating such a daily routine is at the heart of what is supposed to help you outperform long-term market averages. We definitely need more than that...

The following chart is based on Vladimir's version 2.3. Changed a few things around including some of its structural design and trading procedures. It is based on 3x-leveraged ETFs (TQQQ, TMF) which in turn were leverage modulated around 1.4x. The initial capital was set to 1M with a start date of 2010-3-1 to make sure both ETFs were active. Data on QQQ prior to 2010 could have been used to demonstrate the benefits of trading a market average tracker. Trading TQQQ in this strategy amplifies QQQ's beta by a factor of 3. It also increases volatility. This is compensated by higher returns.


The strategy had an 89.515% CAGR, more than enough to pay all the added leveraging fees. However, it has a 53% hit rate, which makes the strategy perform close to a slightly biased coin. It also comes close to the historical upside bias experienced in stocks over the long term. It is like saying that you are not getting more than what the market is giving to everyone else. Nonetheless, the return is impressive, and there has to be some reason why. For one, the average win per trade was 17.88% while the average loss was -7.53%. That is enough to explain the outperformance, even if the hit rate had been 50%.

The strategy could be considered another variation on a theme (see the different versions of In & Out strategy). I applied some of the same techniques in this version as I did in others. The above chart does illustrate that it can be done.


What an interesting (and educational) discussion. Thanks to all for sharing your work and knowledge.

I'd like to learn more of the background, as it seems this is a continuation of a prior thread.

Where can I read more about what this dual momentum strategy actually is? IE: a high level summary on how it works. Is there a previous post somewhere? 





As a follow-up. Would you have the nerves to hold on for more?


The above chart required pushing on the machine and accepting more volatility which translated into a higher max drawdown, meaning higher risk. But with it came higher returns as illustrated above.

You have a basic template for a trading strategy. It buys and sells according to instructions. And it shows some built-in alpha. The question then becomes: is this excess return sufficient to support leveraging costs? It goes with another related question: can you increase that alpha further by some means or other?

When I design or modify a trading strategy, it is always based on the mathematics of the “game”. The more your program trades, the more it is faced with probability and statistics. Like you do not win every trade, you are not right all the time. You win some and you lose some. We all accept that, but we usually do not want to accept its accompanying conclusion. We are not that good at predicting where stocks are going or how much they will rise or fall. So we design trading “excuses”. Something that will trigger trades (in and out) on our behalf.

You play QQQ. It is like playing 100 stocks where the average outcome is QQQ. All you can get is QQQ's return while you are holding it. Based on the metrics of the above chart, the program switched in and out 100 times. It had a 59% hit rate with an average win of 14.45% and an average loss of -7.43% per trade. Numbers pointing to an overall positive return.
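Those per-trade statistics imply a positive expectancy per switch; a quick check:

```python
# expected percent gain per switch from the quoted stats:
# 59% hit rate, +14.45% average win, -7.43% average loss
expectancy = 0.59 * 14.45 + 0.41 * (-7.43)
# roughly +5.5% per trade before costs, consistent with the overall positive return
```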

A hundred times, over the trading interval, the program jumped to bonds at the first sign of potential turmoil. That is a lot more than needed or required. But, you could not know which of those would save you from real damage especially since you were leveraging whatever the outcome might be. You were protecting yourself a lot more often than needed. But this should be considered as part of the cost of being in the trading business.

You could say that 90 times the switching to safety was not needed. However, it is that switching around, technically taking chunks of QQQ's return, that helped the strategy exceed the result of the prior posted simulation. Both simulations used TQQQ and TMF as primary assets. Leveraging stayed the same at 1.4x. The other modifications seem to have proven themselves worthwhile or at least worth studying in more detail.

For those wishing to start with a lower initial capital, the strategy is scalable down.


T. Smith,
I found a solution.
Here is the updated DUAL MOMENTUM IN OUT v2.4

It uses "Intersection of ROC comparison using OUT_DAY approach by Vladimir" for In-Out, which has
three pairs as a source, two parameters, and a consensus of all three for the exit signal.
Changed Dual Momentum parameters to RET = 252; EXCL = 21;

Happy New Year to all



I am absolutely loving this thread, and have already learned so much! Thank you all for your incredible work.

I have a question regarding "poor man accounts" (ie: those with less than 100k). When I run backtests with a smaller account size (under 10k), everything runs smoothly. However, I can't understand how that is, as with such a small account and not utilising leverage I simply can't afford the 100 SPY shares. If I tried to deploy this live, nothing would happen as I don't have sufficient equity. How would one scale this down to work with whatever equity is on hand? (I'm still learning, so forgive me if all of this is painfully obvious.)

Again, thank you to all the Quantconnect community members who are happy to share their experience with the rest of us. We really do appreciate you taking the time to educate us. Happy new year!


My apologies! I see where I'm going wrong! It's not 100 shares of SPY, it's 100 minutes after opening. Ahhhh!

Thank you all for not pointing and laughing at the new guy.


Scales well with 2x leverage. Can't seem to find any 3x leveraged equivalent ETFs for FDN and TLH for a 3x test.


The following charts show a scaled-down version to 100k and a 1M version of my variation on this theme. Both operate on the same principles and conditions as the two prior charts. They show that raising the initial capital back to 1M simply added a zero to the equity, as displayed in the second chart below. Now, that final equity number is a big number. Obviously, this is pushing toward the limits, but still not beyond, since the strategy did not blow up and could do even more if requested (one path is to increase a single number).

100k as initial capital


1M as initial capital


The performance difference, compared to the simulation in the previous post, comes from trading more for a higher average win per trade, which came in at 15.70% compared to 14.45% in the prior test. This might not appear as much, but trade by trade, the added profits compound repeatedly, and it does make a difference even if the win rate was down a little to 56% versus 59% in the previous simulation.

Used @Vladimir's version 2.3 as template. Added and discarded stuff, changed things here and there, all relatively easy to do. Most of it to force the strategy to trade more and at a higher profit margin. I pushed on the machine to reduce inter-trade delays while using modulated leverage in order to find the strategy's trading limits. I think they are in sight. It is where I usually add more protective measures knowing the overall return will be reduced. Afterward, I will add more gaming procedures.

The changes made are ordinary. Yet, we have the charts above. There is no magic there. The modifications were incrementally small and not drastic changes. Nonetheless, the code was considerably altered. These changes did respect the math of the game. If you increase by any means the number of trades and the average profit per trade, overall profit will simply rise.

Due to the very structure of this trading strategy, it technically cannot go bankrupt. Its bet size is proportional to the ongoing equity. When the strategy loses money, the bet size is reduced, and likewise, when the strategy makes money, bets get larger. Notwithstanding, you could still lose most of the strategy's equity to the point where no trade could be executed due to insufficient remaining funds. All you would lose would be the money put into the strategy.

So, how would you play this thing?

Take 10% or 20% of your portfolio and put it in this strategy as the high-risk part of your portfolio. Keep the rest for a more mundane portfolio with an expected secular outcome (say ~10% or better). Buy something like QQQ or SPY and hold for the duration, it should give you something near the expected long-term market average.

The Math For Such A Scenario

First, consider you lose your speculative bid. Net result: 90k ∙ (1+0.10)^20 + 10k ∙ 0 = 605,475, which is equivalent to 100k at 9.42%. The effective portfolio risk should be valued at this 0.58-percentage-point reduction, thereby risking less than 1% of your long-term expected market return. Technically, you risk not making 67,275 over those 20 years, which you were also at risk of not making anyway.

You win your bet, but at a lesser rate than the chart above (it is at 125.95%). You get your 10% on 90k plus the outcome of your long-term speculative bet. The math:

90k ∙ (1+0.10)^20 + 10k ∙ (1+0.80)^20 = 1,275,429,097. That is equivalent to a 60.43% portfolio CAGR for the 20-year period. If your bet turns out at a higher rate, say 0.90, then the math says: 90k ∙ (1+0.10)^20 + 10k ∙ (1+0.90)^20 = 3,759,602,821 which is a 69.34% equivalent CAGR on the $100k invested in this portfolio.

For the 80/20 mix, you would get: 80k ∙ (1+0.10)^20 + 20k ∙ 0 = 538,200 for the losing scenario. This is equivalent to an 8.78% CAGR on your 100k. Whereas, with a winning bet:

80k ∙ (1+0.10)^20 + 20k ∙ (1+0.90)^20 = 7,518,532,892 or a 75.31% overall portfolio CAGR.
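The scenario arithmetic above is easy to reproduce. A quick sketch; the 10% core CAGR and the sleeve CAGRs are the post's assumptions, not forecasts:

```python
# Terminal wealth of a core/sleeve mix after `years` of compounding.
def mix_outcome(core, sleeve, core_cagr, sleeve_cagr, years=20):
    return core * (1 + core_cagr) ** years + sleeve * (1 + sleeve_cagr) ** years

# CAGR equivalent on the whole initial 100k.
def equivalent_cagr(final, initial=100_000, years=20):
    return (final / initial) ** (1 / years) - 1

lose = mix_outcome(90_000, 10_000, 0.10, -1.0)  # sleeve wiped out
win = mix_outcome(90_000, 10_000, 0.10, 0.80)   # sleeve compounds at 80%

print(round(lose))                            # 605475
print(round(equivalent_cagr(lose) * 100, 2))  # 9.42
print(round(equivalent_cagr(win) * 100, 2))   # 60.43
```

The same two functions reproduce the 80/20 figures by changing the first two arguments.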

Even if some think this is impossible, the above charts show nonetheless that it is doable, it is feasible, and it is executable. Of course, the future will be different. But, you do have a wide margin of error. Which game do you want to play? Can you risk 1% less on your future CAGR for a 50%+ return on your overall portfolio? Naturally, should you embark on such a journey, be ready for a wild ride. On the other hand, for your comfort, it is all on auto-pilot...


Guy Fleury,

It is not clear from your extensive study on which version of DUAL MOMENTUM IN OUT this was done.
FYI, the latest version of DUAL MOMENTUM IN OUT v2.4 is significantly different from the previous ones.


@Vladimir, I used the first occurrence of version 2.3 located HERE


Guy Fleury,

Can you share the way you apply leverage while avoiding warnings of insufficient buying power?


@Vladimir, leverage was set at 1.4x. Therefore, theoretically, I should not have those warnings.


Guy Fleury,

I tried the same way you did and got a bunch of invalid orders and warnings about insufficient
buying power.
Can you activate and print the holdings chart for any of your backtests and check the logs?


@Vladimir, sorry, but I do not think the way you do it is the same way I did. I changed that program considerably, including trading times. Changed its trading philosophy and objectives. As you know, I use math, leveraging, feedback loops, and compounding to control a trading strategy. If I did as everybody, I would get the same results as everybody.

I would add that if there were many insufficient buying power warnings, it would reduce the number of trades, and that would appear counterproductive. It would kind of choke the strategy to inaction and produce less. As expressed in a prior post I put a lot of emphasis on increasing the number of trades and the average net profit per trade. The sum of all trading profits and losses and therefore the net result of a trading strategy is totally explained by those two numbers.
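The two-number decomposition referred to here is just an identity: the net result equals the number of trades times the average net profit per trade. The per-trade P&L values below are purely illustrative:

```python
# Any list of per-trade profits and losses satisfies the identity:
# sum(pnl) == n * average(pnl).
pnl = [150.0, -80.0, 220.0, -40.0]
n = len(pnl)
avg = sum(pnl) / n
print(sum(pnl), n * avg)  # identical by construction
```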

So, the question would be: if there were insufficient buying power warnings, would they be to the benefit of the strategy or not? If it was, I would keep them in even if it reduced the number of trades. We are playing a “hold the puck game”. While you hold you can make a profit or a loss and none of either otherwise.

What I find interesting in that long post are the equations at the end of it which propose a way to handle a high-risk/high-potential strategy combined with a lesser one. We look at risk in terms of max drawdown and volatility (sigma). This might apply when dealing with a simulation.

We design trading strategies over past market data to see how they would have behaved if we had done this or that. Whereas in real life, it is going forward that is important. And what we perceived as risk in past data might better be represented as differences in opportunity sets and their respective probability of occurrence. A totally different world.

The post ended with: “Can you risk 1% less on your future CAGR for a 50%+ return on your overall portfolio?“

That 1% is not a major risk or any type of substantial added drawdown. It is only a 1% opportunity cost at best. In the example provided, it is: do you lose that 10k or not? Overall, you would still win the game. I would dare say: who should care about that 1% in the first place? It would take 20 years to find out if you kept it or not. And if you win, I do not think it will matter at all. It is like taking a side bet where you say: let it run. What have you got to lose? 1% on your long-term CAGR. Geez...


Guy Fleury,

Can you activate and print the holdings chart for any of your backtests and check the logs?



Regarding the buying power issue, did you try, in Initialize:


You would need one line of that for each security you would hold. It works for me...



Leandro Maya,

here's a backtest with your recommendation.
Turn on the "Holdings" chart to see account leverage.
The engine does not keep the leverage and gives a lot of "Insufficient buying power" warnings.


@Vladimir, looked up the log. At times, it does show some insufficient buying power warnings. Probably due to the leverage modulation process pushing too hard. I will look closer into it to see if it brings any benefits.

What I have shown in the presented charts is the return progression as modifications were made. Just part of my strategy analysis and evaluation process: Can the strategy survive? Can it scale and how far? Can it be leveraged and by how much? What are the operational limits? What would it take for it to blow up? Where does it get most of its performance? Does the return come from anomalies that might not happen in the future? Would I withstand emotionally such a wild ride?

Further down this transformational process, I expect after more modifications and more constraints that the expected CAGR will go down. By how much, I do not know. But, what I do know now, is that I would not let it roam free in its present state. More work needs to be done...

I want to know where the profits are coming from and would they be sustainable going forward. Where is my risk/reward compromise? One of the most basic questions: how come, when pushed to its limits, is it making so much and not breaking down? I need to answer why since it has the potential to blow up the strategy in my face. On the other hand, these procedures could be most useful in other trading strategies. It will certainly take time to answer these questions to my satisfaction.

Presently, I think I have shown that there is a wide range of results that could have been achieved over past market data. A factor of over 3,000 times more than the original trading script (that is 300,000% more just by changing things around!). All this without having the strategy blow up, at least, not yet. That in itself is remarkable. It is probably the highest performing strategy ever seen in the QC forums.

Should you want to learn more on how I do these things, you can find a lot of it on my website.



sorry for the confusion. I actually tried different solutions and then got confused about what really solved the issue. Now I have confirmed that it was:

self.Securities["QQQ"].MarginModel = PatternDayTradingMarginModel()

According to the documentation it gives 4x leverage for day trading but also 2x leverage for overnight positions.

Attached is your code with it implemented.

I think you don't need SetWarmup for this one.


Leandro Maya,

Thanks for your recommendation.
"Insufficient buying power" warnings have disappeared.
But account leverage behaves differently from Quantopian.
Maybe my formula

account_leverage = self.Portfolio.TotalHoldingsValue / self.Portfolio.TotalPortfolioValue

is wrong?

Or is it not being managed properly by the Lean Engine?
Are there any settings for monthly margin payments?
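For what it's worth, the formula in the post matches the usual definition of gross account leverage (gross holdings value over net liquidation value). Stripped of the LEAN objects, it is just:

```python
# Gross leverage: total value of positions / total portfolio value.
# The two inputs stand in for Portfolio.TotalHoldingsValue and
# Portfolio.TotalPortfolioValue in LEAN.
def account_leverage(total_holdings_value, total_portfolio_value):
    return total_holdings_value / total_portfolio_value

print(account_leverage(140_000, 100_000))  # 1.4
```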





These leverage oscillations have been annoying me too, but I don't know what the cause is. In this particular case, the way I found to control it was by doing a daily rebalance and calculating the average just after the rebalance. It ended up with 400x more trades, but at least the performance improved.

Regarding margin fees, I read in other posts that they are not yet available but are on the to-do list.

For live trading we are recommended to leave some cash in the account. That can be done manually or by using:

security = self.AddEquity("SPY", Resolution.Daily)
security.MarginModel = BuyingPowerModel(requiredFreeBuyingPowerPercent = 0.05)

Leandro Maya,
Believe it or not, we came back to my original version.

def trade(self):
    for sec, weight in self.wt.items():
        if weight == 0 and self.Portfolio[sec].IsLong:
            self.SetHoldings(sec, weight)

Those three lines were added by Thomas Chang in Oct 2020 and were working fine there.

Thanks for the help




what intrigues me is why Thomas Chang's lines work with leverage below 1 and don't work with leverage above. I think LEAN probably changes the way it calculates things when margin is used...


Vladimir, on v2.4, if we can make the system liquidate before rotating to the next assets, can more leverage be used?

Example below: is there a way to have TLT sold prior to 11:10:00?


Date/Time            Symbol  Type         Fill Price      Quantity  Status
2008-01-02 11:10:00  TLT     Buy Market   $64.243318616   3104      Filled
2008-02-19 11:10:00  QQQ     Buy Market                   4839      Invalid
2008-02-19 11:10:00  TLT     Sell Market  $62.76673408              Filled


I think the easiest solution is to use QC's built-in SetHoldings function with a list of PortfolioTarget. This will carry out the trades by releasing margin before the purchases:


targets = []
for sec, weight in self.wt.items():
    targets.append(PortfolioTarget(sec, weight))
self.SetHoldings(targets)




Hey Flame,

In theory, using PortfolioTarget is the easiest, but because the weight for each holding is static, what actually happens is that it makes far more trades daily to meet the weight value - so more trades means more cost. Leandro's and Vlad's idea is closer to the solution when using a margin account.


Hi Vladimir!

Thanks a lot for sharing this amazing strategy!

I noticed that data with one-day resolution is consolidated into a one-day timeframe again.

I'd like to know why this process is necessary.




Apologies for just jumping on this thread after being away for a while.

After reading three or four different posts plus comments, what I understand is that IN-OUT is very good now, and you guys are working on figuring out how to increase leverage while looking for other exit signals, right?

Vladimir mentions: "This algorithm unlike those from Peter Guenther has three pairs as a source, two parameters and
concensus of all three for exit signal." I have been trying to identify these, but at least the last version from Peter has three consensus pairs as well. I did notice that you are picking the bonds/stocks with the highest TTM returns.



Hi everyone. I liked this algo so I plugged in some 3x ETFs and removed all margin. I'm not a fan of maxing out margin since House Calls in real life are a pain, and are best avoided.

But then I noticed a lot of order rejections due to buying power. Eventually the orders get filled. But in order to fix that I ended up changing the end just to SetHoldings for either TQQQ or TMF, and skipped the "trade" section. This got rid of the order rejections, but returns dropped significantly.

Was there another reason for the previous higher returns?


Cash accounts require a few days for the proceeds from sales to settle before purchasing.


OMG I can't believe I didn't notice that, thanks Chak.


Hey Guys,

I made a small tweak that has a noticeable impact. I switched the trade-in from Friday to Monday, and for my period of 2017-2021 the PSR went up by 4%, the return by 10%, and the CAGR by 1.2%.

Also a special thank you to Vladimir for sharing his changes and to Rahul Chowdry for explaining what's really going on here. Really cool.

For those running this in Paper Trading or Live, I also added some code to email us every day what is happening on Trade In and Trade Out days and what the signals are indicating for the day.



Will Berger,
Here is the backtest of the latest version 2.5 for the same period you
backtested the modified version 2.2.


Thank you!  Will check it out.


Hi Vladimir,

I got a chance to look at your changes.  I am trying to understand this line of code here

vola = self.history[[self.MKT]].pct_change().std() * np.sqrt(252)

While I understand and see what it is doing, I don't understand why this is telling us the market is volatile.

Could you shed some light on it?  Any help much appreciated.



Hi Will, when you calculate the standard deviation of the daily price changes of the market, a large value relative to the number of days it is calculated over implies a volatile market or a market drop.

Let's say you take the last 126 trading days of the market trending one way. In this case, the standard deviation remains small. Suddenly, you get a market drop where SPY whipsaws back and forth. In this case, the constant price change increases the standard deviation.

The number 252 simply denotes the number of trading days in a year; multiplying by its square root annualizes the daily figure. I hope this helped.
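The explanation above can be made concrete without the LEAN history call. A self-contained sketch, using the sample standard deviation as pandas' `.std()` does by default:

```python
import math

def annualized_vol(prices):
    """Std dev of daily percent changes, scaled by sqrt(252) to annualize."""
    rets = [p1 / p0 - 1.0 for p0, p1 in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)  # ddof=1, like pandas
    return math.sqrt(var) * math.sqrt(252)

calm = [100 + 0.05 * i for i in range(126)]           # quiet, trending market
choppy = [100 + 3.0 * (-1) ** i for i in range(126)]  # whipsawing market
print(annualized_vol(calm) < annualized_vol(choppy))  # True
```

A steadily trending series produces a tiny standard deviation of returns; a whipsawing one produces a large one, which is exactly the signal the strategy reads.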


Will Berger

> Could you shed some light on it?  Any help much appreciated.

This may help.

The original "In & Out" strategy, published on the Quantopian forum thread
"New Strategy - In & Out" Oct 4, 2020, had only 4 sources SPY, XLI, DBB, SHY.
There were 6 constant parameters: waitdays = 15, period = 58 and thresholds for each source.

Some participants asked where these magic numbers came from, because the results changed
significantly with a slight change in parameters.

Tentor Testivis, the great "In & Out" contributor, on October 12, 2020, proposed the idea of
making them adaptive to volatility.

The idea:
Using SPY's volatility to replace the magic numbers

vola = hist[SPY].iloc[-126:].pct_change().std() * np.sqrt(252)
waitdays = int(vola * 100 / 3)
per = int((1 - vola) * 50)

Implementing the idea as-is does not give great results, so in my
"Price relative ratios (intersection) with wait days" Oct 16, 2020,
I modified Tentor Testivis's idea to what I have been using since then.

        vola = self.history[[self.MKT]].pct_change().std() * np.sqrt(252)
        wait_days = int(vola * BASE_RET)
        period = int((1.0 - vola) * BASE_RET)

You are welcome to modify these empirical formulas.
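To see how the empirical formulas behave, here is a standalone evaluation. BASE_RET is assumed to be 85, the value used elsewhere in these In & Out threads; treat it as a placeholder:

```python
BASE_RET = 85  # assumed value; adjust to match your own copy of the algorithm

def adaptive_params(vola, base_ret=BASE_RET):
    """Map annualized volatility to the strategy's wait_days and period."""
    wait_days = int(vola * base_ret)
    period = int((1.0 - vola) * base_ret)
    return wait_days, period

print(adaptive_params(0.15))  # calm market: short wait, long lookback
print(adaptive_params(0.45))  # volatile market: longer wait, shorter lookback
```

Higher volatility lengthens the wait before re-entering and shortens the lookback period, which is the adaptive behavior Tentor Testivis proposed.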


Thank you for that explanation!


Vladimir, why use FDN? FDN and QQQ are very similar in historical price behavior. Also FDN doesn't have any leverage products tracking in like QQQ does. What is the rationale for that?


Mark hatlan
Symbols are parameters in this strategy.
You can try whatever you want and choose the ones that suit you most.


Yeah I understand that, I was just wondering if you had a specific reason for having those 2 for the stocks as opposed to any other 2, just because those 2 have behaved similarly. Thats all.


Thank you so much for sharing your algo with us. This is GREAT! I'm learning a ton just by reading through the code.

I'd like to contribute where I can, so this is my first take at it.

I converted your "v2.5 Dual Momentum with Out Days  by Vladimir" into QuantConnect's new Algorithm Framework. In particular, I've split your algo into the signal generation (Alpha Model) and placing trades (Portfolio Construction Model).

The converted code is as close as possible to yours (save some optimizations here and there). Note that the insights (buy/sell) are generated for SPY by DualMomentumWithOutDaysAlphaModel. DualMomentumWithOutDaysPortfolioConstructionModel then captures these SPY insights and converts them into buy/sell orders for one of the 4 assets.

Hopefully, this makes it easier to plug in and combine different strategies. I could make it more generic too, e.g., pass the list of stocks and bonds as parameters.

Let me know what you (and the community) think!

PS: I'm not sure how familiar with git folks are (eg, GitHub.com, GitLab.com), but I think there's some value in using a collaborative version control system where we could collaborate/share/comment on the code in a more structured way. Just a thought. :)


Hi Joao,

For personal "version control", each time a backtest is run, the code used in execution is saved in the backtest details.

For collaborative version control, one idea is to use Skylight and turn the local Skylight folder into a Git repository so that version control can be done.

Shile Wen


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by QuantConnect. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. QuantConnect makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. All investments involve risk, including loss of principal. You should consult with an investment professional before making any investment decisions.

Joao Antunes,

Thank you for sharing your great Alpha Model v2.5 Dual Momentum with Out Days.



I have been running the Model v2.5 Dual Momentum with Out Days for the last month. The drawdown so far is 10%+. Not insignificant.

I believe this is being caused by the rise in the 10-year Treasury yield, and growth stocks are feeling the impact. The three trade signals in the algorithm today do not take this into account, and I was wondering whether there should be a fourth, and what its action should be.

In this case both equity and bonds are suffering.  Would it not make sense to sell both equity and bonds when detected?

Any thoughts?




Will Berger,

The correlation of daily returns between the stock index and Treasury bonds has been negative, and not only
over the last 10 years.
You can find research showing that over the past 85 years, or even over the past 250 years.
But sometimes the correlation can change sign, or both can go up, or both can go down.
In the latter case, it makes sense to switch from Treasury bonds (TLT, TLH)
to Treasury bills (SHY, SHV).
You can try this.


Ok.  Will try it out.

On my first point, in my past observations the algorithm has done a good job of not overreacting when the market declines, but I don't think it is handling the scenario we are in today, where the US yield is rising and growth stocks like QQQ are being significantly affected by it. Maybe choosing a value stock as the alternative dual stock will do the trick. Will play with that as well.

Thanks for the suggestion Vladimir.


Will Berger,

Symbols are parameters in this strategy.
You can try whatever you want and choose the ones that suit you most.


Neat-looking code, will definitely play around with this



