This is the original name of the algorithm that I created as a result of a successful collaboration
on the Quantopian forum thread "New Strategy - In & Out" in October 2020.
Unfortunately, the collaboration did not continue on the QuantConnect forum.
At the very least, I am very uncomfortable with the strange names used by Peter Gunther in his algorithms,
such as "Distilled Bear", as well as his variable names and decision-making logic.
Unlike Peter Gunther's versions, this algorithm has three pairs as a source, two parameters, and
a consensus of all three for the exit signal.
I did not optimize the parameters, so you may get better results.
I want to thank Jared Broad and his team for giving me the opportunity to recover one of
my favorite algorithms.
Happy New Year to all
Vladimir
Leandro Maia,
I tried to apply some leverage to the strategy.
I applied:
LEV = 1.5
self.SetBrokerageModel(BrokerageName.InteractiveBrokersBrokerage, AccountType.Margin)
Here is what the leverage looks like:
- For your execution style
- For my execution style
Do you have any idea why the code does not hold the target leverage?
Vladimir
Guy Fleury,
I have an idea how to improve your latest code.
In Quantopian, you could use:
   BONDS = symbols('TMF') if data.can_trade(symbol('TMF')) else symbols('TLT')
Leandro Maia
Frank,
thank you for the suggestion. The issue I see is the frequent history calls applied to an unfiltered universe. I feel it would make the backtest much slower.
Leandro Maia
Vladimir,
I think I found a good explanation for the leverage behaviour in Ernest Chan's Algorithmic Trading, page 170:
"No matter how the optimal leverage is determined, the one central theme is that the leverage should be kept constant. This is necessary to optimize the growth rate whether or not we have the maximum drawdown constraint. Keeping a constant leverage may sound rather mundane, but it can be counterintuitive when put into action. For example, if you have a long stock portfolio and your P&L was positive in the last trading period, the constant leverage requirement forces you to buy more stocks for this period. However, if your P&L was negative in the last period, it forces you to sell stocks into the loss."
So I think Lean is behaving correctly and Quantopian was the one with strange behavior.
To keep leverage constant we'll need to rebalance every day, and to obtain a straight line, plot the leverage just after rebalance.
  @Vladimir If you win, leverage decreases; if you lose, leverage increases. If, for example, you bought stock X at $100 with a leverage of 2 (you have $50 in cash for every $100 in equity), and the stock loses 20% of its value, you will have lost 40% in cash, and so you are left with $30 in cash for $80 in equity, and your leverage will be 2.67. So, to maintain a leverage of 2, you will have to sell 25% of your equity.
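Menno's arithmetic can be checked with a few lines of standalone Python (using his example numbers, not code from the strategy):

```python
# Stock X bought at $100 with 2x leverage: $50 own capital, $50 borrowed.
capital = 50.0         # your own cash
position = 100.0       # market value of the stock position

# The stock loses 20%; the whole loss comes out of your own capital.
loss = position * 0.20
position -= loss       # $80 in equity
capital -= loss        # $30 in cash
leverage = position / capital              # 80 / 30 ≈ 2.67

# Restoring 2x leverage means shrinking the position to 2 * $30 = $60,
# i.e. selling $20 of the $80 position: 25%.
sell_fraction = 1 - 2.0 * capital / position
print(round(leverage, 2), sell_fraction)   # 2.67 0.25
```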
Guy Fleury
@Vladimir, yes.
Some might think they can reach the same objective using: BONDS = ['TMF', 'TLT']. TMF would still not trade prior to February 2010. However, after TMF would become available, trades would be split between TMF and TLT. Based on the strategy presented, this resulted in reducing total equity by 180 million. An expensive administrative decision. So, I would agree with your proposition.
I had started making those 2 modifications on version 1.2 when you came out with version 1.3. Since I only had two easy modifications to make, I opted to switch. Your version 1.3 appears much better than version 1.2. Sorry, but I have not read the code beyond line 30; it is my next step to see where more pressure can be applied, and even game the thing some more.
Nice work. Thanks for sharing your insight.
Making another administrative decision, putting leverage at 1.5x for version 1.3 would result in something like this:
This had a 72% win rate, with an average win of 2.53% while the average loss was -1.20%. Of note, the beta was at -0.221 which could be exploited by combining this strategy with something else with a higher beta.
Vladimir
It is very interesting that both Ernest Chan and Menno Dreischor see rebalancing as a process
of selling relative losers and buying relative winners.
William Sharpe considers it contrarian, under the assumption that no positions in the portfolio are negative:
"Rebalancing a portfolio to a previously-set asset allocation policy involves
selling relative winners and buying relative losers."
@Vladimir That is actually not what I'm saying. There are two processes at play here: rebalancing, and maintaining a desired level of risk (by controlling leverage). The rebalancing involves selling relative winners and buying relative losers, while the leverage constraint requires reducing exposure when you lose and increasing exposure when you win. The combined effect is that, depending on the actual wins and losses, you either buy the losers and sell the winners; or sell less of the losers and sell more of the winners, if the overall portfolio loses significantly; or buy more of the losers and buy less of the winners, if the overall portfolio wins significantly.
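This combined effect can be made concrete with a small standalone sketch (the 50/50 weights, 1.5x leverage, and period returns are illustrative assumptions, not numbers from the strategy):

```python
def targets(capital, lev, weights):
    """Desired dollar position per asset under a constant-leverage policy."""
    return {a: w * lev * capital for a, w in weights.items()}

capital = 100.0                                  # own capital
weights = {'A': 0.5, 'B': 0.5}
positions = targets(capital, 1.5, weights)       # A: 75.0, B: 75.0

# One period later: A (the relative winner) gains 20%, B loses 10%.
positions['A'] *= 1.20
positions['B'] *= 0.90
capital += positions['A'] + positions['B'] - 150.0   # P&L +7.5 -> 107.5

# Rebalancing under constant leverage sells the winner and buys the loser,
# while total exposure grows because the portfolio won overall.
new = targets(capital, 1.5, weights)
trades = {a: new[a] - positions[a] for a in new}     # A ≈ -9.4, B ≈ +13.1
```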
Vladimir
Here is my attempt to implement Dr. Ernest Chan's recommendation in this strategy:
"No matter how the optimal leverage is determined, the one central theme is that the leverage should be kept constant".
v1.5
The fee saving part of the code was excluded, and daily rebalancing was
implemented with target leverage = 1.50.
The actual leverage ranges from 1.47 to 1.57.
But I still prefer v1.4 with target leverage = 1.50.
The actual leverage ranges from 1.39 to 1.59.
Guy Fleury
@Vladimir, taking version 1.5 and replacing TLT with TMF produced the following:
It increased total equity about 4 times. Sure, it puts TMF at 4.5x leverage in periods of market downturns and thereby increases drawdowns and volatility. Leveraging fees however remain at 1.5x. TMF is not something that will go bankrupt. It could decline in value but that would be spread out over years and you could always switch back to TLT even on the flip of a coin (meaning for any reason).
Increasing the 100k initial capital to 1 M generated the following:
It should have been expected to increase 10-fold since the strategy is scalable, and that is about what it did, and a little bit more.
I liked the 11,000+ trades. It is more than enough for the strategy's statistical averages to be significant and not dominated by outliers. I also like the win rate, which was maintained at 71-72%, indicating that there is a statistical edge at work here.
The 2 trading decisions are administrative in nature. We usually make them before we even run a simulation. However, their impact can be considerable as can be seen in those 2 simulation headers. It always goes back to: it is a matter of choice. Do you push or not?
Guy Fleury
@Vladimir, in case what was presented was not enough, you could always push for more:
@Guy Fleury I consider the tendency to push for the highest return with leveraged instruments highly risky. This kind of leverage is suited to intraday high-frequency strategies that have zero correlation with the market (and even then it should be used with extreme caution), not to a relatively long-term investment strategy such as this. Large drawdowns can develop in days if, for whatever reason, the market goes against you (and even with the best strategies we should assume it will when assessing risk). TLT has seen a drawdown of 16% over less than 20 days, which translates to a 60% drawdown in a 4.5x leverage situation. Just consider this: in 1987 the S&P 500 opened at -20%, completely out of the blue. With leverage applied, such a move would translate to instantly losing 90%. If an opening of -20% over a few decades is a reality, then -25% is a very real possibility over a similar time frame. In such an event you would be wiped out instantly, no matter how much the strategy made in the past. Trading highly leveraged instruments is like driving a Formula 1 car: it may seem easy driving over a straight piece of road, but at some point in the future the curves will come, and a concrete wall is just seconds away.
Frank Schikarski
Building on the comment of Menno Dreischor: this algo (and the other in-out algos) currently consists of an excellent regime filter, but some more features are needed to make sure someone can sleep well while trading live. Given the low number of regime switches, the algo has protected very well against situations where the three pairs simultaneously indicated risk in the past.
Vladimir, is this something you would like to discuss in this thread? The code would become a bit more lengthy and the backtest would take a bit longer, but live trading would be more relaxed ;). I can provide an algo with the stop-loss part (until the next rebalance) and the portfolio part (based on Emilio's nice contribution), including feature switches so performance can easily be compared moving forward.
Guy Fleury
@Menno, we have slightly different views on this thing and here is mine.
My question is: now that you have this strategy, how do you game it? You know what it can do, you know its weaknesses, and you have explored its potential. You know you can turn bets on and off at any time of your choosing. So what will it be? If you want to reduce volatility, then hedge the thing with options or use it with other trading strategies. Putting less capital at work in what you might consider a more risky proposition would tend to reduce the impact of drawdowns. You have many solutions at hand.
The initial questions are answered by the simulations. Does it survive? Can it support leveraging and by how much? Is it tradable? Is it executable? Is it worth it?
A long-term investment strategy is not the same as a long-term trading strategy. Like I would not even suggest a long-term Buy & Hold (even Buffett style) using leverage. On Berkshire Hathaway's record, we have 4 drawdowns that exceeded -50%, and at 2x leverage, it would have been game over the very first time it occurred. Mr. Buffett knew that from the start, most probably why he does not generally use leverage. Note that during the financial crisis, the drawdown exceeded 100B in losses, and yet, he took the opportunity to buy stuff and stay the course. Also note that Mr. Buffett's investment strategy is not what we would consider a trading strategy even if he does trade from time to time.
But, and this is a serious but, in a trading environment the problem is very different. For instance, this bond switcher changes direction all the time, liquidating all its stocks or bonds depending on its trend-following decision-making proxy. It switches direction a lot more times than needed or necessary. Over the trading interval, the switch occurs something like 70+ times, of which 60 might not have been required but still served as anticipated protection. It is the just-in-case part of the strategy, preemptively putting aside the need to set stop-loss measures either on an individual stock basis or from the global side of the equation.
Nonetheless, the portfolio, whatever its composition, is fully invested at all times, either in bonds or in stocks. The bond part is easy. If it were in only one bond, I would prefer switching to cash or somehow going to the sidelines. However, we are dealing with an ETF, an average of many bonds, rendering the proposition more secure (meaning less subject to huge variations). And since we do have market halts in too-volatile times, the risk is reduced there too. Note that this would not stop something like the Flash Crash of 2010, where some ETFs exchanged hands at one cent per share (those trades were later reversed). Remedies (new rules) have been put in place not to repeat those circumstances. Yet, they might still happen going forward...
All TMF does is accentuate TLT's volatility and indirectly risk. We all understand that. But, we are in the game to make money and take measurable risks. We need to manage this risk. The mission remains the same: maximize overall return in this compounded return game in which time is a major component. I do not design strategies to spare feelings, I have them follow the math. Someone wishing to do less can always scale down a strategy to the level they feel comfortable with. It is their choice. But there is also an opportunity cost to it as illustrated in the charts provided.
All the profits generated can only come from your trading procedures. Your portfolio will obey the following equation whether it wants to or not: F(t) = F_0 ∙ (1 + g – exp)^t, where the return g = r_m + α_t, exp is the trading expenses, r_m the average market return, and α_t the alpha you bring to the game. To improve on your long-term outcome, you do not have that many choices; all the variables are there. You can increase or decrease the initial capital F_0. You can increase your portfolio return g, but to do so you will need to generate some positive alpha since r_m is not under your control. It should be implicit that to reach t (the end game) the strategy needs to survive for the duration.
You can always try to reduce trading expenses, but I have always found that trivial. You cannot add more capital as you go along; all the money needs to be generated from within the trading strategy. You can improve market timing, stock selection, trading procedures, the betting system, and strategy gaming, all aimed at giving you a higher alpha. And since it is all compounding over the entire trading interval, anything that improves the strategy should bring higher returns.
This strategy is scalable, trades some 150 stocks, and will try to keep its stock weights constant at a constant leverage: self.wt[sec] = LEV/len(stocks). When adding leverage, all you do is increase the bet size for every trade. It does not change the nature of the strategy or its dynamics. The bet size will increase with rising equity and decrease in periods of drawdowns. In this case, it is not the price movement of stocks that determines the trend; it is a surrogate, whether it is right or wrong. And it is wrong a lot of the time. However, there is an advantage to this even if it increases trading expenses: it makes money.
In the 1.5x leverage scenario, there were 11,937 trades, some due to rebalancing and a lot of them due to the bond switcher logic. We do not have data on which is producing what. Nonetheless, 11,937 trades is sufficient to speak in terms of averages. We really do not know what triggered each of those trades, but we do have statistics on them. The hit rate was 72%, which in any trading strategy should be considered remarkable and also the expression of an edge due to the trading procedures (view it as a non-random outcome). The average win was 0.29% while the average loss per trade was -0.22%. Again, this expresses a slight edge in the trading procedures; even though I find it relatively small, with a 72% win rate it explains the overall outcome. Also, the average loss per trade of -0.22% shows how sensitive this strategy is to downturns. It does not tolerate much.
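Those per-trade statistics imply a small positive expectancy, which a quick back-of-the-envelope check makes explicit (using only the figures quoted above):

```python
win_rate = 0.72
avg_win = 0.0029       # +0.29% average winning trade
avg_loss = -0.0022     # -0.22% average losing trade

# Expected return per trade, before leverage fees
edge = win_rate * avg_win + (1 - win_rate) * avg_loss
print(f"{edge:.4%}")   # about +0.15% per trade
```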
So, what we see in the charts presented is: F(t) = LEV ∙ F_0 ∙ (1 + g – exp)^t, where the added leveraging fees are not accounted for. You change LEV and it will have an impact on the total return. The question becomes, as said before: how far do you want to push this thing? BTW, you can push it to even higher levels if you wanted to.
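Taking Guy's equation exactly as written, a small helper shows how LEV feeds the end result (the sample numbers are arbitrary illustrations, not the backtest figures):

```python
def terminal_value(F0, g, exp, t, lev=1.0):
    """F(t) = LEV * F0 * (1 + g - exp)**t, the equation as written above."""
    return lev * F0 * (1 + g - exp) ** t

# Toy illustration: 100k initial capital, 20% return, 2% expenses, 12 years.
base = terminal_value(100_000, g=0.20, exp=0.02, t=12)
pushed = terminal_value(100_000, g=0.20, exp=0.02, t=12, lev=1.5)

# As written, LEV scales the outcome linearly; the compounding comes from g - exp.
assert abs(pushed - 1.5 * base) < 1e-6
```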
Jack Pizza
Frank Schikarski, I also think we would need some interest-rate logic for the bond selection, or maybe a switch to 1-3 year short-duration bond funds.
The "out" only works well now because of low interest rates. In a regime where rates start rising rapidly and the markets crash, bond funds at the far end of the duration curve would perform poorly.
So the "out" would need to be, say, very short-term funds or just straight money-market cash.
@Guy Fleury When you say we can push this thing, my question would be: what is this thing? We can push this thing if history repeats itself in exactly the same way. Betting the market is like playing a game of dice, except we don't know how many sides it has, and it's difficult to make out the number of dots on the sides we have seen. Until we introduce new information (that is, information the algorithm hasn't been trained on) we cannot have a reasonable idea of how well it might work under similar, let alone very different, market conditions. Like I said before, the 72% win rate is representative of the optimized period 2008-2020. It may be 57% over the period 2021-2035, or 42%, since we have no indication of what the win rate will be out-of-sample except that it is likely to be less than 72%, and if the strategy is severely overfitted, significantly less than 72%. Now, this latest version of In & Out may be perfectly viable. However, I think it's a much better idea to focus on obtaining win rates that are representative of how the strategy reacts to unseen situations (for which there are plenty of methods available) than to push a hypothetical and likely over-optimistic backtest to its breaking point. You can look at a thing that vaguely looks like a checkers board and devise a strategy that will make you the greatest checkers player of all time. You run the risk, however, that you're so focused on your hypothetical checkers game, you miss the fact that the market is playing chess.
In the context of the above, I would suggest that those who are interested read some articles on sampling theory and the pitfalls therein. To quote a text I found:
"Sampling theory is the body of principles underlying the drawing of samples that accurately represent the population from which they are taken and to which inferences will be made."
The data points of the period 2008-2020 are a set of samples from a much broader population. The question is what inferences can be made from these samples about the broader population, and to what extent inferences made from the sample are applicable only to the sample and not to the broader population. To me, at least, these questions are far more fundamental to constructing a good strategy than pushing the strategy under development to produce astronomical hypothetical numbers.
Chak
Yes, sampling theory ties prediction accuracy to the similarity between sample and population parameters, which I completely agree with, though you can't discredit the value of a simple model that's not overfitted year over year. To simplify things, I suppose one could just look at the F-statistics, R-squared, R-squared change, and beta coefficient tables versus the distance/variance of the actual-predicted errors to see merit in the model's performance on an index that generally grows ~8%/year over time.
If we're focused on randomly selecting stocks from an index based on certain attributes, then, yes, the model should be completely reconsidered. But the model looks at, sorts, and selects the stocks with the largest DollarVolume from an index at each rebalance, so we're essentially selecting the equities that contribute the most to stock market growth.
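The selection step Chak describes (sort by dollar volume, keep the top names) can be sketched standalone; the `Stock` tuple and the numbers are illustrative, not the Lean API:

```python
from collections import namedtuple

Stock = namedtuple('Stock', ['symbol', 'dollar_volume'])

def select_universe(stocks, n=100):
    """Sort by dollar volume, descending, and keep the top n."""
    return sorted(stocks, key=lambda s: s.dollar_volume, reverse=True)[:n]

stocks = [Stock('AAA', 5e9), Stock('BBB', 9e9), Stock('CCC', 2e9)]
print([s.symbol for s in select_universe(stocks, n=2)])   # ['BBB', 'AAA']
```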
Gv
Vladimir, v1.3 selects the top 100 based on DollarVolume on that particular day. Do you see any value in changing this to AverageDollarVolume over, say, the last 126 days?
Vladimir
Gv,
Do you see any value in changing this to AverageDollarVolume over, say, the last 126 days?
Definitely!
In one of my successful out-of-sample Quantopian algorithms, created in 2019, I used exactly the same averaging period for dollar volume:
ADV = 126
Dollar_volume = Factors.AverageDollarVolume(mask=m, window_length=ADV)
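That pipeline factor is Quantopian-specific; in Lean the same 126-day average could be approximated with a rolling window. A minimal standalone sketch (the class name and interface are my own, not a QuantConnect API):

```python
from collections import deque

class AverageDollarVolume:
    """Rolling mean of price * volume over the last `window` days."""
    def __init__(self, window=126):
        self.values = deque(maxlen=window)

    def update(self, price, volume):
        self.values.append(price * volume)
        return sum(self.values) / len(self.values)

# With a window of 3, the fourth day's update drops the first day.
adv = AverageDollarVolume(window=3)
for price in (10.0, 20.0, 30.0):
    adv.update(price, 100)
print(adv.update(40.0, 100))   # 3000.0
```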
Vladimir