This is the original name of the algorithm that I created as a result of a successful collaboration
on the Quantopian forum thread "New Strategy - In & Out" in October 2020.
Unfortunately, the collaboration did not continue on the QuantConnect forum.
In any case, I am very uncomfortable with the strange names Peter Gunther uses in his algorithms,
such as "Distilled Bear", as well as with his variable names and decision-making logic.
Unlike those from Peter Gunther, this algorithm uses three pairs as a source, has two parameters, and
requires the consensus of all three pairs for the exit signal.
I did not optimize the parameters, so you may be able to get better results.
I want to thank Jared Broad and his team for giving me the opportunity to recover one of
my favorite algorithms.
Happy New Year to all
Vladimir
Menno Dreischor,
Just a friendly reminder.
The topic of this thread is not validation of overfitting or sensitivity tests, so you have your own
thread for that.
I have seen "overfitted" applied several times to what is actually a one-parameter strategy.
Nobody now asks why J. Welles Wilder chose 14 as the default period for RSI,
or why Gerald Appel chose 12-26-9 as the default parameters for MACD,
or why Goichi Hosoda chose 9-26-52 as the default parameters for Ichimoku Clouds.
We have been using these magic numbers for many, many years.
And they work if applied correctly, without sensitivity tests, walk-forward optimization, or PCA.
I do not really understand why you chose specifically this strategy for your nanotechnology testing.
You are not holding back your words.
In any case, I will stop polluting Vladimir's thread.
For those who always call "the glass half full", there are thousands of ways to improve this strategy.
Fill it with the juice of various factors (technical or fundamental), use the stop loss technique
and enjoy a different flavor.
A famous hedge fund manager said:
"The best indicator of future performance is past performance".
Thunder Chicken
Vladimir
Your stance is fine. However, having someone like Menno Dreischor critique these approaches is entirely reasonable. I appreciate the input both of you provide. I will not provide any critiques on the approach or the investment philosophy as I don't see the value it would provide. However, this thread has devolved into a massive curve fitting and parameter optimization exercise.
It seems we are down the rabbit hole of just changing variables to increase Sharpe and returns, or to decrease drawdowns or volatility. This variable optimization approach is the literal definition of curve fitting and is guaranteed to fail. I'm certain Jared Broad can confirm the same.
In that case, I don't see the point of continuing it. I'd propose we just start a new thread with a focus on parameter optimization, and make it clear, and close this thread.
I don't think it makes sense to pollute @Vladimir work. We should just agree to disagree on curve fitting and parameter optimization.
Thank you.
Frank Schikarski
Hi there,
building on v1.5, please find some feature additions to play with:
- Trailing Stop Loss. To use it, please set TSL to a value > 0. It is basically the original QC code plugged into this algo in a helpers.py to get out of the way.
- Portfolio Optimization. To use it, please set PFO = 1. Thanks to Emilio Freire for his original contribution! Also located in the helpers.py. This feature is slow, but with the 'riskParity' logic it builds a great low volatility portfolio especially for a bundle of high momentum securities with a lower correlation and for live trading.
- Weighted Fundamentals. To use it, please change the weights for the currently three fundamentals. In case you want to change the fundamentals: helpful could be a combination of high actual return, high historic growth & a currently undervalued price etc. Please change the sort order in line with a 'lowest' or 'highest' target. As some stocks don't have all fundamentals filled, please make sure that you add filters in the 'filtered_fundamental' section to prevent errors. Due to these filters, the results of this version do not match v1.5 exactly even if the features are unused.
- Some quick inspiration on further states from the pairs logic: bear, down2x, up2x, up3x which might help for an easy case logic to switch leverage.
As mentioned before, the idea is to take the next steps in risk reduction for this beautiful algo idea! The timeframe in the attached backtest is only 2021, so it is untuned. Let's see where this goes - enjoy!
What happens is that the TSL is checked every hour and liquidates an individual security with a market order should the price drop. In case of daily rebalancing, it can happen that the same security is bought again the next day.
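The pair-state bullet above could be turned into a simple case logic for switching leverage. A minimal sketch, assuming the state is just the count of bullish pairs; the leverage numbers and function names here are purely illustrative, not anything from the attached code:

```python
def pair_state(bullish_pairs):
    """Map the number of bullish pairs (0-3) to the suggested state names."""
    return {0: "bear", 1: "down2x", 2: "up2x", 3: "up3x"}[bullish_pairs]

# Hypothetical leverage per state -- tune these to your own risk tolerance
LEVERAGE = {"bear": 0.0, "down2x": 0.5, "up2x": 1.0, "up3x": 1.3}

def target_leverage(pairs_bullish):
    """pairs_bullish: iterable of three booleans, True when that pair
    points up (e.g. SLV outperforming GLD)."""
    return LEVERAGE[pair_state(sum(pairs_bullish))]
```

With a mapping like this, the rebalancing code only needs to scale its target weights by `target_leverage(...)` once per rebalance.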
Damiano Bolzoni
Frank Schikarski the version you posted buys and sells stocks every day, and updates the universe every RebalanceFreq days. Is that the intended behavior?
Frank Schikarski
Hi Damiano Bolzoni, a quick answer in my lunch break: the purpose was to add some fresh ideas / features for everybody to play with, on the way to making this algo even more live-tradeable. As this may reduce PSR and return, I didn't paste a backtest starting in 2008. So please feel free to change parameters, switch logic on/off, share your insights or bring in further code snippets ;). Rebalancing and trading follow v1.5.
Damiano Bolzoni
Frank Schikarski apologies, I had missed that the daily rebalancing had been introduced PRIOR to your additions. I was very curious to test those hence the cloning :-)
I'll code a monthly rebalance and then test. With a momentum-based strategy even weekly rebalancing could be too often...
Thanks!
Guy Fleury
Is there something in @Vladimir's In & Out strategy (version 1.5)? What I see is that there is money in there. But, you have to determine that for yourself. What follows is not intended to convince you, you have to do your own homework.
Is there an edge that could persist going forward? Is it of any consequence what this strategy did over its simulated past? Is this strategy overfitted or not? In all simplicity: is it worth it? There is so much that could be said about this strategy.
We can gather opinions, anybody can have those, even people that did not even look at the code to see what it really does. But what I see is that little is given in substantiating those opinions. I find it understandable since I do not know what the future will bring either. For instance, just consider: during 2019, who programmed their strategies beforehand to handle the impact of Covid-19 in 2020? It should be remembered as a testament to our own predictive abilities which might be considered as rather limited when faced with uncertainty.
Notwithstanding, is there really a positive probabilistic edge? To answer that question we might have to answer another one. Is the signal used predictive in some way of the average market trend? Or expressed differently, is its trend-following declaration good enough to extract profits from stock price gyrations going forward?
An even more elementary question might be: how is this trading strategy making its money? If we know how and why then we could evaluate if the trade mechanics could prevail going forward. And thereby see that the strategy might in fact continue to extract profits going forward.
We have theories for everything. Even for stock trading and investing. One of the most limiting theories is the MPT (Modern Portfolio Theory) which already dates back to the '60s. It can be summarized in that the expected optimal long-term market portfolio resides on the Markowitz efficient frontier. This states that over the long term, for a fully invested portfolio, your expected future outcome will “tend” to the market average. It does not say that the ride will be a straight line. Even for such a portfolio, you should expect a lot of ups and downs.
Yet, most modern portfolio managers have a hard time hitting that mark. Anyone exceeding that mark using their private IP is immediately put in the overfitted trading strategy bin. This without even a demonstration or proof that the trading strategy is or not overfitted.
The other guy does better than them, therefore, their strategy is overfitted while theirs is still better and not overfitted for some nonsensical reason. Period. No proof or corroborating evidence, just an opinion and apparently it should be sufficient. Well, for me, it is not. I will do my homework and determine what holds and what does not. I will test the thing and see what is under the hood.
We need to know how a trading strategy is making or losing its money. That sounds simple, we could tentatively say that the trading rules were the main reason for the generated profits. Whatever, the question remains: is it really so?
We do not need drawdown protection when the market is going up, it is when it is going down that such protection shows its value.
For instance, @Frank was kind enough to put out a version of @Vladimir's 1.5 that has stop-loss procedures in place. For me, it contradicts something I have said in a prior post: “...the strategy does not need or require a stop-loss”. A simple test to see if this holds is to enable the stop-loss and see what happens. Here is what I found: the strategy will slowly degrade until there is not enough money left in the account to even execute a single trade. You simply lose, and the reason is also quite simple.
For those finding weaknesses in this trading strategy; know that there are some. However, once you have identified those areas of vulnerability, it is your responsibility to compensate for them. You think that bond prices might fall going forward, then you should add procedures in your version of the program that would handle the situation. You identify some weaknesses in the program structure, its trading mechanics, then find ways to correct them. You have a template that is offered on a silver platter, it's debugged, it provides right out of the box a higher return than market averages. What you need to do is prove to yourself that the strategy has merit, that it can withstand time. Push it to its limits, see it blow up to identify added structural deficiencies, then correct those potential problems if you can. That is the job.
This strategy has been in a walk-forward mode (out-of-sample) since at least last October and it is still going strong. There is something in the trading methods used. It is up to you to determine if it is enough for you or not. All I know is that you can push this strategy to quite a great extent even with simple administrative procedures and it can translate in quite a pretty penny. At least, I think I have demonstrated that it can be done over past market data (see my prior posts). It is not just an opinion, it is corroborating evidence of what the strategy did over past market data.
I know what makes this strategy tick. Where do the profits come from, and how they are made. There is math underneath that governs this trading strategy and you can control the math. So, my suggestion is, dismantle the strategy and then reconstruct the parts you want, see why it is profitable, see how you could improve on its design. I would add, if you do not see how this strategy is making its money, how can you control it? How could you even trust it?
Once you will know what this strategy really does and how it operates, you will be able to control it, and then add the protection you think it might need or whatever. It should be a way for you to gain the confidence needed by first showing to yourself under your own set of rules and constraints the level at which you might find an executable compromise. It is always up to you.
Vladimir
The strategy you see in my second post was developed in October 2020 on Quantopian forum.
In that version it trades two symbols, QQQ and TLT, and has only two parameters: one not significant, VOLA = 126 (the period over which we calculate annualised market volatility), and the other more or less significant, BASE_RET = 85.
They are needed to calculate wait_days and a momentum period adapted to market volatility:
self.history = self.History(symbols, VOLA + 1, Resolution.Daily)
vola = self.history[[self.MKT]].pct_change().std() * np.sqrt(252)
wait_days = int(vola * BASE_RET)
period = int((1.0 - vola) * BASE_RET)
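For anyone who wants to see how these two derived values react to volatility outside of QuantConnect, here is a minimal standalone sketch of the same arithmetic (the function name and plain-array interface are mine, not from the algo; `ddof=1` mirrors the pandas `std()` used above):

```python
import numpy as np

VOLA = 126      # lookback in trading days for the volatility estimate
BASE_RET = 85   # the one significant parameter

def adaptive_params(market_closes):
    """Reproduce the wait_days / period calculation from the snippet
    above on a plain array of the last VOLA+1 daily closes."""
    closes = np.asarray(market_closes, dtype=float)
    daily_returns = np.diff(closes) / closes[:-1]       # pct_change()
    vola = daily_returns.std(ddof=1) * np.sqrt(252)     # annualised volatility
    wait_days = int(vola * BASE_RET)        # more volatile -> stay out longer
    period = int((1.0 - vola) * BASE_RET)   # more volatile -> shorter momentum window
    return wait_days, period
```

So a calm market pushes `period` toward BASE_RET and `wait_days` toward zero, while a turbulent one does the opposite, which is the whole adaptation mechanism.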
Since then, several versions have been published with the same parameters:
v1.1, in which diversified static lists are traded.
v1.2 with a dynamic stock selector by momentum (thanks to Mikko M)
v1.3 with a dynamic stock selector by fundamental factors and momentum (Leandro Maia setup)
v1.5 with a dynamic stock selector by fundamental factors and momentum,
with the fee-saving part of the code eliminated, plus daily rebalance, specifically for leverage testing.
Guy Fleury,
Could v1.2-v1.5 be considered out-of-sample, since they trade completely different, dynamically selected symbols without any lookahead bias?
Thanks for your support.
Thunder Chicken
However, this thread has devolved into a massive curve fitting and parameter optimization exercise.
Where did you find massive curve fitting? This strategy has only one meaningful parameter,
and even the worst BASE_RET in the range 50 to 150 will outperform SPY.
Frank Schikarski, Damiano Bolzoni
I do not recommend using v1.5 in future research because it was designed specifically
for testing Dr Ernst Chan's recommendation.
Use v1.3
Frank Schikarski,
Thanks for sharing your version with stop loss.
On Quantopian I used the addition of a stop loss on a similar strategy, and that version increased the return.
The opposite happens here: a slightly lower drawdown, but it takes 10x the time to complete.
Try it on version 1.3.
It is completely up to you to use it or not.
Regarding your post.
You probably gave the best recommendation in this thread, but do not fall into the illusion that the higher the Sharpe ratio, the better. I look first at the return and then at the Sharpe ratio, for the reason explained here.
Radu Spineanu
Vladimir I took v1.5 and removed the daily rebalance, so it does nothing if the stock picks have not changed, and it performed worse. The original idea of rebalancing by buying losers and selling winners had an edge.
I wish I could share the code but I have no idea how to recover old backtests.
v1.3 has cleaner code, but otherwise why do you prefer v1.3 over v1.5?
Jack Pizza
Vladimir and all, here is my concern about this algo: what is the hypothesis behind it? How do we know the current correlations for the exit signal will continue into the future?
((r[self.SLV] < r[self.GLD]) and (r[self.XLI] < r[self.XLU]) and (r[self.DBB] < r[self.UUP]))
Two of the conditions are basically gold / cash related, and the last is metals again / utilities. How solid are these correlations? What happens if they break down in a black swan event? Your strategy will never get an exit signal.
And as mentioned last time, the exit can't just be bonds; it needs an ultimate out to cash, or maybe some other momentum-ranked assets such as gold etc.
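For reference, the quoted condition isolated as a small predicate; treating `r` as a plain dict of recent returns keyed by ticker is an assumption mirroring the snippet above:

```python
def exit_signal(r):
    """All three risk-off pairs must agree before exiting to bonds:
    silver lagging gold, industrials lagging utilities,
    base metals lagging the dollar."""
    return (r["SLV"] < r["GLD"]) and (r["XLI"] < r["XLU"]) and (r["DBB"] < r["UUP"])
```

Seen this way, Jack's concern is concrete: if any one of the three relationships permanently inverts, the conjunction can never again evaluate to True.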
Guy Fleury
@Vladimir, yes. You changed the stock universe, technically, you are out-of-sample. Especially in this case. The selection process went from one to 150 stocks. For sure, 149 of those had not been seen by your trading strategy. So, it qualifies as out-of-sample. When the selection was for only one stock you had QQQ which in itself represents 100 stocks. And therefore, we should not be surprised if actually dealing with 100+ stocks also worked.
Since the core of these versions remained about the same with slight variations on the same theme we could classify all of the 5 or 6 versions of In & Out in other threads as variants.
I've worked on all of those variants. They all had different stock selection processes, trading rules, trading methods, and constraints. They all gave different answers, some pushing here or there. They all were fully invested and periodically rebalanced. Note also that the trade mechanics have not changed that much since October. We could go further back if we included the stuff done on Quantopian.
In the end, the question will be: which variation of which version will you prefer since they all could be pushed to “extraordinary” performance levels? I prefer the ones dealing with 100+ stocks. It spreads out market risk and reduces bet size.
What is left to do is include other protection measures in the form of code that would activate should the trend-following signal break down, or the market really tank. We can backtest our trading strategies, but the real objective is to have them survive going forward, and we should plan for those “events”.
I see the trade mechanics in need of some improvements as well as the stock selection process and the overall timing of trades. But those are additions that each user should address on their own terms following their own objectives and portfolio constraints.
@Guy Fleury I would have to disagree with that statement, since out-of-sample implies that no part of the data has been seen by your algorithm during training. While a new stock selection introduces some new information, if the 150+ stocks have a high correlation with, for example, QQQ, the stock universe cannot be classified as out-of-sample: depending on the correlation, your universe shares over 90% of the underlying factors that drive the outcome the algorithm has been trained on. That would still make it an in-sample result. Ideally, an out-of-sample test should have no relation to the training data, and should definitely not share the same time frame. In a less ideal case, an out-of-sample data set may have a low correlation with the training data, such that any inflation of the results from overfitting is minimal, but a broad stock universe would not fit that bill.
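The overlap being described can at least be measured before calling a test out-of-sample: correlate the equal-weight basket of the new universe's daily returns with the benchmark's. A rough sketch (the function name and any threshold you apply to the result are assumptions):

```python
import numpy as np

def basket_benchmark_corr(universe_returns, benchmark_returns):
    """Correlation between the equal-weight basket of a universe's
    daily returns (array of shape days x stocks) and a benchmark's
    daily returns -- a quick proxy for how much genuinely new
    information a changed universe adds."""
    basket = np.asarray(universe_returns).mean(axis=1)
    return float(np.corrcoef(basket, benchmark_returns)[0, 1])
```

A correlation near 1 supports Menno's point that the "new" universe shares the benchmark's driving factors; a low correlation supports Guy's reading of the universe change as a genuine out-of-sample shift.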
Frank Schikarski
Hi there, thanks to Vladimir's encouragement to try a stop loss to increase returns, here are the Trailing Stop Loss and the Portfolio logic plugged into v1.3. Please don't use my old version from above. My updates are as follows:
Please note that the level of the Stop Loss now depends on the regime defined by the number of pairs in the upward direction. If 2/3 or 3/3 pairs are bullish, the SL is 20%, and if 0/3 pairs are bullish, the SL for bonds is at 15%, which means a very rare trigger. Most importantly: if only 1/3 pairs are bullish, the SL is at 11%, defined from the highest high to the current price, which significantly improved the results. These numbers are a result of in-sample optimization, but I believe that a similar reduction in drawdown will be achievable in the future. In particular, lowering the SL when only one pair is bullish reduces the dependency on all three selected pairs being triggered simultaneously.
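A minimal sketch of that regime-dependent stop, using the percentages quoted above (the function names and the standalone structure are mine, not from the attached code):

```python
def stop_loss_level(bullish_pairs):
    """Trailing-stop distance below the highest high, by regime.
    The percentages are the in-sample numbers from the post."""
    if bullish_pairs >= 2:      # 2/3 or 3/3 pairs bullish: wide stop
        return 0.20
    if bullish_pairs == 1:      # exactly one pair bullish: tightest stop
        return 0.11
    return 0.15                 # 0/3 bullish, i.e. holding bonds

def stop_triggered(highest_high, price, bullish_pairs):
    """True when price has fallen far enough below the trailing high."""
    return price <= highest_high * (1.0 - stop_loss_level(bullish_pairs))
```

The asymmetry is the point: the 1/3-bullish regime gets the tightest stop because it is the regime where the three pairs disagree most.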
In the event that an SL is hit for a single security, a complete rebalancing is triggered 2 days after the exit to fill the empty slot again. These 2 days are also in-sample optimised, but seem to be repeatable, and rebalancing the day after an SL is triggered is not really necessary. Only if an exit were triggered at the same time for a large number of shares, and these then all rose above the exit price the next day, would it be disadvantageous to wait 2 days with the rebalancing.
I also conducted an in-sample backtest of rebalancing frequency with SL activated. Rebalancing after 84 days (when there was no SL-triggered rebalancing before) was the most promising and did not seem to be a "local optimum". This could be challenged by conducting several in-sample backtests with different time windows.
The most sensitive parameters seem to be the fundamental weights, which I have changed to 85% EV to EBITDA ratio, 10% price to EBITDA ratio and 5% PE ratio. The reason is that these parameters have picked the historically successful stocks quite well in the in-sample back tests, and we don't know if these weights will continue to be very good in the future. We could optimise these parameters with a much higher number of stock picks and see if the parameters hold, and also test them with different time windows. And: we don't have to put an algo live and close our eyes for 10 years, we can repeat the optimisation at any time.
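As a rough illustration of weighting the three fundamentals, here is a combined z-score sketch. Note this is an assumption for illustration: the actual v1.3 code ranks each factor separately with its own sort order, whereas this collapses them into one score where lower means cheaper.

```python
import numpy as np

# Weights from the post, treating lower ratios as "cheaper is better"
WEIGHTS = {"ev_to_ebitda": 0.85, "price_to_ebitda": 0.10, "pe_ratio": 0.05}

def fundamental_scores(stocks):
    """stocks: list of dicts carrying the three ratio keys.
    Returns an array of weighted z-scores; lower means cheaper."""
    scores = np.zeros(len(stocks))
    for key, weight in WEIGHTS.items():
        vals = np.array([s[key] for s in stocks], dtype=float)
        spread = vals.std() or 1.0          # guard against a zero spread
        scores += weight * (vals - vals.mean()) / spread
    return scores
```

The z-scoring matters because the three ratios live on different scales; without it, the EV/EBITDA weight would not actually dominate the ranking the way the 85/10/5 split intends.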
The portfolio logic does not play out its strength with this small number of stock picks, but can be useful with a larger number of stocks, or if we want to mix stock picks with index ETFs from the US or other countries, or asset classes with high growth and low correlation.
Results from 2008/1/1 to 2021/1/13 are:
Just to compare the risk of v1.3b and the original v1.3: with a leverage of 1.43, we can achieve the same historical return, but the historical drawdown decreases a bit, from ~38% to ~31%.
Have fun!
Jack Pizza
Maybe another criterion, if you're looking at selecting stocks, is adding a share-repurchase filter, for obvious reasons; not sure if the QuantConnect fundamental data supports that sort of info.
Vladimir
Frank Schikarski,
I like your latest version 1.3b.
Although it has a lower CAGR, it has lower volatility and max drawdown, and a higher Sharpe Ratio and PSR.
Also it runs reasonably fast.
Thank you for bringing a different flavor to this algorithm.
At the next stage, try to invest the money available from the stop loss exit in bonds.
Frank Schikarski
Elsid Aliaj nice idea! I found these fields on stock repurchases in the fundamentals. Also possible would be to continuously calculate the correlation of all available fundamentals and sentiments to price changes and then adapt the fundamentals filter dynamically - volunteers welcome ;)
CashFlowStatement.RepurchaseOfCapitalStock: Payments for Common Stock plus Payments for Preferred Stock.
BalanceSheet.ComTreShaNum: The treasury stock number of common shares. This represents the number of common shares owned by the company as a result of share repurchase programs or donations.
BalanceSheet.PreTreShaNum: The treasury stock number of preferred shares. This represents the number of preferred shares owned by the company as a result of share repurchase programs or donations.
Frank Schikarski
Vladimir interesting thought to replace stopped-out stocks with bonds, especially in the case where only 1/3 pairs is bullish.
What do you think of the rebalancing frequency of 84 days, do you think this is too high? The question is, should we let the top runners run (for even more than 84 days) and only replace the stopped-out stocks with either bonds or other stocks? But the code will get more complicated...
Jack Pizza
Also, regarding bonds, maybe we should check momentum across different classes such as cash, long and medium term. In a case where, say, high interest rates make the TLT return < the SHY return, go into cash as opposed to always holding long/medium-term bonds.
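That bond-vs-cash idea could be sketched as a simple relative-momentum check; the 63-day lookback and the function name are assumptions for illustration, not anything from the thread:

```python
def bond_or_cash(tlt_closes, shy_closes, lookback=63):
    """Hold long-duration bonds only when TLT has outperformed the
    short-duration SHY over the last `lookback` days; otherwise fall
    back to cash, as suggested above."""
    tlt_ret = tlt_closes[-1] / tlt_closes[-lookback] - 1.0
    shy_ret = shy_closes[-1] / shy_closes[-lookback] - 1.0
    return "TLT" if tlt_ret > shy_ret else "CASH"
```

In a rising-rate regime TLT tends to lag SHY, so this check would route the risk-off sleeve to cash exactly when long bonds stop being a safe haven.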
Jack Pizza
Also, what was the code to turn off the daily rebalancing? I think from prior tests the annoying 1-share trades each day didn't add much meaningful performance. Also, for the bond thing, maybe put a threshold on interest rates, like > 3%, or on the velocity of rate hikes, etc.
T Smith
Following on from Menno Dreischor's comments and Vladimir's initial design, I have reduced the signals to just XLI/XLU in order to backtest from 2001. I then optimized the strategy from 2001-2010. We then use the optimized parameters and test 'out-of-sample'.
Using just SPY and TLT (it only starts trading TLT from 2003).
Vladimir