This thread is meant to continue the development of the In & Out strategy started on Quantopian. The first challenge for us will probably be to translate our ideas into QC code.

I'll start by attaching the version Bob Bob kindly translated at Vladimir's request.

Vladimir:

About your key error, did you also initialize UUP like this?

```
self.UUP = self.AddEquity('UUP', res).Symbol
```
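For what it's worth, the KeyError pattern can be illustrated with a small mock, independent of the QC API (the class and method names below are made up for illustration; they are not QuantConnect's):

```python
# Hypothetical mock of the QuantConnect pattern: a symbol must be registered
# (via AddEquity in the real API) before its data can be looked up later,
# otherwise the lookup raises a KeyError.
class MockAlgo:
    def __init__(self):
        self.symbols = {}

    def add_equity(self, ticker):   # stand-in for self.AddEquity(ticker, res)
        self.symbols[ticker] = ticker
        return ticker

    def history(self, ticker):      # stand-in for a history/price lookup
        return self.symbols[ticker]  # KeyError if the ticker was never added

algo = MockAlgo()
algo.add_equity('SPY')              # UUP deliberately not added

try:
    algo.history('UUP')
    missing_uup = False
except KeyError:
    missing_uup = True              # reproduces the reported key error
```

So if any of the signal tickers is read in `rebalance_when_out_of_the_market` but never passed through `AddEquity` in `Initialize`, you get exactly this kind of key error.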

Joshua Tsai

Guy Fleury, would you give the max drawdown? The current Sharpe stands at about 1.85, so it seems like simply increasing leverage and making some minor modifications would achieve similar results?

We're big proponents of the work of Marcos Lopez de Prado. I would recommend reading up on his work on false discoveries in finance:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2599105

His methods can be extended to include the effects of increasing model complexity. We use these methods to define Sharpe ratio thresholds for accepting in-sample backtests of our statistical models, thresholds which increase with model complexity.
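Since PSR comes up later in the thread, a minimal sketch of Bailey and Lopez de Prado's probabilistic Sharpe ratio may make the threshold idea concrete (the function name and defaults are mine; Sharpe ratios here are per period, not annualized):

```python
from math import erf, sqrt

def probabilistic_sharpe_ratio(sr_hat, sr_star, n, skew=0.0, kurt=3.0):
    """Probability that the true Sharpe ratio exceeds sr_star, given an
    observed per-period Sharpe sr_hat over n returns with the given skewness
    and (Pearson) kurtosis. Bailey & Lopez de Prado's PSR."""
    # Standard error of the Sharpe estimate, adjusted for non-normal returns.
    denom = sqrt(1.0 - skew * sr_hat + (kurt - 1.0) / 4.0 * sr_hat ** 2)
    z = (sr_hat - sr_star) * sqrt(n - 1.0) / denom
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF of z

# An annualized Sharpe of 1.85 corresponds to a daily Sharpe of ~1.85/sqrt(252).
psr = probabilistic_sharpe_ratio(1.85 / sqrt(252), 0.0, 3000)
```

With roughly 3000 daily observations, this PSR against a zero benchmark comes out close to 1; the harder test, as discussed above, is against a benchmark Sharpe raised to account for the number of trials and free parameters.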

Nathan Swenson

Perhaps we should have a thread for tracking live results? Anyway, the last trade was on 10/6, into bonds. I am in the aggressive setup, so the entry for TMF on 10/6 was $37.06. I started mid-cycle and entered at $34.10. In either case, the trade is looking good. If you are using the default conservative setup, the entries on 10/6 were TLT: $159.15 and IEF: $120.97. I'm still uncertain whether my waitdays is really the same as in a long-term run, or whether it is tied to my mid-cycle start. We shall see if the move back to IN occurs soon.

Aalap Sharma

+1 to that Nathan.

My live algo entered the TMF position today @ $37.46 on both Alpaca and QuantConnect paper trading.

Guy Fleury

@Joshua, max drawdown was about 0.54. But that might not matter so much in the beginning of a testing process. The strategy starts during the 2008 financial crisis and some drawdowns were unavoidable if not unpredictable. The phase to reduce max drawdown comes later in my testing process.

You are pushing on a trading strategy and forcing it to seek volatility. Not only seeking it but amplifying it by using leverage. You first want to see how far it can go, with and without constraints, to then refine your objectives and protective measures.

First show that the strategy has something, then give it the restrictions you want, and see if there is anything left. At this stage, I kind of find the strategy promising. That might change going forward after more tests and a better understanding of what the trading strategy really does. Afterward, its benefits, if there are any, will be compared to other strategies anyway.

Average win per trade was 16.53%, compared to an average loss of -5.84%, with a 65% win rate. That is the reason the strategy outperformed. Nonetheless, the average portfolio beta (0.369) indicates less market sensitivity than a market surrogate like SPY, which has a beta of 1.0.
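As a quick sanity check, those trade statistics imply a positive per-trade expectancy (a back-of-the-envelope calculation using only the figures quoted above):

```python
# Per-trade expectancy from the quoted statistics:
# 65% win rate, +16.53% average win, -5.84% average loss.
win_rate = 0.65
avg_win = 0.1653
avg_loss = -0.0584

expectancy = win_rate * avg_win + (1.0 - win_rate) * avg_loss
# roughly +0.087, i.e. about +8.7% expected return per trade
```

A large positive expectancy like this is consistent with the outperformance described, though it says nothing yet about how many such trades the strategy gets to make or how correlated they are.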

In the beginning of the battery of tests I tend to apply to a trading strategy, I regard drawdown as something that is “correctable” later on by applying better protective measures. I often borrow them from other programs that have shown better trend-following procedures. I am still a novice at using QC, so it will take time to adapt.

Obviously, we design trading strategies to make money with the lowest acceptable risk. It does not matter so much how we do it as long as it is done honestly and without going bankrupt. We might not know the future but one thing we do want is to not lose our trading capital over the long term. It is the reason why we spread out our bets over time and tradable assets. We also test our trading methods on historical data to see if our strategies would have at least survived over extended periods of time, and how well.

@Menno, I do not believe in simple trading systems. Millions of us have tried that for decades and decades, and look at the results... If those simple systems were that good, we would never even consider trying to go beyond them. We all knew that already. We are analyzing one of the most complex and chaotic systems out there, with millions and millions of participants, and we want something simple and permanent out of it; could I say: think again. We have not even scratched the surface of the possibilities even after over 200 years of trying. Nonetheless, somehow, somewhere, someone will find something interestingly intricate and push forward.

Guy, being simple and having few adjustable parameters are two very different things. It's all about controlling the degrees of freedom of your trading strategy in relation to the amount of data available. A limited amount of data with many parameters / trading rules is a recipe for disaster. A theory may be intricate, or complex, but it should ultimately be subject to few adjustable variables. The more adjustable variables you introduce, the better the outcome of the strategy will have to be in sample for its results to be beyond the reach of the pitfalls of overfitting. The figures of merit for the strategy should therefore always be well beyond what can be achieved with random data. In this case using six adjustable parameters virtually guarantees Sharpe ratios >> 1. If the strategy you're developing has a Sharpe ratio in the same range it is very much in danger of being a false discovery.

Just to add to the above, for our trading systems, the entry and exit levels are part of our theoretical framework. A stop-loss for example should not be some free variable to minimize losses in a simulation. It should be the consequence of a logical theoretical definition of risk and reward independent of the strategy you are considering. For this strategy the number of free variables will thus be reduced to a single variable: the lookback period. A good theoretical framework will dramatically reduce the degrees of freedom of your strategies, and thus increase the chances of success.

Tristan F

Menno Dreischor good points. I got comfortable with this strategy because it appears robust even with significant reductions in degrees of freedom.

For example, in the attached, all logic related to varying the number of days out of the market are removed. This drops the following parameters: maximum number of days out of the market, time decay, and the 3 conditions associated with extending those number of days.

What we're left with is a strategy with the following thesis: if recent 3-month returns in certain indicator assets (metals, natural resources...), or relationships between related assets (silver less gold...), hit extremes (below the 1st percentile over the last year), derisk for 15 days; otherwise, risk on. The following levers still remain:

With this simplification, we have a strategy with a Sharpe ratio of 1.85 since 2008. This is almost as good as the 1.9 Sharpe of the original strategy. I've tested a few variations in the numerical parameters above (#1-4), and the results still hold up well. Unfortunately, QC doesn't have a way to test parameter ranges in backtests, so there's no means to test robustness more systematically. Instead of parameter optimization, do you run any tests for robustness?
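Pending built-in support, one offline approach is to sweep the parameters yourself and look at the whole distribution of Sharpe ratios rather than the single best one. A rough sketch follows; `run_backtest` here is a hypothetical stand-in with a dummy response surface, and in practice it would launch an actual backtest:

```python
from itertools import product
from statistics import median

def run_backtest(lookback, exit_days, percentile):
    """Hypothetical stand-in: in practice this would run the strategy
    with the given parameters and return its Sharpe ratio."""
    # Dummy surface for illustration only, peaked near the published values.
    return (1.85 - 0.01 * abs(lookback - 63)
                 - 0.02 * abs(exit_days - 15)
                 - 0.10 * abs(percentile - 1))

grid = product(range(40, 90, 10),   # lookback days
               range(10, 25, 5),    # days out of the market
               (1, 2, 5))           # extreme-return percentile
sharpes = [run_backtest(*params) for params in grid]

# A robust strategy keeps most of its edge across the grid,
# not just at the optimum.
print(f"median Sharpe over grid: {median(sharpes):.2f}, "
      f"worst: {min(sharpes):.2f}")
```

If the median of the grid sits far below the peak, the peak is more likely an overfit artifact than a real edge.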

Leif Trulsson

Menno Dreischor, by proving that the strategy/algorithm is not good because it gives good results with random data, you have actually proved the strength of the strategy/algorithm. The strength of the algorithm per se does not lie in the data itself, but in how the data is handled, and in particular this part:

```
hist_shift = hist.apply(lambda x: (x.shift(65) + x.shift(64) + x.shift(63) + x.shift(62) +
                                   x.shift(61) + x.shift(60) + x.shift(59) + x.shift(58) +
                                   x.shift(57) + x.shift(56) + x.shift(55)) / 11)
returns_sample = (hist / hist_shift - 1)
# Reverse code USDX: sort largest changes to bottom
returns_sample[self.USDX] = returns_sample[self.USDX] * (-1)
# For pairs, take returns differential, reverse coded
returns_sample['G_S'] = -(returns_sample[self.GOLD] - returns_sample[self.SLVA])
returns_sample['U_I'] = -(returns_sample[self.UTIL] - returns_sample[self.INDU])
returns_sample['C_A'] = -(returns_sample[self.SHCU] - returns_sample[self.RICU])
self.pairlist = ['G_S', 'U_I', 'C_A']
# Extreme observations; statist. significance = 1%
pctl_b = np.nanpercentile(returns_sample, 1, axis=0)
extreme_b = returns_sample.iloc[-1] < pctl_b
# Determine waitdays empirically via safe haven excess returns, 50% decay
self.WDadjvar = int(
    max(0.50 * self.WDadjvar,
        self.INI_WAIT_DAYS * max(1,
            np.where((returns_sample[self.GOLD].iloc[-1] > 0) &
                     (returns_sample[self.SLVA].iloc[-1] < 0) &
                     (returns_sample[self.SLVA].iloc[-2] > 0),
                     self.INI_WAIT_DAYS, 1),
            np.where((returns_sample[self.UTIL].iloc[-1] > 0) &
                     (returns_sample[self.INDU].iloc[-1] < 0) &
                     (returns_sample[self.INDU].iloc[-2] > 0),
                     self.INI_WAIT_DAYS, 1),
            np.where((returns_sample[self.SHCU].iloc[-1] > 0) &
                     (returns_sample[self.RICU].iloc[-1] < 0) &
                     (returns_sample[self.RICU].iloc[-2] > 0),
                     self.INI_WAIT_DAYS, 1))))
adjwaitdays = min(60, self.WDadjvar)
```

in `rebalance_when_out_of_the_market`.

"Absence of evidence is not evidence of absence."

The fact that the real strategy does not outperform the same strategy run on random variables is evidence that the strategy's results are not statistically significant. It is evidence that the strategy's figures of merit cannot be credibly distinguished from random noise, and that while a relation may exist, it cannot be scientifically verified. It means that I can take any random set of three assets, optimize the parameters, and obtain similar results. Here, for example, I've picked gold (GLD), commodities (DBC), and international equity (VEA) as my signals. I deliberately took these because the idea that gold, commodities, and international equity are somehow related to US equity is not inconceivable, and not surprisingly the results look good:

I'm sure we could create a narrative that is actually credible enough to support these results (the well-known trappings of confirmation bias), but the reality is that these results could have been achieved with any unrelated variable.

Now, you are right that the random-data argument does not prove the strategy itself is bogus. It just proves that the results presented for the strategy are no evidence to the contrary, and should not be trusted to say anything about real-life performance. However, it is up to the strategy developer to scientifically prove that the strategy has merit and that its performance is statistically significant. My criticism here is that this has not been done. The real evidence that the strategy itself does not work, however, I have already presented: strategy validation through cross-validation shows the strategy does not work out of sample. This is the evidence of absence.

Vladimir

Menno Dreischor

John von Neumann famously said: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." In an algorithm, any word is a parameter.

Joshua Tsai

While returns drop significantly if you change the parameters, the PSR remains quite high even if I shift them, so I'd say the strategy is rather robust (if slightly overfit). Thus, returns are likely to be lower in the future but should still beat the S&P 500.

Vladimir

Leif Trulsson,

Can you explain why, in `rebalance_when_out_of_the_market`, `adjwaitdays` can get a value of 225 and then in the next line is limited to 60?
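On the 225: in the snippet quoted earlier, each of the three `np.where` conditions returns either `INI_WAIT_DAYS` or 1, and that inner `max` is multiplied by `INI_WAIT_DAYS` again, so with `INI_WAIT_DAYS = 15` the product can reach 15 * 15 = 225 before the `min(60, ...)` cap. A small arithmetic illustration (this is my reading of the code; the values are assumed):

```python
# Arithmetic behind the 225, assuming INI_WAIT_DAYS = 15 as in the snippet:
# each np.where condition yields either INI_WAIT_DAYS or 1, and the inner
# max is then multiplied by INI_WAIT_DAYS again.
INI_WAIT_DAYS = 15
signals = (INI_WAIT_DAYS, 1, 1)            # e.g. only the gold/silver condition fired

wd_adj = INI_WAIT_DAYS * max(1, *signals)  # worst case: 15 * 15 = 225
adjwaitdays = min(60, wd_adj)              # immediately capped back to 60
```

So the 225 is only an intermediate value; the cap on the next line means any fired condition effectively pins the wait at the 60-day ceiling.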

Vladimir

Leif Trulsson,

Why is `adjwaitdays` today calculated based on the ratio of price to an 11-day moving average from 55 days ago?

Joshua, sadly this is not the case. Here's the outcome of varying the parameters over a credible range:

Exit at DBB levels between -10% and -5%

Exit at SHY levels between -2% and -0.25%

Exit at XLI levels between -10% and -5%

Lookback period between 20 days and 100 days

Exit period between 10 days and 30 days

Re-entry after drop in XLI between -100% and -20%

Only a small minority of simulations significantly outperform the SPY.

Vladimir, that is not true, since we are talking about degrees of freedom. The words in an algorithm are not freely chosen, but are purposefully strongly related to each other. The point of an algorithm is to reduce the number of degrees of freedom, to turn seemingly random noise into structure in a deterministic manner.

Leu Bar

Hi, I'm new here and trying to understand the algorithm. Could someone please explain this part? Thank you.

```
hist_shift = hist.apply(lambda x: (x.shift(65) + x.shift(64) + x.shift(63) + x.shift(62) +
                                   x.shift(61) + x.shift(60) + x.shift(59) + x.shift(58) +
                                   x.shift(57) + x.shift(56) + x.shift(55)) / 11)
```

Nathan Swenson

It's capturing the average value of each signal over the 11-day window between 55 and 65 days ago.

Jared Broad

Tristan F Parameter sensitivity optimization is coming very soon =) All the tech is built, we're waiting on the UX team to catch up. ETA next week.

https://github.com/QuantConnect/Lean/pull/4923

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by QuantConnect. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. QuantConnect makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. All investments involve risk, including loss of principal. You should consult with an investment professional before making any investment decisions.

Vladimir

Leu Bar,

A more compact way to express what you are trying to understand is:

```
hist_shift = hist.shift(55).rolling(window=11).mean()
```
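The two formulations can be checked for equivalence on synthetic data (a quick sanity check of my own, not from the thread):

```python
import numpy as np
import pandas as pd

# Synthetic price series to compare the two formulations.
rng = np.random.default_rng(0)
hist = pd.DataFrame({'A': rng.random(200).cumsum() + 100,
                     'B': rng.random(200).cumsum() + 50})

# Original: average of the values 55 to 65 rows back, spelled out as shifts.
expanded = hist.apply(lambda x: sum(x.shift(k) for k in range(55, 66)) / 11)

# Compact version: shift first, then take an 11-row rolling mean.
compact = hist.shift(55).rolling(window=11).mean()

print(np.allclose(expanded, compact, equal_nan=True))  # True
```

Both produce NaN for the first 65 rows (not enough history) and identical values after that, which confirms the two expressions are interchangeable.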
