In a strategy, some parameters may be chosen arbitrarily. It would be useful to be able to automatically launch many backtests, each producing the equity line for a different combination of values of those parameters.
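The idea can be sketched as a simple parameter sweep. This is a minimal illustration, not QuantConnect's API: the parameter names and the `run_backtest` function are hypothetical stand-ins for a real backtesting engine.

```python
from itertools import product

# Hypothetical parameter grid: names and values are illustrative only.
param_grid = {
    "fast_period": [5, 10, 20],
    "slow_period": [50, 100, 200],
}

def run_backtest(params):
    """Stand-in for a real backtest; returns a toy final equity value."""
    # In practice this would replay historical data with the given parameters.
    return 100.0 + params["fast_period"] - params["slow_period"] / 100.0

# Launch one backtest per combination of parameter values.
results = [
    {"params": dict(zip(param_grid, combo)),
     "equity": run_backtest(dict(zip(param_grid, combo)))}
    for combo in product(*param_grid.values())
]
print(len(results))  # 3 * 3 = 9 combinations
```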

I could then apply a filter and select the best-performing parameters.
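The selection step is just a filter plus a maximum over the collected results. The result list below is made-up toy data for illustration.

```python
# Toy results: each entry pairs a (hypothetical) parameter set with its profit.
results = [
    {"params": {"period": 10}, "profit": 0.05},
    {"params": {"period": 20}, "profit": 0.12},
    {"params": {"period": 50}, "profit": -0.03},
]

# Filter out losing combinations, then pick the best performer.
profitable = [r for r in results if r["profit"] > 0]
best = max(profitable, key=lambda r: r["profit"])
print(best["params"])  # {'period': 20}
```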

This is highly likely to produce what's known as "overfitting": we may have identified a set of parameters that produced profit in the past, but did so not because of an inherent market bias, rather because they picked up noise and randomness.

How do I test for this?

Randomly chosen portions of the timeframe should be excluded (held out-of-sample) before selecting the best-performing parameters:
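A minimal sketch of the hold-out step, assuming the backtest period is divided into 12 (hypothetical) monthly segments: a random subset is set aside and never shown to the parameter-selection stage.

```python
import random

# Hypothetical: 12 monthly segments of the backtest period, indexed 0..11.
segments = list(range(12))

rng = random.Random(42)  # fixed seed so the split is reproducible
out_of_sample = sorted(rng.sample(segments, k=4))   # roughly a third held out
in_sample = [s for s in segments if s not in out_of_sample]

# Parameter selection runs only on in_sample; out_of_sample stays untouched
# until the final validation step.
print(in_sample, out_of_sample)
```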


Then we simply ask: are the parameters that worked best in-sample also the ones that worked best out-of-sample (the portion we excluded)? If we have really identified a market trend/bias best captured by a set of parameters, those parameters should also be among the best on the out-of-sample portion. In the example below, we see that the best strategies worked well only because of overfitting:

You can see there is no correlation between the success a set of parameters had in-sample and the success it had out-of-sample.
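One common way to quantify this check is a rank correlation between in-sample and out-of-sample performance: a value near zero (or negative) suggests the in-sample winners were noise. The profit figures below are invented for illustration, and Spearman's rho is computed from scratch to keep the sketch dependency-free.

```python
# Hypothetical in-sample vs out-of-sample profits for 5 parameter sets.
in_sample  = [0.30, 0.25, 0.18, 0.10, 0.02]
out_sample = [0.01, -0.05, 0.12, 0.03, 0.08]

def ranks(xs):
    """Rank of each element (0 = smallest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation via the classic d^2 formula (no ties)."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman(in_sample, out_sample)
print(round(rho, 2))  # -0.6: the in-sample ranking does not carry over
```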

Adding the ability to perform these analyses in QuantConnect would make it exponentially more powerful.
