Hello Community! 

We're happy to share that we've finally shipped cloud parameter optimization for parameter sensitivity testing. This feature lets you spin up a small cluster of servers for a few minutes to run up to 24 backtests in parallel. It then merges all the results into a single view where you can explore the aggregate data and find areas of parameter "strength".

We recommend you use it to test your parameter robustness. If you find a sharp peak in returns at a single parameter value, that's a sign the strategy has been overfitted. If a wide range of parameter values shows strong results, it's reasonably safe to pick a parameter in that zone.
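To make the "plateau vs. peak" idea concrete, here is a minimal stand-alone sketch (the numbers and threshold are hypothetical, not real backtest output): given returns keyed by an EMA period, it finds the widest contiguous zone of strong results and picks a parameter from its middle rather than the single best peak.

```python
# Hypothetical returns from a parameter sweep: period -> annual return.
results = {10: 0.04, 20: 0.05, 30: 0.21, 40: 0.19, 50: 0.20, 60: 0.18, 70: 0.03}

threshold = 0.15  # assumed cutoff for a "strong" result

# Collect contiguous runs of parameter values at or above the threshold.
params = sorted(results)
runs, current = [], []
for p in params:
    if results[p] >= threshold:
        current.append(p)
    else:
        if current:
            runs.append(current)
        current = []
if current:
    runs.append(current)

# The widest run is the most robust zone; take a value from its middle
# instead of the single sharpest peak.
widest = max(runs, key=len)
robust_param = widest[len(widest) // 2]
print(robust_param)  # -> 50
```

A sharp peak would show up here as a run of length one, which this heuristic deliberately passes over.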

Parameters are pulled into your strategy with the GetParameter method. It takes the name of the parameter and returns the value assigned to it for the current backtest, as a string.

fast = self.GetParameter("ema-fast")
slow = self.GetParameter("ema-slow")
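Because the value comes back as a string (or nothing at all when the parameter isn't defined for a run), it's worth guarding and casting before use. A minimal stand-alone sketch of that pattern, where get_parameter is a hypothetical stand-in for self.GetParameter rather than the actual API:

```python
def get_parameter(params, name):
    """Hypothetical stand-in for self.GetParameter: returns the stored
    string value, or None when the parameter is not defined."""
    return params.get(name)

# Assumed parameter set, as it would be configured in the web IDE.
params = {"ema-fast": "10", "ema-slow": "50"}

raw_fast = get_parameter(params, "ema-fast")
raw_slow = get_parameter(params, "ema-slow")

# Cast to int, falling back to a sensible default when the value is missing.
fast = int(raw_fast) if raw_fast is not None else 12
slow = int(raw_slow) if raw_slow is not None else 26

print(fast, slow)  # -> 10 50
```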

Check out the demonstration in the video below. We'll follow up in the coming weeks with documentation on how to use it and a tutorial video.

Parameter optimization

class EmaCrossParameterAlgorithm(QCAlgorithm):

    def Initialize(self):
        # GetParameter returns a string, so cast the values before use.
        self.slow = int(self.GetParameter("ema-slow"))
        self.fast = int(self.GetParameter("ema-fast"))

        self.symbol = self.AddEquity("QQQ", Resolution.Hour).Symbol
        self.emaFast = self.EMA(self.symbol, self.fast)
        self.emaSlow = self.EMA(self.symbol, self.slow)

    def OnData(self, slice):
        # Wait until both indicators have enough data.
        if not (self.emaFast.IsReady and self.emaSlow.IsReady):
            return

        # Go long when the fast EMA is above the slow EMA; otherwise exit.
        if self.emaFast.Current.Value > self.emaSlow.Current.Value:
            self.SetHoldings(self.symbol, 1)
        else:
            self.SetHoldings(self.symbol, 0)

PS: It may be tempting to use this to hyper-tune a backtest, but resist the urge! It is generally meaningless to analyze a single backtest with this tool. Try always to pick strategies with meaning, and assign parameters meaningful values; e.g., rainfall per day has a realistic range of values.