OK, so we all know this problem: you re-run your backtest, and because of a minuscule change in conditions (a minor algorithm difference, or randomness) you get an entirely different outcome. In algorithms where decisions depend on current portfolio holdings, it's even worse, because the first difference sets off a domino effect through every subsequent decision. Chaos theory in action, I suppose.

The crude way I dealt with this outside QC was re-running tests many times and looking at the distribution of outcomes. QC doesn't currently support that aside from doing it manually, or somehow hacking it into your algorithm, e.g. by simulating trading inside the simulation (messy, and it defeats the purpose of QC). It would naturally be more CPU-intensive, but hey, I'm willing to pay more for something that solves my problem out of the box and saves me time.
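For concreteness, here's a minimal sketch of the kind of re-run harness I mean. `run_backtest` is a hypothetical stand-in for launching one backtest (e.g. via the LEAN CLI) with a controlled random seed; nothing here is an actual QC API.

```python
import random
import statistics

def run_backtest(seed: int) -> float:
    """Hypothetical stand-in for one backtest run. A real version would
    launch the algorithm with this seed driving any stochastic decisions
    and return a summary statistic such as total return."""
    rng = random.Random(seed)
    # Placeholder outcome: noisy return to illustrate run-to-run variance.
    return rng.gauss(0.08, 0.05)

# Re-run the "same" strategy many times and judge the distribution,
# not any single outcome.
results = [run_backtest(seed) for seed in range(100)]
print(f"mean:  {statistics.mean(results):+.4f}")
print(f"stdev: {statistics.stdev(results):.4f}")
print(f"worst: {min(results):+.4f}")
```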

I think QC would also benefit from an optional high-level buy/sell signal framework that allows signals to be tested in isolation, because testing signals without the influence of stateful portfolio holdings is important to avoid the domino effect. I'm not sure how far one can go in customizing the QC test output today (I think this is basically possible already with some work), but again, convenience matters.
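To illustrate what I mean by "in isolation": score each signal by its own forward return, with no portfolio in the loop. A rough sketch, assuming you have a price series and a boolean signal series on the same index (all names are illustrative, not a QC API):

```python
import pandas as pd

def evaluate_signals(prices: pd.Series, signals: pd.Series,
                     horizon: int = 5) -> pd.DataFrame:
    """Score each buy signal by the forward return over `horizon` bars,
    with no portfolio state involved."""
    forward_return = prices.shift(-horizon) / prices - 1.0
    hits = forward_return[signals].dropna()
    return pd.DataFrame({"forward_return": hits, "win": hits > 0})

# Usage:
# report = evaluate_signals(prices, my_signals, horizon=10)
# print(report["forward_return"].mean(), report["win"].mean())
```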

As for parameter optimization (e.g. walk-forward) and machine learning, you currently have to hack around system limitations in a rather awkward manner. I'm curious what the future holds here for QC; there are definitely great possibilities for doing more of this in the cloud rather than on my local computer's limited CPU.
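For anyone unfamiliar with walk-forward: the idea is to fit parameters on a rolling training window and evaluate them only on the following out-of-sample window. A generic sketch, with `fit` and `score` as hypothetical user-supplied callables:

```python
def walk_forward(data, train_len, test_len, fit, score):
    """Generic walk-forward loop: `fit` picks parameters on each
    training window (e.g. a grid search), `score` evaluates them on
    the next out-of-sample window. All names are illustrative."""
    results = []
    start = 0
    while start + train_len + test_len <= len(data):
        train = data[start:start + train_len]
        test = data[start + train_len:start + train_len + test_len]
        params = fit(train)                   # in-sample parameter selection
        results.append(score(test, params))   # out-of-sample evaluation
        start += test_len                     # roll forward by one test window
    return results
```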

Overall I'm wondering what other people's thoughts are about these things, because I regard them as my biggest problems with working in QC right now, some other purely technical issues aside.
