Hi there!

I’d like to share some experience testing QCAlgorithm; surely someone will find this helpful.

First, some background. This is the boring part, so skip it if you want (a TL;DR can be found at the end). The first time I realized the importance of testing was two years ago, when I spent 3 weeks (yes, 3 weeks!) debugging an algorithm that was, in fact, correct. A year later I met a great developer who introduced me to TDD. In fact, he is a TTDD (Taliban test driven developer): he told me that TDD is done completely or it is not TDD. I learned a lot from him, and in that time I produced the best pieces of code I’ve written so far. Sadly, my honeymoon with TDD lasted only a few months, and I left it because I started developing features that rely on QCAlgorithm (and I think there is no need to tell this forum that QCAlgorithm is a huge Matryoshka doll) and I had no idea how to test it. As I desperately needed to see some stuff working, I developed without testing, but with the firm conviction that someday, when I knew how, I would rewrite it using TDD.

These last weeks I was working on a Risk Manager module, and once it was running I wrote some tests, just to be sure. Now I want to add some new features, but I also want to produce good quality code; this stuff will manage real money.

So here is the promised testing road (all the examples are available in this Lean fork, ready to be run):

My first try was simply to write a QCAlgorithm called RiskManagerTestingAlgorithm and put some exceptions here and there. I ran the algorithm like any other Lean algorithm, and if some red text showed up it meant something was broken. I know it has all the defects a test can have, but it was a first try.
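To give an idea, here is a minimal sketch of what that first try looked like. The symbol, dates, and checks are illustrative, not the actual code from the fork:

```csharp
using System;
using QuantConnect.Algorithm;
using QuantConnect.Data;

public class RiskManagerTestingAlgorithm : QCAlgorithm
{
    public override void Initialize()
    {
        // Illustrative setup; the real algorithm exercises the Risk Manager.
        SetStartDate(2013, 10, 7);
        SetEndDate(2013, 10, 11);
        SetCash(100000);
        AddEquity("SPY");
    }

    public override void OnData(Slice slice)
    {
        if (!Portfolio.Invested)
        {
            SetHoldings("SPY", 0.5);
        }
        // Poor man's assertion: red text in the console means a broken feature.
        if (Portfolio.TotalPortfolioValue <= 0)
        {
            throw new Exception("Portfolio value should never be non-positive.");
        }
    }
}
```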

Then I tried using more focused test methods, like the ones I saw in the QuantConnect.Tests project. But as I needed more and more of the QCAlgorithm features, the tests became very error-prone (my errors, of course). Here is an example of the kind of stuff I tried… and failed miserably.
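Roughly, those attempts looked like this (a hypothetical reconstruction, not the real test). The problem is that every new feature drags in another dependency that has to be wired up by hand:

```csharp
using NUnit.Framework;
using QuantConnect.Algorithm;

[TestFixture]
public class RiskManagerTests
{
    [Test]
    public void MaxExposureIsRespected()
    {
        // Building a bare QCAlgorithm works for trivial cases...
        var algorithm = new QCAlgorithm();
        algorithm.SetCash(100000);

        // ...but here the Matryoshka doll opens: subscriptions, securities,
        // transaction handlers, etc., all need manual setup before anything
        // realistic can run, and that is where my errors crept in.
        Assert.IsNotNull(algorithm.Portfolio);
    }
}
```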

Finally, I came up with this method: first I define a base QCAlgorithm (thanks Stefano!), and then each test is a QCAlgorithm that derives from the base. Each feature is checked inside a different QCAlgorithm, and the test results are exposed through the RuntimeStatistics dictionary.
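In sketch form (simplified; the real algorithms are in the fork), the pattern relies on QCAlgorithm's SetRuntimeStatistic helper to record each assertion as a named Boolean statistic:

```csharp
using QuantConnect.Algorithm;

public abstract class RiskManagerBaseTestAlgorithm : QCAlgorithm
{
    public override void Initialize()
    {
        // Shared setup for all the test algorithms (illustrative values).
        SetStartDate(2013, 10, 7);
        SetEndDate(2013, 10, 11);
        SetCash(100000);
        AddEquity("SPY");
    }

    // Record a named pass/fail flag; "False" anywhere means a broken feature.
    protected void Check(string testName, bool passed)
    {
        SetRuntimeStatistic(testName, passed.ToString());
    }
}

public class MaxDrawdownTestAlgorithm : RiskManagerBaseTestAlgorithm
{
    public override void OnEndOfAlgorithm()
    {
        // Hypothetical feature check, one per derived algorithm.
        Check("MaxDrawdownRespected", Portfolio.TotalPortfolioValue > 90000m);
    }
}
```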

Then I use the beautiful AlgorithmRunner with two modifications:

First, I extract as a method the piece of code that actually runs the algorithm and returns the ResultsHandler, so all the tests that already depend on the AlgorithmRunner can be used as always.

Second, I add a new RunLocalBacktest overload without the dictionary of expected results. In that case, I get the RuntimeStatistics and check that all the parsed Booleans are True, as shown in the sketch below.
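Here is a hedged sketch of that second modification. RunAlgorithmAndGetRuntimeStatistics is a made-up stand-in for the extracted method, simplified to return just the runtime statistics instead of the full ResultsHandler:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public static class AlgorithmRunnerSketch
{
    // Stand-in for the method extracted in the first modification: it runs
    // the named algorithm through the engine and returns what the test
    // algorithms reported. Engine bootstrapping elided for brevity.
    public static Dictionary<string, string> RunAlgorithmAndGetRuntimeStatistics(string algorithm)
    {
        // ... run the local backtest, grab the results handler ...
        return new Dictionary<string, string>();
    }

    // The new overload: no expected-results dictionary. Every runtime
    // statistic the test algorithm reported must parse to a Boolean True.
    public static void RunLocalBacktest(string algorithm)
    {
        var statistics = RunAlgorithmAndGetRuntimeStatistics(algorithm);
        foreach (var statistic in statistics)
        {
            bool passed;
            Assert.IsTrue(bool.TryParse(statistic.Value, out passed),
                "Runtime statistic '" + statistic.Key + "' is not a Boolean.");
            Assert.IsTrue(passed, "Failed check: " + statistic.Key);
        }
    }
}
```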

And voila… take a look at this error message. I know, it can be improved :p

Now an OCD question: the RiskManagerTestingAlgorithms file is in the CSharp project. If the file isn’t there, the engine complains that it can’t find the algorithms. Is there a way to move the file to the QuantConnect.Tests project?

Any suggestions will be much appreciated. Thanks for your time.


TL;DR: I know testing is important. I wanted to test features that depend on QCAlgorithm, but I didn’t know how. Now I use a modified AlgorithmRunner to do that kind of testing.
