Hello

I currently have an algo that doesn't do all that much (IMO), yet it's getting killed randomly by out-of-memory events during backtests. It would be rather bad if that happened during live deployment, and given the non-deterministic nature of the GC, chances are it sometimes will. Is there a way to detect these events ahead of time? On a dedicated machine (assuming one runs 64-bit executables), such limits are typically softer and cause incremental slowdown well before actually running out, since virtual memory can be paged to slower SSD/HDD.
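Not a QC-specific answer, but as a rough way to see the problem coming, one could log the process's memory footprint from inside the algorithm and warn when it approaches a self-imposed ceiling. This is only a sketch using standard .NET APIs (GC.GetTotalMemory, Process.WorkingSet64) plus QCAlgorithm's scheduling and logging; the 6 GB soft limit and the class name are my own invention, not an actual platform limit.

```csharp
using System;
using System.Diagnostics;
using QuantConnect.Algorithm;

public class MemoryWatchAlgorithm : QCAlgorithm
{
    // Hypothetical soft ceiling; the real backtest limit may differ.
    private const long SoftLimitBytes = 6L * 1024 * 1024 * 1024;

    public override void Initialize()
    {
        SetStartDate(2020, 1, 1);
        SetCash(100000);
        AddEquity("SPY");

        // Check memory once a day, well before any hard limit is hit.
        Schedule.On(DateRules.EveryDay(), TimeRules.At(12, 0), () =>
        {
            var managed = GC.GetTotalMemory(forceFullCollection: false);
            var workingSet = Process.GetCurrentProcess().WorkingSet64;
            Log($"Managed heap: {managed / 1048576} MB, working set: {workingSet / 1048576} MB");

            if (workingSet > SoftLimitBytes)
            {
                Log("Approaching memory ceiling - consider trimming caches or history.");
            }
        });
    }
}
```

At minimum this would show whether memory grows steadily over the backtest (a leak or an unbounded cache) or spikes suddenly, which are very different problems to fix.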

I've deliberately coded in a non-memory-optimal manner (because I didn't expect memory to be an issue). I could rewrite my own algo to use zero-transient-allocation code patterns, but there remains the issue of LeanEngine happily allocating temporary objects (from the little I've looked at the code so far, it makes heavy use of LINQ). Admittedly, with a properly functioning GC, these objects should die in gen 0/1 collections before causing a loss of performance in gen 2; however, the greater the system stress, the greater the chance of allocations building up before the next GC cycle. A minimal illustration of what I mean by zero-transient-allocation patterns is sketched below.
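Purely for illustration (this is not how LEAN itself computes anything, and the names are mine): a LINQ-style rolling average allocates on every call, whereas a preallocated circular buffer allocates only at construction and produces no per-bar garbage.

```csharp
using System.Collections.Generic;
using System.Linq;

public class RollingMean
{
    // Allocation-heavy version: materializes a new list and iterator on
    // every call, so each bar produces short-lived gen 0 garbage.
    public static decimal WithLinq(IEnumerable<decimal> window)
    {
        return window.ToList().Average();
    }

    // Zero-transient-allocation alternative: a fixed circular buffer
    // allocated once; subsequent updates touch no heap at all.
    private readonly decimal[] _buffer;
    private int _index;
    private int _count;
    private decimal _sum;

    public RollingMean(int size)
    {
        _buffer = new decimal[size];
    }

    public decimal Update(decimal value)
    {
        if (_count == _buffer.Length)
        {
            _sum -= _buffer[_index];   // drop the oldest sample
        }
        else
        {
            _count++;
        }
        _buffer[_index] = value;
        _sum += value;
        _index = (_index + 1) % _buffer.Length;
        return _sum / _count;
    }
}
```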

If anyone has QC-specific guidelines on memory usage, please share. At the moment I'm somewhat skeptical about being able to build algorithms that regularly survive longer backtests with more than a few indicators/data series. I will certainly be looking into the matter further, though.
