Hi Everyone!

We're happy to share that we've added GPU support for Backtesting and Research to our cloud platform! The current servers use a Tesla V100S GPU, shared with at most three other users, and depending on server load you may have 100% of the GPU to yourself 🚀🎉

In initial tests, we see speed-ups of 100x or more when the GPUs are used with an awareness of their limitations. Data must be transferred into GPU memory before it can be processed, and each transfer incurs an overhead; for best results, batch your analysis at regular weekly or monthly intervals rather than processing data point by point.
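As a rough illustration of why batching helps, here's a toy cost model (the overhead and per-item numbers are arbitrary assumptions for illustration, not benchmarks of our hardware): each host-to-GPU transfer pays a fixed setup cost, so fewer, larger batches amortize it away.

```python
# Toy model: each host-to-GPU transfer pays a fixed overhead plus a per-item cost.
# The constants below are illustrative assumptions, not measured values.
TRANSFER_OVERHEAD = 5.0   # fixed cost per transfer (arbitrary units)
PER_ITEM_COST = 0.01      # copy + compute cost per data point (arbitrary units)

def total_cost(n_items: int, batch_size: int) -> float:
    """Total cost of processing n_items in batches of batch_size."""
    n_transfers = -(-n_items // batch_size)  # ceiling division
    return n_transfers * TRANSFER_OVERHEAD + n_items * PER_ITEM_COST

# Sending 10,000 bars one at a time pays the overhead 10,000 times...
print(total_cost(10_000, 1))       # 50100.0
# ...while one monthly batch of 10,000 bars pays it once.
print(total_cost(10_000, 10_000))  # 105.0
```

The same intuition carries over to real GPU code: group your data into large batches so the transfer overhead is paid once per batch, not once per data point.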

To test the speed-up, we ran a strategy provided by a client on a CPU-only machine with all training period limitations removed. After more than 24 hours, we stopped it and re-ran the same strategy on a GPU machine, where it completed in 17 minutes.

Please consider this a public beta: we're still testing edge cases and optimizing, so specific behavior is subject to change. If you have specific strategies you'd like us to test, please post them here or send them to support and we'll run them on the new nodes.

If you're interested, check out the new nodes on the pricing page: Backtesting B4-16-GPU and Research R4-16-GPU. Each is available to lease for $400 per month.

Happy Coding!

Jared