Here is a backtest with some changes I made to get you started on optimizing.
I think part of the linear trend is the accumulation of rolling metrics, orders, trades, and more. I imagine Lean has to do some heavy lifting to offer us all the info provided in Overview, Trades, etc.
I switched the main compute code to a list comprehension instead of the existing for loop, using something like:
results = [eat(cheese) for cheese in wheel if cheese.owner == 'Me'] #yummy
I traded a bit of RAM for some CPU. The large, redundant use of History was slowing things down and would be hard to pull off in live trading without a timeout or delay. I switched this to a dictionary of length-limited DataFrames to reduce how much history needs to be requested at each rebalance. I also think you can increase speed by adjusting hyperparameters such as tol, max_iter, and n_init to limit the time spent waiting for each model's loss to converge or for the training to give up.
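To show what I mean by the cached history, here is a minimal sketch (not the exact code in the attached backtest; the class name, symbol list, lookback length, close-only columns, and the newer AlgorithmImports style are all assumptions on my part). One warm-up History call fills the cache, and each new bar is appended and trimmed so the DataFrames never grow past the lookback:

```python
# Minimal sketch of a dictionary of length-limited DataFrames (names/lookback assumed).
import pandas as pd
from AlgorithmImports import *

class CachedHistoryAlgorithm(QCAlgorithm):

    def Initialize(self):
        self.SetStartDate(2020, 1, 1)
        self.SetCash(100000)
        self.lookback = 252  # rows kept per symbol (assumed)
        self.symbols = [self.AddEquity(t, Resolution.Daily).Symbol for t in ["SPY", "QQQ"]]

        # One warm-up History call per symbol, then the cache is maintained incrementally.
        self.cache = {}
        for s in self.symbols:
            hist = self.History(s, self.lookback, Resolution.Daily)
            # Keep only the close column, indexed by time (drop the symbol index level).
            self.cache[s] = (hist["close"].droplevel(0).to_frame()
                             if not hist.empty else pd.DataFrame(columns=["close"]))

    def OnData(self, data):
        for s in self.symbols:
            if not data.Bars.ContainsKey(s):
                continue
            bar = data.Bars[s]
            row = pd.DataFrame({"close": [float(bar.Close)]}, index=[self.Time])
            # Append the new bar and trim to the fixed lookback, so no History call is needed
            # at rebalance time.
            self.cache[s] = pd.concat([self.cache[s], row]).tail(self.lookback)
```

On the hyperparameters: if the models are scikit-learn estimators (an assumption), those knobs live in the constructor, e.g. KMeans(n_clusters=5, n_init=3, max_iter=100, tol=1e-3), which trades a little fit quality for much less time waiting on convergence.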
I added an elapsed-time tracker to the chart to make sure things are not blowing up there. The original algo had times between 0 and 20 seconds, averaging 10+ seconds. I also switched the RAM plotting to occur every day, just in case some kind of post-compute memory use was inflating the RAM chart.
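The elapsed-time tracker is just a timer wrapped around the heavy step, plotted to a custom chart. A minimal sketch below (the chart/series names, the weekly schedule, and the RunCompute placeholder are assumptions, not the exact code in the backtest):

```python
# Minimal sketch of the elapsed-time tracker (chart/series names and schedule assumed).
import time
from AlgorithmImports import *

class TimedRebalanceAlgorithm(QCAlgorithm):

    def Initialize(self):
        self.SetStartDate(2020, 1, 1)
        self.SetCash(100000)
        self.AddEquity("SPY", Resolution.Daily)
        # Rebalance once a week; the schedule details are placeholders.
        self.Schedule.On(self.DateRules.WeekStart("SPY"),
                         self.TimeRules.AfterMarketOpen("SPY", 30),
                         self.Rebalance)

    def Rebalance(self):
        start = time.time()
        self.RunCompute()  # hypothetical stand-in for the model training / signal step
        elapsed = time.time() - start
        # Plot elapsed seconds so runaway compute shows up on its own chart.
        self.Plot("Timing", "Rebalance Seconds", elapsed)

    def RunCompute(self):
        # Placeholder for the heavy work being timed.
        pass
```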