Popular Libraries

This page explains how to build, train, deploy, and store GPlearn models.

Import Libraries

Import the gplearn and joblib libraries.

from AlgorithmImports import *
from gplearn.genetic import SymbolicRegressor, SymbolicTransformer
import joblib

You need the joblib library to store models.

Create Subscriptions

In the initialize method, subscribe to some data so you can train the GPlearn model and make predictions.

self._symbol = self.add_equity("SPY", Resolution.DAILY).symbol

Build Models

In this example, build a genetic programming feature transformation model and a genetic programming regression prediction model using the following features and labels:

Data Category | Description
------------- | -----------
Features      | Daily percent change of the close price of the SPY over the last 5 days
Labels        | Daily percent return of the SPY over the next day

The following image shows the time difference between the features and labels:

Features and labels for training

Follow these steps to create a method to build the model:

  1. Declare a set of functions to use for feature engineering.

     function_set = ['add', 'sub', 'mul', 'div',
                     'sqrt', 'log', 'abs', 'neg', 'inv',
                     'max', 'min']

  2. Call the SymbolicTransformer constructor with the preceding set of functions and then save it as a class variable.

     self.gp_transformer = SymbolicTransformer(function_set=function_set)

  3. Call the SymbolicRegressor constructor to instantiate the regression model.

     self.model = SymbolicRegressor()
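SymbolicTransformer evolves formulas built from the primitives in function_set and exposes the best-scoring ones as new feature columns. The following numpy sketch is only conceptual: the two hand-written formulas stand in for programs a genetic search might discover, and the names X, f1, and f2 are illustrative, not part of gplearn.

```python
import numpy as np

# Conceptual sketch (not gplearn itself): each evolved program maps the
# raw feature columns to one new column via the allowed primitives.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Two hand-written candidate programs over the columns x0 and x1:
f1 = np.sqrt(np.abs(X[:, 0] * X[:, 1]))   # sqrt(abs(mul(x0, x1)))
f2 = X[:, 0] - X[:, 1]                     # sub(x0, x1)

# transform() stacks the winning programs as extra feature columns.
gp_features = np.column_stack([f1, f2])
print(gp_features.shape)  # (3, 2)
```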

Train Models

You can train the model at the beginning of your algorithm and you can periodically re-train it as the algorithm executes.

Warm Up Training Data

You need historical data to initially train the model at the start of your algorithm. To get the initial training data, in the initialize method, make a history request.

training_length = 252*2
self.training_data = RollingWindow[float](training_length)
history = self.history[TradeBar](self._symbol, training_length, Resolution.DAILY)
for trade_bar in history:
    self.training_data.add(trade_bar.close)
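RollingWindow iterates newest-first, which is why the training method reverses the window before computing returns. A rough stand-in using collections.deque (an assumption for illustration, not the LEAN implementation) shows that ordering:

```python
from collections import deque

# Stand-in for RollingWindow[float]: fixed capacity, newest element
# first when iterated. This is NOT LEAN's implementation, only a sketch.
class RollingWindowSketch:
    def __init__(self, size):
        self._items = deque(maxlen=size)

    def add(self, value):
        self._items.appendleft(value)  # newest goes to the front

    def __iter__(self):
        return iter(self._items)       # newest -> oldest

window = RollingWindowSketch(3)
for close in [100.0, 101.0, 102.0, 103.0]:
    window.add(close)  # oldest values fall off the back

print(list(window))        # [103.0, 102.0, 101.0]  newest first
print(list(window)[::-1])  # [101.0, 102.0, 103.0]  chronological order
```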

Define a Training Method

To train the model, define a method that fits the model with the training data.

def get_features_and_labels(self, n_steps=5):
    training_df = list(self.training_data)[::-1]
    daily_pct_change = ((np.roll(training_df, -1) - training_df) / training_df)[:-1]

    features = []
    labels = []
    for i in range(len(daily_pct_change)-n_steps):
        features.append(daily_pct_change[i:i+n_steps])
        labels.append(daily_pct_change[i+n_steps])
    features = np.array(features)
    labels = np.array(labels)

    return features, labels
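To see how the sliding window pairs each 5-day block of returns with the following day's return, here is a self-contained sketch of the same logic on synthetic prices (the helper name and data are illustrative, not part of the algorithm above):

```python
import numpy as np

def features_and_labels(closes, n_steps=5):
    closes = np.asarray(closes, dtype=float)
    # Daily percent change between consecutive closes.
    daily_pct_change = np.diff(closes) / closes[:-1]
    features, labels = [], []
    for i in range(len(daily_pct_change) - n_steps):
        features.append(daily_pct_change[i:i + n_steps])  # last 5 returns
        labels.append(daily_pct_change[i + n_steps])      # next-day return
    return np.array(features), np.array(labels)

closes = np.linspace(100, 110, 11)  # 11 synthetic closes -> 10 returns
X, y = features_and_labels(closes)
print(X.shape, y.shape)  # (5, 5) (5,)
```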

def my_training_method(self):
    features, labels = self.get_features_and_labels()

    # Feature engineering
    self.gp_transformer.fit(features, labels)
    gp_features = self.gp_transformer.transform(features)
    new_features = np.hstack((features, gp_features))

    # Fit the regression model with transformed and raw features.
    self.model.fit(new_features, labels)
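np.hstack joins the raw and engineered features column-wise, so the regressor trains on both sets. A small shape check, assuming 5 raw features and 10 transformer outputs (the 10 matches gplearn's default n_components for SymbolicTransformer, but treat that count as an assumption):

```python
import numpy as np

# Assumed shapes for illustration: 100 samples, 5 raw features, and a
# transformer that emits 10 engineered feature columns.
features = np.zeros((100, 5))
gp_features = np.zeros((100, 10))

# Column-wise concatenation: every row keeps its raw and engineered values.
new_features = np.hstack((features, gp_features))
print(new_features.shape)  # (100, 15)
```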

Set Training Schedule

To train the model at the beginning of your algorithm, in the initialize method, call the train method.

self.train(self.my_training_method)

To periodically re-train the model as your algorithm executes, in the initialize method, call the train method as a Scheduled Event.

# Train the model every Sunday at 8:00 AM
self.train(self.date_rules.every(DayOfWeek.SUNDAY), self.time_rules.at(8, 0), self.my_training_method)

Update Training Data

To update the training data as the algorithm executes, in the on_data method, add the current close price to the RollingWindow that holds the training data.

def on_data(self, slice: Slice) -> None:
    if self._symbol in slice.bars:
        self.training_data.add(slice.bars[self._symbol].close)

Predict Labels

To predict the labels of new data, in the on_data method, get the most recent set of features and then call the predict method.

features, _ = self.get_features_and_labels()

# Get transformed features
gp_features = self.gp_transformer.transform(features)
new_features = np.hstack((features, gp_features))

# Get next prediction
prediction = self.model.predict(new_features)
prediction = float(prediction.flatten()[-1])
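predict returns one value per feature row, so flatten()[-1] keeps only the prediction for the most recent row. A minimal illustration with synthetic predictions (the values are made up):

```python
import numpy as np

# Synthetic stand-in for self.model.predict(new_features): one value per row.
prediction = np.array([0.01, -0.02, 0.03])

# The last row corresponds to the most recent features, so take its value.
latest = float(prediction.flatten()[-1])
print(latest)  # 0.03
```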

You can use the label prediction to place orders.

if prediction > 0:
    self.set_holdings(self._symbol, 1)
elif prediction < 0:            
    self.set_holdings(self._symbol, -1)

Save Models

Follow these steps to save GPlearn models into the Object Store:

  1. Set the key names you want to store the models under in the Object Store.

     transformer_model_key = "transformer"
     regressor_model_key = "regressor"

  2. Call the get_file_path method with the keys.

     transformer_file_name = self.object_store.get_file_path(transformer_model_key)
     regressor_file_name = self.object_store.get_file_path(regressor_model_key)

     This method returns the file paths where the models will be stored.

  3. Call the dump method with the file paths.

     joblib.dump(self.gp_transformer, transformer_file_name)
     joblib.dump(self.model, regressor_file_name)

     If you dump the models using the joblib module before you save them, you don't need to retrain the models.
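joblib.dump and joblib.load persist any picklable Python object to and from a file path. The round-trip can be sketched with the stdlib pickle module (a stand-in used here so the example has no gplearn or joblib dependency; the stored dict is a placeholder, not a real model):

```python
import os
import pickle
import tempfile

# Placeholder for a fitted model object; any picklable object works.
model = {"function_set": ["add", "sub"], "coef": [0.1, -0.2]}

path = os.path.join(tempfile.mkdtemp(), "regressor")
with open(path, "wb") as f:
    pickle.dump(model, f)      # analogous to joblib.dump(self.model, file_name)

with open(path, "rb") as f:
    restored = pickle.load(f)  # analogous to joblib.load(file_name)

print(restored == model)  # True
```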

Load Models

You can load and trade with pre-trained GPlearn models that you saved in the Object Store. To load a GPlearn model from the Object Store, in the initialize method, get the file path to the saved model and then call the load method.

def initialize(self) -> None:
    if self.object_store.contains_key(transformer_model_key) and self.object_store.contains_key(regressor_model_key):
        transformer_file_name = self.object_store.get_file_path(transformer_model_key)
        regressor_file_name = self.object_store.get_file_path(regressor_model_key)
        self.gp_transformer = joblib.load(transformer_file_name)
        self.model = joblib.load(regressor_file_name)

The contains_key method returns a boolean that indicates whether transformer_model_key and regressor_model_key are in the Object Store. If the Object Store doesn't contain the keys, save the models with those keys before you proceed.
