Machine Learning

Gplearn

Introduction

This page introduces how to build, train, test, and store GPlearn models.

Import Libraries

Import the gplearn library and the other libraries you need.

from gplearn.genetic import SymbolicRegressor, SymbolicTransformer
from sklearn.model_selection import train_test_split
from datetime import datetime
import joblib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

You need the sklearn library to split the data, the numpy and pandas libraries to manipulate it, the matplotlib library to plot results, and the joblib library to store models.

Get Historical Data

Get some historical market data to train and test the model. For example, to get data for the SPY ETF during 2020 and 2021, run:

qb = QuantBook()
symbol = qb.AddEquity("SPY", Resolution.Daily).Symbol
history = qb.History(symbol, datetime(2020, 1, 1), datetime(2022, 1, 1)).loc[symbol]

Prepare Data

With the historical data in hand, manipulate it into the features and labels that train and test the model. In this example, use the following features and labels:

Features: Daily percent change of the closing price of the SPY over the last 5 days
Labels: Daily percent return of the SPY over the next day

The following image shows the time difference between the features and labels:

Follow these steps to prepare the data:

  1. Call the pct_change method and drop the first row.

     daily_returns = history['close'].pct_change()[1:]

  2. Loop through the daily_returns Series and collect the features and labels.

     n_steps = 5
     features = []
     labels = []
     for i in range(len(daily_returns)-n_steps):
         features.append(daily_returns.iloc[i:i+n_steps].values)
         labels.append(daily_returns.iloc[i+n_steps])

  3. Convert the lists of features and labels into numpy arrays.

     X = np.array(features)
     y = np.array(labels)

  4. Split the data into training and testing periods.

     X_train, X_test, y_train, y_test = train_test_split(X, y)
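Note that train_test_split shuffles the rows by default, which lets future observations leak into the training set of a time series. You can pass shuffle=False to keep the order, or split by index yourself. A minimal sketch of a chronological split, using small stand-in arrays in place of the X and y built above:

```python
import numpy as np

# Stand-in data for the X and y arrays built above: 10 samples, 2 features.
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Chronological 75/25 split: no shuffling, so every test sample is
# strictly later in time than every training sample.
split = int(len(X) * 0.75)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

print(len(X_train), len(X_test))  # 7 3
```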

Train Models

With the data prepared, build and train the model. In this example, create a SymbolicTransformer to generate new non-linear features and then build a SymbolicRegressor model. Follow these steps to create the model:

  1. Declare a set of functions to use for feature engineering.

     function_set = ['add', 'sub', 'mul', 'div',
                     'sqrt', 'log', 'abs', 'neg', 'inv',
                     'max', 'min']

  2. Call the SymbolicTransformer constructor with the preceding set of functions.

     gp_transformer = SymbolicTransformer(function_set=function_set,
                                          random_state=0,
                                          verbose=1)

  3. Call the fit method with the training features and labels.

     gp_transformer.fit(X_train, y_train)

     This method displays the training progress of each generation.

  4. Call the transform method with the original features.

     gp_features_train = gp_transformer.transform(X_train)

  5. Call the hstack method with the original features and the transformed features.

     new_X_train = np.hstack((X_train, gp_features_train))

  6. Call the SymbolicRegressor constructor.

     gp_regressor = SymbolicRegressor(random_state=0, verbose=1)

  7. Call the fit method with the engineered features and the original labels.

     gp_regressor.fit(new_X_train, y_train)
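To see how the engineered feature matrix grows, here is a small sketch with stand-in arrays. It assumes the transformer's default n_components of 10 new features:

```python
import numpy as np

# Stand-in arrays: 100 samples with 5 original features, plus 10
# transformer outputs (SymbolicTransformer's default n_components is 10).
X_train = np.zeros((100, 5))
gp_features_train = np.zeros((100, 10))

# hstack concatenates along the feature axis, yielding 5 + 10 = 15 columns.
new_X_train = np.hstack((X_train, gp_features_train))
print(new_X_train.shape)  # (100, 15)
```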

Test Models

With the model trained, test its performance on the out-of-sample data. Follow these steps to test the model:

  1. Feature engineer the testing set data.

     gp_features_test = gp_transformer.transform(X_test)
     new_X_test = np.hstack((X_test, gp_features_test))

  2. Call the predict method with the engineered testing set data.

     y_predict = gp_regressor.predict(new_X_test)

  3. Plot the actual and predicted labels of the testing period.

     df = pd.DataFrame({'Real': y_test.flatten(), 'Predicted': y_predict.flatten()})
     df.plot(title='Model Performance: Predicted vs. Actual Daily Returns', figsize=(15, 10))
     plt.show()

  4. Calculate the R-squared value.

     r2 = gp_regressor.score(new_X_test, y_test)
     print(f"The explained variance of the GP model: {r2*100:.2f}%")
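Beyond R-squared, a simple complementary check for return forecasts is directional accuracy: how often the predicted sign matches the actual sign. A sketch with hypothetical values standing in for y_test and y_predict:

```python
import numpy as np

# Hypothetical actual vs. predicted daily returns.
y_test = np.array([0.01, -0.02, 0.005, -0.01])
y_predict = np.array([0.02, -0.01, -0.003, -0.02])

# Fraction of days where the model predicted the correct sign of the return.
hit_rate = np.mean(np.sign(y_predict) == np.sign(y_test))
print(f"Directional accuracy: {hit_rate:.0%}")  # Directional accuracy: 75%
```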

Store Models

You can save and load GPlearn models using the ObjectStore.

Save Models

Follow these steps to save models in the ObjectStore:

  1. Set the key names of the models to store in the ObjectStore.

     transformer_key = "transformer"
     regressor_key = "regressor"

  2. Call the GetFilePath method with the key names.

     transformer_file = qb.ObjectStore.GetFilePath(transformer_key)
     regressor_file = qb.ObjectStore.GetFilePath(regressor_key)

     This method returns the file paths where the models will be stored.

  3. Call the dump method with the models and file paths.

     joblib.dump(gp_transformer, transformer_file)
     joblib.dump(gp_regressor, regressor_file)

     If you dump the models with the joblib module before you save them, you don't need to retrain them later.

  4. Call the Save method with the key names.

     qb.ObjectStore.Save(transformer_key)
     qb.ObjectStore.Save(regressor_key)

Load Models

You must save a model to the ObjectStore before you can load it. If you saved a model, follow these steps to load it:

  1. Call the ContainsKey method with each key name.

     qb.ObjectStore.ContainsKey(transformer_key)
     qb.ObjectStore.ContainsKey(regressor_key)

     This method returns a boolean that indicates whether the key is in the ObjectStore. If the ObjectStore doesn't contain a key, save the model under that key before you proceed.

  2. Call the GetFilePath method with the key names.

     transformer_file = qb.ObjectStore.GetFilePath(transformer_key)
     regressor_file = qb.ObjectStore.GetFilePath(regressor_key)

     This method returns the path where each model is stored.

  3. Call the load method with the file paths.

     loaded_transformer = joblib.load(transformer_file)
     loaded_regressor = joblib.load(regressor_file)

     This method returns the saved models.
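Outside of the ObjectStore, the joblib round trip itself works the same way for any scikit-learn-compatible estimator, including gplearn models. A self-contained sketch using a LinearRegression as a stand-in and a temporary file path:

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in estimator; gplearn's SymbolicRegressor serializes the same way.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 5.0])
model = LinearRegression().fit(X, y)

# Dump the fitted model to a file path and load it back.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
joblib.dump(model, path)
loaded = joblib.load(path)

# The loaded model reproduces the original predictions.
match = np.allclose(loaded.predict(X), model.predict(X))
print(match)  # True
```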
