# Popular Libraries

## Aesara

### Introduction

This page explains how to build, train, deploy, and store Aesara models.

### Import Libraries

Import the aesara and sklearn libraries.

```python
from AlgorithmImports import *
import aesara
import aesara.tensor as at
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import joblib
```

You need the joblib library to store models.

### Create Subscriptions

In the Initialize method, subscribe to some data so you can train the Aesara model and make predictions.

```python
self.symbol = self.AddEquity("SPY", Resolution.Daily).Symbol
```

### Build Models

In this example, build a logistic regression prediction model that uses the following features and labels:

| Data Category | Description |
| --- | --- |
| Features | Normalized daily close price of the SPY over the last 5 days |
| Labels | Return direction of the SPY over the next day |

The following image shows the time difference between the features and labels:
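To make the alignment concrete, here is a small standalone sketch with made-up prices (not QuantConnect code) showing how a 5-day feature window lines up with the next-day return direction:

```python
# Hypothetical close prices (made-up numbers, for illustration only)
close = [100.0, 101.0, 99.5, 102.0, 103.0, 102.5, 104.0]

n_steps = 5
X, y = [], []
for t in range(n_steps - 1, len(close) - 1):
    X.append(close[t - n_steps + 1 : t + 1])       # closes for days t-4 .. t (the features)
    y.append(1 if close[t + 1] > close[t] else 0)  # next-day return direction (the label)
```

Each feature row ends on day t, and its label looks one day past the window, at day t+1.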

Follow these steps to build the model:

1. Initialize the variables.

    ```python
    # Declare Aesara symbolic variables
    x = at.dmatrix("x")
    y = at.dvector("y")

    # Initialize the weight vector w randomly, using shared so the model
    # coefficients keep their values between training iterations
    rng = np.random.default_rng(100)
    w = aesara.shared(rng.standard_normal(5), name="w")

    # Initialize the bias term
    b = aesara.shared(0., name="b")
    ```

2. Construct the model graph.

    ```python
    # Construct the Aesara expression graph
    p_1 = 1 / (1 + at.exp(-at.dot(x, w) - b))            # Logistic transformation
    prediction = p_1 > 0.5                               # The prediction thresholded at 0.5
    xent = -y * at.log(p_1) - (1 - y) * at.log(1 - p_1)  # Cross-entropy loss
    cost = xent.mean() + 0.01 * (w ** 2).sum()           # The cost to minimize: mean cross-entropy plus L2 regularization
    gw, gb = at.grad(cost, [w, b])                       # Compute the gradient of the cost
    ```

3. Compile the model.

    ```python
    self.train = aesara.function(
        inputs=[x, y],
        outputs=[prediction, xent],
        updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)))
    self.predict = aesara.function(inputs=[x], outputs=prediction)
    ```
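The compiled train function performs one step of gradient descent on regularized cross-entropy. For intuition, here is an equivalent pure-NumPy sketch of that step, run on hypothetical random data (this is an illustration, not the QuantConnect or Aesara API):

```python
import numpy as np

rng = np.random.default_rng(100)
w = rng.standard_normal(5)  # weights, mirroring the shared variable above
b = 0.0                     # bias
lr = 0.1                    # learning rate used in the updates tuple

def cost(x, y, w, b):
    p = 1.0 / (1.0 + np.exp(-(x @ w) - b))           # logistic transformation
    xent = -y * np.log(p) - (1 - y) * np.log(1 - p)  # cross-entropy
    return xent.mean() + 0.01 * (w ** 2).sum()       # plus the L2 penalty

def train_step(x, y, w, b):
    p = 1.0 / (1.0 + np.exp(-(x @ w) - b))
    grad_w = x.T @ (p - y) / len(y) + 0.02 * w       # d(cost)/dw
    grad_b = np.mean(p - y)                          # d(cost)/db
    return w - lr * grad_w, b - lr * grad_b          # gradient-descent updates

x = rng.standard_normal((8, 5))          # 8 samples, 5 features
y = (rng.random(8) > 0.5).astype(float)  # random binary labels
initial_cost = cost(x, y, w, b)
for _ in range(100):
    w, b = train_step(x, y, w, b)
```

Because the objective is convex, each step with this learning rate drives the cost down, which is exactly what the Aesara updates tuple does symbolically.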

### Train Models

You can train the model at the beginning of your algorithm and periodically re-train it as the algorithm executes.

#### Warm Up Training Data

You need historical data to initially train the model at the start of your algorithm. To get the initial training data, in the Initialize method, make a history request.

```python
training_length = 252*2
self.training_data = RollingWindow[TradeBar](training_length)
history = self.History[TradeBar](self.symbol, training_length, Resolution.Daily)
for trade_bar in history:
    self.training_data.Add(trade_bar)
```

#### Define a Training Method

To train the model, define a method that fits the model with the training data.

```python
def get_features_and_labels(self, n_steps=5):
    # Convert the TradeBars in the RollingWindow to a Series of close prices, oldest first
    training_df = pd.Series([bar.Close for bar in self.training_data][::-1])

    features = []
    for i in range(1, n_steps + 1):
        close = training_df.shift(i)[n_steps:-1]
        close.name = f"close-{i}"
        features.append(close)
    features = pd.concat(features, axis=1)
    # Normalize each 5-day window
    features = MinMaxScaler().fit_transform(features.T).T[4:]

    Y = training_df.pct_change().shift(-1)[n_steps*2-1:-1].reset_index(drop=True)
    labels = np.array([1 if y > 0 else 0 for y in Y])   # Binary class: 1 = up, 0 = down

    return features, labels

def my_training_method(self):
    features, labels = self.get_features_and_labels()
    self.train(features, labels)
```
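Note the transpose trick in the normalization: MinMaxScaler scales column-wise, so transposing before and after makes it scale each 5-day window independently. A small sketch with hypothetical prices:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two hypothetical 5-day windows of close prices
features = np.array([[100.0, 101.0, 99.5, 102.0, 103.0],
                     [101.0, 99.5, 102.0, 103.0, 102.5]])

# fit_transform scales each COLUMN to [0, 1]; transposing before and
# after makes it scale each ROW (one 5-day window) independently
scaled = MinMaxScaler().fit_transform(features.T).T
```

After the transform, every row spans exactly [0, 1], regardless of the absolute price level.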

#### Set Training Schedule

To train the model at the beginning of your algorithm, in the Initialize method, call the Train method.

```python
self.Train(self.my_training_method)
```

To periodically re-train the model as your algorithm executes, in the Initialize method, call the Train method as a Scheduled Event.

```python
# Train the model every Sunday at 8:00 AM
self.Train(self.DateRules.Every(DayOfWeek.Sunday), self.TimeRules.At(8, 0), self.my_training_method)
```

#### Update Training Data

To update the training data as the algorithm executes, in the OnData method, add the current TradeBar to the RollingWindow that holds the training data.

```python
def OnData(self, slice: Slice) -> None:
    if self.symbol in slice.Bars:
        self.training_data.Add(slice.Bars[self.symbol])
```
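A RollingWindow keeps only the most recent N items, evicting the oldest as new ones arrive. Conceptually it behaves like a bounded deque (this is an analogy in plain Python, not the QuantConnect type):

```python
from collections import deque

window = deque(maxlen=3)   # like RollingWindow[TradeBar](3)
for price in [1, 2, 3, 4, 5]:
    window.append(price)   # once full, adding evicts the oldest item
```

One difference to keep in mind: a RollingWindow indexes newest-first (index 0 is the latest item), whereas this deque iterates oldest-first.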

### Predict Labels

To predict the labels of new data, in the OnData method, get the most recent set of features and then call the predict method.

```python
features, _ = self.get_features_and_labels()
prediction = self.predict(features[-1].reshape(1, -1))
prediction = float(prediction)
```

You can use the label prediction to place orders.

```python
if prediction == 1:
    self.SetHoldings(self.symbol, 1)
elif prediction == 0:
    self.SetHoldings(self.symbol, -1)
```

### Save Models

Follow these steps to save Aesara models into the ObjectStore:

1. Set the key name you want to store the model under in the ObjectStore.

    ```python
    model_key = "model"
    ```

2. Call the GetFilePath method with the key.

    ```python
    file_name = self.ObjectStore.GetFilePath(model_key)
    ```

    This method returns the file path where the model will be stored.

3. Call the dump method with the model and the file path.

    ```python
    joblib.dump(self.predict, file_name)
    ```

    If you dump the model using the joblib module before you save the model, you don't need to retrain the model when you load it.

4. Call the Save method with the key.

    ```python
    self.ObjectStore.Save(model_key)
    ```

You can load and trade with pre-trained Aesara models that you saved in the ObjectStore. To load an Aesara model from the ObjectStore, in the Initialize method, get the file path to the saved model and then call the load method.

```python
def Initialize(self) -> None:
    if self.ObjectStore.ContainsKey(model_key):
        file_name = self.ObjectStore.GetFilePath(model_key)
        self.model = joblib.load(file_name)
```

The ContainsKey method returns a boolean that indicates whether the model_key is in the ObjectStore. If the ObjectStore does not contain the model_key, save the model using the model_key before you proceed.
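Outside of QuantConnect, the joblib round trip itself is straightforward. This sketch uses a plain dictionary as a stand-in for the compiled Aesara predict function and a temporary directory in place of the ObjectStore file path:

```python
import os
import tempfile
import joblib

model = {"w": [0.1, 0.2], "b": 0.0}  # stand-in for the compiled predict function

with tempfile.TemporaryDirectory() as tmp:
    file_name = os.path.join(tmp, "model")  # plays the role of ObjectStore.GetFilePath
    joblib.dump(model, file_name)           # serialize to disk
    restored = joblib.load(file_name)       # deserialize
```

The object that comes back from load is equal to the one that was dumped, which is why a dumped model does not need to be retrained.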

### Clone Example Algorithm

You can also see our Videos or get in touch with us via Discord.