Popular Libraries
Stable Baselines
Create Subscriptions
In the Initialize method, subscribe to some data so you can train the stable_baselines model and make predictions.
self.symbol = self.AddEquity("SPY", Resolution.Daily).Symbol
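The later snippets use gym, numpy, and the DQN class without repeating their imports. A minimal sketch of the imports at the top of the algorithm file might look like the following (from AlgorithmImports import * is the standard LEAN import; the DQN import path assumes the stable_baselines package):
from AlgorithmImports import *  # LEAN types such as Resolution, RollingWindow, TradeBar

import gym
import numpy as np
from stable_baselines import DQN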
Build Models
In this example, create a gym environment to initialize the training environment, agent, and reward. Then, create a reinforcement learning model with a single-asset deep Q-network (DQN) learning algorithm using the following observations and rewards:
Data Category | Description |
---|---|
Observations | The 5-day open, high, low, close, and volume (OHLCV) of the SPY |
Rewards | Maximum portfolio return |
Follow these steps to create a method to build the model:
- Create a custom gym environment class.
- Get the processed training data.
- Initialize the environment with the observations and rewards.
- Call the DQN constructor with the learning policy and the gym environment.
In this example, create a custom environment that uses the previous 5 days of OHLCV data as the observation and the cumulative portfolio return as the reward.
class TradingEnv(gym.Env):
    FLAT = 0
    LONG = 1
    SHORT = 2

    def __init__(self, ohlcv, ret):
        super(TradingEnv, self).__init__()

        self.ohlcv = ohlcv
        self.ret = ret
        self.trading_cost = 0.01
        self.reward = 1
        # The number of steps the training has taken, starts at 5 since we're using the previous 5 data points for the observation.
        self.current_step = 5
        # The last action
        self.last_action = 0

        # Define action and observation space
        # Example when using discrete actions, we have 3: LONG, SHORT and FLAT.
        n_actions = 3
        self.action_space = gym.spaces.Discrete(n_actions)
        # Each observation is a stack of the 5 most recent 5-day OHLCV windows, shape (5, 5, 5)
        self.observation_space = gym.spaces.Box(low=-2, high=2, shape=(5, 5, 5), dtype=np.float64)

    def reset(self):
        # Reset the number of steps the training has taken
        self.current_step = 5
        # Reset the last action
        self.last_action = 0
        # must return np.array type
        return self.ohlcv[self.current_step-5:self.current_step].astype(np.float32)

    def step(self, action):
        if action == self.LONG:
            self.reward *= 1 + self.ret[self.current_step] - (self.trading_cost if self.last_action != action else 0)
        elif action == self.SHORT:
            self.reward *= 1 + -1 * self.ret[self.current_step] - (self.trading_cost if self.last_action != action else 0)
        elif action == self.FLAT:
            self.reward *= 1 - (self.trading_cost if self.last_action != action else 0)
        else:
            raise ValueError("Received invalid action={} which is not part of the action space".format(action))

        self.last_action = action
        self.current_step += 1

        # Have we iterated through all data points?
        done = (self.current_step == self.ret.shape[0]-1)

        # Reward is the cumulative return
        return self.ohlcv[self.current_step-5:self.current_step].astype(np.float32), self.reward, done, {}
obs, rewards = self.get_observations_and_rewards()
self.env = TradingEnv(obs, rewards)
self.model = DQN("MlpPolicy", self.env)
Train Models
You can train the model at the beginning of your algorithm and periodically re-train it as the algorithm executes.
Warm Up Training Data
You need historical data to initially train the model at the start of your algorithm. To get the initial training data, in the Initialize method, make a history request.
training_length = 252*2
self.training_data = RollingWindow[TradeBar](training_length)
history = self.History[TradeBar](self.symbol, training_length, Resolution.Daily)
for trade_bar in history:
    self.training_data.Add(trade_bar)
Define a Training Method
To train the model, define a method that fits the model with the training data.
def get_observations_and_rewards(self, n_step=5):
    training_df = self.PandasConverter.GetDataFrame[TradeBar](list(self.training_data)[::-1])
    daily_pct_change = training_df['close'].pct_change().dropna()

    obs = []
    rewards = []
    for i in range(len(daily_pct_change)-n_step):
        obs.append(training_df.iloc[i:i+n_step].values)
        rewards.append(float(daily_pct_change.iloc[i+n_step]))
    obs = np.array(obs)
    rewards = np.array(rewards)

    return obs, rewards

def my_training_method(self):
    obs, rewards = self.get_observations_and_rewards()
    self.env = TradingEnv(obs, rewards)
    self.model = DQN("MlpPolicy", self.env)
    self.model.learn(total_timesteps=500)
Set Training Schedule
To train the model at the beginning of your algorithm, in the Initialize method, call the Train method.
self.Train(self.my_training_method)
To periodically re-train the model as your algorithm executes, in the Initialize method, call the Train method as a Scheduled Event.
# Train the model every Sunday at 8:00 AM
self.Train(self.DateRules.Every(DayOfWeek.Sunday), self.TimeRules.At(8, 0), self.my_training_method)
Update Training Data
To update the training data as the algorithm executes, in the OnData method, add the current TradeBar to the RollingWindow that holds the training data.
def OnData(self, slice: Slice) -> None:
    if self.symbol in slice.Bars:
        self.training_data.Add(slice.Bars[self.symbol])
Predict Labels
To predict the labels of new data, in the OnData method, get the most recent set of features and then call the predict method.
features, _ = self.get_observations_and_rewards()
action, _ = self.model.predict(features[-5:], deterministic=True)
_, _, _, _ = self.env.step(action)
You can use the label prediction to place orders.
if action == 0:
    self.Liquidate(self.symbol)
elif action == 1:
    self.SetHoldings(self.symbol, 1)
elif action == 2:
    self.SetHoldings(self.symbol, -1)
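Putting these pieces together, the OnData method might look like the following sketch (it assumes the training-data update from the previous section and that self.model and self.env were already built by my_training_method):
def OnData(self, slice: Slice) -> None:
    if self.symbol not in slice.Bars:
        return

    # Keep the rolling training window current
    self.training_data.Add(slice.Bars[self.symbol])

    # Build the latest observation and select an action with the trained model
    features, _ = self.get_observations_and_rewards()
    action, _ = self.model.predict(features[-5:], deterministic=True)
    _, _, _, _ = self.env.step(action)

    # Translate the action into orders: 0 = flat, 1 = long, 2 = short
    if action == 0:
        self.Liquidate(self.symbol)
    elif action == 1:
        self.SetHoldings(self.symbol, 1)
    elif action == 2:
        self.SetHoldings(self.symbol, -1)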
Save Models
Follow these steps to save stable_baselines models into the ObjectStore:
- Set the key name of the model to be stored in the ObjectStore.
- Call the GetFilePath method with the key.
- Call the save method with the file path.
model_key = "model"
file_name = self.ObjectStore.GetFilePath(model_key)
This method returns the file path where the model will be stored.
self.model.save(file_name)
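For example, you might save the trained model once the backtest completes. A minimal sketch, assuming the model_key defined above:
def OnEndOfAlgorithm(self) -> None:
    # Persist the trained model so a later run can load it from the ObjectStore
    model_key = "model"
    file_name = self.ObjectStore.GetFilePath(model_key)
    self.model.save(file_name)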
Load Models
You can load and trade with pre-trained stable_baselines models that you saved in the ObjectStore. To load a stable_baselines model from the ObjectStore, in the Initialize method, get the file path to the saved model and then call the load method of the DQN class.
def Initialize(self) -> None:
    if self.ObjectStore.ContainsKey(model_key):
        file_name = self.ObjectStore.GetFilePath(model_key)
        self.model = DQN.load(file_name)
The ContainsKey method returns a boolean that represents whether the model_key is in the ObjectStore. If the ObjectStore does not contain the model_key, save the model using the model_key before you proceed.