I created an LSTM (RNN) neural network with supervised learning for stock data prediction. The question is: why does it predict wrong on its own training data? (Note: reproducible example below.)
I created a simple model to predict the next 5 days of stock prices:
```python
model = Sequential()
model.add(LSTM(32, activation='sigmoid', input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer='adam', loss='mse')

es = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=64, epochs=25,
          validation_data=(x_test, y_test), callbacks=[es])
```
The correct results are in `y_test` (5 values), so while training the model looks back at the previous 90 days and, with `patience=3`, restores the weights from the best result (`val_loss=0.0030`):
```
Train on 396 samples, validate on 1 samples
Epoch 1/25
396/396 [==============================] - 1s 2ms/step - loss: 0.1322 - val_loss: 0.0299
Epoch 2/25
396/396 [==============================] - 0s 402us/step - loss: 0.0478 - val_loss: 0.0129
Epoch 3/25
396/396 [==============================] - 0s 397us/step - loss: 0.0385 - val_loss: 0.0178
Epoch 4/25
396/396 [==============================] - 0s 399us/step - loss: 0.0398 - val_loss: 0.0078
Epoch 5/25
396/396 [==============================] - 0s 391us/step - loss: 0.0343 - val_loss: 0.0030
Epoch 6/25
396/396 [==============================] - 0s 391us/step - loss: 0.0318 - val_loss: 0.0047
Epoch 7/25
396/396 [==============================] - 0s 389us/step - loss: 0.0308 - val_loss: 0.0043
Epoch 8/25
396/396 [==============================] - 0s 393us/step - loss: 0.0292 - val_loss: 0.0056
```
Great prediction result, isn't it?
That's because the algorithm restored the best weights from epoch 5. OK, now let's save this model to an `.h5` file, move back 10 days, and predict the last 5 days (in the first example we created the model and validated on April 17–23, including the weekend days off; now let's test on April 2–8). Result:
It shows a completely wrong direction. As we can see, that's because the model was trained, and its best epoch 5 was picked, against the validation set of April 17–23, not April 2–8. If I try training more and play with which epoch to choose, whatever I do, there are always many time intervals in the past that are predicted wrong.
Why does the model show wrong results on its own trained data? I trained on this data, so it should remember how to predict on this set, yet it predicts wrong. I also tried the following:
- Using large data sets with 50k+ rows and 20 years of stock prices, adding more or fewer features
- Creating different types of models, e.g. adding more hidden layers, different batch sizes, different layer activations, dropouts, batch normalization
- Creating a custom EarlyStopping callback that takes the average val_loss over many validation data sets and chooses the best result
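The patience/restore logic that both the built-in and custom EarlyStopping callbacks implement can be traced framework-free, using the `val_loss` values from the training log above:

```python
# val_loss per epoch, copied from the training log above
val_losses = [0.0299, 0.0129, 0.0178, 0.0078, 0.0030, 0.0047, 0.0043, 0.0056]
patience = 3

best_epoch, best_loss, wait = 0, float('inf'), 0
stopped_at = len(val_losses)
for epoch, loss in enumerate(val_losses, start=1):
    if loss < best_loss:        # improvement: snapshot the "best weights" here
        best_loss, best_epoch, wait = loss, epoch, 0
    else:
        wait += 1
        if wait >= patience:    # no improvement for `patience` epochs: stop
            stopped_at = epoch
            break

print(best_epoch, stopped_at)   # 5 8 — matches the log: stop after epoch 8, restore epoch 5
```

Note that this selection is driven entirely by the single validation window passed to `fit()`, which is exactly why the chosen weights favor one date range over another.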
Maybe I'm missing something? What can I improve?
Here is a very simple and reproducible example. `yfinance` downloads the S&P 500 stock data.
"""python 3.7.7tensorflow 2.1.0keras 2.3.1"""import numpy as npimport pandas as pdfrom keras.callbacks import EarlyStopping, Callbackfrom keras.models import Model, Sequential, load_modelfrom keras.layers import Dense, Dropout, LSTM, BatchNormalizationfrom sklearn.preprocessing import MinMaxScalerimport plotly.graph_objects as goimport yfinance as yfnp.random.seed(4)num_prediction = 5look_back = 90new_s_h5 = True # 当你创建模型并想在其他过去日期上测试时,将其更改为Falsedf = yf.download(tickers="^GSPC", start='2018-05-06', end='2020-04-24', interval="1d")data = df.filter(['Close', 'High', 'Low', 'Volume'])# 删除最后N天以验证保存的模型在过去的表现df.drop(df.tail(0).index, inplace=True)print(df)class EarlyStoppingCust(Callback): def __init__(self, patience=0, verbose=0, validation_sets=None, restore_best_weights=False): super(EarlyStoppingCust, self).__init__() self.patience = patience self.verbose = verbose self.wait = 0 self.stopped_epoch = 0 self.restore_best_weights = restore_best_weights self.best_weights = None self.validation_sets = validation_sets def on_train_begin(self, logs=None): self.wait = 0 self.stopped_epoch = 0 self.best_avg_loss = (np.Inf, 0) def on_epoch_end(self, epoch, logs=None): loss_ = 0 for i, validation_set in enumerate(self.validation_sets): predicted = self.model.predict(validation_set[0]) loss = self.model.evaluate(validation_set[0], validation_set[1], verbose = 0) loss_ += loss if self.verbose > 0: print('val' + str(i + 1) + '_loss: %.5f' % loss) avg_loss = loss_ / len(self.validation_sets) print('avg_loss: %.5f' % avg_loss) if self.best_avg_loss[0] > avg_loss: self.best_avg_loss = (avg_loss, epoch + 1) self.wait = 0 if self.restore_best_weights: print('new best epoch = %d' % (epoch + 1)) self.best_weights = self.model.get_weights() else: self.wait += 1 if self.wait >= self.patience or self.params['epochs'] == epoch + 1: self.stopped_epoch = epoch self.model.stop_training = True if self.restore_best_weights: if self.verbose > 0: print('Restoring model weights from the end of the 
best epoch') self.model.set_weights(self.best_weights) def on_train_end(self, logs=None): print('best_avg_loss: %.5f (#%d)' % (self.best_avg_loss[0], self.best_avg_loss[1]))def multivariate_data(dataset, target, start_index, end_index, history_size, target_size, step, single_step=False): data = [] labels = [] start_index = start_index + history_size if end_index is None: end_index = len(dataset) - target_size for i in range(start_index, end_index): indices = range(i-history_size, i, step) data.append(dataset[indices]) if single_step: labels.append(target[i+target_size]) else: labels.append(target[i:i+target_size]) return np.array(data), np.array(labels)def transform_predicted(pr): pr = pr.reshape(pr.shape[1], -1) z = np.zeros((pr.shape[0], x_train.shape[2] - 1), dtype=pr.dtype) pr = np.append(pr, z, axis=1) pr = scaler.inverse_transform(pr) pr = pr[:, 0] return prstep = 1# 创建具有回顾的数据集scaler = MinMaxScaler()df_normalized = scaler.fit_transform(df.values)dataset = df_normalized[:-num_prediction]x_train, y_train = multivariate_data(dataset, dataset[:, 0], 0,len(dataset) - num_prediction + 1, look_back, num_prediction, step)indices = range(len(dataset)-look_back, len(dataset), step)x_test = np.array(dataset[indices])x_test = np.expand_dims(x_test, axis=0)y_test = np.expand_dims(df_normalized[-num_prediction:, 0], axis=0)# 创建过去的数据集以使用EarlyStoppingCust进行验证number_validates = 50step_past = 5validation_sets = [(x_test, y_test)]for i in range(1, number_validates * step_past + 1, step_past): indices = range(len(dataset)-look_back-i, len(dataset)-i, step) x_t = np.array(dataset[indices]) x_t = np.expand_dims(x_t, axis=0) y_t = np.expand_dims(df_normalized[-num_prediction-i:len(df_normalized)-i, 0], axis=0) validation_sets.append((x_t, y_t))if new_s_h5: model = Sequential() model.add(LSTM(32, return_sequences=False, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) # model.add(Dropout(0.2)) # model.add(BatchNormalization()) # model.add(LSTM(units = 16)) 
model.add(Dense(y_train.shape[1])) model.compile(optimizer = 'adam', loss = 'mse') # EarlyStoppingCust是自定义回调,用于验证每个validation_sets并获取平均值 # 它采用最佳的"best_avg"值的epoch # es = EarlyStoppingCust(patience = 3, restore_best_weights = True, validation_sets = validation_sets, verbose = 1) # 或者这里有内置的EarlyStopping的keras扩展,但它只验证你通过fit()传递的一个集合 es = EarlyStopping(monitor = 'val_loss', patience = 3, restore_best_weights = True) model.fit(x_train, y_train, batch_size = 64, epochs = 25, shuffle = True, validation_data = (x_test, y_test), callbacks = [es]) model.save('s.h5')else: model = load_model('s.h5')predicted = model.predict(x_test)predicted = transform_predicted(predicted)print('predicted', predicted)print('real', df.iloc[-num_prediction:, 0].values)print('val_loss: %.5f' % (model.evaluate(x_test, y_test, verbose=0)))fig = go.Figure()fig.add_trace(go.Scatter( x = df.index[-60:], y = df.iloc[-60:,0], mode='lines+markers', name='real', line=dict(color='#ff9800', width=1)))fig.add_trace(go.Scatter( x = df.index[-num_prediction:], y = predicted, mode='lines+markers', name='predict', line=dict(color='#2196f3', width=1)))fig.update_layout(template='plotly_dark', hovermode='x', spikedistance=-1, hoverlabel=dict(font_size=16))fig.update_xaxes(showspikes=True)fig.update_yaxes(showspikes=True)fig.show()
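The windowing done by `multivariate_data` above can be illustrated in isolation on a toy series (smaller `look_back` and `num_prediction` than in the question, numpy only):

```python
import numpy as np

look_back = 3       # 90 in the question
num_prediction = 2  # 5 in the question

series = np.arange(10, dtype=float)  # toy stand-in for the scaled Close column

x, y = [], []
for i in range(look_back, len(series) - num_prediction + 1):
    x.append(series[i - look_back:i])        # look-back window (model input)
    y.append(series[i:i + num_prediction])   # next num_prediction values (target)
x, y = np.array(x), np.array(y)

print(x.shape, y.shape)  # (6, 3) (6, 2)
print(x[0], y[0])        # [0. 1. 2.] [3. 4.]
```

Each training sample is therefore a sliding window over the series, and consecutive samples overlap heavily, which is worth keeping in mind when judging validation results.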
Answer:
The OP makes an interesting finding. Let me simplify the original question as follows:
If the model is trained on a particular time series, why can't it reconstruct previous time series data that it was already trained on?
The answer is actually embedded in the training process itself. Since `EarlyStopping` is used here to avoid overfitting, the best model is saved at `epoch=5` with `val_loss=0.0030`, as the OP mentioned. At that point the training loss equals `0.0343`, i.e. the training RMSE is `0.185`. Since the data set is scaled with `MinMaxScaler`, we need to undo the scaling of the RMSE to understand what is going on.

The minimum and maximum values of the time series turn out to be `2290` and `3380`. Therefore a training RMSE of `0.185` means that, even for the training set, the predicted values differ from the ground-truth values on average by about `0.185 * (3380 - 2290)`, i.e. `~200` units.
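The unscaling arithmetic can be checked in a few lines (the min/max values `2290` and `3380` are the approximate bounds quoted above):

```python
import numpy as np

train_mse = 0.0343               # training loss (MSE on scaled data) at the best epoch
train_rmse = np.sqrt(train_mse)  # ~0.185

data_min, data_max = 2290, 3380  # approximate min/max of the price series

# MinMaxScaler maps x -> (x - min) / (max - min), so an error of r in
# scaled units corresponds to r * (max - min) in original price units
error_units = train_rmse * (data_max - data_min)
print(round(train_rmse, 3), round(error_units))  # 0.185 202
```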
This explains why there is such a big difference when predicting the training data itself at earlier time steps.
What should I do to mimic the training data perfectly?
I asked this question to myself. The simple answer is to drive the training loss close to `0`, that is, to overfit the model.

After some training, I realized that a model with only 1 LSTM layer of `32` units is not complex enough to reconstruct the training data. Therefore I added another LSTM layer as follows:
```python
model = Sequential()
model.add(LSTM(32, return_sequences=True, activation='sigmoid',
               input_shape=(x_train.shape[1], x_train.shape[2])))
# model.add(Dropout(0.2))
# model.add(BatchNormalization())
model.add(LSTM(units=64, return_sequences=False))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer='adam', loss='mse')
```
And the model was trained for `1000` epochs without `EarlyStopping`:
```python
model.fit(x_train, y_train, batch_size=64, epochs=1000, shuffle=True,
          validation_data=(x_test, y_test))
```
At the end of the `1000`th epoch we have a training loss of `0.00047`, which is much lower than the training loss in your case. So we would expect the model to reconstruct the training data better. Here is the prediction plot for April 2–8:
A final note:
Training on a particular data set does not necessarily mean that the model should be able to reconstruct that training data perfectly. Especially when methods such as early stopping, regularization, and dropout are introduced to avoid overfitting, the model tends to generalize rather than memorize the training data.
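The same capacity-versus-memorization trade-off can be seen outside of Keras with a tiny polynomial-fitting sketch (hypothetical toy data, numpy only, standing in for the small LSTM versus the overfit two-layer one):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # noisy toy "series"

# Low capacity (think: small, early-stopped model) -- smooth and general,
# but it cannot reproduce the training points exactly.
low = np.polyval(np.polyfit(x, y, 3), x)

# High capacity trained to the end (think: overfit model) -- the training
# error collapses because the fit memorizes the points.
high = np.polyval(np.polyfit(x, y, 15), x)

mse_low = np.mean((y - low) ** 2)
mse_high = np.mean((y - high) ** 2)
print(mse_low > mse_high)  # True: the memorizing fit has lower *training* error
```

The memorizing fit reproduces the training data almost perfectly, yet it is exactly the kind of model that behaves erratically off the points it was fit on, which is why early stopping trades away perfect reconstruction on purpose.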