How do I reshape a Keras regressor's predictions from a (1006, 19) numpy array to (1006, 1)?

I am building a stock-prediction model in both PyTorch and Keras. I followed some online tutorials, adapted them to my data, and the PyTorch version runs fine.

Now I am porting the code to an equivalent Keras model. I have built the model and made predictions, but the problem is that Keras's regressor.predict() returns a (1006, 19) numpy array, whereas the PyTorch call predictions = model(x_test) returns a (1006, 1) array, which is what I need downstream so I can plot the results.

Here is my current Keras code:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout
from keras.callbacks import History
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import mean_squared_error

lookback = 20
x_train_keras, y_train_keras, x_test_keras, y_test_keras = split_data(price, lookback)
print('x_train.shape = ', x_train_keras.shape)  # x_train.shape = (1006, 19, 1)
print('y_train.shape = ', y_train_keras.shape)  # y_train.shape = (1006, 1)
print('x_test.shape = ', x_test_keras.shape)    # x_test.shape = (252, 19, 1)
print('y_test.shape = ', y_test_keras.shape)    # y_test.shape = (252, 1)

regression = Sequential()
regression.add(LSTM(units=50, return_sequences=True, kernel_initializer='glorot_uniform',
                    input_shape=(x_train_keras.shape[1], 1)))
regression.add(Dropout(0.2))
regression.add(LSTM(units=50, kernel_initializer='glorot_uniform', return_sequences=True))
regression.add(Dropout(0.2))
regression.add(LSTM(units=50, kernel_initializer='glorot_uniform', return_sequences=True))
regression.add(Dropout(0.2))
regression.add(LSTM(units=50, kernel_initializer='glorot_uniform', return_sequences=True))
regression.add(Dropout(0.2))
regression.add(Dense(units=1))
regression.compile(optimizer='adam', loss='mean_squared_error')

history = History()
history = regression.fit(x_train_keras, y_train_keras, batch_size=30, epochs=100,
                         callbacks=[history])

train_predict_keras = regression.predict(x_train_keras)
train_predict_keras = train_predict_keras.reshape((train_predict_keras.shape[0],
                                                   train_predict_keras.shape[1]))
predict = pd.DataFrame(scaler.inverse_transform(train_predict_keras))
original = pd.DataFrame(scaler.inverse_transform(y_train_keras))

fig = plt.figure()
fig.subplots_adjust(hspace=0.2, wspace=0.2)
plt.subplot(1, 2, 1)
ax = sns.lineplot(x=original.index, y=original[0], label='Data', color='royalblue')
ax = sns.lineplot(x=predict.index, y=predict[0], label='Training Prediction', color='tomato')
ax.set_title('Stock Price', size=14, fontweight='bold')
ax.set_xlabel("Days", size=14)
ax.set_ylabel("Cost (USD)", size=14)
ax.set_xticklabels('', size=10)
plt.subplot(1, 2, 2)
ax = sns.lineplot(data=history.history.get('loss'), color='royalblue')
ax.set_xlabel("Epoch", size=14)
ax.set_ylabel("Loss", size=14)
ax.set_title("Training Loss", size=14, fontweight='bold')
fig.set_figheight(6)
fig.set_figwidth(16)

# Make predictions
test_predict_keras = regression.predict(x_test_keras)

# Invert predictions
train_predict_keras = scaler.inverse_transform(train_predict_keras)
y_train_keras = scaler.inverse_transform(y_train_keras)
test_predict_keras = scaler.inverse_transform(
    test_predict_keras.reshape((test_predict_keras.shape[0], test_predict_keras.shape[1])))
y_test_keras = scaler.inverse_transform(y_test_keras)

# Calculate root MSE
trainScore = math.sqrt(mean_squared_error(y_train_keras[:, 0], train_predict_keras[:, 0]))
print(f'Train score {trainScore:.2f} RMSE')
testScore = math.sqrt(mean_squared_error(y_test_keras[:, 0], test_predict_keras[:, 0]))
print(f'Test score {testScore:.2f} RMSE')

# Shift train predictions for plotting
trainPredictPlot_keras = np.empty_like(price)
trainPredictPlot_keras[:, :] = np.nan
trainPredictPlot_keras[lookback:len(train_predict_keras) + lookback, :] = train_predict_keras

# Shift test predictions for plotting
testPredictPlot_keras = np.empty_like(price)
testPredictPlot_keras[:, :] = np.nan
testPredictPlot_keras[len(train_predict_keras) + lookback - 1:len(price) - 1, :] = test_predict_keras

original = scaler.inverse_transform(price['Close'].values.reshape(-1, 1))
predictions_keras = np.append(trainPredictPlot_keras, testPredictPlot_keras, axis=1)
predictions_keras = np.append(predictions_keras, original, axis=1)
result_keras = pd.DataFrame(predictions_keras)

The error occurs on the line trainPredictPlot_keras[lookback:len(train_predict_keras)+lookback, :] = train_predict_keras, which raises: could not broadcast input array from shape (1006,19) into shape (1006,1).
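The broadcast failure can be reproduced without Keras at all. A minimal sketch, assuming the shapes reported above (the array names here are hypothetical stand-ins, not the original variables): with return_sequences=True on the last LSTM, predict() yields one value per timestep, so each row is 19 wide, while the plotting buffer expects 1 column. Keeping only the last timestep makes the shapes line up:

```python
import numpy as np

# Stand-in for regression.predict(x_train_keras) after the reshape:
# one prediction per timestep, hence 19 columns per sample.
train_predict = np.zeros((1006, 19))

# Stand-in for the single-column plotting buffer built from `price`.
plot_buffer = np.full((1200, 1), np.nan)
lookback = 20

# plot_buffer[20:1026, :] = train_predict would fail:
# cannot broadcast (1006, 19) into (1006, 1).

# Workaround (without retraining): slice out the last timestep only.
last_step = train_predict[:, -1:]   # shape (1006, 1)
plot_buffer[lookback:lookback + len(last_step), :] = last_step
print(last_step.shape)
```

Note this is only a shape workaround; the cleaner fix is to change the model so it emits one value per sample in the first place, as the answer below describes.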


Answer:

Set return_sequences to False on the last LSTM layer. You need to do it as follows:

........
regression.add(LSTM(units=50, kernel_initializer='glorot_uniform',
                    return_sequences=False))
regression.add(Dropout(0.2))
regression.add(Dense(units=1))
regression.compile(optimizer='adam', loss='mean_squared_error')

From the documentation:

return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.
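The effect on output shape is easy to verify in isolation. A minimal sketch (toy sizes, untrained weights; the layer widths here are arbitrary, not the original model's): with return_sequences=True the stack emits one value per timestep, so Dense(1) produces (batch, 19, 1); with return_sequences=False only the last hidden state flows to Dense(1), giving the desired (batch, 1):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

def build(return_sequences):
    # Tiny stand-in for the question's model: one LSTM, then Dense(1).
    m = Sequential()
    m.add(LSTM(8, return_sequences=return_sequences, input_shape=(19, 1)))
    m.add(Dense(1))
    return m

x = np.zeros((4, 19, 1), dtype="float32")
print(build(True)(x).shape)   # per-timestep output: (4, 19, 1)
print(build(False)(x).shape)  # last-step output only: (4, 1)
```

This is why regressor.predict() came back 19 columns wide: every intermediate timestep was being passed through the final Dense layer.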
