I am trying to implement a multi-step forecasting LSTM model in Keras. The shapes of the data are:
```
X : (5831, 48, 1)
y : (5831, 1, 12)
```
The model I am trying to use is:
```python
power_in = Input(shape=(X.shape[1], X.shape[2]))
power_lstm = LSTM(50, recurrent_dropout=0.4128, dropout=0.412563,
                  kernel_initializer=power_lstm_init,
                  return_sequences=True)(power_in)
main_out = TimeDistributed(Dense(12, kernel_initializer=power_lstm_init))(power_lstm)
```
When I try to train the model like this:
```python
hist = forecaster.fit([X], y, epochs=325, batch_size=16,
                      validation_data=([X_valid], y_valid),
                      verbose=1, shuffle=False)
```
I get the following error:
```
ValueError: Error when checking target: expected time_distributed_16 to have shape (48, 12) but got array with shape (1, 12)
```
How can I fix this?
Answer:
Based on your comment:

"The data I have is like t-48, t-47, t-46, ..., t-1 as past data, and t+1, t+2, ..., t+12 are the values I want to forecast"
you may not need to use a TimeDistributed layer at all: first, remove the return_sequences=True argument of the LSTM layer (with return_sequences=True the model produces an output of shape (48, 12) per sample, which is why Keras complains that your targets of shape (1, 12) don't match). After doing that, the LSTM layer will encode the input time series of the past into a vector of shape (50,), which you can feed directly to a Dense layer with 12 units:
```python
import numpy as np
from keras.layers import Input, LSTM, Dense

# make sure the labels have shape (num_samples, 12)
y = np.reshape(y, (-1, 12))

power_in = Input(shape=X.shape[1:])
power_lstm = LSTM(50, recurrent_dropout=0.4128, dropout=0.412563,
                  kernel_initializer=power_lstm_init)(power_in)
main_out = Dense(12, kernel_initializer=power_lstm_init)(power_lstm)
```
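Training then works the same way as your original fit call. The sketch below is only an illustration: the mse loss and adam optimizer are assumptions (pick whatever suits your problem), and y_valid needs the same reshape as y:

```python
from keras.models import Model

forecaster = Model(power_in, main_out)
forecaster.compile(optimizer='adam', loss='mse')  # loss/optimizer are assumptions

hist = forecaster.fit(X, y, epochs=325, batch_size=16,
                      validation_data=(X_valid, np.reshape(y_valid, (-1, 12))),
                      verbose=1, shuffle=False)
```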
Alternatively, if you would like to use a TimeDistributed layer, and considering that the output is itself a sequence, we can explicitly enforce this temporal dependency by adding another LSTM layer before the Dense layer (plus a RepeatVector layer after the first LSTM layer to turn its output into a time series of length 12, i.e. the same length as the output time series):
```python
import numpy as np
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from keras.models import Model

# make sure the labels have shape (num_samples, 12, 1)
y = np.reshape(y, (-1, 12, 1))

power_in = Input(shape=(48, 1))
power_lstm = LSTM(50, recurrent_dropout=0.4128, dropout=0.412563,
                  kernel_initializer=power_lstm_init)(power_in)
rep = RepeatVector(12)(power_lstm)
out_lstm = LSTM(32, return_sequences=True)(rep)
main_out = TimeDistributed(Dense(1))(out_lstm)

model = Model(power_in, main_out)
model.summary()
```
Model summary:
```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         (None, 48, 1)             0
_________________________________________________________________
lstm_3 (LSTM)                (None, 50)                10400
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 12, 50)            0
_________________________________________________________________
lstm_4 (LSTM)                (None, 12, 32)            10624
_________________________________________________________________
time_distributed_1 (TimeDist (None, 12, 1)             33
=================================================================
Total params: 21,057
Trainable params: 21,057
Non-trainable params: 0
_________________________________________________________________
```
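For completeness, a minimal usage sketch (not part of the original answer, with the loss and optimizer again assumed): after training this model on targets of shape (num_samples, 12, 1), its predictions carry a trailing axis of size 1 that you can squeeze away to get the 12 forecast steps:

```python
model.compile(optimizer='adam', loss='mse')   # loss/optimizer are assumptions
model.fit(X, y, epochs=325, batch_size=16, shuffle=False)

preds = model.predict(X_valid)        # shape: (num_valid_samples, 12, 1)
preds = np.squeeze(preds, axis=-1)    # shape: (num_valid_samples, 12)
```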
Of course, in both models you may need to tune the hyperparameters (e.g. the number of LSTM layers, the number of units in the LSTM layers, etc.) to be able to compare them fairly and achieve good results.
Side note: actually, in your scenario you don't need to use a TimeDistributed layer at all, because (currently) the Dense layer is applied on the last axis. Therefore, TimeDistributed(Dense(...)) and Dense(...) are equivalent.
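You can verify this equivalence with a quick check like the one below (a sketch added for illustration, not from the original answer); both calls produce an output of shape (None, 12, 1):

```python
from keras import backend as K
from keras.layers import Input, Dense, TimeDistributed

x = Input(shape=(12, 32))
# Dense applied to a 3D tensor acts on the last axis only,
# so it matches the TimeDistributed-wrapped version.
print(K.int_shape(Dense(1)(x)))                   # (None, 12, 1)
print(K.int_shape(TimeDistributed(Dense(1))(x)))  # (None, 12, 1)
```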