I need to build a model with two Dropout layers and two LSTM layers. Unfortunately, I am running into an input-shape problem at my second LSTM layer. After looking into it, I found that I need to change the input dimensions, but I don't know how. One option I found requires a Lambda layer, but I can't import it in my environment (it's a Coursera environment). Do you have any suggestions for resolving my error?
model = Sequential()
Layer1 = model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
Layer2 = model.add(Bidirectional(LSTM(20)))
Layer3 = model.add(Dropout(.03))
Layer4 = model.add(LSTM(20))
Layer5 = model.add(Dense(total_words,
                         kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
                         bias_regularizer=regularizers.l2(1e-4),
                         activity_regularizer=regularizers.l2(1e-5)))  # A Dense layer including regularizers
Layer6 = model.add(Dense(total_words, activation='softmax'))

# Pick an optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
Error:
ValueError: Input 0 of layer lstm_20 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 40]
Answer:
Thanks to @*** and @*** for the update.
For the benefit of the community, here is a solution using the sample data below.
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense
from tensorflow.keras.regularizers import l1_l2, l2

total_words = 478
max_sequence_len = 90

model = Sequential()
Layer1 = model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
Layer2 = model.add(Bidirectional(LSTM(20)))
Layer3 = model.add(Dropout(.03))
Layer4 = model.add(LSTM(20))
Layer5 = model.add(Dense(total_words,
                         kernel_regularizer=l1_l2(l1=1e-5, l2=1e-4),
                         bias_regularizer=l2(1e-4),
                         activity_regularizer=l2(1e-5)))  # A Dense layer including regularizers
Layer6 = model.add(Dense(total_words, activation='softmax'))

# Pick an optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
Output:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-8ce04225c92d> in <module>()
     12 Layer2 = model.add(Bidirectional(LSTM(20)))
     13 Layer3 = model.add(Dropout(.03))
---> 14 Layer4 = model.add(LSTM(20))
     15 Layer5 = model.add(Dense(total_words,
     16                          kernel_regularizer=l1_l2(l1=1e-5, l2=1e-4),

8 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    221           'expected ndim=' + str(spec.ndim) + ', found ndim=' +
    222           str(ndim) + '. Full shape received: ' +
--> 223           str(tuple(shape)))
    224     if spec.max_ndim is not None:
    225       ndim = x.shape.rank

ValueError: Input 0 of layer lstm_1 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 40)
Fixed code:
Once you add return_sequences=True to the first LSTM layer (i.e., Layer2), the problem is resolved.
model = Sequential()
Layer1 = model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
Layer2 = model.add(Bidirectional(LSTM(20, return_sequences=True)))
Layer3 = model.add(Dropout(.03))
Layer4 = model.add(LSTM(20))
Layer5 = model.add(Dense(total_words,
                         kernel_regularizer=l1_l2(l1=1e-5, l2=1e-4),
                         bias_regularizer=l2(1e-4),
                         activity_regularizer=l2(1e-5)))  # A Dense layer including regularizers
Layer6 = model.add(Dense(total_words, activation='softmax'))

# Pick an optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
Output:
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, 89, 64)            30592
_________________________________________________________________
bidirectional_1 (Bidirection (None, 89, 40)            13600
_________________________________________________________________
dropout_1 (Dropout)          (None, 89, 40)            0
_________________________________________________________________
lstm_3 (LSTM)                (None, 20)                4880
_________________________________________________________________
dense (Dense)                (None, 478)               10038
_________________________________________________________________
dense_1 (Dense)              (None, 478)               228962
=================================================================
Total params: 288,072
Trainable params: 288,072
Non-trainable params: 0
_________________________________________________________________
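To see why return_sequences=True is what fixes the ndim mismatch, here is a minimal sketch (toy shapes, not the model above) comparing the two output shapes of a Bidirectional LSTM. By default an LSTM returns only the last timestep (ndim=2), while the next stacked LSTM expects the whole sequence (ndim=3):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Bidirectional

# Toy batch: (batch=2, timesteps=10, features=8)
x = np.random.rand(2, 10, 8).astype("float32")

# Default: only the last timestep is returned -> ndim=2, shape (batch, 2*units)
last_only = Bidirectional(LSTM(20))(x)

# return_sequences=True: every timestep is returned -> ndim=3,
# which is exactly what a stacked LSTM layer expects as input
full_seq = Bidirectional(LSTM(20, return_sequences=True))(x)

print(last_only.shape)  # (2, 40) -- 40 = 2 directions x 20 units
print(full_seq.shape)   # (2, 10, 40)
```

This matches the error message: the second LSTM received shape (None, 40) (ndim=2) because the first Bidirectional LSTM had dropped the time dimension.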