I am using TensorFlow 2.1.1 and trying to build a sequence-to-sequence model with attention.
latent_dim = 300
embedding_dim = 100
batch_size = 128

# Encoder
encoder_inputs = tf.keras.Input(shape=(None,), dtype='int32')

# embedding layer
enc_emb = tf.keras.layers.Embedding(x_voc, embedding_dim, trainable=True)(encoder_inputs)

# encoder lstm 1
encoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True,
                                    dropout=0.4, recurrent_dropout=0.4)
encoder_output, state_h, state_c = encoder_lstm(enc_emb)
print(encoder_output.shape)

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = tf.keras.Input(shape=(None,), dtype='int32')

# embedding layer
dec_emb_layer = tf.keras.layers.Embedding(y_voc, embedding_dim, trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)

decoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True,
                                    dropout=0.4, recurrent_dropout=0.2)
decoder_output, decoder_fwd_state, decoder_back_state = decoder_lstm(dec_emb, initial_state=[state_h, state_c])

# Attention layer
attn_out, attn_states = tf.keras.layers.Attention()([encoder_output, decoder_output])

# Concat attention input and decoder LSTM output
decoder_concat_input = tf.keras.layers.Concatenate(axis=-1, name='concat_layer')([decoder_output, attn_out])

# dense layer
decoder_dense = tf.keras.layers.TimeDistributed(Dense(y_voc, activation='softmax'))
decoder_outputs = decoder_dense(decoder_concat_input)

# Define the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
When I run this code, I get the error TypeError: Cannot iterate over a tensor with unknown first dimension. while creating the Attention layer.
I checked the shapes of encoder_output and decoder_output, and both are (None, None, 300), so I thought that might be the problem. But I looked at the Attention example from the TensorFlow examples, and it also uses None dimensions for the Attention layer's inputs.
What am I missing here? Any suggestions are appreciated.
Edit
Adding the stack trace:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-49-d37cd48e626b> in <module>()
     28
     29 # Attention layer
---> 30 attn_out, attn_states = tf.keras.layers.Attention()([encoder_output, decoder_output])
     31
     32 # Concat attention input and decoder LSTM output

~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in __iter__(self)
    546     if shape[0] is None:
    547       raise TypeError(
--> 548           "Cannot iterate over a tensor with unknown first dimension.")
    549     for i in xrange(shape[0]):
    550       yield self[i]

TypeError: Cannot iterate over a tensor with unknown first dimension.
Answer:
The error occurs because the Keras Attention layer returns a single tensor, while you are trying to unpack two. You need to change
attn_out, attn_states = tf.keras.layers.Attention()([encoder_output, decoder_output])
to
attn_out = tf.keras.layers.Attention()([encoder_output, decoder_output])
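For context on why the original line failed: assigning the layer's single output tensor to two names makes Python call iter() on it, and a symbolic tensor whose first (batch) dimension is None cannot be iterated, which is exactly what the traceback shows. A minimal sketch that reproduces the message on TF 2.1 (the tensor here is illustrative, not taken from your model):

import tensorflow as tf

t = tf.keras.Input(shape=(None, 300))    # symbolic tensor of shape (None, None, 300)
try:
    a, b = t                             # tuple unpacking calls iter(t)
except TypeError as e:
    print(e)                             # Cannot iterate over a tensor with unknown first dimension.

If you do also want the attention weights, newer TensorFlow releases (around 2.4 and later, as far as I know) accept return_attention_scores=True in the layer call; on 2.1.1 the layer only returns the context tensor.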
Here is the full model:
# Encoder
encoder_inputs = tf.keras.Input(shape=(None,), dtype='int32')

# embedding layer
enc_emb = tf.keras.layers.Embedding(x_voc, embedding_dim)(encoder_inputs)

# encoder lstm 1
encoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
encoder_output, state_h, state_c = encoder_lstm(enc_emb)

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = tf.keras.Input(shape=(None,), dtype='int32')

# embedding layer
dec_emb = tf.keras.layers.Embedding(y_voc, embedding_dim)(decoder_inputs)

decoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_output, decoder_fwd_state, decoder_back_state = decoder_lstm(dec_emb, initial_state=[state_h, state_c])

# Attention layer (returns a single context tensor)
attn_out = tf.keras.layers.Attention()([encoder_output, decoder_output])

# Concat attention input and decoder LSTM output
decoder_concat_input = tf.keras.layers.Concatenate(axis=-1, name='concat_layer')([decoder_output, attn_out])

# dense layer
decoder_dense = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(y_voc, activation='softmax'))
decoder_outputs = decoder_dense(decoder_concat_input)

# Define the model
model = tf.keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
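As a quick sanity check, here is a sketch that assumes the model defined above is already in scope and that x_voc and y_voc are your vocabulary sizes; the dummy batches below are made up for illustration. It compiles the model and runs a forward pass, which should give one softmax over y_voc per decoder timestep:

import numpy as np

# Assumes `model`, `x_voc` and `y_voc` from the code above are already defined.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Two dummy samples; source and target use the same length (10) here because
# tf.keras.layers.Attention treats its first input as the query, so with
# [encoder_output, decoder_output] the context tensor has the encoder's time dimension.
enc_in = np.random.randint(1, x_voc, size=(2, 10))
dec_in = np.random.randint(1, y_voc, size=(2, 10))

preds = model.predict([enc_in, dec_in])
print(preds.shape)   # expected: (2, 10, y_voc)

If your source and target sequences have different lengths, you would typically pass [decoder_output, encoder_output] to the Attention layer instead, so that the context tensor lines up with the decoder timesteps before the Concatenate.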