I saw this code at https://github.com/raducrs/Applications-of-Deep-Learning/blob/master/Image%20captioning%20Flickr8k.ipynb and tried to run it on Google Colab, but running the code below raises an error. The error message says

Merge is deprecated

I would like to know how to run this code with the latest version of Keras.
LSTM_CELLS_CAPTION = 256
LSTM_CELLS_MERGED = 1000

image_pre = Sequential()
image_pre.add(Dense(100, input_shape=(IMG_FEATURES_SIZE,), activation='relu', name='fc_image'))
image_pre.add(RepeatVector(MAX_SENTENCE, name='repeat_image'))

caption_model = Sequential()
caption_model.add(Embedding(VOCABULARY_SIZE, EMB_SIZE, weights=[embedding_matrix], input_length=MAX_SENTENCE, trainable=False, name="embedding"))
caption_model.add(LSTM(EMB_SIZE, return_sequences=True, name="lstm_caption"))
caption_model.add(TimeDistributed(Dense(100, name="td_caption")))

combined = Sequential()
combined.add(Merge([image_pre, caption_model], mode='concat', concat_axis=1, name="merge_models"))
combined.add(Bidirectional(LSTM(256, return_sequences=False, name="lstm_merged"), name="bidirectional_lstm"))
combined.add(Dense(VOCABULARY_SIZE, name="fc_merged"))
combined.add(Activation('softmax', name="softmax_combined"))

predictive = Model([image_pre.input, caption_model.input], combined.output)
Answer:
Merge(mode='concat') has now been replaced by Concatenate(axis=1).
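As a minimal standalone sketch of that replacement (the Input tensors and names here are hypothetical, with shapes matching the constants in the full example below), the functional-API Concatenate is applied to the two branch outputs instead of wrapping the branches in a Merge layer:

from tensorflow.keras.layers import Input, Concatenate
from tensorflow.keras.models import Model

a = Input(shape=(80, 100))  # e.g. repeated image features: (batch, MAX_SENTENCE, 100)
b = Input(shape=(80, 100))  # e.g. caption features: (batch, MAX_SENTENCE, 100)
merged = Concatenate(axis=1, name="merge_models")([a, b])  # -> (batch, 160, 100)
Model([a, b], merged).summary()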
The following code builds the graph correctly on Colab.
from tensorflow.python import keras
from keras.layers import *
from keras.models import Model, Sequential
import numpy as np

IMG_FEATURES_SIZE = 10
MAX_SENTENCE = 80
VOCABULARY_SIZE = 1000
EMB_SIZE = 100
embedding_matrix = np.zeros((VOCABULARY_SIZE, EMB_SIZE))

LSTM_CELLS_CAPTION = 256
LSTM_CELLS_MERGED = 1000

# Image branch: Dense features repeated once per time step
image_pre = Sequential()
image_pre.add(Dense(100, input_shape=(IMG_FEATURES_SIZE,), activation='relu', name='fc_image'))
image_pre.add(RepeatVector(MAX_SENTENCE, name='repeat_image'))

# Caption branch: frozen embedding -> LSTM -> per-time-step Dense
caption_model = Sequential()
caption_model.add(Embedding(VOCABULARY_SIZE, EMB_SIZE, weights=[embedding_matrix], input_length=MAX_SENTENCE, trainable=False, name="embedding"))
caption_model.add(LSTM(EMB_SIZE, return_sequences=True, name="lstm_caption"))
caption_model.add(TimeDistributed(Dense(100, name="td_caption")))

# Functional-API concatenation replaces the deprecated Merge layer
merge = Concatenate(axis=1, name="merge_models")([image_pre.output, caption_model.output])
lstm = Bidirectional(LSTM(256, return_sequences=False, name="lstm_merged"), name="bidirectional_lstm")(merge)
output = Dense(VOCABULARY_SIZE, name="fc_merged", activation='softmax')(lstm)

predictive = Model([image_pre.input, caption_model.input], output)
predictive.compile('sgd', 'binary_crossentropy')
predictive.summary()
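As a quick sanity check with hypothetical dummy data (the array names below are made up for illustration), the resulting two-input model takes one image-feature vector and one integer token sequence per sample:

import numpy as np

dummy_images = np.random.rand(2, IMG_FEATURES_SIZE).astype("float32")           # (2, 10)
dummy_captions = np.random.randint(0, VOCABULARY_SIZE, size=(2, MAX_SENTENCE))  # (2, 80)

preds = predictive.predict([dummy_images, dummy_captions])
print(preds.shape)  # expected: (2, VOCABULARY_SIZE)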
Description:
This is a model with two inputs per sample: an image and a caption (a sequence of words). The two input graphs are joined at the concatenation point (name='merge_models').
The image is processed by a single Dense layer (you might want to add convolutional layers to the image branch); the output of this Dense layer is then repeated MAX_SENTENCE times in preparation for the merge.
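A small standalone sketch of that shape flow, assuming the same constants as above (IMG_FEATURES_SIZE=10, MAX_SENTENCE=80):

from tensorflow.keras.layers import Input, Dense, RepeatVector
from tensorflow.keras.models import Model

img_in = Input(shape=(10,))                     # IMG_FEATURES_SIZE
img_fc = Dense(100, activation='relu')(img_in)  # (batch, 100)
img_seq = RepeatVector(80)(img_fc)              # (batch, MAX_SENTENCE, 100)
Model(img_in, img_seq).summary()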
The caption is processed by an LSTM followed by a Dense layer.
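A corresponding sketch of the caption branch's shapes (the pretrained embedding weights are dropped here just to keep the illustration short):

from tensorflow.keras.layers import Input, Embedding, LSTM, TimeDistributed, Dense
from tensorflow.keras.models import Model

cap_in = Input(shape=(80,))              # MAX_SENTENCE token ids
x = Embedding(1000, 100)(cap_in)         # VOCABULARY_SIZE x EMB_SIZE -> (batch, 80, 100)
x = LSTM(100, return_sequences=True)(x)  # (batch, 80, 100)
x = TimeDistributed(Dense(100))(x)       # (batch, 80, 100)
Model(cap_in, x).summary()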
Because the merge concatenates along the time axis (axis=1), the combined sequence consists of the MAX_SENTENCE repeated image-feature steps followed by the MAX_SENTENCE caption steps, i.e. 2*MAX_SENTENCE time steps with 100 features each.
The merged branch finally predicts one class out of VOCABULARY_SIZE.
model.summary() is a good way to understand the graph.
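Besides summary(), plot_model from tf.keras.utils can draw the two-branch graph as an image (this assumes pydot and graphviz are available, which is typically the case on Colab):

from tensorflow.keras.utils import plot_model

plot_model(predictive, to_file='model.png', show_shapes=True)  # annotates each layer with its output shape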