I want to use this model, but we can no longer use the Merge function.
image_model = Sequential([
    Dense(embedding_size, input_shape=(2048,), activation='relu'),
    RepeatVector(max_len)
])

caption_model = Sequential([
    Embedding(vocab_size, embedding_size, input_length=max_len),
    LSTM(256, return_sequences=True),
    TimeDistributed(Dense(300))
])

final_model = Sequential([
    Merge([image_model, caption_model], mode='concat', concat_axis=1),
    Bidirectional(LSTM(256, return_sequences=False)),
    Dense(vocab_size),
    Activation('softmax')
])
I rewrote the final_model part as follows (image_model and caption_model are unchanged):
image_in = Input(shape=(2048,))
caption_in = Input(shape=(max_len, vocab_size))
merged = concatenate([image_model(image_in), caption_model(caption_in)], axis=0)
latent = Bidirectional(LSTM(256, return_sequences=False))(merged)
out = Dense(vocab_size, activation='softmax')(latent)
final_model = Model([image_in, caption_in], out)
final_model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])
final_model.summary()
This gives me:
ValueError: "input_length" is 40, but received input has shape (None, 40, 8256).
Can anyone help me fix this? Source: https://github.com/yashk2810/Image-Captioning/blob/master/Image%20Captioning%20InceptionV3.ipynb
Answer:
You should define caption_in as two-dimensional: Input(shape=(max_len,)), because the Embedding layer expects a sequence of integer word indices, not one-hot vectors. Also, in your case the concatenation has to be done over the last axis: axis=-1, so the (None, 40, 300) outputs of image_model and caption_model are joined feature-wise into (None, 40, 600). The rest looks fine:
from keras.models import Sequential, Model
from keras.layers import (Input, Dense, RepeatVector, Embedding, LSTM,
                          TimeDistributed, Bidirectional, concatenate)
from keras.optimizers import RMSprop

embedding_size = 300
max_len = 40
vocab_size = 8256

# Image branch: project the 2048-d CNN feature vector to embedding_size,
# then repeat it max_len times so it matches the caption sequence length.
image_model = Sequential([
    Dense(embedding_size, input_shape=(2048,), activation='relu'),
    RepeatVector(max_len)
])

# Caption branch: embed integer word indices and produce a 300-d vector per timestep.
caption_model = Sequential([
    Embedding(vocab_size, embedding_size, input_length=max_len),
    LSTM(256, return_sequences=True),
    TimeDistributed(Dense(300))
])

image_in = Input(shape=(2048,))
caption_in = Input(shape=(max_len,))  # 2-D: a sequence of word indices
merged = concatenate([image_model(image_in), caption_model(caption_in)], axis=-1)
latent = Bidirectional(LSTM(256, return_sequences=False))(merged)
out = Dense(vocab_size, activation='softmax')(latent)

final_model = Model([image_in, caption_in], out)
final_model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])
final_model.summary()
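As a quick sanity check (a minimal sketch with randomly generated dummy data, not part of the original notebook or answer), you can push one fake batch through the fixed model to confirm that the shapes line up:

import numpy as np

# 8 fake image feature vectors (e.g. InceptionV3 pooled features of size 2048)
# and 8 integer-encoded captions of length max_len, with indices in [0, vocab_size).
dummy_images = np.random.random((8, 2048)).astype('float32')
dummy_captions = np.random.randint(0, vocab_size, size=(8, max_len))

# One-hot next-word targets, shape (8, vocab_size), to match categorical_crossentropy.
dummy_targets = np.zeros((8, vocab_size), dtype='float32')
dummy_targets[np.arange(8), np.random.randint(0, vocab_size, size=8)] = 1.0

preds = final_model.predict([dummy_images, dummy_captions])
print(preds.shape)  # expected: (8, 8256), one softmax distribution per sample

final_model.train_on_batch([dummy_images, dummy_captions], dummy_targets)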