I am trying to use Concatenate() to build an ensemble model of VGG16 and VGG19. My image size is (224, 224, 3). I don't understand what this error means.
Here is my code:
# Preprocessing the Training set
train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True)
training_set = train_datagen.flow_from_directory('/content/drive/MyDrive/Model Development /tbdataset/Train', target_size = (224, 224), batch_size = 32, class_mode = 'categorical')

# Preprocessing the Test set
test_datagen = ImageDataGenerator(rescale = 1./255)
test_set = test_datagen.flow_from_directory('/content/drive/MyDrive/Model Development /tbdataset/Test', target_size = (224, 224), batch_size = 32, class_mode = 'categorical', shuffle=False)

vgg19 = VGG19(input_shape=IMAGE_SIZE, weights='imagenet', include_top=False)
for layer in vgg19.layers:
    layer._name = layer._name + str('_19')
    layer.trainable = False

vgg16 = VGG16(input_shape=IMAGE_SIZE, weights='imagenet', include_top=False)
for layer in vgg16.layers:
    layer._name = layer._name + str('_16')
    layer.trainable = False

vgg16_x = Flatten()(vgg16.output)
vgg19_x = Flatten()(vgg19.output)
x = Concatenate()([vgg16_x, vgg19_x])
out = Dense(2, activation='softmax')(x)

model = Model(inputs=[vgg16.input, vgg19.input], outputs=out)
model.compile(
    loss='categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005, name="Adam"),
    metrics=['accuracy', 'AUC', 'Precision', 'Recall'])
model.summary()

r = model.fit(
    training_set,
    validation_data=test_set,
    epochs=20,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set))
I got the following error:
ValueError: Layer model_16 expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=float32>]
Could anyone guide me on how to resolve the above issue? Thanks in advance!
Answer:
If vgg16 and vgg19 receive the same input, you can use a single shared input layer for both. That way, your model has only one input, which matches the single tensor the generator yields.
Here is the code:
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Concatenate, Dense
from tensorflow.keras.models import Model

IMAGE_SIZE = (224, 224, 3)

vgg19 = tf.keras.applications.vgg19.VGG19(
    input_shape=IMAGE_SIZE, weights='imagenet', include_top=False)
for layer in vgg19.layers:
    layer._name = layer._name + str('_19')
    layer.trainable = False

vgg16 = tf.keras.applications.vgg16.VGG16(
    input_shape=IMAGE_SIZE, weights='imagenet', include_top=False)
for layer in vgg16.layers:
    layer._name = layer._name + str('_16')
    layer.trainable = False

# One shared input layer feeds both backbones, so the model expects a single input.
inp = Input(IMAGE_SIZE)
vgg16_x = Flatten()(vgg16(inp))
vgg19_x = Flatten()(vgg19(inp))
x = Concatenate()([vgg16_x, vgg19_x])
out = Dense(2, activation='softmax')(x)

model = Model(inputs=inp, outputs=out)
model.compile(
    loss='categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005, name="Adam"),
    metrics=['accuracy', 'AUC', 'Precision', 'Recall'])
model.summary()
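Since the model now takes a single input, you can pass your existing generators to fit unchanged. A minimal training sketch, assuming training_set and test_set are the flow_from_directory generators defined in the question:

# Training sketch: the single-input model accepts the generator's (images, labels) batches directly.
r = model.fit(
    training_set,
    validation_data=test_set,
    epochs=20,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set))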