Adding class information to a generator model in Keras

I want to use conditional generative adversarial networks (GANs) to generate images of one domain (call it domain A) while feeding in images from another domain (call it domain B) together with class information. Both domains are tied to the same label information (every domain A image is associated with one domain B image and with a specific label). My generator model in Keras currently looks like this:

def generator_model_v2():
    global BATCH_SIZE
    inputs = Input((IN_CH, img_cols, img_rows))
    e1 = BatchNormalization(mode=0)(inputs)
    e2 = Flatten()(e1)
    e3 = BatchNormalization(mode=0)(e2)
    e4 = Dense(1024, activation="relu")(e3)
    e5 = BatchNormalization(mode=0)(e4)
    e6 = Dense(512, activation="relu")(e5)
    e7 = BatchNormalization(mode=0)(e6)
    e8 = Dense(512, activation="relu")(e7)
    e9 = BatchNormalization(mode=0)(e8)
    e10 = Dense(IN_CH * img_cols * img_rows, activation="relu")(e9)
    e11 = Reshape((3, 28, 28))(e10)
    e12 = BatchNormalization(mode=0)(e11)
    e13 = Activation('tanh')(e12)
    model = Model(input=inputs, output=e13)
    return model

At the moment my generator takes images of domain A as input (the goal being to output images of domain B). I would like to somehow also feed in the class information of domain A, so that images of the same class are generated in domain B. I want to add the label information after the flattening step, so that the input size becomes 1x1025 instead of 1x1024. Can I use a second input in the generator for the class information? And if so, how do I call the generator during GAN training?

The training procedure looks like this:

discriminator_and_classifier_on_generator = generator_containing_discriminator_and_classifier(
    generator, discriminator, classifier)
generator.compile(loss=generator_l1_loss, optimizer=g_optim)
discriminator_and_classifier_on_generator.compile(
    loss=[generator_l1_loss, discriminator_on_generator_loss, "categorical_crossentropy"],
    optimizer="rmsprop")
discriminator.compile(loss=discriminator_loss, optimizer=d_optim)  # rmsprop
classifier.compile(loss="categorical_crossentropy", optimizer=c_optim)

for epoch in range(30):
    for index in range(int(X_train.shape[0] / BATCH_SIZE)):
        image_batch = Y_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]
        label_batch = LABEL_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]  # replace with your data here
        generated_images = generator.predict(X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE])
        real_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], image_batch), axis=1)
        fake_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], generated_images), axis=1)
        X = np.concatenate((real_pairs, fake_pairs))
        y = np.concatenate((np.ones((100, 1, 64, 64)), np.zeros((100, 1, 64, 64))))
        d_loss = discriminator.train_on_batch(X, y)
        discriminator.trainable = False
        c_loss = classifier.train_on_batch(image_batch, label_batch)
        classifier.trainable = False
        g_loss = discriminator_and_classifier_on_generator.train_on_batch(
            X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :],
            [image_batch, np.ones((100, 1, 64, 64)), label_batch])
        discriminator.trainable = True
        classifier.trainable = True

This code is an implementation of a conditional deep convolutional GAN (with a classifier added on top of the discriminator). The network functions are as follows:

def generator_containing_discriminator_and_classifier(generator, discriminator, classifier):
    inputs = Input((IN_CH, img_cols, img_rows))
    x_generator = generator(inputs)
    merged = merge([inputs, x_generator], mode='concat', concat_axis=1)
    discriminator.trainable = False
    x_discriminator = discriminator(merged)
    classifier.trainable = False
    x_classifier = classifier(x_generator)
    model = Model(input=inputs, output=[x_generator, x_discriminator, x_classifier])
    return model


def generator_containing_discriminator(generator, discriminator):
    inputs = Input((IN_CH, img_cols, img_rows))
    x_generator = generator(inputs)
    merged = merge([inputs, x_generator], mode='concat', concat_axis=1)
    discriminator.trainable = False
    x_discriminator = discriminator(merged)
    model = Model(input=inputs, output=[x_generator, x_discriminator])
    return model

Answer:

First, as suggested in Conditional Generative Adversarial Nets, you need to define a second input. Then simply concatenate the two input vectors and process this concatenated vector.

def generator_model_v2():
    input_image = Input((IN_CH, img_cols, img_rows))
    input_conditional = Input((n_classes,))
    e0 = Flatten()(input_image)
    e1 = Concatenate()([e0, input_conditional])
    e2 = BatchNormalization(mode=0)(e1)
    e3 = BatchNormalization(mode=0)(e2)
    e4 = Dense(1024, activation="relu")(e3)
    e5 = BatchNormalization(mode=0)(e4)
    e6 = Dense(512, activation="relu")(e5)
    e7 = BatchNormalization(mode=0)(e6)
    e8 = Dense(512, activation="relu")(e7)
    e9 = BatchNormalization(mode=0)(e8)
    e10 = Dense(IN_CH * img_cols * img_rows, activation="relu")(e9)
    e11 = Reshape((3, 28, 28))(e10)
    e12 = BatchNormalization(mode=0)(e11)
    e13 = Activation('tanh')(e12)
    model = Model(input=[input_image, input_conditional], output=e13)
    return model
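Since the combined model in the question feeds the images straight into the generator, that wrapper would also need a second Input for the class vector once the generator has two inputs. A minimal sketch in the same Keras 1.x functional style as the rest of the question (n_classes, the length of an assumed one-hot class vector, is my own placeholder):

def generator_containing_discriminator_and_classifier(generator, discriminator, classifier):
    input_image = Input((IN_CH, img_cols, img_rows))
    input_conditional = Input((n_classes,))  # assumed one-hot class vector
    # the two-input generator is now called with a list of both tensors
    x_generator = generator([input_image, input_conditional])
    merged = merge([input_image, x_generator], mode='concat', concat_axis=1)
    discriminator.trainable = False
    x_discriminator = discriminator(merged)
    classifier.trainable = False
    x_classifier = classifier(x_generator)
    model = Model(input=[input_image, input_conditional],
                  output=[x_generator, x_discriminator, x_classifier])
    return model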

Then you also need to pass the class labels to the network during training:

classifier.train_on_batch((image_batch, class_batch), label_batch)
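Accordingly, every call to the generator in the training loop (direct or through the combined model) would then have to receive both the image slice and the matching class vectors. A sketch of the affected lines, assuming class_batch is the same one-hot label slice that the loop already stores in label_batch:

# inside the per-batch loop from the question
class_batch = LABEL_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]  # one-hot labels (assumed)
generated_images = generator.predict(
    [X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE], class_batch])
# the combined model now also takes the class vector as its second input
g_loss = discriminator_and_classifier_on_generator.train_on_batch(
    [X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], class_batch],
    [image_batch, np.ones((100, 1, 64, 64)), label_batch])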

