Unet Keras ValueError: Dimensions must be equal

I am using a U-Net model with input shape 512x512x1, but I have run into a problem: I get a ValueError: Dimensions must be equal. I suspect something is wrong with the input shape, and possibly with the model's loss function. How do I fix this?

X train shape: (512, 512, 1)
Y train shape: (512, 512, 1)

Model:

from keras.models import Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import concatenate, Conv2D, MaxPooling2D, Conv2DTranspose
from keras.layers import Input, merge, UpSampling2D, BatchNormalization
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
import tensorflow as tf

K.set_image_data_format('channels_last')

def dice_coef(y_true, y_pred):
    smooth = 0.005
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)

def unet_model():
    inputs = Input((512, 512, 1))

    # Contracting path (encoder)
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
    batch1 = BatchNormalization(axis=1)(conv1)
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(batch1)
    batch1 = BatchNormalization(axis=1)(conv1)
    pool1 = MaxPooling2D((2, 2))(batch1)

    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
    batch2 = BatchNormalization(axis=1)(conv2)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(batch2)
    batch2 = BatchNormalization(axis=1)(conv2)
    pool2 = MaxPooling2D((2, 2))(batch2)

    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
    batch3 = BatchNormalization(axis=1)(conv3)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(batch3)
    batch3 = BatchNormalization(axis=1)(conv3)
    pool3 = MaxPooling2D((2, 2))(batch3)

    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
    batch4 = BatchNormalization(axis=1)(conv4)
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(batch4)
    batch4 = BatchNormalization(axis=1)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(batch4)

    # Bottleneck
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(pool4)
    batch5 = BatchNormalization(axis=1)(conv5)
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(batch5)
    batch5 = BatchNormalization(axis=1)(conv5)

    # Expanding path (decoder) with skip connections
    up6 = Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(batch5)
    up6 = concatenate([up6, conv4], axis=1)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(up6)
    batch6 = BatchNormalization(axis=1)(conv6)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(batch6)
    batch6 = BatchNormalization(axis=1)(conv6)

    up7 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(batch6)
    up7 = concatenate([up7, conv3], axis=1)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(up7)
    batch7 = BatchNormalization(axis=1)(conv7)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(batch7)
    batch7 = BatchNormalization(axis=1)(conv7)

    up8 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(batch7)
    up8 = concatenate([up8, conv2], axis=1)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(up8)
    batch8 = BatchNormalization(axis=1)(conv8)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(batch8)
    batch8 = BatchNormalization(axis=1)(conv8)

    up9 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(batch8)
    up9 = concatenate([up9, conv1], axis=1)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(up9)
    batch9 = BatchNormalization(axis=1)(conv9)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(batch9)
    batch9 = BatchNormalization(axis=1)(conv9)

    conv10 = Conv2D(1, (1, 1), activation='sigmoid')(batch9)

    model = Model(inputs=[inputs], outputs=[conv10])
    model.compile(optimizer=Adam(lr=1e-4), loss=dice_coef_loss, metrics=[dice_coef])
    return model

model = unet_model()

Answer:

If you check the model summary (model.summary()), you will see that the output shape of the last layer is:

Layer (type)                 Output Shape             Param #    Connected to
================================================================================
......
conv2d_18 (Conv2D)           (None, 2560, 512, 1)     65         batch_normalization_17[0][0]

The output shape is (None, 2560, 512, 1), which does not match the expected output shape of (None, 512, 512, 1).

This mismatch is caused by your concatenate layers. You set axis=1, which concatenates along the second axis; with channels_last data that is the spatial height, so every skip connection stacks feature maps along the height instead of the channels, and the height keeps growing through the decoder (the final concatenation gives 2048 + 512 = 2560). You need to concatenate along the last axis, i.e. axis=-1, which is the default.
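To see the difference in isolation, here is a minimal sketch (an illustration only, not from the original post); tf.concat uses the same axis semantics as keras.layers.concatenate:

import tensorflow as tf

up = tf.zeros((1, 64, 64, 512))    # up-sampled decoder feature map
skip = tf.zeros((1, 64, 64, 512))  # matching encoder feature map

print(tf.concat([up, skip], axis=1).shape)   # (1, 128, 64, 512): stacked along height, wrong here
print(tf.concat([up, skip], axis=-1).shape)  # (1, 64, 64, 1024): stacked along channels, what U-Net needs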

So remove the axis argument from your concatenate layers to fall back to the default, or set axis=-1 in all of them.

For example:

#up6 = concatenate([up6, conv4], axis=1) # remove axis
up6 = concatenate([up6, conv4])
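Applied to all four skip connections inside unet_model(), the calls become the following (the commented shapes are what each concatenation should then produce; they are inferred from the architecture, not taken from the original post):

up6 = concatenate([up6, conv4])   # (None, 64, 64, 1024)
up7 = concatenate([up7, conv3])   # (None, 128, 128, 512)
up8 = concatenate([up8, conv2])   # (None, 256, 256, 256)
up9 = concatenate([up9, conv1])   # (None, 512, 512, 128)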

The last layer will then have the same shape as your input and y_train:

Layer (type)                 Output Shape             Param #    Connected to
================================================================================
......
conv2d_18 (Conv2D)           (None, 512, 512, 1)      65         batch_normalization_17[0][0]
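As a quick programmatic sanity check after the fix (a sketch; model.output_shape is the standard Keras attribute for a single-output model):

model = unet_model()
print(model.output_shape)                        # (None, 512, 512, 1)
assert model.output_shape == (None, 512, 512, 1)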

