Unet Keras ValueError: Dimensions must be equal

I am using a U-Net model with a 512x512x1 input shape, but I have run into a problem: I get a ValueError: Dimensions must be equal. I know something is wrong with the shapes and I need to fix it; the model's loss function may also be at fault.

X train shape: (512, 512, 1)
Y train shape: (512, 512, 1)

Model:

from keras.models import Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import concatenate, Conv2D, MaxPooling2D, Conv2DTranspose
from keras.layers import Input, merge, UpSampling2D, BatchNormalization
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
import tensorflow as tf

K.set_image_data_format('channels_last')

def dice_coef(y_true, y_pred):
    smooth = 0.005
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)

def unet_model():
    inputs = Input((512, 512, 1))

    # Contracting path
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
    batch1 = BatchNormalization(axis=1)(conv1)
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(batch1)
    batch1 = BatchNormalization(axis=1)(conv1)
    pool1 = MaxPooling2D((2, 2))(batch1)

    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
    batch2 = BatchNormalization(axis=1)(conv2)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(batch2)
    batch2 = BatchNormalization(axis=1)(conv2)
    pool2 = MaxPooling2D((2, 2))(batch2)

    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
    batch3 = BatchNormalization(axis=1)(conv3)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(batch3)
    batch3 = BatchNormalization(axis=1)(conv3)
    pool3 = MaxPooling2D((2, 2))(batch3)

    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
    batch4 = BatchNormalization(axis=1)(conv4)
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(batch4)
    batch4 = BatchNormalization(axis=1)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(batch4)

    # Bottleneck
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(pool4)
    batch5 = BatchNormalization(axis=1)(conv5)
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(batch5)
    batch5 = BatchNormalization(axis=1)(conv5)

    # Expanding path with skip connections
    up6 = Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(batch5)
    up6 = concatenate([up6, conv4], axis=1)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(up6)
    batch6 = BatchNormalization(axis=1)(conv6)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(batch6)
    batch6 = BatchNormalization(axis=1)(conv6)

    up7 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(batch6)
    up7 = concatenate([up7, conv3], axis=1)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(up7)
    batch7 = BatchNormalization(axis=1)(conv7)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(batch7)
    batch7 = BatchNormalization(axis=1)(conv7)

    up8 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(batch7)
    up8 = concatenate([up8, conv2], axis=1)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(up8)
    batch8 = BatchNormalization(axis=1)(conv8)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(batch8)
    batch8 = BatchNormalization(axis=1)(conv8)

    up9 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(batch8)
    up9 = concatenate([up9, conv1], axis=1)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(up9)
    batch9 = BatchNormalization(axis=1)(conv9)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(batch9)
    batch9 = BatchNormalization(axis=1)(conv9)

    conv10 = Conv2D(1, (1, 1), activation='sigmoid')(batch9)

    model = Model(inputs=[inputs], outputs=[conv10])
    model.compile(optimizer=Adam(lr=1e-4), loss=dice_coef_loss, metrics=[dice_coef])
    return model

model = unet_model()

Answer:

If you check the model summary (model.summary()), you will see that the output shape of the last layer is:

Layer (type)                 Output Shape           Param #     Connected to
================================================================================
......
conv2d_18 (Conv2D)           (None, 2560, 512, 1)   65          batch_normalization_17[0][0]

The output shape is (None, 2560, 512, 1), which does not match the expected output shape (None, 512, 512, 1).

This mismatch is caused by your concatenate layers. You set axis=1 in them, which joins the tensors along the second axis (the height axis, since your data format is channels_last), but a U-Net skip connection should concatenate along the last axis, i.e. axis=-1, which is the default.
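To see the difference concretely, here is a minimal sketch in plain NumPy rather than Keras, with shapes chosen to match the up6/conv4 skip connection in the model above:

import numpy as np

# channels_last layout: (batch, height, width, channels)
up = np.zeros((1, 64, 64, 512))    # e.g. the Conv2DTranspose output feeding up6
skip = np.zeros((1, 64, 64, 512))  # e.g. the matching encoder feature map conv4

print(np.concatenate([up, skip], axis=1).shape)   # (1, 128, 64, 512): heights stack, the image grows
print(np.concatenate([up, skip], axis=-1).shape)  # (1, 64, 64, 1024): channels stack, as U-Net intends

With axis=1, every skip connection doubles the height instead of the channel count, which is how the final output ends up 2560 pixels tall.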

So remove the axis argument from your concatenate layers so that the default applies, or set axis=-1 in all of them.

For example:

#up6 = concatenate([up6, conv4], axis=1) # remove axis
up6 = concatenate([up6, conv4])
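Applied to all four skip connections in unet_model, the corrected lines are:

up6 = concatenate([up6, conv4])
up7 = concatenate([up7, conv3])
up8 = concatenate([up8, conv2])
up9 = concatenate([up9, conv1])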

Then the shape of the last layer will match your input and your y_train:

Layer (type)                 Output Shape           Param #     Connected to
================================================================================
......
conv2d_18 (Conv2D)           (None, 512, 512, 1)    65          batch_normalization_17[0][0]
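As a quick sanity check before training, you can also assert the output shape programmatically; a minimal sketch, assuming the corrected unet_model above:

model = unet_model()
model.summary()  # the last Conv2D should now report (None, 512, 512, 1)
assert model.output_shape == (None, 512, 512, 1)  # matches the (512, 512, 1) masks in y_train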
