What is the difference between the UpSampling2D and Conv2DTranspose functions in Keras?

In this code, UpSampling2D and Conv2DTranspose seem to be used interchangeably. I would like to know why that is.

# u-net model with up-convolution or up-sampling and weighted binary-crossentropy as loss func
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate, Conv2DTranspose, BatchNormalization, Dropout
from keras.optimizers import Adam
from keras.utils import plot_model
from keras import backend as K

def unet_model(n_classes=5, im_sz=160, n_channels=8, n_filters_start=32, growth_factor=2, upconv=True,
               class_weights=[0.2, 0.3, 0.1, 0.1, 0.3]):
    droprate = 0.25
    n_filters = n_filters_start
    inputs = Input((im_sz, im_sz, n_channels))
    #inputs = BatchNormalization()(inputs)

    # contracting (encoder) path
    conv1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    #pool1 = Dropout(droprate)(pool1)

    n_filters *= growth_factor
    pool1 = BatchNormalization()(pool1)
    conv2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    pool2 = Dropout(droprate)(pool2)

    n_filters *= growth_factor
    pool2 = BatchNormalization()(pool2)
    conv3 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    pool3 = Dropout(droprate)(pool3)

    n_filters *= growth_factor
    pool3 = BatchNormalization()(pool3)
    conv4_0 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool3)
    conv4_0 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv4_0)
    pool4_1 = MaxPooling2D(pool_size=(2, 2))(conv4_0)
    pool4_1 = Dropout(droprate)(pool4_1)

    n_filters *= growth_factor
    pool4_1 = BatchNormalization()(pool4_1)
    conv4_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool4_1)
    conv4_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv4_1)
    pool4_2 = MaxPooling2D(pool_size=(2, 2))(conv4_1)
    pool4_2 = Dropout(droprate)(pool4_2)

    # bottleneck
    n_filters *= growth_factor
    conv5 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool4_2)
    conv5 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv5)

    # expanding (decoder) path: learned up-convolution or fixed up-sampling,
    # selected by the upconv flag
    n_filters //= growth_factor
    if upconv:
        up6_1 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv5), conv4_1])
    else:
        up6_1 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4_1])
    up6_1 = BatchNormalization()(up6_1)
    conv6_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up6_1)
    conv6_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv6_1)
    conv6_1 = Dropout(droprate)(conv6_1)

    n_filters //= growth_factor
    if upconv:
        up6_2 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv6_1), conv4_0])
    else:
        up6_2 = concatenate([UpSampling2D(size=(2, 2))(conv6_1), conv4_0])
    up6_2 = BatchNormalization()(up6_2)
    conv6_2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up6_2)
    conv6_2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv6_2)
    conv6_2 = Dropout(droprate)(conv6_2)

    n_filters //= growth_factor
    if upconv:
        up7 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv6_2), conv3])
    else:
        up7 = concatenate([UpSampling2D(size=(2, 2))(conv6_2), conv3])
    up7 = BatchNormalization()(up7)
    conv7 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up7)
    conv7 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv7)
    conv7 = Dropout(droprate)(conv7)

    n_filters //= growth_factor
    if upconv:
        up8 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv7), conv2])
    else:
        up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2])
    up8 = BatchNormalization()(up8)
    conv8 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up8)
    conv8 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv8)
    conv8 = Dropout(droprate)(conv8)

    n_filters //= growth_factor
    if upconv:
        up9 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv8), conv1])
    else:
        up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1])
    conv9 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up9)
    conv9 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv9)
    conv10 = Conv2D(n_classes, (1, 1), activation='sigmoid')(conv9)

    model = Model(inputs=inputs, outputs=conv10)

    # per-class log-losses, weighted by class_weights
    def weighted_binary_crossentropy(y_true, y_pred):
        class_loglosses = K.mean(K.binary_crossentropy(y_true, y_pred), axis=[0, 1, 2])
        return K.sum(class_loglosses * K.constant(class_weights))

    model.compile(optimizer=Adam(), loss=weighted_binary_crossentropy)
    return model
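For context, the two layers can be swapped in this network because both double the spatial dimensions of their input, which is all that concatenate() requires along the skip connections; only the channel counts differ. A minimal shape check, using hypothetical sizes that are not part of the original model:

# A sketch with hypothetical sizes (not from the original model): both layers
# double the spatial dimensions; only the channel counts differ.
from keras.models import Model
from keras.layers import Input, UpSampling2D, Conv2DTranspose

x = Input((10, 10, 64))
up = UpSampling2D(size=(2, 2))(x)
tr = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(x)

print(Model(x, up).output_shape)  # (None, 20, 20, 64) -- channels preserved
print(Model(x, tr).output_shape)  # (None, 20, 20, 32) -- channels set by the layer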

Answer:

UpSampling2D simply scales the image up using nearest-neighbor or bilinear upsampling, so there is nothing smart about it. Its advantage is that it is cheap.
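To make that concrete, here is a minimal sketch (not from the original post) showing that UpSampling2D merely repeats each pixel and carries no trainable weights:

# A minimal sketch (not from the original post): UpSampling2D repeats pixels
# and has no trainable weights.
import numpy as np
from keras.models import Model
from keras.layers import Input, UpSampling2D

inp = Input((2, 2, 1))
m = Model(inp, UpSampling2D(size=(2, 2))(inp))  # nearest-neighbor by default

x = np.arange(4, dtype='float32').reshape(1, 2, 2, 1)
print(m.predict(x)[0, :, :, 0])
# [[0. 0. 1. 1.]
#  [0. 0. 1. 1.]
#  [2. 2. 3. 3.]
#  [2. 2. 3. 3.]]
print(len(m.trainable_weights))  # 0 -- there is nothing to learn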

Conv2DTranspose is a convolution operation whose kernel is learned while the model trains (just like a regular Conv2D operation). Conv2DTranspose also upsamples its input, but the key difference is that the model is supposed to learn the upsampling that works best for the task.
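By contrast, a minimal sketch (again not from the original post) shows that Conv2DTranspose yields the same doubled output size, but through a kernel with trainable parameters:

# A minimal sketch (not from the original post): Conv2DTranspose doubles the
# spatial size with strides=(2, 2), through a kernel that is trained.
from keras.models import Model
from keras.layers import Input, Conv2DTranspose

inp = Input((2, 2, 1))
m = Model(inp, Conv2DTranspose(1, (2, 2), strides=(2, 2), padding='same')(inp))

print(m.output_shape)            # (None, 4, 4, 1) -- same upsampled shape
print(len(m.trainable_weights))  # 2 -- a kernel and a bias, fitted during training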

Edit: a link to a nice visualization of transposed convolutions: https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d
