How can I make the output image the same size as the original image so that the loss can be computed in a CNN?

I defined a CNN autoencoder model as follows:

filters = (32, 16)

X = Input(shape = (32, 32, 3))

# encode
for f in filters:
    X = Conv2D(filters = f, kernel_size = (3, 3), activation = 'relu')(X)
    X = MaxPooling2D(pool_size = (2, 2), strides = (2, 2), padding = 'same')(X)
    X = BatchNormalization(axis = -1)(X)

# decode
for f in filters[::-1]:
    X = Conv2D(filters = f, kernel_size = (3, 3), activation = 'relu')(X)
    X = UpSampling2D(size = (2, 2))(X)
    X = BatchNormalization(axis = -1)(X)

The model summary looks like this:

Model: "functional_13"_________________________________________________________________Layer (type)                 Output Shape              Param #   =================================================================input_7 (InputLayer)         [(None, 32, 32, 3)]       0         _________________________________________________________________conv2d_24 (Conv2D)           (None, 30, 30, 32)        896       _________________________________________________________________max_pooling2d_12 (MaxPooling (None, 15, 15, 32)        0         _________________________________________________________________batch_normalization_24 (Batc (None, 15, 15, 32)        128       _________________________________________________________________conv2d_25 (Conv2D)           (None, 13, 13, 16)        4624      _________________________________________________________________max_pooling2d_13 (MaxPooling (None, 7, 7, 16)          0         _________________________________________________________________batch_normalization_25 (Batc (None, 7, 7, 16)          64        _________________________________________________________________conv2d_26 (Conv2D)           (None, 5, 5, 16)          2320      _________________________________________________________________up_sampling2d_12 (UpSampling (None, 10, 10, 16)        0         _________________________________________________________________batch_normalization_26 (Batc (None, 10, 10, 16)        64        _________________________________________________________________conv2d_27 (Conv2D)           (None, 8, 8, 32)          4640      _________________________________________________________________up_sampling2d_13 (UpSampling (None, 16, 16, 32)        0         _________________________________________________________________batch_normalization_27 (Batc (None, 16, 16, 32)        128       =================================================================Total params: 12,864Trainable params: 12,672Non-trainable params: 192_________________________________________________________________

Because the output image does not have the same dimensions as the input image, I get the following error:

InvalidArgumentError:  Incompatible shapes: [128,32,32,3] vs. [128,16,16,32]
     [[node mean_squared_error/SquaredDifference (defined at <ipython-input-7-a9683921f595>:83) ]] [Op:__inference_train_function_21329]
Function call stack:
train_function

so the loss function cannot be computed. Could you explain in detail how to fix this?


Answer:

I suggest using padding='same' in your convolutions: with the default padding='valid', every 3x3 convolution trims two pixels from each spatial dimension (32→30, 15→13, 7→5), so the decoder can only get back to 16x16 instead of 32x32. Also be careful not to overwrite your input layer with another variable, otherwise you cannot build the Model afterwards. Finally, you are missing a final output layer with the same number of channels as the input image:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization
from tensorflow.keras.models import Model

filters = (32, 16)

inp = Input(shape = (32, 32, 3))

# encode
X = inp
for f in filters:
    X = Conv2D(filters = f, kernel_size = (3, 3), padding = 'same', activation = 'relu')(X)
    X = MaxPooling2D(pool_size = (2, 2), strides = (2, 2), padding = 'same')(X)
    X = BatchNormalization(axis = -1)(X)

# decode
for f in filters[::-1]:
    X = Conv2D(filters = f, kernel_size = (3, 3), padding = 'same', activation = 'relu')(X)
    X = UpSampling2D(size = (2, 2))(X)
    X = BatchNormalization(axis = -1)(X)

# final output layer with the same number of channels as the input image
out = Conv2D(filters = 3, kernel_size = (3, 3), padding = 'same')(X)

model = Model(inp, out)

The model summary now looks like this:

Layer (type)                 Output Shape              Param #
=================================================================
input_9 (InputLayer)         [(None, 32, 32, 3)]       0
_________________________________________________________________
conv2d_22 (Conv2D)           (None, 32, 32, 32)        896
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 16, 16, 32)        0
_________________________________________________________________
batch_normalization_20 (Batc (None, 16, 16, 32)        128
_________________________________________________________________
conv2d_23 (Conv2D)           (None, 16, 16, 16)        4624
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 8, 8, 16)          0
_________________________________________________________________
batch_normalization_21 (Batc (None, 8, 8, 16)          64
_________________________________________________________________
conv2d_24 (Conv2D)           (None, 8, 8, 16)          2320
_________________________________________________________________
up_sampling2d_10 (UpSampling (None, 16, 16, 16)        0
_________________________________________________________________
batch_normalization_22 (Batc (None, 16, 16, 16)        64
_________________________________________________________________
conv2d_25 (Conv2D)           (None, 16, 16, 32)        4640
_________________________________________________________________
up_sampling2d_11 (UpSampling (None, 32, 32, 32)        0
_________________________________________________________________
batch_normalization_23 (Batc (None, 32, 32, 32)        128
_________________________________________________________________
conv2d_26 (Conv2D)           (None, 32, 32, 3)         867
=================================================================
Total params: 13,731
Trainable params: 13,539
Non-trainable params: 192
_________________________________________________________________
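To check that the shapes now line up, here is a minimal training sketch that builds on the model above. The dummy data, the adam optimizer, the batch size and the epoch count are placeholders for illustration, not part of the original answer; substitute your own training images.

import numpy as np

# Hypothetical setup, only to confirm that input and output shapes now match.
# Replace the random dummy data with your real images.
x_train = np.random.rand(128, 32, 32, 3).astype('float32')

model.compile(optimizer = 'adam', loss = 'mse')
model.summary()  # last layer outputs (None, 32, 32, 3), same as the input

# The autoencoder reconstructs its own input, so the images serve as both x and y.
model.fit(x_train, x_train, epochs = 5, batch_size = 128)

If your pixel values are scaled to [0, 1], adding activation = 'sigmoid' to the final Conv2D is a common choice so that the reconstruction stays in the same range.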
