I am new to TensorFlow and Keras. I have been building a dilated ResNet and want to add instance normalization to one of its layers, but it keeps throwing an error and I can't get it to work.
I am using TensorFlow 1.15 and Keras 2.1. I commented out the BatchNormalization part, which works fine, and tried to add instance normalization instead, but the module cannot be found.
Any advice is much appreciated.
from keras.layers import Conv2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Nadam, Adam
from keras.layers import Input, Dense, Reshape, Activation, Flatten, Embedding, Dropout, Lambda, add, concatenate, Concatenate, ConvLSTM2D, LSTM, average, MaxPooling2D, multiply, MaxPooling3D
from keras.layers import GlobalAveragePooling2D, Permute
from keras.layers.advanced_activations import LeakyReLU, PReLU
from keras.layers.convolutional import UpSampling2D, Conv2D, Conv1D
from keras.models import Sequential, Model
from keras.utils import multi_gpu_model
from keras.utils.generic_utils import Progbar
from keras.constraints import maxnorm
from keras.activations import tanh, softmax
from keras import metrics, initializers, utils, regularizers
import tensorflow as tf
import numpy as np
import math
import os
import sys
import random
import keras.backend as K

epsilon = K.epsilon()

def basic_block_conv2D_norm_elu(filters, kernel_size,
                                kernel_regularizer=regularizers.l2(1e-4),
                                act_func="elu", normalize="Instance",
                                dropout='0.15', strides=1, use_bias=True,
                                kernel_initializer="he_normal",
                                _dilation_rate=0):
    def f(input):
        if kernel_regularizer == None:
            if _dilation_rate == 0:
                conv = Conv2D(filters=filters, kernel_size=kernel_size,
                              strides=strides, padding="same",
                              use_bias=use_bias)(input)
            else:
                conv = Conv2D(filters=filters, kernel_size=kernel_size,
                              strides=strides, padding="same",
                              use_bias=use_bias,
                              dilation_rate=_dilation_rate)(input)
        else:
            if _dilation_rate == 0:
                conv = Conv2D(filters=filters, kernel_size=kernel_size,
                              strides=strides,
                              kernel_initializer=kernel_initializer,
                              padding="same", use_bias=use_bias,
                              kernel_regularizer=kernel_regularizer)(input)
            else:
                conv = Conv2D(filters=filters, kernel_size=kernel_size,
                              strides=strides,
                              kernel_initializer=kernel_initializer,
                              padding="same", use_bias=use_bias,
                              kernel_regularizer=kernel_regularizer,
                              dilation_rate=_dilation_rate)(input)
        if dropout != None:
            dropout_layer = Dropout(0.15)(conv)
        if normalize == None and dropout != None:
            norm_layer = conv(dropout_layer)
        else:
            norm_layer = InstanceNormalization()(dropout_layer)
            # norm_layer = BatchNormalization()(dropout_layer)
        return Activation(act_func)(norm_layer)
    return f
Answer:
There is no such thing as InstanceNormalization(). In Keras you do not have a standalone InstanceNormalization layer. (This does not mean that you can't apply instance normalization, though.)
In Keras we have the tf.keras.layers.BatchNormalization layer, which can be used to apply any kind of normalization.
This layer takes the following arguments:
tf.keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer="zeros", gamma_initializer="ones", moving_mean_initializer="zeros", moving_variance_initializer="ones", beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, **kwargs)
Now you can change the axis argument to turn this into an InstanceNormalization layer, or any other kind of normalization layer.
The formulas for BatchNormalization and InstanceNormalization differ only in the axes over which the mean and variance are computed:
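For a channels-first tensor x of shape [B, C, H, W] (the layout discussed next), the standard definitions are:

\mu_c = \frac{1}{BHW}\sum_{b,h,w} x_{b,c,h,w}, \quad \sigma_c^2 = \frac{1}{BHW}\sum_{b,h,w} (x_{b,c,h,w} - \mu_c)^2 \quad \text{(BatchNorm)}

\mu_{b,c} = \frac{1}{HW}\sum_{h,w} x_{b,c,h,w}, \quad \sigma_{b,c}^2 = \frac{1}{HW}\sum_{h,w} (x_{b,c,h,w} - \mu_{b,c})^2 \quad \text{(InstanceNorm)}

In both cases the output is \hat{x} = \gamma \cdot (x - \mu) / \sqrt{\sigma^2 + \epsilon} + \beta, with learnable scale \gamma and offset \beta.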
Now, suppose you have a channels-first layout, i.e. [B, C, H, W].
If you want to compute BatchNormalization, you set the channel axis as the axis of the BatchNormalization() layer. In this case it will compute C means and standard deviations.
BatchNormalization layer: tf.keras.layers.BatchNormalization(axis=1)
If you want to compute InstanceNormalization, you set both the batch axis and the channel axis. In this case it will compute B*C means and standard deviations, as the sketch below illustrates.
InstanceNormalization layer: tf.keras.layers.BatchNormalization(axis=[0, 1])
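A minimal sketch of the two variants (the shape [4, 3, 8, 8] and the variable names are illustrative assumptions):

import tensorflow as tf

# Channels-first input: [B, C, H, W] = [4, 3, 8, 8]
x = tf.random.normal([4, 3, 8, 8])

# Batch norm: statistics per channel -> C = 3 means/stds
batch_norm = tf.keras.layers.BatchNormalization(axis=1)

# "Instance norm": statistics per sample AND per channel -> B*C = 12 means/stds.
# Note that passing the batch axis bakes the batch size into the layer's
# weights, so every batch must have the same size.
instance_norm = tf.keras.layers.BatchNormalization(axis=[0, 1])

# training=True forces the current batch's statistics (see Update 1 below)
y_bn = batch_norm(x, training=True)
y_in = instance_norm(x, training=True)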
Update 1
When using BatchNormalization this way, to make it behave as InstanceNormalization you must call the layer with training=True (i.e. keep training=1), so that it always normalizes with the statistics of the current batch instead of the moving averages.
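For example, in the functional API (a sketch; the fixed batch size and shapes are assumptions):

inputs = tf.keras.Input(shape=(3, 8, 8), batch_size=4)  # batch size must be fixed
x = tf.keras.layers.BatchNormalization(axis=[0, 1])(inputs, training=True)
model = tf.keras.Model(inputs, x)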
Update 2
Alternatively, you can use the built-in InstanceNormalization layer from TensorFlow Addons directly, as documented here:
https://www.tensorflow.org/addons/api_docs/python/tfa/layers/InstanceNormalization
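A minimal usage sketch (note that tensorflow-addons targets TF 2.x, so this assumes an upgraded environment; the shapes are illustrative):

# pip install tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa

inputs = tf.keras.Input(shape=(32, 32, 16))
# One mean/std per sample and per channel, with learnable gamma/beta
x = tfa.layers.InstanceNormalization(axis=-1, center=True, scale=True)(inputs)
x = tf.keras.layers.Activation("elu")(x)
model = tf.keras.Model(inputs, x)

Unlike the BatchNormalization workaround above, this layer keeps no moving averages and does not tie its weights to the batch size.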