I am trying to rewrite a Network In Network CNN from the Sequential model to the functional API. I am using the CIFAR-10 dataset. The Sequential model trains without problems, but the functional-API model gets stuck. I have probably missed something when rewriting the model.
Here is a reproducible example:

Dependencies:
from keras.models import Model, Sequential
from keras.layers import Input, Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Activation
from keras.utils import to_categorical
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from keras.datasets import cifar10
Loading the dataset:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train / 255.
x_test = x_test / 255.
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
input_shape = x_train[0,:,:,:].shape
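The `to_categorical` call above turns the integer class labels into one-hot vectors. A minimal numpy sketch of the same encoding (illustrative only, not Keras' actual implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Flatten (CIFAR-10 labels arrive with shape (n, 1)) and index
    # a zero matrix to place a single 1.0 per row.
    labels = np.asarray(labels).ravel().astype(int)
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

encoded = one_hot([3, 0, 9], 10)
print(encoded[0])  # 1.0 at index 3, zeros elsewhere
```

This is what makes `categorical_crossentropy` applicable below; with raw integer labels one would use `sparse_categorical_crossentropy` instead.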
Here is the working Sequential model:
model = Sequential()
# mlpconv block 1
model.add(Conv2D(32, (5, 5), activation='relu', padding='valid', input_shape=input_shape))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
# mlpconv block 2
model.add(Conv2D(64, (3, 3), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), activation='relu'))
model.add(Conv2D(64, (1, 1), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
# mlpconv block 3
model.add(Conv2D(128, (3, 3), activation='relu', padding='valid'))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(Conv2D(10, (1, 1), activation='relu'))
model.add(GlobalAveragePooling2D())
model.add(Activation('softmax'))
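The 1×1 convolutions inside each mlpconv block are the "network in network" idea: a small dense layer applied independently at every spatial position. A numpy sketch of that equivalence (shapes are made up for the demo):

```python
import numpy as np

# Illustrative feature map and 1x1-conv weights; sizes chosen arbitrarily.
h, w, c_in, c_out = 4, 4, 32, 10
rng = np.random.default_rng(0)
feature_map = rng.random((h, w, c_in))
weights = rng.random((c_in, c_out))

# A 1x1 convolution is one matrix multiply over the channel axis.
conv_1x1 = feature_map @ weights           # shape (h, w, c_out)

# The same result computed pixel by pixel with a dense product.
per_pixel = np.empty((h, w, c_out))
for i in range(h):
    for j in range(w):
        per_pixel[i, j] = feature_map[i, j] @ weights

assert np.allclose(conv_1x1, per_pixel)
```

This is also why the final `Conv2D(10, (1, 1))` followed by `GlobalAveragePooling2D` can replace a flatten-plus-dense classifier head: it produces one feature map per class and averages each into a single score.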
Compile and train:
model.compile(loss=categorical_crossentropy, optimizer=Adam(), metrics=['acc'])
_ = model.fit(x=x_train, y=y_train, batch_size=32, epochs=200, verbose=1, validation_split=0.2)
Within three epochs, this model reaches close to 50% validation accuracy.

Here is the same model rewritten with the functional API:
model_input = Input(shape=input_shape)
# mlpconv block 1
x = Conv2D(32, (5, 5), activation='relu', padding='valid')(model_input)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Dropout(0.5)(x)
# mlpconv block 2
x = Conv2D(64, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Dropout(0.5)(x)
# mlpconv block 3
x = Conv2D(128, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(10, (1, 1), activation='relu')(x)
x = GlobalAveragePooling2D()(x)
x = Activation(activation='softmax')(x)
model = Model(model_input, x, name='nin_cnn')
This model is then compiled with the same parameters as the Sequential one. During training, training accuracy stays stuck at 0.10, meaning the model is not improving and is effectively picking one of the 10 classes at random.

What am I missing in the rewrite? When calling model.summary(), the two models look identical, apart from the explicit Input layer in the functional-API model.
Answer:
Removing the activation from the last convolutional layer fixes the problem:
x = Conv2D(10, (1, 1))(x)
I am still not sure why the Sequential model trains fine with an activation on that layer.
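One plausible mechanism (a numpy sketch with made-up per-class scores, not a definitive account of the training dynamics): a ReLU directly ahead of softmax clips every negative class score to zero, which flattens the predicted distribution toward uniform (~0.10 per class) and also zeroes the gradient through the clipped units.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-class scores after global average pooling.
logits = np.array([-2.0, -1.5, -0.5, 0.1, -3.0, -1.0, -2.5, -0.2, -1.8, -0.9])

# With ReLU before softmax, every negative score collapses to zero,
# so the resulting distribution is nearly uniform.
p_relu = softmax(np.maximum(logits, 0.0))

# Without the ReLU, the raw scores still differentiate the classes.
p_raw = softmax(logits)
```

Here `p_relu` is much closer to uniform than `p_raw`, matching the 0.10 accuracy symptom described above.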