I'm running into an error while building the following CNN model:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, BatchNormalization, MaxPooling1D, Flatten, Dense

features_train = np.reshape(features_train, (2363, 2, -1))
features_test = np.reshape(features_test, (591, 2, -1))
features_train = np.array(features_train)
features_test = np.array(features_test)
print('Data Shape:', features_train.shape, features_test.shape)
print('Training & Testing Data:', features_train, features_test)

model_2 = Sequential()
model_2.add(Conv1D(256, kernel_size=1, activation='relu', input_shape=(2, 1)))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Conv1D(128, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Conv1D(64, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Conv1D(32, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Flatten())
model_2.add(Dense(4, kernel_initializer="uniform", activation='relu'))
model_2.add(Dense(1, kernel_initializer="uniform", activation='softmax'))
Output and error when running:
Data Shape: (2363, 2, 1) (591, 2, 1)
Training & Testing Data:
[[[0.5000063 ] [0.4999937 ]]
 [[0.5000012 ] [0.4999988 ]]
 [[0.50005335] [0.49994668]]
 ...
 [[0.50000364] [0.49999636]]
 [[0.5000013 ] [0.49999866]]
 [[0.49999487] [0.5000052 ]]]
[[[0.50000024] [0.4999998 ]]
 [[0.5000017 ] [0.49999833]]
 [[0.50003964] [0.49996033]]
 ...
 [[0.5000441 ] [0.4999559 ]]
 [[0.5       ] [0.5       ]]
 [[0.5000544 ] [0.4999456 ]]]
ValueError: Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling1d_1/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 1, 1], padding="VALID", strides=[1, 2, 1, 1]](max_pooling1d_1/ExpandDims)' with input shapes: [?,1,1,128].
The data I'm trying to feed in is features_train with shape (2363, 2, 1). I believe the problem is with the input shape and dimensions. I'm new to neural networks, so any help would be greatly appreciated. Thanks.
Answer:
MaxPooling1D halves the temporal dimension (its default pool_size is 2). Since your input has length 2, the first pooling layer already outputs length 1, and the pooling layers after it cannot work because a length of 1 cannot be halved again; that is exactly what the "Negative dimension size caused by subtracting 2 from 1" error is reporting. So your model cannot have more than one pooling layer. Beyond that, I would not recommend using MaxPooling1D layers on such a small input at all.
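To make the shape arithmetic concrete, here is a minimal sketch (the all-zeros dummy input is just for illustration) showing the time axis collapsing to 1 after a single pooling layer:

import numpy as np
from tensorflow.keras.layers import MaxPooling1D

# One sample, 2 time steps, 1 channel -- the same shape as input_shape=(2, 1).
x = np.zeros((1, 2, 1), dtype="float32")

y = MaxPooling1D()(x)   # default pool_size=2 halves the time axis
print(y.shape)          # (1, 1, 1)

# A second MaxPooling1D would need at least 2 time steps, but only 1 is left,
# which is what triggers the "subtracting 2 from 1" ValueError when the model is built.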
Also, your last layer uses a single unit with a softmax activation, which does not make sense: softmax normalizes the outputs so they sum to 1, so with only one unit it always returns exactly 1 no matter what the input is. For a single-unit output you should use sigmoid instead of softmax.
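You can verify this directly with a quick, model-independent check:

import tensorflow as tf

logit = tf.constant([[3.7]])  # any single raw output value
print(tf.keras.activations.softmax(logit).numpy())  # [[1.]] -- always 1, regardless of the logit
print(tf.keras.activations.sigmoid(logit).numpy())  # [[0.9759...]] -- a usable probability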
Your model should look something like this:
model_2 = Sequential()
model_2.add(Conv1D(64, kernel_size=1, activation='relu', input_shape=(2, 1)))
model_2.add(BatchNormalization())
model_2.add(Conv1D(32, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(Flatten())
model_2.add(Dense(10, kernel_initializer="uniform", activation='relu'))
model_2.add(Dense(1, kernel_initializer="uniform", activation='sigmoid'))
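With a sigmoid output, binary cross-entropy is the matching loss. A minimal sketch of compiling and training it (the optimizer and epoch count are arbitrary, and labels_train / labels_test are assumed names for your 0/1 target arrays; substitute your own):

model_2.compile(optimizer='adam',
                loss='binary_crossentropy',
                metrics=['accuracy'])

# labels_train / labels_test assumed to be 0/1 arrays aligned with the features.
model_2.fit(features_train, labels_train,
            validation_data=(features_test, labels_test),
            epochs=50, batch_size=32)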