My Keras model doesn't seem to learn anything, and I can't figure out why. Even when I shrink the training set to just 5 samples, the model still fails to fit the training data.
Here is my code:
import keras
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(30, filter_length=3, activation='relu', input_shape=(50, 1)))
model.add(Conv1D(40, filter_length=3, activation='relu'))
model.add(Conv1D(120, filter_length=3, activation='relu'))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='relu'))
model.summary()
model.compile(loss='mse', optimizer=keras.optimizers.adam())

train_limit = 5
batch_size = 4096

tb = keras.callbacks.TensorBoard(log_dir='./logs/' + run_name + '/',
                                 histogram_freq=0, write_images=False)
tb.set_model(model)

model.fit(X_train[:train_limit], y_train[:train_limit],
          batch_size=batch_size, nb_epoch=10**4, verbose=0,
          validation_data=(X_val[:train_limit], y_val[:train_limit]),
          callbacks=[tb])

score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score)
print('Test accuracy:', score)
Any help would be greatly appreciated!
Answer:
This looks like a regression problem, yet the activation on your last layer is still ReLU. I would drop the ReLU from the final layer and use a linear output instead: with a ReLU output, any sample whose pre-activation goes negative is clamped to 0 and receives zero gradient, so the network can get stuck and fail to fit even a 5-sample training set.
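For MSE regression the output layer is usually left with no activation. Below is a minimal sketch of the corrected model definition, keeping your layer sizes but written against the Keras 2 API (kernel_size instead of filter_length); adapt the argument names to whatever Keras version you are running:

from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

model = Sequential()
# Convolutional feature extractor, same sizes as in your code
model.add(Conv1D(30, kernel_size=3, activation='relu', input_shape=(50, 1)))
model.add(Conv1D(40, kernel_size=3, activation='relu'))
model.add(Conv1D(120, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(32, activation='relu'))
# Linear output: no activation, so negative targets are reachable and the
# gradient does not vanish when the pre-activation goes negative.
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

With a linear output the model should be able to memorize a 5-sample training set after enough epochs.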