I built a multi-output neural network in Keras, following the code from the book "Hands-On Machine Learning with Scikit-Learn and TensorFlow". However, I always get nan as the loss. How can I fix this?
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
    housing.data, housing.target)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train_full, y_train_full)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)

X_train_A, X_train_B = X_train[:, :5], X_train[:, 2:]
X_valid_A, X_valid_B = X_valid[:, :5], X_valid[:, 2:]
X_test_A, X_test_B = X_test[:, :5], X_test[:, 2:]
X_new_A, X_new_B = X_test_A[:3], X_test_B[:3]

input_A = keras.layers.Input(shape=[5], name="wide_input")
input_B = keras.layers.Input(shape=[6], name="deep_input")
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1, name="main_output")(concat)
aux_output = keras.layers.Dense(1, name="aux_output")(hidden2)
model = keras.models.Model(inputs=[input_A, input_B],
                           outputs=[output, aux_output])
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1], optimizer="sgd")
history = model.fit(
    [X_train_A, X_train_B], [y_train, y_train], epochs=20,
    validation_data=([X_valid_A, X_valid_B], [y_valid, y_valid]))
Output:
Train on 11610 samples, validate on 3870 samples
Epoch 1/20
11610/11610 [==============================] - 6s 525us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 2/20
11610/11610 [==============================] - 4s 336us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 3/20
11610/11610 [==============================] - 5s 428us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 4/20
11610/11610 [==============================] - 5s 424us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 5/20
11610/11610 [==============================] - 5s 414us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 6/20
11610/11610 [==============================] - 5s 400us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 7/20
11610/11610 [==============================] - 5s 392us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 8/20
11610/11610 [==============================] - 5s 405us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 9/20
11610/11610 [==============================] - 4s 369us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 10/20
11610/11610 [==============================] - 5s 405us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 11/20
11610/11610 [==============================] - 5s 423us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 12/20
11610/11610 [==============================] - 5s 454us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 13/20
11610/11610 [==============================] - 4s 380us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 14/20
11610/11610 [==============================] - 5s 446us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 15/20
11610/11610 [==============================] - 5s 411us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 16/20
11610/11610 [==============================] - 5s 457us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 17/20
11610/11610 [==============================] - 5s 415us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 18/20
11610/11610 [==============================] - 5s 411us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 19/20
11610/11610 [==============================] - 5s 388us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Epoch 20/20
11610/11610 [==============================] - 4s 363us/sample - loss: nan - main_output_loss: nan - aux_output_loss: nan - val_loss: nan - val_main_output_loss: nan - val_aux_output_loss: nan
Answer:
As explained in the comments, NaN losses are usually caused by exploding gradients, i.e. a learning rate that is too high or a similar instability in the optimization. You can prevent this by setting clipnorm on an optimizer with a suitable learning rate:
opt = keras.optimizers.Adam(0.001, clipnorm=1.)
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1], optimizer=opt)
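For intuition: clipnorm rescales any gradient tensor whose L2 norm exceeds the given threshold, so no single update step can blow up. A minimal sketch of the underlying operation, using tf.clip_by_norm directly just for illustration:

import tensorflow as tf

# A gradient with L2 norm 50 is rescaled to norm 1;
# gradients already below the threshold pass through unchanged.
g = tf.constant([30.0, 40.0])
print(tf.clip_by_norm(g, clip_norm=1.0).numpy())  # [0.6 0.8]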
which trains much better in your notebook:
Epoch 1/20
363/363 [==============================] - 1s 2ms/step - loss: 1547.7197 - main_output_loss: 967.1940 - aux_output_loss: 6772.4609 - val_loss: 19.9807 - val_main_output_loss: 20.0967 - val_aux_output_loss: 18.9365
Epoch 2/20
363/363 [==============================] - 1s 2ms/step - loss: 13.2916 - main_output_loss: 14.0150 - aux_output_loss: 6.7812 - val_loss: 14.6868 - val_main_output_loss: 14.5820 - val_aux_output_loss: 15.6298
Epoch 3/20
363/363 [==============================] - 1s 2ms/step - loss: 11.0539 - main_output_loss: 11.6683 - aux_output_loss: 5.5244 - val_loss: 10.5564 - val_main_output_loss: 10.2116 - val_aux_output_loss: 13.6594
Epoch 4/20
363/363 [==============================] - 1s 1ms/step - loss: 7.4646 - main_output_loss: 7.7688 - aux_output_loss: 4.7269 - val_loss: 13.2672 - val_main_output_loss: 11.5239 - val_aux_output_loss: 28.9570
Epoch 5/20
363/363 [==============================] - 1s 2ms/step - loss: 5.6873 - main_output_loss: 5.8091 - aux_output_loss: 4.5909 - val_loss: 5.0464 - val_main_output_loss: 4.5089 - val_aux_output_loss: 9.8839
The performance is not stellar, but it gives you a working baseline from which to tune all the hyperparameters until you reach satisfactory results.
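One likely contributor to the instability is also worth noting: your code fits a StandardScaler and computes X_train_scaled, but then slices the unscaled X_train into the wide and deep inputs. Feeding the standardized features instead often stabilizes training on its own; a minimal sketch, reusing the scaler you already fitted above:

# Slice the *scaled* arrays instead of the raw ones
X_train_A, X_train_B = X_train_scaled[:, :5], X_train_scaled[:, 2:]
X_valid_A, X_valid_B = X_valid_scaled[:, :5], X_valid_scaled[:, 2:]
X_test_A, X_test_B = X_test_scaled[:, :5], X_test_scaled[:, 2:]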
You can also observe the effect of clipnorm with plain SGD, as you originally intended:
opt = keras.optimizers.SGD(0.001, clipnorm=1.)
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1], optimizer=opt)
This trains fine. However, as soon as you remove clipnorm, the loss goes back to NaN.
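Finally, if you would rather have training stop as soon as the loss diverges instead of running all 20 epochs on nan, Keras provides a TerminateOnNaN callback. A small sketch of attaching it to your existing fit call:

history = model.fit(
    [X_train_A, X_train_B], [y_train, y_train], epochs=20,
    validation_data=([X_valid_A, X_valid_B], [y_valid, y_valid]),
    callbacks=[keras.callbacks.TerminateOnNaN()])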