I'm building a house price prediction model with an RNN; the code is below. The dataset has no null values and is fully cleaned, but my loss and validation loss stay constant and high. How can I bring these values down?
A = dataset.drop(['price'], axis="columns")
B = dataset['price']

from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
A_scale = min_max_scaler.fit_transform(A)

from sklearn.model_selection import train_test_split
A_train, A_test, B_train, B_test = train_test_split(A_scale, B, test_size=0.3)
a_val, a_test, b_val, b_test = train_test_split(A_test, B_test, test_size=0.5)

from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout

regressor = Sequential()
model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
hist = model.fit(A_train, B_train, batch_size=32, epochs=4,
                 validation_data=(a_val, b_val))
Output:
Epoch 1/20
292/292 [==============================] - 0s 1ms/step - loss: 36314.9180 - mae: 111.9050 - val_loss: 23161.0312 - val_mae: 106.9015
Epoch 2/20
292/292 [==============================] - 0s 646us/step - loss: 36295.7930 - mae: 111.8202 - val_loss: 23160.9219 - val_mae: 106.9010
Epoch 3/20
292/292 [==============================] - 0s 715us/step - loss: 36295.7383 - mae: 111.8199 - val_loss: 23160.9121 - val_mae: 106.9009
Epoch 4/20
292/292 [==============================] - 0s 716us/step - loss: 36295.7422 - mae: 111.8199 - val_loss: 23160.9082 - val_mae: 106.9009
Answer:
This could mean a lot of things, but three come to mind:
- Tuning the learning rate is one of the most important hyperparameter choices. This link gives some background on learning rates 🙂
- Increasing the number of training epochs gives the model a chance to converge to a local minimum (see the early-stopping sketch after the code below, so you don't train past that point).
- If you're doing regression, use a linear activation on the output layer. A sigmoid squashes every prediction into (0, 1), so the network can never reach realistic price values, and the loss stays flat no matter how long you train.
To implement these changes, try the following code:
from keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Dense

LR = 0.001
EPOCHS = 100
BATCH_SIZE = 32

opt = Adam(learning_rate=LR)  # older Keras versions: Adam(lr=LR, decay=LR/EPOCHS)
model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(32, activation='relu'),
    Dense(1, activation='linear'),  # linear output for regression
])
model.compile(optimizer=opt, loss='mse', metrics=['mae'])
hist = model.fit(A_train, B_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
                 validation_data=(a_val, b_val))
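If you raise EPOCHS like this, early stopping keeps you from training long past convergence. Here's a minimal sketch, assuming the model above and your A_train/B_train/a_val/b_val splits; the patience value is an arbitrary example:

# Sketch: stop training once val_loss stops improving, and keep the best weights.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss',          # watch validation loss
                           patience=10,                 # epochs with no improvement before stopping
                           restore_best_weights=True)   # roll back to the best epoch

hist = model.fit(A_train, B_train,
                 batch_size=BATCH_SIZE,
                 epochs=EPOCHS,
                 validation_data=(a_val, b_val),
                 callbacks=[early_stop])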
My advice: experiment, make mistakes, read up on each hyperparameter and its effect, and try different combinations at every layer of the network.
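For example, a small learning-rate sweep is an easy place to start. This is only a sketch under the same assumptions as above; the build_model helper and the candidate rates are illustrative, not from the original code:

# Sketch: train the same architecture at a few learning rates and compare
# the best validation loss each one reaches.
from keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Dense

def build_model(learning_rate):
    model = Sequential([
        Dense(32, activation='relu', input_shape=(10,)),
        Dense(32, activation='relu'),
        Dense(1, activation='linear'),
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss='mse', metrics=['mae'])
    return model

for lr in [1e-2, 1e-3, 1e-4]:          # arbitrary candidate rates
    model = build_model(lr)
    hist = model.fit(A_train, B_train, batch_size=32, epochs=20,
                     validation_data=(a_val, b_val), verbose=0)
    print(lr, min(hist.history['val_loss']))   # best val_loss seen for this rate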