I am currently working on a machine-learning problem involving vehicle speed and steering angle, and I am trying to improve on my existing approach. I recently finished an XGBRegressor test whose accuracy on cross-validated data ranged from 88% to 95%. To push further, I started looking into LSTMs, since my data has a time-series dependency. Specifically, each record contains the current steering angle, the steering angle at the previous time step (x-1), the steering angle two steps back (x-2), and the difference between the current and previous values (x - (x-1)). The goal is to predict whether a value is "anomalous"; for example, if the angle jumps from 0.1 to 0.5 (on a 0-to-1 scale), that is considered anomalous. My earlier algorithm did very well at deciding whether an angle is anomalous. Unfortunately, my LSTM predicts the same value for every input. For example, here is its output:
    test_X = array([[[ 5.86925570e-01,  5.86426251e-01,  5.85832947e-01,  3.19300000e+03, -5.93304274e-04, -1.09262314e-03]],
                    [[ 5.86426251e-01,  5.85832947e-01,  5.85263908e-01,  3.19400000e+03, -5.69038950e-04, -1.16234322e-03]],
                    [[ 5.85832947e-01,  5.85263908e-01,  5.84801158e-01,  3.19500000e+03, -4.62749993e-04, -1.03178894e-03]],
                    ...,
                    [[ 4.58070203e-01,  4.57902738e-01,  4.64613980e-01,  6.38100000e+03,  6.71124195e-03,  6.54377704e-03]],
                    [[ 4.57902738e-01,  4.64613980e-01,  7.31314846e-01,  6.38200000e+03,  2.66700866e-01,  2.73412108e-01]],
                    [[ 4.64613980e-01,  7.31314846e-01,  4.68819741e-01,  6.38300000e+03, -2.62495104e-01,  4.20576175e-03]]])

    test_y = array([0, 0, 0, ..., 0, 1, 0], dtype=int64)

    yhat = array([[-0.00068355],
                  [-0.00068355],
                  [-0.00068355],
                  ...,
                  [-0.00068355],
                  [-0.00068355],
                  [-0.00068355]], dtype=float32)
I have tried changing the number of epochs and the batch size, following suggestions I read online. I have also plotted some of the features to see whether any of them were giving the algorithm trouble, but found nothing. I am not new to machine learning, but I am new to deep learning, so please forgive me if this is a silly question or mistake. Here is my code:
    import pandas as pd
    from pandas import concat
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.model_selection import train_test_split
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    data = pd.read_csv('final_angles.csv')
    data.dropna(axis=0, subset=['steering_angle'], inplace=True)

    # Scale the steering angle into the 0-1 range.
    scaler = MinMaxScaler()
    data['steering_angle'] = scaler.fit_transform(data[['steering_angle']])

    y = data.flag  # Set y to the value we want to predict, the 'flag' value.
    X = data.drop(['flag', 'frame_id'], axis=1)

    # Build lagged copies of the features: x-2, x-1 and x.
    X = concat([X.shift(2), X.shift(1), X], axis=1)
    X.columns = ['angle-2', 'id2', 'angle-1', 'id1', 'steering_angle', 'id']
    X = X.drop(['id2', 'id1'], axis=1)

    # Differences between the current angle and the angles one and two steps back.
    X['diff'] = 0
    X['diff2'] = 0
    for index, row in X.iterrows():
        if index <= 1:
            pass
        else:
            X.loc[index, "diff"] = row['steering_angle'] - X['steering_angle'][index - 1]
            X.loc[index, "diff2"] = row['steering_angle'] - X['steering_angle'][index - 2]

    # Drop the first two rows, which have no lagged values.
    X = X.iloc[2:]
    y = y.iloc[2:]

    train_X, test_X, train_y, test_y = train_test_split(
        X.as_matrix(), y.as_matrix(), test_size=0.5, shuffle=False)

    # Reshape input to be 3D [samples, timesteps, features].
    train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
    test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
    print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)

    model = Sequential()
    model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')

    # Fit network.
    history = model.fit(train_X, train_y, epochs=50, batch_size=150,
                        validation_data=(test_X, test_y), verbose=2, shuffle=False)
    yhat = model.predict(test_X)
Instead of the predicted values being
    array([[-0.00068355],
           [-0.00068355],
           [-0.00068355],
           ...,
           [-0.00068355],
           [-0.00068355],
           [-0.00068355]], dtype=float32)
I was expecting something more like
    array([-0.00065207, -0.00065207, -0.00065207,  1.0082773 ,  0.01269123,
            0.01873571, -0.00065207, -0.00065207,  0.99916965,  0.002684  ,
           -0.00018287, -0.00065207, -0.00065207, -0.00065207, -0.00065207,
            1.0021645 ,  0.00654274,  0.01044858, -0.0002622 , -0.0002622 ],
          dtype=float32)
which is what the XGBRegressor test mentioned earlier produces.
Any help would be greatly appreciated; let me know if more code/information is needed.
Edit: results of the print statements
    Train on 3190 samples, validate on 3191 samples
    Epoch 1/50 - 5s - loss: 0.4268 - val_loss: 0.2820
    Epoch 2/50 - 0s - loss: 0.2053 - val_loss: 0.1256
    Epoch 3/50 - 0s - loss: 0.1442 - val_loss: 0.1256
    Epoch 4/50 - 0s - loss: 0.1276 - val_loss: 0.1198
    Epoch 5/50 - 0s - loss: 0.1256 - val_loss: 0.1179
    Epoch 6/50 - 0s - loss: 0.1250 - val_loss: 0.1188
    Epoch 7/50 - 0s - loss: 0.1258 - val_loss: 0.1183
    Epoch 8/50 - 1s - loss: 0.1258 - val_loss: 0.1199
    Epoch 9/50 - 0s - loss: 0.1256 - val_loss: 0.1179
    Epoch 10/50 - 0s - loss: 0.1255 - val_loss: 0.1192
    Epoch 11/50 - 0s - loss: 0.1247 - val_loss: 0.1180
    Epoch 12/50 - 0s - loss: 0.1254 - val_loss: 0.1185
    Epoch 13/50 - 0s - loss: 0.1252 - val_loss: 0.1176
    Epoch 14/50 - 0s - loss: 0.1258 - val_loss: 0.1197
    Epoch 15/50 - 0s - loss: 0.1251 - val_loss: 0.1175
    Epoch 16/50 - 0s - loss: 0.1253 - val_loss: 0.1176
    Epoch 17/50 - 0s - loss: 0.1247 - val_loss: 0.1183
    Epoch 18/50 - 0s - loss: 0.1249 - val_loss: 0.1178
    Epoch 19/50 - 0s - loss: 0.1253 - val_loss: 0.1178
    Epoch 20/50 - 0s - loss: 0.1253 - val_loss: 0.1181
    Epoch 21/50 - 0s - loss: 0.1245 - val_loss: 0.1192
    Epoch 22/50 - 0s - loss: 0.1250 - val_loss: 0.1187
    Epoch 23/50 - 0s - loss: 0.1244 - val_loss: 0.1184
    Epoch 24/50 - 0s - loss: 0.1252 - val_loss: 0.1188
    Epoch 25/50 - 0s - loss: 0.1253 - val_loss: 0.1197
    Epoch 26/50 - 0s - loss: 0.1253 - val_loss: 0.1192
    Epoch 27/50 - 0s - loss: 0.1267 - val_loss: 0.1177
    Epoch 28/50 - 0s - loss: 0.1256 - val_loss: 0.1182
    Epoch 29/50 - 0s - loss: 0.1247 - val_loss: 0.1178
    Epoch 30/50 - 0s - loss: 0.1249 - val_loss: 0.1183
    Epoch 31/50 - 0s - loss: 0.1259 - val_loss: 0.1189
    Epoch 32/50 - 0s - loss: 0.1258 - val_loss: 0.1187
    Epoch 33/50 - 0s - loss: 0.1248 - val_loss: 0.1179
    Epoch 34/50 - 0s - loss: 0.1259 - val_loss: 0.1203
    Epoch 35/50 - 0s - loss: 0.1252 - val_loss: 0.1190
    Epoch 36/50 - 0s - loss: 0.1260 - val_loss: 0.1192
    Epoch 37/50 - 0s - loss: 0.1249 - val_loss: 0.1183
    Epoch 38/50 - 0s - loss: 0.1249 - val_loss: 0.1187
    Epoch 39/50 - 0s - loss: 0.1252 - val_loss: 0.1185
    Epoch 40/50 - 0s - loss: 0.1246 - val_loss: 0.1183
    Epoch 41/50 - 0s - loss: 0.1247 - val_loss: 0.1179
    Epoch 42/50 - 0s - loss: 0.1242 - val_loss: 0.1194
    Epoch 43/50 - 0s - loss: 0.1255 - val_loss: 0.1187
    Epoch 44/50 - 0s - loss: 0.1244 - val_loss: 0.1176
    Epoch 45/50 - 0s - loss: 0.1248 - val_loss: 0.1183
    Epoch 46/50 - 0s - loss: 0.1257 - val_loss: 0.1179
    Epoch 47/50 - 0s - loss: 0.1248 - val_loss: 0.1177
    Epoch 48/50 - 0s - loss: 0.1247 - val_loss: 0.1194
    Epoch 49/50 - 0s - loss: 0.1248 - val_loss: 0.1181
    Epoch 50/50 - 0s - loss: 0.1245 - val_loss: 0.1182
Answer:
One possible problem could be your timesteps: you reshaped the input with timesteps = 1. If we want to take advantage of what an LSTM is good at, timesteps should be greater than 1, right?

Since you already have the steering angle at three consecutive time steps for each data point, you could try setting timesteps to 3.
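A minimal sketch of that change, assuming X and y are the feature DataFrame and flag Series built in your question (so 'angle-2', 'angle-1' and 'steering_angle' hold the angles at t-2, t-1 and t), might look like this:

    from sklearn.model_selection import train_test_split
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    # Assumption: X / y come from the preprocessing in the question.
    # Use the three consecutive angle columns as 3 timesteps of a single feature.
    angles = X[['angle-2', 'angle-1', 'steering_angle']].values
    X_seq = angles.reshape((angles.shape[0], 3, 1))  # [samples, timesteps=3, features=1]

    train_X, test_X, train_y, test_y = train_test_split(
        X_seq, y.values, test_size=0.5, shuffle=False)

    model = Sequential()
    model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))  # input_shape=(3, 1)
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')
    history = model.fit(train_X, train_y, epochs=50, batch_size=150,
                        validation_data=(test_X, test_y), verbose=2, shuffle=False)
    yhat = model.predict(test_X)

The rest of the training loop stays the same; the only difference is that each sample is now a short sequence of three angle values (shape (3, 1)) instead of a single step containing all features at once, which gives the LSTM an actual sequence to learn from.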