If I run the code below, I get an array where every predicted value is the same, as you can see:
Basically, the input to my regressor is a sequence of numbers 0, 1, 2, …, 99, and I expect the output to be 100. I do this in a sliding-window fashion (many times), as shown in the code. The code should be runnable as-is. What am I doing wrong, and why do the expected and actual results differ?
Here is the code:
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.layers import Dense
from keras.layers import LSTM
from keras.models import Sequential
from keras.layers import Dropout
from sklearn.preprocessing import MinMaxScaler
from datetime import datetime
from datetime import timedelta
from time import mktime

my_data = []
for i in range(0, 1000):
    my_data.append(i)

X_train = []
y_train = []
np_data = np.array(my_data)
for i in range(0, np_data.size - 100):
    X_train.append(np_data[i : i + 100])
    y_train.append(np_data[i + 100])

X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, [X_train.shape[0], X_train.shape[1], 1])

regressor = Sequential()
regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units=1))

regressor.compile(optimizer='adam', loss='mean_squared_error')
regressor.fit(X_train, y_train, epochs=5, batch_size=32)

X_test = []
y_test = []
my_data = []
for i in range(1000, 1500):
    my_data.append(i)
np_data = np.array(my_data)
for i in range(0, np_data.size - 100):
    X_test.append(np_data[i : i + 100])
    y_test.append(np_data[i + 100])

X_test = np.array(X_test)
X_test = np.reshape(X_test, [X_test.shape[0], X_test.shape[1], 1])
predicted = regressor.predict(X_test)

plt.plot(y_test, color='#ffd700', label="Real Data")
plt.plot(predicted, color='#1fb864', label="Predicted Data")
plt.title("Price Prediction")
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.legend()
plt.show()
Answer:
As I explained in the comments, this is a simple linear problem, so you can just use linear regression. If you want to stay with keras/tf, you can build a model with a single Dense layer. Here is working code:
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from keras import optimizers
from keras.layers import Dense
from keras.layers import LSTM
from keras.models import Sequential
from keras.layers import Dropout
from sklearn.preprocessing import MinMaxScaler
from datetime import datetime
from datetime import timedelta
from time import mktime

my_data = []
for i in range(0, 1000):
    my_data.append(i)

X_train = []
y_train = []
np_data = np.array(my_data)
for i in range(0, np_data.size - 100):
    X_train.append(np_data[i : i + 100])
    y_train.append(np_data[i + 100])

X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, [X_train.shape[0], X_train.shape[1]])

regressor = Sequential()
regressor.add(Dense(units=1, input_shape=(len(X_train[1]),)))
regressor.compile(optimizer=optimizers.Adam(learning_rate=0.1), loss='mean_squared_error')
regressor.fit(X_train, y_train, epochs=1000, batch_size=len(X_train))

X_test = []
y_test = []
my_data = []
for i in range(1000, 1500):
    my_data.append(i)
np_data = np.array(my_data)
for i in range(0, np_data.size - 100):
    X_test.append(np_data[i : i + 100])
    y_test.append(np_data[i + 100])

X_test = np.array(X_test)
X_test = np.reshape(X_test, [X_test.shape[0], X_test.shape[1]])
predicted = regressor.predict(X_test)

plt.plot(y_test, color='#ffd700', label="Real Data")
plt.plot(predicted, color='#1fb864', label="Predicted Data")
plt.title("Price Prediction")
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.legend()
plt.show()
The code above produces the desired predictions. Here are the changes I made:
- Changed the model to a single Dense layer since, as I explained, the relationship is linear
- Increased the batch size. This is purely to speed up training; you can reduce it, but then you also need to lower the learning rate and increase the number of epochs
- Increased the number of epochs to 1000. This data contains a lot of useless information; only the last value of each X actually matters, so it takes relatively many epochs to learn that. In practice, thousands or even tens of thousands of epochs are common with this kind of linear regression, since each epoch is very fast
- Reshaped the data to (samples, features), which is what a Dense layer expects
- Increased the learning rate, simply to learn faster
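The reshaping point in the list above is worth spelling out: Dense layers consume 2-D input, while LSTM layers consume 3-D input. A minimal sketch using a hypothetical toy array (not part of the code above):

```python
import numpy as np

# Toy data: 5 sliding windows of length 3 over the sequence 0..7
# (the same windowing idea as above, just smaller).
data = np.arange(8)
X = np.array([data[i:i + 3] for i in range(5)])

# Dense layers expect (samples, features).
X_dense = np.reshape(X, [X.shape[0], X.shape[1]])
print(X_dense.shape)  # (5, 3)

# LSTM layers expect (samples, timesteps, features).
X_lstm = np.reshape(X, [X.shape[0], X.shape[1], 1])
print(X_lstm.shape)   # (5, 3, 1)
```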
I changed only these things to prove my point; I did not tune the other parameters further. I'm sure you could add regularizers, adjust the learning rate, and so on to make it faster and simpler. But honestly, I don't think tuning those parameters is worth the time, because predicting a linear relationship is really not what deep learning is for.
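To underline that last point: since each target is just the last window value plus one, plain least squares solves the same problem in closed form, with no training loop at all. A minimal numpy sketch (my own illustration, not code from the answer), using the same 100-wide sliding windows:

```python
import numpy as np

# Same sliding-window data as in the answer: windows of 100 consecutive
# integers, target is the integer that follows each window.
data = np.arange(1000)
X = np.array([data[i:i + 100] for i in range(900)])
y = np.array([data[i + 100] for i in range(900)])

# Solve y = X @ w + b in closed form: append a bias column and run
# least squares. No epochs, no learning rate.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the value following 1000..1099; the exact continuation is 1100.
x_new = np.hstack([np.arange(1000, 1100), 1.0])
pred = x_new @ coef
print(pred)  # ≈ 1100
```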
Hope this helps. Feel free to comment if anything is still unclear 🙂