I'm just getting started with machine learning, so forgive me if this is a silly question. I'm using TensorFlow and Keras here.
Here is my code:
```python
import tensorflow as tf
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)

model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))
```
The output I get looks like this (I'm not showing all 500 lines, just the first 20 epochs):
```
Epoch 1/500
1/1 [==============================] - 0s 210ms/step - loss: 450.9794
Epoch 2/500
1/1 [==============================] - 0s 4ms/step - loss: 1603.0852
Epoch 3/500
1/1 [==============================] - 0s 10ms/step - loss: 5698.4731
Epoch 4/500
1/1 [==============================] - 0s 7ms/step - loss: 20256.3398
Epoch 5/500
1/1 [==============================] - 0s 10ms/step - loss: 72005.1719
Epoch 6/500
1/1 [==============================] - 0s 4ms/step - loss: 255956.5938
Epoch 7/500
1/1 [==============================] - 0s 3ms/step - loss: 909848.5000
Epoch 8/500
1/1 [==============================] - 0s 5ms/step - loss: 3234236.0000
Epoch 9/500
1/1 [==============================] - 0s 3ms/step - loss: 11496730.0000
Epoch 10/500
1/1 [==============================] - 0s 3ms/step - loss: 40867392.0000
Epoch 11/500
1/1 [==============================] - 0s 3ms/step - loss: 145271264.0000
Epoch 12/500
1/1 [==============================] - 0s 3ms/step - loss: 516395584.0000
Epoch 13/500
1/1 [==============================] - 0s 4ms/step - loss: 1835629312.0000
Epoch 14/500
1/1 [==============================] - 0s 3ms/step - loss: 6525110272.0000
Epoch 15/500
1/1 [==============================] - 0s 3ms/step - loss: 23194802176.0000
Epoch 16/500
1/1 [==============================] - 0s 3ms/step - loss: 82450513920.0000
Epoch 17/500
1/1 [==============================] - 0s 3ms/step - loss: 293086593024.0000
Epoch 18/500
1/1 [==============================] - 0s 5ms/step - loss: 1041834835968.0000
Epoch 19/500
1/1 [==============================] - 0s 3ms/step - loss: 3703408164864.0000
Epoch 20/500
1/1 [==============================] - 0s 3ms/step - loss: 13164500484096.0000
```
As you can see, the loss grows exponentially. It soon (around epoch 64) reaches `inf`, and from infinity it somehow turns into `NaN` (Not a Number). I thought the model would get better and better at recognizing the pattern over time — what is going on?
One thing I noticed: if I reduce the length of `xs` and `ys` from 20 to 10, the loss decreases and ends up at `7.9193e-05`. When I increase the length of both NumPy arrays to 18, the loss starts growing uncontrollably; otherwise everything works fine. I used 20 values because I figured the model would do better with more data — that's the only reason I gave it 20.
Answer:
Your learning rate (alpha) appears to be set too large.
Try a lower learning rate, like this:
```python
import tensorflow as tf
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])

# Set the optimizer explicitly; the SGD default learning rate is 0.01
opt = keras.optimizers.SGD(learning_rate=0.0001)
model.compile(optimizer=opt, loss="mean_squared_error")

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)

model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))
```
… and it will converge.
One reason ADAM may perform better is that it estimates the learning rate adaptively — I believe the A in ADAM stands for adaptive.
Edit: it does!
From https://arxiv.org/pdf/1412.6980.pdf:
The method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation.
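That update rule can be sketched in plain NumPy (a minimal sketch following the paper's notation; the no-bias model `y = w * x` and the 5000-step loop are simplifications of mine, not the original Keras setup):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: the step size adapts via moment estimates of the gradient."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (running mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for the zero init
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Fit y = 0.5 * x by minimizing MSE. The raw gradients here are large
# (they scale with mean(xs**2) = 143.5), but Adam's step stays on the
# order of lr, so it does not blow up the way plain SGD did above.
xs = np.arange(1.0, 21.0)
ys = xs / 2.0
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = np.mean(2 * (w * xs - ys) * xs)    # d(MSE)/dw
    w, m, v = adam_step(w, grad, m, v, t)
print(w)  # settles close to the true slope 0.5
```

Note how the effective step is `m_hat / sqrt(v_hat)`, which is roughly ±1 regardless of the gradient's magnitude — that is the "adaptive" part.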
```
Epoch 1/500
1/1 [==============================] - 0s 129ms/step - loss: 1.2133
Epoch 2/500
1/1 [==============================] - 0s 990us/step - loss: 1.1442
Epoch 3/500
1/1 [==============================] - 0s 0s/step - loss: 1.0792
Epoch 4/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0178
Epoch 5/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9599
Epoch 6/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9053
Epoch 7/500
1/1 [==============================] - 0s 0s/step - loss: 0.8538
Epoch 8/500
1/1 [==============================] - 0s 1ms/step - loss: 0.8053
Epoch 9/500
1/1 [==============================] - 0s 999us/step - loss: 0.7595
Epoch 10/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7163
...
Epoch 499/500
1/1 [==============================] - 0s 1ms/step - loss: 9.9431e-06
Epoch 500/500
1/1 [==============================] - 0s 999us/step - loss: 9.9420e-06
```
Edit 2:
With true/"vanilla" gradient descent (as opposed to stochastic gradient descent), you should see convergence at every step. If it starts to diverge, it is usually because the learning rate (alpha / step size) is too large, meaning the search "overshoots the target" in one or more dimensions.
Consider a loss function whose partial derivative/gradient forms a very narrow valley in one or more dimensions. One step "too far" can suddenly produce a huge error.
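That overshoot can be reproduced on this exact problem with plain full-batch gradient descent (a sketch using the same data, with the bias term dropped for simplicity). Each step multiplies the error `w - 0.5` by `1 - 2 * lr * mean(xs**2)`; with `mean(xs**2) = 143.5` the iteration is stable only for a learning rate below roughly `1/143.5 ≈ 0.007`, so the SGD default of 0.01 diverges. With only 10 points, `mean(xs**2) = 38.5` and the same default converges — which matches the behavior reported in the question:

```python
import numpy as np

xs = np.arange(1.0, 21.0)
ys = xs / 2.0

def fit_sgd(lr, steps=20):
    """Plain (full-batch) gradient descent on MSE for y = w * x, no bias."""
    w = 0.0
    for _ in range(steps):
        w -= lr * np.mean(2 * (w * xs - ys) * xs)  # d(MSE)/dw
    return w

# Error multiplier per step is 1 - 2 * lr * mean(xs**2) = 1 - 287 * lr:
print(fit_sgd(0.001))  # |1 - 0.287| < 1: converges toward 0.5
print(fit_sgd(0.01))   # |1 - 2.87| = 1.87 > 1: blows up exponentially
```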