I'm trying to write a program that computes the slope and intercept of a linear regression model, but when I run it for more than 10 iterations, my gradient descent function returns np.nan for both the intercept and the slope.
Here is my implementation:
def get_gradient_at_b(x, y, b, m):
    # Gradient of the mean squared error with respect to the intercept b.
    N = len(x)
    diff = 0
    for i in range(N):
        x_val = x[i]
        y_val = y[i]
        diff += (y_val - ((m * x_val) + b))
    b_gradient = -(2 / N) * diff
    return b_gradient

def get_gradient_at_m(x, y, b, m):
    # Gradient of the mean squared error with respect to the slope m.
    N = len(x)
    diff = 0
    for i in range(N):
        x_val = x[i]
        y_val = y[i]
        diff += x_val * (y_val - ((m * x_val) + b))
    m_gradient = -(2 / N) * diff
    return m_gradient

def step_gradient(b_current, m_current, x, y, learning_rate):
    # One gradient descent update for both parameters.
    b_gradient = get_gradient_at_b(x, y, b_current, m_current)
    m_gradient = get_gradient_at_m(x, y, b_current, m_current)
    b = b_current - (learning_rate * b_gradient)
    m = m_current - (learning_rate * m_gradient)
    return [b, m]

def gradient_descent(x, y, learning_rate, num_iterations):
    # Run num_iterations update steps starting from b = m = 0.
    b = 0
    m = 0
    for i in range(num_iterations):
        b, m = step_gradient(b, m, x, y, learning_rate)
    return [b, m]
I'm running it on the following data:
a = [3.87656018e+11, 4.10320300e+11, 4.15730874e+11, 4.52699998e+11,
     4.62146799e+11, 4.78965491e+11, 5.08068952e+11, 5.99592902e+11,
     6.99688853e+11, 8.08901077e+11, 9.20316530e+11, 1.20111177e+12,
     1.18695276e+12, 1.32394030e+12, 1.65661707e+12, 1.82304993e+12,
     1.82763786e+12, 1.85672212e+12, 2.03912745e+12, 2.10239081e+12,
     2.27422971e+12, 2.60081824e+12]
b = [3.3469950e+10, 3.4784980e+10, 3.3218720e+10, 3.6822490e+10,
     4.4560290e+10, 4.3826720e+10, 5.2719430e+10, 6.3842550e+10,
     8.3535940e+10, 1.0309053e+11, 1.2641405e+11, 1.6313218e+11,
     1.8529536e+11, 1.7875143e+11, 2.4981555e+11, 3.0596392e+11,
     3.0040058e+11, 3.1440530e+11, 3.1033848e+11, 2.6229109e+11,
     2.7585243e+11, 3.0352616e+11]

print(gradient_descent(a, b, 0.01, 100))
# result --> [nan, nan]
When I run the gradient_descent function on a dataset with smaller values, it returns the correct answer. I was also able to get the intercept and slope for the data above using LinearRegression from sklearn.linear_model. Any help figuring out why the result is [nan, nan] instead of the correct intercept and slope would be appreciated.
Answer:
You need to lower the learning rate. Because the values in a and b are very large (>= 1e11), the learning rate needs to be on the order of 1e-25 for gradient descent to work at all; otherwise the large gradients computed from a and b make every step overshoot and the parameters diverge. Concretely, with x values around 1e12 the slope gradient -(2/N) * sum(x_i * (y_i - (m * x_i + b))) starts out on the order of 1e23, so with learning_rate = 0.01 the first update already sends m to roughly 1e21. Each subsequent step multiplies its magnitude by roughly 1e22, so within about a dozen iterations the parameters overflow float64 to inf, and inf - inf then produces nan. That is why the failure only shows up once you run more than ~10 iterations.
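You can watch the blow-up directly by stepping manually with the original learning rate and printing the parameters as you go. This is a minimal diagnostic sketch reusing step_gradient and the lists a and b from the question; b_est and m_est are just local names introduced here:

# Step with the original learning rate (0.01): |m| grows by roughly a
# factor of 1e22 per step, overflows float64 to inf after about a dozen
# iterations, and inf - inf then yields nan.
b_est, m_est = 0, 0
for i in range(15):
    b_est, m_est = step_gradient(b_est, m_est, a, b, 0.01)
    print(i, b_est, m_est)

With a learning rate around 5e-25 instead, the updates stay at a sensible scale: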
b, m = gradient_descent(a, b, 5e-25, 100)
print(b, m)
Out: -3.7387067636195266e-13 0.13854551291084335
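Two caveats on that result. First, the line above rebinds the name b from the data list to the fitted intercept, so it only runs correctly once per session. Second, with a step this small the intercept barely moves from its initial 0 (each update to b is on the order of 1e-13), so -3.7e-13 is not a meaningful intercept. A common alternative to hand-tuning a tiny learning rate is to rescale the data before fitting and map the coefficients back afterwards. The helper below is a sketch, not part of the original answer: gradient_descent_scaled is a hypothetical name, the max-based scaling assumes positive data, and the default learning rate and iteration count may need tuning.

def gradient_descent_scaled(x, y, learning_rate=0.01, num_iterations=1000):
    # Hypothetical helper: fit on data scaled to roughly [0, 1], then
    # convert the slope and intercept back to the original units.
    x_scale = max(x)
    y_scale = max(y)
    xs = [v / x_scale for v in x]
    ys = [v / y_scale for v in y]
    b_s, m_s = gradient_descent(xs, ys, learning_rate, num_iterations)
    # y / y_scale = m_s * (x / x_scale) + b_s
    # => y = (m_s * y_scale / x_scale) * x + (b_s * y_scale)
    m = m_s * y_scale / x_scale
    b = b_s * y_scale
    return [b, m]

print(gradient_descent_scaled(a, b))  # call before b is rebound above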