I have tried both increasing and decreasing the learning rate, but it either fails to converge or takes an extremely long time. With a learning rate of 0.0004 it does converge, but so slowly that I had to run more than a million iterations just to bring the least-squares error down from 93 to 58.
I am following Andrew Ng's formulas.
[Image: plot of the data with the fitted gradient line]
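For reference, this is the batch gradient-descent update rule I am transcribing from the course notes (with m the number of samples and α the learning rate):

```latex
h_\theta(x) = \theta_0 + \theta_1 x
\theta_0 := \theta_0 - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)
\theta_1 := \theta_1 - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right) x_i
```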
My code:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
import time

data = pd.read_csv('weight-height.csv')
x = np.array(data['Height'])
y = np.array(data['Weight'])

plt.scatter(x, y, c='blue')
plt.suptitle('Male')
plt.xlabel('Height')
plt.ylabel('Weight')
total = mpatches.Patch(color='blue', label='Total amount of data {}'.format(len(x)))
plt.legend(handles=[total])

theta0 = 0
theta1 = 0
learning_rate = 0.0004
epochs = 10000

# hypothesis = theta0 + theta1*X
def hypothesis(x):
    return theta0 + theta1 * x

def cost_function(x):
    return 1 / (2 * len(x)) * sum((hypothesis(x) - y) ** 2)

start = time.time()
for i in range(epochs):
    print(f'{i}/ {epochs}')
    theta0 = theta0 - learning_rate * 1/len(x) * sum(hypothesis(x) - y)
    theta1 = theta1 - learning_rate * 1/len(x) * sum((hypothesis(x) - y) * x)
    print('\ncost: {}\ntheta0: {},\ntheta1: {}'.format(cost_function(x), theta0, theta1))
end = time.time()

plt.plot(x, hypothesis(x), c='red')
print('\ncost: {}\ntheta0: {},\ntheta1: {}'.format(cost_function(x), theta0, theta1))
print('time finished at {} seconds'.format(end - start))
plt.show()
Answer:
Your problem is probably that you update theta0 and theta1 one at a time:
theta0 = theta0 - learning_rate * 1/len(x) * sum(hypothesis(x) - y)
# the update of theta1 now uses the already-updated theta0
theta1 = theta1 - learning_rate * 1/len(x) * sum((hypothesis(x) - y) * x)
It would be better to restructure the code so that hypothesis is called only once per iteration, and so that theta0 and theta1 are passed explicitly rather than read from globals.
# modified to pass theta0/1 explicitly
def hypothesis(x, theta0, theta1):
    return theta0 + theta1 * x

# pass y explicitly
def cost_function(x, y, theta0, theta1):
    return 1 / (2 * len(x)) * sum((hypothesis(x, theta0, theta1) - y) ** 2)

for i in range(epochs):
    print(f'{i}/ {epochs}')
    # compute the hypothesis once
    delta = hypothesis(x, theta0, theta1)
    theta0 = theta0 - learning_rate * 1/len(x) * sum(delta - y)
    theta1 = theta1 - learning_rate * 1/len(x) * sum((delta - y) * x)
    print('\ncost: {}\ntheta0: {},\ntheta1: {}'.format(
        cost_function(x, y, theta0, theta1), theta0, theta1))
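Beyond the simultaneous update, the very small learning rate you need is a symptom of the unscaled feature: Height values around 60-75 make the gradient in theta1 much steeper than in theta0, so any rate large enough to move theta0 diverges in theta1. Normalizing x lets you use a far larger rate and converge in a few hundred iterations. A minimal sketch (your CSV is not available here, so the data below is synthetic stand-in values with made-up coefficients):

```python
import numpy as np

# Synthetic stand-in for the Height/Weight data (hypothetical numbers).
rng = np.random.default_rng(0)
x = rng.uniform(60, 75, 200)                 # heights
y = 5.0 * x - 180 + rng.normal(0, 5, 200)    # weights plus noise

# Standardize the feature so one learning rate suits both parameters.
x_mean, x_std = x.mean(), x.std()
x_n = (x - x_mean) / x_std

theta0, theta1 = 0.0, 0.0
learning_rate = 0.1
m = len(x_n)

for _ in range(1000):
    # Residuals computed once from the *current* parameters.
    delta = theta0 + theta1 * x_n - y
    grad0 = delta.sum() / m
    grad1 = (delta * x_n).sum() / m
    # Simultaneous update: both gradients used the same old theta0/theta1.
    theta0 -= learning_rate * grad0
    theta1 -= learning_rate * grad1

cost = ((theta0 + theta1 * x_n - y) ** 2).sum() / (2 * m)
print(cost)
```

To plot or report the line in the original units, undo the scaling: the slope is theta1 / x_std and the intercept is theta0 - theta1 * x_mean / x_std.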