I'm taking a machine learning course and trying to implement gradient descent in MATLAB. The computeCost function works correctly when I test it on its own. I use it to print the cost at every iteration, but the cost doesn't decrease at all; it just fluctuates randomly. The value of alpha was given as 0.01, so I know the problem isn't a learning rate that is too large. The theta values I end up with are far from the expected output. What am I doing wrong? Thanks in advance!
function theta = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta

% Initialize some useful values
m = length(y); % number of training examples

temp1 = 0;
temp2 = 0;
for iter = 1:num_iters
    for k = 1:m
        temp1 = temp1 + (theta(1) + theta(2)*X(k, 2) - y(k));
        temp2 = temp2 + ((theta(1) + theta(2)*X(k, 2) - y(k))*X(k, 2));
    end
    theta(1) = theta(1) - (1/m)*alpha*temp1;
    theta(2) = theta(2) - (1/m)*alpha*temp2;
    computeCost(X, y, theta)
end

end
Edit: attaching the computeCost function here as well.
function J = computeCost(X, y, theta)
m = length(y); % number of training examples
J = 0;
temp = 0;
for index = 1:m
    temp = temp + (theta(1) + theta(2)*X(index, 2) - y(index))^2;
end
J = temp/(2*m);
end
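For reference, writing x_i for X(i, 2), this loop evaluates the usual squared-error cost:

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left(\theta_1 + \theta_2 x_i - y_i\right)^2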
Answer:
Try changing:
temp1 = 0; temp2 = 0;
for iter = 1:num_iters
to
for iter = 1:num_iters
    temp1 = 0;
    temp2 = 0;
The gradient has to be recomputed from scratch on every iteration (otherwise you are effectively introducing a momentum term).
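Here is a minimal sketch of the corrected loop with the accumulators reset at the top of each iteration (the helper variable err is just introduced for readability):

function theta = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
m = length(y); % number of training examples
for iter = 1:num_iters
    temp1 = 0; % reset the gradient accumulators every iteration
    temp2 = 0;
    for k = 1:m
        err = theta(1) + theta(2)*X(k, 2) - y(k); % prediction error for example k
        temp1 = temp1 + err;
        temp2 = temp2 + err*X(k, 2);
    end
    % simultaneous update of both parameters
    theta(1) = theta(1) - (alpha/m)*temp1;
    theta(2) = theta(2) - (alpha/m)*temp2;
    computeCost(X, y, theta) % cost should now decrease at every iteration
end
end

If your X already has a leading column of ones, the inner loop can also be replaced by the vectorized update theta = theta - (alpha/m) * X' * (X*theta - y), which avoids the accumulators entirely.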