I'm trying to do a simple weight update with an optimizer, as shown below:
    import torch

    x = torch.rand(10, requires_grad=True)
    y = x * 15. + 10.
    optimizer = torch.optim.Adam
    loss = torch.nn.MSELoss()

    def train(x, y, loss, ep, opti):
        w = torch.rand(1, dtype=torch.float32, requires_grad=True)
        b = torch.rand(1, dtype=torch.float32, requires_grad=True)
        op = opti([w, b])
        for e in range(ep):
            y_hat = x.multiply(w) + b
            l = loss(y_hat, y)
            print(f'Epoch: {e}, loss: {l}')
            l.backward()
            op.step()
            op.zero_grad()
        return w, b

    w_hat, b_hat = train(x, y, loss, 10, optimizer)
However, even though I zero the gradients on every step, I still get a "Trying to backward through the graph a second time" error, and I don't understand why. Do you have any suggestions?
Answer:
The reason is that x is created with requires_grad=True, which makes y = x * 15. + 10. itself the output of a computation graph. Every call to l.backward() then backpropagates not only through w and b but also through the subgraph that produced y. PyTorch frees a graph's intermediate buffers after the first backward pass, so the second epoch tries to traverse an already-freed graph and raises the error you saw. Since x and y are fixed training data rather than parameters you are optimizing, they should not track gradients at all. Change the first line to x = torch.rand(10).
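For reference, here is a sketch of that fix applied to the data setup; train and loss are unchanged from your code, only the data creation differs:

    import torch

    # Training data should not track gradients. With requires_grad left at
    # its default (False), y is a plain tensor with no attached graph, so
    # each epoch's backward() only traverses the graph built from w and b
    # inside the loop, and that graph is rebuilt fresh every iteration.
    x = torch.rand(10)
    y = x * 15. + 10.

    w_hat, b_hat = train(x, y, torch.nn.MSELoss(), 10, torch.optim.Adam)

If you genuinely needed gradients to flow back into x, you could instead call l.backward(retain_graph=True) or build y from x.detach(); for fixed training data, though, simply not setting requires_grad is the right fix.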