I am trying polynomial regression in PyTorch. First, I tried plain linear regression (b + wx).
import torch

model_1 = RegressionModel()  # defined elsewhere in my code
W = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer_1 = torch.optim.SGD([W, b], lr=0.001)

x_train = torch.FloatTensor(dataset.x_data['LSTAT'])
y_train = torch.FloatTensor(dataset.data['target'])

nb_epochs = 10000
for epoch in range(nb_epochs + 1):
    hypothesis = x_train * W + b
    cost = torch.nn.functional.mse_loss(hypothesis, y_train.float())

    optimizer_1.zero_grad()
    cost.backward()
    optimizer_1.step()

    print('Epoch {:4d}/{} W: {:.3f}, b: {:.3f}, Cost: {:.6f}'.format(
        epoch, nb_epochs, W.item(), b.item(), cost.item()))
Then I changed and added some variables to do polynomial regression (b + w1x + w2x^2):
model_2 = RegressionModel()  # defined elsewhere in my code
W1 = torch.zeros(1, requires_grad=True)
W2 = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer_2 = torch.optim.SGD([W2, W1, b], lr=0.0000099)

x_train = torch.FloatTensor(dataset.x_data['LSTAT'])
y_train = torch.FloatTensor(dataset.data['target'])

nb_epochs = 10000
for epoch in range(nb_epochs + 1):
    hypothesis = b + x_train * W1 + x_train * x_train * W2
    cost = torch.nn.functional.mse_loss(hypothesis, y_train.float())

    optimizer_2.zero_grad()
    cost.backward()
    optimizer_2.step()

    print('Epoch {:4d}/{} W1: {:.3f}, W2: {:.3f}, b: {:.3f}, Cost: {:.6f}'.format(
        epoch, nb_epochs, W1.item(), W2.item(), b.item(), cost.item()))
Can I do polynomial regression like this? If not, I would really appreciate it if you could tell me. I'm a complete beginner with PyTorch…
Answer:
Your code should work. When you are working with larger data, it is more efficient to do the regression in a single matrix operation. For that, you first need to precompute the polynomials of your input features:
x_train_polynomial = torch.stack([x_train, x_train ** 2], dim=1)
To save some lines of code, you can rewrite the projection as a linear layer:
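For example, a minimal sketch (the learning rate here is an assumption you would tune; the layer's built-in bias plays the role of b):

projection = torch.nn.Linear(2, 1, bias=True)  # maps the two features [x, x^2] to one prediction
optimizer = torch.optim.SGD(projection.parameters(), lr=0.001)  # lr is a placeholder, not from the original post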
In the training loop, you can then call:
hypothesis = projection(x_train_polynomial)
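Putting it together, a sketch of the training loop under the same assumptions (note that the linear layer outputs shape (N, 1), so squeeze it to match y_train's shape (N,) before computing the loss):

nb_epochs = 10000
for epoch in range(nb_epochs + 1):
    # one matrix operation replaces the per-term arithmetic
    hypothesis = projection(x_train_polynomial).squeeze(1)
    cost = torch.nn.functional.mse_loss(hypothesis, y_train)

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    if epoch % 1000 == 0:
        print('Epoch {:4d}/{} Cost: {:.6f}'.format(epoch, nb_epochs, cost.item()))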