I'm new to AI and Python, and I'm trying to train on just a single batch in order to overfit it. I found the following snippet: iter(train_loader).next()
But I'm not sure where in my code to put it, and even if I do, how can I check after each iteration that I'm still training on the same batch?
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=48, shuffle=True, num_workers=2)

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128*128*3, 10)
)

nepochs = 3
statsrec = np.zeros((3, nepochs))

loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

for epoch in range(nepochs):  # loop over the dataset multiple times
    running_loss = 0.0
    n = 0
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward pass, backward pass, and parameter update
        outputs = net(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()

        # accumulate the loss
        running_loss += loss.item()
        n += 1

    ltrn = running_loss / n
    ltst, atst = stats(train_loader, net)  # stats() is defined elsewhere in my code
    statsrec[:, epoch] = (ltrn, ltst, atst)
    print(f"epoch: {epoch} training loss: {ltrn: .3f} test loss: {ltst: .3f} test accuracy: {atst: .1%}")
Could you give me some hints?
Answer:
If you want to train on a single batch, remove your loop over the data loader:
for i, data in enumerate(train_loader, 0):
    inputs, labels = data
and simply fetch the first element of the train_loader iterator once, before the loop over the epochs; otherwise next will be called on every iteration and you will train on a different batch each epoch:
inputs, labels = next(iter(train_loader))

for epoch in range(nepochs):
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = loss_fn(outputs, labels)
    loss.backward()
    optimizer.step()
    # ...
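To see why fetching inside the epoch loop would defeat the purpose: iter(train_loader) builds a fresh iterator on every call, and with shuffle=True the first batch it yields is drawn in a new random order each time. A quick sanity check you can run yourself, assuming the DataLoader from your question:

# Each call to iter(train_loader) creates a new iterator; with shuffle=True
# its first batch is (almost certainly) a different random batch.
a, _ = next(iter(train_loader))
b, _ = next(iter(train_loader))
print(torch.equal(a, b))  # expected: False, the two batches differ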
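As for checking that every iteration really uses the same batch: since the batch is fetched once before the loop, it cannot change, but if you want an explicit runtime check you can keep a copy and compare against it. A minimal sketch; the clone/assert pattern is my illustration, not part of the training code above:

inputs, labels = next(iter(train_loader))
ref = inputs.clone()  # snapshot of the batch we intend to overfit

for epoch in range(nepochs):
    # raises if inputs were ever replaced by a different batch
    assert torch.equal(inputs, ref), "batch changed between epochs!"
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = loss_fn(outputs, labels)
    loss.backward()
    optimizer.step()

If the model and optimizer are set up correctly, the training loss on this one batch should drop towards zero within a few dozen epochs, which is exactly the overfitting signal you are looking for.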