My validation set has 150 images, but when I run predictions with my model, the predictions I get back only have a length of 22. I don't understand why this happens.
total_v = 0
correct_v = 0
batch_loss = 0
with torch.no_grad():
    model.eval()
    for data_v, target_v in validloader:
        # Remap the original labels to a binary problem depending on the SK flag
        if SK:
            target_v = torch.tensor(np.where(target_v.numpy() == 2, 1, 0).astype(np.longlong))
        else:
            target_v = torch.tensor(np.where(target_v.numpy() == 0, 1, 0).astype(np.longlong))
        data_v, target_v = data_v.to(device), target_v.to(device)
        outputs_v = model(data_v)
        loss_v = criterion(outputs_v, target_v)
        batch_loss += loss_v.item()
        _, pred_v = torch.max(outputs_v, dim=1)
        correct_v += torch.sum(pred_v == target_v).item()
        total_v += target_v.size(0)
    val_acc.append(100 * correct_v / total_v)
    val_loss.append(batch_loss / len(validloader))
    network_learned = batch_loss < valid_loss_min
    print(f'validation loss: {np.mean(val_loss):.4f}, validation acc: {(100 * correct_v/total_v):.4f}\n')
Here is my model:
model = models.resnet50(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adagrad(model.parameters())
Answer:
Inside the loop, pred_v is overwritten on every iteration, so after the loop it only holds the predictions of the final batch (22 is most likely the size of your last batch). If you want the predictions for all 150 images, store each batch's predictions and concatenate them once the iteration is finished:
...
all_preds = []
for data_v, target_v in validloader:
    ...
    _, pred_v = torch.max(outputs_v, dim=1)
    all_preds.append(pred_v)
    ...
all_preds = torch.cat(all_preds).cpu().numpy()
print(len(all_preds))
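For completeness, here is a minimal sketch of how that fix slots into the validation loop from the question. It assumes the same names as above (model, validloader, criterion, device, SK) are already defined; all_preds and all_targets are names introduced here only for illustration.

# Minimal sketch, assuming model, validloader, criterion, device and SK
# exist as in the question. all_preds / all_targets are hypothetical names
# used to collect per-batch results.
import numpy as np
import torch

model.eval()
all_preds, all_targets = [], []
batch_loss = 0.0
with torch.no_grad():
    for data_v, target_v in validloader:
        # Same binary relabelling as in the question
        if SK:
            target_v = torch.tensor(np.where(target_v.numpy() == 2, 1, 0).astype(np.longlong))
        else:
            target_v = torch.tensor(np.where(target_v.numpy() == 0, 1, 0).astype(np.longlong))
        data_v, target_v = data_v.to(device), target_v.to(device)

        outputs_v = model(data_v)
        batch_loss += criterion(outputs_v, target_v).item()

        _, pred_v = torch.max(outputs_v, dim=1)
        all_preds.append(pred_v)      # keep every batch, not just the last one
        all_targets.append(target_v)

# Concatenate after the loop: the length now matches the dataset size
all_preds = torch.cat(all_preds).cpu().numpy()
all_targets = torch.cat(all_targets).cpu().numpy()

print(len(all_preds))  # 150 for a 150-image validation set
print('validation loss:', batch_loss / len(validloader))
print('validation acc :', 100.0 * (all_preds == all_targets).mean())

Accumulating the tensors and calling torch.cat once at the end is also cheaper than concatenating inside the loop, and keeping the matching targets lets you compute the overall accuracy directly from the concatenated arrays.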