I want to train a VGG16 model with Horovod PyTorch on 4 GPUs. I don't want to use the torchvision.datasets.CIFAR10 dataset; instead, I want to split the dataset myself. So I downloaded the dataset from the official website and split it. This is how I split it:
if __name__ == '__main__':
    import pickle
    import numpy as np

    train_data, train_label = [], []
    test_data, test_label = [], []
    # Take the first 8000 images of each batch for training, the rest for testing
    for i in range(1, 6):
        with open('/Users/wangqipeng/Downloads/cifar-10-batches-py/data_batch_{}'.format(i), 'rb') as f:
            b = pickle.load(f, encoding='bytes')
            train_data.extend(b[b'data'].tolist()[:8000])
            train_label.extend(b[b'labels'][:8000])
            test_data.extend(b[b'data'].tolist()[8000:])
            test_label.extend(b[b'labels'][8000:])
    num_train = len(train_data)
    num_test = len(test_data)
    print(num_train, num_test)
    train_data = np.array(train_data)
    test_data = np.array(test_data)
    # Split the training set into 4 equal shards, one per rank
    for i in range(4):
        with open('/Users/wangqipeng/Downloads/train_{}'.format(i), 'wb') as f:
            d = {b'data': train_data[int(0.25 * i * num_train): int(0.25 * (i + 1) * num_train)],
                 b'labels': train_label[int(0.25 * i * num_train): int(0.25 * (i + 1) * num_train)]}
            pickle.dump(d, f)
    # All ranks share the full test set, so it only needs to be written once
    with open('/Users/wangqipeng/Downloads/test', 'wb') as f:
        d = {b'data': test_data, b'labels': test_label}
        pickle.dump(d, f)
However, I found that when I use the dataset downloaded from the official website, I get exploding gradients: the loss grows over a few iterations and then becomes "nan". This is how I read the dataset:
# Standard batch-loader helper from the CIFAR-10 website
def unpickle(path):
    import pickle
    with open(path, 'rb') as f:
        return pickle.load(f, encoding='bytes')

class DataSet(torch.utils.data.Dataset):
    def __init__(self, path):
        self.dataset = unpickle(path)

    def __getitem__(self, index):
        # Each row holds 3072 raw pixel values in CHW order (R, G, B planes)
        data = torch.tensor(self.dataset[b'data'][index],
                            dtype=torch.float32).reshape(3, 32, 32)
        return data, torch.tensor(self.dataset[b'labels'][index])

    def __len__(self):
        return len(self.dataset[b'data'])

train_dataset = DataSet("./cifar10/train_" + str(hvd.rank()))
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.batch_size, sampler=None, **kwargs)
If I print the loss at every iteration, I see something like this:
Mon Nov 9 11:28:29 2020[0]<stdout>:epoch 0 iter[ 0 / 313 ] loss 7.725658416748047 accuracy 5.46875
Mon Nov 9 11:28:29 2020[0]<stdout>:epoch 0 iter[ 1 / 313 ] loss 15.312677383422852 accuracy 8.59375
Mon Nov 9 11:28:29 2020[0]<stdout>:epoch 0 iter[ 2 / 313 ] loss 16.333066940307617 accuracy 9.375
Mon Nov 9 11:28:30 2020[0]<stdout>:epoch 0 iter[ 3 / 313 ] loss 15.549728393554688 accuracy 9.9609375
Mon Nov 9 11:28:30 2020[0]<stdout>:epoch 0 iter[ 4 / 313 ] loss 14.090616226196289 accuracy 9.843750298023224
Mon Nov 9 11:28:31 2020[0]<stdout>:epoch 0 iter[ 5 / 313 ] loss 12.310989379882812 accuracy 9.63541641831398
Mon Nov 9 11:28:31 2020[0]<stdout>:epoch 0 iter[ 6 / 313 ] loss 11.578919410705566 accuracy 9.15178582072258
Mon Nov 9 11:28:31 2020[0]<stdout>:epoch 0 iter[ 7 / 313 ] loss 13.210229873657227 accuracy 8.7890625
Mon Nov 9 11:28:32 2020[0]<stdout>:epoch 0 iter[ 8 / 313 ] loss 764.713623046875 accuracy 9.28819477558136
Mon Nov 9 11:28:32 2020[0]<stdout>:epoch 0 iter[ 9 / 313 ] loss 4.590414250749922e+20 accuracy 8.984375
Mon Nov 9 11:28:32 2020[0]<stdout>:epoch 0 iter[ 10 / 313 ] loss nan accuracy 9.446022659540176
Mon Nov 9 11:28:33 2020[0]<stdout>:epoch 0 iter[ 11 / 313 ] loss nan accuracy 10.09114608168602
Mon Nov 9 11:28:33 2020[0]<stdout>:epoch 0 iter[ 12 / 313 ] loss nan accuracy 10.39663478732109
However, if I use the dataset from torchvision, everything works fine:
train_dataset = \
    datasets.CIFAR10(args.train_dir, download=True,
                     transform=transforms.Compose([
                         transforms.ToTensor(),
                         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
                     ]))
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_dataset, num_replicas=hvd.size(), rank=hvd.rank())
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.batch_size, sampler=train_sampler, **kwargs)
The DistributedSampler might also be a problem. But as far as I understand, DistributedSampler only splits the data across ranks. I don't know whether the DistributedSampler could be a factor in this problem.
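To check this, here is a minimal standalone sketch (the toy 8-sample dataset and the 4 simulated ranks are made up for illustration) showing that DistributedSampler only partitions indices across ranks and never touches the sample values themselves:

import torch
import torch.utils.data
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Toy dataset of 8 samples, partitioned across 4 simulated ranks
dataset = TensorDataset(torch.arange(8))
for rank in range(4):
    sampler = DistributedSampler(dataset, num_replicas=4, rank=rank, shuffle=False)
    # Each rank iterates over a disjoint quarter of the indices,
    # e.g. rank 0 -> [0, 4], rank 1 -> [1, 5], ...
    print(rank, list(sampler))

Since the sampler only decides which indices each worker sees, it seems unlikely to be what makes the loss diverge.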
Is there something wrong with the way I read the CIFAR10 dataset? Or with the way I "reshape" it? Thanks for your help!
Answer:
It was probably because I didn't normalize the dataset. My custom DataSet returns raw pixel values in [0, 255], whereas the torchvision pipeline applies ToTensor (which scales them to [0, 1]) and then Normalize; with unnormalized inputs, VGG16's activations and gradients blow up within a few iterations, and the loss diverges to nan. Thanks everyone for the help!
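For completeness, a minimal sketch of the fix; the class name NormalizedDataSet is illustrative, and the constants mirror the torchvision ToTensor + Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) pipeline from the question:

import pickle
import torch
import torch.utils.data

def unpickle(path):
    # Standard CIFAR-10 batch loader (as on the CIFAR-10 website)
    with open(path, 'rb') as f:
        return pickle.load(f, encoding='bytes')

class NormalizedDataSet(torch.utils.data.Dataset):
    def __init__(self, path):
        self.dataset = unpickle(path)

    def __getitem__(self, index):
        data = torch.tensor(self.dataset[b'data'][index],
                            dtype=torch.float32).reshape(3, 32, 32)
        data = data / 255.0        # like transforms.ToTensor(): [0, 255] -> [0, 1]
        data = (data - 0.5) / 0.5  # like transforms.Normalize((0.5,)*3, (0.5,)*3): [0, 1] -> [-1, 1]
        return data, torch.tensor(self.dataset[b'labels'][index])

    def __len__(self):
        return len(self.dataset[b'data'])

It can then be used as a drop-in replacement for the original class: train_dataset = NormalizedDataSet("./cifar10/train_" + str(hvd.rank())).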