Since this is the configuration published in the paper, I suspect I have made a serious mistake somewhere.
The error appears on a different image every time I try to run training:
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1741, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1735, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1135, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Noam/Code/vision_course/hopenet/deep-head-pose/code/original_code_augmented/train_hopenet_with_validation_holdout.py", line 187, in <module>
    loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)
  File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\loss.py", line 431, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\functional.py", line 2204, in mse_loss
    ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered
Any ideas?
Answer:
This kind of error typically occurs when using NLLLoss or CrossEntropyLoss with a dataset that contains negative labels (or labels greater than or equal to the number of classes). That is exactly what the failed assertion t >= 0 && t < n_classes is telling you.
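You can reproduce the same failure on the CPU, where the bounds check raises a synchronous, readable exception instead of an asynchronous CUDA assert (a minimal sketch; the exact exception type varies between PyTorch versions, so both IndexError and RuntimeError are caught here):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(1, 3)      # 3 classes, so valid targets are 0, 1, 2
bad_target = torch.tensor([3])  # out of range: trips t >= 0 && t < n_classes

try:
    criterion(logits, bad_target)
except (IndexError, RuntimeError) as e:
    print(type(e).__name__, e)  # e.g. "Target 3 is out of bounds."
```

Running the failing batch on the CPU (or setting the environment variable CUDA_LAUNCH_BLOCKING=1) is a useful way to pinpoint which line actually triggers the assert.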
MSELoss itself cannot trigger this, but the original code also uses CrossEntropyLoss somewhere, and that is where the error originates (CUDA kernels run asynchronously, so the crash is reported on a different line). The solution is to clean your dataset and make sure t >= 0 && t < n_classes holds for every label t.
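A quick way to catch bad labels before they ever reach the GPU is to validate the whole label tensor up front (a minimal sketch; the function name and the example bin count of 66, Hopenet's default number of angle bins, are assumptions for illustration):

```python
import torch

def find_bad_labels(labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Return the indices of labels violating 0 <= t < n_classes."""
    mask = (labels < 0) | (labels >= n_classes)
    return mask.nonzero(as_tuple=True)[0]

labels = torch.tensor([3, 65, -1, 66, 10])
bad = find_bad_labels(labels, n_classes=66)
print(bad)  # tensor([2, 3]) -- the -1 and the 66 are both out of range
```

Running a check like this over the full dataset once, before training, turns a cryptic device-side assert into an explicit list of offending samples.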
Additionally, if you use NLLLoss or BCELoss, make sure your network's outputs are in the expected range: NLLLoss expects log-probabilities (apply log_softmax), and BCELoss expects probabilities in [0, 1] (apply sigmoid). Note that this is not needed for CrossEntropyLoss or BCEWithLogitsLoss, because they apply the activation inside the loss function. (Thanks to @PouyaB for pointing this out.)