Unable to understand an in-place operation error in PyTorch code?

My implementation of learning with an LSTM in PyTorch is here:

https://gist.github.com/rahulbhadani/f1d64042cc5a80280755cac262aa48aa

However, the code fails with an in-place operation error.

The error output is:

/home/ivory/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  # Remove the CWD from sys.path while we load stuff.
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-86-560ec78f2b64> in <module>
     27 linear = torch.nn.Linear(hidden_nums, output_dim)
     28
---> 29 global_loss_list = global_training(lstm2)

<ipython-input-84-152890a3028c> in global_training(optimizee)
      3     adam_global_optimizer = torch.optim.Adam([{'params': optimizee.parameters()},
      4                                      {'params':linear.parameters()}], lr = 0.0001)
----> 5     _, global_loss_1 = learn2(LSTM_Optimizee, training_steps, retain_graph_flag=True, reset_theta=True)
      6
      7     print(global_loss_1)

<ipython-input-83-0357a528b94d> in learn2(optimizee, unroll_train_steps, retain_graph_flag, reset_theta)
     43             # requires_grad=True. These are accumulated into x.grad for every
     44             # parameter x. In pseudo-code: x.grad += dloss/dx
---> 45             loss.backward(retain_graph = retain_graph_flag) #The default is False, when the optimized LSTM is set to True
     46
     47             print('x.grad: {}'.format(x.grad))

~/anaconda3/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    116                 products. Defaults to ``False``.
    117         """
--> 118         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    119
    120     def register_hook(self, hook):

~/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     91     Variable._execution_engine.run_backward(
     92         tensors, grad_tensors, retain_graph, create_graph,
---> 93         allow_unreachable=True)  # allow_unreachable flag
     94
     95

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 10]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I have tried to trace the error but without success. Any help with this would be much appreciated.
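As the hint at the end of the traceback suggests, anomaly detection can be switched on before the failing call so that the backward error also reports which forward operation produced the tensor that was later modified in place. A minimal sketch of turning it on (the commented call is the one from the traceback above, not additional code):

import torch

# Enable autograd anomaly detection before running the training code, so
# the RuntimeError during backward() also points at the forward operation
# that created the tensor which was later modified in place.
torch.autograd.set_detect_anomaly(True)

# ... then run the call that fails, e.g. (from the traceback above):
# _, global_loss_1 = learn2(LSTM_Optimizee, training_steps,
#                           retain_graph_flag=True, reset_theta=True)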

Thanks.


Answer:

I think the problem is in the following line:

global_loss_list.append(global_loss.detach_())

The PyTorch convention for in-place operations is a trailing underscore after the function name (as in detach_). I don't think you should be detaching in place here; in other words, change detach_ to detach.
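For reference, a minimal, self-contained sketch of the difference (the parameter and loop below are placeholders, not the gist's LSTM optimizee; only the detach() vs detach_() distinction is the point):

import torch

# Toy setup: one parameter tensor and a dummy loss per step, just enough
# graph for backward() to run.
w = torch.randn(1, 10, requires_grad=True)
global_loss_list = []

for step in range(3):
    x = torch.randn(10, 1)
    global_loss = (w @ x).pow(2).sum()

    # Out-of-place detach(): returns a new tensor that is cut from the
    # graph, while global_loss itself stays untouched, so the backward()
    # below still sees an unmodified graph.
    global_loss_list.append(global_loss.detach())

    # If detach_() were used here instead, global_loss itself would be
    # stripped of its graph information in place, and backward() through
    # it could no longer work.
    global_loss.backward(retain_graph=True)

print([float(l) for l in global_loss_list])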
