I am trying to create a model that takes a word as input and outputs a paragraph. When I apply the example given in fastai|text to my own dataset, I get an error. The error occurs at the step below: if you follow the walkthrough on the site, everything runs fine until you reach the code below, but this code raises an error. What could be causing it?
Code:
from fastai import *
from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')

# Language model data
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
# Classifier model data
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv', vocab=data_lm.train_ds.vocab, bs=32)

data_lm.save()
data_clas.save()

data_lm = TextLMDataBunch.load(path)
data_clas = TextClasDataBunch.load(path, bs=32)

learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)
learn.fit_one_cycle(1, 1e-2)
The line that raises the error:
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)
Output:
    102         if not ps: return None
    103         if b is None: return ps[0].requires_grad
--> 104         for p in ps: p.requires_grad=b
    105
    106 def trainable_params(m:nn.Module)->ParamList:

RuntimeError: you can only change requires_grad flags of leaf variables. If you want to use a computed variable in a subgraph that doesn't require differentiation use var_no_grad = var.detach().
Answer:
Set grad to false with torch.set_grad_enabled(False) (call it before creating the learner object),
and wrap the training call (learn.fit_one_cycle()) with torch.enable_grad():
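Putting the two steps together, a minimal sketch of the workaround might look like the following; it assumes the data_lm object built in the question's code and reuses the same hyperparameters:

import torch

# Disable gradient tracking while the learner is constructed, so that
# requires_grad is not toggled on non-leaf (computed) variables.
torch.set_grad_enabled(False)
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)

# Re-enable gradients only around the actual training call.
with torch.enable_grad():
    learn.fit_one_cycle(1, 1e-2)

Note that torch.set_grad_enabled(False) switches gradient tracking off globally, while the torch.enable_grad() context manager turns it back on just for the code inside the with block, so training still computes gradients as usual.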