I am trying to train a simple RNN model with a very simple goal: regardless of the input, the output should match a fixed vector.
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        print "i2h WEIGHT size ", list(self.i2h.weight.size())
        print "i2h bias size ", list(self.i2h.bias.size())
        self.i2o = nn.Linear(hidden_size, output_size)
        print "i2o WEIGHT size ", list(self.i2o.weight.size())
        print "i2o bias size ", list(self.i2o.bias.size())
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(hidden)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return Variable(torch.zeros(1, self.hidden_size))

n_hidden = 20
rnn = RNN(10, n_hidden, 3)

learning_rate = 1e-3
loss_fn = torch.nn.MSELoss(size_average=False)

out_target = Variable(torch.FloatTensor([[0.0, 1.0, 0.0]]), requires_grad=False)
print "target output::: ", out_target

def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()
    rnn.zero_grad()
    for i in range(line_tensor.size()[0]):
        #print "train iteration ", i, ": input data: ", line_tensor[i]
        output, hidden = rnn(line_tensor[i], hidden)
    loss = loss_fn(output, out_target)
    loss.backward()
    # Add parameters' gradients to their values, multiplied by learning rate
    for p in rnn.parameters():
        #print "parameter: ", p, " gradient: ", p.grad.data
        p.data.add_(-learning_rate, p.grad.data)
    return output, loss.data[0]

current_loss = 0
n_iters = 500
for iter in range(1, n_iters + 1):
    inp = Variable(torch.randn(100, 1, 10) + 5)
    output, loss = train(out_target, inp)
    current_loss += loss
    if iter % 1 == 0:
        print "weights: ", rnn.i2h.weight
        print "LOSS: ", loss
        print output
As shown, the loss stays above 6 and never decreases. Note that I shifted the normal distribution of all random inputs by 5, so they are mostly positive; there should therefore exist some weight configuration that brings the output close to the target.
What am I doing wrong in this example that prevents the output from approaching the target?
Answer:
Your fixed target output is:
torch.FloatTensor([[0.0, 1.0, 0.0]])
but you are using the following as the last layer of your RNN:
self.softmax = nn.LogSoftmax(dim=1)
LogSoftmax does not return values in [0, 1]: it returns log-probabilities, which lie in (-inf, 0], so the network's output can never match the target [0.0, 1.0, 0.0] and the MSE loss has a floor it cannot get below. While you could use Softmax, I would suggest using the sign function and converting -1 to 0.
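A minimal sketch of the Softmax option mentioned above, assuming you keep MSELoss and the rest of the training loop unchanged: replace the LogSoftmax layer with Softmax, whose outputs lie in [0, 1] and sum to 1, so the target [0.0, 1.0, 0.0] becomes reachable.

import torch
import torch.nn as nn

# LogSoftmax returns log-probabilities, which are always <= 0,
# so no component can ever reach the target value 1.0:
log_sm = nn.LogSoftmax(dim=1)
print(log_sm(torch.randn(1, 3)))  # e.g. [[-1.47, -0.56, -1.52]]

# Softmax returns probabilities in [0, 1] that sum to 1, so an
# MSE loss against [0.0, 1.0, 0.0] can actually be driven down:
sm = nn.Softmax(dim=1)
print(sm(torch.randn(1, 3)))      # e.g. [[0.23, 0.57, 0.20]]

In the model above, that amounts to changing the last layer to self.softmax = nn.Softmax(dim=1).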