I'm trying to implement a simple RNN in numpy (based on this article) and train it to do binary addition, i.e. adding two 8-bit unsigned integers one bit at a time, starting from the last bit, so that it learns to "carry the one" when necessary. However, it doesn't seem to be learning. During training I pick two random numbers, forward propagate for 8 steps, feeding in one bit of a and one bit of b at each step, and store the output and hidden-layer values of every time step. Then I backpropagate for 8 steps, computing the hidden-layer error as

(output_error.dot(weights_hidden_to_output.T) * sigmoid_to_derivative(hidden)) + future_hidden_error.dot(weights_hidden_to_hidden.T)

and updating each weight matrix by multiplying the parent layer with its child layer's error. Is that the right way to do it?
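To make the setup concrete, this is roughly what one training example looks like (to_bits and the loop below are just for illustration, they're not the exact helpers in my code):

import numpy as np

def to_bits(n, width=8):
    # big-endian bit array of n, e.g. to_bits(45) -> [0,0,1,0,1,1,0,1]
    return np.array([int(c) for c in format(n, '0{}b'.format(width))])

a, b = 45, 77
a_bits, b_bits = to_bits(a), to_bits(b)
target_bits = to_bits(a + b)   # 122 -> [0,1,1,1,1,0,1,0]

for t in range(8):
    x_t = np.array([a_bits[-t-1], b_bits[-t-1]])  # the two input bits at step t
    y_t = target_bits[-t-1]                       # the bit the network should output at step t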
In case my code makes the problem clearer, here it is. One thing I've noticed is that on every training run the weights suddenly start growing out of control, and they end up overflowing the sigmoid, at which point training falls apart. Do you know what could be causing this?
import numpy as np

np.random.seed(0)

def sigmoid(x):
    return np.atleast_2d(1/(1+np.exp(-x)))
    #return np.atleast_2d(np.max(x, 0.01))

def sig_deriv(x):
    # derivative of the sigmoid, expressed in terms of the sigmoid output x
    return x*(1-x)

def add_bias(x):
    # prepend a column of ones to act as the bias unit
    return np.hstack([np.ones((len(x), 1)), x])

def dec_to_bin(dec):
    # 8-bit big-endian binary representation of dec as an int array
    return np.array(map(int, list(format(dec, '#010b'))[2:]))

def bin_to_dec(b):
    out = 0
    for bit in b:
        out = (out << 1) | bit
    return out

batch_size = 8
learning_rate = .1

input_size = 2
hidden_size = 16
output_size = 1

# weights initialized uniformly in [-1, 1); the extra row is for the bias unit
weights_xh = 2 * np.random.random((input_size+1, hidden_size)) - 1
weights_hh = 2 * np.random.random((hidden_size+1, hidden_size)) - 1
weights_hy = 2 * np.random.random((hidden_size+1, output_size)) - 1

xh_update = np.zeros_like(weights_xh)
hh_update = np.zeros_like(weights_hh)
hy_update = np.zeros_like(weights_hy)

for i in xrange(10000):
    # pick two random numbers whose sum still fits in 8 bits
    a = np.random.randint(0, 2**batch_size/2)
    b = np.random.randint(0, 2**batch_size/2)
    sum_ = a+b

    X = add_bias(np.hstack([np.atleast_2d(dec_to_bin(a)).T,
                            np.atleast_2d(dec_to_bin(b)).T]))
    y = np.atleast_2d(dec_to_bin(sum_)).T

    error = 0

    output_errors = []
    outputs = []
    hiddens = [add_bias(np.zeros((1, hidden_size)))]

    #forward propagation through time, least significant bit first
    for j in xrange(batch_size):
        hidden = sigmoid(X[-j-1].dot(weights_xh) + hiddens[-1].dot(weights_hh))
        hidden = add_bias(hidden)
        hiddens.append(hidden)

        output = sigmoid(hidden.dot(weights_hy))
        outputs.append(output[0][0])

        output_error = (y[-j-1] - output)
        error += np.abs(output_error[0])

        output_errors.append((output_error * sig_deriv(output)))

    future_hidden_error = np.zeros((1, hidden_size))

    #backward propagation through time
    for j in xrange(batch_size):
        output_error = output_errors[-j-1]
        hidden = hiddens[-j-1]
        prev_hidden = hiddens[-j-2]

        hidden_error = (output_error.dot(weights_hy.T) * sig_deriv(hidden)) + future_hidden_error.dot(weights_hh.T)
        hidden_error = np.delete(hidden_error, 0, 1)  #delete bias error

        # accumulate gradients: parent activations (transposed) times child error
        xh_update += np.atleast_2d(X[j]).T.dot(hidden_error)
        hh_update += prev_hidden.T.dot(hidden_error)
        hy_update += hidden.T.dot(output_error)

        future_hidden_error = hidden_error

    weights_xh += (xh_update * learning_rate)/batch_size
    weights_hh += (hh_update * learning_rate)/batch_size
    weights_hy += (hy_update * learning_rate)/batch_size

    xh_update *= 0
    hh_update *= 0
    hy_update *= 0

    if i%1000 == 0:
        guess = map(int, map(round, outputs[::-1]))

        print "Iteration {}".format(i)
        print "Error: {}".format(error)
        print "Problem: {} + {} = {}".format(a, b, sum_)
        print "a:        {}".format(list(dec_to_bin(a)))
        print "+ b:      {}".format(list(dec_to_bin(b)))
        print "Solution: {}".format(map(int, y))
        print "Guess:    {} ({})".format(guess, bin_to_dec(guess))
        print
Answer:
I figured it out. In case anyone is wondering why it wasn't working before: I was multiplying only part of the hidden error (the part coming from the output error) by the derivative of the hidden-layer activation. With that fixed, it learns the addition problem without trouble within a few thousand iterations.
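For anyone hitting the same thing, the change boils down to applying the sigmoid derivative to the whole backpropagated hidden error (output path plus recurrent path) rather than only the output part. A minimal sketch with placeholder names (W_hy and W_hh stand for the hidden-to-output and hidden-to-hidden weights, not the exact variables above):

import numpy as np

def sig_deriv(h):
    # sigmoid derivative written in terms of the sigmoid output h
    return h * (1 - h)

def hidden_error_fixed(output_error, future_hidden_error, hidden, W_hy, W_hh):
    # before, only output_error.dot(W_hy.T) was scaled by sig_deriv(hidden);
    # now the derivative multiplies the sum of both error paths
    return (output_error.dot(W_hy.T)
            + future_hidden_error.dot(W_hh.T)) * sig_deriv(hidden)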