I used the sigmoid function on every layer and on the output and ended up with an error of 0.00012, but when I used ReLU, which is supposedly better in theory, I got the worst results. Can someone explain why? I am using a very simple two-layer implementation that can be found on a hundred websites, but here it is anyway:
```python
import numpy as np

# test:
# avg(nonlin(np.dot(nonlin(np.dot([0,0,1],syn0)),syn1)))
# returns list >> [predicted_output, confidence]

def nonlin(x, deriv=False):  # Sigmoid
    if deriv == True:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

def relu(x, deriv=False):  # RELU
    if deriv == True:
        for i in range(0, len(x)):
            for k in range(len(x[i])):
                if x[i][k] > 0:
                    x[i][k] = 1
                else:
                    x[i][k] = 0
        return x
    for i in range(0, len(x)):
        for k in range(0, len(x[i])):
            if x[i][k] > 0:
                pass  # do nothing since it would be effectively replacing x with x
            else:
                x[i][k] = 0
    return x

X = np.array([[0,0,1],
              [0,0,0],
              [0,1,1],
              [1,0,1],
              [1,0,0],
              [0,1,0]])

y = np.array([[0],[1],[0],[0],[1],[1]])

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2 * np.random.random((3,4)) - 1
syn1 = 2 * np.random.random((4,1)) - 1

def avg(i):
    if i > 0.5:
        confidence = i
        return [1, float(confidence)]
    else:
        confidence = 1.0 - float(i)
        return [0, confidence]

for j in range(500000):
    # Feed forward through layers 0, 1, and 2
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    # print('this is', l2, '\n')

    # how much did we miss the target value?
    l2_error = y - l2
    # print(l2_error, '\n')

    if (j % 100000) == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))
        print(syn1)

    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error * nonlin(l2, deriv=True)

    # how much did each l1 value contribute to the l2 error (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)

    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * nonlin(l1, deriv=True)

    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

print("Final Error:" + str(np.mean(np.abs(l2_error))))

def p(l):
    return avg(nonlin(np.dot(nonlin(np.dot(l, syn0)), syn1)))
```
So p(x) is the prediction function after training, where x is a 1 x 3 matrix of input values.
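For reference, a minimal usage sketch, mirroring the `avg(nonlin(np.dot(nonlin(np.dot([0,0,1],syn0)),syn1)))` test comment at the top of the script:

```python
# classify a single 1 x 3 input after training;
# as the script's comment notes, p returns [predicted_output, confidence]
print(p([0, 0, 1]))
```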
Answer:
Why do you say ReLU is better in theory? ReLU does perform better in most applications, but that does not mean it is better in every case. Your example is very simple: the inputs lie in [0, 1], and so do the outputs. This is exactly where I would expect sigmoid to do well. In practice we rarely use sigmoid in hidden layers, because of the vanishing-gradient problem and other issues that show up in larger networks, but that is hardly a concern for you here.
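To make the vanishing-gradient remark concrete: the sigmoid derivative s(x)(1 - s(x)) never exceeds 0.25, so gradients shrink every time they are multiplied back through a sigmoid layer. A quick illustration (not part of the code above, just a sketch):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# sigmoid derivative expressed via its output, as in nonlin(x, deriv=True)
s = sigmoid(np.linspace(-6, 6, 7))
print(s * (1 - s))          # every value is <= 0.25
print((s * (1 - s)).max())  # 0.25, reached at x = 0
```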
Also, in case you did use the derivative of ReLU, your code is missing an 'else'. Your derivative simply gets overwritten.
As a reminder, here is the definition of ReLU:
f(x)=max(0,x)
...which means it can make your activation values arbitrarily large. You should avoid using ReLU in the last (output) layer.
Also, whenever possible, you should take advantage of vectorized operations:
```python
def relu(x, deriv=False):  # RELU
    if deriv == True:
        # derivative of ReLU: 1 where the activation is positive, 0 elsewhere
        mask = x > 0
        x[mask] = 1
        x[~mask] = 0
        return x
    else:  # HERE YOU WERE MISSING "ELSE"
        return np.maximum(0, x)
```
And yes, this is much faster than the if/else loops you were using.
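If you still want to try ReLU on this problem, here is a hedged sketch of how the question's training loop could be adapted: ReLU only in the hidden layer, sigmoid kept on the output, and a small learning rate `alpha` (my assumption, not in the original code) so the unbounded ReLU activations do not blow up the weight updates. It reuses `X`, `y`, `syn0`, `syn1`, `nonlin` and the vectorized `relu` above:

```python
alpha = 0.01  # assumed learning rate; the original updates effectively used 1.0

for j in range(500000):
    l0 = X
    l1 = relu(np.dot(l0, syn0))        # ReLU only in the hidden layer
    l2 = nonlin(np.dot(l1, syn1))      # keep sigmoid on the output layer

    l2_error = y - l2
    l2_delta = l2_error * nonlin(l2, deriv=True)

    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * (l1 > 0)     # ReLU derivative: 1 where the activation is positive

    syn1 += alpha * l1.T.dot(l2_delta)
    syn0 += alpha * l0.T.dot(l1_delta)
```

Whether this actually beats the all-sigmoid version on such a tiny problem is exactly the point made above.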