I just wrote my first neural network class in Python. As far as I can tell everything should work, but there is a bug in it that I can't find (it's probably staring me right in the face). I first tried it on 10,000 examples of the MNIST data, then again when trying to replicate the sign function, and again when trying to replicate an XOR gate. Every time, regardless of the number of iterations, all of the output neurons (however many there are) always produce roughly the same output value, even though the cost function does appear to be decreasing. I am using batch gradient descent, implemented entirely with vectorized operations (no loop over individual training examples).
```python
#Neural Network Class

import numpy as np

class NeuralNetwork:

    #methods
    def __init__(self,layer_shape):

        #Useful Network Info
        self.__layer_shape = layer_shape
        self.__layers = len(layer_shape)

        #Initialize Random Weights
        self.__weights = []
        self.__weight_sizes = []
        for i in range(len(layer_shape)-1):
            current_weight_size = (layer_shape[i+1],layer_shape[i]+1)
            self.__weight_sizes.append(current_weight_size)
            self.__weights.append(np.random.normal(loc=0.1,scale=0.1,size=current_weight_size))

    def sigmoid(self,z):
        return (1/(1+np.exp(-z)))

    def sig_prime(self,z):
        return np.multiply(self.sigmoid(z),(1-self.sigmoid(z)))

    def Feedforward(self,input,Train=False):
        self.__input_cases = np.shape(input)[0]

        #Empty list to hold the output of every layer.
        output_list = []
        #Appends the output of the 1st input layer.
        output_list.append(input)

        for i in range(self.__layers-1):
            if i == 0:
                output = self.sigmoid(np.dot(np.concatenate((np.ones((self.__input_cases,1)),input),1),self.__weights[0].T))
                output_list.append(output)
            else:
                output = self.sigmoid(np.dot(np.concatenate((np.ones((self.__input_cases,1)),output),1),self.__weights[i].T))
                output_list.append(output)

        #Returns the final output if not training.
        if Train == False:
            return output_list[-1]
        #Returns the entire output_list if needed for training
        else:
            return output_list

    def CostFunction(self,input,target,error_func=1):
        """Gives the cost of using a particular weight matrix based off of the input and targeted output"""

        #Run the network to get output using current theta matrices.
        output = self.Feedforward(input)

        #####Allows user to choose Cost Functions.#####
        #
        #Log Based Error Function
        #
        if error_func == 0:
            error = np.multiply(-target,np.log(output))-np.multiply((1-target),np.log(1-output))
            total_error = np.sum(np.sum(error))
        #
        #Squared Error Cost Function
        #
        elif error_func == 1:
            error = (target - output)**2
            total_error = 0.5 * np.sum(np.sum(error))

        return total_error

    def Weight_Grad(self,input,target,output_list):

        #
        #Finds the Error Deltas for Each Layer
        #
        deltas = []
        for i in range(self.__layers - 1):

            #Finds Error Delta for the last layer
            if i == 0:
                error = (target-output_list[-1])
                error_delta = -1*np.multiply(error,np.multiply(output_list[-1],(1-output_list[-1])))
                deltas.append(error_delta)
            #Finds Error Delta for the hidden layers
            else:
                #Weight matrices have bias values removed
                error_delta = np.multiply(np.dot(deltas[-1],self.__weights[-i][:,1:]),output_list[-i-1]*(1-output_list[-i-1]))
                deltas.append(error_delta)

        #
        #Finds the Deltas for each Weight Matrix
        #
        Weight_Delta_List = []
        deltas.reverse()
        for i in range(len(self.__weights)):
            current_weight_delta = (1/self.__input_cases) * np.dot(deltas[i].T,np.concatenate((np.ones((self.__input_cases,1)),output_list[i]),1))
            Weight_Delta_List.append(current_weight_delta)
            #print("Weight",i,"Delta:","\n",current_weight_delta)
            #print()

        #
        #Combines all Weight Deltas into a single row vector
        #
        Weight_Delta_Vector = np.array([[]])
        for i in Weight_Delta_List:
            Weight_Delta_Vector = np.concatenate((Weight_Delta_Vector,np.reshape(i,(1,-1))),1)

        return Weight_Delta_List

    def Train(self,input_data,target):

        #
        #Gradient Checking:
        #

        #First Get Gradients from first iteration of Back Propagation
        output_list = self.Feedforward(input_data,Train=True)
        self.__input_cases = np.shape(input_data)[0]
        Weight_Delta_List = self.Weight_Grad(input_data,target,output_list)

        #Creates List of Gradient Approx arrays set to zero.
        grad_approx_list = []
        for i in self.__weight_sizes:
            current_grad_approx = np.zeros(i)
            grad_approx_list.append(current_grad_approx)

        #Compute Approx. Gradient for every Weight Change
        for W in range(len(self.__weights)):
            for index,value in np.ndenumerate(self.__weights[W]):
                #Saves the Original Value
                orig_value = self.__weights[W][index]
                print("Orig Value:", orig_value)

                #Sets weight to weight +/- epsilon
                self.__weights[W][index] = orig_value+.00001
                cost_plusE = self.CostFunction(input_data, target)

                self.__weights[W][index] = orig_value-.00001
                cost_minusE = self.CostFunction(input_data, target)

                #Solves for grad approx:
                grad_approx = (cost_plusE-cost_minusE)/(2*.00001)
                grad_approx_list[W][index] = grad_approx

                #Sets Weight Value back to its original value
                self.__weights[W][index] = orig_value

        #
        #Print Gradients from Back Prop. and Grad Approx. side-by-side:
        #
        print("Back Prop. Grad","\t","Grad. Approx")
        print("-"*15,"\t","-"*15)
        for W in range(len(self.__weights)):
            for index, value in np.ndenumerate(self.__weights[W]):
                print(self.__weights[W][index],"\t"*3,grad_approx_list[W][index])
        print("\n"*3)
        input_ = input("Press Enter to continue:")

        #
        #Perform Weight Updates for X number of Iterations
        #
        for i in range(10000):
            #Run the network
            output_list = self.Feedforward(input_data,Train=True)
            self.__input_cases = np.shape(input_data)[0]
            Weight_Delta_List = self.Weight_Grad(input_data,target,output_list)

            for w in range(len(self.__weights)):
                #print(self.__weights[w])
                #print(Weight_Delta_List[w])
                self.__weights[w] = self.__weights[w] - (.01*Weight_Delta_List[w])

        print("Done")
```
I even implemented gradient checking, and the values do differ. I also tried replacing the backpropagation updates with the gradient-checking approximations, but the results were the same, which makes me suspect even my gradient-checking code.
Here are some of the values produced while training on the XOR gate:
Backprop gradients: 0.0756102610697, 0.261814503398, 0.0292734023876
Approx. gradients: 0.05302210631166, 0.0416095559674, 0.0246847342122
Cost: before training: 0.508019225507; after training: 0.50007095103 (after 10,000 iterations)
Output for 4 different examples (after training): [ 0.49317733], [ 0.49294556], [ 0.50489004], [ 0.50465824]
So my question is: is there anything obviously wrong with my backpropagation or my gradient checking? Are there common causes when an ANN shows these symptoms (outputs roughly the same value / cost still decreasing)?
Answer:
I am not very good at reading Python code, but your gradient list for XOR contains 3 elements, corresponding to 3 weights. I assume these are the two inputs and one bias of a single neuron. If that is the case, such a network cannot learn XOR (the minimal neural network that can learn XOR needs two hidden neurons and one output unit). Now, looking at the Feedforward function: if np.dot computes what its name says (the dot product of two vectors) and sigmoid is a scalar, then this will always correspond to the output of a single neuron, and I do not see how you could add more neurons to a layer with this code.
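As a rough sketch only (assuming the NeuralNetwork class from the question, whose constructor builds one weight matrix of shape (next_layer, current_layer+1) per layer pair, so layer_shape [2, 1] yields exactly the 3 weights seen in your gradient list), the difference between the two topologies would be set up like this:

```python
import numpy as np

# Assumes the NeuralNetwork class defined in the question above.

# XOR truth table: 4 examples, 2 inputs each, one target per example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# layer_shape [2, 1] builds a single 1x3 weight matrix
# (2 inputs + 1 bias = the 3 gradients in the question);
# this topology cannot represent XOR no matter how long it trains.
single_neuron = NeuralNetwork([2, 1])

# Smallest XOR-capable topology: 2 inputs -> 2 hidden units -> 1 output
# (weight matrices of shape 2x3 and 1x3, i.e. 9 weights in total).
xor_capable = NeuralNetwork([2, 2, 1])
xor_capable.Train(X, y)   # may still get stuck in a local minimum (see point 1 below)
print(xor_capable.Feedforward(X))
```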
The following advice may help when debugging any newly implemented neural network:
1) Don't start with MNIST or even XOR. A perfectly fine implementation may fail to learn XOR because it can easily get stuck in a local minimum, and you could spend a lot of time hunting for an error that isn't there. A good starting point is the AND function, which can be learned by a single neuron.
2) Check the forward computation pass by calculating the results for a few examples by hand. That is easy to do with a small number of weights. Then try training with the numerical gradient. If that fails, then either your numerical gradient is wrong (check it by hand) or the training procedure is wrong. (It can fail if the learning rate is set too large, but otherwise training has to converge, since the error surface is convex.)
3) Once you can train with the numerical gradient, debug your analytical gradients (check the gradient per neuron, then the gradient for individual weights). Again, these can be computed by hand and compared with what you see; a sketch of steps 1)–3) follows after this list.
4) Upon completion of step 3, if everything works, add one hidden layer and repeat steps 2 and 3 with the AND function.
5) Once everything works with AND, you can move on to the XOR function and other more complicated tasks.
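Here is a minimal, self-contained sketch of steps 1)–3) for a single sigmoid neuron on AND, using the same squared-error cost as the question's CostFunction (error_func=1). The helper names (forward, numerical_grad, analytic_grad), the learning rate, and the iteration count are illustrative, not prescriptive:

```python
import numpy as np

# AND truth table: the single-neuron starting point from step 1).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, b):
    # Single neuron: sigmoid(X @ w + b), output shape (4, 1).
    return sigmoid(X @ w + b)

def cost(w, b):
    # Same squared-error cost as the question's CostFunction (error_func=1).
    return 0.5 * np.sum((y - forward(w, b)) ** 2)

def numerical_grad(w, b, eps=1e-5):
    # Central differences, one weight at a time (step 2).
    gw = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        gw[i] = (cost(w_plus, b) - cost(w_minus, b)) / (2 * eps)
    gb = (cost(w, b + eps) - cost(w, b - eps)) / (2 * eps)
    return gw, gb

def analytic_grad(w, b):
    # Hand-derived gradient of the squared-error cost (step 3):
    # dC/dz = -(y - out) * out * (1 - out), then chain through the inputs.
    out = forward(w, b)
    delta = -(y - out) * out * (1 - out)
    return X.T @ delta, np.sum(delta)

# Train with the numerical gradient first; if this converges, the
# cost/forward code is fine and any remaining bug is in backprop.
w = np.random.normal(0.1, 0.1, size=(2, 1))
b = 0.0
for _ in range(10000):
    gw, gb = numerical_grad(w, b)
    w -= 0.5 * gw
    b -= 0.5 * gb

print("cost after training:", cost(w, b))
print("outputs:", forward(w, b).ravel())
gw_num, _ = numerical_grad(w, b)
gw_ana, _ = analytic_grad(w, b)
print("numerical vs analytic weight grad:", gw_num.ravel(), gw_ana.ravel())
```

If the numerical and analytic gradients printed at the end agree but the vectorized class still misbehaves, the bug is more likely in how the class assembles the per-layer deltas and bias columns than in the math itself.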
This process may seem time-consuming, but it almost always produces a working neural network in the end.