I am trying to train a feed-forward neural network on the MNIST handwritten digit dataset (60,000 training samples).
Each time, I iterate over all training samples, performing backpropagation on each sample in every epoch. The runtime is, of course, far too long.
- Is the algorithm I am running called gradient descent?
I have read that for large datasets, using stochastic gradient descent can improve the runtime dramatically.
- What should I do to use stochastic gradient descent? Should I simply pick training samples at random and perform backpropagation on each randomly picked sample, instead of the epochs I currently use?
Answer:
The new scheme you describe (performing backpropagation on each randomly picked sample) is a common "variant" of stochastic gradient descent, as described here: https://www.quora.com/Whats-the-difference-between-gradient-descent-and-stochastic-gradient-descent
According to that source, the three most common variants are as follows (yours is variant C); a Python sketch of all three follows the list:
A)
```
randomly shuffle samples in the training set
for one or more epochs, or until approx. cost minimum is reached:
    for training sample i:
        compute gradients and perform weight updates
```
B)
```
for one or more epochs, or until approx. cost minimum is reached:
    randomly shuffle samples in the training set
    for training sample i:
        compute gradients and perform weight updates
```
C)
```
for iterations t, or until approx. cost minimum is reached:
    draw random sample from the training set
    compute gradients and perform weight updates
```
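To make the three loop structures concrete, here is a minimal, self-contained Python sketch. The `LinearModel` class and its `grad`/`step` interface are toy placeholders invented for illustration (standing in for your network and its backpropagation step); they are not part of any library.

```python
import numpy as np

class LinearModel:
    """Toy linear least-squares model, standing in for the asker's network."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def grad(self, x, y):
        # gradient of 0.5 * (w.x - y)^2 with respect to w
        return (self.w @ x - y) * x

    def step(self, g, lr):
        self.w -= lr * g                      # plain SGD weight update

def sgd_variant_a(model, X, y, epochs=10, lr=0.01):
    """Variant A: shuffle once, then sweep that fixed order every epoch."""
    rng = np.random.default_rng(0)
    order = rng.permutation(len(X))           # shuffled once, reused each epoch
    for _ in range(epochs):
        for i in order:
            model.step(model.grad(X[i], y[i]), lr)

def sgd_variant_b(model, X, y, epochs=10, lr=0.01):
    """Variant B: reshuffle at the start of every epoch."""
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        order = rng.permutation(len(X))       # fresh shuffle each epoch
        for i in order:
            model.step(model.grad(X[i], y[i]), lr)

def sgd_variant_c(model, X, y, iterations=100_000, lr=0.01):
    """Variant C (your scheme): draw a random sample each iteration."""
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        i = rng.integers(len(X))              # sampled with replacement
        model.step(model.grad(X[i], y[i]), lr)

if __name__ == "__main__":
    # tiny synthetic regression problem, just to show the functions run
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    true_w = np.arange(5.0)
    y = X @ true_w
    model = LinearModel(5)
    sgd_variant_b(model, X, y, epochs=20, lr=0.01)
    print(model.w)                            # should approach [0, 1, 2, 3, 4]
```

Note that one epoch of variants A/B over MNIST means 60,000 single-sample updates, with every sample visited exactly once. Variant C samples with replacement, so over any stretch of 60,000 iterations some samples may be visited several times and others not at all. In practice, most implementations use variant B, usually with mini-batches rather than single samples.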