I've been looking into the "Restarting data prefetching from start" message in my log output. Apparently this means there isn't enough data, so prefetching has to start over from the beginning. However, my dataset has 10,000 samples and my batch size is 4. Since the batch size is 4, each iteration processes 4 samples, so why does the data need to be prefetched again? Can someone clarify my understanding?
The log is as follows:
I0409 20:33:35.053406 20072 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:35.053447 20074 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:40.320605 20074 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:40.320598 20072 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:45.591019 20072 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:45.591047 20074 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:49.392580 20034 solver.cpp:398] Test net output #0: loss = nan (* 1 = nan loss)
I0409 20:33:49.780678 20034 solver.cpp:219] Iteration 0 (-4.2039e-45 iter/s, 20.1106s/100 iters), loss = 54.0694
I0409 20:33:49.780731 20034 solver.cpp:238] Train net output #0: loss = 54.0694 (* 1 = 54.0694 loss)
I0409 20:33:49.780750 20034 sgd_solver.cpp:105] Iteration 0, lr = 0.0001
I0409 20:34:18.812854 20034 solver.cpp:219] Iteration 100 (3.44442 iter/s, 29.0325s/100 iters), loss = 21.996
I0409 20:34:18.813213 20034 solver.cpp:238] Train net output #0: loss = 21.996 (* 1 = 21.996 loss)
Answer:
If you have 10,000 samples and process them in batches of 4, then after 10,000 / 4 = 2,500 iterations you will have gone through all the data, and Caffe will restart reading it from the beginning.
Incidentally, one full pass over all the samples is also called an "epoch".
After each epoch, Caffe prints the following message to the log:
Restarting data prefetching from start
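The arithmetic above can be sketched in a few lines of Python. This is just an illustrative helper, not part of Caffe's API; the function name is made up:

```python
def iterations_per_epoch(num_samples, batch_size):
    """Number of solver iterations needed to consume the whole dataset once.

    After this many iterations, Caffe's data layer wraps around and logs
    "Restarting data prefetching from start". (When num_samples is not
    evenly divisible by batch_size, the wrap-around happens mid-batch,
    so this is exact only for evenly divisible datasets.)
    """
    return num_samples // batch_size

print(iterations_per_epoch(10_000, 4))  # 2500 iterations = 1 epoch
```

So with a batch size of 4, you should expect this message roughly every 2,500 training iterations, not every iteration.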