I am trying to use SGDClassifier in scikit-learn version 0.15.1. There seems to be no way to set a convergence criterion other than the number of iterations. I therefore want to implement one manually: check the error after each pass, then warm-start additional iterations until the improvement is small enough.

Unfortunately, neither the warm_start flag nor coef_init/intercept_init seems to actually warm-start the optimization: they all appear to start from scratch.

What should I do? Without a real convergence criterion or a working warm start, the classifier is unusable.

Note in the output below that on every restart the bias jumps sharply and the loss increases, then falls again with further iterations. After 250 iterations the bias is -3.44 and the average loss is 1.46.
sgd = SGDClassifier(loss='log', alpha=alpha, verbose=1, shuffle=True, warm_start=True)
print('INITIAL FIT')
sgd.fit(X, y, sample_weight=sample_weight)
sgd.n_iter = 1
print('\nONE MORE ITERATION')
sgd.fit(X, y, sample_weight=sample_weight)
sgd.n_iter = 3
print('\nTHREE MORE ITERATIONS')
sgd.fit(X, y, sample_weight=sample_weight)

Output:

INITIAL FIT
-- Epoch 1
Norm: 254.11, NNZs: 92299, Bias: -5.239955, T: 122956, Avg. loss: 28.103236
Total training time: 0.04 seconds.
-- Epoch 2
Norm: 138.81, NNZs: 92598, Bias: -5.180938, T: 245912, Avg. loss: 16.420537
Total training time: 0.08 seconds.
-- Epoch 3
Norm: 100.61, NNZs: 92598, Bias: -5.082776, T: 368868, Avg. loss: 12.240537
Total training time: 0.12 seconds.
-- Epoch 4
Norm: 74.18, NNZs: 92598, Bias: -5.076395, T: 491824, Avg. loss: 9.859404
Total training time: 0.17 seconds.
-- Epoch 5
Norm: 55.57, NNZs: 92598, Bias: -5.072369, T: 614780, Avg. loss: 8.280854
Total training time: 0.21 seconds.

ONE MORE ITERATION
-- Epoch 1
Norm: 243.07, NNZs: 92598, Bias: -11.271497, T: 122956, Avg. loss: 26.148746
Total training time: 0.04 seconds.

THREE MORE ITERATIONS
-- Epoch 1
Norm: 258.70, NNZs: 92598, Bias: -16.058395, T: 122956, Avg. loss: 29.666688
Total training time: 0.04 seconds.
-- Epoch 2
Norm: 142.24, NNZs: 92598, Bias: -15.809559, T: 245912, Avg. loss: 17.435114
Total training time: 0.08 seconds.
-- Epoch 3
Norm: 102.71, NNZs: 92598, Bias: -15.715853, T: 368868, Avg. loss: 12.731181
Total training time: 0.12 seconds.
Answer:
warm_start=True will use the previously fitted coefficients as the starting point, but it restarts the learning-rate schedule, which is why the loss jumps on each restart. If you want to check convergence manually, I suggest using partial_fit instead of fit, as @AdrienNK suggested:
sgd = SGDClassifier(loss='log', alpha=alpha, verbose=1, shuffle=True, warm_start=True, n_iter=1)
sgd.partial_fit(X, y)  # after 1st iteration
sgd.partial_fit(X, y)  # after 2nd iteration
...