I trained two gradient boosting models on the same dataset, one with Scikit-learn and one with XGBoost.
Scikit-learn model:

```python
GradientBoostingClassifier(n_estimators=5, learning_rate=0.17, max_depth=5, verbose=2)
```
XGBoost model:

```python
XGBClassifier(n_estimators=5, learning_rate=0.17, max_depth=5, verbosity=2, eval_metric="logloss")
```
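For reference, a minimal sketch of the training setup described above; the names `sk_model`, `xgb_model`, `X_train`, and `y_train` are placeholders and not part of the original post:

```python
# Hypothetical reconstruction of the two models above, fitted on the same data.
# X_train and y_train stand in for "the same dataset" and are not defined here.
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier

sk_model = GradientBoostingClassifier(n_estimators=5, learning_rate=0.17,
                                      max_depth=5, verbose=2)
xgb_model = XGBClassifier(n_estimators=5, learning_rate=0.17, max_depth=5,
                          verbosity=2, eval_metric="logloss")

sk_model.fit(X_train, y_train)
xgb_model.fit(X_train, y_train)
```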
Then I checked inference performance (per-call timing, see the sketch after this list):
- XGBoost: 9.7 ms ± 84.6 µs per loop
- Scikit-learn: 426 µs ± 12.5 µs per loop
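The "± ... per loop" format suggests these numbers came from IPython's `%timeit`. A roughly equivalent measurement with the standard `timeit` module might look like the sketch below, reusing the placeholder names `sk_model`, `xgb_model`, and a held-out matrix `X_test` (none of which are defined in the original post):

```python
# Rough stand-in for %timeit: best average wall-clock time of one predict() call.
# sk_model, xgb_model, and X_test are the placeholder names from the sketch above.
import timeit

def per_call_predict_time(model, X, repeat=7, number=100):
    """Return the best average time of a single model.predict(X) call, in seconds."""
    runs = timeit.repeat(lambda: model.predict(X), repeat=repeat, number=number)
    return min(runs) / number

print("sklearn :", per_call_predict_time(sk_model, X_test))
print("xgboost :", per_call_predict_time(xgb_model, X_test))
```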
Why is XGBoost so much slower?
Answer:
"Why is XGBoost so slow?": `XGBClassifier()` is the scikit-learn API wrapper for XGBoost (see https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier for details). Calling XGBoost directly, without going through the wrapper, is faster. To compare the two libraries fairly, it is better to call each one directly, rather than calling one directly and the other through a wrapper. Here is an example:
```python
# benchmark_xgboost_vs_sklearn.py
# Adapted from `xgboost_test.py` by Jacob Schreiber
# (https://gist.github.com/jmschrei/6b447aada61d631544cd)
"""Benchmarking scripts for XGBoost versus sklearn (time and accuracy)."""
import time
import random

import numpy as np
import xgboost as xgb
from sklearn.ensemble import GradientBoostingClassifier

random.seed(0)
np.random.seed(0)


def make_dataset(n=500, d=10, c=2, z=2):
    """
    Make a dataset with n samples per class, d dimensions and c classes,
    with a distance of z between classes in each dimension, making each
    feature equally informative.
    """
    # Generate our data and our labels
    X = np.concatenate([np.random.randn(n, d) + z * i for i in range(c)])
    y = np.concatenate([np.ones(n) * i for i in range(c)])

    # Generate a random indexing
    idx = np.arange(n * c)
    np.random.shuffle(idx)

    # Randomize the dataset, preserving data-label pairing
    X = X[idx]
    y = y[idx]

    # Return X_train, X_test, y_train, y_test
    return X[::2], X[1::2], y[::2], y[1::2]


def main():
    """
    Run sklearn's GradientBoostingClassifier, then XGBoost through its
    scikit-learn wrapper (xgb.XGBModel), then XGBoost directly via xgb.train.
    """
    # Generate the dataset
    X_train, X_test, y_train, y_test = make_dataset(10, z=100)

    n_estimators = 5
    max_depth = 5
    learning_rate = 0.17

    # sklearn first
    tic = time.time()
    clf = GradientBoostingClassifier(n_estimators=n_estimators,
                                     max_depth=max_depth,
                                     learning_rate=learning_rate)
    clf.fit(X_train, y_train)
    print("SKLearn GBClassifier: {}s".format(time.time() - tic))
    print("Acc: {}".format(clf.score(X_test, y_test)))
    print(y_test.sum())
    print(clf.predict(X_test))

    # Convert the data to DMatrix for xgboost
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dtest = xgb.DMatrix(X_test, label=y_test)

    # Loop through multiple thread counts for xgboost
    for threads in (1, 2, 4):
        print("{} threads:".format(threads))

        # XGBoost through its scikit-learn wrapper
        tic = time.time()
        clf = xgb.XGBModel(n_estimators=n_estimators, max_depth=max_depth,
                           learning_rate=learning_rate, n_jobs=threads)
        clf.fit(X_train, y_train)
        print("SKLearn XGBoost API Time: {}s".format(time.time() - tic))
        preds = np.round(clf.predict(X_test))
        acc = 1. - (np.abs(preds - y_test).sum() / y_test.shape[0])
        print("Acc: {}".format(acc))

        # XGBoost called directly, without the wrapper
        tic = time.time()
        param = {'max_depth': max_depth,
                 'eta': learning_rate,   # keep the learning rate consistent across all three models
                 'verbosity': 0,         # suppress per-iteration logging
                 'objective': 'binary:logistic',
                 'nthread': threads}
        bst = xgb.train(param, dtrain, n_estimators,
                        evals=[(dtest, 'eval'), (dtrain, 'train')])
        print("XGBoost (no wrapper) Time: {}s".format(time.time() - tic))
        preds = np.round(bst.predict(dtest))
        acc = 1. - (np.abs(preds - y_test).sum() / y_test.shape[0])
        print("Acc: {}".format(acc))


if __name__ == '__main__':
    main()
```
Summary of the results:
sklearn.ensemble.GradientBoostingClassifier()
- Time: 0.003237009048461914 s
- Accuracy: 1.0
XGBoost through the scikit-learn API wrapper (xgb.XGBModel() in the script above)
- Time: 0.3436141014099121 s
- Accuracy: 1.0
XGBoost without the wrapper, xgb.train()
- Time: 0.0028612613677978516 s
- Accuracy: 1.0
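If the goal is simply faster inference from an already-fitted `XGBClassifier` (the situation in the original question), note that much of the per-call cost of `.predict()` on small inputs is fixed overhead (input validation and data conversion) rather than tree traversal. One common way to reduce it, sketched below under the assumption of a fitted binary `xgb_model` and a NumPy array `X_test` (placeholder names, not objects from the answer above), is to build the `DMatrix` once and call the underlying `Booster` directly:

```python
# Sketch: reuse a pre-built DMatrix and the raw Booster to cut per-call wrapper
# overhead. xgb_model is an already-fitted binary XGBClassifier and X_test a
# NumPy array; both are assumptions, not defined in the answer above.
import xgboost as xgb

booster = xgb_model.get_booster()   # underlying Booster of the fitted wrapper
dtest = xgb.DMatrix(X_test)         # convert once, outside any timing loop

probs = booster.predict(dtest)            # per-row probabilities (binary:logistic)
labels = (probs > 0.5).astype(int)        # threshold to recover class labels
```

Whether this closes the whole gap depends on the XGBoost version and the data size, but for a handful of shallow trees on small inputs the difference measured above is most likely dominated by per-call overhead, not by the model itself.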