For the code below, my R-squared score is coming out to be negative, but my accuracy score using k-fold cross-validation is 92%. How is this possible? I am using a random forest regression algorithm to predict some data. The dataset is linked below: https://www.kaggle.com/ludobenistant/hr-analytics
```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

dataset = pd.read_csv("HR_comma_sep.csv")
x = dataset.iloc[:,:-1].values  ##Independent variable
y = dataset.iloc[:,9].values    ##Dependent variable

##Encoding the categorical variables
le_x1 = LabelEncoder()
x[:,7] = le_x1.fit_transform(x[:,7])
le_x2 = LabelEncoder()
x[:,8] = le_x1.fit_transform(x[:,8])
ohe = OneHotEncoder(categorical_features = [7,8])
x = ohe.fit_transform(x).toarray()

##splitting the dataset in training and testing data
from sklearn.cross_validation import train_test_split
y = pd.factorize(dataset['left'].values)[0].reshape(-1, 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0)

from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)

sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train)

from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(n_estimators = 10, random_state = 0)
regressor.fit(x_train, y_train)

y_pred = regressor.predict(x_test)
print(y_pred)

from sklearn.metrics import r2_score
r2_score(y_test, y_pred)

from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = regressor, X = x_train, y = y_train, cv = 10)
accuracies.mean()
accuracies.std()
```
Answer:

There are several issues with your question...

To start with, you are making a very fundamental mistake: you think that you are using accuracy as the metric, while in reality you are in a regression setting, and the metric actually being used is the mean squared error (MSE).

Accuracy is a classification metric; it is the percentage of correctly classified examples. See the Wikipedia entry for more details.
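To make the distinction concrete, here is a minimal sketch with made-up toy labels (not the HR dataset): accuracy counts exact label matches, so it only makes sense for discrete class predictions, while R-squared compares continuous predictions against the variance of the target and can even go negative.

```python
from sklearn.metrics import accuracy_score, r2_score

y_true = [0, 1, 1, 0, 1]

# Classification: discrete predicted labels -> accuracy is the fraction correct
y_clf_pred = [0, 1, 0, 0, 1]
print(accuracy_score(y_true, y_clf_pred))  # 0.8 (4 of 5 labels match)

# Regression: continuous predictions -> accuracy is meaningless; use R-squared
y_reg_pred = [0.1, 0.9, 0.4, 0.2, 0.8]
print(r2_score(y_true, y_reg_pred))  # ~0.617; any value <= 1, possibly negative
```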
The metric used internally by your chosen regressor (Random Forest) is included in the verbose output of your regressor.fit(x_train, y_train) command; notice the criterion='mse' argument:
```python
RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
           max_features='auto', max_leaf_nodes=None, min_impurity_split=1e-07,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
           oob_score=False, random_state=0, verbose=0, warm_start=False)
```
MSE is a positive, continuous quantity, and it is not bounded above by 1; i.e., if you get a value of 0.92, this means... well, 0.92, and not 92%.
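A quick sketch with made-up numbers illustrates the point: MSE is an average of squared errors, so its scale depends entirely on the target's units, and nothing caps it at 1.

```python
from sklearn.metrics import mean_squared_error

# Hypothetical targets on a scale of hundreds (not the HR dataset)
y_true = [100.0, 200.0, 300.0]
y_pred = [110.0, 190.0, 330.0]

mse = mean_squared_error(y_true, y_pred)
print(mse)  # (10**2 + 10**2 + 30**2) / 3 = 366.67 -- clearly not a percentage
```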
Knowing this, it is good practice to include MSE explicitly as the scoring function of your cross-validation:

```python
cv_mse = cross_val_score(estimator = regressor, X = x_train, y = y_train, cv = 10, scoring='neg_mean_squared_error')
cv_mse.mean()
# -2.433430574463703e-28
```
For all practical purposes, this is zero: you fit the training set almost perfectly. To confirm, here is your (again, perfect) R-squared score on the training set:

```python
train_pred = regressor.predict(x_train)
r2_score(y_train, train_pred)
# 1.0
```
But, as always, the moment of truth comes when you apply your model to the test set; your second mistake here is that, since you train your regressor with a scaled y_train, you should also scale y_test before evaluating:

```python
y_test = sc_y.transform(y_test)
r2_score(y_test, y_pred)
# 0.9998476914664215
```
and you get a very good R-squared on the test set as well (close to 1).

What about the MSE?
```python
from sklearn.metrics import mean_squared_error
mse_test = mean_squared_error(y_test, y_pred)
mse_test
# 0.00015230853357849051
```
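The two results above are two views of the same fit: for scikit-learn's definitions, R^2 = 1 - MSE / Var(y_test), so an MSE that is tiny relative to the target's variance is exactly what an R-squared near 1 means. A hedged sketch with small toy arrays (not the HR dataset) checks the identity:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Toy targets and predictions for illustration only
y_test = np.array([0.5, -1.2, 0.9, 0.1, -0.3])
y_pred = np.array([0.4, -1.1, 1.0, 0.0, -0.2])

r2_direct = r2_score(y_test, y_pred)
# np.var uses ddof=0, matching r2_score's sum-of-squares denominator
r2_from_mse = 1 - mean_squared_error(y_test, y_pred) / np.var(y_test)
print(np.isclose(r2_direct, r2_from_mse))  # True
```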