I was surprised to get a negative score when predicting with RandomForestRegressor, using the default scorer (the coefficient of determination, R^2). Any help would be appreciated. My dataset looks like this. Dataset screenshot here
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score, RandomizedSearchCV, train_test_split
import numpy as np, pandas as pd, pickle

dataframe = pd.read_csv("../../notebook/car-sales.csv")
y = dataframe["Price"].str.replace("[\$\.\,]", "").astype(int)
x = dataframe.drop("Price", axis=1)

cat_features = ["Make", "Colour", "Doors"]
oneencoder = OneHotEncoder()
transformer = ColumnTransformer([("onehot", oneencoder, cat_features)], remainder="passthrough")
transformered_x = transformer.fit_transform(x)
transformered_x = pd.get_dummies(dataframe[cat_features])

x_train, x_test, y_train, y_test = train_test_split(transformered_x, y, test_size=.2)
regressor = RandomForestRegressor(n_estimators=100)
regressor.fit(x_train, y_train)
regressor.score(x_test, y_test)
Answer:
With a few small modifications to your code, I reached a score of 89%. You were very close, and you did a good job. Nice result!
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
import pandas as pd

dataframe = pd.read_csv("car-sales.csv")
dataframe.head()

y = dataframe["Price"].str.replace("[\$\.\,]", "").astype(int)
x = dataframe.drop("Price", axis=1)

cat_features = ["Make", "Colour", "Odometer", "Doors"]
oneencoder = OneHotEncoder()
transformer = ColumnTransformer([("onehot", oneencoder, cat_features)], remainder="passthrough")
transformered_x = transformer.fit_transform(x)
transformered_x = pd.get_dummies(dataframe[cat_features])

x_train, x_test, y_train, y_test = train_test_split(transformered_x, y, test_size=.2, random_state=3)

# criterion="mse" is the pre-1.2 scikit-learn name; newer versions call it "squared_error"
forest = RandomForestRegressor(n_estimators=200, criterion="mse", min_samples_leaf=3, min_samples_split=3, max_depth=10)
forest.fit(x_train, y_train)

# Explained variance score: 1 is perfect prediction
print('Score: %.2f' % forest.score(x_test, y_test, sample_weight=None))
print(forest.score(x_test, y_test))
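As a side note, a cleaner way to structure this (a minimal sketch, assuming the same "car-sales.csv" file and column names) is to wrap the ColumnTransformer and the regressor in a single Pipeline. Then the one-hot encoder is fit only on the training split, the numeric Odometer column is passed through unchanged, and the extra get_dummies call becomes unnecessary:

from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import pandas as pd

dataframe = pd.read_csv("car-sales.csv")  # assumed path, as in the answer above
y = dataframe["Price"].str.replace(r"[\$\.\,]", "", regex=True).astype(int)
x = dataframe.drop("Price", axis=1)

cat_features = ["Make", "Colour", "Doors"]

# Encode the categorical columns; leave the rest (e.g. Odometer) untouched
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), cat_features)],
    remainder="passthrough",
)

# The pipeline fits the encoder on the training data only, avoiding leakage
model = Pipeline([
    ("preprocess", preprocess),
    ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
])

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=3)
model.fit(x_train, y_train)
print(model.score(x_test, y_test))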
I think the negative score comes from extreme overfitting caused by the very small amount of data.
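To illustrate why a tiny dataset can produce a negative R^2, here is a self-contained sketch on synthetic data (not the car-sales set): with only a handful of noisy rows there is no real signal to learn, and the forest's held-out score is often below zero.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 10 rows of pure noise: the features carry no information about y
X = rng.normal(size=(10, 3))
y = rng.normal(size=10)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# With so little data, the test-set R^2 is frequently negative,
# i.e. the model does worse than always predicting the mean of y_test
print(forest.score(X_test, y_test))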
This is straight from the scikit-learn documentation:
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.
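As a quick worked example of that definition (hypothetical numbers, chosen only to show that the sign can flip):

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([10.0, 12.0, 14.0])
y_pred = np.array([20.0, 5.0, 30.0])  # a badly wrong model

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares = 405
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares = 8

print(1 - u / v)                 # manual R^2, strongly negative here
print(r2_score(y_true, y_pred))  # same value from sklearn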
I expanded the dataset to 100 rows and removed the surrogate key (the first column's integer ID running from 0 to 99), and got the following result: