Sklearn – Found input variables with inconsistent numbers of samples: [16512, 4128]

While working through Chapter 2 of Hands-On Machine Learning with Scikit-Learn & TensorFlow, I ran into the error above. It occurs when I try to execute the following line:

linReg.fit(housingPrepared, housing_labels)

From researching online, it seems to be related to a mismatch between the dimensions of my features and my labels. Printing the shapes of housingPrepared (X) and housing_labels (Y) gives the following:

(16512, 16) (4128,)
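For reference, scikit-learn raises this ValueError whenever the feature matrix and the label vector have different numbers of rows. A minimal sketch with made-up arrays (not my actual data) reproduces the same message:

import numpy as np
from sklearn.linear_model import LinearRegression

X_demo = np.zeros((16512, 16))   # 16512 rows of features
y_demo = np.zeros(4128)          # only 4128 labels

# Raises: ValueError: Found input variables with inconsistent
# numbers of samples: [16512, 4128]
LinearRegression().fit(X_demo, y_demo)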

I spent an hour going through the chapter line by line to see whether I had skipped something, but found nothing. Does anyone here have an intuition about how to fix this?

Thanks in advance. Here is all of my code up to the point where the problem occurs:

import os
import tarfile
from six.moves import urllib
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from zlib import crc32
from sklearn.model_selection import train_test_split, StratifiedShuffleSplit
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import Imputer, OneHotEncoder, StandardScaler, LabelEncoder
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from CategoricalEncoder import CategoricalEncoder
from sklearn.linear_model import LinearRegression
from sklearn.utils.validation import check_array
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = os.path.join("datasets","housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"

def fetchHousingData(housingUrl=HOUSING_URL, housingPath=HOUSING_PATH):
    if not os.path.isdir(housingPath):
        os.makedirs(housingPath)
    tgzPath = os.path.join(housingPath, "housing.tgz")
    urllib.request.urlretrieve(housingUrl, tgzPath)
    housingTgz = tarfile.open(tgzPath)
    housingTgz.extractall(path=housingPath)
    housingTgz.close()

def loadHousingData(housingPath=HOUSING_PATH):
    return pd.read_csv("https://raw.githubusercontent.com/ageron/handson-ml/master/datasets/housing/housing.csv")

housing = loadHousingData()

#plt.hist(housing['longitude'],bins=50)
#plt.show()

def splitTrainTesT(data, testRatio):
    shuffled_indices = np.random.permutation(len(data))
    testSetSize = int(len(data)* testRatio)
    testIndices = shuffled_indices[:testSetSize]
    trainIndices = shuffled_indices[testSetSize:]
    return data.iloc[trainIndices], data.iloc[testIndices]

def testSetCheck(identifier, testRatio):
    return crc32(np.int64(identifier)) & 0xffffffff < testRatio * 2 ** 32

def splitTrainTestByID(data, testRatio, idColumn):
    ids = data[idColumn]
    inTestSet = ids.apply(lambda id_: testSetCheck(id_, testRatio))
    return data.loc[~inTestSet], data.loc[inTestSet]

#housingWithID = housing.reset_index()
#trainSet, testSet = splitTrainTestByID(housingWithID,0.2,"index")

trainSet, testSet = train_test_split(housing,test_size=0.2,random_state=42)

housing["income_cat"] = np.ceil(housing["median_income"]/1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)

#plt.hist(housing["income_cat"])
#plt.show()

split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for trainIndex, testIndex in split.split(housing, housing["income_cat"]):
    stratTrainSet = housing.loc[trainIndex]
    stratTestSet = housing.loc[testIndex]

for set in (stratTrainSet, stratTestSet):
    set.drop("income_cat", axis=1, inplace=True)

housing = stratTrainSet.copy()

#print(housing)
#plt.scatter(x=housing["latitude"],y=housing["longitude"], alpha=0.4)
#plt.show()

corr_matrix = housing.corr()
#print(corr_matrix["median_house_value"].sort_values(ascending=False))

#attribues = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
#scatter_matrix(housing[attribues], figsize=(12,8))
#plt.show()

""" PREPARING DATA FOR MACHINE LEARNING ALGORITHMS"""

housing = stratTrainSet.drop("median_house_value", axis=1)
housing_labels = stratTestSet["median_house_value"].copy()

housing.dropna(subset=["total_bedrooms"])

imputer = Imputer(strategy="median")
housingNum = housing.drop("ocean_proximity", axis=1)
imputer.fit(housingNum)
X = imputer.transform(housingNum)
housingTr = pd.DataFrame(X, columns=housingNum.columns)

housingCat = housing["ocean_proximity"]
housingCatEncoded, housingCategories = housingCat.factorize()
encoder = OneHotEncoder()
housingCat1Hot = encoder.fit_transform(housingCatEncoded.reshape(-1,1))

"""Custom Transformers For Rooms Per Household, etc"""

roomsIX, bedroomsIX, populationIX, householdsIX = 3,4,5,6

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    def __init__(self, addBedroomsPerRoom = True):
        self.addBedroomsPerRoom = addBedroomsPerRoom
    def fit(self, X, y=None):
        return self
    def transform(self, X, y=None):
        roomsPerHousehold = X[:,roomsIX]/X[:,householdsIX]
        populationPerHousehold = X[:,populationIX]/X[:,householdsIX]
        if self.addBedroomsPerRoom:
            bedroomsPerRoom = X[:,bedroomsIX]/X[:,roomsIX]
            return np.c_[X, roomsPerHousehold, populationPerHousehold, bedroomsPerRoom]
        else:
            return np.c_[X, roomsPerHousehold, populationPerHousehold]

attrAdder = CombinedAttributesAdder(addBedroomsPerRoom=False)
housingExtraAttribs = attrAdder.transform(housing.values)

numPipeline = Pipeline([('imputer', Imputer(strategy='median')),
                        ('attribs_adder', CombinedAttributesAdder()),
                        ('std_scaler', StandardScaler()),
                        ])

housingNumTr = numPipeline.fit_transform(housingNum)

class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attributeNames):
        self.attributeNames = attributeNames
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attributeNames].values

numAttribs = list(housingNum)
catAttribs = ["ocean_proximity"]

numPipeline = Pipeline([('selector', DataFrameSelector(numAttribs)),
                        ('imputer', Imputer(strategy='median')),
                        ('attribs_adder', CombinedAttributesAdder()),
                        ('std_scaler', StandardScaler()),])

"""UPDATE SKLEARN TO INCLUDE CATEGORICAL ENCODER LIBRARY"""

catPipeline = Pipeline([('selector', DataFrameSelector(catAttribs)),
                        ('cat_encoder', CategoricalEncoder(encoding='onehot-dense')),
                        ])

fullPipeline = FeatureUnion(transformer_list=[("num_pipeline", numPipeline), ("cat_pipeline", catPipeline),])

housingPrepared = fullPipeline.fit_transform(housing)

linReg = LinearRegression()
print(housingPrepared.shape, housing_labels.shape)
linReg.fit(housingPrepared, housing_labels)

Answer:

I think the problem is in these two lines:

housing = stratTrainSet.drop("median_house_value", axis=1)
housing_labels = stratTestSet["median_house_value"].copy()

Change them to:

housing = stratTrainSet.drop("median_house_value", axis=1)
housing_labels = stratTrainSet["median_house_value"].copy()

That fixes it: housingPrepared is built from the stratified training set, which has 16512 rows, but the labels were being taken from the test set, which has only 4128 rows, hence the mismatched sample counts.
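As a quick sanity check (a sketch that assumes the same variable names as in the question), you can verify that the row counts line up before calling fit:

# Both shapes should now report 16512 samples: (16512, 16) and (16512,)
print(housingPrepared.shape, housing_labels.shape)
assert housingPrepared.shape[0] == housing_labels.shape[0]
linReg.fit(housingPrepared, housing_labels)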
