MemoryError when doing machine learning with Python Pandas

I am sampling 100,000 rows from a larger DataFrame for machine-learning training and testing. Random samples of 30,000 to 60,000 rows produce the expected output, but once I go above 100,000 rows I get a MemoryError.

# coding=utf-8
import pandas as pd
from pandas import DataFrame, Series
import numpy as np
import nltk
import re
import random
from random import randint
import csv
import dask.dataframe as dd
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import Imputer

lr = LogisticRegression()
dv = DictVectorizer()
imp = Imputer(missing_values='NaN', strategy='most_frequent', axis=0)

# Get csv file into data frame
data = pd.read_csv("file.csv", header=0, encoding="utf-8")
df = DataFrame(data)

# Random sampling a smaller dataframe for debugging
rows = random.sample(df.index, 100000)
df = df.ix[rows] # Warning!!!! overwriting original df

# Assign X and y variables
X = df.raw_name.values
y = df.ethnicity2.values

# Feature extraction functions
def feature_full_last_name(nameString):
    try:
        last_name = nameString.rsplit(None, 1)[-1]
        if len(last_name) > 1: # not accept name with only 1 character
            return last_name
        else: return '?'
    except: return '?'

# Transform format of X variables, and spit out a numpy array for all features
my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
all_dict = my_dict

newX = dv.fit_transform(all_dict).toarray()

# Separate the training and testing data sets
half_cut = int(len(df)/2.0)*-1
X_train = newX[:half_cut]
X_test = newX[half_cut:]
y_train = y[:half_cut]
y_test = y[half_cut:]

# Fitting X and y into model, using training data
lr.fit(X_train, y_train)

# Making predictions using trained data
y_train_predictions = lr.predict(X_train)
y_test_predictions = lr.predict(X_test)

print (y_train_predictions == y_train).sum().astype(float)/(y_train.shape[0])
print (y_test_predictions == y_test).sum().astype(float)/(y_test.shape[0])

Error message:

Traceback (most recent call last):
  File "C:\Users\Dropbox\Python_Exercises\_Scraping\BeautifulSoup\FamilySearch.org\FamSearch_Analysis\MachineLearning\FamSearch_LogReg_GOOD8.py", line 93, in <module>
    newX = dv.fit_transform(all_dict).toarray()
  File "E:\Program Files Extra\Python27\lib\site-packages\scipy\sparse\compressed.py", line 942, in toarray
    return self.tocoo(copy=False).toarray(order=order, out=out)
  File "E:\Program Files Extra\Python27\lib\site-packages\scipy\sparse\coo.py", line 274, in toarray
    B = self._process_toarray_args(order, out)
  File "E:\Program Files Extra\Python27\lib\site-packages\scipy\sparse\base.py", line 793, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

Answer:

This line looks wrong:

newX = dv.fit_transform(all_dict).toarray()

Almost all estimators in scikit-learn support sparse input, yet here you are converting your sparse dataset to a dense one. Naturally that consumes a huge amount of memory: DictVectorizer produces one column per distinct feature value (here, one per distinct last name), so the dense array has n_samples × n_features entries, almost all of them zeros. Avoid the todense() and toarray() calls in your code and pass the sparse matrix to the estimator directly.
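A minimal sketch of the sparse pipeline, using hypothetical toy data in place of the questioner's name features (the dictionaries and labels below are made up for illustration):

```python
import scipy.sparse as sp
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the {'last-name': ...} feature dicts and labels
dicts = [{'last-name': 'smith'}, {'last-name': 'lee'},
         {'last-name': 'garcia'}, {'last-name': 'smith'}]
y = [0, 1, 1, 0]

dv = DictVectorizer()            # sparse=True is the default
X = dv.fit_transform(dicts)      # returns a scipy.sparse matrix -- no .toarray()
print(sp.issparse(X))            # True: only the nonzero entries are stored

# LogisticRegression accepts sparse matrices directly
lr = LogisticRegression()
lr.fit(X, y)
predictions = lr.predict(X)
print(predictions.shape)
```

The only change from the original code is dropping `.toarray()`; slicing into train/test halves works on the sparse matrix too (`X[:half_cut]`, `X[half_cut:]`), so memory stays proportional to the number of nonzero entries rather than n_samples × n_features.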
