MemoryError when doing machine learning in Python Pandas

I am trying to train and test a machine-learning model on 100,000 rows sampled from a larger DataFrame. Random samples of 30,000 to 60,000 rows produce the expected output, but as soon as I increase the sample to 100,000 rows or more I get a MemoryError.

# coding=utf-8
import pandas as pd
from pandas import DataFrame, Series
import numpy as np
import nltk
import re
import random
from random import randint
import csv
import dask.dataframe as dd
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import Imputer

lr = LogisticRegression()
dv = DictVectorizer()
imp = Imputer(missing_values='NaN', strategy='most_frequent', axis=0)

# Get csv file into data frame
data = pd.read_csv("file.csv", header=0, encoding="utf-8")
df = DataFrame(data)

# Random sampling a smaller dataframe for debugging
rows = random.sample(df.index, 100000)
df = df.ix[rows] # Warning!!!! overwriting original df

# Assign X and y variables
X = df.raw_name.values
y = df.ethnicity2.values

# Feature extraction functions
def feature_full_last_name(nameString):
    try:
        last_name = nameString.rsplit(None, 1)[-1]
        if len(last_name) > 1: # not accept name with only 1 character
            return last_name
        else:
            return '?'
    except:
        return '?'

# Transform format of X variables, and spit out a numpy array for all features
my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
all_dict = my_dict

newX = dv.fit_transform(all_dict).toarray()

# Separate the training and testing data sets
half_cut = int(len(df)/2.0)*-1
X_train = newX[:half_cut]
X_test = newX[half_cut:]
y_train = y[:half_cut]
y_test = y[half_cut:]

# Fitting X and y into model, using training data
lr.fit(X_train, y_train)

# Making predictions using trained data
y_train_predictions = lr.predict(X_train)
y_test_predictions = lr.predict(X_test)

print (y_train_predictions == y_train).sum().astype(float)/(y_train.shape[0])
print (y_test_predictions == y_test).sum().astype(float)/(y_test.shape[0])

Error message:

Traceback (most recent call last):
  File "C:\Users\Dropbox\Python_Exercises\_Scraping\BeautifulSoup\FamilySearch.org\FamSearch_Analysis\MachineLearning\FamSearch_LogReg_GOOD8.py", line 93, in <module>
    newX = dv.fit_transform(all_dict).toarray()
  File "E:\Program Files Extra\Python27\lib\site-packages\scipy\sparse\compressed.py", line 942, in toarray
    return self.tocoo(copy=False).toarray(order=order, out=out)
  File "E:\Program Files Extra\Python27\lib\site-packages\scipy\sparse\coo.py", line 274, in toarray
    B = self._process_toarray_args(order, out)
  File "E:\Program Files Extra\Python27\lib\site-packages\scipy\sparse\base.py", line 793, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

Answer:

This line looks wrong:

newX = dv.fit_transform(all_dict).toarray()

Almost every estimator in scikit-learn accepts sparse input, but here you are converting the sparse output of DictVectorizer into a dense array, which is exactly what exhausts memory. Avoid calling todense() or toarray() in this code.
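A minimal sketch of that fix, assuming the rest of the script stays unchanged: drop the .toarray() call and pass the sparse matrix from DictVectorizer (a scipy.sparse CSR matrix by default) straight into LogisticRegression. CSR matrices support row slicing, so the existing train/test split still works:

# Keep the DictVectorizer output sparse; fit_transform returns a
# scipy.sparse matrix unless sparse=False was requested.
newX = dv.fit_transform(all_dict)

# Row slicing works on the sparse matrix just as on a numpy array,
# so the split logic does not need to change.
half_cut = int(len(df)/2.0)*-1
X_train = newX[:half_cut]
X_test = newX[half_cut:]
y_train = y[:half_cut]
y_test = y[half_cut:]

# LogisticRegression accepts scipy sparse input directly.
lr.fit(X_train, y_train)

To see why the dense conversion fails, note that DictVectorizer creates one one-hot column per distinct surname. With 100,000 rows and, say, 20,000 distinct surnames (an illustrative figure, not taken from the question), np.zeros would have to allocate a 100,000 x 20,000 float64 array, roughly 16 GB, even though each row holds a single non-zero entry. The sparse matrix stores only those non-zeros.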
