How do I standardize rank data in scikit-learn?

I'm doing machine learning and need help with one aspect of my code. In my training data I have a number of webpage URLs along with some features for those pages, and I'm running TF-IDF on the page text to create additional features.

One of the features I extract is the Google page rank of each URL. This value can be any number, but the lower the rank, the "higher quality" Google has deemed the page.

Given that I have 7,000 URLs and the ranks can vary enormously (for example, www.google.com may be ranked #1 while www.bbc.co.uk may be ranked #1,117, and other ranks may fall far outside our 7,000 URLs), how do I standardize this value?

How can I use scikit-learn to standardize this data effectively for use in my machine-learning algorithm? I'm running a logistic regression that simply tries to predict whether a webpage is "good" or not. At the moment the only features I use are the ones created by running TF-IDF on the page text. Ideally I'd like to combine these with my page-rank feature to get the highest possible cross-validation score.
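For what it's worth, my current guess is that I should log-transform the ranks (since they span several orders of magnitude) and then standardize the resulting column, something like the sketch below. The rank values here are made up, and I don't know if this is the right approach:

    import numpy as np
    from sklearn import preprocessing

    # hypothetical rank values for a handful of URLs
    ranks = np.array([1.0, 1117.0, 5.0, 300000.0])

    # compress the huge spread of ranks before scaling
    log_ranks = np.log1p(ranks)

    # StandardScaler expects a 2D array: one column, one row per sample
    scaler = preprocessing.StandardScaler()
    scaled_ranks = scaler.fit_transform(log_ranks.reshape(-1, 1))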

Many thanks!

We can assume my data is in TSV format:

URL GooglePageRank WebsiteText

An example row:

http://www.google.com 1 This would be the text of the google webpage.

I'd like to standardize my rank data and use it in my logistic regression. At the moment I only use the "WebsiteText" column, run TF-IDF on it, and feed that into my logistic regression. I want to learn how to combine this column with my standardized GooglePageRank column and use both columns in my logistic regression. How can I do this?
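My vague understanding is that TfidfVectorizer produces a sparse matrix, so perhaps the scaled rank column can simply be appended to it with scipy.sparse.hstack before fitting. Here is a self-contained sketch of what I mean, with made-up documents, ranks, and labels; I'm not sure it's the recommended way:

    import numpy as np
    from scipy import sparse
    from sklearn import preprocessing
    from sklearn.feature_extraction.text import TfidfVectorizer
    import sklearn.linear_model as lm

    docs = ["This would be the text of the google webpage.",
            "Some other webpage text with different words."]
    ranks = np.array([[1.0], [1117.0]])   # one page rank per document
    y = np.array([1, 0])                  # made-up quality labels

    tfv = TfidfVectorizer()
    X_tfidf = tfv.fit_transform(docs)     # sparse, n_samples x n_terms

    # log-transform and standardize the rank column
    scaled = preprocessing.StandardScaler().fit_transform(np.log1p(ranks))

    # append the standardized rank as one extra column
    X_combined = sparse.hstack([X_tfidf, sparse.csr_matrix(scaled)]).tocsr()

    rd = lm.LogisticRegression()
    rd.fit(X_combined, y)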

Here is my code so far:

    import numpy as np
    from sklearn import metrics, preprocessing, cross_validation
    from sklearn.feature_extraction.text import TfidfVectorizer
    import sklearn.linear_model as lm
    import pandas as p

    loadData = lambda f: np.genfromtxt(open(f, 'r'), delimiter=' ')

    print "loading data.."
    traindata = list(np.array(p.read_table('train.tsv'))[:, 2])
    testdata = list(np.array(p.read_table('test.tsv'))[:, 2])
    y = np.array(p.read_table('train.tsv'))[:, -1]

    tfv = TfidfVectorizer(min_df=3, max_features=None, strip_accents='unicode',
                          analyzer='word', token_pattern=r'\w{1,}', ngram_range=(1, 2),
                          use_idf=1, smooth_idf=1, sublinear_tf=1)

    rd = lm.LogisticRegression(penalty='l2', dual=True, tol=0.0001,
                               C=1, fit_intercept=True, intercept_scaling=1.0,
                               class_weight=None, random_state=None)

    X_all = traindata + testdata
    lentrain = len(traindata)

    print "fitting pipeline"
    tfv.fit(X_all)
    print "transforming data"
    X_all = tfv.transform(X_all)

    X = X_all[:lentrain]
    X_test = X_all[lentrain:]

    print "20 Fold CV Score: ", np.mean(cross_validation.cross_val_score(rd, X, y, cv=20, scoring='roc_auc'))

    print "training on full data"
    rd.fit(X, y)
    pred = rd.predict_proba(X_test)[:, 1]

    testfile = p.read_csv('test.tsv', sep="\t", na_values=['?'], index_col=1)
    pred_df = p.DataFrame(pred, index=testfile.index, columns=['label'])
    pred_df.to_csv('benchmark.csv')
    print "submission file created.."

*EDIT:*

Here is the code I'm currently running:

    import numpy as np
    from sklearn import metrics, preprocessing, cross_validation
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_extraction import DictVectorizer
    import sklearn.preprocessing
    import sklearn.linear_model as lm
    import pandas as p

    loadData = lambda f: np.genfromtxt(open(f, 'r'), delimiter=',')

    print "loading data.."
    # load train/test data for TF-IDF -- I know this is bad practice, but keeping it this way for the moment!
    traindata = list(np.array(p.read_csv('FinalCSVFin.csv', delimiter=";"))[:, 2])
    testdata = list(np.array(p.read_csv('FinalTestCSVFin.csv', delimiter=";"))[:, 2])

    # load labels
    y = np.array(p.read_csv('FinalCSVFin.csv', delimiter=";"))[:, -2]

    # load integer values and append together
    AllAlexaInfo = np.array(p.read_csv('FinalCSVFin.csv', delimiter=";"))[:, -1]

    # make tfidf object
    tfv = TfidfVectorizer(min_df=1, max_features=None, strip_accents='unicode',
                          analyzer='word', token_pattern=r'\w{1,}', ngram_range=(1, 2),
                          use_idf=1, smooth_idf=1, sublinear_tf=1)
    div = DictVectorizer()
    X = []
    X_all = traindata + testdata
    lentrain = len(traindata)

    # fit/transform the TfidfVectorizer on the training data
    vect = tfv.fit_transform(X_all)  # bad practice, but using this for the moment!

    for i, alexarank in enumerate(AllAlexaInfo):
        feature_dict = {'alexarank': AllAlexaInfo}
        # get ith row of the tfidf matrix (corresponding to sample)
        row = vect.getrow(i)
        # filter the feature names corresponding to the sample
        all_words = tfv.get_feature_names()
        words = [all_words[ind] for ind in row.indices]
        # associate each word (feature) with its corresponding score
        word_score = dict(zip(words, row.data))
        # concatenate the word feature/score with the datamining feature/value
        X.append(dict(word_score.items() + feature_dict.items()))

    div.fit_transform(X)  # training data based on both Tfidf features and pagerank

    sc = preprocessing.StandardScaler().fit(X)
    X = sc.transform(X)
    X_test = X_all[lentrain:]
    X_test = sc.transform(X_test)

    print "20 Fold CV Score: ", np.mean(cross_validation.cross_val_score(rd, X, y, cv=20, scoring='roc_auc'))
    print "training on full data"
    rd.fit(X, y)
    pred = rd.predict_proba(X_test)[:, 1]

    testfile = p.read_csv('test.tsv', sep="\t", na_values=['?'], index_col=1)
    pred_df = p.DataFrame(pred, index=testfile.index, columns=['label'])
    pred_df.to_csv('benchmark.csv')
    print "submission file created.."

This appears to run forever, and I think I have a problem with how the "alexarank" value is being fed in. How can I fix this?
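If I had to guess, the problem is the line that stores the whole AllAlexaInfo array in every sample's dict instead of the single value, plus the fact that tfv.get_feature_names() is recomputed on every one of the 7,000 iterations. Reusing the variables from the code above, is the fix something like this?

    all_words = tfv.get_feature_names()  # compute once, outside the loop

    for i, alexarank in enumerate(AllAlexaInfo):
        feature_dict = {'alexarank': alexarank}  # the single value, not the whole array
        row = vect.getrow(i)
        words = [all_words[ind] for ind in row.indices]
        word_score = dict(zip(words, row.data))
        X.append(dict(word_score.items() + feature_dict.items()))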


Answer:

As per your answer to my comment, I would proceed accordingly:

    tfv = TfidfVectorizer(
        min_df=3,
        max_features=None,
        strip_accents='unicode',
        analyzer='word',
        token_pattern=r'\w{1,}',
        ngram_range=(1, 2),
        use_idf=1,
        smooth_idf=1,
        sublinear_tf=1)
    div = DictVectorizer()
    X = []

    # fit/transform the TfidfVectorizer on the training data
    vectors = tfv.fit_transform(traindata)

    # pageranks: one rank value per training document
    for i, pagerank in enumerate(pageranks):
        feature_dict = {'pagerank': pagerank}
        # get ith row of the tfidf matrix (corresponding to sample)
        row = vectors.getrow(i)
        # filter the feature names corresponding to the sample
        all_words = tfv.get_feature_names()
        words = [all_words[ind] for ind in row.indices]
        # associate each word (feature) with its corresponding score
        word_score = dict(zip(words, row.data))
        # concatenate the word feature/score with the datamining feature/value
        X.append(dict(word_score.items() + feature_dict.items()))

    div.fit_transform(X)  # training data based on both Tfidf features and pagerank
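DictVectorizer then turns that list of dicts into a single matrix, with one column per word plus one column for the pagerank, and that matrix is what goes into the classifier. Note that fit_transform returns the matrix, so its result needs to be kept. A minimal self-contained illustration of the idea, with made-up word scores:

    from sklearn.feature_extraction import DictVectorizer
    import sklearn.linear_model as lm

    X = [{'hello': 0.5, 'world': 0.3, 'pagerank': 1.0},
         {'goodbye': 0.7, 'pagerank': 1117.0}]
    y = [1, 0]

    div = DictVectorizer()
    X_vec = div.fit_transform(X)   # sparse matrix, one column per distinct key

    rd = lm.LogisticRegression()
    rd.fit(X_vec, y)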
