How do I classify text pairs with scikit-learn?

I have read many blog posts on this topic but never found a clear solution. My scenario is as follows:

  1. I have a set of text pairs, and each pair is labeled either 1 or -1.
  2. For each pair, I want the feature vector to be the concatenation of the two TF-IDF vectors: f(t1, t2) = tfidf(t1) "concat" tfidf(t2)

Any suggestions on how to achieve this? I have the following code, but it raises an error:

    count_vect = TfidfVectorizer(analyzer=u'char', ngram_range=ngram_range)
    X0_train_counts = count_vect.fit_transform([x[0] for x in training_documents])
    X1_train_counts = count_vect.fit_transform([x[1] for x in training_documents])
    combined_features = FeatureUnion([("x0", X0_train_counts), ("x1", X1_train_counts)])
    clf = LinearSVC().fit(combined_features, training_target)
    average_training_accuracy += clf.score(combined_features, training_target)

The error I get is the following:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)

    scoreEdgesUsingClassifier(None, pos, neg, 1, ngram_range=(2,5), max_size=1000000, test_size=100000)

    scoreEdgesUsingClassifier(unc, pos, neg, number_of_iterations, ngram_range, max_size, test_size)
              X0_train_counts = count_vect.fit_transform([x[0] for x in training_documents])
              X1_train_counts = count_vect.fit_transform([x[1] for x in training_documents])
              combined_features = FeatureUnion([("x0", X0_train_counts), ("x1", X1_train_counts)])
              print "Done transforming, now training classifier"

    lib/python2.7/site-packages/sklearn/pipeline.pyc in __init__(self, transformer_list, n_jobs, transformer_weights)
        616         self.n_jobs = n_jobs
        617         self.transformer_weights = transformer_weights
    --> 618         self._validate_transformers()
        619
        620     def get_params(self, deep=True):

    lib/python2.7/site-packages/sklearn/pipeline.pyc in _validate_transformers(self)
        660                 raise TypeError("All estimators should implement fit and "
        661                                 "transform. '%s' (type %s) doesn't" %
    --> 662                                 (t, type(t)))
        663
        664     def _iter(self):

    TypeError: All estimators should implement fit and transform. '  (0, 49025)    0.0575144797079
      (254741, 38401)    0.184394443164
      (254741, 201747)   0.186080393768
      (254741, 179231)   0.195062580945
      (254741, 156925)   0.211367771299
      (254741, 90026)    0.202458920022' (type <class 'scipy.sparse.csr.csr_matrix'>) doesn't

Update

Here is the solution:

    # imports needed for this snippet
    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    count_vect = TfidfVectorizer(analyzer=u'char', ngram_range=ngram_range)
    # vectorize both sides of every pair in one pass, then split and stack
    training_docs_combined = [x[0] for x in training_documents] + [x[1] for x in training_documents]
    X_train_counts = count_vect.fit_transform(training_docs_combined)
    concat_features = hstack((X_train_counts[0:len(training_docs_combined) / 2],
                              X_train_counts[len(training_docs_combined) / 2:]))
    clf = LinearSVC().fit(concat_features, training_target)
    average_training_accuracy += clf.score(concat_features, training_target)
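A side note, not part of the original update: to score held-out pairs, one would reuse the same fitted vectorizer with `transform` (rather than `fit_transform`) and stack the two halves in the same way. The `test_documents` and `test_target` names below are hypothetical placeholders:

    # hypothetical held-out pairs, built the same way as training_documents
    test_docs_combined = [x[0] for x in test_documents] + [x[1] for x in test_documents]
    # reuse the already-fitted vectorizer: transform only, no re-fitting
    X_test_counts = count_vect.transform(test_docs_combined)
    test_features = hstack((X_test_counts[0:len(test_docs_combined) // 2],
                            X_test_counts[len(test_docs_combined) // 2:]))
    average_test_accuracy = clf.score(test_features, test_target)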

Answer:

FeatureUnion in scikit-learn expects estimators, not data arrays.

You can simply concatenate the X0_train_counts and X1_train_counts matrices with scipy.sparse.hstack, or create two separate TfidfVectorizer instances, wrap them in a FeatureUnion, and then call its fit_transform method.
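For the second option, here is a minimal sketch of how a two-vectorizer FeatureUnion could be wired up; the selector functions, the toy data, and the Pipeline layout are illustrative assumptions rather than part of the original answer. The point is that the union receives transformers, which it fits itself, instead of precomputed matrices:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import FunctionTransformer
    from sklearn.svm import LinearSVC

    # hypothetical toy data: (text1, text2) pairs labeled 1 / -1
    training_documents = [("abcde", "vwxyz"), ("abcdf", "abcde")]
    training_target = [1, -1]

    def first_texts(pairs):
        # select the left-hand text of every pair
        return [p[0] for p in pairs]

    def second_texts(pairs):
        # select the right-hand text of every pair
        return [p[1] for p in pairs]

    # each branch picks one side of the pair, then vectorizes it;
    # FeatureUnion horizontally stacks the two TF-IDF matrices
    combined_features = FeatureUnion([
        ("x0", Pipeline([
            ("pick", FunctionTransformer(first_texts, validate=False)),
            ("tfidf", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
        ])),
        ("x1", Pipeline([
            ("pick", FunctionTransformer(second_texts, validate=False)),
            ("tfidf", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
        ])),
    ])

    X_train = combined_features.fit_transform(training_documents)
    clf = LinearSVC().fit(X_train, training_target)
    print(clf.score(X_train, training_target))

At prediction time, the same fitted combined_features object can then transform new pairs consistently via its transform method.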
