Python (sklearn) – Why does SVR give me the same prediction for every test tuple?

Answers to similar questions on Stack Overflow suggest changing the parameter values in the SVR() instance, but I don't know how to work with these parameters.

Here is the code I am using:

import json
import numpy as np
from sklearn.svm import SVR

f = open('training_data.txt', 'r')
data = json.loads(f.read())
f.close()

f = open('predict_py.txt', 'r')
data1 = json.loads(f.read())
f.close()

features = []
response = []
predict = []

for row in data:
    a = [
        row['star_power'],
        row['view_count'],
        row['like_count'],
        row['dislike_count'],
        row['sentiment_score'],
        row['holidays'],
        row['clashes'],
    ]
    features.append(a)
    response.append(row['collection'])

for row in data1:
    a = [
        row['star_power'],
        row['view_count'],
        row['like_count'],
        row['dislike_count'],
        row['sentiment_score'],
        row['holidays'],
        row['clashes'],
    ]
    predict.append(a)

X = np.array(features).astype(float)
Y = np.array(response).astype(float)
predict = np.array(predict).astype(float)

svm = SVR()
svm.fit(X, Y)

print('svm prediction')
svm_pred = svm.predict(predict)
print(svm_pred)

Here are links to the two text files used in the code:

training_data.txt

predict_py.txt

Output:

svm prediction
[ 36.07  36.07  36.07  36.07  36.07  36.07  36.07  36.07  36.07  36.07
  36.07  36.07  36.07]

Adding samples of the two text files, as requested:

1) training_data.txt:

[{"star_power":"1300","view_count":"50602729","like_count":"348059","dislike_count":"31748","holidays":"1","clashes":"0","sentiment_score":"0.32938596491228","collection":"383"},{"star_power":"1700","view_count":"36012808","like_count":"205694","dislike_count":"20130","holidays":"0","clashes":"0","sentiment_score":"0.1130303030303","collection":"300.68"},{"star_power":"0","view_count":"23892902","like_count":"86380","dislike_count":"4426","holidays":"0","clashes":"0","sentiment_score":"0.16004079254079","collection":"188.72"},{"star_power":"0","view_count":"27177685","like_count":"374671","dislike_count":"10372","holidays":"0","clashes":"0","sentiment_score":"0.16032407407407","collection":"132.85"},{"star_power":"500","view_count":"7481738","like_count":"42734","dislike_count":"1885","holidays":"0","clashes":"0","sentiment_score":"0.38622493734336","collection":"128.45"},{"star_power":"400","view_count":"16895259","like_count":"99158","dislike_count":"4188","holidays":"0","clashes":"0","sentiment_score":"0.22791203703704","collection":"127.48"},{"star_power":"200","view_count":"16646480","like_count":"63472","dislike_count":"13652","holidays":"1","clashes":"1","sentiment_score":"0.16873480902778","collection":"112.14"},{"star_power":"400","view_count":"18717042","like_count":"67497","dislike_count":"14165","holidays":"0","clashes":"0","sentiment_score":"0.30881006493506","collection":"109.14"}]

2) predict_py.txt

[{"star_power":"0","view_count":"3717403","like_count":"13399","dislike_count":"909","sentiment_score":"0.154167","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"1640896","like_count":"2923","dislike_count":"328","sentiment_score":"0.109112","holidays":"0","clashes":"0"},{"star_power":"100","view_count":"14723084","like_count":"95088","dislike_count":"9816","sentiment_score":"0.352344","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"584922","like_count":"4032","dislike_count":"212","sentiment_score":"0.3495","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"14826843","like_count":"94788","dislike_count":"4169","sentiment_score":"0.208472","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"1866184","like_count":"2750","dislike_count":"904","sentiment_score":"0.1275","holidays":"0","clashes":"0"},{"star_power":"200","view_count":"22006916","like_count":"184780","dislike_count":"13796","sentiment_score":"0.183611","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"2645992","like_count":"4698","dislike_count":"1874","sentiment_score":"0.185487","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"13886030","like_count":"116879","dislike_count":"6608","sentiment_score":"0.243479","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"3102123","like_count":"36790","dislike_count":"769","sentiment_score":"0.065651","holidays":"0","clashes":"0"},{"star_power":"300","view_count":"16469439","like_count":"110054","dislike_count":"17892","sentiment_score":"0.178432","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"6353017","like_count":"81236","dislike_count":"2154","sentiment_score":"0.0480556","holidays":"0","clashes":"0"},{"star_power":"0","view_count":"8679597","like_count":"89531","dislike_count":"6923","sentiment_score":"0.152083","holidays":"0","clashes":"0"}]

Any suggestions? Thank you.


Answer:

Modify your code to standardize the data. SVR's default RBF kernel works on distances between samples; with raw feature values in the tens of millions, the kernel values collapse and the model falls back to predicting essentially the same constant for every input.

from sklearn.preprocessing import RobustScaler

rbX = RobustScaler()
X = rbX.fit_transform(X)

rbY = RobustScaler()
# scalers expect 2-D input, so reshape the 1-D target before scaling
Y = rbY.fit_transform(Y.reshape(-1, 1)).ravel()
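As an aside, RobustScaler centers each column on its median and divides by the interquartile range, so a few extreme view or like counts do not dominate the transform the way they would with mean/std scaling. A tiny sketch with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

# One feature column with an outlier (hypothetical values):
# median = 3, IQR = 4 - 2 = 2, so each value maps to (x - 3) / 2.
col = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])
scaled = RobustScaler().fit_transform(col).ravel()
print(scaled)  # -1.0, -0.5, 0.0, 0.5, 498.5
```

Note the ordinary values land near zero while the outlier stays visibly extreme, instead of compressing everything else toward a single point.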

Then fit():

svm = SVR()
svm.fit(X, Y)

At prediction time, transform predict using rbX only:

svm_pred = svm.predict(rbX.transform(predict))

Now svm_pred is on the standardized scale. You want the predicted Y values on the original scale, so inverse-transform svm_pred with rbY:

# reshape to 2-D for the scaler, then flatten back to a 1-D prediction array
svm_pred = rbY.inverse_transform(svm_pred.reshape(-1, 1)).ravel()
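To check that the inverse transform really returns values on the original collection scale, here is a quick round trip using the collection values from the training sample above:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

rbY = RobustScaler()
y = np.array([383.0, 300.68, 188.72, 132.85]).reshape(-1, 1)

ys = rbY.fit_transform(y)              # standardized targets
restored = rbY.inverse_transform(ys).ravel()
print(restored)                        # the original collection values
```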

Then print svm_pred. It will give sensible, non-constant results.
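The whole pipeline can be sketched end to end. The arrays below are synthetic stand-ins for training_data.txt and predict_py.txt (not the asker's actual data), with seven features on wildly different scales, and the unscaled run uses gamma='auto' to mimic the old sklearn default under which the constant-prediction problem appears:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVR

# Hypothetical data shaped like the question's: counts in the tens of
# millions, one sentiment score in [0, 1], a target tied to view_count.
rng = np.random.RandomState(0)
X_all = rng.uniform(0, 5e7, size=(40, 7))
X_all[:, 4] = rng.uniform(0, 1, size=40)
Y_all = X_all[:, 1] / 1e5 + rng.normal(0, 5, size=40)

X, X_new = X_all[:30], X_all[30:]   # 30 training rows, 10 to predict
Y = Y_all[:30]

# Unscaled fit: with gamma = 1/n_features and distances of order 1e7,
# every kernel value is ~0, so every prediction is the same constant.
raw_pred = SVR(gamma='auto').fit(X, Y).predict(X_new)

# Scaled fit, following the answer's steps:
rbX, rbY = RobustScaler(), RobustScaler()
Xs = rbX.fit_transform(X)
Ys = rbY.fit_transform(Y.reshape(-1, 1)).ravel()
svm = SVR().fit(Xs, Ys)
svm_pred = rbY.inverse_transform(
    svm.predict(rbX.transform(X_new)).reshape(-1, 1)
).ravel()

print('spread of unscaled predictions:', np.ptp(raw_pred))  # ~0
print('spread of scaled predictions:  ', np.ptp(svm_pred))
```

The unscaled predictions all collapse to the model's intercept, reproducing the 36.07-everywhere symptom, while the scaled pipeline produces predictions that actually vary with the input.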
