I am trying to build a decision tree for classification, but no tree is being created. On the same data, an SVM (training and test set identical) reaches an accuracy of 0.85; "play" is the target variable.
Any idea what I am doing wrong? Here are the data and code: https://gist.github.com/romeokienzler/c471819cbf156a69f73daf49f8c700c6
outlook,temp,humidity,windy,play
sunny,hot,high,false,no
sunny,hot,high,true,no
overcast,hot,high,false,yes
rainy,mild,high,false,yes
rainy,cool,normal,false,yes
rainy,cool,normal,true,no
overcast,cool,normal,true,yes
sunny,mild,high,false,no
sunny,cool,normal,false,yes
rainy,mild,normal,false,yes
sunny,mild,normal,true,yes
overcast,mild,high,true,yes
overcast,hot,normal,false,yes
rainy,mild,high,true,no
In order to use the SVM, I encoded the data: https://gist.github.com/romeokienzler/9bfce4182eda3d7662315621462c9cc6
outlook,temp,humidity,windy,play
1,1,2,FALSE,FALSE
1,1,2,TRUE,FALSE
2,1,2,FALSE,TRUE
3,2,2,FALSE,TRUE
3,3,1,FALSE,TRUE
3,3,1,TRUE,FALSE
2,3,1,TRUE,TRUE
1,2,2,FALSE,FALSE
1,3,1,FALSE,TRUE
3,2,1,FALSE,TRUE
1,2,1,TRUE,TRUE
2,2,2,TRUE,TRUE
2,1,1,FALSE,TRUE
3,2,2,TRUE,FALSE
Here is the SVM case:
library(e1071)
df = read.csv("5.tennis_encoded.csv")
attach(df)
x <- subset(df, select = -play)
y <- play
detach(df)
model = svm(x, y, type = "C")
pred = predict(model, x)
truthVector = pred == y
good = length(truthVector[truthVector == TRUE])
bad = length(truthVector[truthVector == FALSE])
good / (good + bad)
[1] 0.8571429
And here is the decision-tree case:
df = read.csv("5.tennis_encoded.csv")
library(rpart)
model = rpart(play ~ ., method = "class", data = df)
print(model)
1) root 14 5 TRUE (0.3571429 0.6428571) *
So the tree I get consists of nothing but a root node, with a probability of 0.64 for play == yes.
Any idea what I am doing wrong?
Answer:
Most likely you are passing too little data for the algorithm to be able to make a split.
Have a look at the rpart.control function for the details:
rpart.control(minsplit = 20, minbucket = round(minsplit/3), cp = 0.01, maxcompete = 4, maxsurrogate = 5, usesurrogate = 2, xval = 10, surrogatestyle = 0, maxdepth = 30, ...)
As you can see, the default minimum number of observations required before a split is attempted (minsplit) is 20, which is more than the 14 rows in your data.
If you run
model = rpart(play ~ .,method = "class", data=df, control= rpart.control(minsplit=2))
you should get more splits.
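A minimal self-contained sketch of the fix, assuming the original categorical CSV rather than the encoded one (the data frame below reproduces the 14 rows from the question inline, so no file is needed):

```r
library(rpart)

# The 14-row tennis dataset from the question, built inline as factors
df <- data.frame(
  outlook  = factor(c("sunny","sunny","overcast","rainy","rainy","rainy","overcast",
                      "sunny","sunny","rainy","sunny","overcast","overcast","rainy")),
  temp     = factor(c("hot","hot","hot","mild","cool","cool","cool",
                      "mild","cool","mild","mild","mild","hot","mild")),
  humidity = factor(c("high","high","high","high","normal","normal","normal",
                      "high","normal","normal","normal","high","normal","high")),
  windy    = c(FALSE,TRUE,FALSE,FALSE,FALSE,TRUE,TRUE,
               FALSE,FALSE,FALSE,TRUE,TRUE,FALSE,TRUE),
  play     = factor(c("no","no","yes","yes","yes","no","yes",
                      "no","yes","yes","yes","yes","yes","no"))
)

# With the default minsplit = 20 and only 14 rows, rpart never attempts
# a split, so the printed tree is just the root node
print(rpart(play ~ ., method = "class", data = df))

# Lowering minsplit lets rpart consider splits on this tiny dataset
model <- rpart(play ~ ., method = "class", data = df,
               control = rpart.control(minsplit = 2))
print(model)
```

Keep in mind that with 14 observations any tree grown this way is fitted (and here also evaluated) on the training data, so the resulting accuracy says little about generalization.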