Interestingly, when I ran the same code several times, I got a different accuracy_score each time. I realised I was not using any random_state value in the train/test split. So I set random_state=0 and got a stable accuracy_score of 82%. But then I wanted to try a different random_state value, so I set random_state=128, and the accuracy_score became 84%. Now I need to understand why this happens and how random_state affects the model's accuracy. The output is below:

1> Without random_state:
runfile('C:/Users/spark/OneDrive/Documents/Machine Learing/Datasets/Titanic/Colab File.py', wdir='C:/Users/spark/OneDrive/Documents/Machine Learing/Datasets/Titanic')
boolean use_inf_as_null had been deprecated and will be removed in a future version. Use `use_inf_as_na` instead.

[[90 22]
 [21 46]]
0.7597765363128491

[[104  16]
 [ 14  45]]
0.8324022346368715

[[90 18]
 [12 59]]
0.8324022346368715

[[99  9]
 [19 52]]
0.8435754189944135
2> With random_state = 128 (accuracy_score = 84%):
[[106  13]
 [ 15  45]]
0.8435754189944135

[[106  13]
 [ 15  45]]
0.8435754189944135
3> With random_state = 0 (accuracy_score = 82%):
[[93 17]
 [15 54]]
0.8212290502793296

[[93 17]
 [15 54]]
0.8212290502793296
Answer:
Essentially, random_state ensures that you get the same output every time you run the code, because exactly the same data split is made each time. This is mainly useful for your initial train/test split, and it lets you write code that other people can reproduce exactly.
Same split vs. different splits
The first thing to understand is that if you don't use random_state, the data is split differently every time, which means your training and test sets are different on every run. This may not make a huge difference, but it does lead to small variations in the model's parameters, accuracy, and so on. If you set random_state to the same value every time, for example random_state=0, the data is split in exactly the same way every time.
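As a minimal sketch of that behaviour (using made-up data rather than your Titanic dataset), splitting twice with the same random_state selects exactly the same rows, while leaving it unset lets the split change from run to run:

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.arange(20).reshape(10, 2)  # 10 toy samples with 2 features
    y = np.arange(10)

    # Same random_state -> identical split on every call and every run
    X_tr_a, X_te_a, y_tr_a, y_te_a = train_test_split(X, y, test_size=0.3, random_state=0)
    X_tr_b, X_te_b, y_tr_b, y_te_b = train_test_split(X, y, test_size=0.3, random_state=0)
    print(np.array_equal(X_te_a, X_te_b))  # True

    # No random_state -> the shuffle is seeded differently, so the split can change
    X_tr_c, X_te_c, y_tr_c, y_te_c = train_test_split(X, y, test_size=0.3)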
Each random_state value gives a different split
The second thing to understand is that each random_state value produces a different split and therefore different behaviour. That is why you got 82% with random_state=0 and 84% with random_state=128: each value trains and evaluates the model on a different split of the data. So if you want to be able to reproduce your results, you need to keep random_state at the same value.
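A quick way to see this, again on toy data, is to compare the test sets chosen by two different seeds (0 and 128 here simply mirror the values you tried):

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.arange(20).reshape(10, 2)
    y = np.arange(10)

    # Different seeds shuffle the rows differently, so different samples end up in the test set
    _, X_test_0, _, y_test_0 = train_test_split(X, y, test_size=0.3, random_state=0)
    _, X_test_128, _, y_test_128 = train_test_split(X, y, test_size=0.3, random_state=128)
    print(y_test_0)    # one subset of the labels
    print(y_test_128)  # typically a different subset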
Your model may have more than one random_state
The third thing to understand is that several parts of your model can involve randomness. For example, train_test_split accepts a random_state, and so does RandomForestClassifier. So to get exactly the same results on every run, you need to set random_state for every part of the model that involves randomness.
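A sketch of what that looks like, with toy data standing in for your Titanic features (the classifier and the numbers here are only illustrative):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(42)
    X = rng.rand(200, 4)                      # toy feature matrix
    y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy binary target

    # Fix the randomness of the split...
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # ...and the randomness inside the model itself
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # With both seeds fixed, this prints the same number on every run
    print(accuracy_score(y_test, clf.predict(X_test)))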
Conclusion
If you use random_state for your initial train/test split, you need to set it once and keep using that split, to avoid overfitting to the test set.
In general, you can use cross-validation to evaluate your model's accuracy rather than worrying too much about random_state.
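For example, cross_val_score trains and scores the model on several different folds and you report the mean, so no single lucky or unlucky split dominates the estimate (toy data again, just to illustrate the call):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.RandomState(42)
    X = rng.rand(200, 4)
    y = (X[:, 0] + X[:, 1] > 1).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    # 5-fold cross-validation: each sample is used for testing exactly once
    scores = cross_val_score(clf, X, y, cv=5)
    print(scores.mean(), scores.std())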
One very important caveat is that you should not use random_state to try to improve your model's accuracy. Doing so essentially overfits the model to your particular split of the data, and it will generalize poorly to unseen data.