I was following a tutorial on YouTube and accidentally left out the line model.add(Dense(6, activation='relu')).
With that line missing, my accuracy was only 36%. After I added it back, accuracy went up to 86%. Why does that single line make such a difference?
Here is the code:
from sklearn.model_selection import train_test_split
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

np.random.seed(3)

classifications = 3

dataset = np.loadtxt('wine.csv', delimiter=",")
X = dataset[:, 1:14]
Y = dataset[:, 0:1]

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.66, random_state=5)

y_train = keras.utils.to_categorical(y_train - 1, classifications)
y_test = keras.utils.to_categorical(y_test - 1, classifications)

model = Sequential()
model.add(Dense(10, input_dim=13, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(6, activation='relu'))  # this is the line I had left out
model.add(Dense(6, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(classifications, activation='softmax'))

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=15, epochs=2500, validation_data=(x_test, y_test))
Answer:
The number of layers is a hyperparameter, just like the learning rate and the number of neurons per layer, and these choices play a large part in the accuracy you end up with. So in your case, the layer
model.add(Dense(6, activation='relu'))
made the critical difference. We can't say exactly what each hidden layer ends up computing; the best we can do in practice is hyperparameter tuning, i.e. systematically trying different combinations of layer counts, layer widths, learning rates and so on to find the configuration that performs best.
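As a rough illustration of that kind of tuning, here is a minimal sketch that trains the same style of Keras model with a few different hidden-layer layouts and compares their validation accuracy. The candidate layouts, the reduced epoch count, and the variable names (candidate_layouts, results) are my own assumptions for the sake of the example, not something from your tutorial.

# Sketch: compare a few hidden-layer layouts on the same wine.csv data.
# Assumptions: the data loading mirrors your script; epochs are cut to 500
# so the comparison finishes in reasonable time.
from sklearn.model_selection import train_test_split
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

np.random.seed(3)
classifications = 3

dataset = np.loadtxt('wine.csv', delimiter=",")
X = dataset[:, 1:14]
Y = dataset[:, 0:1]
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.66, random_state=5)
y_train = keras.utils.to_categorical(y_train - 1, classifications)
y_test = keras.utils.to_categorical(y_test - 1, classifications)

# Hypothetical candidate layouts: without the extra Dense(6), with it, and a wider alternative.
candidate_layouts = [
    (10, 8, 6, 4, 2),
    (10, 8, 6, 6, 4, 2),
    (16, 12, 8),
]

results = {}
for layout in candidate_layouts:
    model = Sequential()
    model.add(Dense(layout[0], input_dim=13, activation='relu'))
    for width in layout[1:]:
        model.add(Dense(width, activation='relu'))
    model.add(Dense(classifications, activation='softmax'))
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=15, epochs=500, verbose=0,
              validation_data=(x_test, y_test))
    # Record the final accuracy on the held-out split for this layout.
    _, val_acc = model.evaluate(x_test, y_test, verbose=0)
    results[layout] = val_acc

# Print layouts from best to worst validation accuracy.
for layout, acc in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(layout, "val_accuracy = %.3f" % acc)

Dedicated tools such as KerasTuner or scikit-learn's GridSearchCV automate this kind of search, but a plain loop like the one above is enough to see how sensitive your result is to the layer configuration.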