So, I understand that normalization is important for training a neural network.
I also understand that I have to normalize the validation and test sets with the parameters of the training set (see, for example, this discussion: https://stats.stackexchange.com/questions/77350/perform-feature-normalization-before-or-within-model-validation).
My question is: how do I do this properly in Keras?
What I'm currently doing is this:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

def Normalize(data):
    mean_data = np.mean(data)
    std_data = np.std(data)
    norm_data = (data - mean_data) / std_data
    return norm_data

input_data, targets = np.loadtxt(fname='data', delimiter=';')
norm_input = Normalize(input_data)

model = Sequential()
model.add(Dense(25, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

early_stopping = EarlyStopping(monitor='val_acc', patience=50)
model.fit(norm_input, targets, validation_split=0.2, batch_size=15,
          callbacks=[early_stopping], verbose=1)
But here I first normalize the data with respect to the whole dataset and only then split off the validation set, which, according to the discussion above, is the wrong thing to do.
Saving the mean and standard deviation of the training set (training_mean and training_std) would be no big deal, but how can I then apply training_mean and training_std to normalize the validation set separately?
Answer:
You can split the data into training and test sets manually with sklearn.model_selection.train_test_split before fitting the model. Then normalize both the training and the test data with the mean and standard deviation of the training data. Finally, call model.fit with the validation_data argument.
Code example:
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy data; target values are drawn from {0, 1} for a binary problem.
data = np.random.randint(0, 100, 200).reshape(20, 10)
target = np.random.randint(0, 2, 20)

X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)

def Normalize(data, mean_data=None, std_data=None):
    # Compute the statistics only when none are supplied, so the test set
    # can be normalized with the training statistics. Use "is None" rather
    # than "not ...", which would wrongly recompute when the mean is 0.
    if mean_data is None:
        mean_data = np.mean(data)
    if std_data is None:
        std_data = np.std(data)
    norm_data = (data - mean_data) / std_data
    return norm_data, mean_data, std_data

X_train, mean_data, std_data = Normalize(X_train)
X_test, _, _ = Normalize(X_test, mean_data, std_data)

# model and early_stopping are defined as in the question.
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          batch_size=15, callbacks=[early_stopping], verbose=1)
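As a side note (not part of the original answer, just a common equivalent), scikit-learn's StandardScaler wraps the same fit-on-train/transform-both pattern, with the difference that it standardizes each feature column separately instead of using one global mean and standard deviation. A minimal sketch, with the same dummy data shapes as above and a fixed seed for reproducibility:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

np.random.seed(0)  # reproducible dummy data
data = np.random.randint(0, 100, 200).reshape(20, 10).astype(float)
target = np.random.randint(0, 2, 20)

X_train, X_test, y_train, y_test = train_test_split(
    data, target, test_size=0.2, random_state=0)

# Fit the scaler on the training data only ...
scaler = StandardScaler().fit(X_train)

# ... then apply the *training* statistics to both sets.
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
```

After this, each training column has zero mean and unit variance, while the test set is shifted and scaled with whatever the training set dictated, which is exactly the behavior the linked discussion asks for.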