How do I save the loss values in a loop?

Good morning. I built a neural network to predict a physical quantity, and I want to run the model 10 times to check how stable it is. How can I create a DataFrame containing all of the losses (both training and validation) evaluated over the 10 attempts?

# Assumed imports (not shown in the original snippet)
import numpy as np
import pandas as pd
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn import preprocessing
from sklearn.model_selection import train_test_split

# feat_labels and the metric lists (r2_test, r2_train, MSE_test, MSE_train,
# RMSE_test, RMSE_train) are assumed to be defined before the loop.
# The r2_score passed to model.compile() is assumed to be a custom Keras metric
# defined elsewhere (it is not sklearn's r2_score).

for i in range(10):  # number of experiments
    test_size = 0.2
    dataset = pd.read_csv('CompleteDataSet_original_Clean_TP.csv', decimal=',', delimiter=";")
    label = dataset.iloc[:, -1]
    features = dataset[feat_labels]

    y_max_pre_normalize = max(label)
    y_min_pre_normalize = min(label)

    def denormalize(y):
        final_value = y * (y_max_pre_normalize - y_min_pre_normalize) + y_min_pre_normalize
        return final_value

    X_train1, X_test1, y_train1, y_test1 = train_test_split(features, label, test_size=test_size, shuffle=True)

    y_test2 = y_test1.to_frame()
    y_train2 = y_train1.to_frame()

    scaler1 = preprocessing.MinMaxScaler()
    scaler2 = preprocessing.MinMaxScaler()
    X_train = scaler1.fit_transform(X_train1)
    X_test = scaler2.fit_transform(X_test1)

    scaler3 = preprocessing.MinMaxScaler()
    scaler4 = preprocessing.MinMaxScaler()
    y_train = scaler3.fit_transform(y_train2)
    y_test = scaler4.fit_transform(y_test2)

    from keras import backend as K

    # =============================================================================
    # Build the network
    # =============================================================================
    optimizer = tf.keras.optimizers.Adam(lr=0.001)

    model = Sequential()
    model.add(Dense(100, input_shape=(X_train.shape[1],), activation='relu', kernel_initializer='glorot_uniform'))
    model.add(Dropout(0.2))
    model.add(Dense(100, activation='relu', kernel_initializer='glorot_uniform'))
    model.add(Dropout(0.2))
    model.add(Dense(100, activation='relu', kernel_initializer='glorot_uniform'))
    model.add(Dropout(0.2))
    model.add(Dense(100, activation='relu', kernel_initializer='glorot_uniform'))
    model.add(Dense(1, activation='linear', kernel_initializer='glorot_uniform'))

    model.compile(loss='mse', optimizer=optimizer, metrics=['mse', r2_score])

    history = model.fit(X_train, y_train, epochs=200,
                        validation_split=0.1, shuffle=False, batch_size=250)

    history_dict = history.history
    loss_values = history_dict['loss']
    val_loss_values = history_dict['val_loss']

    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)

    y_train_pred = denormalize(y_train_pred)
    y_test_pred = denormalize(y_test_pred)

    from sklearn.metrics import r2_score
    from sklearn import metrics

    r2_test.append(r2_score(y_test_pred, y_test1))
    r2_train.append(r2_score(y_train_pred, y_train1))

    # Measure MSE error.
    MSE_test.append(metrics.mean_squared_error(y_test_pred, y_test1))
    MSE_train.append(metrics.mean_squared_error(y_train_pred, y_train1))
    RMSE_test.append(np.sqrt(metrics.mean_squared_error(y_test_pred, y_test1)))
    RMSE_train.append(np.sqrt(metrics.mean_squared_error(y_train_pred, y_train1)))

Answer:

You can collect them in a list even without using Pandas. If you really do need a DataFrame, you can build one afterwards with the pd.DataFrame() constructor.

loss_and_val_loss = []

for i in range(...):
    # ...
    loss_values = history_dict['loss']
    val_loss_values = history_dict['val_loss']
    loss_and_val_loss.append((loss_values, val_loss_values))

# ...

Assuming both *_values are per-epoch lists of numbers, you can convert them to DataFrames like this:

# (example data: two trials, three epochs each)
loss_and_val_loss = [
    ([1, 2, 3], [4, 5, 6]),
    ([7, 8, 9], [10, 11, 12]),
]

losses, val_losses = zip(*loss_and_val_loss)
losses_df = pd.DataFrame(losses)
val_losses_df = pd.DataFrame(val_losses)
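If you want the trial/epoch structure to be explicit in the result, here is a minimal sketch building on the example above. It assumes every trial ran the same number of epochs; the row/column labels and the output file name are purely illustrative, not part of the original code:

import pandas as pd

# One (train_losses, val_losses) tuple per trial, as collected in the loop.
loss_and_val_loss = [
    ([1, 2, 3], [4, 5, 6]),
    ([7, 8, 9], [10, 11, 12]),
]

losses, val_losses = zip(*loss_and_val_loss)

# Rows = trials, columns = epochs; the names are illustrative assumptions.
index = [f"trial_{i}" for i in range(len(loss_and_val_loss))]
columns = [f"epoch_{e}" for e in range(len(losses[0]))]

losses_df = pd.DataFrame(list(losses), index=index, columns=columns)
val_losses_df = pd.DataFrame(list(val_losses), index=index, columns=columns)

# Combine training and validation losses into one DataFrame
# with a column MultiIndex ("train" / "val" as the outer level).
all_losses_df = pd.concat({"train": losses_df, "val": val_losses_df}, axis=1)

# Persist the result if needed (the file name is an assumption).
all_losses_df.to_csv("losses_per_trial.csv")

With this layout, all_losses_df['val'] returns the validation-loss curve of every trial, which is convenient for plotting or for averaging the 10 runs epoch by epoch.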
