I am trying to verify whether a custom training loop changes the weights of a Keras model. My current approach is to deepcopy the model.trainable_weights list before training and then compare that list against model.trainable_weights after training. Is this comparison valid? My results show that the weights did change (which is in fact the expected outcome, since the loss decreases noticeably each epoch), but I just want to confirm that what I am doing is sound. Below is the code, lightly adapted from the Keras custom training loop tutorial, along with the code I use to compare the model's weights before and after training:
# Imports
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
from copy import deepcopy

# The model
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)

##########################
# WEIGHTS BEFORE TRAINING
##########################

# Use deepcopy here so the saved list is not mutated during training
weights_before_training = deepcopy(model.trainable_weights)

##########################
# Keras Tutorial
##########################

# Load data
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))

# Reduce the size of the data to speed up training
x_train = x_train[:128]
x_test = x_test[:128]
y_train = y_train[:128]
y_test = y_test[:128]

# Make tf dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=64).batch(16)

# The training loop
print('Begin Training')

optimizer = keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

epochs = 2
for epoch in range(epochs):

    # Logging start of epoch
    print("\nStart of epoch %d" % (epoch,))

    # Save loss values for logging
    loss_values = []

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)  # Logits for this minibatch
            loss_value = loss_fn(y_batch_train, logits)

        # Append to list for logging
        loss_values.append(loss_value)

        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

    print('Epoch Loss:', np.mean(loss_values))

print('End Training')

##########################
# WEIGHTS AFTER TRAINING
##########################
weights_after_training = model.trainable_weights

# Note: `trainable_weights` is a list of kernel and bias tensors.
print()
print('Begin Trainable Weights Comparison')
for i in range(len(weights_before_training)):
    print(f'Trainable Tensors for Element {i + 1} of List Are Equal:',
          tf.reduce_all(tf.equal(weights_before_training[i], weights_after_training[i])).numpy())
print('End Trainable Weights Comparison')

>>> Begin Training
>>> Start of epoch 0
>>> Epoch Loss: 44.66055
>>>
>>> Start of epoch 1
>>> Epoch Loss: 5.306543
>>> End Training
>>>
>>> Begin Trainable Weights Comparison
>>> Trainable Tensors for Element 1 of List Are Equal: False
>>> Trainable Tensors for Element 2 of List Are Equal: False
>>> Trainable Tensors for Element 3 of List Are Equal: False
>>> Trainable Tensors for Element 4 of List Are Equal: False
>>> Trainable Tensors for Element 5 of List Are Equal: False
>>> Trainable Tensors for Element 6 of List Are Equal: False
>>> End Trainable Weights Comparison
Answer:
Summarizing the comments and adding some information, for the benefit of the community:
The approach taken in the code above, i.e., comparing a deepcopy(model.trainable_weights) saved before training against model.trainable_weights after the model has been trained with the custom training loop, is correct.
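As a side note (a minimal sketch, not part of the original answer): the same check can also be done without deep-copying the tf.Variable objects, by snapshotting their values as NumPy arrays via .numpy() and comparing with np.array_equal:

import numpy as np

# Snapshot the *values* before training; .numpy() already returns a
# fresh array, and .copy() makes the independence explicit.
weights_before = [w.numpy().copy() for w in model.trainable_weights]

# ... run the custom training loop here ...

# Compare each snapshot against the live variables after training.
for i, before in enumerate(weights_before):
    after = model.trainable_weights[i].numpy()
    print(f'Element {i + 1} changed:', not np.array_equal(before, after))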
Additionally, if we do not want the model to be trained, we can freeze all of the model's layers by setting model.trainable = False.
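For illustration (a minimal sketch using the model defined above): after freezing, the kernel and bias tensors move from trainable_weights to non_trainable_weights, so a custom loop that differentiates with respect to model.trainable_weights has nothing left to update:

# Freeze every layer of the model
model.trainable = False

print(len(model.trainable_weights))      # 0 -> nothing for the loop to update
print(len(model.non_trainable_weights))  # 6 -> 3 kernels + 3 biases, now frozen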