I have been following Google's official TensorFlow guide and trying to build a simple neural network with Keras. But when I train the model, it doesn't seem to use the whole dataset (60000 entries) — instead, it only uses 1875 entries for training. Is there a possible fix?
import tensorflow as tf
from tensorflow import keras
import numpy as np

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

train_images = train_images / 255.0
test_images = test_images / 255.0

class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot']

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=10)
Output:
Epoch 1/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3183 - accuracy: 0.8866
Epoch 2/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3169 - accuracy: 0.8873
Epoch 3/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3144 - accuracy: 0.8885
Epoch 4/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3130 - accuracy: 0.8885
Epoch 5/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3110 - accuracy: 0.8883
Epoch 6/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3090 - accuracy: 0.8888
Epoch 7/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3073 - accuracy: 0.8895
Epoch 8/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3057 - accuracy: 0.8900
Epoch 9/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3040 - accuracy: 0.8905
Epoch 10/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3025 - accuracy: 0.8915
<tensorflow.python.keras.callbacks.History at 0x7fbe0e5aebe0>
Here is the original notebook I was working on in Google Colab: https://colab.research.google.com/drive/1NdtzXHEpiNnelcMaJeEm6zmp34JMcN38
Answer:
The number 1875 shown during model fitting is not the number of training samples; it is the number of batches. model.fit takes an optional batch_size argument, and, per the documentation:

If unspecified, batch_size will default to 32.

So what is happening here is that you are training with the default batch size of 32 (since you did not specify any other value), and the total number of batches for your data is therefore 60000 / 32 = 1875.
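To make the arithmetic concrete, here is a small sketch of the batch-count calculation (the model.fit call in the comment is illustrative, showing how you could pass a different batch_size if you wanted a different number of steps per epoch — it does not change how much data is used; all 60000 samples are seen every epoch either way):

```python
import math

# With the default batch_size of 32, the 60000 training samples
# are split into ceil(60000 / 32) batches per epoch. The progress
# bar counts these batches (steps), not individual samples.
num_samples = 60000
batch_size = 32
num_batches = math.ceil(num_samples / batch_size)
print(num_batches)  # 1875 -- matches the "1875/1875" in the log

# To change the number of steps shown, pass batch_size explicitly, e.g.:
#   model.fit(train_images, train_labels, epochs=10, batch_size=64)
# which would report ceil(60000 / 64) = 938 steps per epoch.
print(math.ceil(num_samples / 64))  # 938
```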