I am trying to build a CIFAR-100 model. When I start training the model, I get this error:

Node: 'sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits' Received a label value of 99 which is outside the valid range of [0, 10). Label values: 1 47 23 85 26 78 60 78 26 85 11 13 24 60 1 65 97 7 14 59 20 35 94 65 79 43 24 78 47 41 0 91 56 2 63 78 32 96 87 32 62 71 2 16 79 60 61 37 82 92 28 55 7 71 14 14 85 69 12 48 3 26 18 26 96 69 10 34 28 96 88 13 99 17 69 65 12 92 46 89 41 93 23 13 2 93 87 83 72 27 49 7 65 48 39 73 51 79 22 22 [[{{node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] [Op:__inference_train_function_657]
My code is:
import tensorflow as tf
import tensorflow.keras.datasets as datasets
import numpy as np
import matplotlib.pyplot as plt

dataset = datasets.cifar100
(training_images, training_labels), (validation_images, validation_labels) = dataset.load_data()

training_images = training_images / 255.0
validation_images = validation_images / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(300, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

history = model.fit(training_images, training_labels,
                    batch_size=100,
                    epochs=10,
                    validation_data=(validation_images, validation_labels))
I am using Ubuntu 22.04.
Answer:
Your dataset consists of 100 distinct classes, not 10; that is the "100" in "cifar100". Because the labels run from 0 to 99, the error message reports that a label value of 99 is outside the valid range [0, 10) implied by your 10-unit output layer. So just change this line in your code:
tf.keras.layers.Dense(100, activation= 'softmax')
and it will train correctly.
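To see why the number of output units must match the label range, here is a minimal sketch with NumPy only (no TensorFlow needed): sparse categorical crossentropy indexes each row of the predicted probabilities by the integer class label, so every label must be strictly less than the number of output units. The label values below are taken from your error message; the uniform predictions are just an illustrative assumption.

```python
import numpy as np

# A few of the labels from the error message; CIFAR-100 labels span 0..99.
labels = np.array([1, 47, 23, 85, 99])
num_classes = 100  # must exceed the largest label, hence Dense(100)

# Illustrative uniform predictions, shape (batch, num_classes).
probs = np.full((len(labels), num_classes), 1.0 / num_classes)

# Sparse categorical crossentropy picks out probs[i, labels[i]] for each
# sample. This indexing is only valid when labels.max() < num_classes;
# with num_classes = 10 the label 99 would be out of range, which is
# exactly the error TensorFlow raises.
loss = -np.log(probs[np.arange(len(labels)), labels])
print(loss)  # each entry is log(100), about 4.605, for uniform predictions
```

With `num_classes = 10` the fancy-indexing step would raise an `IndexError`, which is the NumPy analogue of the `SparseSoftmaxCrossEntropyWithLogits` range check in your traceback.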