I am trying to implement a simple logistic regression for image classification on the CIFAR-10 dataset. I can only use TensorFlow 1.x for the training itself (I may use Keras and other libraries to prepare the data).

My problem is that the model I built does not learn: train and test accuracy stay at 0.1 across all epochs.

I suspect something is wrong with how the data is processed before it is fed to the model, and I would like to understand why the model is not learning.

Here is the code:
%tensorflow_version 1.x
import tensorflow as tf
import numpy as np
import keras
import cv2 as cv2
import matplotlib.pyplot as plt
from keras.utils import to_categorical
from keras.datasets import mnist, cifar10


def get_cifar10():
    """Retrieve the CIFAR dataset and process the data."""
    # Set defaults.
    nb_classes = 10
    batch_size = 64
    input_shape = (3072,)

    # Get the data.
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
    x_train = x_train.reshape(50000, 3072)
    x_test = x_test.reshape(10000, 3072)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    # x_train /= 255
    # x_test /= 255

    # convert class vectors to binary class matrices
    y_train = to_categorical(y_train, nb_classes)
    y_test = to_categorical(y_test, nb_classes)

    return (nb_classes, batch_size, input_shape, x_train, x_test, y_train, y_test)


nb_classes, batch_size, input_shape, x_train, x_test, y_train, y_test = get_cifar10()

features = 3072
categories = nb_classes

x = tf.placeholder(tf.float32, [None, features])
y_ = tf.placeholder(tf.float32, [None, categories])
W = tf.Variable(tf.zeros([features, categories]))
b = tf.Variable(tf.zeros([categories]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

loss = -tf.reduce_mean(y_ * tf.log(y))
update = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for epoch in range(0, 1000):
    sess.run(update, feed_dict={x: x_train, y_: y_train})  # BGD
    train_acc = sess.run(accuracy, feed_dict={x: x_train, y_: y_train})
    test_acc = sess.run(accuracy, feed_dict={x: x_test, y_: y_test})
    if (epoch % 10 == 0):
        print("epoch: %3d train_acc: %f test_acc: %f" % (epoch, train_acc, test_acc))
Running the model produces the following results:
epoch:   0 train_acc: 0.099880 test_acc: 0.099900
epoch:  10 train_acc: 0.100000 test_acc: 0.100000
epoch:  20 train_acc: 0.100000 test_acc: 0.100000
epoch:  30 train_acc: 0.100000 test_acc: 0.100000
epoch:  40 train_acc: 0.100000 test_acc: 0.100000
epoch:  50 train_acc: 0.100000 test_acc: 0.100000
epoch:  60 train_acc: 0.100000 test_acc: 0.100000
epoch:  70 train_acc: 0.100000 test_acc: 0.100000
epoch:  80 train_acc: 0.100000 test_acc: 0.100000
epoch:  90 train_acc: 0.100000 test_acc: 0.100000
epoch: 100 train_acc: 0.100000 test_acc: 0.100000
epoch: 110 train_acc: 0.100000 test_acc: 0.100000
epoch: 120 train_acc: 0.100000 test_acc: 0.100000
epoch: 130 train_acc: 0.100000 test_acc: 0.100000
Thanks in advance!
Answer:

So, you have three problems:
- Uncomment these two lines:

  x_train /= 255
  x_test /= 255

  You should normalize the input.
- The loss should not be the mean of the log loss but the sum (you are dealing with mutually exclusive classes); a numerically safer variant is sketched right after this list:

  loss = -tf.reduce_sum(y_ * tf.log(y))
- Change your optimizer, or the learning rate. I used Adam, and the loss is fine now:

  update = tf.train.AdamOptimizer(0.0001).minimize(loss)
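As an aside (this is not part of the original answer): computing -tf.reduce_sum(y_ * tf.log(y)) can produce NaN if the softmax output ever hits exactly 0. A minimal sketch of a numerically safer variant, using TF 1.x's built-in cross-entropy op and assuming the same x, y_, W and b as above:

logits = tf.matmul(x, W) + b
y = tf.nn.softmax(logits)
# Cross-entropy computed directly from the logits avoids log(0);
# summing over the batch mirrors the sum-based loss suggested above.
loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))
update = tf.train.AdamOptimizer(0.0001).minimize(loss)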
The output now looks like this:
epoch:   0 train_acc: 0.099940 test_acc: 0.099900
epoch:  10 train_acc: 0.258440 test_acc: 0.258300
epoch:  20 train_acc: 0.287600 test_acc: 0.291300
epoch:  30 train_acc: 0.306160 test_acc: 0.308000
epoch:  40 train_acc: 0.320680 test_acc: 0.321400
epoch:  50 train_acc: 0.332040 test_acc: 0.331700
epoch:  60 train_acc: 0.340040 test_acc: 0.337500
epoch:  70 train_acc: 0.345100 test_acc: 0.345100
epoch:  80 train_acc: 0.350460 test_acc: 0.348900
epoch:  90 train_acc: 0.354780 test_acc: 0.353200
epoch: 100 train_acc: 0.358020 test_acc: 0.356400
epoch: 110 train_acc: 0.361180 test_acc: 0.359400
epoch: 120 train_acc: 0.364420 test_acc: 0.361600
epoch: 130 train_acc: 0.367260 test_acc: 0.362900
epoch: 140 train_acc: 0.369220 test_acc: 0.365700
epoch: 150 train_acc: 0.371540 test_acc: 0.367900
epoch: 160 train_acc: 0.373560 test_acc: 0.368700
epoch: 170 train_acc: 0.375220 test_acc: 0.371300
epoch: 180 train_acc: 0.377040 test_acc: 0.372900
epoch: 190 train_acc: 0.378840 test_acc: 0.375000
epoch: 200 train_acc: 0.380340 test_acc: 0.377500
epoch: 210 train_acc: 0.381780 test_acc: 0.379800
epoch: 220 train_acc: 0.383640 test_acc: 0.380400
epoch: 230 train_acc: 0.385340 test_acc: 0.380600
epoch: 240 train_acc: 0.386500 test_acc: 0.381300
epoch: 250 train_acc: 0.387640 test_acc: 0.381900
...
Clearly, logistic regression is not the best choice for images; for better and faster results, a convolutional neural network (CNN) is the way to go.
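For reference, if the TF 1.x-only training constraint were relaxed, a small Keras CNN along these lines typically does much better on CIFAR-10. This is only an illustrative sketch (the layer sizes and epoch count are my own assumptions, not from the original post):

from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical

# Load CIFAR-10, keep the 32x32x3 image shape, scale to [0, 1].
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# A minimal convolutional stack followed by a small dense classifier.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=64, validation_data=(x_test, y_test))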