TF | How to make predictions after a convolutional neural network has finished training

I am working with the framework provided in the Stanford cs231n course, using the code below.

  • I can see the accuracy improving and the network has finished training, but once the training run is over and I have checked the results on the validation set, how do I feed an image into the model and see its prediction?
    I have searched around, but TensorFlow does not seem to have a built-in predict function like Keras does.

Initializing the network and its parameters

import tensorflow as tf

# clear old variables
tf.reset_default_graph()

# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 30, 30, 1])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

def simple_model(X, y):
    # define our weights (e.g. init_two_layer_convnet)
    # setup variables
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 1, 32])  # 32 filters of size 7x7 with an input depth of 1
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[4608, 360])  # 4608 is 12x12x32, where 12x12 is the output of a 7x7 filter with stride 2 and VALID padding on a 30x30 image
    b1 = tf.get_variable("b1", shape=[360])

    # define our graph (e.g. two_layer_convnet)
    a1 = tf.nn.conv2d(X, Wconv1, strides=[1, 2, 2, 1], padding='VALID') + bconv1
    h1 = tf.nn.relu(a1)
    h1_flat = tf.reshape(h1, [-1, 4608])
    y_out = tf.matmul(h1_flat, W1) + b1
    return y_out

y_out = simple_model(X, y)

# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y, 360), logits=y_out)
mean_loss = tf.reduce_mean(total_loss)

# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4)  # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)

A function to evaluate the model, for either training or validation, and to plot the results:

import math
import numpy as np
import matplotlib.pyplot as plt

def run_model(session, predict, loss_val, Xd, yd,
              epochs=1, batch_size=64, print_every=100,
              training=None, plot_losses=False):
    # have tensorflow compute accuracy
    correct_prediction = tf.equal(tf.argmax(predict, 1), y)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # shuffle indices
    train_indicies = np.arange(Xd.shape[0])
    np.random.shuffle(train_indicies)

    training_now = training is not None

    # setting up variables we want to compute and optimize
    # if we have a training function, add that to things we compute
    variables = [mean_loss, correct_prediction, accuracy]
    if training_now:
        variables[-1] = training

    # counter
    iter_cnt = 0
    for e in range(epochs):
        # keep track of losses and accuracy
        correct = 0
        losses = []
        # make sure we iterate over the dataset once
        for i in range(int(math.ceil(Xd.shape[0] / batch_size))):
            # generate indices for the batch
            start_idx = (i * batch_size) % Xd.shape[0]
            idx = train_indicies[start_idx:start_idx + batch_size]

            # create a feed dictionary for this batch
            feed_dict = {X: Xd[idx, :],
                         y: yd[idx],
                         is_training: training_now}
            # get batch size
            actual_batch_size = yd[idx].shape[0]

            # have tensorflow compute loss and correct predictions
            # and (if given) perform a training step
            loss, corr, _ = session.run(variables, feed_dict=feed_dict)

            # aggregate performance stats
            losses.append(loss * actual_batch_size)
            correct += np.sum(corr)

            # print every now and then
            if training_now and (iter_cnt % print_every) == 0:
                print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"
                      .format(iter_cnt, loss, np.sum(corr) / actual_batch_size))
            iter_cnt += 1
        total_correct = correct / Xd.shape[0]
        total_loss = np.sum(losses) / Xd.shape[0]
        print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"
              .format(total_loss, total_correct, e + 1))
        if plot_losses:
            plt.plot(losses)
            plt.grid(True)
            plt.title('Epoch {} Loss'.format(e + 1))
            plt.xlabel('minibatch number')
            plt.ylabel('minibatch loss')
            plt.show()
    return total_loss, total_correct

Calling the function to train the model

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('Training')
    run_model(sess, y_out, mean_loss, x_train, y_train, 1, 64, 100, train_step, True)
    print('Validation')
    run_model(sess, y_out, mean_loss, x_val, y_val, 1, 64)

Answer:

You don't have to look far: just feed your new (test) feature matrix X_test into the network and do a forward pass – the output layer is the prediction. So the code would look roughly like this:

session.run(y_out, feed_dict={X: X_test})
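
To turn those raw scores into class labels, take the argmax over the class dimension. A minimal sketch, assuming X_test is a batch of images shaped [num_images, 30, 30, 1] and that this runs inside the same with tf.Session() as sess: block where the network was trained (the variables only exist while that session is alive); single_image is a hypothetical NumPy array holding one 30x30 grayscale image:

import numpy as np

# raw, unnormalized class scores from the output layer, shape [num_images, 360]
scores = sess.run(y_out, feed_dict={X: X_test})

# predicted class index for each image
predicted_classes = np.argmax(scores, axis=1)

# for a single image, add the batch dimension first
single_scores = sess.run(y_out, feed_dict={X: single_image.reshape(1, 30, 30, 1)})
print(np.argmax(single_scores, axis=1)[0])

Note that is_training does not have to be fed here, since simple_model never actually uses that placeholder.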

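If you want to make predictions in a later run, after the training script has exited, keep in mind that the weights disappear as soon as the session closes, so they have to be saved first. A minimal sketch using tf.train.Saver, assuming a hypothetical checkpoint path './model.ckpt' and that the prediction script rebuilds the same graph before restoring:

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('Training')
    run_model(sess, y_out, mean_loss, x_train, y_train, 1, 64, 100, train_step, True)
    saver.save(sess, './model.ckpt')  # persist the trained weights

# later (e.g. in a separate script that has rebuilt the same graph):
with tf.Session() as sess:
    saver.restore(sess, './model.ckpt')               # load the trained weights back
    scores = sess.run(y_out, feed_dict={X: X_test})   # forward pass only, no training step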