How do I add a dropout layer in TensorFlow, and how do I augment a NumPy array in Python 3.x?

First of all, I am new to this field. I am trying to add dropout layers to see how the model's performance changes, but I cannot work out where and how to add them in the code below. In addition, I would like to augment (shift) a NumPy array of size 39*200 so that the first column moves to the second, the second to the third, and so on, with the last column wrapping around to become the first. It is like cutting off the last part of an image and pasting it at the front.

def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')

weights = {
    'wc1': tf.get_variable('W0', shape=(3, 3, 1, 32), initializer=tf.contrib.layers.xavier_initializer()),
    'wc2': tf.get_variable('W1', shape=(3, 3, 32, 64), initializer=tf.contrib.layers.xavier_initializer()),
    'wc3': tf.get_variable('W2', shape=(3, 3, 64, 32), initializer=tf.contrib.layers.xavier_initializer()),
    'wc4': tf.get_variable('W3', shape=(3, 3, 32, 128), initializer=tf.contrib.layers.xavier_initializer()),
    'wc5': tf.get_variable('W4', shape=(3, 3, 128, 64), initializer=tf.contrib.layers.xavier_initializer()),
    'wd1': tf.get_variable('W7', shape=(4*4*56, 64), initializer=tf.contrib.layers.xavier_initializer()),
    'out': tf.get_variable('W8', shape=(64, n_classes), initializer=tf.contrib.layers.xavier_initializer()),
}

biases = {
    'bc1': tf.get_variable('B0', shape=(32), initializer=tf.contrib.layers.xavier_initializer()),
    'bc2': tf.get_variable('B1', shape=(64), initializer=tf.contrib.layers.xavier_initializer()),
    'bc3': tf.get_variable('B2', shape=(32), initializer=tf.contrib.layers.xavier_initializer()),
    'bc4': tf.get_variable('B3', shape=(128), initializer=tf.contrib.layers.xavier_initializer()),
    'bc5': tf.get_variable('B4', shape=(64), initializer=tf.contrib.layers.xavier_initializer()),
    'bd1': tf.get_variable('B7', shape=(64), initializer=tf.contrib.layers.xavier_initializer()),
    'out': tf.get_variable('B8', shape=(2), initializer=tf.contrib.layers.xavier_initializer()),
}

def conv_net(x, weights, biases):
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)
    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    conv3 = maxpool2d(conv3, k=2)
    conv4 = conv2d(conv3, weights['wc4'], biases['bc4'])
    conv4 = maxpool2d(conv4, k=2)
    conv5 = conv2d(conv4, weights['wc5'], biases['bc5'])
    conv5 = maxpool2d(conv5, k=2)
    fc1 = tf.reshape(conv5, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out

pred = conv_net(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y), name='Cost')
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()  # note: no trailing comma, or init becomes a tuple

with tf.Session() as sess:
    sess.run(init)
    train_loss = []
    test_loss = []
    train_accuracy = []
    test_accuracy = []
    if not os.path.exists('summaries'):
        os.mkdir('summaries')
    if not os.path.exists(os.path.join('summaries', 'first')):
        os.mkdir(os.path.join('summaries', 'first'))
    summary_writer = tf.summary.FileWriter(os.path.join('summaries', 'first'), sess.graph)
    for i in range(training_iters):
        for batch in range(len(X_train) // batch_size):
            batch_x = X_train[batch*batch_size:min((batch+1)*batch_size, len(X_train))]
            batch_y = Y_train[batch*batch_size:min((batch+1)*batch_size, len(Y_train))]
            opt = sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x, y: batch_y})
        print("Iter " + str(i) + ", Loss= " + "{:.6f}".format(loss) +
              ", Training Accuracy= " + "{:.5f}".format(acc))
        print("Optimization Finished!")
        test_acc, valid_loss = sess.run([accuracy, cost], feed_dict={x: X_test, y: Y_test})
        train_loss.append(loss)
        test_loss.append(valid_loss)
        train_accuracy.append(acc)
        test_accuracy.append(test_acc)
        print("Testing Accuracy:", "{:.5f}".format(test_acc))
        print("Accuracy:", accuracy.eval({x: X_test, y: Y_test}))

Code link: [1]: https://drive.google.com/file/d/1BcbLAlVG0QR8QKToyij9gniQ7E9gvaCc/view?usp=sharing


Answer:

Dropout

You can add a dropout layer after a max-pooling layer, along the following lines:

# _____________ First max-pooling layer _____________________
A_pool1 = tf.nn.max_pool(A_conv1)
# _____________ First dropout layer _____________________
A_out1 = tf.nn.dropout(x=A_pool1, rate=dropout_prob)
# _____________ Second convolutional layer _____________________
A_conv2 = tf.nn.relu(tf.nn.conv2d(A_out1, W_conv2))

where dropout_prob is the probability that each element of x is dropped.
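If TensorFlow is not at hand, the semantics of tf.nn.dropout can be sketched in plain NumPy. This is "inverted" dropout: each element is zeroed with probability rate, and the survivors are scaled by 1/(1 - rate) so the expected activation is unchanged. The helper name dropout and the seeded generator below are illustrative, not part of any API:

```python
import numpy as np

def dropout(x, rate, rng):
    # Zero each element with probability `rate`; scale survivors by
    # 1/(1 - rate) so the expected value of the output equals the input.
    mask = rng.random(x.shape) >= rate
    return np.where(mask, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones((4, 4))
y = dropout(x, rate=0.5, rng=rng)
# Every entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled).
```

At evaluation time dropout is turned off (rate 0), which is why frameworks feed the rate in as a variable rather than hard-coding it.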

Another example can be found here; they add the dropout layer after the dense layer (at the end).

In your specific case, you could do the following:

conv2 = maxpool2d(conv2, k=2)
A_out1 = tf.nn.dropout(x=conv2, rate=0.5)
conv3 = conv2d(A_out1, weights['wc3'], biases['bc3'])

Data Augmentation

To achieve this, you can use NumPy's roll function, as explained here:

>>> x = np.arange(10)
>>> np.roll(x, 2)
array([8, 9, 0, 1, 2, 3, 4, 5, 6, 7])
>>> x2 = np.reshape(x, (2, 5))
>>> x2
array([[0, 1, 2, 3, 4],
       [5, 6, 7, 8, 9]])
>>> np.roll(x2, 1, axis=1)
array([[4, 0, 1, 2, 3],
       [9, 5, 6, 7, 8]])
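Applied to the 39*200 array from the question, shifting every column one position to the right (with the last column wrapping around to become the first) is a single roll along axis 1. The array name data below is a stand-in for your actual array:

```python
import numpy as np

data = np.arange(39 * 200).reshape(39, 200)  # stand-in for the real 39x200 array

# Shift columns right by one: column i moves to column i+1,
# and the last column wraps around to become the first.
shifted = np.roll(data, shift=1, axis=1)
```

Calling np.roll repeatedly with shift=1 (or once with larger shifts) gives you a family of shifted copies to augment the training set with.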
