I am using a simple feedforward neural network for electricity load forecasting. Here is my code:
...
num_periods = 24
f_horizon = 48  # forecast horizon
...
# network design (a single-hidden-layer feedforward net)
tf.reset_default_graph()
inputs = num_periods  # input vector size
hidden = 100
output = num_periods  # output vector size
learning_rate = 0.01
seed = 128

x = tf.placeholder(tf.float32, [None, inputs])
y = tf.placeholder(tf.float32, [None, output])

weights = {
    'hidden': tf.Variable(tf.random_normal([inputs, hidden], seed=seed)),
    'output': tf.Variable(tf.random_normal([hidden, output], seed=seed))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([1, hidden], seed=seed)),
    'output': tf.Variable(tf.random_normal([1, output], seed=seed))
}

hidden_layer = tf.add(tf.matmul(x, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.relu(hidden_layer)
output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=output_layer, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

init = tf.initialize_all_variables()  # initialize all the variables
epochs = 1000  # number of training cycles; each includes a forward and a backpropagation pass
mape = []
...
for st in state.values():
    print("State: ", st, end='\n')
    with tf.Session() as sess:
        init.run()
        for ep in range(epochs):
            sess.run([optimizer, cost], feed_dict={x: x_batches[st], y: y_batches[st]})
    print("\n")
As you can see, the cost keeps increasing as the epochs go by. Why is that happening?
Answer:
You are using the wrong loss function: electricity load forecasting sounds like a regression problem, and cross-entropy is only appropriate for classification. tf.nn.softmax_cross_entropy_with_logits assumes each row of labels is a probability distribution over classes, so with raw load values as targets the quantity being minimized says nothing useful about your forecast error.
You should use a regression loss such as mean squared error instead.
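As a minimal sketch of the change (reusing output_layer, y and learning_rate from your graph and leaving everything else as is), the cost line can be swapped for a mean-squared-error loss:

# mean squared error between predicted and actual load values
cost = tf.reduce_mean(tf.square(output_layer - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

In TensorFlow 1.x, tf.losses.mean_squared_error(labels=y, predictions=output_layer) is an equivalent alternative. With this loss, the reported cost is directly tied to the squared forecast error, so you can reasonably expect it to decrease as training progresses.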