How can I get my neural network to perform linear regression correctly?

I used the code for the first neural network from Michael Nielsen's book "Neural Networks and Deep Learning", which is written to recognize handwritten digits. It uses stochastic gradient descent with mini-batches and a sigmoid activation function. I set up one input neuron, two hidden neurons, and one output neuron, and then fed the network a data set representing a straight line: a number of points between 0 and 1 where the input equals the output. No matter how I tune the learning rate and the number of epochs, the network fails to do linear regression. Is this because I am using the sigmoid activation function? If so, what other function could I use?

[Figure: network predictions on new inputs]

The blue line is the network's prediction and the green line is the training data; the inputs the network predicts on are simply numbers between 0 and 3 in steps of 0.01.

Here is the code:

"""network.py~~~~~~~~~~A module to implement the stochastic gradient descent learningalgorithm for a feedforward neural network.  Gradients are calculatedusing backpropagation.  Note that I have focused on making the codesimple, easily readable, and easily modifiable.  It is not optimized,and omits many desirable features."""#### Libraries# Standard libraryimport random# Third-party librariesimport numpy as npfrom sklearn.datasets import make_regressionimport matplotlib.pyplot as pltclass Network(object):    def __init__(self, sizes):        """The list ``sizes`` contains the number of neurons in the        respective layers of the network.  For example, if the list        was [2, 3, 1] then it would be a three-layer network, with the        first layer containing 2 neurons, the second layer 3 neurons,        and the third layer 1 neuron.  The biases and weights for the        network are initialized randomly, using a Gaussian        distribution with mean 0, and variance 1.  Note that the first        layer is assumed to be an input layer, and by convention we        won't set any biases for those neurons, since biases are only        ever used in computing the outputs from later layers."""        self.num_layers = len(sizes)        self.sizes = sizes        '''creates a list of arrays with random numbers with mean 0 and variance 1;        These arrays represent the biases of each neuron in each layer so one random number is assigned per neuron in         each layer and every array represents one layer of biases        '''        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]        self.weights = [np.random.randn(y, x)                        for x, y in zip(sizes[:-1], sizes[1:])]    #self always refers to an instance of a class    def feedforward(self, a):        # a are the activations of the neurons        """Return the output of the network if ``a`` is input."""        for b, w in zip(self.biases, self.weights):            a = sigmoid(np.dot(w, a)+b)                    return a    def SGD(self, training_data, epochs, mini_batch_size, eta,            test_data=None):        """Train the neural network using mini-batch stochastic        gradient descent.  The ``training_data`` is a list of tuples        ``(x, y)`` representing the training inputs and the desired        outputs.  The other non-optional parameters are        self-explanatory.  If ``test_data`` is provided then the        network will be evaluated against the test data after each        epoch, and partial progress printed out.  
This is useful for        tracking progress, but slows things down substantially."""        if test_data: n_test = len(test_data)        n = len(training_data)        #this is done as many times as the number of epochs say -> that is how often the network is trained        for j in range(epochs):            random.shuffle(training_data)            mini_batches = [                training_data[k:k+mini_batch_size]                for k in range(0, n, mini_batch_size)]            #data is made into appropriately sized mini-batches            for mini_batch in mini_batches:                self.update_mini_batch(mini_batch, eta)                for x,y in mini_batch:                    print("Loss: ", (self.feedforward(x) - y)**2)            if test_data:                print ("Epoch {0}: {1} / {2}".format(                    j, self.evaluate(test_data), n_test))            else:                print ("Epoch {0} complete".format(j))    def update_mini_batch(self, mini_batch, eta):        """Update the network's weights and biases by applying        gradient descent using backpropagation to a single mini batch.        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``        is the learning rate."""        #nabla_b and nabla_w are the same lists of matrices as "biases" and         #"weights" but all matrices are filled with zeroes; Thus, it is reset to 0 for every mini_batch.                nabla_b = [np.zeros(b.shape) for b in self.biases]        nabla_w = [np.zeros(w.shape) for w in self.weights]        for x, y in mini_batch:            delta_nabla_b, delta_nabla_w = self.backprop(x, y)            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]        #updates the weights and biases by subtracting the average of the sum of the derivatives of the cost        #function wrt to the biases/weights that were added for every training example in the mini_batch.        self.weights = [w-(eta/len(mini_batch))*nw                        for w, nw in zip(self.weights, nabla_w)]        self.biases = [b-(eta/len(mini_batch))*nb                       for b, nb in zip(self.biases, nabla_b)]    def backprop(self, x, y):        """Return a tuple ``(nabla_b, nabla_w)`` representing the        gradient for the cost function C_x.  
``nabla_b`` and        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar        to ``self.biases`` and ``self.weights``."""        """Makes two lists filled with zeros in the same shape as biases and weights"""        nabla_b = [np.zeros(b.shape) for b in self.biases]        nabla_w = [np.zeros(w.shape) for w in self.weights]        # feedforward        activation = x        activations = [x]        zs = [] # list to store all the z vectors, layer by layer        for b, w in zip(self.biases, self.weights):            #multiplies w matrix for each layer by activation vector and adds bias            z = np.dot(w, activation)+b            zs.append(z)            activation = sigmoid(z)            activations.append(activation)        # backward pass        #this calculates the output error        delta = self.cost_derivative(activations[-1], y) * \            sigmoid_prime(zs[-1])        #this is the derivative of the cost function wrt the biases in the last layer        nabla_b[-1] = delta        #this is the derivative of the cost function wrt the weights in the last layer        nabla_w[-1] = np.dot(delta, activations[-2].transpose())        for l in range(2, self.num_layers): #Code really is this: for l in range(2, self.num_layers):            z = zs[-l]            sp = sigmoid_prime(z)            #This is the vector of errors of the layer -l            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp            #fills the matrices nabla_b and nabla_w with the derivatives of the             #cost function with respect to the biases and weights in layers -l            nabla_b[-l] = delta            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())        return (nabla_b, nabla_w)    def evaluate(self, test_data):        """Return the number of test inputs for which the neural        network outputs the correct result. Note that the neural        network's output is assumed to be the index of whichever        neuron in the final layer has the highest activation."""        test_results = [(np.argmax(self.feedforward(x)), y)                        for (x, y) in test_data]        #returns the number of inputs that were preducted correctly.        return sum(int(x == y) for (x, y) in test_results)    def cost_derivative(self, output_activations, y):        """Return the vector of partial derivatives \partial C_x /        \partial a for the output activations."""        return (output_activations-y)#### Miscellaneous functionsdef sigmoid(z):    """The sigmoid function."""     return 1.0/(1.0+np.exp(-z))def sigmoid_prime(z):    """Derivative of the sigmoid function."""    return sigmoid(z)*(1-sigmoid(z))
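The driver script that builds the data and trains the network is not included in the question; the following is only a guess at what it might look like, based on the description above (the variable names and hyperparameters are made up, and the (1, 1)-shaped column vectors are the format Network.feedforward expects):

import numpy as np

# Hypothetical setup, not from the original post: inputs between 0 and 1
# with the target equal to the input, packed as (1, 1) column vectors.
xs = np.arange(0.0, 1.0, 0.01)
training_data = [(np.array([[v]]), np.array([[v]])) for v in xs]

net = Network([1, 2, 1])   # 1 input neuron, 2 hidden neurons, 1 output neuron
net.SGD(training_data, epochs=100, mini_batch_size=10, eta=0.5)

# Predictions on new inputs between 0 and 3 in steps of 0.01, as in the plot.
new_xs = np.arange(0.0, 3.0, 0.01)
predictions = [net.feedforward(np.array([[v]])).item() for v in new_xs]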

Answer:

The sigmoid activation function is meant for classification tasks, such as the handwritten-digit recognition the book's code was written for. Linear regression is a regression task, where the output should be a continuous value; because the sigmoid squashes the output into (0, 1), the final layer saturates near the ends of that range and cannot represent anything outside it, which makes fitting a straight line difficult. If you want the output layer to perform regression, use a linear activation function there, which is the default for a Keras Dense layer.
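A minimal sketch of that setup in Keras (assuming TensorFlow 2.x; the variable names and hyperparameters are illustrative, not taken from the original post): a sigmoid hidden layer followed by a Dense output layer with its default linear activation, trained on the y = x data described in the question.

import numpy as np
from tensorflow import keras

# Training data: points between 0 and 1 where the target equals the input.
x = np.arange(0.0, 1.0, 0.01).reshape(-1, 1)
y = x.copy()

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(2, activation="sigmoid"),  # two hidden neurons, as in the question
    keras.layers.Dense(1),                        # no activation argument -> linear output
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=200, batch_size=10, verbose=0)

# Predict on new inputs between 0 and 3, as in the plot above.
new_x = np.arange(0.0, 3.0, 0.01).reshape(-1, 1)
predictions = model.predict(new_x)

With a linear output the network can produce values outside (0, 1), so nothing stops it from fitting the identity line on the training range; extrapolating to inputs up to 3 is a separate matter, since the hidden sigmoid units still flatten out far from the data they were trained on.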
