FANN XOR Training

I'm developing a piece of software that uses FANN (the Fast Artificial Neural Network library). After several failed attempts at writing my own ANN code, I tried compiling one of the FANN example programs, the C++ XOR approximation program. Here is the source code.

#include "../include/floatfann.h"#include "../include/fann_cpp.h"#include <ios>#include <iostream>#include <iomanip>using std::cout;using std::cerr;using std::endl;using std::setw;using std::left;using std::right;using std::showpos;using std::noshowpos;// Callback function that simply prints the information to coutint print_callback(FANN::neural_net &net, FANN::training_data &train,    unsigned int max_epochs, unsigned int epochs_between_reports,    float desired_error, unsigned int epochs, void *user_data){    cout << "Epochs     " << setw(8) << epochs << ". "         << "Current Error: " << left << net.get_MSE() << right << endl;    return 0;}// Test function that demonstrates usage of the fann C++ wrappervoid xor_test(){    cout << endl << "XOR test started." << endl;    const float learning_rate = 0.7f;    const unsigned int num_layers = 3;    const unsigned int num_input = 2;    const unsigned int num_hidden = 3;    const unsigned int num_output = 1;    const float desired_error = 0.001f;    const unsigned int max_iterations = 300000;    const unsigned int iterations_between_reports = 10000;    ////Make array for create_standard() workaround (prevent "FANN Error 11: Unable to allocate memory.")    const unsigned int num_input_num_hidden_num_output__array[3] = {num_input, num_hidden, num_output};    cout << endl << "Creating network." << endl;    FANN::neural_net net;//    cout<<"Debug 1"<<endl;    //net.create_standard(num_layers, num_input, num_hidden, num_output);//doesn't work    net.create_standard_array(num_layers, num_input_num_hidden_num_output__array);//this might work -- create_standard() workaround    net.set_learning_rate(learning_rate);    net.set_activation_steepness_hidden(1.0);    net.set_activation_steepness_output(1.0);    //Sample Code, changed below    net.set_activation_function_hidden(FANN::SIGMOID_SYMMETRIC_STEPWISE);    net.set_activation_function_output(FANN::SIGMOID_SYMMETRIC_STEPWISE);    //changed above to sigmoid    //net.set_activation_function_hidden(FANN::SIGMOID);    //net.set_activation_function_output(FANN::SIGMOID);    // Set additional properties such as the training algorithm    //net.set_training_algorithm(FANN::TRAIN_QUICKPROP);    // Output network type and parameters    cout << endl << "Network Type                         :  ";    switch (net.get_network_type())    {    case FANN::LAYER://only connected to next layer        cout << "LAYER" << endl;        break;    case FANN::SHORTCUT://connected to all other layers        cout << "SHORTCUT" << endl;        break;    default:        cout << "UNKNOWN" << endl;        break;    }    net.print_parameters();    cout << endl << "Training network." << endl;    FANN::training_data data;    if (data.read_train_from_file("xor.data"))    {        // Initialize and train the network with the data        net.init_weights(data);        cout << "Max Epochs " << setw(8) << max_iterations << ". "            << "Desired Error: " << left << desired_error << right << endl;        net.set_callback(print_callback, NULL);        net.train_on_data(data, max_iterations,            iterations_between_reports, desired_error);        cout << endl << "Testing network. (not really)" << endl;        //I don't really get this code --- the funny for loop. Whatever. I'll skip it.                
for (unsigned int i = 0; i < data.length_train_data(); ++i)                {                    // Run the network on the test data                    fann_type *calc_out = net.run(data.get_input()[i]);                    cout << "XOR test (" << showpos << data.get_input()[i][0] << ", "                         << data.get_input()[i][1] << ") -> " << *calc_out                         << ", should be " << data.get_output()[i][0] << ", "                         << "difference = " << noshowpos                         << fann_abs(*calc_out - data.get_output()[i][0]) << endl;                }        cout << endl << "Saving network." << endl;        // Save the network in floating point and fixed point        net.save("xor_float.net");        unsigned int decimal_point = net.save_to_fixed("xor_fixed.net");        data.save_train_to_fixed("xor_fixed.data", decimal_point);        cout << endl << "XOR test completed." << endl;    }}/* Startup function. Synchronizes C and C++ output, calls the test function   and reports any exceptions */int main(int argc, char **argv){    try    {        std::ios::sync_with_stdio(); // Synchronize cout and printf output        xor_test();    }    catch (...)    {        cerr << endl << "Abnormal exception." << endl;    }    return 0;}

Here is the output I get.

XOR test started.

Creating network.

Network Type                         :  LAYER
Input layer                          :   2 neurons, 1 bias
  Hidden layer                       :   3 neurons, 1 bias
Output layer                         :   1 neurons
Total neurons and biases             :   8
Total connections                    :  13
Connection rate                      :   1.000
Network type                         :   FANN_NETTYPE_LAYER
Training algorithm                   :   FANN_TRAIN_RPROP
Training error function              :   FANN_ERRORFUNC_TANH
Training stop function               :   FANN_STOPFUNC_MSE
Bit fail limit                       :   0.350
Learning rate                        :   0.700
Learning momentum                    :   0.000
Quickprop decay                      :  -0.000100
Quickprop mu                         :   1.750
RPROP increase factor                :   1.200
RPROP decrease factor                :   0.500
RPROP delta min                      :   0.000
RPROP delta max                      :  50.000
Cascade output change fraction       :   0.010000
Cascade candidate change fraction    :   0.010000
Cascade output stagnation epochs     :  12
Cascade candidate stagnation epochs  :  12
Cascade max output epochs            : 150
Cascade min output epochs            :  50
Cascade max candidate epochs         : 150
Cascade min candidate epochs         :  50
Cascade weight multiplier            :   0.400
Cascade candidate limit              :1000.000
Cascade activation functions[0]      :   FANN_SIGMOID
Cascade activation functions[1]      :   FANN_SIGMOID_SYMMETRIC
Cascade activation functions[2]      :   FANN_GAUSSIAN
Cascade activation functions[3]      :   FANN_GAUSSIAN_SYMMETRIC
Cascade activation functions[4]      :   FANN_ELLIOT
Cascade activation functions[5]      :   FANN_ELLIOT_SYMMETRIC
Cascade activation functions[6]      :   FANN_SIN_SYMMETRIC
Cascade activation functions[7]      :   FANN_COS_SYMMETRIC
Cascade activation functions[8]      :   FANN_SIN
Cascade activation functions[9]      :   FANN_COS
Cascade activation steepnesses[0]    :   0.250
Cascade activation steepnesses[1]    :   0.500
Cascade activation steepnesses[2]    :   0.750
Cascade activation steepnesses[3]    :   1.000
Cascade candidate groups             :   2
Cascade no. of candidates            :  80

Training network.

Max Epochs   300000. Desired Error: 0.001
Epochs            1. Current Error: 0.25
Epochs        10000. Current Error: 0.25
Epochs        20000. Current Error: 0.25
Epochs        30000. Current Error: 0.25
Epochs        40000. Current Error: 0.25
Epochs        50000. Current Error: 0.25
Epochs        60000. Current Error: 0.25
Epochs        70000. Current Error: 0.25
Epochs        80000. Current Error: 0.25
Epochs        90000. Current Error: 0.25
Epochs       100000. Current Error: 0.25
Epochs       110000. Current Error: 0.25
Epochs       120000. Current Error: 0.25
Epochs       130000. Current Error: 0.25
Epochs       140000. Current Error: 0.25
Epochs       150000. Current Error: 0.25
Epochs       160000. Current Error: 0.25
Epochs       170000. Current Error: 0.25
Epochs       180000. Current Error: 0.25
Epochs       190000. Current Error: 0.25
Epochs       200000. Current Error: 0.25
Epochs       210000. Current Error: 0.25
Epochs       220000. Current Error: 0.25
Epochs       230000. Current Error: 0.25
Epochs       240000. Current Error: 0.25
Epochs       250000. Current Error: 0.25
Epochs       260000. Current Error: 0.25
Epochs       270000. Current Error: 0.25
Epochs       280000. Current Error: 0.25
Epochs       290000. Current Error: 0.25
Epochs       300000. Current Error: 0.25

Testing network. (not really)

XOR test (+0, -1.875) -> +0, should be +0, difference = -0
XOR test (+0, -1.875) -> +0, should be +0, difference = -0
XOR test (+0, +1.875) -> +0, should be +0, difference = -0
XOR test (+0, +1.875) -> +0, should be +0, difference = -0

Saving network.

XOR test completed.

The training data (xor.data) looks like this:

4 2 1
-1 -1
-1
-1 1
1
1 -1
1
1 1
-1

Why is the ANN showing such a bizarre failure to learn? I'm fairly sure something is badly misconfigured somewhere, especially considering this is the example program. ANN experts, any suggestions?


Answer:

Apply the FANN patches, and make sure that all references to floatfann, doublefann, etc. are consistent.
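For reference, a minimal sketch of what "consistent" means here, assuming the float build of FANN is installed and its headers are on your include path: include exactly one precision header (floatfann.h, which makes fann_type a float) before fann_cpp.h, and link against the matching library (e.g. -lfloatfann rather than -ldoublefann). If the header and the linked library disagree on precision, the values read from the data file come back garbled, which would be consistent with the (+0, -1.875) test inputs in the output above. The file name and build flags below are assumptions for illustration, not part of the original program.

    #include "floatfann.h"   // exactly one precision header: fann_type is float here
    #include "fann_cpp.h"    // the C++ wrapper must see the same precision
    #include <iostream>

    // Sanity check: read xor.data with the same precision as the rest of the
    // program and print the first training pair. If the inputs do not come
    // back as -1/1, the header and the linked library do not match.
    int main()
    {
        FANN::training_data data;
        if (!data.read_train_from_file("xor.data"))
        {
            std::cerr << "Could not read xor.data" << std::endl;
            return 1;
        }
        std::cout << "Pairs: " << data.length_train_data()
                  << ", first input: " << data.get_input()[0][0] << ", "
                  << data.get_input()[0][1]
                  << ", first output: " << data.get_output()[0][0] << std::endl;
        return 0;
    }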
