Pylearn2 is usually suggested as the Python resource for neural networks.
I would like to create a simple neural network with one hidden layer and train it with backpropagation.
This should be fairly basic, but I can't figure out how to do it with pylearn2. I found a tutorial on multilayer perceptrons, but despite that I am still lost. (http://nbviewer.ipython.org/github/lisa-lab/pylearn2/blob/master/pylearn2/scripts/tutorials/multilayer_perceptron/multilayer_perceptron.ipynb)
import numpy as np

n = 200
p = 20
X = np.random.normal(0, 1, (n, p))
y = X[:, 0] * X[:, 1] + np.random.normal(0, .1, n)
I would like to create a single-layer network with 40 hidden nodes and a sigmoid activation function.
Can anyone help me?
EDIT:
I have been able to write this code, but it still does not work:
from pylearn2.datasets.dense_design_matrix import DenseDesignMatrix
from pylearn2.models import mlp
from pylearn2.training_algorithms import sgd
from pylearn2.termination_criteria import EpochCounter

ds = DenseDesignMatrix(X=X, y=y)
hidden_layer = mlp.Sigmoid(layer_name='hidden', dim=10, irange=.1, init_bias=1.)
output_layer = mlp.Linear(1, 'output', irange=.1)
trainer = sgd.SGD(learning_rate=.05, batch_size=10,
                  termination_criterion=EpochCounter(200))
layers = [hidden_layer, output_layer]
ann = mlp.MLP(layers, nvis=1)
trainer.setup(ann, ds)
while True:
    trainer.train(dataset=ds)
    ann.monitor.report_epoch()
    ann.monitor()
    if not trainer.continue_learning(ann):
        break
Answer:
Here is my current solution:
import numpy as np
import theano
from pylearn2.datasets.dense_design_matrix import DenseDesignMatrix
from pylearn2.models import mlp
from pylearn2.training_algorithms import sgd
from pylearn2.termination_criteria import EpochCounter

n = 200
p = 2
X = np.random.normal(0, 1, (n, p))
y = X[:, 0] * X[:, 1] + np.random.normal(0, .1, n)
y.shape = (n, 1)  # targets must be a column: one row per example

ds = DenseDesignMatrix(X=X, y=y)
hidden_layer = mlp.Sigmoid(layer_name='hidden', dim=10, irange=.1, init_bias=1.)
# the output layer must be named 'y' so it matches the dataset's target source
output_layer = mlp.Linear(dim=1, layer_name='y', irange=.1)
trainer = sgd.SGD(learning_rate=.05, batch_size=10,
                  termination_criterion=EpochCounter(200))
layers = [hidden_layer, output_layer]
ann = mlp.MLP(layers, nvis=2)  # nvis must equal the input dimension p
trainer.setup(ann, ds)
while True:
    trainer.train(dataset=ds)
    ann.monitor.report_epoch()
    ann.monitor()
    if not trainer.continue_learning(ann):
        break

inputs = X
y_est = ann.fprop(theano.shared(inputs, name='inputs')).eval()
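As a point of comparison (and since pylearn2 is no longer maintained), the same single-hidden-layer network and its backpropagation loop can be sketched in plain NumPy. This is an illustrative sketch, not pylearn2 code: the variable names, the hand-written gradients, and the choice of 40 hidden units (as originally requested) are all my own assumptions, loosely matching the hyperparameters above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same synthetic regression task as above: y = x0 * x1 + noise.
n, p, n_hidden = 200, 2, 40
X = rng.normal(0, 1, (n, p))
y = (X[:, 0] * X[:, 1] + rng.normal(0, .1, n)).reshape(n, 1)

# Uniform initialisation, roughly matching irange=.1 / init_bias=1. above.
W1 = rng.uniform(-.1, .1, (p, n_hidden))
b1 = np.ones(n_hidden)
W2 = rng.uniform(-.1, .1, (n_hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, batch_size, epochs = .05, 10, 200
for epoch in range(epochs):
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]

        # Forward pass: sigmoid hidden layer, linear output layer.
        h = sigmoid(Xb @ W1 + b1)
        y_hat = h @ W2 + b2

        # Backward pass for mean squared error.
        d_out = 2 * (y_hat - yb) / len(Xb)
        d_h = (d_out @ W2.T) * h * (1 - h)  # sigmoid derivative

        # SGD parameter updates.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (Xb.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

# Training-set MSE after learning.
mse = np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(mse)
```

The update rules are just the chain rule applied layer by layer, which is what pylearn2's SGD trainer computes for you via Theano's symbolic differentiation.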