I suspect that backpropagation is not implemented correctly. For example, http://users.pja.edu.pl/~msyd/wyk-nai/multiLayerNN-en.pdf gives an overview on pages 17-20.

The `tuneWeights`- and `delta_weights`-methods of the `Output_Neuron`-class are implemented correctly. However, in this step an array `weightedDeltaHidden` must additionally be determined, which is needed later when the weights of the `Hidden_Neuron`-class are tuned (see the comments in the code).

The `tuneWeights`- and `delta_weights`-methods of the `Hidden_Neuron`-class do not seem to be implemented correctly. Here, among other things, the previously determined array `weightedDeltaHidden` must be used.

In the code below I made the necessary changes without actually changing the design of the code. But maybe a refactoring would make sense.

Changes to the `Output_Neuron`-class:

```java
...
private double[] weightedDeltaHidden;
...
Output_Neuron(int hiddenNeurons) {
    ...
    this.weightedDeltaHidden = new double[hiddenNeurons];
}
...
void tuneWeights(double LR, double[] hidden_output, int target) {
    double delta = (target - output) * f.dSigmoid(output);
    for (int i = 0; i < weights.length; i++) {
        weights[i] += delta_weights(i, LR, delta, hidden_output);
    }
}

double delta_weights(int i, double LR, double delta, double[] hidden_output) {
    weightedDeltaHidden[i] = delta * weights[i]; // weightedDeltaHidden is the product of the delta of this
                                                 // output neuron and the weight of the i-th hidden neuron.
                                                 // That value is needed when the weights of the hidden
                                                 // neurons are tuned...
    return LR * delta * hidden_output[i];
}
...
double[] getWeightedDeltaHidden() {
    return weightedDeltaHidden;
}
```

Changes to the `Hidden_Neuron`-class:

```java
...
void tuneWeights(double LR, int[] inputs, double weightedDeltaHiddenTotal) {
    for (int i = 0; i < weights.length; i++) {
        weights[i] += delta_weights(LR, inputs[i], weightedDeltaHiddenTotal);
    }
}

private double delta_weights(double LR, double input, double weightedDeltaHiddenTotal) {
    double deltaOutput = f.dSigmoid(output) * weightedDeltaHiddenTotal;
    return LR * deltaOutput * input;
}
...
```

Changes to the `train`-method of the `Network`-class regarding the tuning of the hidden weights:

```java
void train(int[] inputs, int target) {
    ...
    // tune hidden weights
    for (int i = 0; i < numOfHiddenNeurons; i++) {
        double weightedDeltaHiddenTotal = 0;
        for (int j = 0; j < numOfOutputNeurons; j++) {
            weightedDeltaHiddenTotal += output_neurons[j].getWeightedDeltaHidden()[i];
            // weightedDeltaHiddenTotal is the sum of the weightedDeltaHidden over all output neurons.
            // Each weightedDeltaHidden is the product of the delta of the j-th output neuron
            // and the weight of the i-th hidden neuron.
        }
        hidden_neurons[i].tuneWeights(LR, inputs, weightedDeltaHiddenTotal);
    }
}
```

With these changes, a typical output for 1_000_000 calls of `train` (2 hidden neurons) is

```
Error: 1.9212e-01 in cycle 0
Error: 8.9284e-03 in cycle 100000
Error: 1.5049e-03 in cycle 200000
Error: 4.7214e-03 in cycle 300000
Error: 4.4727e-03 in cycle 400000
Error: 2.1179e-03 in cycle 500000
Error: 2.9165e-04 in cycle 600000
Error: 2.0655e-03 in cycle 700000
Error: 1.5381e-03 in cycle 800000
Error: 1.0440e-03 in cycle 900000
0 0: 0.0170
1 0: 0.9616
0 1: 0.9612
1 1: 0.0597
```

and for 100_000_000 calls of `train` (2 hidden neurons)

```
Error: 2.4755e-01 in cycle 0
Error: 2.7771e-04 in cycle 5000000
Error: 6.8378e-06 in cycle 10000000
Error: 5.4317e-05 in cycle 15000000
Error: 6.8956e-05 in cycle 20000000
Error: 2.1072e-06 in cycle 25000000
Error: 2.6281e-05 in cycle 30000000
Error: 2.1630e-05 in cycle 35000000
Error: 1.1546e-06 in cycle 40000000
Error: 1.7690e-05 in cycle 45000000
Error: 8.6837e-07 in cycle 50000000
Error: 1.3603e-05 in cycle 55000000
Error: 1.2905e-05 in cycle 60000000
Error: 2.1657e-05 in cycle 65000000
Error: 1.1594e-05 in cycle 70000000
Error: 1.9191e-05 in cycle 75000000
Error: 1.7273e-05 in cycle 80000000
Error: 9.1364e-06 in cycle 85000000
Error: 1.5221e-05 in cycle 90000000
Error: 1.4501e-05 in cycle 95000000
0 0: 0.0008
1 0: 0.9961
0 1: 0.9961
1 1: 0.0053
```

Increasing the number of hidden neurons improves the performance. Below is a typical output for 1_000_000 calls of `train` (4 hidden neurons):

```
Error: 1.2617e-02 in cycle 0
Error: 7.9950e-04 in cycle 100000
Error: 4.2567e-04 in cycle 200000
Error: 1.7279e-04 in cycle 300000
Error: 1.2246e-04 in cycle 400000
Error: 1.0456e-04 in cycle 500000
Error: 6.9140e-05 in cycle 600000
Error: 6.8698e-05 in cycle 700000
Error: 5.1640e-05 in cycle 800000
Error: 4.4534e-05 in cycle 900000
0 0: 0.0092
1 0: 0.9905
0 1: 0.9912
1 1: 0.0089
```
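A standard way to convince yourself that delta computations like the ones above are correct is a numerical gradient check: for a single training example, compare the analytic gradient implied by `delta` and `weightedDeltaHidden` against a central finite difference of the squared error. The sketch below does this for one hidden weight of a small 2-2-1 sigmoid network; all names (`BackpropCheck`, `wHid`, `wOut`, etc.) are illustrative and not taken from the original classes.

```java
// Minimal, self-contained gradient check for the weighted-delta scheme
// discussed above, on a 2-2-1 sigmoid network (illustrative code, not
// the OP's classes).
import java.util.Random;

public class BackpropCheck {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }
    // derivative expressed via the activation, matching the f.dSigmoid(output) convention
    static double dSigmoid(double y) { return y * (1.0 - y); }

    static final int H = 2;
    static double[][] wHid = new double[H][3]; // 2 inputs + bias per hidden neuron
    static double[] wOut = new double[H + 1];  // H hidden activations + bias

    // forward pass returning the squared error 0.5 * (target - output)^2
    static double loss(int[] x, int t) {
        double net = wOut[H];
        for (int i = 0; i < H; i++) {
            double hid = sigmoid(wHid[i][0] * x[0] + wHid[i][1] * x[1] + wHid[i][2]);
            net += wOut[i] * hid;
        }
        double y = sigmoid(net);
        return 0.5 * (t - y) * (t - y);
    }

    // returns |analytic - numeric| for the gradient of one hidden weight
    static double gradientDiff() {
        Random rnd = new Random(1); // fixed seed for reproducibility
        for (double[] row : wHid)
            for (int j = 0; j < row.length; j++) row[j] = rnd.nextDouble() - 0.5;
        for (int i = 0; i <= H; i++) wOut[i] = rnd.nextDouble() - 0.5;

        int[] x = {1, 0};
        int t = 1;

        // forward pass, keeping the hidden activations
        double[] hid = new double[H];
        for (int i = 0; i < H; i++)
            hid[i] = sigmoid(wHid[i][0] * x[0] + wHid[i][1] * x[1] + wHid[i][2]);
        double net = wOut[H];
        for (int i = 0; i < H; i++) net += wOut[i] * hid[i];
        double y = sigmoid(net);

        // backward pass, following the scheme in the answer:
        // output delta, then delta * weight stored per hidden neuron
        double delta = (t - y) * dSigmoid(y);
        double[] weightedDeltaHidden = new double[H];
        for (int i = 0; i < H; i++) weightedDeltaHidden[i] = delta * wOut[i];

        // analytic dE/dwHid[0][0]; the minus sign appears because the deltas
        // point in the direction of steepest DESCENT of the error E
        double analytic = -dSigmoid(hid[0]) * weightedDeltaHidden[0] * x[0];

        // central finite difference of the loss w.r.t. the same weight
        double eps = 1e-5, save = wHid[0][0];
        wHid[0][0] = save + eps; double ePlus  = loss(x, t);
        wHid[0][0] = save - eps; double eMinus = loss(x, t);
        wHid[0][0] = save;
        double numeric = (ePlus - eMinus) / (2 * eps);

        return Math.abs(analytic - numeric);
    }

    public static void main(String[] args) {
        System.out.printf("gradient check difference: %.2e%n", gradientDiff());
    }
}
```

If the two values agree to within roughly the square of the step size, the backward pass is consistent with the loss; a sign error or a missing `weightedDeltaHidden` factor shows up immediately as a large difference.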