For the past month I have been teaching myself machine learning, especially deep learning. After working through all the math concepts, I decided to write a Python program with a single neuron myself, and it works correctly (high accuracy).
Now I have decided to do the same with a hidden layer of 2 neurons, 1 output neuron, and 2 inputs, but it does not work: the cost does not decrease and the accuracy does not improve, even though the program runs (output below).
import numpy as np
import matplotlib.pyplot as plt

def init_variables():
    """
    Init model variables (weights, bias)
    """
    weights_11 = np.random.normal(size=2)
    weights_12 = np.random.normal(size=2)
    weight_output = np.random.normal(size=2)
    bias_11 = 0
    bias_12 = 0
    bias_output = 0
    return weights_11, weights_12, weight_output, bias_11, bias_12, bias_output
def get_dataset():
    """
    Method used to generate the dataset
    """
    # Number of rows per class
    row_per_class = 100
    # Generate rows
    sick_people = np.random.randn(row_per_class, 2) + np.array([-2, -2])
    sick_people2 = np.random.randn(row_per_class, 2) + np.array([2, 2])
    healthy_people = np.random.randn(row_per_class, 2) + np.array([-2, 2])
    healthy_people2 = np.random.randn(row_per_class, 2) + np.array([2, -2])
    features = np.vstack([sick_people, sick_people2, healthy_people, healthy_people2])
    targets = np.concatenate((np.zeros(row_per_class*2), np.zeros(row_per_class*2) + 1))
    #plt.scatter(features[:,0], features[:,1], c=targets, cmap=plt.cm.Spectral)
    #plt.show()
    return features, targets
def pre_activation(features, weights, bias):
    """
    Compute the pre-activation of the neuron
    """
    return np.dot(features, weights) + bias

def activation(z):
    """
    Compute the activation (sigmoid)
    """
    return 1 / (1 + np.exp(-z))

def derivative_activation(z):
    """
    Compute the derivative of the activation (derivative of sigmoid)
    """
    return activation(z) * (1 - activation(z))

def cost(predictions, targets):
    """
    Mean squared error between predictions and targets
    """
    return np.mean((predictions - targets)**2)
The code is not efficient because I am trying to understand everything step by step. I know the problem is in the training of the hidden layer, but it follows the formula I have seen on the internet: neuron input * (prediction - target) * sigmoid'(prediction) * (weight of the next layer). That is why I really do not understand.
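One common pitfall with that formula: sigmoid' must be evaluated at the pre-activation (the input to the sigmoid), not at the prediction itself; equivalently, `pred * (1 - pred)` already equals sigmoid'(z). Below is a minimal, self-contained sketch of full-batch backprop for the 2-2-1 network described in the question, on the same XOR-style dataset. The variable names (`W1`, `W2`, `delta1`, `delta2`), the learning rate, and the epoch count are my own assumptions, not taken from the original program.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)

np.random.seed(0)

# XOR-style dataset, as in the question: two Gaussian blobs per class
row_per_class = 100
sick = np.vstack([np.random.randn(row_per_class, 2) + [-2, -2],
                  np.random.randn(row_per_class, 2) + [2, 2]])
healthy = np.vstack([np.random.randn(row_per_class, 2) + [-2, 2],
                     np.random.randn(row_per_class, 2) + [2, -2]])
features = np.vstack([sick, healthy])
targets = np.concatenate([np.zeros(2 * row_per_class), np.ones(2 * row_per_class)])

# Hidden layer: 2 inputs -> 2 neurons; output layer: 2 -> 1
W1 = np.random.normal(size=(2, 2))
b1 = np.zeros(2)
W2 = np.random.normal(size=2)
b2 = 0.0

lr = 0.5
costs = []
for epoch in range(5000):
    # Forward pass
    z1 = features @ W1 + b1      # (N, 2) hidden pre-activations
    a1 = sigmoid(z1)             # (N, 2) hidden activations
    z2 = a1 @ W2 + b2            # (N,)  output pre-activation
    pred = sigmoid(z2)           # (N,)  predictions

    costs.append(np.mean((pred - targets) ** 2))

    # Backward pass for MSE loss.
    # Note: sigmoid_prime is applied to the PRE-activations z2 and z1,
    # not to the predictions.
    delta2 = 2 * (pred - targets) * sigmoid_prime(z2) / len(targets)
    grad_W2 = a1.T @ delta2
    grad_b2 = delta2.sum()

    # Hidden layer: propagate delta2 back through the output weights W2
    delta1 = np.outer(delta2, W2) * sigmoid_prime(z1)   # (N, 2)
    grad_W1 = features.T @ delta1
    grad_b1 = delta1.sum(axis=0)

    # Gradient descent updates
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

accuracy = np.mean((pred > 0.5) == targets)
print("final cost:", costs[-1], "accuracy:", accuracy)
```

The key line is `delta1 = np.outer(delta2, W2) * sigmoid_prime(z1)`: each hidden neuron's error is the output error scaled by its outgoing weight, times the sigmoid derivative at that neuron's own pre-activation.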