I have a very basic linear regression example. The implementation below (without regularization):
import numpy as np

class Learning:
    def assume(self, weights, x):
        # Hypothesis: h(x) = x · wᵀ
        return np.dot(x, np.transpose(weights))

    def cost(self, weights, x, y, lam):
        predict = self.assume(weights, x).reshape(len(x), 1)
        val = np.sum(np.square(predict - y), axis=0)
        assert val is not None
        assert val.shape == (1,)
        return val[0] / 2 * len(x)

    def grad(self, weights, x, y, lam):
        predict = self.assume(weights, x).reshape(len(x), 1)
        val = np.sum(np.multiply(x, (predict - y)), axis=0)
        assert val is not None
        assert val.shape == weights.shape
        return val / len(x)
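For reference, the standard unregularized cost and gradient this class appears to target are J(w) = 1/(2m) · Σᵢ (x⁽ⁱ⁾·w − y⁽ⁱ⁾)² and ∇J(w) = (1/m) · Xᵀ(Xw − y). Here is a minimal standalone sketch of those two formulas on the same tiny dataset used below; note that dividing by 2m requires explicit parentheses in Python, since `a / 2 * m` parses as `(a / 2) * m`:

```python
import numpy as np

# Standalone MSE cost and gradient, J(w) = 1/(2m) * sum((Xw - y)^2).
def cost(w, x, y):
    r = x @ w.reshape(-1, 1) - y          # residuals, shape (m, 1)
    return float(np.sum(r ** 2)) / (2 * len(x))  # note the parentheses

def grad(w, x, y):
    r = x @ w.reshape(-1, 1) - y
    return (x * r).sum(axis=0) / len(x)   # (1/m) * X^T r, shape (n,)

x = np.array([[1., 2.], [1., 3.], [1., 6.]])
y = np.array([[3.], [5.], [11.]])
w = np.array([1., 1.])

# Predictions are [3, 4, 7], residuals [0, -1, -4], sum of squares 17.
print(cost(w, x, y))   # 17 / 6 ≈ 2.8333
print(grad(w, x, y))   # [-5/3, -9.0]
```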
I want to check whether the gradient is valid using scipy.optimize.check_grad:
import scipy.optimize

learn = Learning()
INPUTS = np.array([[1, 2],
                   [1, 3],
                   [1, 6]])
OUTPUTS = np.array([[3], [5], [11]])
WEIGHTS = np.array([1, 1])

t_check_grad = scipy.optimize.check_grad(
    learn.cost, learn.grad, WEIGHTS, INPUTS, OUTPUTS, 0)
print(t_check_grad)
# Output: 73.2241602235811 !!!
I manually checked all the calculations from start to finish, and it really looks like a correct implementation. But the output shows a very large difference! What is the reason?
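One way to localize such a mismatch: check_grad simply returns the 2-norm of the difference between the analytical gradient and a finite-difference estimate of the cost's gradient, so the two can be compared directly with scipy.optimize.approx_fprime. A sketch on the same tiny dataset, assuming the intended cost divides by (2·m) with explicit parentheses:

```python
import numpy as np
from scipy.optimize import approx_fprime

# Scalar cost J(w) = 1/(2m) * sum((Xw - y)^2) and its analytical gradient.
def cost(w, x, y):
    r = x @ w.reshape(-1, 1) - y
    return float(np.sum(r ** 2)) / (2 * len(x))

def grad(w, x, y):
    r = x @ w.reshape(-1, 1) - y
    return (x * r).sum(axis=0) / len(x)

x = np.array([[1., 2.], [1., 3.], [1., 6.]])
y = np.array([[3.], [5.], [11.]])
w = np.array([1., 1.])

num = approx_fprime(w, cost, 1e-6, x, y)  # numerical gradient
ana = grad(w, x, y)                       # analytical gradient
print(num, ana)                           # both close to [-5/3, -9]

# check_grad reports the 2-norm of the difference between these two.
diff = np.linalg.norm(num - ana)
print(diff)
```

When the two gradients genuinely agree, this norm is tiny (on the order of the finite-difference step); a value like 73.2 means the cost and the gradient disagree by a large multiplicative or additive factor, not by rounding error.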