I'm implementing multinomial logistic regression with gradient descent + L2 regularization on the MNIST dataset. My training data is a dataframe of shape (n_samples=1198, features=65). On each iteration of gradient descent, I take a linear combination of the weights and inputs to obtain 1198 activations (beta^T * X). I then pass these activations through a softmax function. However, I'm confused about how to obtain a probability distribution over the 10 output classes for each activation.
My weights are initialized like this:
import numpy as np
import pandas as pd

n_features = 65
# init random weights -> shape (1, 65): a single row of weights
beta = np.random.uniform(0, 1, n_features).reshape(1, -1)
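My suspicion is that for 10 classes the weights need to be a matrix with one row per class, so that each sample gets one score per class. Here is a minimal sketch of the shapes I think I need (n_classes = 10 and the random X are my own stand-ins, just to check shapes):

import numpy as np

n_features = 65
n_classes = 10  # MNIST has 10 digit classes (my assumption for this sketch)

# one weight row per class -> beta has shape (10, 65)
beta = np.random.uniform(0, 1, (n_classes, n_features))

# a batch X of shape (n_samples, n_features) then yields one score
# per class for every sample, ready for a row-wise softmax
X = np.random.rand(1198, n_features)
scores = X @ beta.T
print(scores.shape)  # (1198, 10)

Is this the shape convention I should be aiming for?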
Here is my current implementation:
def softmax(x: np.ndarray):
    # exponentiate, then normalize over axis 0
    exps = np.exp(x)
    return exps / np.sum(exps, axis=0)
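For comparison, the row-wise, numerically stable softmax I have seen in references looks like the sketch below (softmax_rows is my own name; axis=1 assumes one row of class scores per sample):

import numpy as np

def softmax_rows(x: np.ndarray) -> np.ndarray:
    # subtract each row's max before exponentiating to avoid overflow
    shifted = x - x.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    # normalize each row so it sums to 1: one distribution per sample
    return exps / exps.sum(axis=1, keepdims=True)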
def cross_entropy(y_hat: np.ndarray, y: np.ndarray, beta: np.ndarray) -> float:
    """
    Computes cross entropy for multiclass classification
    y_hat: predicted class probabilities, n_samples x n_classes
    y: ground truth classes, n_samples x 1
    """
    n = len(y)
    # negative log-likelihood plus L2 penalty on the weights
    return -np.sum(y * np.log(y_hat)) + np.sum(beta ** 2) / n
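Written out, the objective I believe I am minimizing is the cross-entropy plus an L2 penalty (with my code above, the regularization strength is effectively lambda = 1/n):

$$
J(\beta) = -\sum_{i=1}^{n} \sum_{k=1}^{K} y_{ik} \log \hat{y}_{ik} + \lambda \lVert \beta \rVert_2^2
$$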
def gd(X: pd.DataFrame, y: pd.Series, beta: np.ndarray,
       lr: float, N: int, iterations: int) -> (np.ndarray, np.ndarray):
    """
    Gradient descent
    """
    n = len(y)
    cost_history = np.zeros(iterations)
    for it in range(iterations):
        # linear scores beta^T x for every sample
        activations = X.dot(beta.T).values
        y_hat = softmax(activations)
        cost_history[it] = cross_entropy(y_hat, y, beta)
        # gradient of weights
        grads = np.sum((y_hat - y) * X).values
        # update weights (2/n * beta is the gradient of the L2 penalty)
        beta = beta - lr * (grads + 2 / n * beta)
    return beta, cost_history
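Based on my reading, I suspect the fix is to make beta a (n_classes, n_features) matrix, one-hot encode y, apply softmax row by row, and use the matrix gradient (y_hat - Y)^T X. Here is a self-contained sketch of that version (gd_multiclass, one_hot, softmax_rows, and lam are all my own names, and the data is random just to verify the shapes):

import numpy as np

def softmax_rows(x):
    # stable row-wise softmax: one probability distribution per sample
    shifted = x - x.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

def one_hot(y, n_classes):
    # integer labels (n_samples,) -> one-hot matrix (n_samples, n_classes)
    Y = np.zeros((len(y), n_classes))
    Y[np.arange(len(y)), y] = 1.0
    return Y

def gd_multiclass(X, y, lr=0.1, lam=0.01, iterations=100, n_classes=10):
    n, n_features = X.shape
    beta = np.random.uniform(0, 1, (n_classes, n_features))
    Y = one_hot(y, n_classes)
    cost_history = np.zeros(iterations)
    for it in range(iterations):
        # scores (n_samples, n_classes) -> probabilities via row-wise softmax
        y_hat = softmax_rows(X @ beta.T)
        # averaged cross entropy plus L2 penalty
        cost_history[it] = -np.sum(Y * np.log(y_hat + 1e-12)) / n + lam * np.sum(beta ** 2)
        # gradient has the same shape as beta: (n_classes, n_features)
        grads = (y_hat - Y).T @ X / n
        beta = beta - lr * (grads + 2 * lam * beta)
    return beta, cost_history

# shape check with random stand-in data
X = np.random.rand(1198, 65)
y = np.random.randint(0, 10, size=1198)
beta, costs = gd_multiclass(X, y)
probs = softmax_rows(X @ beta.T)
print(probs.shape, round(probs[0].sum(), 6))  # (1198, 10) 1.0

If this is right, probs[i] would be the length-10 distribution over classes for sample i, which is the output I was asking about. Is that the correct way to set it up?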