I'm trying to build a gradient-descent function in Python, using binary cross-entropy as the loss function and sigmoid as the activation function.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def binary_crossentropy(y_pred, y):
    # Clip predictions away from exactly 0 and 1 so the logs stay finite.
    # y_pred arrives as a (1, n) row, so flatten it before element-wise clipping.
    epsilon = 1e-15
    y_pred_new = np.array([max(i, epsilon) for i in y_pred.ravel()])
    y_pred_new = np.array([min(i, 1 - epsilon) for i in y_pred_new])
    return -np.mean(y * np.log(y_pred_new) + (1 - y) * np.log(1 - y_pred_new))

def gradient_descent(X, y, epochs=10, learning_rate=0.5):
    features = X.shape[0]
    w = np.ones(shape=(features, 1))
    bias = 0
    n = X.shape[1]
    for i in range(epochs):
        weighted_sum = w.T @ X + bias
        y_pred = sigmoid(weighted_sum)
        loss = binary_crossentropy(y_pred, y)
        # Gradients of the cross-entropy + sigmoid combination.
        d_w = (1 / n) * (X @ (y_pred - y).T)
        d_bias = np.mean(y_pred - y)
        w = w - learning_rate * d_w
        bias = bias - learning_rate * d_bias
        print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{loss}')
    return w, bias
So I passed in the following input:
X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4
and then ran

w, bias = gradient_descent(X, y, epochs=100)

The output I got was

w = array([[-20.95], [-29.95]])
b = -55.50000017801383
loss: 40.406546076763014

As the epochs increase, the weights become more and more negative and the bias keeps decreasing. The expected output is w = [[2], [-3]] and b = 0.4.
I don't know what I'm doing wrong, and the loss isn't converging: it stays the same across all the epochs.
Answer:
Binary cross-entropy loss is normally used for binary classification tasks. Your task, however, is linear regression: the sigmoid squashes every prediction into the interval (0, 1), while several of your targets are negative, so no choice of weights can fit them; that is why the weights keep drifting downward while the loss stays flat. I would use mean squared error as the loss function instead.
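To see the mismatch concretely, here is a minimal check (assuming only numpy) comparing the range of your targets with the sigmoid's output range:

import numpy as np

X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4

# A sigmoid output always lies strictly inside (0, 1), but several of these
# regression targets are negative, so sigmoid predictions can never match
# them regardless of the weights.
print('target range:', y.min(), 'to', y.max())  # roughly -1.1 to 0.4
print('negative targets:', (y < 0).sum())

Dropping the sigmoid and switching to mean squared error fixes both problems. Here is my suggestion: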
def gradient_descent(X, y, epochs=1000, learning_rate=0.5):
    w = np.ones((X.shape[0], 1))
    bias = 1
    n = X.shape[1]
    for i in range(epochs):
        # Plain linear model: no sigmoid, so predictions are unbounded.
        y_pred = w.T @ X + bias
        mean_square_err = (1.0 / n) * np.sum(np.power((y - y_pred), 2))
        # Gradients of the MSE loss with respect to the weights and the bias.
        d_w = (-2.0 / n) * (y - y_pred) @ X.T
        d_bias = (-2.0 / n) * np.sum(y - y_pred)
        w -= learning_rate * d_w.T
        bias -= learning_rate * d_bias
        print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{mean_square_err}')
    return w, bias

X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4
w, bias = gradient_descent(X, y, epochs=5000, learning_rate=0.5)
print(f'w = {w}')
print(f'bias = {bias}')
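For reference, the update rules in the code follow directly from differentiating the mean-squared-error loss with respect to the weights and the bias, where $\hat{y}_i = w^\top x_i + b$:

$$
L = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad
\frac{\partial L}{\partial w} = -\frac{2}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)\,x_i, \qquad
\frac{\partial L}{\partial b} = -\frac{2}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)
$$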
Output:
w = [[ 1.99999999], [-2.99999999]]
bias = 0.40000000041096756
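If you want to verify the result independently, the same coefficients can also be recovered in closed form with numpy's least-squares solver; this is a minimal sketch assuming the same X and y as above:

import numpy as np

X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4

# Append a column of ones so the intercept is fitted together with w.
A = np.hstack([X.T, np.ones((X.shape[1], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # approximately [ 2.  -3.   0.4]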