
Hand-Coding Loss Functions in Deep Learning (Part 1)


Loss functions come up frequently in interviews. This article derives the forward formulas and the backpropagation (gradient) formulas for the loss functions commonly used in deep learning, and implements them with numpy.


Define the loss function base class:

import numpy as np


class Loss:
    def loss(self, predictions, targets):
        raise NotImplementedError

    def grad(self, predictions, targets):
        raise NotImplementedError

Define the math helper functions used by the loss functions:

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def softmax(x, dim=-1):
    x_max = np.max(x, axis=dim, keepdims=True)  # subtract the max to prevent overflow
    x_exp = np.exp(x - x_max)
    return x_exp / np.sum(x_exp, axis=dim, keepdims=True)


def log_softmax(x, dim=-1):
    x_max = np.max(x, axis=dim, keepdims=True)
    x_exp = np.exp(x - x_max)
    exp_sum = np.sum(x_exp, axis=dim, keepdims=True)
    return x - x_max - np.log(exp_sum)


def one_hot(labels, n_classes):
    return np.eye(n_classes)[labels.reshape(-1)]
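
A quick sanity check of these helpers (the input values below are arbitrary examples): softmax rows should sum to 1, log_softmax should match the log of softmax, and one_hot should produce indicator rows.

x = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.5, 0.5]])
probs = softmax(x, dim=1)
print(probs.sum(axis=1))                                   # [1. 1.]
print(np.allclose(np.log(probs), log_softmax(x, dim=1)))   # True
print(one_hot(np.array([2, 0]), n_classes=3))
# [[0. 0. 1.]
#  [1. 0. 0.]]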

MAE

Mean Absolute Error (MAE): the mean of the absolute distance between the model prediction $y$ and the ground truth $\hat y$.
$$loss = \frac{1}{n}\sum_i |y_i-\hat y_i|$$

$$grad = \frac{1}{n}\,\mathrm{sign}(y-\hat y)$$

class MAE(Loss):
    def loss(self, predictions, targets):
        return np.sum(np.abs(predictions - targets)) / targets.shape[0]

    def grad(self, predictions, targets):
        return np.sign(predictions - targets) / targets.shape[0]
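
A minimal usage sketch with made-up values (a batch of 3 scalar predictions), just to show the shapes and the averaging over the batch dimension:

mae = MAE()
preds = np.array([[2.5], [0.0], [1.0]])
targets = np.array([[3.0], [0.0], [2.0]])
print(mae.loss(preds, targets))   # (0.5 + 0.0 + 1.0) / 3 = 0.5
print(mae.grad(preds, targets))   # sign(preds - targets) / 3
# [[-0.33333333]
#  [ 0.        ]
#  [-0.33333333]]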

MSE

Mean Squared Error (MSE) loss: the mean of the squared difference between the model prediction $y$ and the ground truth $\hat y$.
$$loss = \frac{1}{2n}\sum_i \|y_i-\hat y_i\|^2$$

$$grad = \frac{1}{n}(y - \hat y)$$

class MSE(Loss):
    def loss(self, predictions, targets):
        return 0.5 * np.sum((predictions - targets) ** 2) / targets.shape[0]

    def grad(self, predictions, targets):
        return (predictions - targets) / targets.shape[0]
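
To verify that grad is indeed the derivative of loss, a central-difference check can be run; the random input shapes here are only illustrative:

# Central-difference check: d loss / d prediction should match the analytic grad.
rng = np.random.default_rng(0)
preds = rng.normal(size=(4, 1))
targets = rng.normal(size=(4, 1))
mse = MSE()

eps = 1e-6
numeric = np.zeros_like(preds)
for i in range(preds.shape[0]):
    p_plus, p_minus = preds.copy(), preds.copy()
    p_plus[i] += eps
    p_minus[i] -= eps
    numeric[i] = (mse.loss(p_plus, targets) - mse.loss(p_minus, targets)) / (2 * eps)

print(np.allclose(numeric, mse.grad(preds, targets), atol=1e-6))  # True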

Huber Loss

Huber Loss combines the advantages of MAE and MSE and is also known as Smooth Mean Absolute Error.

It uses MSE when the error is small and MAE when the error is large.
$$loss=\frac{1}{n}\sum_i\begin{cases} 0.5\,\|y_i-\hat y_i\|^2, & |y_i - \hat y_i| < \delta \\ \delta\,|y_i-\hat y_i| - 0.5\,\delta^2, & \text{otherwise} \end{cases}$$

$$grad=\frac{1}{n}\begin{cases} y-\hat y, & |y - \hat y| < \delta \\ \delta\,\mathrm{sign}(y-\hat y), & \text{otherwise} \end{cases}$$


[Figure: curves of the MAE, MSE, and Huber loss functions]

class HuberLoss(Loss):
    def __init__(self, delta=1.0):
        self.delta = delta

    def loss(self, predictions, targets):
        dist = np.abs(predictions - targets)
        l2_mask = dist < self.delta
        l1_mask = ~l2_mask
        l2_loss = 0.5 * (predictions - targets) ** 2
        l1_loss = self.delta * dist - 0.5 * self.delta ** 2
        total_loss = np.sum(l2_loss * l2_mask + l1_loss * l1_mask) / targets.shape[0]
        return total_loss

    def grad(self, predictions, targets):
        error = predictions - targets
        l2_mask = np.abs(error) < self.delta
        l1_mask = ~l2_mask
        l2_grad = error
        l1_grad = self.delta * np.sign(error)
        total_grad = l2_grad * l2_mask + l1_grad * l1_mask
        return total_grad / targets.shape[0]
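
A quick check of the piecewise behavior with the default delta=1.0 (the sample values are made up): inside the delta band the gradient equals the raw error (MSE-like), outside it saturates at ±delta (scaled-MAE-like).

huber = HuberLoss(delta=1.0)
targets = np.zeros((1, 1))

small = np.array([[0.3]])   # |error| < delta  -> quadratic branch
large = np.array([[5.0]])   # |error| >= delta -> linear branch

print(huber.loss(small, targets), huber.grad(small, targets))
# 0.045  [[0.3]]   (0.5 * 0.3**2, grad = error)
print(huber.loss(large, targets), huber.grad(large, targets))
# 4.5    [[1.]]    (1.0 * 5.0 - 0.5 * 1.0**2, grad = delta * sign(error))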

Cross Entropy Loss

Entropy is defined as the expectation of information content:
$$H = -\sum_i p(x_i)\log\big(p(x_i)\big)$$
KL Divergence

KL divergence (Kullback-Leibler divergence), also called relative entropy, measures how much two probability distributions differ. The KL divergence of p relative to q is:
$$D_q(p) = H_q(p) - H(p) = \sum_x \big(p\log(p) - p\log(q)\big)$$
Cross Entropy

Cross entropy measures the difference between the predicted distribution q and the true distribution p:
$$H_q(p) = -\sum_{x} p\log(q)$$
Detailed derivation:
$$loss = -\frac{1}{N} \sum_{i=1}^N\sum_{k=1}^{K}\hat y_i^k \cdot \mathrm{log\_softmax}(y_i^k) = -\frac{1}{N}\sum_{i=1}^N \mathrm{log\_softmax}(y_i^c)$$
Here $c$ is the label (the true class), $\hat y$ is its one-hot encoding, and $y$ is the prediction (logit) vector whose length equals the number of classes $K$.

Simplifying:
$$CE = -\mathrm{log\_softmax}(y^c) = -\log\left(\frac{\exp(y^c)}{\sum_{k=1}^{K}\exp(y^k)}\right) = -y^c + \log\left(\sum_{k=1}^{K}\exp(y^k)\right)$$

Differentiating:
$$grad = \frac{1}{N}\sum_{i=1}^N\frac{\partial CE_i}{\partial y_i} = \frac{1}{N}\sum_{i=1}^N \big(\mathrm{softmax}(y_i) - \hat y_i\big)$$

class CrossEntropy(Loss):
    def loss(self, predictions, targets):
        # targets is a one-hot vector
        ce = -log_softmax(predictions, dim=1) * targets
        return np.sum(ce) / targets.shape[0]

    def grad(self, predictions, targets):
        probs = softmax(predictions, dim=1)
        return (probs - targets) / targets.shape[0]
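
A minimal usage sketch, assuming raw logits of shape (batch, n_classes) and integer labels converted with the one_hot helper above (the values are made up):

ce = CrossEntropy()
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 0.3, 3.0]])
labels = np.array([0, 2])                  # integer class indices
targets = one_hot(labels, n_classes=3)     # one-hot targets

print(ce.loss(logits, targets))    # mean of -log_softmax at the true class
print(ce.grad(logits, targets))    # (softmax(logits) - targets) / batch_size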

In the code above, every class has the same weight. If the number of samples per class is heavily imbalanced, the classes can be given different weights, as sketched below.
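
As a sketch of that idea (the WeightedCrossEntropy class and its class_weights parameter are assumptions for illustration, not part of the original code), each class term is simply scaled by a per-class weight before averaging:

class WeightedCrossEntropy(Loss):
    """Hypothetical variant: scale each class term by a per-class weight."""
    def __init__(self, class_weights):
        # class_weights: array of shape (n_classes,), e.g. inverse class frequency
        self.w = np.asarray(class_weights)

    def loss(self, predictions, targets):
        ce = -log_softmax(predictions, dim=1) * targets * self.w
        return np.sum(ce) / targets.shape[0]

    def grad(self, predictions, targets):
        # Each sample's weight is the weight of its true class (targets is one-hot).
        sample_w = np.sum(targets * self.w, axis=1, keepdims=True)
        probs = softmax(predictions, dim=1)
        return sample_w * (probs - targets) / targets.shape[0]

With class_weights set to all ones, this reduces to the plain CrossEntropy above.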

While we are at it, let's also implement KL divergence.

Note that the target of cross entropy is a one-hot vector, while the target of KL divergence is a probability distribution vector.

Below is a more general derivation, where $\hat y$ is a probability vector with $\sum_{k=1}^{K}\hat y^k = 1$:
$$CE = -\sum_{k=1}^{K}\hat y^k\,\mathrm{log\_softmax}(y^k) = -\sum_{k=1}^{K}\hat y^k\log\left(\frac{\exp(y^k)}{\sum_{j=1}^{K}\exp(y^j)}\right) = -\sum_{k=1}^{K}\left[y^k - \log\left(\sum_{j=1}^{K}\exp(y^j)\right)\right]\hat y^k = \log\left(\sum_{j=1}^{K}\exp(y^j)\right) - \sum_{k=1}^{K}y^k\hat y^k$$

$$grad = \frac{1}{N}\sum_{i=1}^N\frac{\partial CE_i}{\partial y_i} = \frac{1}{N}\sum_{i=1}^N \big(\mathrm{softmax}(y_i) - \hat y_i\big)$$

class KLDivLoss(Loss):
    def loss(self, predictions, targets):
        # targets is a probability distribution vector
        kl = (np.log(targets) - log_softmax(predictions, dim=1)) * targets
        return np.sum(kl) / targets.shape[0]

    def grad(self, predictions, targets):
        probs = softmax(predictions, dim=1)
        return (probs - targets) / targets.shape[0]
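
A usage sketch with soft targets (the smoothed distribution below is a made-up example). Since the targets do not depend on the predictions, the gradient is identical to that of cross entropy, and the loss differs only by the (constant) entropy of the targets:

kl = KLDivLoss()
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 0.3, 3.0]])
# Soft targets, e.g. produced by label smoothing -- each row sums to 1.
targets = np.array([[0.8, 0.1, 0.1],
                    [0.1, 0.1, 0.8]])

print(kl.loss(logits, targets))
# Differs from CrossEntropy().loss(logits, targets) by the entropy of the targets.
print(np.allclose(kl.grad(logits, targets),
                  CrossEntropy().grad(logits, targets)))  # True -- same gradient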

Extension:

Why is cross entropy used instead of MSE in classification tasks?

Take logistic regression as an example and derive backpropagation with cross entropy and with MSE.
$$f_{w,b}(x) = \sigma\Big(\sum_i w_i x_i + b\Big)$$
where $\sigma$ is the sigmoid activation function, $f$ is the predicted value, and $0 < f(x) < 1$.

1. Cross entropy
$$L = -\big[y\ln(f(x)) + (1-y)\ln(1-f(x))\big] \quad y \in \{0,1\}$$
Backpropagation:
$$\frac{\partial L(w,b)}{\partial w} = - \left[y \frac{\partial \ln(f(x))}{\partial w} + (1-y) \frac{\partial \ln(1-f(x))}{\partial w}\right]$$
Let $z = wx + b$. Then
$$\frac{\partial \ln(f(x))}{\partial z} = \frac{\partial \ln(\sigma(z))}{\partial z} = 1 - \sigma(z) \qquad \frac{\partial \ln(1-f(x))}{\partial z} = -\sigma(z)$$

$$\frac{\partial L(w,b)}{\partial z} \frac{\partial z}{\partial w} = -\big[y(1-\sigma(z)) + (1-y)(-\sigma(z))\big]\frac{\partial z}{\partial w} = -(y - \sigma(z))x$$

Gradient descent:
$$w_t = w_{t-1} - \eta \sum_i\big(-(y_i - \sigma(wx_i+b))x_i\big)$$

  • The larger the gap between the label $y$ and the prediction $f(x)$, the larger the gradient and the faster the descent (a numeric sketch of one update step follows below).
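
A minimal numpy sketch of one gradient-descent step using this formula (the data, initial weights, and learning rate are made-up values):

# One gradient step for logistic regression with the cross-entropy loss.
X = np.array([[0.5, 1.0],
              [1.5, -0.5],
              [-1.0, 2.0]])
y = np.array([1.0, 0.0, 1.0])
w = np.zeros(2)
b = 0.0
eta = 0.1

z = X @ w + b
f = sigmoid(z)
grad_w = -np.sum((y - f)[:, None] * X, axis=0)   # -(y - sigma(z)) x, summed over samples
grad_b = -np.sum(y - f)
w = w - eta * grad_w
b = b - eta * grad_b
print(w, b)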

2. MSE
$$L = \frac{1}{2}(f(x) - y)^2 \quad y \in \{0,1\}$$
Backpropagation:
$$\frac{\partial L(w,b)}{\partial w} = (f(x) - y) \frac{\partial f(x)}{\partial w}$$
Again let $z = wx + b$. Then
$$\frac{\partial f(x)}{\partial z} \frac{\partial z}{\partial w} = \sigma(z)(1-\sigma(z))x$$

$$\frac{\partial L(w,b)}{\partial w} = (f(x) - y)\,f(x)(1-f(x))\,x$$

  • When $y=0$ and $f(x)$ approaches 1, the $1-f(x)$ factor makes the gradient vanish even though the prediction is completely wrong.
  • When $y=1$ and $f(x)$ approaches 0, the $f(x)$ factor likewise makes the gradient vanish even though the prediction is completely wrong (see the numeric check below).
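
A small numeric illustration of this saturation effect, using the sigmoid helper above: with $y=0$ and a confidently wrong prediction, the MSE gradient factor nearly vanishes while the cross-entropy one stays large.

z = 6.0                  # logit giving a confidently wrong prediction
f = sigmoid(z)           # f(x) ~ 0.9975 while the true label is y = 0
y = 0.0

ce_grad_factor = -(y - f)                 # cross entropy: ~ 0.9975, still a strong signal
mse_grad_factor = (f - y) * f * (1 - f)   # MSE: ~ 0.0025, nearly vanished
print(ce_grad_factor, mse_grad_factor)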

Summary:

The gradient of the cross-entropy loss with respect to the input weights is proportional to the error between the prediction and the ground truth and does not contain the gradient of the activation function, whereas the gradient of the MSE loss does. Since commonly used activations such as sigmoid and tanh have saturation regions, the MSE gradient with respect to the weights can become very small, so the weights update slowly and training is slow. Cross entropy does not have this problem: the weight update scales with the error, so training is faster and works better.


References:

borgwang/tinynn: A lightweight deep learning library