Focal loss for regression problems?
I have a regression problem with a training set that can be considered unbalanced. I therefore want to create a weighted loss function that weighs the loss contributions of hard and easy examples differently, with hard examples contributing more.
I know this type of weighted loss is possible, since it is exactly what focal loss implements for classification.
My question is: is it possible to convert focal loss to regression-based problems, using an L1 loss and a linear output layer?
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self,
                 alpha=0.25,
                 gamma=2,
                 reduction='mean'):
        super(FocalLoss, self).__init__()
        self.alpha = alpha
        self.gamma = gamma
        self.reduction = reduction

    def forward(self, logits, label):
        '''
        Usage is the same as nn.BCEWithLogitsLoss:
        >>> criteria = FocalLoss()
        >>> logits = torch.randn(8, 19, 384, 384)
        >>> lbs = torch.randint(0, 2, (8, 19, 384, 384)).float()
        >>> loss = criteria(logits, lbs)
        '''
        probs = torch.sigmoid(logits)
        # Modulating factor |y - p|^gamma, negated so that multiplying it
        # with the (negative) log-probabilities below gives a positive loss.
        coeff = torch.abs(label - probs).pow(self.gamma).neg()
        # Numerically stable log(sigmoid(x)) and log(1 - sigmoid(x));
        # softplus(x, beta=-1) computes -log(1 + exp(-x)) = log(sigmoid(x)).
        log_probs = torch.where(logits >= 0,
                                F.softplus(logits, -1, 50),
                                logits - F.softplus(logits, 1, 50))
        log_1_probs = torch.where(logits >= 0,
                                  -logits + F.softplus(logits, -1, 50),
                                  -F.softplus(logits, 1, 50))
        # Alpha-balanced cross-entropy term, scaled by the modulating factor.
        loss = label * self.alpha * log_probs + (1. - label) * (1. - self.alpha) * log_1_probs
        loss = loss * coeff
        if self.reduction == 'mean':
            loss = loss.mean()
        if self.reduction == 'sum':
            loss = loss.sum()
        return loss
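For concreteness, here is a rough sketch of what I imagine the regression variant could look like. Everything in it is my own guess rather than an established formulation: the class name FocalL1Loss is made up, and since a linear output layer produces no probabilities, the per-element absolute error itself (detached) stands in for the (1 - p_t)^gamma modulating factor as the difficulty weight.

import torch
import torch.nn as nn

class FocalL1Loss(nn.Module):
    '''
    Hypothetical focal-style L1 loss for regression (sketch only).
    Re-weights each element's L1 error by err**gamma so that hard
    examples (large error) contribute more than easy ones.
    '''
    def __init__(self, gamma=1.0, reduction='mean'):
        super(FocalL1Loss, self).__init__()
        self.gamma = gamma
        self.reduction = reduction

    def forward(self, preds, target):
        # Plain per-element L1 error from a linear output layer.
        err = torch.abs(preds - target)
        # Difficulty weight, standing in for (1 - p_t)^gamma in the
        # classification focal loss; detach() keeps the weight from
        # contributing extra gradient terms of its own.
        weight = err.detach().pow(self.gamma)
        loss = weight * err
        if self.reduction == 'mean':
            loss = loss.mean()
        if self.reduction == 'sum':
            loss = loss.sum()
        return loss

Usage would mirror the classification version, e.g. criteria = FocalL1Loss(gamma=1.0) followed by loss = criteria(model(x), target). Since the error is unbounded, the weight can blow up on outliers, so one would presumably need to clamp or normalize it; whether such a scheme behaves sensibly is exactly what I am asking.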