How do I retrain a logistic regression model on its false positive and false negative labels?
I have trained a logistic regression model on extremely imbalanced classes. I have used all the usual techniques (SMOTE, class weight adjustment, decision threshold tuning) to improve the performance metrics (precision and recall), but the results are not very promising - the false positive rate is still very high :(
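For context, here is a minimal sketch of the kind of pipeline I mean (synthetic data; assumes scikit-learn and imbalanced-learn; the 0.7 threshold and dataset sizes are placeholders, not my real values):

```python
# Minimal sketch of the pipeline described above (placeholder data and values).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from imblearn.over_sampling import SMOTE

# Synthetic, heavily imbalanced data standing in for my real dataset.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98, 0.02], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=42)

# Oversample the minority class on the training split only.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Class weights on top of the resampling.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_res, y_res)

# Move the decision threshold away from the default 0.5.
proba = clf.predict_proba(X_test)[:, 1]
threshold = 0.7  # placeholder; tuned on a validation set in my real code
y_pred = (proba >= threshold).astype(int)

print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))
```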
I have heard that retraining on the false positives/negatives could improve the result, but I am not sure what the process looks like. For example, should I take the false positives, relabel them 50/50 as (1, 0), and train the same model on them? I would highly appreciate it if someone could elaborate on the process in detail.
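Concretely, is the suggested process something like the sketch below (continuing from the snippet above; the random 50/50 relabelling is exactly the step I am unsure about)?

```python
import numpy as np

# Predictions on the original (un-resampled) training split.
train_proba = clf.predict_proba(X_train)[:, 1]
train_pred = (train_proba >= threshold).astype(int)

# False positives: predicted 1 but actually 0.
fp_mask = (train_pred == 1) & (y_train == 0)
X_fp = X_train[fp_mask]

# The step I am asking about: relabel the false positives roughly 50/50
# as (1, 0) and refit the same model on the augmented training data?
rng = np.random.default_rng(42)
y_fp_relabelled = rng.integers(0, 2, size=len(X_fp))

X_aug = np.vstack([X_train, X_fp])
y_aug = np.concatenate([y_train, y_fp_relabelled])
clf.fit(X_aug, y_aug)
```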