Do sklearn's liblinear and lbfgs solvers penalize the intercept?
We know that penalizing the intercept in the sklearn implementation is a "design mistake" that we have to deal with. One workaround is to set intercept_scaling to a very large number, per the documentation:

Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.

However, the same documentation says that this parameter is useful only when solver='liblinear'.

My question:
Do other solvers penalise the intercept? I tried to look at the source and I think they don't, but I am not sure, and I couldn't find a clear answer anywhere.
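One way to probe this empirically (a sketch, not an authoritative answer): fit LogisticRegression on synthetic data whose true intercept is large, using strong regularization so that any penalty on the intercept becomes visible, and compare the fitted intercepts across solvers. The data-generating parameters and the choice of C below are illustrative assumptions, not anything from the sklearn docs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data whose true model has a large positive intercept:
# logit = 0.5*x0 + 0.5*x1 + 3.0
rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 2))
logits = 0.5 * X[:, 0] + 0.5 * X[:, 1] + 3.0
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Strong L2 regularization (small C) makes an intercept penalty visible:
# a solver that penalizes the intercept should shrink it toward zero.
results = {}
for solver in ["liblinear", "lbfgs"]:
    clf = LogisticRegression(solver=solver, C=0.01).fit(X, y)
    results[solver] = clf.intercept_[0]
    print(solver, results[solver])
```

If liblinear penalizes the intercept and lbfgs does not, the liblinear intercept should come out noticeably smaller than the lbfgs one on data like this.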
The only solver of LogisticRegression that penalizes the intercept is "liblinear". See the official documentation: //i.sstatic.net/cmhqj.png
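To illustrate the documented intercept_scaling workaround for liblinear, the sketch below (same illustrative synthetic data as above; the scaling value 1e3 is an arbitrary "large number") fits the same model with the default scaling and with a large one. Since the penalty falls on the synthetic feature weight, and the intercept is that weight times intercept_scaling, a large scaling should leave the intercept nearly unpenalized.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data with a large true intercept (logit = 0.5*x0 + 0.5*x1 + 3.0).
rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 2))
logits = 0.5 * X[:, 0] + 0.5 * X[:, 1] + 3.0
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Default scaling: the intercept is regularized at full strength.
lo = LogisticRegression(solver="liblinear", C=0.01,
                        intercept_scaling=1.0).fit(X, y)
# Large scaling: the penalized synthetic weight is intercept/1e3,
# so the effective penalty on the intercept is tiny.
hi = LogisticRegression(solver="liblinear", C=0.01,
                        intercept_scaling=1e3).fit(X, y)

print("intercept_scaling=1:  ", lo.intercept_)
print("intercept_scaling=1e3:", hi.intercept_)
```

With the large scaling, the fitted intercept should move toward the unpenalized value that the other solvers recover directly.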