I noticed that the math for SVR states that SVR uses an L1 penalty, i.e. the epsilon-insensitive loss function. But the sklearn SVR model documentation mentions an L2 penalty. I don't have much experience with SVR, so I thought the community members who do could shed some light on this.
Here is the snippet from the documentation:
C: float, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.
Check out this link: https://scikit-learn.org/stable/modules/svm.html#svm-regression. Quote: "Here, we are penalizing samples whose prediction is at least ε away from their true target."
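To make the distinction concrete: the two terms are not in conflict. The "squared l2 penalty" the docs mention is the 0.5·||w||² regularizer on the *weights*, while the epsilon-insensitive (L1-style) loss is applied to the *residuals*; C just scales the loss term relative to the regularizer. Here's a minimal NumPy sketch of the SVR primal objective (not sklearn's actual solver, just an illustration of the formula; the function names are my own):

```python
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, eps=0.1):
    """max(0, |y - y_hat| - eps): errors inside the eps-tube cost nothing."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

def svr_primal_objective(w, b, X, y, C=1.0, eps=0.1):
    """0.5*||w||^2 (squared L2 penalty on the weights)
       + C * sum of eps-insensitive losses on the residuals."""
    residual_loss = epsilon_insensitive_loss(y, X @ w + b, eps)
    return 0.5 * np.dot(w, w) + C * residual_loss.sum()

# Tiny check with the model y_hat = x (w = [1], b = 0):
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.05, 1.0, 2.2])
w = np.array([1.0])
obj = svr_primal_objective(w, 0.0, X, y, C=1.0, eps=0.1)
# residuals 0.05, 0.0, 0.2 -> losses 0, 0, ~0.1 -> objective ~ 0.5 + 0.1 = 0.6
print(obj)
```

So increasing C puts more weight on the epsilon-insensitive loss and less (relatively) on the squared L2 weight penalty, which is why the docs say the regularization strength is inversely proportional to C.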
