Parameter optimization of classifier algorithms

Posted on 2024-10-19 03:13:48


It is said that different algorithms have different parameters. I don't really see how this is true; say, for a decision tree algorithm and a naive Bayes algorithm, what are the parameters for each? Can someone give me an example?

If that is the case, is running 5-fold cross-validation on data with a decision tree algorithm different from running it with a naive Bayes algorithm?

Also, for the parameter optimization I will do a 5-fold cross-validation. Is there a way to do this automatically in Weka to determine the values to set for the parameters?


2 Answers

糖果控 2024-10-26 03:13:48


Since you are using Weka, you can see the parameters for each algorithm by opening a dataset in the Explorer, going to Classify, choosing an algorithm, and then clicking on the algorithm box. For instance, the naive Bayes classifier has parameters that affect how it deals with continuous data (discretization or using a kernel estimator).
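Weka itself is a Java GUI/CLI tool, so as an illustration only, here is a scikit-learn sketch of the same two ideas: different classifiers expose entirely different parameter sets, and a 5-fold cross-validated grid search can pick values automatically (Weka's CVParameterSelection meta-classifier plays the analogous role). The dataset and parameter grid below are arbitrary choices for the example, not from the original answer.

```python
# Sketch (scikit-learn, not Weka): different classifiers expose different
# parameters, and 5-fold cross-validation can search over them automatically.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Each algorithm has its own parameter set:
print(sorted(DecisionTreeClassifier().get_params()))  # max_depth, min_samples_leaf, ...
print(sorted(GaussianNB().get_params()))              # priors, var_smoothing

# 5-fold cross-validated grid search over decision-tree parameters
# (analogous in spirit to Weka's CVParameterSelection meta-classifier):
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, None], "min_samples_leaf": [1, 5]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```

Note how the two `get_params()` calls share almost nothing: the tree is tuned by structural limits (depth, leaf size), while Gaussian naive Bayes is tuned by priors and a variance-smoothing term.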

抚你发端 2024-10-26 03:13:48


The parameters of a decision algorithm may even change over time as the algorithm runs, and they certainly differ between algorithms.

Let's say you have an AI decision tree for determining how to move soldiers around a battlefield. You may have a defensive algorithm, which seeks the decision that maximizes its own survival where it can. You may have an aggressive algorithm, which seeks maximum damage against other soldiers. You may have a demolition algorithm that seeks structural damage to walls. Each of these will have different parameters for determining which decision to make.

And the decision parameters may change as the simulation goes on. For example, the aggressive algorithm may weigh damage done against damage taken in a 2:1 manner. Let's say the AI is willing to look 100 simulation cycles into the future to make a decision. It may find that even though it was weighing 2:1, the simulations it ran to make the decision didn't match what actually happened. If it calculated it would take 100 damage but deal 200, yet it actually took 150 damage, which killed it before it could deal even 70, it could (assuming it's designed to) take this into consideration. Similarly, it may find that when it chose to reposition under certain conditions, it was able to avoid damage during the ticks up to T+10, gain a vantage point, and do more damage during ticks T+40 to T+80 than it would have normally. This will cause it to weigh the safer options more heavily than it did before.
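The weighting scheme described above can be made concrete. In this sketch, every function name and number is illustrative (not from the answer): an aggressive agent scores an action as `weight * damage_done - damage_taken`, then shrinks the weight when the look-ahead simulation turned out to be too optimistic, making the agent more cautious over time.

```python
# Illustrative sketch of the 2:1 aggressive scoring described above;
# all names and numbers are hypothetical.

def score_action(damage_done: float, damage_taken: float, weight: float = 2.0) -> float:
    """Aggressive utility: value damage dealt `weight` times as much as damage taken."""
    return weight * damage_done - damage_taken

def adjust_weight(weight: float, predicted: tuple, actual: tuple,
                  learning_rate: float = 0.5) -> float:
    """Nudge the damage weight toward what actually happened.

    predicted/actual are (damage_done, damage_taken) pairs from the
    look-ahead simulation and from the real outcome.
    """
    promised = score_action(*predicted, weight)   # utility the simulation expected
    realised = score_action(*actual, weight)      # utility actually obtained
    if promised <= 0:
        return weight                             # nothing was promised; keep weight
    error = realised / promised                   # < 1: the simulation was optimistic
    return weight + learning_rate * weight * (error - 1)

# The example from the answer: the simulation predicted 200 damage done for
# 100 taken, but the soldier actually took 150 and dealt only 70 -- so the
# weight drops, and the agent values its own safety more in future decisions.
w = adjust_weight(2.0, predicted=(200, 100), actual=(70, 150))
```

Here the adjustment is a simple proportional update; a real game AI would likely smooth this over many engagements rather than react to a single one.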
