In an ID3 implementation, at which point should the recursion in the algorithm stop?

Published on 2024-09-24 02:09:38

In an ID3 implementation, at which point should the recursion in the algorithm stop?

2 Answers

青瓷清茶倾城歌 2024-10-01 02:09:38

A branch stops when there are no examples left to classify or there are no attributes left to classify with. The algorithm description on Wikipedia is pretty easy to follow and there's a bunch of links to examples and discussions on there too.
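
To make those two stopping conditions concrete, here is a minimal sketch in Python; it is not the answerer's code, the function signature and helpers are assumptions, and each example is taken to be a (features, label) pair:

from collections import Counter

def id3(examples, attributes, parent_examples):
    # No examples left: fall back to the majority label of the parent's examples.
    if not examples:
        return Counter(label for _, label in parent_examples).most_common(1)[0][0]
    # All remaining examples share one label: return that label.
    labels = {label for _, label in examples}
    if len(labels) == 1:
        return labels.pop()
    # No attributes left to split on: return the majority label.
    if not attributes:
        return Counter(label for _, label in examples).most_common(1)[0][0]
    # Otherwise: choose the best attribute, split, and recurse (omitted here).
    ...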

浪漫之都 2024-10-01 02:09:38

Well, you continue to split (form two new nodes from an extant one) as long as the splitting criterion is satisfied.

The splitting criterion is usually that the weighted average entropy of the putative child nodes is lower than the parent node's entropy (or variance, if the target variable is continuous rather than categorical); in other words, the split must yield a positive Information Gain:

# Here each IG_* value is the entropy of that node; split only when the
# weighted average child entropy is below the parent's (positive Information Gain).
if weighted_mean(IG_child1, IG_child2) < IG_parent:
    createNodes(IG_child1, IG_child2)
else:
    pass  # stop: do not split this node any further
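
For concreteness, the quantities in that pseudocode could be computed along these lines; this is only a sketch, assuming each node is represented by the list of class labels that reach it, and none of the helper names come from the answer:

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a collection of class labels.
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

def weighted_mean_entropy(children):
    # Weighted average entropy over child subsets (each a list of labels).
    total = sum(len(child) for child in children)
    return sum(len(child) / total * entropy(child) for child in children)

# The split is worthwhile when the Information Gain is positive, i.e.
# entropy(parent_labels) - weighted_mean_entropy([child1_labels, child2_labels]) > 0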

So this is the trivial answer, but there's likely a more sophisticated intention behind your question, which, if you don't mind, I'll re-word slightly as: should you continue to create nodes as long as the splitting criterion is satisfied?

As you might have just found out if you are coding an ID3 algorithm, applying the splitting criterion without constraint will often cause over-fitting (i.e., the tree that you've built from the training data doesn't generalize well because it hasn't distinguished the noise from the genuine patterns).

So this is more likely the answer to your question: the techniques to 'constrain' node splitting (and therefore deal with the over-fitting problem) all fall into one of two categories--top down or bottom up. An example of top down: set some threshold (e.g., if the weighted mean entropy of the child nodes is less than 5% below the parent's, do not split).
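
A sketch of such a top-down check, assuming the entropy helpers sketched above and treating the 5% figure as an arbitrary threshold:

MIN_RELATIVE_GAIN = 0.05  # the 5% figure above, used only as an example threshold

def should_split(parent_labels, children):
    # Pre-pruning: split only if the relative reduction in entropy
    # exceeds the threshold.
    parent_entropy = entropy(parent_labels)
    if parent_entropy == 0:
        return False  # node is already pure: nothing to gain from splitting
    gain = parent_entropy - weighted_mean_entropy(children)
    return gain / parent_entropy > MIN_RELATIVE_GAIN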

An example of bottom up: pruning. Pruning means letting the algorithm split as long as the splitting criterion is satisfied; then, after it has stopped, start from the bottom layer of nodes and 'unsplit' any node in which the IG difference between the child nodes and the parent is less than some threshold.
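
A rough sketch of that bottom-up pass; the node structure (a children list plus the list of training labels that reached the node) and the majority_label helper are assumptions for illustration, and it reuses the entropy helpers sketched above:

from collections import Counter

def majority_label(labels):
    # Most common class label (assumed helper).
    return Counter(labels).most_common(1)[0][0]

def prune(node, threshold):
    # Post-pruning: recurse to the leaves first, then 'unsplit' any node
    # whose Information Gain falls below the threshold.
    if not node.children:
        return node  # leaf: nothing to prune
    node.children = [prune(child, threshold) for child in node.children]
    gain = entropy(node.labels) - weighted_mean_entropy(
        [child.labels for child in node.children])
    if gain < threshold:
        node.children = []                        # collapse the split
        node.label = majority_label(node.labels)  # turn the node into a leaf
    return node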

These two approaches do not have the same effect, and in fact pruning is the superior technique. The reason: if you enforce a splitting threshold top-down, then of course some splitting will be prevented; however, if it had been allowed to occur, the next split (one or both of the two child nodes into grandchildren) might have been a valid split (i.e., above the threshold) but that split would never occur. Pruning of course accounts for this.
