Forward chaining inference engines support specifications in full first-order logic (translated to if-then rules), while decision trees can only march down a set to a specific subset. If you're using both for, say, determining what car a user wants, then in first-order logic you can write a single rule for the color (in CHR syntax, where <=> replaces the LHS by the RHS) in addition to all the rules that determine the brand/type of car the user wants, and the inference engine will pick the color as well as the other attributes.
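For illustration, a rule in that spirit might look like this (the constraint names here are invented for the example, not taken from any real rule base):

```prolog
% If the user has a color preference and the color is still undecided,
% replace both constraints with a chosen car color.
choose_color @ likes(Color), needs_color <=> car_color(Color).
```

Whenever the constraint store contains both a `likes/1` preference and a pending `needs_color` constraint, this rule rewrites them into a `car_color/1` constraint, while all the other rules about brand and type keep firing independently.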
With decision trees, you'd have to set up an extra tree for the color. That's okay as long as color doesn't interact with other properties, but once it does, you're screwed: you may have to replicate the entire tree for every color, except for the colors that conflict with other properties, where you'd also need to modify the tree.
(I admit color is a very stupid example, but I hope it gets the idea across.)
To be upfront, I have not used inference engines or decision trees in practice. In my view, you should use decision trees if you want to learn from a given training set and then predict outcomes. For example, suppose you have a data set stating whether you went out for a barbecue given the weather conditions (wind, temperature, rain, ...). With that data set you can build a decision tree. The nice thing about decision trees is that you can use pruning to avoid overfitting, and therefore avoid modeling noise.
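To make the barbecue example concrete, here is a minimal ID3-style tree learner in pure Python. The toy data and attribute names are invented for illustration, and this sketch omits the pruning step mentioned above:

```python
# Minimal ID3-style decision tree on an invented "barbecue vs. weather" data set.
from collections import Counter
import math

data = [
    # features -> did we go out for a barbecue?
    ({"wind": "weak",   "rain": "no",  "temp": "warm"}, True),
    ({"wind": "weak",   "rain": "no",  "temp": "cold"}, False),
    ({"wind": "strong", "rain": "no",  "temp": "warm"}, False),
    ({"wind": "weak",   "rain": "yes", "temp": "warm"}, False),
    ({"wind": "weak",   "rain": "no",  "temp": "warm"}, True),
    ({"wind": "strong", "rain": "yes", "temp": "cold"}, False),
]

def entropy(examples):
    counts = Counter(label for _, label in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def split(examples, attr):
    groups = {}
    for features, label in examples:
        groups.setdefault(features[attr], []).append((features, label))
    return groups

def build(examples, attrs):
    labels = {label for _, label in examples}
    if len(labels) == 1 or not attrs:
        # Leaf: majority label among the remaining examples.
        return Counter(label for _, label in examples).most_common(1)[0][0]
    # Pick the attribute with the highest information gain.
    def gain(attr):
        groups = split(examples, attr)
        rem = sum(len(g) / len(examples) * entropy(g) for g in groups.values())
        return entropy(examples) - rem
    best = max(attrs, key=gain)
    rest = [a for a in attrs if a != best]
    return (best, {v: build(g, rest) for v, g in split(examples, best).items()})

def predict(tree, features):
    # Internal nodes are (attribute, branches) tuples; leaves are labels.
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[features[attr]]
    return tree

tree = build(data, ["wind", "rain", "temp"])
# Predicts True for a calm, dry, warm day.
print(predict(tree, {"wind": "weak", "rain": "no", "temp": "warm"}))
```

Pruning would then cut back branches whose splits don't generalize, trading a little training accuracy for robustness against noise.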
I think inference engines are better than decision trees if you have specific rules that you can use for reasoning. Larsmans has already provided a good example.
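To show what "reasoning from specific rules" looks like, here is a bare-bones forward-chaining loop in Python. The rules and facts are invented toy examples; each rule is a set of premises plus a conclusion, and the engine keeps firing rules until no new fact can be derived:

```python
# Minimal forward-chaining sketch with invented rules about car preferences.
# A rule fires when all of its premises are already known facts.
rules = [
    ({"wants(sports_car)"}, "wants(color(red))"),
    ({"wants(family)", "has(kids)"}, "wants(minivan)"),
    ({"wants(minivan)"}, "wants(color(silver))"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"wants(family)", "has(kids)"}, rules)
# result now also contains "wants(minivan)" and "wants(color(silver))".
```

Note how the color rule chains off a conclusion derived by another rule, which is exactly what a separate decision tree for color cannot do.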
I hope that helps.