Multi-objective optimisation: selection with NSGA vs. selection with VEGA
I was wondering what differences exist between the Vector Evaluated Genetic Algorithm (VEGA) and the Nondominated Sorting Genetic Algorithm (NSGA) in the context of selection in multi-objective optimisation?
(I am aware that NSGA is Pareto-based while VEGA is non-Pareto-based.)
1 Answer
The differences are quite large. As you say, one is Pareto-based and the other is not. In MOO, that's a huge thing. VEGA works by partitioning the population into disjoint sets and forcing the different sets to evolve towards different single objectives. There's a bit of machinery there to help combine them into a meaningful representation of the Pareto set, but it's basically just a union of solutions with respect to different objectives. Selection is done by selecting solutions that are better with respect to their individually set objective functions.
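That partition-then-select-per-objective scheme can be sketched roughly as follows. This is a minimal Python sketch, not a faithful reproduction: it assumes minimisation, a population size divisible by the number of objectives, and it uses simple truncation selection within each chunk where Schaffer's original VEGA used proportional (fitness-based) selection. The function and argument names are illustrative, not from any library.

```python
import random

def vega_select(population, objectives, n_parents):
    """VEGA-style selection sketch: shuffle, split the population into
    k disjoint chunks (one per objective), and fill each chunk's share
    of the mating pool by ranking on that single objective alone."""
    k = len(objectives)
    population = list(population)
    random.shuffle(population)              # random, disjoint partition
    chunk = len(population) // k
    pool = []
    for i, obj in enumerate(objectives):
        group = population[i * chunk:(i + 1) * chunk]
        group.sort(key=obj)                 # best-first on one objective
        pool.extend(group[:n_parents // k]) # this objective's share
    random.shuffle(pool)                    # mix chunks before crossover
    return pool
```

Because each chunk is ranked on a single objective, the pool tends to collect solutions that excel at individual objectives; the crossover step that follows is what (hopefully) recombines them into compromise solutions.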
NSGA and other Pareto-based methods are completely different. They do selection not based on any particular choice of objective, but on the properties of the solutions as compared to one another. Each such algorithm makes slightly different choices in how it performs these comparisons, and NSGA-II (you should definitely use the second version of the algorithm) does it by non-dominated sorting. Basically, you find all the non-dominated solutions and call them set #1. Then you find all the solutions that would be non-dominated if you removed the elements of set #1 -- they become set #2. You keep going until all the solutions are accounted for, and the result is something like peeling the layers of an onion. The selection procedure is then that you always select members of the lower-numbered sets first (set #1, then set #2, and so on). If you can't take all the elements of a particular level, you break ties by choosing solutions within that level that are further from the others, the idea being that if you can't take them all, you should at least avoid picking all the ones you do take from one tiny cluster.
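The onion-peeling and the distance-based tie-break can be sketched like this (a rough Python sketch for a minimisation problem; `crowding_distance` follows the spirit of NSGA-II's crowding measure, but the real algorithm has additional machinery around elitism and tournament selection that is omitted here):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation assumed)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_sort(points):
    """Peel the population into fronts: front 1 is the non-dominated
    set, front 2 is what becomes non-dominated once front 1 is
    removed, and so on until every point is assigned."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def crowding_distance(front):
    """Within one front, score each solution by how isolated it is:
    per objective, the normalised gap between its two neighbours,
    summed over objectives. Boundary points get infinite distance so
    they are always kept."""
    dist = {p: 0.0 for p in front}
    for k in range(len(front[0])):
        ordered = sorted(front, key=lambda p: p[k])
        dist[ordered[0]] = dist[ordered[-1]] = float('inf')
        span = (ordered[-1][k] - ordered[0][k]) or 1.0
        for i in range(1, len(front) - 1):
            dist[ordered[i]] += (ordered[i + 1][k] - ordered[i - 1][k]) / span
    return dist
```

Selection then fills the next population front by front; when a front doesn't fit entirely, its members with the largest crowding distance are taken first, which is exactly the "don't take everything from one tiny cluster" idea above.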
In general, you should be looking at Pareto-based methods. They've been the proven choice for at least 10-15 years. In particular, you should focus on elitist Pareto-based methods like NSGA-II, SPEA2, the epsilon-MOEA, and a few more recent contenders.