Neural network "breeding"

Posted 2024-08-08 05:00:52 · 271 characters · 4 views · 0 comments


I just watched a Google tech talk video covering "Polyworld" (found here) and they talk about breeding two neural networks together to form offspring. My question is, how would one go about combining two neural networks? They seem so different that any attempt to combine them would simply form a third, totally unrelated network. Perhaps I'm missing something, but I don't see a good way to take the positive aspects of two separate neural networks and combine them into a single one. If anyone could elaborate on this process, I'd appreciate it.


Comments (3)

倾城月光淡如水﹏ 2024-08-15 05:00:52


Neither response so far is true to the nature of Polyworld!...

They both describe a typical Genetic Algorithm (GA) application. While GA incorporates some of the elements found in Polyworld (breeding, selection), GA also implies some form of "objective" criteria aimed at guiding evolution towards [relatively] specific goals.

Polyworld, on the other hand, is a framework for Artificial Life (ALife). With ALife, the survival of individual creatures and their ability to pass their genes on to later generations is not directed so much by their ability to satisfy a particular "fitness function"; instead it is tied to various broader, non-goal-oriented criteria, such as the ability of the individual to feed itself in ways commensurate with its size and metabolism, its ability to avoid predators, its ability to find mating partners, and also various doses of luck and randomness.

Polyworld's model of the creatures and their world is relatively fixed: for example, they all have access to (though may elect not to use) various basic sensors (for color, for shape...) and various actuators ("devices" to eat, to mate, to turn, to move...), and these basic sensory and motor functions do not evolve (as they may in nature, for example when creatures find ways to become sensitive to heat or to sounds, and/or find ways of moving that are different from the original motion primitives, etc.).

On the other hand, the brains of creatures have a structure and connections which are the product of both the creature's genetic make-up ("stuff" from its ancestors) and its own experience. For example, the main algorithm used to determine the strength of connections between neurons uses Hebbian logic (i.e. fire-together, wire-together) during the lifetime of the creature (early on, mostly, I'm guessing, as such algorithms often have a "cooling" factor which minimizes their ability to change things in a big way as time goes by). It is unclear whether the model includes some form of Lamarckian evolution, whereby some of the high-level behaviors are [directly] passed on through the genes, rather than being [possibly] relearnt with each generation (on the indirect basis of some genetically passed structure).
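The fire-together-wire-together rule with a cooling factor can be sketched as follows. This is a minimal illustration, not Polyworld's actual code; the activity model, learning rate, and cooling schedule are all made up:

```python
import numpy as np

def hebbian_update(w, pre, post, eta):
    """One Hebbian step: connections between co-active neurons strengthen.
    w[i, j] is the strength of the link from pre-neuron i to post-neuron j."""
    return w + eta * np.outer(pre, post)

# A "cooling" schedule shrinks the learning rate over the creature's life,
# so early experience shapes the brain far more than later experience.
rng = np.random.default_rng(0)
w = np.zeros((3, 2))                      # 3 pre-neurons, 2 post-neurons
for t in range(100):
    pre = rng.random(3)                   # neuron activities in [0, 1)
    post = rng.random(2)
    eta = 0.1 / (1.0 + 0.05 * t)          # cooling factor (illustrative)
    w = hebbian_update(w, pre, post, eta)
```

Since co-active neurons always reinforce each other here, the weights only grow; a real system would also need some form of decay or normalization.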

The salient difference between ALife and GA (and there are others!) is that with ALife the focus is on observing and fostering, in non-directed ways, emergent behaviors (whatever they may be): for example, when some creatures evolve a makeup which prompts them to wait near piles of green food and wait for dark green creatures to kill them, or when some creatures start collaborating with one another, for example by seeking each other's presence for purposes other than mating, etc. With GA, the focus is on a particular behavior of the program being evolved. For example, the goal may be to have the program recognize edges in a video image, and therefore evolution is favored in this specific direction: individual programs which perform this task better (as measured by some "fitness function") are favored with regard to reproduction.

Another less obvious but important difference regards the way creatures (or programs, in the case of GA) reproduce. With ALife, individual creatures find their own mating partners, at random at first, although after some time they may learn to reproduce only with creatures exhibiting a particular attribute or behavior. With GA, on the other hand, "sex" is left to the GA framework itself, which chooses, for example, to preferentially cross-breed individuals (and clones thereof) which score well on the fitness function (while always leaving room for some randomness, lest the solution search get stuck at some local maximum; but the point is that the GA framework mostly decides who has sex with whom)...

Having clarified this, we can return to the OP's original question...
... how would one go about combining two neural networks? They seem so different that any attempt to combine them would simply form a third, totally unrelated network. ...I don't see a good way to take the positive aspects of two separate neural networks and combine them into a single one...
The "genetic makeup" of a particular creature affects parameters such as the size of the creature, its color, and such. It also includes parameters associated with the brain, in particular its structure: the number of neurons, the existence of connections from various sensors (e.g., does the creature see the color blue very well?), and the existence of connections towards various actuators (e.g., does the creature use its light?). The specific connections between neurons and their relative strengths may also be passed on in the genes, if only to serve as initial values to be quickly changed during the brain's learning phase.
By taking two creatures, we [nature!] can select, in a more or less random fashion, which parameters come from the first creature and which come from the other (as well as a few novel "mutations" which come from neither parent). For example, if the "father" had many connections with the red color sensor but the mother didn't, the offspring may look like the father in this area, but also get its mother's 4-neuron-layer structure rather than the father's 6-neuron-layer structure.
The point of doing so is to discover new capabilities in the individuals; in the example above, the creature may now better detect red-colored predators, and also process information more quickly in its slightly simpler brain (compared with the father's). Not all offspring are better equipped than their parents; such weaker individuals may disappear in short order (or may luckily survive long enough to pass on, say, their fancy way of moving and evading predators, even though their parentage made them blind or too big or whatever...). The key thing, again, is not to worry so much about the immediate usefulness of a particular trait, but to see how it plays out in the long term.
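The per-parameter crossover described above can be sketched roughly like this. The genome encoding, gene names, and mutation scheme are hypothetical illustrations, not Polyworld's actual representation:

```python
import random

def mutate(value):
    """Illustrative mutation: flip boolean genes, nudge numeric ones."""
    if isinstance(value, bool):
        return not value
    return value + random.choice((-1, 1))

def crossover(father, mother, mutation_rate=0.05):
    """Per-gene uniform crossover: each parameter comes at random from one
    parent, with an occasional mutation that comes from neither."""
    child = {}
    for gene in father:
        child[gene] = random.choice((father[gene], mother[gene]))
        if random.random() < mutation_rate:
            child[gene] = mutate(child[gene])
    return child

# The father sees red well but has a deeper brain; the mother is the reverse.
father = {"red_sensor": True, "neuron_layers": 6}
mother = {"red_sensor": False, "neuron_layers": 4}
offspring = crossover(father, mother)
```

Because each gene is inherited independently, the offspring can mix traits in combinations neither parent had, which is exactly the long-term exploration the answer describes.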

爱人如己 2024-08-15 05:00:52


They wouldn't really be breeding two neural networks together. Presumably they have some variety of genetic algorithm that produces a particular neural network structure given a particular sequence of "genes". They would start with a population of gene sequences, produce their characteristic neural networks, and then expose each of these networks to the same training regimen. Presumably, some of these networks would respond to the training better than others (i.e. they would be more easily "trainable" to achieve the desired behavior). They would then take the gene sequences that produced the best "trainees", cross-breed them with each other, produce their characteristic neural networks, and expose those to the same training regimen. Presumably, some of the neural networks in the second generation would be even more trainable than those from the first. These would become the parents of the third generation, and so on.
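The loop this answer describes can be sketched as follows. The genome-to-network decoder and the trainability score are stand-ins (a real system would build an actual network from the genes and run the shared training regimen); here a "gene" is just a hidden-layer width, and we pretend width 16 trains best:

```python
import random

def make_network(genes):
    """Hypothetical genome-to-network decoder: here a 'gene' is just the
    hidden-layer width of a stand-in network."""
    return {"hidden": genes[0]}

def trainability(net):
    """Stand-in score for how easily the network trains; a real system
    would measure the outcome of the shared training regimen."""
    return -abs(net["hidden"] - 16)

def evolve(population, generations=10):
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda g: trainability(make_network(g)),
                        reverse=True)
        parents = scored[: len(scored) // 2]      # best "trainees" breed
        offspring = [
            [random.choice((a[0], b[0])) + random.randint(-1, 1)]  # cross + mutate
            for a, b in zip(parents, reversed(parents))
        ]
        population = parents + offspring          # next generation
    return max(population, key=lambda g: trainability(make_network(g)))

best = evolve([[random.randint(1, 64)] for _ in range(20)])
```

Note that crossover operates on the gene sequences, never on the trained networks themselves, which is the answer's central point.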

情场扛把子 2024-08-15 05:00:52


The neural networks in this case (probably) aren't arbitrary trees. They are probably networks with a constant structure, i.e. the same nodes and connections, so 'breeding' them would involve 'averaging' the weights of the nodes. You could average the weights for each pair of corresponding nodes in the two nets to produce the 'offspring' net. Or you could use a more complicated function dependent on ever-larger sets of neighboring nodes; the possibilities are vast.
My answer is incomplete if the assumption of a fixed structure is false or unwarranted.
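Under the fixed-structure assumption, the 'averaging' crossover might look like this (a minimal sketch with weights as NumPy matrices; the layer shapes are illustrative):

```python
import numpy as np

def breed(net_a, net_b):
    """Fixed-structure crossover: each offspring weight matrix is the
    element-wise average of the parents' corresponding matrices."""
    return [(wa + wb) / 2.0 for wa, wb in zip(net_a, net_b)]

# Two parents with identical structure: 3 inputs, 4 hidden units, 2 outputs.
rng = np.random.default_rng(1)
parent_a = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
parent_b = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
child = breed(parent_a, parent_b)
```

Averaging only makes sense when corresponding weights play corresponding roles in both parents; two independently trained networks can compute the same function with permuted hidden units, in which case the average may behave like neither parent, which echoes the OP's worry.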
