Optimizing a Neural Network Using Genetic Algorithms

Is a genetic algorithm the most efficient way to optimize the number of hidden nodes and the amount of training done on an artificial neural network?

I am coding neural networks using the NNToolbox in Matlab. I am open to suggestions for other optimization techniques, but I'm most familiar with GAs.

Comments (7)

回忆躺在深渊里 2024-07-26 04:24:06

Actually, there are multiple things that you can optimize with a GA where NNs are concerned.
You can optimize the structure (number of nodes, layers, activation functions, etc.).
You can also train using a GA, which means setting the weights.

Genetic algorithms will never be the most efficient, but they are usually used when you have little clue as to what numbers to use.

For training, you can use other algorithms, including backpropagation, Nelder-Mead, etc.

You said you wanted to optimize the number of hidden nodes; for that, a genetic algorithm may be sufficient, although far from "optimal". The space you are searching is probably too small to really need a genetic algorithm, but one can still work, and AFAIK they are already implemented in Matlab, so no biggie.

What do you mean by optimizing the amount of training done? If you mean the number of epochs, that's fine; just remember that training depends somewhat on the starting weights, and those are usually random, so the fitness function used for the GA won't really be a function.
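
To make that concrete, here is a minimal sketch of driving the hidden-node count with a GA in Matlab. It assumes the Global Optimization Toolbox (`ga`, `optimoptions`) on top of the NN Toolbox (`feedforwardnet`, `train`); the toy `simplefit_dataset`, the 1..30 search range, and the `hiddenNodeCost` helper are placeholder choices for the example, not recommendations:

```matlab
% Minimal sketch: GA over the (integer) number of hidden nodes.
% Assumes Global Optimization Toolbox (ga) + Neural Network Toolbox.
% Save as a script with a local function (R2016b+), or split the
% function into its own file.
load simplefit_dataset                  % toy data: simplefitInputs, simplefitTargets
x = simplefitInputs;  t = simplefitTargets;

fitness = @(n) hiddenNodeCost(n, x, t);
opts = optimoptions('ga', 'PopulationSize', 10, 'MaxGenerations', 15);
% nvars = 1, bounds 1..30, and the trailing "1" marks variable 1 as integer
bestN = ga(fitness, 1, [], [], [], [], 1, 30, [], 1, opts);

function cost = hiddenNodeCost(n, x, t)
    net = feedforwardnet(n);
    net.trainParam.showWindow = false;  % suppress the training GUI
    net.trainParam.epochs = 50;
    [~, tr] = train(net, x, t);
    cost = tr.best_vperf;               % validation error as fitness
    % Random initial weights make this fitness noisy -- exactly the
    % caveat above about the GA fitness not really being a function.
end
```

Every fitness evaluation trains a whole network, so keep the population and generation counts small; averaging two or three training runs per evaluation also tames the noise from the random initial weights.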

空名 2024-07-26 04:24:06

A good example of neural networks and genetic programming is the NEAT architecture (NeuroEvolution of Augmenting Topologies). This is a genetic algorithm that finds an optimal topology. It's also known to be good at keeping the number of hidden nodes down.

They also made a game using this called NERO. Quite unique, with very impressive tangible results.

Dr. Stanley's homepage:

http://www.cs.ucf.edu/~kstanley/

Here you'll find just about everything NEAT related as he is the one who invented it.

睫毛溺水了 2024-07-26 04:24:06

Genetic algorithms can be usefully applied to optimising neural networks, but you have to think a little about what you want to do.

Most "classic" NN training algorithms, such as Back-Propagation, only optimise the weights of the neurons. Genetic algorithms can optimise the weights, but this will typically be inefficient. However, as you were asking, they can optimise the topology of the network and also the parameters for your training algorithm. You'll have to be especially wary of creating networks that are "over-trained" though.

One further technique, using a modified genetic algorithm, can be useful for overcoming a problem with Back-Propagation. Back-Propagation usually finds local minima, but it finds them accurately and rapidly. Combining a Genetic Algorithm with Back-Propagation, e.g. in a Lamarckian GA, gives the advantages of both. This technique is briefly described in the GAUL tutorial.
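
As an illustration of the Lamarckian idea (not GAUL's actual API), here is a sketch of the refinement step in Matlab, assuming the NN Toolbox's `setwb`/`getwb` for moving weights between chromosome and network; `lamarckianStep` and the 5-epoch budget are made up for the example:

```matlab
% Hypothetical Lamarckian refinement step: backprop polishes an
% individual's weights, and the improved weights are written back
% into the chromosome before fitness is recorded.
function [chrom, cost] = lamarckianStep(chrom, net, x, t)
    net = setwb(net, chrom);           % decode chromosome into the network
    net.trainParam.epochs = 5;         % short local search, not full training
    net.trainParam.showWindow = false;
    net = train(net, x, t);            % backprop finds the nearby local minimum
    chrom = getwb(net);                % Lamarckian: refined weights re-encoded
    cost = perform(net, t, net(x));    % fitness after refinement
end
```

The surrounding GA keeps its usual selection and crossover on top of this; backprop contributes speed and accuracy near a minimum, while the GA contributes the global exploration.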

木格 2024-07-26 04:24:06

It is sometimes useful to use a genetic algorithm to train a neural network when your objective function isn't continuous.

笨死的猪 2024-07-26 04:24:06

I'm not sure whether you should use a genetic algorithm for this.

I suppose the initial solution population for your genetic algorithm would consist of training sets for your neural network (given a specific training method). Usually the initial solution population consists of random solutions to your problem. However, random training sets would not really train your neural network.

The evaluation function for your genetic algorithm would be a weighted average of the amount of training needed, the quality of the neural network at solving a specific problem, and the number of hidden nodes.

So, if you ran this, you would get the training set that delivered the best result in terms of neural network quality (= training time, number of hidden nodes, problem-solving capabilities of the network).

Or are you considering an entirely different approach?

可遇━不可求 2024-07-26 04:24:06

I'm not entirely sure what kind of problem you're working with, but a GA sounds like a little bit of overkill here. Depending on the range of parameters you're working with, an exhaustive (or otherwise unintelligent) search may work. Try plotting your NN's performance with respect to the number of hidden nodes for the first few values, starting small and jumping by larger and larger increments. In my experience, many NNs plateau in performance surprisingly early; you may be able to get a good picture of which range of hidden node counts makes the most sense.

The same is often true for NNs' training iterations. More training helps networks up to a point, but soon ceases to have much effect.

In the majority of cases, these NN parameters don't affect performance in a very complex way. Generally, increasing them increases performance for a while but then diminishing returns kick in. GA is not really necessary to find a good value on this kind of simple curve; if the number of hidden nodes (or training iterations) really does cause the performance to fluctuate in a complicated way, then metaheuristics like GA may be apt. But give the brute-force approach a try before taking that route.
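
A minimal sketch of that sweep in Matlab, assuming the NN Toolbox and your own inputs `x` and targets `t` already in the workspace:

```matlab
% Brute-force sweep: small sizes first, then larger and larger jumps,
% plotting validation error to spot the early plateau.
sizes = [1 2 4 8 16 32 64];
verr  = zeros(size(sizes));
for i = 1:numel(sizes)
    net = feedforwardnet(sizes(i));
    net.trainParam.showWindow = false;
    [~, tr] = train(net, x, t);
    verr(i) = tr.best_vperf;           % validation error at early stopping
end
semilogx(sizes, verr, 'o-');
xlabel('hidden nodes'); ylabel('validation error');
```

If the curve flattens by, say, 16 nodes, there is little point in searching any range beyond that.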

开始看清了 2024-07-26 04:24:06

I would tend to say that a genetic algorithm is a good idea, since you can start with a minimal solution and grow the number of neurons. It is very likely that the "quality function" whose optimal point you want to find is smooth and has only a few bumps.

If you have to find this optimal NN frequently, I would recommend using an optimization algorithm instead; in your case quasi-Newton, as described in Numerical Recipes, which works well for problems where the function is expensive to evaluate.
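
If you stay inside the NN Toolbox, a quasi-Newton trainer already exists; a brief sketch, again assuming `x` and `t` are your own data:

```matlab
% 'trainbfg' is the toolbox's BFGS quasi-Newton training function;
% swap it in for the default before calling train().
net = feedforwardnet(10);
net.trainFcn = 'trainbfg';             % BFGS quasi-Newton
net.trainParam.showWindow = false;
[net, tr] = train(net, x, t);
```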
