Can a neural network be used to find a function's minimum?

Posted on 2024-07-15 02:29:40

I had been interested in neural networks for a bit and thought about using one in Python for a light project that compares various minimization techniques in the time domain (i.e., which is fastest).

Then I realized I didn't even know whether an NN is good for minimization. What do you think?

Comments (8)

把回忆走一遍 2024-07-22 02:29:40

It sounds to me like this is a problem more suited to genetic algorithms than neural networks. Neural nets tend to need a bounded problem to solve, requiring training against known data, etc., whereas genetic algorithms work by finding better and better approximate solutions to a problem without requiring training.
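
For illustration, here is a minimal sketch of that GA idea in Python. No training data is involved; the toy objective, population size, and mutation scale are all assumptions made up for this example:

import numpy as np

def f(x):
    # toy objective to minimize; the true minimizer is x = 1
    return (x - 1.0) ** 2

rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=50)           # random initial population
for _ in range(100):
    survivors = pop[np.argsort(f(pop))[:10]]  # keep the 10 fittest
    # children are mutated copies of randomly chosen survivors
    pop = rng.choice(survivors, size=50) + rng.normal(0.0, 0.5, size=50)

print(pop[np.argmin(f(pop))])                 # close to 1.0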

醉梦枕江山 2024-07-22 02:29:40

Back-propagation works by minimizing the error. However, you can really minimize whatever you want. So, you could use back-prop-like update rules to find the Artificial Neural Network inputs that minimize the output.

This is a big question, sorry for the short answer. I should also add that my suggested approach sounds pretty inefficient compared to more established methods and would only find a local minimum.
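
A minimal sketch of that idea, assuming PyTorch is available; the tiny random network and the learning rate are placeholders, since the point is only the back-prop-like update applied to the input:

import torch

torch.manual_seed(0)
# a small random network standing in for the function to minimize
net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
for p in net.parameters():
    p.requires_grad_(False)           # freeze the weights; only the input moves

x = torch.zeros(1, 2, requires_grad=True)
opt = torch.optim.SGD([x], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    net(x).sum().backward()           # gradient of the output w.r.t. x
    opt.step()

print(x.detach(), net(x).item())      # a local minimizer and its value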

吃颗糖壮壮胆 2024-07-22 02:29:40

The training process of a back-propagation neural network works by minimizing the error from the optimal result. But having a trained neural network find the minimum of an unknown function would be pretty hard.

If you restrict the problem to a specific function class, it could work, and be pretty quick too. Neural networks are good at finding patterns, if there are any.

放低过去 2024-07-22 02:29:40

Although this comes a bit too late for the author of this question, maybe somebody who reads this wants to test some optimization algorithms...

If you are working with regressions in machine learning (NN, SVM, multiple linear regression, k-nearest neighbors) and you want to minimize (maximize) your regression function, this is actually possible, but the efficiency of such algorithms depends on the smoothness (step size... etc.) of the region you are searching in.

In order to construct such "machine learning regressions" you could use scikit-learn. You have to train and validate your MLR / Support Vector Regression (the "fit" method):

SVR.fit(Sm_Data_X, Sm_Data_y)

Then you have to define a function which returns a prediction of your regression for an array "x".

def fun(x):
    # the optimizer passes a 1-D array; predict expects a 2-D array
    return SVR.predict(x.reshape(1, -1))[0]

You can use scipy.optimize.minimize for optimization. See the examples following the doc links.
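
Putting it together, a self-contained sketch of the recipe; the quadratic toy data, the RBF kernel, and the starting point are assumptions made up for illustration:

import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVR

rng = np.random.default_rng(0)
Sm_Data_X = rng.uniform(-3, 3, size=(200, 1))
Sm_Data_y = (Sm_Data_X[:, 0] - 1.0) ** 2 + rng.normal(0, 0.05, size=200)

svr = SVR(kernel="rbf", C=10.0).fit(Sm_Data_X, Sm_Data_y)  # train the regression

def fun(x):
    # the regression prediction is the objective to minimize
    return svr.predict(np.atleast_2d(x))[0]

res = minimize(fun, x0=[0.0], method="Nelder-Mead")
print(res.x)   # should land near the learned minimum around x = 1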

素染倾城色 2024-07-22 02:29:40

They're pretty bad for the purpose; one of the big problems of neural networks is that they get stuck in local minima. You might want to look into support vector machines instead.

沐歌 2024-07-22 02:29:40

Actually you could use an NN to find a function minimum, but it would work best combined with the genetic algorithms mentioned by Erik.

Basically, NNs tend to find solutions that correspond to a local minimum or maximum of a function, but in doing so they are pretty precise (to comment on Tetha's answer stating that NNs are classifiers: you could use one to classify whether a given input is a minimum or not).

In contrast, genetic algorithms tend to find a more universal solution across the whole range of possible inputs, but then give you only approximate results.

The solution is to combine the two worlds (a sketch follows the list below):

  1. Get the approximate result from the genetic algorithm
  2. Use that result to find the more precise answer using the NN
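
A rough sketch of that two-step recipe, where scipy's local optimizer stands in for the NN-based refinement step; the multimodal toy objective and all parameters are made up for illustration:

import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sin(3 * x) + 0.1 * x ** 2       # multimodal toy objective

rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=200)
for _ in range(50):                            # step 1: coarse evolutionary search
    best = pop[np.argsort(f(pop))[:20]]
    pop = rng.choice(best, size=200) + rng.normal(0.0, 0.3, size=200)

x0 = pop[np.argmin(f(pop))]                    # approximate result
res = minimize(lambda z: f(z[0]), [x0])        # step 2: precise local refinement
print(res.x, res.fun)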

〃温暖了心ぐ 2024-07-22 02:29:40

You can teach an NN to approximate a function. If the function is differentiable, or your NN has more than one hidden layer, you can teach it to give the derivative of the function.

Example:

You can train a 1-input, 1-output NN to give output=sin(input).

You can also train it to give output=cos(input), which is the derivative of sin().

You get a minimum/maximum of sin where cos equals zero.

Scan for zero output while feeding in many input values: 0=cos() -> minimum/maximum of sin.

When you reach zero output, you know that the input value is a minimum (or maximum) of the function.

Training takes less time; sweeping for zero takes longer.
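
A rough sketch of the sweep, assuming scikit-learn; here an MLPRegressor learns cos() directly from samples, and we scan its output for sign changes (the architecture and sampling density are arbitrary choices):

import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(0, 2 * np.pi, 2000).reshape(-1, 1)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(x, np.cos(x).ravel())

# sweep many inputs and look for sign changes of the learned cos()
xs = np.linspace(0, 2 * np.pi, 10000)
ys = net.predict(xs.reshape(-1, 1))
zeros = xs[1:][np.sign(ys[:-1]) != np.sign(ys[1:])]
print(zeros)   # near pi/2 and 3*pi/2, the extrema of sin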

我最亲爱的 2024-07-22 02:29:40

Neural networks are classifiers. They separate two classes of data elements. They learn this separation (usually) by preclassified data elements. Thus, I say: No, unless you do a major stretch beyond breakage.
