Particle Swarm Optimization (PSO) — learning and adaptation
I have recently implemented a basic PSO algorithm which, when given a function of two variables (x, y), returns the minimum of the function within a range.

Now the issue is that the function is not known. My PS is to be fed with data sets (the data sets could come from various domains, such as mobile computing). For instance, let them be tuples of the form (x, y, f(x,y)). [During the learning phase the optimum value is provided too.] After some 1000s of sample data points, the PS would be tested with another set of data. The PS should then return the optimum value, i.e. given (x, y), return f(x,y).

The problem seems to me very similar to an ANN. I have no idea how to proceed on this: should my PS try to generate a polygon?
From your description, I understand you intend to use PSO for function approximation: for a dataset containing many rows of values x, y, z, you want to use a PSO to find a function f(x, y) which approximates z (i.e. the error |z - f(x,y)| is small).

I think you might have some of the terms wrong, though; in particular, I imagine by 'polygon' you mean 'polynomial'.
And yes, you can use polynomials for function approximation.
For instance, if you want to keep it simple at first, you can start with the linear polynomial f(x,y) = ax + by + c. The PSO would then attempt to produce values for a, b, and c. The cost function to minimise for each particle would then be the sum of the squared errors (f(x,y) - z)^2 over each tuple in the dataset.
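To make this concrete, here is a minimal sketch of the idea, assuming standard PSO update rules (inertia weight plus cognitive and social terms) and made-up hyperparameter values; every function name and constant here is illustrative, not part of any specific library:

```python
import random

def cost(params, data):
    """Sum of squared errors of f(x,y) = a*x + b*y + c over the dataset."""
    a, b, c = params
    return sum((a * x + b * y + c - z) ** 2 for x, y, z in data)

def pso_fit(data, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Fit (a, b, c) with a basic PSO; each particle is one candidate triple."""
    rng = random.Random(seed)
    dim = 3
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_cost = [cost(p, data) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c_i = cost(pos[i], data)
            if c_i < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c_i
                if c_i < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c_i
    return gbest, gbest_cost

# Toy dataset generated from z = 2x - y + 1, standing in for your (x, y, f(x,y)) tuples.
data = [(x, y, 2 * x - y + 1) for x in range(-3, 4) for y in range(-3, 4)]
best_params, best_cost = pso_fit(data)
```

With the dataset generated from a linear function, the swarm should drive the squared-error cost close to zero and recover coefficients near (2, -1, 1).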
Eventually, you'll likely also want to look into splitting your data into a training and a validation set to avoid over-fitting.
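The split itself can be as simple as the following sketch (the 20% hold-out fraction is an arbitrary choice for illustration): fit the coefficients on the training portion only, and use the error on the held-out portion to judge whether a more complex polynomial is actually over-fitting.

```python
import random

def train_val_split(data, val_fraction=0.2, seed=0):
    """Shuffle once, then hold out val_fraction of the tuples for validation."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]   # (train, validation)

data = [(x, y, x + y) for x in range(10) for y in range(10)]
train, val = train_val_split(data)
```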