Knight's tour using a neural network

I was looking at the knight's tour problem and decided to have a go at implementing it in Python, using a neural network to find solutions.

A general explanation of the method can be found on Wikipedia.

While I think I have implemented it correctly (I can't see anything that is wrong), it doesn't work: it updates a few links, removing the edges where a connecting vertex has a degree greater than two, but it doesn't converge on a solution.

I was wondering if anyone had any ideas about what I have implemented incorrectly (sorry about the horrible code).

EDIT: Working code can be found on GitHub: https://github.com/Yacoby/KnightsTour
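
For reference, here is a minimal sketch of the update rule as described on the Wikipedia page: each legal knight move is a neuron with a state U and a binary output V, and its "neighbours" are the other moves that share a board square with it. The constant 2 and the thresholds 3 and 0 are taken from the article as I remember it, and the names below are illustrative rather than anything from the code above, so treat this only as a reference point.

    def update_neuron(U, V, neighbour_outputs):
        """One update step for a single neuron.

        U                 -- current state of this neuron
        V                 -- current output (0 or 1) of this neuron
        neighbour_outputs -- outputs of the neurons sharing a square with this one
        """
        new_U = U + 2 - sum(neighbour_outputs)
        if new_U > 3:
            new_V = 1          # keep this move in the tour
        elif new_U < 0:
            new_V = 0          # drop this move
        else:
            new_V = V          # otherwise the output is unchanged
        return new_U, new_V

As far as I remember, the network is iterated until no output changes between sweeps, and it can also settle on configurations that are not tours, in which case it is restarted.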

3 Answers

錯遇了你 2024-08-14 07:06:25

You can't update the neurons in place. Since U[t+1] depends on U[t] and V[t], if you have already updated V, the calculation for U will be wrong.

I think you should split the update into two phases, update_state and update_output, so that all of the U values are updated first and then all of the V values:

    for n in neurons:
        n.update_state()    # all U[t+1] computed from U[t] and the still-old V[t]
    for n in neurons:
        n.update_output()   # all V[t+1] derived from the new states
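
A minimal sketch of what that split might look like (the class and attribute names here are assumptions rather than the question's actual code, and the thresholds follow the same rule sketched under the question):

    class Neuron:
        def __init__(self, neighbours):
            self.neighbours = neighbours  # neurons whose moves share a square with this one
            self.U = 0                    # state
            self.V = 0                    # output (0 or 1)

        def update_state(self):
            # Reads only the neighbours' outputs, which are still the old V[t]
            # because no update_output() has run yet in this sweep.
            self.U = self.U + 2 - sum(n.V for n in self.neighbours)

        def update_output(self):
            # Derives V[t+1] from the freshly computed state.
            if self.U > 3:
                self.V = 1
            elif self.U < 0:
                self.V = 0
            # otherwise V keeps its previous value

With this split the first loop never touches any V, and the second loop never reads another neuron's state, so the order of the neurons within each loop no longer matters.
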
无远思近则忧 2024-08-14 07:06:25

My first impression is that you only have one buffer for the board. I'm basing this on the fact that I don't see any buffer swap between iterations - I haven't looked that closely, though, and may easily be wrong.

If you modify a single buffer in place, then when you do the neighbour counts you base them on a partly modified board, not the board you had at the start.
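
To illustrate the double-buffering idea, here is a sketch under the assumption that the board is a 2-D list of values and that update_cell is some per-cell rule (both are hypothetical, not the structures used in the linked repository):

    import copy

    def step(board, update_cell):
        """Compute the next board from a read-only snapshot of the current one.

        board       -- 2-D list holding the current values
        update_cell -- function (old_board, x, y) -> new value; reads old_board only
        """
        old = board                     # every read comes from this snapshot
        new = copy.deepcopy(board)      # every write goes into a separate buffer
        for x in range(len(old)):
            for y in range(len(old[x])):
                new[x][y] = update_cell(old, x, y)
        return new                      # the caller then swaps: board = step(board, update_cell)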

燕归巢 2024-08-14 07:06:25

After looking over your code, I think your explanation of the formula you used may be incorrect. You say that when updating the state you add four rather than two and subtract the output of the neuron itself. It looks to me like you're subtracting the output of the neuron itself twice. Your code that finds neighbors does not appear to distinguish between the neighbors of the neuron and the neuron itself, and you run this code twice - once for each vertex.

Tests on my own code seem to confirm this. Convergence rates drastically improve when I subtract the neuron's own output twice rather than once.
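
For illustration, one way to make the neighbour search distinguish the neuron itself from its neighbours (the data layout below is hypothetical, not the repository's code) is to skip the edge under consideration before summing, so its own output never enters the sum at either vertex:

    def neighbour_output_sum(i, edges, outputs):
        """Sum the outputs of every edge sharing a vertex with edge i,
        never counting edge i itself."""
        a, b = edges[i]            # the two squares joined by edge i
        total = 0
        for j, (c, d) in enumerate(edges):
            if j == i:
                continue           # the neuron itself is not its own neighbour
            if a in (c, d) or b in (c, d):
                total += outputs[j]
        return total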
