What's wrong with my Hopfield neural network approach to the Travelling Salesman Problem?
First off, this is homework. I think it's clear I've made an effort and I'm looking for hints, not code.
The problem is the following: the update equation has four components for altering a given neuron.
- A) One part to ensure each city is visited at most once.
- B) One part to ensure each position (first, second, third, etc.) holds at most one city.
- C) One part to ensure the total number of active neurons equals the number of cities.
- D) One part to minimize the total tour distance.
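The four parts above are the terms of the usual Hopfield–Tank formulation. As a minimal Python sketch (names and structure are my own, not taken from the MATLAB below), the four contributions for one neuron at row X (city) and column i (position) can be computed like this:

```python
import numpy as np

def du_terms(V, dist, X, i, A, B, C, D):
    """Sketch of the four Hopfield-Tank penalty terms for neuron (X, i).

    V    : n x n matrix of neuron outputs (rows = cities, cols = positions)
    dist : n x n symmetric city distance matrix
    """
    n = V.shape[0]
    # A) other active neurons in city X's row (city visited more than once)
    a = A * (V[X, :].sum() - V[X, i])
    # B) other active neurons in position i's column (position holding >1 city)
    b = B * (V[:, i].sum() - V[X, i])
    # C) total activation deviating from the number of cities
    c = C * (V.sum() - n)
    # D) distance to whichever cities occupy the neighboring tour positions
    before, after = (i - 1) % n, (i + 1) % n
    d = D * (dist[X, :] @ (V[:, before] + V[:, after]))
    return a, b, c, d
```

For a perfect permutation matrix, the A, B, and C terms all vanish and only the distance term D remains, which is what makes the relative weighting of D against the constraint terms so delicate.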
If I weight D heavily enough that it has any effect, the network settles on an invalid tour (for example, visit A, D, nowhere, E, C). I can, however, deweight D and the code will find solutions, but not those with minimal distance.
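A quick way to detect that failure mode programmatically (helper name is my own, not from the post) is to check whether the thresholded output is a permutation matrix, i.e. every row and every column contains exactly one active neuron:

```python
import numpy as np

def is_valid_tour(Vfinal):
    """True if the thresholded output matrix encodes a valid tour:
    each city appears exactly once and each position holds exactly one city."""
    Vfinal = np.asarray(Vfinal)
    return bool((Vfinal.sum(axis=0) == 1).all() and
                (Vfinal.sum(axis=1) == 1).all())
```

The "visit A, D, nowhere, E, C" case shows up as a column summing to zero.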
I'd be extremely grateful for any advice; I've been banging my head against the keyboard for a while. The code should be understandable by anyone familiar with solving the TSP with a Hopfield network.
Das Code:
%parameters
n = 5;
theta = .5;
u0 = 0.02;
h = .1;
limit = 2000;
%init u
u = zeros(n,n);
uinit = -u0/2*log(n-1); %p94: uInit = -u0/2 * ln(n-1)
for i = 1:n
    for j = 1:n
        u(i,j) = uinit * (1 + rand()*0.2 - 0.1); %add noise in [-0.1*uInit, 0.1*uInit]
    end
end
%loop: update one randomly chosen neuron per iteration
for index = 1:limit
    i = ceil(rand()*n); %random city (row)
    k = ceil(rand()*n); %random position (column)
    %fourth-order Runge-Kutta step for this neuron
    k1 = h*du(u, i, k, 0);
    k2 = h*du(u, i, k, k1/2);
    k3 = h*du(u, i, k, k2/2);
    k4 = h*du(u, i, k, k3);
    u(i,k) = u(i,k) + (k1 + 2*k2 + 2*k3 + k4)/6;
end
Vfinal = hardlim(V(u)-theta)
du()
function out = du(u, X, i, c)
dist = [ 0, 41, 45, 32, 32;
        41,  0, 36, 64, 54;
        45, 36,  0, 76, 32;
        32, 64, 76,  0, 60;
        32, 54, 32, 60,  0];
t = 1;
n = 5;
A = 10;
B = 10;
C = 10;
D = .0001;
%A) other active neurons in city X's row
AComp = A*sum(V(u(X,:))) - A*V(u(X,i));
%B) other active neurons in position i's column
BComp = B*sum(V(u(:,i))) - B*V(u(X,i));
%C) total activation minus the number of cities
CComp = C*(sum(sum(V(u))) - n);
%D) distance to the cities in the two neighbouring positions (cyclic)
DComp = 0;
before = i - 1;
after = i + 1;
if before == 0
    before = n;
end
if after == n + 1
    after = 1;
end
for Y = 1:n
    DComp = DComp + dist(X,Y) * (V(u(Y,after)) + V(u(Y,before)));
end
DComp = DComp * D;
out = -1*(u(X,i) + c)/t - AComp - BComp - CComp - DComp;
V()
function out = V(u)
u0 = 0.02;
out = (1 + tanh(u/u0))/2;
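For reference, the activation above is a steep sigmoid on u/u0; in Python (assuming the same u0 = 0.02 and theta = 0.5 as in the script) the activation and the final thresholding look like:

```python
import numpy as np

def V(u, u0=0.02):
    # smooth 0..1 activation; smaller u0 makes the transition steeper
    return (1 + np.tanh(u / u0)) / 2

def hardlim(x):
    # 1 where x >= 0, else 0 (mirrors MATLAB's hardlim)
    return (np.asarray(x) >= 0).astype(int)
```

So `hardlim(V(u) - theta)` simply marks every neuron whose output exceeds 0.5 as active.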
I have never tried solving the TSP with a neural network, but I have found that a genetic approach solves it very well, and very quickly.
I have done many neural network projects, though, and I would guess that, since the TSP can in general have many solutions over a single network (of cities), the neural network gets dragged back and forth between competing solutions, never really converging on any one.
John R. Doner