WPF: Finding elements along a path
I have not marked this question Answered yet.
The current accepted answer got accepted automatically because of the Bounty Time-Limit
With reference to this programming game I am currently building.
As you can see from the above link, I am currently building a game in where user-programmable robots fight autonomously in an arena.
Now, I need a way to detect if a robot has detected another robot in a particular angle (depending on where the turret may be facing):
alt text http://img21.imageshack.us/img21/7839/robotdetectionrg5.jpg
As you can see from the above image, I have drawn a kind of point-of-view of a tank, which I now need to emulate in my game, checking each point within it to see whether another robot is in view.
The bots are just canvases that are constantly translating on the Battle Arena (another canvas).
I know the heading of the turret (the direction it is currently facing), and with that, I need to find whether there are any bots in its path (the path should be defined in a kind of 'viewpoint' manner, depicted in the image above as the red 'triangle'). I hope the image makes what I am trying to convey clearer.
I hope that someone can guide me through the math involved in solving this problem.
[UPDATE]
I have tried the calculations that you have told me, but it's not working properly, since, as you can see from the image, Bot1 shouldn't be able to see Bot2. Here is an example:
alt text http://img12.imageshack.us/img12/7416/examplebattle2.png
In the above scenario, Bot 1 is checking if he can see Bot 2. Here are the details (according to Waylon Flinn's answer):
angleOfSight = 0.69813170079773179 //in radians (40 degrees)
orientation = 3.3 //Bot1's current heading (191 degrees)
x1 = 518 //Bot1's Center X
y1 = 277 //Bot1's Center Y
x2 = 276 //Bot2's Center X
y2 = 308 //Bot2's Center Y
cx = x2 - x1 = 276 - 518 = -242
cy = y2 - y1 = 308 - 277 = 31
azimuth = Math.Atan2(cy, cx) = 3.0141873380511295
canHit = (azimuth < orientation + angleOfSight/2) && (azimuth > orientation - angleOfSight/2)
= (3.0141873380511295 < 3.3 + 0.349065850398865895) && (3.0141873380511295 > 3.3 - 0.349065850398865895)
= true
According to the above calculations, Bot1 can see Bot2, but as you can see from the image, that is not possible, since they are facing different directions.
What am I doing wrong in the above calculations?
The angle between the robots is arctan(x-distance, y-distance) (most platforms provide this 2-argument arctan, which does the angle adjustment for you). You then just have to check whether this angle is less than some number away from the current heading.
Edit 2020: Here's a much more complete analysis based on the updated example code in the question and a now-deleted imageshack image.

Atan2: The key function you need to find the angle between two points is atan2. It takes the Y-coordinate and X-coordinate of a vector and returns the angle between that vector and the positive X axis. The value will always be wrapped to lie between -Pi and Pi.

Heading vs Orientation: atan2, and in general all your math functions, work in the "mathematical standard coordinate system", which means an angle of 0 corresponds to due east, and angles increase counterclockwise. Thus, a "mathematical angle" of Pi / 2, as given by atan2(1, 0), means an orientation of "90 degrees counterclockwise from due east", which matches the point (x=0, y=1). "Heading" is a navigational idea that expresses orientation as a clockwise angle from due north. Use orientation_degrees = 90 - heading_degrees or orientation_radians = Math.PI / 2 - heading_radians to get one from the other, or alternatively specify input orientations in the mathematical coordinate system rather than the nautical heading coordinate system.

Checking that an angle lies between two others: Checking that a vector lies between two other vectors is not as simple as checking that the numeric angle value is between them, because of the way angles wrap at Pi/-Pi.
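The wrap at Pi/-Pi can be handled by normalising the difference of the two angles instead of comparing raw values. A minimal sketch (in Python rather than the question's C#; the function names are my own):

```python
import math

def angle_diff(a, b):
    """Smallest signed difference between two angles, wrapped to (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    if d > math.pi:
        d -= 2 * math.pi
    return d

def in_field_of_view(orientation, azimuth, fov):
    """True if `azimuth` lies within a cone of `fov` radians centred on `orientation`."""
    return abs(angle_diff(azimuth, orientation)) <= fov / 2
```

A naive numeric comparison would fail when the cone straddles the Pi/-Pi seam; normalising the difference first handles that case.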
Calculate the relative angle and distance of each robot relative to the current one. If the angle is within some threshold of the current heading and within the max view range, then it can see it.
The only tricky thing will be handling the boundary case where the angle goes from 2pi radians to 0.
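The relative angle, distance check, and the 2-pi-to-0 boundary handling described above could be combined like this (a sketch under the assumption that the heading is a mathematical angle, i.e. measured the same way atan2 measures; all names are illustrative):

```python
import math

def can_see(x1, y1, heading, x2, y2, fov, view_range):
    """Check whether the bot at (x2, y2) is within `view_range` of (x1, y1)
    and within `fov` radians of `heading`.

    `heading` is assumed to use atan2's convention (0 = +x axis,
    increasing counterclockwise).
    """
    dx, dy = x2 - x1, y2 - y1
    if math.hypot(dx, dy) > view_range:
        return False
    azimuth = math.atan2(dy, dx)
    # Normalise the difference so the 2*pi -> 0 boundary is handled.
    diff = (azimuth - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2
```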
Something like this within your bot's class (C# code):
Notes:
This assumes that:
Disclaimer: This is not tested or even checked to compile, adapt it as necessary.
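The C# snippet referenced above did not survive extraction. As a stand-in, here is a hedged sketch of the kind of check the answer describes (Python instead of C#; the class shape, field names, and default values are my own, not the original author's):

```python
import math

class Bot:
    def __init__(self, x, y, heading, fov=math.radians(40), view_range=200.0):
        self.x = x                  # centre X on the arena canvas
        self.y = y                  # centre Y on the arena canvas
        self.heading = heading      # turret direction, radians, atan2 convention
        self.fov = fov              # total field-of-view angle
        self.view_range = view_range

    def can_see(self, other):
        """True if `other` falls inside this bot's view cone and range."""
        dx, dy = other.x - self.x, other.y - self.y
        if math.hypot(dx, dy) > self.view_range:
            return False
        azimuth = math.atan2(dy, dx)
        # Wrap the difference to (-pi, pi] before comparing against the cone.
        diff = (azimuth - self.heading + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= self.fov / 2
```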
A couple of suggestions after implementing something similar (a long time ago!):
The following assumes that you are looping through all bots on the battlefield (not a particularly nice practice, but quick and easy to get something working!)
1) It's a lot easier to check whether a bot is in range than whether it can currently be seen within the FOV, e.g.
This ensures that you can potentially short-circuit a lot of FOV checking and speed up the process of running the simulation. As a caveat, you could add some randomness here to make it more interesting, such that after a certain distance the chance to see is linearly proportional to the range of the bot.
2) This article seems to have the FOV calculation stuff on it.
3) As an AI graduate ... have you tried Neural Networks? You could train them to recognise whether or not a robot is in range and a valid target. This would negate any horribly complex and convoluted maths! You could have a multi-layer perceptron [1], [2] feed in the bot's co-ordinates and the target's co-ordinates and receive a nice fire/no-fire decision at the end. WARNING: I feel obliged to tell you that this methodology is not the easiest to achieve and can be horribly frustrating when it goes wrong. Due to the (simple) non-deterministic nature of this form of algorithm, debugging can be a pain. Plus you will need some form of learning, either Back Propagation (with training cases) or a Genetic Algorithm (another complex process to perfect)! Given the choice I would use Number 3, but it's not for everyone!
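The range pre-filter from step 1 above could be sketched as follows (the constant and the bot tuples are assumptions for illustration; comparing squared distances avoids a square root per bot):

```python
MAX_VIEW_RANGE = 150.0  # illustrative sensor range in arena units

def bots_in_range(me, all_bots):
    """Filter bots by distance before doing any (more expensive) FOV test.

    `me` and the entries of `all_bots` are (x, y) tuples.
    """
    mx, my = me
    limit_sq = MAX_VIEW_RANGE ** 2
    return [b for b in all_bots
            if b != me and (b[0] - mx) ** 2 + (b[1] - my) ** 2 <= limit_sq]
```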
It can be quite easily achieved with the use of a concept in vector math called dot product.
http://en.wikipedia.org/wiki/Dot_product
It may look intimidating, but it's not that bad. This is the most correct way to deal with your FOV issue, and the beauty is that the same math works whether you are dealing with 2D or 3D (that's when you know the solution is correct).
(NOTE: If anything is not clear, just ask in the comment section and I will fill in the missing links.)
Steps:
1) You need two vectors, one is the heading vector of the main tank. Another vector you need is derived from the position of the tank in question and the main tank.
For our discussion, let's assume the heading vector for main tank is (ax, ay) and vector between main tank's position and target tank is (bx, by). For example, if main tank is at location (20, 30) and target tank is at (45, 62), then vector b = (45 - 20, 62 - 30) = (25, 32).
Again, for purpose of discussion, let's assume main tank's heading vector is (3,4).
The main goal here is to find the angle between these two vectors, and dot product can help you get that.
2) Dot product is defined as
a * b = |a||b| cos(angle)
read as "a dot b" - a and b are not numbers, they are vectors.
3) or expressed another way (after some algebraic manipulation):
angle = acos((a * b) / |a||b|)
angle is the angle between the two vectors a and b, so this info alone can tell you whether one tank can see another or not.
|a| is the magnitude of the vector a, which according to the Pythagoras Theorem, is just sqrt(ax * ax + ay * ay), same goes for |b|.
Now the question comes, how do you find out a * b (a dot product b) in order to find the angle.
4) Here comes the rescue: it turns out that the dot product can also be expressed as:
a * b = ax * bx + ay * by
So angle = acos((ax * bx + ay * by) / |a||b|)
If the angle is less than half of your FOV, then the tank in question is in view. Otherwise it's not.
So, using the example numbers above:
a = (3, 4)
b = (25, 32)
|a| = sqrt(3 * 3 + 4 * 4)
|b| = sqrt(25 * 25 + 32 * 32)
angle = acos((3 * 25 + 4 * 32) / (|a| |b|))
(Be sure to convert the resulting angle to degree or radian as appropriate before comparing it to your FOV)
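The steps above can be sketched as follows (a minimal version; the function name is my own, and the example numbers are the ones from this answer, heading a = (3, 4) and b = (45 - 20, 62 - 30) = (25, 32)):

```python
import math

def in_view_dot(heading_vec, to_target_vec, fov):
    """Angle between the heading vector and the vector to the target,
    computed via the dot product; the target is in view if that angle
    is at most fov / 2."""
    ax, ay = heading_vec
    bx, by = to_target_vec
    dot = ax * bx + ay * by                       # a . b = ax*bx + ay*by
    mag = math.hypot(ax, ay) * math.hypot(bx, by) # |a| |b|
    angle = math.acos(dot / mag)
    return angle <= fov / 2
```

Note that this works unchanged in 3D by adding the z components to the dot product and the magnitudes.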
This will tell you if the center of canvas2 can be hit by canvas1. If you want to account for the width of canvas2 it gets a little more complicated. In a nutshell, you would have to do two checks, one for each of the relevant corners of canvas2, instead of one check on the center.
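A sketch of that two-corner idea, assuming the target's width is represented by its two horizontal extremes and reusing a dot-product angle test (helper names are mine):

```python
import math

def angle_between(a, b):
    """Angle between two 2D vectors via the dot product."""
    dot = a[0] * b[0] + a[1] * b[1]
    mag = math.hypot(*a) * math.hypot(*b)
    return math.acos(dot / mag)

def box_in_view(heading_vec, shooter, target_center, half_width, fov):
    """Check the two horizontal extremes of the target instead of its centre.

    The target counts as visible if either corner falls inside the cone,
    so a wide target that overlaps the FOV edge is still detected.
    """
    cx, cy = target_center
    corners = [(cx - half_width, cy), (cx + half_width, cy)]
    for px, py in corners:
        to_corner = (px - shooter[0], py - shooter[1])
        if angle_between(heading_vec, to_corner) <= fov / 2:
            return True
    return False
```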
Looking at both of your questions, I think you can solve this problem using the math provided; you then have to solve many other issues around collision detection, firing bullets, etc. These are non-trivial to solve, especially if your bots aren't square. I'd recommend looking at physics engines - Farseer on CodePlex is a good WPF example, but that makes this into a project way bigger than a high-school dev task.
The best advice I got for high marks: do something simple really well, don't part-deliver something brilliant.
Does your turret really have that wide a firing pattern? The path a bullet takes would be a straight line, and it would not get bigger as it travels. You should have a simple vector in the direction of the turret representing the turret's kill zone. Each tank would have a bounding circle representing its vulnerable area. Then you can proceed the way they do with ray tracing: a simple ray/circle intersection. See section 3 of the document Intersection of Linear and Circular Components in 2D.
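A minimal sketch of that ray/circle test (my own formulation of the standard projection approach, not code from the referenced document):

```python
import math

def ray_hits_circle(origin, direction, center, radius):
    """Does a ray from `origin` along `direction` hit the circle
    (`center`, `radius`)?

    Projects the origin->center vector onto the ray; the ray hits if the
    closest point on the ray lies within `radius` of the centre and is
    not behind the origin.
    """
    ox, oy = origin
    dx, dy = direction
    length = math.hypot(dx, dy)
    dx, dy = dx / length, dy / length      # normalise the ray direction
    cx, cy = center[0] - ox, center[1] - oy
    t = cx * dx + cy * dy                  # projection of centre onto the ray
    if t < 0:
        return False                       # circle is behind the turret
    # Squared distance from the centre to the closest point on the ray.
    nearest_sq = (cx - t * dx) ** 2 + (cy - t * dy) ** 2
    return nearest_sq <= radius * radius
```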
Your updated problem seems to come from different "zero" directions of orientation and azimuth: an orientation of 0 seems to mean "straight up", but an azimuth of 0 means "straight right".