How do you use neural networks for "soft" problem solving?
I'm considering using a neural network to power the enemies in a space shooter game I'm building, and I'm wondering: how do you train a neural network when there is no single, definitive set of good outputs for it?
4 Answers
I'm studying neural networks at the moment, and they seem quite useless without well-defined input and output encodings, and they don't scale at all with complexity (see http://en.wikipedia.org/wiki/VC_dimension). That's why neural network research has found so little application since the initial hype 20-30 years ago, while semantic/state-based AI took over everyone's interest because of its success in real-world applications.
In short, it's probably better for you to use neural nets for a small portion of the game rather than as the core enemy AI.
You can check out AI Dynamic game difficulty balancing for various AI techniques and references.
(IMO, you can implement enemy behaviors, like "surround the enemy", which will be really cool, without delving into advanced AI concepts)
Edit: since you're making a space shooter game and want some kind of AI for your enemies, I believe you'll find this link interesting: Steering Behaviors For Autonomous Characters.
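To make the steering-behaviors idea concrete, here is a minimal Python sketch (not taken from the linked article) of a Reynolds-style "seek" behavior for an enemy ship. The Vec2 class and the MAX_SPEED/MAX_FORCE constants are illustrative names, not part of any particular engine.

    import math

    class Vec2:
        """Minimal 2D vector with just the operations the steering code needs."""
        def __init__(self, x=0.0, y=0.0):
            self.x, self.y = x, y
        def __add__(self, o): return Vec2(self.x + o.x, self.y + o.y)
        def __sub__(self, o): return Vec2(self.x - o.x, self.y - o.y)
        def __mul__(self, s): return Vec2(self.x * s, self.y * s)
        def length(self): return math.hypot(self.x, self.y)
        def truncated(self, max_len):
            l = self.length()
            return self if l == 0 or l <= max_len else self * (max_len / l)

    def seek(position, velocity, target, max_speed, max_force):
        """Steer toward the target at full speed, capped by a maximum turning force."""
        to_target = target - position
        dist = to_target.length()
        if dist == 0:
            return Vec2(0, 0)
        desired_velocity = to_target * (max_speed / dist)  # full speed toward target
        steering = desired_velocity - velocity             # correction to current velocity
        return steering.truncated(max_force)               # limit how hard the ship can turn

    # Per-frame update for one enemy ship (dt = frame time in seconds):
    #   force = seek(ship.pos, ship.vel, player.pos, MAX_SPEED, MAX_FORCE)
    #   ship.vel = (ship.vel + force * dt).truncated(MAX_SPEED)
    #   ship.pos = ship.pos + ship.vel * dt

Combining a few such behaviors (seek, flee, wander, separation) with weighted sums is usually enough for convincing enemy movement without any learning at all.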
Have you considered that it's easily possible to modify an FSM in response to stimulus? It's just a table of numbers, after all; you can hold it in memory somewhere and change the numbers as you go. I wrote about it a bit in one of my blog-fuelled deliriums, and it oddly got picked up by a game AI news site. Then the guy who built a Ms. Pac-Man AI that could beat humans and got on the real news left a comment on my blog with a link to even more useful information.
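As a rough sketch of the "FSM is just a table of numbers" idea, here is one way (in Python) to keep a weighted transition table and nudge its numbers in response to how things turn out. The states, stimuli, and reward values are made up for illustration.

    import random

    # States and stimuli for a hypothetical space-shooter enemy (names are illustrative).
    STATES  = ["patrol", "attack", "flee"]
    STIMULI = ["player_spotted", "took_damage", "player_lost"]

    # transitions[state][stimulus] is a list of weights, one per candidate next state.
    # Because it is just a table of numbers, it can be tweaked while the game runs.
    transitions = {
        s: {st: [1.0] * len(STATES) for st in STIMULI} for s in STATES
    }

    def next_state(current, stimulus):
        """Pick the next state with probability proportional to its weight."""
        weights = transitions[current][stimulus]
        return random.choices(STATES, weights=weights, k=1)[0]

    def reinforce(state, stimulus, chosen_state, reward, rate=0.1):
        """Nudge the weight of a transition up or down based on how it worked out."""
        idx = STATES.index(chosen_state)
        w = transitions[state][stimulus]
        w[idx] = max(0.01, w[idx] + rate * reward)  # keep every option slightly possible

    # Example: an enemy on "patrol" spots the player, tries the chosen response,
    # and the outcome was bad (it got shot), so that transition is weakened a little.
    s = next_state("patrol", "player_spotted")
    reinforce("patrol", "player_spotted", s, reward=-1.0)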
Here's my blog post with my incoherent ramblings about an idea I had for using Markov chains to continually adapt to a game environment, and perhaps overlay and combine what the computer has learned about how the player reacts to game situations.
http://bustingseams.blogspot.com/2008/03/funny-obsessive-ideas.html
And here's the link to the awesome resource on reinforcement learning that Mr. Smarty McPacman posted for me.
http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html
Here's another cool link:
http://aigamedev.com/open/architecture/online-adaptation-game-opponent/
These are not neural net approaches, but they do adapt and continually learn, and are probably better suited to games than neural networks.
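For a flavour of the kind of continual learning these links describe, here is a minimal tabular Q-learning sketch in the spirit of the Sutton & Barto book linked above. The states, actions, reward signal, and the run_game_step helper are placeholders you would have to define for your own game.

    import random
    from collections import defaultdict

    ACTIONS = ["chase", "strafe", "retreat"]   # illustrative enemy decisions
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration rate

    Q = defaultdict(float)   # Q[(state, action)] -> estimated long-term value

    def choose_action(state):
        """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """One-step Q-learning: move the estimate toward reward + discounted future value."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # Per-decision loop (pseudo-usage):
    #   action = choose_action(state)
    #   reward, next_state = run_game_step(state, action)   # supplied by your game
    #   update(state, action, reward, next_state)
    #   state = next_state

Because the whole "brain" is again just a table of numbers, it is easy to save between sessions, cap, or hand-tune so the enemies never become unfairly good.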
I'll refer you to two of Matthew Buckland's books.
The second book goes into back-propagation ANNs, which is what most people mean when they talk about NNs anyway.
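(As a rough illustration of what "back-propagation ANN" refers to, here is a tiny NumPy sketch, not taken from the book and not game-specific, that trains a small network on XOR with plain back-propagation.)

    import numpy as np

    # A tiny 2-4-1 sigmoid network trained with back-propagation on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer of 4 units
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # single output unit

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: gradients of squared error pushed back through each layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]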
That said, I think the first book is more useful if you want to create meaningful game AI. There's a nice, meaty section on using FSMs successfully (and yes, it's easy to trip yourself up with an FSM).