What is the state of the art in computer chess tree searching?

Posted 2024-07-13 06:36:15

I'm not interested in tiny optimizations that give a few percent of speed. I'm interested in the most important heuristics for alpha-beta search, and in the most important components of the evaluation function.

I'm particularly interested in algorithms that have the greatest (improvement/code_size) ratio (NOT (improvement/complexity)).

Thanks.

PS: The killer move heuristic is a perfect example - easy to implement and powerful. A database of heuristics is too complicated.

8 answers

暮色兮凉城 2024-07-20 06:36:15

Not sure if you're already aware of it, but check out the Chess Programming Wiki - it's a great resource that covers just about every aspect of modern chess AI. In particular, relating to your question, see the Search and Evaluation sections (under Principle Topics) on the main page. You might also be able to discover some interesting techniques used in some of the programs listed here. If your questions still aren't answered, I would definitely recommend asking in the Chess Programming Forums, where there are likely to be many more specialists around to answer. (Not that you won't get good answers here, just that they're rather more likely on topic-specific expert forums.)

愛放△進行李 2024-07-20 06:36:15

MTD(f) or one of the MTD variants is a big improvement over standard alpha-beta, providing you don't have really fine detail in your evaluation function and assuming that you're using the killer heuristic. The history heuristic is also useful.
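
To make the MTD(f) idea concrete, here is a minimal sketch of the driver loop, assuming a hypothetical fail-soft `alpha_beta(position, alpha, beta, depth)` backed by a transposition table (the function and parameter names are illustrative, not from any particular engine):

```python
def mtdf(position, first_guess, depth, alpha_beta):
    """MTD(f): repeated zero-width alpha-beta calls that converge on the minimax score.

    `alpha_beta(position, alpha, beta, depth)` is assumed to be a fail-soft
    alpha-beta search with a transposition table, so re-searches stay cheap.
    """
    g = first_guess
    lower, upper = float("-inf"), float("inf")
    while lower < upper:
        beta = g + 1 if g == lower else g          # zero-width window around g
        g = alpha_beta(position, beta - 1, beta, depth)
        if g < beta:                               # failed low: g is a new upper bound
            upper = g
        else:                                      # failed high: g is a new lower bound
            lower = g
    return g
```

In practice the first guess would come from the previous iteration of iterative deepening, which is what keeps the repeated zero-width searches cheap.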

The top-rated chess program Rybka has apparently abandoned MTD(f) in favour of PVS with a zero-width aspiration window on the non-PV nodes.
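
A rough sketch of that PVS scheme (not Rybka's actual code): the first move gets a full window, the rest a zero-width scout window, with a re-search only on a fail high. `generate_moves`, `make_move`, `unmake_move`, and `evaluate` are assumed placeholder helpers.

```python
def pvs(pos, alpha, beta, depth):
    """Principal variation search, negamax form (mate/stalemate detection omitted).

    Assumed placeholder helpers: generate_moves(pos), make_move(pos, m),
    unmake_move(pos, m), evaluate(pos).
    """
    if depth == 0:
        return evaluate(pos)
    first = True
    for move in generate_moves(pos):
        make_move(pos, move)
        if first:
            score = -pvs(pos, -beta, -alpha, depth - 1)       # full window on the PV move
            first = False
        else:
            score = -pvs(pos, -alpha - 1, -alpha, depth - 1)  # zero-width "scout" window
            if alpha < score < beta:                          # fail high: re-search fully
                score = -pvs(pos, -beta, -alpha, depth - 1)
        unmake_move(pos, move)
        if score > alpha:
            alpha = score
        if alpha >= beta:
            break                                             # beta cutoff
    return alpha
```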

Extended futility pruning, which incorporates both normal futility pruning and deep razoring, is theoretically unsound, but remarkably effective in practice.
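
For flavour, this is roughly what the plain futility test at frontier nodes looks like; extended futility pruning and razoring apply similar margin tests one or two plies earlier. The margin value and helper names here are illustrative assumptions.

```python
# Sketch of the frontier-node futility test used inside alpha-beta.
FUTILITY_MARGIN = 200  # roughly two pawns in centipawns; real engines tune this

def is_futile(pos, move, depth, alpha, in_check):
    """Return True if a quiet move at depth 1 can be skipped.

    static_eval(pos), is_capture(move), gives_check(pos, move) are assumed helpers.
    """
    if depth != 1 or in_check:
        return False
    if is_capture(move) or gives_check(pos, move):
        return False
    # If even an optimistic margin cannot lift the score above alpha, prune the move.
    return static_eval(pos) + FUTILITY_MARGIN <= alpha
```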

Iterative deepening is another useful technique. And I listed a lot of good chess programming links here.

脸赞 2024-07-20 06:36:15

Even though many heuristic-based optimizations (I mean ways to increase the tree depth without actually searching) are discussed in the chess programming literature, I think most of them are rarely used. The reason is that they are good performance boosters in theory, but not in practice.

Sometimes these heuristics can also return a bad (I mean not the best) move.

The people I have talked to always recommend optimizing the alpha-beta search and implementing iterative deepening, rather than trying to add other heuristics.

The main reason is that computers are increasing in processing power, and research [citation needed, I suppose] has shown that programs which use their full CPU time to brute-force the alpha-beta tree to the maximum depth have always outrun programs that split their time between a certain level of alpha-beta and then some heuristics.

Even though using some heuristics to extend the tree depth can cause more harm than good, there are many performance boosters you can add to the alpha-beta search algorithm.

I am sure that you are aware that for alpha-beta to work exactly as intended, you should have a move sorting mechanism (iterative deepening). Iterative deepening can give you about a 10% performance boost.
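
A minimal sketch of such an iterative-deepening driver, where each completed depth feeds its best move back in as the first move to try at the next depth (`search_root` and its signature are assumed placeholders):

```python
import time

def iterative_deepening(pos, max_depth, time_budget, search_root):
    """Search depth 1, 2, 3, ... reusing the previous best move for ordering.

    `search_root(pos, depth, prev_best)` is an assumed helper returning
    (best_move, score); it is expected to try `prev_best` first at the root.
    """
    deadline = time.time() + time_budget
    best_move, best_score = None, None
    for depth in range(1, max_depth + 1):
        move, score = search_root(pos, depth, best_move)
        best_move, best_score = move, score
        if time.time() >= deadline:   # keep the last fully searched result
            break
    return best_move, best_score
```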

Adding the principal variation search technique to alpha-beta may give you an additional 10% boost.

Try the MTD(f) algorithm too. It can also increase the performance of your engine.

千笙结 2024-07-20 06:36:15

One heuristic that hasn't been mentioned is Null move pruning.
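
As a rough illustration of the idea (not taken from any specific engine): let the opponent move twice in a row, search the result at reduced depth with a zero-width window, and cut if it still fails high. All helper names and the reduction R=2 below are assumptions.

```python
NULL_MOVE_REDUCTION = 2  # "R", commonly 2 or 3

def try_null_move_cutoff(pos, beta, depth, search):
    """Return (cutoff, score) from a null-move test.

    `search(pos, alpha, beta, depth)` is an assumed negamax alpha-beta;
    `make_null_move`, `unmake_null_move`, `in_check`, `has_only_pawns`
    are assumed helpers. The test is skipped when in check or in pawn
    endings, where zugzwang makes it unsound.
    """
    if depth < NULL_MOVE_REDUCTION + 1 or in_check(pos) or has_only_pawns(pos):
        return False, 0
    make_null_move(pos)  # pass the turn to the opponent
    score = -search(pos, -beta, -beta + 1, depth - 1 - NULL_MOVE_REDUCTION)
    unmake_null_move(pos)
    return (score >= beta), score
```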

Also, Ed Schröder has a great page explaining a number of tricks he used in his Rebel engine, and how much improvement each contributed to speed/performance: Inside Rebel

涫野音 2024-07-20 06:36:15

Using a transposition table with a Zobrist hash

It takes very little code to implement [one XOR on each move or unmove, and an if statement before recursing in the game tree], and the benefits are pretty good, especially if you are already using iterative deepening, and it's pretty tweakable (use a bigger table, a smaller table, different replacement strategies, etc.).
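
Here is a small self-contained sketch of the Zobrist bookkeeping behind that, with an illustrative square/piece encoding (castling and en-passant keys, and the table's replacement policy, are left out):

```python
import random

random.seed(2024)  # fixed seed so the keys are reproducible

# One 64-bit random number per (square, piece) pair, plus one for side to move.
# Piece encoding is illustrative: 0..5 = white P N B R Q K, 6..11 = black.
ZOBRIST_PIECE = [[random.getrandbits(64) for _ in range(12)] for _ in range(64)]
ZOBRIST_SIDE = random.getrandbits(64)

def toggle_piece(key, square, piece):
    """XOR a piece on or off a square; the same call makes and unmakes."""
    return key ^ ZOBRIST_PIECE[square][piece]

def toggle_side(key):
    return key ^ ZOBRIST_SIDE

# Example: a white knight on g1 (square 6), then the quiet move g1 -> f3 (square 21).
key = 0
key = toggle_piece(key, 6, 1)    # build a key with the knight on g1
key = toggle_piece(key, 6, 1)    # make the move: XOR the knight off g1...
key = toggle_piece(key, 21, 1)   # ...onto f3...
key = toggle_side(key)           # ...and flip the side to move

# Transposition table entry: key -> (depth, score, bound_type, best_move)
transposition_table = {}
transposition_table[key] = (6, 25, "exact", "g1f3")
```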

清音悠歌 2024-07-20 06:36:15

Killer moves are a good example of small code size and a great improvement in move ordering.
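
For concreteness, a minimal sketch of the classic two-killers-per-ply scheme and how it plugs into move ordering; the names and the simple captures-first ordering are illustrative assumptions, not a specific engine's code.

```python
MAX_PLY = 64
killers = [[None, None] for _ in range(MAX_PLY)]  # two killer slots per ply

def store_killer(ply, move):
    """Record a quiet move that caused a beta cutoff at this ply."""
    if move != killers[ply][0]:
        killers[ply][1] = killers[ply][0]
        killers[ply][0] = move

def order_moves(moves, ply, is_capture):
    """Captures first, then killer moves, then everything else.

    `is_capture(move)` is an assumed helper; real engines also slot in the
    hash move, MVV-LVA scores and history scores here.
    """
    def score(move):
        if is_capture(move):
            return 0
        if move == killers[ply][0]:
            return 1
        if move == killers[ply][1]:
            return 2
        return 3
    return sorted(moves, key=score)
```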

北城半夏 2024-07-20 06:36:15

Most board-game AI algorithms are based on minimax (http://en.wikipedia.org/wiki/Minmax). The goal is to minimize your opponent's options while maximizing your own. With chess, though, this is a very large and expensive runtime problem. To help reduce that, you can combine minimax with a database of previously played games. Any game that has a similar board position, with an established pattern of how that layout was won for your color, can be used for "analyzing" where to move next.
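
For reference, the bare minimax recursion this answer starts from looks roughly like the sketch below (no pruning; `generate_moves`, `make_move`, `unmake_move`, and `evaluate` are placeholder helpers). Alpha-beta, discussed in the other answers, is the same recursion with a window that lets whole subtrees be skipped.

```python
def minimax(pos, depth, maximizing):
    """Plain minimax: the opponent minimizes the score, you maximize it.

    Assumed placeholders: generate_moves(pos), make_move(pos, m),
    unmake_move(pos, m), evaluate(pos).
    """
    if depth == 0:
        return evaluate(pos)
    best = float("-inf") if maximizing else float("inf")
    for move in generate_moves(pos):
        make_move(pos, move)
        score = minimax(pos, depth - 1, not maximizing)
        unmake_move(pos, move)
        best = max(best, score) if maximizing else min(best, score)
    return best
```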

I am a bit confused about what you mean by improvement/code_size. Do you really mean improvement / runtime analysis (big O(n) vs. o(n))? If that is the case, talk to IBM and Big Blue, or Microsoft's Parallels team. At PDC I spoke with a guy (whose name escapes me now) who was demonstrating Mahjong using 8 cores per opponent, and they won first place in the game algorithm design competition (whose name also escapes me).

I do not think there are any "canned" algorithms out there that always win chess and do it very fast. The way you would have to do it is to have EVERY possible previously played game indexed in a very large dictionary-based database, with the analysis of every game pre-cached. It would be a VERY complex algorithm and, in my opinion, a very poor improvement/complexity trade-off.

花想c 2024-07-20 06:36:15

I might be slightly off topic, but "state of the art" chess programs, such as Deep Blue, use MPI for massive parallel power.

Just consider that parallel processing plays a great role in modern chess.
