Rating players in games with random teams

Posted 2024-09-06 12:05:41


I am working on an algorithm to score individual players in a team-based game. The problem is that no fixed teams exist - every time 10 players want to play, they are divided into two (somewhat) even teams and play each other. For this reason, it makes no sense to score the teams, and instead we need to rely on individual player ratings.

There are a number of problems that I wish to take into account:

  • New players need some sort of provisional ranking to reach their "real" rating, before their rating counts the same as a seasoned player's.
  • The system needs to take into account that a team may consist of a mix of player skill levels - e.g. one really good, one good, two mediocre, and one really poor. A simple "average" of player ratings therefore probably won't suffice; the ratings probably need to be weighted in some way.
  • Ratings are adjusted after every game, so the algorithm needs to work on a per-game basis, not per "rating period". This might change if a good solution comes up (I am aware that Glicko uses a rating period).

Note that cheating is not an issue for this algorithm, since we have other measures of validating players.

I have looked at TrueSkill, Glicko and Elo (which is what we're currently using). I like the idea in TrueSkill/Glicko of a deviation that indicates how precise a rating is, but none of these algorithms takes the random-teams perspective into account; they seem to be designed mostly for 1v1 or FFA games.

It was suggested somewhere that you rate players as if each player from the winning team had beaten all the players on the losing team (25 "duels"), but I am unsure if that is the right approach, since it might wildly inflate the rating when a really poor player is on the winning team and gets a win vs. a very good player on the losing team.
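To make the concern concrete, here is a minimal sketch of that "duels" scheme under plain Elo, with an assumed K-factor of 32; the function names and numbers are illustrative, not from any of the systems mentioned:

```python
K = 32  # assumed step size; real systems tune this

def expected(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the standard Elo logistic curve."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def duel_updates(winners: list[float], losers: list[float]) -> list[float]:
    """New ratings for the winning team, treating the match as one duel
    against every member of the losing team (5 duels per winner)."""
    return [r_w + sum(K * (1.0 - expected(r_w, r_l)) for r_l in losers)
            for r_w in winners]

# A 900-rated player on a winning team collects five near-maximal "upset"
# wins against 1800-rated opponents and jumps ~160 points in one game:
print(duel_updates([900], [1800] * 5))  # -> [~1059]
```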

Any and all suggestions are welcome!

EDIT: I am looking for an algorithm for established players + some way to rank newbies, not the two combined. Sorry for the confusion.

There is no AI and players only play each other. Games are determined by win/loss (there is no draw).


Comments (8)

染火枫林 2024-09-13 12:05:42


For anyone who stumbles in here years after it was posted: TrueSkill now supports teams made up of multiple players and changing configurations.
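For reference, a minimal sketch with the third-party `trueskill` package on PyPI (`pip install trueskill`), which implements this model for multi-player teams; the setup below is illustrative, and the no-draw configuration matches the question:

```python
import trueskill

env = trueskill.TrueSkill(draw_probability=0.0)  # the question says no draws

# Five players per side; new players start at the default mu=25, sigma=25/3.
team_a = [env.create_rating() for _ in range(5)]
team_b = [env.create_rating() for _ in range(5)]

# ranks: lower is better, so [0, 1] means team_a won this match.
team_a, team_b = env.rate([team_a, team_b], ranks=[0, 1])

for r in team_a:
    # sigma (the deviation) shrinks as evidence accumulates
    print(f"mu={r.mu:.2f} sigma={r.sigma:.2f}")
```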

甜嗑 2024-09-13 12:05:42


Every time 10 players want to play, they are divided into two (somewhat) even teams and play each other.

This is interesting, as it implies both that the average skill level on each team is equal (and thus unimportant) and that each team has an equal chance of winning. If you assume this constraint to hold true, a simple count of wins vs losses for each individual player should be as good a measure as any.
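Under that assumption, rating reduces to bookkeeping; a minimal sketch (the structure here is illustrative, not from the comment):

```python
from collections import defaultdict

# Per-player win/loss record; players are identified by name here.
record = defaultdict(lambda: {"wins": 0, "losses": 0})

def report_match(winners: list[str], losers: list[str]) -> None:
    """Record one game's outcome for every participant."""
    for p in winners:
        record[p]["wins"] += 1
    for p in losers:
        record[p]["losses"] += 1

def win_rate(player: str) -> float:
    """Fraction of games won; unseen players default to 0.5."""
    r = record[player]
    games = r["wins"] + r["losses"]
    return r["wins"] / games if games else 0.5
```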

背叛残局 2024-09-13 12:05:41


Provisional ranking systems are always imperfect, but the better ones (such as Elo) are designed to adjust provisional ratings more quickly than for ratings of established players. This acknowledges that trying to establish an ability rating off of just a few games with other players will inherently be error-prone.

I think you should use the average rating of all players on the opposing team as the input for establishing the provisional rating of the novice player, but handle it as just one game, not as N games vs. N players. Each game is really just one data sample, and the Elo system handles accumulation of these games to improve the ranking estimate for an individual player over time before switching over to the normal ranking system.

For simplicity, I would also not distinguish between established and provisional ratings for members of the opposing team when calculating a new provisional rating for some member of the other team (unless Elo requires this). All of these ratings have implied error, so there is no point in adding unnecessary complications of probably little value in improving the ranking estimates.
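A minimal sketch of this suggestion as a plain Elo update, with an assumed larger K-factor for provisional players (all constants are illustrative):

```python
K_PROVISIONAL = 40  # provisional ratings move faster, as suggested above
K_ESTABLISHED = 20

def expected(r_a: float, r_b: float) -> float:
    """Standard Elo expected score of A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_vs_team_average(rating: float, opponents: list[float],
                           won: bool, provisional: bool) -> float:
    """Treat the whole match as ONE game against the opposing team's
    average rating, rather than as N duels against N players."""
    opp_avg = sum(opponents) / len(opponents)
    k = K_PROVISIONAL if provisional else K_ESTABLISHED
    return rating + k * ((1.0 if won else 0.0) - expected(rating, opp_avg))

# The same upset as in the question's "25 duels" example now moves the
# novice by ~+40 points once, instead of ~+160 across five separate duels:
print(update_vs_team_average(900, [1800] * 5, won=True, provisional=True))
```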

允世 2024-09-13 12:05:41


First off: It is very very unlikely that you will find a perfect system. Every system will have a flaw somewhere.

And to answer your question: Perhaps the ideas here will help: Lehman Rating on OkBridge.

This rating system is in use (since 1993!) on the internet bridge site called OKBridge. Bridge is a partnership game and is usually played with a team of 2 opposing another team of 2. The rating system was devised to rate the individual players and caters to the fact that many people play with different partners.

牛↙奶布丁 2024-09-13 12:05:41


Without any background in this area, it seems to me a ranking system is basically a statistical model. A good model will converge to a consistent ranking over time, and the goal would be to converge as quickly as possible. Several thoughts occur to me, some of which have been touched upon in other postings:

  1. Clearly, established players have a track record and new players don't. So the uncertainty is probably greater for new players, although for inconsistent players it could be very high. Also, this probably depends on whether the game primarily uses innate skills or acquired skills. I would think that you would want a "variance" parameter for each player. The variance could be made up of two parts: a true variance and a "temperature". The temperature is like in simulated annealing, where you have a temperature that cools over time. Presumably, the temperature would cool to zero after enough games have been played.
  2. Are there multiple aspects that come into play? Like in soccer, you may have good shooters, good passers, guys who have good ball control, etc. Basically, these would be the degrees of freedom in your system (in my soccer analogy, they may or may not be truly independent). It seems like an accurate model would take these into account; of course, you could have a black-box model that handles them implicitly. However, I would expect that understanding the number of degrees of freedom in your system would be helpful in choosing the black box.
  3. How do you divide teams? Your teaming algorithm implies a model of what makes equal teams. Maybe you could use this model to create a weighting for each player and/or an expected performance level. If there are different aspects of player skills, maybe you could give extra points for players whose performance in one aspect is significantly better than expected.
  4. Is the game truly win or lose, or could the score differential come into play? Since you said there are no ties, this probably doesn't apply, but at the very least a close score may imply higher uncertainty in the outcome.
  5. If you're creating a model from scratch, I would design with the intent to change. At a minimum, I would expect a number of parameters to be tunable, perhaps even auto-tuning. For example, as you have more players and more games, the initial temperature and initial rating values will be better known (assuming you are tracking the statistics). But I would certainly anticipate that the more games that have been played, the better the model you can build.

Just a bunch of random thoughts, but it sounds like a fun problem.
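As one possible reading of the "temperature" idea in point 1, here is a rough sketch layered on a plain Elo-style update; the decay schedule and all constants are invented for illustration and would need tuning against historical games:

```python
import math

K_BASE = 16       # long-run step size for established players
K_EXTRA = 32      # extra step size that "cools" away over time
HALF_LIFE = 10.0  # games until the extra step size halves

def k_factor(games_played: int) -> float:
    """Step size that cools from K_BASE + K_EXTRA down toward K_BASE."""
    temperature = math.exp(-math.log(2) * games_played / HALF_LIFE)
    return K_BASE + K_EXTRA * temperature

def update(rating: float, games_played: int, opponent_avg: float, won: bool) -> float:
    """Elo-style update whose step size depends on experience."""
    expected = 1.0 / (1.0 + 10 ** ((opponent_avg - rating) / 400.0))
    return rating + k_factor(games_played) * ((1.0 if won else 0.0) - expected)

print(k_factor(0), k_factor(10), k_factor(50))  # 48.0, 32.0, ~17.0
```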

少女的英雄梦 2024-09-13 12:05:41


There was an article in Game Developer Magazine a few years back by some guys from the TrueSkill team at Microsoft, explaining some of their reasoning behind the decisions there. It definitely mentioned team games for Xbox Live, so it should be at least somewhat relevant. I don't have a direct link to the article, but you can order the back issue here: http://www.gdmag.com/archive/oct06.htm

One specific point that I remember from the article was scoring the team as a whole, instead of e.g. giving more points to the player that got the most kills. That was to encourage people to help the team win instead of just trying to maximize their own score.

I believe there was also some discussion on tweaking the parameters to try to accelerate convergence to an accurate evaluation of the player skill, which sounds like what you're interested in.

Hope that helps...

最近可好 2024-09-13 12:05:41


How is the "scoring" settled?

If a team scores 25 points in total (the scores of all players on the team), you could divide a player's score by the total team score and multiply by 100 to get the percentage of how much that player did for the team (or of all points across both teams).

You could calculate a score from this data, and if that percentage is lower than, e.g., 90% of the team members' (or both teams' members'), treat the player as a novice and calculate the score with a different weighting factor.

Sometimes an easier concept works out better.
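One possible reading of this idea, sketched out; the comment's 90% figure is kept, but interpreting it as "below 90% of the average per-player share" is an assumption, as are the damping weight and all names:

```python
def contribution_pct(player_points: float, team_points: float) -> float:
    """Share of the team's total points produced by this player."""
    return 100.0 * player_points / team_points

def weighted_delta(base_delta: float, player_points: float,
                   team_points: float, team_size: int) -> float:
    """Damp a rating change for players who contributed well below
    the average share on their team (the 0.5 weight is arbitrary)."""
    avg_share = 100.0 / team_size  # average per-player share of team points
    if contribution_pct(player_points, team_points) < 0.9 * avg_share:
        return base_delta * 0.5  # novice-like: apply a different weighting factor
    return base_delta

# A player who scored 3 of their team's 25 points (12% < 0.9 * 20%) gets
# only half of the normal rating change:
print(weighted_delta(30.0, 3.0, 25.0, 5))  # -> 15.0
```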

薄荷港 2024-09-13 12:05:41


The first question has a very 'gamey' solution: you can create a newbie lobby for the first couple of games, where players can't see their score until they have finished a certain number of games, giving you enough data for an accurate rating. Another option is a variation on the first, but simpler: give them a single match vs. an AI that is used to determine their starting score (look at Quake Live for an example).
