Efficiently simulate rolling weighted dice (or traversing a weighted graph) with frequent updates



I have a dense, weighted, directed graph with around 20,000 nodes.

  1. Given a node in the graph, I choose an adjacent node randomly with a probability related to the relative weights.
  2. After each choice, I receive feedback about whether the choice was good or bad, and update the network. For example, after a bad choice I decrease the weight of all edges pointing to the chosen node.

I learned yesterday about the alias method (keithschwarz.com/darts-dice-coins/) for simulating rolling a weighted die, which is the same as making one choice (each node is one weighted die, and the sides correspond to other nodes). One roll is highly efficient, but updating the weights is not; the alias method may not be appropriate because I will be updating more dice than I will be rolling!

What data structure should I use, which allows for frequent updates, and what corresponding algorithm is best for making the choices?


Some ideas/notes:

  • I can decrease updates by recording each weight adjustment, and then only actually updating a node/die when necessary (i.e. directly before a roll). But I'd still be precomputing the alias data once for each roll.
  • Instead, I could simply store the graph as is (so that updates are cheap) and forgo the alias method. I would calculate relative weights on the fly before each roll (binary search works here).
  • An additional benefit of calculating relative weights on the fly is that I could factor out the "global weight" for each node to further reduce updates. Then, a bad choice would result in only 2 updates: the incoming edge weight and the node's global weight (see the sketch after this list).
  • added: Maybe there is something in between: a way to maintain local relative weights in a data structure (e.g. tree or alias method) and then during each roll merge them with "global weights" on the fly.
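A minimal sketch of the second and third ideas above, assuming a simple adjacency-map representation (the `Graph` class, method names, and the 0.9/1.1 adjustment factors are illustrative, not part of the question): store the raw edge weights per node, keep a separate per-target "global weight", and only combine them with cumulative sums and a binary search at roll time.

```python
import bisect
import random
from collections import defaultdict


class Graph:
    """Sketch: raw edge weights plus a factored-out per-node 'global weight'."""

    def __init__(self):
        self.edges = defaultdict(dict)                  # edges[u][v] = local weight of edge u -> v
        self.global_weight = defaultdict(lambda: 1.0)   # per-target multiplier

    def add_edge(self, u, v, weight=1.0):
        self.edges[u][v] = weight

    def roll(self, u):
        """Pick a neighbour of u with probability proportional to
        local weight * global weight, via cumulative sums + binary search."""
        neighbours = list(self.edges[u])
        cumulative, total = [], 0.0
        for v in neighbours:
            total += self.edges[u][v] * self.global_weight[v]
            cumulative.append(total)
        r = random.uniform(0.0, total)
        return neighbours[bisect.bisect_left(cumulative, r)]

    def feedback(self, u, v, good):
        """A bad choice touches only two numbers (the edge u -> v and v's
        global weight) instead of every edge pointing into v."""
        factor = 1.1 if good else 0.9                   # illustrative adjustment factors
        self.edges[u][v] *= factor
        if not good:
            self.global_weight[v] *= 0.9
```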

The truth is that in practice I don't need to make choices very often (no more than once a minute), so I don't need the most efficient solution. But this is a fun side project and I'm interested in finding a theoretically optimal solution.


Answer by 泪痕残 (2025-01-01 11:19:52):


I think you can do it with O(log k) complexity, where k is the number of faces on the die.

For one particular node, let p1, p2, ..., pk be the relative probabilities, and let p1 + p2 + ... + pk = p.

Construct a tree structure with these relative probabilities as leaves. Each non-leaf node holds the sum of its children's values. To "roll the die", draw a random value between 0 and p and follow it down through the tree. When you want to update the relative probability of a die face, just change the corresponding leaf value and propagate the change up through the tree.

This way, choosing a random value needs only one random number and O(log k) steps to find the leaf corresponding to it, and when you update one leaf it takes O(log k) time to update the tree.

This is a very brief description of the solution; let me know if you need a complete one. I am sure it works, but I'm not sure whether it is efficient enough for your needs.

To summarize, this algorithm needs:
1. Only one random number between 0 and p
2. O(log k) to "roll the die" (i.e. find the next node), where k is the number of faces on the die
3. O(log k) to "update the die" of a given node. If the original node has m edges, the total complexity is O(log k1) + O(log k2) + ... + O(log km), where k1, k2, ..., km are the numbers of faces on the adjacent nodes' dice.
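Here is a minimal sketch of such a sum tree, stored as a flat array in the usual segment-tree layout (the `SumTree` name and methods are illustrative, not the answerer's code); both roll and update are O(log k):

```python
import random


class SumTree:
    """Sketch of the sum tree: leaves hold weights, internal nodes hold sums."""

    def __init__(self, weights):
        self.k = len(weights)
        self.tree = [0.0] * (2 * self.k)
        self.tree[self.k:] = list(weights)          # leaves live at indices k .. 2k-1
        for i in range(self.k - 1, 0, -1):          # each internal node = sum of its children
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def total(self):
        return self.tree[1]                         # the root holds p = sum of all weights

    def roll(self):
        """Draw v in [0, p) and walk down: go left if v is below the left
        child's sum, otherwise subtract it and go right. O(log k)."""
        v = random.uniform(0.0, self.total())
        i = 1
        while i < self.k:                           # stop once i points at a leaf
            left = self.tree[2 * i]
            if v < left:
                i = 2 * i
            else:
                v -= left
                i = 2 * i + 1
        return i - self.k                           # 0-based face index

    def update(self, face, weight):
        """Change one leaf and re-sum the nodes on its path to the root. O(log k)."""
        i = face + self.k
        self.tree[i] = weight
        i //= 2
        while i >= 1:
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2
```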

====Tree Example====

If the die has 4 faces with relative probabilities 1:50, 2:80, 3:20, 4:70, construct the tree as follows:

          220
       /       \
    130         90
   /   \      /    \
 50    80    20    70
  |    |     |      |
  1    2     3      4

Generate a random number v between 0 and 220. If v = 100: at the root, compare v with the left child's sum (130); since 100 < 130, go left. At the node 130, compare v with its left child (50); since 100 >= 50, go right and update v = 100 - 50 = 50. We are now at the leaf 80, so declare the output: face 2.

If v = 210: at the root, 210 >= 130, so go right and update v = 210 - 130 = 80. At the node 90, 80 >= 20, so go right and update v = 80 - 20 = 60. We are now at the leaf 70, so return face 4.

If the weight of face 4 changes from 70 to 60, change the leaf 70 to 60, its parent 90 to 80, and the root 220 to 210.
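With the `SumTree` sketch above (again illustrative), the same example looks like this; the internal sums 130, 90, and 220 come out exactly as in the diagram:

```python
die = SumTree([50, 80, 20, 70])   # faces 1..4 stored at 0-based indices 0..3
print(die.total())                # 220
print(die.tree[2], die.tree[3])   # 130.0 90.0  (the two internal nodes)
face = die.roll()                 # 0..3, i.e. face 1..4
die.update(3, 60)                 # face 4: 70 -> 60
print(die.total())                # 210
```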

==== Lazy update variation ====

Whenever weights are changed, don't update the tree immediately. Instead, just mark them as "dirty weights" and wait until you need to make a prediction from this particular node.

When you need to make a prediction from a particular node and some of the weights are dirty, either (a) update the tree only for the dirty weights or (b) rebuild the whole tree. If the number of dirty weights is t and the total number of weights is k: when t * log(k) < k, update only the paths for the dirty weights (O(t log k)); otherwise rebuild the whole tree (O(k)).
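One possible way to bolt this lazy variant onto the `SumTree` sketch above (the `LazySumTree`, `set_weight`, and `flush` names are made up for illustration): record pending changes in a dictionary and, just before a roll, either replay them one by one or rebuild every internal sum, whichever is cheaper.

```python
import math


class LazySumTree(SumTree):
    """Sketch of the lazy-update variant on top of the SumTree sketch above."""

    def __init__(self, weights):
        super().__init__(weights)
        self.dirty = {}                      # face -> pending new weight

    def set_weight(self, face, weight):
        """Record the change but leave the tree untouched for now."""
        self.dirty[face] = weight

    def flush(self):
        """Apply pending changes: t updates of O(log k) each if t*log(k) < k,
        otherwise write all dirty leaves and rebuild every internal sum in O(k)."""
        t, k = len(self.dirty), self.k
        if t == 0:
            return
        if t * math.log2(k) < k:
            for face, weight in self.dirty.items():
                self.update(face, weight)    # O(log k) per dirty face
        else:
            for face, weight in self.dirty.items():
                self.tree[face + k] = weight
            for i in range(k - 1, 0, -1):    # full O(k) rebuild of internal sums
                self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
        self.dirty.clear()

    def roll(self):
        self.flush()                         # make the tree consistent before sampling
        return super().roll()
```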
