Shuffling a list (with duplicates) so that identical elements are not adjacent

Posted on 2024-07-10 16:55:06


I am wondering if there is a "best" way to shuffle a list of elements that contains duplicates such that the case where array[i] == array[i+1] is avoided as much as possible.

I am working on a weighted advertising display (I can adjust the number of displays per rotation for any given advertiser) and would like to avoid the same advertiser appearing twice in a row.


Comments (5)

不喜欢何必死缠烂打 2024-07-17 16:55:06


This is pretty similar to this question. If you replace A, B, and C in the example given over there with your advertisers, I think you arrive at the same problem. Maybe some of the solutions suggested for that one can help you.
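For the related A/B/C problem, one commonly suggested family of solutions is a greedy arrangement: always place the element with the most remaining copies, while holding back whichever element was just placed. A minimal sketch in Python (illustrative only; the function name and shape are not from the thread):

```python
import heapq
from collections import Counter

def interleave(items):
    """Greedily place the element with the most remaining copies,
    holding back the element placed on the previous step so it
    cannot repeat immediately."""
    counts = Counter(items)
    # heapq is a min-heap, so negate counts to pop the most frequent first
    heap = [(-n, x) for x, n in counts.items()]
    heapq.heapify(heap)
    result = []
    prev = None  # held-back entry, re-queued after the next placement
    while heap:
        n, x = heapq.heappop(heap)
        result.append(x)
        if prev is not None:
            heapq.heappush(heap, prev)
        n += 1  # one copy consumed (counts are stored negated)
        prev = (n, x) if n < 0 else None
    return result
```

For example, `interleave(list("AAABBC"))` yields an arrangement such as `A B A B A C` with no two equal neighbors, provided no element exceeds half the list.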

孤寂小茶 2024-07-17 16:55:06


Basic randomizing should cause enough dispersion in a large set.

If you want to reduce that even further (which might not even be necessary, depending on the set), the simplest way would be to find the nearby dupes after randomization and move them around (though you might create patterns). A better approach might be to create subsets containing the side-by-side dupes and redo the randomization.

For a smaller set nothing might be possible, depending on the number of dupes. So for a very small set the only option is good basic randomization (and we're back at the first sentence).

Dilbert's random number generator
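The "find the close-by dupes and move them around" repair described above could be sketched like this in Python (a hedged illustration, not the answerer's code; the random relocation can itself reintroduce a collision, hence the bounded rescan):

```python
import random

def shuffle_and_repair(items, max_passes=10):
    """Shuffle, then repeatedly scan for adjacent duplicates and swap
    the second copy with a random other position. Gives up after
    max_passes so small or infeasible inputs still terminate."""
    arr = list(items)
    random.shuffle(arr)
    for _ in range(max_passes):
        clean = True
        for i in range(len(arr) - 1):
            if arr[i] == arr[i + 1]:
                # relocate the later duplicate to a random spot
                j = random.randrange(len(arr))
                arr[i + 1], arr[j] = arr[j], arr[i + 1]
                clean = False
        if clean:
            break
    return arr
```

The swaps preserve the multiset of items; only their order changes, which matches the caveat above that moving dupes around may create new patterns.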

时光沙漏 2024-07-17 16:55:06


Personally I think the easiest way to handle this would be to randomize the array, and then iterate over it until you find 2 elements with the same value that are next to each other. When you find 2 of the same values beside each other, move the later one to another spot in the array by iterating over the array until you find a spot such that it isn't beside another of the same value. If you can't find a spot, just leave it where it is, and continue on with the next element of the array. This probably won't be the most optimal solution, but will be fine for smaller data sets, and is probably the simplest to program.
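This answer's procedure can be sketched in Python (an illustrative rendering of the steps described, not the answerer's own code):

```python
import random

def spread_duplicates(items):
    """Shuffle, then walk the list; whenever arr[i] == arr[i+1],
    relocate the second copy to a position where neither neighbor
    equals it. If no such position exists, leave it in place."""
    arr = list(items)
    random.shuffle(arr)
    i = 0
    while i < len(arr) - 1:
        if arr[i] == arr[i + 1]:
            moved = arr.pop(i + 1)
            for j in range(len(arr) + 1):
                left_ok = j == 0 or arr[j - 1] != moved
                right_ok = j == len(arr) or arr[j] != moved
                if left_ok and right_ok:
                    arr.insert(j, moved)
                    break
            else:
                arr.insert(i + 1, moved)  # no valid spot: leave it where it was
        i += 1
    return arr
```

Because an insertion between two unequal neighbors never creates a new equal-adjacent pair, the repair can only reduce collisions, never add them behind the scan position.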

故笙诉离歌 2024-07-17 16:55:06


What's the biggest number of duplicates you may have? 2, 3, any?

朮生 2024-07-17 16:55:06


For reference, my (very) naive approach was something like this (actually using LINQ/SQL calls but this is simplified):

var advertisers = getAdvertisers();
var returnList = new List<Advertiser>();  // element type was stripped by the page; Advertiser assumed
int totalWeight = sumOfAllAdvertisersWeight();
while (totalWeight > 0)
{
    for (int i = 0; i < advertisers.Count; i++)
    {
        if (advertisers[i].Weight > 0)
        {
            returnList.Add(advertisers[i]);
            advertisers[i].Weight--;
            totalWeight--;
        }
    }
}
return returnList;

This will avoid duplicates until the end, but yes, it would pay to check backwards through returnList afterwards and, if any duplicates are trailing, try to place them earlier in the mix.
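The round-robin above plus the suggested backward repair pass might look like this in Python (a hedged sketch; `weights` mapping advertiser name to displays per rotation is a hypothetical stand-in for the LINQ/SQL calls):

```python
def rotation(weights):
    """Round-robin by remaining weight, then a backward pass that
    reinserts any trailing repeats earlier in the list.
    weights: dict of advertiser name -> displays per rotation."""
    remaining = dict(weights)
    order = []
    while any(w > 0 for w in remaining.values()):
        for name, w in remaining.items():
            if w > 0:
                order.append(name)
                remaining[name] -= 1
    # repair pass: walk backwards, moving trailing duplicates forward
    i = len(order) - 2
    while i >= 0:
        if order[i] == order[i + 1]:
            dup = order.pop(i + 1)
            for j in range(len(order) + 1):
                if (j == 0 or order[j - 1] != dup) and (j == len(order) or order[j] != dup):
                    order.insert(j, dup)
                    break
            else:
                order.insert(i + 1, dup)  # no better spot exists
        i -= 1
    return order
```

For weights `{"A": 3, "B": 1, "C": 1}` the round-robin alone produces `A B C A A` with a duplicate tail, and the repair pass moves one trailing `A` forward to give `A B A C A`.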
