Speed/memory usage estimates for different data structures

Published 11-19 14:24


I'm trying to decide which data structure to use for the following.

Let's say I have maybe 10 million keys that contain pointers to unique objects containing some data.

The keys are UUIDs; think of them as 16-byte binary arrays. The UUIDs are generated using a good-quality random number generator.

I've been considering the following but would like to know the pros and cons in terms of speed and memory consumption for each. Some fair estimates for best/worst/average case on a 64-bit platform would be nice.

I need to be able to have virtually unlimited items inserted.

Binary Tree
Hash Table
Radix Tree (bit-based or 2-bit multi-way)

The operations I need on these are: insert, delete, search

I like the idea of a radix tree, but it's proving to be the hardest to implement, and I haven't found a suitable implementation that I could incorporate into a commercial product.

Comments (4)

第几種人 2024-11-26 14:24:55

  • You don't care about ordering
  • Your key is already random
  • 10 million items

The short answer

A hash table will probably be the best for your case.

Speed

A hash table (std::unordered_map) will be O( 1 ) if hashing is constant. In your case, O( 1 ) holds because you don't even need to hash—just using the lower 32 bits of the random UUID should be good enough. The cost of a lookup will be similar to one or two pointer indirections.
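The "you don't even need to hash" idea can be sketched as a custom hash functor that just reuses some of the UUID's own random bits. This is a minimal sketch under the answer's assumptions; the `Uuid` alias and `UuidHash` name are illustrative, not from the post:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <unordered_map>

// 16-byte binary UUID key, as described in the question.
using Uuid = std::array<std::uint8_t, 16>;

// The UUIDs are already uniformly random, so the "hash" can simply
// reuse some of the key's bits instead of mixing the whole key.
struct UuidHash {
    std::size_t operator()(const Uuid& u) const {
        std::uint64_t h;
        std::memcpy(&h, u.data(), sizeof h);  // take the low 8 bytes as-is
        return static_cast<std::size_t>(h);
    }
};

// Map from UUID to a pointer to the caller's object.
using UuidMap = std::unordered_map<Uuid, void*, UuidHash>;
```

A lookup then costs one bucket computation plus the one or two pointer indirections mentioned above.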

A binary tree (std::map) will be O( log2 n ), so for 10 million items you'll have 24 comparisons and 24 potential cache misses. Even for n = 4,000 it'll use 12 comparisons, so it very quickly becomes significantly worse than a hash table.

A radix tree will be O( k ), so you'll have a maximum of k comparisons and k potential cache misses. In the very unlikely best case, the radix tree will be as fast as a hash table. At worst (assuming k = a somewhat reasonable 16, for a 256-way tree) it'll perform better than a binary tree but far worse than a hash table.

So if speed is top priority, use a hash table.

Overhead

A typical hash table will have around 1–3 pointers of overhead per item if full. If not full, you'll probably be wasting 1 pointer of space per empty slot. You should be able to keep it nearly full while still being faster than a binary tree because you've got a very random key, but for maximum possible speed you'll of course want to give it plenty of headroom. For 10 million items on a 32-bit machine, expect 38–114MiB of overhead for a full table. For a half-full table, expect 76–153MiB.
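As a sanity check, the 38–114MiB range above follows directly from 1–3 pointers per item at 4 bytes each; a compile-time sketch of the arithmetic:

```cpp
#include <cstddef>

// Back-of-envelope check of the range above, under the stated
// assumption of a 32-bit machine (4-byte pointers).
constexpr std::size_t kItems   = 10'000'000;
constexpr std::size_t kPtr     = 4;  // bytes per pointer
constexpr std::size_t kLowMiB  = kItems * 1 * kPtr / (1024 * 1024);  // 1 pointer/item
constexpr std::size_t kHighMiB = kItems * 3 * kPtr / (1024 * 1024);  // 3 pointers/item
// kLowMiB == 38 and kHighMiB == 114, matching the quoted 38–114MiB.
```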

A red-black tree, the most common std::map implementation, will have 3 pointers + 1 bool per item. Some implementations exploit pointer alignment to merge the bool with one of the pointers. Depending on implementations and how full the hash table is, a red-black tree might have slightly lower overhead. Expect 114–153MiB.

A radix tree will have 1 pointer per item and 1 pointer per empty slot. Unfortunately I think such large random keys will cause you to have very many empty slots toward the edge of a tree, so it will probably use more memory than either of the above. Decreasing k can lower this overhead but will similarly lower performance.

If minimum overhead is important, use a hash table or binary tree. If it's a priority, use a full hash table.

Note that std::unordered_map does not let you control when it will resize, so getting one full will be difficult. Boost Intrusive has a very nice unordered_map implementation that will put you directly in control of that and many other things.
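For what the standard container does offer: you cannot forbid rehashing, but you can pick the load factor and pre-size the bucket array so that no rehash happens during the bulk insert. A sketch (the 10-million figure is from the question; Boost Intrusive, as noted, gives direct control instead):

```cpp
#include <cstdint>
#include <unordered_map>

std::unordered_map<std::uint64_t, void*> table;

void presize() {
    table.max_load_factor(1.0f);  // let the table run essentially "full"
    table.reserve(10'000'000);    // buckets for 10M items, up front
}
```

After `presize()`, inserting up to 10 million items triggers no rehash, since `reserve(n)` guarantees enough buckets for `n` elements at the chosen load factor.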

余生一个溪 2024-11-26 14:24:55


I would try std::map or std::unordered_map first.

They've had many smart people developing and improving them over many years.

Is there any reason why you can't use std::map or std::unordered_map?

醉城メ夜风 2024-11-26 14:24:55


I just did a quick calculation and I think you might be fine with a standard tree. 10 million keys is a reasonable number. With a balanced tree that will be a depth of only about 23 nodes to check. With a radix tree you'd actually have a key length of 128 bits to check.

Your key can also be represented and compared extremely cheaply. Use a tuple (boost or 0x) of two 64bit values to get the same 128bit key. The tuple ordering will be enough for use in the map. Key copying is thus cheap, as is comparison. Comparing integers as-is is likely cheaper than doing masking and bit-based comparisons for the radix depth search.

So in this case a map is likely to work just fine.
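A minimal sketch of this suggestion: a `std::pair` of two 64-bit halves already has the lexicographic `operator<` that `std::map` needs, so no custom comparator is required (the names here are illustrative):

```cpp
#include <cstdint>
#include <map>
#include <utility>

// The 128-bit UUID as two 64-bit halves. std::pair's built-in
// lexicographic operator< is exactly the ordering std::map needs,
// so key comparison is just one or two integer compares.
using Uuid128 = std::pair<std::uint64_t, std::uint64_t>;

// Map from UUID to a pointer to the caller's object.
std::map<Uuid128, void*> objects;
```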

*I'd avoid an unordered_map here since UUIDs tend to be structured data. This means that a standard hashing procedure (for a hash map) could easily be very poor in performance.*

Update:

Since you are using random UUIDs the hashing might be just fine -- though such large hash tables have a significant memory overhead to remain efficient.

Also, given totally random UUIDs, the radix tree will likely end up with the same balancing as the binary tree (since the key distribution is completely even). Thus you may not even save steps, and you'd still incur the overhead of the bit operations. But there are so many ways to specialize and optimize a radix tree that it's hard to say definitively whether it could be faster, or would always be slower.

忆梦 2024-11-26 14:24:55


IMO a radix tree is not hard to implement. However, a simple hash table would be sufficient. Just allocate an array of 2^16 lists of objects and use the first 2 bytes of the UUID to index the list where the object should be inserted. Then you only need to search a list of approximately 160 items.

Or, allocate an array of 20M pointers. To store an object, just hash the UUID into the range 0–20M, find the first free (NULL) pointer, and store the object there. Searching means walking from the hash value to the first NULL. Deleting is also simple.... try reading http://en.wikipedia.org/wiki/Hash_function
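The first suggestion (2^16 lists indexed by the leading two bytes) can be sketched as follows. All names are illustrative; with 10 million random keys, each list would hold roughly 150 entries:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <list>
#include <vector>

// 16-byte binary UUID key, as described in the question.
using Uuid = std::array<std::uint8_t, 16>;

struct Entry {
    Uuid  key;
    void* obj;
};

struct BucketTable {
    std::vector<std::list<Entry>> buckets;

    BucketTable() : buckets(1u << 16) {}  // 65,536 buckets

    // Index by the first 2 bytes; they're random, so the load spreads evenly.
    static std::size_t index(const Uuid& u) {
        return (std::size_t(u[0]) << 8) | u[1];
    }

    void insert(const Uuid& u, void* obj) {
        buckets[index(u)].push_back({u, obj});
    }

    void* search(const Uuid& u) const {
        for (const Entry& e : buckets[index(u)])
            if (e.key == u) return e.obj;
        return nullptr;
    }

    bool erase(const Uuid& u) {
        auto& b = buckets[index(u)];
        for (auto it = b.begin(); it != b.end(); ++it)
            if (it->key == u) { b.erase(it); return true; }
        return false;
    }
};
```

Each operation walks one short list, so the cost stays roughly constant as long as the key distribution stays uniform.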
