Iterators in unordered (hash) maps
As far as I understand, hash maps are preferable to standard maps because you can find elements in close to O(1) time. This is done by using a hash of the key as an array index. We then resolve any collisions and pull out the value.
This works great for lookup, but if our array-space into which we do the hash lookup is sparsely populated, how does the hashmap/unorderedmap efficiently iterate all the elements in our hashmap without exhaustively going through our array-space?
Edit: yet Boost, SGI, and C++11 hash maps / unordered maps all have iterators, so how do they work?
2 Answers
Unless there's a parallel structure (for example a linked list, as in a `LinkedHashMap`), it can't: iteration needs to check each bucket for content. So if your buckets are very sparsely populated, this can become a factor. That's one of the reasons why you don't want to choose a bucket count that is too high (the bigger one obviously being wasted memory).
The iteration is O(n), where n is the capacity (i.e. the number of buckets) of the map. But normally you shouldn't have a capacity of 100000 to store 6 keys: the capacity is kept proportional to the size, so O(capacity) is O(size), which means that iteration is normally also O(size).