Is it a bug that unordered_map throws bad_alloc in VS10 but not in VS9?

Posted 2024-09-08 15:42:07


While writing a post about Project Euler's 14th problem, I ran into a difference in behaviour between VC9 and VC10.

The following code runs fine in VC9, but in VC10 std::unordered_map throws a bad_alloc exception.
The strange thing is that if I recover from the exception, future allocations succeed (the size of the container continues to grow). Also, if I use boost::unordered_map it works fine with both compilers.

Regarding actual memory usage: I'm running on a machine with 4 GB of RAM (1.7 GB in use). The VC9 version gets up to ~810 MB of memory before completing the task, while the VC10 one crashes at ~658 MB.

Is this a bug in VC10? I'm running on the same machine, so what else could cause memory to consistently run out in one version and not in the other when the amount of work done is identical?

<edit>
Some more information: the first time the exception occurs is when calculating 7,718,688, with a stack depth of 1 (no recursion, just main->length). After that it seems to happen for each number that is added to the cache. The cache had 16,777,217 elements in it before the exception happened (according to cache.size()). The interesting thing is that even when insert fails, the cache size grows by one, so it appears that it doesn't provide the strong exception guarantee (in violation of §23.2.1.11).
</edit>

Code follows:

#include <iostream>
#include <unordered_map>

typedef std::unordered_map<_int64, int> cache_type;

_int64 collatz(_int64 i)
{
    return (i&1)? i*3+1 : i/2;
}

// Memoized Collatz orbit length: check the cache first, otherwise
// recurse on the next value in the orbit and cache the result.
int length(_int64 n, cache_type& cache)
{
    if (n == 1)
        return 1;

    cache_type::iterator found = cache.find(n);
    if (found != cache.end())
        return found->second;
    int len = length(collatz(n), cache) + 1;
    cache.insert(std::make_pair(n, len)); // this sometimes throws
    return len;
}

int main(int argc, char** argv)
{
    const int limit = 10000000;
    cache_type cache;
    std::pair<int, int> max = std::make_pair(0, 0);
    for (int i = 2; i <= limit; ++i) {
        int len = length(i, cache);
        if (len > max.second)
            max = std::make_pair(i, len);
    }

    std::cout << "Number with longest orbit is " << max.first
        << " with a length of " << max.second
        << " cache size is " << cache.size() << std::endl;
}

<edit>
Also, can anyone reproduce this behaviour? At one point it disappeared (and re-appeared), so there may be something special about my configuration.
</edit>
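
To make the recovery behaviour easier to observe, here is a minimal instrumented variant of length() (an editorial sketch, not part of the original program; length_logged is a hypothetical name, and it reuses the cache_type and collatz from the code above). It catches the bad_alloc, prints the cache size, and retries the insert once:

#include <iostream>
#include <new>   // std::bad_alloc

// Same memoized recursion as length(), but wraps the insert that throws.
int length_logged(_int64 n, cache_type& cache)
{
    if (n == 1)
        return 1;

    cache_type::iterator found = cache.find(n);
    if (found != cache.end())
        return found->second;
    int len = length_logged(collatz(n), cache) + 1;
    try {
        cache.insert(std::make_pair(n, len));
    } catch (const std::bad_alloc&) {
        std::cerr << "bad_alloc at n=" << n
            << ", cache.size()=" << cache.size() << '\n';
        cache.insert(std::make_pair(n, len)); // retry: reportedly succeeds
    }
    return len;
}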


Comments (5)

无边思念无边月 2024-09-15 15:42:07


It might be incidental, but changing the value of _SECURE_SCL causes the behaviour you describe.

i.e. compiling with:

cl /EHa /MD /D_SECURE_SCL=1 /Ox /c t1.cpp
link /LIBPATH:"c:/Program Files/Microsoft Visual Studio 10.0/VC/lib" /LIBPATH:"C:/Program Files/Microsoft SDKs/Windows/v7.0A/Lib" t1.obj

crashes, but the same commands with _SECURE_SCL=0 run to completion on my 32-bit XP machine. The MSDN page for _SECURE_SCL says it's enabled for debug builds but not release, which might be important if you're building under the IDE.
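
If you'd rather flip the same switch from source instead of the command line, the macro can be defined before any standard header. A sketch; note that _SECURE_SCL must have the same value in every translation unit you link, or you get ODR violations:

// Must come before any standard library include, and must be consistent
// across all translation units in the program.
#define _SECURE_SCL 0   // 0 = unchecked (release-style) iterators

#include <iostream>
#include <unordered_map>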

勿挽旧人 2024-09-15 15:42:07


Inserting a single element could result in a large memory allocation if the map's hash table needs to be resized. The map seems to be about 0.5GB at the end of the run. (See my comment above.)

There is presumably some heuristic being used to decide how much to expand the hash table when it needs to grow, and this could conceivably be to double it each time. This would therefore use ~1.5GB for old + new data while the hash table is being copied.

It's therefore possible your program is hitting a limit on process memory size. (See the comment again.) If so, it may be that VC10 takes slightly more memory overall than VC9, and that slightly different amounts of memory get allocated on different runs or builds of the program, so that VC10 sometimes hits the limit while VC9 never does.
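
The numbers in the question are suggestive here: the reported cache.size() of 16,777,217 at the first failure is 2^24 + 1, which is what you'd expect if the table doubles at power-of-two element counts. One way to test the rehashing theory is to pre-size the table so no rehash happens mid-run. A sketch, assuming VC10's unordered_map implements the C++0x rehash member; the bucket count of 17,000,000 is just a guess sized to the final cache:

cache_type cache;
// Pre-allocate enough buckets for the ~16.8M entries seen in the question,
// so the table never has to double (and briefly hold old + new) mid-run.
cache.rehash(17000000);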

魔法唧唧 2024-09-15 15:42:07


Does _int64 have alignment requirements that the map may not be honoring in allocation?

Try using long long int instead and see if the behavior changes.
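
For reference, that change is just the key type in the typedef (and the helper's signature); a sketch:

// Standard long long instead of the Microsoft-specific _int64:
typedef std::unordered_map<long long, int> cache_type;

long long collatz(long long i)
{
    return (i & 1) ? i * 3 + 1 : i / 2;
}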

柒夜笙歌凉 2024-09-15 15:42:07


1 - Check the event log to see if there are any events about the process going over its allowed quota.

2 - If you are on a 32-bit OS, try starting it with 3 GB of user space (the /3GB boot switch).

3 - Look to see if you have different allocators available

4 - Diff unordered_map and its inlined headers between 9.0 and 10.0 on the off-chance that an artificial size limiter was added ("security features" :-). It would most probably be in a macro with different values for x86 and x64 builds.

5 - Try to put a light wrapper around the allocator and just print the size of each allocation (see the sketch after this list). That will also tell you whether it's really the allocator that's throwing or something before it.

6 - If it's the allocator throwing, look at the actual WinNT API calls made from it (and diff with 9.0 again).

7 - Try pre-allocating a huge block (say 1 GB).
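
For point 5, a sketch of such a wrapper, written against the C++03-style allocator interface that VC10's containers expect (logging_allocator is a hypothetical name of mine; it just forwards to std::allocator and prints each request's size to stderr):

#include <cstdio>
#include <memory>
#include <unordered_map>

template <class T>
struct logging_allocator : std::allocator<T> {
    typedef std::allocator<T> base;
    typedef typename base::pointer pointer;
    typedef typename base::size_type size_type;

    // Containers rebind the allocator to their internal node types.
    template <class U>
    struct rebind { typedef logging_allocator<U> other; };

    logging_allocator() {}
    template <class U>
    logging_allocator(const logging_allocator<U>&) {}

    // Log the size of every allocation, then forward to std::allocator.
    // If bad_alloc escapes from here, it really is the allocator throwing.
    pointer allocate(size_type n, const void* hint = 0)
    {
        std::fprintf(stderr, "alloc %lu bytes\n",
                     static_cast<unsigned long>(n * sizeof(T)));
        return base::allocate(n, hint);
    }
};

typedef std::unordered_map<_int64, int,
    std::hash<_int64>,
    std::equal_to<_int64>,
    logging_allocator<std::pair<const _int64, int> > > cache_type;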

知足的幸福 2024-09-15 15:42:07


You're blowing the stack in the deeply recursive call to length().
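
If you want to rule that out, length() can be rewritten iteratively: walk the orbit forward until you hit 1 or a cached value, remembering the path, then fill the cache walking backwards. A sketch (an editorial rewrite, reusing the question's cache_type and collatz) that keeps the stack depth constant:

#include <vector>

int length_iterative(_int64 n, cache_type& cache)
{
    std::vector<_int64> path;
    int len = 1;                          // length(1) == 1

    // Walk forward until we reach 1 or a value that is already cached.
    while (n != 1) {
        cache_type::iterator found = cache.find(n);
        if (found != cache.end()) {
            len = found->second;          // start from the cached length
            break;
        }
        path.push_back(n);
        n = collatz(n);
    }

    // Cache the lengths of everything we visited, in reverse order.
    for (std::vector<_int64>::reverse_iterator it = path.rbegin();
         it != path.rend(); ++it) {
        ++len;
        cache.insert(std::make_pair(*it, len));
    }
    return len;                           // length of the original n
}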
