Core dump is truncated

Published on 2024-12-25 07:24:50

I am setting

ulimit -c unlimited

and in the C++ program we are doing:

struct rlimit corelimit;
if (getrlimit(RLIMIT_CORE, &corelimit) != 0) {
  return -1;
}
corelimit.rlim_cur = RLIM_INFINITY;
corelimit.rlim_max = RLIM_INFINITY;
if (setrlimit(RLIMIT_CORE, &corelimit) != 0) {
  return -1;
}

But whenever the program crashes, the core dump it generates is truncated:

BFD: Warning: /mnt/coredump/core.6685.1325912972 is truncated: expected core file size >= 1136525312, found: 638976.

What could be the issue?

We are using Ubuntu 10.04.3 LTS

Linux ip-<ip> 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux

This is my /etc/security/limits.conf

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#        - NOTE: group and wildcard limits are not applied to root.
#          To apply a limit to the root user, <domain> must be
#          the literal username root.
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#    ftp             -       chroot          /ftp
#@student        -       maxlogins       4



#for all users
* hard nofile 16384
* soft nofile 9000

More Details

I am using the gcc optimization flag

-O3

I am setting the thread stack size to 0.5 MB.
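Since several different limits and the kernel's core_pattern can each affect the dump, it may help to check what is actually in effect at crash time. A small diagnostic sketch (PID 6685 is the one from the BFD warning, used purely as an example):

```shell
# Where the kernel writes core files; a pipe or pattern here can
# redirect or shorten dumps:
cat /proc/sys/kernel/core_pattern

# Limits actually in effect for a running process -- note that both
# "Max core file size" and "Max file size" matter, since RLIMIT_FSIZE
# also caps the core file:
grep -E 'Max (core file|file) size' /proc/6685/limits
```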

Comments (6)

浅黛梨妆こ 2025-01-01 07:24:50

If you are using coredumpctl, a possible solution could be to edit /etc/systemd/coredump.conf and increase ProcessSizeMax and ExternalSizeMax:

[Coredump]
#Storage=external
#Compress=yes
ProcessSizeMax=20G
ExternalSizeMax=20G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=

情魔剑神 2025-01-01 07:24:50

I remember there is a hard limit, which can be set by the administrator, and a soft limit, which is set by the user. If the soft limit is set above the hard limit, the hard limit value is taken.
I'm not sure this is valid for every shell, though; I only know it from bash.
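In bash this can be checked directly; a soft limit can be raised only up to the current hard limit (a generic sketch, not specific to the asker's system):

```shell
# Hard limit: the ceiling, settable by the administrator; only root
# may raise it once lowered.
ulimit -H -c

# Soft limit: what the kernel actually enforces for this process.
ulimit -S -c

# Raise the soft core limit as far as the hard limit allows:
ulimit -S -c "$(ulimit -H -c)"
```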

无所谓啦 2025-01-01 07:24:50

I had the same problem with core files getting truncated.

Further investigation showed that ulimit -f (aka file size, RLIMIT_FSIZE) also affects core files, so check that limit is also unlimited / suitably high. [I saw this on Linux kernel 3.2.0 / debian wheezy.]
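A quick way to rule this out in the shell before the next crash (assuming a bash-like shell, where ulimit -f is reported in 1024-byte blocks):

```shell
# RLIMIT_FSIZE caps every file the process writes -- including the
# core file itself -- so a finite value silently truncates dumps:
ulimit -f

# Both limits need to be unlimited (or suitably high) for a full dump:
ulimit -c unlimited
ulimit -f unlimited
```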

笑忘罢 2025-01-01 07:24:50

Hard limits and soft limits have some specifics to them that can be a little hairy: see this about using sysctl to make the changes last.

There is a file you can edit that should make the limit sizes last, although there is probably a corresponding sysctl command that will do so...
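For core-related limits there are two persistent places; the values below are illustrative examples, not taken from the question:

```shell
# /etc/security/limits.conf -- per-user rlimits, applied at login (PAM):
#   *    soft    core    unlimited
#   *    hard    core    unlimited

# /etc/sysctl.conf -- kernel-wide settings, reloaded with `sysctl -p`:
#   kernel.core_pattern = /mnt/coredump/core.%p.%t
#   fs.suid_dumpable = 2
```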

一枫情书 2025-01-01 07:24:50

A similar issue happened when I killed the program manually with kill -3.
It happened simply because I did not wait long enough for the core file to finish generating.

Make sure the file has stopped growing in size, and only then open it.
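One way to do that is to poll the file size until two consecutive readings agree (a sketch; the path is the one from the question and must be adjusted):

```shell
# Wait until the kernel has finished writing the dump before opening it.
f=/mnt/coredump/core.6685.1325912972   # path from the question
prev=-1
size=$(stat -c %s "$f")
while [ "$size" != "$prev" ]; do
  prev=$size
  sleep 1
  size=$(stat -c %s "$f")
done
echo "core file complete: $size bytes"
```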

罪#恶を代价 2025-01-01 07:24:50

This solution works when the automated bug reporting tool (abrt) is used.

After I tried everything that was already suggested (nothing helped), I found one more setting that affects dump size in /etc/abrt/abrt.conf:

MaxCrashReportsSize = 5000

and increased its value.

Then I restarted the abrt daemon (sudo service abrtd restart), re-ran the crashing application, and got a full core dump file.
