Minimal core dump (stack trace + current frame only)

Posted 2024-12-26 04:23:01

Can I configure what goes into a core dump on Linux? I want to obtain something like the Windows mini-dumps (minimal information about the stack frame when the app crashed). I know you can set a max size for the core files using ulimit, but this does not allow me to control what goes inside the core (i.e. there is no guarantee that if I set the limit to 64kb it will dump the last 16 pages of the stack, for example).

Also, I would like to set it in a programmatic way (from code), if possible.
I have looked at the /proc/PID/coredump_filter file mentioned by man core, but it seems too coarse-grained for my purposes.

To provide a little context: I need tiny core files for multiple reasons. I need to collect them over the network from numerous (thousands of) clients; furthermore, these are embedded devices with small SD cards, and GPRS modems for the network connection. So anything above ~200k is out of the question.

EDIT: I am working on an embedded device which runs Linux 2.6.24. The processor is PowerPC. Unfortunately, powerpc-linux is not supported in Breakpad at the moment, so Google Breakpad is not an option.

Comments (1)

单身狗的梦 2025-01-02 04:23:01

I have "solved" this issue in two ways:

  1. I installed a signal handler for SIGSEGV, and used backtrace/backtrace_symbols to print out the stack trace. I compiled my code with -rdynamic, so even after stripping the debug info I still get a backtrace with meaningful names (while keeping the executable compact enough).
    I stripped the debug info with strip and put it in a separate file, which I will store somewhere safe; from there, I will use addr2line with the addresses saved from the backtrace to understand where the problem happened. This way I have to store only a few bytes.
  2. Alternatively, I found I could use the /proc/self/coredump_filter to dump no memory (setting its content to "0"): only thread and proc info, registers, stacktrace etc. are saved in the core. See more in this answer

I still lose information that could be precious (global and local variable contents, parameters...). I could easily figure out which page(s) to dump, but unfortunately there is no way to specify "dump these pages" for normal core dumps (unless you are willing to go and patch the maydump() function in the kernel).
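Setting the filter from code (approach 2) is just a write to the proc file. A minimal sketch, assuming a kernel with coredump_filter support (2.6.23+); the function name is my own:

```c
/* Sketch: set /proc/self/coredump_filter so that no memory mappings
 * are dumped (mask 0); ELF headers, registers and thread info are
 * still written to the core, which is enough for a stack trace. */
#include <stdio.h>

int set_coredump_filter(unsigned long mask)
{
    FILE *f = fopen("/proc/self/coredump_filter", "w");
    if (f == NULL)
        return -1;   /* kernel built without coredump_filter support */
    int ok = fprintf(f, "%lx", mask) > 0;
    fclose(f);
    return ok ? 0 : -1;
}
```

Called once at startup, e.g. set_coredump_filter(0). Per man core, the filter value is inherited across fork, which is convenient if a supervisor process forks the workers.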

For now, I'm quite happy with these two solutions (it is better than nothing...). My next moves will be:

  • see how difficult it would be to port Breakpad to powerpc-linux: there are already powerpc-darwin and i386-linux, so... how hard can it be? :)
  • try to use google-coredumper to dump only a few pages around the current ESP (that should give me locals and parameters) and around "&some_global" (that should give me globals).