Core dumped, but the core file is not in the current directory?

While running a C program, it says "(core dumped)", but I cannot see any core file under the current path.

I have set and verified the ulimit:

ulimit -c unlimited
ulimit -a

I also tried to find a file named "core", but got no core dump file. Any help? Where is my core file?
Read /usr/src/linux/Documentation/sysctl/kernel.txt.

Instead of writing the core dump to disk, your system is configured to send it to the abrt (meaning: Automated Bug Reporting Tool, not "abort") program. Automated Bug Reporting Tool is possibly not as documented as it should be...

In any case, the quick answer is that you should be able to find your core file in /var/cache/abrt, where abrt stores it after being invoked. Similarly, other systems using Apport may squirrel away cores in /var/crash, and so on.
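A quick way to tell whether cores are being piped to a handler like abrt or Apport is the first character of /proc/sys/kernel/core_pattern. A minimal sketch (the handler path shown is illustrative, not taken from any particular system):

```shell
# Classify a core_pattern value: a leading '|' means the kernel
# pipes the dump to a handler program instead of writing a file.
classify_core_pattern() {
    case "$1" in
        \|*) echo "piped to handler: ${1#|}" ;;
        *)   echo "written using file pattern: $1" ;;
    esac
}

# Illustrative handler path, as an abrt-style pattern might look:
classify_core_pattern '|/usr/libexec/abrt-hook-ccpp %s'
# The stock kernel default is simply "core":
classify_core_pattern core
```

On a real system you would feed it `$(cat /proc/sys/kernel/core_pattern)` instead of a literal.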
On recent Ubuntu (12.04 in my case), it's possible for "Segmentation fault (core dumped)" to be printed, but no core file produced where you might expect one (for instance, for a locally compiled program).

This can happen if you have a core file size ulimit of 0 (you haven't done ulimit -c unlimited) -- this is the default on Ubuntu. Normally that would suppress the "(core dumped)" message, cluing you in to your mistake, but on Ubuntu, core files are piped to Apport (Ubuntu's crash reporting system) via /proc/sys/kernel/core_pattern, and this seems to cause the misleading message.

If Apport discovers that the program in question is not one it should be reporting crashes for (which you can see happening in /var/log/apport.log), it falls back to simulating the default kernel behaviour of putting a core file in the cwd (this is done in the script /usr/share/apport/apport). This includes honouring the ulimit, in which case it does nothing. But (I assume) as far as the kernel is concerned, a core file was generated (and piped to Apport), hence the message "Segmentation fault (core dumped)".

Ultimately it was PEBKAC for forgetting to set the ulimit, but the misleading message had me thinking I was going mad for a while, wondering what was eating my core files.

(Also, in general, the core(5) manual page -- man 5 core -- is a good reference for where your core file ends up and reasons it might not be written.)
With the launch of systemd, there's another scenario as well. By default, systemd will store core dumps in its journal, accessible with the systemd-coredumpctl command, as defined in the core_pattern file:

The easiest way to check for stored core dumps is via coredumpctl list (older core dumps may have been removed automatically).

This behaviour can be disabled with a simple "hack":

As always, the size limit for core dumps has to be equal to or higher than the size of the core that is being dumped, as set for example by ulimit -c unlimited.
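The "hack" itself did not survive extraction; a common way to achieve this (my assumption, not the answer's original command) is to point core_pattern back at a plain file name so the kernel writes cores to disk instead of piping them to systemd-coredump:

```shell
# Assumed reconstruction: stop piping cores to systemd-coredump
# for the current boot (does not persist across reboots).
sudo sysctl -w kernel.core_pattern=core
```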
Writing up the steps to get a core dump under Ubuntu 16.04 LTS:

As @jtn has mentioned in his answer, Ubuntu delegates the display of crashes to apport, which in turn refuses to write the dump because the program is not from an installed package.

To remedy the problem, we need to make sure apport writes core dump files for non-package programs as well. To do so, create a file named ~/.config/apport/settings with the following contents:

[main]
unpackaged=true

[Optional] To make the dumps readable by gdb, run the following command:

apport-unpack <location_of_report> <target_directory>

References:
Core_dump – Oracle VM VirtualBox
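The settings file from the steps above can be created in one go; a minimal sketch:

```shell
# Create apport's per-user settings file so crashes of unpackaged
# (e.g. locally built) binaries are recorded too.
mkdir -p "$HOME/.config/apport"
printf '[main]\nunpackaged=true\n' > "$HOME/.config/apport/settings"

# Show the result:
cat "$HOME/.config/apport/settings"
```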
I found core files of my Ubuntu 20.04 system at:
I can think of the following two possibilities:

As others have already pointed out, the program might chdir(). Is the user running the program allowed to write into the directory it chdir()'ed to? If not, it cannot create the core dump.

For some weird reason the core dump isn't named core.*. You can check /proc/sys/kernel/core_pattern for that. Also, the find command you named wouldn't find a typical core dump. You should use find / -name "*core.*", as the typical name of the core dump is core.$PID.
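To see why the broader pattern matters, here is a small sketch that creates a stand-in core.$PID file in a scratch directory and matches it (the file name is made up for the demo):

```shell
# A core named core.$PID (e.g. core.4242) is missed by -name "core"
# but matched by the wildcard pattern the answer suggests.
tmp=$(mktemp -d)
touch "$tmp/core.4242"

find "$tmp" -name "core"        # no match
find "$tmp" -name "*core.*"     # prints .../core.4242

rm -rf "$tmp"
```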
On Ubuntu 18.04, the easiest way to get a core file is to enter the command below to stop the apport service.

Then rerun the application, and you will get the dump file in the current directory.
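The command itself was lost in extraction; on Ubuntu 18.04 the apport service is usually stopped with (my assumption of what the answer showed):

```shell
# Assumed command: stop apport so the kernel writes a plain core
# file instead of handing the dump to the crash reporter.
sudo service apport stop
```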
If you're missing core dumps for binaries on RHEL and when using abrt, make sure that /etc/abrt/abrt-action-save-package-data.conf contains

This enables the creation of crash reports (including core dumps) for binaries which are not part of installed packages (e.g. locally built).
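The setting itself was elided from the page; to the best of my knowledge the abrt option that enables this behaviour is the following (verify against your abrt version's documentation):

```ini
; /etc/abrt/abrt-action-save-package-data.conf (assumed setting)
ProcessUnpackaged = yes
```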
For fedora25, I could find the core file at

where "ccpp-2017-02-16-16:36:51-2974" follows the pattern "%s %c %p %u %g %t %P" as per /proc/sys/kernel/core_pattern.
I'm on Linux Mint 19 (Ubuntu 18 based). I wanted to have coredump files in the current folder. I had to do two things:

Change /proc/sys/kernel/core_pattern (either by # echo "core.%p.%s.%c.%d.%P" > /proc/sys/kernel/core_pattern or by # sysctl -w kernel.core_pattern=core.%p.%s.%c.%d.%P)

$ ulimit -c unlimited

That was already written in the answers, but I wrote this to summarize succinctly. Interestingly, changing the limit did not require root privileges (according to https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user non-root can only lower the limit, so that was unexpected - comments about it are welcome).
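On the point about root privileges: a non-root user can raise the soft limit, but only up to the hard limit; ulimit -c unlimited typically succeeds because the hard core limit defaults to unlimited. A sketch to check both in the shell that will run your program:

```shell
# Soft limits may be raised by a non-root user only up to the hard
# limit, so inspect both before assuming cores can be written.
ulimit -H -c   # hard limit (often "unlimited" by default)
ulimit -S -c   # current soft limit (often 0 by default)

ulimit -c unlimited
ulimit -S -c   # raised soft limit, if the hard limit allowed it
```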
My efforts in WSL have been unsuccessful.

For those running on Windows Subsystem for Linux (WSL), there seems to be an open issue at this time for missing core dump files.

The comments indicate that:
Github issue
Windows Developer Feedback
In my case, the reason was that the ulimit command only affects the current terminal.

If I set ulimit -c unlimited in the first terminal and then start a new terminal to run the program, it will not generate the core file when the core is dumped. You have to confirm the core size limit of the terminal which runs your program.

The following steps work on Ubuntu 20.04 and Ubuntu 21.04:
ulimit -c unlimited made the core file correctly appear in the current directory after a "core dumped".

If you use Fedora, in order to generate the core dump file in the same directory as the binary file:

And
A one-liner to get the latest core dump path:

You can of course modify the last -1 on that line to e.g. -4 to get the last 4 core dumps.

Note: that's not expected to work e.g. in case the path pattern uses variables before the last /, or when non-core-dump files are in that directory.
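The one-liner itself did not survive extraction; a sketch of the idea it describes (the function name and file names below are my own, not the answer's): sort the entries of the dump directory by modification time and keep the newest N, where the -1 the answer mentions is that count.

```shell
# Hypothetical reconstruction: print the N newest entries in the
# directory that core_pattern points at, newest first.
latest_cores() {
    dir=$1; n=$2
    ls -t "$dir" | head -n "$n"
}

tmp=$(mktemp -d)
touch "$tmp/core.100"
sleep 1                      # ensure distinct modification times
touch "$tmp/core.200"
latest_cores "$tmp" 1        # prints core.200, the newest
rm -rf "$tmp"
```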
The easiest way I found to start debugging a core dump is using coredumpctl (see man coredumpctl for more info).

To start a debug session using the latest segmentation fault, just type:

It is also useful for finding and extracting core dump files.
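The command after "just type" was elided; with systemd's tooling it is presumably the debug subcommand, which launches gdb on the most recent dump:

```shell
# Assumed command: open the newest stored core dump in gdb.
coredumpctl debug
```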