GDB script too slow, need help/suggestions
I am writing a gdb script to analyse a core file. Its purpose is as follows:
1] I am looking for packets which are scattered in a 64MB space. Each packet has a 4-byte magic number, so I have to read 4 bytes at a time.
2] I have to read a total of 64MB of data starting from a given address.
3] Once I find a packet, I should print the details of the packet and continue looking for other packets.
4] Hence, in my script, the main loop runs 64*1024*1024/4 = 16777216 times in the worst case (a sketch of such a loop is shown below).
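For illustration, a per-word scan of this kind might look roughly like the following in gdb's command language (the original script was not posted; the start address 0x10000000 and the magic value 0xdeadbeef are placeholders, not values from the actual core file):

    set $addr = 0x10000000
    set $end  = $addr + 64*1024*1024
    while $addr < $end
      # one 4-byte read per iteration: 16777216 iterations in the worst case
      if *(unsigned int *) $addr == 0xdeadbeef
        printf "packet found at 0x%lx\n", (unsigned long) $addr
        # ... print the packet details here ...
      end
      set $addr = $addr + 4
    end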
What's the problem:
The script takes about 3 hours or more, which is totally impractical.
I am assuming this is because gdb's command language is interpreted, and also because the number of loop iterations is pretty large.
Any suggestions/improvements are welcome. Kindly help me here.
2 Answers
If you think the problem is gdb being slow, you could dump the memory area you are interested in with "dump binary memory", then use a small program, written in whatever you think will be faster, to analyse the dump.
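For example, the whole 64MB region can be written to a file with a single command and then scanned offline with any fast tool; the start address 0x10000000 below is a placeholder for the real base address (64MB = 0x4000000 bytes):

    (gdb) dump binary memory region.bin 0x10000000 0x10000000+0x4000000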
The find command should do everything you want without having to loop every 4 bytes or so; it stores the address of the last found match in $_. (Untested, but it should be something to the effect of the sketch below.) If you don't have Python, you can capture the output with "set logging overwrite off", "set logging on", print, "set logging off".
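A minimal sketch of that approach, assuming the region starts at 0x10000000 and the magic number is 0xdeadbeef (both placeholders to be replaced with the real values):

    set $start = 0x10000000
    set $end   = $start + 64*1024*1024
    set $addr  = $start
    while $addr < $end
      # /w = search in 4-byte words, 1 = stop at the first match in the range
      find /w1 $addr, $end, 0xdeadbeef
      if $numfound == 0
        loop_break
      end
      # $_ now holds the address of the match; print the packet details here
      printf "packet found at 0x%lx\n", (unsigned long) $_
      set $addr = (unsigned long) $_ + 4
    end

Each find call scans memory inside gdb itself rather than evaluating one expression per word, so the loop only iterates once per packet found instead of once per 4 bytes.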