How to efficiently parse a large text file in Ruby

Posted 2024-10-15 07:17:14


I'm writing an import script that processes a file that has potentially hundreds of thousands of lines (log file). Using a very simple approach (below) took enough time and memory that I felt like it would take out my MBP at any moment, so I killed the process.

#...
File.open(file, 'r') do |f|
  f.each_line do |line|
    # do stuff here to line
  end
end

This file in particular has 642,868 lines:

$ wc -l nginx.log                                                                                                                                        /code/src/myimport
  642868 ../nginx.log

Does anyone know of a more efficient (memory/cpu) way to process each line in this file?

UPDATE

The code inside of the f.each_line from above is simply matching a regex against the line. If the match fails, I add the line to a @skipped array. If it passes, I format the matches into a hash (keyed by the "fields" of the match) and append it to a @results array.

# regex built in `def initialize` (not on each line iteration)
@regex = /(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - (.{0})- \[([^\]]+?)\] "(GET|POST|PUT|DELETE) ([^\s]+?) (HTTP\/1\.1)" (\d+) (\d+) "-" "(.*)"/

#... loop lines
match = line.match(@regex)
if match.nil?
  @skipped << line
else
  @results << convert_to_hash(match)
end

I'm completely open to this being an inefficient process. I could make the code inside of convert_to_hash use a precomputed lambda instead of figuring out the computation each time. I guess I just assumed it was the line iteration itself that was the problem, not the per-line code.
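
For illustration, a rough sketch of the precomputed-keys idea mentioned above; the field names below are placeholders I picked for the nine capture groups in @regex, not the actual convert_to_hash implementation:

# Sketch only -- field names are placeholders for the nine capture groups.
FIELDS = %i[ip ident time method path protocol status bytes agent].freeze

def convert_to_hash(match)
  # MatchData#captures returns the positional groups in order, so zipping
  # them against a precomputed key list avoids rebuilding keys on every line.
  FIELDS.zip(match.captures).to_h
end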


Comments (3)

中性美 2024-10-22 07:17:14


I just did a test on a 600,000 line file and it iterated over the file in less than half a second. I'm guessing the slowness is not in the file looping but the line parsing. Can you paste your parse code also?
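
A rough sketch of that sort of timing check, using Benchmark from the standard library (the filename and regex below are placeholders, not the actual test run):

require 'benchmark'

# Rough timing sketch: compare bare iteration with a per-line regex match
# to see whether the loop or the parsing dominates. Filename is assumed.
file = 'nginx.log'

bare = Benchmark.realtime do
  File.foreach(file) { |line| }                 # read line by line, do nothing
end

regex = /"(GET|POST|PUT|DELETE) (\S+)/
with_regex = Benchmark.realtime do
  File.foreach(file) { |line| line.match(regex) }
end

puts format('iteration only: %.2fs, with regex: %.2fs', bare, with_regex)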

∞觅青森が 2024-10-22 07:17:14


This blog post includes several approaches to parsing large log files. Maybe that's an inspiration. Also have a look at the file-tail gem.
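
For reference, a sketch of the usage pattern the file-tail gem documents; the option values here are assumptions and the details may differ between gem versions:

require 'file/tail'

# Sketch of file-tail usage -- interval/backward values are assumptions.
File.open('nginx.log') do |log|
  log.extend(File::Tail)
  log.interval = 10      # poll for new data every 10 seconds
  log.backward(10)       # start 10 lines from the end of the file
  log.tail do |line|
    # handle each newly appended line here
    puts line
  end
end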

你穿错了嫁妆 2024-10-22 07:17:14


If you are using bash (or similar) you might be able to optimize like this:

In input.rb:

 while x = gets
      # Parse
 end

then in bash:

 cat nginx.log | ruby -n input.rb

The -n flag tells ruby to assume a 'while gets(); ... end' loop around your script, which might cause it to do something special to optimize.
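
Concretely, with -n the explicit while loop is not needed: Ruby supplies the loop itself and places the current line in $_, so input.rb could shrink to something like this sketch (the parsing itself is left as a placeholder):

# input.rb -- run as: cat nginx.log | ruby -n input.rb
# With -n, Ruby wraps this file in `while gets; ...; end`,
# and the current line is available in $_.
line = $_.chomp
# ... match the regex against `line` here (placeholder) ...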

You might also want to look into a prewritten solution to the problem, as that will be faster.
