Limiting the number of threads when iterating in Ruby

When I have my code like this, I get "can't create thread, resource temporarily unavailable". There are over 24k files in the directory to process.

require "image_processing/mini_magick"
require "fileutils"

# map (rather than each) so the Thread objects are collected and can be joined below
frames.map do |image|
    Thread.new do
        pipeline = ImageProcessing::MiniMagick
            .source(File.open("original/#{image}"))
            .append("-fuzz", "30%")
            .append("-transparent", "#ff00fe")
        result = pipeline.call

        puts result.path
        file_parts = image.split("_")
        frame_number = file_parts[2]
        FileUtils.cp(result.path, "transparent/image_transparent_#{frame_number}")

        puts "Done with #{image}!"
        puts "#{Dir.children("transparent").count} / #{Dir.children("original").count}"
        puts "\n"
    end
end.each { |thread| thread.join }
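
That error presumably comes from hitting an OS-level limit on how many threads or processes one user may have. A quick way to inspect those ceilings (a sketch assuming MRI on Linux or macOS, not part of the original script):

require "etc"

# Per-user process/thread limit -- roughly what pthread_create's EAGAIN
# ("resource temporarily unavailable") reflects on Linux
soft, hard = Process.getrlimit(:NPROC)
puts "RLIMIT_NPROC: soft=#{soft}, hard=#{hard}"

# CPU core count -- a more useful cap for CPU-bound MiniMagick work
puts "CPU cores: #{Etc.nprocessors}"

Even well under those limits, one thread per file also means thousands of simultaneous ImageMagick processes and open files, so a much smaller cap is usually what you want anyway.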

So I tried just the first 1001 files, slicing indexes 0-1000, and did it this way:

frames[0..1000].each_with_index do |image, index|
    thread = Thread.new do
        pipeline = ImageProcessing::MiniMagick
            .source(File.open("original/#{image}"))
            .append("-fuzz", "30%")
            .append("-transparent", "#ff00fe")
        result = pipeline.call

        puts result.path
        file_parts = image.split("_")
        frame_number = file_parts[2]
        FileUtils.cp(result.path, "transparent/image_transparent_#{frame_number}")

        puts "Done with #{image}!"
        puts "#{Dir.children("transparent").count} / #{Dir.children("original").count}"
        puts "\n"
    end
    # joining inside the loop waits for each thread to finish before the next
    # one is created, so only one image is ever being processed at a time
    thread.join
end

And while this does process the files, the speed, when I watch it in the Terminal, seems about the same as if it were running on a single thread.

But I want the code to limit itself to however many threads the OS will allow before it refuses to create more, so that it can get through them all faster.

Or at least:

  1. Find the maximum number of threads allowed.
  2. Get the original directory's file count and divide it by that number of threads.
  3. Run the each loop in batches of that size (see the sketch after this list).
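
One way to get that cap without dividing into batches by hand is a small worker pool: a fixed number of threads that keep pulling filenames off a shared Queue until it is empty. A minimal sketch, assuming the same frames array and pipeline as above; WORKER_COUNT is a placeholder to tune (the core count is a reasonable starting point):

require "etc"
require "fileutils"
require "image_processing/mini_magick"

WORKER_COUNT = Etc.nprocessors   # assumption: one worker per CPU core

queue = Queue.new
frames.each { |image| queue << image }

workers = Array.new(WORKER_COUNT) do
    Thread.new do
        loop do
            image = begin
                queue.pop(true)      # non-blocking pop; raises ThreadError when drained
            rescue ThreadError
                break                # nothing left, this worker is done
            end

            pipeline = ImageProcessing::MiniMagick
                .source("original/#{image}")   # a path avoids holding thousands of open Files
                .append("-fuzz", "30%")
                .append("-transparent", "#ff00fe")
            result = pipeline.call

            frame_number = image.split("_")[2]
            FileUtils.cp(result.path, "transparent/image_transparent_#{frame_number}")
            puts "Done with #{image}!"
        end
    end
end

workers.each(&:join)

If the strict "count divided by number of threads" batching from the list is preferred instead, frames.each_slice(batch_size) with a map/join per slice has the same effect, at the cost of each batch waiting on its slowest image.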
