How do I save a record using a background worker?

Posted 2024-10-17 10:11:27


If I am saving a video file larger than 5 MB onto the server, should I create a background job for saving this file?

How should this be done? My video model has title, description, and attachment columns/fields. All fields are required.

In def create, instead of doing "if @video.save", should I do something like "if Resque.enqueue(Save, @video)"?

I am not exactly sure how this can be done, since passing an argument to Resque.enqueue() turns it into a hash. Second, "if Resque.enqueue(Save, @video)" expects a true or false. However, Resque.enqueue can't return anything. Or am I wrong?
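The serialization concern is real: Resque stores job arguments as JSON in Redis, so a model instance would not survive the round trip as an object. A minimal pure-Ruby sketch (no Resque required; the `Video` struct is a hypothetical stand-in for the model) illustrating why ids, not objects, are the conventional job argument:

```ruby
require 'json'

# Stand-in for an ActiveRecord model (hypothetical, for illustration only).
Video = Struct.new(:id, :title)

video = Video.new(42, "intro.mp4")

# Resque serializes job arguments to JSON before storing them in Redis,
# so an object argument does not survive as an object:
round_tripped = JSON.parse(JSON.generate([video.to_h])).first
puts round_tripped.class  # => Hash, not Video

# A plain integer id survives the round trip unchanged, which is why the
# usual pattern is to enqueue the id and re-find the record in the worker:
puts JSON.parse(JSON.generate([video.id])).first  # => 42
```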

Bottom line is: what is the appropriate way to save a record using a background worker with Resque + Redis?

Ideally, I was thinking it should look something like this:

def create
  @video = Video.new(params[:video])

  respond_to do |format|
    if Resque.enqueue(Save)
      ...
    end
  end
end

module Save
  @queue = :save

  def self.perform
    video = Video.new(params[:video])
    video.save
    return true
  end
end

What are your thoughts?


Comments (2)

私野 2024-10-24 10:11:27


Looking at the situation again, I don't think uploads are meant to be delayed jobs. Think about it: what happens if an upload is queued at the 10th position? Where does the worker expect to get the file from?

As quoted by another dev: "There are HTTP request handlers and just background processes. You need to handle the original upload in an HTTP request handler and THEN you can fire up an external background process to upload it to S3 from the local disk."

Which means it is normal to have a few HTTP request handlers running initially to handle these types of requests.

Hope this clears up a few things for users who end up coming across the same concern.
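The two-phase flow described in this answer can be sketched in plain Ruby. An array stands in for the Redis-backed Resque queue and a directory copy stands in for the S3 transfer; all names here are illustrative, not a real Resque API:

```ruby
require 'fileutils'
require 'tmpdir'

QUEUE = []  # stand-in for the Redis-backed Resque queue

# Phase 1 (inside the HTTP request handler): persist the upload to local
# disk now -- the request's tempfile is gone by the time a worker runs.
def handle_upload(tmp_upload_path, local_dir)
  local_path = File.join(local_dir, File.basename(tmp_upload_path))
  FileUtils.cp(tmp_upload_path, local_path)
  QUEUE << local_path  # defer phase 2 by enqueueing a path, not a file
  local_path
end

# Phase 2 (background worker): move the file from local disk to remote
# storage (a directory copy stands in for the S3 upload here).
def worker_perform(local_path, remote_dir)
  FileUtils.cp(local_path, File.join(remote_dir, File.basename(local_path)))
end

Dir.mktmpdir do |root|
  upload = File.join(root, "clip.mp4")
  File.write(upload, "fake video bytes")
  local  = File.join(root, "local");  Dir.mkdir(local)
  remote = File.join(root, "remote"); Dir.mkdir(remote)

  handle_upload(upload, local)
  worker_perform(QUEUE.shift, remote)
  puts File.exist?(File.join(remote, "clip.mp4"))  # => true
end
```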

时光匆匆的小流年 2024-10-24 10:11:27


I would have users upload an UnencodedVideo. Then, after it is created, it starts a job (using Resque or DelayedJob) to encode the video, which will create a Video.

class UnencodedVideo < ActiveRecord::Base
  after_create :enqueue_encoding

  def enqueue_encoding
    # enqueue the id, not the object -- Resque serializes job arguments
    Resque.enqueue(Encoder, id)
  end
end

class Encoder
  @queue = :encode  # Resque workers pull from a named queue

  def self.perform(unencoded_video_id)
    unencoded_video = UnencodedVideo.find(unencoded_video_id)
    ...
    video.save
  end
end

class Video
end
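The id-based handoff in the snippet above can be simulated without Rails or Resque. A hash stands in for the UnencodedVideo table and an array for the queue; all names here are illustrative:

```ruby
DB    = {}  # id => record hash, standing in for the UnencodedVideo table
QUEUE = []  # standing in for the Redis-backed :encode queue

# Mirrors the after_create callback: enqueue by id, never by object,
# because job arguments are serialized before they reach the worker.
def after_create_hook(id)
  QUEUE << id
end

# Mirrors Encoder.perform: re-load the record by id inside the worker,
# then do the heavy work (here, just tagging the record as encoded).
def encoder_perform(id)
  DB.fetch(id).merge(encoded: true)
end

DB[7] = { title: "clip.mp4" }
after_create_hook(7)
video = encoder_perform(QUEUE.shift)
p video  # prints the encoded record
```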