Why does downloading a file with OpenURI result in an incomplete file?
I'm trying to use OpenURI to download a file from S3, and then save it locally so I can send the file as an attachment with ActionMailer.
Something strange is going on: the images being downloaded and attached are corrupt, and the bottom parts of the images are missing.
Here's the code:
require 'open-uri'
open("#{Rails.root.to_s}/tmp/#{a.attachment_file_name}", "wb") do |file|
source_url = a.authenticated_url()
io = open(URI.parse(source_url).to_s)
file << io.read
attachments[a.attachment_file_name] = File.read("#{Rails.root.to_s}/tmp/#{a.attachment_file_name}")
end
Here, a is the attachment from ActionMailer.
What can I try next?
Comments (1)
It looks like you're trying to read the file before it's been closed, which could leave part of the file buffer unwritten.
I'd do it like this:
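(A sketch, not tested: the point is that the block form of File.open flushes and closes the file before it is read back. It assumes a.authenticated_url returns the S3 URL as a string, as in the original code.)

require 'open-uri'

tmp_path = "#{Rails.root}/tmp/#{a.attachment_file_name}"

# Write the download completely; when the block exits, the file is
# flushed and closed. (On Ruby 3+, use URI.open instead of open.)
File.open(tmp_path, "wb") do |file|
  file.write(open(a.authenticated_url).read)
end

# Read the file back only after it has been closed.
attachments[a.attachment_file_name] = File.read(tmp_path)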
It looks like source_url = a.authenticated_url() will be a string, so parsing the string into a URI and then calling to_s on it is redundant, unless URI is doing some normalizing, which I don't think it does.

Based on my sysadmin experience: a side task is cleaning up the downloaded/spooled files. They could be deleted immediately after being attached, or you could have a cron job that runs daily, deleting all spooled files over one day old, as sketched below.
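(A hypothetical sketch of that daily cleanup; it assumes attachments are spooled into a dedicated tmp/attachments directory so nothing else under tmp/ is touched, and that ActiveSupport's 1.day.ago is available, as it is in a Rails rake task.)

# Delete spooled attachment files more than one day old; run this from
# a daily cron job or rake task.
Dir.glob("#{Rails.root}/tmp/attachments/*") do |path|
  File.delete(path) if File.file?(path) && File.mtime(path) < 1.day.ago
end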
An additional concern is that there is no error handling in case the URL can't be read, which would cause the attachment to fail. With a temp spool file you could at least check that the file exists. Better still, be prepared to handle an exception if the server returns a 400 or 500 error.
To avoid using a temporary spool file, try this untested code:
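(Untested sketch, as noted; it assumes a.authenticated_url returns the URL as a string. OpenURI raises OpenURI::HTTPError for 400- and 500-series responses, which also addresses the error-handling concern above.)

require 'open-uri'

begin
  # Read the remote file straight into memory and hand it to ActionMailer,
  # skipping the temporary spool file entirely.
  attachments[a.attachment_file_name] = open(a.authenticated_url).read
rescue OpenURI::HTTPError => e
  # The server returned a 4xx/5xx status; e.io.status holds the code.
  Rails.logger.error("Failed to fetch attachment: #{e.message}")
end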