I am trying to move images for my site from my host to Amazon S3 cloud hosting. These images are of client work sites and cannot be publicly available. I would like them to be displayed on my site preferably by using the PHP SDK available from Amazon.
So far I have been able to script for the conversion so that I look up records in my database, grab the file path, name it appropriately, and send it to Amazon.
//upload to s3
$s3->create_object($bucket, $folder.$file_name_new, array(
'fileUpload' => $file_temp,
'acl' => AmazonS3::ACL_PRIVATE, //access denied, grantee only own
//'acl' => AmazonS3::ACL_PUBLIC, //image displayed
//'acl' => AmazonS3::ACL_OPEN, //image displayed, grantee everyone has open permission
//'acl' => AmazonS3::ACL_AUTH_READ, //image not displayed, grantee auth users has open permissions
//'acl' => AmazonS3::ACL_OWNER_READ, //image not displayed, grantee only ryan
//'acl' => AmazonS3::ACL_OWNER_FULL_CONTROL, //image not displayed, grantee only ryan
'storage' => AmazonS3::STORAGE_REDUCED
)
);
Before I copy everything over, I have created a simple form to do test upload and display of the image. If I upload an image using ACL_PRIVATE, I can either grab the public url and I will not have access, or I can grab the public url with a temporary key and can display the image.
<?php
//display the image link
$temp_link = $s3->get_object_url($bucket, $folder.$file_name_new, '1 minute');
?>
<a href='<?php echo $temp_link; ?>'><?php echo $temp_link; ?></a><br />
<img src='<?php echo $temp_link; ?>' alt='finding image' /><br />
Using this method, how will my caching work? I'm guessing every time I refresh the page, or modify one of my records, I will be pulling that image again, increasing my get requests.
I have also considered using bucket policies to only allow image retrieval from certain referrers. Do I understand correctly that Amazon is then supposed to serve requests only when they come from the pages or domains I specify?
I referenced:
https://forums.aws.amazon.com/thread.jspa?messageID=188183 to set that up, but then I am confused as to which security I need on my objects. It seemed like if I made them private they still would not display unless I used the temp link as mentioned previously. If I made them public, I could navigate to them directly, regardless of referrer.
Am I way off with what I'm trying to do here? Is this not really supported by S3, or am I missing something simple? I have gone through the SDK documentation and done lots of searching, and I feel like this should be a little more clearly documented, so hopefully any input here can help others in this situation. I've read about others who name the file with a unique ID, creating security through obscurity, but that won't cut it in my situation, and it's probably not best practice for anyone trying to be secure.
The best way to serve your images is to generate a url using the PHP SDK. That way the downloads go directly from S3 to your users.
You don't need to download via your servers as @mfonda suggested (you can set any caching headers you like on S3 objects), and if you did, you would be losing some of the major benefits of using S3.
However, as you pointed out in your question, the url will always be changing (actually the querystring), so browsers won't cache the file. The easy workaround is simply to always use the same expiry date so that the same querystring is always generated. Or, better still, 'cache' the url yourself (e.g. in the database) and reuse it every time.
You'll obviously have to set the expiry time somewhere far into the future, but you can regenerate these urls every so often if you prefer. For example, in your database you would store the generated url and the expiry date (you could parse that from the url too). Then you either reuse the existing url or, if the expiry date has passed, generate a new one, and so on.
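As a rough sketch of the 'cache the url yourself' idea, assuming a hypothetical images table with s3_key, url and url_expires columns, PDO for the database, and the same 1.x SDK as in the question (assuming get_object_url() also accepts a Unix timestamp expiry in addition to the strtotime()-style string used above):
<?php
// Return a pre-signed url for an image, reusing the stored one until it nears expiry
// so browsers keep seeing the same querystring and can cache the image.
function presigned_image_url(PDO $db, AmazonS3 $s3, $bucket, $key)
{
    $stmt = $db->prepare('SELECT url, url_expires FROM images WHERE s3_key = ?');
    $stmt->execute(array($key));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    // Reuse the cached url while it still has at least an hour left.
    if ($row && $row['url'] && $row['url_expires'] - time() > 3600) {
        return $row['url'];
    }

    // Otherwise generate a fresh url that lasts a week and remember it.
    $expires = strtotime('+7 days');
    $url = $s3->get_object_url($bucket, $key, $expires);

    $stmt = $db->prepare('UPDATE images SET url = ?, url_expires = ? WHERE s3_key = ?');
    $stmt->execute(array($url, $expires, $key));

    return $url;
}
In your page you would then echo whatever that function returns into the img tag, exactly as with the temporary link in your test form.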
You can use a bucket policy on your Amazon bucket to allow your application's domain to access the files. In fact, you can even add your local dev domain (e.g. mylocaldomain.local) to the access list and you will be able to get your images. Amazon provides sample bucket policies here: http://docs.aws.amazon.com/AmazonS3/latest/dev/AccessPolicyLanguage_UseCases_s3_a.html. This was very helpful in getting my images served.
The policy below solved the problem that brought me to this SO topic:
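A sketch of that kind of referer-based policy, built here in PHP just to show the shape; the bucket name and referring domains are placeholders for your own values, and the generated JSON can be pasted into the bucket's policy editor in the S3 console:
<?php
// Referer-restricted read policy, following the pattern from the AWS docs linked above.
// 'my-client-images' and the domains below are placeholders.
$policy = array(
    'Version'   => '2012-10-17',
    'Statement' => array(
        array(
            'Sid'       => 'AllowGetFromMySiteOnly',
            'Effect'    => 'Allow',
            'Principal' => '*',
            'Action'    => 's3:GetObject',
            'Resource'  => 'arn:aws:s3:::my-client-images/*',
            'Condition' => array(
                'StringLike' => array(
                    'aws:Referer' => array(
                        'http://www.example.com/*',
                        'http://mylocaldomain.local/*',
                    ),
                ),
            ),
        ),
    ),
);

echo json_encode($policy);  // paste the output into the S3 bucket policy editor
Keep in mind that the Referer header is supplied by the browser and can be spoofed or stripped, so a policy like this only deters casual hotlinking; it is not a substitute for the private ACL plus pre-signed urls (or a proxy) discussed in the other answers.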
When you talk about security and protecting data from unauthorized users, one thing is clear: every time the resource is accessed, you have to check that the requester is entitled to it.
That means that generating a url anyone can access is not enough (it might be difficult to obtain, but still...). The only real solution is an image proxy, and you can do that with a PHP script.
There is a fine article on Amazon's blog that suggests using readfile: http://blogs.aws.amazon.com/php/post/Tx2C4WJBMSMW68A/Streaming-Amazon-S3-Objects-From-a-Web-Server
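As a rough sketch of that approach: the linked article relies on the S3 stream wrapper from the newer AWS SDK for PHP (v2+), so the example below assumes that SDK rather than the 1.x one used in the question; the bucket name, credentials and the two lookup/authorization helpers are placeholders.
<?php
// image-proxy.php: check authorization, then stream a private S3 object to the browser.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'key'    => 'YOUR_AWS_KEY',     // placeholder credentials
    'secret' => 'YOUR_AWS_SECRET',
));
$client->registerStreamWrapper();   // makes s3:// paths readable by normal PHP functions

// Your own access check goes here (placeholder function).
if (!current_user_may_view($_GET['image_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// Placeholder lookup: map the image id to its S3 key in your database.
$key = lookup_s3_key($_GET['image_id']);

header('Content-Type: image/jpeg'); // assumes JPEG images
readfile('s3://my-private-bucket/' . $key);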
You can download the contents from S3 (in a PHP script), then serve them using the correct headers.
As a rough example, say you had the following in image.php:
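A minimal sketch, assuming the same 1.x SDK as in the question (get_object() returns a CFResponse whose body property holds the raw object bytes) and hard-coding JPEG as the content type:
<?php
// image.php - fetch the private object from S3 and serve it with image headers.
require_once 'sdk.class.php';   // 1.x SDK bootstrap; adjust the path to your install

$s3 = new AmazonS3();

// $bucket, $folder and $file_name_new looked up the same way as in the upload script.
$response = $s3->get_object($bucket, $folder.$file_name_new);

if (!$response->isOK()) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

header('Content-Type: image/jpeg');                   // assumes JPEG images
header('Content-Length: ' . strlen($response->body));
echo $response->body;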
Then in your HTML code, you can do something like this, so the browser requests your script instead of a direct S3 url:
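<img src='image.php' alt='client site image' />
Because the request now goes through your own script, you can run your normal login/authorization checks before echoing the object; the trade-off, as pointed out in another answer, is that every image download passes through your server instead of coming straight from S3.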