Storage in the cloud when creating multiple instances
In a cloud hosting environment (Amazon, Rackspace) you can create multiple instances. Let's say I have a database server (MySQL) and other persistent data.
If I create more instances, what happens to the data? For example:
1 instance -> user table (in a DB)
I make another 3 instances
4 instances -> each one has its own user table
Problems: if someone adds data to the table on instance 3, how does instance 4 see it? If I merge the instances back into one, which instance's data does it keep?
Thank you
I would suggest having one (or more) dedicated database servers that all the instances connect to. If you are using Amazon Web Services, check out their RDS service (http://aws.amazon.com/rds/).
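To illustrate, here is a minimal sketch of what this looks like from the application side: every instance connects to the same database endpoint, so there is only one authoritative copy of the user table. The hostname, credentials, and table below are hypothetical placeholders; with RDS you would use the endpoint AWS gives you.

    # Minimal sketch: every app instance points at the same DB host,
    # so all instances see the same data.
    # Hostname, credentials, and table are hypothetical placeholders.
    import pymysql

    conn = pymysql.connect(
        host="mydb.example.rds.amazonaws.com",  # shared endpoint, not localhost
        user="app",
        password="secret",
        database="myapp",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT id, name FROM users WHERE id = %s", (42,))
        print(cur.fetchone())
    conn.close()

A row added by instance 3 is immediately visible to instance 4, because neither instance stores the data locally.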
That way you don't need to worry about replication. If you do want each server running its own DB instance, you'll have to look into replication; for MySQL this is a good guide: http://dev.mysql.com/doc/refman/5.0/en/replication.html
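As a rough sketch of why replication is more work, just pointing a fresh replica at the primary involves something like the following (all hostnames, credentials, and binlog coordinates are hypothetical; see the linked guide for the full procedure):

    # Rough sketch of pointing a MySQL replica at a primary,
    # using the MySQL 5.0-era statements from the linked guide.
    # Hostnames, credentials, and binlog coordinates are hypothetical.
    import pymysql

    replica = pymysql.connect(host="replica.example.com", user="root", password="secret")
    with replica.cursor() as cur:
        cur.execute(
            "CHANGE MASTER TO "
            "MASTER_HOST='primary.example.com', "
            "MASTER_USER='repl', "
            "MASTER_PASSWORD='secret', "
            "MASTER_LOG_FILE='mysql-bin.000001', "  # from SHOW MASTER STATUS on the primary
            "MASTER_LOG_POS=4"
        )
        cur.execute("START SLAVE")
    replica.close()

And even then you still have to decide which server takes writes, handle replicas falling behind, and monitor it all, which is exactly the complexity a single shared database avoids.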
I would strongly recommend the former solution for the database. Replication is tricky to get right and can be a nightmare to maintain.
If you have static data, e.g. images, I would recommend uploading it to Amazon's S3 service (http://aws.amazon.com/s3/). That way all your servers get their data from a single point instead of having to replicate it across servers, which always ends up being a less scalable solution.
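For example, uploading an image once to S3 and serving it from there might look like this with the boto3 library (the bucket name and file paths are hypothetical):

    # Sketch: upload a static asset once to S3 so every instance
    # serves it from the same place. Bucket and paths are hypothetical.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file("logo.png", "my-assets-bucket", "images/logo.png")
    # Instances (or clients) then fetch it from S3 instead of local disk:
    # https://my-assets-bucket.s3.amazonaws.com/images/logo.png

Because the asset lives in one bucket rather than on each instance's disk, adding or removing instances never requires copying files around.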