How do I back up SimpleDB?

Posted 2024-11-11 05:30:50

I'm developing a Facebook application that uses SimpleDB to store its data, but I've realized Amazon does not provide a way to back up that data (at least not that I know of).

And SimpleDB is slow. You can get about 4 lists per second, each list holding 100 records. That's not a good way to back up tons of records.

I found some services on the web that offer to do the backup for you, but I'm not comfortable giving them my AWS credentials.

So I thought about using threads. The problem is that if you do a select for all the keys in the domain, you need to wait for the next_token value of the first page before you can process the second page, and so on.

One solution I was thinking of was to add a new attribute based on the last 2 digits of the Facebook id. So I'd start a thread with a select for "00", another for "01", and so on, potentially running 100 threads and doing the backup much faster (at least in theory). A related solution would be to split that domain into 100 domains (so I can back up each one individually), but that would break some of the selects I need to do. Another solution, probably more PHP-friendly, would be to use a cron job to back up, say, 10,000 records and save "next_token", then the next job starts at that next_token, and so on.
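To make the shard-attribute idea concrete, here is a minimal sketch in Python with boto (matching the code later in this thread rather than PHP). The fb_shard attribute, the 100-thread fan-out, and the function names are illustrative assumptions, not the app's actual schema:

#shard_backup_sketch.py -- illustrative only; assumes each item carries a
#hypothetical 'fb_shard' attribute holding the last 2 digits of its Facebook id

import threading

import boto

AWS_KEY = "YOUR_KEY"
AWS_SECRET = "YOUR_SECRET"
DOMAIN = "YOUR_DOMAIN"


def backup_shard(shard, results, lock):
    # each worker uses its own connection; sharing one boto connection
    # across threads is not safe
    con = boto.connect_sdb(aws_access_key_id=AWS_KEY,
                           aws_secret_access_key=AWS_SECRET)
    dom = con.get_domain(DOMAIN)
    query = "select * from `%s` where fb_shard = '%s'" % (DOMAIN, shard)
    # boto follows next_token internally while iterating, so each shard
    # pages through its own slice independently
    items = [dict(item, _itemName=item.name) for item in dom.select(query)]
    with lock:
        results.extend(items)


def backup_all_shards():
    results, lock, workers = [], threading.Lock(), []
    for i in range(100):                     # one worker per "00".."99" suffix
        t = threading.Thread(target=backup_shard,
                             args=("%02d" % i, results, lock))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()                             # wait for every shard to finish
    return results

Since the workers spend almost all their time waiting on SimpleDB responses, the same fan-out could just as well be driven by many independent cron'd PHP processes, one per suffix, without needing threads at all.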

Does anyone have a better solution for this? If it's a PHP solution, that would be great, but if it involves something else it's welcome anyway.

PS: Before you mention it, as far as I know PHP is still not thread-safe. And I'm aware that unless I stop writes during the backup there will be some consistency problems, but I'm not too worried about that in this particular case.

2 Answers

故事灯 2024-11-18 05:30:50

In my experience, the approach of creating a proxy shard attribute certainly works.

Alternatively, what we have done in the past is to break the backup down into a two-step process, in order to get as much potential for multiprocessing as possible (though this was in Java, and for writes to the backup file we could rely on synchronization to ensure write safety; I'm not sure what the equivalent is on the PHP side).

Basically, we have one thread that does a select across the data within a domain, but rather than "SELECT * FROM ...", it is just "SELECT itemName() FROM ..." to get the keys of the entries that need backing up. These keys are then dropped into a queue, which a pool of threads reads, fetching each item with the GetItem API and writing it to the backup file in a thread-safe manner.

This gave us better throughput on a single domain than spinning on a single thread.
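
For what it's worth, a rough sketch of that two-step pattern in Python with boto follows (our version was Java; Python here just matches the code elsewhere in this thread). The worker count, output file name, and the one-GetItem-per-key loop are illustrative assumptions rather than our actual implementation:

#two_step_backup_sketch.py -- illustrative only

import threading
import Queue  # 'queue' on Python 3

import boto
import simplejson as json

AWS_KEY = "YOUR_KEY"
AWS_SECRET = "YOUR_SECRET"
DOMAIN = "YOUR_DOMAIN"
NUM_WORKERS = 8


def produce_keys(key_queue):
    # step 1: one select that fetches only the item names (the keys)
    con = boto.connect_sdb(aws_access_key_id=AWS_KEY,
                           aws_secret_access_key=AWS_SECRET)
    dom = con.get_domain(DOMAIN)
    for item in dom.select("select itemName() from `%s`" % DOMAIN):
        key_queue.put(item.name)
    for _ in range(NUM_WORKERS):
        key_queue.put(None)                  # poison pills: tell workers to stop


def consume_keys(key_queue, out_file, write_lock):
    # step 2: workers fetch full items by key and append them to the backup file
    con = boto.connect_sdb(aws_access_key_id=AWS_KEY,
                           aws_secret_access_key=AWS_SECRET)
    dom = con.get_domain(DOMAIN)
    while True:
        key = key_queue.get()
        if key is None:
            break
        attrs = dom.get_item(key)            # one GetItem call per key
        record = dict(attrs or {}, _itemName=key)
        with write_lock:                     # serialize writes to the shared file
            out_file.write(json.dumps(record) + "\n")


def _main():
    key_queue = Queue.Queue(maxsize=1000)
    write_lock = threading.Lock()
    out_file = open("backup.jsonl", "w")
    workers = [threading.Thread(target=consume_keys,
                                args=(key_queue, out_file, write_lock))
               for _ in range(NUM_WORKERS)]
    for w in workers:
        w.start()
    produce_keys(key_queue)                  # run the producer in the main thread
    for w in workers:
        w.join()
    out_file.close()


if __name__ == "__main__":
    _main()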

Ultimately though, with numerous domains in our nightly backup, we ended up reverting to doing each domain backup in the single-thread, "SELECT * FROM domain" style model, mainly because we already had a shedload of threads going and the thread overhead was starting to become an issue on the backup processor, but also because the backup program was starting to get dangerously complex.

七月上 2024-11-18 05:30:50

I've researched this problem as of October 2012. Three major issues seem to govern the choice:

  1. There is no 'native' way to ensure a consistent export or import with SimpleDB. It is your responsibility to understand and manage the implications of this w.r.t. your application code.
  2. No managed backup solution is available from Amazon, but a variety of third-party companies offer something in this space (typically with "backup to S3" as an option).
  3. At some volume of data, you'll need to consider a multi-threaded approach which, again, has important implications re: consistency.

If all you need is to dump data from a single domain, and your data volumes are low enough that a single-threaded export makes sense, then here is some Python code I wrote that works well for me. No warranty is expressed or implied; only use this if you understand it:

#simpledb2json.py

import boto
import simplejson as json

AWS_KEY = "YOUR_KEY"
AWS_SECRET = "YOUR_SECRET"

DOMAIN = "YOUR_DOMAIN"


def fetch_items(boto_dom, dom_name, offset=None, limit=300):
    offset_predicate = ""

    if offset:
        offset_predicate = " and itemName() > '" + offset + "'"

    query = "select * from " \
        + "`" + dom_name + "`" \
        + " where itemName() is not null" \
        + offset_predicate \
        + " order by itemName() asc limit " + str(limit)

    rs = boto_dom.select(query)

    # by default, boto does not include the simpledb 'key' or 'name' in the
    # dict, it is a separate property. so we add it:
    result = []
    for r in rs:
        r['_itemName'] = r.name
        result.append(r)

    return result


def _main():
    con = boto.connect_sdb(aws_access_key_id=AWS_KEY, aws_secret_access_key=AWS_SECRET)

    dom = con.get_domain(DOMAIN)

    all_items = []
    offset = None

    # keyset pagination: keep requesting items whose itemName() sorts after the
    # last one seen, until a page comes back empty
    while True:
        items = fetch_items(dom, DOMAIN, offset=offset)

        if not items:
            break

        all_items += items

        offset = all_items[-1].name

    print json.dumps(all_items, sort_keys=True, indent=4)

if __name__ == "__main__":
    _main()
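
Assuming Python 2 with boto and simplejson installed and the three constants filled in, the script writes the whole dump to stdout, so something like "python simpledb2json.py > backup.json" captures one domain per run.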