Limiting database writes with memcache

Published 2024-10-20 20:02:37


I am trying to modify the guestbook example webapp to reduce the amount of database writes.

What I am trying to achieve is to load all the guestbook entries into memcache, which I have done.

However, I want to be able to update memcache directly with new guestbook entries, and then write all the changes to the database as a batch put() every 30 seconds.

Has anyone got an example of how I could achieve this? It would really help me!

Thanks :)


Comments (3)

演出会有结束 2024-10-27 20:02:37


This is a recipe for lost data. I have a hard time believing that a guest book is causing enough write activity to be an issue. Also, the bookkeeping involved in this would be tricky, since memcache isn't searchable.

北笙凉宸 2024-10-27 20:02:37


What you are trying to achieve is called write-behind caching, and it is usually not as easy to implement correctly as it first seems. As far as I know, there are currently no ready-made solutions in Python for Memcached/GAE, but you can look at Stockpyle. It has some basic functionality for write-through caching (see appengine.py and memcache.py), so it can serve as a basis for your own solution.
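To make the write-behind idea concrete, here is a minimal, self-contained sketch of the pattern. It is not GAE-specific: `WriteBehindCache`, `DictStore`, and `batch_put` are illustrative names, with the in-memory dict standing in for memcache and `batch_put()` standing in for a batched datastore `db.put()`.

```python
import time

class WriteBehindCache:
    """Write-behind sketch: reads and writes hit the cache; dirty
    entries are flushed to the backing store in periodic batches."""

    def __init__(self, store, flush_interval=30.0):
        self.store = store                  # backing store with batch_put(dict)
        self.cache = {}                     # serves all reads
        self.dirty = set()                  # keys changed since last flush
        self.flush_interval = flush_interval
        self._last_flush = time.monotonic()

    def get(self, key):
        return self.cache.get(key)

    def put(self, key, value):
        # Update only the cache; the store is touched at flush time.
        self.cache[key] = value
        self.dirty.add(key)
        self._maybe_flush()

    def _maybe_flush(self):
        if time.monotonic() - self._last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.dirty:
            batch = {k: self.cache[k] for k in self.dirty}
            self.store.batch_put(batch)     # one batched write instead of many
            self.dirty.clear()
        self._last_flush = time.monotonic()


class DictStore:
    """Stand-in for the datastore; batch_put mirrors a batched db.put()."""

    def __init__(self):
        self.data = {}

    def batch_put(self, batch):
        self.data.update(batch)
```

Note the failure mode the other answers point out: anything in `dirty` that has not yet been flushed is lost if the process dies or the cache is evicted.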

自在安然 2024-10-27 20:02:37


Memcache is too volatile a storage to hold valuable data like guestbook entries; remember that memcache data can be evicted, for example under low-memory conditions.

If your guestbook has high traffic and you are suffering datastore write timeouts/contention, try another approach: use a rate-limited taskqueue to slow down the number of writes to the datastore.

  1. Let the user submit a guestbook entry
  2. Pass each entry to a rate-limited taskqueue via the deferred library
  3. Write to the datastore

You can relax the writes to the datastore by defining a low execution rate in your queue.yaml, with something like this:

queue:
- name: relaxed-write
  rate: 1/s
  bucket_size: 1

With one write per second, you may still get sporadic timeout errors; in that case the task will be retried until it succeeds.
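The `rate: 1/s` and `bucket_size: 1` settings above follow token-bucket semantics: tokens refill at the given rate up to the bucket size, and each task execution spends one token. A small simulation of that model (illustrative names, not the actual taskqueue implementation):

```python
class TokenBucket:
    """Token-bucket model of taskqueue throttling:
    `rate` tokens per second refill, capped at `bucket_size`."""

    def __init__(self, rate, bucket_size):
        self.rate = rate
        self.bucket_size = bucket_size
        self.tokens = float(bucket_size)   # bucket starts full
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, then spend one token if available.
        elapsed = now - self.last
        self.tokens = min(self.bucket_size, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `rate=1, bucket_size=1` the bucket never holds more than one token, so executions are spaced at least a second apart with no bursting, which is exactly why the queue above relaxes pressure on the datastore.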
