Limiting database writes with memcache
I am trying to modify the guestbook example webapp to reduce the amount of database writes.
What I am trying to achieve is to load all the guestbook entries into memcache, which I have done.
However, I want to be able to update memcache directly with new guestbook entries and then write all changes to the database as a batch put() every 30 seconds.
Has anyone got an example of how I could achieve this? It would really help me!
Thanks :)
This is a recipe for lost data. I have a hard time believing that a guest book is causing enough write activity to be an issue. Also, the bookkeeping involved in this would be tricky, since memcache isn't searchable.
What you are trying to achieve is called write-behind caching, and it is usually not as easy to implement correctly as it seems at first. As far as I know, there are currently no ready-made solutions for memcached/GAE in Python, but you can look at Stockpyle. It has some basic functionality for write-through caching (see appengine.py and memcache.py), so it can serve as a basis for your own solution.
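To make the write-behind idea concrete, here is a minimal, framework-free sketch. The class name and API are illustrative (this is plain Python with a dict standing in for both memcache and the datastore, not the GAE APIs): reads and writes hit the in-memory cache immediately, and dirty keys are written to the backing store in one batch on a timer or an explicit `flush()`. Note that with real memcache the set of dirty keys can itself be evicted, which is exactly the data-loss risk raised above.

```python
import threading

class WriteBehindCache:
    """Minimal write-behind cache sketch. Writes land in an in-memory
    dict immediately; changed keys are flushed to the backing store
    in a single batch, either periodically or on demand."""

    def __init__(self, store, flush_interval=30.0):
        self.store = store            # backing store: any dict-like object
        self.cache = dict(store)      # warm the cache from the store
        self.dirty = set()            # keys changed since the last flush
        self.lock = threading.Lock()
        self.flush_interval = flush_interval

    def get(self, key):
        return self.cache.get(key)

    def put(self, key, value):
        with self.lock:
            self.cache[key] = value
            self.dirty.add(key)       # remember it for the next batch write

    def flush(self):
        """Write all dirty entries to the backing store in one batch."""
        with self.lock:
            batch = {k: self.cache[k] for k in self.dirty}
            self.dirty.clear()
        self.store.update(batch)      # one bulk write instead of many small ones

    def start(self):
        """Flush every flush_interval seconds on a background timer."""
        self.flush()
        timer = threading.Timer(self.flush_interval, self.start)
        timer.daemon = True
        timer.start()
```

Usage: `cache.put("entry1", "hi")` updates only the cache; the backing store sees nothing until the next `flush()`. On GAE you would replace the dict-backed store with a batch `db.put()` of the dirty entities, and accept that anything still in the dirty set is lost if the instance dies before the flush.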
Memcache is too volatile a store for valuable data like guestbook entries; remember that memcache data can be evicted, for example under memory pressure.
If your guestbook gets high traffic and you are suffering datastore write timeouts/contention, try another approach: use a rate-limited taskqueue to slow down the number of writes to the datastore.
You can relax writes to the datastore by defining a low-rate execution in your queue.yaml. With one write per second you would probably still get some sporadic timeout errors; in that case the task will simply be executed again until it succeeds.
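A queue.yaml along these lines limits the queue to one task execution per second (the queue name here is illustrative):

```yaml
queue:
- name: guestbook-writes   # illustrative name
  rate: 1/s                # at most one task execution per second
  bucket_size: 1           # no bursts above the steady rate
```

Each new guestbook entry is then enqueued as a task on this queue instead of being written directly, so the datastore sees at most one write per second regardless of traffic.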