Per-request cache in Django?


I would like to implement a decorator that provides per-request caching to any method, not just views. Here is an example use case.

I have a custom tag that determines if a record in a long list of records is a "favorite". In order to check if an item is a favorite, you have to query the database. Ideally, you would perform one query to get all the favorites, and then just check that cached list against each record.

One solution is to get all the favorites in the view, and then pass that set into the template, and then into each tag call.

Alternatively, the tag itself could perform the query, but only the first time it's called. Then the results could be cached for subsequent calls. The upside is that you can use this tag from any template, on any view, without altering the view.

In the existing caching mechanism, you could just cache the result for 50ms, and assume that would correlate to the current request. I want to make that correlation reliable.

Here is an example of the tag I currently have.

@register.filter()
def is_favorite(record, request):
    # Reuse the favorites list if an earlier call this request already
    # stashed it; otherwise query once and stash it.
    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        favorites = get_favorites(request.user)

        # request.POST is an immutable QueryDict, so copy it before
        # adding our key.
        post = request.POST.copy()
        post["get_favorites"] = favorites
        request.POST = post

    return record in favorites

Is there a way to get the current request object from Django, w/o passing it around? From a tag, I could just pass in request, which will always exist. But I would like to use this decorator from other functions.

Is there an existing implementation of a per-request cache?

7 Answers

傲性难收 2024-09-14 20:50:19


Using a custom middleware you can get a Django cache instance guaranteed to be cleared for each request.

This is what I used in a project:

from threading import currentThread
from django.core.cache.backends.locmem import LocMemCache

_request_cache = {}
_installed_middleware = False

def get_request_cache():
    assert _installed_middleware, 'RequestCacheMiddleware not loaded'
    return _request_cache[currentThread()]

# LocMemCache is a threadsafe local memory cache
class RequestCache(LocMemCache):
    def __init__(self):
        name = 'locmemcache@%i' % hash(currentThread())
        params = dict()
        super(RequestCache, self).__init__(name, params)

class RequestCacheMiddleware(object):
    def __init__(self):
        global _installed_middleware
        _installed_middleware = True

    def process_request(self, request):
        cache = _request_cache.get(currentThread()) or RequestCache()
        _request_cache[currentThread()] = cache

        cache.clear()

To use the middleware, register it in settings.py, e.g.:

MIDDLEWARE_CLASSES = (
    ...
    'myapp.request_cache.RequestCacheMiddleware'
)

You may then use the cache as follows:

from myapp.request_cache import get_request_cache

cache = get_request_cache()
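
The instance supports the standard low-level cache calls, for example (the key name here is illustrative):

favorites = cache.get('favorites')
if favorites is None:
    favorites = get_favorites(request.user)
    cache.set('favorites', favorites)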

Refer to the Django low-level cache API docs for more information:

Django Low-Level Cache API

It should be easy to modify a memoize decorator to use the request cache. Have a look at the Python Decorator Library for a good example of a memoize decorator:

Python Decorator Library
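
For illustration, here is a minimal sketch of what such a request-scoped memoize decorator might look like, built on get_request_cache() above (the key-building scheme is an assumption and requires arguments with stable reprs):

from functools import wraps

_MISSING = object()  # sentinel, so a cached None still counts as a hit

def request_cached(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        cache = get_request_cache()
        # Hypothetical key scheme: module, name, and the repr of the args.
        key = '%s.%s:%r:%r' % (func.__module__, func.__name__,
                               args, sorted(kwargs.items()))
        result = cache.get(key, _MISSING)
        if result is _MISSING:
            result = func(*args, **kwargs)
            cache.set(key, result)
        return result
    return wrapper

With that, the question's get_favorites() could be decorated once, and every is_favorite call in the same request would reuse the first query's result.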

凤舞天涯 2024-09-14 20:50:19


EDIT:

The eventual solution I came up with has been compiled into a PyPI package: https://pypi.org/project/django-request-cache/

EDIT 2016-06-15:

I discovered a significantly simpler solution to this problem, and kind of facepalmed for not realizing how easy this should have been from the start.

from django.core.cache.backends.base import BaseCache
from django.core.cache.backends.locmem import LocMemCache
from django.utils.synch import RWLock


class RequestCache(LocMemCache):
    """
    RequestCache is a customized LocMemCache which stores its data cache as an instance attribute, rather than
    a global. It's designed to live only as long as the request object that RequestCacheMiddleware attaches it to.
    """

    def __init__(self):
        # We explicitly do not call super() here, because while we want BaseCache.__init__() to run, we *don't*
        # want LocMemCache.__init__() to run, because that would store our caches in its globals.
        BaseCache.__init__(self, {})

        self._cache = {}
        self._expire_info = {}
        self._lock = RWLock()

class RequestCacheMiddleware(object):
    """
    Creates a fresh cache instance as request.cache. The cache instance lives only as long as request does.
    """

    def process_request(self, request):
        request.cache = RequestCache()

With this, you can use request.cache as a cache instance that lives only as long as the request does, and will be fully cleaned up by the garbage collector when the request is done.
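
For example, the question's filter could then be written roughly like this (a sketch; get_favorites is the helper from the question):

@register.filter()
def is_favorite(record, request):
    favorites = request.cache.get('favorites')
    if favorites is None:
        favorites = get_favorites(request.user)
        request.cache.set('favorites', favorites)
    return record in favorites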

If you need access to the request object from a context where it's not normally available, you can use one of the various implementations of a so-called "global request middleware" that can be found online.
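
The general shape of such a middleware is roughly this (a sketch, not any specific package):

import threading

_locals = threading.local()

class GlobalRequestMiddleware(object):
    def process_request(self, request):
        # Remember the request on a thread-local, so code running later
        # in the same thread can find it without it being passed in.
        _locals.request = request

def get_current_request():
    return getattr(_locals, 'request', None)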

Initial answer:

A major problem that no other solution here solves is the fact that LocMemCache leaks memory when you create and destroy several of them over the life of a single process. django.core.cache.backends.locmem defines several global dictionaries that hold references to every LocMemCache instance's cache data, and those dictionaries are never emptied.

The following code solves this problem. It started as a combination of @href_'s answer and the cleaner logic used by the code linked in @squarelogic.hayden's comment, which I then refined further.

from uuid import uuid4
from threading import current_thread

from django.core.cache.backends.base import BaseCache
from django.core.cache.backends.locmem import LocMemCache
from django.utils.synch import RWLock


# Global in-memory store of cache data. Keyed by name, to provide multiple
# named local memory caches.
_caches = {}
_expire_info = {}
_locks = {}


class RequestCache(LocMemCache):
    """
    RequestCache is a customized LocMemCache with a destructor, ensuring that creating
    and destroying RequestCache objects over and over doesn't leak memory.
    """

    def __init__(self):
        # We explicitly do not call super() here, because while we want
        # BaseCache.__init__() to run, we *don't* want LocMemCache.__init__() to run.
        BaseCache.__init__(self, {})

        # Use a name that is guaranteed to be unique for each RequestCache instance.
        # This ensures that it will always be safe to call del _caches[self.name] in
        # the destructor, even when multiple threads are doing so at the same time.
        self.name = uuid4()
        self._cache = _caches.setdefault(self.name, {})
        self._expire_info = _expire_info.setdefault(self.name, {})
        self._lock = _locks.setdefault(self.name, RWLock())

    def __del__(self):
        del _caches[self.name]
        del _expire_info[self.name]
        del _locks[self.name]


class RequestCacheMiddleware(object):
    """
    Creates a cache instance that persists only for the duration of the current request.
    """

    _request_caches = {}

    def process_request(self, request):
        # The RequestCache object is keyed on the current thread because each request is
        # processed on a single thread, allowing us to retrieve the correct RequestCache
        # object in the other functions.
        self._request_caches[current_thread()] = RequestCache()

    def process_response(self, request, response):
        self.delete_cache()
        return response

    def process_exception(self, request, exception):
        self.delete_cache()

    @classmethod
    def get_cache(cls):
        """
        Retrieve the current request's cache.

        Returns None if RequestCacheMiddleware is not currently installed via 
        MIDDLEWARE_CLASSES, or if there is no active request.
        """
        return cls._request_caches.get(current_thread())

    @classmethod
    def clear_cache(cls):
        """
        Clear the current request's cache.
        """
        cache = cls.get_cache()
        if cache:
            cache.clear()

    @classmethod
    def delete_cache(cls):
        """
        Delete the current request's cache object to avoid leaking memory.
        """
        cache = cls._request_caches.pop(current_thread(), None)
        del cache
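
A hypothetical call site, usable from anywhere during the request (the module path and the get_favorites helper are assumptions):

from myapp.request_cache import RequestCacheMiddleware

def get_favorites_cached(user):
    cache = RequestCacheMiddleware.get_cache()
    if cache is None:
        # No active request (e.g. a management command): skip caching.
        return get_favorites(user)
    favorites = cache.get('favorites')
    if favorites is None:
        favorites = get_favorites(user)
        cache.set('favorites', favorites)
    return favorites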


轻许诺言 2024-09-14 20:50:19


I came up with a hack for caching things straight into the request object (instead of using the standard cache, which would be tied to memcached, file, database, etc.).

# get the request object's dictionary (rather one of its methods' dictionary)
mycache = request.get_host.__dict__

# check whether we already have our value cached and return it
if mycache.get( 'c_category', False ):
    return mycache['c_category']
else:
    # get some object from the database (a category object in this case)
    c = Category.objects.get( id = cid )

    # cache the database object into a new key in the request object
    mycache['c_category'] = c

    return c

So, basically I am just storing the cached value (category object in this case) under a new key 'c_category' in the dictionary of the request. Or to be more precise, because we can't just create a key on the request object, I am adding the key to one of the methods of the request object - get_host().

Georgy.

骑趴 2024-09-14 20:50:19


Years later, a super hack to cache SELECT statements inside a single Django request. You need to execute the patch() method from early on in your request scope, like in a piece of middleware.

from threading import local
import itertools
from django.db.models.sql.constants import MULTI
from django.db.models.sql.compiler import SQLCompiler
from django.db.models.sql.datastructures import EmptyResultSet
from django.db.models.sql.constants import GET_ITERATOR_CHUNK_SIZE


_thread_locals = local()


def get_sql(compiler):
    ''' get a tuple of the SQL query and the arguments '''
    try:
        return compiler.as_sql()
    except EmptyResultSet:
        pass
    return ('', [])


def execute_sql_cache(self, result_type=MULTI):

    if hasattr(_thread_locals, 'query_cache'):

        sql = get_sql(self)  # ('SELECT * FROM ...', (50)) <= sql string, args tuple
        if sql[0][:6].upper() == 'SELECT':

            # uses the tuple of sql + args as the cache key
            if sql in _thread_locals.query_cache:
                return _thread_locals.query_cache[sql]

            result = self._execute_sql(result_type)
            if hasattr(result, 'next'):

                # only cache if this is not a full first page of a chunked set
                peek = result.next()
                result = list(itertools.chain([peek], result))

                if len(peek) == GET_ITERATOR_CHUNK_SIZE:
                    return result

            _thread_locals.query_cache[sql] = result

            return result

        else:
            # the database has been updated; throw away the cache
            _thread_locals.query_cache = {}

    return self._execute_sql(result_type)


def patch():
    ''' patch the django query runner to use our own method to execute sql '''
    _thread_locals.query_cache = {}
    if not hasattr(SQLCompiler, '_execute_sql'):
        SQLCompiler._execute_sql = SQLCompiler.execute_sql
        SQLCompiler.execute_sql = execute_sql_cache

The patch() method replaces Django's internal execute_sql method with a stand-in called execute_sql_cache. That method looks at the SQL about to be run, and if it's a SELECT statement it checks a thread-local cache first; only if the query is not found in the cache does it proceed to execute the SQL. Any other type of SQL statement blows away the cache. There is some logic to avoid caching large result sets, meaning anything over 100 records (GET_ITERATOR_CHUNK_SIZE). This is to preserve Django's lazy queryset evaluation.
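
A minimal sketch of the middleware alluded to above (the module path myapp.sql_cache is an assumption):

from myapp.sql_cache import patch

class QueryCacheMiddleware(object):
    def process_request(self, request):
        # patch() installs execute_sql_cache on first use and resets the
        # thread-local query cache, so each request starts empty.
        patch()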

熊抱啵儿 2024-09-14 20:50:19


This one uses a plain Python dict as the cache (not Django's cache framework), and is dead simple and lightweight.

  • Whenever a thread is destroyed, its cache entry is cleaned up automatically.
  • Does not require any middleware, and the content is not pickled and depickled on every access, which is faster.
  • Tested and works with gevent's monkey-patching.

The same can probably be implemented with thread-local storage.
I am not aware of any downsides to this approach; feel free to add them in the comments.

from threading import currentThread
import weakref

_request_cache = weakref.WeakKeyDictionary()

def get_request_cache():
    return _request_cache.setdefault(currentThread(), {})
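
A hypothetical usage, memoizing the favorites lookup from the question (note that WSGI servers often reuse threads, so an entry can survive into a later request on the same thread unless something clears it):

cache = get_request_cache()
if 'favorites' not in cache:
    cache['favorites'] = get_favorites(request.user)
favorites = cache['favorites']
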
风月客 2024-09-14 20:50:19


You can always do the caching manually.

    ...
    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        from django.core.cache import cache

        favorites = cache.get(request.user.username)
        if not favorites:
            favorites = get_favorites(request.user)
            # "seconds" is a timeout placeholder; keep it short so the
            # entry spans roughly one request.
            cache.set(request.user.username, favorites, seconds)
    ...

最冷一天 2024-09-14 20:50:19


Answer given by @href_ is great.

Just in case you want something shorter that could also potentially do the trick:

import time

from django.utils.lru_cache import lru_cache


def cached_call(func, *args, **kwargs):
    """Very basic temporary cache: caches results for 1.5 sec on
    average, and never for more than 3 sec."""
    return _cached_call(int(time.time() / 3), func, *args, **kwargs)


@lru_cache(maxsize=100)
def _cached_call(time_bucket, func, *args, **kwargs):
    # time_bucket changes every 3 seconds, which naturally expires
    # old lru_cache entries.
    return func(*args, **kwargs)

Then fetch the favourites by calling it like this:

favourites = cached_call(get_favourites, request.user)

This method makes use of an LRU cache, and by combining it with a timestamp we make sure that the cache doesn't hold anything for longer than a few seconds. If you need to call a costly function several times in a short period of time, this solves the problem.

It is not a perfect way to invalidate the cache, because occasionally it will miss very recent data: int(..2.99.. / 3) followed by int(..3.00.. / 3). Despite this drawback, it is still very effective for the majority of hits.

Also, as a bonus, you can use it outside request/response cycles, for example in Celery tasks or management-command jobs.
