Is there a decorator that simply caches function return values?

Consider the following:

@property
def name(self):

    if not hasattr(self, '_name'):

        # expensive calculation
        self._name = 1 + 1

    return self._name

I'm new, but I think the caching could be factored out into a decorator. Only I didn't find one like it ;)

PS the real calculation doesn't depend on mutable values
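
For reference, a minimal sketch of the idea being asked about (an illustration only, not one of the library solutions given in the answers below): the hasattr/setattr caching can be factored out into a small property-like decorator.

def cached_attr(func):
    attr_name = '_' + func.__name__

    @property
    def wrapper(self):
        # run the expensive calculation only once, then reuse the stored value
        if not hasattr(self, attr_name):
            setattr(self, attr_name, func(self))
        return getattr(self, attr_name)

    return wrapper


class Thing:
    @cached_attr
    def name(self):
        return 1 + 1  # stand-in for the expensive calculation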

20 answers

流心雨 2024-07-25 20:03:03

Starting from Python 3.2 there is a built-in decorator:

@functools.lru_cache(maxsize=100, typed=False)

Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.

Example of an LRU cache for computing Fibonacci numbers:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

>>> print([fib(n) for n in range(16)])
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]

>>> print(fib.cache_info())
CacheInfo(hits=28, misses=16, maxsize=None, currsize=16)

If you are stuck with Python 2.x, here's a list of other compatible memoization libraries:

熟人话多 2024-07-25 20:03:03

Python 3.8 functools.cached_property decorator

https://docs.python.org/dev/library/functools.html#functools.cached_property

cached_property from Werkzeug was mentioned at: https://stackoverflow.com/a/5295190/895245 but a supposedly derived version will be merged into 3.8, which is awesome.

This decorator can be seen as caching @property, or as a cleaner @functools.lru_cache for when you don't have any arguments.

The docs say:

@functools.cached_property(func)

Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.

Example:

import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)

    @cached_property
    def variance(self):
        return statistics.variance(self._data)

New in version 3.8.

Note: This decorator requires that the __dict__ attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the __dict__ attributes on type instances are read-only proxies for the class namespace), and those that specify __slots__ without including __dict__ as one of the defined slots (as such classes don't provide a __dict__ attribute at all).
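
As a small illustration of that caveat (a sketch, not taken from the docs): a class that defines __slots__ without __dict__ has nowhere to store the cached value, so accessing the property raises a TypeError.

from functools import cached_property

class WithSlots:
    __slots__ = ('x',)  # no __dict__, so cached_property cannot store its result

    def __init__(self, x):
        self.x = x

    @cached_property
    def doubled(self):
        return self.x * 2

w = WithSlots(3)
try:
    w.doubled
except TypeError as exc:
    print(exc)  # complains about the missing __dict__ attribute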

尾戒 2024-07-25 20:03:03

functools.cache has been released in Python 3.9 (docs):

from functools import cache

@cache
def factorial(n):
    return n * factorial(n-1) if n else 1

In previous Python versions, one of the early answers is still a valid solution: Using lru_cache as an ordinary cache without the limit and lru features. (docs)

If maxsize is set to None, the LRU feature is disabled and the cache
can grow without bound.

Here is a prettier version of it:

from functools import lru_cache

cache = lru_cache(maxsize=None)

@cache
def func(param1):
   pass
葬花如无物 2024-07-25 20:03:03

It sounds like you're not asking for a general-purpose memoization decorator (i.e., you're not interested in the general case where you want to cache return values for different argument values). That is, you'd like to have this:

x = obj.name  # expensive
y = obj.name  # cheap

while a general-purpose memoization decorator would give you this:

x = obj.name()  # expensive
y = obj.name()  # cheap

I submit that the method-call syntax is better style, because it suggests the possibility of expensive computation while the property syntax suggests a quick lookup.

[Update: The class-based memoization decorator I had linked to and quoted here previously doesn't work for methods. I've replaced it with a decorator function.] If you're willing to use a general-purpose memoization decorator, here's a simple one:

def memoize(function):
  memo = {}
  def wrapper(*args):
    if args in memo:
      return memo[args]
    else:
      rv = function(*args)
      memo[args] = rv
      return rv
  return wrapper

Example usage:

@memoize
def fibonacci(n):
  if n < 2: return n
  return fibonacci(n - 1) + fibonacci(n - 2)

Another memoization decorator with a limit on the cache size can be found here.
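
The link is not reproduced here; as a rough illustration of the same idea (a sketch, not the linked code), a size limit can be bolted onto the memoize decorator above using an OrderedDict with least-recently-used eviction. In practice, functools.lru_cache already provides exactly this.

from collections import OrderedDict
from functools import wraps

def memoize_limited(maxsize=128):
    def decorator(function):
        memo = OrderedDict()

        @wraps(function)
        def wrapper(*args):
            if args in memo:
                memo.move_to_end(args)      # mark as most recently used
                return memo[args]
            rv = function(*args)
            memo[args] = rv
            if len(memo) > maxsize:
                memo.popitem(last=False)    # evict the least recently used entry
            return rv

        return wrapper
    return decorator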

等风来 2024-07-25 20:03:03
class memorize(dict):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        return self[args]

    def __missing__(self, key):
        result = self[key] = self.func(*key)
        return result

Sample uses:

>>> @memorize
... def foo(a, b):
...     return a * b
>>> foo(2, 4)
8
>>> foo
{(2, 4): 8}
>>> foo('hi', 3)
'hihihi'
>>> foo
{(2, 4): 8, ('hi', 3): 'hihihi'}
踏雪无痕 2024-07-25 20:03:03

Werkzeug has a cached_property decorator (docs, source)
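
A minimal usage sketch (assuming Werkzeug is installed; in current releases the import path is werkzeug.utils):

from werkzeug.utils import cached_property

class Foo(object):
    @cached_property
    def name(self):
        return 1 + 1  # the expensive calculation runs only on first access

foo = Foo()
foo.name  # computed and stored in the instance's __dict__
foo.name  # returned from the cache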

抚你发端 2024-07-25 20:03:03

I coded this simple decorator class to cache function responses. I find it VERY useful for my projects:

from datetime import datetime, timedelta 

class cached(object):
    def __init__(self, *args, **kwargs):
        self.cached_function_responses = {}
        self.default_max_age = kwargs.get("default_cache_max_age", timedelta(seconds=0))

    def __call__(self, func):
        def inner(*args, **kwargs):
            max_age = kwargs.get('max_age', self.default_max_age)
            if not max_age or func not in self.cached_function_responses or (datetime.now() - self.cached_function_responses[func]['fetch_time'] > max_age):
                if 'max_age' in kwargs: del kwargs['max_age']
                res = func(*args, **kwargs)
                self.cached_function_responses[func] = {'data': res, 'fetch_time': datetime.now()}
            return self.cached_function_responses[func]['data']
        return inner

The usage is straightforward:

import time

@cached
def myfunc(a):
    print "in func"
    return (a, datetime.now())

@cached(default_max_age = timedelta(seconds=6))
def cacheable_test(a):
    print "in cacheable test: "
    return (a, datetime.now())


print cacheable_test(1,max_age=timedelta(seconds=5))
print cacheable_test(2,max_age=timedelta(seconds=5))
time.sleep(7)
print cacheable_test(3,max_age=timedelta(seconds=5))
送君千里 2024-07-25 20:03:03

Try joblib
https://joblib.readthedocs.io/en/latest/memory.html

from joblib import Memory

# customize the decorator
memory = Memory(cachedir=cachedir, verbose=0)

@memory.cache
def f(x):
    print('Running f(%s)' % x)
    return x
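
A quick usage sketch (the cache directory is an assumption; recent joblib versions take it as the positional location argument instead of cachedir):

from joblib import Memory

memory = Memory("./joblib_cache", verbose=0)  # hypothetical on-disk cache location

@memory.cache
def f(x):
    print('Running f(%s)' % x)
    return x

f(1)  # prints "Running f(1)" and stores the result on disk
f(1)  # served from the cache; nothing is printed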
天暗了我发光 2024-07-25 20:03:03

DISCLAIMER: I'm the author of kids.cache.

You should check out kids.cache; it provides a @cache decorator that works on Python 2 and Python 3. No dependencies, ~100 lines of code. It's very straightforward to use: for instance, with your code in mind, you could use it like this:

pip install kids.cache

Then

from kids.cache import cache
...
class MyClass(object):
    ...
    @cache            # <-- That's all you need to do
    @property
    def name(self):
        return 1 + 1  # supposedly expensive calculation

Or you could put the @cache decorator after the @property (same result).

Using cache on a property is called lazy evaluation; kids.cache can do much more (it works on functions with any arguments, properties, any kind of method, and even classes...). For advanced users, kids.cache supports cachetools, which provides fancy cache stores for Python 2 and Python 3 (LRU, LFU, TTL, RR caches).

IMPORTANT NOTE: the default cache store of kids.cache is a standard dict, which is not recommended for long-running programs with ever-different queries, as it would lead to an ever-growing cache store. For this usage you can plug in other cache stores, for instance @cache(use=cachetools.LRUCache(maxsize=2)) to decorate your function/property/class/method...

画离情绘悲伤 2024-07-25 20:03:03

Ah, just needed to find the right name for this: "Lazy property evaluation".

I do this a lot too; maybe I'll use that recipe in my code sometime.

暮年慕年 2024-07-25 20:03:03

There is yet another example of a memoize decorator at Python Wiki:

http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize

That example is a bit smart, because it won't cache the results if the parameters are mutable. (check that code, it's very simple and interesting!)
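
The wiki recipe is not copied here, but its core trick is roughly the following (a sketch, not the wiki code verbatim): try to use the arguments as a dictionary key, and fall back to a plain call when they are unhashable.

import functools

def memoize(func):
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        try:
            return cache[args]
        except KeyError:
            value = func(*args)
            cache[args] = value
            return value
        except TypeError:
            # unhashable arguments (e.g. lists or dicts): call through without caching
            return func(*args)

    return wrapper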

划一舟意中人 2024-07-25 20:03:03

If you are using the Django framework, it can cache a view or an API response for you
using @cache_page(time), and there are other options as well.

Example:

from django.views.decorators.cache import cache_page

@cache_page(60 * 15, cache="special_cache")
def my_view(request):
    ...

More details can be found here.

情深缘浅 2024-07-25 20:03:03

There is fastcache, which is "C implementation of Python 3 functools.lru_cache. Provides speedup of 10-30x over standard library."

Same as the chosen answer, just with a different import:

from fastcache import lru_cache
@lru_cache(maxsize=128, typed=False)
def f(a, b):
    pass

Also, it comes pre-installed in Anaconda; on a plain Python install, fastcache has to be installed separately (functools itself is part of the standard library).

避讳 2024-07-25 20:03:03

Along with the Memoize Example I found the following python packages:

  • cachepy; it allows setting a TTL and/or a maximum number of calls for cached functions; it also supports an encrypted file-based cache...
  • percache
早茶月光 2024-07-25 20:03:03

@lru_cache is not good with default keyword arguments: a call that relies on the default (e.g. func(1)) and a call that passes the same value explicitly (func(1, z=10)) end up as two separate cache entries, as the test output below shows.

my @mem decorator:

import inspect
from copy import deepcopy
from functools import lru_cache, wraps
from typing import Any, Callable, Dict, Iterable


# helper
def get_all_kwargs_values(f: Callable, kwargs: Dict[str, Any]) -> Iterable[Any]:
    default_kwargs = {
        k: v.default
        for k, v in inspect.signature(f).parameters.items()
        if v.default is not inspect.Parameter.empty
    }

    all_kwargs = deepcopy(default_kwargs)
    all_kwargs.update(kwargs)

    for key in sorted(all_kwargs.keys()):
        yield all_kwargs[key]


# the best decorator
def mem(func: Callable) -> Callable:
    cache = dict()

    @wraps(func)
    def wrapper(*args, **kwargs) -> Any:
        all_kwargs_values = get_all_kwargs_values(func, kwargs)
        params = (*args, *all_kwargs_values)
        _hash = hash(params)

        if _hash not in cache:
            cache[_hash] = func(*args, **kwargs)

        return cache[_hash]

    return wrapper


# some logic
def counter(*args) -> int:
    print(f'* not_cached:', end='\t')
    return sum(args)


@mem
def check_mem(a, *args, z=10) -> int:
    return counter(a, *args, z)


@lru_cache
def check_lru(a, *args, z=10) -> int:
    return counter(a, *args, z)


def test(func) -> None:
    print(f'\nTest {func.__name__}:')

    print('*', func(1, 2, 3, 4, 5))
    print('*', func(1, 2, 3, 4, 5))
    print('*', func(1, 2, 3, 4, 5, z=6))
    print('*', func(1, 2, 3, 4, 5, z=6))
    print('*', func(1))
    print('*', func(1, z=10))


def main():
    test(check_mem)
    test(check_lru)


if __name__ == '__main__':
    main()

output:

Test check_mem:
* not_cached:   * 25
* 25
* not_cached:   * 21
* 21
* not_cached:   * 11
* 11

Test check_lru:
* not_cached:   * 25
* 25
* not_cached:   * 21
* 21
* not_cached:   * 11
* not_cached:   * 11
如日中天 2024-07-25 20:03:03

Create your own decorator and use it

from django.core.cache import cache
import functools

def cache_returned_values(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = "choose a unique key here"
        results = cache.get(key)
        if results is None:  # cache.get returns None on a miss; don't recompute just because a cached value is falsy
            results = func(*args, **kwargs)
            cache.set(key, results)
        return results

    return wrapper

Now, on the function side:

@cache_returned_values
def get_some_values(args):
  return x
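
The key above is a fixed placeholder; one possible way to derive it per call (an assumption, not part of the original answer) is to hash the function's name together with its arguments, so that different inputs get different cache entries and the key stays within Django's length and character limits:

import hashlib

def make_cache_key(func, args, kwargs):
    raw = "%s.%s:%r:%r" % (func.__module__, func.__qualname__, args, sorted(kwargs.items()))
    return "cache_returned_values:" + hashlib.md5(raw.encode("utf-8")).hexdigest()

Inside wrapper, key = make_cache_key(func, args, kwargs) would then replace the hard-coded string.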
缺⑴份安定 2024-07-25 20:03:03

I implemented something like this, using pickle for persistence and sha1 for short, almost-certainly-unique IDs. Basically, the cache hashes the code of the function together with the arguments to get a sha1, then looks for a file with that sha1 in its name. If it exists, it opens it and returns the result; if not, it calls the function and saves the result (optionally only saving if it took a certain amount of time to compute).

That said, I'd swear I found an existing module that did this and find myself here trying to find that module... The closest I can find is this, which looks about right: http://chase-seibert.github.io/blog/2011/11/23/pythondjango-disk-based-caching-decorator.html

The only problem I see with that is it wouldn't work well for large inputs since it hashes str(arg), which isn't unique for giant arrays.

It would be nice if there were a unique_hash() protocol that had a class return a secure hash of its contents. I basically manually implemented that for the types I cared about.
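
A rough sketch of that approach (not the author's module; the paths, key construction, and file layout here are assumptions):

import hashlib
import inspect
import os
import pickle
from functools import wraps

def disk_memoize(cache_dir="cache"):
    os.makedirs(cache_dir, exist_ok=True)

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # hash the function's source code together with its arguments
            key = inspect.getsource(func) + repr(args) + repr(sorted(kwargs.items()))
            digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
            path = os.path.join(cache_dir, "%s-%s.pkl" % (func.__name__, digest))
            if os.path.exists(path):
                with open(path, "rb") as fh:
                    return pickle.load(fh)
            result = func(*args, **kwargs)
            with open(path, "wb") as fh:
                pickle.dump(result, fh)
            return result
        return wrapper
    return decorator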

萤火眠眠 2024-07-25 20:03:03

If you are using Django and want to cache views, see Nikhil Kumar's answer.

But if you want to cache ANY function results, you can use django-cache-utils.

It reuses the Django cache backends and provides an easy-to-use cached decorator:

from cache_utils.decorators import cached

@cached(60)
def foo(x, y=0):
    print 'foo is called'
    return x+y
孤君无依 2024-07-25 20:03:03

A simple function-cache solution

with ttl (time to live) and max_entries

  • it does not work when the decorated function takes unhashable types as input (e.g. dicts)
  • optional parameter: ttl (time to live for every entry)
  • optional parameter: max_entries (caps the number of cached argument combinations so the storage doesn't get cluttered)
  • make sure the function has no important side effects

Example use

import time
from datetime import timedelta

@cache(ttl=timedelta(minutes=3), max_entries=300)
def add(a, b):
    time.sleep(2)
    return a + b

@cache()
def substract(a, b):
    time.sleep(2)
    return a - b

a = 5
# function is called with argument combinations the first time -> it takes some time
for i in range(5):
    print(add(a, i))

# function is called with same arguments again? -> will answer from cache
for i in range(5):
    print(add(a, i))

Copy the decorator code

from datetime import datetime, timedelta

def cache(**kwargs):
  def decorator(function):
    # static function variable for cache, lazy initialization
    try: function.cache
    except: function.cache = {}
    def wrapper(*args):
        # if nothing valid in cache, insert something
        if not args in function.cache or datetime.now() > function.cache[args]['expiry']:
            if 'max_entries' in kwargs:
                max_entries = kwargs['max_entries']
                if max_entries != None and len(function.cache) >= max_entries:
                    now = datetime.now()
                    # delete the first expired entry that can be found (lazy deletion)
                    for key in function.cache:
                        if function.cache[key]['expiry'] < now:
                            del function.cache[key]
                            break
                    # if nothing is expired that is deletable, delete the first
                    if len(function.cache) >= max_entries:
                        del function.cache[next(iter(function.cache))]
            function.cache[args] = {'result': function(*args), 'expiry': datetime.max if 'ttl' not in kwargs else datetime.now() + kwargs['ttl']}

        # answer from cache
        return function.cache[args]['result']
    return wrapper
  return decorator
治碍 2024-07-25 20:03:03
from functools import wraps


def cache(maxsize=128):
    cache = {}

    def decorator(func):
        @wraps(func)
        def inner(*args, no_cache=False, **kwargs):
            if no_cache:
                return func(*args, **kwargs)

            key_base = "_".join(str(x) for x in args)
            key_end = "_".join(f"{k}:{v}" for k, v in kwargs.items())
            key = f"{key_base}-{key_end}"

            if key in cache:
                return cache[key]

            res = func(*args, **kwargs)

            if len(cache) > maxsize:
                # evict the oldest entry so the cache stays bounded
                del cache[list(cache.keys())[0]]

            cache[key] = res

            return res

        return inner

    return decorator


def async_cache(maxsize=128):
    cache = {}

    def decorator(func):
        @wraps(func)
        async def inner(*args, no_cache=False, **kwargs):
            if no_cache:
                return await func(*args, **kwargs)

            key_base = "_".join(str(x) for x in args)
            key_end = "_".join(f"{k}:{v}" for k, v in kwargs.items())
            key = f"{key_base}-{key_end}"

            if key in cache:
                return cache[key]

            res = await func(*args, **kwargs)

            if len(cache) > maxsize:
                # evict the oldest entry so the cache stays bounded
                del cache[list(cache.keys())[0]]

            cache[key] = res

            return res

        return inner

    return decorator

Example use

import asyncio
import aiohttp


# Removes the aiohttp ClientSession instance warning.
class HTTPSession(aiohttp.ClientSession):
    """ Abstract class for aiohttp. """
    
    def __init__(self, loop=None) -> None:
        super().__init__(loop=loop or asyncio.get_event_loop())

    def __del__(self) -> None:
        if not self.closed:
            self.loop.run_until_complete(self.close())
            self.loop.close()
        return

session = HTTPSession()

@async_cache()
async def query(url, method="get", res_method="text", *args, **kwargs):
    async with getattr(session, method.lower())(url, *args, **kwargs) as res:
        return await getattr(res, res_method)()


async def get(url, *args, **kwargs):
    return await query(url, "get", *args, **kwargs)
 

async def post(url, *args, **kwargs):
    return await query(url, "post", *args, **kwargs)

async def delete(url, *args, **kwargs):
    return await query(url, "delete", *args, **kwargs)