I am trying to use Spring Cache (the @Cacheable annotation) at the method level in a Spring Boot application, but unlike Google Guava cache, I have no idea whether Spring Cache will cause a memory leak. Because it doesn't have a size limit or refresh policy, where and for how long would the data be stored in the application? I assume it would be in memory, but will Spring clear it automatically? If not, when there might be millions of requests hitting the application, will that trigger a memory leak?
My use case is that I have a heavy method per request, and I would like to execute that method only once during the current request; after the request is done, there is no need to keep the data in the cache. But how would I ensure my Spring cache is cleared after each request? I know there is an evict action; however, what if my request errors out before hitting my cache evict method and returns 500 directly? That would mean my last request's data always sits in cache memory, and with more and more requests like that, it might cause a memory leak, correct?
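For concreteness, a minimal sketch of the pattern described above, with hypothetical names (heavyResults, computeHeavyResult, and clearHeavyResult are placeholders, not from the original post); the @CacheEvict call is the step that never runs if the request fails first:

    import org.springframework.cache.annotation.CacheEvict;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    public class HeavyService {

        // Cached so repeated calls with the same argument during a request
        // only execute the heavy computation once.
        @Cacheable(value = "heavyResults", key = "#requestId")
        public String computeHeavyResult(String requestId) {
            return expensiveComputation(requestId);
        }

        // Intended to run at the end of the request; if the request errors out
        // with a 500 before reaching this method, the cached entry is never evicted.
        @CacheEvict(value = "heavyResults", key = "#requestId")
        public void clearHeavyResult(String requestId) {
            // eviction is handled by the annotation
        }

        private String expensiveComputation(String requestId) {
            return "result-for-" + requestId; // placeholder for the heavy work
        }
    }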
Comments (3)
You can still specify which cache implementation you want. For instance, you can use Caffeine, and you can still configure it, for example with a maximumSize. To enable it, you just have to add the dependency and add @EnableCaching in a config class (and, of course, add @Cacheable to your method).
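A minimal sketch of such a configuration, assuming spring-boot-starter-cache and the Caffeine dependency are on the classpath (the cache name, size, and expiry are placeholder values):

    import java.time.Duration;

    import com.github.benmanes.caffeine.cache.Caffeine;
    import org.springframework.cache.CacheManager;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.cache.caffeine.CaffeineCacheManager;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableCaching
    public class CacheConfig {

        @Bean
        public CacheManager cacheManager() {
            CaffeineCacheManager cacheManager = new CaffeineCacheManager("heavyResults");
            // Bound the cache so entries are evicted by size and age instead of
            // accumulating for the lifetime of the application.
            cacheManager.setCaffeine(Caffeine.newBuilder()
                    .maximumSize(1_000)
                    .expireAfterWrite(Duration.ofMinutes(5)));
            return cacheManager;
        }
    }

With Spring Boot, the same bounds can alternatively be expressed through the spring.cache.caffeine.spec property instead of declaring the bean.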
In a Spring application, perhaps you can disable the cache like this: the type of cache is automatically detected and configured by default, but you can specify which cache type to use by adding spring.cache.type to your configuration. To disable it, set the value to NONE.
Since you want to do this for a specific profile, add it to that profile's application.properties; in this case, modify application-dev.properties and add the property there. This will disable caching.
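For example (assuming, as above, that dev is the profile where caching should be off):

    # application-dev.properties
    # Disables caching for this profile by installing a no-op cache manager.
    spring.cache.type=none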
The Spring Framework's caching support and infrastructure, whether consumed through declarative, annotation-based caching and demarcation of Spring application components (using either Spring annotations, e.g. @Cacheable, or JSR-107 JCache annotations), or by using Spring's Cache API directly (not common), is simply an "abstraction", hence the Spring Cache Abstraction. There is NO underlying caching provider (implementation of this SPI) by default.
Of course, if you are using Spring Boot (on top of, or to consume, the core Spring Framework) and you do not configure an explicit caching provider (see here), such as Redis, then by default Spring Boot will configure and provide your Spring Boot application with a ConcurrentHashMap caching provider implementation (see here). When the documentation mentions this default, it means that no caching library, such as Redis (using Spring Data Redis, for instance), was detected on your Spring Boot application classpath.
In general, however, it is good practice to choose an underlying caching provider implementation, such as Redis, or in your case Google Guava, which can be positioned as a caching provider implementation behind Spring's Cache Abstraction (see here, for example).
Given that the Spring Framework's Cache Abstraction is simply a facade with a caching API/SPI common to multiple caching provider implementations, effectively providing the lowest common denominator of caching functionality (e.g. put, get, evict, invalidate) across caching providers, then, to your question: there is no memory leak originating from Spring's "cache", which is not really even a thing anyway. It is technically the provider implementation's cache, such as Google Guava, Redis, Hazelcast, Apache Geode (VMware GemFire), etc., that would actually be the cause of a memory leak, if a leak existed in your application in the first place.
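As an illustration of that lowest common denominator, here is a minimal sketch of the Cache API exercised against the simple, ConcurrentHashMap-backed provider described above; the cache name and keys are placeholders, spring-context is assumed to be on the classpath, and any provider could back the same calls:

    import org.springframework.cache.Cache;
    import org.springframework.cache.CacheManager;
    import org.springframework.cache.concurrent.ConcurrentMapCacheManager;

    public class CacheApiSketch {

        public static void main(String[] args) {
            // The "simple" fallback provider: each named cache is a plain
            // ConcurrentHashMap with no size limit, TTL, or eviction policy.
            CacheManager cacheManager = new ConcurrentMapCacheManager("heavyResults");
            Cache cache = cacheManager.getCache("heavyResults");

            cache.put("req-1", "expensive result");       // put
            Cache.ValueWrapper hit = cache.get("req-1");  // get
            System.out.println(hit.get());                // prints "expensive result"
            cache.evict("req-1");                         // evict a single entry
            cache.clear();                                // invalidate the whole cache
        }
    }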
In other words, if there is any memory leak, then it originates with the caching provider.
You should refer to your caching provider's documentation on configuring critical aspects of the cache, such as memory management, that are explicitly stated to be beyond the control of the Spring Framework's Cache Abstraction.
The reason these aspects are beyond the control of the core Spring Framework is simply that the configuration of these low-level cache features (e.g. memory management) is usually very provider specific and varies widely from one provider to the next, especially with respect to capabilities and features.
I hope this explanation gives you clarity on the position of the Cache Abstraction provided by Spring and its responsibilities.
By adhering to the abstraction, you effectively make it easier to switch between caching providers if your application requirements, use cases, or SLAs change.