How can I make my AppDomain live longer?

Published 2024-11-26 15:27:27

Here is the situation that we're in.

We are distributing our assemblies (DLLs only) to our clients (we don't have control over their environment).

They call us by passing a list of item IDs, and we search through our huge database and return the items with the highest price. Since we have an SLA to meet (30 milliseconds), we cache the items in memory (using Microsoft's MemoryCache). We are caching about a million items.
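For context, the in-process caching described above might look roughly like this sketch. The item IDs, the 30-minute expiration, and the `LoadPriceFromDatabase` stub are illustrative assumptions, not the actual implementation:

```csharp
using System;
using System.Runtime.Caching;

public static class PriceCacheDemo
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    // Hypothetical lookup: serve from the in-process cache when possible,
    // otherwise fall back to the database and cache the result.
    public static decimal GetPrice(string itemId)
    {
        if (Cache.Get(itemId) is decimal cached)
            return cached; // pure RAM access - this is what meets the SLA

        decimal price = LoadPriceFromDatabase(itemId);
        Cache.Set(itemId, price, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(30) // assumed policy
        });
        return price;
    }

    // Stand-in for the real (slow) database search.
    static decimal LoadPriceFromDatabase(string itemId) => 42.00m;
}
```

The catch the question describes: `MemoryCache.Default` lives inside the client's process, so everything above evaporates when that process exits.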

The problem is that the cache only lives for the lifetime of our client's application. When the process exits, so do all the cached items.

Is there a way I can make my MemoryCache live longer, so that subsequent processes can reuse the cached items?

I have considered having a Windows service and letting all these different processes talk to that one process on the same box, but that's going to create a huge mess when it comes to deployment.

We are using AppFabric as our distributed cache, but the only way we can achieve our SLA is to use the in-process MemoryCache.

Any help would be greatly appreciated. Thank you.

Comments (2)

过去的过去 2024-12-03 15:27:27

I don't see a way to make sure that your AppDomain lives longer - since all the calling assembly has to do is unload the AppDomain...

One option could be - although messy too - to implement some sort of "persisting MemoryCache"... to achieve performance you could use a ConcurrentDictionary persisted in a MemoryMappedFile...
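A minimal sketch of that ConcurrentDictionary-plus-MemoryMappedFile idea might look like the following. The backing-file layout (a 4-byte length prefix followed by UTF-8 `key=value` lines) and the `Flush`/`Load` timing are assumptions made purely for illustration - a real million-item cache would need a far more careful format and incremental writes:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Linq;
using System.Text;

// Reads and writes hit a ConcurrentDictionary; Flush() persists it to a
// file-backed MemoryMappedFile so the next process can reload it.
public class PersistedCache
{
    readonly ConcurrentDictionary<string, decimal> map =
        new ConcurrentDictionary<string, decimal>();
    readonly string path;
    const long Capacity = 1024 * 1024; // 1 MB is plenty for this sketch

    public PersistedCache(string backingFile)
    {
        path = backingFile;
        if (File.Exists(path)) Load(); // warm the cache from the last run
    }

    public void Set(string id, decimal price) => map[id] = price;
    public bool TryGet(string id, out decimal price) => map.TryGetValue(id, out price);

    // Write the whole dictionary out; call before the process exits.
    public void Flush()
    {
        byte[] payload = Encoding.UTF8.GetBytes(
            string.Join("\n", map.Select(kv => kv.Key + "=" + kv.Value)));
        using (var mmf = MemoryMappedFile.CreateFromFile(
                   path, FileMode.OpenOrCreate, null, Capacity))
        using (var s = mmf.CreateViewStream())
        {
            s.Write(BitConverter.GetBytes(payload.Length), 0, 4);
            s.Write(payload, 0, payload.Length);
        }
    }

    void Load()
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(
                   path, FileMode.Open, null, Capacity))
        using (var s = mmf.CreateViewStream())
        {
            var len = new byte[4];
            s.Read(len, 0, 4);
            var payload = new byte[BitConverter.ToInt32(len, 0)];
            s.Read(payload, 0, payload.Length);
            foreach (var line in Encoding.UTF8.GetString(payload).Split('\n'))
            {
                int eq = line.IndexOf('=');
                if (eq > 0)
                    map[line.Substring(0, eq)] = decimal.Parse(line.Substring(eq + 1));
            }
        }
    }
}
```

Note the file-backed mapping is deliberate: a purely named (non-file) memory-mapped object on Windows disappears once the last process holding a handle exits, which is exactly the problem being worked around.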

Another option would be to use a local database - it could even be SQLite - and implement the cache interface in memory such that all writes/updates/deletes are "write-through" while reads are pure RAM access...
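The write-through variant could be sketched roughly like this, here using the `Microsoft.Data.Sqlite` package (an assumption - any local embedded database would do). Reads never touch disk; writes update RAM first and then the SQLite file, which survives the process:

```csharp
using System.Collections.Concurrent;
using Microsoft.Data.Sqlite; // NuGet: Microsoft.Data.Sqlite (assumed dependency)

public class WriteThroughCache
{
    readonly ConcurrentDictionary<string, decimal> ram =
        new ConcurrentDictionary<string, decimal>();
    readonly string connString;

    public WriteThroughCache(string dbFile)
    {
        connString = "Data Source=" + dbFile;
        using (var c = new SqliteConnection(connString))
        {
            c.Open();
            var cmd = c.CreateCommand();
            cmd.CommandText =
                "CREATE TABLE IF NOT EXISTS prices(id TEXT PRIMARY KEY, price TEXT);";
            cmd.ExecuteNonQuery();
            // Warm the RAM cache from disk on startup.
            cmd.CommandText = "SELECT id, price FROM prices;";
            using (var r = cmd.ExecuteReader())
                while (r.Read())
                    ram[r.GetString(0)] = decimal.Parse(r.GetString(1));
        }
    }

    // Reads are pure RAM access - this is the path that has to meet the SLA.
    public bool TryGet(string id, out decimal price) => ram.TryGetValue(id, out price);

    // Writes go to RAM first, then "write through" to the SQLite file.
    public void Set(string id, decimal price)
    {
        ram[id] = price;
        using (var c = new SqliteConnection(connString))
        {
            c.Open();
            var cmd = c.CreateCommand();
            cmd.CommandText =
                "INSERT INTO prices(id, price) VALUES($i, $p) " +
                "ON CONFLICT(id) DO UPDATE SET price = $p;";
            cmd.Parameters.AddWithValue("$i", id);
            cmd.Parameters.AddWithValue("$p", price.ToString());
            cmd.ExecuteNonQuery();
        }
    }
}
```

The trade-off: startup pays the cost of rehydrating a million rows, but every process after the first starts warm instead of cold.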

Another option could be to include an EXE (as an embedded resource, for example) and start it from inside the DLL if it is not already running... The EXE provides the MemoryCache, and communication could be via IPC (for example shared memory...). Since the EXE is a separate process, it would stay alive even after your AppDomain is unloaded... the problem with this is more whether the client likes it and/or permissions allow it...
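To make the EXE option concrete, here is a rough sketch using named pipes as the IPC channel (shared memory would work too, as the answer notes). The pipe name and the one-line wire format (item ID in, price out) are illustrative assumptions:

```csharp
using System;
using System.IO;
using System.IO.Pipes;

public static class PipeCacheDemo
{
    public const string PipeName = "ItemPriceCache"; // hypothetical pipe name

    // Runs inside the long-lived EXE: serve one lookup per connection.
    // A real server would loop and handle connections concurrently.
    public static void ServeOnce(Func<string, decimal> lookup)
    {
        using (var server = new NamedPipeServerStream(PipeName))
        {
            server.WaitForConnection();
            using (var r = new StreamReader(server))
            using (var w = new StreamWriter(server) { AutoFlush = true })
            {
                string id = r.ReadLine();   // item ID from the DLL
                w.WriteLine(lookup(id));    // price back to the DLL
            }
        }
    }

    // Runs inside the distributed DLL: ask the EXE for a price.
    public static decimal GetPrice(string id)
    {
        using (var client = new NamedPipeClientStream(".", PipeName))
        {
            client.Connect(1000); // fail fast if the helper EXE is not up
            using (var r = new StreamReader(client))
            using (var w = new StreamWriter(client) { AutoFlush = true })
            {
                w.WriteLine(id);
                return decimal.Parse(r.ReadLine());
            }
        }
    }
}
```

Whether a per-lookup pipe round trip still fits inside the 30 ms SLA is exactly the open question the second answer raises below; this only shows the mechanics.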

I really like the Windows service approach, although I agree it could be a deployment mess...

后eg是否自 2024-12-03 15:27:27

The basic issue seems to be that you don't have control of the runtime host - which is what controls the lifespan (and hence the cache).

I'd investigate creating some sort of (light-weight ?) host - maybe a .exe or a service.

The bulk of your DLLs would hang off the new host, but you could still deploy a "facade" DLL which in turn calls your main solution (tied to your host). Yes, you could have the external clients call your new host directly, but that would mean changing / re-configuring those external callers, whereas leaving your original DLL / API in place would isolate the external callers from your internal changes.

This would (I assume) mean completely gutting and restructuring your solution, particularly whatever DLLs the external callers currently hit, because instead of processing the requests itself the facade would just pass them off to your new host.

Performance

Inter-process communication is more expensive than keeping it within a process - I'm not sure how the change in approach would affect your performance and ability to hit the SLA.

In particular, spinning up a new instance of the host will incur a performance hit.
