Best way for a Node.js server to return an error before the process runs out of heap memory

Posted 2025-01-11 14:05:05


I'm running a Node.js / Express server on a container with pretty strict memory constraints.

One of the endpoints I'd like to expose is a "batch" endpoint where a client can request a list of data objects in bulk from my data store. The individual objects vary in size, so it's difficult to set a hard limit on how many objects can be requested at one time. In most cases a client could request a large number of objects without any issues, but in certain edge cases even a request for a small number of objects will trigger an OOM error.

I'm familiar with Node's process.memoryUsage() and process.memoryUsage.rss(), but I'm worried about the performance implications of constantly checking heap (or service) memory usage while serving an individual batch request.

In the longer term, I might consider using memory monitoring to bake in some automatic pagination for the endpoint. In the short term, however, I'd just like to be able to return an informative error to the client in the event that they are requesting too many data objects at a given time (rather than have the entire application crash with an OOM error).

Are there any more effective methods or tools I could be using to solve the problem?
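For reference, the cheap check the question alludes to could look like the following minimal sketch. The container limit and safety margin are assumptions, not values from the question; process.memoryUsage.rss() is the lighter-weight variant of process.memoryUsage().rss available since Node 14.18 / 15.6.

```javascript
// Hypothetical values: tune to your actual container/cgroup memory limit.
const CONTAINER_LIMIT_BYTES = 512 * 1024 * 1024; // assumed 512 MiB limit
const SAFETY_MARGIN = 0.8;                       // refuse work past 80% of the limit

// Returns true when resident set size exceeds the budget.
// Defaults read the live process; parameters exist so the check is testable.
function memoryBudgetExceeded(
  rssBytes = process.memoryUsage.rss(),
  limitBytes = CONTAINER_LIMIT_BYTES,
  margin = SAFETY_MARGIN
) {
  return rssBytes > limitBytes * margin;
}

// Example guard inside an Express handler:
// if (memoryBudgetExceeded()) {
//   return res.status(503).json({ error: 'server under memory pressure' });
// }
```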


Comments (1)

风向决定发型 2025-01-18 14:05:05


You have a couple of options.

Option 1.
What is the biggest object you have in store? I would say you allow some {max object count} on the API and set the container memory to {max object count} x {biggest object size}. You can even add a pagination concept if required, where page size = {max object count}.
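Option 1 can be sketched like this. The worst-case object size and per-request budget below are illustrative assumptions; validateBatchRequest is a hypothetical helper, not part of any library:

```javascript
// Assumed sizing constants -- replace with figures measured from your store.
const MAX_OBJECT_BYTES = 2 * 1024 * 1024;       // worst-case size of one object
const REQUEST_BUDGET_BYTES = 128 * 1024 * 1024; // memory you can spend per request

// Hard cap on how many objects one batch request may ask for:
// 128 MiB / 2 MiB = 64 objects.
const MAX_OBJECT_COUNT = Math.floor(REQUEST_BUDGET_BYTES / MAX_OBJECT_BYTES);

// Reject oversized batches up front, before any data is loaded.
function validateBatchRequest(requestedIds, maxCount = MAX_OBJECT_COUNT) {
  if (requestedIds.length > maxCount) {
    return { ok: false, error: `at most ${maxCount} objects per request` };
  }
  return { ok: true };
}
```

An Express handler would call validateBatchRequest first and return a 400 with the error message, which gives the client the informative error the question asks for.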

Option 2.

Using process.memoryUsage() should be fine too. I don't believe it is a costly call, unless you have read that somewhere. Before each object pull, check current memory and go ahead only if a safe amount of memory is available. The response in this case contains only the data pulled so far, and the client pulls the remaining ids in the next call. This is also implementable via some paging logic.
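A minimal sketch of Option 2, assuming a fetchObject accessor for the data store and an RSS cutoff chosen below the container limit (both are illustrative, not from the answer). The memory reader is a parameter so the loop is testable:

```javascript
// Hypothetical cutoff: stop pulling once RSS crosses this line.
const SAFE_RSS_BYTES = 400 * 1024 * 1024;

// Pull objects one at a time, checking memory before each pull.
// Returns what was fetched plus the ids the client should retry next call.
async function pullBatch(ids, fetchObject, getRss = process.memoryUsage.rss) {
  const pulled = [];
  let i = 0;
  for (; i < ids.length; i++) {
    if (getRss() > SAFE_RSS_BYTES) break; // bail out before risking an OOM
    pulled.push(await fetchObject(ids[i]));
  }
  // `remaining` is the poor man's pagination cursor.
  return { pulled, remaining: ids.slice(i) };
}
```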

Option 3.
Explore streams. I won't be able to add much info on this for now.
