In Node.js there is the cluster module to utilize all available cores on the machine, which is pretty great, especially when used with the node module pm2. I am pretty stoked about some features of Deno, but I have wondered how to best run it on a multi-core machine.
I understand that there are workers, which work great for a specific task, but for normal web requests it seems like the performance of multi-core machines is somewhat wasted? What is the best strategy to get maximum availability and utilization of my hardware in Deno?
I am a bit worried that if you only have a single process running and there is some CPU-intensive task for whatever reason, it will "block" all other requests coming in. In Node.js the cluster module would solve this, since another process would handle the request, but I am unsure how to handle this in Deno.
I think you could run several instances in Deno on different ports and then have some kind of load balancer in front of it but that seems like quite a complex setup in comparison. I also get that you could use some kind of service like Deno Deploy or whatever, but I already have hardware that I want to run it on.
What are the alternatives for me?
Thanks in advance for your sage advice and better wisdom.
3 Answers
In Deno, like in a web browser, you should be able to use Web Workers to utilize 100% of a multi-core CPU.
In a cluster you need a "manager" node (which can be a worker itself too, as needed/appropriate). In a similar fashion, the Web Worker API can be used to create as many dedicated workers as desired. This means the main thread should never block, as it can delegate all potentially blocking tasks to its workers. Tasks that won't block (e.g. simple database or other I/O-bound calls) can be done directly on the main thread like normal.
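A minimal sketch of this delegation pattern (hypothetical file names; `fib` stands in for any CPU-heavy task):

```typescript
// A deliberately CPU-intensive function to push off the main thread:
function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// worker.ts (hypothetical) would run it and post the result back:
//   self.onmessage = (e) => self.postMessage(fib(e.data.n));

// main.ts (hypothetical): spawn a dedicated worker and delegate, so the
// main thread stays free to accept further requests.
if (typeof Worker !== "undefined") {
  const worker = new Worker(new URL("./worker.ts", import.meta.url).href, {
    type: "module",
  });
  worker.onmessage = (e) => console.log("result from worker:", e.data);
  worker.postMessage({ n: 30 });
}
```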
Deno also supports
navigator.hardwareConcurrency
so you can query about available hardware and determine the number of desired workers accordingly. You might not need to define any limits though. Spawning a new dedicated worker from the same source as a previously spawned dedicated worker may be fast enough to do so on demand. Even so there may be value in reusing dedicated workers rather than spawning a new one for every request.With Transferable Objects large data sets can be made available to/from workers without copying the data. This along with messaging makes it pretty straight forward to delegate tasks while avoiding performance bottlenecks from copying large data sets.
Depending on your use cases you might also use a library like Comlink "that removes the mental barrier of thinking about
postMessage
and hides the fact that you are working with workers."e.g.
main.ts
worker.ts
ComlinkRequestHandler.ts
Example usage:
There's probably a better way to do this (e.g. via
Comlink.transferHandlers
and registering transfer handlers forRequest
,Response
, and/orReadableStream
) but the idea is the same and will handle even large request or response payloads as the bodies are streamed via messaging.这一切都取决于您想推到线程的工作负载。如果您对在主线程上运行的DENO HTTP服务器的内置性能感到满意,但是您需要利用多线程以更有效地创建响应,那么它很简单,如Deno V1.29.4。
It all depends on what workload you would like to push to the threads. If you are happy with the performance of the built-in Deno HTTP server running on the main thread, but you need to leverage multithreading to create the responses more efficiently, then it's simple as of Deno v1.29.4.

The HTTP server will give you an async iterator `server`. Then you may use the built-in functionality `pooledMap`, where `respondWith` is just a function which handles the received request and generates the response object. If `respondWith` is already an async function then you don't even need to wrap it in a promise.

However, in case you would like to run multiple Deno HTTP servers on separate threads, then that's also possible, but you need a load balancer like GoBetween at the head. In this case you should instantiate multiple Deno HTTP servers at separate threads and receive their requests at the main thread as separate async iterators. To achieve this, you can do the following per thread:
At the worker side i.e.
./servers/server_800X.ts
;and at the main thread you can easily convert the correspodning worker http server into an async iterator like
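One way to sketch that conversion (illustrative; a minimal structural type stands in for the worker so the idea is self-contained):

```typescript
// Wrap a worker's "message" events into an async iterator on the main
// thread, so each worker server looks like a stream of incoming requests.
type MessageSource = {
  addEventListener(
    type: "message",
    cb: (e: { data: unknown }) => void,
    opts?: { once: boolean },
  ): void;
};

async function* requestsFrom(worker: MessageSource): AsyncGenerator<unknown> {
  while (true) {
    // Resolve one message per iteration; { once: true } avoids stacking listeners.
    yield await new Promise<unknown>((resolve) =>
      worker.addEventListener("message", (e) => resolve(e.data), { once: true })
    );
  }
}
```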
You should also be able to multiplex either the HTTP (req) or the res async iterators by using the MuxAsyncIterators functionality in to a single stream and then spawn by
pooledMap
. So if you have 2 http servers working onserver_8000.ts
andserver_8001.ts
then you can multiplex them into a single async iterator likeObviously you should also be able to spawn new threads to process requests received from the
muxedServer
by utilizingpooledMap
as shown above.(*) In case you choose to use a load balancer and multiple Deno http servers then you should assign special headers to the requests at the load balancer, designating the server ID that it's been diverted to. This way, by inspecting this speical header you can decide from which server to respond for any particular request.
Deno uses up to 8 threads for I/O operations, garbage collection and other utility work, which gives you more efficient main-thread processing. As a result, calculating the most effective core configuration can be a non-trivial process and requires a large number of practical tests. On an intuitive level I would create something like 11–12 workers for a 16-core server, but additional validation is needed.
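As an illustrative encoding of that intuition only (a hypothetical heuristic, not a measured rule), one might reserve a slice of cores for the runtime's own threads:

```typescript
// Hypothetical sizing heuristic: reserve some cores for the runtime's
// I/O/GC threads and use the rest for dedicated workers. The numbers are
// guesses to be validated by benchmarking, not measured values.
function workerCountHeuristic(cores: number): number {
  const reserved = Math.min(4, Math.max(1, Math.floor(cores / 4)));
  return Math.max(1, cores - reserved);
}

console.log(workerCountHeuristic(16)); // 12 workers on a 16-core server
```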