ONNX Runtime in Node.js: setting intraOpNumThreads and interOpNumThreads per execution mode

Published 2025-01-26 03:16:08


I'm using ONNX Runtime in Node.js to run inference on ONNX-converted models with the CPU backend.

According to the docs, the optional parameters are the following:

    var options = {

        /**
         * The list of execution providers.
         */
        executionProviders: ['cpu'],

        /**
         * The optimization level.
         * 'disabled'|'basic'|'extended'|'all'
         */
        graphOptimizationLevel: 'all',

        /**
         * The intra OP threads number.
         * Changes the number of threads used in the threadpool for intra-operator execution of CPU operators.
         */
        intraOpNumThreads: 1,

        /**
         * The inter OP threads number.
         * Controls the number of threads used to parallelize the execution of the graph (across nodes).
         */
        interOpNumThreads: 1,

        /**
         * Whether to enable the CPU memory arena.
         */
        enableCpuMemArena: false,

        /**
         * Whether to enable memory pattern.
         */
        enableMemPattern: false,

        /**
         * Execution mode.
         * 'sequential'|'parallel'
         */
        executionMode: 'sequential',

        /**
         * Log severity level.
         * @see ONNX.Severity
         * 0|1|2|3|4
         */
        logSeverityLevel: ONNX.Severity.kERROR,

        /**
         * Log verbosity level.
         */
        logVerbosityLevel: ONNX.Severity.kERROR,

    };

Specifically, I can control (as in TensorFlow) the threading parameters intraOpNumThreads and interOpNumThreads, which are defined as above.

I want to optimize both of them for the sequential and parallel execution modes (controlled by the executionMode parameter defined above).
My approach was like:

var numCPUs = require('os').cpus().length;
options.intraOpNumThreads = numCPUs;

in order to have at least as many threads as there are available CPUs. Hence on my MacBook Pro I get this session configuration for sequential execution mode:

{
  executionProviders: [ 'cpu' ],
  graphOptimizationLevel: 'all',
  intraOpNumThreads: 8,
  interOpNumThreads: 1,
  enableCpuMemArena: false,
  enableMemPattern: false,
  executionMode: 'sequential',
  logSeverityLevel: 3,
  logVerbosityLevel: 3
}

and for parallel execution mode I set both thread counts:

{
  executionProviders: [ 'cpu' ],
  graphOptimizationLevel: 'all',
  intraOpNumThreads: 8,
  interOpNumThreads: 8,
  enableCpuMemArena: false,
  enableMemPattern: false,
  executionMode: 'parallel',
  logSeverityLevel: 3,
  logVerbosityLevel: 3
}

or another approach could be to allot a percentage of the available CPUs:

var perc = (val, tot) => Math.round(tot * val / 100);
var numCPUs = require('os').cpus().length;
if (options.executionMode === 'parallel') { // parallel
   options.interOpNumThreads = perc(50, numCPUs);
   options.intraOpNumThreads = perc(10, numCPUs);
} else { // sequential
   options.interOpNumThreads = perc(100, numCPUs);
   options.intraOpNumThreads = 1;
}
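As a sanity check of the percentage heuristic above, here is what it yields on an 8-core machine; note that rounding 10% of 8 CPUs leaves a single intra-op thread (the 50/10/100 split is this question's own guess, not a documented recommendation):

```javascript
// Same helper as above: round off a percentage of a total.
const perc = (val, tot) => Math.round(tot * val / 100);

const numCPUs = 8; // e.g. an 8-core MacBook Pro

// Parallel mode per the heuristic above.
const parallel = {
  interOpNumThreads: perc(50, numCPUs),  // 4
  intraOpNumThreads: perc(10, numCPUs),  // 1, since Math.round(0.8) === 1
};

// Sequential mode per the heuristic above.
const sequential = {
  interOpNumThreads: perc(100, numCPUs), // 8
  intraOpNumThreads: 1,
};
```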

but I cannot find any documentation confirming that these are the optimal configurations for those two scenarios based on the executionMode ('sequential' and 'parallel' execution modes). Is this approach theoretically correct?


Answer by 偏闹i (2025-02-02 03:16:08):


It really depends on the model structure. Usually I use sequential execution mode, because most models are sequential: for example, in a CNN model each layer depends on the previous layer, so you have to execute the layers one by one.

My answer is to try testing different configs and pick your choice based on the perf numbers.

Another consideration is how you expect your application to perform: consume all CPUs for best performance (lowest inference latency), or strike a balance between performance and power consumption. The choice is entirely up to you.
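The "test different configs and pick based on perf numbers" advice can be wrapped in a tiny timing harness like the one below. This is only a sketch: the function under test would in practice be your actual inference call against a session created with each candidate configuration.

```javascript
// Time an async function over `runs` iterations and return the mean latency in ms.
async function meanLatencyMs(fn, runs = 10) {
  // One warm-up call so thread pools and caches are initialized before timing.
  await fn();
  const start = process.hrtime.bigint();
  for (let i = 0; i < runs; i++) {
    await fn();
  }
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs) / runs / 1e6;
}
```

In practice, fn would be something like () => session.run(feeds) for each candidate option set (sequential vs. parallel, various thread counts); run the harness once per configuration and keep the fastest.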
