Resource assignment algorithm for round-robin modeled processor scheduling
Background: I have independent tasks shown as dataflow graphs (which iterate forever). Extremely simple example:
task 1: b1 -> c
task 2: e -> b2
The processor assignment is given: the graph itself shows which processor each task executes on. For the example above, b1 and b2 execute on processor b, c on processor c, and e on processor e.
Execution times of all tasks are known at compile time. Because the scheduling is modeled as round-robin, the worst-case execution time of each task is inflated to capture resource arbitration under round-robin scheduling. Example: let's say all tasks in the example above originally had an execution time of 1. Then we change the timings so that b1 and b2 take 2 time units and c and e take 1. This is because processor b runs the following code:
while (1)
{
    run_task_b1();
    run_task_b2();
}
Therefore, when analyzing task 1 (for throughput, etc.), it is enough to say that task b1 takes 2 time units to account for b1 and b2 sharing processor b.
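To make that inflation rule concrete, here is a minimal sketch (my own illustration, not code from the question) assuming the modeled time of a task is simply the sum of the original execution times of every task mapped to the same processor:

#include <cstdio>
#include <string>
#include <vector>

struct Task { std::string name; std::string proc; int time; };

// Modeled round-robin time: every task mapped to the same processor
// (including the task itself) contributes one full slot to the round.
int modeled_rr_time(const Task& t, const std::vector<Task>& all)
{
    int total = 0;
    for (const auto& other : all)
        if (other.proc == t.proc)
            total += other.time;
    return total;
}

int main()
{
    std::vector<Task> tasks = {
        {"b1", "b", 1}, {"c", "c", 1}, {"e", "e", 1}, {"b2", "b", 1}};
    for (const auto& t : tasks)
        std::printf("%s -> %d\n", t.name.c_str(), modeled_rr_time(t, tasks));
    // prints: b1 -> 2, c -> 1, e -> 1, b2 -> 2 (matching the example above)
    return 0;
}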
My problem: each task also uses an additional resource (a DMA). Let's say we have 2 DMAs. If we naively allot D1 and D2 as follows:
b1 gets D1
c gets D2
e gets D1
b2 gets D2
Then after round-robin modeling, b1 and b2 get execution time 3 (because b1 shares D1 with e and b1 and b2 share processor b, so 1+1+1 = 3), and c and e get time 2 (because c shares D2 with b2 and e shares D1 with b1). So the times are 3, 2, 2, 3.
But if we give the allotment as follows:
b1 gets D1
c gets D2
e gets D2
b2 gets D1
Then the modeled execution times are 2, 2, 2, 2.
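If I am reading the modeling correctly, the rule being applied is: a task's modeled time is its own original time plus the original time of every distinct task that shares its processor or its DMA, with each contending task counted only once even when it shares both resources. Here is a hedged sketch (my own reconstruction, with names like modeled_time chosen by me) that reproduces the 3, 2, 2, 3 and 2, 2, 2, 2 numbers above:

#include <cstdio>
#include <string>
#include <vector>

struct Task
{
    std::string name;
    std::string proc;
    std::string dma;   // "D1" or "D2" in the example
    int time;
};

// Modeled time = own time + time of every distinct task that shares the
// processor or the DMA; a task sharing both is still counted only once.
int modeled_time(const Task& t, const std::vector<Task>& all)
{
    int total = t.time;
    for (const auto& o : all) {
        if (o.name == t.name) continue;
        if (o.proc == t.proc || o.dma == t.dma)
            total += o.time;
    }
    return total;
}

void report(const char* label, const std::vector<Task>& tasks)
{
    std::printf("%s:", label);
    for (const auto& t : tasks)
        std::printf(" %s=%d", t.name.c_str(), modeled_time(t, tasks));
    std::printf("\n");
}

int main()
{
    // naive allotment (b1/e on D1, c/b2 on D2) -> 3, 2, 2, 3
    report("naive ", {{"b1", "b", "D1", 1}, {"c", "c", "D2", 1},
                      {"e", "e", "D1", 1}, {"b2", "b", "D2", 1}});
    // better allotment (b1/b2 on D1, c/e on D2) -> 2, 2, 2, 2
    report("better", {{"b1", "b", "D1", 1}, {"c", "c", "D2", 1},
                      {"e", "e", "D2", 1}, {"b2", "b", "D1", 1}});
    return 0;
}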
So how can I construct an algorithm that finds the resource (DMA) assignment at compile time (a static assignment) for given task graphs such that the modeled round-robin times are minimal? List scheduling gives inefficient results, as the example above shows.
1 Answer
This smells like Job Shop Scheduling, which is NP-complete.
Since you probably don't have the processor time to find the global optimum for that problem,
just put the tasks in a queue (sorted by some sort of task priority/difficulty, descending) and take the top one each time. That won't be optimal, but it will be practical :)
Also, you might want to take a look at work-stealing queues.
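For the static, compile-time setting in the question, one concrete reading of the greedy suggestion could look like the sketch below. It is written under assumptions of my own: "priority/difficulty" is taken to be execution time, and the cost of a DMA choice is taken to be the modeled round-robin time it would give the task at that moment. As noted above, this is practical rather than optimal, so on some graphs it can still miss the best assignment.

#include <algorithm>
#include <climits>
#include <cstdio>
#include <string>
#include <vector>

struct Task
{
    std::string name;
    std::string proc;
    int time;
    int dma;   // -1 means "no DMA assigned yet"
};

// Modeled round-robin time task t would have on DMA d, counting processor
// mates and only those tasks whose DMA has already been assigned.
static int cost_on_dma(const Task& t, int d, const std::vector<Task>& tasks)
{
    int total = t.time;
    for (const auto& o : tasks) {
        if (o.name == t.name) continue;
        if (o.proc == t.proc || (o.dma != -1 && o.dma == d))
            total += o.time;   // each contending task counted once
    }
    return total;
}

int main()
{
    std::vector<Task> tasks = {
        {"b1", "b", 1, -1}, {"c", "c", 1, -1},
        {"e", "e", 1, -1}, {"b2", "b", 1, -1}};
    const int num_dmas = 2;

    // Queue of task indices sorted by "difficulty" (execution time) descending.
    std::vector<int> order(tasks.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);
    std::stable_sort(order.begin(), order.end(),
                     [&](int a, int b) { return tasks[a].time > tasks[b].time; });

    // Take the top task each time and give it the currently cheapest DMA.
    for (int idx : order) {
        int best_dma = 0, best_cost = INT_MAX;
        for (int d = 0; d < num_dmas; ++d) {
            int cost = cost_on_dma(tasks[idx], d, tasks);
            if (cost < best_cost) { best_cost = cost; best_dma = d; }
        }
        tasks[idx].dma = best_dma;
    }

    for (const auto& t : tasks)
        std::printf("%s gets D%d (modeled time %d)\n",
                    t.name.c_str(), t.dma + 1, cost_on_dma(t, t.dma, tasks));
    // On this tiny example the greedy choice can still end up worse than the
    // hand-found 2, 2, 2, 2 assignment; that is exactly the non-optimality
    // the answer warns about.
    return 0;
}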