Pros and cons of having a separate ready queue per semaphore in a mini operating system

Posted 2024-09-02 21:53:01


In an OS we originally have a ready queue for the threads; note that there is only one queue. Why would it be better to have a separate queue for each semaphore? The pros and cons can be about efficiency, complexity, or something else.


Comments (2)

爱已欠费 2024-09-09 21:53:01


Ok, to answer this question, please refer to my answer to another question here on SO, in relation to 'Does Linux time division processes or threads?'. Let's follow on from the structure of an imaginary kernel used there, on an x86 processor with C code... I have amended that structure to take semaphores into account:

struct regs{
   int eax, ebx, ecx, edx, es, ds, gs, fs, cs, ip, flags;
   struct tss *task_sel;
};

struct mem_loc{
   int *phys_mem_begin;
   int *phys_mem_end;
   int *semaphores;           /* non-NULL when this region is guarded by semaphores */
};

struct thread{
   struct regs *regs;
   struct mem_loc *memlocTracker;
   int parent_id;
   struct thread *next;
};

struct task{
   struct regs *regs;
   struct mem_loc *task_mem;
   int *filehandles;
   int priority;
   int num_threads;           /* count of threads owned by this task */
   int quantum;
   int duration;
   int start_time, end_time;
   int parent_id;
   struct thread *task_thread;
     /* ... */
   struct task *next;
};

struct queue_ready{
   struct task *task_id;
   struct queue_ready *next;
};

There, I think that looks better, right? Following on from my previous answer linked above, let's see what happens when queues are involved. Look at this queue_ready structure: suppose there is a task in that structure as set up by the kernel, a linked list of queue_ready to be more precise, built from the priority field of the task structure itself, and the priority is 1, for example, meaning the task is ready to be executed.
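As a rough illustration of how the kernel might build that priority-ordered list, here is a minimal sketch; the enqueue_ready helper and the head_queue_ready global are my own assumptions, not something from the linked answer.

#include <stdlib.h>

/* Assumed global: head of the single ready list that scheduler() walks below. */
static struct queue_ready *head_queue_ready = NULL;

/* Hypothetical helper: wrap a task in a queue_ready node and insert it in
   front of the first node with a lower priority, so the list stays ordered
   by the priority field of the task structure. */
static void enqueue_ready(struct task *t)
{
    struct queue_ready *node = malloc(sizeof(*node));
    struct queue_ready **link = &head_queue_ready;

    node->task_id = t;

    while (*link != NULL && (*link)->task_id->priority >= t->priority)
        link = &(*link)->next;

    node->next = *link;
    *link = node;
}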

Let's look at an imaginary scheduler written in C, as shown below. Ok, it may be crappy and vulnerable to nit-picking, but put that aside and concentrate on the aspect of the question...

static void scheduler(void){
  /* head_queue_ready and time are globals maintained elsewhere in the kernel. */
  struct queue_ready *rdy_sched;

  while(1){

    for (rdy_sched = head_queue_ready;
         rdy_sched != NULL;
         rdy_sched = rdy_sched->next){

      if (time >= rdy_sched->task_id->start_time + rdy_sched->task_id->quantum){

        /* Quantum used up: save the task's CPU and memory state and
           move on to the next task in the ready list. */
        save_cpu_task_state(&rdy_sched->task_id->regs);

        save_mem_task_state(&rdy_sched->task_id->task_mem);

      }else{
        struct regs *task_reg = rdy_sched->task_id->regs;

        struct mem_loc *mem_task = rdy_sched->task_id->task_mem;

        /* Restore the task's CPU and memory state and resume it. */
        load_cpu_task_state(task_reg);

        load_mem_task_state(mem_task);

        jmp_and_exec_cpu_task(task_reg->cs, task_reg->ip);

        save_cpu_task_state(&rdy_sched->task_id->regs);

        if (rdy_sched->task_id->num_threads > 0){

          struct thread *threads = rdy_sched->task_id->task_thread;

          while (threads != NULL){

            struct regs *thread_reg = threads->regs;

            load_cpu_thread_state(thread_reg);

            if (threads->memlocTracker->semaphores){

              /* Housekeeping in relation to semaphores */
              lock_memory(threads->memlocTracker);

              jmp_and_exec_cpu_thread(thread_reg->cs, thread_reg->ip);

              save_cpu_thread_state(&threads->regs);

              unlock_memory(threads->memlocTracker);

            }else{

              jmp_and_exec_cpu_thread(thread_reg->cs, thread_reg->ip);

              save_cpu_thread_state(&threads->regs);
            }

            threads = threads->next;
          }
        }
      }
    }
  }
}

The kernel's scheduler looks overly complex, but it isn't really; the only thing I have left out of the code is the handling of priorities... that can be ignored for the discussion of the OP's question.

Let's break this scheduler function down... but first a quick look at, and explanation of, the functions used within the scheduler function:

  • 'load_cpu_task_state' and 'load_cpu_thread_state' - these load the state of the CPU registers, for the task and the thread respectively.
  • 'load_mem_task_state' and 'save_mem_task_state' - these load and save, respectively, the memory state, perhaps to/from a page on disk specified by the phys_mem_begin and phys_mem_end fields of the mem_loc structure for the said task. For brevity, this loads all memory owned by the said task, including its threads.
  • 'jmp_and_exec_cpu_task' and 'jmp_and_exec_cpu_thread' - these cause the kernel to issue the magic of jumping to the specified cs and ip registers of the said task's regs structure, and the same for the thread respectively.
  • 'save_cpu_task_state' and 'save_cpu_thread_state' - these cause the kernel to save the state of the CPU registers, for the task and the thread respectively.
  • 'lock_memory' and 'unlock_memory' - these lock and unlock a memory region using the semaphores, which come into play shortly (see the prototype sketch right after this list)...
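The original answer never shows these helpers, so the prototypes below are only a guess at signatures that match how they are called in the scheduler above; 'time' is assumed to be a kernel-maintained tick count.

/* Assumed prototypes for the helpers used by scheduler(). */
extern int time;                           /* current tick count (assumption) */

void load_cpu_task_state(struct regs *r);
void load_cpu_thread_state(struct regs *r);
void save_cpu_task_state(struct regs **r);
void save_cpu_thread_state(struct regs **r);
void load_mem_task_state(struct mem_loc *m);
void save_mem_task_state(struct mem_loc **m);
void jmp_and_exec_cpu_task(int cs, int ip);
void jmp_and_exec_cpu_thread(int cs, int ip);
void lock_memory(struct mem_loc *m);
void unlock_memory(struct mem_loc *m);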

Now, the scheduler runs forever and iterates over the linked list of tasks. The check is required to see whether the task has run for a period and exceeded the quantum amount, for example 20ms. If it has, the scheduler saves the CPU state and the state of memory (perhaps to a paging file on disk), ready to go on to the next task in the list.

If there is still time left, it loads the task's CPU registers and loads up the memory (perhaps from a page), jumps to where the last instruction pointer and code segment were (ip and cs respectively), and the task resumes.

If the said task has threads, it iterates over the linked list of threads, loading the CPU registers, jumping into the code to resume thread execution, and then saving the CPU registers.

Now, this is where things get a bit hairy, especially in this toy OS: how to manage semaphores! Uh huh, looking like a big headache looming over the OS developer's head...

I would imagine that if a section of memory is locked, the semaphores field would be non-NULL, having been acquired by the runtime manager in the user-land code that requested the semaphore. The kernel would lock the memory to prevent a thread trampling the data, jump into the code for the said thread, and unlock it when that thread's duration is completed, saving the state of the thread's CPU registers...
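Reading that paragraph literally, one possible (and deliberately naive) sketch of lock_memory and unlock_memory could look like the following; the 1/0 convention and the busy-wait are my assumptions, not part of the original answer.

/* Sketch only: treat the first int that mem->semaphores points at as a
   binary semaphore (1 = region free, 0 = region held).  A real kernel
   would need an atomic test-and-set and would park the thread on a wait
   queue rather than spinning. */
void lock_memory(struct mem_loc *mem)
{
    while (*mem->semaphores == 0)
        ;                        /* busy-wait until the region is free */
    *mem->semaphores = 0;        /* mark the region as held */
}

void unlock_memory(struct mem_loc *mem)
{
    *mem->semaphores = 1;        /* release the region */
}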

Conclusion:

This is my conclusion based on the above. As for facts to back me up, I cannot offer any; this theory has been exhaustively lectured on and examined under the microscope in countless books, ranging from universities in the US all the way around to universities in New Zealand, so this is from my own observations and understanding. As you can see from this code, there is real complexity and a burden placed on the developer in coding it, and not only that, there are the constraints imposed and the design decisions, both hardware and software.

Managing semaphores in this context is far more complicated. To answer the question: if there were more queues available, compared with the single queue_ready I have strived to keep, the cost factor in terms of memory usage, disk usage and, above all, time usage grows, as the kernel would have to keep track of all the extra factors that come into play; another equally important factor is how many priority levels there are going to be. So, in my opinion, having a separate ready queue for different semaphores is not a viable or feasible option.
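To make that cost argument concrete, here is a rough sketch (my own addition, not from the original answer) of the extra bookkeeping a per-semaphore queue would force onto this toy kernel, on top of the single head_queue_ready list:

/* Hypothetical per-semaphore queue: every semaphore now drags around its
   own list of queued tasks, and the kernel has to keep each of these lists
   consistent in addition to the single head_queue_ready list. */
struct sem_queue {
    int count;                    /* the semaphore's value                     */
    struct queue_ready *queued;   /* tasks queued on this particular semaphore */
    struct sem_queue *next;       /* kernel-wide list of all semaphores        */
};

Each such list is more memory to allocate, more state to save and restore, and one more structure the scheduler has to walk on every decision.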

屋檐 2024-09-09 21:53:01


I am going to assume that you have a uniprocessor system (having multiple processors may potentially impact the number of ready queues).

What is in your ready queue? Most ready queues I have seen contain a list of all the threads that are ready to run. This of course is different from all the threads in the system. If this model matches your setup, I would recommend sticking with a single ready queue. When it comes to scheduling, or picking the next thread to run, you only have to check one queue, and you don't have to worry about suspended tasks in that queue.
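A minimal sketch of that single-queue pick, under names I have made up for illustration (tcb, ready_head, pick_next_thread), just to show how little work it is:

/* With a single ready queue, everything in the list is runnable by
   definition, so picking the next thread is one dequeue operation. */
struct tcb {
    struct tcb *next;
    /* registers, stack pointer, ... */
};

static struct tcb *ready_head;

static struct tcb *pick_next_thread(void)
{
    struct tcb *t = ready_head;      /* front of the ready queue        */
    if (t != NULL)
        ready_head = t->next;        /* dequeue it                      */
    return t;                        /* NULL means nothing is runnable  */
}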

It may also be worthwhile to have one pend queue per semaphore. The pend queue keeps track of all the threads that are waiting on that semaphore. While a thread is waiting, it is not ready, and so it is not in the ready queue.
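As a sketch of that idea, reusing the hypothetical tcb and ready_head names from the previous snippet (sem_wait and sem_post here are my illustration, not a specific API, and atomicity is ignored):

/* One pend queue per semaphore: a thread that blocks on the semaphore is
   parked on that semaphore's own list instead of staying in the ready queue. */
struct semaphore {
    int count;
    struct tcb *pend_head;           /* threads waiting on this semaphore */
};

static void sem_wait(struct semaphore *s, struct tcb *self)
{
    if (s->count > 0) {
        s->count--;                  /* a slot is free: take it and carry on */
        return;
    }
    /* No slot: move the calling thread onto this semaphore's pend queue
       and let the scheduler pick another thread from the ready queue. */
    self->next = s->pend_head;
    s->pend_head = self;
    /* schedule();  -- hypothetical call that switches away */
}

static void sem_post(struct semaphore *s)
{
    if (s->pend_head != NULL) {
        struct tcb *t = s->pend_head;    /* wake one waiter                */
        s->pend_head = t->next;
        t->next = ready_head;            /* put it back on the ready queue */
        ready_head = t;
    } else {
        s->count++;
    }
}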

Doing it this way helps keep threads in similar states together. Consequently, when you have to search through these lists, you can keep the overhead to a minimum.

Hope this helps.
