What is scheduling jitter?
I've been reading a paper on real-time systems using the Linux OS, and the term "scheduling jitter" is used repeatedly without definition.
What is scheduling jitter? What does it mean?
6 Answers
Jitter is the difference between subsequent periods of time for a given task. In a real-time OS it is important to reduce jitter to a level acceptable for the application.
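As a rough illustration of that definition, here is a minimal sketch in C (not part of the original answer, and assuming a Linux/POSIX system): it runs a loop with a nominal 10 ms period using a plain relative nanosleep and prints the difference between successive measured periods, i.e. jitter in the sense above. The 10 ms period and the iteration count are arbitrary choices for the example.

```c
#include <stdio.h>
#include <time.h>

/* Current time in milliseconds on the monotonic clock. */
static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

int main(void) {
    const struct timespec period = { 0, 10 * 1000 * 1000 };  /* nominal 10 ms period */
    double prev = now_ms();
    double prev_period = 0.0;

    for (int i = 0; i < 100; i++) {
        nanosleep(&period, NULL);          /* relative sleep: the OS may wake us late */
        double t = now_ms();
        double this_period = t - prev;     /* how long this period actually took */
        if (i > 0)                         /* need two periods before a difference exists */
            printf("jitter: %+.3f ms\n", this_period - prev_period);
        prev_period = this_period;
        prev = t;
    }
    return 0;
}
```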
Jitter is the irregularity of a time-based signal. For example, in networking, jitter is the variability of packet latency across the network. In scheduling, I'm assuming jitter refers to inequality in the slices of time allocated to processes.

Read more here: http://en.wikipedia.org/wiki/Jitter
Scheduling jitter is the maximum variance expected in the timing of a program's execution period.

This concept is very important in real-time simulation systems. My experience comes from over 30 years in the real-time simulation industry (mostly flight simulation). Ideally, absolutely no jitter is desirable, and that is precisely the objective of hard real-time scheduling.

Suppose, for example, that a real-time simulation needs to execute a certain computer program at 400 Hz in order to produce a stable and accurate simulation of that subsystem. That means we expect the system to execute the program once every 2.5 msec. To achieve that in a hard real-time system, high-resolution clocks are used to schedule that module at a high priority so that the jitter is nearly zero. If this were a soft real-time simulation, a higher amount of jitter would be expected. If the scheduling jitter were 0.1 msec, then the starting point for that program would be every 2.5 msec +/- 0.1 msec (or less). That would be acceptable as long as the program never takes longer than 2.3 msec to execute. Otherwise the program could "overrun". If that ever happens, determinism is lost and the simulation loses fidelity.
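To make the 400 Hz example concrete, here is a minimal sketch in C of how such a frame loop is commonly written on Linux (an illustration under stated assumptions, not the author's actual simulation code): the process asks for a SCHED_FIFO priority (the value 80 is an arbitrary example), each frame's deadline is advanced by exactly 2.5 ms on an absolute monotonic clock, and the deviation of the actual wake-up from that deadline is reported as the scheduling jitter.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 2500000L        /* 2.5 ms -> 400 Hz */

int main(void) {
    /* Ask for a real-time priority; this needs root or CAP_SYS_NICE, and the
       priority value 80 is just an example. */
    struct sched_param sp = { .sched_priority = 80 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler (running without RT priority)");

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int frame = 0; frame < 4000; frame++) {
        /* Advance the absolute deadline by exactly one period (no drift). */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec++; }

        /* Sleep until the absolute deadline rather than for a relative interval. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long jitter_ns = (long)(now.tv_sec - next.tv_sec) * 1000000000L
                       + (now.tv_nsec - next.tv_nsec);
        printf("frame %d started %ld us after its deadline\n", frame, jitter_ns / 1000);

        /* ... the 400 Hz simulation work would run here; it must finish well
           before the next deadline or the frame overruns ... */
    }
    return 0;
}
```

Sleeping to an absolute deadline (TIMER_ABSTIME) rather than for a relative interval keeps the 2.5 ms grid from drifting, which is what makes a bounded jitter figure meaningful in the first place.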
So, given djc's answer, scheduling jitter for my semantic domain in the question above would be:

Scheduling jitter: inequality in the slices of time allocated to processes by the system scheduler, occurring out of necessity. An example of where this might occur: if there is a requirement that all processes in a real-time environment use no more than 100 ms of processor time each time they are scheduled, a process that requires and uses 150 ms would cause significant scheduling jitter in that real-time system.
Scheduling jitter in real-time operating systems is not about different time slices of processes. Jitter is the variable deviation from an ideal timing event. Scheduling jitter is the delay between the time when a task should start and the time when the task actually starts. For example, consider a task that should start after 10 ms but, for whatever reason, starts after 15 ms. In this example the jitter is 5 ms!
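Here is a minimal sketch in C of exactly that 10 ms example (an illustration, assuming an ordinary Linux/POSIX system rather than an RTOS): it requests a 10 ms delay, measures when the code actually resumes, and reports the difference as the scheduling jitter.

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec req = { 0, 10 * 1000 * 1000 };   /* the task shall start after 10 ms */
    struct timespec before, after;

    clock_gettime(CLOCK_MONOTONIC, &before);
    nanosleep(&req, NULL);                           /* the kernel may wake us up late */
    clock_gettime(CLOCK_MONOTONIC, &after);

    double actual_ms = (after.tv_sec - before.tv_sec) * 1e3
                     + (after.tv_nsec - before.tv_nsec) / 1e6;

    /* If the task actually starts after e.g. 15 ms, the scheduling jitter is 5 ms. */
    printf("requested 10.000 ms, actual %.3f ms, jitter %.3f ms\n",
           actual_ms, actual_ms - 10.0);
    return 0;
}
```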
Simply put, in operating-system terms jitter means delay. Scheduling jitter is the difference between the actual relative starting time and its nominal value: from the point where the systick occurs to the point where the first instruction of the woken-up periodic task executes.