When should a recurring software timer fire relative to the previous timeout?

Posted 2024-10-26 23:14:29


I think this is one of those "vi vs. emacs" type of questions, but I will ask anyway as I would like to hear people's opinions.

Oftentimes in an embedded system, the microcontroller has a hardware timer peripheral that provides a timing base for a software timer subsystem. This subsystem allows the developer to create an arbitrary number of timers (constrained by system resources) that can be used to generate and manage events in the system. The software timers are typically managed by setting up the hardware timer to generate an interrupt at a fixed interval (or sometimes only when the next active timer will expire). In the interrupt handler, a callback function is called to do things specific to that timer. As always, these callback routines should be very short since they run in interrupt context.
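For concreteness, here is a minimal sketch of such a subsystem, assuming a fixed-interval hardware tick; the names (`sw_timer_t`, `timer_tick()`) and the linked-list layout are illustrative, not any particular vendor's API:

#include <stddef.h>
#include <stdint.h>

/* One software timer, counted down by the hardware tick. */
typedef struct sw_timer {
    uint32_t interval;          /* reload value, in hardware ticks       */
    uint32_t remaining;         /* ticks left until expiry               */
    void (*callback)(void *);   /* runs in interrupt context: keep short */
    void *arg;
    struct sw_timer *next;      /* singly linked list of active timers   */
} sw_timer_t;

static sw_timer_t *timer_list = NULL;

/* Called from the hardware timer ISR once per fixed interval. */
void timer_tick(void)
{
    sw_timer_t *t;
    for (t = timer_list; t != NULL; t = t->next) {
        if (t->remaining != 0 && --t->remaining == 0) {
            t->remaining = t->interval;  /* re-arm relative to the tick, not callback exit */
            t->callback(t->arg);
        }
    }
}

Note that re-arming before invoking the callback bakes in one of the two policies this question is about: the period is measured tick-to-tick, not from callback exit.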

Let's say I create a timer that fires every 1ms, and its callback routine takes 100us to execute, and this is the only thing of interest happening in the system. When should the timer subsystem schedule the next handling of this software timer? Should it be 1ms from when the interrupt occurred, or 1ms from when the callback is completed?

To make things more interesting, say the hardware developer comes along and says that in certain modes of operation, the CPU speed needs to be reduced to 20% of maximum to save power. Now the callback routine takes 500us instead of 100us, but the timer's interval is still 1ms. Assume that this increased latency in the callback has no negative effect on the system in this standby mode. Again, when should the timer subsystem schedule the next handling of this software timer? T+1ms or T+500us+1ms?

Or perhaps in both cases it should split the difference and be scheduled at T+(execution_time/2)+1ms?
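To pin down the two policies being compared, here is a sketch against a hypothetical one-shot scheduler; `now_us()`, `schedule_at()`, and `do_callback_work()` are assumed primitives, not any real API:

#include <stdint.h>

extern uint64_t now_us(void);                                    /* free-running us clock (assumed)  */
extern void schedule_at(uint64_t deadline_us, void (*fn)(void)); /* one-shot scheduler (assumed)     */
extern void do_callback_work(void);                              /* 100us at full speed, 500us slow  */

#define PERIOD_US 1000u

void timer_expired(void)
{
    uint64_t fired_at = now_us();

    do_callback_work();

    /* Policy A: fixed rate -- next expiry at T+1ms, independent of run time. */
    schedule_at(fired_at + PERIOD_US, timer_expired);

    /* Policy B: fixed delay -- next expiry 1ms after the callback returns:
       schedule_at(now_us() + PERIOD_US, timer_expired); */
}

Under policy A the 500us callback still yields a 1ms period; under policy B the effective period becomes 1.5ms.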


4 Answers

ㄟ。诗瑗 2024-11-02 23:14:29


In a real-time OS both timers and delays are synchronised to the system tick, so if the event processing takes less than one timer tick, and starts on a timer tick boundary, there would be no scheduling difference between using a timer or a delay.

If, on the other hand, the processing took more than one tick, you would require a timer event to ensure deterministic, jitter-free timing.

In most cases determinism is important or essential, and makes system behaviour more predictable. If timing were incremental from the end of processing, variability in the processing (either static, through code changes, or at run time, through differing execution paths) might lead to variable behaviour and untested corner cases that are hard to debug or may cause system failure.
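As one concrete illustration of that distinction, FreeRTOS offers both behaviours at the task level: vTaskDelayUntil() anchors wake-ups to the tick timeline (fixed rate), while vTaskDelay() measures from wherever processing finished (fixed delay). A minimal sketch, with `do_work()` standing in for the real processing:

#include "FreeRTOS.h"
#include "task.h"

void do_work(void);  /* placeholder for the real processing */

/* Fixed rate: wake-ups are anchored to the tick timeline, so processing
   time does not accumulate into the period. */
void periodic_task(void *params)
{
    TickType_t last_wake = xTaskGetTickCount();
    (void)params;
    for (;;) {
        do_work();
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(1));  /* T, T+1ms, T+2ms, ... */
    }
}

/* Fixed delay: each cycle stretches by however long do_work() took. */
void drifting_task(void *params)
{
    (void)params;
    for (;;) {
        do_work();
        vTaskDelay(pdMS_TO_TICKS(1));  /* work time + 1ms per cycle */
    }
}

In the question's 20% clock-speed scenario, the first task keeps its 1ms rate while the second stretches to roughly 1.5ms per cycle.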

浸婚纱 2024-11-02 23:14:29


I would have the hardware timer fire every 1ms. I've never heard of a hardware timer taking such a quick routine into account. Especially since you would have to recalculate every time there was a software change. Or figure out what to do when the CPU changes clock speeds. Or figure out what to do if you decide to upgrade/downgrade the CPU you're using.

旧城烟雨 2024-11-02 23:14:29


Adding another couple of reasons to what is at this point the consensus answer (the timer should fire every 1ms):

  • If the timer fires every 1ms, and what you really want is a 1ms gap between executions, you can reset the timer at the exit of your callback function to fire 1ms from that point (see the sketch after this answer).

  • However, if the timer fires 1ms after the callback function exits, and you want the other behavior, you are kind of stuck.

Further, it's far less complicated in the hardware to fire every 1ms. To do that, it just generates events and resets, and there's no feedback from the software back to the timer except at the point of setup. If the timer is leaving 1ms gaps, there needs to be some way for the software to signal to the timer that it's exiting the callback.

And you should certainly not "split the difference". That's doing the wrong thing for everyone, and it's even more obnoxious to work around if someone wants to make it do something else.
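A sketch of that first bullet, assuming the subsystem exposes some one-shot restart call; `timer_restart()` and `do_work()` are hypothetical names:

#include <stdint.h>

extern void timer_restart(uint32_t usec, void (*cb)(void));  /* hypothetical one-shot re-arm */
extern void do_work(void);                                   /* placeholder processing       */

#define GAP_US 1000u

/* On top of a fixed-rate timer, a fixed gap is easy: stop treating the
   timer as periodic and re-arm it yourself at the end of the callback.
   Starting from a gap-based timer, there is no equivalent trick to
   recover fixed-rate behaviour. */
void my_callback(void)
{
    do_work();
    timer_restart(GAP_US, my_callback);  /* 1ms measured from callback exit */
}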

谁的年少不轻狂 2024-11-02 23:14:29


My inclination is to have the default behavior be to have a routine start at intervals that are as nearly uniform as practical, and to have a routine which is running late try to "catch up", within limits. Sometimes a good pattern can be something like:

/* Assume a 32,768Hz tick interrupt, and that we want foo() to execute 1024x/second */

#define EVENT_INTERVAL      32  /* 32768/1024 ticks between events                   */
#define EVENT_MAX_BACKLOG    1  /* cap on backlog, in intervals (value assumed)      */
#define EVENT_MIN_SPACING    8  /* minimum ticks between executions (value assumed)  */

typedef unsigned short ui; /* Use whatever size int works out best */
ui current_ticks;          /* 32768Hz ticks */

ui next_scheduled_event;   /* the nominal, uniform-rate schedule   */
ui next_event;             /* when foo() will actually run next    */

void foo(void);            /* the routine to run 1024x/second      */

void interrupt_handler(void)
{
  ui delta;

  current_ticks++;
  /* ... other per-tick work ... */
  if ((ui)(current_ticks - next_event) < 32768)  /* next_event is due (wrap-safe compare) */
  {
    delta = (ui)(current_ticks - next_scheduled_event);
    if (delta > EVENT_INTERVAL*EVENT_MAX_BACKLOG)  /* We're 32 ticks behind -- don't even try to catch up */
    {
      delta = EVENT_INTERVAL*EVENT_MAX_BACKLOG;
      next_scheduled_event = current_ticks - delta;
    }
    next_scheduled_event += EVENT_INTERVAL;
    next_event = next_scheduled_event;

    foo();

    /* See how much time there is before the next event; if it is less than
       EVENT_MIN_SPACING away (the subtraction wraps negative), push it out. */
    delta = (ui)(next_event - current_ticks - EVENT_MIN_SPACING);
    if (delta > 32768)
      next_event = current_ticks + EVENT_MIN_SPACING;
  }
}

This code (untested) will run foo() at a uniform rate if it can, but will always allow EVENT_MIN_SPACING between executions. If it is sometimes unable to run at the desired speed, it will run a few times with EVENT_MIN_SPACING between executions until it has "caught up". If it gets too far behind, its attempts to play "catch up" will be limited.
