Accurate sleep time
My understanding of the sleep function is that it follows "at least" semantics, i.e. sleep(5) guarantees the thread sleeps for 5 seconds, but depending on other factors it may remain blocked for longer than 5 seconds. Is there a way to sleep for exactly the specified time period (without busy waiting)?
7 Answers
As others have said, you really need to use a real-time OS to try and achieve this. Precise software timing is quite tricky.

However... although not perfect, you can get a LOT better results than "normal" by simply boosting the priority of the process that needs better timing. In Windows you can achieve this with the SetPriorityClass function. If you set the priority to the highest level (REALTIME_PRIORITY_CLASS: 0x00000100) you'll get much better timing results. Again - this will not be perfect like you are asking for, though. This is also likely possible on other platforms than Windows, but I've never had reason to do it so haven't tested it.

EDIT: As per the comment by Andy T, if your app is multi-threaded you also need to watch out for the priority assigned to the threads. For Windows this is documented here.
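
As a rough sketch (not part of the original answer), the process and thread priority calls described above look roughly like this on Windows:

    // Sketch: raising process and thread priority with the Win32 API.
    // Note: REALTIME_PRIORITY_CLASS may be silently downgraded to
    // HIGH_PRIORITY_CLASS if the process lacks the required privilege.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
            std::fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

        // Per the Andy T note above: boost the time-critical thread as well.
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
            std::fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

        // ... timing-sensitive work goes here ...
        return 0;
    }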
Some background...
A while back I used SetPriorityClass to boost the priority on an application where I was doing real-time analysis of high-speed video and I could NOT miss a frame. Frames were arriving at the PC at a very regular frequency of 300 frames per second (fps), driven by external framegrabber HW, which fired a HW interrupt on every frame which I then serviced. Since timing was very important, I collected a lot of stats on the interrupt timing (using QueryPerformanceCounter stuff) to see how bad the situation really was, and was appalled at the resulting distributions. I don't have the stats handy, but basically Windows was servicing the interrupt whenever it felt like it when run at normal priority. The histograms were very messy, with the stdev being wider than my ~3 ms period. Frequently I would have gigantic gaps of 200 ms or greater in the interrupt servicing (recall that the interrupt fired roughly every 3 ms)!! ie: HW interrupts are FAR from exact! You're stuck with what the OS decides to do for you.

However - when I discovered the REALTIME_PRIORITY_CLASS setting and benchmarked with that priority, it was significantly better and the service interval distribution was extremely tight. I could run 10 minutes of 300 fps and not miss a single frame. Measured interrupt servicing periods were pretty much exactly 1/300 s with a tight distribution.

Also - try and minimize the other things the OS is doing to help improve the odds of your timing working better in the app where it matters. eg: no background video transcoding or disk de-fragging or anything while you're trying to get precision timing with other code!!
In summary:
Since it may be helpful (although a bit off topic), here's a small class I wrote a long time ago for using the high performance counters on a Windows machine. It may be useful for your testing:
CHiResTimer.h
CHiResTimer.cpp
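
(The original CHiResTimer.h / CHiResTimer.cpp listings are not reproduced above. As a rough idea only - not the original code - a timer along those lines, built on QueryPerformanceCounter / QueryPerformanceFrequency, might look like the following sketch.)

    // Minimal sketch of a high-resolution timer using the Win32 performance
    // counter; illustration only, not the original CHiResTimer class.
    #include <windows.h>

    class HiResTimer
    {
    public:
        HiResTimer()
        {
            QueryPerformanceFrequency(&m_frequency);   // ticks per second
            Start();
        }

        void Start()
        {
            QueryPerformanceCounter(&m_start);
        }

        // Seconds elapsed since the last Start() call.
        double ElapsedSeconds() const
        {
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);
            return double(now.QuadPart - m_start.QuadPart) /
                   double(m_frequency.QuadPart);
        }

    private:
        LARGE_INTEGER m_frequency;
        LARGE_INTEGER m_start;
    };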
No.
The reason it has "at least" semantics is that after those 5 seconds some other thread may be busy.
Every thread gets a time slice from the Operating System. The Operating System controls the order in which the threads are run.
When you put a thread to sleep, the OS puts the thread in a waiting list, and when the timer is over the operating system "Wakes" the thread.
This means that the thread is added back to the active threads list, but there is no guarantee that it will be scheduled first. (What if 100 threads need to be woken in that specific second? Who goes first?)
While standard Linux is not a realtime operating system, the kernel developers pay close attention to how long a high priority process would remain starved while kernel locks are held. Thus, a stock Linux kernel is usually good enough for many soft-realtime applications.

You can schedule your process as a realtime task with the sched_setscheduler(2) call, using either SCHED_FIFO or SCHED_RR. The two have slight differences in semantics, but it may be enough to know that a SCHED_RR task will eventually relinquish the processor to another task of the same priority due to time slices, while a SCHED_FIFO task will only relinquish the CPU to another task of the same priority due to blocking I/O or an explicit call to sched_yield(2).

Be careful when using realtime scheduled tasks; as they always take priority over standard tasks, you can easily find yourself coding an infinite loop that never relinquishes the CPU and blocks admins from using ssh to kill the process. So it might not hurt to run an sshd at a higher realtime priority, at least until you're sure you've fixed the worst bugs.

There are variants of Linux available that have been worked on to provide hard-realtime guarantees. RTLinux has commercial support; Xenomai and RTAI are competing implementations of realtime extensions for Linux, but I know nothing else about them.
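
For illustration (not part of the original answer), a minimal sched_setscheduler(2) call might look like this:

    // Sketch: switching the calling process to the SCHED_FIFO realtime policy.
    // Needs root or CAP_SYS_NICE to succeed.
    #include <sched.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        sched_param param{};
        // Any value in the valid SCHED_FIFO range works; use the minimum here.
        param.sched_priority = sched_get_priority_min(SCHED_FIFO);

        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {  // 0 = calling process
            std::fprintf(stderr, "sched_setscheduler: %s\n", std::strerror(errno));
            return 1;
        }

        // ... timing-sensitive loop; block on I/O or call sched_yield(2)
        // periodically so lower-priority tasks (and ssh) are not starved ...
        return 0;
    }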
As previous answerers said: there is no way to be exact (some suggested a realtime OS or hardware interrupts, and even those are not exact). I think what you are looking for is simply something more precise than the sleep() function; depending on your OS, that would be e.g. the Windows Sleep() function or, under GNU, the nanosleep() function.
Both will give you precision within a few milliseconds.
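
A minimal nanosleep() call (a sketch, not from the original answer) would be:

    // Sketch: sleep for 5 ms with nanosleep(); still "at least" semantics,
    // but with much finer granularity than sleep().
    #include <time.h>
    #include <cerrno>

    int main()
    {
        timespec req{};
        req.tv_sec  = 0;
        req.tv_nsec = 5000000L;              // 5 milliseconds

        // nanosleep() may return early if a signal is delivered; the time
        // remaining is written to 'rem' so the sleep can be resumed.
        timespec rem{};
        while (nanosleep(&req, &rem) == -1 && errno == EINTR)
            req = rem;

        return 0;
    }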
Well, you are trying to tackle a difficult problem, and achieving exact timing is not feasible: the best you can do is to use hardware interrupts, and the implementation will depend on both your underlying hardware and your operating system (namely, you will need a real-time operating system, which most regular desktop OSes are not). What is your exact target platform?
No. Because you're always depending on the OS to handle waking up threads at the right time.
There is no way to sleep for a specified time period using standard C. You will need, at minimum, a 3rd party library which provides greater granularity, and you might also need a special operating system kernel such as the real-time Linux kernels.
For instance, here is a discussion of how close you can come on Win32 systems.
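
For reference, one Win32 mechanism that comes up in that context is a waitable timer; a minimal sketch (not from the linked discussion, and still subject to OS scheduling and timer resolution) might be:

    // Sketch: waiting roughly 5 ms with a Win32 waitable timer.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);  // manual-reset
        if (!timer) {
            std::fprintf(stderr, "CreateWaitableTimer failed: %lu\n", GetLastError());
            return 1;
        }

        LARGE_INTEGER due;
        due.QuadPart = -50000LL;   // negative = relative time, in 100 ns units: 5 ms

        if (SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE))
            WaitForSingleObject(timer, INFINITE);

        CloseHandle(timer);
        return 0;
    }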
This is not a C question.