Java timing accuracy on Windows XP vs Windows 7
I have a bizarre problem - I'm hoping someone can explain to me what is happening and a possible workaround. I am implementing a Z80 core in Java, and attempting to slow it down, by using a java.util.Timer object in a separate thread.
The basic setup is that I have one thread running an execute loop, 50 times per second. Within this execute loop, however many cycles are needed for that frame are executed, and then wait() is invoked. The external Timer thread invokes notifyAll() on the Z80 object every 20 ms, simulating a PAL Sega Master System clock frequency of 3.54 MHz (ish).
The method I have described above works perfectly on Windows 7 (tried two machines) but I have also tried two Windows XP machines and on both of them, the Timer object seems to be oversleeping by around 50% or so. This means that one second of emulation time is actually taking around 1.5 seconds or so on a Windows XP machine.
I have tried using Thread.sleep() instead of a Timer object, but this has exactly the same effect. I realise the granularity of time in most OSes isn't better than 1 ms, but I can put up with 999 ms or 1001 ms instead of 1000 ms. What I can't put up with is 1562 ms - I just don't understand why my method works fine on newer versions of Windows but not on the older one - I've investigated interrupt periods and so on, but don't seem to have found a workaround.
Could anyone please tell me the cause of this problem and a suggested workaround? Many thanks.
Update: Here is the full code for a smaller app I built to show the same issue:
import java.util.Timer;
import java.util.TimerTask;

public class WorkThread extends Thread
{
    private Timer timerThread;
    private WakeUpTask timerTask;

    public WorkThread()
    {
        timerThread = new Timer();
        timerTask = new WakeUpTask(this);
    }

    public void run()
    {
        timerThread.schedule(timerTask, 0, 20);
        while (true)
        {
            long startTime = System.nanoTime();
            for (int i = 0; i < 50; i++)
            {
                int a = 1 + 1;
                goToSleep();
            }
            long timeTaken = (System.nanoTime() - startTime) / 1000000;
            System.out.println("Time taken this loop: " + timeTaken + " milliseconds");
        }
    }

    synchronized public void goToSleep()
    {
        try
        {
            wait();
        }
        catch (InterruptedException e)
        {
            System.exit(0);
        }
    }

    synchronized public void wakeUp()
    {
        notifyAll();
    }

    private class WakeUpTask extends TimerTask
    {
        private WorkThread w;

        public WakeUpTask(WorkThread t)
        {
            w = t;
        }

        public void run()
        {
            w.wakeUp();
        }
    }
}
All the main class does is create and start one of these worker threads. On Windows 7, this code produces a time of around 999ms - 1000ms, which is totally fine. Running the same jar on Windows XP however produces a time of around 1562ms - 1566ms, and this is on two separate XP machines that I have tested this. They are all running Java 6 update 27.
I find this problem is happening because the Timer is sleeping for 20 ms (quite a small value) - if I bung all the execute loops for a single second into one wait() - notifyAll() cycle, this produces the correct result. I'm sure people who see what I'm trying to do (emulate a Sega Master System at 50 fps) will see how this is not a solution though - it won't give an interactive response time, skipping 49 frames out of every 50. As I say, Win7 copes fine with this. Sorry if my code is too large :-(
The problem you are seeing probably has to do with clock resolution. Some Operating Systems (Windows XP and earlier) are notorious for oversleeping and being slow with wait/notify/sleep (interrupts in general). Meanwhile other Operating Systems (every Linux I've seen) are excellent at returning control at quite nearly the moment specified.
The workaround? For short durations, use a live wait (busy loop). For long durations, sleep for less time than you really want and then live wait the remainder.
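A minimal sketch of that hybrid sleep-then-spin approach in Java; the class name, method names, and the 2 ms spin margin are my own assumptions, not from the answer:

```java
// Sketch of the "sleep less than you want, then live wait" workaround.
// The coarse phase lets the OS sleep away most of the interval; the
// fine phase busy-waits so the final wake-up does not depend on the
// OS timer resolution.
public class HybridWait
{
    // Margin (in ns) handed to the busy loop; 2 ms is an assumed
    // safety margin, tune it for the target OS.
    private static final long SPIN_MARGIN_NS = 2_000_000L;

    public static void waitUntil(long deadlineNanos) throws InterruptedException
    {
        long remaining;
        // Coarse phase: sleep until we are within the spin margin.
        while ((remaining = deadlineNanos - System.nanoTime()) > SPIN_MARGIN_NS)
        {
            Thread.sleep((remaining - SPIN_MARGIN_NS) / 1_000_000L);
        }
        // Fine phase: spin out the last couple of milliseconds.
        while (System.nanoTime() < deadlineNanos)
        {
            // busy wait
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        long start = System.nanoTime();
        waitUntil(start + 20_000_000L); // one 20 ms frame
        System.out.println("Slept " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```

The spin phase burns one core for up to the margin each frame, which is the price paid for accuracy on OSes with coarse timers.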
I'd forgo the TimerTask and just use a busy loop: the two millisecond delay gives the host OS plenty of time to work on other stuff (and you're likely to be on a multicore anyway). The remaining program code is a lot simpler.

If the hard-coded two milliseconds are too much of a blunt instrument, you can calculate the required sleep time and use the Thread.sleep(long, int) overload.
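The code block this answer refers to appears to have been lost from the page; a minimal reconstruction of a busy loop with the two-millisecond sleep it describes might look like this (all names are assumed, and executeOneFrame() is a hypothetical stand-in for the emulator work):

```java
// Reconstruction (assumed; the answer's original code block is missing):
// drive the 50 Hz loop directly with nanoTime() deadlines instead of a
// Timer, sleeping in short 2 ms slices until each frame deadline passes.
public class BusyLoopWorker
{
    private static final long FRAME_NS = 20_000_000L; // 20 ms per frame

    // Runs n frames at 50 Hz; returns the elapsed wall time in ms.
    public static long runFrames(int n) throws InterruptedException
    {
        long start = System.nanoTime();
        long nextFrame = start;
        for (int frame = 0; frame < n; frame++)
        {
            // executeOneFrame();  // hypothetical: one frame of emulation
            nextFrame += FRAME_NS;
            while (System.nanoTime() < nextFrame)
            {
                Thread.sleep(2); // 2 ms slice, lets the OS schedule other work
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException
    {
        System.out.println("50 frames took " + runFrames(50) + " ms");
    }
}
```

Because each frame's deadline is computed from the loop's start rather than from the previous wake-up, small oversleeps don't accumulate across frames.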
You can set the timer resolution on Windows XP.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624%28v=vs.85%29.aspx
Since this is a system-wide setting, you can use a tool to set the resolution so you can verify whether this is your problem.
Try this out and see if it helps: http://www.lucashale.com/timer-resolution/
You might see better timings on newer versions of Windows because, by default, newer versions might have tighter timings. Also, if you are running an application such as Windows Media Player, it raises the timer resolution system-wide. So if you happen to be listening to some music while running your emulator, you might get great timings.
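A quick way to check whether timer resolution is the culprit on a given machine is to time a run of 1 ms sleeps; this probe is my own addition, not from the answer:

```java
// Probe of the effective sleep granularity on this machine. On an OS
// with a coarse timer interrupt (e.g. 15.625 ms), Thread.sleep(1) will
// take far longer than 1 ms on average.
public class SleepGranularityProbe
{
    public static double averageSleepMs(int samples) throws InterruptedException
    {
        long start = System.nanoTime();
        for (int i = 0; i < samples; i++)
        {
            Thread.sleep(1);
        }
        return (System.nanoTime() - start) / 1_000_000.0 / samples;
    }

    public static void main(String[] args) throws InterruptedException
    {
        System.out.printf("Average Thread.sleep(1): %.2f ms%n", averageSleepMs(50));
    }
}
```

As a sanity check on the numbers in the question: if the XP machines tick at the common 15.625 ms interval, a 20 ms Timer period effectively rounds up to two ticks (31.25 ms), and 50 × 31.25 ms = 1562.5 ms, which would match the ~1562 ms the questioner measured.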