Pseudo Real-Time Threads
So I have built a small application that has a physics engine and a display. The display is attached to a controller which handles the physics engine (well, actually a view model that handles the controller, but that's a detail).
Currently the controller is a delegate that gets activated by a BeginInvoke and deactivated by a cancellation token, then reaped by an EndInvoke. Inside, the lambda raises PropertyChanged (hooked into INotifyPropertyChanged), which keeps the UI up to date.
From what I understand, the BeginInvoke method activates a task rather than another thread (on my computers it does run on another thread, but from the reading I have done that isn't guaranteed; it's up to the thread pool how it wants to get the task completed), which is fine from all the testing I have done. The lambda doesn't complete until the CancellationToken is cancelled. It has a sleep and an update, so it is sort of simulating a real-time physics engine. It's crude, but I don't need real precision on the timing, just enough to get a feel.
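For concreteness, here is a minimal sketch of the pattern described above, with hypothetical names (PhysicsController, Position, and the step and sleep values are all made up); it uses the classic Delegate.BeginInvoke/EndInvoke pairing, which is .NET Framework only:

    // Hypothetical sketch: a delegate queued via BeginInvoke, stopped via a
    // CancellationToken, reaped via EndInvoke. Delegate.BeginInvoke is
    // .NET Framework only (it throws PlatformNotSupportedException on .NET Core).
    using System;
    using System.ComponentModel;
    using System.Threading;

    public class PhysicsController : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private double _position;
        public double Position
        {
            get { return _position; }
            private set
            {
                _position = value;
                var handler = PropertyChanged;   // raised from the worker thread
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Position"));
            }
        }

        private Action<CancellationToken> _loop;
        private IAsyncResult _pending;
        private CancellationTokenSource _cts;

        public void Start()
        {
            _cts = new CancellationTokenSource();
            _loop = token =>
            {
                while (!token.IsCancellationRequested)
                {
                    Position += 0.1;   // crude physics update
                    Thread.Sleep(10);  // crude real-time pacing
                }
            };
            // Queued on the thread pool; whether it gets its own thread
            // is the pool's decision, as noted above.
            _pending = _loop.BeginInvoke(_cts.Token, null, null);
        }

        public void Stop()
        {
            _cts.Cancel();               // the lambda observes the token and returns
            _loop.EndInvoke(_pending);   // reap the call, surfacing any exception
        }
    }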
The question I have is: will this work on other computers, or should I switch over to explicit threads that I start and cancel myself? The scenario I am thinking of is a single-core processor: is it possible the second task will get massively less processor time, turning my acceptably inaccurate model into something unacceptably inaccurate (i.e. waiting milliseconds before a context switch rather than microseconds)? Or is there some better way of doing this that I haven't come up with?
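For comparison, the explicit-thread alternative mentioned in the question would look roughly like this, continuing the hypothetical PhysicsController sketch above (again just a sketch, not a recommendation either way):

    // Same cooperative shutdown via the token, but on a dedicated thread
    // that the thread pool never gets a say in scheduling.
    private Thread _worker;

    public void StartExplicit()
    {
        _cts = new CancellationTokenSource();
        _worker = new Thread(() =>
        {
            while (!_cts.Token.IsCancellationRequested)
            {
                Position += 0.1;
                Thread.Sleep(10);
            }
        });
        _worker.IsBackground = true;  // don't keep the process alive on exit
        _worker.Start();
    }

    public void StopExplicit()
    {
        _cts.Cancel();
        _worker.Join();               // wait for the loop to notice the token
    }

The practical difference is small: a dedicated thread costs its stack whether or not it is busy, but it is never held back by the pool's queuing decisions.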
1 Answer
In my experience, using the thread pool in the way you described will pretty much guarantee reasonably optimal performance on most computers, without you having to go to the trouble of figuring out how to divvy up the threads.
A thread is not the same thing as a core; you will still get multiple threads on a single-core machine, and those threads will each take part of the processing load. You won't get the "starvation" condition you describe unless you do something unusual with the threads, like give one of them real-time priority.
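To illustrate the warning, the "unusual" settings in question look like this (a hypothetical demo; doing this in a real app is exactly how you starve other threads):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PriorityDemo
    {
        static void Main()
        {
            var worker = new Thread(() => Thread.Sleep(1000));
            worker.Priority = ThreadPriority.Highest;  // per-thread scheduling bias
            // Process-wide "real-time" priority class: the setting that can
            // starve everything else on the machine.
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;
            worker.Start();
            worker.Join();
            Console.WriteLine("done");
        }
    }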
That said, microseconds is not a lot of time for context switching between threads, so YMMV. You'll have to try it, and see how well it works; there may be some tweaking required.
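One way to see how well it works is to measure the drift directly. A minimal sketch, assuming a nominal 10 ms step, that compares wall-clock time against the time the simulation thinks has elapsed:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class DriftCheck
    {
        static void Main()
        {
            const int stepMs = 10;            // assumed simulation step
            var wall = Stopwatch.StartNew();
            long simulatedMs = 0;

            for (int i = 0; i < 500; i++)
            {
                Thread.Sleep(stepMs);         // Sleep granularity is often
                simulatedMs += stepMs;        // ~15 ms on Windows by default
            }

            Console.WriteLine("wall: {0} ms, simulated: {1} ms, drift: {2} ms",
                wall.ElapsedMilliseconds, simulatedMs,
                wall.ElapsedMilliseconds - simulatedMs);
        }
    }

If the drift turns out to matter, the usual tweak is to step the model by the measured elapsed time rather than by the nominal sleep interval.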