Advantages of setting threads to work on specific cores?
Is there any evidence to suggest that by manually picking which processor to run a thread on you can improve system performance?
For example, say you dedicated the thread that does the most work to one core and all other "helper" threads to a second.
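By "dedicating" a thread to a core I mean setting its CPU affinity mask. As a rough illustration of the setup I have in mind, here is a minimal sketch assuming Linux/glibc and the non-portable pthread_setaffinity_np extension; the core numbers and thread bodies are placeholders:

```c
/* Sketch (Linux/glibc): pin a "worker" thread to core 0 and a "helper"
 * thread to core 1 using the non-portable affinity extension.
 * The core numbers and thread bodies are placeholders for illustration. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) { (void)arg; /* heavy computation here */ return NULL; }
static void *helper(void *arg) { (void)arg; /* auxiliary work here */    return NULL; }

/* Restrict an already-created thread to a single core. */
static int pin_to_core(pthread_t t, int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(t, sizeof(set), &set);
}

int main(void)
{
    pthread_t w, h;
    pthread_create(&w, NULL, worker, NULL);
    pthread_create(&h, NULL, helper, NULL);

    if (pin_to_core(w, 0) != 0 || pin_to_core(h, 1) != 0)
        fprintf(stderr, "failed to set affinity\n");

    pthread_join(w, NULL);
    pthread_join(h, NULL);
    return 0;
}
```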
2 Answers
I can't see that there is a case for this.
Remember that any multiprocessor-enabled OS will automatically allocate processor time as it sees fit in an attempt to balance out processor loading.
This means that in reality any process thread that you have running will be constantly interrupted, based on the thread priority, so that the OS can allocate processor time to other processes. Individual computations within the same thread may not even be executed on the same processor.
If you fixed the process code to run on just one specified processor, then this would likely hinder its performance, as it would not allow the OS to balance processor loading (see the sketch below).
I suppose that you could make large parts of it a critical section, but this would hinder your application in other areas, especially the processing of any sub-threads.
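To make the trade-off concrete, here is roughly what fixing the code to a single processor could look like; this is a sketch assuming Linux and glibc's sched_setaffinity. The single-CPU mask is inherited by threads created afterwards, which is precisely what stops the OS from balancing the load across cores:

```c
/* Sketch (Linux/glibc): restrict the calling thread to CPU 0 before any
 * worker threads are created, so they inherit the single-CPU mask.
 * This also prevents the scheduler from migrating them to balance load. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);               /* CPU 0 is an arbitrary choice */

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... create threads and do the work here; they all stay on CPU 0 ... */
    return 0;
}
```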
It seems possible, at least in theory, that system performance could be degraded somewhat by having a thread bounce around between different cores. Since many multi-core designs involve a separate L1 cache for each core, any time a thread moves to a new core, any data that thread was accessing before is no longer cached in the new core and has to be fetched from a higher level cache (or memory).
Keeping a thread running on the same core will increase the likelihood that the L1 cache has data pertinent to what the thread is doing. Of course, how much of an impact this has depends on other factors as well, like the size of the cache and how many other threads are being scheduled "concurrently" on the core.
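If you want to see that bouncing for yourself, one rough way is to sample which CPU an unpinned thread is running on and count how often it changes. This is a sketch assuming Linux/glibc and its sched_getcpu call; the workload and sampling interval are arbitrary:

```c
/* Sketch (Linux/glibc): observe an unpinned thread migrating between cores.
 * sched_getcpu() reports the CPU the caller is currently running on; a
 * change between samples means the scheduler has moved the thread, which is
 * when previously warm L1 data would have to be refetched. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    int last = sched_getcpu();
    long migrations = 0;
    volatile double sink = 0.0;

    for (long i = 0; i < 100000000L; i++) {
        sink += (double)i * 0.5;     /* busy work so the thread stays runnable */
        if ((i & 0xFFFFF) == 0) {    /* sample the current CPU periodically */
            int cpu = sched_getcpu();
            if (cpu != last) {
                migrations++;
                last = cpu;
            }
        }
    }

    printf("observed %ld migrations (result %f)\n", migrations, sink);
    return 0;
}
```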