When do we use multithreading in C#?
I know how to implement multithreading in C#, but I want to know how it actually works:
- Will only one thread run at a time, and when that thread is waiting, will the second thread execute?
- If the second thread is executing and the first thread becomes ready, what will happen?
- Which thread will be given priority?
I am confused about the concept. I want to understand why we go for multithreading and when we should use it.
Thanks in advance.
5 Answers
Threads may or may not be running at the same time. On a single-processor machine, only one thread is running at a time. On a multiprocessor system (multi-processor, multi-core, hyper-threading), multiple threads can run at the same time, one thread per processor.
The operating system scheduler determines when a thread gets to run. Windows is a preemptive multitasking system. It will run a thread for a certain amount of time, called a time slice (10 ms or 15 ms on Windows), stop the thread, then determine which thread to run next, which could be the same thread that was just running. The actual algorithm is complex.
Threads do have priorities, so that affects this as well; all things being equal, a higher-priority thread will get more time than a lower-priority thread. If you don't manually set a priority on a thread, it defaults to Normal priority. In a simple case of two threads of the same priority that are both ready to run, each thread will get an equal amount of time, probably round-robin.
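As a rough illustration of the round-robin behaviour described above, here is a minimal C# sketch (the class and method names are invented for the example) that starts two threads of default, equal priority and lets the OS interleave them:

using System;
using System.Threading;

class SchedulingDemo
{
    // CPU-bound work so the scheduler has something to time-slice.
    static void CountAndReport(string name)
    {
        for (int i = 1; i <= 5; i++)
        {
            Thread.SpinWait(10000000); // burn some CPU; the OS may preempt us at any point
            Console.WriteLine(name + ": finished chunk " + i);
        }
    }

    static void Main()
    {
        // Both threads keep the default ThreadPriority.Normal,
        // so the OS grants them roughly equal timeslices.
        Thread a = new Thread(() => CountAndReport("Thread A"));
        Thread b = new Thread(() => CountAndReport("Thread B"));
        a.Start();
        b.Start();
        a.Join();
        b.Join();
        Console.WriteLine("Both threads finished.");
    }
}

On a single core the two threads' output interleaves as they are swapped in and out; on a multi-core machine they may genuinely run at the same time.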
As for why we do multithreading, there are two basic reasons: to keep an application responsive by moving blocking or long-running work off the main thread, and to speed up work by running it in parallel on multiple processors.
Multithreading is useful in environments where one action needs to not BLOCK another action.
The primary example of that is in the case of a background process that shouldn't lock up the main user interface thread.
The operating system is generally going to decide who can do what, when. If a computer has only one core, multithreading has little benefit except the one listed above. But, as more cores are added, more actions can be performed concurrently.
However, even in a single-core system, multithreading can facilitate non-blocking I/O, which is very important in increasing the responsiveness of your application.
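A minimal sketch of keeping the calling thread free while slow work runs in the background (assuming .NET 4.5 or later for Task.Run; the method names are invented for the example):

using System;
using System.Threading;
using System.Threading.Tasks;

class BackgroundWorkDemo
{
    // Stand-in for a slow operation: printing, file export, a network call, etc.
    static void DoLongRunningJob()
    {
        Thread.Sleep(3000); // simulate three seconds of work
        Console.WriteLine("Background job finished.");
    }

    static void Main()
    {
        // Hand the slow work to a thread-pool thread instead of blocking this one.
        Task job = Task.Run(() => DoLongRunningJob());

        // Meanwhile this thread stays responsive; here it just keeps reporting.
        while (!job.IsCompleted)
        {
            Console.WriteLine("Main thread is still responsive...");
            Thread.Sleep(500);
        }

        job.Wait(); // observe completion (and surface any exception)
    }
}

In a real GUI application the equivalent is to keep the UI thread handling events while the background work runs, then marshal results back to the UI thread when it completes.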
Multithreading speeds up program execution if there are parallelizable parts of the program.
You may want to have a look at different resources for multithreading to understand more about it.
Imagine you have a problem that needs to be solved as quickly as possible. You have an easy one: count to a billion. You can do a loop: for (var i = 0; i < Math.Pow(10,9); i++) {} and this will execute on one core only. It will take x amount of time. Now imagine doing it on two cores instead:
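The original answer presumably showed the two-core version at this point; as a rough sketch (splitting the range in half is my own illustration, not the author's exact code), the count can be divided between two threads so that each half can run on its own core:

using System;
using System.Diagnostics;
using System.Threading;

class TwoCoreCount
{
    const long Total = 1000000000; // one billion

    static void CountRange(long from, long to)
    {
        for (long i = from; i < to; i++)
        {
            // empty body: we are only timing the counting itself
        }
    }

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();

        // Each thread counts half of the range; on two cores the halves run simultaneously.
        Thread t1 = new Thread(() => CountRange(0, Total / 2));
        Thread t2 = new Thread(() => CountRange(Total / 2, Total));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        sw.Stop();
        Console.WriteLine("Counted to " + Total + " in " + sw.ElapsedMilliseconds + " ms");
    }
}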
Thankfully, if you download the MS threading library from 2008, you get this splitting of the work for free.
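For reference, that library eventually shipped as the Task Parallel Library in .NET 4, which lets you express this without managing the threads yourself; a minimal sketch using Parallel.For, which partitions the range across the available cores:

using System;
using System.Threading.Tasks;

class ParallelCount
{
    static void Main()
    {
        // Parallel.For splits the range into chunks and runs them on multiple cores.
        Parallel.For(0, 1000000000, i =>
        {
            // empty body: we are only demonstrating how the loop is parallelized
        });

        Console.WriteLine("Done counting in parallel.");
    }
}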
There's also a new tool for VS2010 which displays in graphical form how the threads are blocking, waiting for I/O, etc.
There's a scheduler in .Net/the OS that allows threads to have different interleavings.
A few days ago, MS released documentation on how to do parallel operations in .Net 4.
Have a download/read here
If you look at the Processes tab in Task Manager on your Windows machine, you will see the processes that are currently active on the machine. If you add the Threads column to the view, you will see the number of threads that currently exist in each process. The operating system (OS) is the one that determines how all of these threads across all of these processes are scheduled for execution on the processor. So in effect, the OS is constantly determining which threads have work to do and scheduling those threads for execution on the processor.
Let's assume a single processor, single core machine for now.
In this example, your application is the only process that is doing anything. Say your application has two threads of equal priority (more on this below). In this case, the OS will alternate between these two threads, scheduling one for execution and then the other until the work that they are doing is complete. To accomplish this, the OS grants a timeslice to the first scheduled thread. For example purposes, let's say the timeslice is 10 milliseconds (it's actually much shorter than this). So thread A will execute for 10 milliseconds. The OS will then preempt thread A so thread B can execute for its timeslice, also 10 milliseconds.
This back-and-forth will continue uninterrupted until both threads have finished their work or until certain events occur. For example, let's say that thread A finishes its work before thread B. In this case, thread A has nothing else to do, so the OS will continue to grant timeslices to thread B since it is the only one with work to do. Another thing that can happen is that thread A can wait on an event, such as a System.Threading.ManualResetEvent, or an asynchronous read of a socket. Until that event is signaled or data is received on the socket, thread A is essentially dead in its tracks, so the OS will continue to grant timeslices to thread B until the event/socket that thread A is waiting on occurs. At that point, the OS will resume switching between thread A and thread B for execution.

A good example of this is the background printing that most applications do today. An application's main thread is dedicated to processing UI events - button clicks, keyboard presses, drag-and-drop, etc. If you print a document from your favorite word processor, what happens conceptually is that the task of sending the print instructions to the printer is delegated to a secondary thread. At this point, your application has two threads that are running - one thread servicing the UI and the other thread handling the print job. Since this is on a single processor, single core machine, the OS swaps between the two threads, granting timeslices to each. In this case, the print job thread will end after it finishes sending the print instructions, and then only your UI thread will be left.
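As a rough sketch of the waiting behaviour described above (the "print job" is just a stand-in, and the names are invented for the example), one thread blocks on a System.Threading.ManualResetEvent and receives no timeslices until a second thread finishes its work and signals the event:

using System;
using System.Threading;

class PrintJobDemo
{
    static readonly ManualResetEvent PrintFinished = new ManualResetEvent(false);

    static void SendPrintInstructions()
    {
        Console.WriteLine("Print thread: sending instructions to the printer...");
        Thread.Sleep(2000);  // pretend this takes a while
        Console.WriteLine("Print thread: done.");
        PrintFinished.Set(); // wake up whoever is waiting on the event
    }

    static void Main()
    {
        Thread printThread = new Thread(SendPrintInstructions);
        printThread.Start();

        // While this thread is blocked in WaitOne it is "dead in its tracks",
        // so the OS keeps granting timeslices to the print thread instead.
        Console.WriteLine("Main thread: waiting for the print job to finish...");
        PrintFinished.WaitOne();
        Console.WriteLine("Main thread: print job complete, carrying on.");
    }
}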
A question you may have at this point is this: doesn't running the print job on a second thread take longer overall, since the single processor has to keep swapping between the two threads?
And the answer is YES. It does take longer this way. But consider the alternative. If the print job were executed on the UI thread, the user interface would be unresponsive to your input, i.e., button clicks, keyboard presses, etc., until the print job was complete. And this would frustrate you as the user because the application isn't responding to your input. So, in effect, multithreading is really an illusion of parallelism, at least on a single processor, single core machine. However, you get the satisfaction of being able to interact with your application while the print job is accomplished on another thread, even though the print job takes longer doing it this way.
Now let's move to a multicore machine. If your process has the same two threads, A and B, to execute, then each thread can be scheduled on a separate core. In this case, both threads run simultaneously without interruption. The OS doesn't have to swap between the threads because each thread has its own core to run on. Make sense?
Finally, let's consider the priority associated with threads (assume a single processor, single core again). By default, each thread in a given application has the same priority. What this means is that the OS will consider all threads equal with regard to scheduling. If you have two threads to be executed, they will get roughly the same amount of time on the processor. You can adjust this, however, by increasing/decreasing the priority of one thread relative to the other. In this case, the thread with the higher priority is favored for scheduling purposes over the thread with the lower priority, meaning that it gets more timeslices than the other thread. In some limited cases, adjusting the priority of threads can improve your application's performance, but for most applications it is not necessary. The thing to be cautious of is not to "starve" a thread, especially the UI thread. The OS helps prevent this by never starving a thread altogether. Even so, adjusting priorities can make your application appear sluggish, if not altogether unresponsive, if the UI thread is "put on a diet," so to speak.
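A rough sketch of adjusting priorities (purely illustrative; how much extra time the higher-priority thread actually receives is entirely up to the OS scheduler, and on an idle multi-core machine both threads may run unhindered):

using System;
using System.Threading;

class PriorityDemo
{
    static long _lowCount, _highCount;

    static void SpinLow()
    {
        while (true) Interlocked.Increment(ref _lowCount);
    }

    static void SpinHigh()
    {
        while (true) Interlocked.Increment(ref _highCount);
    }

    static void Main()
    {
        Thread low = new Thread(SpinLow) { IsBackground = true, Priority = ThreadPriority.BelowNormal };
        Thread high = new Thread(SpinHigh) { IsBackground = true, Priority = ThreadPriority.AboveNormal };
        low.Start();
        high.Start();

        Thread.Sleep(2000); // let both threads compete for the CPU for a while

        // When the CPU is contended, the higher-priority thread tends to get more timeslices,
        // so its counter grows faster.
        Console.WriteLine("BelowNormal thread count: " + Interlocked.Read(ref _lowCount));
        Console.WriteLine("AboveNormal thread count: " + Interlocked.Read(ref _highCount));
    }
}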
You can read more about thread priorities here and here.
I hope this helps.
Purposes of a thread
There are others, but I think these are the basic uses of a thread.