Design pattern alternative to coroutines

Posted 2024-08-02 10:55:59

Currently, I have a large number of C# computations (method calls) residing in a queue that will be run sequentially. Each computation will use some high-latency service (network, disk...).

I was going to use Mono coroutines to allow the next computation in the computation queue to continue while a previous computation is waiting for the high latency service to return. However, I prefer to not depend on Mono coroutines.

Is there a design pattern that's implementable in pure C# that will enable me to process additional computations while waiting for high latency services to return?

Thanks

Update:

I need to execute a huge number (>10000) of tasks, and each task will be using some high-latency service. On Windows, you can't create that many threads.

Update:

Basically, I need a design pattern that emulates the advantages (as follows) of tasklets in Stackless Python (http://www.stackless.com/)

  1. Huge # of tasks
  2. If a task blocks, the next task in the queue executes
  3. No wasted CPU cycles
  4. Minimal overhead switching between tasks

尘曦 2024-08-09 10:55:59

You can simulate cooperative microthreading using IEnumerable. Unfortunately this won't work with blocking APIs, so you need to find APIs that you can poll, or which have callbacks that you can use for signalling.

Consider a method

IEnumerable Thread ()
{
    //do some stuff
    Foo ();

    //co-operatively yield
    yield return null;

    //do some more stuff
    Bar ();

    //sleep 2 seconds
    yield return TimeSpan.FromSeconds (2);
}

The C# compiler will unwrap this into a state machine - but the appearance is that of a co-operative microthread.

The pattern is quite straightforward. You implement a "scheduler" that keeps a list of all the active IEnumerators. As it cycles through the list, it "runs" each one using MoveNext (). If the value of MoveNext is false, the thread has ended, and the scheduler removes it from the list. If it's true, then the scheduler accesses the Current property to determine the current state of the thread. If it's a TimeSpan, the thread wishes to sleep, and the scheduler moves it onto some queue that can be flushed back into the main list when the sleep timespans have ended.

You can use other return objects to implement other signalling mechanisms. For example, define some kind of WaitHandle. If the thread yields one of these, it can be moved to a waiting queue until the handle is signalled. Or you could support WaitAll by yielding an array of wait handles. You could even implement priorities.

I did a simple implementation of this scheduler in about 150LOC but I haven't got round to blogging the code yet. It was for our PhyreSharp PhyreEngine wrapper (which won't be public), where it seems to work pretty well for controlling a couple of hundred characters in one of our demos. We borrowed the concept from the Unity3D engine -- they have some online docs that explain it from a user point of view.
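The scheduler described above can be sketched in a few dozen lines. This is a minimal illustration of the pattern (not the answerer's unpublished 150-LOC implementation); the class and member names are made up for the example, and only the null-yield and TimeSpan-sleep signals are handled:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class MicroScheduler
{
    readonly List<IEnumerator> active = new List<IEnumerator>();
    // Sleeping threads, paired with the time they are due to resume.
    readonly List<KeyValuePair<DateTime, IEnumerator>> sleeping =
        new List<KeyValuePair<DateTime, IEnumerator>>();

    public void Add(IEnumerable thread) => active.Add(thread.GetEnumerator());

    // Run one pass over all threads; returns false when none remain.
    public bool Tick()
    {
        // Move any sleepers whose timespan has elapsed back into the main list.
        DateTime now = DateTime.UtcNow;
        sleeping.RemoveAll(s =>
        {
            if (s.Key > now) return false;
            active.Add(s.Value);
            return true;
        });

        for (int i = active.Count - 1; i >= 0; i--)
        {
            IEnumerator t = active[i];
            if (!t.MoveNext())
            {
                active.RemoveAt(i);            // thread has ended
            }
            else if (t.Current is TimeSpan span)
            {
                active.RemoveAt(i);            // thread wishes to sleep
                sleeping.Add(new KeyValuePair<DateTime, IEnumerator>(now + span, t));
            }
            // Current == null means "just yield"; leave it in the list.
        }
        return active.Count > 0 || sleeping.Count > 0;
    }
}
```

Driving it is a loop like `while (sched.Tick()) { }`; yielding a WaitHandle or an array of them would slot into `Tick` the same way the TimeSpan case does.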

∞琼窗梦回ˉ 2024-08-09 10:55:59

I'd recommend using the Thread Pool to execute multiple tasks from your queue at once in manageable batches using a list of active tasks that feeds off of the task queue.

In this scenario your main worker thread would initially pop N tasks from the queue into the active tasks list to be dispatched to the thread pool (most likely using QueueUserWorkItem), where N represents a manageable amount that won't overload the thread pool, bog your app down with thread scheduling and synchronization costs, or suck up available memory due to the combined I/O memory overhead of each task.

Whenever a task signals completion to the worker thread, you can remove it from the active tasks list and add the next one from your task queue to be executed.

This will allow you to have a rolling set of N tasks from your queue. You can manipulate N to affect the performance characteristics and find what is best in your particular circumstances.

Since you are ultimately bottlenecked by hardware operations (disk I/O and network I/O, CPU) I imagine smaller is better. Two thread pool tasks working on disk I/O most likely won't execute faster than one.

You could also implement flexibility in the size and contents of the active task list by restricting it to a set number of particular type of task. For example if you are running on a machine with 4 cores, you might find that the highest performing configuration is four CPU-bound tasks running concurrently along with one disk-bound task and a network task.

If you already have one task classified as a disk IO task, you may choose to wait until it is complete before adding another disk IO task, and you may choose to schedule a CPU-bound or network-bound task in the meanwhile.

Hope this makes sense!

PS: Do you have any dependencies on the order of tasks?
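The rolling window of N tasks can be sketched as follows. This is an illustration under the assumption that each work item is an `Action` and that a plain semaphore is an acceptable way to cap the in-flight count; the names are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class RollingRunner
{
    // Runs every task in the queue on the thread pool, with at most
    // n executing at once; each completion frees a slot for the next.
    public static void RunAll(Queue<Action> tasks, int n)
    {
        using (var slots = new SemaphoreSlim(n, n))
        using (var done = new CountdownEvent(tasks.Count))
        {
            while (tasks.Count > 0)
            {
                slots.Wait();                  // block until a slot frees up
                Action work = tasks.Dequeue();
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try { work(); }
                    finally { slots.Release(); done.Signal(); }
                });
            }
            done.Wait();                       // wait for the last batch
        }
    }
}
```

Restricting the list by task type, as suggested above, would mean one semaphore per category (disk, network, CPU) rather than the single `slots` gate shown here.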

梦忆晨望 2024-08-09 10:55:59

You should definitely check out the Concurrency and Coordination Runtime. One of their samples describes exactly what you're talking about: you call out to long-latency services, and the CCR efficiently allows some other task to run while you wait. It can handle a huge number of tasks because it doesn't need to spawn a thread for each one, though it will use all your cores if you ask it to.

回心转意 2024-08-09 10:55:59

Isn't this a conventional use of multi-threaded processing?

Have a look at patterns such as Reactor here

戏剧牡丹亭 2024-08-09 10:55:59

Writing it to use Async IO might be sufficient.

This can lead to nasty, hard-to-debug code without strong structure in the design.

友欢 2024-08-09 10:55:59

You should take a look at this:

http://www.replicator.org/node/80

This should do exactly what you want. It is a hack, though.

柠檬色的秋千 2024-08-09 10:55:59

Some more information about the "Reactive" pattern (as mentioned by another poster) with respect to an implementation in .NET; aka "Linq to Events"

http://themechanicalbride.blogspot.com/2009/07/introducing-rx-linq-to-events.html

-Oisin

晌融 2024-08-09 10:55:59

In fact, if you use one thread per task, you will lose the game. Think about why Node.js can support a huge number of connections: it uses a small number of threads with async I/O. The async and await keywords can help with this.

foreach (var task in tasks)
{
    await SendAsync(task.value);
    await ReadAsync();
}

SendAsync() and ReadAsync() are placeholder functions for async I/O calls.

Task parallelism is also a good choice. But I am not sure which one is faster; you can test both in your case.
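For the original >10000-task scenario, the async approach is usually paired with a cap on how many operations are in flight at once. A minimal sketch, assuming each task is wrapped as a `Func<Task>` and using `SemaphoreSlim` as the gate (both are assumptions for illustration, not part of the original answer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static class AsyncRunner
{
    // Starts all tasks, but lets only maxInFlight await their I/O at once.
    // Awaits release the thread instead of blocking it, so thousands of
    // pending operations need only a handful of threads.
    public static async Task RunAllAsync(IEnumerable<Func<Task>> tasks, int maxInFlight)
    {
        using (var gate = new SemaphoreSlim(maxInFlight))
        {
            var running = tasks.Select(async t =>
            {
                await gate.WaitAsync();
                try { await t(); }
                finally { gate.Release(); }
            });
            await Task.WhenAll(running);
        }
    }
}
```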

友欢 2024-08-09 10:55:59

Yes, of course you can. You just need to build a dispatcher mechanism that calls back on a lambda you provide and goes into a queue. All the code I write in Unity uses this approach, and I never use coroutines; I wrap methods that use coroutines (such as the WWW stuff) just to get rid of them.

In theory, coroutines can be faster because there is less overhead. In practice, they introduce new syntax to a language to do a fairly trivial task, and you can't follow the stack trace properly on an error in a coroutine, because all you'll see is ->Next. You'd then have to implement the ability to run the tasks in the queue on another thread. However, there are parallel functions in the latest .NET, and you'd essentially be writing similar functionality. It wouldn't be many lines of code, really.

If anyone is interested I can send the code; I don't have it on me.
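The dispatcher idea above amounts to a queue of callbacks drained by a pump. A minimal sketch (the names are made up for illustration; this is not the answerer's actual code, which was never posted):

```csharp
using System;
using System.Collections.Generic;

class Dispatcher
{
    readonly Queue<Action> queue = new Queue<Action>();

    // Callers hand in a lambda to be invoked later, in queue order.
    public void Post(Action callback) => queue.Enqueue(callback);

    // Drain the queue; a callback may Post further work, e.g. the
    // continuation to run once a long-latency reply comes back.
    public void Pump()
    {
        while (queue.Count > 0)
            queue.Dequeue()();
    }
}
```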
