.NET IOCP thread pool overhead for async UDP operations

Posted 2024-11-09 18:01:00


I have developed a VoIP media server which exchanges RTP packets with remote SIP endpoints. It needs to scale well - and while I was initially concerned that my C# implementation would not come close to the C++ version it replaces, I have used various profilers to hone the implementation and performance is pretty close.

I have eliminated most object allocations by creating pools of reusable objects, I am using ReceiveFromAsync and SendToAsync to send/receive datagrams, and I am using producer/consumer queues to pass RTP packets around the system. On a machine with 2 x 2.4GHz Xeon processors I can now handle about 1000 concurrent streams, each sending/receiving 50 packets per second. However, the iterative profile/tweak/profile cycle has me hooked - and I am sure there is more efficiency in there somewhere!
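For reference, here is a minimal sketch of the kind of SocketAsyncEventArgs pool described above; the class name, the use of ConcurrentStack, and the buffer sizing are my own illustrative assumptions, not the actual server code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

// Illustrative pool of pre-allocated SocketAsyncEventArgs objects.
// Pooling the args (and their buffers) up front avoids per-packet allocations.
class SocketAsyncEventArgsPool
{
    private readonly ConcurrentStack<SocketAsyncEventArgs> _items =
        new ConcurrentStack<SocketAsyncEventArgs>();

    public SocketAsyncEventArgsPool(int count, int bufferSize,
                                    EventHandler<SocketAsyncEventArgs> completed)
    {
        for (int i = 0; i < count; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize); // one buffer per args, reused for its lifetime
            args.Completed += completed;                         // raised only for asynchronous completions
            _items.Push(args);
        }
    }

    public bool TryRent(out SocketAsyncEventArgs args)
    {
        return _items.TryPop(out args);
    }

    public void Return(SocketAsyncEventArgs args)
    {
        _items.Push(args);
    }
}
```

The same idea applies on the send path: rent an args, call SendToAsync, and return it to the pool once the operation has completed.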

The event that triggers processing is the Completed delegate being called on a SocketAsyncEventArgs - which in turn sends the RTP packets through the processing pipeline.
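As a rough illustration of that flow, the sketch below shows a typical ReceiveFromAsync loop built around the Completed delegate. The RtpReceiver class, the per-packet copy, and the BlockingCollection hand-off are assumptions made to keep the example self-contained; a server like the one described would reuse pooled buffers instead of allocating per packet.

```csharp
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;

// Sketch of the receive side of the pipeline. The Completed delegate fires on an
// IOCP thread-pool thread only when the operation completes asynchronously.
class RtpReceiver
{
    // Stand-in for the producer/consumer queue feeding the processing pipeline.
    private readonly BlockingCollection<byte[]> _pipeline = new BlockingCollection<byte[]>();

    public void Start(Socket socket, SocketAsyncEventArgs args)
    {
        args.UserToken = socket;                                 // remember the socket for the callback
        args.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);  // required by ReceiveFromAsync; updated per datagram
        args.Completed += OnCompleted;
        PostReceive(socket, args);
    }

    private void PostReceive(Socket socket, SocketAsyncEventArgs args)
    {
        // ReceiveFromAsync returns false when it completed synchronously; in that case
        // Completed is NOT raised, so process the result inline and post again.
        while (!socket.ReceiveFromAsync(args))
            ProcessDatagram(args);
    }

    private void OnCompleted(object sender, SocketAsyncEventArgs args)
    {
        ProcessDatagram(args);
        PostReceive((Socket)args.UserToken, args);               // immediately re-post the receive
    }

    private void ProcessDatagram(SocketAsyncEventArgs args)
    {
        if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
            return;

        // Copy out of the receive buffer so the args can be reused straight away;
        // production code would rent this packet buffer from a pool instead.
        var packet = new byte[args.BytesTransferred];
        Buffer.BlockCopy(args.Buffer, args.Offset, packet, 0, args.BytesTransferred);
        _pipeline.Add(packet);                                   // consumers drain this on their own threads
    }
}
```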

The remaining frustration is that there seems to be significant overhead in the IOCP threadpool. The profiler shows that only 72% of Inclusive Sample time is in 'my code' - the remaining time appears to be threadpool overhead (stack frames below).

So, my questions are:

  1. Am I missing something in my understanding?
  2. Is it possible to reduce this overhead? (A small tuning sketch follows this list.)
  3. Is it possible to replace the threadpool used by the async socket functions with a custom, lightweight threadpool that has less overhead?
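On question 2, one knob that does exist in the BCL is ThreadPool.SetMinThreads/SetMaxThreads, whose second parameter controls the completion-port (IOCP) threads. This does not replace the pool, but it can change how eagerly threads are made available. A hedged sketch follows; the value 16 is purely illustrative, not a recommendation:

```csharp
using System;
using System.Threading;

// Sketch of inspecting and tuning the shared thread pool used for IOCP completions.
class ThreadPoolTuning
{
    static void Main()
    {
        int workerThreads, completionPortThreads;

        ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("min worker = {0}, min IOCP = {1}", workerThreads, completionPortThreads);

        // Raising the minimum lets the pool ramp up IOCP threads without waiting for
        // its delayed thread-injection heuristic under bursty completion load.
        ThreadPool.SetMinThreads(workerThreads, 16);

        ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("max worker = {0}, max IOCP = {1}", workerThreads, completionPortThreads);
    }
}
```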
100% MediaGateway
95.35% Thread::intermediateThreadProc(void *)
88.37% ThreadNative::SetDomainLocalStore(class Object *)
88.37% BindIoCompletionCallbackStub(unsigned long,unsigned long,struct _OVERLAPPED *)
86.05% BindIoCompletionCallbackStubEx(unsigned long,unsigned long,struct _OVERLAPPED *,int)
86.05% ManagedThreadBase::ThreadPool(struct ADID,void (*)(void *),void *)
86.05% CrstBase::Enter(void)
86.05% AppDomainStack::PushDomain(struct ADID)
86.05% Thread::ShouldChangeAbortToUnload(class Frame *,class Frame *)
86.05% AppDomainStack::ClearDomainStack(void)
83.72% ThreadPoolNative::CorWaitHandleCleanupNative(void *)
83.72% __CT??_R0PAVEEArgumentException@@@84
83.72% DispatchCallDebuggerWrapper(unsigned long *,unsigned long,unsigned long *,unsigned __int64,void *,unsigned __int64,unsigned int,unsigned char *,class ContextTransitionFrame *)
83.72% DispatchCallBody(unsigned long *,unsigned long,unsigned long *,unsigned __int64,void *,unsigned __int64,unsigned int,unsigned char *)
83.72% MethodDesc::EnsureActive(void)
81.40% _CallDescrWorker@20
81.40% System.Threading._IOCompletionCallback.PerformIOCompletionCallback(uint32,uint32,valuetype System.Threading.NativeOverlapped*)
76.74% System.Net.Sockets.SocketAsyncEventArgs.CompletionPortCallback(uint32,uint32,valuetype System.Threading.NativeOverlapped*)
76.74% System.Net.Sockets.SocketAsyncEventArgs.FinishOperationSuccess(valuetype System.Net.Sockets.SocketError,int32,valuetype System.Net.Sockets.SocketFlags)
74.42% System.Threading.ExecutionContext.Run(class System.Threading.ExecutionContext,class System.Threading.ContextCallback,object)
72.09% System.Net.Sockets.SocketAsyncEventArgs.ExecutionCallback(object)
72.09% System.Net.Sockets.SocketAsyncEventArgs.OnCompleted(class System.Net.Sockets.SocketAsyncEventArgs)


Comments (2)

我最亲爱的 2024-11-16 18:01:00


50,000 packets per second on Windows is pretty good; I would say that the hardware and operating system are more significant issues for scaling. Different network interfaces impose different limits: Intel Server NICs are predominantly high performance with good drivers across platforms, whereas Broadcom does not have a good record on Windows compared with Linux. The advanced core networking APIs of Windows are only enabled if the drivers support the features, and Broadcom has shown itself to be a company that only enables advanced features on newer hardware, even where other operating systems support them on older devices.

I would start by investigating multiple NICs, for example a quad-port Intel Server NIC, and use the Windows advanced networking APIs to bind one NIC to each processing core. In theory you could send 50,000 packets through one NIC and 50,000 through another.

http://msdn.microsoft.com/en-us/library/ff568337(v=VS.85).aspx

However, it seems that you don't really have a baseline against which to measure the efficiency of the code. I would expect to see comparisons with the server running no VoIP payload, running over TCP instead of UDP, and running on other operating systems, to compare IP stack and API efficiency.

蓝颜夕 2024-11-16 18:01:00


Just to add some info - I recently discovered there is a bug in the IOCP thread pool that might influence your performance: see point 3 of the 'cause' section in http://support.microsoft.com/kb/2538826. It might apply in your case.
