How to get better performance from TIBCO RV in C#?

Posted 2024-07-11 05:12:38


I'm using TIBCO RV .NET API (TIBCO.Rendezvous.dll).

Do you know if there is a better way, in terms of performance, to receive and read messages from an RV channel in C#? I found the Message type - the logical wrapper over an RV message - to be quite heavy. Getting a field by name or by index can be pretty slow, especially when you consider that this is a recurrent/high-frequency operation.

Any ideas?

2 Answers

倾城泪 2024-07-18 05:12:38


The main issues with the C# wrapper are that it:

  • allocates on every field access (the C/C++ APIs expose strongly typed fast accessors for the most common primitives),
  • marks references on every field access (which you only need if you plan to subsequently update the fields and wish to avoid leaking memory), and
  • requires converting the .NET string into a C-style ASCII string for every field lookup by name.

Those aspects will dwarf the overhead of the underlying field/name-based lookups. The lookups themselves can be avoided when you know you need to look at every field in the message: iterate through the fields in order instead. This is fast in C/C++ since it works linearly through memory and is thus cache-friendly.
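A minimal sketch of that in-order iteration from C#, assuming your version of TIBCO.Rendezvous.dll exposes a `FieldCount` property and a `GetFieldByIndex(uint)` method on `Message` (check the API reference for your release; names may differ):

```csharp
// Sketch only: walk the message positionally instead of looking fields
// up by name. FieldCount / GetFieldByIndex are assumed from the
// TIBCO.Rendezvous .NET API -- verify against your dll version.
using TIBCO.Rendezvous;

static void ReadAllFields(Message msg)
{
    uint count = msg.FieldCount;
    for (uint i = 0; i < count; i++)
    {
        MessageField field = msg.GetFieldByIndex(i);
        // Dispatch on the wire type rather than probing by name.
        switch (field.Type)
        {
            case FieldType.I32:
                int i32 = (int)field.Value;
                // ... handle the integer field
                break;
            case FieldType.F64:
                double f64 = (double)field.Value;
                // ... handle the double field
                break;
            // ... other FieldType cases as your schema requires
        }
    }
}
```

This still pays the wrapper's per-access allocation cost, but it removes the per-field name-to-ASCII conversion and the name-based lookup.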

Personally, we wrapped the C++ API directly with C++/CLI (at the time the .NET library was of substandard quality). This is complex to get right (especially if you have multiple app domains) but gives close to the same performance as the C++ API (which is an incredibly thin wrapper over the C API).

If your profiling tells you that the allocations within message access are preventing your app from running at the speed you need/want, I'm afraid you will have a problem with the .NET library. You can use reflection to get at the underlying IntPtr to the native message, and then use the very same externally defined functions in MessageToolbox (an internal class in the dll) that drop down to the native API; the cost of accessing the internal field once per message may be offset by the faster field lookups. This is obviously fragile and hard to maintain, but if you find it worth it compared to reimplementing their wrapper in full, it might help.
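The reflection step the answer describes might look like the sketch below. The private field name `"handle"` is a guess, not the documented name; inspect TIBCO.Rendezvous.dll with a decompiler to find the actual field holding the native pointer:

```csharp
// Sketch only: extract the native message handle from the managed
// wrapper. The field name "handle" is hypothetical -- decompile the
// dll to find the real one. This breaks on any dll update.
using System;
using System.Reflection;
using TIBCO.Rendezvous;

static IntPtr GetNativeHandle(Message msg)
{
    FieldInfo fi = typeof(Message).GetField(
        "handle",                                   // hypothetical name
        BindingFlags.Instance | BindingFlags.NonPublic);
    if (fi == null)
        throw new InvalidOperationException(
            "Field layout changed; re-inspect the dll.");
    return (IntPtr)fi.GetValue(msg);
}
```

With the IntPtr in hand you can P/Invoke the same native entry points the internal toolbox class declares (the C API's accessors such as `tibrvMsg_GetI32`), bypassing the allocating managed accessors entirely.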

So要识趣 2024-07-18 05:12:38


I have seen the same thing. In my experience, accessing fields in C# Rv messages is very slow compared with accessing them in C++. So you want to avoid adding multiple fields to, and reading multiple fields from, the message.

One solution is not to use Rv's own message serialization. That is, don't add lots of fields with Message.AddField() or get them with Message.GetField(). Instead, serialize your data to an Opaque type (a binary buffer, i.e. a byte array), and add that single field to the message.

If all you do is read and write one field, there is very little overhead. And you should be able to serialize and deserialize the data in your own code pretty fast.
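A sketch of the single-opaque-field approach, assuming `Message.AddField(string, byte[])` adds an opaque field and `GetField(string).Value` returns the byte array, per the TIBCO.Rendezvous .NET API as I recall it (verify against your dll); the field name `"data"` and the payload shape are made up for the example:

```csharp
// Sketch only: serialize everything into one opaque byte[] field
// instead of many typed fields.
using System.IO;
using TIBCO.Rendezvous;

static Message Pack(int id, double price, string symbol)
{
    var ms = new MemoryStream();
    using (var w = new BinaryWriter(ms))
    {
        w.Write(id);
        w.Write(price);
        w.Write(symbol);
    }
    var msg = new Message();
    // MemoryStream.ToArray() still works after the writer closes it.
    msg.AddField("data", ms.ToArray());     // single opaque field
    return msg;
}

static (int Id, double Price, string Symbol) Unpack(Message msg)
{
    byte[] buffer = (byte[])msg.GetField("data").Value;
    using var r = new BinaryReader(new MemoryStream(buffer));
    return (r.ReadInt32(), r.ReadDouble(), r.ReadString());
}
```

You pay the wrapper's field-access overhead exactly once per message, and the BinaryReader/BinaryWriter round trip in your own code is cheap by comparison.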
