Performance comparison of protobuf-net vs DataContractSerializer over WCF

Posted on 2024-11-03 12:06:28

I tested protobuf serialization, and it seems that below a certain number of objects it is slower than regular DataContract serialization. The transmission size is bigger with DataContractSerializer, but serialization and deserialization are faster with DataContractSerializer.

Do you think this is normal, or did I make a mistake?

[DataContract]
public partial class Toto
{
    [DataMember]
    public string NomToto { get; set; }

    [DataMember]
    public string PrenomToto { get; set; }
} 

Here is my DataContract class; the protobuf one is the same:

[ProtoContract]
public partial class Titi
{
    [ProtoMember(1)]
    public string NomTiti { get; set; }

    [ProtoMember(2)]
    public string PrenomTiti { get; set; }
}

Here is the method of my WCF service with protobuf (the DataContract version is the same, without the MemoryStream):

public class TitiService : ITitiService
{
    public byte[] GetAllTitis()
    {
        List<Titi> titiList = new List<Titi>();
        for (int i = 0; i < 20000; i++)
        {
            var titi = new Titi
            {
                NomTiti = "NomTiti" + i,
                PrenomTiti = "PrenomTiti" + i
            };
            titiList.Add(titi);
        }
        var ms = new MemoryStream();
        Serializer.Serialize(ms, titiList);

        byte[] arr = ms.ToArray();
        return arr;
    }
}

The service with DataContract:

public class TotoService : ITotoService
{
    public List<Toto> GetAllTotos()
    {
        List<Toto> totoList = new List<Toto>();
        for (int i = 0; i<20000; i++)
        {
            var toto = new Toto
            {
                NomToto = "NomToto" + i,
                PrenomToto = "PrenomToto" + i
            };
            totoList.Add(toto);
        }
        return totoList;
    }
}

Here is the client call:

public partial class Program
{
    static ProtobufTestAzure.Client.TitiService.TitiServiceClient TitiClient;
    static ProtobufTestAzure.Client.TotoService.TotoServiceClient TotoClient;

    public static void Main(string[] args)
    {
        Stopwatch stopwatch1 = new Stopwatch();
        Stopwatch stopwatch2 = new Stopwatch();
        Stopwatch stopwatch3 = new Stopwatch();

        stopwatch1.Start();

        TitiClient = new ProtobufTestAzure.Client.TitiService.TitiServiceClient();
        Byte[] titiByte = TitiClient.GetAllTitis();
        TitiClient.Close();

        stopwatch1.Stop();


        stopwatch2.Start();

        var ms = new MemoryStream(titiByte);
        List<Titi> TitiList = Serializer.Deserialize<List<Titi>>(ms);

        stopwatch2.Stop();

        Console.WriteLine(" ");

        stopwatch3.Start();

        TotoClient = new ProtobufTestAzure.Client.TotoService.TotoServiceClient();
        var TotoList = TotoClient.GetAllTotos();
        TotoClient.Close();

        stopwatch3.Stop();

        Console.WriteLine("Time elapse for reception (Protobuf): {0} ms ({1} éléments)", stopwatch1.ElapsedMilliseconds, TitiList.Count);
        Console.WriteLine("Time elapse for deserialization (Protobuf : {0} ms ({1} éléments)", stopwatch2.ElapsedMilliseconds, TitiList.Count);
        Console.WriteLine("Time elapse for réception (Datacontract Serialization) : {0} ms ({1} éléments)", stopwatch3.ElapsedMilliseconds, TotoList.Count);

        Console.ReadLine();
    }
}

And the result for 10000 objects:

Time elapse for reception (Protobuf): 3359 ms (10000 elements)
Time elapse for deserialization (Protobuf): 138 ms (10000 elements)
Time elapse for reception (Datacontract Serialization): 2200 ms (10000 elements)

I tested it with 20000 objects.
For the first call it gave me:

Time elapse for reception (Protobuf): 11258 ms (20000 elements)
Time elapse for deserialization (Protobuf): 133 ms (20000 elements)
Time elapse for reception (Datacontract Serialization): 3726 ms (20000 elements)

for the second call

Time elapse for reception (Protobuf): 2844 ms (20000 elements)
Time elapse for deserialization (Protobuf): 141 ms (20000 elements)
Time elapse for reception (Datacontract Serialization): 7541 ms (20000 elements)

for the third

Time elapse for reception (Protobuf): 2767 ms (20000 elements)
Time elapse for deserialization (Protobuf): 145 ms (20000 elements)
Time elapse for reception (Datacontract Serialization): 3989 ms (20000 elements)

After activating MTOM on the protobuf transfer, it gave me:

for the first call

Time elapse for reception (Protobuf): 3316 ms (20000 elements)
Time elapse for deserialization (Protobuf): 63 ms (20000 elements)
Time elapse for reception (Datacontract Serialization): 3769 ms (20000 elements)

for the second call

Time elapse for reception (Protobuf): 2279 ms (20000 elements)
Time elapse for deserialization (Protobuf): 57 ms (20000 elements)
Time elapse for reception (Datacontract Serialization): 3959 ms (20000 elements)
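
The original post does not show how MTOM was switched on; as a rough sketch, assuming a basicHttpBinding and a code-configured proxy (the endpoint address and the helper name below are illustrative, not taken from the project):

using System.ServiceModel;

// Sketch only: build the proxy over basicHttpBinding with MTOM encoding, so the
// byte[] returned by GetAllTitis() travels as a binary attachment instead of being
// base-64 encoded inside the XML body.
static ProtobufTestAzure.Client.TitiService.TitiServiceClient CreateMtomTitiClient()
{
    var binding = new BasicHttpBinding
    {
        MessageEncoding = WSMessageEncoding.Mtom,
        MaxReceivedMessageSize = 4 * 1024 * 1024 // room for the ~640 KB protobuf payload
    };
    var address = new EndpointAddress("http://localhost/TitiService.svc"); // hypothetical address
    return new ProtobufTestAzure.Client.TitiService.TitiServiceClient(binding, address);
}

The same effect can be achieved in app.config by setting messageEncoding="Mtom" on the basicHttpBinding element.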

I added this piece of code to get the object sizes:

        // Rough size check: BinaryFormatter for the DataContract list, raw byte count for protobuf.
        long totoSize;
        using (Stream s = new MemoryStream())
        {
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(s, totoList);
            totoSize = s.Length;
        }

        long titiSize = titiByte.Length;

It gave me 637780 bytes for protobuf and 1038236 for the DataContract list (measured with BinaryFormatter).
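
As a side note, BinaryFormatter writes its own binary format, so the 1038236 figure reflects that format rather than what DataContractSerializer itself emits. A minimal sketch of measuring the DataContractSerializer payload directly (a hypothetical helper, not part of the original code; it also ignores the SOAP envelope and the WCF message encoding):

using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

// Hypothetical helper: serialize the list with DataContractSerializer itself and
// report the length of the XML it produces.
static long MeasureDataContractSize(List<Toto> totoList)
{
    var serializer = new DataContractSerializer(typeof(List<Toto>));
    using (var ms = new MemoryStream())
    {
        serializer.WriteObject(ms, totoList);
        return ms.Length;
    }
}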
Call durations are better and more stable this morning:
first call
protobuf = 2498 ms
datacontract = 5085 ms

second call
protobuf = 3649 ms
datacontract = 3840 ms

third call
protobuf = 2498 ms
datacontract = 5085 ms

Answer by 唱一曲作罢 (2024-11-10 12:06:28):

Some factors that impact performance:

  • is the serializer prepared? This is automatic on the first use per-type; the first time through, it needs to do quite a bit of inspection etc. to figure out how your model works. You can offset this by calling Serializer.PrepareSerializer<YourType>() somewhere during startup (see the sketch after this list)
    • or as an alternative, in v2 (available as "alpha") you can pre-generate the serializer as a dll if you need the fastest possible cold-start performance
  • what is the transport? in particular with WCF, you need to keep in mind how your byte[] is encoded (this isn't a problem on sockets, of course); for example, can the transport use MTOM? or is it base-64 encoding the byte[]?
    • and note also that it is possible that Stream and byte[] are handled differently; if you can measure the bandwidth, you might want to try both
    • basic-http with MTOM enabled is my preference for WCF transports if absolute speed is your aim; or sockets if you want to get closer to the limit
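
As a concrete illustration of the first point above, a minimal sketch of warming the serializer up during startup, using the Titi type from the question (the helper class name is made up for illustration):

using ProtoBuf;

// Sketch: pay the one-off model-inspection cost once at startup, so the first timed
// service call no longer includes it.
public static class ProtobufWarmup
{
    public static void Run()
    {
        Serializer.PrepareSerializer<Titi>();
    }
}

Calling ProtobufWarmup.Run() once before the first request (for example at the top of Main, or when the service host starts) should move that cost out of the first measured call.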