How expensive is it to glue binaries together (list_to_binary)?
One process listens to a server on an async socket, and on each {tcp, Socket, Bin} message it takes its buffer and does:

Data = list_to_binary([Buffer, Bin]),
{next_state, 'READY', State#state{buffer = Data}}.

On some events it flushes the buffer:

'READY'({flush}, #state{buffer = Buffer} = State) ->
    {reply, {Buffer}, 'READY', State#state{buffer = <<>>}}.

Is it expensive? Maybe it is better just to build a list and call list_to_binary(lists:reverse()) once on flush?
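For reference, a minimal sketch of that second idea (the clause head for the tcp messages is assumed; only the record fields and flush shape come from the question): keep the buffer as a list of chunks, prepend each new binary, and build the final binary once at flush time.

'READY'({tcp, _Socket, Bin}, #state{buffer = Buffer} = State) ->
    %% Prepending to a list is O(1); the accumulated data is not copied.
    {next_state, 'READY', State#state{buffer = [Bin | Buffer]}};
'READY'({flush}, #state{buffer = Buffer} = State) ->
    %% Reverse once and let list_to_binary/1 flatten the iolist in one pass.
    Data = list_to_binary(lists:reverse(Buffer)),
    {reply, {Data}, 'READY', State#state{buffer = []}}.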
Comments (2)
Your first method appears to be much slower than your second method (by a factor of about 3000 on my platform).
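A minimal benchmark sketch along these lines (the module name, chunk size, and iteration count are assumptions, not the answerer's original test): it appends N chunks to a buffer with each strategy and compares the times reported by timer:tc/1.

-module(buffer_bench).
-export([run/0]).

run() ->
    Chunk = binary:copy(<<"x">>, 100),
    N = 10000,
    {T1, _} = timer:tc(fun() -> binary_append(N, Chunk, <<>>) end),
    {T2, _} = timer:tc(fun() -> list_append(N, Chunk, []) end),
    io:format("list_to_binary per message: ~p us~n"
              "accumulate list, flush once:  ~p us~n", [T1, T2]).

%% Strategy 1: rebuild the whole binary on every message.
binary_append(0, _Chunk, Acc) ->
    Acc;
binary_append(N, Chunk, Acc) ->
    binary_append(N - 1, Chunk, list_to_binary([Acc, Chunk])).

%% Strategy 2: prepend to a list and convert once at "flush".
list_append(0, _Chunk, Acc) ->
    list_to_binary(lists:reverse(Acc));
list_append(N, Chunk, Acc) ->
    list_append(N - 1, Chunk, [Chunk | Acc]).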
In current releases, the handling of binaries by the emulator has been significantly improved, so now you could also take the simpler path and generate the binary chunk by chunk:

State#state{buffer = <<Buffer/binary, Bin/binary>>}

I didn't test it against the other approach, but it shouldn't be bad. Performance between the different implementations will also likely depend on how many times you are performing this on the same buffer and how big each chunk is.
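Put back into the question's state function, that idea might look like the following sketch (the clause head for the tcp messages is assumed; only the flush clause and record fields come from the question):

'READY'({tcp, _Socket, Bin}, #state{buffer = Buffer} = State) ->
    %% Appending with the bit syntax benefits from the runtime's binary
    %% append optimization as long as Buffer is only ever grown here.
    {next_state, 'READY', State#state{buffer = <<Buffer/binary, Bin/binary>>}};
'READY'({flush}, #state{buffer = Buffer} = State) ->
    {reply, {Buffer}, 'READY', State#state{buffer = <<>>}}.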