Linux vs Windows runtime timing
I have an application that was ported from Windows to Linux. The same code now compiles under VS C++ and g++, but there is a performance difference between running it on Windows and running it on Linux. The application is a cache: it sits as a node between a server and a client, and it caches client requests and the server's responses in a list, so that when another client makes a request the server has already processed, the node responds itself instead of forwarding the request to the server.
When this node runs on Windows, the client gets everything it needs in about 7 seconds. But when the same node runs on Linux (Ubuntu 9.04), the client takes 35 seconds to start up. Every test starts from scratch. I'm trying to understand where this timing difference comes from. One odd data point: when the node runs on Linux inside a virtual machine hosted on Windows, the load time is around 7 seconds, just as it is on native Windows. So my impression is that the problem lies in the networking.
The node uses the UDP protocol for sending and receiving network data, with boost::asio as the implementation. I have tried changing every supported socket flag and the buffer sizes, but nothing helped.
Does anyone know why this is happening, or of any UDP-related network settings that might influence performance?
Thanks.
2 Answers
If you suspect a network problem, take a network capture (Wireshark is great for this kind of problem) and look at the traffic.
Find out where the time is being spent, either based on the network capture or based on the output of a profiler.
Once you know that, you're halfway to a solution.
These timing differences can depend on many factors, but the first that comes to mind is that you are using a modern Windows version. XP already had a prefetcher to keep recently used applications in memory, and in Vista this was optimized further: for each application you load, a prefetch file is created that matches how the application looks in memory. The next time you load the application, it should start a lot faster.
I don't know about Linux, but it may well need to load your app completely each time. You can compare the two systems much more fairly if you measure performance while the application is already running: leave it open (if your design allows that) and run the test again.
These differences in how the system optimizes memory are also consistent with what you see in the VM scenario.
Basically, if you rule out other running applications and run your application in high-priority mode, the performance should be close to equal, but it depends on whether you use operating-system-specific code, how you access the file system, how you use the UDP protocol, etc.
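On the "how you use the UDP protocol" point: one concrete OS-level difference worth checking is the Linux socket-buffer caps, which silently clamp whatever `SO_RCVBUF`/`SO_SNDBUF` the application requests. A quick read-only inspection (paths are standard Linux `/proc` entries; raising the limits needs root):

```shell
# Linux clamps per-socket buffer requests to these system-wide caps,
# regardless of what the application asks for via setsockopt().
cat /proc/sys/net/core/rmem_max      # hard cap on receive buffers (bytes)
cat /proc/sys/net/core/wmem_max      # hard cap on send buffers (bytes)
cat /proc/sys/net/core/rmem_default  # default receive buffer (bytes)

# To raise the receive cap (as root), e.g. to 8 MiB:
#   sysctl -w net.core.rmem_max=8388608
```

If these defaults are far below what the Windows box effectively uses, UDP datagrams can be dropped under burst load and the client ends up waiting on application-level retries.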