What is the best way to ping multiple network devices in parallel?
I poll a lot of devices on the network (more than 300) by iterative ping.
The program polls the devices sequentially, so it's slow.
I'd like to enhance the speed of polling.
There are a few ways to do this in Delphi 7:
- Each device has a thread doing ping. Manage threads manually.
- Learn and use Indy 10. Need examples.
- Use overlapped I/O based on window messages.
- Use completion ports based on events.
Which is faster and easier? Please provide some examples or links.
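For comparison, the thread-per-device idea from the first bullet can be sketched language-neutrally (Python below, since the pattern maps directly onto Delphi threads; the `ping_one` function is a hypothetical placeholder for a real ICMP call):

```python
import threading

def ping_one(address, results, lock):
    # Placeholder: a real implementation would send an ICMP echo request
    # here and record the reply or timeout.
    reply = f"reply from {address}"  # hypothetical result
    with lock:
        results[address] = reply

def ping_all(addresses):
    """Naive approach: one thread per device (does not scale well to 300+)."""
    results, lock = {}, threading.Lock()
    threads = [threading.Thread(target=ping_one, args=(a, results, lock))
               for a in addresses]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With 300+ devices this creates 300+ threads, which is exactly the cost the answers below try to avoid with pools or completion ports.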
Direct ICMP access is deprecated on Windows. Direct access to the ICMP protocol on Windows is controlled: due to malicious use of ICMP/ping/traceroute-style raw sockets, I believe that on some versions of Windows you will need to use Windows' own API. Windows XP, Vista, and Windows 7, in particular, don't let user programs access raw sockets.
I have used the canned-functionality in ICMP.dll, which is what some Delphi ping components do, but a comment below alerted me to the fact that this is considered "using an undocumented API interface".
Here's a sample of the main delphi ping component call itself:
I believe that most modern Ping component implementations are going to be based on a similar bit of code to the one above, and I have used it to run this ping operation in a background thread without any problems. (A demo program is included in the link below.)
Full sample source code for the ICMP.DLL based demo is here.
UPDATE A more modern IPHLPAPI.DLL sample is found at About.com here.
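For reference, here is a hedged sketch of what such ping code does at the packet level: building an ICMP echo request (type 8, code 0) and computing the RFC 1071 Internet checksum. This is packet construction only; actually sending it still requires a raw socket (restricted, as noted above) or the ICMP.dll/IPHLPAPI route:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"                                # pad to even length
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total >> 16) + (total & 0xFFFF)           # fold carry bits
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP echo request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field zeroed
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A correctly checksummed packet has the property that the checksum computed over the entire packet (checksum field included) is zero, which is how receivers validate it.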
Here's an article from Delphi3000 showing how to use IOCP to create a thread pool. I am not the author of this code, but the author's information is in the source code.
I'm re-posting the comments and code here:
Do you need a response from every machine on the network, or are these 300 machines just a subset of the larger network?
If you need a response from every machine, you could consider using a broadcast address or multicast address for your echo request.
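To illustrate the broadcast idea with Python's standard `ipaddress` module (the /24 subnet below is just an example):

```python
import ipaddress

# Example subnet; substitute your own network and prefix length.
net = ipaddress.IPv4Network("192.168.1.0/24")
print(net.broadcast_address)  # 192.168.1.255

# A single echo request sent to this address can, in principle, solicit
# replies from every host on the segment. Note, however, that many stacks
# (including Windows) and most routers suppress responses to broadcast
# pings by default, so this may undercount live hosts.
```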
Please give "chknodes" a try: a parallel ping for Linux that sends a single ping to every node of your network. It will also do a DNS reverse lookup and request an HTTP response if told to. It's written entirely in bash, i.e. you can easily inspect it or modify it to your needs. Here is a printout of the help:
chknodes -h
chknodes ---- fast parallel ping
chknodes [-l|--log] [-h|--help] [-H|--http] [-u|--uninstall] [-v|--version] [-V|--verbose]
-l | --log Log to file
-h | --help Show this help screen
-H | --http Check also http response
-n | --names Get also host names
-u | --uninstall Remove installation
-v | --version Show version
-V | --verbose Show each ip address pinged
You need to give it execute rights (as with any sh/bash script) in order to run it:
chmod +x chknodes
On the first run, i.e.
./chknodes
it will suggest installing itself to /usr/local/bin/chknodes; after that, giving just
chknodes
will be enough. You can find it here:
www.homelinuxpc.com/download/chknodes
Flooding the network with ICMP is not a good idea.
You might want to consider some kind of thread pool: queue up the ping requests and have a fixed number of threads service them.
Personally I would go with IOCP. I'm using that very successfully for the transport implementation in NexusDB.
If you want to perform 300 send/receive cycles using blocking sockets and threads in parallel, you end up needing 300 threads.
With IOCP, after you've associated the sockets with the IOCP, you can issue the 300 send operations, and they will return instantly, before the operation has completed. As operations complete, so-called completion packets are queued to the IOCP. You then have a pool of threads waiting on the IOCP, and the OS wakes them up as the completion packets come in. In reaction to a completed send operation you can then issue the receive operation. Receive operations also return instantly and, once actually completed, are queued to the IOCP.
The really special thing about an IOCP is that it knows which threads belong to it and which are currently processing completion packets. The IOCP only wakes up new threads if the total number of active threads (those not in a kernel-mode wait state) is lower than the concurrency number of the IOCP (by default equal to the number of logical cores on the machine). Also, if threads are waiting on the IOCP for completion packets (packets that have been queued but not yet dispatched because the number of active threads equalled the concurrency number), then the moment one of the threads currently processing a completion packet enters a kernel-mode wait state for any reason, one of the waiting threads is started.
Threads returning to the IOCP pick up completion packets in LIFO order. That is, if a thread is returning to the IOCP and there are completion packets still waiting, that thread directly picks up the next completion packet, instead of being put into a wait state while the thread that has waited longest is woken up.
Under optimal conditions, you will have a number of threads equal to the number of available cores running concurrently (one on each core), picking up the next completion package, processing it, returning to the IOCP and directly picking up the next completion package, all without ever entering a kernel mode wait state or a thread context switch having to take place.
If you had 300 threads and blocking operations instead, not only would you waste at least 300 MB of address space (for the reserved stack space), but you would also get constant thread context switches as one thread enters a wait state (waiting for a send or receive to complete) and the next thread with a completed send or receive wakes up. – Thorsten Engler 12 hours ago
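Python's standard library has no direct IOCP binding (on Windows, asyncio's ProactorEventLoop uses IOCP under the hood), but the dispatch pattern described above, a small fixed pool of workers blocking on a shared completion queue, can be sketched portably. Note one deliberate simplification: `queue.Queue` wakes waiters in FIFO order, whereas a real IOCP dispatches LIFO to keep hot threads running:

```python
import queue
import threading

NUM_WORKERS = 4      # stands in for the IOCP concurrency number
SENTINEL = object()  # shutdown marker, one per worker

def run_completion_pool(packets):
    """Emulate the IOCP pattern: completed operations enqueue 'completion
    packets'; a fixed pool of threads dequeues and processes them."""
    port = queue.Queue()  # the 'completion port'
    processed, lock = [], threading.Lock()

    def worker():
        while True:
            pkt = port.get()       # block until a completion packet arrives
            if pkt is SENTINEL:
                return
            with lock:             # 'process' the completed operation
                processed.append(pkt)

    threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for p in packets:              # completed I/O operations queue packets
        port.put(p)
    for _ in threads:              # one sentinel per worker to shut down
        port.put(SENTINEL)
    for t in threads:
        t.join()
    return processed
```

However many operations are in flight, only `NUM_WORKERS` threads ever exist, which is the core economy the answer describes.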