Multi-port scanning using an external tool, subprocess.Popen and threads

I am using a port scanner to scan my subnet. Unfortunately, the port scanner can only scan one port of one host at a time, and it has a 1 second timeout for unreachable hosts. The scanner (being an outside program) has to be run from subprocess.Popen(), and to speed things up, so that I can send multiple probes while earlier ones are still waiting for replies, I use threads. The problem shows up on a complete /24 subnet scan with a large number of threads: some of the actually open ports are reported as closed. I suspect the output somehow gets garbled. Note that this does not happen if I scan fewer hosts or one host at a time.

The following code is my attempt to create a pool of threads that takes an IP address and runs a 'sequential' port scan over the defined ports. Once all specified ports are scanned, it picks up the next IP from the list.

        while True:                            # busy-poll until a thread token is free
            if not thread_queue.empty():
                try:
                    hst = ip_iter.next()       # next IP address to scan
                except StopIteration:
                    break                      # no targets left
                m = thread_queue.get()         # claim a token from the pool
                l = ThreadWork(self, hst, m)   # scan this IP in its own thread
                l.start()
        while open_threads != 0:               # busy-wait until every worker finishes
            pass
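
As an aside, here is a minimal sketch of the pool idea turned the other way around: a fixed set of worker threads pulls IPs from a queue, and join() signals completion, which removes both the token queue and the open_threads busy-wait. It uses Python 3's queue module (Queue in Python 2), and scan_host is a placeholder for the per-host scan, not part of the original code.

    import queue
    import threading

    def worker(ip_queue, scan_host, results, results_lock):
        while True:
            try:
                host = ip_queue.get_nowait()   # next IP, or stop once the queue is drained
            except queue.Empty:
                return
            res = scan_host(host)              # placeholder: sequential scan of one host
            with results_lock:                 # workers append their results under a lock
                results.append(res)

    def scan_subnet(hosts, scan_host, num_threads=10):
        ip_queue = queue.Queue()
        for h in hosts:
            ip_queue.put(h)
        results, results_lock = [], threading.Lock()
        workers = [threading.Thread(target=worker,
                                    args=(ip_queue, scan_host, results, results_lock))
                   for _ in range(num_threads)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()                           # replaces the open_threads busy-wait
        return results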

This fragment sets up the thread queue:

        thread_list = [x for x in range(num_threads)]
        for t in thread_list:
            thread_queue.put(str(t))
        ip_iter=iter(self.final_target)

In ThreadWork I keep a count of open threads (since thread_queue.empty() proved to be unreliable, I had to resort to this crude approach):

class ThreadWork(threading.Thread):
    def __init__(self, i, hst, thread_no):
        global open_threads
        threading.Thread.__init__(self)
        self.host = hst
        self.ptr = i                      # back-reference to the scanner object
        self.t = thread_no                # token taken from thread_queue
        lock.acquire()
        open_threads = open_threads + 1   # count this worker as open
        lock.release()

    def run(self):
        global thread_queue
        global open_threads
        global lock
        user_log.info("Executing sinfp for IP Address : %s" % self.host)
        # Scan all ports of this host sequentially and append the result object.
        self.ptr.result.append(SinFpRes(self.host, self.ptr.init_ports,
                                        self.ptr.all_ports, self.ptr.options,
                                        self.ptr.cf))
        lock.acquire()
        open_threads = open_threads - 1   # this worker is done
        lock.release()
        thread_queue.put(self.t)          # return the token for reuse
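
If the one-thread-per-host structure of ThreadWork is kept, a threading.Semaphore plus Thread.join() could replace both the token queue and the open_threads counter. The sketch below is illustrative only; scan_host stands in for the per-host scan and is not part of the original code.

    import threading

    MAX_CONCURRENT = 10                            # illustrative cap, like num_threads
    slots = threading.Semaphore(MAX_CONCURRENT)

    class HostScan(threading.Thread):
        """One thread per host, as in ThreadWork, but without the token queue."""
        def __init__(self, host, scan_host, results, results_lock):
            threading.Thread.__init__(self)
            self.host = host
            self.scan_host = scan_host
            self.results = results
            self.results_lock = results_lock

        def run(self):
            try:
                res = self.scan_host(self.host)    # placeholder per-host scan
                with self.results_lock:
                    self.results.append(res)
            finally:
                slots.release()                    # free a slot for the next host

    def scan_all(hosts, scan_host):
        results, results_lock = [], threading.Lock()
        started = []
        for host in hosts:
            slots.acquire()                        # blocks while MAX_CONCURRENT scans run
            t = HostScan(host, scan_host, results, results_lock)
            t.start()
            started.append(t)
        for t in started:
            t.join()                               # replaces the open_threads busy-wait
        return results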

The call to SinFpRes creates a result object for one IP and initiates sequential scanning of the ports for that IP only.
The actual per-port scan looks like this:

        com_string = '/usr/local/sinfp/bin/sinfp.pl ' + self.options + ' -ai ' + str(self.ip) + ' -p ' + str(p)
        args = shlex.split(com_string)
        # Run the external scanner for one port and capture its stdout.
        self.result = subprocess.Popen(args, stdout=subprocess.PIPE).communicate()[0]
        self.parse(p)
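
A slightly more defensive way to build the same command is to pass the fixed arguments as list elements and only shlex.split() the free-form options string, so the IP and port are never re-tokenised. run_sinfp below is a hypothetical helper, not part of the original code.

    import shlex
    import subprocess

    def run_sinfp(ip, port, options):
        # Fixed arguments go in as list elements; only the options string is split.
        args = (['/usr/local/sinfp/bin/sinfp.pl']
                + shlex.split(options)
                + ['-ai', str(ip), '-p', str(port)])
        proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, _ = proc.communicate()      # each Popen call has its own pipes
        return out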

The parse function then uses the output stored in self.result to record the result for that port. The aggregate of all the ports constitutes the scan result for one IP.

Calling this code with 10 threads gives accurate output (compared against nmap). With 15 threads an occasional open port is missed; with 20 threads more open ports are missed; with 50 threads many ports are missed.

P.S. - As a first-timer, my code is quite convoluted. Apologies to the purists.

P.P.S. - Even the threaded port scan takes 15 minutes for an entire class C subnet with barely 20 ports scanned. I was wondering if I should move this code to another language and use Python only to parse the results. Could somebody suggest a language?
Note: I am exploring the shell option shown by S.Lott, but manual processing is required before dumping the output into a file.


3 Answers

恏ㄋ傷疤忘ㄋ疼 2024-10-15 06:08:47

Use the shell

 for h in host1 host2 host3
 do
     scan $h >$h.scan &
 done
 wait                    # let all background scans finish before combining
 cat *.scan >all.scan

This will scan the entire list of hosts all at the same time, each in a separate process. No threads.

Each scan will produce a .scan file. You can then cat all the .scan files into a massive all.scan file for further processing or whatever it is you're doing.
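
If it helps to drive the same fan-out from Python rather than the shell, a rough equivalent might look like the sketch below; 'scan' and the host list are placeholders, not real commands from the thread.

    import subprocess

    hosts = ['host1', 'host2', 'host3']        # placeholder host list, as in the shell loop
    procs = []
    for h in hosts:
        outfile = open('%s.scan' % h, 'w')
        # One scanner process per host, each writing its own .scan file.
        procs.append((subprocess.Popen(['scan', h], stdout=outfile), outfile))

    for proc, outfile in procs:
        proc.wait()                            # same role as the shell 'wait'
        outfile.close()

    with open('all.scan', 'w') as combined:    # same role as "cat *.scan >all.scan"
        for h in hosts:
            with open('%s.scan' % h) as f:
                combined.write(f.read())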

祁梦 2024-10-15 06:08:47

Why don't you try it?

(Answer: No, they will each have their own pipe.)
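
For what it's worth, a small experiment along these lines (assumed, not from the original thread) shows that Popen objects started from several threads each get their own stdout pipe:

    import subprocess
    import threading

    outputs = {}
    lock = threading.Lock()

    def probe(tag):
        # Each call creates an independent Popen object with its own stdout pipe.
        out = subprocess.Popen(['echo', tag], stdout=subprocess.PIPE).communicate()[0]
        with lock:
            outputs[tag] = out

    threads = [threading.Thread(target=probe, args=('probe-%d' % i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(outputs)   # every value contains only its own tag, nothing interleaved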

走过海棠暮 2024-10-15 06:08:47

Use Perl instead of Python. The program (SinFP) is written in Perl, so you can modify its code to suit your needs.
