How to capture network packets into MySQL

Published 2024-12-04 04:42:48


I'm going to design a network analyzer for WiFi (802.11).
Currently I use tshark to capture and parse the WiFi frames, then pipe the output to a Perl script that stores the parsed information in a MySQL database.

I just found out that I miss a lot of frames in this process. I checked, and the frames seem to be lost in the pipe (when the output is delivered to perl to get stored in MySQL).
Here is how it goes:

(Tshark) -------frames are lost----> (Perl) --------> (MySQL)

This is how I pipe the output of tshark to the script:

sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length | perl tshark-sql-capture.pl 

This is a simple template of the Perl script I use (tshark-sql-capture.pl):

# prepare the MySQL connection
use DBI;
my $dns = "DBI:mysql:capture;host=localhost";
my $dbh = DBI->connect($dns, "user", "pass");
my $db = "captured";

while (<STDIN>) {
    chomp($data = <STDIN>);
    ($time, $frame_len, $cap_len, $radiotap_len) = split "  ", $data;
    my $sth = $dbh-> prepare("INSERT INTO $db VALUES (str_to_date('$time','%M %d, %Y %H:%i:%s.%f'), '$frame_len', '$cap_len', '$radiotap_len'\n)" );
    $sth->execute;
}

#Terminate MySQL
$dbh->disconnect;

Any idea that helps to improve the performance is appreciated. Or maybe there is an alternative mechanism that can do better.
Right now my performance is about 50%, meaning I can store in MySQL around half of the packets I've captured.


Comments (3)

执笔绘流年 2024-12-11 04:42:48


Things written into a pipe don't get lost; what's probably really going on is that tshark tries to write to the pipe, but perl+mysql is too slow to process the input, so the pipe buffer fills up, the write would block, and tshark just drops the packets.

The bottleneck could be either MySQL or Perl itself, but it's probably the DB. Check CPU usage and measure the insert rate. Then pick a faster DB or write to multiple DBs. You can also try batch inserts and increasing the size of the pipe buffer.

Update

while (<STDIN>)

this reads a line into $_, which you then ignore; the chomp($data = <STDIN>) inside the loop reads a second line, so every other line of input is thrown away.

左秋 2024-12-11 04:42:48


For pipe problems, you can improve packet capture with GULP: http://staff.washington.edu/corey/gulp/

From the man page:

1) reduce packet loss of a tcpdump packet capture:
      (gulp -c works in any pipeline as it does no data interpretation)

        tcpdump -i eth1 -w - ... | gulp -c > pcapfile
      or if you have more than 2, run tcpdump and gulp on different CPUs
        taskset -c 2 tcpdump -i eth1 -w - ... | gulp -c > pcapfile

      (gulp uses CPUs #0,1 so use #2 for tcpdump to reduce interference)
日记撕了你也走了 2024-12-11 04:42:48


You can use a FIFO file, then read the packets from it and insert them into MySQL using INSERT DELAYED.

sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length > MYFIFO
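Put together, the FIFO variant looks like this (a sketch: MYFIFO is the path used above, tshark-sql-capture.pl is the asker's reader script, and mon0 must exist; INSERT DELAYED applies to MyISAM tables):

```shell
# A named pipe decouples capture from insertion: tshark writes into the FIFO
# while a separate reader process drains it into MySQL at its own pace.
mkfifo MYFIFO

# reader side (background): the script from the question, fed from the FIFO
perl tshark-sql-capture.pl < MYFIFO &

# writer side: tshark never talks to MySQL directly
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len \
    -e frame.cap_len -e radiotap.length > MYFIFO
```

A FIFO still blocks the writer once its buffer fills, so the reader must keep up on average; INSERT DELAYED helps by letting the MySQL server queue rows instead of making the reader wait on each insert.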