Export pcap data to CSV: timestamp, bytes, uplink/downlink, extra info

Published 2024-12-15 05:51:05


原谅过去的我 2024-12-22 05:51:05

TShark
Here are some examples (note: since Wireshark 1.10, the `-R` read filter only works in two-pass mode, i.e. `tshark -2 -R "..."`; for single-pass filtering use the display filter option `-Y "..."` instead):

$ tshark -r test.pcap -T fields -e frame.number -e eth.src -e eth.dst -e ip.src -e ip.dst -e frame.len > test1.csv

$ tshark -r test.pcap -T fields -e frame.number -e eth.src -e eth.dst -e ip.src -e ip.dst -e frame.len -E header=y -E separator=, > test2.csv

$ tshark -r test.pcap -R "frame.number>40" -T fields -e frame.number -e frame.time -e frame.time_delta -e frame.time_delta_displayed -e frame.time_relative -E header=y > test3.csv

$ tshark -r test.pcap -R "wlan.fc.type_subtype == 0x08" -T fields -e frame.number -e wlan.sa -e wlan.bssid > test4.csv

$ tshark -r test.pcap -R "ip.addr==192.168.1.6 && tcp.port==1696 && ip.addr==67.212.143.22 && tcp.port==80" -T fields -e frame.number -e tcp.analysis.ack_rtt -E header=y > test5.csv

$ tshark -r test.pcap -T fields -e frame.number -e tcp.analysis.ack_rtt -E header=y > test6.csv
尘曦 2024-12-22 05:51:05

Look no further: Wireshark is your best friend. It can open your pcap file and lets you specify the extra columns you want; after that, you can simply export them as CSV. On the main interface, just right-click on any column header and select "Column Preferences". This opens a new, very intuitive window; just add a new column and specify the field name. As simple as that.

I had tried tshark, but trust me, it becomes a bit annoying, especially with this:

 tshark: Read filters were specified both with "-R" and with additional command-line arguments.

This message pops up if you include too many columns, or for no apparent reason.

梅窗月明清似水 2024-12-22 05:51:05

It looks like you want Bro's connection logs:

bro -r trace.pcap
head conn.log

Output:

#separator \x09
#set_separator  ,
#empty_field    (empty)
#unset_field    -
#path   conn
#fields ts  uid id.orig_h   id.orig_p   id.resp_h   id.resp_p   proto   service duration    orig_bytes  resp_bytes  conn_state  local_orig  missed_bytes    history orig_pkts   orig_ip_bytes   resp_pkts   resp_ip_bytes
#types  time    string  addr    port    addr    port    enum    string  interval    count   count   string  bool    count   string  count   count   count   count
1258531221.486539   gvuu4KIHDph 192.168.1.102   68  192.168.1.1 67  udp -   0.163820    301 300 SF  -   0   Dd  1   329 1   328
1258531680.237254   6nWmFGj6kWg 192.168.1.103   137 192.168.1.255   137 udp dns 3.780125    350 0   S0  -   0   546 0   0
1258531693.816224   y2lMKyrnnO6 192.168.1.102   137 192.168.1.255   137 udp dns 3.748647    350 0   S0  -   0   546 0   0

Now parse the relevant fields:

bro-cut ts id.orig_h id.orig_p id.resp_h id.resp_p service orig_bytes resp_bytes < conn.log | head

1258531221.486539   192.168.1.102   68  192.168.1.1     67  -   301 300
1258531680.237254   192.168.1.103   137 192.168.1.255   137 dns 350 0
1258531693.816224   192.168.1.102   137 192.168.1.255   137 dns 350 0
1258531635.800933   192.168.1.103   138 192.168.1.255   138 -   560 0
1258531693.825212   192.168.1.102   138 192.168.1.255   138 -   348 0
1258531803.872834   192.168.1.104   137 192.168.1.255   137 dns 350 0
1258531747.077012   192.168.1.104   138 192.168.1.255   138 -   549 0
1258531924.321413   192.168.1.103   68  192.168.1.1     67  -   303 300
1258531939.613071   192.168.1.102   138 192.168.1.255   138 -   -   -
1258532046.693816   192.168.1.104   68  192.168.1.1 67  -   311 300
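Bro's conn.log is tab-separated rather than comma-separated (note the `#separator \x09` line). If a downstream tool insists on real CSV, a small converter is easy to sketch; `bro_log_to_csv` below is a hypothetical helper, assuming the log keeps Bro's `#fields` metadata line:

```python
import csv
import io

def bro_log_to_csv(log_text, out, fields=None):
    """Convert a tab-separated Bro/Zeek log to CSV, optionally keeping a subset of columns."""
    writer = csv.writer(out)
    keep = []
    for line in log_text.splitlines():
        if line.startswith("#fields"):
            header = line.split("\t")[1:]            # column names follow the #fields tag
            wanted = fields or header
            keep = [header.index(name) for name in wanted]
            writer.writerow(wanted)
        elif line.startswith("#") or not line.strip():
            continue                                 # skip the other metadata lines
        else:
            cols = line.split("\t")
            writer.writerow([cols[i] for i in keep])
```

Called with `fields=["ts", "id.orig_h", "orig_bytes", "resp_bytes"]`, for example, it emits just the timestamp/byte columns the question asks for.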
羁客 2024-12-22 05:51:05

Install argus via terminal

sudo apt-get install argus-client

Convert .pcap to .argus file format

argus -r filename.pcap -w filename.argus  
-r <FILE> Read FILE  
-w <FILE> Write FILE  

Convert .argus to .csv file format while choosing which features to extract

ra -r filename.argus -u -s <features-comma-separated>

Example:
ra -r filename.argus -u -s rank, stime, ltime, dur
-r <FILE> Read FILE
-u Print time values using Unix time format (seconds from the Epoch).
-s Specify the fields to print.

The list of available fields to print can be found here

This information is copied from my original blog which you can read here

远昼 2024-12-22 05:51:05

You can do this from the Wireshark application itself:

  • Make sure you have already saved the file to disk (File > Save) (if you have just
    done a capture)
  • Go to File > Export Packet Dissections > As CSV
  • Then enter a filename (make sure you add .csv at the end, as Wireshark does not
    do this for you!)

Voilà

遥远的她 2024-12-22 05:51:05

Here is a Python tool that divides a pcap into flows and outputs the extracted features to a CSV file.

Try the flows_to_weka tool in Python.

It requires scapy to be installed on your system; it is best to copy the scapy folder inside the weka folder, and to copy the wfe.py, tcp_stream.py and entropy.py files into the scapy folder. After you have done this, your current directory should look something like this:

C:\Users\INKAKA\flows_to_weka\scapy

Copy the .pcap file into this folder and try running this command:

$ python wfe.py -i input.pcap -t csv > output.csv

You can also retrieve the features you want by adding the required features in tcp_stream.py and wfe.py.

For reference, you can visit:
https://github.com/fichtner/flows_to_weka

热血少△年 2024-12-22 05:51:05

As noted in the comments to the question, to output the IP addresses for the frames in a capture file in CSV format, use something like:

tshark -r <filename> -T fields -e ip.addr

See the tshark help for more information about options to set the separator and quoting characters in the csv output.

Field names can be determined by using Wireshark to examine the capture file and selecting a particular field in the details pane. The field name will be then shown in the status line at the bottom of the Wireshark window.

欢你一世 2024-12-22 05:51:05

Is it possible to set a field separator other than a comma?
In my pcap file, if I set separator=, then the data in the output file (.csv) doesn't look good, because most of my columns contain a , in them.

So I want to know whether there is any way to set the field separator to some other character, e.g. | (pipe), etc.

Thanks
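tshark does let you change this: `-E separator=<char>` accepts other characters such as `|` (and `/t` for tab), and `-E quote=d` wraps every field in double quotes, which makes embedded commas harmless even with the default separator. The same two ideas, sketched with Python's csv module on a made-up row:

```python
import csv
import io

row = ["1258531221.486539", "flags: S,A", "329"]   # middle field has an embedded comma

# Quoting every field makes commas inside fields harmless:
quoted = io.StringIO()
csv.writer(quoted, quoting=csv.QUOTE_ALL).writerow(row)

# Or switch the delimiter entirely, mirroring tshark's -E separator='|':
piped = io.StringIO()
csv.writer(piped, delimiter="|").writerow(row)

print(quoted.getvalue().strip())   # "1258531221.486539","flags: S,A","329"
print(piped.getvalue().strip())    # 1258531221.486539|flags: S,A|329
```

Quoting is generally the safer option, since any separator you pick could eventually show up inside a field too.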
