Parsing a CSV file with gawk



How do you parse a CSV file using gawk? Simply setting FS="," is not enough, as a quoted field with a comma inside will be treated as multiple fields.

Example using FS="," which does not work:

file contents:

one,two,"three, four",five
"six, seven",eight,"nine"

gawk script:

BEGIN { FS="," }
{
  for (i=1; i<=NF; i++) printf "field #%d: %s\n", i, $(i)
  printf "---------------------------\n"
}

bad output:

field #1: one
field #2: two
field #3: "three
field #4:  four"
field #5: five
---------------------------
field #1: "six
field #2:  seven"
field #3: eight
field #4: "nine"
---------------------------

desired output:

field #1: one
field #2: two
field #3: "three, four"
field #4: five
---------------------------
field #1: "six, seven"
field #2: eight
field #3: "nine"
---------------------------


Comments (9)

浮生面具三千个 2024-07-15 16:39:42


The gawk version 4 manual says to use FPAT = "([^,]*)|(\"[^\"]+\")"

When FPAT is defined, it disables FS and specifies fields by content instead of by separator.
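
For example, a minimal sketch of the question's script with FPAT in place of FS (requires gawk >= 4.0; note that the quotes are kept in each field, matching the desired output above):

BEGIN { FPAT = "([^,]*)|(\"[^\"]+\")" }
{
  # NF now counts FPAT matches, so "three, four" stays one field
  for (i=1; i<=NF; i++) printf "field #%d: %s\n", i, $i
  printf "---------------------------\n"
}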

〃安静 2024-07-15 16:39:42


The short answer is "I wouldn't use gawk to parse CSV if the CSV contains awkward data", where 'awkward' means things like commas in the CSV field data.

The next question is "What other processing are you going to be doing", since that will influence what alternatives you use.

I'd probably use Perl and the Text::CSV or Text::CSV_XS modules to read and process the data. Remember, Perl was originally written in part as an awk and sed killer - hence the a2p and s2p programs still distributed with Perl which convert awk and sed scripts (respectively) into Perl.

小女人ら 2024-07-15 16:39:42


You can use a simple wrapper function called csvquote to sanitize the input and restore it after awk is done processing it. Pipe your data through it at the start and end, and everything should work out ok:

before:

gawk -f myprogram.awk input.csv

after:

csvquote input.csv | gawk -f myprogram.awk | csvquote -u

See https://github.com/dbro/csvquote for code and documentation.

明媚殇 2024-07-15 16:39:42


If permissible, I would use the Python csv module, paying special attention to the dialect used and formatting parameters required, to parse the CSV file you have.

天赋异禀 2024-07-15 16:39:42


csv2delim.awk

# csv2delim.awk converts comma delimited files with optional quotes to delim separated file
#     delim can be any character, defaults to tab
# assumes no repl characters in text, any delim in line converts to repl
#     repl can be any character, defaults to ~
# changes two consecutive quotes within quotes to '

# usage: gawk -f csv2delim.awk [-v delim=d] [-v repl=c] input-file > output-file
#       -v delim    delimiter, defaults to tab
#       -v repl     replacement char, defaults to ~

# e.g. gawk -v delim=; -v repl=` -f csv2delim.awk test.csv > test.txt

# abe 2-28-7
# abe 8-8-8 1.0 fixed empty fields, added replacement option
# abe 8-27-8 1.1 used split
# abe 8-27-8 1.2 inline rpl and "" = '
# abe 8-27-8 1.3 revert to 1.0 as it is much faster, split most of the time
# abe 8-29-8 1.4 better message if delim present

BEGIN {
    if (delim == "") delim = "\t"
    if (repl == "") repl = "~"
    print "csv2delim.awk v.m 1.4 run at " strftime() > "/dev/stderr" ###########################################
}

{
    #if ($0 ~ repl) {
    #   print "Replacement character " repl " is on line " FNR ": " $0 > "/dev/stderr"
    #}
    if ($0 ~ delim) {
        print "Temp delimiter character " delim " is on line " FNR ": " $0 > "/dev/stderr"
        print "    replaced by " repl > "/dev/stderr"
    }
    gsub(delim, repl)

    $0 = gensub(/([^,])\"\"/, "\\1'", "g")
#   $0 = gensub(/\"\"([^,])/, "'\\1", "g")  # not needed above covers all cases

    out = ""
    #for (i = 1;  i <= length($0);  i++)
    n = length($0)
    for (i = 1;  i <= n;  i++)
        if ((ch = substr($0, i, 1)) == "\"")
            inString = (inString) ? 0 : 1 # toggle inString
        else
            out = out ((ch == "," && ! inString) ? delim : ch)
    print out
}

END {
    print NR " records processed from " FILENAME " at " strftime() > "/dev/stderr"
}

test.csv

"first","second","third"
"fir,st","second","third"
"first","sec""ond","third"
" first ",sec   ond,"third"
"first" , "second","th  ird"
"first","sec;ond","third"
"first","second","th;ird"
1,2,3
,2,3
1,2,
,2,
1,,2
1,"2",3
"1",2,"3"
"1",,"3"
1,"",3
"","",""
"","""aiyn","oh"""
"""","""",""""
11,2~2,3

test.bat

rem test csv2delim
rem default is: -v delim={tab} -v repl=~
gawk                      -f csv2delim.awk test.csv > test.txt
gawk -v delim=;           -f csv2delim.awk test.csv > testd.txt
gawk -v delim=; -v repl=` -f csv2delim.awk test.csv > testdr.txt
gawk            -v repl=` -f csv2delim.awk test.csv > testr.txt

流绪微梦 2024-07-15 16:39:42


I am not exactly sure whether this is the right way to do things. I would rather work on a CSV file in which either all values are quoted or none. By the way, awk allows regexes to be field separators; check whether that is useful.
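
For instance, here is a minimal sketch of that regex-FS idea for the fully-quoted case, assuming every field is quoted and no field contains an embedded quote (note this strips the quotes from the output):

BEGIN { FS = "\",\"" }   # split on the "," sequence between quoted fields
{
  gsub(/^"|"$/, "")      # strip the record's outer quotes (this re-splits $0)
  for (i=1; i<=NF; i++) printf "field #%d: %s\n", i, $i
  printf "---------------------------\n"
}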

暖风昔人 2024-07-15 16:39:42

{
  ColumnCount = 0
  $0 = $0 ","                           # Assures all fields end with comma
  while($0)                             # Get fields by pattern, not by delimiter
  {
    match($0, / *"[^"]*" *,|[^,]*,/)    # Find a field with its delimiter suffix
    Field = substr($0, RSTART, RLENGTH) # Get the located field with its delimiter
    gsub(/^ *"?|"? *,$/, "", Field)     # Strip delimiter text: comma/space/quote
    Column[++ColumnCount] = Field       # Save field without delimiter in an array
    $0 = substr($0, RLENGTH + 1)        # Remove processed text from the raw data
  }
}

Patterns that follow this one can access the fields in Column[]. ColumnCount indicates the number of elements in Column[] that were found. If not all rows contain the same number of columns, Column[] contains extra data after Column[ColumnCount] when processing the shorter rows.
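
For instance, a follow-up pattern in the same script might be (a sketch, not part of the original answer; note that the parser above strips the surrounding quotes):

{
  for (i = 1; i <= ColumnCount; i++) printf "field #%d: %s\n", i, Column[i]
  print "---------------------------"
}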

This implementation is slow, but it appears to emulate the FPAT/patsplit() feature found in gawk >= 4.0.0 mentioned in a previous answer.


回忆凄美了谁 2024-07-15 16:39:42


Here's what I came up with. Any comments and/or better solutions would be appreciated.

BEGIN { FS="," }
{
  n = 0                               # reset the saved-field count for each record
  for (i=1; i<=NF; i++) {
    f[++n] = $i
    if (substr(f[n],1,1)=="\"") {
      while (substr(f[n], length(f[n]))!="\"" || substr(f[n], length(f[n])-1, 1)=="\\") {
        f[n] = sprintf("%s,%s", f[n], $(++i))
      }
    }
  }
  for (i=1; i<=n; i++) printf "field #%d: %s\n", i, f[i]
  print "----------------------------------\n"
}

The basic idea is that I loop through the fields, and any field which starts with a quote but does not end with a quote gets the next field appended to it.

你爱我像她 2024-07-15 16:39:42


Perl has the Text::CSV_XS module which is purpose-built to handle the quoted-comma weirdness.
Alternatively, try the Text::CSV module.

perl -MText::CSV_XS -ne 'BEGIN{$csv=Text::CSV_XS->new()} if($csv->parse($_)){@f=$csv->fields();for $n (0..$#f) {print "field #$n: $f[$n]\n"};print "---\n"}' file.csv

Produces this output:

field #0: one
field #1: two
field #2: three, four
field #3: five
---
field #0: six, seven
field #1: eight
field #2: nine
---

Here's a human-readable version.
Save it as parsecsv, chmod +x, and run it as "parsecsv file.csv"

#!/usr/bin/perl
use warnings;
use strict;
use Text::CSV_XS;
my $csv = Text::CSV_XS->new();
open(my $data, '<', $ARGV[0]) or die "Could not open '$ARGV[0]' $!\n";
while (my $line = <$data>) {
    if ($csv->parse($line)) {
        my @f = $csv->fields();
        for my $n (0..$#f) {
            print "field #$n: $f[$n]\n";
        }
        print "---\n";
    }
}

You may need to point to a different version of perl on your machine, since the Text::CSV_XS module may not be installed on your default version of perl.

Can't locate Text/CSV_XS.pm in @INC (@INC contains: /home/gnu/lib/perl5/5.6.1/i686-linux /home/gnu/lib/perl5/5.6.1 /home/gnu/lib/perl5/site_perl/5.6.1/i686-linux /home/gnu/lib/perl5/site_perl/5.6.1 /home/gnu/lib/perl5/site_perl .).
BEGIN failed--compilation aborted.

If none of your versions of Perl have Text::CSV_XS installed, you'll need to:
sudo apt-get install cpanminus
sudo cpanm Text::CSV_XS
