Script to parse and change numbers



I am working with numbers a lot when editing a particular type of file, and it's mostly tedious work. The file has a format like this:

damagebase = 8.834
    "abc_foo.odf" 3.77
    "def_bar.odf" 3.77
    "ghi_baz.odf" 3.77
    "jkl_blah.odf" 4.05
    ...

What would you recommend for writing a script that parses this and lets me programmatically change each number?

Languages: I use C#, some F# (noob), and Lua. If you suggest regexes, could you provide specific ones, as I am not familiar with them?


Comments (5)

很酷不放纵 2024-07-31 13:16:27


Perl is pretty good for stuff like this. Here's a Perl script that will do what you want.

#!/usr/bin/env perl

$multiplier = 2.0;

while (<>)                        # read the input line by line
{
    $n = /=/ ? 2 : 1;             # the number is the 3rd token on the "=" header line, the 2nd otherwise
    @tokens = split;              # split the line on whitespace
    $tokens[$n] *= $multiplier;   # scale the numeric field

    print "\t" if not /=/;        # restore the leading indentation on non-header lines
    print join(' ', @tokens) . "\n";
}

Usage:

./file.pl input_file > output_file
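
For the C# side of the question, a rough sketch of the same split-and-scale idea might look like the following. This is only a sketch under the assumption of the two-column layout shown in the question; it reads from standard input and writes to standard output, and the class name and fixed multiplier are placeholders.

// Usage: ScaleNumbers.exe < input_file > output_file
using System;
using System.Globalization;

class ScaleNumbers
{
    static void Main()
    {
        double multiplier = 2.0;   // change this to whatever factor you need
        string line;
        while ((line = Console.ReadLine()) != null)
        {
            if (line.Trim().Length == 0)
            {
                Console.WriteLine(line);
                continue;
            }

            // Header line:  damagebase = 8.834   -> number is the 3rd token
            // Data line:    "abc_foo.odf" 3.77   -> number is the 2nd token
            string[] tokens = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
            int n = line.Contains("=") ? 2 : 1;
            double value = double.Parse(tokens[n], CultureInfo.InvariantCulture);
            tokens[n] = (value * multiplier).ToString(CultureInfo.InvariantCulture);

            // Re-indent the data lines with a tab, as in the original file
            string prefix = line.Contains("=") ? "" : "\t";
            Console.WriteLine(prefix + string.Join(" ", tokens));
        }
    }
}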
梦冥 2024-07-31 13:16:27


If that's really all you want to do, use awk:

awk '{$NF *= 2.5 ; print }' < input_file > output_file

EDITED: All right, if you want to keep the whitespace as you describe, this should work (although it's getting inelegant).

awk '{$NF *= 2.5} /^\"/{print "\t" $0} !/^\"/{print}' < input_file > output_file
紫罗兰の梦幻 2024-07-31 13:16:27


You can use awk like this (note how easily the format is converted for the purpose):

sed 's/damagebase =/damagebase=/g' input.txt |\
    awk '{printf "     %s %s\n",$1,3.1*$2}' |\
    sed 's/.*damagebase=/damagebase =/g'

I am multiplying the second column by 3.1 in this sample script. Note that to restore your formatting, a tab is inserted at the start of the printf format string, and the two sed commands translate your format to and from one that suits the awk command.

思念满溢 2024-07-31 13:16:27


You can match runs of non-whitespace and punt to Double.Parse:

int multiplier = 3;

string input =
  "damagebase = 8.834\n" +
  "  \"abc.odf\" 3.77\n" +
  "  \"def.odf\" 3.77\n" +
  "  \"ghi.odf\" .77\n" +
  "  \"jkl.odf\" -4.05\n" +
  "  \"mno.odf\" 5\n";

Regex r = new Regex(@"^(\w+)\s*=\s*(\S+)" +
                    @"(?:\s+""([^""]+)""\s+(\S+))+",
                    RegexOptions.Compiled | RegexOptions.Multiline);

Match m = r.Match(input);
if (m.Success) {
  double header = Double.Parse(m.Groups[2].Value);
  Console.WriteLine("{0} = {1}", m.Groups[1].Value,
                                 header * multiplier);

  CaptureCollection files = m.Groups[3].Captures;
  CaptureCollection nums  = m.Groups[4].Captures;
  for (int i = 0; i < files.Count; i++) {
    double val = Double.Parse(nums[i].Value);
    Console.WriteLine(@"  ""{0}"" {1}", files[i].Value,
                                        val * multiplier);
  }
}
else
  Console.WriteLine("no match");

Running it gives

damagebase = 26.502
  "abc.odf" 11.31
  "def.odf" 11.31
  "ghi.odf" 2.31
  "jkl.odf" -12.15
  "mno.odf" 15
陈甜 2024-07-31 13:16:27


I tried

static void Main(string[] args)
{
    Console.WriteLine("Please enter the multiplier:");
    string stringMult = Console.ReadLine();
    int multiplier;
    Int32.TryParse(stringMult, out multiplier);
    StreamReader sr = new StreamReader(@"C:\Users\[obscured]\Desktop\Fleetops Mod\Data To Process.txt", true);
    string input = sr.ReadToEnd();
    sr.Close();
    StreamWriter sw = new StreamWriter(@"C:\Users\[obscured]\Desktop\Fleetops Mod\Data To Process.txt", false);
    Regex r = new Regex(@"^(\w+)\s*=\s*(\S+)" +
            @"(?:\s+""([^""]+)""\s+(\S+))+",
            RegexOptions.Compiled | RegexOptions.Multiline);
    Match m = r.Match(input);
    if (m.Success) {
        double header = Double.Parse(m.Groups[2].Value);
        sw.WriteLine("{0} = {1}", m.Groups[1].Value,
                         header * multiplier);
        CaptureCollection files = m.Groups[3].Captures;
        CaptureCollection nums  = m.Groups[4].Captures;
        for (int i = 0; i < files.Count; i++) {
            double val = Double.Parse(nums[i].Value);
            sw.WriteLine(@"  ""{0}"" {1}", files[i].Value,
                                val * multiplier);
        }
    }
    else
        Console.WriteLine("no match");
    sw.Close();
    Console.WriteLine("Done!");
    Console.ReadKey();
}

(thanks gbacon)
and it comes back with "no match" even when I put in the right data. Why does it do this?
Here's the test data:

damagebase = 8.098
    "bor_adaptor_03.odf" 3.77
    "bor_adaptor_13.odf" 3.77
    "bor_adaptor_23.odf" 3.77
    "bor_adaptor_33.odf" 4.05
    "bor_adaptor_R3.odf" 3.77
    "bor_adaptor_T3.odf" 3.77
    "bor_cube_BHHHMM.odf" 6.48
    "bor_cube_BRHHHM.odf" 4.52
    "bor_cube_BRHHMM.odf" 6.48
    "bor_cube_BTHHHM.odf" 4.52
    "bor_cube_BTHHMM.odf" 6.48
    "bor_cube_BTRHHM.odf" 4.52
    "bor_cube_BTRHMM.odf" 6.48
    "bor_cube_BTTHHM.odf" 4.52
    "bor_cube_BTTHMM.odf" 6.48
    "bor_cube_BTTRHM.odf" 4.52
    "bor_cube_BTTRMM.odf" 6.48
    "bor_cube_BTTTHM.odf" 4.52
    "bor_cube_BTTTMM.odf" 6.48
    "bor_cube_BTTTRM.odf" 4.52
    "bor_cube_RHHHMM.odf" 6.48
    "bor_cube_THHHMM.odf" 6.48
    "bor_cube_TRHHHM.odf" 4.52
    "bor_cube_TRHHMM.odf" 6.48
    "bor_cube_TTHHHM.odf" 4.52
    "bor_cube_TTHHMM.odf" 6.48
    "bor_cube_TTRHHM.odf" 4.52
    "bor_cube_TTRHMM.odf" 6.48
    "bor_cube_TTTHHM.odf" 4.52
    "bor_cube_TTTHMM.odf" 6.48
    "bor_cube_TTTRHM.odf" 4.52
    "bor_cube_TTTRMM.odf" 6.48
    "dom_battle_cruiserY2r6.odf" 4.123
    "dom_battle_cruiserYr6.odf" 4.123
    "dom_battle_cruiserZ2r6.odf" 4.123
    "dom_battle_cruiserZr6.odf" 4.123
    "dom_battle_cruiser_fed2r6.odf" 4.123
    "dom_battle_cruiser_fedr6.odf" 4.123
    "dom_defenderr4.odf" 7.775
    "dom_defenderr5.odf" 7.452
    "dom_defenderr6.odf" 3.793
    "dom_dreadnought_borr4.odf" 3.77
    "dom_dreadnought_borr5.odf" 3.77
    "dom_dreadnought_borr6.odf" 3.77
    "dom_dreadnought_fedr4.odf" 3.77
    "dom_dreadnought_fedr5.odf" 3.77
    "dom_dreadnought_fedr6.odf" 3.77
    "dom_dreadnought_klir4.odf" 3.77
    "dom_dreadnought_klir5.odf" 3.77
    "dom_dreadnought_klir6.odf" 3.77
    "dom_dreadnought_romr4.odf" 3.77
    "dom_dreadnought_romr5.odf" 3.77
    "dom_dreadnought_romr6.odf" 3.77
    "dom_intercept_destr4.odf" 5.346
    "dom_intercept_destr5.odf" 2.673
    "dom_intercept_destr6.odf" 2.673
    "dom_intercept_dest_romr4.odf" 5.346
    "dom_intercept_dest_romr5.odf" 2.673
    "dom_intercept_dest_romr6.odf" 2.673
    "fed_ambassadorMr6.odf" 5.67
    "fed_ambassadorr6.odf" 5.67
    "fed_intrepidYr6.odf" 5.67
    "fed_intrepidZr6.odf" 5.67
    "fed_intrepid_borr6.odf" 5.67
    "fed_mirandaii.odf" 5.905
    "fed_mirandaiiM.odf" 5.905
    "fed_mirandaiiMr2.odf" 5.905
    "fed_mirandaiiMr3.odf" 5.905
    "fed_mirandaiiMr4.odf" 5.905
    "fed_mirandaiiMr5.odf" 5.905
    "fed_mirandaiiMr6.odf" 5.905
    "fed_mirandaiir2.odf" 5.905
    "fed_mirandaiir3.odf" 5.905
    "fed_mirandaiir4.odf" 5.905
    "fed_mirandaiir5.odf" 5.905
    "fed_mirandaiir6.odf" 5.905
    "fed_monsoonr4.odf" 4.782
    "fed_monsoonr5.odf" 2.31
    "fed_monsoonr6.odf" 3.726
    "fed_monsoonZr4.odf" 4.782
    "fed_monsoonZr5.odf" 2.31
    "fed_monsoonZr6.odf" 3.726
    "fed_monsoon_bor.odf" 4.52
    "fed_monsoon_borr2.odf" 4.52
    "fed_monsoon_borr3.odf" 4.52
    "fed_monsoon_borr4.odf" 6.32
    "fed_monsoon_borr5.odf" 3.315
    "fed_monsoon_borr6.odf" 2.916
    "fed_monsoon_klir4.odf" 4.782
    "fed_monsoon_klir5.odf" 2.31
    "fed_monsoon_klir6.odf" 3.726
    "fed_sovereignr4.odf" 6.69
    "fed_sovereignr5.odf" 5.51
    "fed_sovereignr6.odf" 5.51
    "fed_sovereignYr4.odf" 6.69
    "fed_sovereignYr5.odf" 5.51
    "fed_sovereignYr6.odf" 5.51
    "kli_brelr4.odf" 7.452
    "kli_brelr5.odf" 6.69
    "kli_brelr6.odf" 6.69
    "kli_brelZr4.odf" 7.452
    "kli_brelZr5.odf" 6.69
    "kli_brelZr6.odf" 6.69
    "kli_brel_borr4.odf" 7.452
    "kli_brel_borr5.odf" 6.69
    "kli_brel_borr6.odf" 6.69
    "kli_brel_romr4.odf" 7.452
    "kli_brel_romr5.odf" 6.69
    "kli_brel_romr6.odf" 6.69
    "kli_edjenr4.odf" 7.452
    "kli_edjenr5.odf" 6.69
    "kli_edjenr6.odf" 6.69
    "kli_kvortr6.odf" 6.69
    "kli_kvortZr6.odf" 6.69
    "kli_kvort_fedr6.odf" 6.69
    "rom_generix_dreadr4.odf" 7.723
    "rom_generix_dreadr5.odf" 7.21
    "rom_generix_dreadr6.odf" 7.21
    "rom_generix_dreadYr4.odf" 7.723
    "rom_generix_dreadYr5.odf" 7.21
    "rom_generix_dreadYr6.odf" 7.21
    "rom_generix_dread_klir4.odf" 7.723
    "rom_generix_dread_klir5.odf" 7.21
    "rom_generix_dread_klir6.odf" 7.21

My theory is that because the whitespace preceding each non-header line is a tab (and it won't show up that way here), the regex doesn't match. In case you're wondering, the whitespace IS important.
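
One quick way to test that theory is to print the code point of each line's first character and see whether it is a tab (U+0009) or a space (U+0020); note that \s in .NET regexes does match tabs, so a tab on its own would not normally stop \s+ from matching. A minimal sketch, reusing the (obscured) path from the program above:

using System;
using System.IO;

class InspectWhitespace
{
    static void Main()
    {
        // Same obscured path as the program above; adjust as needed.
        string path = @"C:\Users\[obscured]\Desktop\Fleetops Mod\Data To Process.txt";
        foreach (string line in File.ReadAllLines(path))
        {
            if (line.Length == 0)
                continue;
            // U+0009 is a tab, U+0020 is a space.
            Console.WriteLine("first char: U+{0:X4}", (int)line[0]);
        }
    }
}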
