PIG ERROR 1066 when iterating over a joined set.

Posted 11-27 06:57

I'm trying to join a set that holds the number of days in each month with a data set on the (year, month) key. After the join, when I try to do a FOREACH over the result, I get: ERROR 1066 ... Backend error : Scalar has more than one row in the output.

Here is an abbreviated set with the same problem:

$ hadoop fs -cat DIM/*
2011,01,31
2011,02,28
2011,03,31
2011,04,30
2011,05,31
2011,06,30
2011,07,31
2011,08,31
2011,09,30
2011,10,31
2011,11,30
2011,12,31

$ hadoop fs -cat ACCT/*
2011,7,26,key1,23.25,2470.0
2011,7,26,key2,10.416666666666668,232274.08333333334
2011,7,26,key3,82.83333333333333,541377.25
2011,7,26,key4,78.5,492823.33333333326
2011,7,26,key5,110.83333333333334,729811.9166666667
2011,7,26,key6,102.16666666666666,675941.25
2011,7,26,key7,118.91666666666666,770896.75

Then in grunt:

grunt> DIM = LOAD 'DIM' USING PigStorage(',') AS (year:int, month:int, days:int);
grunt> ACCT = LOAD 'ACCT' USING PigStorage(',') AS (year:int, month:int, day: int, account:chararray, metric1:double, metric2:double);
grunt> AjD = JOIN ACCT BY (year,month), DIM  BY (year,month) USING 'replicated';
grunt> dump AjD;
...
(2011,7,26,key1,23.25,2470.0,2011,7,31)
(2011,7,26,key2,10.416666666666668,232274.08333333334,2011,7,31)
(2011,7,26,key3,82.83333333333333,541377.25,2011,7,31)
(2011,7,26,key4,78.5,492823.33333333326,2011,7,31)
(2011,7,26,key5,110.83333333333334,729811.9166666667,2011,7,31)
(2011,7,26,key6,102.16666666666666,675941.25,2011,7,31)
(2011,7,26,key7,118.91666666666666,770896.75,2011,7,31)
grunt> describe AjD;
AjD: {ACCT::year: int,ACCT::month: int,ACCT::day: int,ACCT::account: chararray,ACCT::metric1: double,ACCT::metric2: double,DIM::year: int,DIM::month: int,DIM::days: int}
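
(An aside on the join itself: USING 'replicated' asks for a fragment-replicate join, in which the relations listed after the first, here DIM, are loaded into memory on every map task. That fits this case, since DIM has only twelve rows; if the dimension ever grew too large for memory, the hint could simply be dropped for an ordinary reduce-side join:)

grunt> AjD = JOIN ACCT BY (year,month), DIM BY (year,month);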

grunt> FINAL = FOREACH AjD
>> GENERATE ACCT.year, ACCT.month, ACCT.account, (ACCT.metric2 / DIM.days);
grunt> dump FINAL;
...
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias FINAL. Backend error : Scalar has more than one row in the output. 1st : (2011,7,26,key1,23.25,2470.0), 2nd :(2011,7,26,key2,10.416666666666668,232274.08333333334)

However, if I store it and reload it to shed the "join" schema, it works:

grunt> STORE AjD INTO 'AjD' using PigStorage(',');
grunt> AjD2 = LOAD 'AjD' USING PigStorage(',') AS (year:int, month:int, day:int, account:chararray, metric1:double, metric2:double, year2:int, month2:int, days:int);

grunt> FINAL = FOREACH AjD2
>> GENERATE year, month, account, (metric2 / days);

grunt> dump FINAL;
...
(2011,7,key1,79.6774193548387)
(2011,7,key2,7492.712365591398)
(2011,7,key3,17463.782258064515)
(2011,7,key4,15897.526881720427)
(2011,7,key5,23542.319892473122)
(2011,7,key6,21804.5564516129)
(2011,7,key7,24867.637096774193)
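
The workaround works because the second LOAD's AS clause assigns fresh, unprefixed field names, so metric2 and days resolve as ordinary columns of AjD2 rather than as references to other relations. Assuming describe reports the declared schema as usual, it would show no alias prefixes:

grunt> describe AjD2;
AjD2: {year: int,month: int,day: int,account: chararray,metric1: double,metric2: double,year2: int,month2: int,days: int}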

Is there a way to iterate (FOREACH) over the joined set without storing and reloading?

Comments (1)

追我者格杀勿论 (2024-12-04 06:57:48)

Have you tried the :: operator, which specifies which relation a column comes from? After the join, the fields of AjD are named ACCT::year, DIM::days, and so on, as your describe output shows. Writing ACCT.metric2 makes Pig resolve ACCT as a separate relation and treat the expression as a scalar projection, which only works if that relation has a single row; with seven rows in ACCT you get "Scalar has more than one row in the output".

Replace (ACCT.metric2 / DIM.days) with (ACCT::metric2 / DIM::days), and use the prefixed names for the other generated columns too.

e.g.

...
FINAL = FOREACH AjD
        GENERATE
             ACCT::year, ACCT::month, ACCT::account, (ACCT::metric2 / DIM::days);
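
Putting it together: a minimal sketch of the whole script using the prefixed names, so no STORE/LOAD round trip is needed (the AS renames and the metric2_per_day field name are illustrative choices of mine, not anything Pig requires):

DIM  = LOAD 'DIM'  USING PigStorage(',') AS (year:int, month:int, days:int);
ACCT = LOAD 'ACCT' USING PigStorage(',') AS (year:int, month:int, day:int, account:chararray, metric1:double, metric2:double);
AjD  = JOIN ACCT BY (year, month), DIM BY (year, month) USING 'replicated';

-- Refer to the joined columns by the prefixed names shown by describe;
-- AS restores plain names on the output so later operators can use them.
FINAL = FOREACH AjD
        GENERATE ACCT::year    AS year,
                 ACCT::month   AS month,
                 ACCT::account AS account,
                 (ACCT::metric2 / DIM::days) AS metric2_per_day;

DUMP FINAL;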