Google Cloud Dataflow SplittableDoFn does not increase parallelism
I have the following Apache Beam pipeline:
pipeline
.apply("Read lines", TextIO.read().from(options.getFileInput()))
.apply("Split lines", ParDo.of(new LineSplittableDoFn()))
.apply("Log bundles", ParDo.of(new ResultLogDoFn()));
- LineSplittableDoFn - a Splittable DoFn implementation with @GetInitialRestriction, @ProcessElement using a RestrictionTracker, and @SplitRestriction. The DoFn splits the text line into 4 chunks.
- ResultLogDoFn - logs unique bundles. Similar to the code from https://www.waitingforcode.com/apache-beam/data-partitioning-apache-beam/read
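For context, the chunking arithmetic such a @SplitRestriction method performs can be sketched in plain Java. This is an illustrative standalone sketch of splitting an offset range into 4 parts, not the actual Beam API (the class and method names below are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Split the half-open range [from, to) into at most n roughly equal chunks,
    // mirroring what a @SplitRestriction method might emit for one text line.
    static List<long[]> split(long from, long to, int n) {
        List<long[]> chunks = new ArrayList<>();
        long size = to - from;
        long step = Math.max(1, (size + n - 1) / n); // ceiling division
        for (long start = from; start < to; start += step) {
            chunks.add(new long[] {start, Math.min(start + step, to)});
        }
        return chunks;
    }

    public static void main(String[] args) {
        // A 100-character line split into 4 chunks of 25 characters each.
        for (long[] c : split(0, 100, 4)) {
            System.out.println(c[0] + ".." + c[1]);
        }
    }
}
```

Splitting the restriction this way hands the runner smaller units it *may* process in parallel, but as observed below, that is not the same as the runner actually creating more bundles.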
Based on my observations:
- Only TextIO.read() controls the number of bundles created.
- LineSplittableDoFn - creates and splits initial restrictions based on the implemented logic; I can see this in my logs.
- New elements are created by LineSplittableDoFn, and the runner has the ability to do work stealing based on restrictions.
- The number of parallel executions stays the same. I can't add more workers or worker threads because no new bundles are created during restriction splitting. Based on my understanding of the documentation, a bundle is the unit of parallelization in the Beam world, so parallelism depends on it: https://beam.apache.org/documentation/runtime/model/#dependent-parallellism
The solution with Reshuffle works fine. I can see an increase in the number of bundles and in parallelism.
Reshuffle.ViaRandomKey<ShardedRequest> shuffle = Reshuffle.viaRandomKey();
Reshuffle.ViaRandomKey<ShardedRequest> partitionedShuffle = shuffle.withNumBuckets(100);
pipeline
.apply("Read lines", TextIO.read().from(options.getFileInput()))
.apply("Split lines", ParDo.of(new LineNonSplittableDoFn()))
.apply(partitionedShuffle)
.apply("Log bundles", ParDo.of(new ResultLogDoFn()));
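A rough mental model of why the reshuffle helps can be sketched in plain Java (this is an illustrative sketch of the bucketing idea behind Reshuffle.viaRandomKey().withNumBuckets(100), not the Beam implementation; the class and method names are hypothetical): each element gets a random key in [0, numBuckets), and grouping by key lets the runner distribute up to numBuckets groups across workers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ReshuffleSketch {
    // Assign each element a random bucket key in [0, numBuckets), then group
    // by key -- after the shuffle, each group can be handed to a different
    // worker, which is what creates the extra parallelism.
    static Map<Integer, List<String>> bucket(List<String> elements, int numBuckets, Random rng) {
        Map<Integer, List<String>> groups = new HashMap<>();
        for (String e : elements) {
            int key = rng.nextInt(numBuckets);
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(e);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("a", "b", "c", "d", "e", "f");
        Map<Integer, List<String>> groups = bucket(lines, 100, new Random(42));
        System.out.println("distinct buckets used: " + groups.size());
    }
}
```

The key difference from the SplittableDoFn pipeline is the shuffle boundary: the GroupByKey inside Reshuffle forces a redistribution of elements, so downstream steps start from fresh bundles.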
I'm using the Dataflow Runner (via Flex Templates); the Apache Beam version is 2.24.0.
Is it possible to increase the number of bundles (and pipeline parallelism) using SplittableDoFn?