How to stream a single Kafka topic, filtered by key, to multiple HDFS locations?
I am not able to stream my data to multiple HDFS locations, filtered by key, so the code below does not work. Please help me find the correct way to write it.
import org.apache.spark.sql.functions
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.streaming.Trigger

// Reader 1: subscribes to Topic1
val ER_stream_V1 = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", configManager.getString("Kafka.Server"))
  .option("subscribe", "Topic1")
  .option("startingOffsets", "latest")
  .option("failOnDataLoss", "false")
  .load()

// Reader 2: subscribes to the same topic again
val ER_stream_V2 = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", configManager.getString("Kafka.Server"))
  .option("subscribe", "Topic1")
  .option("startingOffsets", "latest")
  .option("failOnDataLoss", "false")
  .load()

// Query 1: rows keyed "Value1" should be written to /tmp/test/value1
ER_stream_V1.toDF()
  .select(col("key"), col("value").cast("string"))
  .filter(col("key") === "Value1")
  .select(functions.from_json(col("value").cast("string"), Value1Schema.schemaExecution).as("value"))
  .select("value.*")
  .writeStream
  .format("orc")
  .option("metastoreUri", configManager.getString("spark.datasource.hive.warehouse.metastoreUri"))
  .option("checkpointLocation", "/tmp/teststreaming/execution/checkpoint2005")
  .option("path", "/tmp/test/value1")
  .trigger(Trigger.ProcessingTime("5 Seconds"))
  .partitionBy("jobid")
  .start()

// Query 2: rows keyed "Value2" should be written to /tmp/test/value2
ER_stream_V2.toDF()
  .select(col("key"), col("value").cast("string"))
  .filter(col("key") === "Value2")
  .select(functions.from_json(col("value").cast("string"), Value2Schema.schemaJobParameters).as("value"))
  .select("value.*")
  .writeStream
  .format("orc")
  .option("metastoreUri", configManager.getString("spark.datasource.hive.warehouse.metastoreUri"))
  .option("checkpointLocation", "/tmp/teststreaming/jobparameters/checkpoint2006")
  .option("path", "/tmp/test/value2")
  .trigger(Trigger.ProcessingTime("5 Seconds"))
  .partitionBy("jobid")
  .start()
Comments (1)
You should not need two readers. Create one and filter it twice. You might also want to consider setting startingOffsets to earliest to read existing topic data. For example: