Reading continuously generated Parquet files from Azure Blob Storage as a DataStream
I want to read Parquet files that are generated on Azure Blob Storage every second, in directories with the format YYYY-MM-DD/HH/MM/file_name.parquet.
I want to read the data as a DataStream.
Is there an existing Flink source that can do this?
I tried:
// Partial version 1: the raw files are processed continuously as plain text
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala._

val path: String = "hdfs://hostname/path_to_file_dir/"
val textInputFormat: TextInputFormat = new TextInputFormat(new Path(path))
// monitor the directory for new files every minute (60000 ms)
val stream: DataStream[String] =
  streamExecutionEnvironment.readFile(textInputFormat, path, FileProcessingMode.PROCESS_CONTINUOUSLY, 60000)
Issues with the above approach:
It reads the files as text; I was not able to find any readParquetFile support.
It only processes files from the specific directory passed in the path. In my case the target directory changes every minute, since the directory format is YYYY-MM-DD/HH/MM/file_name.parquet.
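One possible direction (a sketch, not a confirmed answer): Flink's newer FileSource API (Flink 1.15+, with the flink-parquet and Avro format modules on the classpath) can read Parquet records as a stream via AvroParquetReaders, and its default file enumerator scans the given path recursively, so monitoring the top-level directory should pick up new YYYY-MM-DD/HH/MM subdirectories on each pass. The storage URL and Avro schema below are placeholders, not from the question.

```scala
import java.time.Duration
import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.connector.file.src.FileSource
import org.apache.flink.core.fs.Path
import org.apache.flink.formats.parquet.avro.AvroParquetReaders
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Hypothetical container URL and record schema -- replace with your own.
val basePath = new Path("wasbs://container@account.blob.core.windows.net/data/")
val schema: Schema = new Schema.Parser().parse(
  """{"type":"record","name":"Record","fields":[{"name":"id","type":"long"}]}""")

// FileSource enumerates files under basePath recursively, so new
// YYYY-MM-DD/HH/MM subdirectories are discovered on each monitoring pass.
val source: FileSource[GenericRecord] = FileSource
  .forRecordStreamFormat(AvroParquetReaders.forGenericRecord(schema), basePath)
  .monitorContinuously(Duration.ofSeconds(10))
  .build()

val stream: DataStream[GenericRecord] =
  env.fromSource(source, WatermarkStrategy.noWatermarks(), "parquet-files")
```

Note that the monitoring interval here trades off latency against listing cost on blob storage; with files arriving every second, a short interval is needed, but each pass re-lists the directory tree.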