Is it possible to split a large Parquet file into smaller files using the Hadoop API?
We have very large Parquet files, around 100 GB each.
Is it possible to divide these files into smaller files using the Hadoop API?
We are not using Spark, so we cannot split them with the Spark APIs.
I tried using the Hadoop APIs, but I do not see it working.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class ParquetGeneratorDemo {

    public static void main(String[] args) throws IOException {
        Path path = new Path("parquet_data_" + ".parquet");
        FileSystem fs = FileSystem.get(new Configuration());
        fs.delete(path, false);
        for (int i = 0; i < 1; i++) {
            generateParquetFileFor();
        }
        System.out.println(fs.getFileStatus(path).getLen() / 1024);
        // This only builds a logical split descriptor (offset 0, length 512 bytes);
        // it does not produce a new, smaller Parquet file on disk.
        FileSplit split = new FileSplit(path, 0L, 512L, new String[]{});
        System.out.println(split.toString());
    }

    // Writes a sample Parquet file using the Avro GenericRecord API.
    private static void generateParquetFileFor() {
        try {
            Schema schema = parseSchema();
            Path path = new Path("parquet_data_" + ".parquet");
            System.out.println(path.getName());
            FileSystem fileSystem = FileSystem.get(new Configuration());
            List<GenericData.Record> recordList = generateRecords(schema);
            try (ParquetWriter<GenericData.Record> writer = AvroParquetWriter.<GenericData.Record>builder(path)
                    .withSchema(schema)
                    .withCompressionCodec(CompressionCodecName.SNAPPY)
                    .withRowGroupSize(ParquetWriter.DEFAULT_BLOCK_SIZE)
                    .withPageSize(ParquetWriter.DEFAULT_PAGE_SIZE)
                    .withConf(new Configuration())
                    .withValidation(false)
                    .withDictionaryEncoding(false)
                    .build()) {
                for (GenericData.Record record : recordList) {
                    writer.write(record);
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace(System.out);
        }
    }

    private static Schema parseSchema() {
        String schemaJson = "{\"namespace\": \"org.myorganization.mynamespace\"," // Not used in Parquet, can put anything
                + "\"type\": \"record\"," // Must be set as record
                + "\"name\": \"myrecordname\"," // Not used in Parquet, can put anything
                + "\"fields\": ["
                + " {\"name\": \"myString\", \"type\": [\"string\", \"null\"]}"
                + ", {\"name\": \"myInteger\", \"type\": \"int\"}" // Required field
                + " ]}";
        Schema.Parser parser = new Schema.Parser().setValidate(true);
        return parser.parse(schemaJson);
    }

    private static List<GenericData.Record> generateRecords(Schema schema) {
        List<GenericData.Record> recordList = new ArrayList<>();
        for (int i = 1; i <= 2048 * 1000; i++) {
            GenericData.Record record = new GenericData.Record(schema);
            record.put("myInteger", i);
            record.put("myString", i + " hi world of parquet!");
            recordList.add(record);
        }
        return recordList;
    }
}
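For reference, the direction I was hoping to go is roughly the sketch below: read the records back with AvroParquetReader and roll over to a new AvroParquetWriter every N records, so that one input file produces several smaller output files. I have not tested this at our 100 GB scale, and the RECORDS_PER_FILE value and the parquet_data_part_N.parquet naming are just placeholders I made up for illustration.

import java.io.IOException;

import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class ParquetSplitterSketch {

    // Placeholder: how many records to put in each output file.
    private static final long RECORDS_PER_FILE = 500_000L;

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path source = new Path("parquet_data_" + ".parquet");

        int part = 0;
        long written = 0;
        ParquetWriter<GenericRecord> writer = null;
        try (ParquetReader<GenericRecord> reader =
                     AvroParquetReader.<GenericRecord>builder(source).withConf(conf).build()) {
            GenericRecord record;
            while ((record = reader.read()) != null) {
                if (writer == null || written == RECORDS_PER_FILE) {
                    // Close the current chunk (if any) and start the next output file.
                    if (writer != null) {
                        writer.close();
                    }
                    writer = AvroParquetWriter.<GenericRecord>builder(
                                    new Path("parquet_data_part_" + (part++) + ".parquet"))
                            .withSchema(record.getSchema()) // reuse the schema carried by the records
                            .withConf(conf)
                            .withCompressionCodec(CompressionCodecName.SNAPPY)
                            .build();
                    written = 0;
                }
                writer.write(record);
                written++;
            }
        } finally {
            if (writer != null) {
                writer.close();
            }
        }
    }
}

Since the records are streamed one at a time, this should keep memory bounded regardless of the input size, but I do not know whether this is the intended way to do it with the plain Hadoop/Parquet APIs, which is why I am asking.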