Is it possible to split a large Parquet file into smaller files using the Hadoop API?

We have very large Parquet files, each around 100 GB in size.

Is it possible to divide these files into smaller files using the Hadoop API?

We are not using Spark, so we cannot split them with the Spark APIs.

I tried using the Hadoop APIs, but I do not see it working:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class ParquetGeneratorDemo {

    public static void main(String[] args) throws IOException {

        Path path = new Path("parquet_data_" + ".parquet");
        FileSystem fs = FileSystem.get(new Configuration());
        fs.delete(path, false);

        for (int i = 0; i < 1; i++) {
            generateParquetFileFor();
        }
        System.out.println(fs.getFileStatus(path).getLen() / 1024);

        // A FileSplit is only a logical byte range consumed by MapReduce input formats;
        // constructing one does not create a new, smaller Parquet file on disk.
        FileSplit split = new FileSplit(path, 0L, 512L, new String[]{});
        System.out.println(split.toString());
    }

    private static void generateParquetFileFor() {
        try {
            Schema schema = parseSchema();
            Path path = new Path("parquet_data_" + ".parquet");
            System.out.println(path.getName());
            List<GenericData.Record> recordList = generateRecords(schema);

            try (ParquetWriter<GenericData.Record> writer = AvroParquetWriter.<GenericData.Record>builder(path)
                    .withSchema(schema)
                    .withCompressionCodec(CompressionCodecName.SNAPPY)
                    .withRowGroupSize(ParquetWriter.DEFAULT_BLOCK_SIZE)
                    .withPageSize(ParquetWriter.DEFAULT_PAGE_SIZE)
                    .withConf(new Configuration())
                    .withValidation(false)
                    .withDictionaryEncoding(false)
                    .build()) {

                for (GenericData.Record record : recordList) {
                    writer.write(record);
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace(System.out);
        }
    }

    private static Schema parseSchema() {
        String schemaJson = "{\"namespace\": \"org.myorganization.mynamespace\"," //Not used in Parquet, can put anything
                + "\"type\": \"record\"," //Must be set as record
                + "\"name\": \"myrecordname\"," //Not used in Parquet, can put anything
                + "\"fields\": [" + " {\"name\": \"myString\",  \"type\": [\"string\", \"null\"]}" + ", {\"name\": \"myInteger\", \"type\": \"int\"}" //Required field
                + " ]}";

        Schema.Parser parser = new Schema.Parser().setValidate(true);
        return parser.parse(schemaJson);
    }

    private static List<GenericData.Record> generateRecords(Schema schema) {

        List<GenericData.Record> recordList = new ArrayList<>();
        for (int i = 1; i <= 2048 * 1000; i++) {
            GenericData.Record record = new GenericData.Record(schema);
            record.put("myInteger", i);
            record.put("myString", i + " hi world of parquet!");
            recordList.add(record);
        }
        return recordList;
    }
}
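
For reference, splitting can be done without Spark by streaming the records back out into several smaller files with the plain parquet-avro APIs. The following is only a minimal sketch under stated assumptions: the input path, the output file naming, and the maxRecordsPerFile limit are illustrative values, and parquet-avro plus the Hadoop client libraries are assumed to be on the classpath.

import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class ParquetSplitterSketch {

    public static void main(String[] args) throws IOException {
        // Illustrative values: adjust the input path and per-file record limit as needed.
        Path input = new Path("parquet_data_" + ".parquet");
        long maxRecordsPerFile = 1_000_000L;
        Configuration conf = new Configuration();

        try (ParquetReader<GenericRecord> reader =
                     AvroParquetReader.<GenericRecord>builder(input).withConf(conf).build()) {

            GenericRecord record = reader.read();
            int part = 0;

            while (record != null) {
                // The Avro schema is taken from the data itself, so no separate schema definition is needed.
                Schema schema = record.getSchema();
                Path output = new Path("parquet_part_" + part++ + ".parquet");
                long written = 0;

                try (ParquetWriter<GenericRecord> writer =
                             AvroParquetWriter.<GenericRecord>builder(output)
                                     .withSchema(schema)
                                     .withConf(conf)
                                     .withCompressionCodec(CompressionCodecName.SNAPPY)
                                     .build()) {

                    // Copy records into the current part until the limit is hit or the input ends.
                    while (record != null && written < maxRecordsPerFile) {
                        writer.write(record);
                        written++;
                        record = reader.read();
                    }
                }
            }
        }
    }
}

Because records are streamed one at a time, memory stays bounded even for a 100 GB input, unlike the demo above, which materializes every record in a List before writing. A lower-level alternative would be to copy whole row groups using parquet-mr's file-level APIs, which avoids decoding records at all, but the record-level approach is the simplest to get working.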
