Node.js: reading/writing binary data to/from a file

Posted 2025-01-25 08:39:41

Alright, so I have a string with binary data ("111011 10001 etc"), and I'm trying to save it to a file and then read it back in another file using streams. The issue is that the stream is cutting the data off (the final binary number in each chunk gets cut off).

This is how I'm sending the data to the file (reading a file, encoding it with the golombRice encoder, and storing it in a file in chunks of data):

  const writer = fs.createWriteStream(
    `./encodedAndDecoded/encoded${filename}`,
    {
      encoding: "binary",
    }
  );
  const reader = fs.createReadStream(`./silesia/${filename}`, {
    encoding: "base64",
  });

  await new Promise((resolve, reject) => {
    reader.on("data", (chunk) => {
      writer.write(Buffer.from(golombRiceEncoding(chunk, 3)));
    });
    reader.on("end", () => {
      writer.end();
      resolve();
    });
  });

This is how I'm reading it back (reading the encoded file, decoding it with the golombRice decoder, and storing the result in a file in chunks; the issue is that the chunks don't contain the full binary numbers because the stream cuts them):

  const writer = fs.createWriteStream(`./encodedAndDecoded/decoded${filename}`);

  const reader = fs.createReadStream(`./encodedAndDecoded/encoded${filename}`, {
    encoding: "binary",
  });

  await new Promise((resolve, reject) => {
    reader.on("data", (chunk) => {
      writer.write(Buffer.from(golombRiceDecoding(chunk, 3), "base64"));
    });
    reader.on("end", () => {
      writer.end();
      resolve();
    });
  });

Is there a way to read the data using streams without cutting a binary number? I don't mind if it reads x at a time; the problem is when it cuts a binary number, which invalidates the data when decoding.
Thank you

Comments (1)

魂ガ小子 2025-02-01 08:39:41

Your particular compression scheme that you're trying to decode looks like it's a variable byte scheme. So, if a chunk boundary doesn't perfectly line up with the size of a variable byte piece of compression, your decompression library will fail.

To decode something like this on the fly, you have to build logic right into the decoder that recognizes when it only has a partial piece of data and then buffers it to combine with the next chunk of data that arrives. Because the compression is variable byte, you can't know where the proper boundaries are without that knowledge being part of the decompression logic.
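
A minimal sketch of that idea, assuming the encoded file is a space-separated string of codewords (as the "111011 10001" example in the question suggests) and that the question's golombRiceDecoding(bits, k) is in scope, is a Transform stream that carries the possibly-incomplete tail of each chunk over into the next one:

  const fs = require("fs");
  const { Transform } = require("stream");

  // Sketch only: assumes codewords in the encoded file are separated by spaces
  // and that golombRiceDecoding(bits, k) from the question is available.
  class ChunkSafeDecoder extends Transform {
    constructor(k) {
      super();
      this.k = k;
      this.carry = ""; // possibly incomplete codeword left over from the previous chunk
    }

    _transform(chunk, _encoding, callback) {
      const text = this.carry + chunk.toString();
      // Everything up to the last space is made of whole codewords; whatever
      // follows may have been cut off mid-number, so hold it for the next chunk.
      const lastSpace = text.lastIndexOf(" ");
      if (lastSpace === -1) {
        this.carry = text; // no complete codeword yet
        return callback();
      }
      this.carry = text.slice(lastSpace + 1);
      // Like the question's code, each decoded piece is treated as base64 on its own.
      this.push(
        Buffer.from(golombRiceDecoding(text.slice(0, lastSpace), this.k), "base64")
      );
      callback();
    }

    _flush(callback) {
      // Decode whatever is still buffered once the input ends.
      if (this.carry.length > 0) {
        this.push(Buffer.from(golombRiceDecoding(this.carry, this.k), "base64"));
      }
      callback();
    }
  }

  // Hypothetical usage, mirroring the question's paths:
  fs.createReadStream(`./encodedAndDecoded/encoded${filename}`)
    .pipe(new ChunkSafeDecoder(3))
    .pipe(fs.createWriteStream(`./encodedAndDecoded/decoded${filename}`));

Whether splitting on spaces is the right boundary test depends on what golombRiceEncoding actually emits; the point is only that the partial tail gets buffered instead of decoded.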

Or, you can give up on on-the-fly decoding, buffer all the compressed data into memory, and decompress it all at once when you have everything (then you don't have any chunk boundaries to worry about).
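
If you go that route, a rough sketch (again leaning on the question's golombRiceDecoding and file paths, so treat the details as assumptions) could be as simple as:

  const fs = require("fs");

  // Buffer-everything sketch: read the whole encoded file first, so there are
  // no chunk boundaries to worry about, at the cost of holding it all in memory.
  const encoded = await fs.promises.readFile(
    `./encodedAndDecoded/encoded${filename}`,
    "binary" // the encoded data is a plain text string of bits
  );
  await fs.promises.writeFile(
    `./encodedAndDecoded/decoded${filename}`,
    Buffer.from(golombRiceDecoding(encoded, 3), "base64")
  );

This uses await at the top of the snippet, matching the question's own code, so it is assumed to run inside an async function.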
