Concatenate streaming audio buffers into one file and store it

Posted 2025-01-29 17:22:47


I am using Google Cloud Speech to recognize streaming audio (from a microphone) on a Node server. At the same time, I would like to store the streamed audio in a file. Since streaming recognition operates on chunks of buffers, how can I combine all the buffers and store them as a single audio file?

The actual code I am using is adapted from this library, where the audio encoding on the server side is defined as

const encoding = 'LINEAR16';   // raw 16-bit signed little-endian PCM, no container
const sampleRateHertz = 16000; // 16,000 samples per second

Each time an audio chunk arrives, it is sent through the cloud API:

client.on('binaryData', function (data) {
  if (recognizeStream !== null) {
    recognizeStream.write(data);
  }
});
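One option for keeping a copy of what the recognizer hears is to tee each chunk into a file stream alongside recognizeStream. A minimal sketch, assuming a socket.io-style client whose disconnect event marks the end of the stream; the fs write stream and the audio.raw path are illustrative, not part of the original code:

const fs = require('fs');

// Illustrative sink: LINEAR16 chunks are raw PCM with no per-chunk
// framing, so they can simply be appended to one file in arrival order.
const fileStream = fs.createWriteStream('audio.raw');

client.on('binaryData', function (data) {
  if (recognizeStream !== null) {
    recognizeStream.write(data);
  }
  fileStream.write(Buffer.from(data)); // tee the same chunk to disk
});

client.on('disconnect', function () {
  fileStream.end(); // flush and close the file when the client leaves
});

The resulting file is headerless raw PCM; to play it in ordinary players it still needs a WAV header (the comment below covers that).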

On the client side, the buffer is preprocessed in this file (downsampled into an Int16Array).

So, when the streaming finishes, can I just concatenate the buffers and save them to a PCM file on the server side? Or do I need to merge the buffers in a more involved way (for example with audiobuffer-to-wav)?
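For LINEAR16 specifically, plain concatenation does produce a valid raw audio file, because the encoding has no per-chunk framing. A minimal sketch of that route, assuming the chunks were also collected into an array; audioChunks and saveRawPcm are illustrative names, not from the original code:

const fs = require('fs');

const audioChunks = []; // while streaming: audioChunks.push(Buffer.from(data))

// After streaming ends, join all chunks and write one raw PCM file.
// The file is headerless, so a player has to be told the format
// (16-bit signed little-endian, 16000 Hz) when opening it.
function saveRawPcm(path) {
  fs.writeFileSync(path, Buffer.concat(audioChunks));
}

audiobuffer-to-wav would only be needed to produce a WAV container; the sample data itself does not have to be re-encoded.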


Comments (1)

错々过的事 2025-02-05 17:22:47


There is an SO answer here which solves exactly my question. It uses the wav library to add a header and then writes the audio buffers to a file.
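A minimal sketch of that approach, assuming the 16 kHz LINEAR16 stream from the question and a mono microphone; the out.wav path and the event wiring are illustrative:

const wav = require('wav');

// wav.FileWriter is a Writable stream that writes a RIFF/WAV header
// matching these parameters and then streams the samples to disk.
const fileWriter = new wav.FileWriter('out.wav', {
  channels: 1,       // assuming a mono microphone stream
  sampleRate: 16000, // matches sampleRateHertz on the server
  bitDepth: 16       // LINEAR16 = 16-bit signed PCM
});

client.on('binaryData', function (data) {
  fileWriter.write(Buffer.from(data));
});

client.on('disconnect', function () {
  fileWriter.end(); // finalizes the header's size fields and closes the file
});

Because the header carries the format, the resulting out.wav plays in any ordinary audio player, unlike a raw .pcm dump.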
