Concatenate streaming audio buffers into one file and store it
I am using Google Cloud Speech to recognize streaming audio (from a microphone) on a Node server. At the same time, I would like to store the streamed audio in a file. Since the streaming recognition operates on chunks of buffers, how can I combine all the buffers and store them as a single audio file?
The actual code I am using is adapted from this library, where the audio encoding on the server side is defined as
const encoding = 'LINEAR16';
const sampleRateHertz = 16000;
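For context, here is a minimal sketch of how these constants would typically feed into a streaming request with the @google-cloud/speech client; the languageCode, client setup, and result handling are assumptions, not taken from the original code:
// Sketch only: languageCode and the client setup below are assumed.
const speech = require('@google-cloud/speech');
const speechClient = new speech.SpeechClient();

const recognizeStream = speechClient
  .streamingRecognize({
    config: {
      encoding: encoding,               // 'LINEAR16' = raw 16-bit PCM
      sampleRateHertz: sampleRateHertz, // 16000
      languageCode: 'en-US',            // assumed
    },
    interimResults: false,
  })
  .on('error', console.error)
  .on('data', (response) => {
    const result = response.results[0];
    if (result && result.alternatives[0]) {
      console.log(result.alternatives[0].transcript);
    }
  });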
Each time an audio chunk arrives, it is sent via the Cloud API:
client.on('binaryData', function (data) {
  // Forward each incoming audio chunk to the streaming recognizer.
  if (recognizeStream !== null) {
    recognizeStream.write(data);
  }
});
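To keep the audio as well, one option is to collect a copy of every chunk as it arrives; a minimal sketch, where the audioChunks array is a hypothetical addition:
const audioChunks = []; // hypothetical: accumulates the raw LINEAR16 chunks

client.on('binaryData', function (data) {
  if (recognizeStream !== null) {
    recognizeStream.write(data);
  }
  audioChunks.push(Buffer.from(data)); // keep a copy for saving later
});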
On the client side, the buffer is preprocessed in this file (downsampled into an Int16Array).
So can I just concatenate the buffers and save them to a .pcm file on the server side when the streaming finishes? Or do I need to merge the buffers in a more complex way (for example, using audiobuffer-to-wav)?
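On the first option: because LINEAR16 is raw 16-bit PCM with no per-chunk framing, concatenating the chunks does yield a valid headerless .pcm file; a minimal sketch, assuming the hypothetical audioChunks array from above:
const fs = require('fs');

// Raw PCM has no header, so the sample rate (16000 Hz), bit depth (16)
// and channel count must be supplied out of band when playing the file.
fs.writeFileSync('recording.pcm', Buffer.concat(audioChunks));
A .wav file differs only in that it prepends a small header describing exactly those parameters, which is what makes it playable directly.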
Comments (1)
There is an SO answer here which solves exactly my question: it uses the wav library to add a WAV header and then writes the audio buffers to a file.
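A minimal sketch of that approach with the wav package; the file name and options are assumptions, matched to the LINEAR16 / 16 kHz stream above:
const wav = require('wav');

// FileWriter writes a WAV header for these options, streams the PCM data
// through, and fixes up the header's chunk sizes when the stream is ended.
const fileWriter = new wav.FileWriter('recording.wav', {
  channels: 1,       // assumed mono microphone input
  sampleRate: 16000, // matches sampleRateHertz
  bitDepth: 16,      // matches LINEAR16
});

client.on('binaryData', function (data) {
  fileWriter.write(data);
});

// Call fileWriter.end() when the streaming finishes.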