Writing audio to disk from an IO unit
Rewriting this question to be a little more succinct.
My problem is that I can't successfully write an audio file to disk from a remote IO unit.
The steps I took were to:
Open an MP3 file and extract its audio into buffers.
Set up an ASBD to use with my graph, based on the properties of the graph.
Set up and run my graph, looping the extracted audio, and sound successfully comes out of the speaker!
What I'm having difficulty with is taking the audio samples from the remote IO callback and writing them to an audio file on disk, which I'm using ExtAudioFileWriteAsync for.
The audio file does get written and bears some audible resemblance to the original MP3, but it sounds very distorted.
I'm not sure if the problem is
A) ExtAudioFileWriteAsync can't write the samples as fast as the IO unit callback provides them.
- or -
B) I have set up the ASBD for the ExtAudioFile reference wrong. I wanted to begin by saving a WAV file. I'm not sure if I have described this properly in the ASBD below.
Secondly, I am uncertain what value to pass for the inChannelLayout property when creating the audio file.
And finally, I am very uncertain about what ASBD to use for kExtAudioFileProperty_ClientDataFormat.
I had been using my stereo stream format, but a closer look at the docs says this must be PCM. Should this be the same format as the output of the RemoteIO? And if so, was I wrong to set the output format of the RemoteIO to stereoStreamFormat?
I realize there's an awful lot in this question, but I have a lot of uncertainties that I can't seem to clear up on my own.
**setup stereo stream format**
- (void)setupStereoStreamFormat
{
    size_t bytesPerSample = sizeof(AudioUnitSampleType);
    stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
    stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
    stereoStreamFormat.mFramesPerPacket  = 1;
    stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
    stereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
    stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
    stereoStreamFormat.mSampleRate       = engineDescribtion.samplerate;
    NSLog(@"The stereo stream format:");
}
**setup remoteio callback using stereo stream format**
AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output,
                     masterChannelMixerUnitloop,
                     &stereoStreamFormat,
                     sizeof(stereoStreamFormat));
AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     masterChannelMixerUnitloop,
                     &stereoStreamFormat,
                     sizeof(stereoStreamFormat));
static OSStatus masterChannelMixerUnitCallback(void *inRefCon,
                                               AudioUnitRenderActionFlags *ioActionFlags,
                                               const AudioTimeStamp *inTimeStamp,
                                               UInt32 inBusNumber,
                                               UInt32 inNumberFrames,
                                               AudioBufferList *ioData)
{
    Engine *engine = (Engine *)inRefCon;

    // pull the processed audio from the EQ unit into ioData
    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    if (engine->isrecording)
    {
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }
    return noErr;
}
**the recording setup**
- (void)startrecording
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    destinationFilePath = [[NSString alloc] initWithFormat:@"%@/testrecording.wav", documentsDirectory];
    destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);

    OSStatus status;

    // prepare a 16-bit int file format, stereo, at 44.1 kHz
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate       = 44100.0;
    dstFormat.mFormatID         = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags      = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket   = 4;
    dstFormat.mBytesPerFrame    = 4;
    dstFormat.mFramesPerPacket  = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel   = 16;
    dstFormat.mReserved         = 0;

    // create the capture file
    status = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingfileref);
    CheckError(status, "couldn't create audio file");

    // set the capture file's client format to be the canonical format used in the graph
    status = ExtAudioFileSetProperty(recordingfileref, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
    CheckError(status, "couldn't set input format");

    ExtAudioFileSeek(recordingfileref, 0);
    isrecording = YES;
}
edit 1
I'm really stabbing in the dark here now, but do I need to use an audio converter, or does kExtAudioFileProperty_ClientDataFormat take care of that?
edit 2
I'm attaching two samples of audio. The first is the original audio that I'm looping and trying to copy. The second is the recorded audio of that loop. Hopefully it might give somebody a clue as to what's going wrong.
**answer**
After a couple of days of tears and hair-pulling, I have a solution.
In my code, and in other examples I have seen, ExtAudioFileWriteAsync was called in the callback for the RemoteIO unit, like so.
**remoteio unit callback**
In this callback I'm pulling audio data from another audio unit that applies EQs and mixes audio.
I moved the ExtAudioFileWriteAsync call from the RemoteIO callback into this other callback that the RemoteIO pulls, and the file writes successfully!
**equnit's callback function**
In the interest of fully understanding why my solution worked, could somebody explain why writing data to the file from the ioData buffer list of the RemoteIO causes distorted audio, but writing the data one step further down the chain results in perfect audio?