Simplest way to capture raw audio from the audio input for real-time processing on a Mac

Published 2024-11-08 06:46:21


What is the simplest way to capture audio from the built-in audio input and be able to read the raw sample values (as in a .wav) in real time as they come in, when requested, like reading from a socket?

Hopefully with code that uses one of Apple's frameworks (Audio Queues). The documentation is not very clear, and what I need is very basic.


Comments (2)

煩躁 2024-11-15 06:46:21

Try the AudioQueue framework for this. You mainly have to perform three steps:

  1. Set up an audio format describing how the incoming analog audio should be sampled (see the format sketch below)
  2. Start a new recording AudioQueue with AudioQueueNewInput()
  3. Register a callback routine that handles the incoming audio data packets

In step 3 you receive each chunk of raw audio data in the callback's buffer; properties of the queue itself can be queried with AudioQueueGetProperty().
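
For step 1, the format is described by an AudioStreamBasicDescription. A minimal sketch, assuming uncompressed 16-bit signed mono linear PCM at 44.1 kHz (the rate, bit depth and channel count here are just example choices):

static AudioStreamBasicDescription MakeInputFormat() {
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;                // 44.1 kHz
    fmt.mFormatID         = kAudioFormatLinearPCM;  // raw samples, as in a .wav
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;                      // mono
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(SInt16);
    fmt.mFramesPerPacket  = 1;                      // uncompressed: one frame per packet
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    return fmt;
}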

The recording setup (steps 2 and 3) is roughly like this:

#include <AudioToolbox/AudioToolbox.h>

// State shared between the setup code and the callback.
static struct {
    AudioStreamBasicDescription mDataFormat;
    AudioQueueRef               mQueue;
} aqData;

static void HandleAudioCallback (void                               *inUserData,
                                 AudioQueueRef                      inAQ,
                                 AudioQueueBufferRef                inBuffer, 
                                 const AudioTimeStamp               *inStartTime, 
                                 UInt32                             inNumPackets, 
                                 const AudioStreamPacketDescription *inPacketDesc) {
    // Here you examine your audio data: inBuffer->mAudioData holds
    // inBuffer->mAudioDataByteSize bytes of raw samples.

    // Hand the buffer back to the queue so it can be refilled.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static void StartRecording() {
    // now let's start the recording
    AudioQueueNewInput (&aqData.mDataFormat,  // the sampling format (an AudioStreamBasicDescription)
                        HandleAudioCallback,  // your callback routine
                        &aqData,              // user data handed to the callback
                        NULL,                 // run loop: NULL = the queue's own internal thread
                        kCFRunLoopCommonModes, 
                        0,                    // reserved, must be 0
                        &aqData.mQueue);      // your freshly created AudioQueue

    // (allocate and enqueue some buffers here before starting; see the sketch below)
    AudioQueueStart(aqData.mQueue,
                    NULL);
}
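
One step the snippet above only hints at: an input queue does nothing until it has been given empty buffers to fill. A minimal sketch, meant to run between AudioQueueNewInput() and AudioQueueStart(); the buffer count and size below are illustrative assumptions (in real code, derive the size from the format and a desired buffer duration):

static void AllocateRecordingBuffers() {
    static const int kNumberBuffers = 3;   // a handful of buffers keeps the queue fed
    const UInt32 bufferByteSize = 32768;   // illustrative; size it from the format

    for (int i = 0; i < kNumberBuffers; ++i) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(aqData.mQueue, bufferByteSize, &buffer);
        AudioQueueEnqueueBuffer(aqData.mQueue, buffer, 0, NULL);
    }
}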

I suggest the Apple AudioQueue Services Programming Guide for detailed information about how to start and stop the AudioQueue and how to correctly set up all the required objects.

You may also take a closer look at Apple's demo program SpeakHere. But IMHO it is a bit confusing to start with.

氛圍 2024-11-15 06:46:21

It depends how 'real-time' you need it to be.

If you need it very crisp, go right down to the bottom level and use Audio Units. That means setting up an INPUT callback. Remember, when this fires you need to allocate your own buffers and then request the audio from the microphone.

I.e. don't get fooled by the presence of a buffer pointer in the parameters... it is only there because Apple uses the same function declaration for the input and render callbacks.
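
Registering such an input callback on a RemoteIO (or AUHAL, on the Mac) unit looks roughly like this. A sketch, assuming the unit has already been created as myUnit and that unitClass is whatever object you want handed back as inRefCon; error handling omitted:

// Input is disabled by default on these units; enable it on bus 1.
UInt32 one = 1;
AudioUnitSetProperty(myUnit,
                     kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input,
                     1,                     // bus 1 = input
                     &one,
                     sizeof(one));

// Point the unit's input at our callback (the function pasted below).
AURenderCallbackStruct cb;
cb.inputProc       = dataArrivedFromMic;
cb.inputProcRefCon = unitClass;             // arrives in the callback as inRefCon

AudioUnitSetProperty(myUnit,
                     kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global,
                     0,
                     &cb,
                     sizeof(cb));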

Here is a paste from one of my projects:

OSStatus dataArrivedFromMic(
                    void                        * inRefCon, 
                    AudioUnitRenderActionFlags  * ioActionFlags, 
                    const AudioTimeStamp        * inTimeStamp, 
                    UInt32                      inBusNumber, 
                    UInt32                      inNumberFrames, 
                    AudioBufferList             * dummy_notused )
{    
    OSStatus status;

    RemoteIOAudioUnit* unitClass = (RemoteIOAudioUnit *)inRefCon;

    AudioComponentInstance myUnit = unitClass.myAudioUnit;

    AudioBufferList ioData;
    {
        int kNumChannels = 1; // one channel...

        enum {
            kMono = 1,
            kStereo = 2
        };

        ioData.mNumberBuffers = kNumChannels;

        // NB: malloc'ing inside a render callback is risky, and this buffer is
        // never freed here; in production, pre-allocate it outside the callback.
        for (int i = 0; i < kNumChannels; i++) 
        {
            int bytesNeeded = inNumberFrames * sizeof( Float32 );

            ioData.mBuffers[i].mNumberChannels = kMono;
            ioData.mBuffers[i].mDataByteSize = bytesNeeded;
            ioData.mBuffers[i].mData = malloc( bytesNeeded );
        }
    }

    // actually GET the data that arrived
    status = AudioUnitRender( myUnit,   // no cast needed: AudioComponentInstance is an AudioUnit
                              ioActionFlags, 
                              inTimeStamp, 
                              inBusNumber, 
                              inNumberFrames, 
                              &ioData );


    // take MONO from mic
    const int channel = 0;
    Float32 * outBuffer = (Float32 *) ioData.mBuffers[channel].mData;

    // get a handle to our game object
    static KPRing* kpRing = nil;
    if ( ! kpRing )
    {
        //AppDelegate *  appDelegate = [UIApplication sharedApplication].delegate;

        kpRing = [Game singleton].kpRing;

        assert( kpRing );
    }

    // ... and send it the data we just got from the mic
    [ kpRing floatsArrivedFromMic: outBuffer
                            count: inNumberFrames ];

    return status;
}