Playing PCM data on an iPhone

Published on 2024-11-04 20:59:16


I need to play linear PCM data live on an iPhone.

I get a live data stream via RTSP. I can currently read it on the iPhone, save it to a file, and play that file in a desktop audio player that supports PCM, so I think the transport is okay.

Now I am stuck: I have absolutely no idea what to do with the NSData object containing the samples.

I did some research and ended up at Audio Units, but I just cannot assign my NSData to the audio buffer; more precisely, I have no clue how.

In my case, I assigned the callback:

AURenderCallbackStruct input;
input.inputProc = makeSound;
input.inputProcRefCon = self;

with the function makeSound defined as:

   OSStatus makeSound(void                        *inRefCon,
                      AudioUnitRenderActionFlags  *ioActionFlags,
                      const AudioTimeStamp        *inTimeStamp,
                      UInt32                      inBusNumber,
                      UInt32                      inNumberFrames,
                      AudioBufferList             *ioData)
   {
       // So what do I do here?
       // ioData->mBuffers[0].mData = [mySound bytes]; does not work, nor does
       // ioData->mBuffers = [mySound bytes];

       return noErr;
   }
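(For reference, the struct alone does nothing until it is attached to the audio unit; a minimal sketch of the attachment, assuming an already-configured RemoteIO unit named audioUnit, which is not shown above:)

   AudioUnitSetProperty(audioUnit,                        // assumed: an initialized RemoteIO unit
                        kAudioUnitProperty_SetRenderCallback,
                        kAudioUnitScope_Input,
                        0,                                 // output element
                        &input,
                        sizeof(input));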

Is my approach wrong in general?

What do I need to know/learn/implement? I am a complete audio newbie, so my assumption was that I don't need several buffers: when a new sound packet arrives via RTSP, the old one is finished, since it's a live stream. (I base this on my recordings, which just appended the bytes without looking at presentation timestamps, since I don't receive any anyway.)

Cheers


Comments (1)

烟织青萝梦 2024-11-11 20:59:16


I don't know if this is exactly what you are looking for, but some of Matt Gallagher's AudioStreamer code might be helpful to you. In particular, check out how he handles the audio buffering.
http://cocoawithlove.com/2010/03/streaming-mp3aac-audio-again.html
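To make the buffering idea concrete for raw PCM (a rough sketch of the general pattern, not Matt's exact code): let the RTSP thread append incoming bytes to a thread-safe FIFO, and have the render callback copy exactly the number of bytes Core Audio asks for into ioData, padding with silence on underrun. Never repoint mData at your NSData; fill the buffer Core Audio hands you. fifoRead here is a hypothetical placeholder:

#include <AudioUnit/AudioUnit.h>
#include <string.h>

// Hypothetical FIFO filled by the RTSP reader thread, e.g. with
// fifoWrite([packet bytes], [packet length]). Returns bytes actually read.
extern UInt32 fifoRead(void *dst, UInt32 maxBytes);

OSStatus makeSound(void                        *inRefCon,
                   AudioUnitRenderActionFlags  *ioActionFlags,
                   const AudioTimeStamp        *inTimeStamp,
                   UInt32                      inBusNumber,
                   UInt32                      inNumberFrames,
                   AudioBufferList             *ioData)
{
    // Core Audio owns this buffer; fill it, don't replace the pointer.
    AudioBuffer *buf    = &ioData->mBuffers[0];
    UInt32      wanted  = buf->mDataByteSize;           // bytes requested this render cycle
    UInt32      got     = fifoRead(buf->mData, wanted); // copy from the FIFO (hypothetical)

    if (got < wanted) {
        // Underrun: pad with silence rather than playing stale data.
        memset((char *)buf->mData + got, 0, wanted - got);
    }
    return noErr;
}

This assumes the unit's stream format matches your PCM (sample rate, channel count, bit depth); if it doesn't, you would set kAudioUnitProperty_StreamFormat accordingly before starting the unit.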
