How to change a recorded voice to a male voice in Core Audio (Audio Unit / RemoteIO) on iPhone

Posted 2024-09-15 02:25:46


I am new to Core Audio and really lost. I am trying to record audio, then apply voice modulation to that recording and play it back. I have looked at the SpeakHere example, which uses Audio Queue for audio recording. I am stuck on how to change the audio samples. I understand it can be done using an Audio Unit, in the callback function, but I have no idea what to apply to those samples to change them (would changing the pitch help?).

If you could direct me to some source code, a tutorial, or any site that explains voice modulation in Objective-C, it would really help me. Thank you all in advance.


2 Answers

壹場煙雨 2024-09-22 02:25:46


What you are trying to do here is not that simple. Basically, you would have to implement a vocoder ("voice-coder") to change a voice. The Wikipedia links should help you there.

Then, you still have to manipulate those samples in Core Audio. You can do this using Audio Queue Services, but that is not exactly an easy-to-use API. It might actually be less trouble to use one of the simpler Core Audio APIs and wrap your vocoder in an Audio Unit.

Do you have some experience with audio processing? Implementing a vocoder without some prior knowledge about audio processing in general is a tough task.

脱离于你 2024-09-22 02:25:46


First, to actually answer your question: when you call the AudioQueueNewInput() function, you pass it the name of a routine that will be called every time data is available to you. You probably called it MyInputBufferHandler() or something. Its third argument is an AudioQueueBufferRef, which holds the incoming data.

Be aware that this is not as simple as looking at each sample (amplitude) and lowering or raising it. You receive samples in the temporal (time) domain as amplitudes. There is no pitch or frequency information available. What you need to do is move the incoming samples (waveform) into the frequency domain, wherein each "point" in that space is a frequency with its accompanying power and phase. You can do that with an FFT (fast Fourier transform), but the mathematics are somewhat sophisticated. Apple does provide FFT routines in the Accelerate framework, but be aware that you are wading into very deep water here.
