Reverb Algorithm

Published 2024-10-23 19:52:04 · 1539 characters · 0 views · 0 comments


Comments (4)

丿*梦醉红颜 2024-10-30 19:52:04


Here is a very simple implementation of a "delay line" which will produce a reverb effect in an existing array (C#, buffer is short[]):

int delayMilliseconds = 500; // half a second
int delaySamples = 
    (int)((float)delayMilliseconds * 44.1f); // assumes 44100 Hz sample rate
float decay = 0.5f;
for (int i = 0; i < buffer.Length - delaySamples; i++)
{
    // WARNING: overflow potential
    buffer[i + delaySamples] += (short)((float)buffer[i] * decay);
}

Basically, you take the value of each sample, multiply it by the decay parameter and add the result to the value in the buffer delaySamples away.

This will produce a true "reverb" effect, as each sound will be heard multiple times with declining amplitude. To get a simpler echo effect (where each sound is repeated only once) you use basically the same code, only run the for loop in reverse.
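To make the forward-versus-reverse distinction concrete, here is a small Python sketch of the same delay line fed a single impulse (the function names are mine; the answer's code is C#):

```python
# Python sketch of the delay line above. The forward loop feeds each
# echo back into later iterations (repeating reverb), while the
# reverse loop only ever reads original samples (a single echo).

def forward_delay(buffer, delay_samples, decay):
    out = list(buffer)
    for i in range(len(out) - delay_samples):
        out[i + delay_samples] += out[i] * decay   # echoes of echoes
    return out

def reverse_delay(buffer, delay_samples, decay):
    out = list(buffer)
    for i in range(len(out) - delay_samples - 1, -1, -1):
        out[i + delay_samples] += out[i] * decay   # reads untouched samples only
    return out

impulse = [1.0] + [0.0] * 9
reverb = forward_delay(impulse, 3, 0.5)   # echoes at 3, 6, 9 (0.5, 0.25, 0.125)
echo = reverse_delay(impulse, 3, 0.5)     # single echo at 3 (0.5)
```

The forward loop repeats because index `i + delaySamples` is later read back as some future `i`, so echoes generate echoes; the reverse loop never reads an index it has already written.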

Update: the word "reverb" in this context has two common usages. My code sample above produces a classic reverb effect common in cartoons, whereas in a musical application the term is used to mean reverberation, or more generally the creation of artificial spatial effects.

A big reason the literature on reverberation is so difficult to understand is that creating a good spatial effect requires much more complicated algorithms than my sample method here. However, most electronic spatial effects are built up using multiple delay lines, so this sample hopefully illustrates the basics of what's going on. To produce a really good effect, you can (or should) also muddy the reverb's output using FFT or even simple blurring.
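The "simple blurring" mentioned above can be as little as a short moving average over the reverb tail, which acts as a crude low-pass filter. A minimal Python sketch (the window width is an arbitrary choice):

```python
# Moving-average "blur" of a reverb tail; a crude stand-in for the
# FFT-based smearing mentioned above.

def blur(samples, width=3):
    half = width // 2
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - half): i + half + 1]  # clamped at the edges
        out.append(sum(window) / len(window))
    return out

smeared = blur([0.0, 0.0, 3.0, 0.0, 0.0])   # spreads the spike across neighbours
```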

Update 2: Here are a few tips for multiple-delay-line reverb design:

  • Choose delay values that won't positively interfere with each other (in the wave sense). For example, if you have one delay at 500ms and a second at 250ms, there will be many spots that have echoes from both lines, producing an unrealistic effect. It's common to multiply a base delay by different prime numbers in order to help ensure that this overlap doesn't happen.

  • In a large room (in the real world), when you make a noise you will tend to hear a few immediate (a few milliseconds) sharp echoes that are relatively undistorted, followed by a larger, fainter "cloud" of echoes. You can achieve this effect cheaply by using a few backwards-running delay lines to create the initial echoes and a few full reverb lines plus some blurring to create the "cloud".

  • The absolute best trick (and I almost feel like I don't want to give this one up, but what the hell) only works if your audio is stereo. If you slightly vary the parameters of your delay lines between the left and right channels (e.g. 490ms for the left channel and 513ms for the right, or .273 decay for the left and .2631 for the right), you'll produce a much more realistic-sounding reverb.
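Putting the tips together, a multi-delay-line design might look like the following Python sketch. All the numbers (base delay, prime multiples, per-channel decays, the 13-sample right-channel detune) are illustrative, not prescriptive:

```python
# Sketch of a multi-delay-line stereo reverb following the tips above:
# delays are prime multiples of a base delay, and the left/right
# channels use slightly different delays and decays.

def delay_line(buffer, delay_samples, decay):
    out = list(buffer)
    for i in range(len(out) - delay_samples):
        out[i + delay_samples] += out[i] * decay
    return out

def stereo_reverb(mono, base_delay=100):
    primes = [2, 3, 5, 7]                # keeps the lines from overlapping
    left, right = list(mono), list(mono)
    for p in primes:
        left = delay_line(left, base_delay * p, 0.273)
        right = delay_line(right, base_delay * p + 13, 0.2631)  # detuned channel
    return left, right

impulse = [1.0] + [0.0] * 1499
left, right = stereo_reverb(impulse)     # the two channels are now decorrelated
```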

烟雨凡馨 2024-10-30 19:52:04


Digital reverbs generally come in two flavors.

  • Convolution Reverbs convolve an impulse response with an input signal. The impulse response is often a recording of a real room or other reverberation source. The character of the reverb is defined by the impulse response, so convolution reverbs usually provide limited means of adjusting the reverb character.

  • Algorithmic Reverbs mimic reverb with a network of delays, filters and feedback. Different schemes will combine these basic building blocks in different ways. Much of the art is in knowing how to tune the network. Algorithmic reverbs usually expose several parameters to the end user so the reverb character can be adjusted to suit.
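For illustration, a convolution reverb reduces to convolving the dry signal with an impulse response. A toy Python sketch (the impulse response here is made up; real ones are recordings of rooms):

```python
# Minimal convolution-reverb sketch: the wet output is the dry input
# convolved with an impulse response.

def convolve(signal, impulse_response):
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h          # each input sample triggers the whole IR
    return out

dry = [1.0, 0.0, 0.5]
ir = [1.0, 0.0, 0.0, 0.6, 0.0, 0.3]      # toy IR: direct sound plus two echoes
wet = convolve(dry, ir)
```

Real implementations do this with FFT-based fast convolution, since impulse responses of a few seconds mean hundreds of thousands of taps per output sample.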

The A Bit About Reverb post at EarLevel is a great introduction to the subject. It explains the differences between convolution and algorithmic reverbs and shows some details on how each might be implemented.

Physical Audio Signal Processing by Julius O. Smith has a chapter on reverb algorithms, including a section dedicated to the Freeverb algorithm. Skimming over that might help when searching for some source code examples.

Sean Costello's Valhalla blog is full of interesting reverb tidbits.

熊抱啵儿 2024-10-30 19:52:04


What you need is the impulse response of the room or reverb chamber which you want to model or simulate. The full impulse response will include all the multiple and multi-path echoes. The length of the impulse response will be roughly equal to the length of time (in samples) it takes for an impulse sound to decay completely below the audible threshold or a given noise floor.

Given an impulse vector of length N, you could produce each audio output sample as the dot product of the input vector (the current audio input sample concatenated with the previous N-1 input samples) with the impulse vector, with appropriate scaling.

Some people simplify this by assuming most taps (potentially all but one) in the impulse response are zero, and just use a few scaled delay lines for the remaining echoes, which are then added into the output.
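That simplification can be sketched in Python as a handful of (delay, gain) taps in place of a full-length impulse response (the tap values here are arbitrary):

```python
# Sparse-tap approximation of an impulse response: keep only a few
# non-zero taps and sum scaled, delayed copies of the input.

def sparse_reverb(signal, taps):
    """taps: list of (delay_in_samples, gain) pairs; (0, 1.0) is the dry path."""
    out = [0.0] * (len(signal) + max(d for d, _ in taps))
    for delay, gain in taps:
        for i, s in enumerate(signal):
            out[i + delay] += s * gain   # one scaled delay line per tap
    return out

wet = sparse_reverb([1.0, 0.0, 0.0, 0.0], [(0, 1.0), (2, 0.6), (5, 0.25)])
```

The cost now scales with the number of taps rather than with the impulse-response length, which is the whole point of the simplification.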

For even more realistic reverb, you might want to use different impulse responses for each ear, and have the response vary a bit with head position. A head movement of as little as a quarter inch might vary the position of peaks in the impulse response by 1 sample (at 44.1k rates).

夜吻♂芭芘 2024-10-30 19:52:04


You can use GVerb. Get the code from here. GVerb is a LADSPA plug-in; you can go here if you want to learn more about LADSPA.

Here is the wiki for GVerb, including an explanation of the parameters and some instant reverb settings.

We can also use it directly in Objective-C:

ty_gverb *_verb;
_verb = gverb_new(16000.f, 41.f, 40.0f, 7.0f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f);
AudioSampleType *samples = (AudioSampleType *)dataBuffer.mBuffers[0].mData; // audio data from an AudioUnit render or ExtAudioFileReader
float lval, rval;
for (int i = 0; i < fileLengthFrames; i++) {
    float value = (float)samples[i] / 32768.f; // SInt16 to float
    gverb_do(_verb, value, &lval, &rval);
    samples[i] = (SInt16)(lval * 32767.f);     // float back to SInt16 (left output only)
}

GVerb is a mono effect, but if you want a stereo effect you could run each channel through the effect separately, then pan and mix the processed signals with the dry signals as required.
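The per-channel wet/dry mixing suggested here can be sketched in Python (the function name and mix value are illustrative):

```python
# Wet/dry blend for a mono effect run per channel: mix=0 is fully dry,
# mix=1 is fully wet.

def mix_wet_dry(dry, wet, mix=0.5):
    return [d * (1.0 - mix) + w * mix for d, w in zip(dry, wet)]

# e.g. blend a processed left channel back with its dry signal
left_out = mix_wet_dry([1.0, 0.0], [0.0, 1.0], mix=0.25)
```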
