That particular musical instrument sounds to me like it's a fairly simple synthesis module, based perhaps on a square wave or FM, with a reverb filter tacked on. So I'm guessing it's artificially generated sound all the way down. If you were going to build one of these instruments yourself, you could use a sample set as your basis instead if you wished. There's another possibility I'm going to mention a ways below.
Dealing with breath input: The breath input is generally translated to a value that represents the air pressure on the input microphone. This can be done by taking small chunks of the input audio signal and calculating the peak or RMS of each chunk. I prefer RMS, which is calculated by something like:
const int BUFFER_SIZE = 1024; // just for purposes of this example
float buffer[BUFFER_SIZE];    // 1 channel of float samples between -1.0 and 1.0
float rms = 0.0f;
for (int i = 0; i < BUFFER_SIZE; ++i) {
    rms += buffer[i] * buffer[i];
}
rms = sqrtf(rms / BUFFER_SIZE); // sqrtf lives in <math.h>; use std::sqrt in C++
In MIDI, this value is usually transmitted as continuous controller 2 (CC2, the breath controller) with a value between 0 and 127. That value is then used to continuously control the volume of the output sound. (On the iPhone, MIDI may or may not be used internally, but the concept's the same. I'll call this value CC2 from here on out regardless.)
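To make that concrete, here's a minimal sketch of mapping an RMS value to a CC2-style value. The noise-floor and full-breath constants are made up for illustration and would need tuning against your actual microphone input:

```cpp
#include <algorithm>

// Map an RMS value (0.0 .. 1.0) to a CC2-style breath value (0 .. 127).
// NOISE_FLOOR and FULL_BREATH are arbitrary example constants: anything
// below the noise floor counts as silence, and FULL_BREATH is the RMS
// level you decide counts as "blowing as hard as possible".
int rmsToCC2(float rms) {
    const float NOISE_FLOOR = 0.01f;
    const float FULL_BREATH = 0.5f;
    if (rms <= NOISE_FLOOR) return 0;
    float t = (rms - NOISE_FLOOR) / (FULL_BREATH - NOISE_FLOOR);
    t = std::min(1.0f, std::max(0.0f, t));
    return static_cast<int>(t * 127.0f + 0.5f);
}
```

You'd call this once per analysis chunk and feed the result into whatever controls your output volume.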
Dealing with key presses: The key presses in this case are probably just mapped directly to the notes that they correspond to. These would then be sent as new note events to the instrument. I don't think there's any fancy modeling there.
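If you wanted to sketch that direct mapping, it could be as simple as a lookup table. The fingering chart below is entirely made up for illustration; a real instrument would use its own chart:

```cpp
// Hypothetical example: map a 4-bit fingering pattern (one bit per
// on-screen hole) to a MIDI note number. The chart values are invented
// for this sketch, not taken from any real ocarina.
int fingeringToNote(unsigned holes) {
    static const int chart[16] = {
        72, 71, 69, 67, 74, 72, 71, 69,
        76, 74, 72, 71, 79, 76, 74, 72
    };
    return chart[holes & 0x0F]; // only 4 holes in this example
}
```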
Other forms of control: The Ocarina instrument uses the tilt of the iPhone to control vibrato frequency and volume. This is usually modeled simply by a low-frequency oscillator (LFO) that's scaled, offset, and multiplied with the output of the rest of your instrument to produce a fluttering volume effect. It can also be used to control the pitch of your instrument, where it will cause the pitch to fluctuate. (This can be hard to do right if you're working with samples, but relatively easy if you're using waveforms.) Fancy MIDI wind controllers also track finger pressure and bite-down pressure, and can expose those as parameters for you to shape your sound with as well.
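The scaled-and-offset LFO idea can be sketched like this (the rate and depth parameters are arbitrary; multiply the result with your instrument's output samples):

```cpp
#include <cmath>

// Volume vibrato via a scaled-and-offset LFO. Returns a gain that
// flutters at `rate` Hz as time `t` (seconds) advances. The raw sine
// output in [-1, 1] is scaled and offset so the gain stays in
// [1 - depth, 1]; depth = 0 means no vibrato at all.
float vibratoGain(float t, float rate, float depth) {
    float lfo = std::sin(2.0f * 3.14159265f * rate * t);
    return 1.0f - 0.5f * depth + 0.5f * depth * lfo;
}
```

On an Ocarina-style instrument, the tilt reading would drive `rate` and `depth` rather than their being fixed constants.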
Breath instruments 201: There are some tricks that people pull to make sounds more expressive when they are controlled by a breath controller:
Make sure that your output is only playing one note at a time; switching to a new note automatically ends the previous note.
Make sure that the volume from the old note to the new note remains smooth if the breath pressure is constant and the key presses are connected. This allows you to distinguish between legato playing and detached playing.
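Those two tricks can be sketched as a tiny monophonic voice. The structure and names here are illustrative, not from any particular library:

```cpp
// Minimal monophonic voice sketch: one note at a time, with legato
// detection. A new note steals the voice; if the previous note was still
// sounding (i.e. the key presses were connected), we skip retriggering
// the attack so the volume stays smooth.
struct MonoVoice {
    int  currentNote = -1;    // -1 means silent
    bool attackPending = false;

    void noteOn(int note) {
        bool legato = (currentNote != -1); // old note still held?
        currentNote = note;
        attackPending = !legato; // detached playing retriggers the attack
    }

    void noteOff(int note) {
        if (note == currentNote) currentNote = -1;
    }
};
```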
Breath instruments 301: And then we get to the fun stuff: how to simulate overblowing, timbre change, partial fingering, etc. like a real wind instrument can do. There are several approaches I can think of here:
Mix in the sound of the breath input itself, perhaps filtered in some way, to impart a natural chiff or breathiness to your sound.
Use crossfading between velocity layers to transform the sound at high velocities into an entirely different sound. In other words, you literally fade out the old sound while you're fading in the new sound; they're playing the same pitch, but the new tonal characteristics of the new sound will make themselves gradually apparent.
Use a complex sound with plenty of high-frequency components. Hook up a low-pass filter whose cutoff frequency is controlled by CC2. Have the cutoff frequency increase as the value of CC2 increases. This can increase the high frequency content in an interesting way as you blow harder on the input.
The hard-core way to do this is called physical modeling. It involves creating a detailed mathematical model of the physical behavior of the instrument you're trying to emulate. Doing this can give you a quite realistic overblowing effect, and it can capture many subtle effects of how the breath input and fingering shape the sound. There's a quick overview of this approach at Princeton's Sound Lab and a sample instrument to poke at in the STK C++ library – but be warned, it's not for the mathematically faint of heart!
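Of those approaches, the CC2-driven low-pass filter is the easiest to sketch. This one-pole filter is a minimal illustration; the cutoff constants are arbitrary example values:

```cpp
#include <cmath>

// One-pole low-pass filter whose cutoff follows CC2 (0..127).
// MIN_CUTOFF and MAX_CUTOFF are made-up example values: blowing harder
// (higher CC2) raises the cutoff and lets more high frequencies through.
struct BreathFilter {
    float state = 0.0f;

    float process(float input, int cc2, float sampleRate) {
        const float MIN_CUTOFF = 200.0f;   // Hz, at zero breath
        const float MAX_CUTOFF = 8000.0f;  // Hz, at full breath
        float cutoff = MIN_CUTOFF + (MAX_CUTOFF - MIN_CUTOFF) * (cc2 / 127.0f);
        float a = 1.0f - std::exp(-2.0f * 3.14159265f * cutoff / sampleRate);
        state += a * (input - state);
        return state;
    }
};
```

Run every output sample through `process` and the tone brightens as the player blows harder, which is the effect the bullet above describes.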
First of all, I'm not quite sure what your question is.
There are quite a few kinds of sound synthesis. A few I know about are:
Frequency Modulation
Oscillation
Wave Table (sample based)
Oscillation is quite simple and probably the place to start. If you generate a square wave at 440 Hz you have the note "A", more specifically concert A (A4).
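A naive square-wave generator along those lines might look like this. It's just a sketch: real synths band-limit the waveform to avoid aliasing:

```cpp
#include <vector>
#include <cmath>

// Fill a buffer with a naive square wave: +1.0 for the first half of
// each cycle, -1.0 for the second half. At 440 Hz this is the note A
// (concert A / A4).
std::vector<float> squareWave(float freq, float sampleRate, int numSamples) {
    std::vector<float> out(numSamples);
    for (int i = 0; i < numSamples; ++i) {
        float phase = std::fmod(freq * i / sampleRate, 1.0f);
        out[i] = (phase < 0.5f) ? 1.0f : -1.0f;
    }
    return out;
}
```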
That kind of simple synthesis is really quite fun and easy to do. Maybe you can start by making a simple synth for the PC speaker. Oh, but I don't know if all OSes let you access that. LADSPA has some good examples. There are lots of libs for Linux with docs to get you started. You might want to have a look at Csound for starters: http://www.csounds.com/chapter1/index.html
I played around with it a bit and have a couple corny synths going on...