How can I program a real-time, accurately timed audio sequencer on the iPhone?

Posted 2024-07-21 05:45:45

I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I have tried all possible audio techniques on the iPhone, from AudioServicesPlaySystemSound, AVAudioPlayer and OpenAL to AudioQueues.

In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and allows sounds to be loaded into multiple buffers and then played whenever needed. Here is the basic code:

init:

int channelGroups[1];
channelGroups[0] = 8;
soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

int i=0;
for(NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil])
{
    [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
    i++;
}

[NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];

In the initialisation I create the sound engine, load some sounds into different buffers and then establish the sequencer loop with NSTimer.

audio loop:

- (void)drumLoop:(NSTimer *)timer
{
for(int track=0; track<4; track++)
{
    unsigned char note=pattern[track][step];
    if(note)
        [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
}

if(++step>=16)
    step=0;

}

That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync.

As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source); so the problem may lie with NSTimer?

Now there are dozens of sound sequencer apps in the App Store, and they have accurate timing. E.g. "idrum" keeps a perfectly stable beat even at 180 bpm while zooming and drawing are going on. So there must be a solution.

Does anybody have any idea?

Thanks for any help in advance!

Best regards,

Walchy


Thanks for your answer. It brought me a step further, but unfortunately not to the goal. Here is what I did:

nextBeat=[[NSDate alloc] initWithTimeIntervalSinceNow:0.1];
[NSThread detachNewThreadSelector:@selector(drumLoop:) toTarget:self withObject:nil];

In the initialisation I store the time for the next beat and create a new thread.

- (void)drumLoop:(id)info
{
    [NSThread setThreadPriority:1.0];

    while(1)
    {
        for(int track=0; track<4; track++)
        {
            unsigned char note=pattern[track][step];
            if(note)
                [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
        }

        if(++step>=16)
            step=0;     

        NSDate *newNextBeat=[[NSDate alloc] initWithTimeInterval:0.1 sinceDate:nextBeat];
        [nextBeat release];
        nextBeat=newNextBeat;
        [NSThread sleepUntilDate:nextBeat];
    }
}

In the sequencer loop I set the thread priority as high as possible and enter an infinite loop. After playing the sounds I calculate the absolute time of the next beat and send the thread to sleep until then.

Again this works, and it is more stable than my attempts without NSThread, but it is still shaky if something else happens, especially GUI stuff.

Is there a way to get real-time responses with NSThread on the iPhone?

Best regards,

Walchy

べ繥欢鉨o。 2024-07-28 05:45:45

NSTimer has absolutely no guarantees on when it fires. It schedules itself for a fire time on the runloop, and when the runloop gets around to timers, it sees if any of the timers are past-due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.

Step one here is that you need to move audio processing to its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.

Keep in mind that there are many things in Cocoa that are designed only to run on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread-safety. But generally, if there isn't a lot of communication between the threads (which there shouldn't be in most cases IMO), threads are pretty easy in Cocoa. Look at NSThread.

﹉夏雨初晴づ 2024-07-28 05:45:45

I'm doing something similar using RemoteIO output. I don't rely on NSTimer; I use the timestamp provided in the render callback to calculate all of my timing. I don't know how accurate the iPhone's sample rate is, but I'm sure it's pretty close to 44100 Hz, so I just calculate when I should be loading the next beat based on the current sample number.

An example project that uses RemoteIO can be found here; have a look at the render callback's inTimeStamp argument.

EDIT: An example of this approach working (and on the App Store) can be found here.
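To make the sample-counting idea concrete, here is a minimal sketch of the scheduling logic only. This is not a real RemoteIO callback: in a real one the buffer's start position would come from inTimeStamp->mSampleTime, and the tempo constant and names here are illustrative assumptions.

```c
#include <stdint.h>

#define SAMPLE_RATE  44100.0
#define BPM          120.0
/* one 16th note: 60 / (bpm * 4) seconds, converted to samples */
#define STEP_SAMPLES ((uint64_t)(SAMPLE_RATE * 60.0 / (BPM * 4.0)))

static uint64_t next_step_sample = 0; /* absolute sample of the next step */

/* Called once per audio buffer; returns how many sequencer steps fell
 * inside this buffer. Each one would trigger that step's notes, mixed
 * in at 'offset' frames into the buffer for sub-buffer accuracy. */
int process_buffer(uint64_t buffer_start_sample, uint32_t frame_count)
{
    int steps_fired = 0;
    while (next_step_sample < buffer_start_sample + frame_count) {
        uint32_t offset = (uint32_t)(next_step_sample - buffer_start_sample);
        (void)offset; /* mix the step's sounds starting at this frame */
        next_step_sample += STEP_SAMPLES;
        steps_fired++;
    }
    return steps_fired;
}
```

Because each step is placed at a frame offset within the buffer rather than fired when the callback happens to run, the audible timing no longer depends on thread scheduling at all.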

时常饿 2024-07-28 05:45:45

I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (one buffer for read, one for write which then swap) using the AudioFileServices API. The buffers are then processed and mixed in the AudioUnit thread. The AudioUnit thread signals the bgnd thread when it should start loading the next swing buffer. All the processing was in C and used the posix thread API. All the UI stuff was in ObjC.

IMO, the AudioUnit/AudioFileServices approach affords the greatest degree of flexibility and control.

Cheers,

Ben

眼眸 2024-07-28 05:45:45

You've had a few good answers here, but I thought I'd offer some code for a solution that worked for me. When I began researching this, I actually looked for how run loops in games work and found a nice solution that has been very performant for me using mach_absolute_time.

You can read a bit about what it does here, but the short of it is that it returns time with nanosecond precision. However, the number it returns isn't quite time; it varies with the CPU you have, so you first have to create a mach_timebase_info_data_t struct and then use it to normalize the time.

// Gives a numerator and denominator that you can apply to mach_absolute_time to
// get the actual nanoseconds
mach_timebase_info_data_t info;
mach_timebase_info(&info);

uint64_t currentTime = mach_absolute_time();

currentTime *= info.numer;
currentTime /= info.denom;

And if we wanted it to tick every 16th note, you could do something like this:

uint64_t interval = (1000 * 1000 * 1000) / 16;
uint64_t nextTime = currentTime + interval;

At this point, currentTime contains some number of nanoseconds, and you want a tick every time interval nanoseconds have passed; the time of the next tick is what we store in nextTime. You can then set up a while loop, something like this:

while (_running) {
    if (currentTime >= nextTime) {
        // Do some work, play the sound files or whatever you like
        nextTime += interval;
    }

    currentTime = mach_absolute_time();
    currentTime *= info.numer;
    currentTime /= info.denom;
}

The mach_timebase_info stuff is a bit confusing, but once you get it in there, it works very well. It's been extremely performant for my apps. It's also worth noting that you won't want to run this on the main thread, so dishing it off to its own thread is wise. You could put all the above code in its own method called run, and start it with something like:

[NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];

All the code you see here is a simplification of a project I open-sourced, you can see it and run it yourself here, if that's of any help. Cheers.

流星番茄 2024-07-28 05:45:45

Really the most precise way to approach timing is to count audio samples and do whatever you need to do when a certain number of samples has passed. Your output sample rate is the basis for all things related to sound anyway so this is the master clock.

You don't have to check on each sample, doing this every couple of msec will suffice.
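As a sketch of the arithmetic behind using the sample rate as the master clock (a 44100 Hz output rate is assumed; the helper name is made up for illustration):

```c
/* Samples between sequencer steps, using the output sample rate as the
 * master clock: seconds per beat = 60 / bpm, divided into equal steps. */
long samples_per_step(double sample_rate, double bpm, int steps_per_beat)
{
    return (long)(sample_rate * 60.0 / (bpm * steps_per_beat));
}
```

Note that truncating to a whole sample drifts by a fraction of a sample per step; a real sequencer would keep the running position in a double or carry the remainder forward.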

楠木可依 2024-07-28 05:45:45

One additional thing that may improve real-time responsiveness is setting the Audio Session's kAudioSessionProperty_PreferredHardwareIOBufferDuration to a few milliseconds (such as 0.005 seconds) before making your Audio Session active. This will cause RemoteIO to request shorter callback buffers more often (on a real-time thread). Don't take any significant time in these real-time audio callbacks, or you will kill the audio thread and all audio for your app.

Just counting shorter RemoteIO callback buffers is on the order of 10X more accurate and lower latency than using an NSTimer. And counting samples within an audio callback buffer for positioning the start of your sound mix will give you sub-millisecond relative timing.
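To put a rough number on that granularity (a sketch; the helper is illustrative, and the hardware is free to round the preferred duration, so treat the result as an estimate, not a guarantee):

```c
/* Approximate frames per RemoteIO callback for a requested buffer
 * duration, rounded to the nearest whole frame. At 44100 Hz, 0.005 s
 * comes out around 220 frames, versus roughly a thousand frames for
 * the default ~23 ms buffer. */
long frames_per_callback(double sample_rate, double buffer_duration_sec)
{
    return (long)(sample_rate * buffer_duration_sec + 0.5);
}
```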

ペ泪落弦音 2024-07-28 05:45:45

Measuring the time elapsed for the "do some work" part of the loop and subtracting this duration from the interval greatly improves accuracy:

while (loop == YES)
{
    timerInterval = adjustedTimerInterval ;

    startTime = CFAbsoluteTimeGetCurrent() ;

    if (delegate != nil)
    {
        [delegate timerFired] ; // do some work
    }

    endTime = CFAbsoluteTimeGetCurrent() ;

    diffTime = endTime - startTime ; // measure how long the call took. This result has to be subtracted from the interval!

    endTime = CFAbsoluteTimeGetCurrent() + timerInterval-diffTime ;

    while (CFAbsoluteTimeGetCurrent() < endTime)
    {
        // wait until the waiting interval has elapsed
    }
}

薆情海 2024-07-28 05:45:45

If constructing your sequence ahead of time is not a limitation, you can get precise timing using an AVMutableComposition. This would play 4 sounds evenly spaced over 1 second:

// setup your composition

AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};

for (NSInteger i = 0; i < 4; i++)
{
  AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
  NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", i] withExtension:@"caf"];
  AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
  AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
  CMTimeRange timeRange = [assetTrack timeRange];

  Float64 t = i * 1.0;
  NSError *error;
  BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMake(t, 4) error:&error];
  NSAssert(success && !error, @"error creating composition");
}

AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];

// later when you want to play 

[self.avPlayer seekToTime:kCMTimeZero];
[self.avPlayer play];

Original credit for this solution: http://forum.theamazingaudioengine.com/discussion/638#Item_5

And more detail: precise timing with AVMutableComposition

拿命拼未来 2024-07-28 05:45:45

I thought a better approach to the time management would be to have a bpm setting (120, for example) and go off of that instead. Measurements in minutes and seconds are near useless when writing/making music or music applications.

If you look at any sequencing app, they all go by beats instead of time. On the opposite side of things, if you look at a waveform editor, it uses minutes and seconds.

I'm not sure of the best way to implement this code-wise by any means, but I think this approach will save you a lot of headaches down the road.
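A tiny sketch of the idea: keep all musical positions in beats and only convert to clock time at the edges (the helper names are illustrative):

```c
/* Keep musical positions in beats; convert to seconds only at the
 * point where something is actually scheduled against the clock. */
double seconds_per_beat(double bpm)
{
    return 60.0 / bpm;
}

double beat_to_seconds(double beat, double bpm)
{
    return beat * seconds_per_beat(bpm);
}
```

One payoff of this design is that a tempo change only touches the conversion at playback time; the stored beat positions of every note stay valid.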
