AudioSink.OnSamples() and MediaStreamSource.GetSampleAsync() not being called on time
I have a Silverlight application that uses an overridden AudioSink.OnSamples() to record sound, and MediaStreamSource.GetSampleAsync() to play sound.
For instance:
protected override void GetSampleAsync(MediaStreamType mediaStreamType)
{
    try
    {
        logger.LogSampleRequested();

        // Pull the next audio frame (or an empty stream if no controller is wired up yet).
        var memoryStream = AudioController == null ? new MemoryStream() : AudioController.GetNextAudioFrame();

        // Advance the timestamp by one frame's worth of 100-ns ticks.
        timestamp += AudioConstants.MillisecondsPerFrame * TimeSpan.TicksPerMillisecond;

        var sample = new MediaStreamSample(
            mediaStreamDescription,
            memoryStream,
            0,
            memoryStream.Length,
            timestamp, // (DateTime.Now - startTime).Ticks, // Testing shows that incrementing a long with a good-enough value is ~100x faster than calculating the ticks each time.
            emptySampleDict);
        ReportGetSampleCompleted(sample);
    }
    catch (Exception ex)
    {
        ClientLogger.LogDebugMessage(ex.ToString());
    }
}
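(As context for the timestamp increment above: a minimal sketch of the constant it relies on, assuming the 20 ms frame interval described below; the value of AudioConstants.MillisecondsPerFrame shown here is an assumption, not taken from the original code.)

// Assumed sketch of the constant used above (the real AudioConstants is the poster's own class).
static class AudioConstants
{
    public const int MillisecondsPerFrame = 20; // assumed: one audio frame every 20 ms
}
// TimeSpan.TicksPerMillisecond is 10,000 (one tick = 100 ns), so each call simply
// advances the timestamp by 20 * 10,000 = 200,000 ticks instead of reading DateTime.Now.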
Both of these methods should normally be called every 20 milliseconds, and on most machines, that's exactly what happens. However, on some machines, they get called not every 20 ms, but closer to 22-24 ms. That's troublesome, but with some appropriate buffering, the audio is still more-or-less usable. The bigger problem is that in certain scenarios, such as when the CPU is running close to its limit, the interval between calls rises to as much as 30-35 ms.
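To quantify the drift, here is a minimal sketch of how the interval between successive GetSampleAsync calls can be logged; the Stopwatch-based timing below is an illustration, not the application's actual logging:

// Sketch: measure the gap between successive GetSampleAsync calls inside the
// MediaStreamSource subclass. Requires "using System.Diagnostics;" for Stopwatch.
private readonly Stopwatch callTimer = Stopwatch.StartNew();
private long lastCallMs;

protected override void GetSampleAsync(MediaStreamType mediaStreamType)
{
    long nowMs = callTimer.ElapsedMilliseconds;
    long deltaMs = nowMs - lastCallMs;   // ideally ~20 ms; rises to 30-35 ms under heavy CPU load
    lastCallMs = nowMs;
    Debug.WriteLine("GetSampleAsync interval: " + deltaMs + " ms");

    // ... build and report the sample exactly as in the code above ...
}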
So:
(1) Has anyone else seen this?
(2) Does anyone have any suggested workarounds?
(3) Does anyone have any tips for troubleshooting this problem?
1 Answer
For what it's worth, after much investigation, the basic solution to this problem is simply not to use as much CPU. In our case, this meant keeping track of the CPU utilization, and switching to a codec that didn't use as much CPU (G711 vs. Speex) when the CPU starts consistently running at 80% or higher.
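A minimal sketch of that workaround, assuming a caller-supplied CPU reading and a simple flag for the codec choice (the class and member names here are illustrative, not the application's actual code):

// Hedged sketch: once CPU load stays at or above 80% for several consecutive
// samples, fall back from Speex to the cheaper G.711 codec.
// The CPU-reading delegate and the sampling window are assumptions.
public class AdaptiveCodecSelector
{
    private const double HighCpuThreshold = 0.80;  // "80% or higher"
    private const int SamplesRequired = 5;         // assumed window for "consistently"
    private readonly Func<double> readCpuLoad;     // returns current CPU load, 0.0 - 1.0
    private int consecutiveHighSamples;

    public bool UseG711 { get; private set; }      // false => keep using Speex

    public AdaptiveCodecSelector(Func<double> readCpuLoad)
    {
        this.readCpuLoad = readCpuLoad;
    }

    // Call periodically, e.g. once per second from a DispatcherTimer.
    public void Sample()
    {
        if (readCpuLoad() >= HighCpuThreshold)
        {
            consecutiveHighSamples++;
            if (consecutiveHighSamples >= SamplesRequired)
                UseG711 = true;   // switch the encoder to the low-CPU codec
        }
        else
        {
            consecutiveHighSamples = 0;
        }
    }
}

Switching back to Speex once the load drops again would follow the same pattern, ideally with some hysteresis so the encoder doesn't flap between codecs.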