Trouble porting raw PCM output code from Java to the Android AudioTrack API
I'm attempting to port an application that plays chiptune (NSF, SPC, etc.) music files from Java SE to Android. The Android API seems to lack the javax multimedia classes that this application uses to output raw PCM audio. The closest analog I've found in the API is AudioTrack, so I've been wrestling with that.
However, when I try to run one of my sample music files through my port-in-progress, all I get back is static. My suspicion is that the AudioTrack I've set up is at fault. I've tried various constructors, but they all just output static in the end.
The DataLine setup in the original code is something like:
// AudioFormat( encoding, sampleRate, sampleSizeInBits, channels, frameSize, frameRate, bigEndian )
AudioFormat audioFormat = new AudioFormat( AudioFormat.Encoding.PCM_SIGNED,
                                           44100, 16, 2, 4, 44100, true );
DataLine.Info lineInfo = new DataLine.Info( SourceDataLine.class, audioFormat );
DataLine line = (SourceDataLine)AudioSystem.getLine( lineInfo );
The constructor I'm using right now is:
AudioTrack audioTrack = new AudioTrack( AudioManager.STREAM_MUSIC,
                                        44100,
                                        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                                        AudioFormat.ENCODING_PCM_16BIT,
                                        AudioTrack.getMinBufferSize( 44100,
                                                AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                                                AudioFormat.ENCODING_PCM_16BIT ),
                                        AudioTrack.MODE_STREAM );
I've replaced constants and variables in those so they read as concisely as possible, but my basic question is whether there are any obvious problems with the assumptions I made when going from one format to the other.
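For reference, the rest of the playback path is the standard MODE_STREAM pattern: start the track with play() and then keep pushing decoded PCM into write(). A stripped-down sketch of that loop (PcmSource is just an illustrative stand-in for the chiptune decoder, not part of either API):

import android.media.AudioTrack;

// Illustrative placeholder for the chiptune decoder: fills 'out' with raw PCM
// bytes and returns how many bytes were produced (0 or less when finished).
interface PcmSource {
    int render(byte[] out);
}

class StreamFeeder {
    // Drives an already-constructed MODE_STREAM AudioTrack: start it with
    // play(), then keep feeding decoded PCM to write() until the source ends.
    static void feed(AudioTrack track, PcmSource source, int bufferSize) {
        byte[] buffer = new byte[bufferSize];
        track.play();
        int length;
        while ((length = source.render(buffer)) > 0) {
            track.write(buffer, 0, length);   // blocks until the bytes are queued
        }
        track.stop();
        track.release();
    }
}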
So I had a little time to look at this further today, and I think I've nailed it down. The AudioFormat declaration in the first code sample above has the big-endian parameter set to "true", but the Android AudioTrack expects PCM data in little-endian format.
So I wrote a quick little loop to test out my hunch like so:
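In essence it's an in-place swap of each sample's two bytes right before the buffer goes to AudioTrack.write(); a minimal version of such a loop (variable names here are placeholders, not the port's actual code) would be:

// Swap the two bytes of every 16-bit sample in place, turning the decoder's
// big-endian output into the little-endian layout AudioTrack expects.
// 'buffer' is the PCM byte array and 'length' the number of valid bytes in it.
for (int i = 0; i + 1 < length; i += 2) {
    byte high = buffer[i];
    buffer[i] = buffer[i + 1];
    buffer[i + 1] = high;
}
audioTrack.write(buffer, 0, length);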
Basically, this loop flips the bytes of every (16-bit) sample in the buffer. This works great, except it's a little choppy since it's terribly inefficient. I tried using a ByteBuffer but that doesn't seem to flip the bytes in the individual samples.
I'll figure something even better out going forward, but the basic problem here is solved. Hope someone else finds this useful!
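For what it's worth, one way to avoid the per-byte shuffling in Java is to lean on the fact that AudioTrack.write() also has a short[] overload: a ByteBuffer told to read the bytes as big-endian can decode the whole buffer into 16-bit samples in one call. A sketch of that idea (same placeholder names as above, untested against the actual decoder):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Decode the big-endian byte[] into 16-bit samples and hand the short[] to
// AudioTrack; write(short[], ...) takes plain sample values, so no manual
// byte swapping is needed.
short[] samples = new short[length / 2];
ByteBuffer.wrap(buffer, 0, length)
          .order(ByteOrder.BIG_ENDIAN)   // how the decoder laid out the bytes
          .asShortBuffer()
          .get(samples);
audioTrack.write(samples, 0, samples.length);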