Android AudioRecord.read() --> buffer overflow, how to handle the buffer?


For a university project, my professor wants me to write an Android application; it would be my first one. I have some Java experience, but I am new to Android programming, so please be gentle with me.

First I create an Activity with only two buttons: one starts an AsyncTask, and one stops it, by which I mean it just sets the boolean "isRecording" to false. Everything else is handled in the AsyncTask, which is attached below as source code.
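
The Activity is essentially nothing more than this (a rough sketch, not the exact code; the layout and button id names are placeholders):

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class RecordActivity extends Activity {

    private MutantAudioRecorder recorderTask;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);                    // placeholder layout with the two buttons

        Button startButton = (Button) findViewById(R.id.start_button);
        Button stopButton  = (Button) findViewById(R.id.stop_button);

        startButton.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // create a fresh task, mark it as recording and start it
                recorderTask = new MutantAudioRecorder();
                recorderTask.setRecording(true);
                recorderTask.execute();
            }
        });

        stopButton.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // only flip the flag; doInBackground() sees it and leaves its loop
                if (recorderTask != null)
                    recorderTask.setRecording(false);
            }
        });
    }
}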

The whole thing runs quite okay, but after a while I see buffer-overflow messages in LogCat, and after that it crashes with an uncaught exception. I have figured out why it crashes, and the uncaught exception is not the point of this question.

03-07 11:34:02.474: INFO/buffer 247:(558): 40
03-07 11:34:02.484: WARN/AudioFlinger(33): RecordThread: buffer overflow
03-07 11:34:02.484: INFO/MutantAudioRecorder:doInBackground()(558): isRecoding
03-07 11:34:02.484: INFO/MutantAudioRecorder:doInBackground()(558): isRecoding
03-07 11:34:02.494: WARN/AudioFlinger(33): RecordThread: buffer overflow
03-07 11:34:02.494: INFO/buffer 248:(558): -50
  1. I write out the buffer, as you can see, but I think I made a mistake in configuring the AudioRecord correctly. Can anybody tell me why I get the buffer overflow?

  2. The next question is: how can I handle the buffer? I mean, I have the values in it and want to show them as a graphical spectrogram on the screen. Does anyone have experience with this and can give me a hint? How should I proceed?

Thanks in advance for your help.

Source code of the AsyncTask:

package nomihodai.audio;

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.os.AsyncTask;
import android.util.Log;



public class MutantAudioRecorder extends AsyncTask<Void, Void, Void> {

    private boolean isRecording = false;
    public AudioRecord audioRecord = null;
    public int mSamplesRead;
    public int buffersizebytes;
    public int buflen;
    public int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    public int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
    public static short[] buffer;
    public static final int SAMPLESPERSEC = 8000;


    @Override
    protected Void doInBackground(Void... params) {

        while(isRecording) {

            audioRecord.startRecording();
            mSamplesRead = audioRecord.read(buffer, 0, buffersizebytes);

            if(!readerT.isAlive())
                readerT.start();

            Log.i("MutantAudioRecorder:doInBackground()", "isRecoding");
        }

        readerT.stop();

        return null;
    }


    Thread readerT = new Thread() {
        public void run() {
            for(int i = 0; i < 256; i++) {
                Log.i("buffer " + i + ": ", Short.toString(buffer[i]));
            }
        }
    };


    @Override
    public void onPostExecute(Void unused) {
        Log.i("MutantAudioRecorder:onPostExecute()", "try to release the audio hardware");

        audioRecord.release();

        Log.i("MutantAudioRecorder:onPostExecute()", "released...");
    }


    public void setRecording(boolean rec) {
        this.isRecording = rec;

        Log.i("MutantAudioRecorder:setRecording()", "isRecoding set to " + rec);
    }


    @Override
    protected void onPreExecute() {

        buffersizebytes = AudioRecord.getMinBufferSize(SAMPLESPERSEC, channelConfiguration, audioEncoding);
        buffer = new short[buffersizebytes];
        buflen = buffersizebytes/2;

        Log.i("MutantAudioRecorder:onPreExecute()", "buffersizebytes: " + buffersizebytes
                                                    + ", buffer: " + buffer.length
                                                    + ", buflen: " + buflen);

        audioRecord = new AudioRecord(android.media.MediaRecorder.AudioSource.MIC,
                SAMPLESPERSEC,
                channelConfiguration,
                audioEncoding,
                buffersizebytes);

        if(audioRecord != null)
            Log.i("MutantAudioRecorder:onPreExecute()", "audiorecord object created");
        else
            Log.i("MutantAudioRecorder:onPreExecute()", "audiorecord NOT created");
    }

}



Comments (2)

只有影子陪我不离不弃 2024-10-27 09:48:36

It's probably that some live analysis process is working on the recorded audio bytes. Since the buffer size for recording is limited, once your analysis process is slower than the rate of recording, the data in the buffer gets stuck while the recorded bytes keep coming, and the buffer overflows.

Try using one thread for recording and another for processing the recorded bytes; there is open-source sample code for this approach: http://musicg.googlecode.com/files/musicg_android_demo.zip
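
A minimal sketch of that two-thread (producer/consumer) idea, assuming a BlockingQueue hands buffers from the recording thread to the processing thread; the class name ThreadedRecorder and the process() placeholder are illustrative, not taken from the linked demo:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// One thread only reads from the hardware, the other only processes, so slow
// processing can no longer stall the reads and overflow the record buffer.
public class ThreadedRecorder {

    private static final int SAMPLE_RATE = 8000;

    private volatile boolean running = false;
    private final BlockingQueue<short[]> queue = new LinkedBlockingQueue<short[]>();

    public void start() {
        running = true;

        // Producer: keeps calling read() as fast as possible and does nothing else.
        new Thread(new Runnable() {
            public void run() {
                int minBytes = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minBytes * 4);   // extra headroom
                short[] chunk = new short[minBytes / 2];                 // read() counts shorts, not bytes
                record.startRecording();
                while (running) {
                    int read = record.read(chunk, 0, chunk.length);
                    if (read > 0) {
                        short[] copy = new short[read];
                        System.arraycopy(chunk, 0, copy, 0, read);
                        queue.offer(copy);                               // hand off and keep reading
                    }
                }
                record.stop();
                record.release();
            }
        }).start();

        // Consumer: does the slow work (logging, FFT, drawing) off the recording thread.
        new Thread(new Runnable() {
            public void run() {
                while (running) {
                    try {
                        short[] samples = queue.poll(250, TimeUnit.MILLISECONDS);
                        if (samples != null) process(samples);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }).start();
    }

    public void stop() {
        running = false;
    }

    private void process(short[] samples) {
        // placeholder for whatever analysis or drawing you want to do
    }
}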

年华零落成诗 2024-10-27 09:48:36

As we discussed in the chat room, decoding the audio data and displaying it on the screen should be straightforward. You mentioned that the audio buffer has 8000 samples per second, each sample is 16 bits, and it's mono audio.

Displaying this should be simple. Treat each sample as a vertical offset in your view. You need to scale the range -32k to +32k to the vertical height of your view. Starting at the left edge of the view, draw one sample per column. When you reach the right edge, wrap around again (erasing the previous line as necessary).

This ends up drawing each sample as a single pixel, which may not look very nice. You can also draw a line between adjacent samples. You can play around with line widths, colors and so on to get the best effect.

One last note: you'll be drawing 8000 samples per second, plus more drawing to blank out the previous ones. You may need to take some shortcuts to make sure the frame rate can keep up with the audio, for example by skipping samples.
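
A minimal sketch of that drawing approach, assuming a custom View that redraws the most recent buffer; the class name WaveformView and the decimation step are illustrative:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// Draws one buffer of 16-bit mono samples as a waveform: the -32768..32767 range
// is scaled to the view height and one (decimated) sample is drawn per column.
public class WaveformView extends View {

    private final Paint paint = new Paint();
    private short[] samples = new short[0];

    public WaveformView(Context context) {
        super(context);
        paint.setColor(Color.GREEN);
        paint.setStrokeWidth(1f);
    }

    // Call with the latest buffer; triggers a redraw.
    public void setSamples(short[] newSamples) {
        samples = newSamples;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        int w = getWidth();
        int h = getHeight();
        if (w == 0 || samples.length < 2) return;

        canvas.drawColor(Color.BLACK);                    // blank out the previous frame

        // Skip samples so the whole buffer fits into the available columns.
        int step = Math.max(1, samples.length / w);
        float prevX = 0;
        float prevY = toY(samples[0], h);

        for (int col = 1, i = step; col < w && i < samples.length; col++, i += step) {
            float y = toY(samples[i], h);
            canvas.drawLine(prevX, prevY, col, y, paint); // line between adjacent samples
            prevX = col;
            prevY = y;
        }
    }

    // Map a 16-bit sample to a vertical pixel offset (0 at the top of the view).
    private float toY(short sample, int height) {
        return height / 2f - (sample / 32768f) * (height / 2f);
    }
}

From the recording/processing thread you would hand each new buffer to the view, e.g. via View.post(), so that setSamples() runs on the UI thread.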
