Streaming voice between Android phones over WiFi
I'm trying to stream audio from the mic of one Android device to another over WiFi.
After looking at some examples, I made two applications with a single activity in each: one to capture and send audio, and the other to receive it.
I've used the AudioRecord and AudioTrack classes to capture and play. However, I just hear some crackling sound (which has now stopped after I made some changes, though I reverted them).
The activity to send voice.
public class VoiceSenderActivity extends Activity {

    private EditText target;
    private TextView streamingLabel;
    private Button startButton, stopButton;

    public byte[] buffer;
    public static DatagramSocket socket;
    private int port = 50005; //which port??

    AudioRecord recorder;

    //Audio Configuration.
    private int sampleRate = 8000;      //How much will be ideal?
    private int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    private int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

    private boolean status = true;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        target = (EditText) findViewById(R.id.target_IP);
        streamingLabel = (TextView) findViewById(R.id.streaming_label);
        startButton = (Button) findViewById(R.id.start_button);
        stopButton = (Button) findViewById(R.id.stop_button);

        streamingLabel.setText("Press Start! to begin");

        startButton.setOnClickListener(startListener);
        stopButton.setOnClickListener(stopListener);
    }

    private final OnClickListener stopListener = new OnClickListener() {
        @Override
        public void onClick(View arg0) {
            status = false;
            recorder.release();
            Log.d("VS", "Recorder released");
        }
    };

    private final OnClickListener startListener = new OnClickListener() {
        @Override
        public void onClick(View arg0) {
            status = true;
            startStreaming();
        }
    };

    public void startStreaming() {
        Thread streamThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    int minBufSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);

                    DatagramSocket socket = new DatagramSocket();
                    Log.d("VS", "Socket Created");

                    byte[] buffer = new byte[minBufSize];
                    Log.d("VS", "Buffer created of size " + minBufSize);

                    DatagramPacket packet;
                    final InetAddress destination = InetAddress.getByName(target.getText().toString());
                    Log.d("VS", "Address retrieved");

                    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate, channelConfig, audioFormat, minBufSize);
                    Log.d("VS", "Recorder initialized");

                    recorder.startRecording();

                    while (status == true) {
                        //reading data from MIC into buffer
                        minBufSize = recorder.read(buffer, 0, buffer.length);

                        //putting buffer in the packet
                        packet = new DatagramPacket(buffer, buffer.length, destination, port);

                        socket.send(packet);
                    }
                } catch (UnknownHostException e) {
                    Log.e("VS", "UnknownHostException");
                } catch (IOException e) {
                    Log.e("VS", "IOException");
                }
            }
        });
        streamThread.start();
    }
}
The activity to receive voice
public class VoiceReceiverActivity extends Activity {

    private Button receiveButton, stopButton;

    public static DatagramSocket socket;
    private AudioTrack speaker;

    //Audio Configuration.
    private int sampleRate = 8000;      //How much will be ideal?
    private int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    private int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

    private boolean status = true;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        receiveButton = (Button) findViewById(R.id.receive_button);
        stopButton = (Button) findViewById(R.id.stop_button);
        findViewById(R.id.receive_label);

        receiveButton.setOnClickListener(receiveListener);
        stopButton.setOnClickListener(stopListener);
    }

    private final OnClickListener stopListener = new OnClickListener() {
        @Override
        public void onClick(View v) {
            status = false;
            speaker.release();
            Log.d("VR", "Speaker released");
        }
    };

    private final OnClickListener receiveListener = new OnClickListener() {
        @Override
        public void onClick(View arg0) {
            status = true;
            startReceiving();
        }
    };

    public void startReceiving() {
        Thread receiveThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    DatagramSocket socket = new DatagramSocket(50005);
                    Log.d("VR", "Socket Created");

                    byte[] buffer = new byte[256];

                    //minimum buffer size. need to be careful. might cause problems. try setting manually if any problems faced
                    int minBufSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);

                    speaker = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig, audioFormat, minBufSize, AudioTrack.MODE_STREAM);
                    speaker.play();

                    while (status == true) {
                        try {
                            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                            socket.receive(packet);
                            Log.d("VR", "Packet Received");

                            //reading content from packet
                            buffer = packet.getData();
                            Log.d("VR", "Packet data read into buffer");

                            //sending data to the Audiotrack obj i.e. speaker
                            speaker.write(buffer, 0, minBufSize);
                            Log.d("VR", "Writing buffer content to speaker");
                        } catch (IOException e) {
                            Log.e("VR", "IOException");
                        }
                    }
                } catch (SocketException e) {
                    Log.e("VR", "SocketException");
                }
            }
        });
        receiveThread.start();
    }
}
I used Wireshark to check whether the packets are being sent, and I can see them. The source, however, is the MAC address of the sending device, and the destination is also something like a physical address. Not sure whether this is relevant though.
So what's the problem?
3 Answers
Hey, there is an open-source library called "libstreaming" that is used for streaming voice/video over the network using WiFi. Have a look at it:
https://github.com/fyhertz/libstreaming
There are also some examples provided; have a look at them:
https://github.com/fyhertz/libstreaming-examples
I have used this library to stream RTSP audio over the network; I hope it may be useful.
I'd try dividing the problem into three parts.
Part 1
Ensure the socket connection is working fine
by commenting out everything related to audio.
Part 2
Send just an arbitrary text message ("Hello WiFi") from the sender, then receive and print it in the receiver-side application.
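A minimal sketch of that sanity check, using only plain java.net classes (loopback and port 50005 as in the question; in a real two-device test the receiver would run first on the other phone):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpTextTest {

    // Send one text datagram and read it back; returns the received text.
    public static String exchange() throws Exception {
        // Receiver side: bind the same port the audio code uses.
        DatagramSocket receiver = new DatagramSocket(50005);

        // Sender side: one arbitrary text message.
        DatagramSocket sender = new DatagramSocket();
        byte[] msg = "Hello WiFi".getBytes(StandardCharsets.UTF_8);
        sender.send(new DatagramPacket(msg, msg.length,
                InetAddress.getByName("127.0.0.1"), 50005));

        // The receive buffer is larger than the payload, so decode with
        // packet.getLength() -- not buffer.length.
        byte[] buf = new byte[256];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        receiver.receive(packet);
        String text = new String(packet.getData(), 0, packet.getLength(),
                StandardCharsets.UTF_8);

        sender.close();
        receiver.close();
        return text;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange()); // prints: Hello WiFi
    }
}
```

If this round-trip works, the socket path is fine and the problem is on the audio side. Note the use of `packet.getLength()` when decoding; the question's receiver instead writes a fixed number of bytes regardless of how many actually arrived.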
Part 3
Is the recorder actually working?
Try testing your recording code in a separate project to see whether it works properly.
Use this code to capture the microphone input and play it back.
I once worked on a similar project, and to test it, what I did was write the recorded audio data to a file on the SD card after recording
(it would be raw audio, so most music players won't be able to play it... mPlayer should play it, I guess).
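The file-dump part of that debugging technique can be sketched in plain Java; the `AudioRecord.read()` loop itself is Android-only, so placeholder bytes stand in for the recorder output here, and the path and helper name are mine:

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class RawPcmDump {

    // Append one chunk of recorded PCM to a raw (headerless) file.
    // On Android you would call this with the bytes and the count
    // that AudioRecord.read() returned for each chunk.
    public static void writeChunk(String path, byte[] pcm, int length,
                                  boolean append) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path, append)) {
            out.write(pcm, 0, length);
        }
    }

    public static void main(String[] args) throws IOException {
        // Fake "recorded" data standing in for AudioRecord output.
        byte[] chunk = new byte[]{1, 2, 3, 4};
        writeChunk("test.pcm", chunk, chunk.length, false); // first read()
        writeChunk("test.pcm", chunk, chunk.length, true);  // second read()
        // The file now holds raw PCM with no header; a tool that imports
        // raw data can play it back at 8000 Hz, 16-bit mono.
    }
}
```

Listening to the dumped file tells you whether the capture side is producing clean audio before any networking is involved.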
You need to carefully consider your use of UDP (the DatagramSocket class) as your network protocol.
UDP is a lightweight protocol that does not guarantee to maintain the ordering of received packets. This may be part of the reason why the audio is garbled. A packet received out of order will result in a packet's worth of audio being played out of order. At the boundary of these out-of-sequence packets you will hear clicks/pops where the audio samples are effectively corrupt. In addition to this, UDP packets aren't guaranteed to be delivered successfully. Any dropped packets will obviously add to whatever garbling or distortion is heard.
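If you do stay with UDP, one common mitigation is to prefix each datagram with a sequence number so the receiver can at least detect out-of-order or missing packets and drop them instead of playing them. A sketch of such framing (the helper names are mine, not from the question's code):

```java
import java.nio.ByteBuffer;

public class SeqFraming {

    // Prepend a 4-byte big-endian sequence number to an audio payload.
    public static byte[] frame(int seq, byte[] audio, int length) {
        ByteBuffer buf = ByteBuffer.allocate(4 + length);
        buf.putInt(seq);
        buf.put(audio, 0, length);
        return buf.array();
    }

    // Read the sequence number back out of a received datagram.
    public static int sequenceOf(byte[] datagram) {
        return ByteBuffer.wrap(datagram).getInt();
    }

    // Extract the audio payload; datagramLength is what
    // DatagramPacket.getLength() reported, not the buffer size.
    public static byte[] payloadOf(byte[] datagram, int datagramLength) {
        byte[] audio = new byte[datagramLength - 4];
        System.arraycopy(datagram, 4, audio, 0, audio.length);
        return audio;
    }

    public static void main(String[] args) {
        byte[] pcm = {10, 20, 30};
        byte[] wire = frame(7, pcm, pcm.length);
        System.out.println(sequenceOf(wire));                    // 7
        System.out.println(payloadOf(wire, wire.length).length); // 3
    }
}
```

The receiver would compare each packet's sequence number with the last one played and skip anything that arrives late, trading occasional small gaps for the clicks/pops described above.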
TCP (the Socket class) would be a better option for optimum audio quality. TCP is a more robust protocol that maintains the order in which packets are received. It also has built-in error checking and will resend any dropped packets. However, because of this additional functionality, TCP has a higher network overhead.
I started this response by saying that you need to carefully consider which protocol you use. This is because there is a case for using either, depending on what is important to you.
If you want ultra-low-latency playback but are happy to sacrifice audio quality, then UDP will work. However, it will take some experimentation to find the best buffer and sample sizes.
If you want the best possible audio reproduction with zero distortion but are happy to introduce slightly more latency, then TCP is the route to go.
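A minimal sketch of the TCP variant, again with plain java.net classes over loopback (the port is arbitrary; on Android the two halves would run on background threads on separate devices, with the bytes coming from AudioRecord.read() and going to AudioTrack.write()):

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpAudioSketch {

    // Stream one chunk of (fake) PCM over TCP and count the bytes received.
    public static int run() throws Exception {
        ServerSocket server = new ServerSocket(50006);

        // Sender side: connect and write the audio bytes.
        // On Android this would write what AudioRecord.read() returned.
        Socket sender = new Socket("127.0.0.1", 50006);
        byte[] pcm = new byte[320]; // e.g. 20 ms of 8 kHz, 16-bit mono
        sender.getOutputStream().write(pcm, 0, pcm.length);
        sender.shutdownOutput(); // signal end of stream

        // Receiver side: TCP preserves ordering, so just read until EOF.
        // On a device each chunk would go to speaker.write(buf, 0, n).
        Socket conn = server.accept();
        InputStream in = conn.getInputStream();
        byte[] buf = new byte[320];
        int total = 0, n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }

        conn.close();
        sender.close();
        server.close();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("received " + run() + " bytes");
    }
}
```

Because the receiver writes exactly `n` bytes per read rather than a fixed count, it never plays stale buffer contents, which is one of the issues in the question's UDP receiver as well.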
I can't say how much more latency TCP would add. But it is possible that it could be implemented without impacting the user experience. The only way to find out is to try it and see.