Java - Broadcasting voice over Java sockets
I have created a server app that receives sound from a client, then broadcasts that sound, which is stored as bytes, back to the clients connected to the server. At the moment I am testing with only one client, and the client is receiving the voice back, but the sound stutters the whole time. Could someone please tell me what I am doing wrong?
I think I understand part of why the sound isn't playing smoothly, but I don't understand how to fix the problem.
The code is below.
The Client:
The part that sends the voice to the server:
public void captureAudio()
{
    Runnable runnable = new Runnable() {
        public void run()
        {
            first = true;
            try {
                final AudioFileFormat.Type fileType = AudioFileFormat.Type.AU;
                final AudioFormat format = getFormat();
                DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
                line = (TargetDataLine) AudioSystem.getLine(info);
                line.open(format);
                line.start();
                // One second of audio: sampleRate frames/s * frameSize bytes/frame.
                int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                byte buffer[] = new byte[bufferSize];
                out = new ByteArrayOutputStream();
                objectOutputStream = new BufferedOutputStream(socket.getOutputStream());
                running = true;
                try {
                    while (running) {
                        int count = line.read(buffer, 0, buffer.length);
                        if (count > 0) {
                            objectOutputStream.write(buffer, 0, count);
                            out.write(buffer, 0, count);
                            InputStream input = new ByteArrayInputStream(buffer);
                            final AudioInputStream ais = new AudioInputStream(input, format, buffer.length / format.getFrameSize());
                        }
                    }
                    out.close();
                    objectOutputStream.close();
                }
                catch (IOException e) {
                    System.out.println("exit");
                    System.exit(-1);
                }
            }
            catch (LineUnavailableException e) {
                System.err.println("Line Unavailable: " + e);
                e.printStackTrace();
                System.exit(-2);
            }
            catch (Exception e) {
                System.out.println("Direct Upload Error");
                e.printStackTrace();
            }
        }
    };
    Thread t = new Thread(runnable);
    t.start();
}
The part that receives the bytes of data from the server:
private void playAudio() {
    Runnable runner = new Runnable() {
        public void run() {
            try {
                InputStream in = socket.getInputStream();
                Thread playTread = new Thread();
                int count;
                byte[] buffer = new byte[100000];
                while ((count = in.read(buffer, 0, buffer.length)) != -1) {
                    PlaySentSound(buffer, playTread);
                }
            }
            catch (IOException e) {
                System.err.println("I/O problems: " + e);
                System.exit(-3);
            }
        }
    };
    Thread playThread = new Thread(runner);
    playThread.start();
} // End of playAudio method
private void PlaySentSound(final byte buffer[], Thread playThread)
{
    synchronized (playThread)
    {
        Runnable runnable = new Runnable() {
            public void run() {
                try
                {
                    InputStream input = new ByteArrayInputStream(buffer);
                    final AudioFormat format = getFormat();
                    final AudioInputStream ais = new AudioInputStream(input, format, buffer.length / format.getFrameSize());
                    DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                    sline = (SourceDataLine) AudioSystem.getLine(info);
                    sline.open(format);
                    sline.start();
                    Float audioLen = (buffer.length / format.getFrameSize()) * format.getFrameRate();
                    int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                    byte buffer2[] = new byte[bufferSize];
                    int count2;
                    ais.read(buffer2, 0, buffer2.length);
                    sline.write(buffer2, 0, buffer2.length);
                    sline.flush();
                    sline.drain();
                    sline.stop();
                    sline.close();
                    buffer2 = null;
                }
                catch (IOException e)
                {
                }
                catch (LineUnavailableException e)
                {
                }
            }
        };
        playThread = new Thread(runnable);
        playThread.start();
    }
}
2 Answers
In addition to HefferWolf's answer, I'd add that you're wasting a lot of bandwidth by sending the audio samples that you read from the microphone. You don't say if your app is restricted to a local network but if you're going over the Internet, it's common to compress/decompress the audio when sending/receiving.
A commonly used compression scheme is the SPEEX codec (a Java implementation is available here), which is relatively easy to use despite the documentation looking a bit scary if you're not familiar with audio sampling/compression.
On the client side, you can use org.xiph.speex.SpeexEncoder to do the encoding: SpeexEncoder.init() to initialise an encoder (this will have to match the sample rate, number of channels and endianness of your AudioFormat), then SpeexEncoder.processData() to encode a frame, and SpeexEncoder.getProcessedDataByteSize() and SpeexEncoder.getProcessedData() to get the encoded data.
On the receiving side, use org.xiph.speex.SpeexDecoder to decode the frames you receive: SpeexDecoder.init() to initialise the decoder using the same parameters as the encoder, SpeexDecoder.processData() to decode a frame, and SpeexDecoder.getProcessedDataByteSize() and SpeexDecoder.getProcessedData() to get the decoded data.
There's a bit more involved than I've outlined. E.g., you'll have to split the data into the correct frame size for the encoding, which depends on the sample rate, channels and bits per sample, but you'll see a dramatic drop in the number of bytes you're sending over the network.
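As a sketch of that frame-size arithmetic (the class and method names here are illustrative, not part of jspeex): narrowband Speex encodes 160-sample frames, i.e. 20 ms at 8 kHz, so the number of raw PCM bytes you must feed the encoder per frame follows directly from the channel count and sample width.

```java
public class SpeexFraming {
    // Narrowband Speex processes 160 PCM samples per frame (20 ms at 8 kHz).
    static final int SAMPLES_PER_FRAME = 160;

    /** Raw PCM bytes needed per Speex frame for a given capture format. */
    static int pcmBytesPerFrame(int channels, int bitsPerSample) {
        return SAMPLES_PER_FRAME * channels * (bitsPerSample / 8);
    }

    public static void main(String[] args) {
        // 8 kHz, mono, 16-bit PCM -> 320 bytes of raw audio per 20 ms frame
        System.out.println(pcmBytesPerFrame(1, 16));
    }
}
```

So for the common 8 kHz / mono / 16-bit capture format, you would accumulate microphone bytes until you have 320 of them before handing a frame to the encoder.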
You split the sound into chunks of up to 100,000 bytes quite arbitrarily and play these back on the client side without taking into account the sample rate and frame size you calculated on the capture side, so you end up cutting pieces of sound that belong together in two.
You need to decode the same chunks on the receiving side as you send them on the sending side. Maybe it is easier to send them using HTTP multipart (where splitting up data is quite easy) than to do it the basic way via sockets. The easiest way to do this is to use the Apache Commons HTTP client; have a look here: http://hc.apache.org/httpclient-3.x/methods/multipartpost.html
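One way to preserve chunk boundaries while staying with plain sockets is a simple length-prefixed frame protocol: write the number of bytes captured before the bytes themselves, and on the other end read exactly that many. This is a minimal sketch under that assumption (the class and method names are made up for illustration); it round-trips through in-memory streams in place of a real socket.

```java
import java.io.*;

public class Framing {
    /** Write one audio chunk preceded by its length. */
    static void writeFrame(DataOutputStream out, byte[] data, int count) throws IOException {
        out.writeInt(count);          // 4-byte length prefix
        out.write(data, 0, count);    // exactly `count` bytes of audio, never the whole buffer
        out.flush();
    }

    /** Read back exactly one chunk, as originally sent. */
    static byte[] readFrame(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] frame = new byte[len];
        in.readFully(frame);          // blocks until the whole chunk has arrived
        return frame;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip through byte-array streams standing in for a socket.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        byte[] chunk = {1, 2, 3, 4, 5};
        writeFrame(new DataOutputStream(sink), chunk, chunk.length);
        byte[] back = readFrame(new DataInputStream(new ByteArrayInputStream(sink.toByteArray())));
        System.out.println(java.util.Arrays.equals(chunk, back)); // prints "true"
    }
}
```

This also fixes a related bug in the question's code: PlaySentSound is handed the whole 100,000-byte buffer instead of only the `count` bytes actually read, so stale bytes from the previous read get played as noise.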