Mixing Audio with Java (without the Mixer API)

Published 2024-11-05 15:49:47


I am attempting to mix several different audio streams and trying to get them to play at the same time instead of one-at-a-time.

The code below plays them one-at-a-time and I cannot figure out a solution that does not use the Java Mixer API. Unfortunately, my audio card does not support synchronization using the Mixer API and I am forced to figure out a way to do it through code.

Please advise.

/////CODE IS BELOW////

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
import javax.swing.JFileChooser;

class MixerProgram {
  public static AudioFormat monoFormat;
  private JFileChooser fileChooser = new JFileChooser();
  private static File[] files;
  private int trackCount;
  private FileInputStream[] fileStreams = new FileInputStream[trackCount];
  public static AudioInputStream[] audioInputStream;
  private Thread trackThread[] = new Thread[trackCount];
  private static DataLine.Info sourceDataLineInfo = null;
  private static SourceDataLine[] sourceLine;

  public MixerProgram(String[] s)
  {
    trackCount = s.length;
    sourceLine = new SourceDataLine[trackCount];
    audioInputStream = new AudioInputStream[trackCount];
    files = new File[s.length];
  }

  public static void getFiles(String[] s)
  {
    files = new File[s.length];
    for (int i = 0; i < s.length; i++)
    {
      File f = new File(s[i]);
      if (!f.exists())
        System.err.println("Wave file not found: " + s[i]);
      files[i] = f;
    }
  }

  public static void loadAudioFiles(String[] s)
  {
    AudioInputStream in = null;
    audioInputStream = new AudioInputStream[s.length];
    sourceLine = new SourceDataLine[s.length];
    for (int i = 0; i < s.length; i++)
    {
      try
      {
        in = AudioSystem.getAudioInputStream(files[i]);
      }
      catch (Exception e)
      {
        System.err.println("Failed to assign audioInputStream");
      }
      monoFormat = in.getFormat();
      // Decode to signed 16-bit little-endian PCM at the source sample rate.
      AudioFormat decodedFormat = new AudioFormat(
          AudioFormat.Encoding.PCM_SIGNED,
          monoFormat.getSampleRate(), 16, monoFormat.getChannels(),
          monoFormat.getChannels() * 2, monoFormat.getSampleRate(),
          false);
      monoFormat = decodedFormat; // give back name
      audioInputStream[i] = AudioSystem.getAudioInputStream(decodedFormat, in);
      sourceDataLineInfo = new DataLine.Info(SourceDataLine.class, monoFormat);
      try
      {
        sourceLine[i] = (SourceDataLine) AudioSystem.getLine(sourceDataLineInfo);
        sourceLine[i].open(monoFormat);
      }
      catch (LineUnavailableException e)
      {
        System.err.println("Failed to get SourceDataLine" + e);
      }
    }
  }

  public static void playAudioMix(String[] s)
  {
    final int tracks = s.length;
    System.out.println(tracks);
    Runnable playAudioMixRunner = new Runnable()
    {
      int bufferSize = (int) monoFormat.getSampleRate() * monoFormat.getFrameSize();
      byte[] buffer = new byte[bufferSize];

      public void run()
      {
        if (tracks == 0)
          return;
        for (int i = 0; i < tracks; i++)
        {
          sourceLine[i].start();
        }
        int bytesRead = 0;
        while (bytesRead != -1)
        {
          for (int i = 0; i < tracks; i++)
          {
            try
            {
              bytesRead = audioInputStream[i].read(buffer, 0, buffer.length);
            }
            catch (IOException e)
            {
              e.printStackTrace();
            }
            if (bytesRead >= 0)
            {
              int bytesWritten = sourceLine[i].write(buffer, 0, bytesRead);
              System.out.println(bytesWritten);
            }
          }
        }
      }
    };
    Thread playThread = new Thread(playAudioMixRunner);
    playThread.start();
  }
}

Comments (1)

十雾 2024-11-12 15:49:47


The problem is that you are not adding the samples together. If we are looking at 4 tracks, 16-bit PCM data, you need to add all the different values together to "mix" them into one final output. So, from a purely-numbers point-of-view, it would look like this:

[Track1]  320  -16  2000   200  400
[Track2]   16    8   123   -87   91
[Track3]  -16  -34  -356  1200  805
[Track4] 1011 1230 -1230  -100   19
[Final!] 1331 1188   537  1213 1315

In your above code, you should only be writing a single byte array. That byte array is the final mix of all tracks added together. The problem is that you are writing a byte array for each different track (so there is no mixdown happening, as you observed).
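As a concrete sketch of that summing step, assuming the little-endian signed 16-bit PCM layout the question's code decodes to (the `PcmMixdown` class and `mix` method names are illustrative, not from the post): decode one sample from each track, add them into an `int`, clamp the sum to the 16-bit range so overflow does not wrap, and write the result into a single output buffer.

```java
// Illustrative mixdown of N equal-length, little-endian, signed
// 16-bit PCM buffers into ONE output buffer, with hard clipping.
public class PcmMixdown {

    /** Mix all tracks together; length is the byte count to process. */
    public static byte[] mix(byte[][] tracks, int length) {
        byte[] out = new byte[length];
        for (int i = 0; i < length; i += 2) {
            int sum = 0;
            for (byte[] track : tracks) {
                // Decode one little-endian signed 16-bit sample.
                sum += (short) ((track[i] & 0xFF) | (track[i + 1] << 8));
            }
            // Clamp instead of letting the sum wrap around (hard clipping).
            int clamped = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
            out[i] = (byte) (clamped & 0xFF);
            out[i + 1] = (byte) ((clamped >> 8) & 0xFF);
        }
        return out;
    }
}
```

The playback loop would then read one buffer per track per pass, call `mix`, and write the single mixed buffer to one `SourceDataLine` instead of writing each track's buffer to its own line.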

If you want to guarantee you don't have any "clipping", you should take the average of all tracks (so add all four tracks above and divide by 4). However, there are artifacts from choosing that approach (like if you have silence on three tracks and one loud track, the final output will be much quieter than the volume of the one track that is not silent). There are more complicated algorithms you can use to do the mixing, but by then you are writing your own mixer :P.
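A minimal sketch of that averaging trade-off (the `AverageMix` name is illustrative): dividing by the track count mathematically keeps the result inside the 16-bit range, but the second assertion below shows the quiet-mix artifact the answer describes.

```java
// Average-based mixing: the result can never clip, but a single loud
// track among silent ones comes out at a fraction of its volume.
public class AverageMix {
    public static short mixSample(short[] samples) {
        int sum = 0;
        for (short s : samples) {
            sum += s;
        }
        return (short) (sum / samples.length);
    }
}
```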
