How do I write a web-based music visualizer?

Posted 2024-09-06 05:18:43

I'm trying to find the best approach to build a music visualizer that runs in a web browser. Unity is an option, but I'd need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL doesn't seem ready yet. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?


Comments (4)

夏日浅笑〃 2024-09-13 05:18:43

Making something audio reactive is pretty simple. Here's an open-source site with lots of audio-reactive examples.

As for how to do it, you basically use the Web Audio API to stream the music and use its AnalyserNode to get audio data out.

"use strict";
const ctx = document.querySelector("canvas").getContext("2d");

ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);  

function start() {
  ctx.canvas.removeEventListener('click', start);
  // make a Web Audio Context
  const context = new AudioContext();
  const analyser = context.createAnalyser();

  // Make a buffer to receive the audio data
  const numPoints = analyser.frequencyBinCount;
  const audioDataArray = new Uint8Array(numPoints);

  function render() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

    // get the current audio data
    analyser.getByteFrequencyData(audioDataArray);

    const width = ctx.canvas.width;
    const height = ctx.canvas.height;
    const size = 5;

    // draw a point every size pixels
    for (let x = 0; x < width; x += size) {
      // compute the audio data for this point
      const ndx = x * numPoints / width | 0;
      // get the audio data and make it go from 0 to 1
      const audioValue = audioDataArray[ndx] / 255;
      // draw a rect size by size big
      const y = audioValue * height;
      ctx.fillRect(x, y, size, size);
    }
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);

  // Make an audio element
  const audio = new Audio();
  audio.loop = true;
  audio.autoplay = true;

  // this line is only needed if the music you are trying to play is on a
  // different server than the page trying to play it.
  // It asks the server for permission to use the music. If the server says "no"
  // then you will not be able to play the music
  // Note if you are using music from the same domain 
  // **YOU MUST REMOVE THIS LINE** or your server must give permission.
  audio.crossOrigin = "anonymous";

  // call `handleCanplay` when the music can be played
  audio.addEventListener('canplay', handleCanplay);
  audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
  audio.load();


  function handleCanplay() {
    // connect the audio element to the analyser node and the analyser node
    // to the main Web Audio context
    const source = context.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(context.destination);
  }
}
CSS:

canvas { border: 1px solid black; display: block; }

HTML:

<canvas></canvas>

Then it's just up to you to draw something creative.
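One way to keep the creative part manageable is to separate the data mapping from the drawing. `computeBars` below is a hypothetical helper (not part of the snippet above) that extracts the same ndx/audioValue math into a pure function you can test without a canvas:

```javascript
// Map analyser byte data (0-255 per frequency bin) to bar rectangles
// for a canvas of the given width/height, one bar every `size` pixels.
function computeBars(audioDataArray, width, height, size) {
  const bars = [];
  const numPoints = audioDataArray.length;
  for (let x = 0; x < width; x += size) {
    // pick the frequency bin for this x position
    const ndx = x * numPoints / width | 0;
    // normalize 0..255 to 0..1
    const audioValue = audioDataArray[ndx] / 255;
    bars.push({ x, y: audioValue * height, w: size, h: size });
  }
  return bars;
}

// Example: fake a full-scale spectrum and compute bar positions
const fake = new Uint8Array(32).fill(255);
const bars = computeBars(fake, 100, 50, 5);
console.log(bars.length);  // 20 bars for a 100px-wide canvas
console.log(bars[0].y);    // 50: a full-scale bin maps to the full height
```

Each frame you would pass the freshly filled `audioDataArray` in and draw each returned rect with `fillRect`; the drawing code stays trivial while you experiment with the mapping.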

Note some troubles you'll likely run into:

  1. At this point in time (2017/1/3) neither Android Chrome nor iOS Safari supports analysing streaming audio data. Instead you have to load the entire song. Here's a library that tries to abstract that a little.

  2. On mobile you cannot automatically play audio. You must start the audio from inside an input event based on user input, like 'click' or 'touchstart'.

  3. As pointed out in the sample, you can only analyse audio if the source is either from the same domain OR you ask for CORS permission and the server grants it. AFAIK only Soundcloud gives permission, and it's on a per-song basis: whether audio analysis is allowed for a particular song is up to the individual artist's settings.

    To try to explain this part:

    The default is you have permission to access all data from the same domain but no permission from other domains.

    When you add

    audio.crossOrigin = "anonymous";
    

    That basically says "ask the server for permission on behalf of user 'anonymous'". The server can grant permission or not; it's up to the server. This includes servers on the same domain, which means that if you're going to request a song from the same domain you need to either (a) remove the line above or (b) configure your server to grant CORS permission. Most servers do not grant CORS permission by default, so if you add that line the attempt to analyse the audio will fail even when the server is on the same domain, unless it grants CORS permission.
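If you do control the server, granting that permission is usually a one-line header. As a sketch, here's what it might look like in nginx (the `/music/` path is hypothetical; the exact config depends on your server):

```nginx
# Hypothetical location block for the audio files you want analysable.
# The header lets any origin read (and therefore analyse) these files.
location /music/ {
    add_header Access-Control-Allow-Origin *;
}
```

With a header like this in place, leaving `audio.crossOrigin = "anonymous"` set works for both same-domain and cross-domain requests.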


music: DOCTOR VOX - Level Up

我的奇迹 2024-09-13 05:18:43

By WebGL being "not ready", I'm assuming you're referring to browser penetration (it's only supported in WebKit and Firefox at the moment).

Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. (The videos of the visualisations that were embedded here have not survived.)

橘味果▽酱 2024-09-13 05:18:43

I used SoundManager2 to pull the waveform data from the mp3 file. That feature requires Flash 9, so it might not be the best approach.

My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/

and WebGL:
http://www.momentumracer.com/electricwebgl/

Sources:
https://github.com/pepez/Electric-Canvas

葬﹪忆之殇 2024-09-13 05:18:43

Depending on complexity, you might be interested in trying out Processing (http://www.processing.org); it has really easy tools to make web-based apps, and it has tools to get the FFT and waveform of an audio file.
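For intuition about what that FFT gives you, here's a naive DFT sketch in plain JavaScript (O(N²), for illustration only, not something you'd run per frame): it correlates the signal with a sine/cosine pair at each frequency bin and takes the magnitude.

```javascript
// Naive DFT: for each bin k up to N/2, accumulate the real and
// imaginary correlation with the signal and take the magnitude.
function dftMagnitudes(samples) {
  const N = samples.length;
  const mags = [];
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(angle);
      im -= samples[n] * Math.sin(angle);
    }
    mags.push(Math.hypot(re, im));
  }
  return mags;
}

// A pure tone at bin 8 of a 64-sample window...
const N = 64;
const tone = Array.from({ length: N }, (_, n) => Math.sin(2 * Math.PI * 8 * n / N));
const mags = dftMagnitudes(tone);
const peak = mags.indexOf(Math.max(...mags));
console.log(peak);  // 8: the spectrum peaks at the tone's bin
```

A real FFT (as in Processing's audio libraries or the Web Audio AnalyserNode) computes the same magnitudes in O(N log N); a visualizer then just maps those per-bin magnitudes to geometry.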
