Beam
Expressive WebGL
Introduction
Beam is a tiny (<10KB) WebGL library. It's NOT a renderer or 3D engine by itself. Instead, Beam provides some essential abstractions, allowing you to build WebGL infrastructures within a very small and easy-to-use API surface.
The WebGL API is known to be verbose, with a steep learning curve. Just like how jQuery simplifies DOM operations, Beam wraps WebGL in a succinct way, making it easier to build WebGL renderers with clean and terse code.
How is this possible? Instead of just reorganizing boilerplate code, Beam defines some essential concepts on top of WebGL that are much easier to understand and use. These highly simplified concepts are:
- Shaders - Objects containing graphics algorithms. In contrast to JavaScript, which runs on the CPU in a single thread, shaders run in parallel on the GPU, computing colors for millions of pixels every frame.
- Resources - Objects containing graphics data. Just like how JSON works in your web app, resources are the data passed to shaders, mainly triangle arrays (aka buffers), image textures, and global options.
- Draw - Requests for running shaders with resources. To render a scene, different shaders and resources may be used. You are free to combine them, firing multiple draw calls that eventually compose a frame. In fact, each draw call runs the graphics render pipeline once.
- Commands - Setups before firing a draw call. WebGL is very stateful: before every draw call, WebGL states must be carefully configured. These changes are expressed via commands. Beam relies on conventions that greatly reduce manual command maintenance, and you can also define and run custom commands easily.
Since commands can be mostly automated, there are only 3 concepts for beginners to learn, represented by 3 core APIs in Beam: `beam.shader`, `beam.resource` and `beam.draw`. Conceptually, with only these 3 methods you can build a WebGL app.
Installation
npm install beam-gl
Or you can clone this repository and start a static HTTP server to try it out. Beam runs directly in modern browsers, with no build or compile step.
Hello World with Beam
Now we are going to write the simplest possible WebGL app with Beam, one that renders a colorful triangle:
Here is the code snippet:
import { Beam, ResourceTypes } from 'beam-gl'
import { MyShader } from './my-shader.js'
const { VertexBuffers, IndexBuffer } = ResourceTypes
// Remember to create a `<canvas>` element in HTML
const canvas = document.querySelector('canvas')
// Init Beam instance
const beam = new Beam(canvas)
// Init shader for triangle rendering
const shader = beam.shader(MyShader)
// Init vertex buffer resource
const vertexBuffers = beam.resource(VertexBuffers, {
position: [
-1, -1, 0, // vertex 0, bottom left
0, 1, 0, // vertex 1, top middle
1, -1, 0 // vertex 2, bottom right
],
color: [
1, 0, 0, // vertex 0, red
0, 1, 0, // vertex 1, green
0, 0, 1 // vertex 2, blue
]
})
// Init index buffer resource with 3 indices
const indexBuffer = beam.resource(IndexBuffer, {
array: [0, 1, 2]
})
// Clear the screen, then draw with shader and resources
beam
.clear()
.draw(shader, vertexBuffers, indexBuffer)
Now let's take a look at some pieces of the code in this example. First we need to init a Beam instance with a canvas:
const canvas = document.querySelector('canvas')
const beam = new Beam(canvas)
Then we can init a shader with `beam.shader`. The content of `MyShader` will be explained later:
const shader = beam.shader(MyShader)
For the triangle, use the `beam.resource` API to create its data, which is contained in different buffers. Beam uses the `VertexBuffers` type to represent them. The triangle has 3 vertices, and each vertex has two attributes: position and color. Every vertex attribute has its own vertex buffer, which can be declared as a flat, plain JavaScript array (or TypedArray). Beam will upload this data to the GPU behind the scenes:
const vertexBuffers = beam.resource(VertexBuffers, {
position: [
-1, -1, 0, // vertex 0, bottom left
0, 1, 0, // vertex 1, top middle
1, -1, 0 // vertex 2, bottom right
],
color: [
1, 0, 0, // vertex 0, red
0, 1, 0, // vertex 1, green
0, 0, 1 // vertex 2, blue
]
})
Vertex buffers usually contain a compact dataset. We can define a subset or superset of it to render, reducing redundancy and reusing more vertices. To do that we introduce another type of buffer called `IndexBuffer`, which contains indices of the vertices in `vertexBuffers`:
const indexBuffer = beam.resource(IndexBuffer, {
array: [0, 1, 2]
})
In this example, each index refers to one vertex, which occupies 3 numbers in each vertex buffer.
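To see why indices help, here is a plain JavaScript sketch (independent of Beam) of a quad built from two triangles that share two vertices:

```javascript
// A quad needs two triangles. Without indices we would repeat
// the two shared corner vertices; with indices every vertex is
// stored once and reused by number.
const positions = [
  -1, -1, 0, // vertex 0, bottom left
   1, -1, 0, // vertex 1, bottom right
   1,  1, 0, // vertex 2, top right
  -1,  1, 0  // vertex 3, top left
]
// Two triangles: (0, 1, 2) and (0, 2, 3) share vertices 0 and 2
const indices = [0, 1, 2, 0, 2, 3]

// Each index points at one vertex, i.e. 3 numbers in `positions`
const vertexAt = i => positions.slice(i * 3, i * 3 + 3)

console.log(vertexAt(indices[3])) // [-1, -1, 0], vertex 0 reused
```

So 6 indices over 4 stored vertices replace 6 fully repeated vertices.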
Finally we can render with WebGL. `beam.clear` clears the frame, then the chainable `beam.draw` draws with one shader object and multiple resource objects:
beam
.clear()
.draw(shader, vertexBuffers, indexBuffer)
The `beam.draw` API is flexible: if you have multiple shaders and resources, just combine them to make draw calls as you wish, composing a complex scene:
beam
.draw(shaderX, ...resourcesA)
.draw(shaderY, ...resourcesB)
.draw(shaderZ, ...resourcesC)
There's one missing piece: how do we define the triangle's render algorithm? This is done in the `MyShader` variable, a schema for the shader object, which looks like this:
import { SchemaTypes } from 'beam-gl'
const vertexShader = `
attribute vec4 position;
attribute vec4 color;
varying highp vec4 vColor;
void main() {
vColor = color;
gl_Position = position;
}
`
const fragmentShader = `
varying highp vec4 vColor;
void main() {
gl_FragColor = vColor;
}
`
const { vec4 } = SchemaTypes
export const MyShader = {
vs: vertexShader,
fs: fragmentShader,
buffers: {
position: { type: vec4, n: 3 },
color: { type: vec4, n: 3 }
}
}
This shows a simple shader schema in Beam, made of a string for the vertex shader, a string for the fragment shader, and other schema fields. Briefly, the vertex shader is executed once per vertex and the fragment shader once per pixel; both are written in the GLSL shading language. In WebGL, the vertex shader always writes to `gl_Position` as its output, and the fragment shader writes to `gl_FragColor` for the final pixel color. The `vColor` varying variable is interpolated and passed from the vertex shader to the fragment shader, and the `position` and `color` vertex attribute variables correspond to the buffer keys in `vertexBuffers`. That's a convention to reduce boilerplate.
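This matching-by-name idea can be sketched in plain JavaScript. The snippet below is only an illustration of the convention, not Beam's actual implementation:

```javascript
// Illustration of the name-matching convention: each attribute
// declared in the schema's `buffers` field is looked up by the
// same key in the data passed to beam.resource(VertexBuffers, ...).
const schemaBuffers = {
  position: { n: 3 },
  color: { n: 3 }
}
const bufferData = {
  position: [-1, -1, 0, 0, 1, 0, 1, -1, 0],
  color: [1, 0, 0, 0, 1, 0, 0, 0, 1]
}

const matchBuffers = (schema, data) =>
  Object.keys(schema).map(name => ({
    name,
    found: name in data, // does the data provide this attribute?
    array: data[name]
  }))

console.log(matchBuffers(schemaBuffers, bufferData).every(b => b.found)) // true
```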
Build Something Bigger
Now we know how to render a triangle with Beam. What's next? Here is a very brief guide showing how we can use Beam to handle more complex WebGL scenarios:
Render 3D Graphics
The "Hello World" triangle we have drawn is just a 2D shape. How about boxes, balls, and other complex 3D models? They just take a few more vertices and some shader setup. Let's see how to render the following 3D ball in Beam:
3D graphics are composed of triangles, which are in turn composed of vertices. In the triangle example, every vertex had two attributes: position and color. For a basic 3D ball, we instead need position and normal. The normal attribute contains the vector perpendicular to the surface at that position, which is critical for computing lighting.
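For a ball centered at the origin there is a handy property: the normal at a surface point equals that point's position vector, normalized. Here is a plain JavaScript sketch (the example utils may generate normals differently):

```javascript
// For a sphere centered at the origin, the outward normal at a
// surface point is that point's position vector, normalized.
const normalize = ([x, y, z]) => {
  const len = Math.hypot(x, y, z)
  return [x / len, y / len, z / len]
}

// A vertex on a sphere of radius 2
const position = [0, 0, 2]
const normal = normalize(position)
console.log(normal) // [0, 0, 1]
```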
Moreover, to transform a vertex from 3D space to 2D screen coordinates, we need a "camera", which is composed of matrices. For each vertex passed to the vertex shader, we should apply the same transform matrices. These matrix variables are "global" to all shaders running in parallel, which WebGL calls uniforms. `Uniforms` is also a resource type in Beam, containing multiple global options for shaders, like camera positions, line colors, effect strength factors and so on.
So to render the simplest ball, we can reuse exactly the same fragment shader as in the triangle example, and only update the vertex shader string as follows:
attribute vec4 position;
attribute vec4 normal;
// Transform matrices
uniform mat4 modelMat;
uniform mat4 viewMat;
uniform mat4 projectionMat;
varying highp vec4 vColor;
void main() {
gl_Position = projectionMat * viewMat * modelMat * position;
vColor = normal; // visualize normal vector
}
Since we have added uniform variables to the shader, the schema should also be updated with a new `uniforms` field:
const identityMat = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
const { vec4, mat4 } = SchemaTypes
export const MyShader = {
vs: vertexShader,
fs: fragmentShader,
buffers: {
position: { type: vec4, n: 3 },
normal: { type: vec4, n: 3 }
},
uniforms: {
// The default field is handy for reducing boilerplate
modelMat: { type: mat4, default: identityMat },
viewMat: { type: mat4 },
projectionMat: { type: mat4 }
}
}
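The `identityMat` above is a 4x4 identity matrix flattened in column-major order, WebGL's convention. It is a safe default for `modelMat` because multiplying it with any vec4 leaves the vector unchanged, as this small sketch verifies:

```javascript
// Column-major 4x4 identity matrix, as WebGL expects
const identityMat = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]

// Multiply a column-major mat4 with a vec4
const transform = (m, v) => {
  const out = [0, 0, 0, 0]
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col]
    }
  }
  return out
}

console.log(transform(identityMat, [3, -2, 5, 1])) // [3, -2, 5, 1]
```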
Then we can still write expressive WebGL render code:
const beam = new Beam(canvas)
const shader = beam.shader(NormalColor)
const cameraMats = createCamera({ eye: [0, 10, 10] })
const ball = createBall()
beam.clear().draw(
shader,
beam.resource(VertexBuffers, ball.vertex),
beam.resource(IndexBuffer, ball.index),
beam.resource(Uniforms, cameraMats)
)
And that's all. See the Basic Ball page for a working example.
Beam is a WebGL library without 3D assumptions, so graphics objects and matrix algorithms are not part of it. For convenience, some related utils ship with the Beam examples, but don't expect them to be too rigorous.
Animate Graphics
How do we move a graphics object in WebGL? You could update the buffers with new positions, but that can be quite slow. Another solution is to update only the transform matrices we mentioned above, which are uniforms: very small pieces of data.
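For instance, scaling an object every frame only means rebuilding a 16-element model matrix, instead of re-uploading every vertex position. A plain JavaScript sketch, independent of Beam:

```javascript
// Build a column-major mat4 that scales uniformly by s.
// Animating via such a matrix touches 16 numbers per frame,
// instead of re-uploading thousands of vertex positions.
const scaleMat = s => [
  s, 0, 0, 0,
  0, s, 0, 0,
  0, 0, s, 0,
  0, 0, 0, 1
]

let t = 0
const nextModelMat = () => {
  t += 0.02
  return scaleMat(1 + 0.5 * Math.sin(t)) // oscillate between 0.5x and 1.5x
}
console.log(nextModelMat()[0]) // close to 1.01 on the first frame
```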
With the `requestAnimationFrame` API, we can easily zoom the ball we rendered before:
const beam = new Beam(canvas)
const shader = beam.shader(NormalColor)
const ball = createBall()
const buffers = [
beam.resource(VertexBuffers, ball.vertex),
beam.resource(IndexBuffer, ball.index)
]
let i = 0; let d = 10
const cameraMats = createCamera({ eye: [0, d, d] })
const camera = beam.resource(Uniforms, cameraMats)
const tick = () => {
i += 0.02
d = 10 + Math.sin(i) * 5
const { viewMat } = createCamera({ eye: [0, d, d] })
// Update uniform resource
camera.set('viewMat', viewMat)
beam.clear().draw(shader, ...buffers, camera)
requestAnimationFrame(tick)
}
tick() // Begin render loop
The `camera` variable is a `Uniforms` resource instance in Beam, whose data is stored in key-value pairs. You are free to add or modify different uniform keys. When `beam.draw` is fired, only the keys that match the shader are uploaded to the GPU.
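That key-matching behavior can be illustrated with a small sketch. The names below are made up for illustration; this shows the idea, not Beam's actual upload code:

```javascript
// Only uniforms whose keys appear in the shader schema are
// relevant for a given draw call; extra keys are simply ignored.
const shaderUniformKeys = ['viewMat', 'projectionMat', 'modelMat']
const uniformsResource = {
  viewMat: 'some matrix',
  projectionMat: 'some matrix',
  lineColor: [1, 0, 0] // not declared by this shader, ignored
}

const keysToUpload = Object.keys(uniformsResource)
  .filter(key => shaderUniformKeys.includes(key))

console.log(keysToUpload) // ['viewMat', 'projectionMat']
```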
See the Zooming Ball page for a working example.
Buffer resources also support `set()` in a similar way. Make sure you know what you are doing, since this can be slow for heavy workloads in WebGL.
Render Images
We have met the `VertexBuffers`, `IndexBuffer` and `Uniforms` resource types in Beam. If we want to render an image, we need the last critical resource type: `Textures`. A basic example would be a 3D box with an image like this:
For graphics with a texture, besides position and normal, we need an extra texCoord attribute, which aligns the image to the graphics at that position and is also interpolated in the fragment shader. See the new vertex shader:
attribute vec4 position;
attribute vec4 normal;
attribute vec2 texCoord;
uniform mat4 modelMat;
uniform mat4 viewMat;
uniform mat4 projectionMat;
varying highp vec2 vTexCoord;
void main() {
vTexCoord = texCoord;
gl_Position = projectionMat * viewMat * modelMat * position;
}
And the new fragment shader:
uniform sampler2D img;
uniform highp float strength;
varying highp vec2 vTexCoord;
void main() {
gl_FragColor = texture2D(img, vTexCoord);
}
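As a concrete sketch of texCoord data, one face of such a box could pair positions with texture coordinates like this, mapping the image's corners (in 0 to 1 UV space) onto the face's corners. The actual `createBox` util may order vertices differently:

```javascript
// One face of a box: each vertex pairs a 3D position with a
// 2D texture coordinate in [0, 1] x [0, 1] UV space.
const face = {
  position: [
    -1, -1, 1, // bottom left
     1, -1, 1, // bottom right
     1,  1, 1, // top right
    -1,  1, 1  // top left
  ],
  texCoord: [
    0, 0, // image bottom left
    1, 0, // image bottom right
    1, 1, // image top right
    0, 1  // image top left
  ]
}
// Both attributes describe the same 4 vertices:
// position uses 3 numbers per vertex, texCoord uses 2
console.log(face.position.length / 3 === face.texCoord.length / 2) // true
```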
Now we need a new shader schema with a `textures` field:
const { vec4, vec2, mat4, tex2D } = SchemaTypes
export const MyShader = {
vs: vertexShader,
fs: fragmentShader,
buffers: {
position: { type: vec4, n: 3 },
texCoord: { type: vec2 }
},
uniforms: {
modelMat: { type: mat4, default: identityMat },
viewMat: { type: mat4 },
projectionMat: { type: mat4 }
},
textures: {
img: { type: tex2D }
}
}
And finally let's check out the render logic:
const beam = new Beam(canvas)
const shader = beam.shader(MyShader)
const cameraMats = createCamera({ eye: [10, 10, 10] })
const box = createBox()
loadImage('prague.jpg').then(image => {
const imageState = { image, flip: true }
beam.clear().draw(
shader,
beam.resource(VertexBuffers, box.vertex),
beam.resource(IndexBuffer, box.index),
beam.resource(Uniforms, cameraMats),
// The 'img' key is defined to match the shader
beam.resource(Textures, { img: imageState })
)
})
That's all for basic texture resource usage. Since we have direct access to the shaders that process the image, we can also easily add image processing effects with Beam.
See the Image Box page for a working example.
You are free to replace `createBox` with `createBall` and see the difference.
Render Multi Objects
How do we render different graphics objects? Let's see the flexibility of the `beam.draw` API:
To render multiple balls and boxes, we only need 2 groups of `VertexBuffers` and `IndexBuffer`: one for the ball and one for the box:
const shader = beam.shader(MyShader)
const ball = createBall()
const box = createBox()
const ballBuffers = [
beam.resource(VertexBuffers, ball.vertex),
beam.resource(IndexBuffer, ball.index)
]
const boxBuffers = [
beam.resource(VertexBuffers, box.vertex),
beam.resource(IndexBuffer, box.index)
]
Then in a `for` loop, we can easily draw them with different uniform options. By changing `modelMat` before `beam.draw`, we update an object's position in world space, so the box and ball can each appear on screen multiple times:
const cameraMats = createCamera(
{ eye: [0, 50, 50], center: [10, 10, 0] }
)
const camera = beam.resource(Uniforms, cameraMats)
const baseMat = mat4.create()
const render = () => {
beam.clear()
for (let i = 1; i < 10; i++) {
for (let j = 1; j < 10; j++) {
const modelMat = mat4.translate(
[], baseMat, [i * 2, j * 2, 0]
)
camera.set('modelMat', modelMat)
const resources = (i + j) % 2
? ballBuffers
: boxBuffers
beam.draw(shader, ...resources, camera)
}
}
}
render()
The `render` function begins with `beam.clear`, then we're free to use `beam.draw` calls to compose complex render logic.
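The `mat4.translate` call above comes from the gl-matrix style utils shipped with the examples. Here is a minimal sketch of what such a translation matrix does, in column-major layout (the real util also composes with an arbitrary input matrix):

```javascript
// A column-major mat4 that translates by [x, y, z]:
// the offset lives in the last column (indices 12..14).
const translateMat = ([x, y, z]) => [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  x, y, z, 1
]

// Applying it to a point (with implicit w = 1) adds the offset
const apply = (m, [x, y, z]) => [
  m[0] * x + m[4] * y + m[8] * z + m[12],
  m[1] * x + m[5] * y + m[9] * z + m[13],
  m[2] * x + m[6] * y + m[10] * z + m[14]
]

console.log(apply(translateMat([2, 4, 0]), [1, 1, 1])) // [3, 5, 1]
```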
See the Multi Graphics page for a working example.
Offscreen Rendering
In WebGL we use framebuffer objects for offscreen rendering, which render the output to a texture. Beam has a corresponding `OffscreenTarget` resource type. Note that this type of resource can't be passed to `beam.draw`.
Say the default render logic looks something like:
beam
.clear()
.draw(shaderX, ...resourcesA)
.draw(shaderY, ...resourcesB)
.draw(shaderZ, ...resourcesC)
With the optional `offscreen2D` method, this render logic can simply be nested in a function scope like this:
beam.clear()
beam.offscreen2D(offscreenTarget, () => {
beam
.draw(shaderX, ...resourcesA)
.draw(shaderY, ...resourcesB)
.draw(shaderZ, ...resourcesC)
})
This simply redirects the render output to the offscreen texture resource.
See the Basic Mesh page for a working example.
Advanced Render Techniques
For realtime rendering, physically based rendering (PBR) and shadow mapping are two major advanced techniques. Beam demonstrates basic support for them in the examples, like these PBR material balls:
These examples focus more on readability than completeness. To get started, check out:
- Material Ball page for a working PBR example.
- Basic Shadow page for a working shadow mapping example.
More Examples
See Beam Examples for more versatile WebGL snippets based on Beam, including:
- Render multi 3D objects
- Mesh loading
- Texture config
- Classic lighting
- Physically based rendering (PBR)
- Chainable Image Filters
- Offscreen rendering (using FBO)
- Shadow mapping
- Basic particles
- WebGL extension config
- Customize your renderers
Pull requests for new examples are also welcome :)
License
MIT