I’ve been fiddling around with WebGL a bit too much since 2016.

I’ve read the docs and code for a few frameworks.

And I don’t like them. Not even one of them.

It’s all in the abstractions.

So far, I’m familiar with the following WebGL frameworks, listed in rough higher-level to lower-level order:

  • Three.js
  • REGL
  • Luma.gl
  • Stack.gl

And they’re all good - they serve a purpose. Yet I feel that the abstractions each of them provides are not the right ones for what I want to do.

A rant about Three.js

Three.js is too high-level. It provides abstractions for Materials and Cameras, but the stuff I do needs neither of these.

It’s genuinely nice for displaying 3D graphics in perspective, if one wants to import models from 3D CAD software and make 3D games. But try to do something with custom projection code and you’ll end up bypassing lots of functionality to get there.
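By “custom projection code” I mean something like this hypothetical sketch (plain JS, no Three.js): a bit of matrix math applied to vertex data directly, the kind of thing one would feed straight to a vertex shader without any Camera object in between.

```javascript
// Hypothetical sketch: applying a custom projection to a vertex by hand,
// instead of going through Camera abstractions.
// The matrix is column-major, as WebGL expects for uniform matrices.

// Multiply a 4x4 column-major matrix by a [x, y, z, w] vector.
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// A made-up "projection": scale x by 2 and y by 3. A real one might be a
// map projection or some other non-perspective transform.
const projection = [
  2, 0, 0, 0,
  0, 3, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
];

console.log(transform(projection, [1, 1, 0, 1])); // [2, 3, 0, 1]
```

None of this needs a scene graph, a material, or a camera - which is exactly the point.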

Three.js is too high-level for me.

A rant about REGL

REGL is… too magical. It also implements a fully functional paradigm. Which means it works wonderfully when the use case is providing certain inputs (data & shaders) and expecting one output (a rendered image).

Problem is, I’m not a fan of functional programming, especially when it’s touted as automagical. In my experience, JS functional programming works fine until it doesn’t, and then debugging becomes a huge mess that cannot be easily traced.

REGL abstractions work well, but I want to be able to add/remove attributes from a vertex buffer, not re-create the whole buffer from an input Array. I want something with lower-level access to data structures. I want to update uniforms, not to provide uniform-returning callback functions.
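To make the contrast concrete, here’s a mock of the two styles in plain JS. This is not the actual regl API - just the shape of the paradigm, with invented function names:

```javascript
// REGL-ish style: uniforms are provided as callbacks, re-evaluated on every draw.
function makeDrawCommand({ uniforms }) {
  return function draw(props) {
    const resolved = {};
    for (const [name, value] of Object.entries(uniforms)) {
      resolved[name] = typeof value === "function" ? value(props) : value;
    }
    return resolved; // a real draw command would issue GL calls here
  };
}

const drawTriangle = makeDrawCommand({
  uniforms: { color: (props) => props.color },
});
drawTriangle({ color: [1, 0, 0, 1] }); // the callback runs on every single draw

// What I'd rather write: update the uniform once, imperatively, then just draw.
const state = { uniforms: {} };
function setUniform(name, value) {
  state.uniforms[name] = value;
}
setUniform("color", [1, 0, 0, 1]); // mutate state directly, no callbacks
```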

REGL is too high-level for me.

A rant about Luma.gl and Stack.gl

Luma.gl and Stack.gl suffer from the same issues:

  • “Hello World!” examples are 50 lines long, and
  • The programmer has to drag the GL context around

Let’s take this piece of the luma.gl example of one triangle and one square:

const animationLoop = new AnimationLoop({
  onInitialize({gl, canvas, aspect}) {

    const program = new Program(gl, {
      vs: VERTEX_SHADER,
      fs: FRAGMENT_SHADER
    });

    const triangleVertexArray = new VertexArray(gl, {
      attributes: {
        positions: new Buffer(gl, new Float32Array(TRIANGLE_VERTS))
      }
    });

    /* etc */
  }
});

Even though WebGL data structures are unique to the WebGL context (in web developer terms: unique to the destination <canvas>), the developer has to manually specify the WebGLRenderingContext instance. Every. Friggin’. Time.

From my point of view, this goes directly against the engineering principles of high cohesion and loose coupling. In the above example, the AnimationLoop, Program, VertexArray and Buffer are tightly coupled because the programmer needs to manually link them together… but the AnimationLoop should be in control of that link, and hide that piece of information from the programmer.
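Here’s a hypothetical sketch of what I mean by the loop being in control of that link. None of these class names come from luma.gl; it’s just the coupling I wish existed, mocked in plain JS:

```javascript
// Hypothetical sketch: the loop owns the context and hands out resources that
// already know about it, so `gl` never leaks into user code.
class Loop {
  constructor() {
    // Stand-in for a real WebGLRenderingContext.
    this.gl = { id: "fake-webgl-context" };
  }
  createBuffer(data) {
    // The loop supplies the context; the caller only supplies the data.
    return { gl: this.gl, data };
  }
  run(onInit) {
    onInit(this); // user code gets the loop, not the raw context
  }
}

const loop = new Loop();
loop.run((l) => {
  const buf = l.createBuffer([0, 1, 2]);
  // `buf` is tied to the right context without us ever touching `gl`.
});
```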

For the seasoned C/C++/OpenGL programmer this might seem like an innocent enough thing to do (“after all, this is how WebGL has worked since the beginning”). I beg to differ.

A rant about the WebGL API

As an OOP JS programmer, the WebGL API feels… weird, man.

Assume you’re using JS and you want to create a 6-element array, and then you want to modify the 3rd element in that array, after playing with other arrays. It should be something like…

var array1 = new Array(6);
array1[2] = 100;

Now, in order to do the same with a WebGLBuffer, things become… weird:

var buffer1 = glCtx.createBuffer();
glCtx.bindBuffer(glCtx.ARRAY_BUFFER, buffer1);
glCtx.bufferData(glCtx.ARRAY_BUFFER, 6, glCtx.STATIC_DRAW);

var buffer2 = glCtx.createBuffer();
glCtx.bindBuffer(glCtx.ARRAY_BUFFER, buffer2);
/* Do something with buffer2 */

glCtx.bindBuffer(glCtx.ARRAY_BUFFER, buffer1);
glCtx.bufferSubData(glCtx.ARRAY_BUFFER, 2, Uint8Array.from([100]));

I really want to comment this line by line.

var buffer1 = glCtx.createBuffer();

This means “Create a data structure, but I’m not gonna tell you which specific kind of data structure, nor how big it is”.

glCtx.bindBuffer(glCtx.ARRAY_BUFFER, buffer1);

A WebGLRenderingContext can have at most one bound ARRAY_BUFFER and one bound ELEMENT_ARRAY_BUFFER. So this sets buffer1 as the currently-bound ARRAY_BUFFER of glCtx.

glCtx.bufferData(glCtx.ARRAY_BUFFER, 6, glCtx.STATIC_DRAW);

This means “Set the size of the currently-bound ARRAY_BUFFER to 6 bytes, and hint that it should be stored in a certain area of GPU memory”. Note that one cannot “set the size of buffer1”, but only “set the size of the currently-bound ARRAY_BUFFER”.

When one does something with an ARRAY_BUFFER, there is no way to know which ARRAY_BUFFER is being affected, unless one starts looking back up through the lines of code.
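That hidden “currently bound buffer” slot can be mocked in a few lines of plain JS (these are not real WebGL calls, just a model of the state machine):

```javascript
// Plain-JS mock of WebGL's bind-then-operate state machine.
// There's one hidden "currently bound ARRAY_BUFFER" slot per context; every
// buffer operation implicitly targets whatever happens to sit in that slot.
const ctx = {
  ARRAY_BUFFER: "ARRAY_BUFFER",
  _bound: { ARRAY_BUFFER: null }, // the hidden state

  createBuffer() {
    return { bytes: null };
  },
  bindBuffer(target, buffer) {
    this._bound[target] = buffer;
  },
  bufferData(target, size) {
    // Note: no buffer argument - it acts on whatever is bound.
    this._bound[target].bytes = new Uint8Array(size);
  },
  bufferSubData(target, offset, data) {
    this._bound[target].bytes.set(data, offset);
  },
};

const buffer1 = ctx.createBuffer();
ctx.bindBuffer(ctx.ARRAY_BUFFER, buffer1);
ctx.bufferData(ctx.ARRAY_BUFFER, 6);

const buffer2 = ctx.createBuffer();
ctx.bindBuffer(ctx.ARRAY_BUFFER, buffer2); // buffer1 silently stops being the target

ctx.bindBuffer(ctx.ARRAY_BUFFER, buffer1); // must re-bind before touching buffer1
ctx.bufferSubData(ctx.ARRAY_BUFFER, 2, Uint8Array.from([100]));
console.log(buffer1.bytes[2]); // 100
```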

So in order to store some bytes…

glCtx.bufferSubData(glCtx.ARRAY_BUFFER, 2, Uint8Array.from([100]));

…one has to re-bind the buffer with glCtx.bindBuffer(glCtx.ARRAY_BUFFER, buffer1); before storing the values, in order to be sure the bytes end up where they’re supposed to be stored.
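For contrast, here’s a hypothetical sketch of what a more object-oriented buffer could look like. The class and method names are invented; nothing like this exists in the WebGL API:

```javascript
// Hypothetical sketch of an OOP-y buffer wrapper.
class GLBuffer {
  constructor(glCtx, size) {
    this._gl = glCtx; // the buffer remembers its own context
    this._bytes = new Uint8Array(size); // stand-in for the GPU-side storage
  }
  set(offset, data) {
    // The buffer would bind itself internally - the caller never has to
    // think about which buffer is currently bound.
    this._bytes.set(data, offset);
  }
}

const fakeCtx = {}; // stand-in for a WebGLRenderingContext
const buffer1 = new GLBuffer(fakeCtx, 6);
buffer1.set(2, [100]); // reads just like the plain-Array version
```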

I get it - the WebGL API is just the OpenGL ES 2.0 API, function for function and constant for constant. But still, the API shape could have been more OOP-y - this is ECMAScript we’re talking about, after all. Which leads me to an important realisation I had a while ago:

WebGL wasn’t about bringing OpenGL to web devs - it was about bringing OpenGL devs to the web.

So, what to do about it?

Reinvent the wheel, obviously.