
Alpha Blending in WebGL

Alpha blending uses the alpha channel of colours rendered to the canvas, in combination with a blend equation, to render transparency and translucency effects. There are plenty of articles on how to go about alpha blending in OpenGL, but when working on a WebGL renderer for my own amusement / learning I was unable to find any articles specifically aimed at WebGL, so I thought I would contribute the technique I used, with code samples specific to WebGL.

I'm going to assume that you are already reasonably familiar with the basics of WebGL, including the set up and rendering of opaque objects; if not, I recommend checking out the fundamentals and 3D sections of WebGLFundamentals.org. Where I mention gl in code snippets, this is your WebGL context object!

I'm going to be using the excellent glMatrix for vectors / quaternions / matrices, and I'll assume that you are too, or that you're comfortable determining the analogue in your favoured library. The short version is that they are all one-dimensional typed arrays, e.g. a position vector where x = 0th index, y = 1st index, z = 2nd index. I'll also assume you're using quaternions for rotations (and you should imo, they're not that complex: they're just an angle with an axis to rotate around, transformed a bit so you can multiply them together as if they were 4 component vectors), or that you're capable of getting your rotations as quaternions.
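For example, a quick sketch assuming the glMatrix 2.x API (note the quaternion layout is [x, y, z, w], so w lives at index 3 - this matters for the depth function later):

var position = vec3.fromValues(0, 1, -5);		// position[0] = x, position[1] = y, position[2] = z
var rotation = quat.create();				// identity rotation, [0, 0, 0, 1]
quat.setAxisAngle(rotation, [0, 1, 0], Math.PI / 4);	// 45 degrees about the y-axis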

The Blend Function

First up let's get the simple bit out of the way: the blend equation and blend function. When you enable alpha blending before drawing objects to the buffer by using gl.enable(gl.BLEND), the manner in which pixels are combined is determined by the blend equation and blend function. You can see a great visualisation of the different options available on this gl.blendFunc() page by the awesome Mr.doob (the options are constants based on various combinations of the alpha and the colour of the two fragments to be combined).

Essentially you pass two constants into the blend function, which determine what ratio of the source colour (i.e. the r, g, b, a of the fragment/pixel you are about to draw) and what ratio of the destination colour (the fragment/pixel that is already in the buffer) to use. The most commonly used function is gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA): the source colour is multiplied by the alpha value of the source colour, and the destination colour is multiplied by 1 - alpha of the source colour. This is probably the one you'll want, but it depends on what type of effect you want to achieve.
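As a minimal sketch, enabling that standard transparency blending looks like this:

// Standard transparency blending: result = src.rgb * src.a + dst.rgb * (1 - src.a)
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);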

The blend equation determines how these calculated values are combined. If RS is the source colour multiplied by the value of the chosen source constant, RD is the destination colour multiplied by the value of the chosen destination constant, and RC is the resulting colour written to the buffer, then gl.FUNC_ADD is the default, which is RC = RS + RD; there is also gl.FUNC_SUBTRACT, which is RC = RS - RD, and gl.FUNC_REVERSE_SUBTRACT, which is RC = RD - RS.
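To make that concrete, here's a worked example (my own numbers, just arithmetic) of gl.FUNC_ADD with gl.SRC_ALPHA / gl.ONE_MINUS_SRC_ALPHA:

var src = [1.0, 0.0, 0.0, 0.5];	// half-transparent red about to be drawn
var dst = [0.0, 0.0, 1.0, 1.0];	// opaque blue already in the buffer
var rc = [];
for (var i = 0; i < 3; i++) {
	rc[i] = src[i] * src[3] + dst[i] * (1 - src[3]);
}
// rc is [0.5, 0.0, 0.5] - half-transparent red over blue gives purple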

Depth Testing and Depth Mask

Now it's time to discuss the z-buffer and depth testing! Simply put, when you have set gl.enable(gl.DEPTH_TEST), each fragment drawn to screen is tested against the z-buffer: if it is closer to the camera it is written to the frame buffer and its z-value is written to the z-buffer; if it is further away the fragment is discarded. This is obviously very useful with opaque objects, as it allows us to draw our polygons in any order we like, and those further away from the camera will not be written on top of those which are closer!

However, as you might have guessed, this is a problem with transparent or translucent objects. If we were to draw with this enabled, objects behind the transparent object would be discarded, and the fragment colour of the 'transparent' object would replace the existing colour if drawn in front. This isn't the effect we want: rather than replacing the existing colour we want to blend with it, so we can see the objects which are behind our transparent/translucent polys! In order to get predictable behaviour and choose an appropriate blend function, we're going to have to render our blended polys in order, specifically from further away to closer. This is called the painter's algorithm, as you can paint things far away and then safely paint over them with closer details.

You could - but SHOULD NOT - disable depth testing entirely when drawing blended polygons using gl.disable(gl.DEPTH_TEST), as the Learning WebGL lesson suggests; this would mean we would have to order all the objects we wanted to render, both translucent and opaque. That obviously isn't practical, but there is an alternative: you can stop your new fragments being written to the z-buffer, whilst still testing against already drawn fragments, by disabling the depth mask. You do this using gl.depthMask(false). Now we only have to order the transparent objects with respect to each other and draw them after drawing all our opaque objects (with the depth mask enabled); the ordering of the blended fragments with respect to the opaque ones will take care of itself, hooray!
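Put together, the shape of a frame looks something like this (just a sketch - drawOpaqueObjects and drawBlendedObjectsBackToFront are placeholders, and the sorting they rely on comes next):

// At set up
gl.enable(gl.DEPTH_TEST);

// Each frame
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
drawOpaqueObjects();			// any order, depth mask enabled

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.depthMask(false);			// test against opaque depths but don't write
drawBlendedObjectsBackToFront();	// sorted far-to-near

gl.depthMask(true);
gl.disable(gl.BLEND);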

Putting it all together

For the example I'm going to use Fury, the renderer I worked on last year; I'll pick out the relevant functions and present them in simplified form below.

So let's assume you've got yourself a basic renderer set up, but it just doesn't do alpha blending yet. Before you render anything you're going to need to sort your alpha blended polys by depth; to do this you're going to need to be able to calculate the depth of said polys with respect to the camera. Here's a function for doing that with the fewest calculations, presuming - as stated earlier - that you're using quaternions for rotations.

// Returns the offset of the object along the camera's local z-axis, i.e. the
// z component of the object's position in camera space. glMatrix quaternions
// are laid out [x, y, z, w], so w is at index 3.
var getDepth = function(cameraPosition, cameraRotation, objectPosition) {
	var p0 = cameraPosition[0], p1 = cameraPosition[1], p2 = cameraPosition[2],
	q0 = cameraRotation[0], q1 = cameraRotation[1], q2 = cameraRotation[2], q3 = cameraRotation[3],
	l0 = objectPosition[0], l1 = objectPosition[1], l2 = objectPosition[2];
	// Dot product of (objectPosition - cameraPosition) with the camera's
	// local z-axis, the third column of the camera's rotation matrix
	return 2*(q0*q2 + q1*q3)*(l0 - p0) + 2*(q1*q2 - q0*q3)*(l1 - p1) + (1 - 2*q0*q0 - 2*q1*q1)*(l2 - p2);
}
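A quick sanity check (my own numbers, just to pin down the sign convention): a camera with the identity rotation looks down -z, so objects in front of it get negative depths, with more distant objects more negative.

var cameraPosition = vec3.fromValues(0, 0, 0);
var cameraRotation = quat.create();	// identity, camera looks down -z

getDepth(cameraPosition, cameraRotation, [0, 0, -10]);	// -10 (far)
getDepth(cameraPosition, cameraRotation, [0, 0, -5]);	// -5  (near)
// Ascending order of these values is far-to-near: exactly the painter's
// algorithm order the insertion function below maintains.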

So now we have a function for that, we're going to need to add objects to an array in sorted order. There are many, many ways to do this, but rather than go into an in-depth discussion of sorting algorithms, I'll just show you how I did it. There is probably plenty of room for improvement here, and it would probably be a good place to look if you get to the point of needing to optimise. In the interests of getting something working and not prematurely optimising, however...

// Assumes:
// Objects have a property "id"
// There is an object "depths", and an array "alphaRenderObjects" in scope
// The latter has been cleared prior to sorting for a new frame
var addToAlphaList = function(object, depth) {
	depths[object.id] = depth;
	// Step towards the correct index until the neighbouring entries bracket this depth
	var less, more, iteration = 1, inserted = false, index = Math.floor(alphaRenderObjects.length/2);
	while(!inserted) {
		less = (index === 0 || depths[alphaRenderObjects[index-1].id] <= depth);
		more = (index >= alphaRenderObjects.length || depths[alphaRenderObjects[index].id] >= depth);
		if(less && more) {
			alphaRenderObjects.splice(index, 0, object);
			inserted = true;
		} else {
			iteration++;
			var step = Math.ceil(alphaRenderObjects.length/(2*iteration));
			if(!less) {
				index -= step;
			} else {
				index += step;
			}
		}
	}
};
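If you'd rather lean on the built-ins, a simpler (if less tuned) alternative - not what Fury does, but a reasonable starting point - is to push everything and sort once per frame:

// Collect first...
alphaRenderObjects.push(object);
depths[object.id] = depth;

// ...then, once per frame before drawing the blended objects, sort
// ascending by depth (far-to-near, given the sign convention above)
alphaRenderObjects.sort(function(a, b) {
	return depths[a.id] - depths[b.id];
});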

The final step is now pretty easy: just modify your existing render loop to use these functions and enable blending. It might look a little something like this.

var renderObjects = { keys: [] }; 	// Render objects keyed by id for quick
// look up, plus an array "keys" of the ids for quick enumeration
// c.f. http://jsperf.com/reflection-vs-array-of-keys
var alphaRenderObjects = [];		// A sorted list
var gl;					// WebGL Context

// Omitted: all your other rendering and set up code ;D

var render = function(camera) {
	// Omitted: Camera / pMatrix set up

	alphaRenderObjects.length = 0;

	clear();

	for(var i = 0, l = renderObjects.keys.length; i < l; i++) {
		var renderObject = renderObjects[renderObjects.keys[i]];
		if(renderObject.material.alpha) {
			addToAlphaList(renderObject, getDepth(
				camera.position,
				camera.rotation,
				renderObject.position));
		} else {
			bindAndDraw(renderObject);
		}
	}
	for(i = 0, l = alphaRenderObjects.length; i < l; i++) {
		renderObject = alphaRenderObjects[i];
		enableBlending(
			renderObject.material.sourceBlendType,
			renderObject.material.destinationBlendType,
			renderObject.material.blendEquation);
		bindAndDraw(renderObject);
	}
	disableBlending();
};

var enableBlending = function(sourceBlend, destinationBlend, equation) {
	if(equation) {
		gl.blendEquation(gl[equation]);
	}
	if(sourceBlend && destinationBlend) {
		gl.blendFunc(gl[sourceBlend], gl[destinationBlend]);
	}
	gl.enable(gl.BLEND);
	gl.depthMask(false);
};

var disableBlending = function() {
	gl.disable(gl.BLEND);
	gl.depthMask(true);
};

var clear = function() {
	// drawingBufferWidth / Height are standard properties of the WebGL context
	gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
	gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
};

var getDepth = function(cameraPosition, cameraRotation, objectPosition) {
	// Omitted: previously covered code
}

var addToAlphaList = function(object, depth) {
	// Omitted: previously covered code
};

var bindAndDraw = function(renderObject) {
	// Omitted: Code which binds uniforms, textures etc. as necessary and calls the relevant gl draw function
};

Hopefully that all made sense. This isn't code you can just copy and paste into your project (obviously), and some objects have properties which I've not explicitly explained, but it should be pretty obvious what they are (e.g. camera.rotation being a quaternion, renderObject.material.blendEquation being a string representing the desired blend equation to use, say "FUNC_ADD"). If you have any comments or feedback you can contact me via delphic.bsky.social

I'll leave you with the little test I wrote to see if alpha blending was working for my renderer.