For some time I’ve had an idea for a simple and quick method of procedural mesh generation and figured it would make for an enjoyable weekend project. The results are reasonably acceptable for how easy it is to implement, though there is room for improvement. It was implemented in C# for the Unity game engine and later ported to JavaScript using three.js.

This method may be suitable for games where dynamic mesh generation is needed. The implementation developed for the demo, for example, could be used for generating uniquely shaped asteroids, which can then have holes blown in them by missiles. With modifications to divide the data into chunks (to get around the 65 thousand vertex limit on Unity meshes) and to generate lower-resolution versions of the meshes for levels of detail, it could also be used for world terrain generation – though that is outside the scope of this article.

Density Data Grid

The idea is based on a 3D grid of density data, where the mesh will be generated as a surface along the boundary where the values equal 0.5 – with 0 being a fully empty cell and 1 an entirely filled cell (though there are no hard constraints on the range of valid values).

Because the data is a simple grid, values can be generated procedurally using functions, filled in by hand, or modified after the fact – for example, using a noise function to generate the aforementioned asteroid and then punching spherical holes in it for damage caused by explosions. Although the grid limits the achievable resolution, it provides flexibility and implementation simplicity.
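As a rough sketch of what that could look like (in JavaScript, matching the three.js port of the demo – `valueNoise`, `makeDensityGrid`, and `punchHole` are hypothetical names, and the noise function is just a deterministic stand-in to keep the example self-contained):

```javascript
// Stand-in for a real 3D noise function (e.g. simplex noise); returns
// a deterministic pseudo-random value in [0, 1).
function valueNoise(x, y, z) {
  const n = Math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453;
  return n - Math.floor(n);
}

// Fill a size x size x size density grid procedurally.
function makeDensityGrid(size) {
  const grid = new Float32Array(size * size * size);
  for (let z = 0; z < size; z++)
    for (let y = 0; y < size; y++)
      for (let x = 0; x < size; x++)
        grid[x + size * (y + size * z)] = valueNoise(x * 0.1, y * 0.1, z * 0.1);
  return grid;
}

// Punch a spherical hole: clear every cell within `radius` of `center`.
function punchHole(grid, size, center, radius) {
  for (let z = 0; z < size; z++)
    for (let y = 0; y < size; y++)
      for (let x = 0; x < size; x++) {
        const dx = x - center[0], dy = y - center[1], dz = z - center[2];
        if (dx * dx + dy * dy + dz * dz <= radius * radius)
          grid[x + size * (y + size * z)] = 0;
      }
}
```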

Trilinear Interpolation

Trilinear interpolation is used to acquire density values between the data points – a blending of the values at the eight corner points of a cube depending on the position within that volume. This will be useful for estimating where values on the surface equal 0.5. The actual mesh may not have its vertices placed exactly on the values of 0.5 in the data grid, but it will be close enough to produce visually acceptable results.

Trilinear interpolation.

The above diagram illustrates how the trilinear interpolation is computed. It is simply the result of linearly blending the values of the corner points (where we have data values), then the computed values of points along edges between those corners, then points along edges between those points, and so on. The diagram illustrates the idea more clearly than words could.

Using values for x, y, and z ranging from 0 to 1, the series of equations would look like this:

\begin{array}{lll} i & = & (1-x)a+xb\\ j & = & (1-x)c+xd\\ k & = & (1-x)e+xf\\ l & = & (1-x)g+xh\\ m & = & (1-y)i+yj\\ n & = & (1-y)k+yl\\ o & = & (1-z)m+zn \end{array}

Which can be simplified (with the help of my calculator) down to:

\begin{array}{lll} \mathrm{interpolate}(x,y,z) & = & (((-a+b+c-d+e-f-g+h)z+a-b-c+d)y \\ & & + (a-b-e+f)z-a+b)x \\ & & + ((a-c-e+g)z-a+c)y \\ & & + (e-a)z+a \end{array}
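In code, the unsimplified series of blends is the easier form to follow. A direct JavaScript translation might look like this, with `a` through `h` being the density values at the cube's eight corners, labeled as in the derivation:

```javascript
// Trilinear interpolation over a unit cube, following the equations above.
// a/b, c/d, e/f, g/h blend along x; the results blend along y, then z.
function trilinear(x, y, z, a, b, c, d, e, f, g, h) {
  const i = (1 - x) * a + x * b;
  const j = (1 - x) * c + x * d;
  const k = (1 - x) * e + x * f;
  const l = (1 - x) * g + x * h;
  const m = (1 - y) * i + y * j;
  const n = (1 - y) * k + y * l;
  return (1 - z) * m + z * n;
}
```

At a corner the result collapses to that corner's value, and a constant field interpolates to itself – two quick sanity checks on the blending order.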


A gradient is a vector which points in the direction of maximum increase with a magnitude representing the rate of change in that direction. It is found by taking the partial derivative of a function with respect to x, y, and z to find the formulas for the vector components. The gradient of the interpolated data values is used to know how the positions of vertices need to be adjusted.

The gradient of the above trilinear interpolation formula (with the help of my trusty calculator) can be found to be:

\vec{\nabla}\mathrm{interpolate}(x,y,z)=\left\langle\frac\partial{\partial x}\mathrm{interpolate}(x,y,z),\frac\partial{\partial y}\mathrm{interpolate}(x,y,z),\frac\partial{\partial z}\mathrm{interpolate}(x,y,z)\right\rangle
\frac\partial{\partial x}\mathrm{interpolate}(x,y,z)=((-a+b+c-d+e-f-g+h)z+a-b-c+d)y+(a-b-e+f)z-a+b
\frac\partial{\partial y}\mathrm{interpolate}(x,y,z)=((-a+b+c-d+e-f-g+h)z+a-b-c+d)x+(a-c-e+g)z-a+c
\frac\partial{\partial z}\mathrm{interpolate}(x,y,z)=((-a+b+c-d+e-f-g+h)y+a-b-e+f)x+(a-c-e+g)y-a+e
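Translated to JavaScript (factoring out the mixed term that all three components share), the gradient might look like:

```javascript
// Gradient of the trilinear interpolation at (x, y, z), following the
// partial derivatives above. a..h are the eight corner density values.
function gradient(x, y, z, a, b, c, d, e, f, g, h) {
  const s = -a + b + c - d + e - f - g + h; // shared mixed term
  return [
    (s * z + a - b - c + d) * y + (a - b - e + f) * z - a + b, // d/dx
    (s * z + a - b - c + d) * x + (a - c - e + g) * z - a + c, // d/dy
    (s * y + a - b - e + f) * x + (a - c - e + g) * y - a + e, // d/dz
  ];
}
```

At the origin this reduces to the simple corner differences (b−a, c−a, e−a), and a constant field gives a zero gradient, as expected.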


Using this density data, a polygonal mesh will be generated. As mentioned, this mesh will approximate the surface along the values of 0.5 in the data grid.

Boundary Quadrilaterals

A simple scan through each data cell, with a comparison to the above, side, and forward neighboring cells, can be used to determine where to place quadrilaterals. The value on one side of the quadrilateral should be less than or equal to 0.5, with the value on the other side greater than 0.5. Because 3D graphics can do back-face culling, these faces should point from the higher value to the lower value (vertices should be counter-clockwise when looking at the front of the face). The end result of this process is the visible sides of boxes surrounding the sufficiently dense values in the 3D data grid – similar to what can be seen in the blocky landscapes of Minecraft.

Cube mesh net.
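A sketch of that scan in JavaScript might look like the following. To keep it short, each boundary face is only recorded as a small descriptor (cell, axis, and which side is solid – the names are mine, not from the demo), with the actual quad construction left out; cells on the far boundary of the grid are also skipped for brevity:

```javascript
// Scan each cell against its +x, +y, +z neighbors; where the pair
// straddles the 0.5 threshold, record a face between them, flagged so
// it can later be wound to point from the denser cell toward the
// emptier one.
function findBoundaryFaces(grid, size) {
  const idx = (x, y, z) => x + size * (y + size * z);
  const faces = [];
  for (let z = 0; z < size - 1; z++)
    for (let y = 0; y < size - 1; y++)
      for (let x = 0; x < size - 1; x++) {
        const here = grid[idx(x, y, z)];
        const neighbors = [
          grid[idx(x + 1, y, z)], // +x (side)
          grid[idx(x, y + 1, z)], // +y (above)
          grid[idx(x, y, z + 1)], // +z (forward)
        ];
        neighbors.forEach((there, axis) => {
          const hereSolid = here > 0.5, thereSolid = there > 0.5;
          if (hereSolid !== thereSolid)
            faces.push({ x, y, z, axis, flipped: thereSolid });
        });
      }
  return faces;
}
```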

Moving Vertices Into Place

The vertices at the corners of these cubes can be moved along the gradient to a position closer to where the interpolated values in the data grid equal 0.5. This is done by first taking the interpolated value at the point where the vertex currently resides and then using that to find how far along the gradient to move it. The equation for finding this scalar value is pretty simple. Since the magnitude of the gradient is the rate of change in the direction of the gradient, that rate of change can be used in the simple line equation y = ms + b. Rearranged, this becomes s = (y - b) / m, where y is the target value we want, b is the current value, m is the rate of change, and s is the scalar we are after. Since we are targeting a value of 0.5, we set y to that.


This scalar, as mentioned earlier, is used to make an estimate for how much the vertex needs to be moved along the gradient to get it close to 0.5.


A one-pass adjustment may not place the vertex exactly on a value of 0.5, but additional iterations can bring it closer if desired (in my test, one seemed to be enough.)
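A single adjustment pass could be sketched like this. `sampleValue` and `sampleGradient` are hypothetical helpers standing in for the trilinear lookups described above; moving a distance s along the unit gradient changes the value by roughly |∇f|·s, which is exactly the s = (y − b) / m rearrangement:

```javascript
// One pass of moving a vertex along the gradient toward the 0.5 surface.
// p is [x, y, z]; sampleValue(p) and sampleGradient(p) return the
// interpolated density and its gradient at that position.
function adjustVertex(p, sampleValue, sampleGradient) {
  const value = sampleValue(p);
  const [gx, gy, gz] = sampleGradient(p);
  const mag = Math.sqrt(gx * gx + gy * gy + gz * gz);
  if (mag < 1e-8) return p; // flat region: nothing to move along
  const s = (0.5 - value) / mag; // signed distance along the unit gradient
  return [p[0] + (s * gx) / mag, p[1] + (s * gy) / mag, p[2] + (s * gz) / mag];
}
```

For a linear density field one pass lands exactly on 0.5; for the curved fields the trilinear interpolation produces, it is the estimate the article describes, which further iterations can refine.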


The easiest (and most logical) means of computing the normal for a vertex is to use the gradient at that location, normalized to unit length. Since the gradient points in the direction of greatest increase, it will point in toward the more solid values – therefore, the normals need to be flipped to point outward.
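In code, that amounts to little more than a normalize-and-negate:

```javascript
// Vertex normal from the density gradient: normalize, then flip so it
// points out of the solid region (the gradient points into it).
function normalFromGradient(g) {
  const mag = Math.sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]) || 1;
  return [-g[0] / mag, -g[1] / mag, -g[2] / mag];
}
```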

I have found, however, that recomputed normals can look better, though they may not be as "accurate" a representation of the gradients in the data.


Since we have been working with quadrilaterals, they will need to be split into triangles. To achieve a more appealing triangulation, the angles between the normals of vertices on opposite corners are used to decide which diagonal becomes the third edge of the new triangles – the pair of normals with the larger angle between them should end up on either side of the new edge, where the crease will be. Think of folding a piece of paper to get an intuition for why this works.
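A sketch of that diagonal choice, given the four corner normals of a quad in winding order (a larger angle means a smaller dot product, so the crease goes along the other diagonal, leaving the most-divergent pair on either side of it):

```javascript
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Split a quad v0..v3 (winding order) into two triangles, choosing the
// diagonal so the corner pair whose normals differ most sits on either
// side of the crease. Returns two index triples into the quad's corners.
function splitQuad(normals) {
  const [n0, n1, n2, n3] = normals;
  if (dot(n0, n2) < dot(n1, n3))
    return [[0, 1, 3], [1, 2, 3]]; // n0/n2 differ more: crease along v1-v3
  return [[0, 1, 2], [0, 2, 3]];   // otherwise: crease along v0-v2
}
```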


Interactive Demo

Up/down changes amplitude, left/right changes frequency.

Room For Improvement

The best visual quality can be achieved by creating high-resolution meshes with smooth, gradual curves. Sharp edges will result in a rough, jagged look. Some improvement in visual appearance could possibly be achieved using a tetrahedral lattice instead of a simple 3D grid – allowing the algorithm to work with triangles instead of quadrilaterals.

Also, since the vertices are moved along the gradient, when the plane of a quadrilateral is close to being in line with the gradient some faces can be rather narrow. Some means of either smoothing or collapsing short edges could help with improving the mesh under these circumstances. There might need to be some trade-off between generation time and visual quality.

One way of greatly improving the performance of the triangulation would be to push as much of this algorithm onto the GPU as possible. Since a lot of it works on individual vertices and faces, much of it could be done in parallel. This could result in being able to generate much higher resolution meshes more quickly.