Who Knew Drawing Lines Could Be Hard?[1]

Drawing Vector Fields and Vector Field Lines

We walk through the design and development process for improved rendering of our vector fields. Confronted with issues from the gl.LINES rendering, we examine alternatives and settle on fake volumetric lines, which prove to be both performant and flexible.

What About WebGL's Lines?

Why not just use WebGL's built-in line drawing? Indeed, this built-in rendering was used in early versions of the VField framework. We see this native line drawing in the first example.

The electric field rendered with gl.LINES.
The electric field rendered with gl.TRIANGLE_STRIP.

Several issues with gl.LINES rendering are evident even in this simple example.

  1. The lines are very thin.
  2. The lines do not stand out strongly from the background.
  3. The lines are visibly jaggy with a stair step appearance.
  4. The directional arrows disappear and reappear as the visualization is rotated.

Some Alternatives

The first instinct might be to simply increase the line width. However, the native line rendering is limited to a line width of 1.[2]
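
We can confirm this limit at run time by asking WebGL for the range of line widths it supports; a quick check along these lines (assuming a WebGL context named gl) often reports [1, 1]:


    // Ask the implementation for the range of widths it supports for line primitives.
    // Many WebGL implementations report [1, 1], that is, only single pixel lines.
    const lineWidthRange = gl.getParameter(gl.ALIASED_LINE_WIDTH_RANGE);
    console.log("Line widths supported: " + lineWidthRange[0] + " to " + lineWidthRange[1]);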

The next option was to draw the field lines as tubes, using instances of the existing cylinder geometry. However, during the development process, I found that WebGL instancing was not supported by the then-current release of Safari on iOS. From an instructional design vantage point, ignoring such a widespread device was simply unacceptable.
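
In WebGL 1, instanced drawing lives behind an extension, so at least the lack of support can be detected at run time; a small check of that sort (not part of VField itself) looks like:


    // Instancing in WebGL 1 is exposed through the ANGLE_instanced_arrays extension,
    // so a null return value means the device cannot draw instanced geometry.
    const instancing = gl.getExtension("ANGLE_instanced_arrays");
    if (instancing === null) {
      console.log("Instanced rendering unavailable; fall back to another technique.");
    }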

In looking for additional options, including efficient ways to render a tube, I came across this discussion of fake volumetric lines. The trick here is to render the line so that it is always facing the observer, giving the impression of a volume rather than a flat surface.

Choosing Volumetric Lines

We see the effect in the second example. The field lines are initially sharper and clearer, and they are not diminished as we manipulate the visualization. This gives the illusion that the lines are three-dimensional solids, while avoiding the performance costs of actually rendering a three-dimensional solid. We could even texture, light, or add end caps to the lines, but we will skip those for now. This simplified version seems well suited to our goal of rendering field lines.

Implementing Volumetric Lines

Let's start out with a quick review of the process of drawing the field lines with gl.LINES. We start with a seed point, and compute successive points by stepping a small distance ds along the field direction, (E/|E|) ds. We simply load these points into a vertex buffer, provide model view and projection matrix uniforms, and draw the points to the screen.

The standard vertices for a field line rendered with gl.LINES.
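
For concreteness, the gl.LINES path amounts to little more than the following sketch (the linePoints array and positionHandle names here are illustrative, not the actual VField code):


    // Load the (x, y, z) points computed along the field line into a vertex buffer.
    const lineBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, lineBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(linePoints), gl.STATIC_DRAW);

    // Feed each point directly into the position attribute - no extra geometry is built.
    gl.enableVertexAttribArray(positionHandle);
    gl.vertexAttribPointer(positionHandle, 3, gl.FLOAT, false, 0, 0);

    // With gl.LINES, each consecutive pair of vertices in the buffer forms one segment.
    gl.drawArrays(gl.LINES, 0, linePoints.length / 3);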

The built-in line rendering is simple and straightforward, so why introduce something more complex? The first section covered some shortcomings of gl.LINES; we will address those. Besides, a little extra complexity is fine as long as it improves the overall user experience, and the truth is this new technique really isn't that complex.

We want to generate a set of triangles that follow the line and allow us to draw a thicker representation of the field line than gl.LINES produced. The twist that makes these fake volumetric lines is that we always draw these triangles perpendicular to the viewer's line of sight.

We can build triangles to present the line AB.

How do we construct triangles for a more robust presentation of the line? Luckily, this is not too difficult. Start by breaking the full line into line segments. Each line segment stretches between two consecutive points along a line, A and B. At each of the points A and B, we generate two vertices by displacing A and then B in a direction perpendicular to the line AB.

For each line segment AB, the connecting line from A to B is L = B - A.[3] The slope of this line is then Ly/Lx, and the slope of the normal to this line is -Lx/Ly. We generate vertex 0 by moving one half of the desired line width away from point A along the direction perpendicular to the AB line. Generate vertex 1 by moving one half the line width away from point A in the opposite direction.
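
In code, constructing the two vertices at A from these quantities looks something like the sketch below (plain JavaScript with hypothetical 2D screen space points a and b and a halfWidth; the actual work happens in the vertex shader, as we will see):


    // Hypothetical screen space endpoints and half line width.
    const a = { x: 0.00, y: 0.00 };
    const b = { x: 0.30, y: 0.10 };
    const halfWidth = 0.01;

    // L = B - A, the direction of the segment.
    const lx = b.x - a.x;
    const ly = b.y - a.y;

    // Unit normal to AB: rotate L by 90 degrees and normalize.
    const length = Math.sqrt(lx * lx + ly * ly);
    const nx = -ly / length;
    const ny =  lx / length;

    // Vertex 0: move half the line width away from A along the normal.
    const vertex0 = { x: a.x + halfWidth * nx, y: a.y + halfWidth * ny };

    // Vertex 1: move half the line width away from A in the opposite direction.
    const vertex1 = { x: a.x - halfWidth * nx, y: a.y - halfWidth * ny };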

So, what makes this seemingly simple operation noteworthy? We have to do these operations in screen space, that is in terms of pixels as seen by the user. We will also extend the process for more useful rendering of vector fields.

This requires a bit of a mental gear shift towards doing more work in the vertex shader. To calculate the vertices at one end of the line requires the coordinates of that end of the line, the coordinates of the other end of the line, the width of the line, and the direction we are displacing this vertex.

The line width does not change from vertex to vertex, so we pass that as a uniform. The rest of the data is packed into a vertex buffer.

The vertex buffer contents we need to construct one vertex.
Let's take a walk through some of the interesting points of the code. The first thing we do is identify the inputs to the calculation and declare them in the shader as attributes or uniforms.

  attribute vec3  current;
  attribute vec3  other;
  attribute float direction;

  uniform   float aspect;
  uniform   float halfWidth;
  uniform   mat4  modelViewMatrix;
  uniform   mat4  projectionMatrix;

For a line segment AB, current will be A for the first two vertices, and then B for the last two. other, as you might expect, will be B for the first two vertices, and A for the last two. direction alternates between +1 and -1.
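
Laid out concretely, the interleaved data for a single segment AB looks something like this (seven floats per vertex; the values here are purely illustrative):


    // Two consecutive points along the field line (illustrative values).
    const a = { x: 0.0, y: 0.0, z: 0.0 };
    const b = { x: 0.1, y: 0.2, z: 0.0 };

    //  current (x, y, z)    other (x, y, z)     direction
    const segmentVertices = [
      a.x, a.y, a.z,         b.x, b.y, b.z,      +1.0,  // vertex 0: A, displaced one way
      a.x, a.y, a.z,         b.x, b.y, b.z,      -1.0,  // vertex 1: A, displaced the other way
      b.x, b.y, b.z,         a.x, a.y, a.z,      -1.0,  // vertex 2: B, displaced
      b.x, b.y, b.z,         a.x, a.y, a.z,      +1.0   // vertex 3: B, displaced the other way
    ];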

aspect, as you might expect from the name, is the aspect ratio of the display canvas. It is used to adjust for cases where the ratio is not 1:1, which would otherwise distort the line. This is just one aspect of thinking about the problem in terms of pixels drawn to the screen. We build aspect from built-in properties of the rendering context.


    const aspect = gl.drawingBufferWidth / gl.drawingBufferHeight;

The first step in getting to the screen coordinates should be familiar. We apply the model view projection matrix to get our points into clip space. The unique aspect of this approach is that we apply this transformation to both of the points on our line segment, from which we will compute the vertex. A traditional vertex shader applies the transformations directly to the vertex.


    mat4 projModelView    = projectionMatrix * modelViewMatrix;

    vec4 currentProjected = projModelView * vec4(current, 1.0);
    vec4 otherProjected   = projModelView * vec4(other, 1.0);

This next step, division of the spatial components (x, y) of our position vectors by the homogeneous (w) component, is known as perspective division. Normally, this is done automatically after the vertex shader. This is the step that generates what we traditionally think of as perspective: objects scale down as they move further away. We are only concerned with x and y, as these determine the position on the screen.


    vec2 currentScreen    = currentProjected.xy / currentProjected.w;
    vec2 otherScreen      = otherProjected.xy / otherProjected.w;

The last step in getting to points on the screen is to adjust for the shape of the viewport. Like the perspective divide, this is usually done automatically behind the scenes, but we need to deal with it directly. When the aspect ratio is 1:1, the vertical and horizontal resolutions are equal and the aspect multiplication has no effect. With an aspect ratio of 2:1, a delta that spans one pixel in the y direction spans two pixels in the x direction. We correct for this by multiplying the x coordinates by the aspect ratio.


    currentScreen.x      *= aspect;
    otherScreen.x        *= aspect;

Now that we are in screen space, the code to generate the vertices follows very closely from the mathematics. The main exception is the adjustment of the offset by the aspect ratio. This makes sense, though: since we pass halfWidth in terms of the vertical resolution, dividing the x component by the aspect ratio converts it to the equivalent distance in the x direction.


    vec2 dir              = normalize(currentScreen - otherScreen);
    vec2 normal           = vec2(-dir.y, dir.x);
    normal               *= halfWidth;
    normal.x             /= aspect;

    vec4 offset = vec4(normal * direction, 0.0, 0.0);

    gl_Position = currentProjected + offset;

We can cap off this discussion with the code to set up and run the line rendering.

The aspect ratio and the line width are constant along the line, so we set them once as uniforms.


  gl.uniform1f(lineProgram.aspectHandle,    aspect);
  // lineWidth pixels span 2*lineWidth/height of the -1 to 1 vertical range in
  // normalized device coordinates, so lineWidth/height is the corresponding half width.
  gl.uniform1f(lineProgram.halfWidthHandle, lineWidth/gl.drawingBufferHeight);

Things get a little more interesting as we set up the vertex attributes. Successive points along the field line are found by a small displacement in the direction of the field, (E/|E|) ds. The code computes the electric field, and the field magnitude, from the charge distribution.


    field = charges.getField(x, y, z);
    f = Math.sqrt(field[0] * field[0] + field[1] * field[1] + field[2] * field[2]);

We then advance the line along, or contrary to, the electric field; the value of sgn is ±1 according to which direction we are tracing. Further, ds is very small, as we expect the field line to be continuously curved, and we want that curve to remain smooth even as the user zooms in.


    x += sgn * field[0] / f * ds;
    y += sgn * field[1] / f * ds;
    z += sgn * field[2] / f * ds;

Once we have the point along the field line, we push it to the field line object.


    fieldLine.pushPoint(x, y, z);

The field line object encapsulates the first bit of our line drawing. It generates the vertex attributes formatted for our shaders.

The vertex buffer contents we need to construct one vertex.

The FieldLine generates a line segment for each point pushed onto it after the first. This means the generation of four vertices, two for the beginning of the line segment


    pushVertex(lastPoint.x, lastPoint.y, lastPoint.z, // The current point
               x, y, z,                               // The other point
               +1.0);                                 // Displacement direction

    pushVertex(lastPoint.x, lastPoint.y, lastPoint.z, // The current point
               x, y, z,                               // The other point
               -1.0);                                 // The opposite displacement direction

and another two for the other end of the line segment.


    pushVertex(x, y, z,
               lastPoint.x, lastPoint.y, lastPoint.z,
               -1.0);

    pushVertex(x, y, z,
               lastPoint.x, lastPoint.y, lastPoint.z,
               +1.0);

Once we have accumulated all the points on the line, we use them to create a FieldLineVBO instance. FieldLineVBO is a thin layer over GLUtility. Initially, it uses GLUtility.createBuffer to load data into a GPU buffer. Later, if we change the field line, we call FieldLineVBO's reload method to refresh the buffer with new data.
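
The details of FieldLineVBO are not shown here, but conceptually it amounts to something like the following sketch; the getVertices method, the npoints property, and the createBuffer call are assumptions for illustration, not the actual VField code.


    // A conceptual sketch only; the real FieldLineVBO and GLUtility may differ in detail.
    function FieldLineVBO(glUtility, gl, fieldLine) {
      // Load the interleaved vertex data into a GPU buffer once, up front.
      this.npoints               = fieldLine.npoints;
      this.fieldLineBufferHandle = glUtility.createBuffer(fieldLine.getVertices());

      // Refresh the same buffer if the field line is recomputed.
      this.reload = function(fieldLine) {
        this.npoints = fieldLine.npoints;
        gl.bindBuffer(gl.ARRAY_BUFFER, this.fieldLineBufferHandle);
        gl.bufferData(gl.ARRAY_BUFFER, fieldLine.getVertices(), gl.DYNAMIC_DRAW);
      };
    }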

There is just one last step before we get to render the line: set up the pointers from the data to the shader attributes.

When we compiled the shader we stored the attribute locations, and now we use them to map data to those attributes. We have a utility method that handles this step for us. The glUtility.bindBuffer method combines the bindBuffer and vertexAttribPointer invocations.
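
For context, such a wrapper presumably boils down to the three standard WebGL calls for attaching a buffer to an attribute; a minimal sketch follows (the real GLUtility may differ, and enabling the attribute array here is an assumption):


    // A sketch of what a helper like glUtility.bindBuffer typically does.
    function bindBuffer(gl, buffer, attributeHandle, size, type, stride, offset) {
      // Make this VBO the active ARRAY_BUFFER.
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      // Enable the attribute (assumed to happen here rather than at shader setup).
      gl.enableVertexAttribArray(attributeHandle);
      // size values per vertex, stride bytes between vertices, offset bytes to the first value.
      gl.vertexAttribPointer(attributeHandle, size, type, false, stride, offset);
    }

Here is that utility applied to each of our three attributes.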


  // Draw the line as a set of short line segments
  // between successive points on the field line
                       // Previously loaded line data into this vbo
  glUtility.bindBuffer(fieldLineVBO.fieldLineBufferHandle,
                       // load this data into the current location attribute
                       lineProgram.currentHandle,
                       // Local copy of FieldLine.FLOATS_PER_LOCATION
                       floatsPerLocation,
                       // They are all floating point numbers
                       gl.FLOAT,
                       // bytes between consecutive values for this attribute
                       locationStride,
                       // offset - where we start reading the initial data
                       0);
  glUtility.bindBuffer(fieldLineVBO.fieldLineBufferHandle,
                       lineProgram.otherHandle,
                       floatsPerLocation,
                       gl.FLOAT,
                       locationStride,
                       // This data starts after the first location
                       floatsPerLocation*Float32Array.BYTES_PER_ELEMENT);
  glUtility.bindBuffer(fieldLineVBO.fieldLineBufferHandle,
                       lineProgram.directionHandle,
                       1,
                       gl.FLOAT,
                       locationStride,
                       // The direction data follows both locations
                       2*floatsPerLocation*Float32Array.BYTES_PER_ELEMENT);

Finally we render the line. Each point after the first adds a segment, and each segment contributes four vertices, so we draw (npoints-1)*4 vertices as a triangle strip. Seems a bit anticlimactic, doesn't it?


  gl.drawArrays(gl.TRIANGLE_STRIP, 0, (fieldLineVBO.npoints-1)*4);
  1. OK, apparently this guy.
  2. MDN on gl.lineWidth
  3. Computer graphics, where you suddenly need all that math you thought you would never use.
VField Documentation by Vizit Solutions is licensed under a Creative Commons Attribution 4.0 International License.