The Schrödinger Equation on a GPU

This section, indeed this entire set of pages, is a work in progress. While the concepts are sound, there are certainly adjustments to be made to improve clarity and to address a few small issues of correctness.

Both quantum mechanics and GPGPU computing are complex and difficult topics. Appropriately, we will use each to shed light on the other. The Schrödinger equation will provide a specific and valuable application of GPGPU computing, and GPGPU computing will in turn provide interactive simulations and visualizations where you can experiment with and explore the sometimes highly counterintuitive world of quantum mechanics.

Luckily, one of the most popular and straightforward numerical methods, the finite difference method, is particularly well suited to implementations on the GPU.

The finite difference time domain (FDTD) method is wonderfully simple, and can be taught at the undergraduate or early graduate level. Yet it is capable of solving extremely sophisticated engineering problems.

We will work through the process of developing a shader-based FDTD treatment with enough detail to provide a good introduction, and hopefully enough insight to apply these methods to other topics such as thermodynamics, electrodynamics, and fluid dynamics.

Let's start with the one dimensional time dependent Schrödinger equation.

$$ i\hbar\,\frac{\partial}{\partial t}\Psi(x,t) = -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2}\Psi(x,t) + V(x)\,\Psi(x,t) $$

The Schrödinger equation follows a pattern that we see over and over in physics and engineering. On one side we have a time derivative, and on the other we have spatial derivatives. This means we write our evolution in time as a function of differences over space. If we take each texture as containing the values of our function over space at a specific time, then each rendering from one texture to another is a step forward in time.

Finite Difference Concepts

What exactly do we mean when "we take each texture as containing the values of our function over space at a specific time"? We are looking at a 1D Schrödinger equation, so we model it with discrete values on a 1D grid. In terms of variables in our code, the grid has xResolution points, with x varying from 0 to length in increments of $\delta = $ length/xResolution between texels.

A 1D grid

We have to be careful putting the Schrödinger equation on the grid. The $i$ in the Schrödinger equation indicates that $\Psi(x,t)$ is a complex function, that it has both real and imaginary parts. Remember, though, that each texel has room for red, green, blue and alpha values. We will put the real part of the wave function in the red component of the texel, and the imaginary part in the green component of the texel.

A 1D grid with real and imaginary parts

We approximate the continuous function $\Psi(x,t)$ with values at discrete points at a fixed time on our grid. For the derivatives we use differences between points on the grid, hence the name of the technique we are using, finite differences.
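As a concrete illustration of this layout, here is a sketch of how the initial data for such a grid might be built on the JavaScript side. The function name and the Gaussian wave packet parameters are purely illustrative assumptions, not part of the actual code.

  // Hypothetical sketch: build initial Re/Im values for the wave function
  // on a grid of xResolution points spanning [0, length].
  // Re(Psi) goes in the red channel, Im(Psi) in the green channel.
  function initialWaveFunction(xResolution, length)
  {
    var data  = new Float32Array(4*xResolution); // RGBA per texel
    var delta = length/xResolution;              // grid spacing
    var x0    = 0.5*length;                      // packet center (assumed)
    var sigma = 0.05*length;                     // packet width (assumed)
    var k0    = 100.0/length;                    // packet wavenumber (assumed)

    for (var j = 0; j < xResolution; j++)
    {
      var x        = j*delta;
      var envelope = Math.exp(-0.5*((x - x0)/sigma)*((x - x0)/sigma));
      data[4*j]    = envelope*Math.cos(k0*x);    // red   = Re(Psi)
      data[4*j+1]  = envelope*Math.sin(k0*x);    // green = Im(Psi)
      // The blue and alpha components are unused.
    }
    return data;
  }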

Now, we can start toward a finite difference approximation to the Schrödinger equation. Start by writing out an approximation to the time derivative.

$$ \frac{\partial}{\partial t}\Psi(x,t) \approx \frac{\Psi(x,t+\Delta t) - \Psi(x,t)}{\Delta t} $$

This is almost the familiar definition of the first derivative, but without the expected $\Delta t \to 0$ limit. Without the limit it is clearly an approximation. The exact error can be found from the Taylor series expansion for $\Psi(x,t)$, and is readily shown to be $O\!\left(\frac{\Delta t}{2}\,\frac{\partial^2}{\partial t^2}\Psi(x,t)\right)$. As we expect, we can improve the quality of the approximation by choosing a smaller $\Delta t$, making successive textures closer together in time.
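To see where this error estimate comes from, expand $\Psi(x,t+\Delta t)$ in a Taylor series about $t$:

$$ \Psi(x,t+\Delta t) = \Psi(x,t) + \Delta t\,\frac{\partial}{\partial t}\Psi(x,t) + \frac{\Delta t^2}{2}\,\frac{\partial^2}{\partial t^2}\Psi(x,t) + \cdots $$

so that

$$ \frac{\Psi(x,t+\Delta t) - \Psi(x,t)}{\Delta t} = \frac{\partial}{\partial t}\Psi(x,t) + \frac{\Delta t}{2}\,\frac{\partial^2}{\partial t^2}\Psi(x,t) + \cdots $$

The difference quotient differs from the true derivative by a term proportional to $\Delta t$, which is exactly the error term quoted above.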

Putting this into the Schrödinger equation and isolating the $t+\Delta t$ term on the left, while setting $\hbar$ and $m$ to $1$ for simplicity, we get

$$ i\,\Psi(x,t+\Delta t) \approx i\,\Psi(x,t) - \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\Psi(x,t)\,\Delta t + V(x)\,\Psi(x,t)\,\Delta t $$

This shows explicitly that the value for a texel, assigned to gl_FragColor within a shader, is the current value for the texel, modified according to the values of the surrounding texels.

Now that we have a grid with explicit real and imaginary parts, we break up the wave function and then the wave equation into real and imaginary parts.

$$ \Psi(x,t) = \operatorname{Re}(\Psi(x,t)) + i\,\operatorname{Im}(\Psi(x,t)) $$

We put this into the Schrödinger equation, once again setting $\hbar$ and $m$ to $1$, then gather the real and imaginary parts into their own equations.

$$ \frac{\partial}{\partial t}\operatorname{Re}(\Psi(x,t)) = -\frac{1}{2}\,\frac{\partial^2}{\partial x^2}\operatorname{Im}(\Psi(x,t)) + V(x)\,\operatorname{Im}(\Psi(x,t)) $$

$$ \frac{\partial}{\partial t}\operatorname{Im}(\Psi(x,t)) = \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\operatorname{Re}(\Psi(x,t)) - V(x)\,\operatorname{Re}(\Psi(x,t)) $$
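Filling in the intermediate step: substituting $\Psi = \operatorname{Re}(\Psi) + i\,\operatorname{Im}(\Psi)$ into the Schrödinger equation gives

$$ i\,\frac{\partial}{\partial t}\operatorname{Re}(\Psi) - \frac{\partial}{\partial t}\operatorname{Im}(\Psi) = -\frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl(\operatorname{Re}(\Psi) + i\,\operatorname{Im}(\Psi)\bigr) + V(x)\,\bigl(\operatorname{Re}(\Psi) + i\,\operatorname{Im}(\Psi)\bigr) $$

Equating the real parts of the two sides yields the equation for $\frac{\partial}{\partial t}\operatorname{Im}(\Psi)$, and equating the imaginary parts yields the equation for $\frac{\partial}{\partial t}\operatorname{Re}(\Psi)$.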

To understand how to manifest these equations on a grid, focus on the $\frac{\partial^2}{\partial x^2}$ operator. As it turns out, it's actually pretty straightforward.

$$ \frac{\partial^2}{\partial x^2}\Psi(x,t) = \frac{\partial}{\partial x}\!\left(\frac{\partial}{\partial x}\Psi(x,t)\right) \approx \frac{\partial}{\partial x}\!\left(\frac{\Psi(x+\Delta x,t) - \Psi(x,t)}{\Delta x}\right) $$

$$ \approx \frac{\dfrac{\Psi(x+\Delta x,t) - \Psi(x,t)}{\Delta x} - \dfrac{\Psi(x,t) - \Psi(x-\Delta x,t)}{\Delta x}}{\Delta x} = \frac{\Psi(x+\Delta x,t) - 2\,\Psi(x,t) + \Psi(x-\Delta x,t)}{\Delta x^2} $$
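The same Taylor series argument shows how good this approximation is. Expanding $\Psi(x \pm \Delta x, t)$ about $x$,

$$ \Psi(x \pm \Delta x, t) = \Psi(x,t) \pm \Delta x\,\frac{\partial\Psi}{\partial x} + \frac{\Delta x^2}{2}\,\frac{\partial^2\Psi}{\partial x^2} \pm \frac{\Delta x^3}{6}\,\frac{\partial^3\Psi}{\partial x^3} + O(\Delta x^4) $$

and adding the two expansions, the odd-order terms cancel, leaving

$$ \frac{\Psi(x+\Delta x,t) - 2\,\Psi(x,t) + \Psi(x-\Delta x,t)}{\Delta x^2} = \frac{\partial^2\Psi}{\partial x^2} + O(\Delta x^2) $$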

Now we have approximations of both the time and space derivatives in terms of values on our grid.

$$ \operatorname{Re}(\Psi(x,t+\Delta t)) \approx \operatorname{Re}(\Psi(x,t)) - \frac{1}{2}\,\frac{\operatorname{Im}(\Psi(x+\Delta x,t)) - 2\operatorname{Im}(\Psi(x,t)) + \operatorname{Im}(\Psi(x-\Delta x,t))}{\Delta x^2}\,\Delta t + V(x)\,\operatorname{Im}(\Psi(x,t))\,\Delta t $$

$$ \operatorname{Im}(\Psi(x,t+\Delta t)) \approx \operatorname{Im}(\Psi(x,t)) + \frac{1}{2}\,\frac{\operatorname{Re}(\Psi(x+\Delta x,t)) - 2\operatorname{Re}(\Psi(x,t)) + \operatorname{Re}(\Psi(x-\Delta x,t))}{\Delta x^2}\,\Delta t - V(x)\,\operatorname{Re}(\Psi(x,t))\,\Delta t $$
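Before moving to shader code, it can help to see the same update written as plain JavaScript running on the CPU. This is only a sanity-check sketch, not part of the actual program; the function and array names are assumptions.

  // CPU sketch of one explicit timestep. Assumes re, im, and v are
  // Float32Arrays of length xResolution holding Re(Psi), Im(Psi), and V,
  // and that dx and dt are the space and time steps.
  function cpuTimestep(re, im, v, dx, dt)
  {
    var n     = re.length;
    var newRe = new Float32Array(n);
    var newIm = new Float32Array(n);

    // Skip the endpoints; boundary conditions are a separate choice.
    for (var j = 1; j < n-1; j++)
    {
      var lapIm = (im[j+1] - 2.0*im[j] + im[j-1])/(dx*dx);
      var lapRe = (re[j+1] - 2.0*re[j] + re[j-1])/(dx*dx);

      newRe[j] = re[j] - (0.5*lapIm - v[j]*im[j])*dt;
      newIm[j] = im[j] + (0.5*lapRe - v[j]*re[j])*dt;
    }
    return { re: newRe, im: newIm };
  }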

Now we can look at how to write the shader code. The output from a fragment shader is an RGBA color value encapsulated in gl_FragColor. We set up our rendering target so that the value from gl_FragColor is assigned to a texel. When our mathematics makes an assignment to $\operatorname{Re}(\Psi(x,t+\Delta t))$, the code makes an assignment to gl_FragColor.r. Assignments to $\operatorname{Im}(\Psi(x,t+\Delta t))$ map to gl_FragColor.g. gl_FragColor is then passed on as the value for the current fragment, or texel.

Every reference to $\operatorname{Re}(\Psi(x,t))$ is an access to texture2D(waveFunction, vTextureCoord).r. If you peek at our code you will see we pass the current values for the wave function into the fragment shader as uniform sampler2D waveFunction and the texture coordinates as varying vec2 vTextureCoord. But why a 2D texture? WebGL is based on OpenGL ES 2, which does not support 1D textures, so we simply use a texture that is one pixel high.

The $x + \Delta x$ terms are also easily related to the texture coordinates. $\Delta x$ is the spacing to the next element of the texture, so it corresponds to an offset of the texture coordinate by 1.0/float(xResolution), where uniform int xResolution is the number of pixels, or texels, in the x direction.

We also provide the potential, $V(x)$, as a texture, uniform sampler2D potential;.
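As an aside, here is a sketch of how such a potential texture might be populated on the JavaScript side. It assumes a WebGL 1 context with the OES_texture_float extension enabled, and uses a square well in the middle of the grid purely for illustration; the function name and the well parameters are not from the actual code.

  // Hypothetical sketch: fill a width x 1 float texture with V(x) in the
  // red channel. Requires gl.getExtension('OES_texture_float').
  function createPotentialTexture(gl, xResolution, length)
  {
    var data = new Float32Array(4*xResolution);   // RGBA per texel

    for (var j = 0; j < xResolution; j++)
    {
      var x = j*length/xResolution;
      // An illustrative square well centered on the grid.
      data[4*j] = (Math.abs(x - 0.5*length) < 0.1*length) ? -1.0 : 0.0;
    }

    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, xResolution, 1, 0, gl.RGBA, gl.FLOAT, data);
    return texture;
  }

With the wave function, the potential, and the texture coordinates in hand, the update equations translate almost directly into shader code.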


  // dx is the grid spacing in physical units, ds is the texture-coordinate
  // offset from one texel to the next.
  dx = length/float(xResolution);
  ds = vec2(1.0/float(xResolution), 0.0);

  gl_FragColor.r = texture2D(waveFunction, vTextureCoord).r
                   - (0.5 * (texture2D(waveFunction, vTextureCoord+ds).g
                             - 2.0*texture2D(waveFunction, vTextureCoord).g
                             + texture2D(waveFunction, vTextureCoord-ds).g)/(dx*dx)
                   - texture2D(potential, vTextureCoord).r*texture2D(waveFunction, vTextureCoord).g)*dt;

  gl_FragColor.g = texture2D(waveFunction, vTextureCoord).g
                   + (0.5 * (texture2D(waveFunction, vTextureCoord+ds).r
                             - 2.0*texture2D(waveFunction, vTextureCoord).r
                             + texture2D(waveFunction, vTextureCoord-ds).r)/(dx*dx)
                   - texture2D(potential, vTextureCoord).r*texture2D(waveFunction, vTextureCoord).r)*dt;
      

You may notice that this still does not look like the expression in Schrodinger.js. The code uses a technique called swizzling, using explicit references to select or rearrange the components of a vector. Here, because of the swizzling, both the left and right sides of this expression are vec2 expressions.


  // Vector to mix the real and imaginary parts in the wave function update.
  const vec2 mixing = vec2(-1.0, +1.0);

  dx = length/float(xResolution);
  ds = vec2(1.0/float(xResolution), 0.0);

  gl_FragColor.rg = texture2D(waveFunction, vTextureCoord).rg
                    + (0.5*(texture2D(waveFunction, vTextureCoord+ds).gr
                            - 2.0*texture2D(waveFunction, vTextureCoord).gr
                            + texture2D(waveFunction, vTextureCoord-ds).gr)/(dx*dx)
                    - texture2D(potential, vTextureCoord).r*texture2D(waveFunction, vTextureCoord).gr)*mixing*dt;
      

Our entire fragment shader is then


  #ifdef GL_FRAGMENT_PRECISION_HIGH
  precision highp float;
  #else
  precision mediump float;
  #endif

  // The delta-t for each timestep.
  uniform float dt;
  // The physical length of the grid in nm.
  uniform float length;
  // The number of points along the x axis.
  uniform int xResolution;

  // waveFunction.r is the real part, waveFunction.g is the imaginary part.
  uniform sampler2D waveFunction;
  // Discrete representation of the potential function.
  uniform sampler2D potential;

  // Vector to mix the real and imaginary parts in the wave function update.
  const vec2 mixing = vec2(-1.0, +1.0);

  varying vec2 vTextureCoord;

  void main()
  {
    float dx;
    vec2  ds;
    vec4  value;

    dx    = length/float(xResolution);
    ds    = vec2(1.0/float(xResolution), 0.0);
    value = texture2D(waveFunction, vTextureCoord);

    gl_FragColor.rg =  value.rg
                       + (0.5*(texture2D(waveFunction, vTextureCoord+ds).gr
                                    - 2.0*value.gr
                                    + texture2D(waveFunction, vTextureCoord-ds).gr)/(dx*dx)
                          - texture2D(potential, vTextureCoord).r*value.gr)*mixing*dt;
  }
      

At the start of each step, the input texture contains values for $\Psi(x,t)$. The shaders use this texture as input to calculate $\Psi(x,t+\Delta t)$, so that at the end of a step the output texture is populated with $\Psi(x,t+\Delta t)$.

At this point we no longer need the original $\Psi(x,t)$ texture. We swap textures and use the $\Psi(x,t+\Delta t)$ texture as input, and write $\Psi(x,t+2\Delta t)$ into the texture that previously held the $\Psi(x,t)$ values.

After every step, we recycle the texture holding the previous timestep as the target for the next timestep. This technique of switching textures back and forth as source and target for our calculations is well known as the ping pong technique.
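The fbos and textures arrays used below are created during setup. A minimal sketch of what that setup might look like, assuming a WebGL 1 context with OES_texture_float and an implementation that supports rendering to floating point textures (the helper name is hypothetical):

  // Hypothetical setup for ping ponging: two float textures, each attached
  // to its own framebuffer, sized xResolution x 1 to hold the 1D grid.
  function createPingPongTargets(gl, xResolution, initialData)
  {
    var textures = [];
    var fbos     = [];

    for (var i = 0; i < 2; i++)
    {
      var texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      // Both textures start with the initial wave function; only one is read first.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, xResolution, 1, 0, gl.RGBA, gl.FLOAT, initialData);

      var fbo = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);

      textures.push(texture);
      fbos.push(fbo);
    }
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    return { textures: textures, fbos: fbos };
  }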

We implement the ping pong in the timestep method. Every time timestep is called, the step is toggled between zero and one.


  /**
   * Runs the program to do the actual work. On exit the framebuffer &
   * texture are populated with the next timestep of the wave function.
   * You can use gl.readPixels to retrieve texture values.
   */
  this.timestep = function()
  {
    gl.useProgram(program);

    gl.bindFramebuffer(gl.FRAMEBUFFER, fbos[step]);

    // Step switches back and forth between 0 and 1,
    // ping ponging the source & destination textures.
    step = (step+1)%2;

    gpgpUtility.getStandardVertices();

    gl.vertexAttribPointer(positionHandle,     3, gl.FLOAT, gl.FALSE, 20, 0);
    gl.vertexAttribPointer(textureCoordHandle, 2, gl.FLOAT, gl.FALSE, 20, 12);

    gl.uniform1i(xResolutionHandle, xResolution);
    gl.uniform1f(lengthHandle, length);

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, textures[step]);
    gl.uniform1i(waveFunctionHandle, 0);

    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, potential);
    gl.uniform1i(potentialHandle, 1);

    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
  };
      
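To drive the simulation, timestep is called repeatedly, typically several times per rendered frame. A rough sketch of such a driver loop, where the schrodinger object and the number of steps per frame are assumptions for illustration:

  // Hypothetical driver: assume the timestep method above belongs to an
  // object named schrodinger, and advance several steps per rendered frame.
  var stepsPerFrame = 10;

  function animate()
  {
    for (var i = 0; i < stepsPerFrame; i++)
    {
      schrodinger.timestep();
    }
    // Visualization is a separate concern: another shader, or gl.readPixels,
    // can consume the texture that now holds the latest wave function.
    requestAnimationFrame(animate);
  }
  requestAnimationFrame(animate);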
This work is licensed under a Creative Commons Attribution 4.0 International License.