Numerical Instability

Hopefully we have provided you with an understanding of the foundations behind numerical methods on the GPU. We even provided a straightforward example application that generated some pretty cool visualizations. It all seems too good to be true. Unfortunately, when something seems too good to be true, it probably is. This simple finite difference implementation is numerically unstable. Even with the most carefully chosen initial conditions, as the simulation runs over time, errors will become evident and grow without bound. If we aren't careful with the initial conditions, these errors will quickly dominate.

For example, if we double the xResolution from 800 to 1600 grid points, even with a flat potential, we get some, shall we say, spectacular results.

Ψ(x, t) explodes when we halve the spacing between grid points.
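You can see the same behavior without any GPU code at all. The Python sketch below applies the same forward-difference update to a Gaussian wave packet with a flat potential; the function name run_forward_euler, the grid length, timestep, and step count are all invented for this illustration rather than taken from the simulation above. With these parameters the norm of Ψ should stay close to one at 800 grid points over this run, while at 1600 points it blows up.

```python
import numpy as np

def run_forward_euler(x_resolution, steps=2000, length=40.0, dt=1e-4):
    """Forward-difference (forward Euler) update for i dPsi/dt = -d^2Psi/dx^2
    with a flat potential V = 0.  Returns the norm of Psi after `steps` steps."""
    dx = length / x_resolution
    x = np.linspace(-length / 2, length / 2, x_resolution)
    # A smooth, carefully chosen initial condition: a normalized Gaussian packet.
    psi = np.exp(-x**2) * np.exp(1j * 2.0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    for _ in range(steps):
        # Second spatial derivative from the usual three-point stencil (periodic edges).
        d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
        # Psi(x, t + dt) = Psi(x, t) + i dt d2Psi/dx2  (forward difference in time)
        psi = psi + 1j * dt * d2psi

    return np.sqrt(np.sum(np.abs(psi)**2) * dx)

for res in (800, 1600):
    print(res, run_forward_euler(res))
```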

We can make significant improvements by revisiting our first approximation for the time derivative of the wave function.

\[
\frac{\partial}{\partial t}\Psi(x,\,t) \approx \frac{\Psi(x,\,t+\Delta t) - \Psi(x,\,t)}{\Delta t}
\]

This approximates the time derivative using the difference between the next timestep and the current timestep, a technique known as a forward difference approximation. We can improve on this significantly with a central difference approximation. We will see, though, that this comes at a cost in memory.
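To see the difference concretely, here is a small Python check (not part of the simulation code) that applies both formulas to sin(t), whose exact derivative is cos(t). Halving the step roughly halves the forward-difference error but quarters the central-difference error.

```python
import numpy as np

t, exact = 1.0, np.cos(1.0)  # exact derivative of sin(t) at t = 1

for dt in (0.1, 0.05, 0.025):
    forward = (np.sin(t + dt) - np.sin(t)) / dt             # first order in dt
    central = (np.sin(t + dt) - np.sin(t - dt)) / (2 * dt)  # second order in dt
    print(f"dt={dt:<6} forward error={abs(forward - exact):.2e}  "
          f"central error={abs(central - exact):.2e}")
```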

The central difference approximates the time derivative using the difference between the previous and the next timestep.

\[
\frac{\partial}{\partial t}\Psi(x,\,t) \approx \frac{\Psi(x,\,t+\Delta t) - \Psi(x,\,t-\Delta t)}{2\,\Delta t}
\]

Generally, we expect this to yield more reasonable results. We can think of these difference equations as approximating the derivative at the midpoint between the two points they use. So the forward difference really approximates the derivative at t + Δt/2, when what we want is the derivative at t. This is what the central difference provides.
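To make that intuition precise, we can Taylor expand Ψ about t. This is the standard argument, included here for reference rather than taken from the figure below:

\[
\frac{\Psi(x,\,t+\Delta t) - \Psi(x,\,t)}{\Delta t}
  = \frac{\partial \Psi}{\partial t}
  + \frac{\Delta t}{2}\,\frac{\partial^{2} \Psi}{\partial t^{2}}
  + O\!\left(\Delta t^{2}\right)
\]

\[
\frac{\Psi(x,\,t+\Delta t) - \Psi(x,\,t-\Delta t)}{2\,\Delta t}
  = \frac{\partial \Psi}{\partial t}
  + \frac{\Delta t^{2}}{6}\,\frac{\partial^{3} \Psi}{\partial t^{3}}
  + O\!\left(\Delta t^{4}\right)
\]

The even powers of Δt cancel in the central difference, so its error falls off as Δt² rather than Δt; cutting the timestep in half cuts the central difference error by roughly a factor of four.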

The forward difference, central difference, and exact tangent to Ψ(x, t) around a time t.

This time, we put the central difference approximation into the Schrödinger equation and isolate the t + Δt term on the left.

\[
i\,\Psi(x,\,t+\Delta t) \approx i\,\Psi(x,\,t-\Delta t)
  - \frac{\partial^{2}}{\partial x^{2}}\Psi(x,\,t)\,2\,\Delta t
  + V(x)\,\Psi(x,\,t)\,2\,\Delta t
\]

This looks almost exactly like our first pass through the Schrödinger equation, except that on the right side the first term is now Ψ(x, t − Δt) instead of Ψ(x, t), and the remaining terms are multiplied by 2Δt rather than Δt.

Remember that the terms on the right all became texture reads in the code. This equation shows that we will now need to read, and therefore keep, wave function values for t − Δt. The wave function for t + Δt becomes the wave function from t − Δt plus adjustments computed at t. This gives rise to the name of the method, leapfrog, which we will cover next.
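As a rough preview, here is what that update looks like on the CPU in NumPy. This is a sketch of the scheme only, not the tutorial's texture-based implementation: the helper leapfrog_step, the grid size, timestep, potential, and the single forward-difference bootstrap step are all illustrative assumptions.

```python
import numpy as np

def leapfrog_step(psi_prev, psi_now, V, dx, dt):
    """One leapfrog step for i dPsi/dt = -d^2Psi/dx^2 + V Psi.

    Psi(x, t + dt) is built from Psi(x, t - dt) plus adjustments computed
    at time t, so two earlier time levels must be kept around: on the GPU
    that means an extra texture; here it is simply an extra array."""
    d2psi = (np.roll(psi_now, -1) - 2 * psi_now + np.roll(psi_now, 1)) / dx**2
    return psi_prev + 2j * dt * (d2psi - V * psi_now)

# Illustrative setup: a Gaussian packet on a periodic grid with a flat potential.
n, length, dt = 800, 40.0, 1e-4
dx = length / n
x = np.linspace(-length / 2, length / 2, n)
V = np.zeros(n)
psi_prev = np.exp(-x**2) * np.exp(1j * 2.0 * x)

# Bootstrap: one forward-difference step supplies the second time level.
d2 = (np.roll(psi_prev, -1) - 2 * psi_prev + np.roll(psi_prev, 1)) / dx**2
psi_now = psi_prev + 1j * dt * (d2 - V * psi_prev)

for _ in range(5000):
    psi_prev, psi_now = psi_now, leapfrog_step(psi_prev, psi_now, V, dx, dt)

# Norm stays of order one for these parameters instead of blowing up.
print(np.sqrt(np.sum(np.abs(psi_now)**2) * dx))
```

The two arrays psi_prev and psi_now play the role that the extra wave function texture would play on the GPU: the update reads both of them and writes a third.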

This work is licensed under a Creative Commons Attribution 4.0 International License.