Numerical Instability
Hopefully we have provided you with an understanding of the foundations behind numerical methods on the GPU. We even provided a straightforward example application that generated some pretty cool visualizations. It all seems too good to be true. Unfortunately, when something seems too good to be true, it probably is. This simple finite difference implementation is numerically unstable. Even with the most carefully chosen initial conditions, as the simulation runs over time, errors will become evident and grow without bound. If we aren't careful with the initial conditions, these errors will quickly dominate.
For example, if we double the xResolution from 800 to 1600 grid points, even with a flat potential, we get some, shall we say, spectacular results. Halving the grid spacing quadruples the largest terms in the spatial second derivative, so errors in the high-frequency modes grow dramatically faster.
We can make significant improvements by revisiting our first approximation for the time derivative of the wave function.
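As a reminder, in the notation we will use for the rest of this section, with Δt as the timestep, that first approximation was
\[
\frac{\partial \psi}{\partial t} \approx \frac{\psi(x,\, t + \Delta t) - \psi(x,\, t)}{\Delta t}
\]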
This approximates the time derivative using the difference between the forward step and the current timestep, a technique known as a forward difference approximation. We can improve on this significantly with a central difference approximation. We will see, though, that this comes at a cost in memory.
The central difference approximates the time derivative using the difference between the previous and the next timestep.
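In the same notation:
\[
\frac{\partial \psi}{\partial t} \approx \frac{\psi(x,\, t + \Delta t) - \psi(x,\, t - \Delta t)}{2\,\Delta t}
\]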
Generally, we expect this to yield more reasonable results. We can think of these difference equations as approximating the derivative at the midpoint between the two points. So the forward difference really approximates the derivative at t + Δt/2, when we really want the derivative at t. The central difference provides exactly that: its two points, t − Δt and t + Δt, have their midpoint at t.
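A quick Taylor expansion (assuming ψ is smooth, and suppressing the x argument) makes the improvement precise:
\[
\frac{\psi(t + \Delta t) - \psi(t)}{\Delta t} = \frac{\partial \psi}{\partial t}\bigg|_{t} + O(\Delta t), \qquad
\frac{\psi(t + \Delta t) - \psi(t - \Delta t)}{2\,\Delta t} = \frac{\partial \psi}{\partial t}\bigg|_{t} + O(\Delta t^{2})
\]
The central difference's error shrinks with the square of the timestep rather than linearly.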
This time, we put the central difference approximation into the Schrödinger equation and isolate the ψ(x, t + Δt) term on the left.
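Carrying that through for the one-dimensional Hamiltonian, and keeping the familiar three-point stencil for the spatial second derivative, gives a result along these lines:
\[
\psi(x,\, t + \Delta t) = \psi(x,\, t - \Delta t) + \frac{2\,\Delta t}{i\hbar}\left[-\frac{\hbar^{2}}{2m}\,\frac{\psi(x + \Delta x,\, t) - 2\,\psi(x,\, t) + \psi(x - \Delta x,\, t)}{\Delta x^{2}} + V(x)\,\psi(x,\, t)\right]
\]
Note the 2Δt, inherited directly from the central difference's denominator.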
This looks almost exactly like our first pass through the Schrödinger equation, except for the first term on the right side, where instead of ψ(x, t) we have ψ(x, t − Δt).
Remember that the terms on the right all became texture reads in the code. This equation shows that we will now need to read, and therefore keep, wavefunction values for t − Δt. The wavefunction for t + Δt becomes the wavefunction from t − Δt plus adjustments from t. This gives rise to the name of the method, leapfrog, which we will cover next.
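To make those texture reads concrete, here is a minimal fragment-shader sketch of one leapfrog step, with ħ = m = 1 for simplicity. Every name in it (psiT, psiTm1, potential, dt, dx, xResolution, vTextureCoord) is an illustrative assumption, not the code from earlier in the book:

precision highp float;

uniform sampler2D psiT;      // wavefunction at time t, (re, im) in (r, g)
uniform sampler2D psiTm1;    // wavefunction at time t - dt
uniform sampler2D potential; // V(x) in the r channel
uniform float dt;            // timestep
uniform float dx;            // grid spacing
uniform float xResolution;   // number of grid points

varying vec2 vTextureCoord;

void main() {
  float texel = 1.0 / xResolution;
  vec2 here   = texture2D(psiT,  vTextureCoord).rg;
  vec2 left   = texture2D(psiT,  vTextureCoord - vec2(texel, 0.0)).rg;
  vec2 right  = texture2D(psiT,  vTextureCoord + vec2(texel, 0.0)).rg;
  vec2 prev   = texture2D(psiTm1, vTextureCoord).rg;
  float v     = texture2D(potential, vTextureCoord).r;

  // H psi at time t: -(1/2) d2psi/dx2 + V psi, via the three-point stencil.
  vec2 hPsi = -0.5 * (right - 2.0 * here + left) / (dx * dx) + v * here;

  // psi(t + dt) = psi(t - dt) + (2 dt / i) H psi(t).
  // Dividing a complex number by i maps (re, im) to (im, -re).
  vec2 next = prev + 2.0 * dt * vec2(hPsi.y, -hPsi.x);

  gl_FragColor = vec4(next, 0.0, 1.0);
}

Both psiT and psiTm1 must stay resident between steps, which is exactly the memory cost mentioned above.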