Leapfrog Verification

We stressed the need for verification with our central difference implementation. Here we repeat that analysis for the leapfrog technique, show that it yields comparable results, and examine why the results are so similar.

We apply the same process to assess the staggered time, or leapfrog, approach that we used to assess the central difference approach; otherwise, we risk putting our thumb on the scale. As before, we start with a visual comparison, which again shows no visual distinction between the two solutions. We are off to a good start.

The FDTD simulation runs while we collect the error measurements below.
The exact solution provides a comparison with the FDTD solution.

Now we proceed to collect the root-mean-square (RMS) error for ψr, ψi, and ψ*ψ. The table shows error data for every 200th frame from the animation. We use 501 FDTD steps between each frame, so this accounts for over 600,000 steps in the simulation. Over the course of this run, the ψ*ψ error quickly plateaus, while the ψr and ψi errors both grow linearly with time. This is expected: the discrete scheme propagates waves with slightly incorrect phase velocities, so phase errors accumulate over time even though the magnitude ψ*ψ remains accurate. This is an example of numerical dispersion.
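As a sketch of how such a measurement works, each entry reduces to a root-mean-square difference between the sampled FDTD wave function and the exact solution. This is a minimal NumPy illustration, not the article's actual code; the array names (psi_r, psi_i, exact_r, exact_i) are placeholders.

    import numpy as np

    def rms_error(numeric, exact):
        """Root-mean-square difference between FDTD and exact samples."""
        return np.sqrt(np.mean((numeric - exact) ** 2))

    # Collected once per animation frame, i.e. every 501 FDTD steps.
    def frame_errors(psi_r, psi_i, exact_r, exact_i):
        return (rms_error(psi_r, exact_r),
                rms_error(psi_i, exact_i),
                rms_error(psi_r**2 + psi_i**2, exact_r**2 + exact_i**2))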

The RMS error for ψr, ψi and ψ*ψ from the leapfrog implementation.
Frame   Steps    Eψr        Eψi        Eψ*ψ
    0        0   0.0000     0.0000     2.4794e-18
  200   100200   0.0021346  0.0021346  0.0021377
  400   200400   0.0042689  0.0042691  0.0027197
  600   300600   0.0064026  0.0064026  0.0026756
  800   400800   0.0085362  0.0085362  0.0025045
 1000   501000   0.010669   0.010668   0.0023283
 1200   601200   0.012800   0.012800   0.0021721

This table is very close to the corresponding table for the central difference implementation.

The RMS error for ψr, ψi and ψ*ψ from the central difference implementation.
Frame   Steps    Eψr        Eψi        Eψ*ψ
    0        0   0.0000     0.0000     3.1811e-18
  200   100200   0.0021347  0.0021347  0.0021383
  400   200400   0.0042693  0.0042693  0.0027217
  600   300600   0.0064031  0.0064031  0.0026787
  800   400800   0.0085367  0.0085367  0.0025088
 1000   501000   0.010669   0.010669   0.0023335
 1200   601200   0.012800   0.012800   0.0021778

There is a significant similarity between the central difference and the staggered time approaches. Each starts with the wave function at a base time, computes derivatives using a wave function at an intermediate time, then combines these to compute the wave function at a more advanced time. It is this skipping over the intermediate time wave function that gives rise to the term leapfrog.
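To make the staggering concrete, here is a minimal sketch of the scheme for the free-particle Schrödinger equation in one dimension, with ħ = m = 1 and V = 0. This is a NumPy illustration, not the article's shader code, and the grid and time step values are illustrative choices. ψr lives on integer time steps, ψi on half-integer steps, and each update leaps over the other component's time point.

    import numpy as np

    nx, dx = 1024, 0.1
    dt = 0.5 * dx * dx            # stability requires dt <= dx^2 here

    def hamiltonian(psi):
        """H psi = -0.5 d2psi/dx2, interior points only (fixed edges)."""
        h = np.zeros_like(psi)
        h[1:-1] = -0.5 * (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
        return h

    def leapfrog_step(psi_r, psi_i):
        """On entry psi_r is at time t, psi_i at t + dt/2."""
        psi_r = psi_r + dt * hamiltonian(psi_i)   # advance psi_r to t + dt
        psi_i = psi_i - dt * hamiltonian(psi_r)   # advance psi_i to t + 3dt/2
        return psi_r, psi_i

Each component is advanced using the other component evaluated halfway between its own time levels, which is exactly the base time, intermediate time, advanced time pattern described above.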

We see that the central difference and the staggered time approaches are schematically similar, and differ mostly in the arrangement of the data.

Task Manager

We also take a peek at the task manager to compare the staggered time performance against the earlier approaches. Surprisingly, they are very different.

Staggered Time Performance

The graphics engine does significantly more work for the staggered time approach, while the memory copy engine has a much more leisurely time. This is likely because the staggered time approach makes two invocations of the compute shader for every time step but does not shuffle the buffers around. We have not yet even added the boundary conditions to the staggered time implementation.
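The per-step structure of the two implementations might be sketched as follows. This is illustrative pseudocode only; dispatch() stands in for a compute shader pass, and the shader and buffer names are hypothetical, not those of the actual code.

    # Central difference: one compute pass per step, then the three
    # buffers trade roles; in the GPU implementation this shuffling
    # shows up as memory copy engine traffic.
    def central_difference_step(dispatch, prev, curr, nxt):
        dispatch("centralDifference", inputs=(prev, curr), output=nxt)
        return curr, nxt, prev

    # Staggered time: two compute passes per step, each updating one
    # component in place, so more compute engine work but much less
    # for the memory copy engine.
    def staggered_time_step(dispatch, psi_r, psi_i):
        dispatch("updateRealPart", inputs=(psi_i,), output=psi_r)
        dispatch("updateImaginaryPart", inputs=(psi_r,), output=psi_i)
        return psi_r, psi_i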

The graphics engine utilization peaks at just over 40%, then levels off at about 38% for a while.
The memory copy engine is doing much less work.

Central Difference Performance

These differences present a classic trade-off encountered frequently in software engineering: both approaches yield similar overall performance, while one makes more demands on memory and the other makes more demands on compute resources.

The graphics engine utilization peaks at just over 25%.
The memory copy engine is doing significantly more work than for the staggered time implementation.
This work is licensed under a Creative Commons Attribution 4.0 International License.