This is a follow-up to the post I wrote a while back about implementing numerical methods. The goal is to demonstrate a workflow using tools that are included in the Fedora Linux (and many other) distributions:
- In Maxima:
  - Define the governing equations
  - Substitute finite difference expressions for the differential expressions in the governing equations
  - Output the resulting difference expressions in Fortran form using f90()
- In Fortran: wrap the expressions generated by Maxima in appropriate functions, subroutines and modules
- In Python:
  - Compile the Fortran modules with f2py, which comes packaged along with numpy
  - Write the high-level application code: parsing input decks, performing optimization, grid convergence studies, or visualizations that use the compiled Fortran modules
Maxima
The governing equation is the three-dimensional Poisson's equation, which in Cartesian coordinates reads

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = -f
The Maxima code to define this equation is straightforward:
depends(u,x);
depends(u,y);
depends(u,z);
/* Poisson's equation: */
p : 'diff(u,x,2) + 'diff(u,y,2) + 'diff(u,z,2) = -f;
Now that we have the continuous differential equation defined, we can decide what sort of discrete approximation to solve. The simple thing to do with second derivatives is to use central differences of second-order accuracy, which require only information from the closest neighbours in the finite difference grid. Replacing the differential expressions with difference expressions is accomplished in Maxima with the ratsubst() function.
/* substitute difference expressions for differential ones: */
p : ratsubst((u[i-1,j,k] - 2*u[i,j,k] + u[i+1,j,k])/dx**2 , 'diff(u,x,2), p);
p : ratsubst((u[i,j-1,k] - 2*u[i,j,k] + u[i,j+1,k])/dy**2 , 'diff(u,y,2), p);
p : ratsubst((u[i,j,k-1] - 2*u[i,j,k] + u[i,j,k+1])/dz**2 , 'diff(u,z,2), p);
/* substitute the correct array value of f:*/
p : ratsubst(f[i,j,k], f, p);
This assumes that the solution u and the forcing function f are stored in three-dimensional arrays; if not, the indexing in the difference expressions becomes more complicated. If we want to apply a Gauss-Seidel method to solve our elliptic problem, we just need to solve p for u[i,j,k] and then dump that expression to Fortran format.
load(f90)$ /* f90() is provided by Maxima's f90 share package */
gs : solve(p, u[i,j,k]);
del : part(gs[1],2) - u[i,j,k];
/* output for fortran: */
with_stdout("gs.f90", f90(d = expand(del)));
This gives the expression for the update to the solution at each iteration of the solver.
Fortran
The Maxima expressions developed above need control flow added so they get applied properly to our solution array.
do k = 1, nz
  do j = 1, ny
    do i = 1, nx
      d = (dy**2*dz**2*u(i+1,j,k)+dx**2*dz**2*u(i,j+1,k)+dx**2*dy**2* &
           u(i,j,k+1)+dx**2*dy**2*dz**2*f(i,j,k)+dx**2*dy**2* &
           u(i,j,k-1)+dx**2*dz**2*u(i,j-1,k)+dy**2*dz**2*u(i-1,j,k))/ &
          ((2*dy**2+2*dx**2)*dz**2+2*dx**2*dy**2) - u(i,j,k)
      u(i,j,k) = u(i,j,k) + w*d
    end do
  end do
end do
The expression for the update was generated from our governing equation in Maxima. One thing to notice is the w that multiplies the update; this factor lets us do successive over-relaxation. We'll also go ahead and package several of these iterative schemes into a Fortran module:
module iter_schemes
implicit none
contains
...bunches of related subroutines and functions ...
end module iter_schemes
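To make the wrapping step concrete, here is a rough sketch of what the module might look like with one of those subroutines filled in. The subroutine name, its argument list, and the choice to keep boundary values in the outermost planes of u (so only interior points are swept) are my own assumptions, not taken from the original code:

module iter_schemes
  implicit none
contains

  subroutine sor_sweep(u, f, dx, dy, dz, w, nx, ny, nz)
    ! One successive over-relaxation sweep using the Maxima-generated update.
    ! Boundary values are assumed to sit in the first and last planes of u,
    ! so only interior points are updated.
    integer, intent(in) :: nx, ny, nz
    real(8), intent(in) :: dx, dy, dz, w
    real(8), intent(in) :: f(nx,ny,nz)
    real(8), intent(inout) :: u(nx,ny,nz)
    integer :: i, j, k
    real(8) :: d
    do k = 2, nz-1
      do j = 2, ny-1
        do i = 2, nx-1
          d = (dy**2*dz**2*u(i+1,j,k)+dx**2*dz**2*u(i,j+1,k)+dx**2*dy**2* &
               u(i,j,k+1)+dx**2*dy**2*dz**2*f(i,j,k)+dx**2*dy**2* &
               u(i,j,k-1)+dx**2*dz**2*u(i,j-1,k)+dy**2*dz**2*u(i-1,j,k))/ &
              ((2*dy**2+2*dx**2)*dz**2+2*dx**2*dy**2) - u(i,j,k)
          u(i,j,k) = u(i,j,k) + w*d
        end do
      end do
    end do
  end subroutine sor_sweep

end module iter_schemes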
Python
Now we want to be able to call our Fortran solution schemes from Python. Using F2py makes this quite simple:
[command-line]$ f2py -m iter_schemes -c iter_schemes.f90
This should result in a Python module, iter_schemes.so, that can be imported just as any other module.
import numpy
from iter_schemes import *
... do some cool stuff with our new module in Python ...
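As one hypothetical example of that "cool stuff", a driver built around the sor_sweep routine sketched earlier might look like the following; the grid size, spacing, relaxation factor, and iteration count are arbitrary choices for illustration:

import numpy
from iter_schemes import sor_sweep  # hypothetical routine from the sketch above

nx = ny = nz = 50
dx = dy = dz = 1.0/(nx - 1)
w = 1.5  # over-relaxation factor

# Fortran-ordered, double precision arrays let f2py update u in place.
u = numpy.zeros((nx, ny, nz), order='F')
f = numpy.ones((nx, ny, nz), order='F')

# Sweep repeatedly; a real driver would monitor the residual instead of
# using a fixed number of iterations.
for it in range(1000):
    sor_sweep(u, f, dx, dy, dz, w)

Note that f2py infers the dimension arguments nx, ny, nz from the shape of u, so they do not need to be passed explicitly.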
Conclusion
This approach might seem like overkill for the fairly simple scalar equation we used, but think about the complicated update expressions that can be generated for large vector partial differential equations like the Navier-Stokes or Maxwell's equations. Having these expressions automatically generated and ready to be wrapped in loops can save lots of initial development and subsequent debugging time. It also makes generating unit tests easier, and so encourages good development practice. But does it work? Of course. Here is the convergence of the Gauss-Seidel and SOR schemes discussed above, along with a Jacobi scheme and a conjugate gradient scheme, applied to a 50 by 50 by 50 grid with a manufactured solution.
The plot was generated with the Python module matplotlib.
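As an aside, the manufactured solution itself only takes a few lines of Python to set up. The particular choice below, a product of sines on the unit cube, is my own illustration and not necessarily the one used for the plot above; its forcing term follows directly from Poisson's equation:

import numpy

n = 50
x = numpy.linspace(0.0, 1.0, n)
X, Y, Z = numpy.meshgrid(x, x, x, indexing='ij')

# Pick an exact solution that vanishes on the boundary of the unit cube;
# since u_xx + u_yy + u_zz = -f, this choice gives f = 3*pi**2*u_exact.
u_exact = numpy.sin(numpy.pi*X)*numpy.sin(numpy.pi*Y)*numpy.sin(numpy.pi*Z)
f = 3.0*numpy.pi**2*u_exact

# Each solver is then run with this f and zero boundary values, and its
# iterates are compared against u_exact to produce the convergence curves.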