Showing posts with label numerical methods.

Tuesday, January 5, 2021

Machine Learning for Fluid Dynamics: Patterns

 

The data set he mentions is the Johns Hopkins Turbulence Databases: many hundreds of terabytes of direct numerical simulations with different governing equations and boundary conditions.

Wednesday, March 27, 2019

Engineering Sketch Pad


I hadn't heard of Engineering Sketch Pad (source code available as part of OpenMDAO, and here) before, but it is yet another NASA-sponsored open source tool that could be useful to you for aircraft conceptual design. I read about it in a post on Another Fine Mesh about some interesting research the folks at Pointwise are doing. It reminds me of, but is different from, Open Vehicle Sketch Pad.

There's a seminar on the software given by one of the developers up on a NASA site: The Engineering Sketch Pad (ESP): Supporting Design Through Analysis. (yea, DARPA!)

It has some neat features that make it useful for supporting high-fidelity analysis. It creates watertight geometry, it can carry attributes with the geometry that could guide mesh resolution, it does "conservative" data transfer for discipline coupling (matching a solver's numerical scheme), and most of its parts are differentiable, which is useful for optimization.

I added this to my list of Open Source Aeronautical Engineering Tools.

Saturday, December 17, 2016

Hybrid Parallelism Approaches for CFD

This previous post, Plenty of Room at Exascale, focuses on one specific commercial approach to scaling CFD to large problems on heterogeneous hardware (CPU & GPU) clusters. Here are some more references I found to be interesting reading on this sort of approach.

Strategies

Recent progress and challenges in exploiting graphics processors in computational fluid dynamics provides some general strategies, based on its review of the literature, for using multiple levels of parallelism across GPUs, CPU cores, and cluster nodes:
  • Global memory should be arranged to coalesce read/write requests, which can improve performance by an order of magnitude (theoretically, up to 32 times: the number of threads in a warp)
  • Shared memory should be used for global reduction operations (e.g., summing up residual values, finding maximum values) such that only one value per block needs to be returned
  • Use asynchronous memory transfer, as shown by Phillips et al. and DeLeon et al. when parallelizing solvers across multiple GPUs, to limit the idle time of either the CPU or GPU.
  • Minimize slow CPU-GPU communication during a simulation by performing all possible calculations on the GPU.

Thursday, February 6, 2014

TPS Sizing with Complex Step Method

TPS Sizing Optimization Using Complex Variable Differentiation Sensitivity
I stumbled upon an interesting old presentation that shows a neat application of the complex step method of calculating numerical derivatives for use in optimizing thermal protection system (TPS) thickness. The great thing about the method is that it is minimally intrusive.
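As a quick refresher on the technique itself (general background, not something pulled from the presentation): evaluating a real-analytic function at a complex argument gives \(f'(x) \approx \mathrm{Im}[f(x + ih)]/h\) with no subtractive cancellation, so \(h\) can be made absurdly small. A minimal Python sketch, with a made-up function standing in for a real TPS thermal response:

  import numpy as np

  def complex_step_derivative(f, x, h=1.0e-30):
      # no difference of nearly-equal numbers, so h can be tiny without round-off trouble
      return np.imag(f(x + 1j * h)) / h

  # toy stand-in function (not a real TPS model)
  def f(x):
      return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

  print(complex_step_derivative(f, 1.5))  # agrees with the analytical derivative to machine precision

The only intrusion into an existing analysis code is making it accept complex arguments, which is what makes the method attractive for sizing studies like this one.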

Tuesday, January 14, 2014

CFD Vision 2030: Discretizations, Solvers, and Numerics

There are lots of interesting parts to the study that Phil Roe mentioned in his Colorful Fluid Dynamics lecture. Continuing the theme that algorithm improvements are just as important as hardware improvements, here are some of the areas concerning discretizations, solvers, and numerics (p. 24) where the report claims progress will lower the need for high levels of human expertise and intervention in running and understanding CFD analyses:
  1. Incomplete or inconsistent convergence behavior: "There are many possible reasons for failure, ranging from poor grid quality to the inability of a single algorithm to handle singularities such as strong shocks, under-resolved features, or stiff chemically reacting terms. What is required is an automated capability that delivers hands-off solid convergence under all reasonable anticipated flow conditions with a high tolerance to mesh irregularities and small scale unsteadiness."
  2. Algorithm efficiency and suitability for emerging HPC: "In order to improve simulation capability and to effectively leverage new HPC hardware, foundational mathematical research will be required in highly scalable linear and non-linear solvers not only for commonly used discretizations but also for alternative discretizations, such as higher-order techniques. Beyond potential advantages in improved accuracy per degree of freedom, higher-order methods may more effectively utilize new HPC hardware through increased levels of computation per degree of freedom."

Wednesday, January 8, 2014

Algorithmic Improvements: just as important as Moore's Law

There were a couple of interesting comments on Slashdot recently about future computing technologies that might allow us to continue enjoying price/performance improvements in computing and avoid the end of Moore's Law. Here's one that highlights some promising emerging technologies (my emphasis):
I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post some updates about their way to the market. They may not literally prolong the Moore's Law in regards to the number of transistors, but they promise great performance gains, which is what really matters.

3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

Memristors. HP is ready to produce the first memristor chips but delays that for business reasons (how sad is that!) Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. They are also quite fast (competitive with the current RAM) and energy-efficient, which means easier cooling and possible 3D layout.

Photonics. Optical buses are finding their ways into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electric resistance/heat, EM interference, and could thus operate at a higher clock speed. Would be more energy efficient, too.

Spintronics. Probably further in the future, but potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation density and power efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving in electrical current (excuse me for my layman understanding).

Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offer so far is Dwave, and it's not a classical QC, but so many labs are working on that, the results are bound to come soon.

I think Moore's Law is a steamroller, but, like the genomics sequencing technology highlighted in that post on Nuit Blanche, there are improvements coming just as fast as, or faster than, Moore's Law: better algorithms can yield exponential speed-ups too. Here's a graph (from this report) depicting the orders-of-magnitude improvement in linear solver performance:
Couple these software improvements with continually improving hardware and things get pretty exciting. I'm happy to live in these interesting times!

Monday, July 23, 2012

Convergence for Falkner-Skan Solutions

About 6 months ago Dan Hughes sent me a link to an interesting paper on "chaotic" behavior in the trajectory of iterates in a numerical Falkner-Skan solution. It struck me that the novel results reported in that paper were an artifact of the numerical method, and had little to do with any "chaotic" physics that might be going on in boundary layers or other systems that might be well described by this equation. This is similar to the point I made in the Fun with Filip post: the choice of numerical method matters. Do not rush to judgment about problems until you have brought the most appropriate methods to bear.

There are some things about the paper that are not novel, and others that seem to be nonsense. It is well-known that there can be multiple solutions at given parameter values (non-uniqueness) for this equation; see White. There is the odd claim that "the flow starts to create shock waves in the medium [above the critical wedge angle], which is a representation of chaotic behavior in the flow field." Weak solutions (solutions with discontinuities/shocks) and chaotic dynamics are two different things.

They use the fact that the method they chose does not converge when two solutions are possible as evidence of chaotic dynamics. Perhaps the iterates really do exhibit chaos, but this is purely an artifact of the method (i.e. there is no physical time in this problem, only the pseudo-time of the iterative scheme). By using a different approach you will get different "dynamics", and with a proper choice of method you can get convergence (spectral even!) to any of the multiple solutions depending on what initial condition you give your iterative scheme.

They introduce a parameter, \(\eta_{\infty}\), for the finite value of the independent variable at "infinity" (i.e. the domain is truncated). There is nothing wrong with this (actually it's a commonly used approach for this problem), but it is not a good idea to solve for this parameter as well as the shear at the wall in your Newton iteration. A more careful approach of mapping the boundary point "to infinity" as the grid resolution is increased (following one of Boyd's suggested mappings) removes the need to solve for this parameter, and gives spectral convergence for this problem even in the presence of non-uniqueness and the not uncommon vexation of a boundary condition defined at infinity (all of external aerodynamics has this helpful feature).
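To make the non-uniqueness point concrete, here is a small sketch using an off-the-shelf collocation BVP solver (not the mapped spectral approach described above) on the Falkner-Skan equation \(f''' + f f'' + \beta(1 - f'^2) = 0\) with \(f(0) = f'(0) = 0\) and \(f'(\eta_\infty) = 1\) on a truncated domain. The particular \(\beta\), \(\eta_\infty\), and starting guesses are illustrative assumptions; for small negative \(\beta\) two solution branches are known to exist, and which one an iterative method converges to depends entirely on the starting guess:

  import numpy as np
  from scipy.integrate import solve_bvp

  beta = -0.15     # wedge parameter; two branches exist for roughly -0.1988 < beta < 0
  eta_inf = 12.0   # truncated "infinity"
  eta = np.linspace(0.0, eta_inf, 101)

  def rhs(eta, y):
      # y = [f, f', f'']
      return np.vstack((y[1], y[2], -(y[0] * y[2] + beta * (1.0 - y[1]**2))))

  def bc(ya, yb):
      # f(0) = 0, f'(0) = 0, f'(eta_inf) = 1
      return np.array([ya[0], ya[1], yb[1] - 1.0])

  def solve_from_guess(fp_guess):
      y0 = np.zeros((3, eta.size))
      y0[1] = fp_guess
      # crude trapezoid integration of the f' guess to get an f guess
      y0[0, 1:] = np.cumsum(0.5 * (fp_guess[1:] + fp_guess[:-1]) * np.diff(eta))
      return solve_bvp(rhs, bc, eta, y0)

  # two different starting guesses; with luck they land on different branches,
  # giving two different wall-shear values f''(0)
  sol_a = solve_from_guess(1.0 - np.exp(-eta))
  sol_b = solve_from_guess((eta / eta_inf)**4)
  print(sol_a.y[2, 0], sol_b.y[2, 0])

The "dynamics" of the iterates inside solve_bvp are nothing like those of a simple shooting method, which is exactly the point: the iteration history is a property of the method, not of the boundary layer.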

Friday, August 5, 2011

Computational Explosive Astrophysics

I got an update with an interesting set of slides on adaptive mesh refinement recently. I've always been more of a moving mesh kind of guy, but lots of people find the AMR approach useful. The slides are for a summer course on Computational Explosive Astrophysics. They have an index for the lecture notes where those slides came from. Last year's course was Galaxy Simulations.

Wednesday, October 20, 2010

NALPAL: Not A Livermore Physics Applications Language

Background and Motivation

The problem I am interested in: leverage high-level symbol manipulation capabilities in a computer algebra system (CAS) to generate compilable code for number crunching. More specifically, I am interested in generating routines for numerical partial differential equation (PDE) solvers. This is not a new idea; significant work was accomplished starting in the early 1980s [1, 2, 3], continued into the late ’80s [4, 5, 6], and early ’90s [7]. However, those capabilities have not been consolidated and formally incorporated into my open-source CAS of choice, Maxima (though there is fairly recent work implementing similar ideas for Mathematica [8, 9]). The early work [1] quotes Hamming’s Turing Award speech [10] for motivation:
We seem not to be able to use the machine, which we all believe is a very powerful tool for manipulating and transforming information, to do our own tasks in this very field. We have compilers, assemblers, monitors, etc. for others, and yet when I examine what the typical software person does, I am often appalled at how little he uses the machine in his own work.
I had been thinking about this problem for a while because of my interest in the method of manufactured solutions (MMS) for code verification. I thought things were really going to take off when I came across Cook’s 1990 technical report describing code generation for PDE solvers using A Livermore Physics Applications Language (ALPAL), which was built on top of DOE-MACSYMA. ALPAL aimed to be a general purpose domain specific language for turning symbolic descriptions of PDEs into code. Here’s a solution to the problem already developed! Of course, it couldn’t be that easy. I asked the Maxima mailing list if anyone knew what became of this effort: ALPAL question to Maxima list, and ended up answering my own question by getting a hold of some of the old ALPAL devs: ALPAL answer to the list. Unfortunately, there is a long history behind the divergence between different versions of MACSYMA (and different groups of developers) that works against this old code ever working with the fork that became Maxima (should an archive of the source ever actually turn up; update: it did, see comment) [11, 12, 13].
As you may have guessed from this post’s title, and the difficulties described in the previous paragraph, I’ve decided to pursue a more toolbox approach rather than (re)implementing a domain specific language and associated decision logic. The working title for my toolbox of utilities will be NALPAL, both as a nod to the valuable historical work in this area, and an indication of the path not taken.
Wirth categorizes three main approaches to systems for numerical PDE solution: black-box systems, subroutine packages, and code generation aids [1]. Black-box systems are intended for the novice user, and tend to be constrained to very specific problem types or incorporate a great deal of decision logic. Subroutine packages provide reusable building blocks, but require more knowledge and coding on the part of the user, and tend to remain problem specific. Code generation aids require the user to be an expert in numerical analysis, and tend not to automate any of the analytical work that precedes the code generation step.
Wirth’s suggested approach is a Unix-like toolbox approach where standard subroutine libraries are used when it makes sense to do so, and utilities for doing custom code generation are written in the CAS environment. The alternative is to create a full language, and incorporate the semantics and decision logic to automate the entire process for the novice user. The work of Wirth, Steinberg and Roache is a good example of the former approach, and Cook’s work on ALPAL is an example of the latter (though his early work [3] is more along the lines of the toolbox approach, going so far as to say “the viewpoint taken is that one should be able to start with integro-differential equations and end up with optimal FORTRAN code segments. The key in achieving this goal was to tackle only the specific algebraic problem at hand, and not to attempt to provide tools to cover every conceivable numerical scheme.”). The toolbox approach keeps more of the work for the problem solver in set-up/tear-down/customization, while the language approach loads more of the burden on to the domain specific language developer. Wirth’s second objective sums up well the approach that I think makes the most sense (emphasis original):
Build productivity enhancing tools of broad applicability for the expert user so that efficient, special purpose PDE codes can be built reliably and quickly, rather than attempt to second guess the expert and build general purpose PDE codes (black box systems) of doubtful efficiency and reliability.
There are still useful things to learn from ALPAL even if we have chosen the less ambitious road, since the problem being solved is the same. A basic description of the ALPAL use-case [7]:
  1. Take as input a PDE description, along with boundary and initial condition definitions
  2. Discretize the PDE
  3. Analyze the result (e.g. for stability)
  4. Calculate the Jacobian (needed for Newton methods in implicit time-integration or non-linear boundary value problem (BVP)s)
  5. Generate code
Even if we don’t have a full-featured language, the user of our set of utilities will still be interested in accomplishing largely these same steps. In fact, Wirth’s early work gives these steps (reproduced here verbatim) [1]:
  1. Manipulate the set of partial differential equations to cast them into a form that is amenable to numerical solution. For vector PDEs, this might include vector differential calculus operations and reexpression in scalar (component) form, and the application of a linearization approximation for non-linear PDEs.
  2. Discretize the time and space domain, and transform the partial differential operators in the PDEs into finite difference operators. This transforms the partial differential equations into a set of algebraic equations. A multitude of possible transformations for the differential operators are possible and the boundary conditions for the PDEs also must be appropriately handled. The resulting difference equations must be analyzed to see if they form an accurate and numerically stable approximation of the original equation set. For real world problems, this analysis is usually difficult and often intractable.
  3. After choosing a solution algorithm from numerical linear algebra, the finite difference equations and boundary conditions are coded in a programing language such as FORTRAN.
  4. The numerical algorithm is then integrated with code for file manipulations, operating system interactions, graphics output, etc. forming a complete computer program.
  5. The production program is then executed, and its output is analyzed, either in the form of numerical listings or computer-generated graphics.
Wirth goes on to say (emphasis added),
With continuing advances in computer technology, the last step in this process has become easier. For a given class of problems, answers can be calculated more quickly and economically. More importantly, harder problems which require more computational resources can be solved. But the first four steps have not yet benefited from advances in computer performance; in fact, they are aggravated by it.
Also, this additional bit of motivation from Chapter 5,
Taken together with the software described in other chapters, these tools allow the user to quickly generate a FORTRAN code, run numerical experiments, and discard the code without remorse if the numerical results are unsatisfactory.
This is similar in sentiment to the idea of numerical throw away code.

PDE Code Gen Recipes

With the problem background and motivation set, the rest of this post will focus on pulling out useful Maxima recipes demonstrated by folks who have generated FORTRAN from MACSYMA with intent to inflict grievous arithmurgical damage on hapless PDEs.
Much work is done in most of these articles from the 1980s to avoid array references and break up expressions by hand to reduce the computational cost in MACSYMA. Also, MACSYMA’s knowledge of the chain rule is not used for calculating the transformations because of an explosion in the number of terms [5]; instead, an identity for the derivative of a matrix inverse is used. A lot of this effort seems unnecessary today because speed and memory have improved so much. However, the basic approach and the variables used to define the problem are still relevant (compare with this example of Burgers’ equation on a curvilinear grid):
  dep : [f, sigma]; /* the dependent variables */
  curvi : [xi, eta, zeta]; /* the curvilinear coordinates */
  indep : [x, y, z]; /* the independent variables */
  depends(curvi, indep);
  depends(dep, curvi);
  nn : length(indep);
  eqn : sum(diff(sigma * diff(f, indep[i]), indep[i]), i, 1, nn);
The result is rather large, but it illustrates Maxima’s knowledge of the chain rule. Of course, generally it is easiest to compute \(\partial x / \partial \xi\) rather than \(\partial \xi / \partial x\) for an arbitrary grid, so you need to make substitutions based on the inverse of the Jacobian of the transformation. In Maxima we might do something like
  J : zeromatrix(3,3);
  for j : 1 thru 3 do (
    for i : 1 thru 3 do (
      J[i,j] : 'diff(indep[j],curvi[i])
    )
  );
  K : zeromatrix(3,3);
  for j : 1 thru 3 do (
    for i : 1 thru 3 do (
      K[i,j] : diff(curvi[j], indep[i])
    )
  );
  grid_trans_subs : matrixmap("=", K, invert(J));
  /* making substitutions from a list is easier than from a matrix */
  grid_trans_sublis : flatten(makelist(grid_trans_subs[i],i,1,3));
which gives us a list of nine equations we can use to make substitutions so that all our derivatives are with respect to the computational coordinates.
  trans_eqn : subst(grid_trans_sublis, eqn) $
  /* Evaluation took 0.0510 seconds (0.0553 elapsed) using 265.164 KB. */
  trans_eqn_factor : factor(trans_eqn) $
  /* Evaluation took 2.4486 seconds (2.5040 elapsed) using 48.777 MB. */
Factoring the result starts getting on up there in compute time and memory, but it is still not intractable, or even that uncomfortable for an interactive session. Of course we still haven’t made any difference expression substitutions, and those will expand the number of terms even further. The assumption that the grid is arbitrary is a decision point that would have to be dealt with in a black-box style implementation; best to leave it to the (hopefully) knowledgeable user.
As an aside, linearization is another example of an assumption that is probably best left to the user. Wirth assumes that a linearization is required to solve nonlinear PDEs, whereas Cook provides for calculating the system Jacobians needed for Newton methods (of course Wirth does note that this is just one approach that could be used, and the tools are flexible enough to be extended to other methods). Using the Jacobian is a more modern approach made practical by the successful development of Newton-Krylov methods. It is impossible to predict the future development of numerical methods that will change the choices that need to be made in discretizing PDEs (though that doesn’t prevent speculation [14]), so a flexible toolbox approach is again indicated.
Once a PDE is defined, substitutions of finite differences for partial derivatives must be made to create the discrete approximation. Wirth uses a function called DISCRETIZE which uses the dependencies list (created by a call to depends) to substitute indexed variables for the independent variables. Then the partial derivatives of the indexed variables are replaced by finite difference operators. The substitutions of difference expressions are controlled by using Maxima’s pattern matching rules. The basic recipe (a rough modern analogue is sketched just after the list) is
  1. Define dependencies between independent and dependent (and possibly computational coordinates)
  2. Associate a list of indices with the coordinates
  3. Define rules that transform differential terms into difference terms with the appropriate index shifts and constant multipliers corresponding to the coordinate which the derivative is with respect to and the selected finite difference expression
  4. Apply the rules to the PDE to give a finite difference equation (FDE)
  5. Use Maxima’s simplification and factoring capabilities to simplify the FDE
  6. Output the FDE in FORTRAN format and wrap with subroutine boilerplate using text processing macros
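As a rough modern analogue of steps 3 through 6 (sketched in SymPy rather than Maxima, with a made-up second-derivative term, so the names and the particular difference formula are illustrative assumptions rather than a reconstruction of Wirth's DISCRETIZE):

  import sympy as sym

  x, h = sym.symbols('x h', positive=True)
  i = sym.Idx('i')
  u = sym.Function('u')        # continuous dependent variable
  U = sym.IndexedBase('U')     # its discrete, indexed counterpart

  term = sym.diff(u(x), x, 2)  # a second-derivative term from some model PDE

  # steps 3-4: a rule that trades the derivative for a central difference
  central = (U[i + 1] - 2*U[i] + U[i - 1]) / h**2
  fde_term = term.subs(sym.Derivative(u(x), x, 2), central)

  # steps 5-6: simplify and emit Fortran for the right-hand side
  print(sym.fcode(sym.simplify(fde_term), standard=95))

The Maxima version of the same idea, using matchdeclare and defrule, shows up a little further down.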
The boilerplate is a set of standard templates for a particular solution procedure. The example Wirth uses is an alternating direction implicit (ADI) scheme. A more modern example might be a Newton-Krylov scheme. Wirth describes an environment, Fast FORTRAN Programing (FFP), whose goal is to move computational physics out of a “cottage industry” state. He describes it thusly, “The system consists of two major components: a subroutine library and a command language for building, compiling and running codes.” Based on my experience, that sounds a whole lot like Scipy, which is built on top of Python, and packages together many of the classic scientific computing libraries.
Pattern matching in Maxima is relatively straightforward. For instance, if I have a function that I’d like to use to calculate my derivatives (such as one based on the discrete cosine transform (DCT)), I could declare patterns that replace derivatives with function calls.
  matchdeclare([fmatch,xmatch,nmatch],all),
  defrule(deriv_subst_1, ’diff(fmatch,xmatch,1), diff_1(fmatch,xmatch)),
  defrule(deriv_subst_2, ’diff(fmatch,xmatch,2), diff_2(fmatch,xmatch)),
  mat_expr : apply1(mat_expr, deriv_subst_1),
  mat_expr : apply1(mat_expr, deriv_subst_2),
This would result in all of the first and second derivatives in mat_expr being replaced by function calls diff_1 and diff_2 respectively.
The initial set-up steps Cook uses in [3] are roughly the same as those used by Wirth, with the addition of a constant list. The focus is more on what to do with the discrete expressions once they are generated. The basic recipe is
  1. reduce_cnst()
  2. gcfac()
  3. optimize()
  4. fortran()
The constant list allows reduce_cnst to pull constants out of loops, and optimize pulls out common sub-expressions to speed things up. optimize returns a Maxima block, which is similar to a Fortran subroutine. In fact, turning blocks into subroutines in other languages is a common problem which has been addressed on the Maxima mailing list (e.g. block2c, see also cform), and as Dan Stanger points out, this is the purpose of the GENTRAN package. GENTRAN has not yet been successfully ported to work with Maxima [13] (though there is a gentran directory that ships in the Maxima source tarball if you want to take a look).
The only reason Cook went to the Lisp level in [3] was to reduce the cost of the factor routines (which is less of a concern now), and to make optimize work with lists of expressions (which it does now). One of the examples cites a need for an outrageous 500 million words of computer memory for one calculation, but that’s roughly half of what’s in the old desktop computer I bought used about a year ago (for about $100). The computing power available to the average amateur has come a long way since 1982.
Both Wirth and Cook assume that an arbitrary orthogonal coordinate system (like spherical, polar, cylindrical, Cartesian, etc.) will be defined. A slightly more modern approach is presented by Roache et al. [4, 5, 6]. They assume an arbitrary curvilinear coordinate system, which may not have analytical scale factors and metrics (i.e. they include the numerical grid generation task as part of the problem statement / solution procedure).
The approach demonstrated in [5] focuses on transforming a set of PDEs to arbitrary curvilinear coordinates described by Laplace’s equation. This approach couples the PDE solution and grid generation methods together. Modern approaches in the engineering world generally assume that the grid will be generated by a separate tool (the fully coupled approach can still be useful for adaptive or moving grids), though solving problems on a single, canonical grid seems to still be common in the sciences. The MACSYMA routines presented there are
  • change(): change variables; convert an arbitrary second order differential equation in nn variables to an arbitrary coordinate frame in the variables xi[i]
  • notate: atomic notation for derivatives
  • notation(exp,vari): primitive atomic notation
  • scheme(): introduce differences of unknowns
  • difference(u,f,exp): primitive differences; scheme and difference collect the coefficients of the differences and calculate the stencil of the solver and coordinate transformations
  • myFORTRAN(): write the FORTRAN code
A lot of this extra effort is avoidable now, because it is tractable to use Maxima’s knowledge of the chain rule, and the built-in indexed variables and pattern matching facilities.

Conclusion

After digging in to the old literature on generating code from MACSYMA, and trying out a few things with current (v5.20.1) Maxima, it seems like all the pieces you need to generate PDE-solving code already ship with Maxima (not touched on in this post were the vector calculus and tensor capabilities that Maxima has as well). I kind of already knew this, since I'd been generating solver code in an ad hoc sort of way. Perhaps there is a place for building up a pattern matching rules database, and maybe boilerplate templates. That seems to be the area that the modern efforts are focused on (e.g. Scinapse claims that its rules encoding PDE solution knowledge make up half of its 120 kloc and are the fastest-growing part of the code base [9]). The recipes presented here seem more like documentation, or a tutorial item, than a possible new Maxima package.

References

[1]   Wirth, M. C., On the Automation of Computational Physics, Ph.D. thesis, University of California, Davis, 1980.
[2]   Wirth, M. C., “Automatic generation of finite difference equations and Fourier stability analyses,” SYMSAC ’81: Proceedings of the fourth ACM symposium on Symbolic and algebraic computation, ACM, New York, NY, USA, 1981, pp. 73–78.
[3]   Cook, G. O., Development of a Magnetohydrodynamic Code for Axisymmetric, High-β Plasmas with Complex Magnetic Fields, Ph.D. thesis, Brigham Young University, December 1982.
[4]   Florence, M., Steinberg, S., and Roache, P., “Generating subroutine codes with MACSYMA,” Mathematical and Computer Modelling, Vol. 11, 1988, pp. 1107 – 1111.
[5]   Steinberg, S. and Roache, P. J., “Symbolic manipulation and computational fluid dynamics,” Journal of Computational Physics, Vol. 57, No. 2, 1985, pp. 251 – 284.
[6]   Steinberg, S. and Roache, P., “Using VAXIMA to Write FORTRAN Code,” Applications of Computer Algebra, edited by R. Pavelle, Kluwer Academic Publishers, August 1984, pp. 74–94.
[7]   Cook, G. O., “Construction of large-scale simulation codes using ALPAL (A Livermore Physics Applications Language),” Technical Report UCRL-102469, Lawrence Livermore National Labs, October 1990.
[8]   Husa, S., Hinder, I., and Lechner, C., “Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations,” Computer Physics Communications, Vol. 174, No. 12, 2006, pp. 983 – 1004.
[9]   Akers, R. L., Kant, E., Randall, C. J., Steinberg, S., and Young, R. L., Enabling Technologies for Computational Science, Vol. 548 of The Springer International Series in Engineering and Computer Science, chap. SCINAPSE: A problem solving environment for partial differential equations, Springer, 2000, pp. 109–122.
[10]   Hamming, R. W., “One Man’s View of Computer Science,” Journal of the Association for Computing Machinery, Vol. 16, No. 1, January 1969, pp. 3–12.
[11]   Cook, G. O., electronic mail communication, August 2010.
[12]   Fateman, R. J., electronic mail communication, August 2010.
[13]   Stanger, D., electronic mail communication, August 2010.
[14]   Houstis, E. N. and Rice, J. R., “Future problem solving environments for computational science,” Mathematics and Computers in Simulation, Vol. 54, No. 4-5, 2000, pp. 243 – 257.

Saturday, January 16, 2010

Chaotic Method of Manufactured Solutions: Lorenz 63

I quickly brushed over the method of manufactured solutions (MMS) aspects in the previous post on Lorenz 63, and you might wonder why. It turns out that applying MMS to chaotic systems includes an additional consideration that is not normally of concern. Here is the ’standard’ sort of checklist for choosing manufactured solutions (taken from Verification of Computer Codes in Computational Science and Engineering, emphasis mine):

  1. Manufactured solutions should be sufficiently smooth on the problem domain so that the theoretical order-of-accuracy can be matched by the observed order-of-accuracy obtained from the test. A manufactured solution that is not sufficiently smooth may decrease the observed order-of-accuracy (however, see # 7 in this list).
  2. The solution should be general enough that it exercises every term in the governing equation. For example, one should not choose temperature T in the unsteady heat equation to be independent of time. If the governing equation contains spatial cross-derivatives, make sure the manufactured solution has a non-zero cross derivative.
  3. The solution should have a sufficient number of nontrivial derivatives. For example, if the code that solves the heat conduction equation is second order in space, picking T in the heat equation to be a linear function of time and space will not provide a sufficient test because the discretization error would be zero (to within round-off) even on coarse grids.
  4. Solution derivatives should be bounded by a small constant. This ensures that the solution is not a strongly varying function of space or time or both. If this guideline is not met, then one may not be able to demonstrate the required asymptotic order-of-accuracy using practical grid sizes. Usually, the free constants that are part of the manufactured solution can be selected to meet this guideline.
  5. The manufactured solution should not prevent the code from running successfully to completion during testing. Robustness issues are not a part of code order verification. For example, if the code (explicitly or implicitly) assumes the solution is positive, make sure the manufactured solution is positive; or if the heat conduction code expects time units of seconds, do not give it a solution whose units are nanoseconds.
  6. Manufactured solutions should be composed of simple analytical functions like polynomials, trigonometric, or exponential functions so that the exact solution can be conveniently and accurately computed. An exact solution composed of infinite series or integrals of functions with singularities is not convenient.
  7. The solution should be constructed in such a manner that the differential operators in the governing equations make sense. For example, in the heat conduction equation, the flux is required to be differentiable. Therefore, if one desires to test the code for the case of discontinuous thermal conductivity, then the manufactured solution for temperature must be constructed in such a way that the flux is differentiable. The resultant manufactured solution for temperature is nondifferentiable.
  8. Avoid manufactured solutions that grow exponentially in time to avoid confusion with numerical instability.

As emphasized in # 4 we often have to use the free constants in our manufactured solution to bound the derivatives in the domain. For chaotic systems we’ll need to use not only the free constants, but the initial conditions as well, to achieve another end: ensure that the chosen solution is in the convergent region at all times. This is a sufficient but not necessary condition for the source term to be in the basin of entrainment. When the source term is in the basin of entrainment then we can expect to be able to verify the ordered convergence of our implementation as we would in the non-chaotic case. Otherwise we can expect the truncation error to eventually lead to an arbitrarily large departure from the manufactured solution. This terminology comes from the nonlinear dynamics and control literature [1, 3]. A worked example for the Lorenz ’63 system showing the boundary of the convergent region is in Chapter 6 of [2].

The convergent region is defined by the region of phase space for which the eigenvalues of the system Jacobian have negative real parts. As in any other application of MMS, we start by choosing some convenient solution consistent with the desiderata enumerated above. Trigonometric functions are convenient in this case:

\(x = \cos(t) + b_x\)    (1)

\(y = \sin(t) + b_y\)    (2)

\(z = \sin(t) + b_z\)    (3)

Now we need the Jacobian of the system (and then its eigenvalues); since our governing equations are defined in Maxima, that is easy enough:

/* Need the Jacobian of the system to find the convergent region so
   we can select a useful IC/amplitude for our manufactured solutions */
sys_eqns : [eqn_1, eqn_2, eqn_3] $
J_cr : zeromatrix(3,3) $
for j : 1 thru 3 do (
  for i : 1 thru 3 do (
    J_cr[i,j] : diff(rhs(sys_eqns[i]), unknowns[j])
  )
) $
[J_cr_eigvals, J_cr_eigmult] : eigenvalues(J_cr) $
J_cr_eigvals : fullratsimp(J_cr_eigvals) $
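For reference, the Jacobian that loop should return, written out by hand from the governing equations of the previous post, is

\[ J = \begin{pmatrix} -Pr & Pr & 0 \\ Ra - z & -1 & -x \\ y & x & -b \end{pmatrix}, \]

whose eigenvalues depend on where the solution sits in phase space, which is exactly why the choice of manufactured solution (and its offsets bx, by, bz) matters.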

As before these expressions are output in Fortran90 format for compilation. Here are the eigenvalues in the complex plane for bx = 1.0, by = -1.0, bz = 10.0.



Figure 1: Jacobian eigenvalues for manufactured solution, bx = 1.0, by = -1.0, bz = 10.0


We can immediately see the problem from Figure 1: our manufactured solution is in the region of phase space where one of the Jacobian’s eigenvalues has a positive real part. We can adjust the initial conditions of our solution to get all of our eigenvalues on the left half-plane.



Figure 2: Jacobian eigenvalues for manufactured solution, bx = 10.0, by = -10.0, bz = 40.0


Being limited to only certain portions of the phase space for conducting verification is fine, because a single well-designed manufactured solution verifies the correctness of the entire implementation (as long as all of the terms are activated). This is the same reason that it is often emphasized in the V&V literature that there is no need for the manufactured solution to be physically meaningful.
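As a quick numerical spot-check of the two parameter choices (plain NumPy evaluating the hand-written Jacobian along the manufactured solution, separate from the Maxima/Fortran workflow above), something like the following should reproduce the qualitative picture in Figures 1 and 2:

  import numpy as np

  Pr, Ra, b = 10.0, 28.0, 8.0 / 3.0

  def max_real_eig(bx, by, bz, nt=2000):
      # largest real part of the Lorenz '63 Jacobian eigenvalues along
      # x = cos(t) + bx, y = sin(t) + by, z = sin(t) + bz over one period
      worst = -np.inf
      for t in np.linspace(0.0, 2.0 * np.pi, nt):
          x, y, z = np.cos(t) + bx, np.sin(t) + by, np.sin(t) + bz
          J = np.array([[-Pr,    Pr,   0.0],
                        [Ra - z, -1.0, -x],
                        [y,      x,    -b]])
          worst = max(worst, np.linalg.eigvals(J).real.max())
      return worst

  print(max_real_eig(1.0, -1.0, 10.0))    # positive: outside the convergent region (Figure 1)
  print(max_real_eig(10.0, -10.0, 40.0))  # negative: inside the convergent region (Figure 2)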

References

[1]   Jackson, E. A., “Controls of dynamic flows with attractors,” Physical Review A, Vol. 44, No. 8, 1991.

[2]   Wu, W., “Analytical and Numerical Methods Applied to Nonlinear Vessel Dynamics and Code Verification for Chaotic Systems,” Dissertation, Virginia Polytechnic Institute and State University, December 2009.

[3]   W. Wu, L.S. McCue, and C.J. Roy. “The method of manufactured solutions applied to chaotic systems,” Nonlinear Dynamics, 2009, under review.

Friday, January 15, 2010

Lorenz 63 Ensemble

Dan Hughes has started an ambitious series of posts about formally verifying (using the method of manufactured solutions (MMS)) a range of chaotic ordinary differential equation (ODE) systems.
One of the classic ones he’s looking at is Lorenz’s three-dimensional toy system from 1963:
\(\frac{\partial x}{\partial t} = Pr\,(y - x)\)    (1)

\(\frac{\partial y}{\partial t} = -y + Ra\,x - xz\)    (2)

\(\frac{\partial z}{\partial t} = -bz + xy\)    (3)

where the system exhibits chaotic behaviour depending on the choice of parameter values (Ra = 28, Pr = 10.0, b = 8/3 will be used throughout).
The first thing to do is define our governing equations in a computer algebra system (I like Maxima) because that makes writing bug free numerical software and complicated MMS forcing functions much easier.
  unknowns : [x,y,z] $ 
  depends(unknowns, t) $ 
 
  /* define the governing equations: */ 
  expr_1 : -Pr * x + Pr * y - diff(x,t) $ 
  eqn_1 : solve(expr_1, diff(x,t))[1] $ 
  expr_2 : -y + Ra * x - x * z - diff(y,t) $ 
  eqn_2 : solve(expr_2, diff(y,t))[1] $ 
  expr_3 : -b * z + x * y - diff(z,t) $ 
  eqn_3 : solve(expr_3, diff(z,t))[1] $
Then we need to define our discrete approximation to the governing equations (I’ll skip the MMS stuff for now)
/* elements of the Butcher tableau: */ 
A : matrix([1/4, -1/4], 
           [1/4, 5/12]) $ 
bT : matrix([1/4, 3/4]) $ 
c : matrix([0], [2/3]) $ 
 
/* identity matrix for doing Kronecker products, dimension depends on 
   the size of the system you are trying to solve */ 
sys_I : diagmatrix(3, 1) $ 
 
K : transpose([k[1], k[2], k[3], k[4], k[5], k[6]]) $ 
Xn : transpose([x[n],y[n],z[n],x[n],y[n],z[n]]) $ 
tn : transpose([t[n], t[n], t[n], t[n], t[n], t[n]]) $ 
F : transpose([f[1], f[2], f[3], f[4], f[5], f[6]]) $ 
 
A_rk : kronecker_product(A, sys_I) $ 
bT_rk : kronecker_product(bT, sys_I) $ 
hc_rk : kronecker_product(t[n] + h*c, sys_I) $ 
 
/* argument to our system right-hand-side operator: */ 
nonlin_arg : Xn + h * A_rk . K $ 
/* set up the two-stage system: */ 
rk_sys : zeromatrix(6,1) $ 
q_sys : zeromatrix(6,1) $ 
rk_sys[1,1] : k[1] - subst(x=nonlin_arg[1,1], subst(y=nonlin_arg[2,1], 
    subst(z=nonlin_arg[3,1], rhs(eqn_1)))) $ 
rk_sys[2,1] : k[2] - subst(x=nonlin_arg[1,1], subst(y=nonlin_arg[2,1], 
    subst(z=nonlin_arg[3,1], rhs(eqn_2)))) $ 
rk_sys[3,1] : k[3] - subst(x=nonlin_arg[1,1], subst(y=nonlin_arg[2,1], 
    subst(z=nonlin_arg[3,1], rhs(eqn_3)))) $ 
rk_sys[4,1] : k[4] - subst(x=nonlin_arg[4,1], subst(y=nonlin_arg[5,1], 
    subst(z=nonlin_arg[6,1], rhs(eqn_1)))) $ 
rk_sys[5,1] : k[5] - subst(x=nonlin_arg[4,1], subst(y=nonlin_arg[5,1], 
    subst(z=nonlin_arg[6,1], rhs(eqn_2)))) $ 
rk_sys[6,1] : k[6] - subst(x=nonlin_arg[4,1], subst(y=nonlin_arg[5,1], 
    subst(z=nonlin_arg[6,1], rhs(eqn_3)))) $
This is basically the same approach that I applied to the simple harmonic oscillator, but this time our system is non-linear, so we can’t invert the time-integration operator directly. We need to use an iterative Newton’s method to solve for each time-step, for which we need to define a Jacobian:
/* calculate the system Jacobian for use in a Newton's method: */
J : zeromatrix(6,6) $
for j : 1 thru 6 do (
  for i : 1 thru 6 do (
    J[i,j] : diff(rk_sys[i,1], k[j])
  )
) $

time_update : factor(h * bT_rk . K) $
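For context on how these generated pieces get used: at each time step the stacked stage residual rk_sys(k) = 0 is solved with Newton's method using the Jacobian J above (the real loop lives in the generated Fortran time_loop that shows up further down). A minimal Python sketch of that pattern, with hypothetical residual and jacobian wrappers standing in for the generated routines:

  import numpy as np

  def newton_stage_solve(residual, jacobian, k0, tol=1e-12, maxits=20):
      # residual(k) and jacobian(k) are assumed to evaluate the generated
      # rk_sys and J expressions (hypothetical wrappers, for illustration only)
      k = k0.copy()
      for _ in range(maxits):
          r = residual(k)
          if np.linalg.norm(r, np.inf) < tol:
              break
          k = k - np.linalg.solve(jacobian(k), r)
      return k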
The really nice thing about putting in a bit of work up-front with the symbol manipulation software is that you can just dump the resulting expressions to compilable code:
load(f90) $ 
file_output_append : false $ 
fname : "rhs.f90" $ 
with_stdout(fname, print("! generated by lorentz63_mms.mac")) $ 
file_output_append : true $ 
with_stdout(fname, f90(f[1] = rk_sys[1,1])) $ 
/* ...much more boring file IO follows... */
With all of our useful expressions safely (and correctly) in Fortran, we can compile with F2py and play with them in Python. Integrating the system with a third-order implicit Runge-Kutta method gives the following time-step convergence behaviour:

Figure 1: Time-step convergence for Lorenz ’63 system with 2-stage Radau-IA RK scheme ((a) X component, (b) Y component, (c) Z component)

Figure 1 shows that as the time-step is decreased the trajectory stays with the ’pack’ for longer and longer times, but when the answers do diverge significantly the change is quite abrupt.
The most practically interesting thing with chaotic systems is the high sensitivity to initial conditions. This sensitivity competes with our uncertain knowledge of the true state of things in determining the limit on our ability to predict the trajectory of (some) dynamic systems (even if our knowledge were perfect, the finite precision of our machines still causes us significant trouble). Part of the way we deal with uncertainty is to do averaging over ensembles. To illustrate this idea I’ll start 7 realizations of the previous calculation, each adjusted by a small amount (on the order of the floating point round-off error for my machine).
import scipy as sp 
 
nt = 400000 
T = 40.0 # integration period 
Pr = 10.0 
Ra = 28.0 
b = 8.0 / 3.0 
# knobs for the Newtons method: 
tol = 1e-12 
maxits = 20 
nic = 7 # number of different ICs in our ensemble 
h = T / float(nt) 
x = sp.zeros((nt,nic), dtype=float, order='F') 
y = sp.zeros((nt,nic), dtype=float, order='F') 
z = sp.zeros((nt,nic), dtype=float, order='F') 
t = sp.linspace(0.0, T, nt) 
# some perturbations: 
eps = 1e-16 * sp.array([0, 1, -1, 2, -2, 3, -3]) 
for i in xrange(nic): 
    x[0][i] = 1.0 + eps[i] 
    y[0][i] = -1.0 + eps[i] 
    z[0][i] = 10.0 + eps[i]
I’ll then calculate the pair-wise differences between the trajectories (that’s 7 choose 2, or 21 differences). The combinations function from the itertools module comes in handy here
from itertools import combinations 
from scipy.misc import comb 
 
# calculate our ensemble of trajectories: 
for i in xrange(nic): 
    L63.time_loop(x[:,i],y[:,i],z[:,i],t,Ra,Pr,b,h,tol,maxits) 
 
# calculate all the pair-wise differences between trajectories (nic 
# choose 2 of em): 
npairs = comb(nic, 2, exact=1) 
xdiffs = sp.zeros((nt,npairs), dtype=float, order='F') 
ydiffs = sp.zeros((nt,npairs), dtype=float, order='F') 
zdiffs = sp.zeros((nt,npairs), dtype=float, order='F') 
for i, pair in enumerate(combinations(xrange(nic), 2)): 
    xdiffs[:,i] = x[:,pair[0]] - x[:,pair[1]] 
    ydiffs[:,i] = y[:,pair[0]] - y[:,pair[1]] 
    zdiffs[:,i] = z[:,pair[0]] - z[:,pair[1]]
Here are the resulting differences:

Figure 2: Differences between members of the ensemble ((a) X component, (b) Y component, (c) Z component)

You can see that things go along well in the early time, but eventually the differences between ensemble members grow suddenly to be on the same order of magnitude as the things we care about predicting.