- Incomplete or inconsistent convergence behavior: "There are many possible reasons for failure, ranging from poor grid quality to the inability of a single algorithm to handle singularities such as strong shocks, under-resolved features, or stiff chemically reacting terms. What is required is an automated capability that delivers hands-off solid convergence under all reasonable anticipated flow conditions with a high tolerance to mesh irregularities and small scale unsteadiness."
- Algorithm efficiency and suitability for emerging HPC: "In order to improve simulation capability and to effectively leverage new HPC hardware, foundational mathematical research will be required in highly scalable linear and non-linear solvers not only for commonly used discretizations but also for alternative discretizations, such as higher-order techniques. Beyond potential advantages in improved accuracy per degree of freedom, higher-order methods may more effectively utilize new HPC hardware through increased levels of computation per degree of freedom."
I think these are two areas that are certainly worthy of continued research effort. The first area is pretty clearly tied to lowering the amount of analyst attention and effort needed to get converged solutions. If you have an automated solution-adaptive grid scheme, then the "mesh irregularities" in an initial hand-crafted grid could be fixed by your code. It's not so much that solvers need to put up with bad grids as that analysts need more help making good ones. There's no magic numerical pixie dust that can create a good solution from a poor grid.
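To make concrete what I mean by "solution-adaptive" (this is purely an illustrative toy of the idea, not any particular production code or the report's method): flag cells wherever a simple error indicator, say the jump in a solution quantity between neighboring cells, exceeds a threshold, and let the code refine those cells rather than asking the analyst to anticipate where the grid needs work.

```c
#include <math.h>
#include <stdio.h>

/* Toy 1-D illustration of a solution-adaptive refinement flag: mark cells
 * whose solution jump relative to the next cell exceeds a threshold.
 * Purely illustrative; real adaptation uses proper error estimators in 3-D. */
static void flag_cells_for_refinement(const double *u, int n_cells,
                                      double threshold, int *refine)
{
    for (int i = 0; i < n_cells; ++i) {
        double jump = (i + 1 < n_cells) ? fabs(u[i + 1] - u[i]) : 0.0;
        refine[i] = (jump > threshold);
    }
}

int main(void)
{
    /* hypothetical cell-average data with a sharp feature between cells 2 and 3 */
    double u[6] = {1.0, 1.02, 1.05, 4.8, 4.85, 4.9};
    int refine[6];

    flag_cells_for_refinement(u, 6, 0.5, refine);
    for (int i = 0; i < 6; ++i)
        printf("cell %d: %s\n", i, refine[i] ? "refine" : "keep");
    return 0;
}
```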
I'm not sure at all how the second area is tied to the goal of lowering the levels of human expertise and intervention needed to run and understand a CFD analysis. Clearly we'd like solvers that scale well to larger and larger numbers of cores, but I don't see the connection between a solver that scales well and the level of expertise or intervention required to run it and interpret the results. If you think I'm missing something here, please set me straight in the comments.
The inset after this section looks at some interesting work sponsored by DOE to scale multigrid solvers to very large numbers of cores. The report presents results on how well the Hypre library of linear solvers scales to over 100k cores. One of the interesting things they mention is that they added support for 64-bit integers so they can address problems with more than 2 billion unknowns: wow, that's big!
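That 2 billion figure lines up with the signed 32-bit integer limit of 2,147,483,647: once the global number of unknowns passes roughly 2.1 billion, you can no longer number the rows of the linear system with 32-bit indices. Here's a back-of-the-envelope sketch of that arithmetic (my own illustration, not Hypre code):

```c
#include <stdint.h>
#include <stdio.h>

/* Why global row indices need 64-bit integers beyond ~2.1 billion unknowns.
 * The problem size below is a made-up example, not one from the report.   */
int main(void)
{
    int64_t n_unknowns = 2500000000LL;   /* hypothetical 2.5e9-row system   */

    printf("INT32_MAX          = %d\n", INT32_MAX);             /* 2,147,483,647 */
    printf("unknowns           = %lld\n", (long long)n_unknowns);
    printf("fits in 32-bit int = %s\n",
           n_unknowns <= (int64_t)INT32_MAX ? "yes" : "no");    /* prints "no"   */
    return 0;
}
```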
Coverage on Another Fine Mesh of an AIAA panel discussion on the content of this report.
Here's the final report on the NASA server.