Tuesday, October 11, 2011
Notre Dame V&V Workshop
The purpose of the workshop is to bring together a diverse group of computational scientists working in fields in which reliability of predictive computational models is important. Via formal presentations, structured discussions, and informal conversations, we seek to heighten awareness of the importance of reliable computations, which are becoming ever more critical in our world.
The intended audience is computational scientists and decision makers in fields as diverse as earth/atmospheric sciences, computational biology, engineering science, applied mechanics, applied mathematics, astrophysics, and computational chemistry.
It looks very interesting.
Monday, October 3, 2011
J2X Time Series
So, I used mplayer to dump frames from the video (30 fps) to jpg files (about 1800 images), and the Python Imaging Library (PIL) to crop each frame to a rectangle focused on the splashes.
When a splash occurs the pixels in this region become much whiter, so the whiteness of the region should give an indication of the "splashiness". I then converted the images to black and white, and averaged the pixel values to get a scalar time-series. The whole time series is shown in the plot below.
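For anyone who wants to replicate it, here's a minimal sketch of that pipeline; the crop box and file pattern are placeholders, not the values I actually used:

```python
# Minimal sketch of the frame-to-scalar pipeline described above.
# Assumes frames were dumped by something like `mplayer -vo jpeg j2x.avi`,
# which writes sequentially numbered files: 00000001.jpg, 00000002.jpg, ...
import glob
import numpy as np
from PIL import Image

CROP_BOX = (100, 300, 400, 450)  # (left, upper, right, lower); placeholder region around the splashes

whiteness = []
for fname in sorted(glob.glob("*.jpg")):
    region = Image.open(fname).crop(CROP_BOX).convert("L")   # crop, then grayscale
    whiteness.append(np.asarray(region, dtype=float).mean()) # mean pixel value = "splashiness"

np.savetxt("j2x_splashiness.txt", np.array(whiteness))
```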
Also, here's a text file if you want to play with the data.
Here's a PSD and autocorrelation for the section of the data excluding the start-up and shut-down transients.
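If you want to reproduce these from the text file, a sketch follows; the slice bounds that trim the transients are guesses, so eyeball the full plot and adjust:

```python
# Sketch of the PSD and autocorrelation computations; the slice that
# removes the start-up and shut-down transients is an assumption.
import numpy as np
from scipy import signal

series = np.loadtxt("j2x_splashiness.txt")
steady = series[400:1500]          # hypothetical steady-burn section
steady = steady - steady.mean()    # remove the mean before spectral estimates

fs = 30.0                          # video frame rate, Hz
freqs, psd = signal.welch(steady, fs=fs, nperseg=256)  # Welch PSD estimate

acf = np.correlate(steady, steady, mode="full")
acf = acf[acf.size // 2:] / acf[acf.size // 2]         # normalized, non-negative lags
```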
Here's a recurrence plot of that section of the data.
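A recurrence plot just marks the pairs of times (i, j) where the trajectory returns close to itself. Here's a bare-bones sketch; the embedding dimension, delay, and threshold are all assumptions, not tuned values:

```python
# Sketch of a recurrence plot: mark time pairs whose delay-embedded
# states lie within a threshold distance of each other.
import numpy as np
import matplotlib.pyplot as plt

series = np.loadtxt("j2x_splashiness.txt")
x = series[400:1500]               # same hypothetical steady section as above

dim, tau = 3, 5                    # embedding dimension and delay (guesses)
n = x.size - (dim - 1) * tau
emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise distances
eps = 0.1 * dist.max()             # threshold at 10% of the max distance

plt.imshow(dist < eps, cmap="binary", origin="lower")
plt.xlabel("time index"); plt.ylabel("time index")
plt.show()
```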
This is a pretty short data set, but you can see that there are little "bursts" of periodic response in the recurrence plot (compare to some of the recurrence plots for Lorenz63 trajectories). I'm pretty sure this is not significant to engine development in any way, but I thought it was a neat source of time series data.
Saturday, October 1, 2011
Discussion of V and V
A few days after I put up my little post on a bit of V&V history, Judith Curry had a post about VV&UQ that generated a lot of discussion (she has a high-traffic site, so this is not unusual) [1].
A significant part of the discussion was about a post by Steve Easterbrook on the (in)appropriateness of IV&V (“I” stands for independent) or commercial / industrial flavored V&V for climate models. As George Crews points out in the discussion on Climate Etc., there’s a subtle rhetorical sleight of hand that Easterbrook uses (which works for winning web-points with the uncritical, because different communities use the V-words differently; see the introduction to this post of mine). He claims it would be inappropriate to uncritically apply the IV&V processes and formal methods he’s familiar with from developing flight control software for NASA to climate model development. Of course, he’s probably right (though we should always be on the look-out to steal good tricks from wherever we can find them). This basically correct argument gets turned into “Easterbrook says V&V is inappropriate for climate models” (by all sorts of folks with various motivations; see the discussion thread on Climate Etc.). The obvious question for anyone who’s familiar with even my little Separation of V&V post is, “which definition of V or V?”
Easterbrook is now on record agreeing that the sort of V&V done by the computational physics community is appropriate. This is good. Easterbrook seemed to be arguing for a definition of “valid” that meant “implements the theory faithfully” [2]. This is what I’d call “verification” (are you solving the governing equations correctly?). The problem with the argument built on that definition is the conflation I pointed out at the end of this comment, which is similar to the rhetorical leap mentioned in the previous paragraph and displayed on the thread at Climate Etc.
Now, his disagreement with Dan Hughes (who recommends an approach I find makes a great deal of sense, and that I've used in anger to commit arithmurgical damage on various and sundry PDEs) is that Dan thinks we should have independent V&V, and Steve thinks not. If all you care about is posing complex hypotheses, then IV&V seems a waste. If you care about decision support, then it would probably be wise to invest in it (and in our networked age much of the effort could probably be crowd-sourced, so the investment could conceivably be quite small). This actually has some parallels with the decision support dynamics I highlighted in No Fluid Dynamicist Kings in Flight Test.
One of the other themes in the discussion is voiced by Nick Stokes. He argues that all this V&V stuff doesn’t result in successful software, or that people calling for developing IV&V’d code should shut up and do it themselves. One of the funny things is that if the code is general enough that the user can specify arbitrary boundary and interior forcings, then any user can apply the method of manufactured solutions (MMS) to verify the code (see the sketch below). What makes Nick’s comments look even sillier is that nearly all available commercial CFD codes are capable of being verified in this way. This is thanks in part to the efforts of folks like Patrick Roache [3], and now it is a significant marketing point, as Dan Hughes and Steven Mosher point out. Nick goes on to say that he’d be glad to hear of someone trying all this crazy V&V stuff. The fellows doing ice sheet modeling that I pointed out in the thread on Easterbrook’s blog are doing exactly that. This is because the activities referred to by the term “verification” are a useful set of tools for the scientific simulation developer in addition to being a credibility building exercise (as I mentioned in the previous post; see the report by Salari and Knupp [4]).
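To make the MMS point concrete, here's a toy sketch; the equation, scheme, and manufactured solution are my own illustrative choices, not anything from the codes discussed above. Pick a solution, derive the forcing that makes it exact, then check that the observed order of accuracy under grid refinement matches the scheme's formal order:

```python
# Toy MMS example for u_t = u_xx + f on [0,1]: manufacture u, derive f,
# and verify a simple explicit (FTCS) scheme's observed order of accuracy.
import numpy as np

def u_exact(x, t):
    return np.exp(-t) * np.sin(np.pi * x)

def forcing(x, t):
    # f = u_t - u_xx for the manufactured solution above
    return (np.pi**2 - 1.0) * np.exp(-t) * np.sin(np.pi * x)

def max_error(nx, t_end=0.1):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2              # well inside the explicit stability limit
    u = u_exact(x, 0.0)
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)  # land exactly on t_end
        u[1:-1] += step * ((u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
                           + forcing(x[1:-1], t))
        t += step
        u[0], u[-1] = u_exact(0.0, t), u_exact(1.0, t)  # exact boundary values
    return np.abs(u - u_exact(x, t)).max()

errs = [max_error(n) for n in (17, 33, 65)]
orders = np.log2(np.array(errs[:-1]) / errs[1:])
print(errs, orders)  # observed order should approach 2 as the grid refines
```

The point is that none of this requires access to the code's internals: anything the user can do through the input deck (arbitrary forcings and boundary values), the user can use to verify the code.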
Of course doing calculation verification is the responsibility of the person doing the analysis (in sane fields of science and engineering anyway). So the “shut-up and do it yourself” response only serves to undermine the credibility of people presenting results to decision makers bereft of such analysis. On the other hand, the analyst properly has little to say on the validation question. That part of the process is owned by the decision maker.
References
[1] Curry, J., Verification, Validation and Uncertainty Quantification in Scientific Computing, http://judithcurry.com/2011/09/25/verification-validation-and-uncertainty-quantification-in-scientific-computing/, Climate Etc., Sunday 25th September, 2011.
[2] Easterbrook, S., Validating Climate Models, http://www.easterbrook.ca/steve/?p=2032, Serendipity, Tuesday 30th November, 2010.
[3] Roache, P., Building PDE Codes to be Verifiable and Validatable, Computing in Science and Engineering, IEEE, Sept-Oct, 2004.
[4] Knupp, P. and Salari, K., Code Verification by the Method of Manufactured Solutions, Tech. Rep. SAND2000-1444, Sandia National Labs, June 2000.