This was a guest post over on Pielke's site.
Dr Pielke's Honest Broker concepts resonate with me because of practical decision support experiences I've had, and this post is an attempt to share some of those from a realm pretty far removed from the geosciences. All the views and opinions expressed are my own and in no way represent the position or policy of the US Air Force, Department of Defense or US Government. I am writing as a simple student of good decision making. My background is not climate science. I am an Aeronautical Engineer with a background in computational fluid dynamics, flight test and weapons development. I got interested in discussions of climate policy because the intersection of computational physics and decision making under uncertainty is an interesting one no matter what the subject area. The discussion in this area is much more public than those I'm accustomed to, so it makes a great target of opportunity. The decision support concepts Dr Pielke discusses make a great deal of sense to me now, but I can see how hard they are for technical folks to grasp, because I used to be a very linear thinker myself as a young engineer right out of school.
My journeyman's education in decision support came when I got the chance to lead a small team doing Live Fire Test and Evaluation for the Air Force (you may not be familiar with LFT&E; it is a requirement that grew out of the Army gaming the testing of the Bradley Fighting Vehicle in the 1980s, a situation fairly accurately lampooned in the movie "Pentagon Wars"). The competing values of the different stakeholders (folks appointed by Congress to ensure sufficient realistic testing versus folks at the service level doing product development) were really an eye-opening education for a technical nerd like me. I initially thought, "If only everyone could agree on the facts, the proper course of action would be clear." How naive I was! Thankfully, the very experienced fellows working for me didn't mind training up a rash, newly minted young Captain.
It's tough for some technical specialists (engineers/scientists) to recognize worthy objectives their field of study doesn't encompass. The reaction I see from the more technically oriented folks like Tobis (see how he struggles) reminds me a lot of the reaction that engineers in product development offices would have to the role of my little Live Fire office. A difficulty we often encountered was that the LFT&E oversight folks wanted to accomplish testing that didn't have a direct payoff for the narrower product development goals that concerned the engineers. "What those people want to do is wasteful and stupid!" This parallels the recent sand berm example. The preferred explanation from the technician's perspective is that the other guy is bat-shit crazy, and his views should be ridiculed and de-legitimized. The truth is usually closer to the other guy having different objectives that aren't contained within the realm of the technician's expertise. In fact, the other person is probably being quite rational given their priors, utility function and state of knowledge.
In my little Live Fire Office we had lots of discussion about what to call the role we played, and how best to explain it to the program managers. I wish I had heard of Dr Pielke's book back then, because "Honest Broker" would have been an apt description for much of the role. We acted as a broker between the folks in the Pentagon with the mandate from Congress for sufficient, realistic testing, and the Air Force level program office with the mandate for product development. The value we brought (as we saw it) was that we were separate from the direct program office chain of command (so we weren't advocates for their position), but we understood the technical details of the particular system, and we also understood the differing values of the folks in the Pentagon (values which the folks in the program office loved to refuse to acknowledge as legitimate; sound familiar?). That position turns out to be a tough sell (program managers get offended if you seem to imply they are dishonest), so I can empathize with the virulent reaction Dr Pielke gets on applying the Honest Broker concepts to climate policy decision support. People love to take offense over their honor. That's a difficult snare to avoid as you try to make clear that, although there's nothing dishonest about advocacy, there remains significant value in honest brokering. Maybe Honest Broker wouldn't be the best title to assume, though. The first reaction out of a tight-fisted program manager would likely be, "I'm honest, why do I need you?"
One of the reasons my little office existed was the "lessons learned" from the Tri-Service Standoff Missile debacle (all good things in defense acquisition grow out of historical buffoonery). The broader Air Force leadership realized that it was counterproductive to have product development engineers and program managers constantly trying to de-legitimize the different values that the oversight stakeholders brought (differences springing largely from different appetites for risk and priors for deception) by wrangling over largely inconsequential technical nits (like tree rings in the Climate Wars). The wiser approach was to maintain an expertise whose sole job was to recognize and understand the legitimate concerns of the oversight folks and incorporate them into a decision that met the service's constraints as quickly and efficiently as possible. Rather than wasting time arguing, the product development folks could focus on product development.
The other area where I've seen this dynamic play out is in making flight test decisions. In that case, though, the values of all the stakeholders tend to align more closely, so the separation between technical expertise and decision making is less contentious (Dr Pielke's Tornado analogy). In contrast to the climate realm, where it's argued that science compels action because we're in Tornado mode, flight-test engineers understand that the boss is taking personal responsibility for putting lives at risk based on their analysis. They tend to be respectful of their crucial, but limited, role in the broader risk management process. Computational fluid dynamics can't tell us whether it's worth risking the life of an aircrew to collect that flight test data. In that case there is no confusion about who is king, and about which questions the technical expert must "pass over in silence."