Interesting content on Nuit Blanche covering a recent Paris Machine Learning meetup. The work on applying Bayesian Programming and Learning for Multi-Player Video Games was really neat: it's about developing a bot for playing StarCraft. Some additional related presentations:
- A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft
- A Bayesian Model for RTS Units Control applied to StarCraft
The part I liked best (other than the interesting application) was the collection of pointers to good probability and statistics references and resources.
We first notice that all game AI challenges can be addressed with uncertain reasoning, and present in chapter 3 the basics of our Bayesian modeling formalism. As we present probabilistic modeling as an extension of logic, it may be an easy entry to building probabilistic models for novice readers. It is not sufficient to give a strong background on Bayesian modeling, however, but there are multiple good books on the subject. We advise the reader who wants a strong intuition of Bayesian modeling to read the seminal work by Jaynes [2003], and we found chapter IV of the (free) book by MacKay [2003] to be an excellent and efficient introduction to Bayesian inference. Finally, a comprehensive review of the spectrum of applications of Bayesian programming (up to 2008) is provided by [Bessière et al., 2008].
The first three chapters of Jaynes' book are available for free download, and there is an unpublished manuscript as well as other collected writings on probability as logic. The unofficial errata and commentary for Jaynes' classic are instructive too. MacKay's book is available for free download and viewing on-screen. The accompanying software (much of which uses Octave) is free to download and use as well.
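To make the "probability as extended logic" idea those books develop a bit more concrete, here is a toy sketch of my own (not a model from the thesis or the papers above): a Bayesian update over an opponent's opening strategy given a couple of scouting observations. The strategy labels, priors, and likelihood numbers are all invented for illustration, and the observations are treated as conditionally independent for simplicity.

```python
# Toy Bayesian update over hypothetical RTS opening strategies.
# All numbers are made up for illustration; observations are assumed
# conditionally independent given the strategy (a naive-Bayes style update).
import numpy as np

strategies = ["rush", "economic", "tech"]
prior = np.array([0.3, 0.4, 0.3])                 # P(strategy), assumed

# P(observation | strategy) for two hypothetical scouting observations.
likelihoods = {
    "early_barracks": np.array([0.8, 0.3, 0.4]),
    "few_workers":    np.array([0.7, 0.1, 0.5]),
}

posterior = prior.copy()
for obs in ["early_barracks", "few_workers"]:
    posterior *= likelihoods[obs]                 # Bayes rule, unnormalized
    posterior /= posterior.sum()                  # renormalize to a probability

for s, p in zip(strategies, posterior):
    print(f"P({s} | observations) = {p:.3f}")
```

Nothing deep here, but it is the same mechanical pattern (prior times likelihood, then normalize) that the Bayesian programming formalism builds on at much larger scale.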
Synnaeve used my favorite quote from Jaynes' book as the lead-in for Chapter 6, Tactics:

It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.

I'll admit that this little morsel, more than anything else, is what really motivated my excitement about work on things like using sparse grids and adjoints for more efficient uncertainty quantification (as opposed to brute-force Monte Carlo). I think there's still plenty of room there to apply more thought, and I'm sure the pay-off will be impressive!
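For a tiny, concrete illustration of Jaynes' principle (my own sketch, not something from the thesis or the UQ work alluded to above): estimate E[cos(X)] for X ~ N(0,1), whose exact value is exp(-1/2), by plain Monte Carlo versus a 10-point Gauss-Hermite quadrature rule. The integrand, sample count, and quadrature order are all choices of mine for the demo.

```python
# Randomized vs. nonrandomized estimation of E[cos(X)], X ~ N(0, 1).
# Exact answer is exp(-1/2); the integrand and sizes are illustrative choices.
import numpy as np

exact = np.exp(-0.5)

# Randomized estimate: brute-force Monte Carlo with many samples.
rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)
mc_estimate = np.cos(samples).mean()

# Nonrandomized estimate: 10-point Gauss-Hermite quadrature.
# For X ~ N(0,1), E[f(X)] = (1/sqrt(pi)) * sum_i w_i * f(sqrt(2) * x_i).
nodes, weights = np.polynomial.hermite.hermgauss(10)
gh_estimate = (weights @ np.cos(np.sqrt(2) * nodes)) / np.sqrt(np.pi)

print(f"exact         : {exact:.12f}")
print(f"Monte Carlo   : {mc_estimate:.12f}  (error {abs(mc_estimate - exact):.2e})")
print(f"Gauss-Hermite : {gh_estimate:.12f}  (error {abs(gh_estimate - exact):.2e})")
```

On this smooth one-dimensional integrand, the deterministic rule with 10 carefully chosen points beats a hundred thousand random samples by many orders of magnitude; sparse grids and adjoint methods are about carrying that "more thought, less randomness" trade into higher-dimensional problems, which this example doesn't attempt.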
I've used the Jaynes "unofficial errata" site you refer to and found it very helpful. Readers may also find my own unofficial website useful (www.etjaynesinfo.com); it has detailed name and subject indexes for Jaynes's book, as well as a list of errata and typos.
Thanks for the link to your site: another good resource.