A computation-enabled biological perspective on cultural variation

Tuesday through Friday I attended a conference on cultural variation.  I went because I was invited and it sounded relevant to my new grant looking at cultural variation in a particular form of moral behaviour (anti-social punishment, which I will have to blog about later).  In fact, though, it was more about how to facilitate cultural integration, obviously a significant topic in the world in general, and in the Netherlands one of considerable political interest.  But the workshop was set up so that people could easily organise into sub-groups and discuss topics of interest.

One of my main interests is how cultures relate to other forms of modular decomposition in learning systems, e.g. regions in vertebrate brains, or species in ecosystems.  By having concurrent modules that each specialise on a particular approach, you may make whatever learning problem you are trying to optimise for easier to solve; presumably for cultures that problem is primarily the well-being and survival of people.  I was able to organise a group of people to talk about this, so that was interesting.
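As a toy illustration of that idea (nothing here is from the talk; the regimes and numbers are invented for illustration), here is a minimal Python sketch comparing one generalist learner against two modules that each specialise on half of a simple two-regime problem:

```python
# Toy sketch: two "modules" that each specialise on one regime of a simple
# problem, versus one generalist fit to everything.  All details invented.
import random

random.seed(0)

def target(x):
    # Two regimes: outputs cluster near 0 on the left, near 1 on the right.
    base = 0.0 if x < 0.5 else 1.0
    return base + random.gauss(0, 0.1)

data = [(x, target(x)) for x in (random.random() for _ in range(1000))]

def fit_mean(points):
    # Each "module" here is just the mean output over the data it claims.
    return sum(y for _, y in points) / len(points)

generalist = fit_mean(data)                         # one model for everything
left  = fit_mean([p for p in data if p[0] < 0.5])   # specialist, regime 1
right = fit_mean([p for p in data if p[0] >= 0.5])  # specialist, regime 2

def mse(predict):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

print("generalist MSE: ", mse(lambda x: generalist))
print("specialists MSE:", mse(lambda x: left if x < 0.5 else right))
```

The specialists' error is roughly the noise floor, while the generalist pays for averaging across both regimes, which is the sense in which modular decomposition can make a learning problem easier.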

Anyway, my talk was called A computation-enabled biological perspective on cultural variation.  The slides are here.  That's a large file -- 6 MB.  Many of the papers I mention in the talk are linked from the bottom part of my primate learning research page, which concerns human-like culture.  (I really need to rewrite my web pages, especially now that I'm doing so much work on culture.)

Software for simulations in the above articles is available from the AmonI software page.

Comments

drevicko said…
Hi Joanna, Ian Wood here...
What you wrote here put me in mind of universal Turing machines, which can be used for "universal induction": the asymptotically optimal learning (i.e. prediction) strategy. This is a branch of algorithmic information theory.

A universal Turing machine is able to "emulate" any other Turing machine (precisely what counts as emulation is only partially answered in the literature). You predict with it by running the shortest program that's consistent with the data (essentially...).
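For concreteness, here is a toy Python rendering of that idea. Real universal induction enumerates programs for a universal Turing machine and is incomputable; below, the "programs" are just patterns repeated forever, with description length equal to pattern length, so take it as a cartoon of the construction rather than the real thing:

```python
# Toy version of "predict with the shortest program consistent with the data".
# Hypotheses are patterns repeated forever; shorter pattern = simpler program.
from itertools import product

def consistent(pattern, observed):
    """True if repeating `pattern` reproduces the observed prefix."""
    return all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed)))

def predict_next(observed, max_len=8):
    # Enumerate candidate "programs" shortest-first (Occam's razor) and
    # predict with the first one that matches everything seen so far.
    for length in range(1, max_len + 1):
        for pattern in product("01", repeat=length):
            if consistent(pattern, observed):
                return pattern[len(observed) % length]
    return None  # no sufficiently short hypothesis found

print(predict_next("010101"))   # -> '0' (shortest consistent pattern: "01")
print(predict_next("001001"))   # -> '0' (shortest consistent pattern: "001")
```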

One way to make a universal Turing machine is to list all possible Turing machines (an infinite list, but doable with a finite program). A program for the universal machine is then an encoded prefix (telling which Turing machine to emulate) followed by a program for that machine.
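That "encoded prefix + program" construction is easy to sketch. Here is a minimal Python version, assuming a simple unary prefix-free code (real constructions use more compact self-delimiting codes):

```python
# Sketch of a universal machine's input format: a prefix-free code naming
# which machine to emulate, followed by that machine's own program.

def encode(machine_index, program):
    """Unary prefix (n ones then a zero) naming machine n, then the program."""
    return "1" * machine_index + "0" + program

def decode(tape):
    """Recover (machine_index, program): the unary prefix ends at the first 0."""
    machine_index = tape.index("0")
    return machine_index, tape[machine_index + 1:]

tape = encode(3, "10110")  # "run machine #3 on program 10110"
print(tape)                # -> 111010110
print(decode(tape))        # -> (3, '10110')
```

Because the prefix code is self-delimiting, the universal machine can always tell where the machine name ends and the emulated program begins.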

So, in a sense, a universal machine has 'modules' (emulated TMs) for all possible 'approaches' (TMs). I wonder if any (algorithmic) information theoretic results could be of use here?

ps: thanks for putting up the slides (: