Heuristics improve rationality; AI is improving ours.

This is a quick response to a blog post by John Danaher about Reverse Turing Tests: Are Humans Becoming More Machine-Like? which features this awesome little figure:
John Danaher's figure, click to see his blogpost.
The answer to something very close to the question John (and the article he reviews, by Brett Frischmann) asks is "yes".  But that's not quite the answer to their actual question.

As I've said in a number of talks recently – see for example slide 37 of my recent talk for Designing Moral Technologies, The Design of AI / The Root of the Moral Subject – what communication does for humans is help us approach perfect computation.  Not because communication itself necessarily does computation, but because it links all of the amazing CPUs we have in our heads.  But computing better doesn't make us more machine-like.  Computation is as natural as physics or biology, and like them it has natural laws.

Slide 37 of my Designing Moral Technologies talk.
Click for PDF of whole talk.

One example of a law of computation is bounded rationality.  Computing takes time, but its output is ordinarily only useful transiently.  In other words, we cannot expect to have a perfect, optimal answer to any problem by the time we are through caring what that answer might be.  For example, if it takes us longer to recognise a car than it does to cross the street, then we can never compute when would be a good time to cross a busy street.

This is why we use heuristics, both in our daily lives and in our AI.  A heuristic is just a short cut: a rule of thumb you use when you don't have time to work out the optimal action.
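To make the trade-off concrete, here is a minimal sketch (my own illustration, not from the post) using the travelling-salesman problem: the exhaustive search is guaranteed optimal but its cost explodes factorially, while a nearest-neighbour rule of thumb returns a good-enough tour almost instantly. Past a handful of cities, only the heuristic can answer before we stop caring.

```python
import itertools
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour, including the return leg."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force(dist):
    """Optimal tour by checking all n! orderings -- exact but intractable."""
    n = len(dist)
    best = min(itertools.permutations(range(n)),
               key=lambda t: tour_length(list(t), dist))
    return list(best)

def nearest_neighbour(dist):
    """Heuristic: always walk to the closest unvisited city -- fast, not optimal."""
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
dist = [[math.dist(p, q) for q in pts] for p in pts]

optimal = tour_length(brute_force(dist), dist)
quick = tour_length(nearest_neighbour(dist), dist)
assert quick >= optimal  # the short cut is near, but rarely at, the optimum
```

At 8 cities the brute force is still feasible; at 20 it would need more permutations than there are seconds in the age of the universe, which is bounded rationality in one line: the perfect answer arrives long after it has stopped mattering.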

So to return to John's diagram, common sense is actually a set of heuristics which humanity has communicated and compiled into cultures.  We are getting better and better at mining that wisdom (and also the discoveries of biological evolution) and building it into artefacts.  That's why AI is getting better and better.

But even what we learn via machine learning, we also come to recognise and understand as humans.  For a somewhat trivial example, the games of chess and go have both been transformed, but by no means killed, by AI.

So increasingly, machine intelligence will contain our common sense.  And increasingly, humans will do things that are actually more rational than what we did before, because we've simply learned more, including how to compute faster using machines.  But we'll never behave perfectly optimally, because computation will always take time, and time will always be passing.

References:
  • Simon, Herbert Alexander. Models of Bounded Rationality: Empirically Grounded Economic Reason. Vol. 3. MIT Press, 1982.
  • Bryson, Joanna J. "Artificial Intelligence and Pro-Social Behaviour." Collective Agency and Cooperation in Natural and Artificial Systems. Springer International Publishing, 2015. 281-306.
  • Bryson, Joanna J. "Structuring intelligence: The role of hierarchy, modularity and learning in generating intelligent behaviour." The Complex Mind. Palgrave Macmillan UK, 2012. 126-143.
See also my web page on AI & Society / machine ethics.
