People are unique, but brains are not magic, and neural networks aren't necessary for AI


Apropos of hype in AI, an op-ed Ernie Davis and Gary Marcus wrote for The New York Times came out this week: https://www.nytimes.com/2018/05/18/opinion/artificial-intelligence-challenges.html

I'm of mixed opinion about this. I totally agree with Gary that humans have a unique and incredibly-unlikely-to-be-synthetically-(unless you mean cloning)-reproduced set of cognitive traits. This is central to two of my most recent papers: "Of, For, and By the People: The Legal Lacuna of Synthetic Persons" and "Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics".

But a backlash in which people decide that AI isn't actually possible, or very smart, or something, just because no system we build happens to have our particular weird mix of cognitive constraints and primate goals, could also be a form of social endangerment. I just finally caught up with Brown & Sandholm's massively superhuman poker system (Libratus) – at a great meeting at IAST, with a bunch of brilliant economists and biologists who were incapable of understanding how something so clever could have no neural networks in it.

People are not seeing that enhanced computation is everywhere and exploding, and can give enhanced insight, including predictive and manipulative abilities. They are supernaturalist about the abilities of brains, which they (falsely, but that falseness is irrelevant here) believe neural networks to be some kind of reasonable approximation of. In the case of Libratus, AI has now completely altered the landscape of poker. So far AI cannot model the human players without itself getting manipulated, so B&S just bypassed the entire problem of bluffing etc. and went straight to the optimal equilibria, entirely overwhelming the human players because it was so far beyond their cognitive ability. Basically, they just turned poker into being about poker and not about bluffing, which eliminates a massive part of the game – though only if you play against AI. They now have a version of this system that runs on 4 cores (that is, an ordinary laptop without very much power), and while it's a bit weaker than Libratus, they believe it is probably still superhuman.
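For readers wondering what "going straight to the optimal equilibria" means without any opponent model: Libratus itself used far more sophisticated machinery (counterfactual regret minimisation over an abstraction of no-limit hold'em), but the core idea can be illustrated with a toy cousin of that algorithm, regret matching in self-play on rock-paper-scissors. The sketch below is my own illustration, not Brown & Sandholm's code; all names in it are invented for the example. Each player raises the probability of actions it regrets not having played, and the *average* of its play drifts toward the game-theoretic optimum (here, uniform 1/3 each) with no modelling of the other player at all.

```python
import random

def regret_matching_rps(iterations=20000, seed=0):
    """Two regret-matching agents play rock-paper-scissors against
    each other; returns player 0's average strategy, which converges
    toward the Nash equilibrium (1/3, 1/3, 1/3)."""
    rng = random.Random(seed)
    ACTIONS = 3  # rock, paper, scissors
    # payoff[a][b] = payoff to the player choosing a against b
    payoff = [[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]]
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]

    def strategy(reg):
        # play in proportion to positive regret; uniform if none
        pos = [max(r, 0.0) for r in reg]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

    def sample(probs):
        r, c = rng.random(), 0.0
        for i, p in enumerate(probs):
            c += p
            if r < c:
                return i
        return ACTIONS - 1

    for _ in range(iterations):
        strats = [strategy(regret[p]) for p in range(2)]
        for p in range(2):
            for a in range(ACTIONS):
                strat_sum[p][a] += strats[p][a]
        acts = [sample(strats[p]) for p in range(2)]
        for p in range(2):
            opp = acts[1 - p]
            got = payoff[acts[p]][opp]
            for a in range(ACTIONS):
                # regret of not having played a instead
                regret[p][a] += payoff[a][opp] - got
    total = sum(strat_sum[0])
    return [s / total for s in strat_sum[0]]

print(regret_matching_rps())  # each probability close to 1/3
```

The point of the toy is the one in the paragraph above: neither agent ever builds a model of its opponent, yet the average strategy still approaches unexploitable play. Scaling that idea to a game as enormous as no-limit hold'em is where the real research effort went.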

So how, in the long term, can you be sure you aren't playing against AI? Only if you are playing with friends you trust, in somebody's home. So it's kind of a return to how games used to be, but a huge shift in the present landscape.

Noam Brown is on the job market this year; hire him, if you dare negotiate with the guy who wrote Libratus.

[2023 update: the picture that was here linked to Noam's CMU home page, which is gone, so I've just updated to another period picture from CMU. Facebook in fact hired him, then OpenAI...]

Comments

Unknown said…
I just read an article about Human-Focused Turing Tests, and I remembered a statement you made in one of your articles. It was about how your perspective changed when you found out that part of the human brain operates mechanically. Could you post a link to some of the literature on this?
Joanna Bryson said…
I wouldn't say mechanically, but algorithmically. That particular lecture was about how saccading works in the midbrain. You can find descriptions of things like that in standard textbooks, like Carlson's Physiology of Behavior. https://www.pearson.com/us/higher-education/product/Carlson-Physiology-of-Behavior-11th-Edition/9780205239399.html