This is from Twitter, so you want to scroll down to the bottom of the page and then read back up to the top to get the tweets in order. The paper I mention in the one tweet I managed not to grab is "A Role for Consciousness in Action Selection", International Journal of Machine Consciousness 4(2):471-482. Update: the below is my view and understanding of the conversation, but Damien has storified his perspective on it, which is more complete.
Comments
If intelligence is the application of knowledge, then consciousness is the understanding of (the outcome of) said application. And I really don't want to mistake DNNs and probability-matching Go-playing machines for anything close to "knowing" how a move will play out over time. That is the realm of preset conditioning, and in a few months Google did what nature spent millions of years bio-engineering, one molecule at a time (talk about long run times to test a hypothesis!).
So no, consciousness is not some cake we can slap together in our Easy-Bake Oven, and morality is a construct at an even higher level. Would an AI need morality to function? No. Should it have it? When driving your kids to school, hell yes.
* I'd like to point out that Google's car didn't know/understand that it hit a bus, nor did it worry about its occupants, the vehicle's owner, or the inconvenience it caused the riders of ObjectID43095.