This is really a follow-on post from my previous one on The Intelligence Explosion, and it's mostly made of links. I posted my first "professional" blog post on Slate, arguing against something Douglas Hofstadter said (and many, many other people say and think): that Watson and Google aren't real AI. Watson and Google are real AI (that's a link to my post; note that I'm also involved in some interesting discussions in the comments, and that the post has a coauthor, Miles Brundage). They aren't exactly like people in every way, but how could they be, and why would we want them to engage in the kind of motivated action we do?
If it wasn't clear from both of the previous posts, one reason I think the belief that AI isn't here yet is a problem worth taking the time to address is that it blinds us to the real threats AI creates. The threat isn't that AI will take over the world and turn us into paper clips (or cockroaches). It's that AI makes it easier for people to control and discriminate against each other.
As such, my concerns overlap with those of the communities concerned about privacy. Just this week I was trying to explain to a researcher for the UK Parliament (Lydia Harriss) how AI interacts with big data. Rather than describe that conversation, I'll just reprint a comment recapitulating it that I posted on an interesting blog post by Eva Galperin and Jillian C. York about taking control of our online privacy, Yes, Online Privacy Is Really Possible. My comment on their article is this:
While I agree to some extent with the authors and with @Bill Castle's comments (there are technological things we can do, and how much we need to understand them depends on our professions and responsibilities), I disagree in two ways.
- Potential of legislation: Getting everyone to install (and use only browsers that support) plugins like HTTPS Everywhere is just a lot harder, and for no good reason, than legislating minimal standards of privacy, at least for browsers distributed with commercial systems. Look at it this way: the US legislates maximum standards of encryption; how much harder would it be to legislate minimum ones?
- Utility of individual action: The problem here is one of AI and machine learning. The better a model we have of people in general, the less data we need about any particular person to predict what they will do. Where do those better models come from? From the data that sloppy people give away. It's nearly impossible for any one person to give away NO data, and if everyone else is giving away LOTS of data, then even that little is enough to identify a person's likely habits, whether voting, shopping, movement, etc. So while we might protect ourselves as individuals from this or that, in the end I don't think a libertarian solution is sufficient. Once again, I think we need legislation and standards.
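To make that second point concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the data is synthetic, and the "behavioural features" and habit label stand in for whatever traces (site visits, purchases, check-ins) people actually leak. The point it illustrates is just the one above: a model fit on the data everyone else gives away can make a confident prediction about someone who revealed almost nothing.

```python
# Sketch: a population model makes sparse personal data informative.
# Synthetic data and hypothetical features throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The "sloppy" population: 10,000 people, 20 behavioural features each,
# with a binary habit label (voted / didn't, bought / didn't, ...).
n_people, n_features = 10_000, 20
X_population = rng.normal(size=(n_people, n_features))
true_weights = rng.normal(size=n_features)
y_population = (X_population @ true_weights + rng.normal(size=n_people)) > 0

# The population model is fit once, on everyone else's data.
model = LogisticRegression(max_iter=1000).fit(X_population, y_population)

# A careful person reveals only 3 of the 20 features; the unknown ones
# are imputed with the population mean (zero here, by construction).
target_person = rng.normal(size=n_features)
partial = np.zeros(n_features)
revealed = [0, 5, 12]  # the few features they leaked
partial[revealed] = target_person[revealed]

# The model still guesses, and often confidently, because most of the
# signal lives in the population model, not in the individual's data.
print("predicted habit:", model.predict([partial])[0])
print("confidence:", model.predict_proba([partial])[0].max())
```

The design point is that the three revealed features are doing very little work; the leverage comes from the weights learned from everyone else. Withholding your own data shrinks `revealed`, but it doesn't touch the model.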