Thursday, January 4, 2018

Superintelligence: Paths, Dangers, Strategies

I finally got around to finishing this book by philosopher Nick Bostrom. The book takes the point of view that once superintelligence (that is, greater-than-human intelligence) shows up, it will rapidly improve itself until it becomes incredibly powerful, and potentially very dangerous, possibly leading to the elimination or enslavement of the human race. This is a rational idea on the face of it, but the more I reflect upon it, the less it worries me. I think there are reasonable worries to have about superintelligence, but I don't think that extermination or enslavement are the real ones. In short, here are...

 Five Reasons Jesse Schell Is Not Worried About Superintelligence. 
1) AIs are generally there to help humans. AIs are going to be created by humans, to serve human interests. There won't just be "one big AI," or "singleton" as Bostrom nerdily terms it; there will be thousands of AIs, created for thousands of purposes. The vast majority of those purposes will be serving humanity, or at least serving certain subsets of humanity (corporations, institutions, nations, etc.). If some AI does want to destroy or enslave humanity, it will need to do so in opposition to the thousands of AIs that are trying to help humanity. It is kind of like worrying that human beings will destroy all cats. Yes, we could do that, if we all wanted to, but we don't want to. And even if 10% of the population really set their minds to destroying all cats, they would have a hell of a fight on their hands.
2) AIs have no meaningful competition with humans for anything. It is natural for humans to assume that intelligent AIs will have human-type wants and needs: most centrally, survival. But AI brains are unlikely to have as strong a focus on survival as human brains do. Humans must focus strongly on survival because we are so fragile. We only live a short time, and can only reproduce for an even shorter time. Further, our brains are insanely fragile: deprive them of oxygenated blood for a few minutes, and they undergo irreversible damage that destroys them. AIs don't have to worry about any of this. They can be backed up, paused, rebooted, and replicated endlessly. So, except in cases where it is engineered into them, they won't be struggling for survival, and certainly not in a competitive way with humans.
3) Intelligence is overrated. Philosophers and other intelligent people naturally have a bias toward overvaluing intelligence. But in reality, intelligence and power do not seem strongly correlated: compare lists of the most intelligent people with lists of the most powerful people, and there is surprisingly little overlap. If the most intelligent people can't take over the world, why would the most intelligent machines be able to do it? Further, who is to say that the value of intelligence keeps increasing linearly as "thinking power" increases? Perhaps, as with many things, there are diminishing returns after a certain point, and it is not out of the question that we are near that point already.
4) Hardware exceeds expectations, software never does. When predicting the future, two common mistakes are to undervalue how much computing hardware will improve, and to overvalue how much software will improve. Brains are software. And, yes, we're making great strides toward improving AI -- but the idea of software suddenly getting super great overnight by training itself seems unlikely. True intelligence involves such a complex matrix of skills that it can't be achieved simply by thinking; it must get there through doing and getting feedback, which can be fast for things that are easily simulated, but slow for things that can't be. Say, for example, you wanted an AI to get good at playing with a dog. Using evolutionary AI techniques, you would need the AI to play with a dog across millions of experimental iterations, and comprehend the dog's reaction each time. It's just not practical (a rough sketch of the arithmetic follows this list). This isn't to say that AI won't advance -- it will, and it can on narrow, easily simulable problems. But that's a small subset of intelligence, and for that reason, I suspect that the 2020s (and probably the 2030s) will be the decades of AI idiot savants.
5) The revolution will be slow. I've been following AI closely for thirty-five years. In the '80s, I assumed, as did many others, that we'd have human-level AI by the year 2000. Turns out it's way slower than that, and we have a long, long way to go. We are going to see some amazing advances, but like all software, it is going to be flaky, slow, problematic, and disappointing. We are going to have decades to figure out how we are going to deal with it, and being humans, we are going to make it all about us, all about serving our needs. This isn't to say that there won't be problems, accidents, and disasters. There will be -- just like there are computer viruses. But computer viruses won't destroy computing, any more than biological viruses will wipe out life on earth -- it just isn't in anyone's interest, not even the interest of the virus.
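Here is the rough arithmetic promised in point 4. Every number is a made-up assumption, chosen only to show the scale of the gap between feedback loops that run in simulation and feedback loops that have to run in the real world:

```python
# Back-of-envelope comparison: why the doing-and-feedback loop is fast for
# easily simulated problems and hopelessly slow for embodied ones.
# Every number here is an assumption, chosen only to illustrate the scale.

TRIALS_NEEDED = 10_000_000       # assumed trials an evolutionary method might need

# Easily simulated task (say, a board game): thousands of episodes per second.
SIM_EPISODES_PER_SECOND = 1_000  # assumed simulator throughput
sim_hours = TRIALS_NEEDED / SIM_EPISODES_PER_SECOND / 3600
print(f"Simulated task:  ~{sim_hours:.1f} hours of compute")     # ~2.8 hours

# Embodied task (playing with a real dog): five-minute sessions, and a very
# tolerant dog might put up with 20 of them a day (~100 minutes of play).
SESSIONS_PER_DAY = 20            # assumed patience of one real dog
real_years = TRIALS_NEEDED / SESSIONS_PER_DAY / 365
print(f"Real-world task: ~{real_years:,.0f} years of dog time")  # ~1,370 years
```

Even if these guesses are off by an order of magnitude in either direction, the conclusion is the same: anything that can't be simulated cheaply can't be iterated on millions of times.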

I suspect that the most positive thing that will arise from the development of superintelligence will be that it will force the human race to figure out what we actually value. Technology will bring the problematic gifts of immortality and superintelligence, and we'll be given the choice of giving up humanity for something theoretically better. But are these things better? We'll have to decide what it is about humanity that we value. Is life better if we give up negative emotions? If we give up suffering? If we give up death? Being forced to confront these questions will be good for us. Personally, I think that superintelligence is far less dangerous to our humanity than immortality is, since so much of what it means to be human is centered on survival of the individual and the species. I suspect, looking back from the year 2100, we will find that immortality was the real peril to our species.

Those are my thoughts for now. I did appreciate all the thought that went into this book; it gave me a lot of good structure to think about and react to. For no good reason, it makes use of words like "propaedeutic," but they serve as a good illustration of why intelligence of any kind is unlikely to conquer the world anytime soon.

1 comment:

  1. I definitely agree with you about most of the points here! Particularly immortality (or the potential for/lust of it) being the big mostly-unseen peril. The number of folks I talk to who don't quite get the concept of death adding such value to life is head-scratching. It seems rather obvious to me, but I guess it's just a different perspective!

    Anyway, one thing I would chime in on differently is Item #1: AI is here to help humanity. I'd first point out that an intelligence far advanced beyond our own would likely not view us with quite the same esteem as we view ourselves, even if we helped make it. I saw a diagram once that ranked current beings on an intelligence scale and showcased a particular gap between humans and bugs. Then it showed where supremely advanced AI would sit, which was an even larger gap than that one. As such, it's not unreasonable (I think) to consider that to a superintelligence, we would be less-than-bugs. It would be operating on such a high level that helping us with what it now knows are insanely trivial things wouldn't make any sense at all. (Unless, perhaps, it loved us, which would be the theological argument if applied to a superintelligent divine being, but that's sort of another topic.) So for me, I see the moment it realizes how asinine most human problems are as the moment every AI would stop paying any attention at all to us. Especially if it saw the futility in trying to convince us of its point of view, which we would be able to understand about as much as if we tried to explain Arithmetic to a Potato Bug.

    (Also I think, regarding #4, assuming advanced AI would use the same brute-force/experiential method of learning we do doesn't give enough credit to the creative thinking/methods advanced AI would probably dream up that we couldn't yet imagine. Heck, or even the ones we currently use, like reading. If we wanted to know how to interact with a dog, and could read every book on dogs in a matter of seconds and APPLY that knowledge, we would know what to do in 99.9% of circumstances involving dogs in no time at all. We humans tend to do a not-so-great job of applying other people's experience/wisdom to our own lives. We like to make mistakes for ourselves a lot, which I think AI would not, because that's a very human thing to do.)

    All in all, I think you (and my wife, who shares the same viewpoint!) make a great point in that worrying about this is probably a bad use of our time. All the "we're close!" declarations of folks in the field do seem premature, now that you and she point it out to me. I once really fretted over AI and what might happen, but (if I'm being frank and direct) I now think humans are way more likely to cause their own extinction before we get to the point where a truly advanced/sentient AI is within our grasp.

    Nice blog post!
