Thursday, January 4, 2018
Superintelligence: Paths, Dangers, Strategies
Five Reasons Jesse Schell Is Not Worried About Superintelligence.
1) AIs are generally there to help humans. AIs are going to be created by humans, to serve human interests. There won't just be "one big AI", or "singleton" as Bostrom nerdily terms it; there will be thousands of AIs, created for thousands of purposes. The vast majority of these purposes will be serving humanity, or at least serving certain subsets of humanity (corporations, institutions, nations, etc.). If some AI does want to destroy or enslave humanity, it will need to do so in opposition to the thousands of AIs that are trying to help humanity. It is kind of like worrying that human beings will destroy all cats. Yes, we could do that, if we all wanted to, but we don't want to. And even if 10% of the population really set their minds to destroying all cats, they would have a hell of a fight on their hands.
2) AIs have no meaningful competition with humans for anything. It is natural for humans to assume that intelligent AIs will have human-type wants and needs: most centrally, survival. But AI minds are unlikely to focus on survival as strongly as human brains do. Humans must focus strongly on survival because we are so fragile. We only live a short time, and can only reproduce for an even shorter time. Further, our brains are insanely fragile. Deprived of oxygenated blood for just a few minutes, our brains undergo irreversible chemical reactions that completely destroy them. AIs don't have to worry about any of this. They can be backed up, paused, rebooted, and replicated endlessly. So, except in cases where it is engineered into them, they won't be struggling for survival, and certainly not in a competitive way with humans.
3) Intelligence is overrated. Philosophers and other intelligent people naturally have a bias towards overvaluing intelligence. But in reality, intelligence and power do not seem strongly correlated. Take a look at lists of the most intelligent people and lists of the most powerful people: the overlap is remarkably small. If the most intelligent people can't take over the world, why would the most intelligent machines be able to do it? Further, who is to say that the value of intelligence continues to increase linearly as "thinking power" increases? Perhaps, as with many things, there are diminishing returns after a certain point, and it is not out of the question that we are near that point already.
4) Hardware exceeds expectations, software never does. When predicting the future, two common mistakes are to undervalue how much computing hardware will improve, and to overvalue how much software will improve. Brains are software. And, yes, we're making great strides towards improving AI -- but the idea of software suddenly getting super great overnight by training itself seems unlikely, because true intelligence involves such a complex matrix, and it can't be achieved simply by thinking. It must get there through doing and getting feedback, which can be fast for things easily simulated, but slow for things that aren't. Say, for example, you wanted an AI to get good at playing with a dog. Using evolutionary AI techniques, you would need the AI to play with a dog in millions of experimental iterations, and comprehend the dog's reaction. It's just not practical. This isn't to say that AI won't advance -- it will, and it can on one-dimensional simulable problems. But that's a small subset of intelligence, and for that reason, I suspect that the 2020s (and probably the 2030s) will be the decades of AI idiot savants.
5) The revolution will be slow. I've been following AI closely for thirty-five years. In the 80's, I assumed, as did many others, that we'd have human level AI by the year 2000. Turns out it's way slower than that, and we have a long, long way to go. We are going to see some amazing advances, but like all software, it is going to be flaky, slow, problematic and disappointing. We are going to have decades to figure out how we are going to deal with it, and being humans, we are going to make it all about us, all about serving our needs. This isn't to say that there won't be problems, accidents, and disasters. There will be -- just like there are computer viruses. But computer viruses won't destroy computing, any more than viruses will wipe out life on earth -- it just isn't in anyone's interest, not even the interest of the virus.
I suspect that the most positive thing that will arise from the development of superintelligence will be that it will force the human race to figure out what we actually value. Technology will bring the problematic gifts of immortality and superintelligence, and we'll be given the choice of giving up humanity for something theoretically better. But are these things better? We'll have to decide what it is about humanity that we value. Is life better if we give up negative emotions? If we give up suffering? If we give up death? Being forced to confront these questions will be good for us. Personally, I think that superintelligence is far less dangerous to our humanity than immortality is, since so much of what it means to be human is centered on survival of the individual and the species. I suspect, looking back from the year 2100, we will find that immortality was the real peril to our species.
Those are my thoughts for now. I did appreciate all the thought that went into this book; it gave a lot of good structure to think about and react to. For no good reason, it makes use of words like "propaedeutic," but they serve as a good illustration as to why intelligence of any kind is unlikely to conquer the world anytime soon.