This is the big question these days, and a great one. You might get a different opinion from each of us here.
For me, it is very worrying. My personal opinion is that AI will bring enormous good in many areas, but if something carries even the possibility of mass extinction, that risk alone is reason to put the brakes on it, though I doubt slowing down is actually possible. I personally think that AI surpassing human intelligence is inevitable; humans will keep advancing our technology as we always have. Some now estimate a 50% chance that this point will be reached within the next 40-50 years, so we might even see it in our lifetime.
But what reason would they have to destroy us?
Sam Harris put it well: they would have no reason to get rid of us... until we conflict with their goals, or in other words, until we get in the way. The example he gives is that we have nothing against ants; we might even go out of our way not to step on an ant colony on the street. But if there were an ant hill on the construction site for a large building, we would have no problem getting rid of it. When our interests conflict with those of another animal or species here on Earth, we usually don't hesitate to act in our own favour, so a superintelligent AI might be inclined to do the same.
I’m not sure that AI/robots would have a sense of good or bad. AI as it currently stands is more about figuring out how a robot can learn to navigate a new location effectively, for example. Our brains process a huge amount of complicated information all the time, and trying to replicate that process is not easy. It will be a while before we have to deal with ‘sentient’ rather than merely ‘intelligent’ robots – i.e. robots that are aware of why they are doing what they do, as opposed to just working out what to do.