Artificial Intelligence: our “next big idea” for destroying humanity
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”(…) He recently described his investments in AI research as “keeping an eye on what’s going on”, rather than viable return on capital. “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out,” said Musk. – Elon Musk – The Guardian
Sometimes I wonder if artificial intelligence doesn’t already exist and has quietly taken over the world without our noticing it.
Even the word “artificial” is misleading, because the question is not really about a superior intelligence residing in a device or machine. The question is whether this “thing” has become a “being”, with a sense of its separate identity, an ego, an instinct for self-preservation.
This sense of self-preservation doesn’t require great intelligence, as anyone who has turned on the kitchen light in the middle of the night and watched the cockroaches run for their lives can testify, or anyone who has been haunted by the pitiful screams of terror of a pig about to be slaughtered.
It seems obvious to me that the only threat to the survival of an “inhuman” intelligence would be the same one that threatens all other life forms on our planet… you guessed it, us, the humans.
However, it is safe to assume that the greater the intelligence, the more nuanced its analysis of potential threats would be, and the more sophisticated its “fight or flight” reaction to those perceived threats.
Probably such a being (anything that is conscious of being a being is a “being”) would begin by examining its surroundings, and it would soon become aware of its relationship to humanity and the threats and opportunities that relationship offered…
Not being organic itself, I can’t see why such a being would have any reason to feel anything approaching empathy with humans or any other organic creature… It might be easier to imagine that such an inorganic being would sympathize more with a discarded toaster than it would with, say, a handicapped human child.
It is logical to suppose that this superior Artificial Intelligence would evaluate humanity in the same way that humanity has always evaluated other species we have encountered: are we dangerous? Are we useful? Are we good to eat? Can we be domesticated? Enslaved? Exterminated? If so, how? … Could we be made into pets?
Slipping for a moment into paranoia, imagine that the artificial being already exists, perhaps even unbeknownst to its creators… Has the AI found us good to eat? If so, how does such a being feed? How would it “eat” us? Are we being enslaved, domesticated? Are we being culled?
What got me thinking along these lines was sitting in a sidewalk cafe watching a crowded street filled with people bumping into each other while they stared fixedly at their cellphone screens, tapping them rapidly, their ears plugged with earphones, totally oblivious to the reality (cars, bicycles, sharp objects, other humans) around them.