So, this post by Andrew Green got me thinking. He said:
And before we get into a "computers can't evolve a thought process" argument, have a quick look into evolutionary algorithms, which is a really neat branch of computer science that uses the ideas of evolutionary theory in programming and has gotten some really amazing results.
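For anyone who hasn't run into them, here is a toy sketch of what Andrew is describing. It's my own illustration in Python, not anything from his post, and the target phrase and parameters are made up for the demo: a population of random guesses gets scored, the fittest are bred together with a little mutation, and after a while the population "discovers" the answer without anyone spelling it out in the code.

```python
# Toy evolutionary algorithm (my own illustration, not from Andrew's post).
# It evolves random strings toward a target phrase using selection,
# crossover, and mutation -- the basic loop of evolutionary computation.
import random
import string

TARGET = "THE PERFECT DEFENSE"          # made-up target for the demo
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 200
MUTATION_RATE = 0.02

def fitness(candidate):
    # Score = how many characters already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def random_individual():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def mutate(candidate):
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = [random_individual() for _ in range(POP_SIZE)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    # Keep the fittest half as parents and breed the next generation from them.
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(f"Generation {generation}: {best}")
```

Run it a few times; it usually locks onto the phrase within a few hundred generations.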
I once wrote a science fiction novel that I never tried to sell. Titled The Perfect Defense, its first chapter appeared in a magazine back in 1984, before the movie The Terminator came out. The plot was set in the future: mankind's computers had revolted and were chasing what was left of humanity across intergalactic space in an effort to annihilate humans completely. The reason the computers took over, as one of the characters notes, was because, "We never had a good definition of life and didn't realize, until it was too late, that even the earliest computers were life-forms of a sort. We just couldn't see it because we defined life as organic and intelligence as being comparable to ours. We had no useful definition of consciousness, until the machines revolted and began to kill us by the billions. We were wearing blinders, until it was too late."

In my novel humanity survives, but only because humans had devised the perfect defense. But that's just a novel. In reality, I'm not sure we'd survive if future computers got sentient, smart, and nasty.
How far-fetched is this?
In the past I've written about the possibility of global disaster coming our way in the form of an asteroid or meteor impact, an exploding supervolcano, a worldwide pandemic, a nuclear war, and other events that could either bring down civilization or even precipitate the extinction of mankind. I've always thought these other things were possible, though not necessarily probable, within my lifetime.
Threats from computers may sound a bit far-fetched, sort of like science fiction. But people as eminent as the great British physicist Stephen Hawking have also visited this question, wondering what the consequences would be if computers one day surpassed us in intellect. Another man, Bill Joy, co-founder and chief scientist of the computer company Sun Microsystems, wrote of the same prospects in the March 2000 issue of Wired. He was, frankly, pessimistic.
In 2000, the Singularity Institute for Artificial Intelligence was founded to examine this potential problem. SIAI's goal is to ensure that powerful computers and computer programs are not dangerous to humanity if or when they're created. You can check for yourself whether SIAI is made up of quacks and kooks, but many of the names associated with it are definitely not quacky or kooky, which lends the organization credibility.
In an article in the June 21, 1999 issue of Business Week, Otis Port wrote about the possibility of producing neurosilicon computers. They would be hybrid biocomputers that mate living nerve cells, or neurons, with silicon circuits.
Still sound far-fetched? Groundwork for this was laid at places like Georgia Tech and the Institute of Mathematical Sciences in Madras, India, among others. Initially, the experiments used neurons from lower life-forms such as spiny lobsters and mussels. But eventually the scientists made artificial neurons, built from electronic parts bought at Radio Shack, that succeeded in fooling real neurons into accepting them as other real neurons. In other words, they had created a synthetic, though primitive, nervous system.
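Just to put the "artificial neuron" idea in more familiar terms, here is a crude software version of the kind of behavior such a circuit has to reproduce. This is my own toy model, a standard leaky integrate-and-fire cell with textbook parameter values, not the actual Georgia Tech design: the cell's voltage drifts back toward rest, charges up from its inputs, and fires a spike when it crosses a threshold.

```python
# Leaky integrate-and-fire neuron -- a toy software stand-in for the kind of
# spiking behavior an electronic "artificial neuron" has to reproduce.
# (My own illustration; parameter values are typical textbook numbers.)
V_REST = -65.0       # resting potential (mV)
V_THRESHOLD = -50.0  # firing threshold (mV)
V_RESET = -70.0      # potential right after a spike (mV)
TAU_MS = 20.0        # membrane time constant (ms)
DT_MS = 1.0          # simulation time step (ms)

def simulate(input_current, steps=200):
    """Return the time steps (in ms) at which the model neuron fires."""
    v = V_REST
    spike_times = []
    for t in range(steps):
        # Voltage leaks back toward rest while the input current pushes it up.
        v += (-(v - V_REST) + input_current) * (DT_MS / TAU_MS)
        if v >= V_THRESHOLD:
            spike_times.append(t)
            v = V_RESET  # fire, then reset
    return spike_times

# A stronger input current makes the cell fire sooner and more often.
print(simulate(input_current=20.0))
print(simulate(input_current=40.0))
```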
Is a computer that really thinks even possible? We don't know. But as far back as the middle of the 20th century, predictions have been made about the day we would finally create an intelligent computer. In the 1960s, estimates were made that we'd have one within 20 years. As early as 1950, computer genius Alan Turing estimated we'd have one by the year 2000. But the years have come and gone and, though we have faster computers, we don't seem to be appreciably closer to a thinking, conscious computer. Then, of course, there are others who, for one reason or another, say it will never happen. Maybe they're right.
But the principal problem with answering the question of whether it's possible for a computer to think is that not only do we not yet know what makes our own brains work, we don't even know what consciousness is. Some people in the field believe consciousness doesn't actually exist; it's just an illusion, whatever that means.
But let's take the scenario where we create a computer that runs on software sophisticated enough that it can finally think. What happens then?
Movie computers like the HAL 9000 in 2001: A Space Odyssey, the Nestor NS-5 named Sonny in I, Robot, and Joshua in WarGames had human attributes, including human needs and desires. That's because those movies and novels aren't really about computers, but about us. If machines were to gain self-consciousness, they most likely wouldn't be like us at all.
And what happens if a powerful sentient computer develops any kind of survival instinct? (We don't know what causes that, either.) Would such a computer think of us as friends? Gods? The enemy? What if it either didn't like us or perceived us as a threat? Imagine what would happen if a computer that was tied into the Internet, our defense systems, and millions of other computers around the world, and that could think faster than any person has ever been able to think, decided it didn't like us. Or didn't want us around. We'd probably never even see it coming, particularly if we didn't recognize it as an intelligence with a survival instinct to begin with.
I'm not a technophobe or trying to cause undue alarm, but these are some of the things I think about when I'm trying to get to sleep at night.

I don't need a lot of sleep, so I do lots of thinking before I get to sleep. It's part of what I get paid for: an overactive imagination and a grasp of "plausible threats." Not to mention that I work with people who are trying to do this very thing: people who make the world's fastest computer set-ups just for giggles, people in AI, and all manner of other stuff that I'd best not talk about... :lol:
I've been placing other plausible threats to humanity further into the back of my mind as I consider the possibility of a future computer threat. Things like asteroids and comets, supervolcanoes, disease, terrorists, nuclear weapons, even World War III would leave survivors. I'm not so sure computers would.
Look at that computer sitting atop your desk tonight: that may one day be the enemy.