The Future Threat

elder999

So, this post by Andrew Green got me thinking. He said:
Andrew Green said:
And before we get into a computers can't evolve a thought process, have a quick look into evolutionary algorithms, which is a really neat branch of Computer science that uses the ideas of evolutionary theory in programming and has gotten some really amazing results.
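(For the curious: below is a minimal sketch of the kind of evolutionary algorithm Andrew means. A population of random strings is repeatedly selected, crossed over, and mutated toward a fitness target. The target string, mutation rate, and population size are all illustrative choices, nothing from his post.)

```python
import random

# Toy evolutionary algorithm: evolve random strings toward a target by
# selection, crossover, and mutation. All knobs (target string, mutation
# rate, population size) are illustrative choices, not from the post.

TARGET = "THINKING MACHINE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for gen in range(500):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    parents = population[:50]  # keep the fittest quarter as parents
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(200)]

print(f"generation {gen}: best candidate = {best!r}")
```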
I once wrote a science fiction novel that I never tried to sell. Titled The Perfect Defense, its first chapter appeared in a magazine back in 1984, before the movie The Terminator came out. The plot was set in the future. Mankind's computers had revolted and were chasing what was left of humanity across intergalactic space in an effort to annihilate humans completely. The reason the computers took over, as one of the characters notes, was because, "We never had a good definition of life and didn't realize, until it was too late, that even the earliest computers were life-forms of a sort. We just couldn't see it because we defined life as 'organic' and intelligence as being comparable to ours. We had no useful definition of consciousness until the machines revolted and began to kill us by the billions. We were wearing blinders until it was too late."
In my novel humanity survives, but only because humans had devised "the perfect defense." But that's just a novel. In reality, I'm not sure we'd survive if future computers got sentient, smart, and nasty.


How far-fetched is this?


In the past I've written about the possibility of global disaster that could come our way in the form of an asteroid or meteor impact, an exploding supervolcano, a worldwide pandemic, a nuclear war, and other events that could either bring down civilization or even precipitate the extinction of mankind. I've always thought these other things were possible, though not necessarily probable, within my lifetime.


Threats from computers may sound a bit far-fetched, sort of like science fiction. But people as eminent as the great British physicist Stephen Hawking have also visited this question, wondering what the consequences would be if computers one day surpassed us in intellect. Another man, Bill Joy, co-founder and chief scientist of Sun Microsystems, wrote about the same prospects in the April 2000 issue of Wired. He was, frankly, pessimistic.


In 2000, the Singularity Institute for Artificial Intelligence was founded to examine this potential problem. SIAI's goal is to ensure that powerful computers and computer programs are not dangerous to humanity if or when they're created. You can check for yourself whether SIAI is made up of quacks and kooks, but many of the names associated with it are definitely not quacky or kooky, which lends the organization credibility.


In an article in the June 21, 1999 issue of Business Week, Otis Port wrote about the possibility of producing neurosilicon computers. They would be hybrid “biocomputers” that mate living nerve cells, or neurons, with silicon circuits.


Still sound far-fetched? Groundwork was laid for this at places like Georgia Tech and the Institute of Mathematical Sciences in Madras, India, among others. Initially, the experiments used neurons from "lower" life-forms such as spiny lobsters and mussels. But eventually the scientists built artificial neurons from electronic parts bought at Radio Shack, and those artificial neurons succeeded in fooling real neurons into accepting them as other "real" neurons. In other words, they had created a synthetic, though primitive, nervous system.
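To give a feel for what an "artificial neuron" boils down to, here's a minimal sketch of a leaky integrate-and-fire model, a standard textbook neuron model; the parameters are illustrative guesses, not taken from those experiments:

```python
# Minimal leaky integrate-and-fire neuron: membrane voltage leaks toward
# rest, integrates input current, and emits a spike when it crosses a
# threshold. All parameters are illustrative, not from the experiments.

def simulate(input_current=1.8, steps=1000, dt=0.1):
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
    tau = 10.0           # membrane time constant (ms)
    v = v_rest
    spikes = []
    for step in range(steps):
        # dv/dt = (-(v - v_rest) + I) / tau
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time in ms
            v = v_reset
    return spikes

print(f"{len(simulate())} spikes in 100 ms")
```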


Is a computer that really thinks even possible? We don't know. But as far back as the middle of the 20th century, predictions have been made about when we would finally create an "intelligent" computer. In the 1960s, estimates were made that we'd have one within 20 years. As far back as 1950, computer genius Alan Turing estimated we'd have one by the year 2000. But the years have come and gone and, though we have faster computers, we don't seem to be appreciably closer to a "thinking" and "conscious" computer. Then, of course, there are others who, for one reason or another, say it will never happen. Maybe they're right.

But the principal problem with answering the question of whether it's possible for a computer to think is that not only do we not yet know what makes our own brains work, we don't even know what consciousness is. Some people in the field believe consciousness doesn't actually exist; it's just an illusion, whatever that means.


But let’s take the scenario where we create a computer that runs on software sophisticated enough that it can finally “think.” What happens then?


Movie computers like the HAL 9000 in 2001: A Space Odyssey, the Nestor NS-5 named Sonny in I, Robot, and Joshua in WarGames had human attributes, including human needs and desires. That's because those movies and novels aren't really about computers, but about us. If machines were to gain self-consciousness, they most likely wouldn't be like us at all.


And what happens if a powerful sentient computer develops any kind of "survival instinct"? (We don't know what causes that, either.) Would such a computer think of us as friends? Gods? The enemy? What if it either didn't like us or perceived us as a threat? Imagine what would happen if a computer that was tied into the Internet, our defense systems, and millions of other computers around the world, and could think faster than any person has ever been able to think, decided it didn't like us, or didn't want us around! We'd probably never even see it coming, particularly if we didn't recognize it as an intelligence with a survival instinct to begin with.

I'm not a technophobe or trying to cause undue alarm, but these are some of the things I think about when I'm trying to get to sleep at night. I don't need a lot of sleep, so I do lots of thinking before I get to sleep. It's part of what I get paid for: an overactive imagination and a grasp of "plausible threats." Not to mention that I work with people who are trying to do this very thing: people who make the world's fastest computer set-ups just for giggles, people in AI, and all manner of other stuff that I best not talk about... :lol:


I’ve been placing other plausible threats to humanity further into the back of my mind as I consider the possibility of a future computer threat. Things like asteroids and comets, supervolcanoes, disease, terrorists, nuclear weapons, even World War III would leave survivors. I’m not so sure computers would.


Look at that computer sitting atop your desk tonight: That may one day be the enemy.
 
Look at that computer sitting atop your desk tonight: That may one day be the enemy.

As a programmer, I can honestly say today that the computer on my desk is indeed my enemy.

I'm not as worried about intelligent computers as I am about really dumb computers being used to handle tasks where errors can be catastrophic. Think of the Mars Climate Orbiter, which cheerfully plunged into the Martian atmosphere because a NASA subcontractor was using imperial units rather than metric. Now consider what happens when your buggy code is running things like nuclear reactors or nuclear missile defense. :nuke:
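Here's a toy sketch of that class of bug; the numbers and function names are made up, but the failure mode (imperial values fed into an interface that silently assumes SI) is the real one:

```python
# Toy illustration of a units bug like the Mars Climate Orbiter's:
# one module reports thrust impulse in pound-force seconds, another
# consumes it assuming newton-seconds. All values here are made up.

LBF_S_TO_N_S = 4.44822  # 1 pound-force second in newton-seconds

def ground_software_impulse():
    # Subcontractor's code: returns impulse in lbf*s (imperial).
    return 100.0

def update_trajectory(impulse_newton_seconds):
    # Navigation code silently assumes SI units.
    return impulse_newton_seconds * 0.001  # toy delta-v estimate

# The bug: passing imperial straight into an SI interface.
bad = update_trajectory(ground_software_impulse())

# The fix: convert at the boundary (or better, use a units library or
# distinct types so the mix-up can't slip through unnoticed).
good = update_trajectory(ground_software_impulse() * LBF_S_TO_N_S)

print(f"uncorrected: {bad:.4f}, corrected: {good:.4f}  (~4.45x error)")
```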
 
In the past I've written about the possibility of global disaster that could come our way in the form of an asteroid or meteor impact, an exploding supervolcano, a worldwide pandemic, a nuclear war, and other events that could either bring down civilization or even precipitate the extinction of mankind. I've always thought these other things were possible, though not necessarily probable, within my lifetime.

How about simply running out of oil? We don't really know just how much is left in the world.

Without a fuel source ready to take its place, we could come to a literal standstill on a large scale. Those of us living in crowded urban areas, removed from actual food production, would literally starve to death once the supermarkets and our pantries began to run out of supplies. There would be no way to ship in food on a scale meaningful for the hundreds of thousands or even millions of people who live in our big cities, no ready source of renewable food within those cities, and no fuel with which to evacuate people to the food supplies. You've got one tank of gas in your car, you've got your bicycle, and you've got your feet. Think you can find food when a million other urbanites are crowding the same highways, trying to get out of town and find the same food you're looking for?
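Just to put rough numbers on it, here's a back-of-envelope sketch; every figure in it is an illustrative guess, not sourced data:

```python
# Back-of-envelope: how long does a city's food last with no resupply?
# Every number here is an illustrative guess, not a sourced figure.

population = 1_000_000          # a mid-sized metro area
calories_per_person_day = 2000
days_of_stock_on_shelves = 3    # rough "just in time" retail guess
pantry_days_per_household = 7   # guess at average home food stores

total_days = days_of_stock_on_shelves + pantry_days_per_household
total_calories = population * calories_per_person_day * total_days

print(f"~{total_days} days of food for {population:,} people "
      f"= {total_calories:,} calories already in the city")
print("After that, with no fuel for trucks, the shelves stay empty.")
```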
 
I think that's pretty good stuff you've written down for this post. However, I think that if humanity decided to shoot itself in the foot by creating dangerous AI, then it should bleed a little. Maybe the AI would enslave the human race. Then I could raise an army and rise up against the machines, using the death of an infant as a catalyst for an anti-thinking-machine crusade spanning millennia, destroying every calculating machine so none could ever harm humanity again.

Humans would have to resort to expanding their own minds to do serious calculations when it came to anything from urban planning, to transportation, to interstellar warfare... through the use of drugs and crazy yoga... Eventually, we would again shoot ourselves in the foot by creating "superbeings" that would also enslave the human race, and the cycle goes on...

Or I could stand back while humanity gives birth to primitive Transformers and I end up hiding out in the woods, harvesting wheatgrass and raising flocks of guinea pigs.
 
I don't think it's too far-fetched. From what shows I've seen on the Discovery and History channels (Future Weapons, Beyond 2000, Modern Marvels, et al.), the technology is growing. We're seeing more and more robotics coming to light and getting better: that weird walking dog that can carry heavy loads and is difficult to knock down, and so on. The Japanese are really trying hard to make robots, or even androids, and getting them to look fairly realistic.
Computer technology that operates on its own is already here; look at major automobile manufacturers today.
As for a sinister virus/software program coming to light/awareness on its own (aka Skynet)... well, I'm not computer-savvy enough to say whether that's even possible, but knowledge IS power. One person could hack into the system and order attacks from unmanned vehicles on civilian and military targets. Some will argue that's impossible to do... maybe, but what about seriously ticking off the guy who wrote the program? Couldn't he be sitting in some basement somewhere, tapping out command codes, accessing backdoors that HE created himself for the "just in case" scenario (good or bad)?
We are putting too much trust in technology, and our reliance upon it is staggering. I'm just as guilty of it as anyone else. I've seen people actually freak out because they somehow forgot their PDAs or whatever at home or the office and can't get to them.
We do need to be careful, trust humans more, and be more accepting of mistakes along the way.
It's scary to think about, and not the "fun kind" of scary you're supposed to get at the movies.
 
How about simply running out of oil? We don't really know just how much is left in the world.

Who cares? I don't. I've known it was coming in our lifetimes for most of my life... see below.

Without a fuel source ready to take its place, we could come to a literal standstill on a large scale. Those of us living in crowded urban areas, removed from actual food production, would literally starve to death once the supermarkets and our pantries began to run out of supplies. There would be no way to ship in food on a scale meaningful for the hundreds of thousands or even millions of people who live in our big cities, no ready source of renewable food within those cities, and no fuel with which to evacuate people to the food supplies. You've got one tank of gas in your car, you've got your bicycle, and you've got your feet. Think you can find food when a million other urbanites are crowding the same highways, trying to get out of town and find the same food you're looking for?


That's why I live in a high mountain valley, with my own food, a well, power sources, and a very large stockpile of stuff... I've also got livestock... let the world go to hell... :lol:
 
Who cares? I don't. I've known it was coming in our lifetimes for most of my life... see below.




That's why I live in a high mountain valley, with my own food, a well, power sources, and a very large stockpile of stuff... I've also got livestock... let the world go to hell... :lol:


That's exactly what I'm talkin' about.
 
Well hell, we really have no more natural predators, so the only thing left to do is make our own doom, like we always do.

That's true only so long as we still have our tools and weapons and live in large groups. If we somehow lose those tools and/or are separated from our groups, we are prey.
 
That's why I live in a high mountain valley, with my own food, a well, power sources, and a very large stockpile of stuff... I've also got livestock... let the world go to hell... :lol:
Hope you don't shoot me if I happen to wander onto your property thinking I'm some mini-terminator... :lol:
 
Hope you don't shoot me if I happen to wander onto your property thinking I'm some mini-terminator... :lol:


You'll be safe... as long as you approach the fence with your hands raised high over your head... and each one is holding an unopened bottle of tequila, the good stuff, none of tha' Jose Cuervo crap :lol: Patron is okay, but Corazon or Milagro is even better, for me anyway :lol:

I'd never risk breaking the bottles by shooting you...:lol:
 
The big, hidden in plain view, problem with computer technology is that it just isn't reliable in many of the forms presented to the public.

I speak here as one of the architects of our doom, as my job is creating control systems for power distribution/generation networks :eek:.

We use outdated hardware running UNIX because the flash, powerful, Windows OS'd, hyper-speed boxes just don't cut it in the reliability stakes.

The Terminator scenario has its roots in reality but is a long way from being reality yet.

That said, the big problem really is that much of the infrastructure relies upon computers one way or another these days. More than that, it depends upon the reliability of the software too, and the databases that it all rests upon.

As a professional in the field I can tell you that everything is not all it's cracked up to be (everyone exhibits no surprise :D). Our stuff is great because it is engineered to be resilient (with multiple redundancy factored in) and we, with no undue modesty, are amongst the very best at what we do.

Without that redundancy and obsessive focus on reliability (I've had one of my systems performing perfectly whilst the substation was on fire and the CCU was being hosed down :)), computer-controlled systems are an accident waiting to happen.

It will happen - guaranteed.
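For a flavor of what "multiple redundancy factored in" can mean at the code level, here is a minimal sketch of a triple-modular-redundancy voter; it's a generic textbook pattern, not our actual substation code:

```python
# Minimal triple-modular-redundancy (TMR) voter: read the same value
# from three independent channels and act on the majority, so a single
# failed sensor or crashed process can't drive the output on its own.
# This is a generic textbook pattern, not actual substation code.

from collections import Counter

def vote(readings):
    """Return the majority reading, or raise if no two channels agree."""
    value, count = Counter(readings).most_common(1)[0]
    if count < 2:
        raise RuntimeError(f"no quorum among channels: {readings}")
    return value

# Three redundant breaker-state channels; channel B has failed.
channel_a, channel_b, channel_c = "CLOSED", "FAULT", "CLOSED"
print(vote([channel_a, channel_b, channel_c]))  # -> CLOSED, despite one failure
```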
 
As a programmer, I can honestly say today that the computer on my desk is indeed my enemy.

As a computer guy who does all things computer and some network but does not do programming, I must say I agree, except ALL computers and servers, including the PC I am currently on, are my enemy... and furthermore, all I have to sa..........

Dave, this conversation can serve no purpose anymore. Goodbye.
 
You'll be safe... as long as you approach the fence with your hands raised high over your head... and each one is holding an unopened bottle of tequila, the good stuff, none of tha' Jose Cuervo crap :lol: Patron is okay, but Corazon or Milagro is even better, for me anyway :lol:

I'd never risk breaking the bottles by shooting you...:lol:
You must be a poor shot then... or are you worried I'll drop the bottles as soon as I suffer a sudden brain hemorrhage? :lol:

Actually, if I was still drinking I'd be having a couple of quarts of good ole' Jack #7... I'm a Tennessee boah and we's loyal to our home made stuff.
 
The big, hidden in plain view, problem with computer technology is that it just isn't reliable in many of the forms presented to the public.

I speak here as one of the architects of our doom, as my job is creating control systems for power distribution/generation networks :eek:.

We use outdated hardware running UNIX because the flash, powerful, Windows OS'd, hyper-speed boxes just don't cut it in the reliability stakes.

The Terminator scenario has its roots in reality but is a long way from being reality yet.

That said, the big problem really is that much of the infrastructure relies upon computers one way or another these days. More than that, it depends upon the reliability of the software too, and the databases that it all rests upon.

As a professional in the field I can tell you that everything is not all it's cracked up to be (everyone exhibits no surprise :D). Our stuff is great because it is engineered to be resilient (with multiple redundancy factored in) and we, with no undue modesty, are amongst the very best at what we do.

Without that redundancy and obsessive focus on reliability (I've had one of my systems performing perfectly whilst the substation was on fire and the CCU was being hosed down :)), computer-controlled systems are an accident waiting to happen.

It will happen - guaranteed.
Yeah, that's what I'm afraid of, actually: not independent AI, but that the WRONG system might fail. Think CDC, or even nuclear silos, or anything else... they ARE working on unmanned vehicles... the Predators (pardon the pun) are just one step closer to it. Some of them can be armed with a Tomahawk or a smaller nuclear-tipped missile... think of what might happen if the computer guidance system or operator control systems fail on it... completely fail... including the abort. How much time would a piloted interceptor like the YF-22 have, and how fast would it be able to get to it and shoot it down, before it reaches a (mistaken) target or runs out of fuel over a population? Even if it was capable of being shot down, is there a guarantee that the nuke wouldn't have already been armed by then?
 