heretic888 said:
The difference is that, in the 20th century, scientists actually had a pretty good idea on how eukaryotes and aerodynamics actually work.
The arguments being presented for AI and similar technologies, as well as these speculations for a metahuman "superintelligence", all indicate a fundamental ignorance of the way the human brain and human consciousness actually operate. And, as many psychologists and neuroscientists will tell you, there's a helluva lot we just don't know. Ergo, the idea of artificially replicating these processes is rather silly.
It is not necessary to replicate every process in the human brain. The most accurate statement you made was "We just don't know." What's more, you assume there is consensus on the level of technology that will be available in the next 30 to 50 years, and the idea that you can predict its path is in itself rather silly.
heretic888 said:
But, whatever the case, at the very least these guys are not taking into account:
1) The domain-specific nature of "intelligence", as opposed to the mistaken impression that there is a single, unilateral, cross-domain variable you can call "intelligence".
2) The hierarchical-emergent and discontinuous nature of all domains of "intelligence", as opposed to the archaic notion that "intelligence" is something of a continuous variable like heat or elevation that simply goes up or down across a given wavelength or scale.
3) The irreducible complexity of the human brain, which: a) has more synaptic pathways than there are stars in the galaxy, b) is the most phenotypically plastic structure in nature, and c) is perhaps the most organically plastic structure in nature.
4) The irreducible effect that cultural and socioeconomic variables have on individual "intelligence", including in non-human organisms.
At present, much of this speculation is based on archaic, anemic, monopolar, "flat", and ridiculously simplistic notions about how intelligence actually operates.
Your entire argument is based on the inability of "Strong AI" to reproduce every aspect of the human brain. That argument is irrelevant. Reaching a species-altering technological point can occur without replicating human intelligence.
heretic888 said:
Once again, if such a phenomenon happened, it'd be a first in human history.
Is your argument really "It can't happen because it hasn't happened"?
heretic888 said:
Who would have guessed the majority of people in America would own --- or at least have easy access to --- computers, television sets, telephones, cellular phones, automobiles, and so forth?? If you had asked the individuals pioneering the original technology for these instruments, they'd have likely told you something similar to what you're commenting about nanotechnology, genetic engineering, cybernetics, and so on. I see something like what you're predicting only happening in the short-term.
They would have told me something similar? Why is that? What about TVs and telephones would lead to replacing the bulk of human labor?
Just exactly what previously available technology even remotely equates to genetic engineering, cybernetics, and nanotechnology on the scale of altering the human species? Genetic engineering itself has the potential to alter the very definition of what 'is' human. In fact, so do nanotechnology and cybernetics.
What you fail to grasp is the snowballing effect of technology: each new step alters humanity further and faster than the steps before, rapidly increasing in speed and effect.
Make a list of the technological advancements that have fundamentally altered human life in the last 10,000 years.
How many were in the last 2000 years?
The last 1000 years?
The last 500 years?
The last 200 years?
The last 100 years?
The last 60 years?
The last 50 years?
The last 20 years?
Does each list seem much larger than the one before? The printing press was societally altering; it changed the face of Europe in a couple of centuries.
How does access to printed words compare to near instant access to millions upon millions of pages of information at the click of a mouse? The last 100 years of human history have produced more species altering discoveries and inventions than the last 10,000 years of human history before it. What's more, that advancement of science and technology is speeding up, not slowing down.
heretic888 said:
In any event, the importation of new technology into the workforce does make many 'manual' or 'physical labor' jobs obsolete --- but it also creates a whole new array of jobs that need to be done. Overseeing the operation of the new technology, maintenance of the new technology, networking of the new technology into the jobsite's overall infrastructure, and probably dozens I have missed.
Again, how many jobs are necessary at the point at which robotics becomes self-replicating and self-repairing? The only job necessary is that of the initial programmer. Even maintenance can be done more cheaply and efficiently with other automation. Even supervising their function is ultimately superfluous. Again, give me a list of the types of jobs necessary in that environment, and who will fill them? I can tell you they will fall only in creative areas.
Anything that requires any sort of repetition and very little creativity can be done more cheaply and efficiently by automation: everything from factory work, much of which is already automated, to all manner of service jobs.
So, let's hear this large laundry list of jobs that will still require human labor or be opened up by this level of technology. They will fall solely into the realm of professional services, as they increasingly do now.
heretic888 said:
Assembly lines are becoming less and less common these days, it's true. But how many new jobs involving the use of computers, or the maintenance and oversight of the technology now on assembly lines, have arisen??
It's a bit more complex than all this wild speculation is making it out to be --- whether we're talking about socioeconomics or intelligence.
Laterz.
Not so many as you would argue. There are a number of professional service jobs, but the bulk of those who formerly filled relatively high-paying manufacturing jobs have been relegated to low-paying unskilled service jobs. As automation continues to advance, with software gaining more advanced decision-making processes, unskilled service jobs will disappear, leaving only professional service jobs. What happens to the majority of people unfit or unprepared for professional service jobs at that point?
Think about this the next time you walk into a store with automatic checkout. It is designed to replace the function of human employees at a fraction of the cost. E-commerce is further pushing out service employees. What, then, happens to those employees? Do we create a massive welfare state where they are simply given money to remain consumers?
In short, your dismissive tone appears to be a knee-jerk reaction to the suggestion of Strong AI; that's made clear by your focus on attempting to declare that sole point pure fantasy.
This isn't simply about a "superintelligent computer"; that is one and only one path to a technological singularity, and it is neither the sole path nor the most likely one. It's about a technological singularity beyond which the human species and our social structure are no longer recognizable. Your declaration that Strong AI is impossible does not answer the core questions.
Even without the implications of Strong AI, it should be clear to many that we are moving toward this phenomenon. The complaint that many new jobs in the US are increasingly low-paying is blamed solely on politics and the failures of political parties, but the truth is that, examined from a bigger perspective, it is likely a natural result of the advancement of human technology and its ability to allow more efficient and cheaper use of resources.
Consider the outsourcing of telemarketing to India. It is the advancement in communications technology that has made it an attractive, cheaper alternative. Thirty years ago, that would not have been remotely an option.
Further, when discussing intelligence, it is not necessary to mystify it and catalog all the varied ways in which intelligence is truly complex. The measure of an intelligence-enhancing technological advancement is its ability to enhance performance and creativity. Take the computer and the computer database: they allow an intelligent person to perform tasks that would have required many people 100 years ago. For that reason, they are intelligence-enhancing. Declaring the issue of intelligence 'too complex' to discuss in those terms is really a dodge.
The point has been made that any moderately intelligent person with a master's-level education and a computer can max out any written IQ test on the planet. Attempting to argue that IQ tests are not a valid measure of the varied degrees of what has come to be known as "intelligence" doesn't really alter the intelligence-enhancing advantage of computer technology.
Further, as computer technology becomes more integrated with human beings, the effect will be that much more pronounced. What took many people just a few years ago will take only a very limited number of people once coupled with intelligence-enhancing technology and self-replicating robotics.
True self-replicating robotics can mine raw material, transport it, process it, create products from it, create more self-replicating robotics from it, and repair itself and other self-replicating robotics. It does not require Strong AI to create a technological-singularity effect. Couple such robotics with humans utilizing technological intelligence enhancement, and an extremely small number of human beings can perform what has taken many thousands of human beings up to the present day. So, again, the focus on the "impossibility of Strong AI" is irrelevant.
In fact, once human intelligence has begun the process, it can be carried out devoid of human interaction. Only the most cursory supervision of the process would be necessary to ensure that no problems develop within the system.
If this type of technology becomes available to individuals, what is going to prevent them from utilizing it to their own advantage? Some sort of humanity-wide agreement that such technology is detrimental to humans as a whole? Is that going to prevent individuals and nations from exploiting the advantages of such technology? Such an idea hardly grasps human nature. Any technology likely to gain individuals and groups an advantage will never remain bottled up, no matter how undesirable humanity as a whole declares it. We need look no further than the nuclear genie for an example of that.