Post by joshbachynski:
Dear Stephen et al.,
I honestly don't mean to sound rude but you guys are skirting around
the issue ;) I will concede that modern AI (fully aware that they
don't know what intelligence is and therefore cleverly) decided to
limit their attempts to approximating intelligent-like behavior, and
not creating intelligence per se. Fine. But this is just sophistry
guys. If you don't know what intelligence is then you don't know
what intelligent-like behavior is either. It is simply prejudice to
assume that you do.
I think you are caught up in philosophy and are not being real. I mean
like the philosophers who asked, "How do I know I exist?" Descartes
replied, "I think, therefore I am." But that reply does not prove existence
in an absolute objective manner. You are taking on the role of a
doubting Thomas philosopher about existence. Other people are not
going to care about your doubt because reality works satisfactorily.
When "Goedel", an automated theorem prover, proved some theorem
that human mathematicians had been struggling with for decades, they
considered that intelligent-like behavior. It didn't require a definition
of intelligence. I don't think Penrose disputed weak AI, and I am not
sure contemporary AI carries any rigid requirement stemming from weak AI.
It would have been different if you criticized strong AI, like Penrose,
and disputed consciousness, or intelligence, or mind as being incorrectly
attributed to AI program behavior. If a human had found the solution
or proof to the theorem mentioned above, people would have said
(s)he was creative or intelligent. It is part of our shared consensus
reality.
You don't need to call the machine's discovery intelligent or creative. Just
say that its output displays artificial intelligence, because it produced
a result which would have been considered intelligent or creative if a
human had done it. People make comparisons like this all the
time without defining the concept. Lots of concepts in language are
abstract and symbolic and don't have a physical thing to point to.
A logical consequence of your argument is that humans are not intelligent
because we can't define intelligence or consciousness precisely. But we
ascribe intelligence to others all the time. Don't you want to marry an
intelligent individual? People think chimps are smart if they can move a
chair so that they can climb up and grab a banana. They think that yellow
dog who saved the 7-year-old has some intelligence because it nipped
at the kid's ankle so that the kid would flee from the shoreline.
The Supreme Court justices said we may not be able to define
pornography, but we know it when we see it. They are judging by
appearances which may not be quantifiable. In the case of calling AI
outputs intelligent-like (I don't care if you call it a calculation) they
are judging by observed reality and how it compares with previous
experience, that is, by analogy, which is never a precise match in all
categories. But it works. It seems you are insisting that deductive reasoning be used
to proceed with building intelligent-like programs. You think intelligence
should be defined. In the real world, inductive arguments and their
conclusions reach beyond their premises, exist along a range of probability
rather than being certain, and can be strengthened or weakened by new
evidence. This reasoning is employed in many situations. Your claim
boils down to this: building a more general AI by the trial-and-error
approach of engineering is muddle-headed. You claim this despite the fact
that reality reports there are several intelligent-like programs in existence.
Your criticism would make a lot more sense if AI were claiming that
actual/identical human intelligence was achieved, that the AI was a
self-aware consciousness which possessed a mind. There are some AI
enthusiasts/visionaries who claim such things, but they are no longer the
majority they were in the heyday of the computationalism and
functionalism prominent 15 years ago. There is no sound reason
to exclude AI from the other concepts which use inductive reasoning.
People use reality to judge theories or philosophical positions. It is a
mistake to bend reality to meet your philosophical (mis)conceptions like
Einstein's Cosmic Fudge Factor
http://www.astronomycafe.net/anthol/fudge.html
"In 1917, Albert Einstein tried to use his newly developed theory of
general relativity to describe the shape and evolution of the universe.
The prevailing idea at the time was that the universe was static and
unchanging. Einstein had fully expected general relativity to support
this view, but, surprisingly, it did not. The inexorable force of
gravity pulling on every speck of matter demanded that the universe
collapse under its own weight.
His remedy for this dilemma was to add a new 'antigravity' term to his
original equations. It enabled his mathematical universe to appear as
permanent and invariable as the real one. This term, usually written as
an uppercase Greek lambda, is called the 'cosmological constant'. It has
exactly the same value everywhere in the universe, delicately chosen to
offset the tendency toward gravitational collapse at every point in space."
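For reference (this equation is not in the quoted article; it is the standard modern form of Einstein's field equations, with the lambda term the article describes):

```latex
% Einstein's field equations with the cosmological constant term.
% R_{\mu\nu}: Ricci curvature, R: scalar curvature, g_{\mu\nu}: metric,
% T_{\mu\nu}: stress-energy tensor, \Lambda: the "antigravity" term.
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda \, g_{\mu\nu}
  = \frac{8 \pi G}{c^{4}} \, T_{\mu\nu}
```

With Lambda set to zero, gravity alone forces a static universe to collapse; the added Lambda term was tuned to cancel that tendency everywhere, exactly as the article says.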
SH: So Einstein later called this the biggest mistake of his life (fudging).
You don't have some special insight when you remark that if the workings
of the brain were fully understood, and the brain's relationship to
conscious activity, self-awareness, and intelligence were fully understood
and defined, then building a smart robot program would be immeasurably
easier. It would be somewhat like reading the manual (which takes a lot
less time) rather than figuring out how some esoteric application works by
trial and error.
But figuring out the brain may take 100 years or longer. In the meantime
many people are happy with useful intelligent-like programs. It is crazy
to deny reality and insist such programs don't exist because they didn't
meet your metaphysical prerequisite of a precise definition!
value judgement: an assessment that reveals more about the values of the
person making the assessment than about the reality of what is assessed
Is that the general contemporary goal of AI?
"You know all those cheezy sci fi movies where the AI is ALWAYS evil?
Proceeding by the standard AI scientific method, if you are successful
(which IMHO you won't be) that aberrant childlike AI (which decides
all humans are dangerous and tries to kill us a la The Matrix, I mean
The Terminator, I mean 2001: A Space Odyssey, etc., etc.) is exactly what you
will create because the behavior of our *society* (which doesn't even
know what intelligence is) will be the model of intelligence.
This is really nonsense! Those movies assume a computer which
becomes sentient/self-aware. So you have introduced a straw man into
the argument. I've already told you the goal of contemporary AI is not
to build consciousness, self-awareness or intelligence with volitional ego.
(I don't mean that is true for everybody in the AI field, gripe to them).
That is why I keep typing intelligent-like and sometimes calculational.
Computers are already used to launch nuclear missiles. They have a
double key security protocol. Computers can already be used to destroy
the world. It would not be that hard to rewrite the program so that a
human need only tap the <enter> key to launch the missiles.
Computers did not exist in airplanes when they dropped A-bombs over
Japan. Computers don't have a semblance of free will. If they used your
idea to actually build a computer with emotions, intelligence, etc. then it
would have the capability of some independent self-aware action or
free will. So your estimation is exactly backward. Programs like Goedel
or Deep Blue don't have the potential to evolve into some ulterior purpose
outside of their programming. So they are no more a threat than your
telephone answering machine (which is also connected to your computer
and ISP) is likely to spontaneously decide to connect to a defense computer
on the internet and order WW III. You are sharing delusions with the
AI fringe. Computers can already be used to start WW III without a
spark of machine awareness, humans will suffice. Can you show how modern AI
is capable of becoming such a threat? No. These SF
scenarios are fantasies, worthless speculation, not philosophy.
Post by joshbachynski: "Do you see my point?"
No, you don't have one.
Post by joshbachynski: "You say you don't understand the essence of X so
therefore you'll just try to make something as close to X as you can,
but you don't know what X is!"
No, but we know what it looks like, as do most people on the planet.
It looks like some aspect of what an intelligent human accomplishes.
Post by joshbachynski: "How can you know the shortest or best
or most efficient method of producing something X-like when you don't
know what X is?"
Obviously, one doesn't. This may not be obvious to you, since you are
also ignorant of AI along with the other topics you mentioned: AI does
not claim that proceeding from exact knowledge would not be the preferable
method. But many people think that finding such a method, if it is possible
at all, will take a very long time. Trial and error will provide some useful
applications of AI along the way. It is even possible, given the element of
random search, that this is the quicker method. The main problems with
your idea are that it is not realistic about time and that it ignores the
fact that both methods can proceed at the same time.
"It doesn't matter that I don't have the authority or experience. To
assume because I am not qualified or an admitted expert in AI that it
is therefore impossible, or even improbable, that I have anything
important or correct to say about AI is a fallacy."
SH: You've demonstrated that you don't have anything important
or correct to say about AI, and that is because you don't
know what you are talking about. Having expertise in AI mirrors
your own claim that the most efficient way to proceed is with full
knowledge. And it most certainly is *"improbable"*, just like the
chances that a 'trial and error' approach to AI will be quickly
successful. By chance, or trial and error, or meta-principles,
you might have had something important or correct to say about
AI, but that in fact did not turn out to be the case. It reminds me
of the gambler on a losing streak who once announced: "My credit
is better than cash?" I will be moving on to another topic.
Research is the spice of life (Dune),
Stephen