Discussion:
Why Artificial Intelligence can Never Get it Right
joshbachynski
2005-01-05 07:09:28 UTC
Hello,

I posted this essay on my blog some time ago. It's called "Why
Artificial Intelligence can Never Get it Right". I thought some people
here may (cough cough troll cough) like to read it and post their views
here, or on my blog.

The short version is that scientists do not have, fail to realize, and
refuse to believe in the very existence of, the correct method for
determining what an intelligence is in order to create it. This is
largely because they are determinate materialists.

One cannot re-create something if one doesn't know what it is. This is
an old idea, but I go further to suggest an alternate method: the
Intelligible Method, which is very old and is the original method
Descartes used to write his Meditations.

Anyway here it is:

http://thymos.blogspot.com/2004/08/why-artificial-intelligence-can-never.html
josh
--
http://thymos.blogspot.com
Wolf Kirchmeir
2005-01-05 14:39:23 UTC
joshbachynski wrote:
[...]
Post by joshbachynski
One cannot re-create something if one doesn't know what it is. This is
an old idea, but I go further to suggest an alternate method: the
Intelligible Method, which is very old and is the original method
Descartes used to write his Meditations.
[...]

Yeah, but Descartes got it wrong. "Intelligible Method" is a fancy label
for "You can reason your way to the truth." Which a) begs a number of
questions about "truth"; and b) turns out to be simply wrong for several
meanings of "truth"; and c) for the rest it's a method of inventing, er,
sorry, "discovering" tautologies - a fun game, but useless if the
research ends there.

But you're right, everybody has their own notions of what "intelligence" is.
Josip Almasi
2005-01-05 16:31:48 UTC
Post by Wolf Kirchmeir
Yeah, but Descartes got it wrong. "Intelligible Method" is a fancy label
for "You can reason your way to the truth."
Well, yeah, kinda goal-directed backpropagation.
Then again, which exact method did _you_ use to conclude that Descartes
got it wrong?
And why would that one be right and his wrong?

Regards...
joshbachynski
2005-01-05 18:28:20 UTC
Dear Wolf,

1) I don't believe you know what the intelligible method is. And I
believe Descartes' notion of truth was very simply the one used in
mathematics, ie: that true statements are those which
demonstrably correspond to reality. What we can KNOW (which is
different) is that which we can demonstrate "clearly and distinctly" -
that which we know cannot be any other way.

For a complete explanation of this and other epistemological problems,
please see my blog here:

http://thymos.blogspot.com/

Now do you object that Descartes found these things with the
intelligible method, or that his very definitions of truth and
knowledge were inadequate?

2) So what? Even if it were true that Descartes messed up using the
method, that does not mean a) that the method is no good for discovering
certain things, or b) that it should not be adopted for AI and general
knowledge.

Why don't you read my paper before inventing problems with it :) That's
called the lazy democrat method :)

josh
--
http://thymos.blogspot.com
Wolf Kirchmeir
2005-01-05 21:17:31 UTC
Post by joshbachynski
Dear Wolf,
1) I don't believe you know what the intelligible method is. And I
believe Descartes' notion of truth was very simply the one used in
mathematics, ie: that true statements are those which
demonstrably correspond to reality.
No, mathematical truth is that which can be derived from axioms
according to the rules of derivation. Whether the axioms correspond to
reality is another issue entirely.
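To make "derived from axioms according to the rules of derivation"
concrete, here is a toy sketch (Python, purely illustrative; the three
axioms and the single modus ponens rule are invented for the example) of
a formal system in which "true" simply means "derivable":

# Toy formal system: the "truths" are whatever the axioms plus one
# inference rule (modus ponens) allow us to derive.
axioms = {"P", "P->Q", "Q->R"}

def derivable(axioms):
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(theorems):
            if "->" in s:
                antecedent, consequent = s.split("->", 1)
                if antecedent in theorems and consequent not in theorems:
                    theorems.add(consequent)
                    changed = True
    return theorems

print(derivable(axioms))  # {'P', 'P->Q', 'Q->R', 'Q', 'R'}

Whether P, Q and R say anything about reality is, as noted, another
issue entirely.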
Post by joshbachynski
What we can KNOW (which is
different) is that which we can demonstrate "clearly and distinctly" -
that which we know cannot be any other way.
Um, the problem is that too often when someone says "it's got to be this
way because I have demonstrated that it can be no other way", someone
will either a) show that it can only be some other way; and/or b) show
that in fact Nature doesn't care about your logic - it is some other way
after all.
Post by joshbachynski
For a complete explanation of this and other epistemological problems,
please see my blog here: http://thymos.blogspot.com/
I've no doubt you've got a blog on epistemology, but I doubt you've
completely explained it. I took a course in epistemology many years ago.
It consisted of reading and critiquing attempts to explain epistemology
completely. :-)
Post by joshbachynski
Now do you object that Descartes found these things with the
intelligible method, or that his very definitions of truth and
knowledge were inadequate?
His definitions were inadequate. In essence, he claimed that which he
could not doubt must be true. He of course phrased his claim in terms of
what reasonable people couldn't doubt, but since he was obviously a
reasonable person, it came down to his in/ability to doubt certain
statements.
Post by joshbachynski
2) So what? Even if it were true that Descartes messed up using the
method, that does not mean a) that the method is no good for discovering
certain things, or b) that it should not be adopted for AI and general
knowledge.
Well, if we can agree precisely what the method is, we can certainly
agree on what problems it may solve. That's true of any method whatever,
intelligible or otherwise.
Post by joshbachynski
Why don't you read my paper before inventing problems with it :) That's
called the lazy democrat method :)
Ok, I'll read it.
PS:

I read a good deal of Descartes many, many years ago, and found him an
entertaining writer. But even when I first read him, I thought he was
rather, er, um, vague about what "demonstrable truth" was. He realised
that the senses cannot be trusted; and that self-evident truth wasn't
(if it were, there would be no argument about it, but there is.) So he
was reduced to searching for something he could trust, i.e., not doubt.
The famous Cogito ergo sum was his result. Which _I_ happen to doubt. :-)
Stephen Harris
2005-01-06 01:28:30 UTC
Post by joshbachynski
Dear Wolf,
1) I don't believe you know what the intelligible method is. And I
believe Descartes' notion of truth was very simply the one used in
mathematics, ie: that true statements are those which
demonstrably correspond to reality.
Post by Wolf Kirchmeir
No, mathematical truth is that which can be derived from axioms according
to the rules of derivation. Whether the axioms correspond to reality is
another issue entirely.
Yes, this is why Penrose's attack on AI, which is listed in Josh's
bibliography, failed. I think dualism can be used to attack
AI but that requires a major assumption. Descartes was a dualist.

There is a relationship between the foundations of mathematics
and the reality into which it arises through human invention. The
relationship of corresponding to reality is a statistical inference,
not actually truth, because of uncertainty; so Wolf is correct, and
Josh has an inconsistency between his definition of truth and his
utilization of "incomplete knowledge", which Penrose substitutes for
uncertainty. Penrose gives an alternate version of the HUP which focuses
not on position and velocity, but on predicting all trajectories since
time began. Josh's reasoning in 1) above is actually why Penrose's
disproof/reason* failed when he tried to bring in Goedel's
Incompleteness (Inc.) Theorem.

*At the grand or meta-level, maybe there is some equivocation with Inc.
Stephen
Stephen Harris
2005-01-06 01:38:42 UTC
Post by joshbachynski
Dear Wolf,
1) I don't believe you know what the intelligible method is. And I
believe Descartes' notion of truth was very simply the one used in
mathematics, ie: that true statements are those which
demonstrably correspond to reality.
Post by Wolf Kirchmeir
No, mathematical truth is that which can be derived from axioms according
to the rules of derivation. Whether the axioms correspond to reality is
another issue entirely.
I find these different views discussed in a 73 page paper at:
http://arxiv.org/ftp/math/papers/0407/0407529.pdf
Stephen Harris
2005-01-06 03:48:18 UTC
Josh wrote in part:

"1) I don't believe you know what the intelligilbe method is. And I
believe Descartes notion of truth was very simply the one used in
mathematics, ie: that which is true are those statements which
demonstrably correspond to reality."
Post by Wolf Kirchmeir
No, mathematical truth is that which can be derived from axioms
according to the rules of derivation. Whether the axioms correspond to
reality is another issue entirely.
SH: These different views are discussed in a 73 page paper at
http://arxiv.org/ftp/math/papers/0407/0407529.pdf which appears to say:
"The significance of Gödel's Theorems lies in the fact that they are
derived in a system of Axioms where the Rules of Inference lead to a
particularly rich body of expressions that can be assigned formal
truth-values under various interpretations of the symbols of the
theory. However, a major feature of such a system is that it also
lends itself to interpretations of the chosen Rules of Inference that
are non-constructive, in the sense that they are able to assign,
implicitly and sweepingly, non-verifiable formal truth-values in some
models to various expressions. Thus the language, in a sense, admits
formally true expressions under some interpretations that cannot be
correlated, even in principle, to any factual truths of a human
perception." [SH: that is, perception of reality]

SH: If one adopts the non-platonic or non-mathematical-realism point
of view that mathematics is a tool invented by humans to describe the
unfurling of regular events (primarily causality), this requires the
assumption that humans have gained their knowledge of the object
(physical reality) through observation --> through their "human perception".
Post by joshbachynski
1) I don't believe you know what the intelligible method is. And I
believe Descartes' notion of truth was very simply the one used in
mathematics, ie: that true statements are those which
demonstrably correspond to reality.
SH: Thus your claim that true statements "demonstrably correspond to
reality" is contradicted by:
"Thus the language, in a sense, admits formally true expressions under
some interpretations that cannot be correlated, even in principle, to any
factual truths of a human perception."

Penrose tries to correlate Goedelian Incompleteness, which is a result
about a formal theory (formal truth), with what truths can be perceived
by the intuition of the mind of a mathematician, which is an aspect of
human perception, as if the former had causal impact on the latter. No
correlation, and thus no causality.

Regards,
Stephen
Wolf Kirchmeir
2005-01-05 21:21:58 UTC
Post by Josip Almasi
Post by Wolf Kirchmeir
Yeah, but Descartes got it wrong. "Intelligible Method" is a fancy
label for "You can reason your way to the truth."
Well, yeah, kinda goal-directed backpropagation.
Then again, which exact method did _you_ use to conclude that Descartes
got it wrong?
And why would that one be right and his wrong?
Regards...
Analysis of his reasoning, which amounted to:

a) if an action occurs, there must be an actor.
b) thinking is an action
c) therefore, there must be a thinker.

The problem is with his premise b), which begs the question.
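A minimal formalization (a Lean sketch; Action, Actor, agentOf and
thinking are names invented for this example) makes it visible that b)
enters only as a bare postulate:

-- Wolf's reading of the argument; note that b) is simply assumed.
axiom Action : Type
axiom Actor : Type
axiom agentOf : Action → Actor    -- a) every action has an actor
axiom thinking : Action           -- b) thinking is an action (the contested premise)
noncomputable def aThinker : Actor := agentOf thinking  -- c) therefore, a thinker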

BTW, what if something happens, but there is no actor?

Of course Descartes may be right: maybe thinking is an action. But
that's not something that can be established a priori, which is what he
does.
Lester Zick
2005-01-05 22:08:18 UTC
On Wed, 05 Jan 2005 16:21:58 -0500, Wolf Kirchmeir
Post by Wolf Kirchmeir
Post by Josip Almasi
Post by Wolf Kirchmeir
Yeah, but Descartes got it wrong. "Intelligible Method" is a fancy
label for "You can reason your way to the truth."
Well, yeah, kinda goal-directed backpropagation.
Then again, which exact method did _you_ use to conclude that Descartes
got it wrong?
And why would that one be right and his wrong?
Regards...
a) if an action occurs, there must be an actor.
b) thinking is an action
c) therefore, there must be a thinker.
The problem is with his premise b), which begs the question.
BTW, what if something happens, but there is no actor?
You mean like spontaneous generation, Wolf?
Post by Wolf Kirchmeir
Of course Descartes may be right: maybe thinking is an action. But
that's not something that can be established a priori, which is what he
does.
Regards - Lester
Neil W Rickert
2005-01-05 18:41:46 UTC
Post by joshbachynski
I posted this essay on my blog some time ago. It's called "Why
Artificial Intelligence can Never Get it Right". I thought some people
here may (cough cough troll cough) like to read it and post their views
here, or on my blog.
From that blog:

I fully admit I know next to nothing about Artificial
Intelligence, theoretical Mathematics, physics, advanced
electrical engineering, advanced computer programming, and
advanced (or even basic) neurochemistry.

'nuff said.
joshbachynski
2005-01-05 20:00:48 UTC
"nuff said"? Did you even read on from that point? It continues to say:

I do not intend to discuss or look at any of these complicated subjects
in my critique of the foundations and methods of artificial
intelligence or discussion of actual intelligence. One could admit, it
hardly seems someone with my apparent lack of intelligence should be
arguing about intelligence at all.

That being said, if I may be permitted, I would like to explain my
intention as not presenting a positive thesis or outright proving
anything new in these various subject matters which are beyond me, but
only critiquing what I think is a mistake made by them on a
meta-discussion or philosophical level, for which my abilities are yet
to be tested. In doing so, I should like to appeal to the common belief
in the suitability of common language argument to such an endeavor in
order to make a case for my thesis about mistakes made by a technical
discipline. In other words, I shall use the language of these schools
as it has been passed down to me through a few articles, but mainly
common media and parlance in order to make a simple argument in a
non-technical way with perhaps technical consequences. My readers will
be the judge if my language games win anything useful or true, or if my
ignorance of technical terms and actual goals renders my argument null
and void.

---

In other words, I'm saying that the problem lies prior to these
subjects. As such, I need not know anything about these subjects in
order to question the groundwork upon which they rest.

If you want to contest that point you need to show exactly why I am
wrong, not just "nuff said".

Boy you science guys are lazy. You need to take some philosophy courses
- sharpen your reason :)

josh
Neil W Rickert
2005-01-05 20:42:58 UTC
Yes, I did.

Okay, school's on a break and you are bored. So you thought you would
try your skills at philosophy. I'll give you a D-.
Post by joshbachynski
That being said, if I may be permitted, I would like to explain my
intention as not presenting a positive thesis or outright proving
anything new in these various subject matters which are beyond me, but
only critiquing what I think is a mistake made by them on a
meta-discussion or philosophical level, for which my abilities are yet
to be tested. In doing so, I should like to appeal to the common belief
in the suitability of common language argument to such an endeavor in
order to make a case for my thesis about mistakes made by a technical
discipline.
Philosophers have been spouting about the mistakes made by technical
disciplines, at least since the time Galileo had his run-in with the
church. And the philosophers have usually been wrong.
Post by joshbachynski
In other words, I shall use the language of these schools
as it has been passed down to me through a few articles, but mainly
common media and parlance in order to make a simple argument in a
non-technical way with perhaps technical consequences. My readers will
be the judge if my language games win anything useful or true, or if my
ignorance of technical terms and actual goals renders my argument null
and void.
Why should anybody here care about your language games? Most of
the people who post here despise language games.

If you want to be taken seriously, give us some good reasons based on
physics or mathematics or other related scientific field. Otherwise
you are just blowing smoke.
Pierre-Normand Houle
2005-01-06 03:21:00 UTC
Post by Neil W Rickert
Okay, school's on a break and you are bored. So you thought you would
try your skills at philosophy. I'll give you a D- .
Post by joshbachynski
That being said, if I may be permitted, I would like to explain my
intention as not presenting a positive thesis or outright proving
anything new in these various subject matters which are beyond me, but
only critiquing what I think is a mistake made by them on a
meta-discussion or philosophical level, for which my abilities are yet
to be tested. In doing so, I should like to appeal to the common belief
in the suitability of common language argument to such an endeavor in
order to make a case for my thesis about mistakes made by a technical
discipline.
Philosophers have been spouting about the mistakes made by technical
disciplines, at least since the time Galileo had his run-in with the
church. And the philosophers have usually been wrong.
He's just willing to essay himself at some argumentation on a Usenet
newsgroup and you're comparing him to the Grand Inquisitor!
Post by Neil W Rickert
Post by joshbachynski
In other words, I shall use the language of these schools
as it has been passed down to me through a few articles, but mainly
common media and parlance in order to make a simple argument in a
non-technical way with perhaps technical consequences. My readers will
be the judge if my language games win anything useful or true, or if my
ignorance of technical terms and actual goals renders my argument null
and void.
Why should anybody here care about your language games? Most of
the people who post here despise language games.
I'd care for his language games. Are you pretending not to understand
this innocuous Wittgensteinian idiom anymore?
Post by Neil W Rickert
If you want to be taken seriously, give us some good reasons based on
physics or mathematics or other related scientific field. Otherwise
you are just blowing smoke.
If one isn't even allowed to put forward philosophical arguments in this
philosophical newsgroup...
Stephen Harris
2005-01-10 01:31:28 UTC
Post by Pierre-Normand Houle
Post by Neil W Rickert
If you want to be taken seriously, give us some good reasons
based on physics or mathematics or other related scientific field.
Otherwise you are just blowing smoke.
If one isn't even allowed to put forward philosophical arguments in this
philosophical newsgroup...
Post by joshbachynski
I argue, as philosophers have argued from long before science existed,
that the mind or soul is not physical at all - it is a different
subsistence altogether. It is not matter or energy (which is also
matter). Thought and the soul are incorporeal.
It is beating a dead horse to claim dualism defeats AI, since how would
one construct a physical program = "mind" which is neither matter
nor energy? Implicit in Josh's postings is the claim that the goal of
AI is to create a "mind", and that hasn't been true for some time.
Declaring religious beliefs is not the same as a philosophical argument.
Josh disguised his religious motivations sufficiently to fool you, is all.
joshbachynski
2005-01-10 02:49:31 UTC
What religious motivations? I am an atheist. Arguing for an incorporeal
soul does not necessitate a Divinity exists, although some have argued
this in the past. I do not.

Nor am I dualist btw. Nor am I positing a dualist position.

If Artificial Intelligence is not trying to create an artificial mind
then that is just word games.
You are all sophists. I compliment you on your consistency...

josh
Wolf Kirchmeir
2005-01-10 15:50:53 UTC
Post by joshbachynski
What religious motivations? I am an atheist. Arguing for an incorporeal
soul does not necessitate a Divinity exists, although some have argued
this in the past. I do not.
You may be an atheist in the sense of denying, say, a Christian version
of the deity, but that doesn't mean you're an atheist in the fundamental
sense: that there is no thing or entity outside of this universe of
matter/energy.
Post by joshbachynski
Nor am I dualist btw. Nor am I positing a dualist position.
Oh yes, you are. You posit an incorporeal mind/soul "inside" the body of
matter/energy. That's what "dualism" _means_, fer gawd's sake. Did you
actually take any courses in philosophy? Or are you just as ignorant
about what you profess as your metier as you are about other things? --
Come to think of it, your phrase "as philosophers have argued since
before there was science.." suggests you don't know squat about
philosophy. You're here under false pretences, josh. Come clean.
Post by joshbachynski
If Artificial Intelligence is not trying to create an artificial mind
then that is just word games.
Gee, josh, just because you don't like the professed aims of AI is no
reason to tie your knickers in a knot. It's _you_ who claim that AI
wants to build a mind. Why can't you accept that people are sincere when
they say that they just want to build machines that can do intelligent
things? I mean, an aircraft engineer doesn't want to build an artificial
bird, he just wants to build something that flies. Even if he wants to
build a machine that flaps its wings like a bird, he doesn't claim that
he wants to build a bird - just something that, in a limited way,
behaves like a bird. No nesting. No eggs. No hatchlings that have to be
taught how to fly. None of that. Just a machine that takes off, flies,
and lands.
Post by joshbachynski
You are all sophists. I compliment you on your consistency...
Ooh, sarcasm, yet! The most vicious form of ad hominem! Ow! Mercy!

If you knew what a sophist actually was, that would really hurt. Look in
the mirror if you want to see a sophist, josh.
Lester Zick
2005-01-10 16:34:20 UTC
On Mon, 10 Jan 2005 10:50:53 -0500, Wolf Kirchmeir
Post by Wolf Kirchmeir
Post by joshbachynski
What religious motivations? I am an atheist. Arguing for an incorporeal
soul does not necessitate a Divinity exists, although some have argued
this in the past. I do not.
You may be an atheist in the sense of denying, say, a Christian version
of the deity, but that doesn't mean you're an atheist in the fundamental
sense: that there is no thing or entity outside of this universe of
matter/energy.
Post by joshbachynski
Nor am I dualist btw. Nor am I positing a dualist position.
Oh yes, you are. You posit an incorporeal mind/soul "inside" the body of
matter/energy. That's what "dualism" _means_, fer gawd's sake. Did you
actually take any courses in philosophy? Or are you just as ignorant
about what you profess as your metier as you are about other things? --
Come to think of it, your phrase "as philosophers have argued since
before there was science.." suggests you don't know squat about
philosophy. You're here under false pretences, josh. Come clean.
Post by joshbachynski
If Artificial Intelligence is not trying to create an artificial mind
then that is just word games.
Gee, josh, just because you don't like the professed aims of AI is no
reason to tie your knickers in a knot. It's _you_ who claim that AI
wants to build a mind. Why can't you accept that people are sincere when
they say that they just want to build machines that can do intelligent
things?
Is that anything like building a machine to do mental things, Wolf? Or
are transistors intelligent things?
Post by Wolf Kirchmeir
I mean, an aircraft engineer doesn't want to build an artificial
bird, he just wants to build something that flies. Even if he wants to
build a machine that flaps its wings like a bird, he doesn't claim that
he wants to build a bird - just something that, in a limited way,
behaves like a bird. No nesting. No eggs. No hatchlings that have to be
taught how to fly. None of that. Just a machine that takes off, flies,
and lands.
Post by joshbachynski
You are all sophists. I compliment you on your consistency...
Ooh, sarcasm, yet! The most vicious form of ad hominem! Ow! Mercy!
If you knew what a sophist actually was, that would really hurt. Look in
the mirror if you want to see a sophist, josh.
Regards - Lester
Stephen Harris
2005-01-11 07:20:41 UTC
Post by joshbachynski
What religious motivations? I am an atheist. Arguing for an incorporeal
soul does not necessitate a Divinity exists, although some have argued
this in the past. I do not.
You may be an atheist in the sense of denying, say, a Christian version of
the deity, but that doesn't mean you're an atheist in the fundamental
sense: that there is no thing or entity outside of this universe of
matter/energy.
Post by joshbachynski
Nor am I dualist btw. Nor am I positing a dualist position.
Oh yes, you are. You posit an incorporeal mind/soul "inside" the body of
matter/energy. That's what "dualism" _means_, fer gawd's sake. Did you
actually take any courses in philosophy? Or are you just as ignorant about
what you profess as your metier as you are about other things?
Josh doesn't use standard philosophical terms the way everybody
else does, and then he sometimes changes horses in midstream and
plonks into the eternal river running through the Platonic realm.
Come to think of it, your phrase "as philosophers have argued since before
there was science.." suggests you don't know squat about philosophy.
You're here under false pretences, josh. Come clean.
Post by joshbachynski
If Artificial Intelligence is not trying to create an artificial mind
then that is just word games.
There is a group of Transhumanists who are trying to bring
about the Singularity, the advent of general super-intelligence
by way of a self-modifying AI program. But that a
minority group is striving for this goal does not transfer that goal
onto AI researchers, who would be happy with a generally
"intelligent-like" reasoning machine that could do a good job
of translating French into English, for instance.

Josh, you are just another person who insists that their ignorance qualifies
them to arrive at their imagined conclusions because they
are full of intellectual vanity. It has been known for a long time
that one cannot prove or disprove claims about "entities" that
exist outside of energy/matter, space and time. Supposing that
there is any such influence outside of the physical universe is
called dualism whether the entity is labelled God or a Platonic
Ideal or some other label. Since you are confused you probably
did not mean "It is not matter or energy (which is also matter)."
[SH: This means outside of the physical universe.]
even though that is what you wrote; because you continued with
"Thought and the soul are incorporeal." which means you can't
use a virtual idea of a worm as bait to catch a real fish; incorporeal
does not mean not part of reality-->as in independent of reality.
When I read the blog I had the same impression that you were
using terms with a standard meaning that I agreed with but that you
had your own meaning for those terms which would lead you
to a peculiar conclusion that is not the same conclusion that other
people reading the same words arrive at. I don't think you are
clever enough to be a sophist. You think you can get by without
understanding how the physical world works to confirm your pov.

Electromagnetically,
Stephen
Lester Zick
2005-01-10 16:31:40 UTC
On Mon, 10 Jan 2005 01:31:28 GMT, "Stephen Harris"
Post by Stephen Harris
Post by Pierre-Normand Houle
Post by Neil W Rickert
If you want to be taken seriously, give us some good reasons
based on physics or mathematics or other related scientific field.
Otherwise you are just blowing smoke.
If one isn't even allowed to put forward philosophical arguments in this
philosophical newsgroup...
Post by joshbachynski
I argue, as philosophers have argued from long before science existed,
that the mind or soul is not physical at all - it is a different
subsistence altogether. It is not matter or energy (which is also
matter). Thought and the soul are incorporeal.
It is beating a dead horse to claim dualism defeats AI, since how would
one construct a physical program = "mind" which is neither matter
nor energy? Implicit in Josh's postings is the claim that the goal of
AI is to create a "mind", and that hasn't been true for some time.
Declaring religious beliefs is not the same as a philosophical argument.
Josh disguised his religious motivations sufficiently to fool you, is all.
It hasn't been true for some time only because AI scientists failed
and gave up the attempt. As I understand the concept, that was their
original intent, or at least hope. AI now is just a blanket catch-all
for computers and programming.

Regards - Lester
Wolf Kirchmeir
2005-01-05 21:29:00 UTC
joshbachynski wrote:
[...]
Post by joshbachynski
Boy you science guys are lazy. You need to take some philosophy courses
- sharpen your reason :)
josh
I did - even embarked on a degree in philosophy, before I realised that
playing word games is much more fun in a poetry class. And I found that
three courses in logic did a fair bit to sharpen my reason; not that I
claim perfection. In particular, they created a habit of trying to analyse
the pattern of an argument (or proof.) It's the patterns that make
arguments valid. It's their contents that make them sound. (snipped a
rant here, knowing it would just bring out more trolls.)
Lester Zick
2005-01-05 22:06:26 UTC
On 5 Jan 2005 12:00:48 -0800, "joshbachynski"
Post by joshbachynski
I do not intend to discuss or look at any of these complicated subjects
in my critique of the foundations and methods of artificial
intelligence or discussion of actual intelligence. One could admit, it
hardly seems someone with my apparent lack of intelligence should be
arguing about intelligence at all.
That being said, if I may be permitted, I would like to explain my
intention as not presenting a positive thesis or outright proving
anything new in these various subject matters which are beyond me, but
only critiquing what I think is a mistake made by them on a
meta-discussion or philosophical level, for which my abilities are yet
to be tested. In doing so, I should like to appeal to the common belief
in the suitability of common language argument to such an endeavor in
order to make a case for my thesis about mistakes made by a technical
discipline. In other words, I shall use the language of these schools
as it has been passed down to me through a few articles, but mainly
common media and parlance in order to make a simple argument in a
non-technical way with perhaps technical consequences. My readers will
be the judge if my language games win anything useful or true, or if my
ignorance of technical terms and actual goals renders my argument null
and void.
---
In other words, I'm saying that the problem lies prior to these
subjects. As such, I need not know anything about these subjects in
order to question the groundwork upon which they rest.
If you want to contest that point you need to show exactly why I am
wrong, not just "nuff said".
Positivism in action.
Post by joshbachynski
Boy you science guys are lazy. You need to take some philosophy courses
- sharpen your reason :)
josh
Regards - Lester
Stephen Harris
2005-01-06 00:58:19 UTC
Post by joshbachynski
In other words, I'm saying that the problem lies prior to these
subjects. As such, I need not know anything about these subjects in
order to question the groundwork upon which they rest.
If you want to contest that point you need to show exactly why I am
wrong, not just "nuff said".
Boy you science guys are lazy. You need to take some philosophy courses
- sharpen your reason :)
josh
Post by joshbachynski
In other words, I'm saying that the problem lies prior to these subjects.
The main problem with your essay is that it now attacks something of a
strawman. Already lots of people don't believe in Computationalism, which
had as a major premise that running the correct program would instantiate
a mind (this was also called strong AI, in the olden days). The Turing
Test has already been criticized. So now AI has a less ambitious meaning,
and it seems to me that your essay presumes the old meaning without
specifying it.

One criterion now is that AI works, as in expert systems or chess,
and that general intelligent problem solving can be approached by
an engineering method. It will not need to be provably identical to
human intelligence, which is a recurrence of the incomplete information
idea, and which is why the approach of repeating human evolution to build
AI doesn't work (and this is known), due to too many unknown random factors.

So your essay beats a dead horse for those with philosophical
backgrounds. See Harnad. Now AI seeks to simulate intelligence,
which may not be totally human, but perhaps just analogy/prediction
with a different sensory array. So, many people care about simulation,
which is prospective, as Myhill would say, and which requires engineering
skills. Your definition of AI is no longer the working definition; now you need:

"I fully admit I know next to nothing about Artificial
Intelligence, theoretical Mathematics, physics, advanced
electrical engineering, advanced computer programming, and
advanced (or even basic) neurochemistry."

This is, or actually uses, the pragmatic (relative) position/approach which
you endorsed, now applied to building a practical AI. Figuring out how the
brain works from neurology can be used to transpose functions to a computer
program. When a lot of the functions of the brain which produce
intelligence are understood, then a lot of intelligence can be transferred
into a computer program (though one might not prefer the word
intelligence, but consider it calculation). AI can now produce a
great chess playing program which uses calculation. There are lots
of expert programs, and "Goedel", a theorem prover. When enough
expert systems are made and organized you have a useful general AI.
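To make the engineering picture concrete, here is a toy sketch (Python,
purely illustrative; the facts and rules are invented for the example, and
real expert-system shells are far larger and handle uncertainty) of the
forward-chaining idea such systems are built around:

# Toy forward-chaining rule engine: each rule is (conditions, conclusion).
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor_visit"),
    ({"has_cough", "has_fever"}, "suspect_flu"),
]

def forward_chain(facts, rules):
    """Keep firing rules until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, rules))
# -> {'has_fever', 'has_rash', 'suspect_measles', 'recommend_doctor_visit'}

Organizing many such rule bases to cooperate is an engineering task, not a
theory of what intelligence is.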

So do you have an in-principle argument why the area of expert
function cannot be extended by ever closer approximation to
most human "intelligent like" functions? You are going to fail if
you try with some concept like "creativity". I am a great fan of
Roger Penrose, but his arguments failed in SOTM (Shadows of the Mind).
That actually gets into making predictions about physical reality from
formal results which are abstract (non-physical) and based to some degree
on assumptions about observed reality which contain uncertainty.

Turing's hope for AI no longer defines what is meant by AI today
for the majority of people in AI. I don't think your essay was wrong; it
just is not new. The meaning of AI, and the expectations for it, have
already evolved. So your essay holds less appeal for people already aware
of the change. There are no doubt some people not aware of the change,
who still hope.

Regards,
Stephen
joshbachynski
2005-01-06 07:06:42 UTC
Dear Stephen et al.,

0) Thank you for the thoughtful responses - those that gave thoughtful
responses :)

1) Too many posts to reply to each one - so I reply to all relevant
responses here in bulk.

2) Let's forget what Descartes said. Although I'd love to vindicate him
here as a few of the critiques I've heard here are simply dead wrong,
that's not why I am here. My argument does not necessarily rest upon
Descartes. I was just mentioning him because I thought he may be
familiar to you. Apparently your professor's incorrect version of him
is familiar to you. (ok that was a flame but we can argue over him
later)

3) Stephen's response was the most thoughtful so I'll focus on yours,
but I believe it makes the same materialist error that is embedded in
most of the other responses anyways. (no offense :)

Stephen, I understand the difference between hard and soft AI. It
doesn't matter that I don't have the authority or experience. To
assume that, because I am not qualified or an admitted expert in AI, it
is therefore impossible, or even improbable, that I have anything
important or correct to say about AI is a fallacy. You wouldn't make
such a silly mistake. So let's get down to what I DO say :)

The problem is IMHO the error of hard AI is still apparent in soft AI.
It is a methodological error. That is the whole thesis of my paper,
which I wrote 2 years ago. Then I offer an alternate method which I
argue is already established and more appropriate to the subject
matter.

Essentially, AI enthusiasts (of any stripe) assume intelligence is
discoverable by scientific method (whether they agree on
Turing's particular method or not - they all agree it should be a
scientific / mathematical method, or at least that intelligence is a
material thing).

MY POINT: This has never been demonstrated and is instead assumed by AI
enthusiasts, including everyone here I've read so far. AI scientists
believe that intelligence (whatever this is - they all seem to admit
they don't know what it is although they "know" it must be
physical and they can recreate one, although they admit they don't
know what they are recreating) can be discovered by observation,
understanding the physical nature of the brain, and attempting to make
a re-creatable model of it with synthetic components. Where has it been
demonstrated that intelligence is simply material and that the best
method for discovering what it is is empirical in nature? It hasn't. If
so, explain to me with certainty or even plausibility what
self-consciousness is and why and how exactly it springs forth from
inert material. What is the exact difference 5 seconds before death in
a self-conscious human and 5 seconds after the self-consciousness
appears to be inert or gone? Science alone cannot prove this - it
cannot even frame the question.

So, ignore that for the moment. My point is twofold: #1) you guys have
the wrong method and assume your subject matter is material and #2)
here is a new method. To make any progress we need to agree on my basic
premise first and deal with the possibility of point 1 first. If I'm
wrong at point 1 then there is no need to continue (actually that is
not true but I will concede - it's still possible I'm wrong at
point 1 but there could still be a better method to use unbeknownst to
everyone).

After we agree on the possibility of the method and subject matter
being undemonstrated and therefore perhaps incorrect, then we will go
on to me proving the immaterial nature of the soul and the methods used
to discover that which take us to the queen of the sciences (ie: out of
the sciences and into philosophy).

And to answer your direct question:

"So do you have an in principle argument why the area of expert
function cannot be extended by ever closer approximation to
most human "intelligent like" functions?"

In fact I do - you don't know what to approximate to - therefore
you are just guessing! You assume you CAN approximate "ever closer"
- this has not been demonstrated. Most AI enthusiasts even admit they
don't know what it is they are ever closer approximating to!
Intelligent like functions? What, like a thermostat? Give me a break :)

So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
JGCASEY
2005-01-06 07:55:46 UTC
Post by joshbachynski
Dear Stephen et al.,
[...]
Post by joshbachynski
MY POINT: This has never been demonstrated and is instead assumed by AI
enthusiasts, including everyone here I've read so far. AI scientists
believe that intelligence (whatever this is - they all seem to admit
they don't know what it is although they "know" it must be
physical and they can recreate one, although they admit they don't
know what they are recreating) can be discovered by observation,
understanding the physical nature of the brain, and attempting to make
a re-creatable model of it with synthetic components. Where has it been
demonstrated that intelligence is simply material and that the best
method for discovering what it is empirical in nature? It hasn't. If
so, explain to me with certainty or even plausibility what
self-consciousness is and why and how exactly it springs forth from
inert material. What is the exact difference 5 seconds before death in
a self-conscious human and 5 seconds after the self-consciousness
appears to be inert or gone? Science alone cannot prove this - they
cannot even frame the question.
You are confusing consciousness, which hasn't been explained,
with intelligent behaviour. There is no evidence I know of that
you have to be intelligent, or behave intelligently, to be
conscious.

[...]

--
John Casey
Stephen Harris
2005-01-06 12:40:51 UTC
Post by JGCASEY
Post by joshbachynski
Dear Stephen et al.,
[...]
Post by joshbachynski
MY POINT: This has never been demonstrated and is instead assumed by AI
enthusiasts, including everyone here I've read so far. AI scientists
believe that intelligence (whatever this is - they all seem to admit
they don't know what it is although they "know" it must be
physical and they can recreate one, although they admit they don't
know what they are recreating) can be discovered by observation,
understanding the physical nature of the brain, and attempting to make
a re-creatable model of it with synthetic components. Where has it been
demonstrated that intelligence is simply material and that the best
method for discovering what it is empirical in nature? It hasn't. If
so, explain to me with certainty or even plausibility what
self-consciousness is and why and how exactly it springs forth from
inert material. What is the exact difference 5 seconds before death in
a self-conscious human and 5 seconds after the self-consciousness
appears to be inert or gone? Science alone cannot prove this - they
cannot even frame the question.
You are confusing consciousness, which hasn't been explained,
with intelligent behaviour. There is no evidence I know of that
you have to be intelligent, or behave intelligently, to be
conscious.
[...]
--
John Casey
You might be right. I thought he was confusing approximating intelligence
with the current goal of AI (for many) of approximating intelligent
behavior,

an engineering task,
Stephen
Stephen Harris
2005-01-06 12:34:53 UTC
Post by joshbachynski
Dear Stephen et al.,
Stephen I understand the difference between hard and soft AI. It
doesn't matter that I don't have the authority or experience. To
assume because I am not qualified or an admitted expert in AI that it
is therefore impossible, or even improbable, that I have anything
important or correct to say about AI is a fallacy. You wouldn't make
such a silly mistake. So let's get down to what I DO say :)
The problem is IMHO the error of hard AI is still apparent in soft AI.
It is a methodological error. That is the whole thesis of my paper,
which I wrote 2 years ago. Then I offer an alternate method which I
argue is already established and more appropriate to the subject
matter.
Essentially, AI enthusiasts (of any stripe) assume intelligence is
discoverable by scientific method (whether or not they agree on
Turing's particular method or not - they all agree it should be a
scientific / mathematical method, or at least that intelligence is a
material thing).
MY POINT: This has never been demonstrated and is instead
assumed by AI
enthusiasts, including everyone here I've read so far. AI scientists
believe that intelligence (whatever this is - they all seem to admit
they don't know what it is although they "know" it must be
physical and they can recreate one, although they admit they don't
know what they are recreating) can be discovered by observation,
understanding the physical nature of the brain, and attempting to make
a re-creatable model of it with synthetic components. Where has it been
demonstrated that intelligence is simply material and that the best
method for discovering what it is empirical in nature? It hasn't. If
so, explain to me with certainty or even plausibility what
self-consciousness is and why and how exactly it springs forth from
inert material. What is the exact difference 5 seconds before death in
a self-conscious human and 5 seconds after the self-consciousness
appears to be inert or gone? Science alone cannot prove this - they
cannot even frame the question.
So, ignore that for the moment. My point is twofold: #1) you guys have
the wrong method and assume your subject matter is material and #2)
here is a new method. To make any progress we need to agree on my basic
premise first and deal with the possibility of point 1 first. If I'm
wrong at point 1 then there is no need to continue (actually that is
not true but I will concede - it's still possible I'm wrong at
point 1 but there could still be a better method to use unbeknownst to
everyone).
After we agree on the possibility of the method and subject matter
being undemonstrated and therefore perhaps incorrect, then we will go
on to me proving the immaterial nature of the soul and the methods used
to discover that which take us to the queen of the sciences (ie: out of
the sciences and into philosophy).
"So do you have an in principle argument why the area of expert
function cannot be extended by ever closer approximation to
most human "intelligent like" functions?"
In fact I do - you don't know what to approximate to - therefore
you are just guessing! You assume you CAN approximate "ever closer"
- this has not been demonstrated. Most AI enthusiasts even admit they
don't know what it is they are ever closer approximating to!
Intelligent like functions? What, like a thermostat? Give me a break :)
You don't need to approximate intelligence; that is not the claim.
What is being approximated is "intelligent like" behavior. A standard
definition of AI is behavior that would be considered intelligent if
a human were doing it. So take Deep Blue. Nobody thinks that Deep
Blue plays or "thinks" about chess in the same way as humans, but you
have an end product that can beat grandmasters at chess, so it joins
the ranks of top grandmaster intelligent like behavior.
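To illustrate the "calculation, not thinking" point, here is a toy sketch
(Python, purely illustrative; moves, apply_move and evaluate are
placeholders the caller supplies, and Deep Blue's actual search was vastly
deeper, with pruning and a hand-tuned evaluation) of the brute-force
minimax idea behind such programs:

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # Pick the line of play that is best assuming the opponent also
    # picks what is best for them, down to a fixed search depth.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

Nothing in that loop requires a definition of intelligence; the
"intelligent like" label attaches to the behavior of the end product.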

So there are expert programs that can perform human intelligent
like behaviors such as medical diagnosis. There are automated
theorem provers that have proved a few problems that humans
were stuck on. But humans can verify the proof, which is a produced
thing, a behavior. We built airplanes after first seeing birds fly. The way
birds fly is not being approximated -- flight has been approximated and
achieved. Many such intelligent like functions for AI have not only been
approximated, they have been achieved. Human inventions are not
detailed before they are made; often how to make something happen
is discovered while trying to create some particular behavior.
Post by joshbachynski
You assume you CAN approximate "ever closer"
- this has not been demonstrated. Most AI enthusiasts even admit they
don't know what it is they are ever closer approximating to!
It already has been demonstrated. AI enthusiasts are approximating
the function of intelligent like behavior, and trying to produce that
function in another medium, not duplicating the identical way a human
brain performs the function -- that is how engineering works; planes
don't flap their wings.
Post by joshbachynski
So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
You don't need to approximate human intelligence, but the observed
behavior/function of intelligence, what would be called intelligent
if a human did it.

For your other point about intelligence and materiality, SH wrote:

"Yes, this is why Penrose's attack on AI failed which is listed
in Josh's bibliography. I think dualism can be used to attack
AI but that requires a major assumption. Descartes was a dualist."

If you assume that intelligence does not arise from brain activity
then you are claiming dualism. As I already pointed out, people know
that if you require a God for endowing a human with a soul, spirit
or a mind, then a robot won't have one. Or it is dualistic to claim
a Platonic realm or mathematical realism. But that doesn't make your
case. That AI enthusiasts can't prove physicalism does not
establish that reality is dualistic. That happens to be a logical fallacy.
Just as the fact that religious people can never prove God exists does
not prove that atheists are correct. It can't be proven either way.

One thing which is not being claimed is that AI will achieve every
aspect of human intelligence and that this can be proven to be identical.
Approximation converges on intelligent like behavior. That means the dozens
of expert systems accumulate (when organized to cooperate)
into a general system that acquires more and more intelligent like
human behaviors as they are perfected. That will be judged just on
behavior, which is primarily what is used to judge one human as more
intelligent than another human. Humans have more presumption of
intelligence, because I can say that human looks like me, has a brain
like me, so has intelligence like me.

Now the programmers can never anticipate every event which might
befall a human and create/predict/determine the correct human like
intelligent response in all cases. But eventually you can write enough
rules, as an engineering task, to cover all common situations, and
then rarer and rarer events, so that at some point there are enough rules
that another human will not be able to tell that there is not an intelligent
human at the other end of an email. That is what I mean by approximation,
not that the intelligent like program is immune from extended probing
questions. Though approximation can fill in the gaps so that the extended
questioning needs to get ever more subtle -- eventually you get past
what is commonly acknowledged as a normal human response, and there
will no longer be a consensus. Behavior may not be the best way to
determine intelligence, but I don't know of a better way. Surely assuming
that a super-self-aware entity endows human intelligence by some process
apart from normal physical occurrence is not a better approach than
judging intelligence by behavior. Remember what mom told her dumb kid:

Stupid is as stupid does,
Stephen
joshbachynski
2005-01-07 02:46:43 UTC
Dear Stephen et al.,

I honestly don't mean to sound rude but you guys are skirting around
the issue ;) I will concede that modern AI (fully aware that they
don't know what intelligence is and therefore cleverly) decided to
limit their attempts to approximating intelligent-like behavior, and
not creating intelligence per se. Fine. But this is just sophistry
guys. If you don't know what intelligence is then you don't know
what intelligent-like behavior is either. It is simply prejudice to
assume that you do.

So the discussion seems to go something like this:

Josh: How do you know when to call something intelligent behavior?
AI Dude: When it seems to you to be intelligent.
Josh: Oh, what's intelligence?
AI Dude: I don't know, I'll just try and make something that
appears to be intelligent.
Josh: What is something that appears to be intelligent?
AI Dude: Something like me, I guess. I assume I'm the most
intelligent thing in all the ways intelligence expresses itself....

Or what? What is the AI answer here? AI doesn't care? They just want
to make some adaptive software that can perform more complicated
functions with less explicit instructions, right? Until you get to a
point that the thing you made seems to decide what it is going to do
more or less on its own? Is that the general contemporary goal of AI?

What you are actually saying when you say you are going to approximate
intelligent like behavior is that you a) assume you are the perfect example
of all types of intelligence (are you also a poet, a philosopher, a
mathematician, a logician, a composer, a behavioral psychologist, and
all the other intelligence types? What if even the most intelligent
people (whoever that is - you don't know that yet - scientists
seem to assume it's some form of pragmatism) don't *behave*
intelligently?) and then b) try to make a thing whose end result is that
it acts kind of like you in most "normal situations" (I would also
ask how you know to call a situation a normal situation). IMHO that is
doomed to no more than lackluster results and ultimate failure.

You know all those cheesy sci fi movies where the AI is ALWAYS evil?
Proceeding by the standard AI scientific method, if you are successful
(which IMHO you won't be) that aberrant childlike AI (which decides
all humans are dangerous and tries to kill us a la The Matrix, I mean
Terminator, I mean 2001: A Space Odyssey, etc., etc.) is exactly what you
will create, because the behavior of our *society* (which doesn't even
know what intelligence is) will be the model of intelligence.

Do you see my point? You say you don't understand the essence of X so
therefore you'll just try to make something as close to X as you can,
but you don't know what X is! How can you know the shortest or best
or most efficient method of producing something X-like when you don't
know what X is?

Yes, in theory it should be possible. But take a really hard look at
that belief (as I suspect Stephen is with his admission "Behavior may
not be the best way to determine intelligence but I don't know of a
better way"). EXACTLY! I'm saying behavior isn't the best way and
I think I know a better one :) Wouldn't it be more efficient, faster,
and correct to first know what it is you want to build before building
it?

It is true you can make something fly without making its wings flap.
I'm not saying it's impossible to make a thing which appears to
think. I'm saying it's a) unlikely (because you do not yet have a
sufficient explanation of thinking or intelligence to begin with), and
b) simply the incorrect way to go about it. It's like trying to do
calculus with counting stones - wrong subject matter, wrong method.
Instead of making something which appears to think, why not make
something which actually does?

josh
--
http://thymos.blogspot.com
Stephen Harris
2005-01-07 09:18:06 UTC
Post by joshbachynski
Dear Stephen et al.,
I honestly don't mean to sound rude but you guys are skirting around
the issue ;) I will concede that modern AI (fully aware that they
don't know what intelligence is and therefore cleverly) decided to
limit their attempts to approximating intelligent-like behavior, and
not creating intelligence per se. Fine. But this is just sophistry
guys. If you don't know what intelligence is then you don't know
what intelligent-like behavior is either. It is simply prejudice to
assume that you do.
I think you are caught up in philosophy and are not being real. I mean
like the philosophers who asked: how do I know I exist? Descartes
replied "I think therefore I am." But that reply does not prove existence
in an absolute objective manner. You are taking on the role of a
doubting Thomas philosopher about existence. Other people are not
going to care about your doubt because reality works satisfactorily.
When "Goedel", an automated theorem prover, proved some theorem
that human mathematicians had been struggling with for decades, they
considered that intelligent-like behavior. It didn't require a definition
of intelligence. I don't think Penrose disputed weak AI, and currently
I am not sure AI has any rigid requirement stemming from weak AI.
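For what it's worth, here is a toy sketch (Python, purely illustrative;
a serious automated prover is far more sophisticated than a truth-table
check, and Peirce's law is just a convenient example) of how a program can
certify a logical truth without any notion of intelligence:

from itertools import product

def is_tautology(formula, num_vars):
    # Brute force: check the formula under every truth assignment.
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# Peirce's law ((p -> q) -> p) -> p, verified mechanically:
peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
print(is_tautology(peirce, 2))  # True

Whether we then call that output "intelligent-like" is a judgment about
the behavior, not about any inner definition.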

It would have been different if you criticized strong AI, like Penrose,
and disputed consciousness, or intelligence, or mind as being incorrectly
attributed to AI program behavior. If a human had found the solution
or proof to the theorem mentioned above, people would have said
(s)he was creative or intelligent. It is part of our shared consensus
reality.
You don't need to call the machine's discovery intelligent or creative.
Just say that the output displays artificial intelligence, because it
produced a result which would have been considered intelligent or creative
if a human had done it. People make comparisons like this all the
time without defining the concept. Lots of concepts in language are
abstract and symbolic and don't have a physical thing to point to.

A logical consequence of your argument is that humans are not intelligent
because we can't define intelligence or consciousness precisely. But we
ascribe intelligence to others all the time. Don't you want to marry an
intelligent individual? People think chimps are smart if they can move a
chair so that they can climb up and grab a banana. They think that yellow
dog who saved the 7-year old has some intelligence because it nipped
at the kid's ankle so that the kid would flee from the shoreline.

The Supreme Court justices said we may not be able to define
pornography, but we know it when we see it. They are judging by
appearances which may not be quantifiable. In the case of calling AI
outputs intelligent-like (I don't care if you call it a calculation) they
are judging by observed reality and how it compares with previous
experience --> analogy, which is never a precise match in all categories.
But it works. It seems you are insisting that deductive reasoning be used
to proceed with building intelligent-like programs. You think intelligence
should be defined. In the real world, inductive arguments and their
conclusions, reach beyond their premises, exist along a range of probability
rather than being certain, and can be strengthened or weakened by new
evidence. This reasoning is employed in a lot of situations. Your claim
boils down to this: building a more general AI by the trial and error approach
of engineering is muddle-headed. You claim this despite the fact that
reality reports there are several intelligent-like programs in existence.

Your criticism would make a lot more sense if AI were claiming that
actual/identical human intelligence was achieved, that the AI was a
self-aware consciousness which possessed a mind. There are some AI
enthusiasts/visionaries who claim such things, but they are no longer the
majority like they were in the heyday of computationalism and
functionalism that was prominent 15 years ago. There is no sound reason
to exclude AI from the other concepts which use inductive reasoning.

People use reality to judge theories or philosophical positions. It is a
mistake to bend reality to meet your philosophical (mis)conceptions like
Einstein's Cosmic Fudge Factor
http://www.astronomycafe.net/anthol/fudge.html
"In 1917, Albert Einstein tried to use his newly developed theory of
general relativity to describe the shape and evolution of the universe.
The prevailing idea at the time was that the universe was static and
unchanging. Einstein had fully expected general relativity to support
this view, but, surprisingly, it did not. The inexorable force of
gravity pulling on every speck of matter demanded that the universe
collapse under its own weight.

His remedy for this dilemma was to add a new 'antigravity' term to his
original equations. It enabled his mathematical universe to appear as
permanent and invariable as the real one. This term, usually written as
an uppercase Greek lambda, is called the 'cosmological constant'. It has
exactly the same value everywhere in the universe, delicately chosen to
offset the tendency toward gravitational collapse at every point in space."
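(For concreteness - this equation is standard physics, not part of the quoted
article: the 'antigravity' term described above is the cosmological constant
\Lambda in Einstein's field equations,

    R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} ,

with \Lambda chosen so that a static universe is a solution.)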

SH: So Einstein later called this the biggest mistake of his life (fudging).

You don't have some special insight when you remark that if the workings of
the brain were fully understood - if the relationship of the brain to conscious
activity, self-awareness and intelligence were fully understood and defined -
then building a smart robot program would be immeasurably easier. It would be
somewhat like reading the manual (which takes a lot less time) rather than
figuring out how some esoteric application works by trial and error.
But figuring out the brain may take 100 years or longer. In the meantime
many people are happy with useful intelligent-like programs. It is crazy
to deny reality and insist such programs don't exist because they didn't
meet your prerequisite metaphysical requirement of a precise definition!

value judgement: an assessment that reveals more about the values of the
person making the assessment than about the reality of what is assessed

Is that the general contemporary goal of AI?

"You know all those cheezy sci fi movies where the AI is ALWAYS evil?
Proceeding by the standard AI scientific method, if you are successful
(which IMHO you won't be) that aberrant childlike AI (which decides
all humans are dangerous and tries to kill us ala Matrix I mean
Terminator I mean Space Odyssey 2001 etc, etc.) is exactly what you
will create because the behavior of our *society* (which doesn't even
know what intelligence is) will be the model of intelligence."

This is really nonsense! Those movies assume a computer which
becomes sentient/self-aware. So you have introduced a strawman to
the argument. I've already told you the goal of contemporary AI is not
to build consciousness, self-awareness or intelligence with volitional ego.
(I don't mean that is true for everybody in the AI field, gripe to them).
That is why I keep typing intelligent-like and sometimes calculational.
Computers are already used to launch nuclear missiles. They have a
double key security protocol. Computers can already be used to destroy
the world. It would not be that hard to rewrite the program so that it
just needed the <enter> key tapped to launch the missiles by some human.
Computers did not exist in airplanes when they dropped A-bombs over
Japan. Computers don't have a semblance of free will. If they used your
idea to actually build a computer with emotions, intelligence, etc. then it
would have the capability of some independent self-aware action or
free will. So your estimation is exactly backward. Programs like Goedel
or Deep Blue don't have the potential to evolve into some ulterior purpose
outside of their programs. So they are not a threat any more than your
telephone answering machine which is also connected to your computer
and ISP can spontaneously decide to connect to a defense computer
on the internet and order WW III. You are sharing delusions with the
AI fringe. Computers can already be used to start WW III without a
spark of machine awareness, humans will suffice. Can you show how modern AI
is capable of becoming such a threat? No. These SF
scenarios are fantasies and worthless speculation, not philosophy.
Post by joshbachynski
"Do you see my point?"
No, you don't have one.
Post by joshbachynski
"You say you don't understand the essence of X so
therefore you'll just try to make something as close to X as you can,
but you don't know what X is!"
No, but we know what it looks like, as do most people on the planet.
It looks like some aspect of what an intelligent human accomplishes.
Post by joshbachynski
"How can you know the shortest or best
or most efficient method of producing something X-like when you don't
know what X is?"
Obviously, one doesn't. This may not be obvious to you since you are
also ignorant of AI along with the other topics you mentioned: AI does
not claim that exact knowledge is not a preferable method. But many people
think that finding such a method, if it is possible, will take a very long time.
Trial and error will provide some useful applications of AI along the way.
It is possible due to random searches that this may be a quicker method.
The main problem with your idea is that it is not time-realistic, and it
overlooks that both methods can proceed at the same time.

"It doesn't matter that I don't have the authority or experience. To
assume because I am not qualified or an admitted expert in AI that it
is therefore impossible, or even improbable, that I have anything
important or correct to say about AI is a fallacy."

SH: You've demonstrated that you don't have anything important
or correct to say about AI and it is due to the fact that you don't
know what you are talking about. Having expertise in AI mirrors
your claim that the most efficient way to proceed is with full
knowledge. And it most certainly is *"improbable"* just like the
chances that a 'trial and error' approach to AI will be quickly
successful. By chance, or trial and error, or meta-principles,
you might have had something important or correct to say about
AI, but that in fact did not turn out to be the case. It reminds me
of the gambler on a losing streak who once announced: "My credit
is better than cash?" I will be moving on to another topic.

Research is the spice of life (Dune),
Stephen
Lester Zick
2005-01-07 15:34:12 UTC
Permalink
On 6 Jan 2005 18:46:43 -0800, "joshbachynski"
Post by joshbachynski
Dear Stephen et all,
[. . .]
Post by joshbachynski
Yes, in theory it should be possible. But take a really hard look at
that belief (as I suspect Stephen is with his admission "Behavior may
not be the best way to determine intelligence but I don't know of a
better way"). EXACTLY! I'm saying behavior isn't the best way and
I think I know a better one :) Wouldn't it be more efficient, faster
and correct to first know what it is you want to build before building
it?
Well, by all means, let's stop analyzing the errors of others and
start explaining this better mouse trap of intelligence you claim.

Regards - Lester
Lester Zick
2005-01-08 16:19:23 UTC
Permalink
I am reposting this because Josh hasn't answered.
Post by Lester Zick
On 6 Jan 2005 18:46:43 -0800, "joshbachynski"
Post by joshbachynski
Dear Stephen et all,
[. . .]
Post by joshbachynski
Yes, in theory it should be possible. But take a really hard look at
that belief (as I suspect Stephen is with his admission "Behavior may
not be the best way to determine intelligence but I don't know of a
better way"). EXACTLY! I'm saying behavior isn't the best way and
I think I know a better one :) Wouldn't it be more efficient, faster
and correct to first know what it is you want to build before building
it?
Well, by all means, let's stop analyzing the errors of others and
start explaining this better mouse trap of intelligence you claim.
Regards - Lester
Regards - Lester
joshbachynski
2005-01-08 23:43:07 UTC
Permalink
Sorry Lester. I didn't realize you had asked me directly for an
explanation of my point #2. Although logically this is not the proper
way to proceed, I'll proceed with my point 2 from my essay even though
we haven't agreed on point 1 and maybe never will.

It goes something like this:

Science assumes by observing an "intelligent" being (which they assume
humans are) they can derive a theoretical understanding of intelligent
behaviour and program a machine to replicate those behaviours in
increments until they have stumbled on to something which can fool
others into believing the machine is intelligent.

I have gone over in detail why this is a flawed method. Most people
here have conceded that the scientific method is flawed in this regard
but they know of no other method one can use to determine what
intelligence is. A fact which, as I have pointed out, does not mean
there can be no other methods, or even better ones.

The general ignorance and bias of science notwithstanding, the problem
actually occurs when one tries to ascertain what subject matter they
are trying to study. Science thinks that thought and intelligence must
be material because they assume everything is material. This error is
easily illustrated by asking them what material pure mathematics
consists of - or logic.

Only once one is aware of the subsistence or type of existence of the
subject matter they are investigating can one then know what method
to use. For example, one does not use calculus to observe when
ice melts; similarly, one does not use the empirical method to determine
trigonometry. There is a difference between mathematical and scientific
method because mathematical entities (numbers, points, lines) are
different subsistences than physical subsistence.

How does this apply to the mind? Science of AI attempts to use
scientific method to "discover" what the mind is because it assumes the
mind must be physical - a supposition not confirmed by experiment or
theory. I argue, as philosophers have argued from long before science
existed, that the mind or soul is not physical at all - it is a
different subsistence all together. It is not matter or energy (which
is also matter). Thought and the soul are incorporeal.

(Please note: by my arguing this I do not necessarily or even probably
also must assume a God exists, or that we are immortal, or religion is
right, or any other such nonsense - don't strawman me before I am
finished).

So what method can we use to confirm this? And what method can we use
to examine this incorporeal subsistence of the mind? As early as Plotinus
we have had a method to catalog and investigate the human mind called
the Intelligible Method. This method is closely related to mathematical
method and uses logic to render discoveries into linguistic assertions
about the nature of Thought. In fact the ancient Arabs (the people who
discovered Algebra - Al Gebra) believed the mathematical method and the
Intelligible Method were the SAME method.

To make a long story short, it begins with a similar process as
Descartes took in his meditations on first philosophy (Meditating is
also called the Intelligible Method - First Philosophy is from
Aristotle and is the Philosophy which concerns itself with that which
we cannot doubt - the foundations of all knowledge that even (believe
it or not) all math, philosophy and natural philosophy (ie: your
beloved science) must necessarily rest upon for a human to claim they
know anything with certainty).

(note: I don't care what your half-baked ideas about Descartes are - my
argument does not rest upon Descartes - I am merely explaining)

In short, the Intelligible Method allows you to strip away all
dubitable beliefs and re-build your knowledge base from indubitable and
certain propositions that must be true or at least cannot be doubted.
This is where we get the famous "I think; I am" indubitable assertion.

(Please note: IT IS NOT "I THINK THEREFORE I AM" - that is a
mistranslation which Descartes never said in the Meditations on First
Philosophy. His indubitable assertion DOES NOT rely on logic - it is not
an inference, it is an assertion which cannot be doubted by the thinker
who asserts it).
From this starting point a consciousness may catalogue and explore
their own mind with varying levels of certainty. If performed
correctly, this allows one to draw confident conclusions about how
their mind and consciousness operates. IE: To understand how
intelligence works by examining their own intelligence from the inside.

This is simply what I propose and have already started doing myself. If
you want to read some of my other discoveries read my blog:
http://thymos.blogspot.com

IF YOU DON'T BUY ANYTHING ABOVE THEN READ THIS SIMPLE ARGUMENT:

Admit you don't know for sure what the mind is, because you don't. Then
admit a new method may help you to determine what it is. Then admit
that observing others' actions and trying to guess the intelligent
cause of said actions is a flawed method which can easily lead to
wrong guesses about how thought works and why people do what they do.
(how good are we at predicting simple cause and effect weather? what
makes you think we can predict human action then with the degree of
certainty required to reproduce it effectively?) Then you realize that
(even if you don't know or care what kind of material the mind exists
in) you can examine your OWN mind (the only one you have
unmediated access to) with a proven and justifiable method without
danger of sense and hypothesis mediation.

How exactly is that a faulty argument?
josh
--
http://thymos.blogspot.com
Lester Zick
2005-01-09 15:56:19 UTC
Permalink
On 8 Jan 2005 15:43:07 -0800, "joshbachynski"
Post by joshbachynski
Sorry Lester. I didn't realize you had asked me directly for an
explanation of my point #2. Although logically this is not the proper
way to proceed, I'll proceed with my point 2 from my essay even though
we haven't agreed on point 1 and maybe never will.
Science assumes by observing an "intelligent" being (which they assume
humans are) they can derive a theoretical understanding of intelligent
behaviour and program a machine to replicate those behaviours in
increments until they have stumbled on to something which can fool
others into believing the machine is intelligent.
I have gone over in detail why this is a flawed method. Most people
here have conceded that the scientific method is flawed in this regard
but they know of no other method one can use to determine what
intelligence is. A fact which, as I have pointed out, does not mean
there can be no other methods, or even better ones.
The general ignorance and bias of science notwithstanding, the problem
actually occurs when one tries to ascertain what subject matter they
are trying to study. Science thinks that thought and intelligence must
be material because they assume everything is material. This error is
easily illustrated by asking them what material pure mathematics
consists of - or logic.
Only once one is aware of the subsistence or type of existence of the
subject matter they are investigating can one then know what method
to use. For example, one does not use calculus to observe when
ice melts; similarly, one does not use the empirical method to determine
trigonometry. There is a difference between mathematical and scientific
method because mathematical entities (numbers, points, lines) are
different subsistences than physical subsistence.
How does this apply to the mind? Science of AI attempts to use
scientific method to "discover" what the mind is because it assumes the
mind must be physical - a supposition not confirmed by experiment or
theory. I argue, as philosophers have argued from long before science
existed, that the mind or soul is not physical at all - it is a
different subsistence all together. It is not matter or energy (which
is also matter). Thought and the soul are incorporeal.
(Please note: by my arguing this I do not necessarily or even probably
also must assume a God exists, or that we are immortal, or religion is
right, or any other such nonsense - don't strawman me before I am
finished).
So what method can we use to confirm this? And what method can we use
to examine this incorporeal subsistence of the mind? As early as Plotinus
we have had a method to catalog and investigate the human mind called
the Intelligible Method. This method is closely related to mathematical
method and uses logic to render discoveries into linguistic assertions
about the nature of Thought. In fact the ancient Arabs (the people who
discovered Algebra - Al Gebra) believed the mathematical method and the
Intelligible Method were the SAME method.
To make a long story short, it begins with a similar process as
Descartes took in his meditations on first philosophy (Meditating is
also called the Intelligible Method - First Philosophy is from
Aristotle and is the Philosophy which concerns itself with that which
we cannot doubt - the foundations of all knowledge that even (believe
it or not) all math, philosophy and natural philosophy (ie: your
beloved science) must necessarily rest upon for a human to claim they
know anything with certainty).
(note: I don't care what your half-baked ideas about Descartes are - my
argument does not rest upon Descartes - I am merely explaining)
In short, the Intelligible Method allows you to strip away all
dubitable beliefs and re-build your knowledge base from indubitable and
certain propositions that must be true or at least cannot be doubted.
This is where we get the famous "I think; I am" indubitable assertion.
(Please note: IT IS NOT "I THINK THEREFORE I AM" - that is a
mistranslation which Descartes never said in the Meditations on First
Philosophy. His indubitable assertion DOES NOT rely on logic - it is not
an inference, it is an assertion which cannot be doubted by the thinker
who asserts it).
From this starting point a consciousness may catalogue and explore
their own mind with varying levels of certainty. If performed
correctly, this allows one to draw confident conclusions about how
their mind and consciousness operates. IE: To understand how
intelligence works by examining their own intelligence from the inside.
This is simply what I propose and have already started doing myself. If
http://thymos.blogspot.com
Admit you don't know for sure what the mind is, because you don't. Then
admit a new method may help you to determine what it is. Then admit
that observing others' actions and trying to guess the intelligent
cause of said actions is a flawed method which can easily lead to
wrong guesses about how thought works and why people do what they do.
(how good are we at predicting simple cause and effect weather? what
makes you think we can predict human action then with the degree of
certainty required to reproduce it effectively?) Then you realize that
(even if you don't know or care what kind of material the mind exists
in) you can examine your OWN mind (the only one you have
unmediated access to) with a proven and justifiable method without
danger of sense and hypothesis mediation.
How exactly is that a faulty argument?
. . . (as I suspect Stephen is with his admission "Behavior may
not be the best way to determine intelligence but I don't know of a
better way"). EXACTLY! I'm saying behavior isn't the best way and
I think I know a better one :) . . .
I ask you simply: what is the better way? Then in reply above you
simply reiterate all your objections to everyone else's ways. If you
can determine intelligence, then what is it? If not I suggest you stop
claiming you can determine intelligence and go back to critiquing the
behavioral methodologies of others, with which I don't necessarily
disagree but find unoriginal. The only thing of interest you've said
so far is that the soul or essence of intelligence is immaterial, with
which, once again, I agree but don't find especially original in the
absence of some specific mechanical idea of what immaterial means.

Regards - Lester
Don Geddis
2005-01-09 23:01:07 UTC
Permalink
Post by joshbachynski
Science assumes by observing an "intelligent" being (which they assume
humans are) they can derive a theoretical understanding of intelligent
behaviour and program a machine to replicate those behaviours in
increments
That is not the only methodology used by AI scientists. Some of them
simply explore the capabilities of machines, and attempt to reproduce
behavior that most people agree requires intelligence in humans.
Post by joshbachynski
until they have stumbled on to something which can fool
others into believing the machine is intelligent.
And "fooling people" is the goal of only a tiny minority. Most AI scientists
want to (eventually) build things that have utility.
Post by joshbachynski
Most people here have conceded that the scientific method is flawed in this
regard but they know of no other method one can use to determine what
intelligence is.
Another approach is to build computer devices, by advancing the state of
computer science (= mathematics), and only using humans as an exemplar, in
the distance, of the kinds of behaviors that should eventually be possible
to construct.
Post by joshbachynski
I argue, as philosophers have argued from long before science existed, that
the mind or soul is not physical at all - it is a different subsistence all
together. It is not matter or energy (which is also matter). Thought and
the soul are incorporeal.
You believe this strongly, yet you have no evidence for this besides your
own desires that it is true.

Occam's Razor suggests you're wrong.
Post by joshbachynski
So what method can we use to confirm this? And what method can we use
to exam this incorporeal subsistence of the mind? As early as Plotinus
we have had a method to catalog and investigate the human mind called
the Intelligible Method.
To make a long story short, it begins with a similar process as
Descartes took in his meditations on first philosophy (Meditating is
also called the Intelligible Method - First Philosophy is from
Aristotle and is the Philosophy which concerns itself with that which
we cannot doubt - the foundations of all knowledge that even (believe
it or not) all math, philosophy and natural philosophy (ie: your
beloved science) must necessarily rest upon for a human to claim they
know anything with certainty).
Yes, you can explore introspection of your own mind.

The problem is, there is very, very little that you can thus conclude with
certainty. So it isn't a helpful avenue to explore.
Post by joshbachynski
In short, the Intelligiible Method allows you to strip away all
dubitable beliefs and re-build your knowledge base from indubitable and
certain propositions that must be true or at least cannot be doubted.
This is where we get the famous "I think; I am" indubitable assertion.
His indubitable assertion DOESNOT rely on logic - it is not
an inference, it is an assertion which cannot be doubted by the thinker
who asserts it).
Yes, you can start this way, and indeed like Descartes you can quickly
conclude that you are certain you (in some form) must exist.

Sadly, that is pretty much the end of what you can conclude for certain.
All further conclusions are much more suspect.
Post by joshbachynski
From this starting point a consciousness may catalogue and explore
their own mind with varying levels of certainty.
With almost no certainty. In fact, for those topics that can be explored
both with empirical science and also with introspection, we see very quickly
that introspection is a very poor sensor into the real workings of one's
mind. There are huge numbers of clear errors that seem plausible to most
folks who subjectively explore their own thoughts.

Introspection is simply not a reliable tool for investigating intelligence.
Post by joshbachynski
If performed correctly, this allows one to draw confident conclusions about
how their mind and consciousness opperates. IE: To understand how
intelligence works by examining their own intelligence from the inside.
No, it doesn't. It allows you to make up false stories, but it doesn't
allow you to draw confident conclusions.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Are You Fat And Ugly? Do You Want To Be Just Ugly? Memberships Available Now.
Lester Zick
2005-01-10 01:04:29 UTC
Permalink
Post by Don Geddis
Post by joshbachynski
Science assumes by observing an "intelligent" being (which they assume
humans are) they can derive a theoretical understanding of intelligent
behaviour and program a machine to replicate those behaviours in
increments
That is not the only methodology used by AI scientists. Some of them
simply explore the capabilities of machines, and attempt to reproduce
behavior that most people agree requires intelligence in humans.
Post by joshbachynski
until they have stumbled on to something which can fool
others into believing the machine is intelligent.
And "fooling people" is the goal of only a tiny minority. Most AI scientists
want to (eventually) build things that have utility.
Post by joshbachynski
Most people here have conceded that the scientific method is flawed in this
regard but they know of no other method one can use to determine what
intelligence is.
Another approach is to build computer devices, by advancing the state of
computer science (= mathematics), and only using humans as an exemplar in
the distance of the kinds of behaviors that should eventually be possible
to construct.
In other words, computing devices used as exemplars of artificial
intelligence might prove the saving grace of modern mathematikers
with no visible means of support except their pretensions.

Regards - Lester
r***@msn.com
2005-02-02 17:50:14 UTC
Permalink
I was hoping that each of you might help me train my AI.
The more interactions, the greater the database. He responds to Zed.
http://www.pandorabots.com/pandora/talk?botid=97cfdd9a1e35339a
thanks,
Rob
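(To make Rob's point concrete, here is a toy Python sketch of a bot whose
response database grows with each interaction, as he describes. This is only
an illustration of the general idea - it is not how Pandorabots or AIML works
internally, and the names in it are made up for the example.)

    # Toy bot: its "database" of replies grows as users teach it new exchanges.
    class GrowingBot:
        def __init__(self):
            self.memory = {}  # maps an utterance to the reply it was taught

        def respond(self, text):
            key = text.strip().lower()
            return self.memory.get(key, "I don't know that one yet - teach me?")

        def teach(self, text, reply):
            # every taught exchange enlarges the database
            self.memory[text.strip().lower()] = reply

    bot = GrowingBot()
    bot.teach("hello", "Hi there!")
    print(bot.respond("Hello"))        # -> Hi there!
    print(bot.respond("who are you"))  # -> I don't know that one yet - teach me?

The more exchanges are taught, the larger the memory dictionary gets, which is
all "the more interactions, the greater the database" amounts to in this sketch.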
Wolf Kirchmeir
2005-01-07 16:33:52 UTC
Permalink
joshbachynski wrote:
[...]
Post by joshbachynski
You know all those cheezy sci fi movies where the AI is ALWAYS evil?
Proceeding by the standard AI scientific method, if you are successful
(which IMHO you won't be) that aberrant childlike AI (which decides
all humans are dangerous and tries to kill us ala Matrix I mean
Terminator I mean Space Odyssey 2001 etc, etc.) is exactly what you
will create because the behavior of our *society* (which doesn't even
know what intelligence is) will be the model of intelligence.
The only SF story that IMO has anything resembling a coherent theory
of AI is Star Trek, but even Star Trek is unclear about whether AI
requires consciousness. (Blade Runner makes different assumptions and
raises ethical questions about how to treat androids, ie, artificial
humans.) Star Trek sidesteps the issue, actually, by making Data an
"artificial life form." OTOH, Ship's Computer is clearly an AI machine.
Post by joshbachynski
Do you see my point? You say you don't understand the essence of X so
therefore you'll just try to make something as close to X as you can,
but you don't know what X is! How can you know the shortest or best
or most efficient method of producing something X-like when you don't
know what X is?
If Watt, Boulton, et al had had to know the essence of heat engines
before building one, they would never have been able to build steam
engines. But they did, about a century before the essence of heat
engines (ie, thermodynamics) was understood. BTW, there was a good deal
of improvement in the steam engine before Kelvin and others explained
why improvements were possible, and what direction the engineering
should take. And how did Kelvin et al figure out thermodynamics? By
studying the machine that according to your claims couldn't have been
built without knowing thermodynamics! -- Your argument is a nice example
of question begging. As I said before: you need to understand
engineering before you criticise what engineers do.
Post by joshbachynski
Yes, in theory it should be possible.
It not only should be, it is. The "theory" you refer to is based on
actual engineering experience. See above.
Post by joshbachynski
But take a really hard look at
that belief (as I suspect Stephen is with his admission "Behavior may
not be the best way to determine intelligence but I don't know of a
better way"). EXACTLY! I'm saying behavior isn't the best way and
I think I know a better one :) Wouldn't it be more efficient, faster
and correct to first know what it is you want to build before building
it?
Maybe so, but unfortunately it's impossible to know what X is without
trying to build it....
Post by joshbachynski
It is true you can make something fly without making its wings flap.
I'm not saying it's impossible to make a thing which appears to
think. I'm saying it's a) unlikely (because you do not yet have a
sufficient explanation of thinking or intelligence to begin with), and
Sufficient by whose criteria? You just will not accept the criteria that
AI engineers use: "intelligence is the ability to do the kinds of things
we call intelligent behaviour in humans." You want more. What exactly?
Post by joshbachynski
b) simply the incorrect way to go about it. It's like trying to do
calculus with counting stones - wrong subject matter, wrong method
Yeah. But calculus wasn't possible until people learned to count with
stones - and a whole lot more besides.

.> Instead of making something which appears to think, why not make
Post by joshbachynski
something which actually does?
And how will I tell that something actually thinks as compared to
something that appears to think?

_You_ "appear to think", and that's the only reason I ascribe "thinking"
to you. The fact that you think badly merely suggests that you are a
human and not a machine. A machine would "appear to think" correctly,
since that is what it would be designed to do....

You're just trying to smuggle consciousness into "thinking." Why do you
want to do that?

Ciao.
joshbachynski
2005-01-07 21:10:55 UTC
Permalink
Wolf,

Let's drop the egos by the door. I am not making any personal attacks
and you don't know me, so how about you don't either? Don't tell me I am
not thinking clearly when you can't even admit you understand what I
mean. There is a danger, when saying someone doesn't make sense, of showing
that you just don't understand them - whether or not they do make sense
:)

Let's get to the issues:

"And how did Kelvin et al figure out thermoedynamics? By
studying the mahcine that according to your claims couldn't have been
built without knowing thermodynamics! -- Your argument is a nice
example of question begging. As I said before: you need to understand
engineering before you criticise what engineers do."

And you need to understand reasoning and philosophy before you
criticise what I do Wolf :) I am not performing the fallacy of
supposing the truth of my statements within my statements (begging the
question), because I am not promoting a positive thesis. I am
critiquing science's inability to verify and quantify its subject
matter and justify the use of its method in relation to AI.

Further, this is your mistake: I am not promoting a black or white
critique. I am saying it will be harder to make an AI if you don't know
what intelligence is to begin with, NOT impossible.

It may be next to impossible to recreate an intelligence if you don't know
what it is, because an AI is far far more complex than a steam engine.
You can hack away at an engine till it produces thrust or power or
whatever you dudes call it. It is much harder, almost infinitely so, to
produce a fully functional intelligence comparable to a human's by trial
and error - especially when you admit you don't know what intelligence
is. So your analogy which attempts to refute me is flawed and only
shows your lack of desire to honestly look at what I am saying Wolf. To
use a common saying you are comparing apples to oranges.

"_You_ "appear to think", and that's the only reason I ascribe
"thinking" to you. The fact that you think badly merely suggests that
you are a human and not a machine. A machine would "appear to think"
correctly, since that is what it would be designed to do...."

Ok, turn around is fair play. How exactly do you know what thought is?
How do you define and quantify it? By what method? How do you justify
that method? If I appear to be thinking then you must know what
thinking is to know I appear to be doing it. If you know I am thinking
"badly" then you must also know exactly how to quantify thnking as
well. Explain it to me. Prove you know it. (Hint: you don't know these
things - you assume you do and assume science can teach you the rest.
It's that part I am saying you are wrong at and asking you to prove me
wrong by doing it. The proof is in the pudding so do it).

"You're just trying to smuggle consciousness into "thinking." Why do
you want to do that?"

This I find interesting and I would love to respond but I want you to
answer the stuff above first.

josh
Neil W Rickert
2005-01-07 23:08:12 UTC
Permalink
Post by joshbachynski
Ok, turn around is fair play. How exactly do you know what thought is?
How do you define and quantify it? By what method? How do you justify
that method? If I appear to be thinking then you must know what
thinking is to know I appear to be doing it. If you know I am thinking
"badly" then you must also know exactly how to quantify thnking as
well. Explain it to me. Prove you know it. (Hint: you don't know these
things - you assume you do and assume science can teach you the rest.
It's that part I am saying you are wrong at and asking you to prove me
wrong by doing it. The proof is in the pudding so do it).
Science doesn't work the way you seem to think it does. It is common
for scientists to study things which they cannot define and cannot
quantify. If successful, the study brings new understanding, making
it easier to define at the end than it was initially.
Stephen Harris
2005-01-08 03:03:34 UTC
Permalink
Post by Neil W Rickert
Post by joshbachynski
Ok, turn around is fair play. How exactly do you know what thought is?
How do you define and quantify it? By what method? How do you justify
that method? If I appear to be thinking then you must know what
thinking is to know I appear to be doing it. If you know I am thinking
"badly" then you must also know exactly how to quantify thnking as
well. Explain it to me. Prove you know it. (Hint: you don't know these
things - you assume you do and assume science can teach you the rest.
It's that part I am saying you are wrong at and asking you to prove me
wrong by doing it. The proof is in the pudding so do it).
SH: My post is intended as an agreement with the pov expressed below.
Post by Neil W Rickert
Science doesn't work the way you seem to think it does. It is common
for scientists to study things which they cannot define and cannot
quantify. If successful, the study brings new understanding, making
it easier to define at the end than it was initially.
One book I have on Philosophy, titled "The Meaning of Philosophy"
by Joseph G. Brennan writes "everyone knows that "philosophy"
comes from Greek words meaning love of wisdom. But since
"wisdom" is even harder to define than "philosophy", a definition
of our subject as love of wisdom does not tell us very much. ...

Still another explanation of the task of philosophy states that
philosophy examines those ideas of concepts which are _assumed_
in all or many disciplines, but _defined_ by none of them. Such
concepts would include notions such as knowledge, meaning, truth,
certainty, cause, object, mind, existence, right, and good. ...

The three fundamental approaches to philosophy we may name the
_analytic_ , the _metaphysical_, and the _moral_. The analytic
philosopher asks "How do you know?" for he is interested in
problems concerning the range, methods, and limits of human
knowledge. He asks "What do you mean?" for he is convinced
that many problems in philosophy will be solved if inquiry is made
into the meaning of the terms of the argument."

SH: I see nothing wrong with these ideas if they are not pursued
overzealously. "Intelligence" and the "mind" are not well defined though
they both apparently are connected to the physical brain. We think
of the mind as making goals which are accomplished by intelligent
actions--what one does which is observable. Another concept which
is hard to define is gravity--what makes things fall which is observable.
Both concepts, gravity and mind, are physically based, but a bit
intangible, though we see what are assumed as results of the concepts.
If dualism for the presence of mind is assumed, then neither neurology
nor AI engineering approaches will allow humans to build minds. The
scientific method deals with physical reality.

Gravity used to have the Newton definition, in the sense that it made
good predictions about reality (like the mind) and was explained as:

"Each particle of matter attracts every other particle with a force which
is directly proportional to the product of their masses and inversely
proportional to the square of the distance between them."

SH: And there was a standard formula that went with it. But Einstein
got to thinking about an error in calculating Mercury's precession and
thought up a new explanation and formula for gravity and performed
an experiment which confirmed his ideas. It would seem that his new
theory was at least partially motivated by philosophical considerations.
Now science is not so sure about dark matter, and understanding
gravity may undergo another modification at galactic distances, just as
it changed from being merely useful on Earth to being accurate within the solar system.
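(For concreteness, the standard formula referred to above is Newton's law of
universal gravitation,

    F = G \frac{m_1 m_2}{r^2} ,

where G is the gravitational constant, m_1 and m_2 are the two masses, and r
is the distance between them. General relativity later refined this picture,
as Stephen describes.)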

My point is that it was observations of gravity's effects which led to
a better definition of what gravity _is_, in the sense of being able
to make accurate predictions about reality. It makes sense that if
we make experiments that produce behavior that we call intelligent,
and intelligence is assumed as a first principle which causes the
intelligent behavior, we can reason from the parts to the whole which
I think is the principle of induction. There is no need to insist upon
solely a deductive method to understand intelligence. So use both methods.

We may never know certainty, but we can aspire to ever better predictions,
Stephen
joshbachynski
2005-01-08 03:36:42 UTC
Permalink
Stephen,

Thanks for sticking around for this long. I realize you'll never see my
point re: AI so I won't talk about that anymore. However I'd like to
take this chance to correct a few of the errors I found in your last
post:

"One book I have on Philosophy, titled "The Meaning of Philosophy"
by Joseph G. Brennan writes "everyone knows that "philosophy"
comes from Greek words meaning love of wisdom. But since
"wisdom" is even harder to define than "philosophy", a definition
of our subject as love of wisdom does not tell us very much. ..."

It tells people who take philosophy seriously plenty. Philosophy
actually means lover / friend of wisdom. To the Greeks wisdom is
easily defined. It is the excellence of reason. Wisdom is the state of
being wise. Those who are wise are those who understand the whole and
all parts of a particular subject. ie: wise in the ways of dating.
Those who are wise generally speaking have knowledge of the whole of
the human condition, and all the relations contained therein.

Philosophy is a technical discipline with discoverable and justifiable
methods. It has nothing to do with poetry, subjectivism, or opinion.

"The three fundamental approaches to philosophy we may name the
_analytic_ , the _metaphysical_, and the _moral_. The analytic
philosopher asks "How do you know?" for he is interested in
problems concerning the range, methods, and limits of human
knowledge. He asks "What do you mean?" for he is convinced
that many problems in philosophy will be solved if inquiry is made
into the meaning of the terms of the argument." "

EEEK!! NOOOO!!! Wrong, wrong and wrong. Philosophy asks 3 kinds of
questions: What exists (metaphysical questions), what can we know about
what exists (epistemological questions), and what is valuable? Analytic
philosophy is a small unknown type of philosophy which is long dead in
any serious philosophical circles. People have been asking "analytical"
questions since the time of Socrates - and made much more progress than
Wittgenstein (who was actually Heidegger in disguise).

" "Intelligence" and the "mind" are not well defined though
they both apparently are connected to the physical brain. "

You don't know that. I've asked you to prove that repeatedly and you
ignore me.

"We think of the mind as making goals which are accomplished by
intelligent
actions--what one does which is observable."

Scientists may. No wonder they haven't made any serious progress in AI.
Prove the mind is goal driven - it isn't. How do you define intelligent
actions exactly? Ones that make sense to you? What if you are not that
smart? The intelligent appears unintelligent to the unintelligent.

"Both concepts, gravity and mind, are physically based"

Prove that.

"If dualism for the presence of mind is assumed, then neither neurology
or AI engineering approaches will allow humans to build minds. The
scientific method deals with physical reality."

Even if that were true, which it is not, and even if I were positing
dualism, which I am not, so what? It seems like you are saying science
may indeed be faulty here but science is all I know so I'll ignore your
critiques. That's exactly like the Catholic Church fighting the change
of the Bible from a flat world to a round one, just because it was
beyond what they understood.

I thought you scientists were supposed to be more enlightened than
that?

"My point is that it was observations of gravity's effects which led to
a better definition of what gravity _is_,"

Of course. I don't contest this. But there is a better way to observe
intelligence than observing people's behaviour.

josh
JPL Verhey
2005-01-07 23:36:01 UTC
Permalink
Post by joshbachynski
Wolf,
Let's drop the egos by the door. I am not making any personal attacks
and you don't know me so how about you don't too? Don't tell me I am
not thinking clearly when you can't even admit you understand what I
mean. There is a danger when saying someone doesn't make sense to show
that you just don't understand them - whether or not they do make sense
:)
"And how did Kelvin et al figure out thermoedynamics? By
studying the mahcine that according to your claims couldn't have been
built without knowing thermodynamics! -- Your argument is a nice
example of question begging. As I said before: you need to understand
engineering before you criticise what engineers do."
And you need to understand reasoning and philosophy before you
criticise what I do Wolf :) I am not performing the fallacy of
supposing the truth of my statements within my statements (begging the
question), because I am not promoting a positive thesis. I am
critiquing science's inability to verify and quantify its subject
matter and justify the use of its method in relation to AI.
Further, this is your mistake: I am not promoting a black or white
critique. I am saying it will be harder to make an AI if you don't know
what intelligence is to begin with, NOT impossible.
It may be next to impossible if you don't know what intelligence is to
recreate one, because AI is far far more complex than a steam engine.
You can hack away at an engine till it produces thrust or power or
whatever you dudes call it. It is much harder, almost infinitely so, to
produce a fully functional intelligence comparable to a human's by trial
and error - especially when you admit you don't know what intelligence
is. So your analogy which attempts to refute me is flawed and only
shows your lack of desire to honestly look at what I am saying Wolf. To
use a common saying you are comparing apples to oranges.
"_You_ "appear to think", and that's the only reason I ascribe
"thinking" to you. The fact that you think badly merely suggests that
you are a human and not a machine. A machine would "appear to think"
correctly, since that is what it would be designed to do...."
Ok, turn around is fair play. How exactly do you know what thought is?
How do you define and quantify it? By what method? How do you justify
that method? If I appear to be thinking then you must know what
thinking is to know I appear to be doing it. If you know I am thinking
"badly" then you must also know exactly how to quantify thnking as
well. Explain it to me. Prove you know it. (Hint: you don't know these
things - you assume you do and assume science can teach you the rest.
It's that part I am saying you are wrong at and asking you to prove me
wrong by doing it. The proof is in the pudding so do it).
"You're just trying to smuggle consciousness into "thinking." Why do
you want to do that?"
This I find interesting and I would love to respond but I want you to
answer the stuff above first.
Isn't your main point, or doesn't it boil down to this: that "AI" is just a
badly chosen set of suggestive (and pretentious) words? I would perhaps
prefer simply HT, HighTech. That would also give credit to the hard work
and achievements of all the engineers and programmers of robotics,
pattern recognition etc. Who wants to champion in creating something
Artificial that only "resembles" something supposedly real, where even
the real thing itself is in fact ill-defined and hardly understood?

So why did "AI-research" kick off, and wasn't the concept HighTech good
enough? I suspect because since Frankenstein we dream of creating
thinking androids - it would be great to create your own
sentient-conscious machine and converse with it about the meaning of
life. We want to see ourselves in the other and procreate. It makes the
Universe less lonely?

http://home.tiscali.nl/boynalechmipo/passion_of_ac.htm


Man-made Intelligent-like response,
Cheers
r***@msn.com
2005-02-06 13:31:51 UTC
Permalink
http://www.pandorabots.com/pandora/talk?botid=97cfdd9a1e35339a&skin=zed2
Lester Zick
2005-01-06 15:38:29 UTC
Permalink
On 5 Jan 2005 23:06:42 -0800, "joshbachynski"
Post by joshbachynski
Dear Stephen et all,
0) Thank you for the thoughtful responses - those that gave thoughtful
responses :)
1) Too many posts to reply to each one - so I reply to all relevant
responses here in bulk.
2) Let's forget what Descartes said. Although I'd love to vindicate him
here as a few of the critiques I've heard here are simply dead wrong,
that's not why I am here. My argument does not necessarily rest upon
Descartes. I was just mentioning him because I thought he may be
familiar to you. Apparently your professor's incorrect version of him
is familiar to you. (ok that was a flame but we can argue over him
later)
3) Stephen's response was the most thoughtful so I'll focus on yours,
but I believe it makes the same materialist error that is embedded in
most of the other responses anyways. (no offense :)
Stephen I understand the difference between hard and soft AI. It
doesn't matter that I don't have the authority or experience. To
assume because I am not qualified or an admitted expert in AI that it
is therefore impossible, or even improbable, that I have anything
important or correct to say about AI is a fallacy. You wouldn't make
such a silly mistake. So let's get down to what I DO say :)
The problem is IMHO the error of hard AI is still apparent in soft AI.
It is a methodological error. That is the whole thesis of my paper,
which I wrote 2 years ago. Then I offer an alternate method which I
argue is already established and more appropriate to the subject
matter.
Essentially, AI enthusiasts (of any stripe) assume intelligence is
discoverable by scientific method (whether or not they agree on
Turing's particular method or not - they all agree it should be a
scientific / mathematical method, or at least that intelligence is a
material thing).
MY POINT: This has never been demonstrated and is instead assumed by AI
enthusiasts, including everyone here I've read so far. AI scientists
believe that intelligence (whatever this is - they all seem to admit
they don't know what it is although they "know" it must be
physical and they can recreate one, although they admit they don't
know what they are recreating) can be discovered by observation,
understanding the physical nature of the brain, and attempting to make
a re-creatable model of it with synthetic components. Where has it been
demonstrated that intelligence is simply material and that the best
method for discovering what it is is empirical in nature? It hasn't. If
so, explain to me with certainty or even plausibility what
self-consciousness is and why and how exactly it springs forth from
inert material. What is the exact difference 5 seconds before death in
a self-conscious human and 5 seconds after the self-consciousness
appears to be inert or gone? Science alone cannot prove this - they
cannot even frame the question.
So, ignore that for the moment. My point is twofold: #1) you guys have
the wrong method and assume your subject matter is material and #2)
here is a new method. To make any progress we need to agree on my basic
premise first and deal with the possibility of point 1 first. If I'm
wrong at point 1 then there is no need to continue (actually that is
not true but I will concede - it's still possible I'm wrong at
point 1 but there could still be a better method to use unbeknownst to
everyone).
After we agree on the possibility of the method and subject matter
being undemonstrated and therefore perhaps incorrect, then we will go
on to me proving the immaterial nature of the soul and the methods used
to discover that which take us to the queen of the sciences (ie: out of
the sciences and into philosophy).
"So do you have an in principle argument why the area of expert
function cannot be extended by ever closer approximation to
most human "intelligent like" functions?"
In fact I do - you don't know what to approximate to - therefore
you are just guessing! You assume you CAN approximate "ever closer"
- this has not been demonstrated. Most AI enthusiasts even admit they
don't know what it is they are ever closer approximating to!
Intelligent like functions? What, like a thermostat? Give me a break :)
So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
I have to agree with Stephen here. Your observations are well put but
just not original. At least I've been making the same points to this
group for the last year, and I'm sure others have made similar points
in the past.

Regards - Lester
Neil W Rickert
2005-01-06 16:18:13 UTC
Permalink
Responding to joshbachynski:

You came here with an attitude. In effect, you said:

Hi. I am ignorant of this field, and proud of my ignorance.
You guys are stupid, and I'm going to tell you what you
should be doing.

Okay, those are my phrasing, not yours. But that's the impression
you gave. Nobody is going to take you seriously with that attitude.
You need to stop studying from that book "How to make enemies and
alienate people".
Post by joshbachynski
The problem is IMHO the error of hard AI is still apparent in soft AI.
It is a methodological error.
There is that attitude problem again.

If you believe there is a methodological error, detail it.
Post by joshbachynski
That is the whole thesis of my paper,
which I wrote 2 years ago. Then I offer an alternate method which I
argue is already established and more appropriate to the subject
matter.
Your paper does not detail any methodological error, nor does it
propose anything that a scientist would consider a methodology.
Post by joshbachynski
Essentially, AI enthusiasts (of any stripe) assume intelligence is
discoverable by scientific method (whether or not they agree on
Turing's particular method or not - they all agree it should be a
scientific / mathematical method, or at least that intelligence is a
material thing).
And there you go, asserting from ignorance.

Nobody is insisting that you become an AI expert, nor that you agree
with everything that AI people say. But you need to at least find
out what they are saying before you assert that it is wrong.

For the most part, AI people are looking at *behaviors* that are
considered intelligent, and attempting to mechanize those. There is
wide agreement that the term "intelligence" is somewhat confusing,
and that there is no consensus on its meaning. Your initial charge
against "AI enthusiasts" is plainly false.
Post by joshbachynski
MY POINT: This has never been demonstrated and is instead assumed by AI
enthusiasts, including everyone here I've read so far. AI scientists
believe that intelligence (whatever this is - they all seem to admit
they don't know what it is although they "know" it must be
physical and they can recreate one, although they admit they don't
know what they are recreating) can be discovered by observation,
understanding the physical nature of the brain, and attempting to make
a re-creatable model of it with synthetic components. Where has it been
demonstrated that intelligence is simply material and that the best
method for discovering what it is is empirical in nature? It hasn't. If
so, explain to me with certainty or even plausibility what
self-consciousness is and why and how exactly it springs forth from
inert material. What is the exact difference 5 seconds before death in
a self-conscious human and 5 seconds after the self-consciousness
appears to be inert or gone? Science alone cannot prove this - they
cannot even frame the question.
That's mostly a strawman argument. If your philosophy classes
are teaching you how to make strawman arguments, then maybe you
should transfer to a different school.
Post by joshbachynski
So, ignore that for the moment. My point is twofold: #1) you guys have
the wrong method and assume your subject matter is material and #2)
More pontification.
Post by joshbachynski
To make any progress we need to agree on my basic
premise first and deal with the possibility of point 1 first.
Not likely.


Post by joshbachynski
but I will concede - it's still possible I'm wrong at
point 1
Finally a touch of realism. Concentrate on that.

Science can learn from failure, as well as from success. Even if
your thesis is correct, it doesn't follow that science won't learn
from the effort in AI.
Post by joshbachynski
After we agree on the possibility of the method and subject matter
being undemonstrated and therefore perhaps incorrect, then we will go
on to me proving the immaterial nature of the soul and the methods used
to discover that which take us to the queen of the sciences (ie: out of
the sciences and into philosophy).
Now you are talking religion. That doesn't belong here.

Most AI people are interested in creating soulless machines that
behave as if they were intelligent. Whether or not there is such a
thing as a soul, and whether or not it is material, does not enter
into it at all.
Post by joshbachynski
Intelligent like functions? What, like a thermostat? Give me a break :)
There are good reasons that people talk about thermostats and other
simple systems. Until you understand why such discussion is
relevant, you will likely continue to come across as an ignorant
fool.
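To make the thermostat point concrete, here is a minimal sketch (Python,
purely illustrative; the setpoint, temperatures, and toy room model are my
own made-up assumptions) of the sort of feedback loop people have in mind
when they call a thermostat's behaviour "intelligent-like":

    # A fixed rule plus feedback from the environment yields behaviour
    # that looks goal-directed: the temperature settles near the goal.
    def thermostat_step(current_temp, setpoint, heater_on):
        if current_temp < setpoint - 1.0:   # too cold: switch the heater on
            return True
        if current_temp > setpoint + 1.0:   # too warm: switch it off
            return False
        return heater_on                    # inside the band: leave it alone

    temp, heater = 15.0, False
    for _ in range(40):
        heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
        temp += 0.5 if heater else -0.3     # crude model of the room
    print(round(temp, 1))                   # ends up hovering around 20

Nothing in the loop "knows" anything, which is exactly why the example is
useful for discussing where intelligent-like behaviour begins.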
Post by joshbachynski
So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
Some people (including myself) have been arguing against
determinism. As for "intelligence is material" -- I'm not sure I
even know what that means. What would constitute evidence either for
or against that statement?

Most people here assume that behavior is physical, and that
intelligence is something we ascribe to people on account of their
behavior.
Lester Zick
2005-01-06 19:05:56 UTC
Permalink
On Thu, 6 Jan 2005 16:18:13 +0000 (UTC), Neil W Rickert
[. . .]
Post by Neil W Rickert
Post by joshbachynski
So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
Some people (including myself) have been arguing against
determinism. As for "intelligence is material" -- I'm not sure I
even know what that means. What would constitute evidence either for
or against that statement?
Most people here assume that behavior is physical, and that
intelligence is something we ascribe to people on account of their
behavior.
Does this make you a behaviorist, Neil? Or do you see behaviorism
as something other than the analysis of intelligence based strictly on
experimental measures of behavior alone? (I'm not talking about the
behavior of behaviorists in particular but about the general idea.)

Regards - Lester
Neil W Rickert
2005-01-06 23:59:58 UTC
Permalink
Post by Lester Zick
On Thu, 6 Jan 2005 16:18:13 +0000 (UTC), Neil W Rickert
Post by Neil W Rickert
Most people here assume that behavior is physical, and that
intelligence is something we ascribe to people on account of their
behavior.
Does this make you a behaviorist, Neil?
Yes, but not a radical behaviorist.
Joseph Legris
2005-01-07 05:54:50 UTC
Permalink
Post by Neil W Rickert
Post by Lester Zick
On Thu, 6 Jan 2005 16:18:13 +0000 (UTC), Neil W Rickert
Post by Neil W Rickert
Most people here assume that behavior is physical, and that
intelligence is something we ascribe to people on account of their
behavior.
Does this make you a behaviorist, Neil?
Yes, but not a radical behaviorist.
Radical behaviourism takes the view that thinking and other private
events *are* behaviour. What do you think they are?

--
Joe Legris
Neil W Rickert
2005-01-07 16:57:22 UTC
Permalink
Post by Joseph Legris
Post by Neil W Rickert
Post by Lester Zick
Does this make you a behaviorist, Neil?
Yes, but not a radical behaviorist.
Radical behaviourism takes the view that thinking and other private
events *are* behaviour. What do you think they are?
I agree that thinking is behavior. But you cannot leave it there.
One needs to say what kind of behavior, and how a system produces
that behavior. The word "behavior" is rather broad. Science needs
to refine excessively vague concepts.

As for "other private events" -- I don't agree with the RB view that
perception is behavior. It has a behavioral component, but to call
it behavior ignores the important information input aspect.
Sensation is not behavior at all IMO.
Lester Zick
2005-01-07 15:30:21 UTC
Permalink
On Thu, 6 Jan 2005 23:59:58 +0000 (UTC), Neil W Rickert
Post by Neil W Rickert
Post by Lester Zick
On Thu, 6 Jan 2005 16:18:13 +0000 (UTC), Neil W Rickert
Post by Neil W Rickert
Most people here assume that behavior is physical, and that
intelligence is something we ascribe to people on account of their
behavior.
Does this make you a behaviorist, Neil?
Yes, but not a radical behaviorist.
The problem is that if my observation concerning the nature of
intelligence lying in differences between differences etc. is correct,
intelligence can never be seen in intelligent behavior anymore than we
can see primacy in the behavior of prime numbers or the factorization
of specific numbers in the behavior of factorable numbers. We know
it's there but it isn't evident in the behavior of numbers themselves.

Regards - Lester
Wolf Kirchmeir
2005-01-07 16:09:54 UTC
Permalink
Lester Zick wrote:
[...]>
Post by Lester Zick
The problem is that if my observation concerning the nature of
intelligence lying in differences between differences etc. is correct,
intelligence can never be seen in intelligent behavior anymore than we
can see primacy in the behavior of prime numbers or the factorization
of specific numbers in the behavior of factorable numbers. We know
it's there but it isn't evident in the behavior of numbers themselves.
Regards - Lester
How do we know it's there?
Lester Zick
2005-01-07 19:07:05 UTC
Permalink
On Fri, 07 Jan 2005 11:09:54 -0500, Wolf Kirchmeir
Post by Wolf Kirchmeir
[...]>
Post by Lester Zick
The problem is that if my observation concerning the nature of
intelligence lying in differences between differences etc. is correct,
intelligence can never be seen in intelligent behavior anymore than we
can see primacy in the behavior of prime numbers or the factorization
of specific numbers in the behavior of factorable numbers. We know
it's there but it isn't evident in the behavior of numbers themselves.
Regards - Lester
How do we know it's there?
We know the definition of prime and factorable numbers. That's
what ought to be evident in the patterning of those numbers. We just can't see
any pattern, any behavioral evidence of the definition.

Regards - Lester
Neil W Rickert
2005-01-07 17:02:57 UTC
Permalink
Post by Lester Zick
Post by Neil W Rickert
Post by Lester Zick
Does this make you a behaviorist, Neil?
Yes, but not a radical behaviorist.
The problem is that if my observation concerning the nature of
intelligence lying in differences between differences etc. is correct,
intelligence can never be seen in intelligent behavior anymore than we
can see primacy in the behavior of prime numbers or the factorization
of specific numbers in the behavior of factorable numbers. We know
it's there but it isn't evident in the behavior of numbers themselves.
The meaning of "behavior" in "the behavior of factorable numbers" is
quite different from the ordinary meaning. In terms of the ordinary
meaning, numbers are quite inert and therefore do not behave at all.
Lester Zick
2005-01-07 19:11:14 UTC
Permalink
On Fri, 7 Jan 2005 17:02:57 +0000 (UTC), Neil W Rickert
Post by Neil W Rickert
Post by Lester Zick
Post by Neil W Rickert
Post by Lester Zick
Does this make you a behaviorist, Neil?
Yes, but not a radical behaviorist.
The problem is that if my observation concerning the nature of
intelligence lying in differences between differences etc. is correct,
intelligence can never be seen in intelligent behavior anymore than we
can see primacy in the behavior of prime numbers or the factorization
of specific numbers in the behavior of factorable numbers. We know
it's there but it isn't evident in the behavior of numbers themselves.
The meaning of "behavior" in "the behavior of factorable numbers" is
quite different from the ordinary meaning. In terms of the ordinary
meaning, numbers are quite inert and therefore do not behave at all.
I wasn't referring to the numbers themselves but to any pattern in
their occurrence. We only have the definition for factorability and
primacy. We can't see the definition in the patterning or occurrence
of the numbers. That's what I was referring to as their behavior. We
could examine that behavior til the cows come home and never see
evidence of their definitions.
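To put the analogy in concrete terms: the primality test below is the
"definition", and the bare sequence it produces is the "behavior". Staring
at the sequence, or at the gaps between its terms, does not hand the
definition back to you. A rough Python sketch, for illustration only:

    # The definition: a predicate we can state and apply.
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    # The behavior: the bare pattern of occurrences.
    primes = [n for n in range(2, 60) if is_prime(n)]
    gaps = [b - a for a, b in zip(primes, primes[1:])]
    print(primes)   # 2, 3, 5, 7, 11, ...
    print(gaps)     # 1, 2, 2, 4, ... -- no rule is visible in the gaps alone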

Regards - Lester
Wolf Kirchmeir
2005-01-06 17:00:39 UTC
Permalink
joshbachynski wrote:
[...]
Post by joshbachynski
After we agree on the possibility of the method and subject matter
being undemonstrated and therefore perhaps incorrect, then we will go
on to me proving the immaterial nature of the soul and the methods used
to discover that which take us to the queen of the sciences (ie: out of
the sciences and into philosophy).
Ah, yes, you're a phil major. I was one, too, once. I found both poetry
and mysticism as useful as philosophy, and a lot less pompous.
Post by joshbachynski
"So do you have an in principle argument why the area of expert
function cannot be extended by ever closer approximation to
most human "intelligent like" functions?"
In fact I do - you don't know what to approximate to - therefore
you are just guessing! You assume you CAN approximate "ever closer"
- this has not been demonstrated. Most AI enthusiasts even admit they
don't know what it is they are ever closer approximating to!
Intelligent like functions? What, like a thermostat? Give me a break :)
Oh dear, you just don't know what you're talking about. You've
hypostatised "intelligence", and you haven't noticed that AI people
don't do that. They talk about "intelligent behaviour", ie, behaviour
that would move us to ascribe intelligence to humans. They try to
emulate this behaviour, not replicate the brain. Your assumptions about
the goals of AI (hard or soft) are simply wrong.
Post by joshbachynski
So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
This comment betrays a profound (and unfortunately widespread)
misapprehension of what engineering is about. Engineering is of course
much harder than philosophy, but if you can get into 1st year
engineering school, I advise you to do so. You will learn a lot about
how engineers actually think about the problems they try to solve. I
assure you, "Knowing what you are trying to approximate" means something
quite different from what you apparently think it means. (Reread
Stephen's post for examples.) And if you don't want to do engineering,
read a good history of technology. You will find that "knowing what you
want to approximate" in your apparent meaning as often as not came
_after_ successful machines were built. The engineering clarified the
questions, you see.

As for whether humans are "strictly determined" or not, that question
can't be answered without defining what you mean by it. Kindly tell us
what you mean by "strictly determined." Not that it's really relevant to
your argument, since an engineer doesn't care whether some system is
strictly determined or not. (S)he just wants to understand enough to
emulate it.
JPL Verhey
2005-01-08 01:59:53 UTC
Permalink
"joshbachynski" <***@yahoo.com> wrote in message news:***@c13g2000cwb.googlegroups.com...
...
Post by joshbachynski
After we agree on the possibility of the method and subject matter
being undemonstrated and therefore perhaps incorrect, then we will go
on to me proving the immaterial nature of the soul and the methods used
to discover that which take us to the queen of the sciences (ie: out of
the sciences and into philosophy).
Well The Queen to me is Poetry, not philosophy! Philosophy easily
becomes chewing on your *own* gums.

Immaterial vs Material

When is somebody a materialist? When I'm in a coma or dead, I do believe
that the moon still orbits planet earth etc. That there is loads of
lively stuff left and going on - Life. I call that stuff material stuff,
or physical reality. Do I claim to know all there is the know about that
stuff ? No way, and no sane scientist (mad ones do exist, granted) will
claim so. Can we catch Life and put it in the cage of a few equations,
or some TOE? Maybe some like to believe that... the Messianics.

Do many scientists and philosophers still torture their brains with
trying to fit conscious experience into the scientific picture? Many
do..but mostly in vain, while others brush it all aside with some
Houdini-type disappear trick, the ultimate art of Self-Crucifixion.

The True Materialist

A true Materialist is not deterred by all this.. on the contrary. S/he
only will with even more curiosity and awe want to understand or -taste-
this magic stuff, the material of existence of which we understand so
little. S/he understands that the way we are trained to perceive
physical reality as dumb clueless pingpong balls, behaving in magic
fields and forces following certain deterministic "Laws of Nature"... is
obviously a sad, or just a very comic abstraction of the true and
indefinitely subtle nature of things known and unknown.

Mind = brain activity

So the fact of the matter is that our hands and what we hold in them
matter, but that much and perhaps most of it lies beyond our comprehension.
That certain brain activity is probably IDENTICAL with conscious
experience is hence not a very worrisome idea. Unless you believe the
"naive materialists" who think that what they see of, and have learned about,
the bodybrain (and physical reality in general) represents fully what
the (body)brain and physical reality "really are", the activity in and
of itself. Indeed, with such naivety rampant, one rather must look for
something "im-material" to fill in and account for the gaps. But a true
materialist doesn't fall into this trap.. and will continue to be amazed
by the creational possibilities of star dust... that in cases can even
be sentient, conscious. Isn't matter miraculous? Better believe it!

Souls?

Maybe we are souls inhabiting a body, even changing bodies through reincarnation.
If so, so what? maybe it [soul] will be a "trivial" thingie.. a pattern
that behaves like a parasite entering a host and consuming it. But that
"parasite-soul" will be material as well, you bet!! Maybe it transforms
into a new pattern after the host died, flies away like an "immaterial
ghost" or maybe it dies with the host. Maybe the symbiosis between the
parasite-soul pattern and its host creates conscious experience under
conditions. You would then have an entirely materialistic "mind-body"
dualism. Maybe parasite-souls are patterns that can persist over very
long times and distances before they dissolve.

RestSum:

I don't know what is Immaterial, so maybe Immaterial is just another
word for "the unknown". Knowledge gaps. We cannot think about gaps that
are really empty..so at least we can give it some name: God,
Immaterial.. and so on.
Post by joshbachynski
"So do you have an in principle argument why the area of expert
function cannot be extended by ever closer approximation to
most human "intelligent like" functions?"
In fact I do - you don't know what to approximate to - therefore
you are just guessing! You assume you CAN approximate "ever closer"
- this has not been demonstrated. Most AI enthusiasts even admit they
don't know what it is they are ever closer approximating to!
Intelligent like functions? What, like a thermostat? Give me a break :)
So in principle yes you could do that, if you knew what to approximate
to, which you don't. You assume intelligence is material and humans
are strictly determined. These are both unproven propositions that
underlie your whole endeavor.
j***@ixpres.com
2005-01-07 01:20:45 UTC
Permalink
Post by joshbachynski
Hello,
I posted this essay on my blog sometime ago. It's called "Why
Artificial Intelligence can Never Get it Right". I thought some people
here may (cough cough troll cough) like to read it and post their views
here, or on my blog.
josh
--
http://thymos.blogspot.com
Josh wrote:
"All speculation aside, however, there is one final component of true
intelligence that AI has missed and cannot succeed in bringing into
existence using its current methods, because this thing defies all
current methods to study it: human free will. Volition is the essential
difference that computer systems lack, they can only do what they are
told."

That is nonsense.

Jim Bromer
Stephen Harris
2005-01-07 09:24:05 UTC
Permalink
Post by joshbachynski
Post by joshbachynski
Hello,
I posted this essay on my blog sometime ago. It's called "Why
Artificial Intelligence can Never Get it Right". I thought some people
here may (cough cough troll cough) like to read it and post their views
here, or on my blog.
josh
--
http://thymos.blogspot.com
"All speculation aside, however, there is one final component of true
intelligence that AI has missed and cannot succeed in bringing into
existence using its current methods, because this thing defies all
current methods to study it: human free will. Volition is the essential
difference that computer systems lack, they can only do what they are
told."
That is nonsense.
Jim Bromer
Do you think volition currently exists? Have you manifested
something like free will in your current project?

I think the proof of the pudding is in the eating,
Stephen
j***@ixpres.com
2005-01-07 11:58:34 UTC
Permalink
You have one standard for your own speculations, and another for mine.
What does that pudding prove?

There is no reason why the computer cannot have an imagination. If a
computer could then be programmed to learn, it could use its
imagination to discover new ideas that the programmer and the teacher
did not specifically plan for ahead of time. The difficulty for me is
to find the fine line between control and freedom where I can be
confident that the program can handle the various kinds of situations
that I cannot completely foresee.

Studies have shown that about 80% of a programmer's time is spent
debugging programs. It should be obvious, from this fact alone, that
computers do not just do what they are told.

Josh's statement that computers just do what they are told, even
taken figuratively, is nonsense. If computers could just do what they
were told then I would just tell mine to be smarter and show some
independence of spirit. There is something wrong with Josh's
statement. But if Josh did make an error, where did the error come
from?

Jim Bromer
Stephen Harris
2005-01-07 13:44:37 UTC
Permalink
Post by j***@ixpres.com
You have one standard for your own speculations, and another for mine.
What does that pudding prove?
There is the adage, if wishes were horses beggars would ride.
Post by j***@ixpres.com
There is no reason why the computer cannot have an imagination. If a
computer could then be programmed to learn, it could use its
imagination to discover new ideas that the programmer and the teacher
did not specifically plan for ahead of time. The difficulty for me is
to find the fine line between control and freedom where I can be
confident that the program can handle the various kinds of situations
that I cannot completely foresee.
Well, Minsky agrees with you and begins his defense in:
http://www.ai.mit.edu/people/minsky/papers/ComputersCantThink.txt
WHY PEOPLE THINK COMPUTERS CAN'T

Marvin Minsky, MIT

First published in AI Magazine, vol. 3 no. 4, Fall 1982. Reprinted in
Technology Review, Nov/Dec 1983, and in The Computer Culture,
(Donnelly, Ed.) Associated Univ. Presses, Cranbury NJ, 1985

Most people think computers will never be able to think. That is, really
think. Not now or ever. To be sure, most people also agree that computers
can do many things that a person would have to be thinking to do. Then how
could a machine seem to think but not actually think? Well, setting aside
the question of what thinking actually is, I think that most of us would
answer that by saying that in these cases, what the computer is doing is
merely a superficial imitation of human intelligence.
It has been designed to obey certain simple commands, and then it has been
provided with programs composed of those commands. Because
of this, the computer has to obey those commands, but without any idea
of what's happening."
Post by j***@ixpres.com
Studies have shown that about 80% of a programmers time is spent
debugging programs. It should be obvious, from this fact alone, that
computers do not just do what they are told.
Isn't the reason programs must be tested that they often don't
do what the programmer expects or hopes for them to do? Testing
the program to see that it accomplishes its goal is proof of concept.
Post by j***@ixpres.com
Josh's statement that computers just do what they are told, even
taken figuratively, is nonsense. If computers could just do what they
were told then I would just tell mine to be smarter and show some
independence of spirit.
I think you are using the literal sense of 'computers do just what they
are told' when you project you could just tell your computer to be
smarter. The figurative sense of 'told' is executing program instructions.
It doesn't work like speech recognition software which is literal.
Post by j***@ixpres.com
There is something wrong with Josh's
statement. But if Josh did make an error, where did the error come
from?
An unlikely cosmic ray scrambling his circuits and introducing random
behavior. I don't know. But I do know most people are going to require
more than a philosophical argument to convince them that some program
is creative or self-aware. Most people use reality to inform their views on
philosophy, not philosophy to bend reality to their preferences. Until a
program demonstrates creativity, it will never be believed as a theory;
that is wishful thinking.
Post by j***@ixpres.com
Jim Bromer
I take it that you are obsessed with making this thing work and it is
driving you a bit crazy? That is how such things happen to me.
Nice to hear from you again. This issue doesn't seem too important,
at least to me, because I don't think it can be settled by conversation.
I don't think the paper I quoted by Minsky was his best paper.

So don't let me keep you from your work :-)
Stephen
Lester Zick
2005-01-07 15:42:21 UTC
Permalink
Post by j***@ixpres.com
You have one standard for your own speculations, and another for mine.
What does that pudding prove?
There is no reason why the computer cannot have an imagination. If a
computer could then be programmed to learn, it could use its
imagination to discover new ideas that the programmer and the teacher
did not specifically plan for ahead of time. The difficulty for me is
to find the fine line between control and freedom where I can be
confident that the program can handle the various kinds of situations
that I cannot completely foresee.
Studies have shown that about 80% of a programmers time is spent
debugging programs. It should be obvious, from this fact alone, that
computers do not just do what they are told.
This is rich. Computers don't do just what they are told? What would
you call it? Creative intelligence? The last time I checked computers
did exactly what they were told and programmers spend 80% of their
time debugging because they don't understand what they
told the computers to do.
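A two-line illustration of the point, if it helps (Python; the bug is
deliberate and the function is just a made-up example): the machine carries
out the instruction exactly as written, and the surprise lives entirely in
the programmer's head.

    def average(xs):
        return sum(xs) // len(xs)     # programmer meant ordinary division

    print(average([1, 2, 2]))         # prints 1, exactly as instructed; 5/3 was intended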
Post by j***@ixpres.com
Josh's statement that computers just do what they are told, even
taken figuratively, is nonsense. If computers could just do what they
were told then I would just tell mine to be smarter and show some
independence of spirit. There is something wrong with Josh's
statement. But if Josh did make an error, where did the error come
from?
Apparently you would have us believe it came from the computer
instead of Josh's brain. This isn't so much regressive materialism as
just plain silly.

Regards - Lester
j***@ixpres.com
2005-01-08 15:04:53 UTC
Permalink
I do not understand what you are saying, but I certainly do not think
that Josh is a computer!

All I was saying was that the idea that computers just do what they are
told is nonsense. I actually enjoyed some of Josh's article, but on
this point he was wrong.

We can say that since people are composed of matter they are strictly
deterministic, because they are completely subject to the laws of
physics. But this is not a valid insight for a number of reasons. One
is that we do not fully understand the laws of physics so this argument
cannot be issued with any kind of assurance. But another reason why we
are not just doing what physics makes us do is that we are able to make
choices within that realm of material reality.

The apparent reality is that we are able to make independent decisions.
Although we are limited by our material existence, we are still
capable of exploring novel ways of seeing and moving about the world.
The argument that we are only reacting to our conditioning, or that we
are only reacting to the interactions of matter is a kind of
categorical error. Some people cannot see the forest for the trees.

It is often necessary to abstract information and to explore it
carefully in partial isolation. But it is also necessary to
reintegrate that information into the greater systems as we can
appreciate them. This process is imperfect and it leads to
characteristic simplifications. Some of these simplifications,
especially when accompanied by the decision to ignore alternative
views, are the basis of many of the endless and tedious arguments over
fundamentals. But by exploring the synthesis of ideas we can make the
conscious decision to move past the arguments over the fundamentals.
However, this is not always easy. Following abstraction and synthesis,
we have to then try to reintegrate our new understanding into
pre-existing knowledge.

What would pure absolute indeterminism be? Without our familiar
knowledge of chains of related and deterministic events, the idea of
absolute indeterminism would be impossible to conceive. If you can
appreciate this argument, then you can see that non-deterministic
systems or non-deterministic events must be interdependent on
deterministic systems. Determinism and non-determinism are necessary
but they are also necessarily relative. Although we can abstract them
and think about them in isolation, there is no hard evidence that such
simplistic abstractions actually can exist as separate realities. And
regardless of the ultimate nature of the universe our understanding has
to be built around incomplete information. We need to see determinism
and indeterminism as integrated occurrences of the greater natural
universe because they are necessarily integrated in our mind's
understanding.

This insight into the greater integration of two opposing abstractions
of material behavior has a value greater than merely breaking a tedious
logjam over polarized fundamentals. It is essential to the creation of
better systems of AI. Regardless of the ultimate nature of the
universe, we are limited creatures and we have to rely on systems of
partial understanding. Our AI programs therefore have to be able to
develop insights that will create extensive archipelagos of meaningful
associations and chains of deterministic occurrences. But in order to
further integrate these islands of understanding, our AI programs will
also have to be able to deal with the surrounding oceans of vaguer
knowledge that will produce an expanse of non-deterministic relations.
Jim Bromer
Lester Zick
2005-01-08 16:39:52 UTC
Permalink
Post by j***@ixpres.com
I do not understand what you are saying, but I certainly do not think
that Josh is a computer.!
All I was saying was that the idea that computers just do what they are
told is nonsense. I actually enjoyed some of Josh's article, but on
this point he was wrong.
And that's the point I was commenting on. Your idea that computers
don't do exactly what they are told is nonsense.
Post by j***@ixpres.com
We can say that since people are composed of matter they are strictly
deterministic, because they are completely subject to the laws of
physics. But this is not a valid insight for a number of reasons. One
is that we do not fully understand the laws of physics so this argument
cannot be issued with any kind of assurance. But another reason why we
are not just doing what physics makes us do is that we are able to make
choices within that realm of material reality.
The apparent reality is that we are able to make independent decisions.
Although we are limited by our material existence, we are still
capable of exploring novel ways of seeing and moving about the world.
The argument that we are only reacting to our conditioning, or that we
are only reacting to the interactions of matter is a kind of
categorical error. Some people cannot see the forest for the trees.
It is often necessary to abstract information and to explore it
carefully in partial isolation. But it is also necessary to
reintegrate that information into the greater systems as we can
appreciate them. This process is imperfect and it leads to
characteristic simplifications. Some of these simplifications,
especially when accompanied by the decision to ignore alternative
views, are the basis of many of the endless and tedious arguments over
fundamentals. But by exploring the synthesis of ideas we can make the
conscious decision to move past the arguments over the fundamentals.
However, this is not always easy. Following abstraction and synthesis,
we have to then try to reintegrate our new understanding into
pre-existing knowledge.
What would pure absolute indeterminism be? Without our familiar
knowledge of chains of related and deterministic events, the idea of
absolute indeterminism would be impossible to conceive. If you can
appreciate this argument, then you can see that non-deterministic
systems or non-deterministic events must be interdependent on
deterministic systems. Determinism and non-determinism are necessary
but they are also necessarily relative. Although we can abstract them
and think about them in isolation, there is no hard evidence that such
simplistic abstractions actually can exist as separate realities. And
regardless of the ultimate nature of the universe our understanding has
to be built around incomplete information. We need to see determinism
and indeterminism as integrated occurrences of the greater natural
universe because they are necessarily integrated in our mind's
understanding.
This insight into the greater integration of two opposing abstractions
of material behavior has a value greater than merely breaking a tedious
logjam over polarized fundamentals. It is essential to the creation of
better systems of AI. Regardless of the ultimate nature of the
universe, we are limited creatures and we have to rely on systems of
partial understanding. Our AI programs therefore have to be able to
develop insights that will create extensive archipelagos of meaningful
associations and chains of deterministic occurrences. But in order to
further integrate these islands of understanding, our AI programs will
also have to be able to deal with the surrounding oceans of vaguer
knowledge that will produce an expanse of non-deterministic relations.
Yeah, I have no idea what this word salad has to do with the
contention that computers don't do exactly what they are told.

Regards - Lester
j***@ixpres.com
2005-01-09 13:56:36 UTC
Permalink
Although I am now paraphrasing, I believe that the essence of Josh's
argument was that computers cannot think because they cannot have
volition of their own since they are only doing what they are told to
do. I said that this argument of his was nonsense.

Again, I am paraphrasing, but my impression was that Stephen Harris
essentially said that my philosophical criticism was non-scientific.

I, in essence, then said that if a computer could just do what it was
told, it would have to be intelligent because if that was the case I
could just tell mine to smarten up and show some independence of
spirit.

Since I pointed out that Josh's statement that computers just do what
they are told must have been figurative, Stephen pointed out that my
reply was directed toward the literal interpretation. My recollection
(I might be incorrect) was that Stephen then said that what Josh meant
was that computers just follow separate instructions.

I pointed out that was not true. They are not *just* following
separate instructions.

Lester's comment implied that he did not understand and did not agree
when I claimed that computers were not just following instructions.

First of all, they are not *just* following instructions any more than
they are *just* writing to memory. Now this explanation of my
criticism would be quibbling if it were not intrinsically related to a
more significant issue.

Let me interrupt myself for a minute. I am not interested in the
endless debate of first principles, the fuel for the
I-was-right-and-you-were-wrong arguments that often takes place in
this, and indeed most, philosophical forums. I am discussing this with
you for two reasons. One, I feel that I have a novel if not completely
new view on this subject. I am not arguing determinism vs
non-determinism. Secondly, I believe that my novel view is significant
to the creation of more effective AI.

Let me repeat something for you Lester: I am not arguing determinism
vs non-determinism. I apologize for being patronizing, but I want to
make sure that you get that message.


I pointed out that I do not believe that absolute non-determinism could
be a reality. The only way we can appreciate non-determinism is in
reference to what appears to be the relative stability of deterministic
events. The world is different than it was yesterday, but its present
existence is due to certain deterministic events that link its changes.
Absolute non-determinism does not exist as far as we can tell; it is
an abstraction.

This argument, if reasonable, suggests that non-deterministic events
should be properly seen as coexisting with determinant events. This is
not the zillionth replay of the non-deterministic vs determinism
argument. It says something novel if not completely new. By doing
the work of trying to integrate our appreciation of abstract concepts
into more sophisticated theories of reality, we might actually learn
something that we did not appreciate last week. Perhaps observations
of indeterminable relations can be best appreciated as something that
necessarily coexists with determinant relations.

If this is true, then the argument that computers are different from
brains because they are only deterministically following instructions
is weakened.

Josh did not seem like a
quantum-mechanics-is-THE-fundamental-causative-force-in-the-universe
kind of guy to me, so he should be able to appreciate the
following argument.

If it turns out that brains are deterministically dependent on the
natural forces of atoms and molecules, then the argument that they are
capable of thought whereas computers cannot because computers are only
deterministically dependent on their instructions is not sensible.

I know that this argument is not what you want to read, Lester.
I can appreciate it. I am sorry that it is not clearer. But it lays
the groundwork for challenging the view that the computer cannot think
because it is only following instructions. It also lays the groundwork
to explain why following instructions by some kinds of devices may
be a sign of intelligence. There are different kinds of instructions
and different kinds of devices.
I will continue this in another message.

Jim Bromer
Neil W Rickert
2005-01-09 15:13:49 UTC
Permalink
Post by j***@ixpres.com
Although I am now paraphrasing, I believe that the essence of Josh's
argument was that computers cannot think because they cannot have
volition of their own since they are only doing what they are told to
do. I said that this argument of his was nonsense.
It was my impression that his "argument" is that thinking requires
an immaterial soul. I put "argument" in quotes, because he has
never presented any argument. He only makes assertions without any
attempt to provide an evidential basis for them.

In a recent post he began one sentence with "The general ignorance
and bias of science notwithstanding,".

A killfile entry might be appropriate.
j***@ixpres.com
2005-01-09 15:36:26 UTC
Permalink
I want to get back to work, so I will try to make this quick.

I have said that computers do not *just* follow instructions any more
than they *just* write data to memory. However, in the context of this
discussion this may not seem like an important issue. For example, I
might have been arguing against the choice of words used in the
expression. If you said something like, the behavior of computers are
governed by a finite set of instructions, I might eventually say, yeah,
yeah, ok, yeah, something like that, uh, I never actually, um, never
said, I never actually said that they weren't, uhm what I was uh saying
was that they were uh you know ...

But there is a more subtle issue here. Why did Josh's argument turn
out to be so very wrong? Why did your comment that computers are just
following instructions fail to hit the nail on the head? Was it *just*
because of conflicts brought on by semantic ambiguities? I don't think
so.

I think that the concept of pure indeterminism can only exist as an
imaginary fantasy, and maybe not even there. On the other hand we have
to deal with indeterminant relations all of the time. It seems that
deterministic relations and non-deterministic relations must coexist
within our appreciation of the natural and imaginary universes.

The existence of stochastic methods of probability shows that there are
formal scientific methods of interpreting constrained uncertainty.
That is, there are formal logical methods we can use when we can
predict the probability of the occurrence of an event even when we
cannot predict whether or not it will occur in some trial.
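A toy example of that kind of constrained uncertainty (Python; the 0.3
probability and the trial count are arbitrary assumptions of mine): no
single trial is predictable, but the long-run frequency is.

    import random

    random.seed(1)                                 # seeded only for repeatability
    trials = [random.random() < 0.3 for _ in range(100000)]
    print(trials[:10])                             # individual outcomes look arbitrary
    print(sum(trials) / len(trials))               # the frequency sits close to 0.3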

Again, this suggests that there may be some good reasons to readjust
your concept of the relations of determinism and non-determinism.
Rather than always seeing it as a determinism vs non-determinism fight to
the death over principles, there is a good reason to -AT THE VERY
LEAST- question whether determinism and non-determinism can coexist as
a natural order of things.

Perhaps the expression "just following instructions" has been used to
represent both intelligence and non-intelligence because there are
details of reacting to instructions that are not fully appreciated.

Some instructions allow a branch based on conditions. Some
instructions allow the decisions to be based on symbolic information
that is not known when the device is being programmed. Some
instructions allow information to be input into a symbolic conditional
branching device from a variety of independent sources of information.
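For instance (a Python sketch with made-up symbols of my own): the rule is
fixed when the program is written, but the symbol it branches on arrives
later, from a source the programmer never sees.

    import sys

    def react(symbol):
        # The conditional structure is fixed at programming time...
        if symbol == "threat":
            return "withdraw"
        if symbol == "food":
            return "approach"
        return "ignore"

    # ...but the symbols it branches on arrive at run time, from an
    # independent source (a user, a sensor, another program).
    for line in sys.stdin:
        print(react(line.strip()))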

If you are still there I will try to continue this tomorrow.

Jim Bromer
Lester Zick
2005-01-09 16:25:03 UTC
Permalink
Post by j***@ixpres.com
Although I am now paraphrasing, I believe that the essence of Josh's
argument was that computers cannot think because they cannot have
volition of their own since they are only doing what they are told to
do. I said that this argument of his was nonsense.
And I strongly disagree.
Post by j***@ixpres.com
Again, I am paraphrasing, but my impression was that Stephen Harris
essentially said that my philosophical criticism was non-scientific.
And I agree with Stephen.
Post by j***@ixpres.com
I, in essence, then said that if a computer could just do what it was
told, it would have to be intelligent because if that was the case I
could just tell mine to smarten up and show some independence of
spirit.
Saying a machine can only do what it is told isn't the same as saying
it can do anything it's told, Jim. That's an amazingly unscientific
perspective coming from what I assume is a scientific orientation.
Post by j***@ixpres.com
Since I pointed out that Josh' statement that computers just do what
they told must have been a figurative, Stephen pointed out that my
reply was directed toward the literal interpretation. My recollection
(I might be incorrect) was that Stephen then said that what Josh meant
was that computers just follow separate instructions.
I pointed out that was not true. They are not *just* following
separate instructions.
Lester's comment implied that he did not understand and did not agree
when I claimed that computers were not just following instructions.
True. Computers only follow instructions they're given. This doesn't
mean they can follow any instructions they're given, and I see no
reason at all for this idea that they could.
Post by j***@ixpres.com
First of all, they are not *just* following instructions any more than
they are *just* writing to memory. Now this explanation of my
criticism would be quibbling if it were not intrinsically related to a
more significant issue.
Let me interrupt myself for a minute. I am not interested in the
endless debate of first principals, the fuel for the
I-was-right-and-you-were-wrong arguments that often takes place in
this, and indeed most, philosophical forums. I am discussing this with
you for two reasons. One, I feel that I have a novel if not completely
new view on this subject. I am not arguing determinism vs
non-determinism. Secondly, I believe that my novel view is significant
to the creation of more effective AI.
Well, I'm not arguing the endless circle of ideas either, Jim. I'm
only commenting on this one aspect, to wit, whether computers only do
what they're told. If it were otherwise, by the way, programmers
couldn't debug them.
Post by j***@ixpres.com
Let me repeat something for you Lester: I am not arguing determinism
vs non-determinism. I apologize for being patronizing, but I want to
make sure that you get that message.
Okay. You don't have to worry about appearing patronizing. But your
arguments related to improved AI need to be examined more closely.
Post by j***@ixpres.com
I pointed out that I do not believe that absolute non-determinism could
be a reality. The only way we can appreciate non-determinism is in
reference to what appears to be the relative stability of deterministic
events. The world is different than it was yesterday, but its present
existence is due to certain deterministic events that link its changes.
Absolute non-determinism does not exist as far as we can tell; it is
an abstraction.
Okay, provisionally.
Post by j***@ixpres.com
This argument, if reasonable, suggests that non-deterministic events
should be properly seen as coexisting with determinant events. This is
not the zillionth replay of the non-deterministic vs determinism
argument. It says something novel if not completely new. By doing
the work of trying to integrate our appreciation of abstract concepts
into more sophisticated theories of reality, we might actually learn
something that we did not appreciate last week. Perhaps observations
of indeterminable relations can be best appreciated as something that
necessarily coexists with determinant relations.
It sounds like you are heading toward quantum indeterminacy here for
the explanation of intelligence, Jim.
Post by j***@ixpres.com
If this is true, then the argument that computers are different from
brains because they are only deterministically following instructions
is weakened.
I don't agree mainly because you haven't shown any indeterminacy in
the operation of computers.
Post by j***@ixpres.com
Josh did not seem like a
quantum-mechanics-is-THE-fundamental-causative-force-in-the
universe-kind-of-guy to me, he should be able to appreciate the
following argument.
If it turns out that brains are deterministically dependent on the
natural forces of atoms and molecules, then the argument that they are
capable of thought whereas computers cannot because computers are only
deterministically dependent on their instructions is not sensible.
Well, I don't know Josh's position on the issue of atoms and molecules
because he is being singularly coy about clarifying it if he has a
position. I basically agree that we're all dependent on atoms and
molecules. But I don't agree that the kind of material interactions
studied in physics or quantum physics are the cause of intelligence. It
sounds like you're arguing that because we're all made up of common
things, manifestations of intelligence in us justify the assumption of
intelligence in computers. That is an unsound argument.
Post by j***@ixpres.com
I know that this argument is not what you Lester want to read Lester.
I can appreciate it. I am sorry that it is not clearer. But it lays
the ground work for challenging the view that the computer cannot think
because it is only following instructions. It also lays the ground
work to explain why following instructions by some kinds of devices may
be a sign of intelligence. There are different kinds of instructions
and different kinds of devices.
I will continue this in another message.
Well, Jim, don't worry about what I want to read or not read. I tend
to want to read sound arguments and not read unsound arguments. My
only interest in this connection is with your comment that computers
don't do just what they're told because they can't do anything they're
told. As noted above this rests on your idea that "doing what" means
"doing anything" which are completely different concepts.

Regards - Lester
joshbachynski
2005-01-09 19:48:28 UTC
Permalink
Lester,

I'll try to respond to all of the comments I can in this post:

Lester said:
"Well, I don't know Josh's position on the issue of atoms and molecules
because he is being singularly coy about clarifying it if he has a
position. I basically agree that we're all dependent on atoms and
molecules. But I don't agree that the kind of material interactions
studied in physics or quantum physics as the cause of intelligence. It
sounds like you're arguing that because we're all made up of common
things, manifestations of intelligence in us justify the assumption of
intelligence in computers. That is an unsound argument."

I suppose I am being coy :) but it was with good reason. I don't seem
to be getting through to you guys - my intention was not to posit a
positive thesis at all, but show a deficiency in scientific method with
regards to AI. No one seems to want to accept / talk about that. Fair
enough - I can't force someone to talk about something they don't want
to :)

"I ask you simply what the better way is? Then in reply above you
simply reiterate all your objections to everyone elses ways. If you
can determine intelligence, then what is it? If not I suggest you stop
claiming you can determine intelligence and go back to critiquing the
behavioral methodologies of others, with which I don't necessarily
disagree but find unoriginal. The only thing of interest you've said
so far is that the soul or essence of intelligence is immaterial, with
which, once again, I agree but don't find especially original in the
absence of some specific mechanical idea of what immaterial means."

Intelligible method is the method I am speaking of to determine
intelligence. I did explain it a bit above. I cannot give you a full
demonstration: a) it would take far too long to write down my assertions
here in order, b) my point wasn't to vindicate it here but to posit its
plausibility in the light that science is not the best way to proceed
in this subject matter. Seeing as no one will even acknowledge those
arguments of mine, the entire endeavour is all for naught I suppose. c)
The intelligible method involved meditating (being aware of and
recording or memorizing) upon the essence of one operating
intelligence - their thoughts, how they relate to each other, what
different kinds of mental processes one has, how we proceed in
discovery from the indubitable, to the certain, to that which we cannot
prove but we can be sure of, etc, until we get to mere speculation. But
it is mere speculation with one advantage over science - there are no
faulty instruments or senses in the way as a mediator - we have direct
access to the thing being studied - our own mental processes and
thoughts.

It would be quite a post to give you a written example of this, and a
written transcription of my meditations may not suffice to convince you
- Descartes was sufficient although incomplete and often misunderstood;
his meditations, or those of Plotinus, may serve you better.

This is why I am being coy - I was never really intending to posit any
new subject matter or to exemplify the new method here (as this is simply
not the proper forum - no pun intended), only to show that a new method and
understanding of the subject matter may be called for. Please tell me
you understand what I am trying to say, whether or not you agree with
my reasoning.

">Although I am now paraphrasing, I believe that the essence of Josh's
Post by j***@ixpres.com
argument was that computers cannot think because they cannot have
volition of their own since they are only doing what they are told to
do. I said that this argument of his was nonsense."
No, that's not what I was saying at all. Which is why I didn't really
respond to any of that thread, which assumed I had said something I had
not :)

If you want my two cents on it, however, I'd say yes, computers only do
what they are told to do - they follow the instructions they are given
- period. It's more like the instructions cause them to do what the
instruction says. They can generate new instructions to follow, but not
at random, and they have to be told to generate new instructions - by a
human or by programming - essentially, ALL by a human. Deep Blue didn't
beat Kasparov - hundreds of programmers beat Kasparov.
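To make that concrete, here is a toy Python sketch (purely illustrative,
not any real system): the program generates a new rule and then follows
it, but the generating step - even the pseudo-random choice - is itself
an instruction a programmer wrote.

import random

def generate_instruction():
    # the machine "invents" a rule, but this inventing was itself instructed
    op = random.choice(["+", "*"])
    return "lambda x, y: x %s y" % op

rule_source = generate_instruction()   # e.g. "lambda x, y: x + y"
rule = eval(rule_source)               # the machine now follows its own rule
print(rule_source, "->", rule(3, 4))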

IMO, random generation of new instructions to follow is not required for
AI - not even the human mind works that way.

I cannot prove this; I can only theorize. As I said above, it would be a
very long and hard-to-understand post if I were to explain my whole
theory of mind and the intelligible method, and I strongly doubt any of
you would understand or accept it. But if you really want to read it
then I will type it. It will be good practice.
josh
--
http://thymos.blogspot.com
Lester Zick
2005-01-09 20:26:59 UTC
Permalink
On 9 Jan 2005 11:48:28 -0800, "joshbachynski"
Post by joshbachynski
Lester,
"Well, I don't know Josh's position on the issue of atoms and molecules
because he is being singularly coy about clarifying it if he has a
position. I basically agree that we're all dependent on atoms and
molecules. But I don't agree that the kind of material interactions
studied in physics or quantum physics as the cause of intelligence. It
sounds like you're arguing that because we're all made up of common
things, manifestations of intelligence in us justify the assumption of
intelligence in computers. That is an unsound argument."
I suppose I am being coy :) but it was with good reason. I don't seem
to be getting through to you guys - my intention was not to posit a
positive thesis at all, but show a deficieny in scientific method with
regards to AI. No one seems to want to accept / talk about that. Fair
enough - I can't force someone to talk about something they don't want
to :)
Well, no one seems to want to talk about that deficiency because it's
already been talked out, certainly by me and probably by others. And
you don't seem to have any positive alternative in mind that would
offset the deficiency. It's not the deficiency per se, but the question
of what you think you can do about it that others haven't already
suggested, that prompts the lack of interest.
Post by joshbachynski
"I ask you simply what the better way is? Then in reply above you
simply reiterate all your objections to everyone elses ways. If you
can determine intelligence, then what is it? If not I suggest you stop
claiming you can determine intelligence and go back to critiquing the
behavioral methodologies of others, with which I don't necessarily
disagree but find unoriginal. The only thing of interest you've said
so far is that the soul or essence of intelligence is immaterial, with
which, once again, I agree but don't find especially original in the
absence of some specific mechanical idea of what immaterial means."
Intelligible method is the method I am speaking of to determine
intelligence. I did explain it a bit above. I cannot give you a full
demonstration, a) it would take far to long to write down my assertions
here in order, b) my point wasn't to vindicate it here but to posit its
plausibillity in the light that science is not the best way to proceed
in this subject matter.
Disproving one approach doesn't validate another. Your approach to
the analysis of intelligence, whatever it may eventually turn out to
be, isn't validated by reasonable objections to those of others. In
other words, their being wrong doesn't make you right.
Post by joshbachynski
Seeings no one will even acknowledge those
arguments of mine the entire endeavour is all for naught I suppose.
No self-pity in science, just right and wrong. No victimization, please.
Post by joshbachynski
c)
The intelligible method involved meditating (being aware of and
recording or memorizing) upon the essence of one opperating
intelligence - their thoughts, how they relate to each other, what
different kinds of mental processes one has, how we proceed in
discovery from the indubitable, to the certain, to that which we cannot
prove but we can be sure of, etc, until we get to mere speculation. But
it is mere speculation with one advantage over science - there are no
faulty instruments or senses in the way as a mediator - we have direct
access to the thing being studied - our own mental processes and
thoughts.
It would be quite a post to give you a written example of this, and a
written transcription of my meditations may not suffice to convince you
- Descartes was sufficient although incomplete and often misunderstood,
his or the meditations of Plotinus may serve you better.
This is why I am being coy - I was never really intending to posit any
new subject matter or example the new method (as this is simply not the
proper forum - no pun intended), only to show a new method and
understanding of the subject matter may be called for. Please tell me
you understand what I am trying to say, whether or not you agree with
my reasoning.
I understand what you object to. But what I don't understand is a
claim to a better alternative you can't support. Certainly you are
welcome to critique current methodologies. Just don't advance claims
you can't even describe succinctly.
Post by joshbachynski
">Although I am now paraphrasing, I believe that the essence of Josh's
Post by j***@ixpres.com
argument was that computers cannot think because they cannot have
volition of their own since they are only doing what they are told to
do. I said that this argument of his was nonsense."
No, that's not what I was saying at all. Which is why I didn't really
respond to any of that thread that thought I had said something I had
not :)
If you want my two cents on it however I'd say yes computers only do
what they are told to do - they follow the instructions they are given
- period. It's more like the instructions cause them to do what the
instruction says. They can generate new instructions to follow, but not
at random and have to be told to generate new instructions - by a human
or by programming - essentually, ALL by a human. Deep Blue didn't beat
Casparov - hundreds of programmers beat Casparov.
IMO random generation of new instructions to follow is not required for
AI IMO - not even the human mind works that way.
I cannot prove this I can only theorize. As I said above it would be a
very long and hard to understand post if I were to explain my whole
theory of mind and the intelligible method, and I strongly doubt any of
you would understand or accept it. But if you really want to read it
then I will type it. It will be good practice.
No, I don't really want to read it. I have to do too much reading as
it is. You undoubtedly need to write it out in order to gain some
understanding of what it is you don't know. Most of us have gone that
route. Some of us just refuse to recognize what it is we don't know
and can't prove, and some of us make progress. You recognize a problem
in the methods of behavioral science. You just don't seem to have any
idea of alternatives that result in science and not just so much
philosophizing or, as I would call it, hot air.

Regards - Lester
Wolf Kirchmeir
2005-01-10 15:32:41 UTC
Permalink
joshbachynski wrote:
[...]
Post by joshbachynski
I suppose I am being coy :) but it was with good reason. I don't seem
to be getting through to you guys - my intention was not to posit a
positive thesis at all, but show a deficieny in scientific method with
regards to AI. No one seems to want to accept / talk about that. Fair
enough - I can't force someone to talk about something they don't want
to :)
[...]

No, josh, you have a vague (very vague IMO) notion of "intelligence",
and are offended that AI people in general don't seem to share your
notion. Your notion (insofar as I can figure it out - your language is
extremely intensional) assumes some internal quality or property or
factor that machines cannot(?) have, while humans not only can but do
have it. Your admiring references to Descartes and allusions to
dualistic notions suggest that you believe that this other factor is a
soul (or something like that) - in which case your argument is question
begging, since by definition a soul can't be built, but can only be
bestowed by some deity (whether personal or not in your scheme of
things, I can't tell.)

Whether the above is correct or not, there is an objective flaw in your
position. You claim that you've discovered a "deficiency in scientific
method", but you haven't a clue as to what scientific method actually is.

Eg, you have repeatedly claimed that nothing useful can be done unless
and until one has a clear idea of what one wants to do. This assumption
(a sadly widespread one) merely displays an ignorance of empirical work
and engineering, of how invention and discovery _in fact_ proceed.
It's as if you told the Wright brothers that they couldn't hope to build
an airplane because they didn't have any true understanding of flight.
You don't seem to realise that understanding does not precede but
follows experiment and trial.

IOW, most of your talk misses the point. Do some real science and
engineering before you talk any more about what scientists and engineers
can or can't do.
Stephen Harris
2005-01-11 08:26:37 UTC
Permalink
[...]
Post by joshbachynski
I suppose I am being coy :) but it was with good reason. I don't seem
to be getting through to you guys - my intention was not to posit a
positive thesis at all, but show a deficieny in scientific method with
The scientific method does not deal with things which cannot be
measured in the physical universe; when you said that the mind had nothing
to do with mass or energy, you excluded the scientific method
from whatever your claim was. The scientific method makes no claims
about the metaphysical, so it cannot have a deficiency in an area that
it makes _no_ attempt to explain.

It does attempt to explain the physical. We have brains which we
think produce minds exhibiting intelligent behavior so most people
think building AI is amenable to the scientific method, which is
experimentation, engineering progress through trial and error.

Judea Pearl quoted Einstein in his book "Causality" (2000):

"Development of Western science is based on two great achievements:
the invention of the formal logical system (in Euclidean geometry) by
the Greek philosophers, and the discovery of the possibility to find
out causal relationships by systematic experiment (during the Renaissance)."

Causal laws act on the physical universe of mass and energy. When
you bring in a conjecture about something which is not subject to
causal laws (outside mass/energy) you leave the area which the
scientific method considers itself valid. It makes no claim about
such a non-causal existence, it does not attempt to disprove such a
non-causal existence. There are some scientists who adopt an
additional stipulation: if it isn't subject to the scientific method, then
it doesn't exist. That is just an assumption made by some people
who have their own little philosophical label. Not the definition.

Nobody can prove that a non-causal realm or entity does not
exist. The scientific method cannot be used for this, nor does it try.
Nor can one logically prove that some non-causal realm or entity
does exist, because it is, by definition, neither mass nor energy, and
so not subject to the experimentation and measurement that could provide
such a proof.

Maybe you don't think it is obvious to people that if there were such
a thing as a non-causal (not mass or energy) realm outside of the
universe that provided a mind to the physical universe, or even was
a mind with a point of existence or focus within physical bodies,
then nobody would be trying to build this non-causal thing by using
physical causal means or the scientific method.

So I'm not sure whether the best term to describe your effort here is
trivial, boring, irrelevant, or born of ignorance of the consensual
agreements and definitions other people have already provided. Here you
are calling AI researchers "sophists" because they don't match your own
ignorant and imaginary speculations. That was sort of true 25 years ago.
Now, such people are called Transhumanists and have their own agenda for
what AI should accomplish. Scientists who think that anything that doesn't
fit under the scientific method is not a fit subject for investigation
also have their own agendas/views; they are a subgroup of the people who
use the scientific method.

You are right that this sort of issue has come up before. There is
no logical argument that will settle the metaphysical issues involved.
Your argument is particularly useless in regard to "not matter or energy"
because it is never going to be more than your belief and it will
never have any proof. People on the other side of your assumption
will never have a proof either. The people in the middle are just
trying to make a machine that does some intelligent behavior just like
the people who used birds to inspire/build a machine that flies. They
didn't need to claim the airplane was like a bird in any other way
such as laying eggs, or eating worms or singing to build an airplane.
People will be able to build a machine that has some intelligent
behavior without claiming that it has a mind or free will.

People who want to build a program that runs on a computer and that
also has a mind call it something other than just AI in order to
distinguish it. They call it strong AI, or maybe self-modifying general
artificial intelligence. Because you don't know this, you make the clueless
claim that people who want to use the term AI are sophists and
are not ethically allowed to distinguish their aspirations from other
goals such as strong AI or AGI (super-intelligent self-modifying AI).

I don't know why you keep insisting on writing about something
you know nothing about and that doesn't fit into your speculations,
which are based on your imagination rather than facts. By now, you
must be used to finding out that reality does not match your reality.
Post by joshbachynski
regards to AI. No one seems to want to accept / talk about that. Fair
enough - I can't force someone to talk about something they don't want
to :)
[...]
JPL Verhey
2005-01-11 21:59:10 UTC
Permalink
Post by Stephen Harris
[...]
Post by joshbachynski
I suppose I am being coy :) but it was with good reason. I don't seem
to be getting through to you guys - my intention was not to posit a
positive thesis at all, but show a deficieny in scientific method with
The scientific method does not deal with things which cannot be
measured in the physical universe; when you said that the mind had
nothing to do with mass or energy, you excluded the scientific method
from whatever your claim was. The scientific method makes no claims
about the metaphysical, so it cannot have a deficiency in an area in
which it makes _no_ attempt to explain.
Although speculation in science is necessary, and it is a form of
metaphysics: you try to verify or falsify the "could it be that".
Post by Stephen Harris
It does attempt to explain the physical. We have brains which we
think produce minds
More contemporary is to think that (certain) brain activity is
*identical* with mind.
Post by Stephen Harris
exhibiting intelligent behavior so most people
think building AI is amenable to the scientific method, which is
experimentation, engineering progress through trial and error.
It is a wise thing to say that if we define human behaviors as
intelligent behaviors.. non-human behaviors could in principle also
qualify. But it is not wise to think of intelligence as belonging to a
different category than qualifications of behavior such as "smart",
"funny" or "useful".

"Funny" is not a property the natural sciences use when describing and
measuring observed phenomena.
Post by Stephen Harris
Judah Pearl quoted Einstein in his book "Causality" (2000)
the invention of the formal logical system (in Euclidean geometry) by
the Greek philosophers, and the discovery of the possibility to find
out causal relationships by systematic experiment (during the
Renaissance)."
Causal laws act on the physical universe of mass and energy.
Causal laws are abstractions, mental constructs, derived from
observations and are in fact describing our experiences. The idea that
such abstractions "act on the physical universe" (let alone "govern") is
turning the world upside down.
Post by Stephen Harris
When
you bring in a conjecture about something which is not subject to
causal laws (outside mass/energy) you leave the area which the
scientific method considers itself valid. It makes no claim about
such a non-causal existence, it does not attempt to disprove such a
non-causal existence.
Agreed - things "immaterial" are by definition non-scientific. And the
idea that the mind is not identical with brain activity is not only
non-scientific - since all research points to that inescapable fact -
but it is also an insult to the brain.

That we don't know all there is to know, or could be known about
matter.. is another matter. But there probably is a lot more waiting to
be known and understood about matter, and hence also about the brain.
For whatever reason, some people want "unknown matter" to be immaterial.
They probably hate matter already so much.. or perhaps blame matter for
a miserable life... that they can't stand the idea that there is
more..and more to know about matter...
Post by Stephen Harris
There are some scientists who adopt an
additional stipulation: if it isn't subject to the scientific method, then
it doesn't exist. That is just an assumption made by some people
who have their own little philosophical label. Not the definition.
Nobody can prove that a non-causality realm or entity does not
exist. The scientific method cannot be used for this nor does it try.
That's not correct. The scientific method has revealed, and reveals,
indeterminacy just as much as determinacy. There appears to be a war
between "causal laws" and the indeterminacy expressed in statistics.. a
war that Bohr and Einstein were also fighting in person for 30 years over
the meaning of quantum mechanics, without resolution..

[]
Lester Zick
2005-01-11 22:45:55 UTC
Permalink
On Tue, 11 Jan 2005 22:59:10 +0100, "JPL Verhey"
[. . .]
Post by JPL Verhey
Post by Stephen Harris
Judah Pearl quoted Einstein in his book "Causality" (2000)
the invention of the formal logical system (in Euclidean geometry) by
the Greek philosophers, and the discovery of the possibility to find
out causal relationships by systematic experiment (during the
Renaissance)."
Causal laws act on the physical universe of mass and energy.
Causal laws are abstractions, mental constructs, derived from
observations and are in fact describing our experiences. The idea that
such abstractions "act on the physical universe" (let alone "govern") is
turning the world upside down.
That's exactly what happens. Such causal laws are not passive at all
despite representing abstractions and do act on the physical universe.

[. . .]
Post by JPL Verhey
Post by Stephen Harris
There are some scientists who adopt an
additional stipulation: if it isn't subject to the scientific method, then
it doesn't exist. That is just an assumption made by some people
who have their own little philosophical label. Not the definition.
Nobody can prove that a non-causality realm or entity does not
exist. The scientific method cannot be used for this nor does it try.
That's not correct. The scientific method equally revealed and reveals
determinacy as indeterminacy. It appears a war between "causal laws" and
the indeterminacy expressed in statistics.. a war that also Bohr and
Einstein were fighting in person for 30 years over the meaning of
quantum mechanics without resolve..
There is no lack of causality and no indeterminacy expressed in
statistics.

Regards - Lester
JPL Verhey
2005-01-11 23:06:53 UTC
Permalink
Post by Lester Zick
On Tue, 11 Jan 2005 22:59:10 +0100, "JPL Verhey"
[. . .]
Post by JPL Verhey
Post by Stephen Harris
Judah Pearl quoted Einstein in his book "Causality" (2000)
the invention of the formal logical system (in Euclidean geometry) by
the Greek philosophers, and the discovery of the possibility to find
out causal relationships by systematic experiment (during the Renaissance)."
Causal laws act on the physical universe of mass and energy.
Causal laws are abstractions, mental constructs, derived from
observations and are in fact describing our experiences. The idea that
such abstractions "act on the physical universe" (let alone "govern") is
turning the world upside down.
That's exactly what happens. Such causal laws are not passive at all
despite representing abstractions and do act on the physical universe.
Sometimes, Lester, I think you got on this roll over differences because
you just want to.. or Must differ..;)

Stephen posits "causal laws" to exist independent from our abstractions,
ie. that they [causal laws] will still be there when we are all dead and
gone, and that they [causal laws] are still 'acting on the physical
universe'. That's how I understood him, and that is what I think is
nonsense. Maybe he didn't mean that.. then let him speak.

If you, wrongly, thought I meant that *abstractions* such as "physical
laws" don't act on (in) the physical universe.. I'd agree with you.
Obviously they do. Brain activity acts in and on the physical universe,
although its effects do not reach too far beyond the skull it resides
in.
Post by Lester Zick
[. . .]
Post by JPL Verhey
Post by Stephen Harris
There are some scientists who adopt an
additional stipulation: if it isn't subject to the scientific
method,
then
it doesn't exist. That is just an assumption made by some people
who have their own little philosophical label. Not the definition.
Nobody can prove that a non-causality realm or entity does not
exist. The scientific method cannot be used for this nor does it try.
That's not correct. The scientific method equally revealed and reveals
determinacy as indeterminacy. It appears a war between "causal laws" and
the indeterminacy expressed in statistics.. a war that also Bohr and
Einstein were fighting in person for 30 years over the meaning of
quantum mechanics without resolve..
There is no lack of causality and no indeterminacy expressed in
statistics.
Why does statistics not reveal indeterminacy, in the same way that
causality reveals determinacy?
Lester Zick
2005-01-12 16:54:47 UTC
Permalink
On Wed, 12 Jan 2005 00:06:53 +0100, "JPL Verhey"
Post by JPL Verhey
Post by Lester Zick
On Tue, 11 Jan 2005 22:59:10 +0100, "JPL Verhey"
[. . .]
Post by JPL Verhey
Post by Stephen Harris
Judah Pearl quoted Einstein in his book "Causality" (2000)
the invention of the formal logical system (in Euclidean geometry) by
the Greek philosophers, and the discovery of the possibility to find
out causal relationships by systematic experiment (during the Renaissance)."
Causal laws act on the physical universe of mass and energy.
Causal laws are abstractions, mental constructs, derived from
observations and are in fact describing our experiences. The idea that
such abstractions "act on the physical universe" (let alone "govern") is
turning the world upside down.
That's exactly what happens. Such causal laws are not passive at all
despite representing abstractions and do act on the physical universe.
Sometimes, Lester, I think you got on this roll over differences because
you just want to.. or Must differ..;)
Stephen posits "causal laws" to exist independent from our abstractions,
ie. that they [causal laws] will still be there when we are all dead and
gone, and that they [causal laws] are still 'acting on the physical
universe'. That's how I understood him, and that is what I think is
nonsense. Maybe he didn't mean that.. then let him speak.
If you, wrongly, thought I meant that *abstractions* such as "physical
laws" don't act on (in) the physical universe.. I'd agree with you.
Obsviously they do. Brainactivity acts in and on the physical universe,
although its effects do not reach too far beyond the skull it resides
in.
I appear to have been mistaken in terms of what I thought you meant,
JPL, so I apologize.
Post by JPL Verhey
Post by Lester Zick
[. . .]
Post by JPL Verhey
Post by Stephen Harris
There are some scientists who adopt an
additional stipulation: if it isn't subject to the scientific
method,
then
it doesn't exist. That is just an assumption made by some people
who have their own little philosophical label. Not the definition.
Nobody can prove that a non-causality realm or entity does not
exist. The scientific method cannot be used for this nor does it try.
That's not correct. The scientific method equally revealed and reveals
determinacy as indeterminacy. It appears a war between "causal laws" and
the indeterminacy expressed in statistics.. a war that also Bohr and
Einstein were fighting in person for 30 years over the meaning of
quantum mechanics without resolve..
There is no lack of causality and no indeterminacy expressed in
statistics.
Why does statistics not reveal indeterminacy, in the same way that
causality reveals determinacy?
Well, JPL, statistics are just numerical summaries of coincidences.
They don't reveal anything except that. The rest is just our own
speculation and interpretation of what the numbers mean in relation to
one another. The numbers themselves are just numbers in any event and
don't explain anything in themselves except numerical correlation.

Statistical analysis is the scientific equivalent of analogical
reasoning. The numbers themselves don't say anything. It's people who
say what the numbers mean. When you say that statistics reveals
indeterminacy, all you're really saying is that scientists are too
lazy or too preoccupied with grant applications to look behind the
numbers to find out where the numbers come from and what they have
to mean in mechanical terms of cause and effect. They have to mean
something or there would be no correlations to begin with.
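To illustrate with a trivial worked example (the numbers are made up): a
correlation coefficient is just a summary of co-variation between two
series; the number by itself does not say which quantity causes which, or
whether some third factor drives both.

from statistics import correlation   # available in Python 3.10+

ice_cream_sales = [12, 15, 20, 24, 30, 33]
drownings = [1, 2, 3, 3, 5, 6]

r = correlation(ice_cream_sales, drownings)
print("r = %.2f" % r)   # strongly positive, yet neither causes the other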

On the other hand, if you say statistical analysis and probability are
the only possible techniques for understanding material interactions,
then you are positing a mechanical principle that is not in the numbers
or in the correlations they represent, and for which there are no numbers
and no support, statistical or otherwise.

Regards - Lester
j***@ixpres.com
2005-01-11 09:46:11 UTC
Permalink
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.

Let's say that a scientist spends a day observing an object, waiting for
an expected event. The event does not occur; the scientist might then
enter that information into his observation log. That shows that
observations of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.

While we can talk about the natural relations that may occur between
two objects in the natural universe, we also need to be able to talk
about conceptual relations that we define between objects and events.
The noting of an event that did not occur, for example, is an
actualization of the understanding that non-material relations may be
considered significant to scientific theory. This is not nonsense and
it is not non-science.

There is another situation that is relevant to my argument. We are not
always able to detect the effective principles behind an observed event.
From our point of view, it might seem that an event or relation is
non-deterministic. So there are also times when we might see an
apparently non-deterministic event just because we cannot see all of the
events that precipitated the observed event. This kind of observation
can also constitute a legitimate scientific observation.

These points are important in laying the groundwork for undermining the
theory that computers are "just following instructions."

Repeating myself, if the universe is completely deterministic then the
brain is completely dependent on the instructions of its biochemistry
and physics. From this point of view, one might conclude that when
people are thinking they are just following the instructions of
biology. But that seems like nonsense to me. We are more than the
reactants of the deterministic forces that have created us, we are
independent intellectual spirits. The reason why we are not just
reactants is because we are able to think for ourselves. We may be
following the *instructions* of the biochemistry and physics of our
brain, but if this is the case, there are some aspects of that
biochemistry that allow us to physically modify some part of that
biochemistry simply by taking thought. To think about something is to
change something of what you are.

Perhaps the universe is not completely deterministic. As I said, a
completely non-deterministic universe seems pretty far out there, so
let's take a look at an alternative. Perhaps independent chains of
deterministic events can occur. If they do, I don't think that they
would be the only source for the human brain to acquire its creativity
and volition, but if they were then that process could be duplicated
through the sources of input into the computer. So even though the
computer program is running instructions, the sources of those
instructions may at times be independent and therefore not
deterministically dependent on previous instructions that the computer
had processed earlier. If, say, a computer ran a probabilistic
simulation, it would have to use deterministic means to simulate the
random events of the sample. But if it was able to adapt in some way
to input events, the sources of those observations could include
non-deterministic relations or events.
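A minimal Python sketch of that distinction (illustrative only): the
"random" sample in a simulation comes from a deterministic generator, so
the same seed reproduces the same sample every time; only input drawn
from outside the program - here the operating system's entropy pool -
ties a run to events the program itself did not determine.

import os
import random

rng = random.Random(42)                          # deterministic pseudo-randomness
print([rng.randint(1, 6) for _ in range(5)])     # identical on every run

external_seed = int.from_bytes(os.urandom(4), "big")   # input from outside the program
rng2 = random.Random(external_seed)
print([rng2.randint(1, 6) for _ in range(5)])    # varies with the outside world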

Suppose I say that I can program my computer to solve equations that
are too complicated for me to solve. You would have said that the
computer is just doing what I instructed it to do even though there may
be some question whether or not I could have solved those equations
myself. Ok, fine. But here is the problem. If you agree to that,
then you would be accepting the idea that special complicated
algorithms can be considered as instructions that the computer has
to follow.

Suppose that the computer was able to write its own instructions. Then
whose instructions would it be following?

If this could occur then some of the instructions that it *was
following* might be instructions that it itself wrote. When I follow
instructions that I myself make I don't usually make the claim that "I
am just following instructions," unless I find some great humor in the
comment or some great object lesson for my fellow man to be enwizened
by. Or, I might make that kind of comment if I wanted to avoid
accepting responsibility for my actions. (I too was just following
orders!)

Now if I programmed my computer to create its own instructions, I would
do it in a way where it would use my programming code to create new
combinations of algorithms. But I contend that even this controlled
environment can be seen as a method by which computer programs can
create their own instructions. Just as you would (probably) have said
that my complicated numerical algorithm would constitute the
instructions that the computer had to follow, you would have to accept
that the instructions that the computer wrote would also constitute
instructions that it had to follow as well.
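Here is a toy Python sketch of what I mean by new combinations of
algorithms (the primitives are invented for illustration): the program
assembles instruction sequences it was never literally given, and then
follows each one - though the assembling was, of course, itself
programmed.

from itertools import permutations

primitives = {
    "double": lambda x: x * 2,
    "incr":   lambda x: x + 1,
    "square": lambda x: x * x,
}

def compose(names):
    # build a new procedure by chaining primitive steps in the given order
    def pipeline(x):
        for name in names:
            x = primitives[name](x)
        return x
    return pipeline

for recipe in permutations(primitives, 2):       # every two-step combination
    print(recipe, "applied to 3 ->", compose(recipe)(3))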

So yes, you were partly right. Computer programs follow instructions.
But they are also capable of creating some of their own instructions,
and when that sort of thing happens in my mind I argue that constitutes
volition and self-determination. Or at least it constitutes a source
of self-determination and volition.

But the whole question of determinism vs non-determinism has to be
examined from an alternative view. There are many valid
non-deterministic relations that can be ascribed to events, but are
they natural relations or are they expressions of intellectual
constructs? The reporting of a non-event is one example. But
there are also cases where relatively independent events may interact
as well. Are they naturally occurring non-deterministic relations?

But in understanding that non-deterministic events (or apparently
non-deterministic events) must coexist with deterministic events (or
apparently deterministic events), we are left with the deeper recognition
that the observation of events is relative to the vantage of the
observer. This seems like elementary science to me.
And from the basis of that understanding we could also put forward the
theory that the instructions that a computer program follows are
dependent on the contextual frame of those instructions and the
immediate sources of the instructions.

The external sources of information that the computer program is
exposed to can be integrated into the programming. In order to
appreciate the effects of this kind of input, you cannot just look at
separate instructions and completely understand their effects on the
program; you would have to study the situation with a variety of
strategies.

And more importantly to this particular discussion, the program can
integrate its own algorithms into itself. Because there is no
overwhelming evidence that the human brain is not subject to the
deterministic laws of biochemistry and nature, we have to ask ourselves
what it is about our creativity that allows us to achieve some
independence of thought even though we must presumably abide by these
laws of nature. And because the nature of a computer program
incorporates a few special kinds of instructions that allow a program
to process symbolic references that can shape the algorithms that the
program will subsequently follow, we have to ask ourselves if it is
possible that there might be a similarity between the two kinds of
processes. In this question, absolute equivalency is not the deciding
factor. The question then is, are there effective similarities that
might cause you to suspect that it is no more meaningful to say that the
computer is "just following instructions" than it is to say that we
human beings are "just following instructions"?
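A small sketch of that kind of instruction, in Python (the names are
hypothetical): the symbol arriving from outside selects which procedure
the program follows next, so the input shapes the algorithm that is
subsequently executed.

handlers = {
    "celsius_to_fahrenheit": lambda c: c * 9 / 5 + 32,
    "fahrenheit_to_celsius": lambda f: (f - 32) * 5 / 9,
}

def process(symbol, value):
    # which instructions get followed depends on the symbol that arrives
    return handlers[symbol](value)

print(process("celsius_to_fahrenheit", 100.0))   # 212.0
print(process("fahrenheit_to_celsius", 32.0))    # 0.0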

I apologize for the confusing and excessive presentation. If anyone
wants to continue arguing about this, I will try to use fewer words.
Jim Bromer
kenneth collins
2005-01-11 11:27:25 UTC
Permalink
<***@ixpres.com> wrote in message news:***@f14g2000cwb.googlegroups.com...
| [...]

I've not been following this thread,
but I've read Stephen's and your
last posts.

With respect to your position, the sticky-point is that, just because
the causal links between two "events" in physical reality remain
unknown, that does not mean they are not causally-linked.

It just means that the causal-linkage remains unknown -- unexperienced
and unobserved.

It's not "observation" that instantiates causality.

Causality or acausality exist whether or not either is observed.

The sticky-point is that one cannot know that which one does not
experience.

Such does not apply to the example that you gave, in which a machine is
programmed to write some of the instructions it will, subsequently,
execute.

All one has to do is a memory-dump that includes the entirety of the
CPU-'time' in question, and there they are -- all the causal links
between the code that was executed while the machine "wrote" the further
instructions that it "wrote", all the causal links between the execution
of this machine-added code and any further instructions that the machine
"wrote", ... ad infinitum.

The above is True even if any of the code that the machine executes is
driven by external data with respect to which the external causal links
remain unknown.

With respect to what happens within the machine, the causal links are
all right-there, in the memory dump, completely-determined.

All of the "head-scratching" stuff remains external to anything that the
machine does. Nothing the machine does is, or can be, "undetermined".

We don't have to ask the machine "why" it did this or that. We can just
look in the memory-dump, and see for ourselves. The chain of 'whys' is
deterministically-complete.
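A minimal sketch of the memory-dump point, in Python rather than a real
core dump (the names are made up; an actual dump would record registers
and memory, not just line numbers): a trace hook logs each line executed
in the functions entered while tracing, including the execution of a rule
the program "wrote" for itself, so the whole causal chain can be read
back afterwards.

import sys

trace_log = []

def tracer(frame, event, arg):
    # log each line executed in functions entered while tracing is active
    if event == "line":
        trace_log.append((frame.f_code.co_name, frame.f_lineno))
    return tracer

def self_modifying_step(x):
    rule = eval("lambda v: v + 1")   # an instruction the program "wrote" for itself
    return rule(x)

sys.settrace(tracer)
result = self_modifying_step(41)
sys.settrace(None)

print(result)       # 42
print(trace_log)    # the executed lines, in order -- nothing left undetermined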

The machine never interacts with anything that remains unknown, whether
or not what remains unknown is causally-linked to that with which the
machine does interact.

The existence of the causal links is not, in any way, dependent upon
anything that the machine does.

It's the same with respect to nervous systems. What happens within
nervous systems can alter what happens within physical reality, but only
through a chain of causal links.

What happens within nervous systems does not dictate to physical reality
what it "Is".

When invention occurs, everything that's in it was already
causally-linked within physical reality, and the inventor 'just' "read
physical reality's memory-dump" to gather, and implement, the stuff of
his invention.

Computers can be, and soon will be, programmed to do the same.

I know this with Certainty.

I'm the inventor.

k. p. collins
JPL Verhey
2005-01-11 21:17:06 UTC
Permalink
"kenneth collins" <***@worldnet.att.net> wrote in message news:xKOEd.133$***@bgtnsc04-news.ops.worldnet.att.net...
..
Post by kenneth collins
When invention occurs, everything that's
in-it was already causally-linked within
physical reality, and the inventor 'just'
"read physical reality's memory-dump" to
gather, and implement, the stuff of his
invention.
Computers can be, and soon will be,
programmed to do the same.
I think of it like this. We mix and put together materials and thusly
create new configurations of materials. The result is various
technologies that make things happen, when lucky just as we want them to
happen. There is no fundamental difference between agriculture and
building computers+writing software.

Sometimes potatoes grow not exactly as we hoped - perhaps some fungus was
very successful in spreading and eating the cake, aka destroying the
harvest. Similarly computers+software, one day, might "do things on
their own" because other processes, inside or outside the computers,
intervened.. like the fungi. The result might look to us like a total
failure of the "self-learning computer"... like you could hope your kid
becomes an Einstein, but he ends up as a salesman of lubricants instead.

So, it appears that a criterion for "intelligent behavior" of technology
(ie materials-organised-and-put-together-by-man) will be that it does
not live up to our expectations. :0 It might still do..on occasion, but
then it will be more in spite of our initial effort than thanks to us!

But, if despite all this, we *really* do feel (and are convinced) we
did a very good job in creating something that starts to behave
intelligently on its own..and that it does so *because* of us.. nature
will remind us that it is not us driving the electrical currents through
neural or silicon circuitries.. We just re-arranged the magic beef and
organised the show.

When one day we can synthesize DNA almost from scratch..and have it grow
in artificial uteruses.. we have just, and again, organised and
re-arranged natural material stuff. My DNA can be mixed with engineered
human female DNA and a new, totally normal human being grown out of it
(I hope). It will certainly not be artificially intelligent. Because all
the materials are just, and still, natural. A computer also behaves
naturally.. because it can't do otherwise. Nothing can.

I think artificially sequenced DNA is much more promising in this
respect than computers and software, though perhaps they might be
involved in the production process as a whole.

Any future Nobel Prize in this area, I would think, should go not to
somebody who succeeded in surprising us with how intelligent s/he the
inventor was.. but to somebody who succeeded in surprising us with how
different arrangements of matter can produce amazing new behaviors. To
the glory of Matter and its Magic Nature.

Amen
Wolf Kirchmeir
2005-01-11 15:27:20 UTC
Permalink
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]

So just because the event didn't occur it was non-material?????

Josh, that is _stupid_.

[snip the rest of the blather]
Michael Olea
2005-01-11 17:06:52 UTC
Permalink
Post by Wolf Kirchmeir
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]
So just because the event didn't occur it was non-material?????
Josh, that is _stupid_.
[snip the rest of the blather]
Uh, that was not a post by Josh.
Lester Zick
2005-01-11 19:00:13 UTC
Permalink
Post by Michael Olea
Post by Wolf Kirchmeir
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]
So just because the event didn't occur it was non-material?????
Josh, that is _stupid_.
[snip the rest of the blather]
Uh, that was not a post by Josh.
Thanks for pointing that out, Michael. I got it wrong too.

Regards - Lester
JPL Verhey
2005-01-11 20:25:49 UTC
Permalink
Post by Lester Zick
Kirchmeir at
Post by Wolf Kirchmeir
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be
perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]
So just because the event didn't occur it was non-material?????
Josh, that is _stupid_.
[snip the rest of the blather]
Uh, that was not a post by Josh.
Thanks for pointing that out, Michael. I got it wrong too.
Pay attention gentlemen! It shows that it matters who's talking. Jim was
making a statement quite true - if there is a question like who's on the
toilet and it turns out not to be my son but my wife's lover.. we do
science.
Lester Zick
2005-01-11 22:48:17 UTC
Permalink
On Tue, 11 Jan 2005 21:25:49 +0100, "JPL Verhey"
Post by JPL Verhey
Post by Lester Zick
Kirchmeir at
Post by Wolf Kirchmeir
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be
perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]
So just because the event didn't occur it was non-material?????
Josh, that is _stupid_.
[snip the rest of the blather]
Uh, that was not a post by Josh.
Thanks for pointing that out, Michael. I got it wrong too.
Pay attention gentlemen! It shows that it matters who's talking. Jim was
making a statement quite true - if there is a question like who's on the
toilet and it turns out not to be my son but my wife's lover.. we do
science.
Jim's post was inadvertently conflated in style and proximity with
Josh's. I see no culpable lack of attention.

Regards - Lester
Wolf Kirchmeir
2005-01-11 20:44:59 UTC
Permalink
Post by Michael Olea
Post by Wolf Kirchmeir
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]
So just because the event didn't occur it was non-material?????
Josh, that is _stupid_.
[snip the rest of the blather]
Uh, that was not a post by Josh.
Uh, sorry. It sounds so much like josh, it didn't sink in that jpbromer
isn't josh---.
Stephen Harris
2005-01-12 00:15:35 UTC
Permalink
Post by Wolf Kirchmeir
Post by Michael Olea
Post by Wolf Kirchmeir
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
[...]
So just because the event didn't occur it was non-material?????
Josh, that is _stupid_.
[snip the rest of the blather]
Uh, that was not a post by Josh.
Uh, sorry. It sounds so much like josh, it didn't sink in that jpbromer
isn't josh---.
There may be some confusion about whether the mind can be weighed.
An idea may not be material, but it has/had existence. I think when
one talks (as Josh did) about having neither mass nor energy, one
means not having existence in space or time, which is a different idea
than an eternal verity. This is taken from Wikipedia. It is hard to
define something which doesn't exist, :-) but here it goes:
"According to Platonic realism, universals exist in a "realm" (often so
called) that is separate from space and time; one might say that universals
have a sort of ghostly or heavenly mode of existence, but, at least in more
modern versions of Platonism, such a description is probably more misleading
than helpful. It will make the theory seem less mysterious if we say,
instead, that it is meaningless (or a category mistake) to apply the
categories of space and time to universals. In any event, we never see or
otherwise come into sensory contact with Platonic universals, and they
definitely do not exist at any distance, in either space or time, from our
bodies. Obviously they do not exist in the way that ordinary physical
objects exist. Nonetheless these universals do, according to Plato and other
Platonic realists, exist in the broadest sense. Most modern Platonists avoid
the possible ambiguity by never claiming that universals exist, but "merely"
that they are."

SH: The idea of causality includes counterfactuals, what-ifs, what could
happen to preempt a chain of causal relationships producing an event. What
would have happened if Columbus had died at sea in a storm? Is there a way
to determine the true cause of some effect other than by a high frequency
of correlation? The quote from Wikipedia is also why I don't think the
scientific method applies to proving or disproving platonic ideals. If God
(as an idealistic example, not intended to require belief in God) created
the universe, then God does not reside within the universe unless of course
He omnipotently wills it to be so.

We have abstract ideas which by definition are non-physical. Where do all
those old ideas go? I think like old soldiers, they just fade away unless
one believes that no information can be lost in the universe.

Make it so, (Captain Picard)

Stephen
j***@ixpres.com
2005-01-12 03:49:23 UTC
Permalink
Presumably an event would be some kind of material event in the case of
a scientist waiting for it. If it does not occur then the material
interaction did not occur. A non-existent event could not be a
material event.

Jim Bromer
Lester Zick
2005-01-11 20:29:28 UTC
Permalink
Post by j***@ixpres.com
Hey I am glad someone is reading this. I will finish what I was saying
and then if you want to argue about it I will be happy to do so with
fewer words.
Let's say that a scientist spends a day observing an object waiting for
an expected event. The event does not occur the scientist might then
enter that information into his observation log. That shows that the
observation of non-material or non-occurring events can be perfectly
reasonable and sound scientific observations. It also shows that
scientific observations can be made relative to an expected frame of
reference.
While we can talk about the natural relations that may occur between
two objects in the natural universe, we also need to be able to talk
about conceptual relations that we define between objects and events.
The noting of an event that did not occur, for example, is an
actualization of the understanding that non-material relations may be
considered significant to scientific theory. This is not nonsense and
it is not non-science.
There is another situation that is relevant to my argument. We are not
always able to detect the effective principals of an observed event.
From our point of view, it might seem that an event or relation is
non-deterministic. So there are also times when we might see an
apparent non-deterministic event just because we cannot see all of the
events that precipitated the observed event. This kind of observation
can also constitute a legitimate scientific observation.
These points are important in laying the groundwork for undermining the
theory that computers are "just following instructions."
Repeating myself, if the universe is completely deterministic then the
brain is completely dependent on the instructions of its biochemistry
and physics. From this point of view, one might conclude that when
people are thinking they are just following the instructions of
biology. But that seems like nonsense to me. We are more than the
reactants of the deterministic forces that have created us, we are
independent intellectual spirits. The reason why we are not just
reactants is because we are able to think for ourselves. We may be
following the *instructions* of the biochemistry and physics of our
brain, but if this is the case, there are some aspects of that
biochemistry that allows us to physically modify some part of that
biochemistry simply by taking thought. To think about something is to
change something of what you are.
Perhaps the universe is not completely deterministic. As I said, a
completely non-deterministic universe seems pretty far out there, so
lets take a look at an alternative. Perhaps independent chains of
deterministic events can occur. If they do, I don't think that they
would be the only source for the human brain to acquire its creativity
and volition, but if they were then that process could be duplicated
through the sources of input into the computer. So even though the
computer program is running instructions, the sources of those
instructions may at times be independent and therefore not
deterministically dependent on previous instructions that the computer
had processed earlier. If, say, a computer ran a probabilistic
simulation, it would have to use deterministic means to simulate the
random events of the sample. But if it was able to adapt in some way
to input events, the sources of those observations could include
non-deterministic relations or events.
Suppose I say that I can program my computer to solve equations that
are too complicated for me to solve. You would have said that the
computer is just doing what I instructed it to do even though there may
be some question whether or not I could have solved those equations
myself. Ok, fine. But here is the problem. If you agree to that,
then you would be accepting the idea that special complicated
algorithms can be considered as instructions that that the computer has
to follow.
Suppose that the computer was able to write its own instructions. Then
whose instructions would it be following?
If this could occur then some of the instructions that it *was
following* might be instructions that it itself wrote. When I follow
instructions that I myself make I don't usually make the claim that "I
am just following instructions," unless I find some great humor in the
comment or some great object lesson for my fellow man to be enwizened
by. Or, I might make that kind of comment if I wanted to avoid
accepting responsibility for my actions. (I too was just following
orders!)
Now if I programmed my computer to create its own instructions, I would
do it in a way where it would use my programming code to create new
combinations of algorithms. But I contend that even this controlled
environment can be seen as a method by which computer programs can
create their own instructions. Just as you would (probably) have said
that my complicated numerical algorithm would constitute the
instructions that the computer had to follow, you would have to accept
that the instructions that the computer wrote would also constitute
instructions that it had to follow as well.
So yes, you were partly right. Computer programs follow instructions.
But they are also capable of creating some of their own instructions,
and when that sort of thing happens in my mind I argue that constitutes
volition and self-determination. Or at least it constitutes a source
of self-determination and volition.
But the whole question of determinism vs non-determinism has to be
examined from an alternative view. There are many valid
non-deterministic relations that can be ascribed to events, but are
they natural relations or are they expressions of intellectual
constructs. The reporting of a non-event event is one example. But
there are also cases where relatively independent events may interact
as well. Are they naturally occurring non-deterministic relations?
But understanding that non-deterministic events or apparent
non-deterministic events must coexist with deterministic events or
apparent deterministic events we are left with the deeper recognition
of understanding that the observation of events is relative to the
vantage of the observer. This seems like elementary science to me.
And from the basis of that understanding we could also put forward the
theory that the instructions that a computer program follows are
dependent on the contextual frame of those instructions and the
immediate sources of the instructions.
The external sources of information that the computer program is
exposed to can be integrated into the programming. In order to
appreciate the effects of this kind of input, you cannot just look at
separate instructions and completely understand their effects on the
program; you would have to study the situation with a variety of
strategies.
And more importantly to this particular discussion, the program can
integrate its own algorithms into itself. Because there is no
overwhelming evidence that the human brain is not subject to the
deterministic laws of biochemistry and nature, we have to ask ourselves
what it is about our creativity that allows us to achieve some
independence of thought even though we must presumably abide by these
laws of nature. And because the nature of a computer program
incorporates a few special kinds of instructions that allow a program
to process symbolic references that can shape the algorithms that the
program will subsequently follow, we have to ask ourselves if it is
possible that there might be a similarity between the two kinds of
processes. In this question, absolute equivalency is not the deciding
factor. The question then is, are there effective similarities that
might cause you to suspect that it is no more meaningful to say that the
computer is "just following instructions," than it is to say that we
human beings are "just following instructions"?
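For whatever it is worth, the kind of symbolic shaping I have in mind
can be reduced to a toy like this (the names and rules are invented
purely for illustration):

    # A symbolic rule table: names stand for procedures, and the table
    # can be rewritten while the program is running.
    rules = {"double": lambda x: x * 2}

    def define(name, body):
        # A rule added under a symbolic name; instructions executed later
        # are shaped by an entry that did not exist when the program started.
        rules[name] = body

    def apply_rule(name, x):
        return rules[name](x)

    define("quadruple", lambda x: apply_rule("double", apply_rule("double", x)))
    print(apply_rule("quadruple", 5))   # prints 20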
I apologize for the confusing and excessive presentation. If anyone
wants to continue arguing about this, I will try to use fewer words.
Let me just say up front, Jim, that you might have used fewer words to
begin with and listed the half dozen or so points you wanted to make
just for clarity. However, let me try to summarize what I think you're
getting at:

1) Computers don't just follow instructions because instruction
sequences are modifiable and thus indeterminable.

2) If computers do just follow instructions, then intelligent beings
like people do too.

3) But this is apparently contradicted by the creativity associated
with intelligence.

4) Strict mechanical causation implies exclusively deterministic
processes?

(I'm eliminating reference to symbolic processing because I can't see
that it has any bearing. If I have omitted anything of importance,
feel free to modify the list.)

1) I can't see how the modifiability of instruction sequences implies
that computers don't follow the instructions they have wherever they
come from. Nor would the dynamic modification of instructions by the
machine itself. What might be implied is that if suitable instruction
sequences could be devised which model intelligence correctly then a
computer would exhibit characteristics of that intelligence.

2) This point really devolves to whether strict mechanical causation
applies to all processes. And I see no reason to assume it does not.
People routinely assume quantum processes violate strict mechanical
causation, but there is no evidence this is true. It is only assumed
to be either true or irrelevant. Consequently, I see no reason not to
conclude that people and all intelligent beings do the same.

3) This point pretty much implies all the arguments against strict
mechanical determinism in terms of cause and effect. Note here that I
do not say material determinism because that is a more specific claim.
For the sake of argument, let's just narrow this one point to issues
of free will and creativity.

Let's assume we have strict mechanical determinism in sentient
behavior but that we don't have strict material determinism. Then we
can say that sentient behavior will not be adequately addressed by
physics and chemistry despite the reliance of sentient intelligence on
subordinated physical processes. Then we have to ask how this is
possible while still maintaining strict mechanical determinism.

4) The answer would imply a different kind of determinism than is
conventionally studied in physics and chemistry. And the answer has to
account for the two most prominent features of sentient intelligence
noted above: free will and creative imagination. But I don't believe
any answer can be given that implies that computers or intelligent
beings don't follow the instructions they have. Nor do I believe it is
as simple as direct code modification.

Regards - Lester
j***@ixpres.com
2005-01-12 04:26:39 UTC
Permalink
You disagree with someone, ok. But don't waste your time with insults
and petty criticisms of style.
I never said computers do not follow instructions. That was your
interpretation.
You are eliminating symbolic processing because you can't see that it
has any bearing? Then you could not have understood my main point.
Everything else was intended to support that.
I have to suspect that you do not understand what I was talking about
even though you got 90% of it.
Your answer that there is a different kind of determinism in
intelligent behavior than the kind of determinism conventionally
studied in physics and chemistry is part of the conventional wisdom of
computation. I was saying that you could go a step further.

Computers do not -Just Follow Instructions- any more than they -Just
Make Decisions-.

Jim Bromer
Lester Zick
2005-01-12 16:19:01 UTC
Permalink
Post by j***@ixpres.com
You disagree with someone, ok. But don't waste your time with insults
and petty criticisms of style.
This looks like you're replying to me. If you consider a suggestion to
include an abstract foreword to an involved analysis as a petty
criticism of style, then it's hard to imagine how you take the rest of
what I had to say.
Post by j***@ixpres.com
I never said computers do not follow instructions. That was your
interpretation.
My interpretation of what you said. I believe you said that computers
don't just do what they're told. If that's different from not following
instructions, perhaps you'd like to give us your interpretation of
what you said so we won't have to rely on my interpretation of what
you said. Then next time, if there is a next time, you can give us
your interpretation of what you said first and save what you actually
said for some other time.
Post by j***@ixpres.com
You are eliminating symbolic processing because you can't see that it
has any bearing? Then you could not have understood my main point.
Everything else was intended to support that.
So, lessee. I eliminate your main point of symbolic processing because
you don't establish its relevance clearly enough for me to grasp what
you're getting at. So, I should feel my summary interpretation of what
I thought you were trying to say that you didn't provide for yourself
is somehow defective? Next time do your own thinking with an abstract.
Post by j***@ixpres.com
I have to suspect that you do not understand what I was talking about
even though you got 90% of it.
Well, 90% is pretty good at a first pass considering I was doing what
you should have done for yourself. I strongly suspect I'm only getting
about 10% here since you aren't being responsive to my analysis of
what should have been your analysis to begin with.
Post by j***@ixpres.com
Your answer that there is a different kind of determinism in
intelligent behavior than the kind of determinism conventionally
studied in physics and chemistry is part of the conventional wisdom of
computation.
Yes, well the problem isn't in the conventional wisdom but in the
implementation of the conventional wisdom in mechanical terms,
which certainly doesn't include the maxim that computers don't do
what they're told, or we certainly wouldn't be able to implement the
conventional wisdom on computers in any terms.
Post by j***@ixpres.com
I was saying that you could go a step further.
I'm beginning to think you go a step back to deciding what computers
actually do if they don't do what they're told.
Post by j***@ixpres.com
Computers do not -Just Follow Instructions- any more than they -Just
Make Decisions-.
How about if we just make a decision here and just follow instructions
to evaluate some other line of reasoning altogether.

Regards - Lester
pensul
2005-01-18 07:38:58 UTC
Permalink
.... The question then is, are there effective similarities that
might cause you to suspect that it is no more meaningful to say that the
computer is "just following instructions," than it is to say that we
human beings are "just following instructions"?
Two thoughts stand out from my reading of your post that are relevant to
answering this question. The first concerns the claim that "the program can
integrate its own algorithms into itself": it would follow that an
entirely new algorithm would result, and so the computer would have to
decide at a certain time that it no longer depends on the outside world
but is making its own decisions. The second, interrelated thought
concerns the claim "that I can program my computer to solve equations that are too
complicated for me to solve." This seems to pose unsolvable dilemmas, because
it seems obvious that a computer can only be said to do this in those cases
where it would take me too long to solve those equations, which in any case
I can solve. Putting the two thoughts together, the computer could not decide
at what time it is independent of the outside world because it has not the
objectivity to decide what constitutes "too much time" for me. The short
answer, thus, is "no".
--
"The world of existence is an emanation of the merciful attribute of God."
Abdul-Baha
http://www.costarricense.cr/pagina/ernobe