Discussion:
The Demise of Computationalism?
Stephen Harris
2007-03-02 08:57:59 UTC
Here are my notes on the subject:

As some of us know, there is one mathematical formulation of
Quantum theory, but at least 8 interpretations of what the
extremely accurate quantum mathematical predictions say about
the nature of reality.

Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable. But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI.
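To make the Turing-computability point concrete, here is a minimal sketch of my own (not drawn from any of the quoted papers): the same Boolean function realized once as explicit symbol manipulation and once as a connectionist threshold unit. Both run as ordinary sequential programs, which is all that "Turing computable" requires; the function names and weights are illustrative choices, nothing more.

```python
def symbolic_and(a, b):
    """Classical symbol manipulation: an explicit rule table."""
    rules = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
    return rules[(a, b)]

def connectionist_and(a, b):
    """A single threshold unit: weighted sum, then a step activation."""
    weights, bias = [1.0, 1.0], -1.5
    net = weights[0] * a + weights[1] * b + bias
    return 1 if net > 0 else 0

# Both compute the same extensional function...
for a in (0, 1):
    for b in (0, 1):
        assert symbolic_and(a, b) == connectionist_and(a, b)
```

That both sketches agree on every input is exactly the sense in which the two paradigms are computationally equivalent; whether they are equally *flexible* as AI tools, as the post argues, is a separate question.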

The universe is made of atoms, which is the parent category. There
are sub-categories, such as humans, rocks, dogs, and cats.
That they are all made of atoms does not mean that the sub-categories
all share the same properties when viewed from a more finely grained
perspective. Humans are assumed to have consciousness and rocks not,
even though both consist of atoms. Likewise, humans are assumed
to possess consciousness, but this doesn't mean that property is
conferred on the more general category, the universe containing
humans. So when things are compared at the same level of abstraction
for similarity or difference, it doesn't necessarily work to skip
to another, more general level and impute properties from that level
to the specific, or to assert that properties found in the specific
hold of the more general category.

For example, the law of causality is considered to be universal.
One could kick a cat, dog, or stone and it would go flying: the
law of cause and effect in action. But that wouldn't mean you
could lump cats, dogs, and stones into a more specialized
abstract level of comparison just because they are contained by a
broad, sweeping level of comparison like cause and effect.

I mention causality because it is cited as an argument by the
"implementational" connectionists (Fodor) to unify their point
of view regarding the sameness of Comp and Connectionism. This is
opposed to the "radical" connectionists, who maintain there is a
difference between Comp and Connectionism. So I quoted from the
paper by Gualtiero Piccinini what seems to me to support the
pov that causality doesn't measure up to the implementational
connectionist claims. Comp and Connectionism seem like different
tools to me when compared at the meaningful level of abstraction.
I decided to include some info on the dynamic systems approach.
Usually people don't write a paper to defend their pet philosophy
unless the rumor of its demise is fairly widespread. I
added a touch of Behaviorism so everybody can correct this post.

--------------------------------------------

"The Rumors of its [Computationalism] Demise have been Greatly
Exaggerated" David Israel, Stanford University, SRI International, USA

“There has been much talk about computationalism being dead. But as
Mark Twain said of rumors of his own death: these rumors are highly
exaggerated. Unlike Twain's case, of course, there is room for a good
deal of doubt and uncertainty as to what it is exactly that is being
claimed to have died. Whose old conception are we talking about?
Turing's? Fodor's?
I will leave the issues of the computational model of mind to the
philosophers and cognitive scientists. I will address rather some -- or
at any rate, one -- of the real shifts of focus in theoretical computer
science: away from single-processor models of computation and toward
accounts of interaction among computational processes. I will even
address the question as to whether this is a shift in paradigms or
simply (?) a quite normal evolution of interests within a paradigm.
Maybe a few philosophical morals will be drawn."

-------------------------------------------------------------------

www.syros.aegean.gr/users/tsp/conf_pub/C34/C34.doc

“In opposition to behaviorism, Cognitive Science opened the ‘black box’
while retaining behavior as the object of its investigation. It offers a
theory of what goes on inside an organism with cognitive capacities when
it engages in cognitive behavior. The dominant element of this process
is of an informational nature, but the respective activity is not
uniquely defined. The various ways this information processing activity
can be defined are tantamount to different overall approaches to
cognition (Petitot et al., 1999). For the purposes of this paper it is
useful to distinguish three major approaches:

The Cognitivist-Computationalist/Symbolic Approach

Computationalism is based on the hypothesis that the mind is supposed to
process symbols that are related together to form representations of the
environment. These representations are abstract, and their manipulations
are so deterministic that they can be implemented by a machine.
Computationalism is the metaphor of the sequential,
externally-programmed information processing machine based on the
theories of Turing (Turing, 1950) and von Neumann (von Neumann, 1958).
Therefore it implies that the key to building an adaptive system is to
produce a system that manipulates symbols correctly, as enshrined in the
Physical Symbol System Hypothesis (Newell, 1980). Computationalism has
two requirements: forms of representation and methods of search. Thus,
first one should find a way to formally represent the domain of interest
(whether it will be vision, chess, problem-solving) and then to find
some method of sequentially searching the resulting state space
(Mingers, 1995).

Consequently, these are purely formal systems and their symbols are
related by an a priori correspondence to externally imposed meaning.
They are processing information based on a static meaning structure,
which cannot be internally changed in order to adapt to the
ever-changing demands of a dynamic environment.

The Connectionist-Dynamic Approach

Connectionism argues that the mind is a system of networks that gives
rise to a dynamic behavior that can be interpreted as rules at a higher
level of description. Here, the dominant view is that mental elements
are a vector distribution of properties in dynamic networks of neurons
and the proposed solution for a proper modeling of the phenomenon
(thinking process) is the set-up of parallel distributed architectures.
Connectionism overcomes the problems imposed by the linear and
sequential processing of classical computationalism and finds
application in areas like perception or learning, where the latter is,
due to its nature, too slow to deal with the rapidity of environmental
input.

Connectionism has also borrowed the idea of emergence, from the theories
of self-organization, which has as a central point the system’s
nonlinear dynamical processing. In this context the brain is seen as a
dynamical system whose behavior is determined by its attractor
landscape. The dynamics of the cognitive substrate (matter) are taken to
be the only thing responsible for its self-organization, and
consequently for the system’s behavior (vanGelder and Port, 1995). It
should be stressed that there is an on-going debate between dynamic
systems theory and connectionist networks. The latter exhibit many of
the properties of self-organizing dynamical systems, while not
discarding the notions of computation and representation. Instead, they
find it necessary in order for the system to exhibit high-level
intelligence (Eliasmith, 1998), (Clark and Eliasmith, 2002), or even any
kind of intentional behavior (Bickhard, 1998), (Clark and Wheeler,
1998), as long as representations emerge from the interaction in a
specific context of activity.
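[SH: To illustrate the "attractor landscape" idea in the paragraph above, here is a toy Hopfield-style network of my own devising, not from the quoted paper. One pattern is stored by a Hebbian outer product; a corrupted input state then relaxes, under simple threshold dynamics, back to the stored pattern, which acts as an attractor. All numbers here are arbitrary illustrative choices.]

```python
import numpy as np

# Store one bipolar pattern via the Hebbian outer-product rule.
pattern = np.array([1, -1, 1, -1, 1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def relax(state, sweeps=10):
    """Asynchronously update units until the state settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            h = W[i] @ state            # local field on unit i
            state[i] = 1 if h >= 0 else -1
    return state

# A noisy version of the pattern (one bit flipped) falls back
# into the attractor basin of the stored pattern.
noisy = np.array([1, -1, -1, -1, 1])
assert np.array_equal(relax(noisy), pattern)
```

The stored pattern sits at the bottom of an energy basin; classification of the environment by "reaching an attractor state", as the paper puts it, is this relaxation process scaled up.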

On the other hand, Fodor (Fodor and Pylyshyn, 1988) among others, insists
that the form of the computation, whether logico-syntactic or
connectionist, is merely a matter of implementation, and in addition,
the implementation of computation, whether classical or connectionist,
lies in causal processes. The only real difference between this form of
connectionism and computationalism is that the former uses a vector
algebra, rather than scalar, to manipulate its symbols (representations)
(Smolensky, 1988). In this perspective and in relation to intrinsic
creation of meaning, connectionist architectures cannot evolve and be
adaptive. [SH: Seems like a fairly major difference to me.]
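[SH: A toy illustration of my own of the Smolensky "vector algebra, rather than scalar" contrast just quoted: atomic symbols compare all-or-nothing, while vector representations support graded similarity, which is one reason the difference is more than mere implementation. The particular vectors are made-up numbers for illustration only.]

```python
import math

# Scalar/atomic symbols: identity is all-or-nothing.
assert ("cat" == "cat") and ("cat" != "dog")

# Vector symbols: points in a space, with graded similarity.
cat = [0.9, 0.1, 0.4]
dog = [0.8, 0.2, 0.5]
rock = [0.0, 0.9, 0.0]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# 'cat' resembles 'dog' more than it resembles 'rock' --
# a fact the atomic encoding cannot express at all.
assert cosine(cat, dog) > cosine(cat, rock)
```

Nothing in the atomic encoding changes gradually under learning; the vectors can, which is the kind of adaptivity the passage is arguing about.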

The Emergent-Enactive Approach

Advocates of the pure dynamic approach (Varela et al., 1991), argue that
connectionism remains basically representational, as it still assumes a
pre-given independent world of objective and well-defined problems.
These problems seek the proper set of representations together with an
efficient mapping of one set of representations onto another.

On the contrary, the emergent-enactive view, although it shares with
connectionism a belief in the importance of dynamical mechanisms and
emergence, disputes the relevance of representations as the instrument
of cognition (Mingers, 1995). Instead, in the enactive framework,
cognitive processes are seen as emergent or enacted by situated agents,
which drive the establishment of meaningful couplings with their
surroundings. Emergent cognitive systems are self-organized by a global
co-operation of their elements, reaching an attractor state which can be
used as a classifier for their environment. In that case, the
distinctions thus produced are not purely symbolic, therefore meaning is
not a function of any particular symbols, nor can it be localized in
particular parts of the network. Indeed, symbolic representation
disappears completely – the productive power is embodied within the
network structure, as a result of its particular history (Beer, 2000).
The diversity of their ability for classification is dependent on the
richness of their attractors, which are used to represent events in
their environments. Therefore, their meaning evolving threshold cannot
transcend their attractor’s landscape complexity, hence, it cannot
provide us with a framework for meaning-based evolution.

It is almost globally accepted that purely symbolic approaches cannot
give answers to issues related to the emergence of new meaning
structures and levels of organization, which justifies the existence and
the role of anticipation in adaptive systems (Collier, 1999). On the
other hand, although the emergent dynamical mechanisms have more
potential for self-organization, there are also some issues in human
cognition (such as high-level learning, long-term memory, the stability
of old patterns of neuronal activity in the face of new ones, etc.) that
cannot be satisfactorily explained within these frameworks (Cariani,
2001). Moreover, the functionality of an adaptive system must be
examined in a framework which will justify all or most of its phenomenal
aspects, as these emerge from its striving for adaptable interaction
with its environment."

--------------------------------------

http://www.umsl.edu/~piccininig/Computation%20Without%20Representation%2016.htm
Computation without Representation by Gualtiero Piccinini

“The only alternative to the semantic view that is clearly stated in the
philosophical literature is that computational states are individuated
by their causal properties (Chalmers 1996, Copeland 1996, Scheutz 1999).
But causal individuation, without constraints on which causal powers
are relevant and which irrelevant to computation, is too weak. It does
not support a robust notion of computational explanation—the kind of
explanation that is needed to explain the capacities of computers,
brains, and other putative computing mechanisms in terms of their
putative computations.
Supporters of the causal individuation of computational states readily
admit that under their view, every state is a computational state and
every causal process is a computation. But this is tantamount to
collapsing the notion of computation into the notion of causal process.”

-----------------------------------

Regards,
Stephen
Josip Almasi
2007-03-02 15:40:58 UTC
Post by Stephen Harris
As some of us know, there is one mathematical formulation for
Quantum theory, but at least 8 interpretations, what the
extremely accurate quantum mathematical prediction says about
the nature of reality.
I just recently learned that Deutsch invented the quantum computer as a
tool to prove the many-worlds interpretation. Here's the interview:
http://www.wired.com/news/technology/0,72734-0.html?tw=wn_index_26
I don't really understand how such a thing can be a proof... but if he's
right - so long 7/8.
Post by Stephen Harris
Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable. But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI.
Right.
But what makes us apply stamps to everything, like, this is Comp,
*stamp*, this is Conn, *stamp*...?:)
IMHO this doesn't emerge from individuals but from social behaviour: we
have to define words so we know what we're talking about.
IOW, symbols come from society.
AFAIK this is consistent with some mind models, like Lacan's.
Now, all these views you listed focus on knowledge representation
rather than knowledge acquisition. Conn views networks in one mind
instead of a network of minds, Emergent-Enactive is focused on the
environment but doesn't mention society... and while our brains have
been just about the same for some 75,000 years, progress is achieved by
improving ways of communication: language, writing, press, internet...
Post by Stephen Harris
The only real difference between this form of
connectionism and computationalism is that the former uses a vector
algebra, rather than scalar, to manipulate its symbols (representations)
Why not, say, class algebra?:)
Post by Stephen Harris
(Smolensky, 1988). In this perspective and in relation to intrinsic
creation of meaning, connectionist architectures cannot evolve and be
adaptive. [SH: Seems like a fairly major difference to me.]
Well yes, it's a special case of Comp, just some specific
algorithm/knowledge-base design.
OTOH it may or may not be able to derive 'meaning'.
Say we make a really mean NN able to recognize objects. We show it a
couple of trees and eventually it comes up with outputs 'tree', 'plant',
etc. every time we show it any tree.
What has 'meaning' to do with it?
Q: define a chair.
A: a device to sit on.
So, AI can never 'understand' the 'meaning' of a chair. It could
recognize it, and could maybe even define it, provided it sees us
sitting a lot... but could it invent it?;)
Post by Stephen Harris
The Emergent-Enactive Approach
Yeah that's my favourite;)
Lets see what it takes for such agents to make themselves a language...
though it might be easier if we just program them some language:)

Thanks for interesting reading.

Regards...
Stephen Harris
2007-03-02 22:53:40 UTC
Post by Josip Almasi
Post by Stephen Harris
As some of us know, there is one mathematical formulation for
Quantum theory, but at least 8 interpretations, what the
extremely accurate quantum mathematical prediction says about
the nature of reality.
I just recently learned that Deutsch invented quantum computer as a tool
http://www.wired.com/news/technology/0,72734-0.html?tw=wn_index_26
I don't really understand how such a thing can be a proof... but if he's
right - so long 7/8.
The Baez view is that only one universe is physically realized so
that there is no proof because in principle there is no observation
of an alternate universe. I am not sure where "virtual" fits in.
I found this part of the article especially interesting, and I wonder
what artificial intelligence program was "running on quantum hardware".

"WN: So what prompted you to start thinking about quantum computing?

Deutsch: This goes back a long way before I even thought of general
purpose quantum computing. I was thinking about the relationship between
computing and physics.... This was back in the 1970s....

It had been said, ever since the parallel universes theory had been
invented by Everett in the 1950s, that there's no experimental
difference between it and the various (theories), like the Copenhagen
interpretation, that try to deny that all but one of the universes exist.

Although it had been taken for granted that there was no experimental
difference, in fact, there is -- provided the observer can be analyzed
as part of the quantum system. But you can only do that if the observer
is implemented on quantum hardware, so I postulated this quantum
hardware that was running an artificial intelligence program, and as a
result was able to concoct an experiment which would give one output
from an observer's point of view if the parallel universes theory was
true, and a different outcome if only a single universe existed.

This device that I postulated is what we would now call a quantum
computer, but because I wasn't particularly thinking about computers, I
didn't call it that, and I didn't really start thinking about quantum
computation as a process until several years later. That led to my
suggesting the universal quantum computer and proving its properties in
the mid-'80s."
Post by Josip Almasi
Post by Stephen Harris
Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable. But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI.
Right.
But what makes us to apply stamps to everything, like, this is Comp,
*stamp*, this is Conn *stamp*...?:)
IMHO this doesn't emerge from individuals but from social behaviour: we
have to define words so we know what we talk about.
IOW symbols come from society.
Well, way back in the primitive days before language, wouldn't
you say words were invented to describe what individuals perceived,
and because these objects were mutually perceived, this enabled
communication, from which society emerged? Abstract concepts like
Truth could arise from some con cavemen using words which did not
quite fit or match the mutual consensus on how to describe reality.
Liar, Liar, pelts on fire! Perhaps "culture" rather than "society".
Post by Josip Almasi
AFAIK this is consistent with some mind models, like Lacan.
Now, all these views you enlisted focus on knowledge representation
rather than knowledge acquisition. Conn views networks in one mind
instead of network of minds, Emergent-Enactive is focused on environment
but doesn't mention society... and while our brains are just about the
same for some 75000 years, progress is achieved by improving ways of
communication: language, writing, press, internet...
Post by Stephen Harris
The only real difference between this form of connectionism and
computationalism is that the former uses a vector algebra, rather than
scalar, to manipulate its symbols (representations)
Why not, say, class algebra?:)
I don't know, perhaps because scalars don't respond well to being
given directions, or because both are "defined by how they are
made, not by what they are". Who I am is not defined by what I do?

www.physics.gatech.edu/people/faculty/finkelstein/Quantum050811.pdf

"We generalize how Boole developed his class algebra. In his first
pamphlet on the subject he associated a class with a defining mental
"elective act" [1]. Therefore his are mental classes.
Ours are physical, so we associate a physical class with a defining
physical selective operation. This operation is sometimes associated
with an ensemble, consisting of the systems it might produce, called
virtual since its members need not actually exist when the operation
and class are discussed. The members of this ensemble are defined by
how they are made, not by what they are. Selection and extraction of
a sample from a real ensemble, however, is a useful approximation to
an ideal input process, provided that interactions between the members
of the real ensemble can be neglected, a weak-beam approximation."
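[SH: For readers unfamiliar with it, Boole's class algebra mentioned in the quote can be stated in a couple of lines. This is my paraphrase of the standard textbook presentation, not of Finkelstein's paper. An elective symbol x selects the members of a class from the universe 1, and selecting twice changes nothing:]

```latex
% Boole's idempotent law for elective symbols:
\[
  x^2 = x \quad\Longrightarrow\quad x(1 - x) = 0 ,
\]
% i.e. no individual lies in both a class $x$ and its complement $1-x$.
% The only numerical solutions are $x = 0$ and $x = 1$, which is Boole's
% bridge from class algebra to two-valued logic.
```

The quoted paper generalizes this by replacing Boole's mental "elective act" with a physical selection operation, so the classes become physical rather than mental.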
Post by Josip Almasi
Post by Stephen Harris
(Smolensky, 1988). In this perspective and in relation to intrinsic
creation of meaning, connectionist architectures cannot evolve and be
adaptive. [SH: Seems like a fairly major difference to me.]
Well yes, it's special case of Comp, just some specific
algorithm/knowledge base design.
OTOH it may or it may not be able to derive 'meaning'.
Say, we make a really mean NN able to recognize objects. We show it a
couple of trees and eventually it comes up with outputs 'tree', 'plant'
etc every time we show it any tree.
What has 'meaning' to do with it?
Q: define a chair.
A: device to sit on.
So, AI can never 'understand' the 'meaning' of a chair. It could
recognize it, and could maybe even define it, provided it sees us
sitting alot... but, could it invent it?;)
Phenomenology/Husserl bind intentionality to meaning with original
intentionality and a derived intentionality (for programs).
Post by Josip Almasi
Post by Stephen Harris
The Emergent-Enactive Approach
Yeah that's my favourite;)
Lets see what it takes for such agents to make themselves a language...
though it might be easier if we just program them some language:)
Shouldn't they have some kind of perception of reality so that
there are objects which have a need to be identified and defined
by language leading to cooperative action in regard to such objects?
"Run, run, sabretooth after me!"
Post by Josip Almasi
Thanks for interesting reading.
Regards...
I particularly enjoyed your comment about class algebra, which
led to me reading about loop and group character theories.
Don Geddis
2007-03-03 00:09:13 UTC
Post by Josip Almasi
http://www.wired.com/news/technology/0,72734-0.html?tw=wn_index_26
Deutsch: This goes back a long way before I even thought of general purpose
quantum computing. I was thinking about the relationship between computing
and physics.... This was back in the 1970s....
[...]
But you can only do that if the observer is implemented on quantum
hardware, so I postulated this quantum hardware that was running an
artificial intelligence program, and as a result was able to concoct an
experiment which would give one output from an observer's point of view if
the parallel universes theory was true, and a different outcome if only a
single universe existed.
I wonder what artificial intelligence program was "running on quantum
hardware".
Deutsch was only describing a thought experiment. In the 1970s there
weren't even quantum computers at all, much less ones running an AI program.

It was merely interesting to propose at least a theoretical experiment that
might distinguish between different interpretations of quantum mechanics.

But no real AI program existed, or was even proposed.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
After I die, and they're reading a list of my sins, I hope I don't fall asleep.
Sorry, heard it all before. -- Deep Thoughts, by Jack Handey [1999]
Josip Almasi
2007-03-03 14:16:43 UTC
Post by Stephen Harris
The Baez view is that only one universe is physically realized so
that there is no proof because in principle there is no observation
of an alternate universe. I am not sure where "virtual" fits in.
Well, it sure makes sense. But then the very basic assumption of
many-worlds (no quantum collapse at all) disappears.
Post by Stephen Harris
I found this part of the article especially interesting, and I wonder
what artificial intelligence program was "running on quantum hardware".
Yep:) I think it comes from von Neumann. Like, there's no experiment
without intelligent observers, and our minds are quantum systems, so
expect interference.
So a programmed observer on a quantum computer could eliminate the von
Neumann/Wigner interpretation.
I don't think Deutsch had any specific program in mind.
Post by Stephen Harris
Post by Josip Almasi
IOW symbols come from society.
Well way back in the primitive days before language, wouldn't
you say words were invented to describe what individuals perceived
and because these objects were mutually perceived, this enabled
communication from which society emerged? Abstract concepts like
Truth could arise from some con cavemen using words which did not
quite fit or match the mutual consensus of how to describe reality.
Liar, Liar, pelts on fire! Perhaps "culture" rather than "society".
Actually I meant 'social behaviour'. To be more specific, mirror neurons
may be crucial for language acquisition.
http://en.wikipedia.org/wiki/Mirror_neuron
http://www.edge.org/3rd_culture/ramachandran/ramachandran_p1.html
With them, I can reach for this and that and something else and say
'take' each time, and you'll get the word eventually.
Without them, you just wouldn't get it.

Well, specific functions are not well known yet - we'll see.
But it smells like a thing that AI attempts were missing all along.

As for me, I think human individuals are overestimated big time.
Post by Stephen Harris
Post by Josip Almasi
Why not, say, class algebra?:)
I don't know, perhaps because scalars don't respond well to being
given directions, or because both are "defined by how they are
made, not by what they are". Who I am is not defined by what I do?
Well... according to shrinks, 'I' is defined by who I think I am.
This thinking is influenced by our parents and then by other people.
So 'I' may be skinny or a fatass no matter what I do.
Post by Stephen Harris
Post by Josip Almasi
Lets see what it takes for such agents to make themselves a
language... though it might be easier if we just program them some
language:)
Shouldn't they have some kind of perception of reality so that
there are objects which have a need to be identified and defined
by language leading to cooperative action in regard to such objects?
But if each agent grows an entire knowledge base for itself, there's no
particular reason why their KBs would be compatible. Slight differences
in perception or environment conditions can grow different KBs.
And no KB compatibility, no language.
So, there has to be some additional constraints, something has to be
programmed. What exactly, it's hard to say. But if we're to judge by
human brain...:)
Post by Stephen Harris
I particularly enjoyed your comment about class algebra, which
led to me reading about loop and group character theories.
Glad you enjoyed:)
I just recently rediscovered it myself... I had it in discrete math
lectures, but that was so long ago that I'd forgotten such a thing
exists:))

Regards...
Robin Faichney
2007-03-02 19:31:18 UTC
You guys might be interested in this:

“The Mind as Neural Software? Revisiting Functionalism,
Computationalism, and Computational Functionalism” by Gualtiero
Piccinini, University of Missouri – St. Louis.

ABSTRACT: Defending or attacking either functionalism or
computationalism requires clarity on what they amount to and what
evidence counts for or against them. My goal here is not to evaluate
their plausibility. My goal is to formulate them and their
relationship clearly enough that we can determine which type of
evidence is relevant to them. I aim to dispel some sources of
confusion that surround functionalism and computationalism, recruit
recent philosophical work on mechanisms and computation to shed light
on them, and clarify how functionalism and computationalism may or may
not legitimately come together.

http://www.petemandik.com/blog/2007/02/01/pms-010-gualtiero-piccinini-the-mind-as-neural-software-revisiting-functionalism-computationalism-and-computational-functionalism
(sorry if that's broken up)
<http://www.robinfaichney.org/>
Stephen Harris
2007-03-02 23:06:03 UTC
“The Mind as Neural Software? Revisiting Functionalism,
Computationalism, and Computational Functionalism” by Gualtiero
Piccinini, University of Missouri – St. Louis.
ABSTRACT: Defending or attacking either functionalism or
computationalism requires clarity on what they amount to and what
evidence counts for or against them. My goal here is not to evaluate
their plausibility. My goal is to formulate them and their
relationship clearly enough that we can determine which type of
evidence is relevant to them. I aim to dispel some sources of
confusion that surround functionalism and computationalism, recruit
recent philosophical work on mechanisms and computation to shed light
on them, and clarify how functionalism and computationalism may or may
not legitimately come together.
http://www.petemandik.com/blog/2007/02/01/pms-010-gualtiero-piccinini-the-mind-as-neural-software-revisiting-functionalism-computationalism-and-computational-functionalism
(sorry if that's broken up)
<http://www.robinfaichney.org/>
Yes, I read a couple of papers by him leading up to the
posting of this thread and he impressed me as having quite
a bit of original insight. He is represented in that thread
"What does it mean for a machine to understand something?"

Syntax and Semantics,
Stephen
f***@msn.com
2007-03-03 02:49:19 UTC
"The Mind as Neural Software? Revisiting Functionalism,
Computationalism, and Computational Functionalism" by Gualtiero
Piccinini, University of Missouri - St. Louis.
ABSTRACT: Defending or attacking either functionalism or
computationalism requires clarity on what they amount to and what
evidence counts for or against them. My goal here is not to evaluate
their plausibility. My goal is to formulate them and their
relationship clearly enough that we can determine which type of
evidence is relevant to them. I aim to dispel some sources of
confusion that surround functionalism and computationalism, recruit
recent philosophical work on mechanisms and computation to shed light
on them, and clarify how functionalism and computationalism may or may
not legitimately come together.
http://www.petemandik.com/blog/2007/02/01/pms-010-gualtiero-piccinini...
(sorry if that's broken up)
<http://www.robinfaichney.org/>
I kept waiting for the shoe to drop. I actually agree with most of what
he wrote (maybe all of it; I'd have to spend more time to tell).

I don't see much reason to fight over issues related to the mind that
cannot be investigated empirically.

It may be confusing to many that I'm not a computationalist but am a
functionalist, in the sense that I believe much of what the mind/brain
does can be handled by computational explanations, but the added
capability had by some components of the brain is the production of
qualia, and this capability leads to the mind. It is not clear to me
if the mind takes on the role of Leibniz's monads, where determinism
leaves the mind as a viewer of processes that could just as well happen
without the mind, or if it plays an active role in behavior, that is,
there is no way to implement the functional capability without the
mental capability.
Don Geddis
2007-03-02 23:59:46 UTC
[O]ne can compare Computationalism (Comp) and Connectionism as both being
Turing computable. But that doesn't mean that they both enjoy the same
flexibility as tools for the pursuit of AI.
I don't actually disagree with much of what you write or quote here.

But I'll say I think you're using a limited definition of the word
"Computationalism". You've basically equated it with GOFAI, where
declarative linguistic statements are manipulated by inference engines.
You can indeed contrast such an architecture with a connectionist
architecture.

But that's not the sense of the word that is most relevant to this newsgroup,
e.g. when we discuss the Church-Turing thesis, or the Turing Test, or
whether programs could be conscious. In that context, "computationalism"
is meant as the claim of Strong AI, that programs on digital computers can
(in principle) do all the cognitive things that human minds do.

For example:
http://en.wikipedia.org/wiki/Computationalism

In this, more relevant context, "Computationalism" is an umbrella category,
and "Connectionism" is merely only of the possible types of Computationalism.

So it doesn't make sense to contrast the two.
www.syros.aegean.gr/users/tsp/conf_pub/C34/C34.doc
For the purposes of this paper it is useful to distinguish three major
The Cognitivist-Computationalist/Symbolic Approach
Computationalism is based on the hypothesis that the mind is supposed to
process symbols that are related together to form representations of the
environment. These representations are abstract, and their manipulations are
so deterministic that they can be implemented by a machine.
OK, so far.
Computationalism is the metaphor of the sequential, externally-programmed
information processing machine based on the theories of Turing (Turing,
1950) and von Neumann (von Neumann, 1958). Therefore it implies that the key
to building an adaptive system is to produce a system that manipulates
symbols correctly, as enshrined in the Physical Symbol System Hypothesis
(Newell, 1980). Computationalism has two requirements: forms of
representation and methods of search. Thus, first one should find a way to
formally represent the domain of interest (whether it will be vision,
chess, problem-solving) and then to find some method of sequentially
searching the resulting state space (Mingers, 1995).
This has become FAR too narrow. That describes merely one possible
computational approach from among many.
Consequently, these are purely formal systems and their symbols are related
to an a priori correspondence with externally imposed meaning. They are
processing information based on a static meaning structure, which cannot be
internally changed in order to adapt to the ever-changing demands of a
dynamic environment.
Again, while there have been programs that act in this way, it isn't really
appropriate to label only those programs as "Computational".
The Connectionist-Dynamic Approach
Connectionism argues that the mind is a system of networks that gives rise to
a dynamic behavior that can be interpreted as rules at a higher level of
description.
This version can indeed be contrasted with the description above. But NOT
with the label above, which was "The Cognitivist-Computationalist/Symbolic
Approach". That is not an appropriate label for the description they gave.
On the other hand, Fodor (Fodor and Pylyshyn, 1988), among others, insists that
the form of the computation, whether logico-syntactic or connectionist, is
merely a matter of implementation
Right.

In any case, I think your title for this thread ("The Demise of
Computationalism?") is ill-chosen. It certainly doesn't address anything
I've ever posted about Computationalism and consciousness and Strong AI.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
When you want to accomplish something, there are different stages that you go
through. The first is to imagine yourself doing whatever it is. The second is
to light up a big cigar, because mister, she's as good as done.
-- Deep Thoughts, by Jack Handey [1999]
Stephen Harris
2007-03-03 06:58:39 UTC
Permalink
Post by Don Geddis
[O]ne can compare Computationalism (Comp) and Connectionism as both being
Turing computable. But that doesn't mean that they both enjoy the same
flexibility as tools for the pursuit of AI.
I don't actually disagree with much of what you write or quote here.
But I'll say I think you're using a limited definition of the word
"Computationalism". You've basically equated it with GOFAI, where
declarative linguistic statements are manipulated by inference engines.
You can indeed contrast such an architecture with a connectionist
architecture.
But that's not the sense of the word that is most relevant to this newsgroup,
e.g. when we discuss the Church-Turing thesis, or the Turing Test, or
whether programs could be conscious. In that context, "computationalism"
is meant as the claim of Strong AI, that programs on digital computers can
(in principle) do all the cognitive things that human minds do.
http://en.wikipedia.org/wiki/Computationalism
In this, more relevant context, "Computationalism" is an umbrella category,
and "Connectionism" is merely only of the possible types of Computationalism.
So it doesn't make sense to contrast the two.
That is not what your wiki link to Computationalism says; it does say,
"Computationalism is a philosophy of mind theory"...

The wiki webpage you linked to doesn't mention Connectionism, so here is
the definition of Connectionism:
http://philosophy.uwaterloo.ca/MindDict/connectionism.html

"connectionism - A computational approach to modeling the brain which
relies on the interconnection of many simple units to produce complex
behavior."

"This commitment to a 'subconceptual' level of description of cognitive
processes is a direct rejection of the symbolicist or GOFAI approach to
human cognition."
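The "many simple units producing complex behavior" idea in that definition can be made concrete with a hand-wired toy network (my own sketch, not from the linked dictionary): three threshold units that together compute XOR, a function no single threshold unit can compute.

```python
def step(x):
    """Threshold activation: the 'simple unit' of the definition."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit detects "at least one input on" (OR),
    # another detects "both inputs on" (AND).
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit: OR but not AND, i.e. exclusive-or.
    return step(h_or - h_and - 0.5)

# Truth table: (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The weights here are fixed by hand; in actual connectionist practice they would of course be learned, but the point stands that the complexity lives in the interconnection, not in any single unit.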

I'm contrasting a Philosophy of Mind theory, which is elsewhere
described as a theory schema because it isn't complete enough to
qualify as a theory, with computational AI approaches. The
philosophy may or may not be applied to those approaches.

"symbolicism - An approach to understanding human cognition that is
committed to language-like symbolic processing as the best method of
explanation." ..
"The Physical Systems Symbol Hypothesis of Newell and Simon (1976)
formalized the commitments of this sort of approach to modeling cognition:

Natural cognitive systems are intelligent in virtue of being
physical symbol systems of the right kind."
Post by Don Geddis
www.syros.aegean.gr/users/tsp/conf_pub/C34/C34.doc
For the purposes of this paper it is useful to distinguish three major approaches.
The Cognitivist-Computationalist/Symbolic Approach
Computationalism is based on the hypothesis that the mind is supposed to
process symbols that are related together to form representations of the
environment. These representations are abstract, and their manipulations are
so deterministic that they can be implemented by a machine.
OK, so far.
Computationalism is the metaphor of the sequential, externally-programmed
information processing machine based on the theories of Turing (Turing,
1950) and von Neumann (von Neumann, 1958). Therefore it implies that the key
to building an adaptive system is to produce a system that manipulates
symbols correctly, as enshrined in the Physical Symbol System Hypothesis
(Newell, 1980). Computationalism has two requirements: forms of
representation and methods of search. Thus, first one should find a way to
formally represent the domain of interest (whether it will be vision,
chess, problem-solving) and then to find some method of sequentially
searching the resulting state space (Mingers, 1995).
This has become FAR too narrow. That describes merely one possible
computational approach from among many.
OK, here is another definition of Comp written by a former editor of an
AI journal, a supporter of Computationalism. He starts off the article
with a complaint about Time magazine not giving more play to Cognitive
scientists, and that "the computational hypothesis is the discipline's
[Cognitivism] foundational assumption."

http://scholar.lib.vt.edu/ejournals/SPT/v5n2/dietrich.html

"Before we go any further, though, I want to say for the record what
the computational hypothesis is. 2. The computational hypothesis.

The computational hypothesis (also known as computationalism) is
a version of functionalism where all the functions are computable. It
claims that cognition is the execution of Turing-computable functions
defined over various kinds of representational entities. Period.

[SH: I'll quote about representationalism from another source:
...[T]here exists (in the mathematical sense) an interpretation function
that systematically maps physical state transitions onto the arguments
and values of the product function. And it is precisely this
interpretation function that reveals physical states of the system as
representations of the arguments and values of the function satisfied,
i.e. as numerals. Any physical system with the right representational
states will therefore count as a multiplier." [for example]
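That interpretation-function idea can be sketched in a few lines (all state names and the transition table here are invented for illustration): a "physical" system given as a bare state-transition table counts as a multiplier once an interpretation maps its states onto numbers.

```python
# Hypothetical "physical" system: states are opaque tokens, and the
# dynamics is just a lookup table of state-pair transitions.
physics = {("s1", "s2"): "s2", ("s2", "s3"): "s6", ("s2", "s2"): "s4"}

# Interpretation function: maps physical states onto numerals. Under it,
# the transitions above realize 1*2=2, 2*3=6, 2*2=4.
interp = {"s1": 1, "s2": 2, "s3": 3, "s4": 4, "s6": 6}

def as_multiplier(a, b):
    """Read the physical dynamics through the interpretation function."""
    rev = {v: k for k, v in interp.items()}   # numbers -> physical states
    return interp[physics[(rev[a], rev[b])]]  # run physics, reinterpret

print(as_multiplier(2, 3))  # -> 6
```

Nothing in `physics` is arithmetical; it is the interpretation function alone that reveals the system "as" a (partial) multiplier, which is exactly the point of the quoted passage.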

SH: "Sequential" has been traditionally part of the definition. It
could be expanded to include parallel. Other than that the two
definitions seem pretty much the same to me and not too narrow.
Comp does not include non-computable functions.
"As far as the computationalist is concerned, if a function is not
computable, and if cognitive processes turn out to be computational
ones nevertheless, then it just isn't part of cognition."

"A working hypothesis of computationalism is that Mind arises, not from
the intrinsic nature of the causal properties of particular forms of
matter, but from the organization of matter. If this hypothesis is
correct, then a wide range of physical systems (e.g. optical, chemical,
various hybrids, etc.) should support Mind, especially computers, since
they have the capability to create/manipulate organizations of bits of
arbitrary complexity and dynamics. In any particular computer, these
bit patterns are quite physical, but their particular physicality is
considered irrelevant (since they could be replaced by other physical
substrata).

When an organizational correspondence is set up between patterns in a
computer and patterns in some other physical system, we tend to call the
computer patterns "symbols". The correspondence, however, is usually
only to some level of organization."
Post by Don Geddis
Consequently, these are purely formal systems and their symbols are related
to an a priori correspondence with externally imposed meaning. They are
processing information based on a static meaning structure, which cannot be
internally changed in order to adapt to the ever-changing demands of a
dynamic environment.
Again, while there have been programs that act in this way, it isn't really
appropriate to label only those programs as "Computational".
They are labeled as symbolic, which is a modeling approach
to cognition, not a philosophical commitment to Computationalism.
Programs are going to produce the same output whether or not
the programmer believes that "the mind is a program."
Post by Don Geddis
The Connectionist-Dynamic Approach
Connectionism argues that the mind is a system of networks that gives rise to
a dynamic behavior that can be interpreted as rules at a higher level of
description.
This version can indeed be contrasted with the description above. But NOT
with the label above, which was "The Cognitivist-Computationalist/Symbolic
Approach". That is not an appropriate label for the description they gave.
It matches the AI editor's description, except that the latter included
the standard "sequential" adjective, which is quite common.
Post by Don Geddis
On the other hand, Fodor (Fodor and Pylyshyn, 1988), among others, insists that
the form of the computation, whether logico-syntactic or connectionist, is
merely a matter of implementation
Right.
In any case, I think your title for this thread ("The Demise of
Computationalism?") is ill-chosen. It certainly doesn't address anything
I've ever posted about Computationalism and consciousness and Strong AI.
The thread was entitled after the first dotted line section of my
post from the title of David Israel's paper:

"The Rumors of its [Computationalism] Demise have been Greatly
Exaggerated" David Israel, Stanford University, SRI International,USA

“There has been much talk about the computationalism being dead. ...
But as Mark Twain said of rumors of his own death: these rumors are
highly exaggerated."

SH: This is an example of a literary allusion similar to the example
Turing used for the Turing Test. You have failed it by not seeing the
connection to: "Shall I compare thee to a summer's day." It is fairly
common to take a well-known quote and build a title around it.

Also this thread wasn't addressed to you. I've gotten tired of having
to explain everything in detail to you, and then again in greater
detail. I don't think you have the reading comprehension of a PhD.

http://scholar.lib.vt.edu/ejournals/SPT/v5n2/dietrich.html

"It is hard to find two cognitive scientists who agree on any of the
details of a theory of mind.

Computationalism is only a foundational hypothesis. Computationalism
does not get specific about which particular functions cognition is.
Indeed we aren't sure which functions cognition is. Therefore,
computationalism does not tell us what models to build, nor which
experiments to run. All computationalism gives us is a framework
within which to work.

Computationalism, as with computation on garden variety computers,
is not committed to mental representations (internal encodings of
information) of any particular variety. Rather, computationalism
is compatible with many different kinds of representations, from
numerical quantities to propositional nodes in a semantic network."

SH: And neither are the approaches committed to Computationalism.
It is cognitivism which has whatever degree of existing commitment.
The Symbolic approach has a natural fit because it uses symbols
which are used in defining Computationalism.

http://scholar.lib.vt.edu/ejournals/SPT/v5n2/dietrich.html

"In sum, assuming computationalism leaves all the hard work left to do.
Which means it is not really a theory. Computationalism is a theory
schema. We still need to roll up our sleeves and get down to the
difficult business of developing a theory of mind. Computationalism
does tell us what this theory will look like -- but only broadly."

"Computationalism is attacked from without and from within cognitive
science. The vigor of the attacks, the large number of researchers and
scholars involved," ...

SH: A supporter of Comp in a position to know, the editor of an AI journal,
describes what is clearly a decline of Comp in the AI community, though
it is not yet a report of Comp's demise.

He thinks the arguments/criticisms of Computationalism are weak. But to
a critical thinker the Comp description more closely resembles belief
that mystical engravings (symbols) on an amulet confer magical powers.

As long as you know the right order and pronunciation of arcane chants,

Stephen

In case you couldn't tell I've stopped responding to most of your
posts because I don't think it is a profitable experience for me.
Neil W Rickert
2007-03-03 22:52:14 UTC
Permalink
Okay, I'm not quoting anything that Don said, but I'll leave the
attribution since Don did give a wiki citation that is mentioned
here.
Post by Stephen Harris
That is not what your wiki link to Computationalism says; it does say,
"Computationalism is a philosophy of mind theory"...
The wiki webpage you linked to doesn't mention Connectionism, so here is
the definition of Connectionism:
http://philosophy.uwaterloo.ca/MindDict/connectionism.html
"connectionism - A computational approach to modeling the brain which
relies on the interconnection of many simple units to produce complex
behavior."
I won't comment on the wiki page. However, computationalism is
the claim that cognition is a product of computation. There is
no implication that the symbols used in the computation are the
same as human level concepts. The symbols of the computation
could be at a far lower level. Connectionism, as often described,
is a version of computationalism, though there might be some
variants of connectionism that are outside the constraints of what
computationalism implies.

I'm personally not a proponent of computationalism, though I probably
am some kind of functionalist. However, I think the reports of
the demise of computationalism are premature. It still seems to
be widely supported. Traditional epistemology very likely implies
computationalism, though I won't try to argue that.
--
DO NOT REPLY BY EMAIL - The address above is a spamtrap.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115
Stephen Harris
2007-03-04 06:46:56 UTC
Permalink
Post by Neil W Rickert
Okay, I'm not quoting anything that Don said, but I'll leave the
attribution since Don did give a wiki citation that is mentioned
here.
Post by Stephen Harris
That is not what your wiki link to Computationalism says; it does say,
"Computationalism is a philosophy of mind theory"...
The wiki webpage you linked to doesn't mention Connectionism, so here is
the definition of Connectionism:
http://philosophy.uwaterloo.ca/MindDict/connectionism.html
"connectionism - A computational approach to modeling the brain which
relies on the interconnection of many simple units to produce complex
behavior."
I won't comment on the wiki page. However, computationalism is
the claim that cognition is a product of computation. There is
no implication that the symbols used in the computation are the
same as human level concepts. The symbols of the computation
could be at a far lower level. Connectionism, as often described,
is a version of computationalism, though there might be some
variants of connectionism that are outside the constraints of what
computationalism implies.
I'm personally not a proponent of computationalism, though I probably
am some kind of functionalist. However, I think the reports of
the demise of computationalism are premature. It still seems to
be widely supported. Traditional epistemology very likely implies
computationalism, though I won't try to argue that.
First of all, I will acknowledge I have a lot of respect for your
opinions because you are well-read and a critical thinker.

Harnad also says he doesn't see any reason why one couldn't be a
connectionist and also believe in Computationalism = Comp.

NR: "Connectionism [=Conn] often described,
Post by Neil W Rickert
is a version of computationalism, though there might be some
variants of connectionism that are outside the constraints of
what computationalism implies."
SH: It is the wording that I question. Doesn't Comp assert that
the right program (which is computable) running on a computer
will instantiate a mind? Since both Comp and Conn are TM
equivalent, Computationalism could be applied to either.

But there are also a lot of people who think that a highly
useful AI program can just be built, somewhat as a
grandmaster chess program or other expert systems were
built, which needed no assumption of a mind to make them work.

I think that Conn is an approach and is independent of a belief
in what is described as a philosophy=Computationalism. So why
is Conn a version of Comp rather than an approach which can
have a philosophy applied to it? Even if a majority of people
do choose to apply Comp to Conn, they are still separate ideas.

The Physical Symbol System Hypothesis [PSSH], Newell and Simon

"Natural cognitive systems are intelligent in virtue of being
physical symbol systems of the right kind."

SH: The PSSH is usually mentioned in conjunction with Comp as
a foundational concept. Cognitivists usually choose
to adopt PSSH and Comp, but why are they necessary assumptions?

PSSH is a hypothesis and Comp is a philosophical conjecture, so
what causal impact do they have on what some program outputs?
How would not assuming PSSH and Comp limit some program's
capability? Why can't (weak) AI succeed as a long-term engineering
project, without philosophical assumptions about reality?

Don is claiming that one must assume Comp in order to get to
a level where a Turing Test can be passed; a mind _must_ be
instantiated, or there will never be a TT passing program.

I disagree, which is why I'm separating the concepts into
the category of philosophical choice and physical approach.
Also, committing to Comp was, I think, originally just part of
AI or GOFAI, mostly at least, before the weak/strong AI distinction,
but a metaphysical idea is not needed to enable physical behavior.

I think Computationalism could be applied to other approaches
to AI, but I don't see a reason why Comp must be applied. The
system is going to perform just the same with or without Comp.
Don doesn't think the system will perform just the same with
or without Comp; Comp must be true so that a mind can be
instantiated, which only then will allow a TT-passing program
to reach the level of competency envisioned by Turing (test)
{from DG's posts}. Am I missing something in this analysis?

Good to hear from you,
Stephen

The link you left in is one I provided, to a dictionary, not
the one Don provided to the Wiki/Computationalism page. I did
quote the explanation of Comp from the Wiki and said that
page didn't mention Conn, so I quoted Conn from the other
link. The Wiki has a link to Conn, but not from the Comp page.
Stephen Harris
2007-03-04 08:46:03 UTC
Permalink
http://en.wikipedia.org/wiki/Computationalism
Post by Stephen Harris
Don is claiming that one must assume Comp in order to get to
a level where a Turing Test can be passed; a mind _must_ be
instantiated, or there will never be a TT passing program.
I disagree, which is why I'm separating the concepts into
the category of philosophical choice and physical approach.
Also, committing to Comp was, I think, originally just part of
AI or GOFAI, mostly at least, before the weak/strong AI distinction,
but a metaphysical idea is not needed to enable physical behavior.
Good to hear from you,
Stephen
www.cs.bham.ac.uk/research/cogaff/sloman.turing.irrelevant.pdf
"The Irrelevance of Turing Machines to AI"
To refresh the memories of others reading this post, Aaron Sloman
is a strong proponent of AI and heads CoSy in England.

"A corollary of all this is that there are (at least) two very
different concepts of computation:
one of which is concerned entirely with properties of certain
classes of formal structures that are the subject matter of
theoretical computer science (a branch of mathematics), while
the other is concerned with a class of information-processing
machines that can interact causally with other physical systems
and within which complex causal interactions can occur. Only the
second is important for AI (and philosophy of mind). ...

From the theoretical viewpoint Turing machines are clearly of
great interest because they provide a framework for investigating
some of these questions, though not the only framework. If AI
were concerned with finding a single most general kind of
information processing capability, then Turing machines might
be relevant to this because of their generality. However, no
practical application of AI requires total generality, and no
scientific modelling task of AI (or cognitive science) requires
total generality for there is no human or organism that has
completely general capabilities. There are things chimps, or
even bees, can do that humans cannot and vice versa. ...

The engineering aims of AI include using computers to provide
new sorts of machines that can be used for practical purposes,
whether or not they are accurate models of any form of natural
intelligence. These engineering aims of AI are not sharply
distinguished from other types of applications for which computers
are used which are not described as AI. Almost any type of
application can be enhanced by giving computers more information
and more abilities to process such information sensibly,
including learning from experience. In other words almost any
computer application can be extended using AI techniques. ...

Thus there is no particular branch of AI or approach to AI that
has special links with computation: they all do, although they
may make different use of concepts developed in connection with
computers and programming languages. In almost all cases, the
notion of a Turing machine is completely irrelevant, except as
a special case of the general class of computers. ...

Ideas about Turing machines and related theoretical "limit"
results on computability, decidability, definability,
provability, etc. are relevant to all these kinds of
mathematical research but are marginal or irrelevant in
relation to most aspects of the scientific AI goal of trying
to understand how biological minds and brains work, and also
to the engineering AI goals of trying to design new useful
machines with similar (or greater) capabilities. ...

This kind of mathematical universality may have led some people
to the false conclusion that any kind of computer is as good as
any other provided that it is capable of modelling a universal
Turing machine. This is true as a mathematical abstraction, but
misleading, or even false when considering problems of controlling
machines embedded in a physical world. ...

Human intelligence, however, is often precisely concerned with
finding good solutions to problems quickly, and speed is central
to the success of control systems managing physical systems
embedded in physical environments. Aptness for their biological
purpose, and not theoretical universality, is the important
characteristic of animal brains, including human brains. What
those purposes are, and what sorts of machine architectures can
serve those purposes, are still open research problems (which I
have discussed elsewhere), but it is clear that time constraints
are very relevant to biological designs: speed is more
biologically important than theoretical universality. ...

This section on the mathematical applications of ideas of
computation was introduced only in order to get them out of
the way, and in order to provide a possible explanation for
the wide-spread but mistaken assumption that notions such as
Turing machines, or Turing computability are central to AI. ...


www.nd.edu/~mscheutz/publications/scheutz98conceptus.pdf

"My goal in this paper is to show that Searle and Putnam’s
arguments are based on the same criticism: as long as physical
states (of a given physical system) can be chosen freely, one
can always relate these physical states to computational states
(of an arbitrary computation) in such a way that the physical
system can be viewed as implementing that computation. From this
I will conclude that unless a better notion of implementation
is provided that avoids state-to-state correspondences between
physical systems and abstract objects (be they computations or
certain kinds of formal theories) computationalism will remain
in bad shape."
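Scheutz's summary of the Putnam/Searle criticism can be sketched as follows (the "wall" microstates and the counter program are made up for illustration): given any sequence of distinct physical states, a state-to-state correspondence onto any equally long computational state sequence always exists, which is why an unconstrained notion of implementation trivializes.

```python
def trivial_implementation(physical_trace, computational_trace):
    """Pair each (distinct) physical state with the computational state
    occupied at the same instant; the resulting map 'shows' the physical
    system implementing the computation."""
    assert len(physical_trace) == len(set(physical_trace)), \
        "the argument needs distinct physical states"
    return dict(zip(physical_trace, computational_trace))

# A wall's (made-up) microstates over four instants...
wall = ["w_t0", "w_t1", "w_t2", "w_t3"]
# ...and the state sequence of a 2-bit counter program.
counter = ["00", "01", "10", "11"]

mapping = trivial_implementation(wall, counter)
print(mapping["w_t2"])  # -> 10
```

The mapping is a pure after-the-fact bookkeeping device, with no counterfactual or causal force, which is exactly why Scheutz concludes that a better notion of implementation must avoid bare state-to-state correspondences.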
Josip Almasi
2007-03-04 11:07:25 UTC
Permalink
Post by Stephen Harris
Why can't (weak) AI succeed as a long term engineering
project, without philosophical assumptions about reality?
Good point.
The catch is, there's a big difference between engineering and research
projects, and you simply can't avoid philosophical assumptions in research.
Well, at least *I* can't;)
Edison had a pure engineering project with the light bulb - he already had
a working theoretical framework and a prototype; he just had to find the
right material.
But software isn't that easy. I won't even mention AI.

For example, I needed to make a seemingly simple design decision: do I
make my 3d space discrete (a matrix) or a continuous space?
So I wrote down pros and cons. Like, matrices make collision
detection trivial while continuous spaces make it quite complex.
And so on.
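The trade-off Josip describes can be made concrete (a rough sketch of my own, not his actual code): on a discrete grid, collision detection is a cell-occupancy lookup; in continuous space it already needs geometry for even the simplest shapes, here sphere overlap.

```python
# Discrete case: space is a set of occupied integer cells.
def grid_collides(occupied, cell):
    return cell in occupied  # O(1) set-membership test

# Continuous case: even for spheres we need a distance computation,
# and richer shapes need progressively more geometry.
def continuous_collides(p1, r1, p2, r2):
    dx, dy, dz = (a - b for a, b in zip(p1, p2))
    return dx*dx + dy*dy + dz*dz <= (r1 + r2) ** 2

occupied = {(1, 2, 3)}
print(grid_collides(occupied, (1, 2, 3)))                   # True
print(continuous_collides((0, 0, 0), 1.0, (3, 0, 0), 1.0))  # False
```

The 'int or float' decision thus really does propagate into the whole design: the discrete version scales by hashing, the continuous one by spatial indexing.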
And as a reality check, let's see what science sez: are there quanta of
space/time, or is the Planck length just the universe's equivalent of a
minimal double precision floating point number?
And guess what... straight to quantum gravity:)))
It's not just that there's no scientific consensus; there's not even
consensus on whether it's testable.
Goodbye my engineering project, I turned researcher/philosopher just
like that - for one simple 'int or float' decision.
(and FTR I came to the conclusion there's no space/time quanta)

Now if that's not simple, how about *perception* of space/time?
The weirdest and most useless thing among all the philosophical, scientific
and mystical stuff I've read is the many-minds interpretation: our mind is
scattered and present and entangled across all universes.
How am I supposed to make any code out of it?

Back to weak AI - we make philosophical assumptions when choosing
reasoning models, data models, etc. Even if we're not aware of it,
assumptions are picked up somehow along the way without our making
evaluations and decisions.

Regards...
Alpha
2007-03-04 17:44:38 UTC
Permalink
Post by Josip Almasi
Post by Stephen Harris
Why can't (weak) AI succeed as a long term engineering
project, without philosophical assumptions about reality?
Good point.
The catch is, there's a big difference between engineering and research
projects, and you simply can't avoid philosophical assumptions in research.
Well, at least *I* can't;)
Well, it depends on what the research is aimed at doing! I did a great deal
of research surrounding the possible use of ANNs in process control systems.
I had no philosophical assumptions qua philosophical. It was a fact-finding
and experimental scenario (research - hypotheses were proposed and
experiments carried out to validate or falsify the hypotheses) *within* an
engineering context; therefore I do not see a clear distinction between
engineering and research *in that type of scenario*. Now, one might claim
that the engineering context implied philosophical positioning that would
have been different had we not had a goal to actually implement ANNs. But
then it would not have been a real engineering context, but a pure research
context (with probable assumptions that might have been tied to a more
rigorous philosophical background). IOW, one can do pure research within an
engineering context and one can do engineering within a research context; so
where is the clear dividing line? (Yeah - I know the historical/theoretical
distinction between research and engineering, but practically, in most
real-world scenarios in both academic and commercial domains, there is an
intermix of both.)
Post by Josip Almasi
Edison had a pure engineering project with the light bulb - he already had
a working theoretical framework and a prototype; he just had to find the
right material.
But software isn't that easy. I won't even mention AI.
For example, I needed to make a seemingly simple design decision: do I make
my 3d space discrete (a matrix) or a continuous space?
So I wrote down pros and cons. Like, matrices make collision detection
trivial while continuous spaces make it quite complex.
And so on.
And as a reality check, let's see what science sez: are there quanta of
space/time, or is the Planck length just the universe's equivalent of a
minimal double precision floating point number?
And guess what... straight to quantum gravity:)))
It's not just that there's no scientific consensus; there's not even
consensus on whether it's testable.
Goodbye my engineering project, I turned researcher/philosopher just like
that - for one simple 'int or float' decision.
(and FTR I came to the conclusion there's no space/time quanta)
So in your *total* scenario, it worked out just like I explained above - an
intermix of both to come to some end goal.
Post by Josip Almasi
Now if that's not simple, how about *perception* of space/time?
The weirdest and most useless thing among all the philosophical, scientific
and mystical stuff I've read is the many-minds interpretation: our mind is
scattered and present and entangled across all universes.
How am I supposed to make any code out of it?
You could snip your paper-tape version of your software program code into
lots of pieces and scatter it to the wind from a high building! ;^)))
Post by Josip Almasi
Back to weak AI - we make philosophical assumptions when choosing
reasoning models, data models, etc. Even if we're not aware of it,
assumptions are picked up somehow along the way without our making
evaluations and decisions.
If the assumptions are subconscious, what effect do they have on our
day-to-day work? I mean sure, we assume practical things like that the sun
is going to come up tomorrow so we can continue our work, but I wonder what
*philosophical* assumptions the average software developer is making while
performing his work? Many developers/engineers and even theoreticians may
not even have been exposed to the types of philosophical underpinnings that
could constitute a philosophical assumption that would be needed in some way
while performing one's work.
Post by Josip Almasi
Regards...
--
Posted via a free Usenet account from http://www.teranews.com
Josip Almasi
2007-03-05 10:54:47 UTC
Permalink
Post by Alpha
Well, it depends on what the research is aimed at doing! I did a great deal
...
Post by Alpha
intermix of both.)
Exactly my point.
Post by Alpha
If the assumptions are subconscious, what effect do they have on our
day-to-day work?
Oh I wasn't exactly meaning 'subconscious'. Just, we don't really
remember how and where we learn something, right?
(it's in fact property of NN)
And, really, I'm not sure about the effects of awareness of assumptions
on the results of work.

To be quite specific:) Recently I was blabbering on the group about some
subject-object-predicate things. It caught me by surprise when Wolf
mistook these for natural language. OMG isn't that *common sense*?:)
Well I know exactly where I picked that kind of knowledge representation
- from neurogrid project. That is, I got it there so well that it became
my 'common sense'.
But I knew that before the project, just couldn't recall.
Quick search gave me dialectic reasoning and class algebra.
Yes I had both on philosophy and math courses.

How come I didn't grow synapses that connect them?;)

And what philosophical assumptions did I make during that project?:)
Beats me.
Post by Alpha
I mean sure, we assume practical things like that the sun
is going to come up tomorrow so we can continue our work, but I wonder what
*philosophical* assumptions the average software developer is making while
performing his work? Many developers/engineers and even theoreticians may
not even have been exposed to the types of philosophical underpinnings that
could constitute a philosophical assumption that would be needed in some way
while performing one's work.
Sure. But! Not being aware of the underlying philosophy results in many
flames and trolling all over the place:)

Regards...
Stephen Harris
2007-03-04 18:28:23 UTC
Permalink
Post by Josip Almasi
Post by Stephen Harris
Why can't (weak) AI succeed as a long term engineering
project, without philosophical assumptions about reality?
Good point.
The catch is, there's a big difference between engineering and research
projects, and you simply can't avoid philosophical assumptions in research.
Well, at least *I* can't;)
Edison had a pure engineering project with the light bulb - he already had
a working theoretical framework and a prototype, he just had to find the
right material.
But software isn't that easy. I won't even mention AI.
And guess what... straight to quantum gravity:)))
Well, you had better have used string theory because they are
mighty touchy about their foundations. Those guys are consumed
in their quest for the Holy Moby.

I was reading a Dover book on Pure Mathematics once. That is the
kind where they give you a license to be creative, make up the
rules you want. The author commented that he didn't think that freedom
ever escaped the boundaries imposed by one's learning history.

Mostly, they don't think that a grandmaster chess program has a
mind or is conscious. Suppose some Turing Test-passing program (TTPP)
incorporates that chess program and the TTPP is asked what move to
play in a certain position; say it answered yes to playing chess.

Now when the chess part of the TTPP is functioning, does the chess
part now instantiate a mind, since it now satisfies 'cognition is
computation', as part of a larger realization?

Regards,
Stephen
Josip Almasi
2007-03-05 11:06:27 UTC
Permalink
Post by Stephen Harris
Well, you had better have used string theory because they are
mighty touchy about their foundations. Those guys are consumed
in their quest for the Holy Moby.
Ah I didn't use any theory:) I just needed some basic background
information to decide on fairly simple design question.
Post by Stephen Harris
Now when the chess part of the TTPP is functioning, does the chess
part now instantiate a mind since it is now satisfies cognition is
computation, as part of a larger realization?
Another good point. And I love how you put it, 'instantiate a mind';)
Well this implies gestalt mind. Einstein would agree for sure:)
I'll contemplate on this some more...
...in the meantime, this is IA rather than AI. I got that from Vinge:
a team of an engineer and a workstation is much smarter than either alone.
Therefore, Intelligence Amplification.
So really, the biggest potential of computers isn't replicating what people
do. It's doing well what people do badly.

Regards...
Stephen Harris
2007-03-05 14:51:38 UTC
Permalink
Post by Josip Almasi
Post by Stephen Harris
Well, you had better have used string theory because they are
mighty touchy about their foundations. Those guys are consumed
in their quest for the Holy Moby.
Ah I didn't use any theory:) I just needed some basic background
information to decide on fairly simple design question.
My only esoteric information, which I learned a while ago and which you
likely know, is that there is a limitation on computer (bus?) speed
as signals get closer to the speed of light.
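As a back-of-envelope illustration of that limit (my own sketch; the 3 GHz
clock and the half-c propagation figure are assumptions, not from this
thread), one clock period bounds how far a signal can travel, and hence how
long a synchronous bus trace can be:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_distance_per_cycle(clock_hz: float, fraction_of_c: float = 1.0) -> float:
    """Distance in metres a signal covers in one clock period."""
    return fraction_of_c * C / clock_hz

# At a hypothetical 3 GHz clock, even light in vacuum covers only
# about 10 cm per cycle; signals in copper travel at roughly half c.
print(max_distance_per_cycle(3e9))       # about 0.1 m
print(max_distance_per_cycle(3e9, 0.5))  # about 0.05 m
```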
Post by Josip Almasi
Post by Stephen Harris
Now when the chess part of the TTPP is functioning, does the chess
part now instantiate a mind since it is now satisfies cognition is
computation, as part of a larger realization?
Another good point. And I love how you put it, 'instantiate a mind';)
Well this implies gestalt mind. Einstein would agree for sure:)
I'll contemplate on this some more...
“In defense of this reading of the multiple realizability argument,
we should note that it accommodates other ways in which multiple
realizability has been linked to functionalism. For example, multiple
realizability is sometimes treated as a direct consequence of
functionalism, rather than as a premise in a supporting argument.
If mental states are defined by their causal-relational properties,
then it follows that any substance instantiating those properties
will instantiate a mind.”
Post by Josip Almasi
team of an engineer and a workstation is much smarter than each alone.
Therefore, Intelligence Amplification.
So really, biggest potential of computers isn't replicating what people
do. It's doing well what people do bad.
Regards...
That would make me happy, to have an Intelligent Assistant. OTOH
knowing other minds are in the universe would certainly enrich it.
I've read some SF by Iain M. Banks, his Culture series, with super AIs.
Josip Almasi
2007-03-05 21:19:56 UTC
Permalink
Post by Stephen Harris
My only esoteric information which I learned awhile ago, which you
likely know, is that there is a limitation on computer (bus?) speed
as signals get closer to the speed of light.
Errr... so?

But let me introduce counterclaim just for the sake of argument: signals
can go faster than light. More precisely, the speed of information is
infinite. And even more precisely,
'The field of scalar and vector potentials in electrodynamics is shown
to represent an informational field capable of superluminally
transmitting a signal (information) with no energy and momentum
transfer. This conclusion strictly follows from Maxwell's equations for
electromagnetic field interacting with electric charges and currents in
vacuum, without resort to any additional hypotheses.'
http://arxiv.org/ftp/physics/papers/0306/0306073.pdf
Post by Stephen Harris
If mental states are defined by their causal-relational properties,
then it follows that any substance instantiating those properties
will instantiate a mind.”
Interesting.
But it doesn't seem to me that causality defines mind states. Madness is
also a mind state. A dream is a mind state too.
And, really, instantiating a 'mind state' doesn't mean instantiating a
mind. It's just instantiating a specific function of mind.
I still love the phrase regardless;)

Regards...
Stephen Harris
2007-03-05 23:32:38 UTC
Permalink
Post by Josip Almasi
Post by Stephen Harris
My only esoteric information which I learned awhile ago, which you
likely know, is that there is a limitation on computer (bus?) speed
as signals get closer to the speed of light.
Errr... so?
I just should have asked you why you needed to consider quantum gravity
in your philosophical musings. As I understand it, the problem of QG is
the inability to reconcile time between Relativity and Quantum Theory.
So I brought up time dilation which is physical.

I'm rather limited, I only know of two quantum computer applications,
quantum computers and those quantum random bit generator cards, so I
just should have confessed my ignorance and asked what type of computer
application needed to consider quantum gravity.
Post by Josip Almasi
But let me introduce counterclaim just for the sake of argument: signals
can go faster than light. More preciselly, speed of information is
infinite. And even more precise,
'The field of scalar and vector potentials in electrodynamics is shown
to represent an informational field capable of superluminally
transmitting a signal (information) with no energy and momentum
transfer. This conclusion strictly follows from Maxwell's equations for
electromagnetic field interacting with electric charges and currents in
vacuum, without resort to any additional hypotheses.'
http://arxiv.org/ftp/physics/papers/0306/0306073.pdf
I've heard of non-locality. I think in theory there can be transfer
of information between white? holes but nothing physical.
Post by Josip Almasi
Post by Stephen Harris
If mental states are defined by their causal-relational properties,
then it follows that any substance instantiating those properties
will instantiate a mind.”
Interesting.
But it doesn't seem to me that causality defines mind states. Madness is
also a mind state. Dream is a mind state too.
And, really, instantiating a 'mind state' doesn't mean instantiating a
mind. Its just instantiating a specific function of mind.
I still love the phrase regardless;)
Regards...
I think functionalism claims that if the properties are instantiated,
then the essence of the properties is also instantiated. The topic
is rather complicated with variations of functionalism with mention
of content and the relationship to computation. It doesn't seem
intuitive to me at all. The claim (Jerry Fodor, 1975):

"A mental state is a computational state embedded in a
complex network of inputs, outputs, and other mental
states. Computationalism differs from machine state
functionalism by locating the mental in abstract
computational states rather than in the various
possible machine states that could implement them."

I'm not sure about the role of causality either, but
I didn't understand how madness or dreams are acausal.
I'm not sure that causality defines mind states but
how else can mind states be created? I don't think that
linking mind states to brain states is well received.

Regards,
Stephen
Josip Almasi
2007-03-06 00:11:04 UTC
Permalink
Post by Stephen Harris
I just should have asked you why you needed to consider quantum gravity
in your philosophical musings.
Ah that! It's quite simple, it popped out on the 1st page when I googled
'space time quanta':)) Remember, I just wanted to check if space is
discrete or continuous.
Post by Stephen Harris
I'm rather limited, I only know of two quantum computer applications,
quantum computers and those quantum random bit generator cards, so I
just should have confessed my ignorance and asked what type of computer
application needed to consider quantum gravity.
Well none of mine for sure:))
Post by Stephen Harris
I've heard of non-locality. I think in theory there can be transfer
of information between white? holes but nothing physical.
Well this guy talks about information without energy transfer.
So we can't measure it:)
Post by Stephen Harris
I think functionalism claims that if the properties are instantiated,
then the essense of the properties is also instantiated.
But my point is, a subset of properties is nothing more than a subset of
properties. It doesn't contain the essence of the entire system. (Not true
for holograms and fractals.)
If I transplant someone's liver I really do get the essence of a liver:)))
Not of a human.
Post by Stephen Harris
It doesn't seem intuitive to me at all.
Well, seems illogical to me.
Post by Stephen Harris
"A mental state is a computational state embedded in a
complex network of inputs, outputs, and other mental
states. Computationalism differs from machine state
functionalism by locating the mental in abstract
computational states rather than in the various
possible machine states that could implement them."
Ah I think I dig it. It roughly translates to 'Comp: mind is software,
Func: mind is hardware':)
OMG what a language!!!!:))))))
This guy teaches some students, right?:)))
Post by Stephen Harris
I'm not sure about the role of causality either, but
I didn't understand how madness or dreams are acausal.
AFAIK we apply causality, and in fact even meaning, when we interpret
symbols from dreams. Dreams are meaningless - our meaning device is off
while we sleep:) For all we know, dreams may be nothing but neural noise.
Post by Stephen Harris
I'm not sure that causality defines mind states but
how else can mind states be created?
Beats me.
OTOH, maybe mind states create causality?
Maybe it's all in our mind. Maybe we create the consequence at the same
time we create/observe the cause, only the consequence occurs later.
(a wild von Neumann/Wigner-based guess)

Regards...
Alpha
2007-03-07 19:00:33 UTC
Permalink
Post by Josip Almasi
Post by Stephen Harris
My only esoteric information which I learned awhile ago, which you
likely know, is that there is a limitation on computer (bus?) speed
as signals get closer to the speed of light.
Errr... so?
But let me introduce counterclaim just for the sake of argument: signals
can go faster than light. More preciselly, speed of information is
infinite. And even more precise,
'The field of scalar and vector potentials in electrodynamics is shown to
represent an informational field capable of superluminally transmitting a
signal (information) with no energy and momentum transfer.
That is like David Bohm's view!

Holonomic brain and Pilot waves etc.
Post by Josip Almasi
This conclusion strictly follows from Maxwell's equations for
electromagnetic field interacting with electric charges and currents in
vacuum, without resort to any additional hypotheses.'
http://arxiv.org/ftp/physics/papers/0306/0306073.pdf
Post by Stephen Harris
If mental states are defined by their causal-relational properties,
then it follows that any substance instantiating those properties
will instantiate a mind.”
Interesting.
But it doesn't seem to me that causality defines mind states. Madness is
also a mind state. Dream is a mind state too.
And, really, instantiating a 'mind state' doesn't mean instantiating a
mind. Its just instantiating a specific function of mind.
I still love the phrase regardless;)
Regards...
--
Posted via a free Usenet account from http://www.teranews.com
Josip Almasi
2007-03-07 19:54:25 UTC
Permalink
Post by Alpha
That is like David Bohm's view!
Much like, but Olenik got there using only Maxwell.

Regards...
Alpha
2007-03-05 17:40:54 UTC
Permalink
"Stephen Harris" <cyberguard-***@yahoo.com> wrote in message news:gdEGh.5469$***@newssvr22.news.prodigy.net...
<snip>
Post by Stephen Harris
Mostly, they don't think that grandmaster chess program has a
mind or is conscious. Suppose some Turing Test passing program
incorporates that chess program and the TTPP is asked what move to
play in a certain position, say it answered yes to playing chess.
Now when the chess part of the TTPP is functioning, does the chess
part now instantiate a mind since it is now satisfies cognition is
computation, as part of a larger realization?
That is very interesting and goes to the heart of an issue I was discussing
with another poster. Say a TTP *is* composed of the very modules (various
Intelligent Agents) that provide expertise in their respective domains. A
Deep Blue might, even in the future, be one of those modules/intelligent
agents (although it may have undergone revisions by then). So might a
Cyc-like function (to help answer common-sense questions for example) be one
of the intelligent agents (again, with revision to incorporate almost all
possible domains of discourse). Lots of IAs all communicating with one
another using some elegant Agent Communications Language, let's say.

There might even be an executive or controller or manager like
function/intelligent agent that would provide initial analysis of the
question and hand off the work to the appropriate one or set of IAs to do
the real work. And say there is a module, not yet conceived of, that
provides the kind of intelligence that we do not have now in machines -
General Intelligence, *as a module* or modules. Now, with that kind of
architecture (modular), say the question comes in that asks a commonsense
question or a chess question, but a question that is posed to elicit an
"essay" answer - one that further requires some aspect of GI to fully answer
(perhaps one involving a "why?"). The GI does not have the detailed
expertise (ontology, heuristics etc. particular to a domain) to deal with
such a question in total, so it delegates to the appropriate agent via the
manager function. Manager gets the answer back from the IA, sends it off to
the GI_module (whatever that is - it could be distributed among several
modules etc.) which rearranges the knowledge it received so as to be
presentable and believed by a TT judge to come from a human (to fool the
judges etc.). Let's further stipulate that such a system *could* fool the TT
judges; it has all the individual modules necessary to do so in any domain
of discourse and further, each IA encapsulates the mechanism(s) *appropriate
to the problem it must solve*).

(As an aside, it is probably the case, IMO given our approaches to SW so
far, that such a system is likely to be a modular one, even one which has GI
functionality (just like brain is modular for example). Real SW engineering
has not generated monolithic SW for a long time. It is (almost) always
modular except for the most trivial programs.)

Now, at what point, or during what operation(s) can one say that the
"system" instantiates a mind? Only when the GI_functionality kicks in?
Surely not, by some people's opinions here, when the system is doing the
Deep Blue functions only, or merely the Cyc functions (because those
functions, by what we already know of them, do not instantiate a mind). And
all these functions come and go somewhat serially. Does mind blink in and
out as the function vectors go hither and thither? Is mind there, perhaps
latent in the "system" all the time? Is there a mind in that system when the
system is turned off for the night? (when there is no function of any kind
happening). How does that differ, if at all, from mind_of_human that is
latent or surely_there_but_not_active when unconscious for example, or in
deep sleep?
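(A minimal Python sketch of the manager/IA architecture described above; the
agent names, routing rule, and canned answers are all mine and purely
illustrative, not a real design:)

```python
class ChessAgent:
    """Stands in for a Deep Blue-like domain module."""
    def can_handle(self, question: str) -> bool:
        return "chess" in question.lower()
    def answer(self, question: str) -> str:
        return "Nf3 looks strongest here."

class CommonSenseAgent:
    """Stands in for a Cyc-like module; catch-all fallback."""
    def can_handle(self, question: str) -> bool:
        return True
    def answer(self, question: str) -> str:
        return "Common sense says: probably yes."

class GIModule:
    """Rearranges a raw answer into something presentable to a TT judge."""
    def compose(self, raw: str) -> str:
        return f"Well, I'd say: {raw}"

class Manager:
    """Executive function: hands the question off to the first agent that claims it."""
    def __init__(self, agents, gi):
        self.agents, self.gi = agents, gi
    def ask(self, question: str) -> str:
        for agent in self.agents:
            if agent.can_handle(question):
                return self.gi.compose(agent.answer(question))

mgr = Manager([ChessAgent(), CommonSenseAgent()], GIModule())
print(mgr.ask("What chess move should I play here?"))
print(mgr.ask("Why do people carry umbrellas?"))
```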
--
Posted via a free Usenet account from http://www.teranews.com
Stephen Harris
2007-03-05 20:24:29 UTC
Permalink
Post by Alpha
<snip>
Post by Stephen Harris
Mostly, they don't think that grandmaster chess program has a
mind or is conscious. Suppose some Turing Test passing program
incorporates that chess program and the TTPP is asked what move to
play in a certain position, say it answered yes to playing chess.
Now when the chess part of the TTPP is functioning, does the chess
part now instantiate a mind since it is now satisfies cognition is
computation, as part of a larger realization?
That is very interesting and goes to the heart of an issue I was discussing
with another poster. Say a TTP *is* composed of the very modules (various
Intelligent Agents) that provide expertise in their respective domains. A
Deep Blue might, even in the future, be one of those modules/intellignet
agents (although it may have undergone revisions since now). So might a
Cyc-like function (to help answer common-sense questions for example) be one
of the intelligent agents (again, with revision to incorporate almost all
possible domains of discourse). Lots of IAs all communicating with one
another using some elegant Agent Communications Language lets say.
There might even be an executive or controller or manager like
function/intelligent agent that would provide initial analysis of the
question and hand off the work to the appropriate one or set of IAs to do
the real work. And say there is a module, not yet conceived of, that
provides the kind of intelligence that we do not have now in machines -
General Intelligence, *as a module* or modules. Now, with that kind of
architecture (modular), say the question comes in that asks a commonsense
question or a chess question, but a question that is posed to elicit an
"essay" answer - one that further requires some aspect of GI to fully answer
(perhaps one involving a "why?"). The GI does not have the detailed
expertise (ontology, heuristics etc. particular to a domain) to deal with
such a question in total, so it delegates to the appropriate agent via the
manager function. Manager gets the answer back from the IA, sends it off to
the GI_module (whatever that is - it could be distributed among several
modules etc.) which rearranges the knowledge it received so as to be
presentable and believed by a TT judge to come from a human (to fool the
judges etc.). Lets further stipulate that such a system *could* fool the TT
judges; it has all the individual modules necessary to do so in any domain
of discourse and further, each IA encapsulates the mechanism(s) *appropriate
to the problem it must solve*).
(As an aside, it is probably the case, IMO given our approaches to SW so
far, that such a system is likely be a modular one, even one which has GI
functionality (just like brain is modular for example). Real SW engineering
has not generated monolithic SW for a long time. It is (almost) always
modular except for the most trivial programs.)
Now, at what point, or during what operation(s) can one say that the
"system" instantiates a mind? Only when the GI_functionality kicks in?
Surely not, by some people's opinions here, when the system is doing the
Deep Blue functions only, or merely the Cyc functions (because those
functions, by what we already know of them, do not instantiate a mind). And
all these functions come and go somewhat serially. Does mind blink in and
out as the function vectors go hither and thither? Is mind there, perhaps
latent in the "system" all the time? Is there a mind in that system when the
system is turned off for the night? (when there is no function of any kind
happening). How does that differ, if at all, from mind_of_human that is
latent or surely_there_but_not_active when unconscious for example, or in
deep sleep?
I got the idea for bringing this topic up from reading Piccinini's
paper quoted below. I copied paragraphs from pages 33 and 34. The
last paragraph is closely related to your comments.

http://www.petemandik.com/blog/wp-content/uploads/PMS_WIPS-010-Piccinini.pdf
"The Mind as Neural Software?
Revisiting Functionalism, Computationalism, and Computational
Functionalism" by Gualtiero Piccinini

"Computational functionalism entails that minds are multiply realizable,
in the sense in which the same computer program can run on physically
different pieces of hardware. So if computational functionalism is
correct, then—pace Bechtel and Mundale 1999, Shapiro 2000, Churchland
2005 and other foes of multiple realizability—mental programs can also
be specified and studied independently of how they are implemented in
the brain, in the same way that one can investigate what programs are
(or should be) run by digital computers without worrying about how they
are physically implemented.

Under the computational functionalist hypothesis, this is the task of
psychological theorizing. Psychologists may speculate on which programs
are executed by brains when exhibiting certain mental capacities. The
programs thus postulated are part of a mechanistic explanation for those
capacities.

*The biggest surprise is that when interpreted literally, computational
functionalism entails that the mind is a physical component (or a stable
state of a component) of the brain, in the same sense in which computer
programs are physical components (or stable states of components) of
computers. As a consequence, even a brain that is not processing any
data—analogously to an idle computer, or even a computer that is turned
off—might still have a mind, provided that its programs are still
physically present. This consequence seems to offend many people’s
intuitions about what it means to have a mind, but it isn’t entirely
implausible. It might correspond to the sense in which even people who
are asleep, or have fainted, still have minds. Be that as it may, this
consequence can be easily avoided by a more dynamic interpretation of
computational functionalism, according to which the mind is constituted
by the processes generated by the brain’s software. This dynamic reading
may well be the one intended by the original proponents of computational
functionalism."
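(A toy illustration of the multiple-realizability point, mine rather than
Piccinini's: the same abstract program, here the successor function,
realized over two representationally different substrates:)

```python
# Two "realizations" of the same abstract computation (successor).
# Under computational functionalism, what matters is the abstract
# program, not the substrate that implements it.

def successor_int(n: int) -> int:
    return n + 1                # realized over machine integers

def successor_unary(tally: str) -> str:
    return tally + "|"          # realized over a unary tally of marks

# Same behaviour once we translate between representations:
print(successor_int(3))         # 4
print(successor_unary("|||"))   # ||||
```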

Regards,
Stephen
Alpha
2007-03-07 18:56:16 UTC
Permalink
Post by Stephen Harris
Post by Alpha
<snip>
Post by Stephen Harris
Mostly, they don't think that grandmaster chess program has a
mind or is conscious. Suppose some Turing Test passing program
incorporates that chess program and the TTPP is asked what move to
play in a certain position, say it answered yes to playing chess.
Now when the chess part of the TTPP is functioning, does the chess
part now instantiate a mind since it is now satisfies cognition is
computation, as part of a larger realization?
That is very interesting and goes to the heart of an issue I was
discussing with another poster. Say a TTP *is* composed of the very
modules (various Intelligent Agents) that provide expertise in their
respective domains. A Deep Blue might, even in the future, be one of
those modules/intellignet agents (although it may have undergone
revisions since now). So might a Cyc-like function (to help answer
common-sense questions for example) be one of the intelligent agents
(again, with revision to incorporate almost all possible domains of
discourse). Lots of IAs all communicating with one another using some
elegant Agent Communications Language lets say.
There might even be an executive or controller or manager like
function/intelligent agent that would provide initial analysis of the
question and hand off the work to the appropriate one or set of IAs to do
the real work. And say there is a module, not yet conceived of, that
provides the kind of intelligence that we do not have now in machines -
General Intelligence, *as a module* or modules. Now, with that kind of
architecture (modular), say the question comes in that asks a commonsense
question or a chess question, but a question that is posed to elicit an
"essay" answer - one that further requires some aspect of GI to fully
answer (perhaps one involving a "why?"). The GI does not have the
detailed expertise (ontology, heuristics etc. particular to a domain) to
deal with such a question in total, so it delegates to the appropriate
agent via the manager function. Manager gets the answer back from the IA,
sends it off to the GI_module (whatever that is - it could be distributed
among several modules etc.) which rearranges the knowledge it received so
as to be presentable and believed by a TT judge to come from a human (to
fool the judges etc.). Lets further stipulate that such a system *could*
fool the TT judges; it has all the individual modules necessary to do so
in any domain of discourse and further, each IA encapsulates the
mechanism(s) *appropriate to the problem it must solve*).
(As an aside, it is probably the case, IMO given our approaches to SW so
far, that such a system is likely be a modular one, even one which has GI
functionality (just like brain is modular for example). Real SW
engineering has not generated monolithic SW for a long time. It is
(almost) always modular except for the most trivial programs.)
Now, at what point, or during what operation(s) can one say that the
"system" instantiates a mind? Only when the GI_functionality kicks in?
Surely not, by some people's opinions here, when the system is doing the
Deep Blue functions only, or merely the Cyc functions (because those
functions, by what we already know of them, do not instantiate a mind).
And all these functions come and go somewhat serially. Does mind blink
in and out as the function vectors go hither and thither? Is mind there,
perhaps latent in the "system" all the time? Is there a mind in that
system when the system is turned off for the night? (when there is no
function of any kind happening). How does that differ, if at all, from
mind_of_human that is latent or surely_there_but_not_active when
unconscious for example, or in deep sleep?
I got the idea for bringing this topic up from reading Piccinini's
paper quoted below. I copied paragraphs from page 33 and 34. The
last paragraph is closely related to your comments.
http://www.petemandik.com/blog/wp-content/uploads/PMS_WIPS-010-Piccinini.pdf
"The Mind as Neural Software?
Revisiting Functionalism, Computationalism, and Computational
Functionalism" by Gualtiero Piccinini
"Computational functionalism entails that minds are multiply realizable,
in the sense in which the same computer program can run on physically
different pieces of hardware. So if computational functionalism is
correct, then—pace Bechtel and Mundale 1999, Shapiro 2000, Churchland 2005
and other foes of multiple realizability—mental programs can also be
specified and studied independently of how they are implemented in the
brain, in the same way that one can investigate what programs are (or
should be) run by digital computers without worrying about how they are
physically implemented.
Under the computational functionalist hypothesis, this is the task of
psychological theorizing. Psychologists may speculate on which programs
are executed by brains when exhibiting certain mental capacities. The
programs thus postulated are part of a mechanistic explanation for those
capacities.
*The biggest surprise is that when interpreted literally, computational
functionalism entails that the mind is a physical component (or a stable
state of a component) of the brain, in the same sense in which computer
programs are physical components (or stable states of components) of
computers. As a consequence, even a brain that is not processing any
data—analogously to an idle computer, or even a computer that is turned
off—might still have a mind, provided that its programs are still
physically present. This consequence seems to offend many people’s
intuitions about what it means to have a mind, but it isn’t entirely
implausible. It might correspond to the sense in which even people who are
asleep, or have fainted, still have minds. Be that as it may, this
consequence can be easily avoided by a more dynamic interpretation of
computational functionalism, according to which the mind is constituted by
the processes generated by the brain’s software. This dynamic reading may
well be the one intended by the original proponents of computational
functionalism."
I want to be clear about what I wrote above and its connotations. I do not
think that a super-intelligent machine necessarily has a mind, nor that it
has consciousness IMO. Even if a TTP passes the TT *for intelligence*,
I do not believe that it therefore instantiates a mind (in the way humans
have minds).

I think intelligence is part of the ingredients for a mind, but I think
consciousness, which is different from intelligence (but perhaps enables or
accommodates, or, what is the word - *facilitates* intelligence processes),
is something separate from intelligence but is part of what it means to be a
mind. Consciousness seems to be a different class of thing than
intelligence/cognitive functionalities. Consciousness is the subject to all
objects *in* consciousness (objects being other mental functions/processes
that *are* intelligence functions.)
Post by Stephen Harris
Regards,
Stephen
--
Posted via a free Usenet account from http://www.teranews.com
Josip Almasi
2007-03-05 21:51:08 UTC
Permalink
Post by Alpha
That is very interesting and goes to the heart of an issue I was discussing
with another poster. Say a TTP *is* composed of the very modules (various
Intelligent Agents) that provide expertise in their respective domains. A
Deep Blue might, even in the future, be one of those modules/intelligent
agents (although it may have undergone revisions since now). So might a
Cyc-like function (to help answer common-sense questions for example) be one
of the intelligent agents (again, with revision to incorporate almost all
possible domains of discourse). Lots of IAs all communicating with one
another using some elegant Agent Communications Language lets say.
...

I played with this a bit. I was even talking here about it, hey I even
remember the thread - language without structure:)
And, I think there's no need for language. Contrary, it would work
better without language:) Having plain and simple protocol for knowledge
transfer.
Initially, all agents would have the same knowledge bases. As they each
develop their own KBs, knowledge transfer would become less and less
efficient, since a simple protocol can't map attributes/properties to
(yet) unknown objects/classes/categories/callthemwhateveryoulike.
Like, Apple 1 color RED -> WTFException: unknown class Apple.
And then, they develop their own language... or don't:)
Either way, no need for language till this point.
Then, if we give them some of our languages, they don't ever come up with
anything really new. I have no idea how they can come up with their own
language, but negative/positive stimulation is simple in this situation.
(In my impl, here I find the closest category and transfer the rest, IOW I
start using language.)

Regards...
Neil W Rickert
2007-03-04 17:58:01 UTC
Permalink
Post by Stephen Harris
NR: "Connectionism [=Conn], as often described,
Post by Neil W Rickert
is a version of computationalism, though there might be some
variants of connectionism that are outside the constraints of
what computationalism implies."
SH: It is the wording that I question. Doesn't Comp assert that
the right program (which is computable) running on a computer
will instantiate a mind?
That's a rather narrow version. More generally, computationalism
asserts that cognition is computation (that's the slogan version),
or that what the brain is doing is computation. In particular, what
was proposed as "The Robot Reply" to Searle's CR argument would still
count as computationalism. That is, one can be a computationist, yet
still assert that you need a suitable set of sensors and effectors
before you can achieve a mind. You would still be claiming that
computation is what achieves the mind, but you would be asserting
that the computation must occur in a suitable environment (one rich
with I/O related to interactions with the world).
Post by Stephen Harris
But there are also a lot of people who think that a highly
useful AI program can just be built somewhat similar to how
a grandmaster chess program or other expert systems were
built which needed no assumption of a mind to make them work.
I would agree with that. But the term "AI" is a lot broader than
creating a mind. Useful programs that work by emulating what is
taken to be some aspect of human reasoning can count as useful AI
programs, even if they don't achieve something we would recognize
as cognition.
Post by Stephen Harris
Don is claiming that one must assume Comp in order to get to
a level where a Turing Test can be passed; a mind _must_ be
instantiated, or there will never be a TT passing program.
I didn't see that in his response to your initial post of this
thread. I saw him as defending the view that computationalism is
broader than what you described. Personally, I don't see the TT
as necessarily tied to computationalism. It seems to me that the
TT could be applicable to any attempt to produce an intelligent
artifact, whether or not it is based on computationalism.
--
DO NOT REPLY BY EMAIL - The address above is a spamtrap.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115
Stephen Harris
2007-03-04 18:53:27 UTC
Permalink
Post by Neil W Rickert
Post by Stephen Harris
NR: "Connectionism [=Conn], as often described,
Post by Neil W Rickert
is a version of computationalism, though there might be some
variants of connectionism that are outside the constraints of
what computationalism implies."
SH: It is the wording that I question. Doesn't Comp assert that
the right program (which is computable) running on a computer
will instantiate a mind?
That's a rather narrow version. More generally, computationalism
asserts that cognition is computation (that's the slogan version),
or that what the brain is doing is computation. In particular, what
was proposed as "The Robot Reply" to Searle's CR argument would still
count as computationalism. That is, one can be a computationist, yet
still assert that you need a suitable set of sensors and effectors
before you can achieve a mind. You would still be claiming that
computation is what achieves the mind, but you would be asserting
that the computation must occur in a suitable environment (one rich
with I/O related to interactions with the world).
Post by Stephen Harris
But there are also a lot of people who think that a highly
useful AI program can just be built somewhat similar to how
a grandmaster chess program or other expert systems were
built which needed no assumption of a mind to make them work.
I would agree with that. But the term "AI" is a lot broader than
creating a mind. Useful programs that work by emulating what is
taken to be some aspect of human reasoning can count as useful AI
programs, even if they don't achieve something we would recognize
as cognition.
Post by Stephen Harris
Don is claiming that one must assume Comp in order to get to
a level where a Turing Test can be passed; a mind _must_ be
instantiated, or there will never be a TT passing program.
I didn't see that in his response to your initial post of this
thread. I saw him as defending the view that computationalism is
broader than what you described. Personally, I don't see the TT
as necessarily tied to computationalism. It seems to me that the
TT could be applicable to any attempt to produce an intelligent
artifact, whether or not it is based on computationalism.
No, it was in the "What does it mean for a mind to understand
something?" thread which I gave up on. He said he wasn't quite
certain because Deep Blue gave him a doubt. Yes, I agree the TT
would apply to any intelligent artifact assuming conversation.
I have seen several slogans,
www.umsl.edu/~piccininig/
"In a well-known slogan, computational functionalism says that
the mind is the software of the brain."

http://www.rpi.edu/~brings/SELPAP/irr/node2.html
..."the cognition-is-computation slogan competes for equal time
with a number of others. For example, for starters we have

* Thinking is computing.
* People are computers (perhaps with sensors and effectors).
* People are Turing machines (perhaps with sensors and effectors).
* People are finite state automata (perhaps with sensors and
effectors).
* People are neural nets (perhaps with sensors and effectors).
* Cognition is the computation of Turing-computable functions.

SH: It is the last one which I think contains the most information.
I've read, though it may not be a fact, that if cognition were not
a Turing-computable computation, then cognitivism wouldn't cover it.
Alpha
2007-03-04 18:39:08 UTC
Permalink
<snip>
Post by Stephen Harris
Also this thread wasn't addressed to you. I've gotten tired of having
to explain everything in detail to you, and then again in greater detail.
I don't think you have the reading comprehension of a PhD.
Indeed! I was coming to the same conclusion(s).
--
Posted via a free Usenet account from http://www.teranews.com
Don Geddis
2007-03-05 23:23:09 UTC
Permalink
Post by Stephen Harris
Post by Don Geddis
[O]ne can compare Computationalism (Comp) and Connectionism as both being
Turing computable. But that doesn't mean that they both enjoy the same
flexibility as tools for the pursuit of AI.
OK, Stephen, I think a number of different issues are getting conflated here.
Perhaps if I tease them apart, it will be more clear.

The first thing is whether a "mind" or "consciousness" is merely the result
of some computation running on some hardware. Or, alternatively, whether it
might be reasonable for some entity to be an "intelligent zombie", where it
acts just like a human, but doesn't "really" have a mind.

I've argued elsewhere that a "zombie" is not a feasible entity, but that wasn't
the topic I was addressing in this thread.

The second point is whether "connectionism" is an alternative to
"computationalism". You have written a number of times as though they are
alternatives (e.g. in the quote above), and I've been trying to correct that
by saying that connectionism is a kind of computationalism, not an
alternative.

I now understand that these two issues have been intertwined, and that has
caused confusion.

Let me try to restate my real point (on this thread) and see if it makes more
sense:

Regardless of whether you think minds are programs, or alternatively
whether intelligent zombies may be possible; either way, intelligent
behavior (with or without minds) is a result of some computation.
Connectionism is merely one possible computational implementation
strategy.

In particular, it doesn't make sense to say something like

I don't believe in Computationalism because I don't believe that
minds are a result of computation. But I do think that a
Connectionist network might have a mind.

That kind of statement is just inconsistent, and it appeared to me that
you were making statements like that.

But perhaps I misunderstood your claims.
Post by Stephen Harris
Post by Don Geddis
In any case, I think your title for this thread ("The Demise of
Computationalism?") is ill-chosen.
The thread was entitled after the first dotted line section of my
"The Rumors of its [Computationalism] Demise have been Greatly Exaggerated"
David Israel, Stanford University, SRI International,USA
"There has been much talk about the computationalism being dead. ...
But as Mark Twain said of rumors of his own death: these rumors are highly
exaggerated."
SH: This is an example of a literary allusion similar to the example
Turing used for the Turing Test.
Yes of course I understood the allusion for the title.

My point was that you called the thread "The Demise of Computationalism?",
but then your main evidence was about the rise of Connectionism. But that's
a non-sequitur, as Connectionism is not incompatible with Computationalism.
It's either supportive, or at best orthogonal.

If you understood this, and were interested in defending a claim that
Computationalism is in decline, then why all the quotations about
Connectionism? Why would you think that was relevant to the issue of the
possible decline of Computationalism?
Post by Stephen Harris
http://scholar.lib.vt.edu/ejournals/SPT/v5n2/dietrich.html
SH: A supporter of Comp in a position to know, editor of an AI journal,
states what is clearly the decline of Comp in the AI community, but is
not yet a report of Comp's demise.
Thanks for the reference. I just read the paper, and I enjoyed it, and I
don't particularly disagree with anything it says.

But it doesn't at all support your (apparent?) position that there has been
a decline in belief in Computationalism within the AI community.

Certainly, there's no question that the author (Eric Dietrich) is a staunch
believer in Computationalism.
Post by Stephen Harris
He thinks the arguments/criticisms of Computationalism are weak. But to
a critical thinker the Comp description more closely resembles belief
that mystical engravings (symbols) on an amulet render magical powers.
I understand your opinion. (Naturally, mine differs.) And that paper you
referenced isn't really a strong defender of Computationalism, but instead is
exploring why there seems to be such resistance to what ought to be a very
compelling theory. So it's more meta-philosophy, than it is direct philosophy
of mind.

So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (2) if you dislike Computationalism, then it is
NOT the case that Connectionism gives you an alternative theory of mind.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Bigamy is having one husband or wife too many. Monogamy is the same.
Stephen Harris
2007-03-06 08:07:42 UTC
Permalink
Post by Don Geddis
[O]ne can compare Computationalism (Comp) and Connectionism as both being
Turing computable.
“As Margaret Boden has argued, McCulloch and Pitts’s theory
was the common origin of both the connectionist and the
classical approach to computational artificial intelligence.”

SH: The first sentence acknowledges they have a common philosophical
root. Before McCulloch's 1943 paper, there was a lot of work done
on neural networks but TMs were not involved in the theory.
Post by Don Geddis
But that doesn't mean that they both enjoy the same
flexibility as tools for the pursuit of AI.
SH: The next sentence starts with "But" which signals how they
are not alike. I wrote it like this because I wanted to include
the fact that Connectionism is less committed to Computationalism.
I could have substituted "Symbolic AI" for Computationalism.
AI, classical AI, and GOFAI are all interchangeable terms.
The term "strong AI" wasn't invented until 25 years later.

"Two varieties of AI are often contrasted. Good Old-Fashioned
Artificial Intelligence employs computations which can be carried
out on von Neumann machines. Connectionism makes use of an
architecture of abstract neurons which are connected in networks."

"Last week we saw how classical, symbol handling AI (GOFAI) tries
to represent knowledge and reason with it. In the first part of
today's lecture we'll look (very briefly) at biologically inspired AI."
“WHAT’S REALLY WRONG WITH THE COMPUTATIONAL THEORY OF MIND”
BY Jonathan Knowles

"In recent years, the main focus of the debate around CTM has
been its advantages and disadvantages relative to an alternative
conception of cognitive architecture broadly known as
connectionism. Connectionism involves less commitment to
the literally computational nature of thought, but is most directly
motivated by the desire to give an architecture with greater
fidelity a) to the biological structure of the brain and b) to the
fluid and flexible nature of cognitively-driven behaviour."
Post by Don Geddis
But that doesn't mean that they both enjoy the same
flexibility as tools for the pursuit of AI.
SH: The word "tools" signals it is not a philosophical remark.
Sloman:
"This kind of mathematical universality may have led some people
to the false conclusion that any kind of computer is as good as
any other provided that it is capable of modelling a universal
Turing machine. This is true as a mathematical abstraction, but
misleading, or even false when considering problems of controlling
machines embedded in a physical world."
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (2) if you dislike Computationalism, then it is
NOT the case that Connectionism gives you an alternative theory of mind.
I never said that Connectionism gives you an alternative theory of mind.
I said it was an alternate tool for the pursuit of AI.
All of the symbolic labels assume building a human-level intelligence.
There was practically no such thing as "weak AI" until around 1980.
Post by Don Geddis
I agree that connectionism could be contrasted with classical
programming
(like writing algorithms in C or Pascal), or even with classical AI
(in the sense of declarative facts and inference engines).
Your mistake is believing that this is in any way related to
Computationalism.
As for (1).
"This article is the substance, edited and adapted,
of the keynote address given at the 1992
annual meeting of the American Association for
Artificial Intelligence on 14 July in San Jose, California.

"Machine learning, I think, is playing
an increasingly central role in AI, and this
role will grow. Of course, one cannot talk
about learning without talking about the
meaning of AI in what we do now. Is AI some
simple goal to be achieved? This view is the
popular one. Are the machines intelligent?
Can a machine be intelligent? Do we think
there is some simple Turing or meta-Turing
test that will persuade the ungodly of our
godliness? The answer is—of course not! ...

It reminds me of when we used to believe
that life was something that had to be
defined. How could one tell whether something
was alive or not? I remember that as a
child I read that viruses, those often vicious
infective agents, could be crystallized. How
could life be crystallized and still be life?

This question seems to me to be parallel to
the popular question, But how can a machine
think? It seems to me that although there
might be popular questions, like the Turing
test, most of us are beyond them now. ...

Are the machines intelligent?
Can a machine be intelligent? Do we think
there is some simple Turing or meta-Turing
test that will persuade the ungodly of our
godliness? The answer is—of course not!"

SH: This indicates to me that the AI community
is no longer interested in Computationalism.
So if that is not a demise, it is certainly a
decline. Your point that a lot of cognitivists
are closet Computationalists seems to carry little
weight when they don't think it worth discussing
because, "most of us are beyond them now."
Don Geddis
2007-03-07 00:49:00 UTC
Permalink
Connectionism is less committed to Computationalism. I could have
substituted "Symbolic AI" for Computationalism. AI, classical AI, and
GOFAI are all interchangeable terms. The term "strong AI" wasn't invented
until 25 years later.
I will agree with you that "Symbolic AI" = "AI" (in the early years) =
"classical AI" = "GOFAI".

But I don't think that "Computationalism" is a synonym for those other terms.
If you had just used one of them instead, and contrasted (say) GOFAI with
Connectionism, I probably would have agreed with you at the beginning.
"This kind of mathematical universality may have led some people to the
false conclusion that any kind of computer is as good as any other provided
that it is capable of modelling a universal Turing machine. This is true as
a mathematical abstraction, but misleading, or even false when considering
problems of controlling machines embedded in a physical world."
Absolutely true. Turing Machines (and other abstract forms) only talk about
what is computable, but in the real world computational complexity matters as
well. And architecture can influence complexity.

I agree completely.
(2) if you dislike Computationalism, then it is NOT the case that
Connectionism gives you an alternative theory of mind.
I never said that Connectionism gives you an alternative theory of mind.
I said it was an alternate tool for the pursuit of AI.
But you seem to persist in describing Connectionism as an "alternative" of
some kind to Computationalism. That's why I continue to have trouble, since
it isn't an alternative.

If you think that the mind is computational, then Connectionism is merely one
possible architecture among many under the Computationalism umbrella. If you
don't think the mind is computational, but perhaps "intelligent behavior"
might be, then you do indeed reject Computationalism as such. But
Connectionism still isn't an "alternative". It remains one computational
approach (now, to intelligent behavior instead of to minds) among many.
"This article is the substance, edited and adapted,
of the keynote address given at the 1992
annual meeting of the American Association for
Artificial Intelligence on 14 July in San Jose, California.
Yes, you've quoted this before. The relevant part seems to be
It reminds me of when we used to believe that life was something that had
to be defined. [...] This question seems to me to be parallel to the
popular question, But how can a machine think? It seems to me that although
there might be popular questions, like the Turing test, most of us are
beyond them now.
So the question is, how are we to interpret this most interesting final
SH: This indicates to me that the AI community is no longer interested in
Computationalism. So if that is not a demise, it is certainly a
decline. Your point that a lot of cognitivists are closet Computationalists
seems to carry little weight when they don't think it worth discussing
because, "most of us are beyond them now."
As I said the last time you posted this quote, I think an alternate but more
likely explanation, instead of the decline of Computationalism within AI, is
that most AI researchers no longer concern themselves with any philosophy of
mind. They merely try to make programs which exhibit interesting behavior,
and don't bother worrying about whether that implies a mind or not. The field
is so far from human-level performance, that the question is moot for the time
being. And has been a source of a lot of wasted time in AI in the past.

But, I would argue, this is significantly different from the field of AI
coming to a consensus conclusion that Computationalism is false (and something
else has taken its place?).

"Most of us [AI researchers] are beyond" the philosophy questions, because
they aren't relevant to the day-to-day working lives of the AI researchers.

Not because they are championing a new, superior, philosophy of mind.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Stephen Harris
2007-03-08 22:51:28 UTC
Permalink
Post by Don Geddis
Connectionism is less committed to Computationalism. I could have
substituted "Symbolic AI" for Computationalism. AI, classical AI, and
GOFAI are all interchangeable terms. The term "strong AI" wasn't invented
until 25 years later.
I will agree with you that "Symbolic AI" = "AI" (in the early years) =
"classical AI" = "GOFAI".
But I don't think that "Computationalism" is a synonym for those other terms.
If you had just used one of them instead, and contrasted (say) GOFAI with
Connectionism, I probably would have agreed with you at the beginning.
The first sentence does not "contrast" (your term); I said "compare".

"Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable. But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI."

SH: Now what is usually compared as fundamentally the same? Isn't
Symbolic AI and Connectionism as both being TM computable?

I used Comp as a standin for Symbolic AIs or GOFAI because in the
next sentence I want to contrast Comp and Conn in their approaches
and their degree of commitment to the philosophy of Computationalism.
I think referring to Comp as a "tool" and Conn as a "tool" refers
to Comp in its function as an umbrella philosophy for different
approaches. I think an intelligent and well-educated reader should
be able to figure that out. Read it in a context that makes sense.
People with poor reading comprehension will have trouble with that.
Post by Don Geddis
(2) if you dislike Computationalism, then it is NOT the case that
Connectionism gives you an alternative theory of mind.
I never said that Connectionism gives you an alternative theory of mind.
I said it was an alternate tool for the pursuit of AI.
But you seem to persist in describing Connectionism as an "alternative" of
some kind to Computationalism. That's why I continue to have trouble, since
it isn't an alternative.
Read what is written. "I said it was an alternate tool for the pursuit
of AI."
I'm telling you bluntly what I meant. So don't persist in presenting
your interpretation of what I meant. I don't care if you don't like
how I expressed it. Elsewhere you said it was a limited description
of Computationalism. It wasn't intended as a full description of Comp.
Post by Don Geddis
If you think that the mind is computational, then Connectionism is merely one
possible architecture among many under the Computationalism umbrella. If you
don't think the mind is computational, but perhaps "intelligent behavior"
might be, then you do indeed reject Computationalism as such. But
Connectionism still isn't an "alternative". It remains one computational
approach (now, to intelligent behavior instead of to minds) among many.
"This article is the substance, edited and adapted,
of the keynote address given at the 1992
annual meeting of the American Association for
Artificial Intelligence on 14 July in San Jose, California.
Yes, you've quoted this before. The relevant part seems to be
It reminds me of when we used to believe that life was something that had
to be defined. [...] This question seems to me to be parallel to the
popular question, But how can a machine think? It seems to me that although
there might be popular questions, like the Turing test, most of us are
beyond them now.
So the question is, how are we to interpret this most interesting final
SH: This indicates to me that the AI community is no longer interested in
Computationalism. So if that is not a demise, it is certainly a
decline. Your point that a lot of cognitivists are closet Computationalists
seems to carry little weight when they don't think it worth discussing
because, "most of us are beyond them now."
As I said the last time you posted this quote, I think an alternate but more
likely explanation, instead of the decline of Computationalism within AI, is
that most AI researchers no longer concern themselves with any philosophy of
mind. They merely try to make programs which exhibit interesting behavior,
and don't bother worrying about whether that implies a mind or not. The field
is so far from human-level performance, that the question is moot for the time
being. And has been a source of a lot of wasted time in AI in the past.
DG: [if] "most AI researchers no longer concern themselves with any
philosophy of mind."

SH: Why doesn't that lead to the conclusion that most AI researchers
are also not interested in and don't care about any philosophy of mind?

Since Computationalism is a Philosophy of Mind, then if most AI
researchers don't care/concern or take interest in Phil of Mind,
then they don't care/concern themselves or take interest in Comp.
When a philosophy drops in priority how can you say it is anything
but a decline?

Your core sentence means the same thing that I interpreted the
quote to mean: DG: "lack of concern" means a [lower priority]
= SH: "decline". Being less concerned means fewer adherents to Comp
because embracing such a philosophy doesn't bring a reward.
Why claim people are holding onto a philosophy that no longer
matters to them?
Post by Don Geddis
But, I would argue, this is significantly different from the field of AI
coming to a consensus conclusion that Computationalism is false (and something
else has taken its place?).
They don't need to come to a consensus conclusion that Comp is false.
That is a strawman interpretation of what I wrote. I didn't claim that.
All they have to do is act as if it doesn't matter. That equals decline.
Certainly most cognitivists would still identify with Comp.
But it used to be that Comp was pushed when there was more interest.
There have been a lot of attacks on Computationalism and a minority
but still significant portion of cognitivists have adopted dynamic
systems theory. Nash created and Bringsjord supported this conclusion:
"Computationalism is Dead; Now What?"

A paper with that title would never have been written in 1975.
Post by Don Geddis
"Most of us [AI researchers] are beyond" the philosophy questions, because
they aren't relevant to the day-to-day working lives of the AI researchers.
Not because they are championing a new, superior, philosophy of mind.
Another strawman. I did quote a paper which said many cognitivists liked
Dynamic System Theory. Neither that quote nor I suggested it was the
majority. The mention of Comp is now mainly derogatory while your silent
majority have better things to do. It reminds me of the old Greek gods.
As long as mortals believed in them they existed, but when the belief
faded (as in not praying to a god is like downplaying the importance of
Comp by failure to mention) then the god/Comp faded into obscurity.

Just because a new attractive philosophy hasn't replaced Comp is no
reason to think Comp hasn't lost its allure, just because there is
nothing significantly better. It is really not the case that an
ill-considered philosophy which doesn't explain evolution is better
than no philosophy at all.
Don Geddis
2007-03-09 04:58:47 UTC
Permalink
Post by Stephen Harris
"Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable. But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI."
"Computationalism" and "Connectionism" are different categories of things,
so it doesn't really make sense to "compare" them. Especially since
Connectionism could be (although doesn't need to be) a subtype of
Computationalism.

It's as though you wrote: "Likewise one can compare Fruits and Oranges as
both being composed of atoms." OK, the statement is true, but an orange is
a KIND of fruit. If not the direct meaning, at least the connotation is
incorrect, because your sentence structure implies that Computationalism
and Connectionism are both items in the same category of thing. But they
aren't.

Computationalism is a proposed theory in the philosophy of mind.
Connectionism is an implementation strategy for some AI systems.
In so far as Connectionism relates at all to philosophy of mind, it's
merely a subtype of Computationalism.
Post by Stephen Harris
SH: Now what is usually compared as fundamentally the same? Isn't
Symbolic AI and Connectionism as both being TM computable?
Now, those two, on the other hand, are both the same kinds of things
(both being different implementation strategies for AI systems).
So it indeed would have made sense to compare them. But those aren't the
words you used.
Post by Stephen Harris
I used Comp as a standin for Symbolic AIs or GOFAI because in the
next sentence I want to contrast Comp and Conn in their approaches
and their degree of commitment to the philosophy of Computationalism.
This doesn't make sense to me. I don't even see how it makes sense to you.

You decided to use "Computationalism" as a "standin" for GOFAI/Symbolic AI,
even though they were different things? (And in fact, different KINDS of
things.) And then your whole point was to compare Computationalism to
... the philosophy of Computationalism? Isn't it, by definition, identical?
What kind of comparison could you make on that side?

I understand wishing to explore what kind of commitment Connectionism might
make to the philosophy of Computationalism. That's a possibly interesting
topic. But you have a strange way of getting there.
Post by Stephen Harris
I think referring to Comp as a "tool" and Conn as a "tool" refers
to Comp in its function as an umbrella philosophy for different
approaches.
But Computationalism isn't a tool, and Connectionism isn't a philosophy.
Your wording is very strange.
Post by Stephen Harris
I think an intelligent and well-educated reader should be able to figure
that out. Read it in a context that makes sense. People with poor reading
comprehension will have trouble with that.
I think I understand what you've written. I just think what you've written
makes no sense. It isn't some lack of context that is confusing me. Nor is
it some problem you accuse me of having with reading comprehension. It's
your actual choice of words.
Post by Stephen Harris
Post by Don Geddis
Post by Stephen Harris
I never said that Connectionism gives you an alternative theory of mind.
I said it was an alternate tool for the pursuit of AI.
But you seem to persist in describing Connectionism as an "alternative" of
some kind to Computationalism. That's why I continue to have trouble, since
it isn't an alternative.
Read what is written. "I said it was an alternate tool for the pursuit of
AI." I'm telling you bluntly what I meant. So don't persist in presenting
your interpretation of what I meant. I don't care if you don't like how I
expressed it.
But the problem is that they aren't "alternates" in any sense, whether tools
or philosophies or approaches or implementation strategies.

You can pursue Computationalism with or without Connectionism, and also
vice versa. There are four possible attitudes you might have towards the
combination of the two, and each of them makes sense.

There is no sense in which you must choose between them, and only do one or
the other. Hence, they aren't "alternate tools".
Post by Stephen Harris
Post by Don Geddis
I think an alternate but more likely explanation, instead of the decline
of Computationalism within AI, is that most AI researchers no longer
concern themselves with any philosophy of mind.
Since Computationalism is a Philosophy of Mind, then if most AI
researchers don't care/concern or take interest in Phil of Mind,
then they don't care/concern themselves or take interest in Comp.
When a philosophy drops in priority how can you say it is anything
but a decline?
Well, ok, I suppose I could agree that in this narrow sense you are
technically correct.

But I don't think anyone would read your original sentence, something like
"Computationalism is in decline in AI", and somehow miss the obvious
connotation that it must have been either refuted or replaced with something
better, that there was now an improved theory of mind, and that only old
out-of-touch people still believed in Computationalism.

All of those connotations are false. If you intended them, then I stridently
reject them. If you didn't intend those connotations, then if you just
clarify the matter I suppose we no longer have a disagreement.
Post by Stephen Harris
Your core sentence means the same thing that I interpreted the
quote to mean: DG: "lack of concern" means a [lower priority]
= SH: "decline". Less concerned means fewer adherents to Comp
because embracing such a Philosophy doesn't bring a reward.
The difference is the implication of what current researchers believe.
I would say they (in general) either still support Computationalism, or
else are agnostic or just don't care. Your use of the term "decline"
really does imply that something was found lacking with Computationalism
and so it was rejected, which I don't think is the case.
Post by Stephen Harris
Post by Don Geddis
"Most of us [AI researchers] are beyond" the philosophy questions, because
they aren't relevant to the day-to-day working lives of the AI
researchers. Not because they are championing a new, superior, philosophy
of mind.
Just because no attractive new philosophy has replaced Comp is no
reason to think Comp hasn't lost its allure; there may simply be
nothing significantly better. It is really not the case that an
ill-considered philosophy which doesn't explain evolution is better
than no philosophy at all.
I agree with your final sentence, but you are incorrect to believe that
Computationalism is an example of an ill-considered philosophy. Similarly,
in your belief that it has lost its allure.

The only change in the majority opinion of AI researchers that I'll agree
with is that philosophy of mind is usually not relevant for short-term AI
work.

But among those AI researchers who care at all about mind or consciousness,
the vast majority still accept Computationalism as the most likely theory of
mind. All your talk about "decline" and "ill-considered" and "lost its
allure" notwithstanding.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Whom the gods destroy, they first make mad. -- Euripides
Stephen Harris
2007-03-09 14:10:42 UTC
Permalink
Post by Don Geddis
I agree with your final sentence, but you are incorrect to believe that
Computationalism is an example of an ill-considered philosophy. Similarly,
in your belief that it has lost its allure.
To be polite, I'll just respond that you are out of touch with reality.
There are thousands of papers and books which disparage and defend Comp,
and its parent, The Computational Theory of Mind (CTM).

For instance, the Chinese Room Argument (CRA) produced 1,390,000 hits
when googled. Humans are not dealing with a fact of the matter, that
"cognition is computation", but a philosophical position. So it is a
matter of opinion. And the CRA was factually immensely influential.
It doesn't really matter at all that you think the CRA is flawed. A
few thousand papers have been written which debate this.

What is the competency of cognitivists in Philosophy? They don't enjoy
any privilege of academic expertise, mainly because their promises
for Comp have returned 50 years of failure. It is like a team of doctors
whose patients always die under the digital scalpel. Of course many
people are going to lose confidence in Comp. It is a fact that thousands
of papers have been written and debated which criticize the
philosophical foundations of Computationalism. That is just a fact.
You are in a state of denial of reality when you think it is just my
belief that Comp has lost its allure. It's nearly everybody's opinion.
Demise: "a loss of position or status", a meaning I made sure you
understood by adding "decline".

You keep resorting to flimsy 'by implication you must mean that
Comp has been proven a failure'. I don't mean that. I mean that it
has lost its credibility for many who had an open mind about it
so that its appeal or allure targets a diminishing minority of
those biased towards seeing it as reasonable. They go to University
and take AI courses taught by Professors who have established tenure
in a field whose philosophical foundations are largely perceived to
have crumbled 25 years ago.

How important is a cognitivist's endorsement of Computationalism?
It is virtually worthless. Unlike other academics, they don't
have any record of substantiating their philosophical claims that
would let them appear to be offering "expert" opinions.
It is closer to say their opinions are the object of ridicule:
not stupid, but grandiosely deluded.

"Marvin Minsky of MIT says that the next generation of computers will
be so intelligent that we will `be lucky if they are willing to keep
us around the house as household pets'. My all-time favorite in the
literature of exaggerated claims on behalf of the digital computer is
from John McCarthy, the inventor of the term 'artificial intelligence'.
McCarthy says even 'machines as simple as thermostats can be said to
have beliefs'. And indeed, according to him, almost any machine capable
of problem-solving can be said to have beliefs. I admire McCarthy's
courage. I once asked him: `What beliefs does your thermostat have?' And
he said: `My thermostat has three beliefs -- it's too hot in here, it's
too cold in here, and it's just right in here.' --- J. Searle,"

SH: There is something quite corrupt in the core of a philosophy that
can be manipulated to assign consciousness to teddy bears, beliefs to
thermostats and a mind to an abacus.

Most people are going to deem such attributions as ludicrous. No wonder
the plausibility of Computationalism is going to be undermined and
lose prestige. The majority opinion is that people who embrace Comp
are deluded, not the source of insight and respectable opinions. Comp
is an airy-fairy figment of the imagination similar in plausibility
to the idea that there is no matter, it is all a projection of mind. It
is perpetuated by Mensan academics who are stuck on their own intellect.

That is the reason that saner individuals have moved on when it became
evident that Comp didn't play any role in developing expert systems AI.
And there was Turing's assertion, which implied that if a program
passed the TT, there is no good objection to considering it to have
human-level intelligence. This is not even a theory because it is not
falsifiable. There is no test for mind. The idea that one can assert
that a program has a mind if it can pass the TT, then build a
program that passes the TT, and then conclude, Wow, the program must
be instantiating a mind, is a bit too circular for objective thinkers.
If a successful TT program is ever engineered, it will still do nothing
to resolve the dispute of whether the program has a mind or is a zombie.

Where has your head been during the onslaught of Anti-Computationalism?
It certainly can't have been in the area of keeping abreast of how
Comp is being received because you had already pigeon-holed that idea.

---------------------------------------------------------

"The Dark Ages of AI: A Panel Discussion at AAAI-84

Drew McDermott, M. Mitchell Waldrop, B. Chandrasekaran, John McDermott,
Roger Schank

This panel, which met in Austin, Texas, discussed the "deep unease among
AI researchers who have been around more than the last four years or so
... that perhaps expectations about AI are too high, and that this will
eventually result in disaster."


------------------------------------------------------

"Meanwhile, the 1984 meeting of the American Association for Artificial
Intelligence had a panel discussion on “The Dark Ages of AI.” This
appeared in AI Magazine for the fall of 1985. The field was running low
on new ideas and the business community was getting stale about AI’s
commercial promise. ...

I’ll list three anthology volumes:

R. Núñez and W. J. Freeman, eds. (1999). Reclaiming Cognition.

Port, R. F. and T. van Gelder, Eds. (1995). Mind as Motion:
Explorations in the Dynamics of Cognition.

J. Petitot, F. J. Varela, B. Pachoud and J.-M. Roy, eds. (1999)
Issues in Contemporary Phenomenology and Cognitive Science.

These volumes all argue that the "classical" cognitive science has
failed and we need a more dynamic approach, one that’s more realistic
about the nervous system and, incidentally, one that’s more friendly
with the continental tradition in philosophy. Walter Freeman, in
particular, has been pursuing a rapprochement with Derrida.

My quick and dirty reading of this intellectual history is that it has
been driven by ideas that began crystallizing during the 1950s. Those
ideas have now given up their vitality. There’s nothing new to be gained
from them. We stand in need of fundamentally new starting points. Just
what they might be . . ."

------------------------------------------------------


"[Tim]Maudlin starts off with the assumption that a recording being
conscious is obviously absurd, hence the need for the conscious machine
to handle counterfactuals. If it were not for this assumption then there
would not have been much point to the rest of the paper. Actually,
Putnam and Chalmers also think that the idea of any physical system
implementing any computation is absurd. I am not sure of Mallah's
position (he seems to have disappeared from the list after I joined),
but Hal Finney seemed to give some credence to the idea, and outside the
list Hans Moravec and Greg Egan seem also to at least entertain the
possibility that it is true. I would be interested if anyone is aware of
any other references.
Post by Don Geddis
"Hans Moravec has defended in this list indeed the idea that even a
teddy bear is conscious. You could perhaps search in the archive my
reply to him. I will try to sum up what I think about this, but other
things need to be clarified, perhaps."

-------------------------------------------------------------------

SH: I've seen your defense of the view that a mind will be required
to pass the Turing Test, tied to some statements by Turing. You are
one of the victims that this article talks about:

http://www.cogs.susx.ac.uk/users/blayw/tt.html
"One conclusion that is implied by this view of the history of AI and
Turing's 1950 paper is that for most of the period since its publication
it has been a distraction. While not detracting from the brilliance of
the paper and its central role in the philosophy of AI, it can be argued
that Turing's 1950 paper, or perhaps some strong interpretations of it,
has, on occasion, hindered both the practical development of AI and the
philosophical work necessary to facilitate that development.
Thus one can make the claim that, in an important philosophical sense,
Computing Machinery and Intelligence has led AI into a blind alley from
which it is only just beginning to extract itself. It is also an
implication of the title of this chapter that the Turing test is not
the only blind alley in the progress of AI."
Don Geddis
2007-03-10 04:34:13 UTC
Permalink
Post by Stephen Harris
There are thousands of papers and books which disparage and defend Comp,
and its parent, The Computational Theory of Mind (CTM).
In the philosophy literature, no doubt you are correct.

My claim was always that within the group of _AI_researchers_, CTM is the
dominant philosophy -- assuming they bother to think about the subject at
all. Probably the same for cognitive scientists, although I'll admit to
being less familiar with that field.

But I never meant to make any claim about what most philosophers think on
the subject, or even the general public.
Post by Stephen Harris
For instance, the Chinese Room Argument (CRA) produced 1,390,000 hits
when googled. Humans are not dealing with a fact of the matter, that
"cognition is computation", but a philosophical position. So it is a
matter of opinion. And the CRA was factually immensely influential.
It doesn't really matter at all that you think the CRA is flawed. A
few thousand papers have been written which debate this.
Sure, sure, I grant that Searle's CRA thought experiment has been hugely
influential. In philosophy, and perhaps with the general public.

At the same time, I doubt it's changed the course of a single AI research
project, in these last decades.

On top of that, it factually is a broken argument, merely reinforcing
existing prejudices (on both sides), without actually "proving" anything.
It's an appeal to intuition, not an insight into the real world.
Post by Stephen Harris
It is a fact that thousands of papers have been written and debated which
criticize the philosophical foundations of Computationalism. That is just a
fact.
Yes, I can agree with that. I'm sure many such papers have been written
in philosophy departments.
Post by Stephen Harris
"Marvin Minsky of MIT says that the next generation of computers will
be so intelligent that we will `be lucky if they are willing to keep
us around the house as household pets'.
I agree that this is wildly optimistic. AI has a horrible track record of
predicting future results. Intelligence is a much harder problem than anyone
in the field originally realized.
Post by Stephen Harris
My all-time favorite in the literature of exaggerated claims on behalf of
the digital computer is from John McCarthy, the inventor of the term
'artificial intelligence'. McCarthy says even 'machines as simple as
thermostats can be said to have beliefs'. And indeed, according to him,
almost any machine capable of problem-solving can be said to have
beliefs. I admire McCarthy's courage. I once asked him: `What beliefs does
your thermostat have?' And he said: `My thermostat has three beliefs --
it's too hot in here, it's too cold in here, and it's just right in here.'
--- J. Searle,"
Searle is a strident, biased, anti-AI philosopher. His writings are
propaganda and rhetoric, not an objective view of the true situation.

That said, I believe McCarthy's quotes are accurate.
Post by Stephen Harris
SH: There is something quite corrupt in the core of a philosophy that
can be manipulated to assign consciousness to teddy bears, beliefs to
thermostats and a mind to an abacus.
You haven't even explored the philosophy, to so abruptly dismiss it as
"corrupt".

Even if you don't respect me, John McCarthy surely has a sufficient track
record that you owe it to actually read and understand his position before
casting it off with such negative comments.

If you bothered to look into what McCarthy has actually written, you would
see that he's attacking the notion that there is something "special" or
"magical" about human intellect, something qualitatively different from
machines. Instead, McCarthy argues, it's just points of complexity along the
same spectrum. Thermostats have "a little" belief, humans have "a lot".
There is no magic step #N where machine N does not have beliefs, but with a
minor improvement in some critical functionality, machine N+1 suddenly does.

This philosophy is at least reasonable to consider, and is by no means
obviously "corrupt in the core".
Post by Stephen Harris
Most people are going to deem such attributions as ludicrous.
The opinions of "most people" are not important to the truth of the claim.
Post by Stephen Harris
The majority opinion is that people who embrace Comp are deluded, not the
source of insight and respectable opinions. Comp is an airy-fairy figment
of the imagination similar in plausibility to the idea that there is no
matter, it is all a projection of mind. It is perpetuated by Mensan
academics who are stuck on their own intellect.
Well, at least this isn't a direct ad hominem attack on me, but only an
indirect one, via association. :-)

Perhaps we can stick to the merits of the various claims.
Post by Stephen Harris
And there was Turing's assertion, which implied that if a program passed
the TT, there is no good objection to considering it to have human-level
intelligence. This is not even a theory because it is not
falsifiable. There is no test for mind.
You're right. Turing's paper where he described his Test is not a scientific
theory.
Post by Stephen Harris
The idea that one can assert that a program has a mind if it can pass the
TT, then build a program that passes the TT, and then conclude, Wow,
the program must be instantiating a mind, is a bit too circular for
objective thinkers.
Yes, that is indeed circular reasoning. I would never argue that the
Turing Test provides a definition of mind.
Post by Stephen Harris
If a successful TT program is ever engineered, it will still do nothing to
resolve the dispute of whether the program has a mind or is a zombie.
You may have a point there, although I suspect in the court of public opinion
it will be a lot harder for the average person to deny an entity that
strenuously asserts its own consciousness.
Post by Stephen Harris
Where has your head been during the onslaught of Anti-Computationalism?
It certainly can't have been in the area of keeping abreast of how
Comp is being received because you had already pigeon-holed that idea.
I agree that there are plenty of people who object to Computationalism.

Few of them are AI researchers, however. That has been my main point.
Post by Stephen Harris
"The Dark Ages of AI: A Panel Discussion at AAAI-84
Drew McDermott, M. Mitchell Waldrop, B. Chandrasekaran, John McDermott, Roger
Schank
This panel, which met in Austin, Texas, discussed the "deep unease among AI
researchers who have been around more than the last four years or so ... that
perhaps expectations about AI are too high, and that this will eventually
result in disaster."
Sure, but what does this have to do with Computationalism?

There was a whole "AI Winter" in the 80's (which I lived through), that
was about government funding drying up, mostly because of unmet promises
from the AI projects in the 70's which never came to pass.

But that's got no connection with our current topic of discussion.
Post by Stephen Harris
"Meanwhile, the 1984 meeting of the American Association for Artificial
Intelligence had a panel discussion on "The Dark Ages of AI." This appeared
in AI Magazine for the fall of 1985. The field was running low on new ideas
and the business community was getting stale about AI's commercial
promise. ...
That's all true, but is about short-term (e.g. a few years or decades) of
concern about engineering approaches. It's not at all a comment about the
overall philosophy that minds are computational.
Post by Stephen Harris
"Hans Moravec has defended in this list indeed the idea that even a
teddy bear is conscious.
I'm not familiar with this claim, but absent the surrounding context, it's
hard to evaluate in isolation.

On this newsgroup, for example, Curt has claimed that rocks are "conscious",
but he says that more to point out the problems with the definitions that
other people assign to that word, than to make any interesting claim about
rocks.

I'm pretty sure that both Curt and Moravec don't think that rocks or teddy
bears have the same level of subjective experience as humans.
Post by Stephen Harris
SH: I've seen your defense of the view that a mind will be required
to pass the Turing Test, tied to some statements by Turing.
Yes, that's correct.
Post by Stephen Harris
http://www.cogs.susx.ac.uk/users/blayw/tt.html
How so? I actually agree with almost everything in that article.

I've said many times on this very group that the Turing Test is merely a
sufficient test for intelligence/consciousness, but not a necessary one.
Whitby, in that paper, is quite correct to note that research projects which
pursue an imitation game ("the Turing Test") are almost certainly
non-productive at this point in time. (I've criticized the Loebner Prize,
which is a "restricted" form of the Turing Test, in this group recently.)
Moreover, it is hard to imagine that it will ever be worthwhile to engineer a
real system to pass a real Turing Test.

Whitby has a great analogy, that if there were a Turing Test-like contest
for flight, where the goal is to imitate a bird, it seems likely that mankind
is unable to pass this test even today. Despite our success with fighter
jets and spy planes and rocket ships to the moon.

Real AI isn't about trying to pass the Turing Test, no question.

(But all that is separate from the philosophy question of, _if_ a system did
manage to pass the Turing Test, would it _then_ necessarily have a mind?
I will continue to profess my belief that yes, it would need to have a mind.)
Post by Stephen Harris
http://www.cogs.susx.ac.uk/users/blayw/tt.html
"One conclusion that is implied by this view of the history of AI and
Turing's 1950 paper is that for most of the period since its publication it
has been a distraction. While not detracting from the brilliance of the paper
and its central role in the philosophy of AI, it can be argued that Turing's
1950 paper, or perhaps some strong interpretations of it, has, on occasion,
hindered both the practical development of AI and the philosophical work
necessary to facilitate that development.
I think he's overstating the "hindered [...] development" part. I think only
a tiny fraction of AI researchers bothered to try to write programs directly
to pass the Turing Test.

But sure, it's a distraction for real AI research.
Post by Stephen Harris
Thus one can make the claim that, in an important philosophical sense,
Computing Machinery and Intelligence has led AI into a blind alley from which
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Note: that's the title of Turing's 1950 paper. He's not criticizing
the research fields of computing machinery or intelligence.
Post by Stephen Harris
it is only just beginning to extract itself. It is also an implication of the
title of this chapter that the Turing test is not the only blind alley in
the progress of AI."
Sure. I don't object.

How does this relate to anything we've been talking about?

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
If you ever go temporarily insane, don't shoot somebody, like a lot of people
do. Instead, try to get some weeding done, because you'd really be surprised.
-- Deep Thoughts, by Jack Handey
Stephen Harris
2007-03-10 16:34:22 UTC
Permalink
Post by Don Geddis
But I never meant to make any claim about what most philosophers think on
the subject, or even the general public.
Post by Stephen Harris
SH: There is something quite corrupt in the core of a philosophy that
can be manipulated to assign consciousness to teddy bears, beliefs to
thermostats and a mind to an abacus.
You haven't even explored the philosophy, to so abruptly dismiss it as
"corrupt".
Even if you don't respect me, John McCarthy surely has a sufficient track
record that you owe it to actually read and understand his position before
casting it off with such negative comments.
I've read over 2000 books, papers and web essays. The gap is in your
reading. I keep wondering how you can have a Ph.D. and know so little.
You might think I'm trying to insult you, but I'm quite serious. I
usually suspect you are another "I'm a Stanford graduate." fraud or
that maybe you earned some degree but suffered brain damage afterward.

Another thing is that your responses are full of critical thinking
errors: banal responses which divert attention to something trivial.
For instance, I was quoting remarks which addressed the ability to
use knowledge and philosophical acumen to predict the future of AI.
What does it matter that Searle collected them? And I have no idea
why you think I wouldn't know he is anti-AI.

My point is that being a big name in AI gives you zero qualifications
to pass yourself off as having expert credentials to evaluate the
likelihood of the Computationalism thesis of 'cognition is computation'.
It isn't something you can go to school and learn because nobody has
made strong AI work so they have nothing to teach. All they can teach
is a philosophical speculation. I use the word "strong" because I'm
not talking about useful theorem provers and medical diagnosis etc.

Not even for somebody as gifted and wise as Minsky who has a talent
for philosophy. I posted a few years ago here why I don't respect
McCarthy even though he developed Lisp. He admitted that he knew that
his claim that 'thermostats have beliefs' was controversial. He kept
it because he felt the controversy would improve the chances of being
funded; he wanted to establish the field of AI.

Who does the "thermostats have beliefs" claim appeal to? Geeks who
dabble in Linux, the Game of Life and virtual reality games. So
they get recruited by such propaganda thus paying the University
which funds the AI department and pays the profs salaries. What does
the student receive that helps realize the strong AI dream? They
wind up unemployed, underemployed, or working on expert systems.
I don't respect McCarthy's morals nor his midget philosophizing.
Post by Don Geddis
If you bothered to look into what McCarthy has actually written, you would
see that he's attacking the notion that there is something "special" or
"magical" about human intellect, something qualitatively different from
machines. Instead, McCarthy argues, it's just points of complexity along the
same spectrum. Thermostats have "a little" belief, humans have "a lot".
There is no magic step #N where machine N does not have beliefs, but with a
minor improvement in some critical functionality, machine N+1 suddenly does.
So you have read McCarthy and I haven't??! I mentioned that CyC would
get some (easier) questions right during a Turing Test. I then claimed
that some future TT passing program might get a lot more questions right
and be adjudicated as instantiating a mind. I said that Comp claimed
that the mind/intelligence would be on a spectrum so that CyC would be
on the same scale as the future TT passing program. I drew a diagram.
I said that if the future TTPP had a human-level mind then CyC had
some mind.

You rejected this description, starting with that I shouldn't have used
the word "spectrum" but should have used the word "degree". Then you
claimed that CyC was so far down the scale that its result had no
bearing on the "instantiated mind" of the future TT passing program,
because that TTPP would need and have so much more stuff to pass the
TT, that it couldn't be compared to something as primitive as CyC.
CyC and the future TTPP just weren't in the same category.

Explain to me how I have mistakenly applied McCarthy's claim while
you correctly have applied it? After all, you say you believe in
Comp, so how is your thinking consistent?
Post by Don Geddis
This philosophy is at least reasonable to consider, and is by no means
obviously "corrupt in the core".
Post by Stephen Harris
Most people are going to deem such attributions as ludicrous.
The opinions of "most people" are not important to the truth of the claim.
Perhaps we can stick to the merits of the various claims.
Post by Stephen Harris
If a successful TT program is ever engineered, it will still do nothing to
resolve the dispute of whether the program has a mind or is a zombie.
You may have a point there, although I suspect in the court of public opinion
it will be a lot harder to the average person to deny an entity that
strenuously asserts its own consciousness.
The opinions of "most people" are not important to the truth of the claim.
Post by Stephen Harris
"The Dark Ages of AI: A Panel Discussion at AAAI-84
Drew McDermott, M. Mitchell Waldrop, B. Chandrasekaran, John McDermott, Roger
Schank
This panel, which met in Austin, Texas, discussed the "deep unease among AI
researchers who have been around more than the last four years or so ... that
perhaps expectations about AI are too high, and that this will eventually
result in disaster."
Sure, but what does this have to do with Computationalism?
There was a whole "AI Winter" in the 80's (which I lived through), that
was about government funding drying up, mostly because of unmet promises
from the AI projects in the 70's which never came to pass.
But that's got no connection with our current topic of discussion.
It was an introduction to the next quote on the same topic, the Dark Ages.
In the 70s, Connectionism was just about non-existent. All AI was
symbolic manipulation. There wasn't strong AI and weak AI, just AI. And
that AI inherited "mind is computation" from the 40s through McCulloch
and Pitts, passed on to Wiener and Von Neumann. The Computational Theory
of Mind (CTM), the parent of and synonymous with Computationalism, was
published in 1960, but the concept was already in place. So Comp acting
on Symbolic AI was in place for 20 or 25 years before Connectionism
stormed the gates. To me, it is easy to see how the distinction between
Symbolic AI and Comp was blurred, because there wasn't any contending
concept that one needed to be careful to avoid confusing it with.

The failure of AI was the failure of its conception of progress
towards human-level intelligence programs, a conception inherent in AI
at its inception and, somewhat later, in the more formal positions of
CTM and Computationalism. I also don't think that one's opinion of the
value of one's guiding philosophy is supported (rather, it declines)
when one's project fails and funding is cut or much harder to get.
There has to be a real motivation for so many jumping onto the
Connectionist bandwagon when it first became known/popular.
Post by Don Geddis
Post by Stephen Harris
"Meanwhile, the 1984 meeting of the American Association for Artificial
Intelligence had a panel discussion on "The Dark Ages of AI." This appeared
in AI Magazine for the fall on 1985. The field was running low on new ideas
and the business community was getting stale about AI's commercial
promise. ...
That's all true, but is about short-term (e.g. a few years or decades) of
concern about engineering approaches. It's not at all a comment about the
overall philosophy that minds are computational.
Absence of evidence is not evidence of absence. But when physical
realizations of one's guiding philosophy don't materialize, then
faith in that promise fades, which is a comment about attitude toward
the promise, not its theoretical truth value. Humans work on Bayesian
plausibility.
Post by Don Geddis
On this newsgroup, for example, Curt has claimed that rocks are "conscious",
but he says that more to point out the problems with the definitions that
other people assign to that word, than to make any interesting claim about
rocks.
I'm pretty sure that both Curt and Moravec don't think that rocks or teddy
bears have the same level of subjective experience as humans.
No, I don't think so either. I don't think that is the point. What is
the mechanism for rocks or teddy bears to have any subjective experience
(or consciousness) whatsoever?! How does this mechanism establish a
boundary on the larger scale, why isn't the universe conscious? I
think your mechanism is equivalent to some freakish mutation of
panpsychism.
Post by Don Geddis
Post by Stephen Harris
SH: I've seen your defense of the view that a mind will be required
to pass the Turing Test, tied to some statements by Turing.
How does this relate to anything we've been talking about?
It relates somewhat to the McCarthy point mentioned earlier. The zombie
case is considered an attack on computationalism. Zombie = a TT-passing
program with no mind.

"I treat computationalism as a thesis that defines the human cognitive
system as a physical, symbolic and semantic system, in such a manner
that the description of its physical states is isomorphic with the
description of its symbolic conditions, so that this isomorphism is
semantically interpretable. In the second section (Section II), I
discuss the Zombie arguments, and the epistemological-modal problems
connected with them, which refute sustainability of computationalism.
The proponents of the Zombie arguments build their attack on the
computationalism on the basis of thought experiments with creatures
behaviorally, functionally and physically indistinguishable from human
beings, though these creatures do not have phenomenal experiences.
According to the consequences of these thought experiments - if
zombies are possible, then, the computationalism doesn't offer a
satisfying explanation of consciousness."
A TT-passing program with no mind, a zombie, as compared with your
claim that no zombie can pass the TT, that it must have a mind.
How is the mechanism for instantiating a mind backed by a
consistent philosophy? You called the philosophy "reasonable".
That would imply that you have explored it. You contend that
I have not. My dismissal is anything but abrupt; it is considered.
Post by Don Geddis
You haven't even explored the philosophy, to so abruptly
dismiss it as "corrupt".
ON REALIZING COMPUTATIONAL SUFFICIENCY
EMILIANO BOCCARDI [page 2]
Don Geddis
2007-03-13 04:43:23 UTC
Permalink
I keep wondering how you can have a Ph.d and know so little. You might
think I'm trying to insult you, but I'm quite serious. I usually suspect
you are another "I'm a Stanford graduate." fraud or that maybe you earned
some degree but suffered brain damage afterward.
I don't recall saying that I have a PhD, nor that I'm a Stanford graduate.
I'm not the one touting my credentials on this group. I'm happy to let the
quality of my ideas speak for themselves, without resorting to even the
implication of a so-called argument from authority.
I posted a few years ago here why I don't respect McCarthy even though he
developed Lisp.
He did a lot more than just invent Lisp :-).
He admitted that he knew that his claim that 'thermostats have beliefs' was
controversial. He kept it because he felt the controversy would improve the
chances of being funded; he wanted to establish the field of AI.
Sure, McCarthy was being deliberately controversial. He's earned the right.
I mentioned that CyC would get some (easier) questions right during a Turing
Test. I then claimed that some future TT passing program might get a lot
more questions right and be adjudicated as instantiating a mind. I said
that Comp claimed that mind/intelligence would be on a spectrum, so that
CyC would be on the same scale as the future TT passing program. I drew a
diagram. I said that if the future TTPP had a human-level mind then CyC
had some mind.
You fail to understand that your conclusion doesn't follow from your premises.

Yes, passing a Turing Test implies that the candidate has a mind.

Yes, mind/intelligence is a spectrum.

No, answering a few Turing Test questions (and then failing) does not allow
you to draw any strong conclusions about the failing candidate. Least of
which is that it must have "a bit" of mind, somewhere on the mind spectrum.

The Turing Test does not provide "partial credit".
You rejected this description, starting with the claim that I shouldn't have
used the word "spectrum" but should have used the word "degree". Then you
claimed that CyC was so far down the scale that its result had no bearing on
the "instantiated mind" of the future TT passing program, because that TTPP
would need and have so much more stuff to pass the TT that it couldn't be
compared to something as primitive as CyC. CyC and the future TTPP just
weren't in the same category.
That's basically right.
Explain to me how I have mistakenly applied McCarthy's claim while
you correctly have applied it?
Because:
1. "Belief" is much simpler than "mind" or "consciousness"; and
2. Thermostats may be more interesting (in terms of mind) than Cyc.

Thermostats have sensors on the real world, have internal state that models
some physical attribute of the real world, have effectors on the real world,
and actually demonstrate some goal-seeking behavior on a continuous basis.

They're very simple devices. But the right sort of simple devices.
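Don's characterization of a thermostat (a sensor, internal state that models the world, an effector, and continuous goal-seeking) can be sketched in a few lines. This is a minimal illustration only; the class name, dead band, and thresholds are hypothetical, not anyone's actual design:

```python
# Minimal sketch of a thermostat as a goal-seeking device:
# sensor reading in, internal state updated, effector driven.

class Thermostat:
    def __init__(self, setpoint, hysteresis=1.0):
        self.setpoint = setpoint      # the "goal" temperature
        self.hysteresis = hysteresis  # dead band to avoid rapid cycling
        self.heater_on = False        # internal state: the "belief"

    def step(self, sensed_temp):
        """One control cycle: read sensor, update state, drive effector."""
        if sensed_temp < self.setpoint - self.hysteresis:
            self.heater_on = True     # too cold: turn heater on
        elif sensed_temp > self.setpoint + self.hysteresis:
            self.heater_on = False    # warm enough: turn heater off
        return self.heater_on         # effector command

t = Thermostat(setpoint=20.0)
print(t.step(17.0))  # True: below the dead band, heat
print(t.step(22.0))  # False: above the dead band, idle
```

Whether state this simple deserves the word "belief" is exactly what McCarthy's thermostat claim was provoking.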
In the 70s, Connectionism was just about non-existent. All AI was symbolic
manipulation. There wasn't strong AI and weak AI, just AI. And that AI
inherited the "mind is computation" idea from the 40s, through McCulloch and
Pitts, passed on to Wiener and von Neumann. The Computational Theory of Mind
(CTM), the parent of and synonymous with Computationalism, was published in
1960, but the concept was already in place. So Comp acting on Symbolic AI was
in place for 20 or 25 years before Connectionism stormed the gates. To me, it
is easy to see how the distinction between Symbolic AI and Comp was blurred,
because there wasn't any contending concept that one needed to be careful to
avoid confusing it with.
Yes, I can see how, if you just read about history, you might be confused on
this issue. Or even if you started working in AI in the early '80s.

But surely we've had a few decades of experience with connectionism now.
How are you still confused about the relationship between computationalism
and connectionism? Why, today, do you appear to believe that they are in
conflict?

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Hard work never killed anybody, but why take a chance?
-- Charlie McCarthy (ventriloquist puppet)
Alpha
2007-03-13 16:24:07 UTC
Permalink
Post by Don Geddis
Sure, McCarthy was being deliberately controversial. He's earned the right.
So did Penrose!!! More so than McCarthy (whom I met and spoke with at a
Phoenix Conference on Computers and Communications).
Post by Don Geddis
I mentioned that CyC would get some questions(easier) right during a Turing
Test. I then claimed that some future TT passing program might get a lot
more questions right and be adjudicated as instantiating a mind. I said
that Comp claimed that the mind/intelligence would be on a spectrum so that
CyC would be on the same scale as the future TT passing program. I drew a
diagram. I said that if the future TTPP had a human-level mind than CyC
had some mind.
You fail to understand that your conclusion doesn't follow from your premises.
Yes, passing a Turing Test implies that the candidate has a mind.
Nope. The implied "mind" will not be a real mind without consciousness (as
Mind = consciousness + intelligence).

"Mind refers to the collective aspects of intellect and consciousness..."
(Wiki)


And passing the TT does not imply consciousness.
--
Posted via a free Usenet account from http://www.teranews.com
Don Geddis
2007-03-13 18:22:03 UTC
Permalink
Post by Alpha
Post by Don Geddis
Sure, McCarthy was being deliberately controversial. He's earned the right.
So did Penrose!!!
You are correct that Penrose has made significant enough contributions to
science (in cosmology) and mathematics (in geometry), that his new ideas
should not be dismissed out of hand, but ought to be investigated.

Hence, when Penrose started writing about AI, in "The Emperor's New Mind"
in 1989, I bought the book and read it. As you say, Penrose has earned the
right to be taken seriously. He's not a quack.

Imagine my surprise to discover that his knowledge and intuitions in AI are
that of an ignorant primitive savage. His first book received widespread and
damning criticism from the AI community.

Penrose appeared to ignore all that criticism, and continued to write and
publish the same basic ideas in a sequence of further papers and books.

I followed them in detail for awhile, but how much trash do you have to sift
through before giving up on somebody, and realizing that he's merely spouting
propaganda to reinforce his prejudices, not making any real contribution to
science?

I've long since abandoned looking to Penrose for insight in AI. But this is
_after_ having initially given him the benefit of the doubt. Repeatedly.


As Carl Sagan once wrote in "The Burden of Skepticism":

Another writer again agreed with all my generalities, but said that
as an inveterate skeptic I have closed my mind to the truth. Most
notably I have ignored the evidence for an Earth that is six thousand
years old. Well, I haven't ignored it; I considered the purported
evidence and then rejected it. There is a difference, and this is a
difference, we might say, between prejudice and postjudice.
Prejudice is making a judgment before you have looked at the facts.
Postjudice is making a judgment afterwards. Prejudice is terrible,
in the sense that you commit injustices and you make serious
mistakes. Postjudice is not terrible. You can't be perfect of
course; you may make mistakes also. But it is permissible to make a
judgment after you have examined the evidence. In some circles it is
even encouraged.
Post by Alpha
More so than McCarthy (whom I met and spoke with at a Phoenix Conference on
Computers and Communications).
Ooooh! Are we playing six degrees of separation? Can I play?

McCarthy was one of the judges at my thesis defense. He basically slept
through the public presentation. During the private session, he ignored
the questions from the other professors (and my answers). When it came
his turn to ask a question, he rambled forth a somewhat interesting story
about some work being done by a guy he had recently met, and he explained
how that other research went, and then he finally concluded with "does that
relate to your thesis at all?"

I answered, "no". McCarthy had no further questions.

Fun times. Fun times.

But McCarthy's fundamental contributions to computer science and AI cannot
be denied. He invented Lisp, still one of the most impressive programming
languages in the world decades later. He invented timesharing. He named
the field of artificial intelligence. With the possible exception of Marvin
Minsky, who sometimes posts on c.a.p, it's almost certain that McCarthy has
done more for science (and AI in particular) than anyone posting on this
group ever will for their entire lives.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Seen on Pavlov's door: "Knock. Don't ring bell."
m***@media.mit.edu
2007-03-13 21:28:17 UTC
Permalink
Post by Don Geddis
Post by Alpha
Post by Don Geddis
Sure, McCarthy was being deliberately controversial. He's earned the right.
So did Penrose!!!
You are correct that Penrose has made significant enough contributions to
science (in cosmology) and mathematics (in geometry), that his new ideas
should not be dismissed out of hand, but ought to be investigated.
Hence, when Penrose started writing about AI, in "The Emperor's New Mind"
in 1989, I bought the book and read it. As you say, Penrose has earned the
right to be taken seriously. He's not a quack.
Imagine my surprise to discover that his knowledge and intuitions in AI are
that of an ignorant primitive savage. His first book received widespread and
damning criticism from the AI community.
Penrose appeared to ignore all that criticism, and continued to write and
publish the same basic ideas in a sequence of further papers and books.
I followed them in detail for awhile, but how much trash do you have to sift
through before giving up on somebody, and realizing that he's merely spouting
propaganda to reinforce his prejudices, not making any real contribution to
science?
I've long since abandoned looking to Penrose for insight in AI. But this is
_after_ having initially given him the benefit of the doubt. Repeatedly.
[snip]

So have I. The standard interpretation of Godel's theorem is that if
(1) a logical system is consistent and (2) it is large enough to
include arithmetic (so that it can make inferences about Godel
Numbers), then there will be true statements that it cannot prove.
(If the system is inconsistent, then it can prove any statement, even
a false one.)

Godel's proof is not difficult, because it is very much like Cantor's
diagonal argument, which shows that the real numbers cannot all be
captured in any enumerated list. But the interpretation for human
thought is difficult. Myself, I think it's irrelevant, because human
reasoning is not consistent enough.
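The diagonal construction alluded to above can be illustrated finitely: given any list of digit sequences, build a sequence that differs from the i-th entry at position i, so it cannot appear anywhere in the list. The function name and sample rows are illustrative only:

```python
# Finite illustration of Cantor's diagonal argument: construct a
# digit sequence guaranteed to be absent from a given list, by
# differing from row i at column i.

def diagonal_escape(sequences):
    """Return a digit sequence that appears nowhere in `sequences`."""
    # Change the i-th digit of the i-th row (mod 10 keeps it a digit).
    return [(seq[i] + 1) % 10 for i, seq in enumerate(sequences)]

listed = [
    [3, 1, 4, 1],
    [2, 7, 1, 8],
    [1, 4, 1, 4],
    [5, 0, 0, 0],
]
d = diagonal_escape(listed)
print(d)            # [4, 8, 2, 1]
print(d in listed)  # False: it differs from every row somewhere
```

Godel's construction is analogous in spirit: a sentence built to escape every proof in the enumeration, just as the diagonal sequence escapes every row.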

In "The Emperor's New Mind," Roger Penrose claims that humans can
somehow transcend this theorem, but I think that's because he
unjustifiably assumed human consistency, and this led him to a wrong
conclusion. So yes, he's extremely good at other things, but here I
think he made a bad mistake.
Stephen Harris
2007-03-13 17:00:16 UTC
Permalink
Post by Don Geddis
I keep wondering how you can have a Ph.d and know so little. You might
think I'm trying to insult you, but I'm quite serious. I usually suspect
you are another "I'm a Stanford graduate." fraud or that maybe you earned
some degree but suffered brain damage afterward.
I don't recall saying that I have a PhD, nor that I'm a Stanford graduate.
I'm not the one touting my credentials on this group. I'm happy to let the
quality of my ideas speak for themselves, without resorting to even the
implication of a so-called argument from authority.
You are the one who provides a link to your website which makes the
claim: "1987-1995 Ph.D. (and Master's) in Artificial Intelligence,
Computer Science Stanford University"

The quality of your ideas do speak for themselves, superficial and
short of ethics, like a failed shyster lawyer. Your record of having
a website at Stanford (found on that website list Alpha provided)
shows you first having an account in 1998, three years after your
website claims you obtained a Ph.D from Stanford. One way or another
you are a fraud. You are certainly a pseudo-intellectual who constructs
absurd arguments in a futile attempt to save face. Well, the truth will
out as Mulder says. You have given me the idea of including a signature
which exemplifies the "quality of my ideas speaking for themselves,"...
Post by Don Geddis
No, answering a few Turing Test questions (and then failing) does not allow
you to draw any strong conclusions about the failing candidate. Least of
which is that it must have "a bit" of mind, somewhere on the mind spectrum.
The Turing Test does not provide "partial credit".
The TT is irrelevant to strong AI (it is like a byproduct capability);
strong AI is a claim about the program itself.
It doesn't provide any credit. _If_ a program can pass the TT later
the same day (without changing the TT program itself) then that program
already has a mind according to Computationalism. It is not the case
that _because_ the TT program passes the TT, that this process somehow
causally confers or instantiates a mind on the computer. If that TT
program is running, and not processing a TT test, but some ordinary
conversation, according to Comp it has instantiated a mind. Comp is
about the program, not the TT. The TT will affect the opinion of
human observers. The opinion of the observers will not cause the program
to flicker on and off like a thermostat controlling a furnace; whether
a program instantiates a mind isn't controlled by observer beliefs.
Post by Don Geddis
SH: Explain to me how I have mistakenly applied McCarthy's claim while
you correctly have applied it?
1. "Belief" is much simpler than "mind" or "consciousness"; and
2. Thermostats may be more interesting (in terms of mind) than Cyc.
Thermostats have sensors on the real world, have internal state that models
some physical attribute of the real world, have effectors on the real world,
and actually demonstrate some goal-seeking behavior on a continuous basis.
Doesn't the computer which is running the CyC program have a belief,
because it has a CPU sensor that says "it is just the right temperature",
or, if it is too hot, chooses to reboot? CPUs also malfunction when it
is too cold, but I don't know whether that temperature is acted upon.


-- My shiny new signature which will pay long term tribute to you. \/\/

Stephen
Don Geddis, on Mar 12, 8:43 pm passed on the following wisdom:
1. "Belief" is much simpler than "mind" or "consciousness"; and
2. Thermostats may be more interesting (in terms of mind) than Cyc.
Neil W Rickert
2007-03-13 18:58:15 UTC
Permalink
Post by Stephen Harris
Post by Don Geddis
I keep wondering how you can have a Ph.d and know so little. You might
think I'm trying to insult you, but I'm quite serious. I usually suspect
you are another "I'm a Stanford graduate." fraud or that maybe you earned
some degree but suffered brain damage afterward.
I don't recall saying that I have a PhD, nor that I'm a Stanford graduate.
I'm not the one touting my credentials on this group. I'm happy to let the
quality of my ideas speak for themselves, without resorting to even the
implication of a so-called argument from authority.
You are the one who provides a link to your website which makes the
claim: "1987-1995 Ph.D. (and Master's) in Artificial Intelligence,
Computer Science Stanford University"
The quality of your ideas do speak for themselves, superficial and
short of ethics, like a failed shyster lawyer. Your record of having
a website at Stanford (found on that website list Alpha provided)
shows you first having an account in 1998, three years after your
website claims you obtained a Ph.D from Stanford. One way or another
you are a fraud. You are certainly a pseudo-intellectual who constructs
absurd arguments in a futile attempt to save face. Well, the truth will
out as Muldur says. You have given me the idea of including a signature
which exemplifies the "quality of my ideas speaking for themselves,"...
When you write this sort of stuff, it does not in any way affect
my opinion of Don Geddis. But it does lower my opinion of Stephen
Harris. I judge Don on what he writes. And I judge you on what
you write. What you have been writing includes inappropriate
personal attacks.
Stephen Harris
2007-03-13 20:14:35 UTC
Permalink
Post by Neil W Rickert
Post by Stephen Harris
Post by Don Geddis
I keep wondering how you can have a Ph.d and know so little. You might
think I'm trying to insult you, but I'm quite serious. I usually suspect
you are another "I'm a Stanford graduate." fraud or that maybe you earned
some degree but suffered brain damage afterward.
I don't recall saying that I have a PhD, nor that I'm a Stanford graduate.
I'm not the one touting my credentials on this group. I'm happy to let the
quality of my ideas speak for themselves, without resorting to even the
implication of a so-called argument from authority.
You are the one who provides a link to your website which makes the
claim: "1987-1995 Ph.D. (and Master's) in Artificial Intelligence,
Computer Science Stanford University"
The quality of your ideas do speak for themselves, superficial and
short of ethics, like a failed shyster lawyer. Your record of having
a website at Stanford (found on that website list Alpha provided)
shows you first having an account in 1998, three years after your
website claims you obtained a Ph.D from Stanford. One way or another
you are a fraud. You are certainly a pseudo-intellectual who constructs
absurd arguments in a futile attempt to save face. Well, the truth will
out as Muldur says. You have given me the idea of including a signature
which exemplifies the "quality of my ideas speaking for themselves,"...
When you write this sort of stuff, it does not in any way affect
my opinion of Don Geddis. But it does lower my opinion of Stephen
Harris. I judge Don on what he writes. And I judge you on what
you write. What you have been writing includes inappropriate
personal attacks.
But who are you to judge? You used to write posts peppered with
vile swear words. Why were those appropriate personal attacks?

I think you've changed since I used to follow your posts. Now
you are yet another 'do as I say, not as I do' senile twit.


Don Geddis provided a link to his website where he claims to
have earned a Ph.D. in Artificial Intelligence, from Stanford.
Ph.Ds from Stanford are very bright and very well educated.

What I encountered instead was somebody closer to Lester Zick.
In the meantime I wasted more and more time. It has irritated
me that he lied in order to claim my time. I am not one to feed
the pigeons so that they crap on other people's heads. Instead
I would rather shove crap down the throats of those who defend
pigeon feeders. Your hypocrisy has managed to thoroughly disgust
me. I believe I am going to survive any "plonks".
--
Stephen
Don Geddis, on Mar 12, 8:43 pm passed on the following wisdom:
1. "Belief" is much simpler than "mind" or "consciousness"; and
2. Thermostats may be more interesting (in terms of mind) than Cyc.
Neil W Rickert
2007-03-13 23:32:09 UTC
Permalink
Post by Stephen Harris
But who are you to judge? You used to write posts peppered with
vile swear words. Why were those appropriate personal attacks?
I think you will find that those were responses to personal
attacks, rather than something I initiated.
Post by Stephen Harris
Don Geddis provided a link to his website where he claims to
have earned a Ph.D. in Artificial Intelligence, from Stanford.
Ph.Ds from Stanford are very bright and very well educated.
What I encountered instead was somebody closer to Lester Zick.
I haven't seen anything trollish in Don's posts. I haven't followed
them all, so maybe I missed something. From what I have seen, he
takes respectable positions (which doesn't mean that I agree with
all of them).
Stephen Harris
2007-03-14 00:51:09 UTC
Permalink
Post by Neil W Rickert
Post by Stephen Harris
But who are you to judge? You used to write posts peppered with
vile swear words. Why were those appropriate personal attacks?
I think you will find that those were responses to personal
attacks, rather than something I initiated.
Post by Stephen Harris
Don Geddis provided a link to his website where he claims to
have earned a Ph.D. in Artificial Intelligence, from Stanford.
Ph.Ds from Stanford are very bright and very well educated.
What I encountered instead was somebody closer to Lester Zick.
I haven't seen anything trollish is Don's posts. I haven't followed
them all, so maybe I missed something. From what I have seen, he
takes respectable position (which doesn't mean that I agree with
all of them).
All right, never mind.

DG wrote:
2. "Thermostats may be more interesting (in terms of mind) than Cyc."

SH: This remark seems so obtuse that I can't imagine it being
somebody's respectable conviction. He is trying to repudiate
the idea that CyC could be considered a primitive TT program
(not counting gimmicks like not adding too fast, etc.).
I think CyC is better than early TT program attempts.
Neil W Rickert
2007-03-14 01:11:01 UTC
Permalink
Post by Stephen Harris
2. "Thermostats may be more interesting (in terms of mind) than Cyc."
I agree with him on that.
Post by Stephen Harris
SH: This remark seems so obtuse that I can't imagine it being
somebody's respectable conviction. He is trying to repudiate
the idea that CyC could be considered a primitive TT program.
I don't know what Don is or is not trying to repudiate. In my
opinion, CYC is a dead end. I don't expect any new insight into
human cognition to come out of it. However, thermostats might yet
be a source of useful insight.
--
DO NOT REPLY BY EMAIL - The address above is a spamtrap.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115
Stephen Harris
2007-03-14 03:22:03 UTC
Permalink
Post by Neil W Rickert
Post by Stephen Harris
2. "Thermostats may be more interesting (in terms of mind) than Cyc."
I agree with him on that.
Post by Stephen Harris
SH: This remark seems so obtuse that I can't imagine it being
somebody's respectable conviction. He is trying to repudiate
the idea that CyC could be considered a primitive TT program.
However, thermostats might yet
be a source of useful insight.
"My intuition is that the bimetallic strip device does not work by
computation but is entirely explained in terms of physical-level
processes that implement feedback control. It does not manipulate
symbolic representations of its environment or task. If you have a
fancy microprocessor-controlled programmable thermostat then there is
likely to be symbolic computation going on between inputs and outputs,
the input (including that from a clock) is getting symbolized (coded)
and manipulated in accordance with a formal program.

*I think that to whatever extent that the function of the relevant
brain circuitry turns out to be more like that of the bimetallic
strip than like the fancy digital thermostat, then to that extent
the interesting high-level hypothesis that motivated the idea of
computational models in cognitive science ("cognitivism", the
"physical symbol system hypothesis") would stand refuted.*

Of course you could always call it (and anything else) "analog"
computation, but that was no part of the interesting idea behind
computational models in psychology."
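The contrast drawn in the quoted passage can be sketched in simulation, with the caveat that any simulation is itself symbolic, so only the modeled mechanism differs: the strip is described by a continuous physical quantity, while the programmable device compares a coded reading against a rule table. All names and constants below are illustrative assumptions:

```python
# The bimetallic strip: deflection is a continuous function of
# temperature that directly closes or opens a contact. Nothing in
# the modeled device manipulates a symbolic representation.
def bimetallic_contact(temp_c, setpoint_c=20.0):
    deflection = 0.1 * (temp_c - setpoint_c)  # mm per degree (illustrative)
    return deflection < 0.0                   # contact closed -> call for heat

# The programmable thermostat: the reading is symbolized (coded) and
# manipulated under a formal program, here a clock-dependent rule table.
def programmable_thermostat(temp_c, hour):
    schedule = {range(0, 7): 16.0,   # night setback
                range(7, 23): 20.0,  # daytime comfort
                range(23, 24): 16.0}
    setpoint = next(v for k, v in schedule.items() if hour in k)
    return temp_c < setpoint

print(bimetallic_contact(18.0))           # True: strip bends, contact closes
print(programmable_thermostat(18.0, 3))   # False: night setpoint is 16 degrees
```

The hypothesis in the quote is that if brain circuitry turns out to work like the first function's referent rather than the second's, the physical symbol system hypothesis is in trouble.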
Don Geddis
2007-03-13 18:42:43 UTC
Permalink
But I've never used or implied that my credentials should be taken into
account when evaluating my claims. I allow my claims to stand by themselves.
You're the one who is bringing credentials into this.

It is true that if an interested party can learn to use a web search engine,
say, like Google, it is possible that they could learn more about me than I
choose to post on this group.
Post by Stephen Harris
The quality of your ideas do speak for themselves, superficial and
short of ethics, like a failed shyster lawyer.
Boy, you're sure a polite guy, aren't you? Just because you disagree with
my claims, or don't share my intuition, I must also be evil?
http://en.wikipedia.org/wiki/Ad_hominem
Dude, grow up a little.
Post by Stephen Harris
Your record of having a website at Stanford (found on that website list
Alpha provided) shows you first having an account in 1998, three years
after your website claims you obtained a Ph.D from Stanford. One way or
another you are a fraud.
My, your powers of research are incredible. You have unmasked me! I am
undone!

By the way, as long as you brought up web sites from long ago, allow me to
relate a somewhat amusing story. Somewhere around 1993 or 1994, CERN (which
was where the web was invented) was keeping a list of (all) the web sites in
the world. Most of them were physics labs, since that's why the web was
originally built. There were about 100 major web sites listed. Three of
them were mine. Including www.stanford.edu. Which was running on my
personal workstation in my office.

I also kept a list of "other interesting web sites out there you might want
to check out". And tried to update it every day. But that was a lot of work
and a pain in the ass. And I quickly saw that there were these losers over
in the EE department, Yang and Filo, who also had a list of interesting links.
And they seemed to be putting WAY more effort into keeping their list current
than I wanted to. So pretty soon I abandoned my own list, and just sent people
over to the silly Yahoo guys.

Ah well. A billion dollars here, a billion dollars there. Easy come, easy
go.
Post by Stephen Harris
Post by Don Geddis
No, answering a few Turing Test questions (and then failing) does not allow
you to draw any strong conclusions about the failing candidate. Least of
which is that it must have "a bit" of mind, somewhere on the mind spectrum.
The Turing Test does not provide "partial credit".
It doesn't provide any credit. _If_ a program can pass the TT test later
the same day (without changing the TT program itself) then that program
already has a mind according to Computationalism. It is not the case
that _because_ the TT program passes the TT, that this process somehow
causally confers or instantiates a mind on the computer. If that TT
program is running, and not processing a TT test, but some ordinary
conversation, according to Comp it has instantiated a mind. Comp is
about the program, not the TT. The TT will affect the opinion of
human observers. The opinion of the observers will not cause the program
to flicker on and off like a thermostat controlling a furnace; whether
a program instantiates a mind isn't controlled by observer beliefs.
I agree with everything you wrote in that paragraph. Of course the Turing
Test is not a causal agent to create a mind in a program.

But you avoided the actual topic I was addressing, which was an earlier claim
that passing "a few" initial Turing Test questions necessarily means that the
program must have "a little bit" of mind.

It doesn't mean that at all.
Post by Stephen Harris
Doesn't the computer which is running the CyC program have a belief
because it has a cpu sensor that says, It is just the right temperature,
or if it is too hot, then it chooses to reboot.? Cpus malfunction
when it is too cold, but I don't know if that temperature is acted upon.
The CPU and computer hardware may be characterized (loosely) by such a
belief. But CyC itself doesn't have access to that information. So CyC
does not have those beliefs.

You need to realize that the same piece of hardware may simultaneously
instantiate multiple potential intelligences or minds, depending on how you
choose to observe or interact with it.

This kind of example was already used in the Systems Reply to Searle's
Chinese Room analogy. There's no question that if you ask the guy in the
room (in English) whether he understands Chinese, he'll say no. At the
exact same time, you can ask the room itself (in Chinese) whether it
understands Chinese, and it will say yes. Despite the fact that the man
is part of the room.

In the same way, CyC doesn't have beliefs about its physical self. But it
may be that the computer motherboard which is running CyC does.
Post by Stephen Harris
-- My shiny new signature which will pay long term tribute to you. \/\/
I appreciate any effort you make to enhance the shrine of worship you are
building in my honor.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
The brain is a wondrous thing, I admit, but please, keep it away from me.
-- Deep Thoughts, by Jack Handey [1999]
Stephen Harris
2007-03-13 22:25:08 UTC
Permalink
Post by Don Geddis
But I've never used or implied that my credentials should be taken into
account when evaluating my claims. I allow my claims to stand by themselves.
You're the one who is bringing credentials into this.
There you go again making a sophist argument. Credentials enter into the
discussion this way. You make what appear to me to be claims that are
contradictory and have educational gaps. You make these claims in a smug
self-assured manner. So I wonder if I'm missing something and you have
something profound to say which will educate me.

So I make replies to you mentioning things I think you should already
know. And then I have to explain the explanation and you still don't
get it. At this point I dismiss your pretension of expertise. Which
means I doubt your qualifications. Then I check your website which you
include in your signature and find out that you claim to have a Ph.D in
Artificial Intelligence from Stanford. I checked on this because your
credentials and qualifications were in doubt and because your statements
didn't stand on their own merit. They are muddled.

At this point you are claiming that because your credentials weren't
presented on this forum, that instead they are presented in your
website signature, that you haven't introduced credentials into this.
You introduced credentials into this on the forum which linked to the
website by acting as if you know what you are talking about when your
claims don't make all that much sense. And you have to have all the
dots connected for you.
Post by Don Geddis
It is true that if an interested party can learn to use a web search engine,
say, like Google, it is possible that they could learn more about me than I
choose to post on this group.
This again is a strawman. I learned about your website which contains
your claim about earning a Ph.D from Stanford from your posting on
this forum contained in your signature. It wasn't until you pointed
out that you had a Stanford url that I checked and saw the dates were
off. I did that because you don't do any justice to the caliber of
people who are Ph.Ds at Stanford. Your own writing planted the seed
of doubt.
Post by Don Geddis
Post by Stephen Harris
The quality of your ideas speaks for itself: superficial and
short of ethics, like a failed shyster lawyer.
Boy, you're sure a polite guy, aren't you? Just because you disagree with
my claims, or don't share my intuition, I must also be evil?
http://en.wikipedia.org/wiki/Ad_hominim
Dude, grow up a little.
Another strawman. "A failed shyster lawyer" is about your sophistry,
about attempting to change the topic with some subtle adjustment to
another topic than the one under discussion. Now you are claiming that
I'm attacking you rather than your claims. I'm attacking how you change
the claims under discussion to something else. I'm attacking your lying
style of posting, not the claim per se, which is just usually misguided.
I am not attacking your character in order to attack your claim, which
is the meaning of an ad hominem attack. I'm attacking your character
because of the deviousness in your replies. I'm explaining to you the
difference in something you were already supposed to have learned in
college. I am attacking your character but am not using that to condemn
your claim. That is easy enough to do on its own merit or lack thereof.
Post by Don Geddis
Ah well. A billion dollars here, a billion dollars there. Easy come, easy
go.
Snip what I admit was an amusing anecdote.
Post by Don Geddis
I agree with everything you wrote in that paragraph. Of course the Turing
Test is not a causal agent to create a mind in a program.
But you avoided the actual topic I was addressing, which was an earlier claim
that passing "a few" initial Turing Test questions necessarily means that the
program must have "a little bit" of mind.
I felt I had already explained it, and claiming that I didn't is either
devious or you can't remember a thread. Does this sound familiar?

In the future we finally get a TT program that can pass the TT if
tested. It is the program which resides on a hard drive with memory
on a computer, which can pass or does pass the TT and so is said to
instantiate a mind. Since all TT passing programs are not identical,
they are not all going to be equally convincing on the same number
of questions. That is, there is going to be a range in their test
results regarding their intelligence. Now with humans, mind,
intelligence and consciousness usually correlate. I mean humans who are
virtually brain dead exhibit little intelligence, mind or consciousness.
There is a continuum for humans; some are brighter than others. I think
then that there will be a spectrum for TT passing programs, so that we
will say some programs are designed better than others and exhibit
more mind or intelligence. Actually, I think you have agreed to that.

What does it mean for some TT program which shares most of the design
of a future model which passes the TT, but in its present state fails
the TT? It means that the older failing version has answered fewer
questions cogently and perhaps also with less quality. It could still
answer quite a few questions satisfactorily, but just not enough to pass
the test. So that means there is a range: some TT programs will fit at
different scores into the passing range, others will be close but no
cigar, and still others will fall deep into the failing range.
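The passing/near-miss/failing bands described above can be sketched as a
toy scoring function. To be clear, the function name and the thresholds
here are entirely made up for illustration; the only point is that the
scores form a continuum, not an all-or-nothing divide:

```python
def tt_band(score, pass_mark=0.70, near_mark=0.60):
    """Place a program on a hypothetical Turing Test continuum.

    score is the fraction of questions answered convincingly;
    pass_mark and near_mark are illustrative, made-up thresholds.
    """
    if score >= pass_mark:
        return "passing range"
    elif score >= near_mark:
        return "close, but no cigar"
    else:
        return "failing range"

# A spectrum of program designs, from convincing to clearly failing:
bands = [tt_band(s / 100) for s in (95, 72, 65, 30)]
```

Nothing in the sketch decides where mind begins; it only makes vivid
that adjacent scores differ by a few questions, not by a great divide.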

Now do only the TT passing programs (potentially) instantiate minds
due to the right program being implemented? And if so can only the
TT passing programs have a range to their intelligence and mind?

The thing is, with McCarthy's idea there is going to be a continuum.
The TT program which slightly fails or is not quite convincing enough is
not going to be said to have zero mind. Nor is it an all-or-nothing
outcome where mind suddenly emerges (a concept foreign to Comp) when
some other TT program gets a few more questions right than the program
which just failed slightly. Like there is a great divide: TT passing
programs with minds, versus some almost-TT-passing programs which have
no minds.

Now you asserted that thermostats could be conscious and have no
ability to answer any TT questions correctly, whereas the CyC which
can answer some TT questions is supposedly not conscious. Particularly,
you don't want to put CyC in with other machines which also don't
answer enough questions to pass the TT. You want to say that CyC
does not have a little bit of mind or a little bit of consciousness
but that the thermostat does??!

2. Thermostats may be more interesting (in terms of mind) than Cyc.

My last question is what the above assertion means.
Post by Don Geddis
It doesn't mean that at all.
Post by Stephen Harris
Doesn't the computer which is running the CyC program have a belief
because it has a cpu sensor that says, "It is just the right
temperature," or if it is too hot, then it chooses to reboot? Cpus
malfunction when it is too cold, but I don't know if that temperature is
acted upon.
The CPU and computer hardware may be characterized (loosely) by such a
belief. But CyC itself doesn't have access to that information. So CyC
does not have those beliefs.
Isn't this a clever argument which uses the ideas of Computationalism...
not. The Comp thesis is not stated in terms of the right program plus
sensors, or of sensors needing a direct connection. Humans have pain
receptors for fire; removing the hand from fire is a reflex, not
conscious. You are saying that the thermostat, because it has direct
access to its sensor but no program, is a candidate for consciousness,
but that the computer housing the CyC program is not, due to the reboot
reflex not being directly connected to CyC. What happened to the systems
argument used against the CRA? That the system has direct access to the
sensors, and so that is enough for the system, which includes the
program CyC, to situate consciousness?

Now CyC receives questions as inputs and outputs answers, which are
physical changes. CyC has a belief that it is providing the right answer
at pretty much the same level of abstraction at which a thermostat can
be said to feel cold. So using McCarthy's criteria, why can't they both
be said to have beliefs, therefore some niggling degree of consciousness?
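McCarthy's thermostat example (in "Ascribing Mental Qualities to
Machines") ascribes the device exactly three beliefs: the room is too
hot, too cold, or OK. A minimal sketch of that thin ascription; the
class name, setpoint, and tolerance are my own, for illustration only:

```python
class Thermostat:
    """McCarthy-style belief ascription: the device's 'beliefs' are
    nothing more than which of its three internal states is active."""

    def __init__(self, setpoint=20.0, tolerance=1.0):
        self.setpoint = setpoint    # target temperature, degrees C
        self.tolerance = tolerance  # dead band around the setpoint

    def belief(self, temp):
        # The ascribed belief is just a comparison against the setpoint.
        if temp > self.setpoint + self.tolerance:
            return "the room is too hot"
        if temp < self.setpoint - self.tolerance:
            return "the room is too cold"
        return "the room is OK"
```

On this reading the question in the paragraph above is whether CyC's
input-output behavior licenses the same thin ascription, not whether
either device is thereby conscious.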

I didn't say this last part before. Why is it you can't figure it out
yourself? It is because if you fully adopted McCarthy's pov, then you
would have to grant thermostats and CyC a position in the great chain
of consciousness which leads up to TT passing programs like the meager
minds that lead up the evolutionary steps of the primate chain of
consciousness.
Post by Don Geddis
You need to realize that the same piece of hardware may simultaneously
instantiate multiple potential intelligences or minds, depending on how you
choose to observe or interact with it.
I believe it is more accurate to say that the same piece of hardware
can multiply realize different functions, and that some functions
realize the same task. Your pov may change as you examine the object
from different perspectives; that is not the same as the object changing
shapes. It seems like you are saying some TT passing program can exhibit
multiple personalities, which passes beyond my ability to offer comment.
Post by Don Geddis
This kind of example was already used in the Systems Reply to Searle's
Chinese Room analogy. There's no question that if you ask the guy in the
room (in English) whether he understands Chinese, he'll say no. At the
exact same time, you can ask the room itself (in Chinese) whether it
understands Chinese, and it will say yes. Despite the fact that the man
is part of the room.
In the same way, CyC doesn't have beliefs about its physical self. But it
may be that the computer motherboard which is running CyC does.
Well yes, the system. But I don't see how that makes something which is
too simple to have other parts more prone to consciousness.
Actually, I do appreciate that you thought of that.
Post by Don Geddis
Post by Stephen Harris
-- My shiny new signature which will pay long term tribute to you. \/\/
I appreciate any effort you make to enhance the shrine of worship you are
building in my honor.
-- Don
_______________________________________________________________________________
Does that mean you think I'm not going to bill you
for promotional efforts when you get a job?? Huh!
--
Stephen
Don Geddis, on Mar 12, 8:43 pm passed on the following wisdom:
1. "Belief" is much simpler than "mind" or "consciousness"; and
2. Thermostats may be more interesting (in terms of mind) than Cyc.
Don Geddis
2007-03-14 00:39:50 UTC
Permalink
You make what appear to me to be claims that are contradictory and have
educational gaps. You make these claims in a smug self-assured manner. So I
wonder if I'm missing something and you have something profound to say
which will educate me. So I make replies to you mentioning things I think
you should already know. And then I have to explain the explanation and you
still don't get it. At this point I dismiss your pretension of expertise.
Which means I doubt your qualifications.
Well, I'm sorry that I confuse you so much. Apparently you are unable
to pigeonhole me into a convenient category to deal with.
At this point you are claiming that because your credentials weren't
presented on this forum, that instead they are presented in your website
signature, that you haven't introduced credentials into this. You
introduced credentials into this on the forum which linked to the website
by acting as if you know what you are talking about when your claims don't
make all that much sense. And you have to have all the dots connected for
you.
You have explained well what led you to check out my (claimed?) background.

But I do want to emphasize that I have never, on this newsgroup, asked
anyone to accept my statements because of my (supposed?) background.
I'm attacking your lying style of posting, not the claim per se, which
is just usually misguided. I am not attacking your character in order to
attack your claim, which is the meaning of an ad hominem attack. I'm
attacking your character because of the deviousness in your replies.
From my perspective, I have never intentionally lied in my postings on this
newsgroup. Or been devious, although I might admit to an occasional attempt
to be clever, for its comedic effect.

But you sure have built an interesting theory about me! Doesn't match my
self-model at all. I wonder which is more accurate.
Since all TT passing programs are [not] identical, they are not all going
to be equally convincing on the same number of questions. That is, there is
going to be a range in their test results regarding their intelligence.
[...]
I think then that there will be a spectrum for TT passing programs so
that we will say some programs are designed better than others and
exhibit more mind or intelligence.
Right. I'm with you.
What does it mean for some TT program which shares most of the design of
a future model which passes the TT, but in its present state fails the
TT? It means that the older failing version has answered fewer questions
cogently and perhaps also with less quality. It could still answer quite
a few questions satisfactorily, but just not enough to pass the test. So
that means there is a range: some TT programs will fit at different
scores into the passing range, others will be close but no cigar, and
still others will fall deep into the failing range.
Sounds good.
Now do only the TT passing programs (potentially) instantiate minds
due to the right program being implemented? And if so can only the
TT passing programs have a range to their intelligence and mind?
The TT program which slightly fails or is not quite convincing enough
is not going to be said to have zero mind. Nor is it an all-or-nothing
outcome where mind suddenly emerges (a concept foreign to Comp) when
some other TT program gets a few more questions right than the program
which just failed slightly. Like there is a great divide: TT passing
programs with minds, versus some almost-TT-passing programs which have
no minds.
Agreed. Programs which are "close" to passing the Turing Test (but get
tripped up on a question here or there) likely have just as much
consciousness as programs which pass every question with flying colors.

We don't disagree about any of this.
Now you asserted that thermostats could be conscious
Hmm. The word I used was "belief", not "conscious". Those are different
things.

But ok, for the sake of argument, let's continue.
and have no ability to answer any TT questions correctly, whereas the CyC
which can answer some TT questions is supposedly not conscious.
Recall that I've said many times that the Turing Test is a sufficient test,
but not a necessary one. The vast majority of intelligent, conscious
entities in the universe (past or future) would fail the Turing Test. There
is little you can conclude about an entity just because it fails the Turing
Test.
Particularly, you don't want to put CyC in with other machines which also
don't answer enough questions to pass the TT. You want to say that CyC does
not have a little bit of mind or a little bit of consciousness but that the
thermostat does??!
2. Thermostats may be more interesting (in terms of mind) than Cyc.
My last question is what the above assertion means.
We could explore this further if you really want, but I really don't feel
that strongly about it one way or the other.

This is like arguing whether viruses (or crystals) are alive or not. Whether
you define it that way or not, it's such a primitive form of life, who really
cares?

Neither CyC nor thermostats have more than a trivial amount of consciousness,
if any at all. I'm interested in human-level consciousness. It's as
different from CyC and thermostats as viruses are from great apes.

I don't really care about the resolution of your question, but the reason why
I mentioned thermostats as being interesting (for minds) is because they have
real-world real-time sensors, which is an important part of a functioning
consciousness. And CyC doesn't have those things.
So using McCarthy's criteria, why can't they both be said to have beliefs,
therefore some niggling degree of consciousness?
McCarthy (and I) were only talking about beliefs. We didn't make the
implication you're now asserting that beliefs necessarily imply
consciousness.

In any case, I'm not as invested in the resolution of this topic (whether or
not thermostats or CyC have consciousness) as I was in previous topics in
this thread (such as whether Computationalism is a reasonable theory of
mind; or whether Computationalism is compatible or an alternative rival of
Connectionism), which you seem to have abandoned at this point.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Mr. Banks: Just a moment, Mary Poppins. What is the meaning of this outrage?
Mary Poppins: I beg your pardon?
Mr. Banks: Will you be good enough to explain all this?
Mary Poppins: First of all I would like to make one thing perfectly clear.
Mr. Banks: Yes?
Mary Poppins: I never explain anything.
Stephen Harris
2007-03-14 04:20:25 UTC
Permalink
Post by Don Geddis
From my perspective, I have never intentionally lied in my postings on this
newsgroup. Or been devious, although I might admit to an occasional attempt
to be clever, for its comedic effect.
But you sure have built an interesting theory about me! Doesn't match my
self-model at all. I wonder which is more accurate.
Well, maybe this isn't supposed to count, but you've already admitted
to being a game player on your website. Besides the things I've
mentioned, that means you are insecure and most likely don't know it.
Post by Don Geddis
Agreed. Programs which are "close" to passing the Turing Test (but get
tripped up on a question here or there) likely have just as much
consciousness as programs which pass every question with flying colors.
We don't disagree about any of this.
Post by Stephen Harris
Now you asserted that thermostats could be conscious
Hmm. The word I used was "belief", not "conscious". Those are different
things.
But ok, for the sake of argument, let's continue.
Post by Stephen Harris
and have no ability to answer any TT questions correctly, whereas the CyC
which can answer some TT questions is supposedly not conscious.
Recall that I've said many times that the Turing Test is a sufficient test,
but not a necessary one. The vast majority of intelligent, conscious
entities in the universe (past or future) would fail the Turing Test. There
is little you can conclude about an entity just because it fails the Turing
Test.
If a metallic strip has beliefs, why not the Cyc metallic ensemble?
Because a sensor empowers the creation of beliefs? I don't think so.
Post by Don Geddis
I don't really care about the resolution of your question, but the reason why
I mentioned thermostats as being interesting (for minds) is because they have
real-world real-time sensors, which is an important part of a functioning
consciousness. And CyC doesn't have those things.
Here you are saying that sensors contribute to a functioning
consciousness. That thermostats are enabled to have beliefs
because of these sensors. But next you claim there is no
implication from thermostat sensor belief to thermo consciousness.
CyC not having sensors is specious because if they connected CyC
up to a couple of sensors you still wouldn't think it had a mind,
or could even instantiate some degree of mind.
I forget where the Computationalist claim went past symbol manipulation
and required sensors. It is not that I disagree about 'sensor might be
needed', but that sensors are outside the requirement of right program.
Post by Don Geddis
Post by Stephen Harris
So using McCarthy's criteria, why can't they both be said to have beliefs,
therefore some niggling degree of consciousness?
McCarthy (and I) were only talking about beliefs. We didn't make the
implication you're now asserting that beliefs necessarily imply
consciousness.
"# What of consciousness?

* many believe having intentional states requires consciousness
* beliefs are intentional states (they're about things, they have meaning)
* if that's right, if thermostats really had beliefs, they'd really be
  conscious"

Humans have a scheme which lumps beliefs, mind and consciousness.
Apparently you want to have beliefs and then reorganize human
foundations so that 'thermostat beliefs' don't require the rest.
Post by Don Geddis
In any case, I'm not as invested in the resolution of this topic (whether or
not thermostats or CyC have consciousness) as I was in previous topics in
this thread (such as whether Computationalism is a reasonable theory of
mind; or whether Computationalism is compatible or an alternative rival of
Connectionism), which you seem to have abandoned at this point.
I've mentioned that I think Comp is empirically disproved as a claim
about the basic architecture of the mind, neurons.

I provided four cases of Computationalism used with the meaning of
Symbolic AI. I said these are not cases of authors making mistakes, but
are part of standard usage in the literature. That means they don't have
to have a technically correct foundation. I cited Paul Halmos and
eigenvalues as an example of this.

You claimed using the adjectives of classical and standard to modify
Computationalism conveyed the meaning of symbolic. But non-standard
or non-classical doesn't provide the meaning of connectionism.
Therefore I don't think the adjectives are relevant to understanding
the meaning of Computationalism as pertaining to symbolic AI which those
authors used. Actually, I think it is obvious. I used Comp the same way
that other highly qualified authors use Comp, sometimes to refer to
Symbolic AI. It does not matter that this practice is, or once was, not
technically correct; common usage supersedes that. That is not my
confusion but your ignorance due to not being well read. I knew it
because I had already read it more than once. So I let it drop, because
it is not part of your game-player persona to concede defeat. I dropped
the topic because I thought you would invent another convoluted defense
which I would find tiresome.
Alpha
2007-03-10 19:01:23 UTC
Permalink
Post by Don Geddis
Even if you don't respect me, John McCarthy surely has a sufficient track
record that you owe it to actually read and understand his position before
casting it off with such negative comments.
Hmmm; the same could be said, exactly, about you and Penrose; you have not
read and understood the theory of Orch OR.
Post by Don Geddis
If you bothered to look into what McCarthy has actually written, you would
see that he's attacking the notion that there is something "special" or
"magical" about human intellect, something qualitatively different from
machines. Instead, McCarthy argues, it's just points of complexity along the
same spectrum. Thermostats have "a little" belief, humans have "a lot".
There is no magic step #N where machine N does not have beliefs, but with a
minor improvement in some critical functionality, machine N+1 suddenly does.
But there is an emergence point where some new emergent phenomenon (C,
for example) is present in artifacts with certain
substrates/architectures/processes/complexity, where we do see C arise,
and before that - nothing! There is no subject (consciousness)
functionality in the thermostat (or other machines) to which one could
ascribe a "belief". Does a car have a belief about where it is going
(that going to a liquor store is bad and going to a candy store is
good), or other intentional (and intensional) aspects of Universe? Is a
car aware of what it is doing (carrying passengers)? Will it act
differently if controlled remotely? Will it decide to act like a cow if
there are three people rather than four in the car (not quantitatively
but qualitatively)? Does the car care if there are 3 people vs 4? On any
type of question that has to do with subjective/intentional/intensional
aspects, and whether the car exhibits such functionality, the answer is:
of course not.
Post by Don Geddis
This philosophy is at least reasonable to consider, and is by no means
obviously "corrupt in the core".
It is, then, obviously nonsensical to the core.
<snip>
Post by Don Geddis
On this newsgroup, for example, Curt has claimed that rocks are "conscious",
but he says that more to point out the problems with the definitions that
other people assign to that word, than to make any interesting claim about
rocks.
I'm pretty sure that both Curt and Moravec don't think that rocks or teddy
bears have the same level of subjective experience as humans.
They have *no* level of such; there is simply no subject functionality in a
rock or teddy bear.
Post by Don Geddis
Post by Stephen Harris
SH: I've seen your defense of the view that a mind will be required
to pass the Turing Test, tied to some statements by Turing.
Yes, that's correct.
Post by Stephen Harris
http://www.cogs.susx.ac.uk/users/blayw/tt.html
How so? I actually agree with almost everything in that article.
I've said many times on this very group that the Turing Test is merely a
sufficient test for intelligence/consciousness,
But it is not sufficient. A conscious idiot (like Sizemore) could fail it
and still be conscious. Likewise, an unconscious machine could pass it. It
is not sufficient because there would be other things one must look at to
determine consciousness or true intelligence; like architecture, substrate
and complexity to assure that the artifact being adjudged isn't a black box
(a large black box) filled with Chinese with rules and placards etc.

<snip>
Post by Don Geddis
(But all that is separate from the philosophy question of, _if_ a system did
manage to pass the Turing Test, would it _then_ necessarily have a mind?
I will continue to profess my belief that yes, it would need to have a mind.)
Mind = intelligence + consciousness; an IA need not have C to be I, so an IA
can be intelligent and pass the TT without C.
--
Posted via a free Usenet account from http://www.teranews.com
Allan C Cybulskie
2007-03-10 19:07:19 UTC
Permalink
Post by Don Geddis
Post by Stephen Harris
There are thousands of papers and books which disparage and defend Comp,
and its parent, The Computational Theory of Mind (CTM).
In the philosophy literature, no doubt you are correct.
My claim was always that within the group of _AI_researchers_, CTM is the
dominant philosophy -- assuming they bother to think about the subject at
all. Probably the same for cognitive scientists, although I'll admit to
being less familiar with that field.
But I never meant to make any claim about what most philosophers think on
the subject, or even the general public.
I've been reading this for a while, and want to point out a critical
difference here: I think that most AI researchers and most cognitive
scientists AND most philosophers are generally in the functionalist
camp. Whether or not the subsidiary idea of computationalism is true
or not is still open, and currently I think that most people who are
concerned about the matter are more concerned with establishing that
functionalism is correct because if that isn't true, then
computationalism is a non-starter.

What I DO think is true is that most of them would agree that simply
because you could make an intelligent computer -- ie a computer that
acted functionally equivalently to an intelligent being -- would not
prove computationalism. The functionality would have to be more
interestingly computational than simply something that you could
somehow program on a computer.
Stephen Harris
2007-03-10 20:57:07 UTC
Permalink
Post by Allan C Cybulskie
Post by Don Geddis
Post by Stephen Harris
There are thousands of papers and books which disparage and defend Comp,
and its parent, The Computational Theory of Mind (CTM).
In the philosophy literature, no doubt you are correct.
My claim was always that within the group of _AI_researchers_, CTM is the
dominant philosophy -- assuming they bother to think about the subject at
all. Probably the same for cognitive scientists, although I'll admit to
being less familiar with that field.
But I never meant to make any claim about what most philosophers think on
the subject, or even the general public.
I've been reading this for a while, and want to point out a critical
difference here: I think that most AI researchers and most cognitive
scientists AND most philosophers are generally in the functionalist
camp. Whether or not the subsidiary idea of computationalism is true
or not is still open, and currently I think that most people who are
concerned about the matter are more concerned with establishing that
functionalism is correct because if that isn't true, then
computationalism is a non-starter.
What I DO think is true is that most of them would agree that simply
because you could make an intelligent computer -- ie a computer that
acted functionally equivalently to an intelligent being -- would not
prove computationalism. The functionality would have to be more
interestingly computational than simply something that you could
somehow program on a computer.
Hello,

I didn't think about making this reply although I had read
a really good paper that divided functionalism from Comp.

http://www.umsl.edu/~piccininig/Computational%20Functionalism%20New%20New%205.htm

"But functionalism—properly understood—does not entail computationalism,
either classical or non-classical." ...["Thesis3=Its strengthening leads
to what is often called classical (i.e., non-connectionist)
computationalism."]

The classical (symbolic) Comp is digital symbol manipulation.
I've been assuming that the non-classical Computationalism is
connectionist computationalism(?).

"Given this generalization, computational functionalism is compatible
with any computational theory of mind, including connectionist
computationalism." ... "Functionalism may be combined with a
non-computationalist theory of mind, and computationalism may be
combined with a non-functionalist metaphysics."
"Computationalism does not entail functionalism either."
Allan C Cybulskie
2007-03-10 22:04:08 UTC
Permalink
Post by Stephen Harris
Post by Allan C Cybulskie
Post by Don Geddis
Post by Stephen Harris
There are thousands of papers and books which disparage and defend Comp,
and its parent, The Computational Theory of Mind (CTM).
In the philosophy literature, no doubt you are correct.
My claim was always that within the group of _AI_researchers_, CTM is the
dominant philosophy -- assuming they bother to think about the subject at
all. Probably the same for cognitive scientists, although I'll admit to
being less familiar with that field.
But I never meant to make any claim about what most philosophers think on
the subject, or even the general public.
I've been reading this for a while, and want to point out a critical
difference here: I think that most AI researchers and most cognitive
scientists AND most philosophers are generally in the functionalist
camp. Whether or not the subsidiary idea of computationalism is true
or not is still open, and currently I think that most people who are
concerned about the matter are more concerned with establishing that
functionalism is correct because if that isn't true, then
computationalism is a non-starter.
What I DO think is true is that most of them would agree that simply
because you could make an intelligent computer -- ie a computer that
acted functionally equivalently to an intelligent being -- would not
prove computationalism. The functionality would have to be more
interestingly computational than simply something that you could
somehow program on a computer.
Hello,
I didn't think about making this reply although I had read
a really good paper that divided functionalism from Comp.
http://www.umsl.edu/~piccininig/Computational%20Functionalism%20New%2...
"But functionalism-properly understood-does not entail computationalism,
either classical or non-classical." ...["Thesis3=Its strengthening leads
to what is often called classical (i.e., non-connectionist)
computationalism."]
The classical (symbolic)Comp is digital symbol manipulation.
I've been assuming that the non-classical Computationalism is
connectionist computationalism(?).
"Given this generalization, computational functionalism is compatible
with any computational theory of mind, including connectionist
computationalism." ... "Functionalism may be combined with a
non-computationalist theory of mind, and computationalism may be
combined with a non-functionalist metaphysics."
"Computationalism does not entail functionalism either."
Well, I'm willing to be corrected, but while I agree with the first
part of his view the second part quoted here is only true in an
uninteresting way.

Note his definitions:

"Functionalism: The mind is the functional organization of the brain.

Computationalism: The functional organization of the brain is
computational.

Computational Functionalism (generalized): The mind is the
computational organization of the brain."

His only claim for the notion that computationalism can be separated
from functionalism is that he limits it to the brain, and doesn't
directly relate it to the mind at all. Which is fine, I suppose, but
my point -- and what I think you and Don are actually arguing over --
is if MIND is computational. And if you look at his own definitions,
if mind is not simply functional (leave brain out for now) or to put
it better can't be described simply by its functions then
computationalism ABOUT MIND is in deep trouble. Perhaps it would
still apply to brain, but it would be of little interest to anyone in
AI, cognitive science, or Philosophy of Mind.
Stephen Harris
2007-03-11 17:35:09 UTC
Permalink
Post by Allan C Cybulskie
Post by Stephen Harris
Post by Allan C Cybulskie
Post by Don Geddis
Post by Stephen Harris
There are thousands of papers and books which disparage and defend Comp,
and its parent, The Computational Theory of Mind (CTM).
In the philosophy literature, no doubt you are correct.
My claim was always that within the group of _AI_researchers_, CTM is the
dominant philosophy -- assuming they bother to think about the subject at
all. Probably the same for cognitive scientists, although I'll admit to
being less familiar with that field.
But I never meant to make any claim about what most philosophers think on
the subject, or even the general public.
I've been reading this for a while, and want to point out a critical
difference here: I think that most AI researchers and most cognitive
scientists AND most philosophers are generally in the functionalist
camp. Whether or not the subsidiary idea of computationalism is true
or not is still open, and currently I think that most people who are
concerned about the matter are more concerned with establishing that
functionalism is correct because if that isn't true, then
computationalism is a non-starter.
What I DO think is true is that most of them would agree that simply
because you could make an intelligent computer -- ie a computer that
acted functionally equivalently to an intelligent being -- would not
prove computationalism. The functionality would have to be more
interestingly computational than simply something that you could
somehow program on a computer.
Hello,
I didn't think about making this reply although I had read
a really good paper that divided functionalism from Comp.
http://www.umsl.edu/~piccininig/Computational%20Functionalism%20New%2...
"But functionalism-properly understood-does not entail computationalism,
either classical or non-classical." ...["Thesis3=Its strengthening leads
to what is often called classical (i.e., non-connectionist)
computationalism."]
The classical (symbolic) Comp is digital symbol manipulation.
I've been assuming that the non-classical Computationalism is
connectionist computationalism(?).
"Given this generalization, computational functionalism is compatible
with any computational theory of mind, including connectionist
computationalism." ... "Functionalism may be combined with a
non-computationalist theory of mind, and computationalism may be
combined with a non-functionalist metaphysics."
"Computationalism does not entail functionalism either."
Well, I'm willing to be corrected, but while I agree with the first
part of his view the second part quoted here is only true in an
uninteresting way.
"Functionalism: The mind is the functional organization of the brain.
Computationalism: The functional organization of the brain is
computational.
Computational Functionalism (generalized): The mind is the
computational organization of the brain."
His only claim for the notion that computationalism can be separated
from functionalism is that he limits it to the brain, and doesn't
directly relate it to the mind at all. Which is fine, I suppose, but
my point -- and what I think you and Don are actually arguing over --
is if MIND is computational. And if you look at his own definitions,
if mind is not simply functional (leave brain out for now) or to put
it better can't be described simply by its functions then
computationalism ABOUT MIND is in deep trouble. Perhaps it would
still apply to brain, but it would be of little interest to anyone in
AI, cognitive science, or Philosophy of Mind.
I've reread that paper twice and I can't say that I've completely
grasped it, or for that matter, your comment. Here is your quote
followed by
Post by Allan C Cybulskie
Post by Stephen Harris
Post by Allan C Cybulskie
I think that most AI researchers and most cognitive
scientists AND most philosophers are generally in the functionalist
camp. Whether or not the subsidiary idea of computationalism is
true or not is still open," ...
"4 Mechanistic Functionalism
According to functionalism, the mind is the functional organization of
the brain. According to computationalism, the functional organization
of the brain is computational. These theses are prima facie logically
independent—it should be possible to accept one of them while rejecting
the other. But according to a popular construal, functional
organizations are specified by computational descriptions connecting a
system’s inputs, internal states, and outputs (Putnam 1967b, Block and
Fodor 1972). Under this construal, functional organizations are ipso
facto computational, and hence functionalism entails computationalism.
This consequence makes it impossible to reject computationalism without
also rejecting functionalism, which may explain why attempts at refuting
functionalism often address explicitly only its computational variety
(e.g., Block 1978, Churchland 2005). The same consequence has led to
Fodor’s recent admission that he and others conflated functionalism and
computationalism (2000, 104).

To avoid conflating functionalism and computationalism, we
need a notion of functional organization that doesn’t beg the question
of computationalism."

SH: So Piccinini proposes a way to fix this, of which you are
apparently critical. But I don't understand the fix you have in
mind which enables
Post by Allan C Cybulskie
Post by Stephen Harris
Post by Allan C Cybulskie
"Whether or not the subsidiary idea of computationalism is
true or not is still open,"
since Piccinini says they are at this point tied together ("hence
functionalism entails computationalism"). So I'm not sure how you
know this has already been fixed, and how it was fixed so that it
avoids the criticism that Piccinini suffers from.

I didn't actually understand your criticism, specifically the
conclusion that Piccinini's idea would no longer be interesting to AI.

I'm not denying your conclusion, just that I didn't follow it.
Piccinini seems to be trying to eliminate the (Putnam?) idea
that everything is a computation by trying to more precisely
define what counts as a computer and a computation. I don't like a
definition or philosophy that enables a rock to be considered
conscious. Piccinini
tries to make functionalism and computationalism independent, which
means that one could endorse computationalism without endorsing
functionalism, which would be possible if they are stand-alone ideas.

I'll think about it some more and read it again,
Stephen
Allan C Cybulskie
2007-03-11 21:49:58 UTC
Permalink
Post by Stephen Harris
Post by Allan C Cybulskie
"Functionalism: The mind is the functional organization of the brain.
Computationalism: The functional organization of the brain is
computational.
Computational Functionalism (generalized): The mind is the
computational organization of the brain."
Post by Allan C Cybulskie
I think that most AI researchers and most cognitive
scientists AND most philosophers are generally in the functionalist
camp. Whether or not the subsidiary idea of computationalism is
true or not is still open," ...
According to functionalism, the mind is the functional organization of
the brain. According to computationalism, the functional organization
of the brain is computational.
The lines above are the two key lines, and also note the definitions
above. What he says is that computationalism is the idea that the
functional organization of the brain is computational. So I had
conceded in the first reply (or in one of them, or heck even MEANT to
[grin]) that you can have functionalism without computationalism --
that is, if computationalism is false then functionalism may still be
true. It would just mean that the functional organization of the
brain is not computational. But look at what the definition means if
we reject his view of functionalism but accept computationalism: the
mind is NOT the functional organization of the brain, but the
functional organization of the brain is computational. Well, yes,
that could indeed be true, but it seems fairly uninteresting for AI,
Cognitive Science, and Philosophy of Mind, doesn't it? Especially
considering that they really do seem to be all about getting ahold of
mind?

The only way it could be true is that the mind isn't really brain, or
at least not the functional organization of it, but that would
eliminate most of the uses one could get out of computationalism in
any study that cares about mind.
Neil W Rickert
2007-03-12 20:18:53 UTC
Permalink
Post by Allan C Cybulskie
Post by Stephen Harris
According to functionalism, the mind is the functional organization of
the brain. According to computationalism, the functional organization
of the brain is computational.
The lines above are the two key lines, and also note the definitions
above. What he says is that computationalism is the idea that the
functional organization of the brain is computational.
But what does that mean?

Is the functional organization of the Windows operating system
computational? How could I tell?

I know what it means to say that the functions are computational,
but I am having trouble making sense of "computational" as a
description of the organization.
--
DO NOT REPLY BY EMAIL - The address above is a spamtrap.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115
f***@msn.com
2007-03-13 06:52:37 UTC
Permalink
Post by Neil W Rickert
Post by Allan C Cybulskie
Post by Stephen Harris
According to functionalism, the mind is the functional organization of
the brain. According to computationalism, the functional organization
of the brain is computational.
The lines above are the two key lines, and also note the definitions
above. What he says is that computationalism is the idea that the
functional organization of the brain is computational.
But what does that mean?
Is the functional organization of the Windows operating system
computational? How could I tell?
I know what it means to say that the functions are computational,
but I am having trouble making sense of "computational" as a
description of the organization.
OK, I'll venture a guess...

If there is an effective procedure to access the functions of
the system then the functional organization is computational.
(An "effective procedure" is to be taken as an unambigious set
of steps on a computer to produce the desired output from the
input within an acceptably finite time or number of steps.)
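A minimal sketch of the "effective procedure" idea above, using a toy example of my own (not from the thread): Euclid's algorithm is an unambiguous, deterministic sequence of steps that halts after finitely many steps for any valid input.

```python
# Toy illustration of an "effective procedure": every step is fully
# specified, the procedure is deterministic, and it is guaranteed to
# halt in a finite number of steps for any pair of positive integers.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the second argument strictly decreases on
    each iteration, so termination is guaranteed."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Whether access to "the functions of the system" can be cashed out this cleanly for something like the Windows functional organization is, of course, exactly what is in dispute.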
Neil W Rickert
2007-03-13 18:52:11 UTC
Permalink
Post by f***@msn.com
Post by Neil W Rickert
Is the functional organization of the Windows operating system
computational? How could I tell?
I know what it means to say that the functions are computational,
but I am having trouble making sense of "computational" as a
description of the organization.
OK, I'll venture a guess...
If there is an effective procedure to access the functions of
the system then the functional organization is computational.
But what are "the functions of the system?". Is there anything more
to being a function of the system, than that we designate certain
things as functions of the system?
Post by f***@msn.com
(An "effective procedure" is to be taken as an unambigious set
of steps on a computer to produce the desired output from the
input within an acceptably finite time or number of steps.)
And if those computer steps instead produce output that I don't
desire, does that make for an ineffective procedure?
Allan C Cybulskie
2007-03-13 10:22:34 UTC
Permalink
Post by Neil W Rickert
Post by Allan C Cybulskie
Post by Stephen Harris
According to functionalism, the mind is the functional organization of
the brain. According to computationalism, the functional organization
of the brain is computational.
The lines above are the two key lines, and also note the definitions
above. What he says is that computationalism is the idea that the
functional organization of the brain is computationalism.
But what does that mean?
Is the functional organization of the Windows operating system
computational? How could I tell?
I know what it means to say that the functions are computational,
but I am having trouble making sense of "computational" as a
description of the organization.
Well, this seems to be a question of what does it mean to say that
something is a functional organization. And I'm not qualified to
answer that at the moment, since it isn't my main concern [grin].
Stephen Harris
2007-03-09 19:59:33 UTC
Permalink
Post by Don Geddis
Post by Stephen Harris
"Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable." But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI."
"Computationalism" and "Connectionism" are different categories of things,
so it doesn't really make sense to "compare" them. Especially since
Connectionism could be (although doesn't need to be) a subtype of
Computationalism.
It's as though you wrote: "Likewise one can compare Fruits and Oranges as
both being composed of atoms." OK, the statement is true, but an orange is
a KIND of fruit. If not the direct meaning, at least the connotation is
incorrect, because your sentence structure implies that Computationalism
and Connectionism are both items in the same category of thing. But they
aren't.
www.lehigh.edu/~mhb0/cogs7webreadings/Troubles.pdf

"Since the early 80s, a major rival has challenged computationalism in
its classical form. This rival is known as connectionism, or Parallel
Distributed Processing (PDP). ...

But with respect to the basic critique that will be made of
computationalism, there is no significant difference between the two —
they both make the same basic assumptions, and they are both vulnerable
to the same criticisms.
Between the two of them, computationalism and connectionism dominate
contemporary cognitive science and related disciplines in the 90s. In
spite of that, I wish to argue that both computationalism and
connectionism are in serious trouble — fatal trouble, in fact."

Mark H. Bickhard
Department of Psychology
17 Memorial Drive East
Lehigh University
Bethlehem, PA 18015
***@LEHIGH.EDU

Biographical Note
Mark H. Bickhard is a graduate of the University of Chicago, and was at
the University of Texas at Austin from 1972 to 1990. He is currently the
Henry R. Luce Professor of Cognitive Robotics and the Philosophy of
Knowledge at Lehigh University. His primary interests concern the nature
of psychological processes — including, among others, representation and
language. Relevant publications include Cognition, Convention, and
Communication (1980), On the Nature of Representation (1983), and
Foundational Issues in Artificial Intelligence and Cognitive Science:
Impasse and Solution. (1995)."

------------------------------------------------------

www.clarku.edu/departments/philosophy/faculty/images/hendricks/philpsychdraft.pdf

"In the second part we examine the idea that the mind is a kind of
machine. We start by focusing on the computational approach, the idea
that the mind is a computer. One computational approach, elaborated by
traditional artificial intelligence, is the classical computational
theory of cognition. According to this theory, thinking is essentially
digital symbol manipulation. We will ask, Can the classical
computational theory give an account of how mental symbols come to be
meaningful? The main alternative to the classical theory is the
connectionist theory of cognition."
"Professor Hendricks received his Ph.D. from the University of Arizona
in 2001. He has taught at Clark since the Fall of 2001 and holds the
George Kneller Chair in Philosophy.

Teaching and Research

Professor Hendricks's primary research interests are epistemology, the
philosophy of mind, and the philosophy of psychology and cognitive
science. Presently he is interested in internalism in epistemology and
the relationships between epistemic rationality, point of view, and the
structure of belief." [SH: He has also published a few papers.]
Post by Don Geddis
I used Comp as a standin for Symbolic AIs or GOFAI because in the
next sentence I want to contrast Comp and Conn in their approaches
and their degree of commitment to the philosophy of
Computationalism.
DG: "This doesn't make sense to me. I don't even see how it makes
sense to you."

SH: What do you think of Professors Bickhard's and Hendricks sentence
structure? Can you imagine Prof. Hendricks is even employed teaching?!

I don't think either of those Profs would have had trouble understanding
my sentence. Apparently, it is a standard usage in the AI literature.
Either that or they are just goofs. It happens that terms, like
"eigenvalues" which have precise technical meanings get subsumed into
the literature of a field and replace the original technical meaning.

I suppose you might try to say that they both used an adjective
to modify Computationalism (standard/classical) when it was used in
the same sentence with Connectionism.
Post by Don Geddis
Post by Stephen Harris
SH: I think an intelligent and well-educated reader should be able
to figure
Post by Stephen Harris
that out. Read it in a context that makes sense. People with poor
reading comprehension will have trouble with that.
DG: I think I understand what you've written. I just think what you've
written makes no sense. It isn't some lack of context that is confusing
me. Nor is it some problem you accuse me of having with reading
comprehension. It's your actual choice of words.

SH: I've just provided two contexts which, if you were well-read in
the field, would have prepared you to understand my statement as
sensible, because they both used Comp and Conn functionally as I did. One
difference is that they are writing papers with more detailed
explanation, and I labelled what I wrote "Notes" because they were
intended to be brief, and not serve as a substitute for having
already read relevant literature in the field.

Just in case you have trouble recognizing the wordings I refer to:

"Since the early 80s, a major rival has challenged computationalism
in its classical form. This rival is known as connectionism,"...

"The differences between standard computationalism and connectionism
are many and important for a variety of reasons."

"In what way is connectionism a genuine alternative to classical
computationalism?"

SH: There are pointers to GOFAI and its mutual basis of Turing
Computable functions shared with Connectionism.

But that is not the point you have been strenuously objecting to.
You've criticized my mixing of "Computationalism" and "Connectionism",
using both those words just as if it were legitimate to compare or
contrast them which is just what those Profs did in their papers. You
are accusing me of making a category error. You are going to have a
very hard time making a case that both Profs did not commit the same
travesty of good sense that I committed. Or, maybe it is you and the
case is that such usage has become standard in the literature and you
don't know about it.
Don Geddis
2007-03-10 01:20:14 UTC
Permalink
Post by Stephen Harris
www.lehigh.edu/~mhb0/cogs7webreadings/Troubles.pdf
"Since the early 80s, a major rival has challenged computationalism in its
classical form. This rival is known as connectionism, or Parallel Distributed
Processing (PDP). ...
"...in its classical form" is an important restriction.
Post by Stephen Harris
But with respect to the basic critique that will be made of computationalism,
there is no significant difference between the two -- both make the same
basic assumptions, and they are both vulnerable to the same criticisms.
I find this observation interesting as well, as it supports my view that
Computationalism and Connectionism aren't really "alternatives".
Post by Stephen Harris
www.clarku.edu/departments/philosophy/faculty/images/hendricks/philpsychdraft.pdf
"In the second part we examine the idea that the mind is a kind of
machine. We start by focusing on the computational approach, the idea that
the mind is a computer. One computational approach, elaborated by
traditional artificial intelligence, is the classical computational theory
of cognition.
Note, "one computational approach". Not: "Any approach labelled
Computationalism". Also, "traditional AI" and "classical".
Post by Stephen Harris
According to this theory, thinking is essentially digital symbol
manipulation. We will ask, Can the classical computational theory give an
account of how mental symbols come to be meaningful? The main alternative
to the classical theory is the connectionist theory of cognition.
Connectionist networks are also examples of digital symbol manipulation.
I'll leave it to you as to whether Professor Hendricks has made a mistake
in his writing in this quoted paragraph.

If he means to imply that the "connectionist theory" is not doing "digital
symbol manipulation", then he's just wrong. If not, then perhaps the
phrasing is merely awkward.
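To make this point concrete, here is a toy sketch of my own (the weights and names are illustrative assumptions, not anything from the papers quoted): a tiny hand-wired feedforward net computing XOR. Evaluating it is nothing but an ordinary, Turing-computable sequence of digital arithmetic steps, which is to say, digital symbol manipulation.

```python
# A connectionist network run on a digital computer is itself digital
# symbol manipulation. This hand-wired feedforward net (weights chosen
# by hand to compute XOR with threshold units) is evaluated by a
# perfectly ordinary, Turing-computable arithmetic procedure.

def step(x: float) -> int:
    # Threshold (Heaviside) activation: produces a discrete symbol.
    return 1 if x > 0 else 0

def xor_net(x1: int, x2: int) -> int:
    # Hidden layer: an OR unit and a NAND unit.
    h1 = step(x1 + x2 - 0.5)     # OR
    h2 = step(-x1 - x2 + 1.5)    # NAND
    # Output layer: AND of the hidden units, yielding XOR.
    return step(h1 + h2 - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```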
Post by Stephen Harris
I suppose you might try to say that they both used an adjective
to modify Computationalism (standard/classical) when it was used in
the same sentence with Connectionism.
Yes, you've correctly guessed what my likely response would be.
Post by Stephen Harris
But that is not the point you have been strenuously objecting to. You've
criticized my mixing of "Computationalism" and "Connectionism", using both
those words just as if it were legitimate to compare or contrast them which
is just what those Profs did in their papers. You are accusing me of making
a category error. You are going to have a very hard time making a case that
both Profs did not commit the same travesty of good sense that I
committed. Or, maybe it is you and the case is that such usage has become
standard in the literature and you don't know about it.
I think the two professors you quoted narrowly escaped my criticism by their
careful inclusions of adjectives like "classical" and "standard" and
"one computational approach".

But you're correct that if _I_ were given their papers to review, I might
edit them with comments that their chosen sentence structure is a bit
misleading, and they ought to write more clearly.

Because the fact of the matter is, that Connectionism is merely one possible
implementation approach for AI, and isn't particularly in conflict with the
Computational Theory of Mind.

If you can read their words as disagreeing with this last sentence, then
either they are just wrong, or else they've written less clearly than they
should.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Raspberry walks into a bar, bartender says "Sorry, we don't serve food here."
-- On Fruitopia bottle, "raspberry psychic lemonade" flavor
Stephen Harris
2007-03-10 03:23:45 UTC
Permalink
Post by Don Geddis
I think the two professors you quoted narrowly escaped my criticism by their
careful inclusions of adjectives like "classical" and "standard" and
"one computational approach".
But you're correct that if _I_ were given their papers to review, I might
edit them with comments that their chosen sentence structure is a bit
misleading, and they ought to write more clearly.
Because the fact of the matter is, that Connectionism is merely one possible
implementation approach for AI, and isn't particularly in conflict with the
Computational Theory of Mind.
If you can read their words as disagreeing with this last sentence, then
either they are just wrong, or else they've written less clearly than they
should.
Of course I don't read their words as disagreeing with that last
sentence. I'm really surprised that you think my words disagreed
with that last sentence. "Likewise one can compare Computationalism
(Comp) and Connectionism as both being Turing computable." My next
sentence does nothing to undo this statement asserting that both
have Turing computable equivalence.

Grounding Symbols in the Analog World with Neural Nets
Think 2(1): 12-78 (Special Issue on "Connectionism versus Symbolism"
D.M.W. Powers & P.A. Flach, eds.). [Also reprinted in French translation
as "L'Ancrage des Symboles dans le Monde Analogique a l'aide de Reseaux
Neuronaux: un Modele Hybride." In: Rialle V. et Payette D. (Eds) La
Modelisation. LEKTON, Vol IV, No 2.]

"The predominant approach to cognitive modeling is still what has
come to be called "computationalism," the hypothesis that cognition is
computation. The more recent rival approach is "connectionism," the
hypothesis that cognition is a dynamic pattern of connections and
activations in a "neural net." Are computationalism and connectionism

really deeply different from one another, and if so, should they compete
for cognitive hegemony, or should they collaborate?"

SH: This writer is also using computationalism in place of symbolic AI.
That is what you criticized me for: confusing Comp with Symbolic AI
and then comparing it to Connectionism. My sentence quoted again above
doesn't say that Comp and Conn don't have a common ground; quite the
opposite I say they are both Turing computable.

I have another example of an author substituting Computationalism
for Symbolic AI also called top-down. I'm sure I could find more
examples. But I'm not going to put any more work into defending
my usage which I find sprinkled through the AI literature and
used similarly by other authors.
Don Geddis
2007-03-10 05:00:20 UTC
Permalink
Post by Don Geddis
Because the fact of the matter is, that Connectionism is merely one possible
implementation approach for AI, and isn't particularly in conflict with the
Computational Theory of Mind.
If you can read their words as disagreeing with this last sentence [...]
Of course I don't read their words as disagreeing with that last sentence.
Huh. OK, then. So you actually agree that they aren't in conflict, and even
more that Connectionism is one possible approach for implementing
Computationalism? (Or, if you prefer, is one possible approach for
implementing intelligent behavior -- aka weak AI -- which is compatible with
either Computationalism or non-Computationalism, depending on your attitudes
about what a mind is.)

Because that's all I've been saying for awhile. But this is the first I've
seen you agree.
Grounding Symbols in the Analog World with Neural Nets
Think 2(1): 12-78 (Special Issue on "Connectionism versus Symbolism"
D.M.W. Powers & P.A. Flach, eds.). [Also reprinted in French translation as
"L'Ancrage des Symboles dans le Monde Analogique a l'aide de Reseaux
Neuronaux: un Modele Hybride." In: Rialle V. et Payette D. (Eds) La
Modelisation. LEKTON, Vol IV, No 2.]
"The predominant approach to cognitive modeling is still what has come to
be called "computationalism," the hypothesis that cognition is
computation. The more recent rival approach is "connectionism," the
hypothesis that cognition is a dynamic pattern of connections and activations
in a "neural net."
Anyone who thinks these two concepts are in opposition in any way
(e.g. "rival approach") is simply ignorant. They're completely compatible.
Are computationalism and connectionism really deeply different from one
another, and if so, should they compete for cognitive hegemony, or should
they collaborate?"
They aren't identical concepts, but they do have close to total overlap.
It's silly to describe them as "rivals".
SH: This writer is also using computationalism in place of symbolic AI.
Really? That's not what they said in your own quote above. Your very quote
says
"computationalism," the hypothesis that cognition is computation.
That is a different statement than
computationalism (i.e., Symbolic AI).
These just aren't the same things at all.
That is what you criticized me for: confusing Comp with Symbolic AI
and then comparing it to Connectionism.
Yes, and I still would criticize you for that.
My sentence quoted again above doesn't say that Comp and Conn don't have a
common ground; quite the opposite I say they are both Turing computable.
Well, that's hardly much common ground. Lots of things are Turing
computable. They could still be rival approaches.

But Computationalism and Connectionism are not rival approaches.

Symbolic AI and Connectionism might be considered as rival approaches. And,
as it turns out, both of _those_ concepts are Turing computable as well.
I have another example of an author substituting Computationalism for
Symbolic AI also called top-down. I'm sure I could find more examples.
I'm sure you can find lots of examples of sloppy writing. But that doesn't
mean we should let it go. Sloppy writing leads to sloppy thinking.

The words mean something. Even finding common usage doesn't mean the authors
are using the words correctly. It would be one thing if they were changing
the definitions at the same time. If, perhaps, I kept referring to some old
original definition of the terms, whereas in some kind of modern usage the
meaning of the words has changed and people now are referring to something
else. In that case, I might lament the loss of precision that we used to
have, but I would resign myself to the common usage of the majority.

But that's not our situation. Every time one of the people in your quotes
goes to define Computationalism or Connectionism, they keep using the same
definitions that I use. So the meanings are clear.

The problem is, that under their _own_ definitions, these things aren't
incompatible or even rivals. And, even with _their_ definitions,
"computationalism" doesn't mean the same thing as "symbolic AI" or GOFAI.

You're just quoting sloppy writers, not giving even the first hint of
argument about why their usage is correct.
But I'm not going to put any more work into defending my usage which I find
sprinkled through the AI literature and used similarly by other authors.
You seem to be acting as though a claim gets proven merely by volume of
references to published literature. You rarely seem to address the actual
topic of discussion head on. What do the words mean? What data can actually
be observed there in the real world? What is a compelling argument, from the
data and the definitions, to support a claim?

I've enjoyed reading many of your references, but I've rarely had the same
reaction to them as you seem to have intended. I generally find them either
irrelevant, supporting my point, or as easy to criticize as the original post.

But, I guess, each to their own style. You seem to take most comfort in
quoting other people's words. I find it a bit frustrating, but perhaps that's
just me.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Stephen Harris
2007-03-10 17:13:28 UTC
Permalink
Post by Don Geddis
They aren't identical concepts, but they do have close to total overlap.
It's silly to describe them as "rivals".
Post by Stephen Harris
SH: This writer is also using computationalism in place of symbolic AI.
Really? That's not what they said in your own quote above. Your very quote
says
"computationalism," the hypothesis that cognition is computation.
That is a different statement than
computationalism (i.e., Symbolic AI).
These just aren't the same things at all.
AI was formally established in 1956. It was symbol manipulation.
The idea that the mind is a computer came from the 1940s with
McCulloch and Pitts and on to von Neumann. The distinction between
strong AI and weak AI did not exist until much later (Searle). What
became Strong AI was the same as the original symbolic AI, which
also rested on closely related premises, like the mind being a TM.

Originally, AI had no such thing as Connectionism and no such
thing as weak AI. AI was symbolic, with the supposition that
the mind is some sort of Turing program or array of Turing-
computable functions. One can distinguish the philosophy from
the architecture, and that may be more necessary now, but to
begin with they came as a package. When I use a slogan I hardly
mean to exclude other helper notions like 'every physical system
instantiates every dynamical system'.

Early AI was symbolic manipulation and CTM was expressed
in terms of symbol manipulation.

"This essay is concerned with a particular philosophical view
that holds that the mind literally is a digital computer (in
a specific sense of "computer" to be developed), and that
thought literally is a kind of computation. This view --
which will be called the "Computational Theory of Mind" (CTM)

The Computational Theory of Mind combines an account of reasoning
with an account of the mental states. The latter is sometimes called
the Representational Theory of Mind (RTM). This is the thesis that
intentional states such as beliefs and desires are relations between
a thinker and symbolic representations of the content of the states:"

SH: This ends our discussion.
Don Geddis
2007-03-13 02:36:19 UTC
Permalink
Post by Stephen Harris
AI was formally established in 1956. It was symbol manipulation.
The idea that the mind is a computer came from the 1940s with
McCulloch and Pitts and on to von Neumann. The distinction between
strong AI and weak AI did not exist until much later (Searle). What
became Strong AI was the same as the original symbolic AI, which
also rested on closely related premises, like the mind being a TM.
Originally, AI had no such thing as Connectionism and no such
thing as weak AI. AI was symbolic, with the supposition that
the mind is some sort of Turing program or array of Turing-
computable functions.
I probably would have worded things a little differently, but I don't
immediately object to this brief overview of AI history.
Post by Stephen Harris
Early AI was symbolic manipulation and CTM was expressed
in terms of symbol manipulation.
"This essay is concerned with a particular philosophical view
that holds that the mind literally is a digital computer (in
a specific sense of "computer" to be developed), and that
thought literally is a kind of computation. This view --
which will be called the "Computational Theory of Mind" (CTM)
Look, maybe we just need to back up a bit, to figure out why we're trying to
define these things in the first place.

[Philosophical Theory of Mind] Humans experience subjective consciousness.
We don't understand it well. There's a theory that
mind/intelligence/consciousness is some kind of computational process running
on some kind of hardware (probably neural networks). In this view, the
proper software running on different hardware (e.g. transistors) would also
be just as intelligent/conscious. Also, it implies that anything human minds
can accomplish will be limited by the known limitations of computation,
e.g. Gödel incompleteness, P vs. NP, computational complexity
classes, and the Halting Problem.
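[The halting-problem limit mentioned here can be made concrete with the
classic diagonal argument. A minimal Python sketch; the `halts` stub is
hypothetical, and Turing's point is precisely that no real implementation
of it can exist:]

```python
def halts(func, arg):
    """Hypothetical total decider: returns True iff func(arg) halts.
    Turing's diagonal argument shows no such algorithm can exist,
    so this stub can only signal its own impossibility."""
    raise NotImplementedError("no algorithm decides halting for all inputs")

def diagonal(prog):
    # If 'prog' would halt when run on itself, loop forever; else halt.
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# Feeding diagonal to itself yields a contradiction either way:
#   halts(diagonal, diagonal) == True  -> diagonal(diagonal) never halts
#   halts(diagonal, diagonal) == False -> diagonal(diagonal) halts
```

[Any mind that is a computation inherits this limit, which is what the
paragraph above means by "known limitations of computation".]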

There are alternatives to this [Philosophy]. For example, some people
believe in an everlasting soul, which doesn't a priori have any limitations.
It might have abilities like telekinesis or precognition, and it might be
capable of reincarnation. Others (like Penrose) imagine that there is a
Platonic ideal of mathematics, and that human consciousness is somehow able
to touch and tap into this Platonic world. Others propose that the brain's hardware
has more abilities than a standard computation substrate (for example,
quantum effects like superposition or quantum computation), and that
consciousness may critically depend on these extra, non-Von Neumann
capabilities in some way.

[AI Engineering] Separately, early AI was dominated by an approach which has
come to be known as Symbolic AI. This included technologies like expert
systems, inference engines, modal logics, non-monotonic logics, knowledge
bases, etc. Later, there were competitive approaches suggested, such as
artificial neural networks / connectionism / parallel distributed processing,
etc.

So, now on to the clarifications.

I don't know what you want to call the [Philosophical Theory of Mind] I
described above. It sure seems to be the same thing people were talking
about 50 years ago, when they said "the mind is a computer", and called it
Computationalism. So that's what I've been calling it. Would you prefer
another term?

I have been asserting this Computationalism (or whatever term you prefer) as
a claim, but I fully admit that it is just speculation at this point
(although backed up by reasonable suggestive evidence).

Secondly (for [AI Engineering]), although there may have been some initial
claims to this effect, it is not the case that these later approaches are
non-computational, or even non-symbolic for that matter. (Newell & Simon's
original Physical Symbol System hypothesis applies equally as well to
connectionism approaches as it does to expert systems.) Despite the fact
that the early AI technologies are generally known as "Symbolic AI", it's
actually the case that the later technologies are also within AI, and also
manipulate symbols.

It is also simply a fact that all of the [AI Engineering] approaches,
including connectionism, are compatible with the [Philosophical Theory of
Mind]. You've been quoting a lot of authors, and writing some historical
summaries, to suggest that Computationalism is necessarily joined with
Symbolic AI. But the concepts themselves are NOT necessarily joined. It
doesn't matter about use, or convention, or history.

It's simply a fact of their definitions (and the nature of computation) that
the [Philosophy] described above is perfectly compatible with an engineering
approach like connectionism. In both directions, actually: if
Computationalism is correct, then connectionism may be one approach for
successfully implementing minds on transistor-like hardware; in the opposite
direction, if a mind could be created via a connectionist architecture, then
that would prove Computationalism is correct. It is not possible for
Connectionism to implement a mind, but for Computationalism to be wrong.
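[The compatibility claim is easy to illustrate: a connectionist network is
itself an ordinary computable function, i.e. a program. A minimal sketch,
with hand-picked weights computing XOR in a two-layer threshold net; the
weights and architecture are illustrative choices, not anything from the
posts quoted here:]

```python
def step(x):
    # Threshold ("all-or-none") activation unit
    return 1 if x > 0 else 0

def xor_net(a, b):
    """A two-layer 'connectionist' network computing XOR.
    Hidden unit h1 fires for (a OR b), h2 fires for (a AND b);
    the output unit computes h1 AND NOT h2, i.e. XOR."""
    h1 = step(a + b - 0.5)      # a OR b
    h2 = step(a + b - 1.5)      # a AND b
    return step(h1 - h2 - 0.5)  # h1 AND NOT h2

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

[Nothing non-computational is involved: the net is a Turing-computable
function, which is why building a mind this way would confirm, not refute,
Computationalism.]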

You've quoted a lot of things, and you've implied strongly (but never stated?)
that Connectionism is an "alternative" or "rival" or "alternate tool" to
Computationalism.

Either:
1. You mean something different by "Computationalism" than my paragraph above;
2. You mean something different by "Connectionism" than the standard PDP
texts, e.g.
http://www.scism.sbu.ac.uk/inmandw/tutorials/pdp/pdpintro.html
3. You actually think #1 and #2 are incompatible.

I have to admit, I've been unable to tell what you believe on this topic. If
it's #3, then you're wrong, but I'd be happy to help you clear up your
confusion. And we can do this merely by talking about facts of computation
and definitions, without regard to history or use by other authors.

I suppose you might be using different definitions for the terms, but I really
can't guess what definitions you have in mind. Perhaps you could clarify for
me.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
If you don't want to be jealous of your friends, do what I do: only have losers
for friends. -- Deep Thoughts, by Jack Handey [1999]
v***@gmail.com
2007-03-11 19:39:57 UTC
Permalink
Hello,

What you are discussing seems interesting, but I am new
both to this group and to this topic (I am studying computer science
at the moment), so I can't fully grasp the concepts used here. Could
anyone give some good references covering the basics and the leading
ideas in the philosophy of AI? Best of all if they were free and online.

Thanks.
Michael Olea
2007-03-11 23:45:26 UTC
Permalink
Post by v***@gmail.com
Hello,
What you are discussing seems interesting, but I am new
both to this group and to this topic (I am studying computer science
at the moment), so I can't fully grasp the concepts used here. Could
anyone give some good references covering the basics and the leading
ideas in the philosophy of AI? Best of all if they were free and online.
Thanks.
Hi.

You can find a number of papers relevant to the philosophy of AI, and to
topics in this thread, freely available on-line at:

http://www.arts.uwaterloo.ca/~celiasmi/cv.html

You might also want to browse here:

http://cse.ucdavis.edu/~cmg/compmech/pubs.htm

Most of these are technical papers on "Computational Mechanics", but this
short little paper is topical:

Dynamical Embodiment of Computation in Cognitive Processes

James P. Crutchfield
Physics Department
University of California
Berkeley, California 94720, USA
and
Santa Fe Institute
1399 Hyde Park Road
Santa Fe, New Mexico 87501


ABSTRACT: Dynamics is not enough for cognition nor is it a substitute for
information processing aspects of brain behavior. Moreover, dynamics and
computation are not at odds, but are quite compatible. They can be
synthesized so that any dynamical system can be analyzed in terms of its
intrinsic computational components.

(Jim Crutchfield was a member of the group at UCSC dubbed "The Dynamics
Collective" in Gleick's popularization "Chaos: The Making of A New
Science", and coauthor of a Scientific American article on chaotic
dynamical systems.)

You might also want to have a look at:

Facts, Concepts, And Theories: The Shape Of
Psychology's Epistemic Triangle
http://www.behavior.org/journals_BP/2000/Machado.pdf

On A Distinction Between Hypothetical Constructs And
Intervening Variables
http://www.psych.ufl.edu/~steh/PSB6088/macandmeehl.pdf

Why I Am Not a Cognitive Psychologist
http://skeptically.org/skinner/id9.html

By the way, the notion that there has been any sort of decline or "demise"
of so called "computationalism" is misguided at best. On the contrary,
there has been, and continues to be, explosive growth of computational
approaches to all aspects of neuroscience, and increasing penetration in
some aspects of behavioral science (a great deal of work in psychophysics,
but also some work, for example, in developmental psychology, and in the
dynamics of choice behavior).

-- Michael
Alpha
2007-03-12 15:35:16 UTC
Permalink
Post by Michael Olea
Post by v***@gmail.com
Hello,
What you are discussing seems interesting, but I am new
both to this group and to this topic (I am studying computer science
at the moment), so I can't fully grasp the concepts used here. Could
anyone give some good references covering the basics and the leading
ideas in the philosophy of AI? Best of all if they were free and online.
Thanks.
[...]
By the way, the notion that there has been any sort of decline or "demise"
of so called "computationalism" is misguided at best. On the contrary,
there has been, and continues to be, explosive growth of computational
approaches to all aspects of neuroscience, and increasing penetration in
some aspects of behavioral science (a great deal of work in psychophysics,
but also some work, for example, in developmental psychology, and in the
dynamics of choice behavior).
Indeed, one should read carefully Read Montague's new book, Why Choose This
Book?: How We Make Decisions, for a computational neuroscience approach to
issues like regret, trust, choice, and collaboration between minds in social
contexts.

Another source of interest is David Chalmers's set of papers in areas that
computationalists are trying to understand so as to provide fodder for their
simulations/emulations of brain, mind, and consciousness (e.g., what is
mind, brain, consciousness, etc.):

http://consc.net/online.html
Post by Michael Olea
-- Michael
--
Posted via a free Usenet account from http://www.teranews.com
Stephen Harris
2007-03-12 21:16:13 UTC
Permalink
Post by Alpha
Another source of interest is David Chalmers's set of papers in areas that
computationalists are trying to understand so as to provide fodder for their
simulations/emulations of brain, mind, and consciousness (e.g., what is
mind, brain, consciousness, etc.):
http://consc.net/online.html
Yes, I had forgotten about this. However, I don't think there
is a large overview that covers the topics of those papers. Aaron
Sloman's book is available free as a PDF, with some updates to
the material. Dated 1978, but still useful.

http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
Note added 2001
"After this book was published there was a revival of interest
among many AI researchers in "connectionist" architectures. Some
went so far as to claim that previous approaches to AI had failed,
and that connectionism was the only hope for AI. Since then there
have been other swings of fashion. It should be clear to people
whose primary objective is to understand the problems rather than
to win media debates or do well in competitions for funding that
there is much that we do not understand about what sorts of
architectures are possible and what their scope and limitations
are. It seems very likely that very different sorts of mechanisms
need to be combined in order to achieve the full range of human
capabilities, including controlling digestion, maintaining balance
while walking, recognising faces, gossiping at the garden gate,
composing poems and symphonies, solving differential equations,
and developing computer programs such as operating systems and
compilers. I don't know of any example of an AI system, whether
implemented using neural nets, logical mechanisms, dynamical
systems, evolutionary mechanisms, or anything else, that is
capable of most of the things humans can do including those items
listed above. This does not mean it is impossible. It only means
that AI researchers need some humility when they propose mechanisms."

Regards,
Stephen
Alpha
2007-03-12 15:38:25 UTC
Permalink
Post by Michael Olea
Post by v***@gmail.com
Hello,
What you are discussing seems interesting, but I am new
both to this group and to this topic (I am studying computer science
at the moment), so I can't fully grasp the concepts used here. Could
anyone give some good references covering the basics and the leading
ideas in the philosophy of AI? Best of all if they were free and online.
The following site by the authors of AI: A Modern Approach is replete with
links/references to AI, all categorized. One of the most complete links
sets I have seen WRT AI.

http://aima.cs.berkeley.edu/ai.html
--
Posted via a free Usenet account from http://www.teranews.com
Don Geddis
2007-03-13 00:36:01 UTC
Permalink
Post by Alpha
The following site by the authors of AI: A Modern Approach is replete with
links/references to AI, all categorized. One of the most complete links
sets I have seen WRT AI.
http://aima.cs.berkeley.edu/ai.html
Hey! What do you know. _My name_ is linked on that list. (Although, alas,
the link target itself is no longer valid.)

I guess I'm more important than I thought in the field of AI!

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
The concept is interesting and well-formed; but in order to earn better than a
C, the idea must be feasible.
-- A Yale University management professor in response to student
Fred Smith's paper proposing reliable overnight delivery service.
(Smith went on to found Federal Express corporation.)
Stephen Harris
2007-03-12 19:14:11 UTC
Permalink
Post by Michael Olea
[...]
http://cse.ucdavis.edu/~cmg/compmech/pubs.htm
Most of these are technical papers on "Computational Mechanics", but this
Dynamical Embodiment of Computation in Cognitive Processes
James P. Crutchfield
Physics Department
University of California
Berkeley, California 94720, USA
and
Santa Fe Institute
1399 Hyde Park Road
Santa Fe, New Mexico 87501
ABSTRACT: Dynamics is not enough for cognition nor is it a substitute for
information processing aspects of brain behavior. Moreover, dynamics and
computation are not at odds, but are quite compatible. They can be
synthesized so that any dynamical system can be analyzed in terms of its
intrinsic computational components.
(Jim Crutchfield was a member of the group at UCSC dubbed "The Dynamics
Collective" in Gleick's popularization "Chaos: The Making of A New
Science", and coauthor of a Scientific American article on chaotic
dynamical systems.)
-- Michael
Yes, Crutchfield is recognized as an expert. I read him as saying that
symbolic computation can be reduced to dynamical systems theory:
'If neural nets are computational and representational, then so are
dynamical systems.' So the computational aspect is not eliminated.

However, if organic minds are fundamentally dynamical systems, then
they are not fundamentally symbolic systems, and the computationalist
slogan, 'minds are computers', is no longer supported.

"Yet another different and higher level question is how the behavior
of a dynamical system that supports intrinsic computation takes on
functionality and cognitive import in an environment. As we look to
the future it will be increasingly important that the limitations of
our current conceptions of dynamics and of computation be identified
so that extensions and new frameworks invented in the spirit that has
brought us to the threshold of synthesizing them can form the
foundations for understanding how cognition works."
J.A. Legris
2007-03-13 02:56:44 UTC
Permalink
Hi Michael,

I think you are using a restricted and misleading sense of the term
computationalism. As Don Geddis wrote above, "computationalism" is
meant as the claim that programs on digital computers can (in
principle) do all the cognitive things that human minds do - a very
broad claim.

We can simulate and model all sorts of physical phenomena with
computers, but I am not aware of any, let alone cognition, where a
computer program can (even in principle) do *everything* the physical
process can do.

Computationalists make their case by assuming the conclusion, viz.:
cognition is information processing, and information processing is just
computation, so, obviously, cognition is just computation. But we are
no more justified in assuming that cognition is information
processing than, say, assuming that electron flow in digital circuits
is just information processing. The inescapable fact is that the
behaviours of electrons and brains depend on the properties of matter,
which, at least until science itself is a done deal, cannot be
characterized by information alone.

--
Joe
Michael Olea
2007-03-13 04:17:36 UTC
Permalink
Post by J.A. Legris
Hi Michael,
Hi, Joe.*
Post by J.A. Legris
I think you are using a restricted and misleading sense of the term
computationalism. As Don Geddis wrote above, "computationalism" is
meant as the claim that programs on digital computers can (in
principle) do all the cognitive things that human minds do - a very
broad claim.
We can simulate and model all sorts of physical phenomena with
computers, but I am not aware of any, let alone cognition, where a
computer program can (even in principle) do *everything* the physical
process can do.
Does a missile guidance system simulate guiding a missile, or does it guide
a missile?

I'll have to take it up in more depth later.
Post by J.A. Legris
cognition is information processing and information processing is just
computation so, obviously, cognition is just computation. But we are
no more justified in assuming that cognition is information
processing than, say, assuming that electron flow in digital circuits
is just information processing. The inescapable fact is that the
behaviours of electrons and brains depend on the properties of matter,
which, at least until science itself is a done deal, cannot be
characterized by information alone.
* I can only delay, but not completely suppress adding "where you going with
that gun in your hand".
Neil W Rickert
2007-03-13 18:43:30 UTC
Permalink
Post by Michael Olea
Does a missile guidance system simulate guiding a missile, or does it guide
a missile?
A missile guidance system is not a computational system. It uses
computation, but it also uses measurement in a corrective feedback
loop.
Don Geddis
2007-03-13 23:58:15 UTC
Permalink
Post by Neil W Rickert
Post by Michael Olea
Does a missile guidance system simulate guiding a missile, or does it guide
a missile?
A missile guidance system is not a computational system. It uses
computation, but it also uses measurement in a corrective feedback loop.
Wow. That's kind of a bizarre claim.

Lots of programs have sensors into some kind of environment. A web browser
gets stuff off the internet, for example.

You think those kinds of programs are somehow not covered in the Theory of
Computation? That they're a different field of study? They're still limited
by the halting problem, by P vs. NP, by complexity classes of algorithms, etc.

I don't think anybody was ever claiming that the brain (or a computer mind)
doesn't take input from its environment.

Your statement seems to be a non sequitur, at best.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
I got a fever. And the only prescription is more cowbell.
-- "Bruce Dickinson" (Christopher Walken), SNL
Neil W Rickert
2007-03-14 01:04:27 UTC
Permalink
Post by Don Geddis
Post by Neil W Rickert
Post by Michael Olea
Does a missile guidance system simulate guiding a missile, or does it guide
a missile?
A missile guidance system is not a computational system. It uses
computation, but it also uses measurement in a corrective feedback loop.
Wow. That's kind of a bizarre claim.
Lots of programs have sensors into some kind of environment. A web browser
gets stuff off the internet, for example.
I make a distinction between a system that merely processes static
external data fed to it, and a system that solicits external data
that is a real-world reaction to prior outputs of the computer.
Post by Don Geddis
You think those kinds of programs are somehow not covered in the Theory of
Computation?
You cannot adequately model them on a Turing machine. Modelling on a
TM presupposes that all of the data to be used can be on the TM tape
before the computation begins. But that's wrong for missile guidance
systems.
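[Rickert's distinction can be sketched as the difference between a batch
computation over a fixed input and an online controller whose next input
depends on its own previous output. A toy proportional controller in
Python; the plant model and gain are illustrative assumptions, not anything
from the posts quoted here:]

```python
def batch(data):
    # TM-style computation: the whole input exists before we start.
    return sum(data)

def feedback_control(target, position, steps=50, gain=0.5):
    """Online computation: each new 'sensor reading' is the world's
    reaction to the controller's previous output, so the input stream
    cannot be written down in advance."""
    for _ in range(steps):
        error = target - position   # measurement of the current state
        command = gain * error      # control output
        position += command         # the world reacts; next input depends on this
    return position

print(round(feedback_control(10.0, 0.0), 3))  # converges to 10.0
```

[Whether this interactivity escapes the Theory of Computation, as Rickert
claims, or is just a computation with streaming input, as Geddis replies,
is exactly what is in dispute; the sketch only shows what the feedback
loop looks like.]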

I'll grant that I am taking a minority view here. Time will show that
the distinction I am making is important.
Post by Don Geddis
Computation? That they're a different field of study? They're still limited
by the halting problem, by P vs. NP, by complexity classes of algorithms, etc.
Neither the halting problem, nor P vs. NP have much relevance to
the design of missile guidance systems.
Post by Don Geddis
I don't think anybody was ever claiming that the brain (or a computer mind)
doesn't take input from its environment.
You don't think such an assumption is implicit in Penrose's anti-AI
argument?
--
DO NOT REPLY BY EMAIL - The address above is a spamtrap.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115
Don Geddis
2007-03-13 04:27:00 UTC
Permalink
We can simulate and model all sorts of physical phenomena with computers,
but I am not aware of any, let alone cognition, where a computer program
can (even in principle) do *everything* the physical process can do.
The inescapable fact is that the behaviours of electrons and brains depend
on the properties of matter, which, at least until science itself is a done
deal, cannot be characterized by information alone.
Well of course you're right, in so far as this goes.

The question is whether the physical device/process can be usefully divided
into a notion of "information processing" plus "noise".

We have no interest in simulating or duplicating all the noise.

The computational theory suggests that brains are like physical calculators.
It is of course the case that all sorts of things are going on at the atomic
(or subatomic!) level in both physical devices. Minsky has recently suggested
that no theory of the brain (or calculators) will be "complete" until you
account for the influence of gravity on every particle, since gravity is
everywhere in the universe and cannot be blocked or shielded.

But as Minsky rightly points out, gravity is not an _important_ contributor
to understanding the operation of a physical calculator (or the brain).

It's clear for calculators that there is a useful level of abstraction which
one can call "information processing". Once the operation of the calculator
is defined at this level, it makes sense to talk about implementing software
that performs the same "information processing". This software is not
_simulating_ a physical calculator; it is performing the same information
processing steps as are performed by the physical device. There's no question
that the physical device _also_ has a lot of other physical effects, which we
generally call "noise" and are happy to ignore.
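[The calculator point can be made concrete: the same information processing
realized by two different mechanisms yields identical I/O behavior, and that
shared level of abstraction is what "information processing" names here. A
toy sketch; the gate-level loop stands in for a distinct physical substrate:]

```python
def add_arithmetic(a, b):
    # One realization: the host CPU's native addition.
    return a + b

def add_gates(a, b):
    """Another realization: ripple-carry addition built from Boolean
    operations, the way physical adder circuits do it (for
    non-negative integers)."""
    while b:
        carry = (a & b) << 1  # bit positions that generate a carry
        a = a ^ b             # sum of the bits, ignoring carries
        b = carry             # feed the carries back in
    return a

# Different mechanisms, same information processing:
print(add_arithmetic(19, 23), add_gates(19, 23))  # 42 42
```

[Neither function _simulates_ the other; both perform the same abstract
computation, while everything else about their physical execution is, on
this view, "noise".]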

So, the question is whether the brain/mind can be usefully modelled at this
level. _Nobody_ is saying that the brain is not a physical device, and of
course is affected (in some minor way) by gravity, and quantum mechanics,
and the strong nuclear force, and...

But it seems likely that when you talk about the I/O behavior of the brain,
including everything we think of as mind or consciousness or intelligence,
that this is most usefully understood as a kind of information processing.

And, as you wrote earlier, it's not a big step from there to
Computationalism.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
I don't think I'm ever more "aware" than I am right after I hit my thumb with a
hammer. -- Deep Thoughts, by Jack Handey
Alpha
2007-03-13 16:17:55 UTC
Permalink
Post by Don Geddis
We can simulate and model all sorts of physical phenomena with computers,
but I am not aware of any, let alone cognition, where a computer program
can (even in principle) do *everything* the physical process can do.
The unescapable fact is that the behaviours of electrons and brains depend
on the properties of matter, which, at least until science itself is a done
deal, cannot be characterized by information alone.
Well of course you're right, in so far as this goes.
The question is whether the physical device/process can be usefully divided
into a notion of "information processing" plus "noise".
We have no interest in simulating or duplicating all the noise.
One man's noise is another's information. (One level's noise is another
level's information). And a given level of information processing can lead
to emergent properties at the next higher level of description.
Post by Don Geddis
The computational theory suggests that brains are like physical calculators.
It is of course the case that all sorts of things are going on at the atomic
(or subatomic!) level in both physical devices. Minsky has recently suggested
that no theory of the brain (or calculators) will be "complete" until you
account for the influence of gravity on every particle, since gravity is
everywhere in the universe and cannot be blocked or shielded.
But as Minsky rightly points out, gravity is not an _important_ contributor
to understanding the operation of a physical calculator (or the brain).
It's clear for calculators that there is a useful level of abstraction which
one can call "information processing". Once the operation of the calculator
But you miss the point. It is not known at what level information
processing stops and "noise" or non-salient aspects of matter begin. As has
already been shown here for example, there are many aspects of sub-neuronal
processes that can contribute to information processing. Why, DNA
expression itself, along with downstream processes therein, contributes
mightily to information processing in the brain (it determines when and
where dendritic growth or LTP will occur, for example).
Post by Don Geddis
is defined at this level, it makes sense to talk about implementing software
that performs the same "information processing". This software is not
_simulating_ a physical calculator; it is performing the same information
processing steps as are performed by the physical device.
But with the brain, we simply do not know that the full specification of
information processing lies at the AP level; that assumption is arbitrary
and ignores other salient aspects of neuronal processing.
Post by Don Geddis
There's no question
that the physical device _also_ has a lot of other physical effects, which we
generally call "noise" and are happy to ignore.
Ignorance is bliss!
Post by Don Geddis
So, the question is whether the brain/mind can be usefully modelled at this
level. _Nobody_ is saying that the brain is not a physical device, and of
course is affected (in some minor way) by gravity, and quantum mechanics,
and the strong nuclear force, and...
But it seems likely that when you talk about the I/O behavior of the brain,
including everything we think of as mind or consciousness or intelligence,
that this is most usefully understood as a kind of information processing.
But many think of mind, consciousness and brain processes as additionally
involving (beyond APs alone) most of those processes you seem to want to
ignore. It is a complete package - with emergent phenomena built upon
emergent phenomena. (See Alwyn Scott for how hierarchies of emergent
phenomena occur in the brain and elsewhere, soliton-soliton processes for
example. See Read Montague for information about how hormonal milieus
affect processes like reward-prediction signals, giving rise to algorithms
that support our processes of trust and regret, etc.)
Post by Don Geddis
And, as you wrote earlier, it's not a big step from there to
Computationalism.
-- Don
--
Posted via a free Usenet account from http://www.teranews.com
Stephen Harris
2007-03-13 08:00:48 UTC
Permalink
Post by J.A. Legris
Hi Michael,
I think you are using a restricted and misleading sense of the term
computationalism. As Don Geddis wrote above, "computationalism" is
meant as the claim that programs on digital computers can (in
principle) do all the cognitive things that human minds do - a very
broad claim.
We can simulate and model all sorts of physical phenomena with
computers, but I am not aware of any, let alone cognition, where a
computer program can (even in principle) do *everything* the physical
process can do.
cognition is information processing and information processing is just
computation so, obviously, cognition is just computation. But we are
no more justified in assuming that cognition is information
processing than, say, assuming that electron flow in digital circuits
is just information processing. The inescapable fact is that the
behaviours of electrons and brains depend on the properties of matter,
which, at least until science itself is a done deal, cannot be
characterized by information alone.
--
Joe
Computationalism is true at a broad categorical level, but it is not
true at the more detailed level, which asserts that programs can be
made which instantiate minds on a computer.

That is because Comp requires symbol manipulation, and neuron spikes
are not symbols, nor do trains of spikes correspond to manipulating
symbol strings. So Comp at the meaningful level is empirically false.

Regards,
Stephen
Allan C Cybulskie
2007-03-13 10:20:55 UTC
Permalink
Post by J.A. Legris
Hi Michael,
I think you are using a restricted and misleading sense of the term
computationalism. As Don Geddis wrote above, "computationalism" is
meant as the claim that programs on digital computers can (in
principle) do all the cognitive things that human minds do - a very
broad claim.
There's a big flaw here: just because a computer can do the cognitive
things that humans do does not mean that human cognition is
necessarily computational, any more than the fact that a parallel
processing system can do the same things that a sequential system can
do makes the parallel system sequential. It may just be the case that
the parallel system works to "emulate" the functionality of the
sequential system. The same thing may be the case for cognition.

[argument snipped; it kind of makes my point [grin]]
J.A. Legris
2007-03-13 02:03:51 UTC
Permalink
Post by Michael Olea
By the way, the notion that there has been any sort of decline or "demise"
of so called "computationalism" is misguided at best. On the contrary,
there has been, and continues to be, explosive growth of computational
approaches to all aspects of neuroscience, and increasing penetration in
some aspects of behavioral science (a great deal of work in psychophysics,
but also some work, for example, in developmental psychology, and in the
dynamics of choice behavior).
Hi Michael,

I think you are using a restricted sense of the term computationalism.
As Don Geddis wrote above,
"computationalism" is meant as the claim that programs on digital
computers can (in principle) do all the cognitive things that human
minds do - a very broad claim.

We can simulate and model all sorts of physical phenomena with
computers, but I am not aware of any, let alone cognition, where a
computer program can (even in principle) do *everything* the physical
process can do.

Computationalists make their case by assuming the conclusion, viz:
cognition is information processing and information processing is just
computation so, obviously, cognition is just computation. But we are
no more justified in assuming that cognition is information
processing than, say, assuming that electron flow in digital circuits
is just information processing. The inescapable fact is that the
behaviours of electrons and brains depend on the properties of matter,
which, at least until science itself is a done deal, cannot be
characterized by information alone.

--
Joe
J.A. Legris
2007-03-13 20:49:59 UTC
Permalink
What's this? Overnight delivery? Has Google Groups gone postal?

Anyway, please disregard this duplicate posting.

--
Joe
Stephen Harris
2007-03-12 16:33:35 UTC
Permalink
Post by Stephen Harris
Hello,
it seems interesting what you are discussing, but I am new
both to this group and to this topic (I am doing computer science
currently), so I can't fully grasp the concepts used here. Could anyone
give some good references to get the basics and description of leading
ideas in AI philosophy? Best of all, if it were free and online.
Thanks.
"The Mind Doesn't Work That Way: The Scope and Limits of Computational
Psychology" by Jerry Fodor. It is certainly available in the library
and a book is easier to read. He is one of the very best writers.

I got my boost by reading "Godel, Escher and Bach" by Hofstadter.
It received the Pulitzer Prize, but it may be too general and dated.
He mentions Godel Incompleteness which is very likely a wrong way
of attacking strong AI. And quining after the philosopher, Quine.

Another avenue is to read Searle's Chinese Room Argument, which is
free online and an attack on strong AI. Decide if you find the argument
intuitive. Then read the hundreds (at least) of free online rebuttals
and dismissals of the CRA. These papers will have lots of
keywords, such as "intentionality" or "modularity", which you can
pursue, fanning out your readings like plants moving towards
sunshine. That way is fun. Or you can take a class in it and receive
the benefits of a disciplined, inclusive exploration, but no quick fix.

Regards,
Stephen
Stephen Harris
2007-03-06 13:13:40 UTC
Permalink
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (
http://mechanism.ucsd.edu/~bill/research/REPRESENT.html

"Dynamical systems theory (DST) is changing the manner in which many
cognitive scientists think about cognition. It provides a new set of
tools to use in trying to understand how the mind-brain carries out
cognitive tasks."

"The computational approach is nothing less than a research paradigm in
Kuhn's classic sense. It defines a range of questions and the form of
answers to those questions (i.e., computational models). It provides an
array of exemplars--classic pieces of research which define how
cognition is to be thought about and what counts as a successful model.
. . . [T]he dynamical approach is more than just powerful tools; like
the computational approach, it is a worldview. It is not the brain,
inner and encapsulated; rather, it is the whole system comprised of
nervous system, body, and environment. The cognitive system is not a
discrete sequential manipulation of static representational structures;
rather, it is a structure of mutually and simultaneously influencing
change. The cognitive system does not interact with other aspects of the
world by passing messages or commands; rather, it continuously coevolves
with them. . . . [T]o see that there is a dynamical approach is to see a
new way of conceptually reorganizing cognitive science as it is
currently practiced (Van Gelder & Port, 1995, pp. 2-4)."
f***@msn.com
2007-03-06 13:38:37 UTC
Permalink
Post by Stephen Harris
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (
http://mechanism.ucsd.edu/~bill/research/REPRESENT.html
"Dynamical systems theory (DST) is changing the manner in which many
cognitive scientists think about cognition. It provides a new set of
tools to use in trying to understand how the mind-brain carries out
cognitive tasks."
"The computational approach is nothing less than a research paradigm in
Kuhn's classic sense. It defines a range of questions and the form of
answers to those questions (i.e., computational models). It provides an
array of exemplars--classic pieces of research which define how
cognition is to be thought about and what counts as a successful model.
. . . [T]he dynamical approach is more than just powerful tools; like
the computational approach, it is a worldview. It is not the brain,
inner and encapsulated; rather, it is the whole system comprised of
nervous system, body, and environment. The cognitive system is not a
discrete sequential manipulation of static representational structures;
rather, it is a structure of mutually and simultaneously influencing
change. The cognitive system does not interact with other aspects of the
world by passing messages or commands; rather, it continuously coevolves
with them. . . . [T]o see that there is a dynamical approach is to see a
new way of conceptually reorganizing cognitive science as it is
currently practiced (Van Gelder & Port, 1995, pp. 2-4)."
This sounds like cybernetics. Is there a distinction?
Stephen Harris
2007-03-06 23:53:28 UTC
Permalink
Post by f***@msn.com
Post by Stephen Harris
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (
http://mechanism.ucsd.edu/~bill/research/REPRESENT.html
"Dynamical systems theory (DST) is changing the manner in which many
cognitive scientists think about cognition. It provides a new set of
tools to use in trying to understand how the mind-brain carries out
cognitive tasks."
"The computational approach is nothing less than a research paradigm in
Kuhn's classic sense. It defines a range of questions and the form of
answers to those questions (i.e., computational models). It provides an
array of exemplars--classic pieces of research which define how
cognition is to be thought about and what counts as a successful model.
. . . [T]he dynamical approach is more than just powerful tools; like
the computational approach, it is a worldview. It is not the brain,
inner and encapsulated; rather, it is the whole system comprised of
nervous system, body, and environment. The cognitive system is not a
discrete sequential manipulation of static representational structures;
rather, it is a structure of mutually and simultaneously influencing
change. The cognitive system does not interact with other aspects of the
world by passing messages or commands; rather, it continuously coevolves
with them. . . . [T]o see that there is a dynamical approach is to see a
new way of conceptually reorganizing cognitive science as it is
currently practiced (Van Gelder & Port, 1995, pp. 2-4)."
This sounds like cybernetics. Is there a distinction?
en.wikipedia.org/wiki/Dynamical_system

"Dynamic system theory has recently emerged in the field of cognitive
development. It is the belief that cognitive development is best
represented by physical theories rather than theories based on syntax
and AI."


http://en.wikipedia.org/wiki/Systems_theory#Cybernetics
Cybernetics

"The terms 'systems theory' and 'cybernetics' have been widely used as
synonyms, although some authors use the term cybernetic systems to
denote a proper subset of the class of general systems, namely those
systems that include feedback loops."
Stephen Harris
2007-03-08 15:18:46 UTC
Permalink
Post by f***@msn.com
Post by Stephen Harris
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (
http://mechanism.ucsd.edu/~bill/research/REPRESENT.html
"Dynamical systems theory (DST) is changing the manner in which many
cognitive scientists think about cognition. It provides a new set of
tools to use in trying to understand how the mind-brain carries out
cognitive tasks."
This sounds like cybernetics. Is there a distinction?
Yes, I finally found out, and there is also a distinction between
Connectionism and Dynamical Systems Theory (DST).

"The systems theory approach to science has proved to be
effective especially at the interfaces of well-established
disciplines, for example, biophysics, biochemistry,
information theory and cybernetics just to name a few.

However, a metatheory involving the concept of complex
adaptive systems (CAS) has been invented and developed by
members of the Santa Fe Institute [9]. According to Gell-Mann
[10] a CAS is a system that gathers information about itself
and its own behavior and from the perceived patterns which are
organized into a combination of descriptions and predictions,
modifies its behavior. Further, the interaction of such a CAS
with the environment provides feedback with which the survival
characteristics of the system are adjusted. This complicated
behavior leading to an internal change in the system associated
with decision making is not to be confused with the direct
control envisioned in cybernetics and other early forms of
systems theory."
Alpha
2007-03-07 19:18:09 UTC
Permalink
Post by Stephen Harris
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists; and (
http://mechanism.ucsd.edu/~bill/research/REPRESENT.html
"Dynamical systems theory (DST) is changing the manner in which many
cognitive scientists think about cognition. It provides a new set of tools
to use in trying to understand how the mind-brain carries out cognitive
tasks."
"The computational approach is nothing less than a research paradigm in
Kuhn's classic sense. It defines a range of questions and the form of
answers to those questions (i.e., computational models). It provides an
array of exemplars--classic pieces of research which define how cognition
is to be thought about and what counts as a successful model. . . . [T]he
dynamical approach is more than just powerful tools; like the
computational approach, it is a worldview.
It is not the brain, inner and encapsulated; rather, it is the whole
system comprised of nervous system, body, and environment.
I think the dynamical approach to mind/brain can be separated from whether
mind/brain is situated and thus has to account for environment. That the
CNS is a dynamical system (with attractors/basins etc.) is becoming clear at
one level of description. But such can be the case without reference to an
environment (brain in a vat).

Now of course, per Bateson in his Mind and Nature - A Necessary Unity, it
may be the case that in practice, embodied mind/brains have a lot of
feedback/feedforward processes with an environment and that affects the
states/processes of mind/brain.
Post by Stephen Harris
The cognitive system is not a discrete sequential manipulation of static
representational structures; rather, it is a structure of mutually and
simultaneously influencing change. The cognitive system does not interact
with other aspects of the world by passing messages
I would say that information flows both ways. We get information from the
environment and we send information back out (in terms of our actions - we
alter informational structures in the world).
Post by Stephen Harris
or commands; rather, it continuously coevolves with them. . . . [T]o see
that there is a dynamical approach is to see a new way of conceptually
reorganizing cognitive science as it is currently practiced (Van Gelder &
Port, 1995, pp. 2-4)."
Alpha
2007-03-07 19:07:47 UTC
Permalink
Post by Don Geddis
[O]ne can compare Computationalism (Comp) and Connectionism as both being
Turing computable. But that doesn't mean that they both enjoy the same
flexibility as tools for the pursuit of AI.
OK, Stephen, I think a number of different issues are getting conflated here.
Perhaps if I tease them apart, it will be more clear.
The first thing is whether a "mind" or "consciousness" is merely the result
of some computation running on some hardware. Or, alternatively, whether it
might be reasonable for some entity to be an "intelligent zombie", where it
acts just like a human, but doesn't "really" have a mind.
I've argued elsewhere that a "zombie" is not a feasible entity, but that wasn't
the topic I was addressing in this thread.
It is feasible if you change your definitions a little bit. Mind =
intelligence + consciousness. A passing TTP may instantiate intelligence,
but not a mind, as it will not have consciousness. It is even likely, IMO,
that a passing TTP will not understand anything it does the way humans do
(which comes about with a subject (consciousness) understanding an object in
consciousness (a percept or concept)).

<snip>
Post by Don Geddis
So anyway, my main points (in this thread) were that (1) Computationalism is
still the default philosophy for the vast majority of AI researchers and
cognitive scientists;
But not for Strong AI. And I would say not for cognitive scientists either.
Post by Don Geddis
and (2) if you dislike Computationalism, then it is
NOT the case that Connectionism gives you an alternative theory of mind.
I think I agree with you that Connectionism is a form of Computationalism,
which should be independent of implementation.
Post by Don Geddis
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/
Bigamy is having one husband or wife too many. Monogamy is the same.
Don Geddis
2007-03-07 21:17:25 UTC
Permalink
Post by Alpha
Post by Don Geddis
I've argued elsewhere that a "zombie" is not a feasible entity, but that
wasn't the topic I was addressing in this thread.
It is feasible if you change your definitions a little bit. Mind =
intelligence + consciousness.
Well, these things really aren't merely a matter of definition. They already
exist (in the conceptions of lay people, anyway).

Intelligence is something like problem-solving ability. A behavior you can
observe externally.

Consciousness is the internal, subjective, first-person perspective that we
can all observe within ourselves.

It's still an open question whether an entity that could pass something like
the Turing Test might exist without a "mind".
Post by Alpha
A passing TTP may instantiate intelligence, but not a mind, as it will not
have consciousness. It is even likely, IMO, that a passing TTP will not
understand anything it does the way humans do (which comes about with a
subject (consciousness) understanding an object in consciousness (a percept
or concept)).
I'm aware of your perspective.

I don't share it at all. Two things I believe, which you (probably) don't are:
1. An entity without a mind/consciousness would be unable to pass the Turing
Test; and
2. Working on "consciousness" is not really an independent research direction
for AI scientists. That the same approaches which enable intelligent
behavior in complex devices, also as a side-effect create consciousness.

So you can see how, given _my_ theory, that an "intelligent zombie" which
passes the Turing Test but has no mind, is "not a feasible entity", as I
originally wrote above.

It doesn't become feasible just by changing definitions.

But in any case, I understand that you have a different theory of mind (of
some kind), and I understand at least the outer sketch of the claims your
theory makes.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
When I was in high school, my friends would lay anything that moved.
I choose not to limit myself.
Alpha
2007-03-07 22:21:36 UTC
Permalink
Post by Don Geddis
Post by Alpha
Post by Don Geddis
I've argued elsewhere that a "zombie" is not a feasible entity, but that
wasn't the topic I was addressing in this thread.
It is feasible if you change your definitions a little bit. Mind =
intelligence + consciousness.
Well, these things really aren't merely a matter of definition.
Yet, to start out a conversation, it is useful to define one's terms! I was
under the impression, from reading hundreds of books on
brain/mind/consciousness and those areas of AI that delve into such (like
the classic Gödel, Escher, Bach and Kurzweil's work, as well as Wolfram's),
that all of those authors, BTW, redefined many aspects of those topics and
what they point to or reference in those areas.
Post by Don Geddis
They already
exist (in the conceptions of lay people, anyway).
Intelligence is something like problem-solving ability. A behavior you can
observe externally.
Consciousness is the internal, subjective, first-person perspective that we
can all observe within ourselves.
I agree with all three statements. I would presume that the problem-solving
ability includes memory functionality!?
Post by Don Geddis
It's still an open question whether an entity that could pass something like
the Turing Test might exist without a "mind".
Now comes the tricky part. I would argue that mind is composed of that
self-consciousness + consciousness_of capabilities (being conscious of some
object/process that impinges upon one's CNS or is created from within it
(internal discourse)), + all the various intelligence capabilities
(reasoning and so forth). The "mind" is the complete package in my POV.

So, in that POV, it is consistent to say that an artifact can pass the TT
for intelligence, fooling judges etc., yet does not possess consciousness and
therefore does not instantiate a mind.

I realize that you think that somehow, perhaps through emergence (and that
is a possibility, I agree), some consciousness functionality may appear.
I don't think the most salient aspects will be there, though - the
"self-consciousness" and the qualia/experience of objects/processes in the
way humans have that experiential Gestalt. That is where I think
substrate matters (and hence I am not a pure functionalist (although I am
to a certain extent, just not for all types or classes of functionality)).
Post by Don Geddis
Post by Alpha
A passing TTP may instantiate intelligence, but not a mind, as it will not
have consciousness. It is even likely, IMO, that a passing TTP will not
understand anything it does the way humans do (which comes about with a
subject (consciousness) understanding an object in consciousness (a percept
or concept)).
I'm aware of your perspective.
1. An entity without a mind/consciousness would be unable to pass the Turing
Test; and
You are right - I don't. I think a lot of Intelligence functionality is
needed to pass a TT for intelligence. And I separate that functionality as
different in class, from the functionality (if you can call it that), that
provides us with experience/qualia and a sense of self.
Post by Don Geddis
2. Working on "consciousness" is not really an independent research direction
for AI scientists. That the same approaches which enable intelligent
behavior in complex devices, also as a side-effect create consciousness.
That could ("side-effect" connotes) mean you are an epiphenomenolist ?! Do
you believe that consciousness is causal (can affect other mind functions,
including hormonal milieus)?

Epiphenomenalism - A mind/body viewpoint once held by B. F. Skinner,
poster boy for the radical behaviorist faction of psychology. It holds that
cognitions exist, but hold no causal role in behavior. They are essentially
"trash" that the mind discards whilst doing work.
http://www.candleinthedark.com/cognitivemiddle.html
Post by Don Geddis
So you can see how, given _my_ theory, that an "intelligent zombie" which
passes the Turing Test but has no mind, is "not a feasible entity", as I
originally wrote above.
It doesn't become feasible just by changing definitions.
Sure it does!! If I say mind = i_functions + c_functions, and that these can
arise somewhat or perhaps completely separately (a paramecium is very
conscious (reacts vigorously to stimuli) but has almost no intelligence),
where I_ and C_functions are separate classes of process/function, arise
separately, and one can be had without the other, then I can have a TTP that
has I_functions but no C_functions and thus does not instantiate a mind. I
changed the definition to a different one from yours, and it works
perfectly, and a zombie can be feasible.

If I say, on the other hand, that my definition of mind (yours actually) is
that I_functions and C_functions arise almost simultaneously, or when
I_functions get to some sufficient degree of complexity/dynamics etc., then
I cannot separate them (or not as easily) and a zombie would not make sense.

So the issue is, at least partly, between definitions.
Post by Don Geddis
But in any case, I understand that you have a different theory of mind (of
some kind),
Some of which was explained above.
Post by Don Geddis
and I understand at least the outer sketch of the claims your
theory makes.
-- Don
Don Geddis
2007-03-08 19:42:17 UTC
Permalink
Post by Don Geddis
1. An entity without a mind/consciousness would be unable to pass the
Turing Test; and
I think a lot of Intelligence functionality is needed to pass a TT for
intelligence. And I separate that functionality as different in class, from
the functionality (if you can call it that), that provides us with
experience/qualia and a sense of self.
I'm not sure if you think a "Turing Test for intelligence" is a different
test than the one Turing originally outlined in his seminal paper. But
in any case, I was referring to the regular old Turing Test, where the judges
can ask any question they want, and the goal is to distinguish humans from
computers.

(Actually, in the original paper, Turing had an indirect test, where first
men tried to pretend to be women, and then computers tried to pretend to be
women, and the question was whether computers could do as good a job as human
men in the task of pretending to be women. But most people leave out this
additional gender wrinkle when discussing "the Turing Test".)

In any case, I suppose the issue is whether you think a judge could compose
a set of questions that would reveal whether the candidate had qualia or a
sense of self. If you think a program could pass the TT without those things,
then presumably you think it's easy to fake having those properties, at least
through a (detailed, probing) conversation.

I, on the other hand, think that a judge could ask the right kinds of
questions, such that the answers would reveal a fake program that was only
pretending to have qualia and/or a sense of self, but didn't "really" have
it.
Post by Don Geddis
2. Working on "consciousness" is not really an independent research
direction for AI scientists. That the same approaches which enable
intelligent behavior in complex devices, also as a side-effect create
consciousness.
That could ("side-effect" connotes) mean you are an epiphenomenolist ?! Do
you believe that consciousness is causal (can affect other mind functions,
including hormonal milieus)?
Epiphenomenolism - [...] holds that cognitions exist, but hold no causal
role in behavior. They are essentially "trash" that the mind discards
whilst doing work.
No, that's not my belief.

When you consciously imagine what would happen if you knocked over the glass
of water on the table, and you picture a future world where your pants are
all wet, and you notice a "feeling" in that future world that you're unhappy,
and thus you decide not to knock over the glass, and finally you notice that
in fact your body does not then knock over the glass -- I think that
introspective account of cognition is fairly accurate. Your internal
subjective explanation of why your arm didn't in fact knock over the glass
actually mirrors the real causal chain of how your physical body is
controlled.

So in that sense, no, I don't believe in epiphenomenolism. Consciousness is
not acausal "trash".

What I think is that the computations that result in consciousness are a
subset of the computations that result in intelligent behavior. You can get
some kinds of intelligent-like behavior (say, playing chess) without
necessarily implementing the algorithms that result in consciousness. But by
the time you've gotten around to implementing all the necessary algorithms to
account for all intelligent behavior, you'll have necessarily implemented the
consciousness ones as well.

(Like: self-models, introspective mental sensors, imagination, short- and
long-term memory, etc.)

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Puritanism: The haunting fear that someone, somewhere, may be happy.
-- H. L. Mencken
Alpha
2007-03-09 16:01:37 UTC
Permalink
Post by Don Geddis
Post by Don Geddis
1. An entity without a mind/consciousness would be unable to pass the
Turing Test; and
I think a lot of Intelligence functionality is needed to pass a TT for
intelligence. And I separate that functionality as different in class, from
the functionality (if you can call it that), that provides us with
experience/qualia and a sense of self.
I'm not sure if you think a "Turing Test for intelligence" is a different
test than the one Turing originally outlined in his seminal paper. But
in any case, I was referring to the regular old Turing Test, where the judges
can ask any question they want, and the goal is to distinguish humans from
computers.
(Actually, in the original paper, Turing had an indirect test, where first
men tried to pretend to be women, and then computers tried to pretend to be
women, and the question was whether computers could do as good a job as human
men in the task of pretending to be women. But most people leave out this
additional gender wrinkle when discussing "the Turing Test".)
In any case, I suppose the issue is whether you think a judge could compose
a set of questions that would reveal whether the candidate had qualia or a
sense of self. If you think a program could pass the TT without those things,
then presumably you think it's easy to fake having those properties, at least
through a (detailed, probing) conversation.
I, on the other hand, think that a judge could ask the right kinds of
questions, such that the answers would reveal a fake program that was only
pretending to have qualia and/or a sense of self, but didn't "really" have
it.
Post by Don Geddis
2. Working on "consciousness" is not really an independent research
direction for AI scientists. That the same approaches which enable
intelligent behavior in complex devices, also as a side-effect create
consciousness.
That could mean ("side-effect" connotes it) that you are an epiphenomenalist?!
Do you believe that consciousness is causal (can affect other mind functions,
including hormonal milieus)?
Epiphenomenalism - [...] holds that cognitions exist, but hold no causal
role in behavior. They are essentially "trash" that the mind discards
whilst doing work.
No, that's not my belief.
When you consciously imagine what would happen if you knocked over the glass
of water on the table, and you picture a future world where your pants are
all wet, and you notice a "feeling" in that future world that you're unhappy,
and thus you decide not to knock over the glass, and finally you notice that
in fact your body does not then knock over the glass -- I think that
introspective account of cognition is fairly accurate. Your internal
subjective explanation of why your arm didn't in fact knock over the glass
actually mirrors the real causal chain of how your physical body is
controlled.
So in that sense, no, I don't believe in epiphenomenalism. Consciousness is
not acausal "trash".
What I think is that the computations that result in consciousness are a
subset of the computations that result in intelligent behavior. You can get
some kinds of intelligent-like behavior (say, playing chess) without
necessarily implementing the algorithms that result in consciousness. But by
the time you've gotten around to implementing all the necessary algorithms to
account for all intelligent behavior, you'll have necessarily implemented the
consciousness ones as well.
(Like: self-models, introspective mental sensors, imagination, short- and
long-term memory, etc.)
I noticed you left out the most salient aspects of C in that list. Let me
explain...

So you are essentially saying (and you have implied or explicitly said this
before), that it is the intelligence functions that somehow give rise to the
salient aspects of consciousness; namely qualia/experience?

Hmmm; possible, but I think that is unlikely, considering that experience can
be had without intelligence: e.g., a severely autistic or impaired person who
has all the phenomenal aspects of consciousness (qualia/experience), but is
very very low WRT cognitive (AKA intelligence) functionality.
One can also observe the paramecium, for example, which (according to its
behavior) has consciousness (experience of pain etc.), but has no
intelligence (and I do not count instinctive response to stimuli as an
I_function - but perhaps I am wrong about that.)

In fact, though, there are countless accounts of people with deficits in each
of the areas you posit as C-like_functions who are nevertheless fully conscious
(experiencing qualia). So I think most of what you consider C-like is
really I-like functionality. I look upon consciousness as the "bucket" into
which some brain-generated content is placed - metaphorically. I say "some",
because there are obvious cases of unconscious brain-generated content in the
brain. Some of the contents of the bucket can actually look back upon
itself (be aware of itself as a bucket), and thence is born
self-consciousness.

I note you include imagination as an intelligence "algorithm"; I can also
see it as a consciousness-driven aspect of mind (given my def. of mind =
consciousness + intelligence), especially when the imagination involves
experiencing, in one's "mind's eye", that which one imagines (like a FSM
perhaps, or imagining myself on the ski slopes, etc.). So in the case of
visual imagination scenarios, the mind's-eye experience does not seem
to be an intelligence-related aspect at all. Now, the content of some other
imaginative scenario may involve some cognitive functionality (like imagining
myself independently solving Fermat's Last Theorem), but the essence of
visual imagination seems to involve only the experiencing of made_up or
remembered visual qualia.

And as such, you probably already know that I think consciousness gives rise
to a self/self_consciousness (a "subject") that can have no objects before it.
Meditators experience this consciousness without an object (see Franklin
Merrell-Wolff and thousands of like experiencers across cultures and the
millennia), and their experiences of consciousness must be explained by any
complete/coherent theory of C. Explaining it away by saying those
states of C are "crazy", or are like NDEs fooling their experiencers, is not
giving credit to the actuality of the experience as interpreted by trained
meditators (as opposed to NDEs, which are probably the result of bizarre
chemical imbalances/lack_of_oxygen/etc.). So the notion of pure
consciousness (without any other cognitive functions) is a possibility (in
principle and apparently, in actuality, as reported by experiencers of such).
In fact, the next level of "ineffable experience" reported is that of
"no-self"; so self-consciousness goes away and we are left with pure
consciousness without an object *or subject*. So no contents *in*
consciousness seems to be a repeatable phenomenal aspect of C (with no other
cognitive functions apparent).
--
Posted via a free Usenet account from http://www.teranews.com
z***@netscape.net
2007-03-03 13:39:13 UTC
Permalink
Post by Stephen Harris
As some of us know, there is one mathematical formulation for
Quantum theory, but at least 8 interpretations, what the
extremely accurate quantum mathematical prediction says about
the nature of reality.
Likewise one can compare Computationalism (Comp) and Connectionism
as both being Turing computable. But that doesn't mean that they
both enjoy the same flexibility as tools for the pursuit of AI.
The universe is made of atoms, which is the parent category.
But that's where the computational cranks
stumble and fall repeatedly,
since their thumbwheel approach
doesn't take into account that technology
is made of radiation-hardened satellites, robots,
and lasers.
Post by Stephen Harris
There are sub-categories, such as humans and rocks and dogs and cats.
Because they are all made of atoms does not mean that the sub-cats
all share the same properties when viewed from a more finely grained
perspective. Humans are assumed to have consciousness and rocks not,
even though they both consist of atoms. Likewise humans are assumed
to possess consciousness but this doesn't mean that property is
conferred to the more general category of the universe containing
humans. So when things are compared at the same level of abstraction
for similarity or difference, it doesn't necessarily work to answer
by skipping to another more general level and impute properties from
that level to the specific or assert properties found in the specific
to the more general category.
For example, the law of causality is considered to be universal.
One could kick a cat, dog, or stone and it would go flying, the
law of cause and effect in action. But that wouldn't mean that
you could lump cats, dogs or stones into a more specialized
abstract level of comparison because they were contained by a
broad, sweeping level or degree of comparison like cause and effect.
I mention causality, because that is mentioned as an argument for
the "implementational" connectionists (Fodor) to unify their point
of view regarding the sameness of Comp and Connectionism. This is
opposed to the "radical" Connectionists who maintain there is a
difference between Comp and Connectionism. So I quoted from that
paper by Gualtiero Piccinini material that seems to me to support the
pov that causality doesn't measure up to the implementational
connectionist claims. Comp and Connectionism seem like different
tools to me when compared at the meaningful level of abstraction.
I decided to include some info on the dynamic-system approach.
Usually people don't write a paper to defend their pet philosophy
unless the rumor of its demise is fairly widespread. I
added a touch of Behaviorism so everybody can correct this post.
--------------------------------------------
"The Rumors of its [Computationalism] Demise have been Greatly
Exaggerated" David Israel, Stanford University, SRI International, USA
"There has been much talk about the computationalism being dead. But as
Mark Twain said of rumors of his own death: these rumors are highly
exaggerated. Unlike Twain's case, of course, there is room for a good
deal of doubt and uncertainty as to what it is exactly that is being
claimed to have died. Whose old conception are we talking about?
Turing's? Fodor's?
I will leave the issues of the computational model of mind to the
philosophers and cognitive scientists. I will address rather some -- or
at any rate, one -- of the real shifts of focus in theoretical computer
science: away from single-processor models of computation and toward
accounts of interaction among computational processes. I will even
address the question as to whether this is a shift in paradigms or
simply (?) a quite normal evolution of interests within a paradigm.
Maybe a few philosophical morals will be drawn."
-------------------------------------------------------------------
www.syros.aegean.gr/users/tsp/conf_pub/C34/C34.doc
"In opposition to behaviorism, Cognitive Science opened the 'black box'
while retaining behavior as the object of its investigation. It offers a
theory of what goes on inside an organism with cognitive capacities when
it engages in cognitive behavior. The dominant element of this process
is of an informational nature, but the respective activity is not
uniquely defined. The various ways this information processing activity
can be defined are tantamount to different overall approaches to
cognition (Petitot et al., 1999). For the purposes of this paper it is
The Cognitivist-Computationalist/Symbolic Approach
Computationalism is based on the hypothesis that the mind is supposed to
process symbols that are related together to form representations of the
environment. These representations are abstract, and their manipulations
are so deterministic that they can be implemented by a machine.
Computationalism is the metaphor of the sequential,
externally-programmed information processing machine based on the
theories of Turing (Turing, 1950) and von Neumann (von Neumann, 1958).
Therefore it implies that the key to building an adaptive system is to
produce a system that manipulates symbols correctly, as enshrined in the
Physical Symbol System Hypothesis (Newell, 1980). Computationalism has
two requirements: forms of representation and methods of search. Thus,
first one should find a way to formally represent the domain of interest
(whether it will be vision, chess, problem-solving) and then to find
some method of sequentially searching the resulting state space
(Mingers, 1995).
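Those two requirements -- a formal representation of the domain and a method of sequentially searching the resulting state space -- can be seen in miniature in a classic toy problem. The choice of puzzle (the two-jug measuring problem) and the code are my own illustration, not from the paper:

```python
from collections import deque

# Symbolic representation: a state is a tuple (a, b) of water in two jugs
# with capacities 4 and 3; the goal is to measure exactly 2 units.
CAP = (4, 3)
GOAL = 2

def successors(state):
    """Enumerate states reachable by the legal symbolic operations."""
    a, b = state
    moves = {(CAP[0], b), (a, CAP[1]),        # fill either jug
             (0, b), (a, 0)}                  # empty either jug
    pour = min(a, CAP[1] - b)                 # pour jug a into jug b
    moves.add((a - pour, b + pour))
    pour = min(b, CAP[0] - a)                 # pour jug b into jug a
    moves.add((a + pour, b - pour))
    return moves - {state}

def search(start=(0, 0)):
    """Sequential (breadth-first) search of the resulting state space."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if GOAL in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

print(search())  # shortest sequence of symbolic states reaching 2 units
```

Note how well this fits the recipe quoted above: the meaning of a tuple like (4, 2) is imposed entirely from outside, and the "intelligence" is nothing but correct manipulation of those symbols plus exhaustive sequential search.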
Consequently, these are purely formal systems and their symbols are
related by an a priori correspondence to externally imposed meaning.
They are processing information based on a static meaning structure,
which cannot be internally changed in order to adapt to the
ever-changing demands of a dynamic environment.
The Connectionist-Dynamic Approach
Connectionism argues that the mind is a system of network that gives
rise to a dynamic behavior that can be interpreted as rules at a higher
level of description. Here, the dominant view is that mental elements
are a vector distribution of properties in dynamic networks of neurons
and the proposed solution for a proper modeling of the phenomenon
(thinking process) is the set-up of parallel distributed architectures.
Connectionism overcomes the problems imposed by the linear and
sequential processing of classical computationalism and finds
application in areas like perception or learning, where the latter is,
due to its nature, too slow to deal with the rapidity of environmental
input.
Connectionism has also borrowed the idea of emergence, from the theories
of self-organization, which has as a central point the system's
nonlinear dynamical processing. In this context the brain is seen as a
dynamical system whose behavior is determined by its attractor
landscape. The dynamics of the cognitive substrate (matter) are taken to
be the only thing responsible for its self-organization, and
consequently for the system's behavior (vanGelder and Port, 1995). It
should be stressed that there is an on-going debate between dynamic
systems theory and connectionist networks. The latter exhibit many of
the properties of self-organizing dynamical systems, while not
discarding the notions of computation and representation. Instead, they
find it necessary in order for the system to exhibit high-level
intelligence (Eliasmith, 1998), (Clark and Eliasmith, 2002), or even any
kind of intentional behavior (Bickhard, 1998), (Clark and Wheeler,
1998), as long as representations emerge from the interaction in a
specific context of activity.
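The "attractor landscape" picture can be made concrete with a minimal Hopfield network, the textbook case of a connectionist system whose stored patterns are literally point attractors of its dynamics. This example is mine, not from the paper:

```python
import numpy as np

# A tiny Hopfield network: stored patterns become point attractors of the
# dynamics, so "recall" is just settling into the attractor landscape.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])

# Hebbian weights; zero diagonal (no self-connections).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    """Asynchronous updates until the state stops changing (a fixed point)."""
    s = state.copy()
    for _ in range(steps):
        prev = s.copy()
        for i in range(len(s)):                # update one unit at a time
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):            # reached an attractor
            break
    return s

# A corrupted version of the first pattern (two flipped bits) falls back
# into that pattern's basin of attraction.
noisy = np.array([1, 1, -1, 1, -1, -1, 1, -1])
print(settle(noisy))
```

This also illustrates the on-going debate mentioned above: the network is a self-organizing dynamical system (behavior fixed by its attractor landscape), yet the attractors themselves still function as representations of the stored patterns.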
On the other hand, Fodor (Fodor and Pylyshyn, 1988) among others, insists
that the form of the computation, whether logico-syntactic or
connectionist, is merely a matter of implementation, and in addition,
the implementation of computation, whether classical or connectionist,
lies in causal processes. The only real difference between this form of
connectionism and computationalism is that the former uses a vector
algebra, rather than scalar, to manipulate its symbols (representations)
(Smolensky, 1988). In this perspective and in relation to intrinsic
creation of meaning, connectionist architectures cannot evolve and be
adaptive. [SH: Seems like a fairly major difference to me.]
The Emergent-Enactive Approach
Advocates of the pure dynamic approach (Varela et al., 1991), argue that
connectionism remains basically representational, as it still assumes a
pre-given independent world of objective and well-defined problems.
These problems seek the proper set of representations together with an
efficient mapping of one set of representations onto another.
On the contrary, the emergent-enactive view, although it shares with
connectionism a belief in the importance of dynamical mechanisms and
emergence, disputes the relevance of representations as the instrument
of cognition (Mingers, 1995). Instead, in the enactive framework,
cognitive processes are seen as emergent or enacted by situated agents,
which drive the establishment of meaningful couplings with their
surroundings. Emergent cognitive systems are self-organized by a global
co-operation of their elements, reaching an attractor state which can be
used as a classifier for their environment. In that case, the
distinctions thus produced are not purely symbolic, therefore meaning is
not a function of any particular symbols, nor can it be localized in
particular parts of the network. Indeed, symbolic representation
disappears completely - the productive power is embodied within the
network structure, as a result of its particular history (Beer, 2000).
The diversity of their ability for classification is dependent on the
richness of their attractors, which are used to represent events in
their environments. Therefore, their meaning evolving threshold cannot
transcend their attractor's landscape complexity, hence, it cannot
provide us with a framework for meaning-based evolution.
It is almost globally accepted that purely symbolic approaches cannot
give ...