2007-03-02 08:57:59 UTC
As some of us know, there is one mathematical formulation of
quantum theory, but at least eight interpretations of what its
extremely accurate predictions say about the nature of reality.
Likewise, one can compare Computationalism (Comp) and Connectionism
and note that both are Turing computable. But that doesn't mean they
both enjoy the same flexibility as tools for the pursuit of AI.
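To make that concrete, here is a toy sketch of my own (not from either quoted paper): the same Boolean function, XOR, computed a "symbolic" way (an explicit rule table) and a "connectionist" way (a tiny hand-weighted threshold network). Both are Turing computable and give identical input/output behavior, yet the mechanisms are entirely different tools.

```python
def xor_symbolic(a: int, b: int) -> int:
    """Explicit rule lookup: classic symbol manipulation."""
    rules = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return rules[(a, b)]


def step(x: float) -> int:
    """Threshold activation of a single unit."""
    return 1 if x > 0 else 0


def xor_connectionist(a: int, b: int) -> int:
    """Two-layer threshold network with fixed, hand-chosen weights."""
    h1 = step(a + b - 0.5)      # OR-like hidden unit
    h2 = step(a + b - 1.5)      # AND-like hidden unit
    return step(h1 - h2 - 0.5)  # fires for OR-but-not-AND, i.e. XOR


# Extensionally identical, mechanistically unlike:
for a in (0, 1):
    for b in (0, 1):
        assert xor_symbolic(a, b) == xor_connectionist(a, b)
```

Sameness at the input/output level of abstraction, in other words, says nothing about sameness one level down.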
The universe is made of atoms, which is the parent category. There
are sub-categories, such as humans, rocks, dogs, and cats. The fact
that they are all made of atoms does not mean that the sub-categories
all share the same properties when viewed from a more finely grained
perspective. Humans are assumed to have consciousness and rocks are
not, even though both consist of atoms. Likewise, humans are assumed
to possess consciousness, but this doesn't mean that property is
conferred on the more general category, the universe that contains
humans. So when things are compared at the same level of abstraction
for similarity or difference, it doesn't necessarily work to answer
by skipping to a more general level and imputing properties from
that level to the specific case, or by asserting properties found in
the specific case of the more general category.
For example, the law of causality is considered universal.
One could kick a cat, dog, or stone and it would go flying: the
law of cause and effect in action. But that wouldn't mean you
could lump cats, dogs, and stones together at a more specialized
level of comparison just because they are all contained by a
broad, sweeping level of comparison like cause and effect.
I mention causality because it is the argument used by the
"implementational" connectionists (Fodor) to unify their point
of view regarding the sameness of Comp and Connectionism, as
opposed to the "radical" connectionists, who maintain there is a
difference between the two. So I quoted from the paper by
Gualtiero Piccinini a passage that seems to me to support the
view that causality doesn't measure up to the implementational
connectionist claims. Comp and Connectionism look like different
tools to me when compared at the meaningful level of abstraction.
I decided to include some info on the dynamical systems approach.
Usually people don't write a paper defending their pet philosophy
unless the rumor of its demise is fairly widespread. I added a
touch of Behaviorism so everybody can correct this post.
"The Rumors of its [Computationalism] Demise have been Greatly
Exaggerated" David Israel, Stanford University, SRI International, USA
“There has been much talk about computationalism being dead. But as
Mark Twain said of rumors of his own death: these rumors are highly
exaggerated. Unlike Twain's case, of course, there is room for a good
deal of doubt and uncertainty as to what it is exactly that is being
claimed to have died. Whose old conception are we talking about?
I will leave the issues of the computational model of mind to the
philosophers and cognitive scientists. I will address rather some -- or
at any rate, one -- of the real shifts of focus in theoretical computer
science: away from single-processor models of computation and toward
accounts of interaction among computational processes. I will even
address the question as to whether this is a shift in paradigms or
simply (?) a quite normal evolution of interests within a paradigm.
Maybe a few philosophical morals will be drawn."
“In opposition to behaviorism, Cognitive Science opened the ‘black box’
while retaining behavior as the object of its investigation. It offers a
theory of what goes on inside an organism with cognitive capacities when
it engages in cognitive behavior. The dominant element of this process
is of an informational nature, but the respective activity is not
uniquely defined. The various ways this information processing activity
can be defined are tantamount to different overall approaches to
cognition (Petitot et al., 1999). For the purposes of this paper it is
useful to distinguish three major approaches:
The Cognitivist-Computationalist/Symbolic Approach
Computationalism is based on the hypothesis that the mind is supposed to
process symbols that are related together to form representations of the
environment. These representations are abstract, and their manipulations
are so deterministic that they can be implemented by a machine.
Computationalism is the metaphor of the sequential,
externally-programmed information processing machine based on the
theories of Turing (Turing, 1950) and von Neumann (von Neumann, 1958).
Therefore it implies that the key to building an adaptive system is to
produce a system that manipulates symbols correctly, as enshrined in the
Physical Symbol System Hypothesis (Newell, 1980). Computationalism has
two requirements: forms of representation and methods of search. Thus,
first one should find a way to formally represent the domain of interest
(whether it will be vision, chess, problem-solving) and then to find
some method of sequentially searching the resulting state space.
Consequently, these are purely formal systems, and their symbols are
related via an a priori correspondence to externally imposed meaning.
They are processing information based on a static meaning structure,
which cannot be internally changed in order to adapt to the
ever-changing demands of a dynamic environment.
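The two requirements named above, a formal representation plus sequential search of its state space, can be shown in miniature. This is my own illustration, not from the quoted paper; the domain (the classic two-jug puzzle: capacities 3 and 5, goal: measure exactly 4) is just a stand-in for "vision, chess, problem-solving."

```python
from collections import deque

CAP = (3, 5)  # jug capacities: the formal representation of the domain


def successors(state):
    """All states reachable by one fill, empty, or pour move."""
    a, b = state
    moves = [(CAP[0], b), (a, CAP[1]),   # fill either jug
             (0, b), (a, 0)]             # empty either jug
    pour_ab = min(a, CAP[1] - b)         # pour jug a into jug b
    pour_ba = min(b, CAP[0] - a)         # pour jug b into jug a
    moves += [(a - pour_ab, b + pour_ab), (a + pour_ba, b - pour_ba)]
    return moves


def bfs(start=(0, 0), goal_amount=4):
    """Sequential (breadth-first) search of the state space."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_amount in state:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None


path = bfs()  # a shortest sequence of states ending with 4 in one jug
```

Notice how the criticism in the paragraph above applies: the representation (jugs, capacities, legal moves) is fixed a priori by the programmer and cannot be revised from inside the search.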
The Connectionist-Dynamic Approach
Connectionism argues that the mind is a system of networks that gives
rise to dynamic behavior which can be interpreted as rules at a higher
level of description. Here, the dominant view is that mental elements
are a vector distribution of properties in dynamic networks of neurons
and the proposed solution for a proper modeling of the phenomenon
(thinking process) is the set-up of parallel distributed architectures.
Connectionism overcomes the problems imposed by the linear and
sequential processing of classical computationalism and finds
application in areas like perception or learning, where the latter is,
due to its nature, too slow to deal with the rapidity of environmental
change.
Connectionism has also borrowed the idea of emergence from the theories
of self-organization, which has as a central point the system’s
nonlinear dynamical processing. In this context the brain is seen as a
dynamical system whose behavior is determined by its attractor
landscape. The dynamics of the cognitive substrate (matter) are taken to
be the only thing responsible for its self-organization, and
consequently for the system’s behavior (van Gelder and Port, 1995). It
should be stressed that there is an on-going debate between dynamic
systems theory and connectionist networks. The latter exhibit many of
the properties of self-organizing dynamical systems, while not
discarding the notions of computation and representation; instead, they
find these necessary for the system to exhibit high-level
intelligence (Eliasmith, 1998), (Clark and Eliasmith, 2002), or even any
kind of intentional behavior (Bickhard, 1998), (Clark and Wheeler,
1998), as long as representations emerge from the interaction in a
specific context of activity.
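The "attractor landscape" picture above can be made tangible with a toy of my own construction (assuming standard Hopfield-network conventions, which are not spelled out in the quoted paper): a small recurrent network whose dynamics pull a corrupted input back to a stored pattern, so the attractor itself acts as the classifier.

```python
# One stored +/-1 pattern; Hebbian weights with no self-connections.
pattern = [1, -1, 1, -1, 1, -1]
N = len(pattern)
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(N)]
     for i in range(N)]


def settle(state, steps=10):
    """Synchronous threshold updates until the state stops changing."""
    state = list(state)
    for _ in range(steps):
        nxt = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
               for i in range(N)]
        if nxt == state:      # a fixed point: we have hit an attractor
            break
        state = nxt
    return state


noisy = list(pattern)
noisy[0] = -noisy[0]          # corrupt one unit
recovered = settle(noisy)     # dynamics restore the stored pattern
```

Classification here is not done by a symbol anywhere in the system; it is the basin of attraction doing the work, which is the sense in which meaning "cannot be localized in particular parts of the network."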
On the other hand, Fodor (Fodor and Pylyshyn, 1988), among others, insists
that the form of the computation, whether logico-syntactic or
connectionist, is merely a matter of implementation, and in addition,
the implementation of computation, whether classical or connectionist,
lies in causal processes. The only real difference between this form of
connectionism and computationalism is that the former uses a vector
algebra, rather than scalar, to manipulate its symbols (representations)
(Smolensky, 1988). In this perspective and in relation to intrinsic
creation of meaning, connectionist architectures cannot evolve and be
adaptive. [SH: Seems like a fairly major difference to me.]
The Emergent-Enactive Approach
Advocates of the pure dynamic approach (Varela et al., 1991), argue that
connectionism remains basically representational, as it still assumes a
pre-given independent world of objective and well-defined problems.
These problems seek the proper set of representations together with an
efficient mapping of one set of representations onto another.
On the contrary, the emergent-enactive view, although it shares with
connectionism a belief in the importance of dynamical mechanisms and
emergence, disputes the relevance of representations as the instrument
of cognition (Mingers, 1995). Instead, in the enactive framework,
cognitive processes are seen as emergent or enacted by situated agents,
which drive the establishment of meaningful couplings with their
surroundings. Emergent cognitive systems are self-organized by a global
co-operation of their elements, reaching an attractor state which can be
used as a classifier for their environment. In that case, the
distinctions thus produced are not purely symbolic, therefore meaning is
not a function of any particular symbols, nor can it be localized in
particular parts of the network. Indeed, symbolic representation
disappears completely – the productive power is embodied within the
network structure, as a result of its particular history (Beer, 2000).
The diversity of their ability for classification is dependent on the
richness of their attractors, which are used to represent events in
their environments. Therefore, their meaning evolving threshold cannot
transcend their attractor’s landscape complexity, hence, it cannot
provide us with a framework for meaning-based evolution.
It is almost globally accepted that purely symbolic approaches cannot
give answer to issues related with the emergence of new meaning
structures and levels of organization, which justifies the existence and
the role of anticipation in adaptive systems (Collier, 1999). On the
other hand, although the emergent dynamical mechanisms have more
potential for self-organization, there are also some issues in human
cognition, (such as high-level learning, long-term memory, the stability
of old pattern of neuronal activity in the face of new ones, etc.) that
cannot be satisfactorily explained within these frameworks (Cariani,
2001). Moreover, the functionality of an adaptive system must be
examined in a framework which will justify all or most of its phenomenal
aspects, as these emerge from its striving for adaptable interaction
with its environment."
Computation without Representation by Gualtiero Piccinini
“The only alternative to the semantic view that is clearly stated in the
philosophical literature is that computational states are individuated
by their causal properties (Chalmers 1996, Copeland 1996, Scheutz 1999).
But causal individuation, without constraints on which causal powers
are relevant and which irrelevant to computation, is too weak. It does
not support a robust notion of computational explanation—the kind of
explanation that is needed to explain the capacities of computers,
brains, and other putative computing mechanisms in terms of the
computations they perform.
Supporters of the causal individuation of computational states readily
admit that under their view, every state is a computational state and
every causal process is a computation. But this is tantamount to
collapsing the notion of computation into the notion of causal process.”