The neurons of the cortex are organized in clusters, each containing 50 to 10,000 neurons. The
neurons of each cluster are connected primarily to other neurons in the same cluster. Edelman
(1988) has proposed that it makes sense to think of connections between clusters, not just
individual neurons, as being reinforced or inhibited; and he has backed this up with a detailed
mathematical model of neural behavior. Following this line of thought, it is the nature of the
interaction between neural clusters which interests us here. We shall show that, according to a
simple model inspired by Edelman’s ideas, the interaction of neural clusters gives rise to a simple
form of structural analogy.
Edelman’s theory of "Neural Darwinism" divides the evolution of the brain into two phases.
The first, which takes place during fetal development, is the phase of cluster formation. And the
second, occurring throughout the remainder of life, is the phase of repermutation: certain
arrangements of clusters are selected from the set of possible arrangements. This is not an
outlandish hypothesis; Changeux (1985), among others, has made a similar suggestion. Edelman,
however, has formulated the theory as a sequence of specific biochemical hypotheses, each of
which is supported by experimental results.
Roughly speaking, a set of clusters which is habitually activated in a certain order is called a
map. Mental process, according to Edelman’s theory, consists of the selection of maps, and the
actual mapping of input — each map receiving input from sensory sources and/or other maps,
and mapping its output to motor control centers or other maps. Mathematically speaking, a map
is not necessarily a function, since on different occasions it may conceivably give different
outputs for the same input. A map is, rather, a dynamical system in which the output y_t at time t and the internal state S_t at time t are determined by y_t = f(x_{t-1}, S_{t-1}) and S_t = g(x_{t-1}, S_{t-1}), where x_t is the input at time t. And it is a dynamical system with a special structure: each map may be approximately represented as a serial/parallel composition of simple maps, each composed of a single cluster. One key axiom of Edelman’s theory is the existence of numerous “degenerate”
clusters, clusters which are different in internal structure but, in many situations, act the same
(i.e. often produce the same output given the same input). This implies that each map is in fact a
serial/parallel combination of simple component maps which are drawn from a fairly small set,
or at least extremely similar to elements of a fairly small set.
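To make the composition idea concrete, here is a minimal sketch, not Edelman’s model: a “map” is treated as a discrete-time dynamical system whose output and state at time t depend on the input and state at time t-1, built by serially composing component maps that stand in for single clusters drawn from a small, partly “degenerate” repertoire. All function forms and constants are assumptions made for illustration.

```python
# A minimal sketch (illustration only): a "map" as a discrete-time dynamical
# system built from component maps standing in for single clusters.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ClusterMap:
    f: Callable[[float, float], float]   # y_t = f(x_{t-1}, S_{t-1})
    g: Callable[[float, float], float]   # S_t = g(x_{t-1}, S_{t-1})
    state: float = 0.0

    def step(self, x_prev: float) -> float:
        y = self.f(x_prev, self.state)
        self.state = self.g(x_prev, self.state)
        return y

# Two "degenerate" components: different internal form, often the same
# output for the same input.
repertoire: List[ClusterMap] = [
    ClusterMap(f=lambda x, s: max(0.0, x + s), g=lambda x, s: 0.5 * s + x),
    ClusterMap(f=lambda x, s: max(0.0, 0.9 * x + 1.1 * s), g=lambda x, s: 0.5 * s + x),
]

def serial(components: List[ClusterMap], x: float) -> float:
    """Serial composition: each component's output feeds the next."""
    for m in components:
        x = m.step(x)
    return x

print(serial(repertoire, 1.0))
```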
There is much more to Neural Darwinism than what I have outlined here — for instance, I have
not even mentioned Edelman’s intriguing hypotheses as to the role of adhesion molecules in the
development of mentality. But there are nevertheless a number of mysteries at the core of the
theory. Most importantly, it is not known exactly how the behavior of a cluster varies with its
strengths of interconnection and its internal state.
Despite this limitation, however, Neural Darwinism is in my opinion the only existing
effective theory of low-to-intermediate-level brain structure and function. Philosophically, it
concords well with our theory of mind: it conceives the brain as a network of "maps", or
"functions". In previous chapters we spoke of a network of "programs," but the terms "program"
and "map" are for all practical purposes interchangeable. The only difference is that a program is
discrete whereas the brain is considered "continuous," but since any continuous system may be
approximated arbitrarily closely by some discrete system, this is inessential.
So, consider a set of n processors P_i, each one possessing an internal state S_{i,t} at each time t. Assume, as above, that the output y_{i,t} at time t and the internal state S_{i,t} at time t are determined by y_{i,t} = f_i(x_{i,t-1}, S_{i,t-1}) and S_{i,t} = g_i(x_{i,t-1}, S_{i,t-1}), where x_{i,t} is the input at time t. Define the “design” of
each processor as the set of all its possible states. Assume the processors are connected in such a
way that the input of each processor is composed of subsets of the output of some set of k
processors (where k is small compared to n, say O(log n)). This is a basic “network of
processors".
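The following is a minimal sketch of such a network of processors, under assumed parameters (n = 64, a tanh-style update for each f_i, and a single shared g); each processor reads the previous outputs of roughly k = O(log n) others.

```python
# A minimal sketch, with assumed parameters, of the "network of processors":
# n processors, each reading the outputs of roughly k = O(log n) others.
import math, random

random.seed(0)
n = 64
k = max(1, int(math.log2(n)))          # small fan-in compared to n

# sources[i] lists the k processors whose outputs form P_i's input
sources = [random.sample([j for j in range(n) if j != i], k) for i in range(n)]

def step(y_prev, S_prev):
    """One synchronous update: y_{i,t} and S_{i,t} from time-(t-1) quantities."""
    y, S = [0.0] * n, [0.0] * n
    for i in range(n):
        x = sum(y_prev[j] for j in sources[i])       # x_{i,t-1}
        y[i] = math.tanh(x + S_prev[i])              # f_i (illustrative form)
        S[i] = 0.5 * S_prev[i] + 0.5 * x             # g, the same for all i
    return y, S

y, S = [random.random() for _ in range(n)], [0.0] * n
for _ in range(10):
    y, S = step(y, S)
```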
Note that we have defined f and g to vary with i. Strictly speaking, this means that the different processors are different dynamical systems. In this case, what it really means, intuitively, is that f_i and g_i may vary with the pattern of flow of the dynamical system. In fact, from here on we shall assume g_i = g for all i; we shall not consider the possibility of varying the g_i. However, we shall be concerned with minor variations in the f_i; in particular, with “strengthening” and “weakening” the connections between one processor and another. For this purpose, we may as well assume that the space of possible outputs y_{i,t} is R^n, or a discrete approximation of some subset thereof. Let s_{i,j} denote the scalar strength of connection from P_i to P_j. For each P_i, let f_{i,j,t} denote the portion of the graph of f_i which is connected to P_j at time t. Assume that if j and l are unequal, then f_{i,j,r} and f_{i,l,t} are disjoint for any t and r; and that f_{i,j,r} = s_{i,j} f_{i,j,t} for all t and r. According to all this, then, the only possible variations of f_i over time are variations of strength of connectivity, or “conductance”.
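A short sketch of this restriction, refining the previous one: the only part of f_i allowed to change over time is the conductance s_{i,j}, which scales the portion of f_i tied to P_j, i.e. P_j’s contribution to P_i’s input. The particular function form is an assumption made for illustration.

```python
# A minimal sketch: "strengthening" or "weakening" a connection means changing
# the conductance s_{i,j} alone; the body of f_i is never rewritten.
import math

def make_f(source_ids):
    """Build f_i for a processor whose input comes from the given sources."""
    def f_i(y_prev, S_i, s_i):
        # s_i[j] is the conductance s_{i,j}; everything else is fixed for all time
        x = sum(s_i[j] * y_prev[j] for j in source_ids)
        return math.tanh(x + S_i)
    return f_i

f_1 = make_f([0, 2])
y_prev = [1.0, 0.0, -0.5]
print(f_1(y_prev, 0.0, {0: 0.2, 2: 1.5}))   # weaken P_0's line, strengthen P_2's
```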
As observed above, our model of mind is expressed as a network of processors, and Edelman’s
theory expresses brain function as a result of the dynamics of a network of neuronal clusters,
which are specialized processors. In the context of neuronal clusters, the "design" as defined
above is naturally associated with the graph of interconnection of the neurons in the cluster; a
particular state is then an assignation of charge levels to the neurons in the cluster, and a
specification of the levels of various chemicals, most importantly those concerned with the
modification of synaptic strength. According to Edelman’s theory, the designs of the million or
so processors in the brain’s network are highly repetitive; they do not vary much from a much
smaller set of fundamental designs. And it is clear that all the functions f_i regulating connection
between neuronal clusters are essentially the same, except for variations in conductance.
THE NOISY HEBB RULE
D.O. Hebb, in his classic Organization of Behavior (1949), sought to explain mental process in
terms of one very simple neural rule: when a synaptic connection between two neurons is used,
its "conductance" is temporarily increased. Thus a connection which has proved somehow useful
in the past will be reinforced. This provides an elegant answer to the question: where does the
learning take place? There is now physiological evidence that Hebbian learning does indeed
occur. Two types of changes in synaptic strength have been observed (Bliss, 1979): "post-tetanic
potentiation", which lasts for at most a few minutes, and "enhancement", which can last for hours
or days.
Hebb proceeded from this simple rule to the crucial concepts of the cell-assembly and the
phase-sequence:
Any frequently repeated, particular stimulation will lead to the slow development of a "cell-
assembly", a diffuse structure comprising cells in the cortex and diencephalon… capable of
acting briefly as a closed system, delivering facilitation to other such systems and usually having
a specific motor facilitation. A series of such events constitutes a "phase sequence" — the thought
process. Each assembly action may be aroused by a preceding assembly, by a sensory event, or —
normally — by both.
This theory has been criticized on the physiological level; but this is really irrelevant. As Hebb
himself said, "it is… on a class of theory that I recommend you to put your money, rather than
any specific formulation that now exists" (1963, p.16). The more serious criticism is that Hebb’s
ideas do not really explain much about the most interesting aspects of mental process. Simple
stimulus-response learning is a long way from analogy, associative memory, deduction, and the
other aspects of thought which Hebb hypothesizes to be special types of "phase sequences".
Edelman’s ideas mirror Hebb’s on the level of neural clusters rather than neurons. In the
notation given above, Edelman has proposed that if the connection from P_1 to P_2 is used often over a certain interval of time, then its “conductance” s_{1,2} is temporarily increased.
Physiologically, this is a direct consequence of Hebb’s neuron-level principle; it is simply more
specific. It provides a basis for the formation of maps: sets of clusters through which information
very often flows according to a certain set of paths. Without this Hebbian assumption, lasting
maps would occur only by chance; with it, their emergence from the chaos of neural flow is
virtually guaranteed. At bottom, what the assumption amounts to is a neural version of the
principle of induction. It says: if a pathway has been useful in the past, we shall assume it will be
useful in the future, and hence make it more effective.
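As a concrete illustration (the constants are assumed, not drawn from Edelman), the cluster-level Hebbian rule can be sketched as a single conductance that is nudged upward whenever the connection is used and otherwise relaxes back toward a baseline, so that the increase is temporary.

```python
# A minimal sketch, with assumed constants, of the cluster-level Hebbian rule
# for one conductance s_{1,2}.
BASELINE, GAIN, DECAY = 1.0, 0.1, 0.95

def hebb_update(s, used):
    """One time step: reinforce if the connection carried activity, then relax."""
    if used:
        s += GAIN
    return BASELINE + DECAY * (s - BASELINE)   # decay back toward the baseline

s = BASELINE
for used in [True, True, True, False, False, False]:
    s = hebb_update(s, used)
    print(round(s, 3))
```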
Unfortunately, there is no reason to believe that the cluster-level interpretation of Hebb’s
theory is sufficient for the explanation of higher mental processes. By constructing a simulation
machine, Edelman has shown that a network of entities much like neural clusters, interacting
according to the Hebbian rule, can learn to perceive certain visual phenomena with reasonable
accuracy. But this work — like most perceptual biology — has not proceeded past the lower levels
of the perceptual hierarchy.
In order to make a bridge between these neural considerations and the theory of mind, I would
like to propose a substantially more general hypothesis: that if the connection between P_1 and P_3 is used often over a certain interval of time, and the network is structured so that P_2 can potentially output into P_3, then the conductance s_{2,3} is likely to be temporarily increased.
As we shall see, this "noisy Hebb rule" leads immediately to a simple explanation of the
emergence of analogy from a network of neural clusters. Although I know of no evidence either
supporting or falsifying the noisy Hebb rule, it is certainly not biologically unreasonable. One
way of fulfilling it would be a spatial imprecision in the execution of Hebbian neural induction.
That is, if the inductive increase in conductance following repeated use of a connection were by
some means spread liberally around the vicinity of the connection, this would account for the
rule to within a high degree of approximation. This is yet another case in which imprecision may
lead to positive results.
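Here is a minimal sketch of the noisy Hebb rule read as spatial imprecision: when the connection from P_1 to P_3 is reinforced, other processors that can output into P_3 receive a smaller, randomized share of the increase. The neighbourhood structure and constants are assumptions made for illustration.

```python
# A minimal sketch of the "noisy Hebb rule": the Hebbian increase spills over,
# imprecisely, onto neighbouring connections into the same target.
import random

GAIN, SPILL = 0.10, 0.03

def noisy_hebb(s, used_pair, feeds_into):
    """s: dict {(i, j): conductance}; used_pair: the connection just used."""
    i, j = used_pair
    s[(i, j)] += GAIN                                  # ordinary Hebbian step
    for m in feeds_into[j]:                            # processors that can output into P_j
        if m != i and (m, j) in s:
            s[(m, j)] += SPILL * random.random()       # imprecise spill-over
    return s

s = {(1, 3): 1.0, (2, 3): 1.0, (4, 3): 1.0}
feeds_into = {3: [1, 2, 4]}
print(noisy_hebb(s, (1, 3), feeds_into))
```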
THE NOISY HEBB RULE AND STRUCTURAL ANALOGY
Consider the situation in which two maps, A and B, share a common set of clusters. This
should not be thought an uncommon occurrence; on the contrary, it is probably very rare for a
cluster to belong to one map only. Let B-A (not necessarily a map) denote the set of clusters in B
but not A. The activation of map A will cause the activation of some of those clusters in map B-
A which are adjacent to clusters in A. And the activation of these clusters may cause the
activation of some of the clusters in B-A which are adjacent to them — and so on. Depending on
what is going on in the rest of B, this process might peter out with little effect, or it might result
in the activation of B. In the latter case, what has occurred is the most primitive form of
structural analogy.
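The spreading process can be sketched as follows; the particular cluster graph and adjacency are invented for illustration. Activating A activates the clusters shared with B, activation then propagates through adjacent clusters of B-A, and in this toy case it happens to reach all of B.

```python
# A minimal sketch of two maps sharing clusters: activation of A spreads
# through the shared clusters into B - A, here activating all of B.
A = {1, 2, 3}
B = {3, 4, 5, 6}
adjacent = {3: {4}, 4: {5}, 5: {6}, 6: set()}   # adjacency among B's clusters

active = set(A)                                  # activating map A
frontier = A & B                                 # shared clusters fire first
while frontier:
    nxt = set()
    for c in frontier:
        for d in adjacent.get(c, set()):
            if d in B and d not in active:       # spread within B - A
                active.add(d)
                nxt.add(d)
    frontier = nxt

print(B <= active)   # True: the whole of map B has been activated
```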
Structural analogy, as defined earlier, may be very roughly described as reasoning of the form:
A and B share a common pattern, so if A is useful, B may also be useful. The noisy Hebb rule
involves only the simplest kind of common pattern: the common subgraph. But it is worth
remembering that analogy based on common subgraphs also came up in the context of
Poetszche’s approach to analogical robot learning. Analogy by common subgraphs works: there
is indeed a connection between neural analogy and conceptual analogy. And — looking ahead
to chapter 7 — it is also worth noting that, in this simple case, analogy and structurally
associative memory are inextricably intertwined: A and B have a common pattern and are
consequently stored near each other (in fact, interpenetrating each other); and it is this
associative storage which permits neural analogy to lead to conceptual analogy.