Koppel [8] has recently proposed an alternative to the KCS complexity measure. According to
Koppel’s measure, the sequences which are most complex are not the structureless ones. Neither,
of course, are they the ones with very simple structures, like 00000000000…. Rather, the more
complex sequences are the ones with more "sophisticated" structures.
The basic idea [10] is that a sequence with a sophisticated structure is part of a natural class
of sequences, all of which are computed by the same program. The program produces different
sequences depending on the data it is given, but these sequences all possess the same underlying
structure. Essentially, the program represents the structured part of the sequence, and the data the random part. Therefore, the "sophistication" of a sequence x should be defined as the size of the program defining the "natural class" containing x.
But how is this "natural" program to be found? As above, where y is a program and z is a
binary sequence, let %y% and %z% denote the length of y and z respectively. Koppel proposes
the following algorithm, defined with respect to a Turing machine that has two tapes instead of
just one, a program tape and a data tape:
1) search over all pairs of binary sequences (y,z) for which the two-tape Turing machine
with program y and data z computes x, and find those pairs for which %y% + %z% is smallest,
2) search over all pairs found in Step 1, and find the one for which %y% is biggest. This value of %y% is the "sophistication" of x.
All the pairs found in Step 1 are "best" representations of x. Step 2 searches all the "best"
representations of x, and finds the one with the most program (as opposed to data). This program
is assumed to be the natural structure of x, and its length is therefore taken as a measure of the sophistication of the structure of x.
There is no doubt that the decomposition of a sequence into a structured part and a random
part is an important and useful idea. But Koppel’s algorithm for achieving it is conceptually
problematic. Suppose the program/data pairs (y1,z1) and (y2,z2) both cause a Turing machine to
output x, but whereas %y1%=50 and %z1%=300, %y2%=250 and %z2%=110. Since %y1%+%z1%=350 whereas %y2%+%z2%=360, (y2,z2) will not be selected in Step 1, which searches for those pairs (y,z) that minimize %y%+%z%. Suppose that, in Step 2, (y1,z1) is chosen as the pair with maximum %y%; then the sophistication of x will be set at %y1%=50. Does it not seem that the intuitively far more sophisticated program y2, which represents x almost as compactly as y1 does, should count toward the sophistication of x?
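To make the objection concrete, here is a minimal Python sketch of Koppel's two steps applied to these two pairs. The candidate set and the lengths are just the hypothetical figures from the example; a real implementation would have to search over every pair (y,z) that causes the machine to output x, which is not feasible.

    # Hypothetical candidate pairs from the example, stored as (%y%, %z%).
    candidates = {
        "(y1,z1)": (50, 300),
        "(y2,z2)": (250, 110),
    }

    # Step 1: keep only the pairs that minimize %y% + %z%.
    min_total = min(py + pz for py, pz in candidates.values())
    survivors = {name: lengths for name, lengths in candidates.items()
                 if sum(lengths) == min_total}

    # Step 2: among the survivors, take the pair with the largest %y%;
    # its program length is reported as the sophistication of x.
    best = max(survivors, key=lambda name: survivors[name][0])
    print(survivors)           # {'(y1,z1)': (50, 300)} -- (y2,z2) is gone
    print(survivors[best][0])  # 50

The pair (y2,z2) is eliminated before its large program is ever considered, which is exactly the objection raised above.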
In the language of pattern, what Koppel’s algorithm does is:
1) Locate the pairs (y,z) that are the most intense patterns in x according to the definition of pattern with the length measure % % and a=b=1, c=0,
2) Among these pairs, select the one which is the most intense pattern in x according to the definition of pattern with the length measure % % and a=1, b=c=0.
It applies two different special cases of the definition of pattern, one after the other.
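The following fragment sketches those two special cases, under the assumption that the definition of pattern referred to here scores a pair (y,z) by the weighted sum a%y% + b%z% + cC(y,z); the exact definition is given earlier in the text and is only assumed, not reproduced.

    # Assumed form of the scoring used by the definition of pattern:
    # a weighted sum of program length, data length, and computation time.
    def weighted_size(py: int, pz: int, time: float,
                      a: float, b: float, c: float) -> float:
        return a * py + b * pz + c * time

    # Step 1 uses a = b = 1, c = 0, so pairs are ranked by %y% + %z%.
    step1_key = lambda py, pz, t: weighted_size(py, pz, t, 1, 1, 0)

    # Step 2 uses a = 1, b = c = 0, so the surviving pairs are ranked
    # by %y% alone.
    step2_key = lambda py, pz, t: weighted_size(py, pz, t, 1, 0, 0)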
How can all this be modified to accommodate examples like the pairs (y1,z1), (y2,z2) given
above? One approach is to look at some sort of combination of %y%+%z% with %y%.
%y%+%z% measures the combined length of program and data, and %y% measures the length
of the program. What is desired is a small %y%+%z% but a large %y%. This is some motivation
for looking at (%y%+%z%)/%y%. The smaller %y%+%z% gets, the smaller this quantity gets;
and the bigger %y% gets, the smaller it gets. One approach to measuring complexity, then, is to
search all (y,z) such that x=y*z, and pick the one which makes (%y%+%z%)/%y% smallest. Of
course, (%y%+%z%)/%y% = 1 + %z%/%y%, so whatever makes (%y%+%z%)/%y% smallest
also makes %z%/%y% smallest. Hence, in this context, the following is natural:
Definition 3.12: The crudity of a pattern (y,z) is %z%/%y%.
The crudity is simply the ratio of data to program. The cruder a pattern is, the greater the
proportion of data to program. A very crude pattern is mostly data; and a pattern which is mostly program is not very crude. Obviously, "crudity" is intended as an intuitive opposite to
"sophistication"; however, it is not exactly the opposite of "sophistication" as Koppel defined it.
This approach can also be interpreted to assign each x a "natural program" and hence a
"natural class". One must simply look at the pattern (y,z) in x whose crudity is the smallest. The program y associated with this pattern is, in a sense, the most natural program for x.
LOGICAL DEPTH
Bennett [9], as mentioned above, has proposed a complexity measure called "logical depth",
which incorporates the time factor in an interesting way. The KCS complexity of x measures
only the length of the shortest program required for computing x — it says nothing about how
long this program takes to run. Is it really correct to call a sequence of length 1000 simple if it can be computed by a short program which takes a thousand years to run? Bennett’s idea is to look at the running time of the shortest program for computing a sequence x. This quantity he
calls the logical depth of the sequence.
One of the motivations for this approach was a desire to capture the sense in which a
biological organism is more complex than a random sequence. Indeed, it is easy to see that a
sequence x with no patterns in it has the smallest logical depth of any sequence. The shortest
program for computing it is "Print x", which obviously runs faster than any other program
computing a sequence of the same length as x. And there is no reason to doubt the hypothesis
that biological organisms have a high logical depth. But it seems to us that, in some ways,
Bennett’s definition is nearly as counterintuitive as the KCS approach.
Suppose there are two competing programs for computing x, program y and program y'. Say y has a length of 1000 and a running time of 10 minutes, while y' has a length of 999 and a running time of 10 years. Then, assuming y' is the shortest program for computing x, the logical depth of x is ten years. Intuitively, this doesn't seem quite right: it is not the case that x fundamentally requires ten years to compute.
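The objection takes only a few lines to state in Python; the two programs and their statistics are the hypothetical ones from this example.

    # Hypothetical programs computing x: length in bits, running time in minutes
    # (ten years is approximated as 10 * 365 * 24 * 60 minutes).
    programs = {
        "y":  {"length": 1000, "runtime": 10},
        "y'": {"length": 999,  "runtime": 10 * 365 * 24 * 60},
    }

    # Bennett's prescription: take the shortest program and report its runtime.
    shortest = min(programs, key=lambda p: programs[p]["length"])
    print(shortest, programs[shortest]["runtime"])   # y' 5256000

A single bit of program length is allowed to outweigh an arbitrarily large difference in running time.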
At the core of Bennett’s measure is the idea that the shortest program for computing x is the
most natural representation of x. Otherwise why would the running time of this particular
program be a meaningful measure of the amount of time x requires to evolve naturally? But one
may define the "most natural representation" of a given entity in many different ways. Bennett’s
is only the simplest. For instance, one may study the quantity dC(y,z) + e%z%/%y% +
f(%y%+%z%), where d, e and f are positive constants defined so that d+e+f=3. The
motivation for this is as follows. The smaller %z%/%y% is, the less crude is the pattern (y,z).
And, as indicated above, the crudity of a pattern (y,z) may be interpreted as a measure of how
natural a representation it is. The smaller C(y,z) is, the less time it takes to get x out of (y,z).
And, finally, the smaller %y%+%z% is, the more intense a pattern (y,z) is. All these facts
suggest the following:
Definition 3.13: Let m denote the smallest value that the quantity
dC(y,z) + e%z%/%y% + f(%y%+%z%) assumes for any pair (y,z) such that x=y*z (assuming
there is such a minimum value). The meaningful complexity of x may then be defined as the
time complexity C(y,z) of the pattern (y,z) at which this minimum m is attained.
Setting d=e=0 reduces the meaningful complexity to the logical depth as Bennett defined it. Setting e=0 means that everything is as Bennett's definition would have it, except that cases such as the programs y and y' described above are resolved in a more intuitive manner. Setting f=0 means that one is considering the time complexity of the most sophisticated (least crude, most structured) representation of x, rather than merely the shortest. And keeping all the constants nonzero ensures a balance between time, space, and sophistication.
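Here is a small Python sketch of Definition 3.13, assuming one is handed a finite list of candidate pairs together with their lengths and time complexities; no such enumeration is actually available, and the numbers below are placeholders.

    def meaningful_complexity(candidates, d: float, e: float, f: float) -> float:
        """Return C(y,z) for the pair minimizing d*C(y,z) + e*%z%/%y% + f*(%y%+%z%).

        Each candidate is a tuple (%y%, %z%, C(y,z)).
        """
        def objective(pair):
            py, pz, time = pair
            return d * time + e * (pz / py) + f * (py + pz)
        py, pz, time = min(candidates, key=objective)
        return time

    # Placeholder candidates: (%y%, %z%, C(y,z)).
    pairs = [(50, 300, 40.0), (250, 110, 25.0), (999, 1, 5256000.0)]

    print(meaningful_complexity(pairs, d=0, e=0, f=3))   # 40.0 -- Bennett-style, d=e=0
    print(meaningful_complexity(pairs, d=1, e=1, f=1))   # 25.0 -- balanced weights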
Admittedly, this approach is not nearly so tidy as Bennett’s. Its key shortcoming is its failure
to yield any particular number of crucial significance — everything depends on various factors
which may be given various weights. But there is something to be said for considering all the
relevant factors.