Having just reviewed certain aspects of the psychological perspective on intelligence, it is
worth observing how different the engineering perspective is. As one might expect, engineers
have a much simpler and much more practical definition of intelligence.
Control theory deals with ways to cause complex machines to yield desired behaviors.
Adaptive control theory deals with the design of machines which respond to external and internal
stimuli and, on this basis, modify their behavior appropriately. And the theory of intelligent
control simply takes this one step further. To quote a textbook of automata theory (Aleksander
and Hanna, 1976):
[An] automaton is said to behave "intelligently" if, on the basis of its "training" data which
is provided within some context together with information regarding the desired action, it takes
the correct action on other data within the same context not seen during training.
This is the sense in which contemporary "artificial intelligence" programs are intelligent. They
can generalize within their limited context: they can follow the one script which they are
programmed to follow.
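To see what this engineering criterion amounts to in practice, consider the following minimal sketch (a hypothetical toy example, not drawn from Aleksander and Hanna): a simple nearest-neighbor rule is "trained" on (stimulus, action) pairs within a single context, and is then asked to take the correct action on stimuli from that same context which it has never seen.

```python
# A toy illustration (hypothetical example) of the automata-theory notion of
# "intelligent" behavior: generalize from training data to unseen data
# within the same context.

def nearest_neighbor(training_data, x):
    """Return the action paired with the training stimulus closest to x."""
    return min(training_data, key=lambda pair: abs(pair[0] - x))[1]

# "Training" data: (stimulus, desired action) pairs within one context.
training_data = [(0.1, "retreat"), (0.4, "retreat"),
                 (0.6, "advance"), (0.9, "advance")]

# Other data within the same context, not seen during training.
for stimulus in [0.2, 0.75]:
    print(stimulus, "->", nearest_neighbor(training_data, stimulus))
```

Such a rule behaves "intelligently" in exactly the narrow sense of the quote: it generalizes within its one fixed context, and says nothing about what happens when the context shifts.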
Of course, this is not really intelligence, not in the psychological sense. It is true that modern
"intelligent" machines can play championship chess and diagnose diseases from symptoms —
things which the common person would classify as intelligent behavior. On the other hand,
virtually no one would say that walking through the streets of New York requires much
intelligence, and yet not only human beings but also rats do it with little difficulty, while no
machine yet can. Existing intelligent machines can "think" within their one context — chess, medical
diagnosis, circuit design — but they cannot deal with situations in which the context continually
shifts, not even as well as a rodent can.
The above quote defines an intelligent machine as one which displays "correct" behavior in
any situation within one context. This is not psychologically adequate, but it is on the right
track. To obtain an accurate characterization of intelligence in the psychological sense, one need
merely modify this wording. In their intriguing book Robots on Your Doorstep, Winkless and
Browning (1975) have done so in a very elegant way:
Intelligence is the ability to behave appropriately under unpredictable conditions.
Despite its vagueness, this criterion does serve to point out the problem with ascribing
intelligence to chess programs and the like: compared to our environment, at least, the
environment within which they are capable of behaving appropriately is very predictable indeed,
in that it consists only of certain (simple or complex) patterns of arrangement of a very small
number of specifically structured entities.
Of course, the concept of appropriateness is intrinsically subjective. And unpredictability is
relative as well — to a creature accustomed to living in interstellar space and inside stars and
planets as well as on the surfaces of planets, or to a creature capable of living in 77 dimensions,
our environment might seem just as predictable as the universe of chess seems to us. In order to
make this folklore definition precise, we must first of all confront the vagueness inherent in the
terms "appropriate" and "unpredictable."
Toward this end, let us construct a simple mathematical model. Consider two computers: S
(the system) and E (the environment), interconnected in an adaptive manner. That is, let S_t
denote the state of the system at time t, and let E_t denote the state of the environment at time t.
Assume that S_t = f(S_{t-1}, E_{t-1}) and E_t = g(S_{t-1}, E_{t-1}), where f and g are (possibly
nondeterministic) functions characterizing S and E. What we have, then, is a discrete dynamical
system on the set of all possible states S × E: an apparatus which, given a (system state,
environment state) pair,
yields the (system state, environment state) pair which is its natural successor. We need to say
what it means for S to behave "appropriately", and what it means for E to be "unpredictable".
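To fix ideas, here is a minimal sketch of this coupled dynamical system in code. The particular update rules f and g below are arbitrary illustrative choices, not part of the model itself; g is made stochastic to reflect the possibility that the functions are nondeterministic.

```python
import random

def f(s, e):
    """System update S_t = f(S_{t-1}, E_{t-1}): illustrative placeholder."""
    return s + 0.5 * (e - s)

def g(s, e):
    """Environment update E_t = g(S_{t-1}, E_{t-1}), with a random disturbance."""
    return e - 0.25 * s + random.uniform(-0.1, 0.1)

def trajectory(s0, e0, steps):
    """Iterate the discrete dynamical system on S × E from an initial pair."""
    s, e = s0, e0
    history = [(s, e)]
    for _ in range(steps):
        s, e = f(s, e), g(s, e)  # both updates use the previous (S, E) pair
        history.append((s, e))
    return history

for t, (s, e) in enumerate(trajectory(1.0, 0.0, 5)):
    print(f"t={t}  S_t={s:+.3f}  E_t={e:+.3f}")
```

Whatever f and g one chooses, each step maps a (system state, environment state) pair to its successor; the questions of "appropriate" behavior and "unpredictability" then become questions about the structure of such trajectories.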