CCM - Reading Assignment
High-level Perception, Representation, and Analogy
-
The essence of human perception lies in the ability of the mind to hew order from this chaos, whether this means simply detecting movement in the visual field, recognizing sadness in a tone of voice, perceiving a threat on a chessboard, or coming to understand the Iran-Contra affair in terms of Watergate.
-
One of the most important properties of high-level perception is that it is extremely flexible. A given set of input data may be perceived in a number of different ways, depending on the context and the state of the perceiver. Due to this flexibility, it is a mistake to regard perception as a process that associates a fixed representation with a particular situation.
-
The formation of appropriate representations lies at the heart of human high-level cognitive abilities.
It might even be said that the problem of high-level perception forms the central task facing the
artificial-intelligence community: the task of understanding how to draw meaning out of the world. It
might not be stretching the point to say that there is a “meaning barrier”, which has rarely been
crossed by work in AI.
-
The end product of perception, when a set of raw data has been organized into a coherent
and structured whole, is a representation. Representations have been the object of much study
and debate within the field of AI, and much is made of the “representation problem”. This
problem has traditionally been phrased as “What is the correct structure for mental
representations?”.
-
To separate representation-building from higher-level cognitive tasks is, we believe, impossible. In order to provide the kind of flexibility that is apparent in cognition, any fully cognitive model will probably require a continual interaction between the process of representation-building and the manipulation of those representations.
-
The Physical Symbol System Hypothesis (Newell & Simon, 1976), upon which most of the traditional AI enterprise has been built, posits that thinking occurs through the manipulation of symbolic representations, which are composed of atomic symbolic primitives. Such symbolic representations are by their nature somewhat rigid, black-and-white entities, and it is difficult for their representational content to shift subtly in response to changes in context.
-
People are constantly interpreting new situations in terms of old ones. Whenever they
do this, they are using the analogical process to build up richer representations of
various situations.
-
The lesson to be learned from all this is that separating perception from the “higher” tasks for
which it is to be used is almost certainly a misguided approach. The fact that representations
have to be adapted to particular contexts and particular tasks means that an interplay between the
task and the perceptual process is unavoidable, and therefore that any “modular” approach to
analogy-making will ultimately fail. It is therefore essential to investigate how the perceptual and
mapping processes can be integrated.
-
In light of these considerations, it is somewhat disheartening to note that almost all
current work in the computational modeling of analogy bypasses the process of
perception altogether. The dominant approach involves starting with fixed, preordained
representations, and launching a mapping process to find appropriate correspondences
between representations. The mapping process not only takes center stage; it is the only
actor.
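The approach criticized above can be made concrete with a toy sketch. This is not code from the paper; the relation names and the solar-system/atom example are hypothetical stand-ins. The point it illustrates is exactly the one the excerpt makes: both situations are handed to the program as fixed, hand-built symbolic representations, so all perception has already been done for it, and the "analogy" reduces to aligning relations by name.

```python
# Toy illustration (hypothetical, not from the paper) of the "dominant
# approach": fixed, preordained representations plus a mapping process
# that finds correspondences between them.

def map_analogy(base, target):
    """Pair up facts whose relation names match, mapping their arguments.

    Each representation is a list of (relation, arguments) tuples,
    hand-coded in advance -- the program does no perception of its own.
    """
    mapping = {}
    for rel, args in base:
        for rel2, args2 in target:
            if rel == rel2 and len(args) == len(args2):
                for a, b in zip(args, args2):
                    mapping.setdefault(a, b)  # keep the first correspondence found
    return mapping

# Hand-built representations: the "perceiving" was done by the programmer.
solar_system = [("attracts", ("sun", "planet")), ("revolves", ("planet", "sun"))]
atom = [("attracts", ("nucleus", "electron")), ("revolves", ("electron", "nucleus"))]

print(map_analogy(solar_system, atom))
# {'sun': 'nucleus', 'planet': 'electron'}
```

Because the representations are frozen before mapping begins, nothing in the mapper can re-perceive either situation in a context-dependent way, which is precisely the inflexibility the authors object to.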