
BOOK REVIEWS 435


Mark H. Bickhard and Loren Terveen, Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution, Advances in Psychology, Vol. 109, Amsterdam: North-Holland/Elsevier Science B.V., 1995, ix + 384 pp., $152.50 (cloth), ISBN 0-444-82048-5; xii + 384 pp., $92.00 (paper), ISBN 0-444-82520-7.

This book revitalizes dialogue regarding the status of computational representations of knowledge in a theoretical account of cognition, displacing the grounding of symbols as a secondary concern relative to the content and function of representation. Inspired by developmental psychology, Bickhard and Terveen focus on the origins of cognitive phenomena as a crucial constraint on the explanation of adult capability. Building on this perspective, they propose that the dominant programmatic view of computational representation adopts a flawed conceptualization of the human capacity to interact with (and thereby learn about and develop within) a dynamic environment. They do not propose that we banish representation from an explanation of cognition, but hint instead at a foundation for representation grounded in an interactive capability. Their proposal merits serious consideration, and is sure to promote spirited exchange. This review attempts to contribute to that exchange, replacing chapter summaries with admittedly selective comments on their criticism of cognitive science and artificial intelligence, the proposed interactivist alternative, the scope and nature of their argument, and the focal points of an interactivist’s agenda.

1. The Criticism

Bickhard and Terveen refer to the dominant programmatic view of representation as “encodingism”. Paraphrasing from pages 12–16, encodingism assumes that the symbols of a representation correspond to “something”, but without providing a mechanism for how this correspondence occurs. Readers who doubt this account of computational representation need only consult Newell and Simon (1972, p. 21) and Newell (1990, p. 78) for examples of the cursory treatment of the encoding process. Those shaping the programmatic view clearly recognized the encoding assumption, but viewed it as dismissable, partitionable from the primary scientific enterprise. Yet Bickhard and Terveen claim that by focusing on the manipulation of predefined symbols the programmatic view “presupposes what it aspires to explain” (p. 13). Indeed, the authors overlook two influential projects that exemplify the power of this presupposition. Starting with just the right representational primitives, Winston (1975) and Langley, Bradshaw and Simon (1983) deliver computational models not only of learning but of discovery as well.

Bickhard and Terveen urge greater concern for the origins of representations, which organisms must be able to create in the process of development and learning. Yet the authors argue against the possibility of a mechanism for encoding a representation. They dismiss a tracking mechanism that maps things-in-the-world to representational primitives for failing to capture a necessary distinction between a representation and that which it represents. “A ‘thing’ and its representations are simply not the same ontological sort – you cannot do the same things with a representation of X that you can with X itself” (p. 15). Consequently, the representing organism must be able to distinguish between operating on its representations and interacting with the real world. Moreover, “connectionism and PDP approaches are just as committed to, and limited by, encodingism as are compositional, or symbol manipulational, approaches” (p. 283). Most troubling to the science, the authors claim that all proposed mechanisms for creating new representations for new environmental phenomena presume a pre-existing representation. “Encodings can only transform, can only encode or recode representations that already exist [...]” (p. 21). This, in brief, is the claimed incoherence of encodingism, which, according to the authors, undermines the viability of contemporary Artificial Intelligence and Cognitive Science.

This well-articulated claim must not mask its precedents in the literature. First, the scope of the concern would have been substantially clarified by introducing the standard distinction between extensional and intensional semantics: extensional semantics requires a link between beliefs and objects in the world, and poses the greatest challenge to learning under the encodingist view. Second, the claim echoes Dietterich’s (1986) thoughtful critique of machine learning, which is unfortunately missing from the bibliography.
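The alleged circularity can be made concrete with a minimal sketch. The fragment below is not drawn from the book; the primitive table and the `encode` function are hypothetical illustrations of an encoding step that can only select among symbols fixed in advance by a designer:

```python
# Hypothetical sketch of the "encodingist" picture under criticism.
# The designer fixes the primitives and their intended referents up front;
# the system itself never establishes the correspondence.
PRIMITIVES = {"FLY": "a small airborne insect", "RED": "the color red"}

def encode(percept: str) -> str:
    """Map an input description onto a predefined symbol.

    Note the circularity the authors stress: this step can only select
    among symbols that already exist; it cannot create a new
    representational primitive for a genuinely novel phenomenon.
    """
    for symbol in PRIMITIVES:
        if symbol.lower() in percept.lower():
            return symbol
    raise KeyError("no pre-existing encoding for this percept")

print(encode("a fly buzzes past"))  # FLY
```

On this sketch, learning can recombine or re-map the entries of `PRIMITIVES`, but the table itself is the presupposed representation the authors claim encodingism cannot explain.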

2. The Alternative of Interactivism

The proposed alternative to encodingism as the source of representation is interactivism, which replaces concern for the elements of representation with concern for the function of representation in the selection among potential strategies for future interaction. “In the interactive view, representation does not emerge in knowledge of what the differentiations are differentiation of – are in correspondence with – but instead representation is emergent in predications of the potentiality for further interactive properties. Such predications of interactive potentiality will often be evoked by – be contingent upon – instances of environmental differentiations, such as the frog predication of ‘eating opportunity’ evoked by the factual differentiation of a fly. In such an instance, however, the representational content – the potentially falsifiable content – is of ‘eating opportunity’ not of ‘fly’. The factual correspondence with the fly serves a functional role in evoking the representation, the predication of ‘eating opportunity.’ The factual correspondence does not constitute that representation” (pp. 132–133). The grounding of representation in physical action gives this alternative view a biological (Piagetian) orientation that converges in a later chapter on language with concern for social and cultural influences on cognition (see also Bruner, 1990, for another developmental psychologist’s view).
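The frog example can be rendered schematically. The functions below are hypothetical illustrations of the distinction, not the authors’ formalism: the differentiator carries no content about flies, while the predicated interaction is what can be falsified by the outcome of acting.

```python
# Hypothetical sketch of the interactivist contrast in the quoted passage.

def differentiate(stimulus):
    """Factual differentiation: fires on any small dark moving speck."""
    return stimulus == "small-dark-moving-speck"

def predicate_interaction(stimulus):
    """Representation as predication of an interactive potentiality."""
    if differentiate(stimulus):
        return "eating-opportunity"  # the content, on this view
    return None

def act_and_test(stimulus, world):
    """The predication is falsifiable by the outcome of acting on it."""
    prediction = predicate_interaction(stimulus)
    if prediction is None:
        return True  # nothing predicated, nothing to falsify
    return world.get(stimulus) == "edible"

# A pellet triggers the same differentiation as a fly, so the frog
# predicates an eating opportunity and is wrong -- an error the system
# itself can detect, because the content was the interaction, not "fly".
print(act_and_test("small-dark-moving-speck",
                   {"small-dark-moving-speck": "inedible"}))  # False
```

The design point is that error is defined system-internally, by the failure of the predicated interaction, rather than by a designer comparing symbols to the world.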

3. The Scope of the Argument

To their credit, Bickhard and Terveen provide a mammoth survey of competing accounts of (human) cognitive capability. However, the apparent breadth of coverage circumscribes a set of similarly inclined cousins: Searle, Gibson, Piaget, Maturana and Varela, Dreyfus and Dreyfus, Harnad, Bogdan, Clancey, R. Brooks, Agre and Chapman, B. Shannon, K. Ford, Kuipers, dynamic systems approaches, etc. The survey can feel like a sermon for the choir while picking on the details of fundamentally similar views. Though the Soar (based on Laird, Newell and Rosenbloom, 1986) and CYC (Lenat and Guha, 1988) projects receive extended critiques (see below), serious challenges to interactivism receive little or no attention. For example, Bickhard and Terveen commit to a theory of cognition and learning that depends on goals and intentions as essential for the detection of error. Yet there is no mention of the alternative positions surrounding this commitment (Dennett, 1987). And the authors show little concern for the ambiguity that the potential for multiple levels of interpretation introduces to a semantics grounded in action (see Rowlands, 1997, for a recent discussion).

In the extended critiques, provocation sometimes substitutes for careful argumentation, as in “he (Newell) has no notion whatsoever of the constitutive role of goal-directed interaction in representation” (p. 97). How is such a conclusion consistent with a body of work that includes means–ends analysis and the formulation of problem spaces to organize domain-specific goals with operators and problem representation? Perhaps Bickhard and Terveen are objecting to the manner in which Newell formulates what he calls the “Great Move”, that is, the capability to represent a complex environment in a generic medium, and conduct further, goal-directed interpretation and inference. This capability enables thought decoupled from concurrent action, or constrained by physically possible action (see Newell, 1990, pp. 61–63 and Agre, 1993, p. 424), as well as interruption by circumstances not necessarily related to current goals (Newell, 1990, p. 228). More recently, Clark (1997) distinguishes this function for representation as the hallmark of a “genuine model-using agent” that is properly exempt from some of the criticisms of interactivism (p. 479). Newell’s (1980, p. 197) specific multi-stage account of representation may have pressed Bickhard and Terveen to discount the significant human capability that prompted Newell’s formulation of the Great Move. Nevertheless, readers must resist the provocation to appreciate the gravity of the authors’ fundamental critique: the heart of contemporary computational models of cognition is representational content – a free parameter, with no apparent theoretical foundation. If the behavior of a computational model indeed rests on a free parameter, the computational architecture has limited value as a principled explanation.
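For readers unfamiliar with it, means–ends analysis is readily illustrated. The toy domain below (integer states and two arithmetic operators) is a hypothetical stand-in, but the control structure, selecting whichever operator most reduces the difference between the current state and the goal, is the goal-directed core at issue:

```python
# Hypothetical toy illustration of means-ends analysis in a problem space:
# states are integers, operators are simple arithmetic moves, and control
# is driven at every step by the difference between state and goal.
OPERATORS = {"add3": lambda s: s + 3, "sub1": lambda s: s - 1}

def means_ends(state, goal, limit=20):
    """Repeatedly apply the operator that most reduces distance to goal."""
    plan = []
    while state != goal and len(plan) < limit:
        # Difference reduction: evaluate each operator's result against
        # the goal and commit to the one that closes the most distance.
        name, op = min(OPERATORS.items(),
                       key=lambda kv: abs(kv[1](state) - goal))
        state = op(state)
        plan.append(name)
    return plan

print(means_ends(0, 5))  # ['add3', 'add3', 'sub1']
```

Whatever one makes of the authors’ complaint, a mechanism of this shape is goal-directed through and through, which is what makes the quoted dismissal of Newell puzzling.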

4. The Interactivist Agenda

Bickhard and Terveen intend to promote a shift in the focus of the research agenda, prompted in part by exposing the roots of common (errant) practices. The authors attribute the lack of concern for the assumptions of encodingism to the dominance of project-oriented rather than program-oriented research, which, they claim, logically precludes the discovery of programmatic flaws. This reviewer is more inclined to agree with Dietrich (1990), who suggests that the research community has not chosen to treat Artificial Intelligence and Cognitive Science as falsifiable hypotheses about the nature of intelligent thought. A self-perpetuating cycle of confirmation bias allows for the illusion of empirical success within the boundaries of a culturally determined, narrow view of knowledge-based reasoning. Many of the tasks formulated as computational models are drawn from academics (arithmetic, geometric proof, certain aspects of language comprehension, and even the apparently advanced diagnostic reasoning of medicine and mechanical repair) that require only the emulation of procedures for the manipulation of symbols.

Thus an important contribution of this book is the role of interactivism as a catalyst for the identification of intelligent capabilities too easily and too often overlooked in the tasks formulated as computational models. These overlooked capabilities include: expert skill that requires timed interaction with a dynamic world (see also Clark, 1997); the use of language in an open-ended physical and social world; the self-discovery of the nature and consequence of error; and the use of environmental feedback, not merely to rein in generalization and specialization but to promote the diagnosis and enrichment of ontologically limited representations. Bickhard and Terveen offer a critique that is surely provocative, seriously incomplete, and the beneficiary of a careful selection of challenges. Nevertheless, they have correctly identified a fundamental vulnerability inherent in Artificial Intelligence and Cognitive Science that demands a response from readers of Minds and Machines.

References

Agre, P. E. (1993), ‘Interview with Allen Newell’, Artificial Intelligence 59, pp. 415–449.
Bruner, J. (1990), Acts of Meaning, Cambridge, MA: Harvard University Press.
Clark, A. (1997), ‘The dynamical challenge’, Cognitive Science 21, pp. 461–481.
Dennett, D. C. (1987), The Intentional Stance, Cambridge, MA: The MIT Press.
Dietrich, E. (1990), ‘Programs in the search for intelligent machines’, in D. Partridge and Y. Wilks, eds., The Foundations of Artificial Intelligence: A Sourcebook, Cambridge: Cambridge University Press, pp. 223–233.
Dietterich, T. G. (1986), ‘Learning at the knowledge level’, Machine Learning 1, pp. 287–316.
Langley, P., G. L. Bradshaw and H. A. Simon (1983), ‘Rediscovering chemistry with the BACON System’, in R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Machine Learning, Palo Alto, CA: Tioga, pp. 307–329.
Lenat, D. and R. Guha (1988), The World According to CYC, MCC Technical Report No. ACA-AJ-300-88.
Newell, A. (1980), ‘Reasoning, problem solving and decision processes: The problem space as a fundamental category’, in R. Nickerson, ed., Attention and Performance VIII, Hillsdale, NJ: Lawrence Erlbaum Associates.
Newell, A. (1990), Unified Theories of Cognition, Cambridge, MA: Harvard University Press.
Rowlands, M. (1997), ‘Teleological semantics’, Mind 106, pp. 279–303.
Winston, P. H. (1975), ‘Learning structural descriptions from examples’, in P. H. Winston, ed., The Psychology of Computer Vision, New York: McGraw-Hill, pp. 157–210.

Department of Psychology, VALERIE L. SHALIN
Wright State University,
Dayton, OH 45435, USA