
Psychological Review, Vol. 65, No. 6, 1958

THE PERCEPTRON: A PROBABILISTIC MODEL FOR INFORMATION STORAGE AND ORGANIZATION IN THE BRAIN¹

F. ROSENBLATT
Cornell Aeronautical Laboratory

If we are eventually to understand the capability of higher organisms for perceptual recognition, generalization, recall, and thinking, we must first have answers to three fundamental questions:

1. How is information about the physical world sensed, or detected, by the biological system?

2. In what form is information stored, or remembered?

3. How does information contained in storage, or in memory, influence recognition and behavior?

The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.

With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus and the stored pattern. According to this hypothesis, if one understood the code or "wiring diagram" of the nervous system, one should, in principle, be able to discover exactly what an organism remembers by reconstructing the original sensory patterns from the "memory traces" which they have left, much as we might develop a photographic negative, or translate the pattern of electrical charges in the "memory" of a digital computer. This hypothesis is appealing in its simplicity and ready intelligibility, and a large family of theoretical brain models has been developed around the idea of a coded, representational memory (2, 3, 9, 14). The alternative approach, which stems from the tradition of British empiricism, hazards the guess that the images of stimuli may never really be recorded at all, and that the central nervous system simply acts as an intricate switching network, where retention takes the form of new connections, or pathways, between centers of activity. In many of the more recent developments of this position (Hebb's "cell assembly," and Hull's "cortical anticipatory goal response," for example) the "responses" which are associated to stimuli may be entirely contained within the CNS itself. In this case the response represents an "idea" rather than an action. The important feature of this approach is that there is never any simple mapping of the stimulus into memory, according to some code which would permit its later reconstruction. Whatever information is retained must somehow be stored as a preference for a particular response; i.e., the information is contained in connections or associations rather than topographic representations. (The term response, for the remainder of this presentation, should be understood to mean any distinguishable state of the organism, which may or may not involve externally detectable muscular activity. The activation of some nucleus of cells in the central nervous system, for example, can constitute a response, according to this definition.)

¹ The development of this theory has been carried out at the Cornell Aeronautical Laboratory, Inc., under the sponsorship of the Office of Naval Research, Contract Nonr-2381(00). This article is primarily an adaptation of material reported in Ref. 15, which constitutes the first full report on the program.

Corresponding to these two positions on the method of information retention, there exist two hypotheses with regard to the third question, the manner in which stored information exerts its influence on current activity. The "coded memory theorists" are forced to conclude that recognition of any stimulus involves the matching or systematic comparison of the contents of storage with incoming sensory patterns, in order to determine whether the current stimulus has been seen before, and to determine the appropriate response from the organism. The theorists in the empiricist tradition, on the other hand, have essentially combined the answer to the third question with their answer to the second: since the stored information takes the form of new connections, or transmission channels in the nervous system (or the creation of conditions which are functionally equivalent to new connections), it follows that the new stimuli will make use of these new pathways which have been created, automatically activating the appropriate response without requiring any separate process for their recognition or identification.

The theory to be presented here takes the empiricist, or "connectionist" position with regard to these questions. The theory has been developed for a hypothetical nervous system, or machine, called a perceptron. The perceptron is designed to illustrate some of the fundamental properties of intelligent systems in general, without becoming too deeply enmeshed in the special, and frequently unknown, conditions which hold for particular biological organisms. The analogy between the perceptron and biological systems should be readily apparent to the reader.
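Rosenblatt specifies the perceptron's organization later in the paper; as a rough illustrative sketch of the kind of machine described here, a single threshold unit whose connection strengths change with experience can be written as follows. The function names, the learning rate, and the error-correction update are modern conventions (the standard perceptron learning rule), not details taken from this section.

```python
# Minimal sketch of a single perceptron unit: a weighted sum of input
# signals compared against a threshold, with an error-correction update.
# All names (weights, threshold, learn) are illustrative, not from the paper.

def response(weights, threshold, stimulus):
    """Return 1 if the weighted input exceeds the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, stimulus))
    return 1 if total > threshold else 0

def learn(weights, threshold, stimulus, target, rate=0.1):
    """Nudge connection weights toward producing the target response."""
    error = target - response(weights, threshold, stimulus)
    return [w + rate * error * x for w, x in zip(weights, stimulus)]

# Train the unit to respond only when both inputs are active.
weights, threshold = [0.0, 0.0], 0.5
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for stimulus, target in data:
        weights = learn(weights, threshold, stimulus, target)
print([response(weights, threshold, s) for s, _ in data])  # → [0, 0, 0, 1]
```

After training, the unit responds only to the pattern in which both inputs are active; its "memory" of that preference is contained entirely in the connection weights, with no stored copy of the stimuli themselves.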

During the last few decades, the development of symbolic logic, digital computers, and switching theory has impressed many theorists with the functional similarity between a neuron and the simple on-off units of which computers are constructed, and has provided the analytical methods necessary for representing highly complex logical functions in terms of such elements. The result has been a profusion of brain models which amount simply to logical contrivances for performing particular algorithms (representing "recall," stimulus comparison, transformation, and various kinds of analysis) in response to sequences of stimuli, e.g., Rashevsky (14), McCulloch (10), McCulloch & Pitts (11), Culbertson (2), Kleene (8), and Minsky (13). A relatively small number of theorists, like Ashby (1) and von Neumann (17, 18), have been concerned with the problems of how an imperfect neural network, containing many random connections, can be made to perform reliably those functions which might be represented by idealized wiring diagrams. Unfortunately, the language of symbolic logic and Boolean algebra is less well suited for such investigations. The need for a suitable language for the mathematical analysis of events in systems where only the gross organization can be characterized, and the


precise structure is unknown, has led the author to formulate the current model in terms of probability theory rather than symbolic logic.
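The deterministic view the author contrasts with his probabilistic approach can be illustrated by a McCulloch-Pitts style threshold element, in which fixed, fully specified wiring computes Boolean functions exactly. This is a modern paraphrase of the "simple on-off units" mentioned above; the particular gates and names are illustrative, not drawn from the cited models.

```python
# A McCulloch-Pitts style unit: fires (1) only if the number of active
# excitatory inputs meets the threshold and no inhibitory input is active.
def mp_unit(excitatory, inhibitory, threshold):
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Fixed wiring realizes Boolean functions: AND requires both inputs,
# OR requires at least one, NOT is a single inhibitory connection.
def AND(a, b): return mp_unit([a, b], [], threshold=2)
def OR(a, b):  return mp_unit([a, b], [], threshold=1)
def NOT(a):    return mp_unit([1], [a], threshold=1)

print(AND(1, 1), OR(0, 1), NOT(1))  # prints: 1 1 0
```

Because every connection and threshold must be specified exactly, such a network is the "idealized wiring diagram" the text describes; it has no natural way to express the imperfect, randomly connected systems that motivate the probabilistic formulation.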

The theorists referred to above were chiefly concerned with the question of how such functions as perception and recall might be achieved by a deterministic physical system of any sort, rather than how this is actually done by the brain. The models which have been produced all fail in some important respects (absence of equipotentiality, lack of neuroeconomy, excessive specificity of connections and synchronization requirements, unrealistic specificity of stimuli sufficient for cell firing, postulation of variables or functional features with no known neurological correlates, etc.) to correspond to a biological system. The proponents of this line of approach have maintained that, once it has been shown how a physical system of any variety might be made to perceive and recognize stimuli, or perform other brainlike functions, it would require only a refinement or modification of existing principles to understand the working of a more realistic nervous system, and to eliminate the shortcomings mentioned above. The writer takes the position, on the other hand, that these shortcomings are such that a mere refinement or improvement of the principles already suggested can never account for biological intelligence; a difference in principle is clearly indicated. The theory of statistical separability (cf. 15), which is to be summarized here, appears to offer a solution in principle to all of these difficulties.

Those theorists, Hebb (7), Milner (12), Eccles (4), and Hayek (6), who have been more directly concerned with the biological nervous system and its activity in a natural environment, rather than with formally analogous machines, have generally been less exact in their formulations and far from rigorous in their analysis, so that it is frequently hard to assess whether or not the systems that they describe could actually work in a realistic nervous system, and what the necessary and sufficient conditions might be. Here again, the lack of an analytic language comparable in proficiency to the Boolean algebra of the network analysts has been one of the main obstacles. The contributions of this group should perhaps be considered as suggestions of what to look for and investigate, rather than as finished theoretical systems in their own right. Seen from this viewpoint, the most suggestive work, from the standpoint of the following theory, is that of Hebb and Hayek.

The position, elaborated by Hebb (7), Hayek (6), Uttley (16), and Ashby (1), in particular, upon which the theory of the perceptron is based, can be summarized by the following assumptions:

1. The physical connections of the nervous system which are involved in learning and recognition are not identical from one organism to another. At birth, the construction of the most important networks is largely random, subject to a minimum number of genetic constraints.

2. The original system of connected cells is capable of a certain amount of plasticity; after a period of neural activity, the probability that a stimulus applied to one set of cells will cause a response in some other set is likely to change, due to some relatively long-lasting changes in the neurons themselves.

    3. Through exposure to a la
