
Stanford Artificial Intelligence Laboratory
Memo AIM-337

Computer Science Department
Report No. STAN-CS-80-808

May 1980

Basic Research in Artificial Intelligence
and
Foundations of Programming

by

John McCarthy, Principal Investigator
Thomas Binford, David Luckham, Zohar Manna, Richard Weyhrauch
Associate Investigators

Edited by Les Earnest

Research sponsored by
Defense Advanced Research Projects Agency

COMPUTER SCIENCE DEPARTMENT
Stanford University


Recent research results are reviewed in the areas of formal reasoning, mathematical theory of computation, program verification, and image understanding.

This research was supported by the Advanced Research Projects Agency of the Department of Defense under ARPA Order No. 2494, Contract MDA903-76-C-0206. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of Stanford University or any agency of the U.S. Government.

Reproduced in the U.S.A. Available from the National Technical Information Service, Springfield, Virginia 22161.


Table of Contents

1. Introduction
2. Basic Research in Artificial Intelligence and Formal Reasoning
   2.1 Formal Reasoning
   2.2 First Order Logic
   2.3 Mathematical Theory of Program Testing
   2.4 Proofs as Descriptions of Algorithms
3. Mathematical Theory of Computation
4. Program Verification
5. Image Understanding
   5.1 Summary
   5.2 Research Objectives
   5.3 ACRONYM progress
   5.4 Autonomous Navigation
   5.5 References

Appendices

A. Theses
B. Film Reports
C. External Publications
D. Abstracts of Recent Reports

1. Introduction

This report describes recent research in several related areas:


Basic research in artificial intelligence and formal reasoning addresses fundamental problems in the representation of knowledge and reasoning processes applied to this knowledge. Solution of these problems will make possible the development of analytical applications of computers with large and complex data bases, where current systems can handle only a very restricted set of data structures and queries.

Mathematical theory of computation studies the properties of computer programs. The goal is to provide a sound theoretical basis for proving correctness or equivalence of designs.

The goal of program verification is to improve the reliability of important classes of programs such as compilers, operating systems, and realtime control systems, and to standardize techniques for program construction, documentation, and maintenance.

Image understanding is aimed at mechanizing visual perception of three-dimensional objects, either from photographs or from passive imaging sensors. Advances in this field are expected to lead to much more efficient photointerpretation capabilities as well as automatic visual guidance systems.

Appendices list dissertations, films, and other recent reports as well as recent external publications by the staff.


2. Basic Research in Artificial Intelligence and Formal Reasoning

Personnel: John McCarthy, Richard Weyhrauch, Lewis Creary, Luigia Aiello, Jussi Ketonen, Ma Xiwen
Student Research Assistants: Christopher Goad, Carolyn Talcott, Ben Moszkowski

Applied research requires basic research to replenish the stock of ideas on which its progress depends. The long range goals of work in basic AI and formal reasoning are to make computers carry out the reasoning required to solve problems. Our work over the past few years has made it considerably clearer how our formal approach can be applied to AI and MTC problems. This brings application nearer, especially in the area of mathematical theory of computation.

2.1 Formal Reasoning

John McCarthy continued his work in formalization of the common sense facts about the world needed for intelligent behavior and also work in formalization of computer programs. This work led to the following reports and publications:

1. Two papers with R.S. Cartwright on the representation of recursive programs in first order logic were published in POPL conferences [2, 3].

2. In Spring 1979 McCarthy discovered a formalism for representing sequential programs in first order logic, now called Elephant because it never forgets. Unlike previous formalisms, Elephant uses time explicitly as an independent variable and a program counter as an explicit function of time. As a result, properties of programs can be proved directly from the program and the axiomatization of the data domain. Besides that, Elephant programs can refer directly to past inputs and events, sometimes obviating the need for the programmer to define certain data structures for remembering these events; the data structures are then supplied by the compiler. In this one respect Elephant is a higher level language than any developed so far.
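To make this concrete, here is a minimal sketch (ours, in Python; McCarthy's Elephant is a first order formalism, not an implementation) in which the program counter pc(t) and a variable x(t) are kept as explicit functions of time:

    # pc(t) and x(t) as explicit functions of time, stored as lists
    # indexed by t.  The toy program sums its inputs; because the whole
    # history is part of the semantics, past inputs are recoverable
    # without any programmer-defined data structure.

    def run(inputs):
        pc, x = [0], [0]
        for t, inp in enumerate(inputs):
            x.append(x[t] + inp)      # transition: x := x + input(t)
            pc.append(pc[t] + 1)
        return pc, x

    pc, x = run([3, 1, 4])
    assert x[3] == 8                  # a property of the whole history
    assert x[2] - x[1] == 1           # "the input at time 1", read off directly

Properties like these are what Elephant allows one to prove directly from the program text and the axioms of the data domain.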

3. Work continued on formalisms for representing knowledge. Ma Xiwen revised McCarthy's set of axioms for representing the rather difficult knowledge puzzle of Mr. S and Mr. P and used FOL to verify the translation of the knowledge problem to a purely arithmetic problem. This work is part of a study of the representation of facts about knowledge that will aid the development of programs that know what people and other programs do and don't know.
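For readers unfamiliar with the puzzle, the purely arithmetic version can be checked by brute force. The sketch below (ours; it assumes one common convention, two numbers between 2 and 99 with sum at most 100, and Ma Xiwen's axioms may fix different bounds) encodes the dialogue: Mr. P knows only the product, Mr. S only the sum; S announces that he knew P couldn't know the numbers, P then knows them, and S then knows them too.

    from collections import defaultdict

    pairs = [(a, b) for a in range(2, 100) for b in range(a, 100) if a + b <= 100]
    by_sum, by_prod = defaultdict(list), defaultdict(list)
    for a, b in pairs:
        by_sum[a + b].append((a, b))
        by_prod[a * b].append((a, b))

    # S: "I knew you didn't know": every split of his sum has an ambiguous product.
    step1 = {p for p in pairs
             if all(len(by_prod[a * b]) > 1 for a, b in by_sum[sum(p)])}
    # P: "Now I know": exactly one pair with his product survives step 1.
    step2 = {p for p in step1
             if sum(q in step1 for q in by_prod[p[0] * p[1]]) == 1}
    # S: "Now I know too": exactly one pair with his sum survives step 2.
    answer = [p for p in step2
              if sum(q in step2 for q in by_sum[sum(p)]) == 1]
    print(answer)    # [(4, 13)] under these conventions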

4. McCarthy further developed his method of non-monotonic reasoning called circumscription and related it to the methods developed by Raymond Reiter and by Drew McDermott and Jon Doyle jointly. It turns out that humans do non-monotonic reasoning all the time, and machines must do it in order to behave intelligently. In our opinion, the formalization of non-monotonic reasoning is a major theoretical step preparing for practical AI systems that can avoid excessively specialized formulations of their knowledge of the world.
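In the second-order form that is now standard (our transcription), the circumscription of a predicate P in a sentence A(P) asserts that P is minimal among predicates satisfying A:

    A(P) \;\wedge\; \forall \Phi\,\big[\big(A(\Phi) \wedge \forall x\,(\Phi(x) \rightarrow P(x))\big) \rightarrow \forall x\,(P(x) \rightarrow \Phi(x))\big]

A conclusion drawn by circumscription holds in all models where the extension of P is minimal; adding new facts can destroy that minimality and retract the conclusion, which is exactly what makes the reasoning non-monotonic.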

A conference on non-monotonic reasoning was held at Stanford in November 1978, and its proceedings are being published as a special issue of the journal Artificial Intelligence.

Carolyn Talcott continued her studies in preparation for writing a dissertation that will combine the methods of recursive function theory with the programming techniques of LISP. Hopefully, it will permit the formulation and proof of more complex facts about more complex programs.

Lewis Creary continued his work on the epistemology of artificial intelligence, and on the design of an advice-taking problem solver.

On the epistemological side, new ideas were developed for dealing with:
a) causal reasoning and the frame and qualification problems,
b) the formal representation of actions,
c) epistemic feasibility of actions and the need to plan for the availability of knowledge, and
d) the structuring, classification, and use of domain-specific knowledge within a general problem solving system.

Work on the advice-taking problem solver was concentrated for the most part on high-level design issues and associated theoretical questions. A tentative high-level design and long term plan were constructed, incorporating the epistemological ideas mentioned above, as well as new theoretical ideas concerning:
e) problematic goal seeking, goal refinement, and achievement planning,
f) advice seeking and taking as a means of program development and selective knowledge acquisition, and
g) complex motivational structures for use in problem solving.

References

[1] Cartwright, R.S., A Practical Formal Semantic Definition and Verification System for Typed Lisp, Ph.D. Thesis, Computer Science Department, Stanford University, Stanford, California, 1976.

[2] Cartwright, Robert and John McCarthy, Recursive Programs as Functions in a First Order Theory, in E. Blum and S. Takasu (eds.), Proceedings of the International Conference on Mathematical Studies of Information Processing, Springer Publishers, 1978.

[3] Cartwright, Robert and John McCarthy, First Order Programming Logic, Proc. 6th Annual ACM Symposium on Principles of Programming Languages, San Antonio, Texas, January 1979.

2.2 First Order Logic

During the past year FOL group efforts have been devoted mainly to the following activities: 1) writing and giving talks about projects; 2) doing many new experiments, some of which are described below; 3) planning for the next version of FOL and beginning its implementation (this has taken the form of a new implementation of LISP, which is described below); 4) using the Prawitz normalization theorem to automatically specialize bin packing algorithms; 5) the invention (by Ketonen and Weyhrauch) of a new decision procedure for a subset of predicate logic.

1) Publications are listed below. The bulk of the other writing consists of a set of lecture notes by Richard Weyhrauch, presently about 125 pages long, which he hopes will become a monograph on FOL.

2) Weyhrauch is preparing a paper that contains many diverse examples of uses of FOL to express currently interesting problems discussed by AI people. These include non-monotonic logic, morning star/evening star, meta-theory, dynamic introduction of new reasoning principles, reasoning about belief, procedural vs. declarative representation, Not vs. Thnot, and reasoning about inconsistencies. These examples will demonstrate the breadth of the FOL epistemology. In addition, Luigia Aiello did extensive work demonstrating how metatheory can be used to represent elementary algebra in a reasonable way. This work will be presented at this year's automatic deduction workshop.

3) Planning for the next version of FOL has taken several directions. The first amounts to discussions about what the underlying logic of the next version should look like. The second involves how to integrate Chris Goad's work on normalization into the new FOL system. The third is what kind of programming language FOL should be written in. The most important issue here is how we intend to implement the FOL evaluator in such a way that the FOL code becomes self axiomatizing and we can use the FOL evaluator itself as our basic interpreter. This discussion is central to Carolyn Talcott's thesis. It has resulted in Weyhrauch producing a new assembly language programmed version of LISP which is designed to be theoretically smooth enough that an axiomatization of its properties can be carried out by Carolyn Talcott.

4) The work of producing a computationally efficient implementation of normalization was completed early in the year, and a series of experiments was carried out whose purpose is to assess the practical applicability of proofs as descriptions of computation. Chris Goad has chosen a class of bin packing problems as a test domain. The work involved both building up a collection of proofs which established some computationally useful facts about the domain, and investigating the computational efficiency aspects of these proofs. This work has resulted in Chris finishing his Ph.D. research, which he is currently writing up as a thesis.

5) Ketonen worked out details of, and greatly extended, an idea of Weyhrauch's about the possibility of a new decision procedure for a subset of the predicate calculus, which they are now calling the direct part of first order logic. This algorithm is currently being coded by Ketonen.

Publications

[1] L. Aiello and G. Prini, An efficient interpreter for the LAMBDA calculus, to be published in Journal of Computer and System Sciences, 1979.

[2] G. Prini, Applicative parallelism, submitted for publication in Journal of the ACM, 1979.

[3] L. Aiello and G. Prini, Design of a personal computing system, to be published in Proceedings of the Annual Conference of the Canadian Information Processing Society, Victoria (British Columbia), May 12-14, 1980.

[4] L. Aiello and G. Prini, Operators in LISP: some remarks on Iverson's operators, to be published in ACM Transactions on Programming Languages and Systems, 1980.

[5] G. Prini, Explicit parallelism in LISP-like languages, submitted to 1980 LISP Conference, Stanford University, 24-27 August 1980.

[6] L. Aiello, Evaluating functions defined in first-order logic (tentative title), submitted to Logic Programming Conference, Budapest, July 1980.

[7] L. Aiello and R. Weyhrauch, Using meta-theoretic reasoning to do algebra, Proc. Automatic Deduction Conference, Les Arcs (France), July 8-11, 1980.

[8] R. Weyhrauch, Prolegomena to a theory of mechanized formal reasoning, to appear in AI Journal special issue on non-monotonic logic.

[9] C. Goad, Proofs as descriptions of computations, Proc. Automatic Deduction Conference, Les Arcs (France), July 8-11, 1980.

2.3 Mathematical Theory of Program Testing

This thesis by Martin Brooks describes how testing can be used to show a program's correctness. The scenario for using testing to show correctness is that the programmer writes a program P which is either correct, or which differs from the correct program by one or more errors in a specified class E of errors. The theory relates:
(1) P's structure,
(2) the combinatorics of how errors in E could have led to P (incorrectly) being written,
(3) the type of program behavior that the programmer will observe when testing, for example the program's output, or a trace, and
(4) the characteristics of "complete" test data, which is guaranteed to reveal the presence of errors in E, assuming all errors come from E.


The theory is developed at a general level and is then specialized to the case of recursive programs having only simple errors, where the programmer observes a trace of each test run. A method is devised for generating test data, based on this application of the theory. The generated test data is "complete", meaning that if the only errors in the recursive program are of the specified types, then the test data will reveal the errors through at least one incorrect trace.

Viewed contrapositively: assuming that the written program P is "almost correct", that is, differs from the correct program at most by errors of the specified simple types E, if the trace of each test run is as the programmer intended, then the program must be correct. The test data generation method has been implemented (in Maclisp) and examples of its operation are given.
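The flavor of the method can be conveyed by a small sketch (ours, in Python rather than Brooks's Maclisp, with a made-up two-element error class; the real theory compares traces, while comparing outputs suffices here):

    def fact(n):                        # the program P as written
        return 1 if n == 0 else n * fact(n - 1)

    def mutant_base(n):                 # error from E: wrong base value
        return 0 if n == 0 else n * mutant_base(n - 1)

    def mutant_comb(n):                 # error from E: + instead of *
        return 1 if n == 0 else n + mutant_comb(n - 1)

    def complete_test_data(program, mutants, candidates):
        """For each possible error, pick an input that would reveal it."""
        return [next(x for x in candidates if program(x) != m(x))
                for m in mutants]

    tests = complete_test_data(fact, [mutant_base, mutant_comb], range(5))
    # If fact behaves as intended on these inputs, and its only possible
    # errors come from E, then fact must be correct.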

Reference

[1] Martin Brooks, Determining Correctness by Testing, Stanford AI Memo AIM-336, June 1980.

2.4 Proofs as Descriptions of Algorithms

Chris Goad is currently completing his Ph.D. thesis, titled "Computational uses of the manipulation of formal proofs". The thesis concerns the use of proofs as descriptions of algorithms. In particular, a proof in an appropriate system of a formula of the form "for all x there exists a y such that R(x,y)" can be used to express an algorithm A which satisfies the "specification" R in the sense that, for each input n, R(n,A(n)) holds. The proof can be "executed" by use of any of several methods derived from proof theory, for example, by a proof normalization procedure, or by extraction of "code" from the proof by means of one of the functional interpretations.
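A minimal illustration of this reading of proofs (ours, not Goad's proof system): a constructive proof of "for all x there exists y such that R(x,y)" carries a witness function, and "executing" the proof amounts to running that function.

    def R(x, y):                       # specification: y is even and exceeds x
        return y > x and y % 2 == 0

    def witness(x):
        # the case analysis in the proof becomes a conditional in the program
        return x + 2 if x % 2 == 0 else x + 1

    for n in range(100):
        assert R(n, witness(n))        # R(n, A(n)) holds for each input n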

A proof differs from a conventional program expressing the same algorithm in that it formalizes more information about the algorithm than does the program. This additional information expands the class of transformations on the algorithm which are amenable to automation. For example, there is a class of "pruning" transformations which improve the computational efficiency of a natural deduction proof by removing unneeded case analyses. These transformations make essential use of dependency information which finds formal expression in a natural deduction proof, but not in a conventional program. Pruning is particularly useful in adapting general purpose algorithms to special situations.
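A rough illustration of pruning (ours, far simpler than the transformations in the thesis): if the specialized situation guarantees that one branch of a case analysis can never be taken, the dependency information licenses deleting that branch, and the specialized program does strictly less work.

    # General bin-packing step, with a case analysis on unpackable items.
    def place(bins, item, capacity):
        if item > capacity:                  # case A: the item fits nowhere
            raise ValueError("unpackable item")
        for b in bins:                       # case B: first-fit placement
            if sum(b) + item <= capacity:
                b.append(item)
                return bins
        bins.append([item])
        return bins

    # Specialization to inputs proved to satisfy item <= capacity: case A
    # is unreachable, so the pruned version drops the test entirely.
    def place_pruned(bins, item, capacity):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                return bins
        bins.append([item])
        return bins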

Goad has developed efficient methods for the execution and transformation of proofs, and has implemented these methods on the Stanford Artificial Intelligence Laboratory's PDP-10 computer. He has done a series of experiments on the specialization of a bin-packing algorithm. In these experiments, substantial increases of efficiency were obtained by use of the additional "logical" information contained in proofs.

The general objective of Goad's work has been the development of improved technology for manipulating methods of computation. In his thesis, he has exhibited new automatic operations on algorithms, operations which are made possible by the use of formal proofs to describe computation.

Reference

[1] C. Goad, Proofs as Descriptions of Computations, Proc. Automatic Deduction Conference, Les Arcs (France), July 8-11, 1980.



3. Mathematical Theory of Computation

Personnel: Zohar Manna.

(a) A Deductive Approach to Program Synthesis (references [2, 3, 6])

A framework for program synthesis has been developed that relies on a theorem-proving approach. This approach combines techniques of unification, mathematical induction, and transformation rules within a single deductive system. We investigated the logical structure of this system and considered the strategic aspects of how deductions are directed. Although no implementation exists, the approach is machine oriented and ultimately intended for implementation in automatic synthesis systems.
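Of the three ingredients, unification is the easiest to exhibit in miniature. The sketch below (ours, not Manna and Waldinger's deductive system; the occurs check is omitted for brevity) unifies first-order terms, with variables written as strings beginning with "?" and compound terms as tuples:

    def is_var(t):
        return isinstance(t, str) and t.startswith("?")

    def resolve(t, subst):
        while is_var(t) and t in subst:      # follow variable bindings
            t = subst[t]
        return t

    def unify(a, b, subst=None):
        """Return a substitution making a and b equal, or None."""
        if subst is None:
            subst = {}
        a, b = resolve(a, subst), resolve(b, subst)
        if a == b:
            return subst
        if is_var(a):
            return {**subst, a: b}           # (occurs check omitted)
        if is_var(b):
            return {**subst, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None

    # unify f(?x, b) with f(a, ?y)  ==>  {'?x': 'a', '?y': 'b'}
    print(unify(("f", "?x", "b"), ("f", "a", "?y")))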

(b) The Modal Logic of Programs (reference [5])

We explored the general framework of Modal Logic and its applicability to program reasoning. In relating the basic concepts of Modal Logic to the programming environment, we adopted the temporal interpretation of Modal Logic: the concept of "world" corresponds to a program state, and the concept of "accessibility relation" corresponds to the relation of derivability between states during execution. A variety of program properties expressible within the modal formalism was investigated.

The first axiomatic system studied, the "sometime" system, is adequate for proving total correctness and "eventuality" properties. However, it is inadequate for proving "invariance" properties. The stronger "nexttime" system, obtained by adding the next operator, was shown to be adequate for invariances as well.
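In now-standard temporal notation (our paraphrase of the properties named above), with \Box read "always", \Diamond "sometime", and \bigcirc "next":

    \varphi \rightarrow \Diamond \psi                                  (eventuality, e.g. total correctness)
    \varphi \rightarrow \Box I                                         (invariance)
    \big(I \wedge \Box(I \rightarrow \bigcirc I)\big) \rightarrow \Box I   (temporal induction)

The "sometime" system proves properties of the first kind; the induction rule, which needs the next operator, is what makes invariances provable in the nexttime system.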

We first applied these systems to the analysis of deterministic programs, then we examined the implications of nondeterminism, and finally investigated the extensions necessary to handle concurrency.

(c) A Situational-Calculus Approach to Problematic Features of Programming Languages (reference [1])

Certain features of programming languages, such as data structure operations and procedure call mechanisms, have been found to resist formalization by classical techniques. We developed an approach based on a situational calculus that makes explicit reference to the states of a computation. In this calculus, the evaluation of an expression yields a value and produces a new state. For each state, a distinction is drawn between an expression, the location of its value, and the value itself.

Within this conceptual framework, many of the "problematic" features of programming languages can be described precisely, including operations on such data structures as arrays, pointers, lists, and records, and such procedure call mechanisms as call-by-value, call-by-reference, and call-by-name. No particular obstacle is presented by aliasing between variables or by recursion in procedure calls. The framework provides an expressive vocabulary for defining the semantics of programming-language features. It is amenable to implementation in program verification or synthesis systems.
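A toy version of the calculus's evaluation relation can be sketched as follows (ours, not Manna and Waldinger's formalism): evaluation takes an expression, an environment mapping names to locations, and a state mapping locations to values, and returns both a value and a new state.

    def evaluate(expr, env, state):
        """Return (value, new_state) for a tiny expression language."""
        op = expr[0]
        if op == "const":                    # ("const", k)
            return expr[1], state
        if op == "var":                      # name -> location -> value
            return state[env[expr[1]]], state
        if op == "assign":                   # ("assign", name, expr)
            val, state = evaluate(expr[2], env, state)
            new_state = dict(state)
            new_state[env[expr[1]]] = val    # update the location, not the name
            return val, new_state
        raise ValueError(op)

    # Aliasing: x and y name the same location, and nothing special is needed.
    env = {"x": "loc0", "y": "loc0"}
    _, s1 = evaluate(("assign", "x", ("const", 5)), env, {"loc0": 1})
    val, _ = evaluate(("var", "y"), env, s1)
    assert val == 5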

(d) Synchronous Schemes and Their Decision Problems (reference [4])

A class of schemes called synchronous schemes was defined. A synchronous scheme may have several variables but requires the values assigned to these variables to be synchronized during all computations. This means that in every symbolic computation the differences between the heights of the Herbrand values of the variables are kept bounded. This boundedness ensures a convergent "annotate and unwind" transformation which yields an equivalent fully annotated free scheme. As a result, the problems of convergence, divergence, equivalence, and other properties are decidable for schemes in this class. Membership in this class for a given bound is decidable. The class can be extended by allowing "reset statements", which cause all the variables to be reset to constant values, as well as dead variables, which from a certain point on remain constant and hence lag behind the other variables.


The class of synchronous schemes contains, as special cases, the known classes of Ianov schemes, one-variable schemes, and progressive schemes, for which equivalence and other properties are decidable.

Recent Publications

1. Z. Manna and R. Waldinger, Problematic Features of Programming Languages: A Situational-Calculus Approach, in preparation.

2. Z. Manna, Lectures on the Logic of Computer Programming, SIAM Press (Feb. 1980).

3. Z. Manna and R. Waldinger, A Deductive Approach to Program Synthesis, ACM Transactions on Programming Languages and Systems, vol. 2, no. 1 (Jan. 1980), pp. 90-121.

4. Z. Manna and A. Pnueli, Synchronous Schemes and Their Decision Problems, Proceedings Seventh ACM Symposium on Principles of Programming Languages, Las Vegas (Jan. 1980), pp. 62-67.

5. Z. Manna and A. Pnueli, A Modal Logic of Programs, Proceedings International Conference on Automata, Languages and Programming, Graz, Austria (July 1979), invited paper.

6. Z. Manna and R. Waldinger, Synthesis: Dreams => Programs, IEEE Transactions on Software Engineering, vol. SE-5, no. 4 (July 1979).


4. Program Verification

Personnel: David C. Luckham, Derek C. Oppen, Friedrich W. von Henke, Steven M. German, Wolfgang Polak, Richard Karp, Howard Larsen, Peter Milne.

The research of the Program Analysis and Verification Group is directed towards the development of new programming methods and automated programming aids. The goal is efficient production of very reliable systems programs, including compilers and operating systems, and efficient maintenance of such programs.

The group is actively conducting research in three main areas:

1. Design and implementation of interactive program analyzers.

2. Design of a high level programming language and associated specification language for multi-processing. This language is equivalent to an extensive ADA subset but in addition enforces a rigorous specification standard.

3. Application of program analyzers, and particularly verifiers, to such programming problems as debugging, documentation, proof of correctness, and analysis of modifications to code and specifications.

Systems that automate or partially automate the analysis of properties of programs may be collectively named "program analyzers". The group has implemented two analyzers, the Stanford Pascal Verifier and the Runcheck system. The Stanford verifier is intended for interactive use as an aid to checking the correctness of programs. It automates methods for analysing the consistency of a program with its documentation. It is being distributed on a limited basis to other research laboratories for evaluation and for experimental use as a programming aid to debugging and structured programming. Runcheck is a development of the verifier for fully automatic analysis of a program for possible common runtime errors. It automates some simple methods for improving program documentation and analysing why verifications fail.
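The central reduction performed by such analyzers can be sketched in a few lines (our illustration, using Python with the sympy library; this is not the verifier's implementation): checking a program S against its documentation {P} S {Q} reduces to proving the verification condition P -> wp(S, Q).

    import sympy

    def wp(stmt, post):
        """Weakest precondition for a tiny assignment language."""
        kind = stmt[0]
        if kind == "assign":                 # ("assign", var, expr)
            return post.subs(stmt[1], stmt[2])
        if kind == "seq":                    # ("seq", s1, s2)
            return wp(stmt[1], wp(stmt[2], post))
        if kind == "if":                     # ("if", cond, s_then, s_else)
            return sympy.And(
                sympy.Implies(stmt[1], wp(stmt[2], post)),
                sympy.Implies(sympy.Not(stmt[1]), wp(stmt[3], post)))
        raise ValueError(kind)

    x, y = sympy.symbols("x y")
    prog = ("seq", ("assign", x, x + 1), ("assign", y, y + x))
    vc = wp(prog, y > 0)     # y + x + 1 > 0: the condition to be proved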

Our experiments with these analyzers have made it clear that verification methods can be applied to analysis of other programming problems, including adequacy of documentation, efficiency of code, and adaptability of existing code to new specifications.

Currently the group is working towards applying analysis based on verification techniques to a very wide range of programs. These include: (i) new kinds of programs previously considered to be beyond the limits of present techniques in automated verification, e.g., complicated pointer manipulating programs, and large programs such as a compiler; (ii) programs using new language constructs including modules, concurrent processes, and message passing constructs.

This has required a research effort in design of programming languages and specification (or assertion) languages, and in programming methods, particularly in the area of multi-processing. This research is being carried out with very careful attention to the ADA design effort [4, 16], and all implementation and verification results will be directly applicable to ADA.

The references contain some of the group's earlier work in areas 1 and 3 (above) which has been published (e.g., [3, 7, 13, 17, 21, 22, 25, 27, 28]). Reference [37] is Edition 1 of the Verifier User Manual.

Accomplishments

1. Verifier System

1.1 The group has implemented a verifier for almost the full Pascal language (exclusions mainly concern floating point arithmetic).


1.2 The user manual [37] contains an introduction to program verification and examples illustrating the use of the verifier as an aid in debugging, documentation, and structured programming. The manual has been distributed widely and is now in a second printing.


1.3 The verifier has recently been introduced at several other ARPAnet sites to test its portability and to obtain some preliminary feedback from other user groups. A version for use over the net is provided on the Stanford SCORE computer. It has been used over the net by several user groups, including CSIRO, Canberra, Australia and the Norwegian Defense Research Establishment, Oslo, Norway. Export versions are being used as a basis for courses in program verification at three or more universities in the USA and two universities in Australia.

1.4 We have begun distribution of the verifier on a limited basis both to government projects interested in security aspects of systems programs, and to industrial research laboratories interested in programming languages, specification languages, software tools, and verification applied to products that have very high reliability requirements.

1.5 A library of documented and verified programs is being built and is available over the ARPAnet from SAIL on the directory [EX,VER] and from SU-SCORE on the directory <CSD.VER>.

The verifier is an experimental prototype system, and it is hoped that this distribution (1.1 - 1.5 above) will generate feedback essential to motivating development and improvements in automated verifiers and their applications.

2. Specification Languages

The language accepted by the verifier extends Pascal by inclusion of Modules in a form compatible with the DOD Ada language design [16] (see also DOD Steelman [4]).

2.1 A theory of documentation of modules has been worked out (Luckham and Polak [24]). This theory is new and has not been suggested in any previous language design. Its practicality is being tested. The verifier requires modules to be documented. Verification of documented modules has already been successful on some substantial examples. One example is a module implementing Pascal pointer and memory allocation operations using a subset of Pascal whose only complex data type is arrays. These extensions will be documented in the next edition of the user manual.

2.2 The theory of Pascal pointer verification (Luckham and Suzuki [25]) has finally been published in TOPLAS, five years after its implementation in earlier versions of the Stanford verifier and four years after the original A.I. Lab. report. Recent discussions show that some other verification groups are now experimenting with our method of pointer specification and verification by means of reference classes.

2.3 Specifications for multiprocess systems have been studied and developed. A method of specification and an implementable axiomatic semantics of concurrent processes has been given in [23], and also in Karp's Ph.D. thesis. This study is expected to contribute toward the development of documentation standards and automated verifiers for Ada.

3. Runcheck and Programming Language Design

3.1 A special version of the verifier for automatic detection of runtime errors in programs has been implemented. This is called the Runcheck system. Results with an early version of Runcheck have been published (German [7]), and are the most impressive in the area of completely automatic analysis of programs so far. A more comprehensive report on this system is forthcoming [9]. Work on improvements to this system and on automatic documentation of Pascal and Ada programs is continuing.

3.2 Recently, work in this area has also focused on the relationship between verification and language design. For instance, a secure verifiable union type to replace Pascal's variant records has been designed and formally specified. Checking for runtime errors involving unions is a straightforward verification problem.

3.3 Certain very efficient operations have traditionally been excluded from high level languages because errors with them would have serious and confusing side effects. Work has begun on the idea that some of these operations, such as explicit deallocation of dynamic variables and conversion between arbitrary types, can be safely included in a high level language by verifying that they are used correctly.

4. Theorem Provers and Rule Languages.

4.1 The success of these analyzers depends on recent advances made by this group in the theory and implementation of cooperating special purpose decision procedures (Nelson and Oppen [27, 28], Oppen [29, 30]). This new approach to implementing theorem provers is the best method found to date. The present prover automatically handles any quantifier-free formula over the integers, arrays, list structure, and Pascal records. It has automatically proved formulas over 300 lines long which have been produced by the RUNCHECK version of the verifier.
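One ingredient of these procedures, congruence closure for ground equations, fits in a short sketch (ours; Nelson and Oppen's implementation is far more efficient and general). Terms are nested tuples; given a set of equations, we decide whether another ground equation follows.

    import itertools

    def subterms(t, acc):
        acc.add(t)
        if isinstance(t, tuple):
            for s in t[1:]:
                subterms(s, acc)

    def follows(eqs, goal):
        """Decide whether the goal equation follows from the ground eqs."""
        terms = set()
        for a, b in eqs + [goal]:
            subterms(a, terms); subterms(b, terms)
        parent = {t: t for t in terms}       # union-find over all subterms
        def find(t):
            while parent[t] != t:
                t = parent[t]
            return t
        for a, b in eqs:
            parent[find(a)] = find(b)
        apps = [t for t in terms if isinstance(t, tuple)]
        changed = True
        while changed:                       # propagate congruence to a fixpoint
            changed = False
            for s, t in itertools.combinations(apps, 2):
                if (s[0] == t[0] and len(s) == len(t) and find(s) != find(t)
                        and all(find(x) == find(y) for x, y in zip(s[1:], t[1:]))):
                    parent[find(s)] = find(t)
                    changed = True
        return find(goal[0]) == find(goal[1])

    # classic example: f(f(f(a))) = a and f^5(a) = a imply f(a) = a
    f = lambda t: ("f", t)
    t3 = f(f(f("a")))
    t5 = f(f(t3))
    assert follows([(t3, "a"), (t5, "a")], (f("a"), "a"))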

4.2 In developing an underlying theorem proving capability, a second important area of progress has been the design and implementation of a Rule Language for user-supplied lemmas. This facility allows the user to describe data structures not handled by the special purpose provers and to define specification languages, i.e. new concepts used in the specification of programs. Its implementation, the "rulehandler", gives the verifier the ability to use definitions of specification concepts in correctness proofs. Details of the present rulehandler are in the user manual [37] and in [35].

4.3 Experiments have been made in which a specification language for a particular class of programs (e.g., simple database programs) was defined by a set of 20 rules; using this set of rules, the verifier was able to carry out correctness proofs of programs that required proving over 120 verification conditions. It would be completely impractical to carry out such proofs by hand (and exhaustive testing of such programs would also be very difficult). The rule language provides the means of describing the basic knowledge necessary for automatically proving a large number of different (but similar) programs. The design of rule languages is still very much a research area, and we are continuing research to improve the present facility.

5. Verification Experiments

5.1 A number of programs operating on binary trees have been verified as part of a study on top-down development and verification of programs involving manipulation of pointer structures. First results will be reported in [12]. The verified programs constitute the main part of a program implementing the fastest known algorithm for merging balanced binary trees.

5.2 An in-place permutation algorithm presented by D. Knuth in 1971 has been successfully verified. In 1971, Knuth claimed that this algorithm was intractable to automatic verification at that time. A paper [32] describing this proof has been published in IEEE Transactions on Software Engineering.

5.3 In connection with the compiler verification project [34] (described in more detail in Section 2.4), several fairly large proofs have been completed. A scanner, a parser, a type checking program, and a code generator for a Pascal-like language have been verified. These programs total 150 pages of Pascal code; they may well be the largest known set of programs to be verified with the aid of a verification system at this point. Also, the verification conditions are probably the largest formulas ever successfully treated by an automatic theorem prover.

References

[1] Brinch-Hansen, P., The Solo Operating System: A Concurrent Pascal Program, Software - Practice and Experience 6, 1976.

[2] Bochmann, G., and Gecsei, J., A Unified Method for the Specification and Verification of Protocols, Proceedings IFIP Congress 1977, pp. 229-234.

[3] Cartwright, R. and Oppen, D., Unrestricted Procedure Calls in Hoare's Logic, to appear, Acta Informatica. (Also Proc. 5th ACM Symposium on Principles of Programming Languages, ACM, New York, 1978.)

[4] DOD, Requirements for High Order Computer Programming Languages, Department of Defense revised Steelman, July 1978.

[5] Drysdale, R.L., and Larsen, H.J., Verification of Sorting Programs, A.I. Lab. Memo, forthcoming.

[6] Flon, L. and Suzuki, N., Nondeterminism and the Correctness of Parallel Programs, Proc. IFIP Working Conference on the Formal Description of Programming Concepts, St. Andrews, New Brunswick, 1977.

[7] German, S., Automating Proofs of the Absence of Common Runtime Errors, Proc. 5th ACM Symposium on Principles of Programming Languages, ACM, New York, pp. 105-118, 1978.

[8] German, S., Luckham, D.C., and Oppen, D., Proving the Absence of Common Runtime Errors, draft Stanford A.I. Lab. Memo, Stanford University, forthcoming.

[9] German, S., An Automatic System for Proving the Absence of Common Runtime Errors, Stanford A.I. Lab. Memo, Stanford University, forthcoming.

[10] German, S., Verification with Variant Records, Union Types, and Direct Access to Data Representations, Stanford A.I. Lab. Memo, Stanford University, in preparation.

[11] Haynes, G., Verification Issues in an Air Traffic Control System, Texas Instruments Inc. Report, 1979.

[12] v. Henke, F.W., Top-down Development and Verification of Programs, Stanford A.I. Lab. Memo, forthcoming.

[13] v. Henke, F.W. and Luckham, D.C., A Methodology for Verifying Programs, Proc. Int. Conf. on Reliable Software, Los Angeles, California, April 20-24, 1975, pp. 156-164.

[14] v. Henke, F.W. and Luckham, D.C., Verification as a Programming Tool, Stanford A.I. Lab. Memo, forthcoming.

[15] Hoare, C.A.R., Monitors: An Operating System Structuring Concept, CACM 17:10 (1974), pp. 549-557.

[16] Ichbiah, J.D., Krieg-Brueckner, B., Wichmann, B.A., Ledgard, H.F., Heliard, J-C., Abrial, J-R., Barnes, J.G.P., and Roubine, O., Reference Manual for the Green Programming Language, CII Honeywell Bull, March 1979.

[17] Igarashi, S., London, R.L., and Luckham, D.C., Automatic Program Verification I: Logical Basis and its Implementation, Acta Informatica, vol. 4, 1975, pp. 145-182.

[18] Jensen, K. and Wirth, N., Pascal User Manual and Report, Springer-Verlag, New York, 1975.

[19] Kahn, G., The Semantics of a Simple Language for Parallel Programming, Proc. IFIP, North-Holland Publishing Company, Amsterdam, 1974.

[20] Kahn, G. and MacQueen, D., Coroutines and Networks of Parallel Processes, Proc. IFIP, North-Holland Publishing Company, Amsterdam, 1977.

[21] Karp, R., and Luckham, D., Verification of Fairness in an Implementation of Monitors, Proc. 2nd Int. Conf. on Software Engineering, San Francisco, 1976.

[22] Luckham, D.C., Program Verification and Verification-Oriented Programming, Proc. IFIP Congress 77, pp. 783-793, North-Holland Publishing Co., Amsterdam, 1977.

[23] Luckham, D.C., and Karp, R.A., Axiomatic Semantics of Concurrent Cyclic Processes, Stanford Verification Group report, forthcoming; submitted for publication.

[24] Luckham, D. and Polak, W., A Practical Method of Documenting and Verifying Programs with Modules, draft manuscript, A.I. Lab. Memo, forthcoming; submitted for publication (1978).

[25] Luckham, D.C., and Suzuki, N., Automatic Program Verification IV: Proof of Termination within a Weak Logic of Programs, Stanford AI Memo AIM-269, Stanford University, 1975; Acta Informatica, vol. 8, 1977, pp. 21-36.

[26] Luckham, D.C., and Suzuki, N., Verification Oriented Proof Rules for Arrays, Records, and Pointers, Stanford Artificial Intelligence Project Memo 278, Stanford University, April 1976; published as: Verification of Array, Record, and Pointer Operations in Pascal, ACM Transactions on Programming Languages and Systems, Oct. 1979, pp. 226-244.

[27] Nelson, G. and Oppen, D., Fast Decision Procedures Based on Congruence Closure, to appear, JACM. (Available as CS Report No. STAN-CS-77-646, Stanford University, or in Proc. 18th Annual IEEE Symposium on Foundations of Computer Science.)

[28] Nelson, G. and Oppen, D., Simplification by Cooperating Decision Procedures, ACM Transactions on Programming Languages and Systems, Oct. 1979. (Available as CS Report No. STAN-CS-78-652, Stanford University, or in Proc. 5th ACM Symposium on Principles of Programming Languages, ACM, New York.)

[29] Oppen, D., Reasoning about Recursively Defined Data Structures, to appear, JACM. (Available as CS Report STAN-CS-78-678, Stanford University, or in Proc. 5th ACM Symposium on Principles of Programming Languages, ACM, New York.)

[30] Oppen, D., Convexity, Complexity, and Combinations of Theories, to appear, Theoretical Computer Science; Proc. 5th Symposium on Automated Deduction, January 1979.

[31] Owicki, S., Specifications and Proofs for Abstract Data Types in Concurrent Programs, Technical Report 133, Digital Systems Laboratory, Stanford, 1977.

[32] Polak, W., An Exercise in Automatic Program Verification, IEEE Trans. on Software Engineering, vol. SE-5, Sept. 1979.

[33] Polak, W., Program Verification Based on Denotational Semantics, Stanford Verification Group Report, forthcoming.

[34] Polak, W., Theory of Compiler Specification and Verification, Ph.D. Thesis, Stanford University, forthcoming.

[35] Scherlis, W., A Rule Language and its Implementation, A.I. Lab. Memo, in preparation; submitted to the AI Journal.

[36] Scott, D., Outline of a Mathematical Theory of Computation, Technical Monograph PRG-2, Oxford University Computing Laboratory, Oxford, England, 1970.

[37] Stanford Verification Group, Stanford Pascal Verifier User Manual, Edition 1, Stanford Verification Group Report No. 11, Report STAN-CS-79-731, Stanford University, March 1979.

[38] Suzuki, N., Automatic Verification of Programs with Complex Data Structures, Ph.D. Thesis, Computer Science Department, Stanford University, 1976.

[39] Wirth, N., MODULA: A Language for Modular Multiprogramming, ETH Zurich, 1976.


5. Image Understanding

Personnel: Thomas Binford, Barry Soroka
Student Research Assistants: Donald Gennery, Reginald Arnold, Rodney Brooks, Hans Moravec

The program of research of the Image Understanding group (SAIL-IU) is leading to implementing powerful computer vision systems which perform the following tasks in photointerpretation, guidance, and cartography:

monitor aircraft at airfields;
monitor changes in building complexes;
locate vehicles in parking lots and off roads from aerial photos;
reconstruct geometry of building complexes from stereo images to make reference images;
navigate at low altitude with passive sensors.

SAIL-IU capabilities will contribute to the IU-DMA Testbed.

These scenarios cover important tasks which include a broad variety of object classes (aircraft, airfields, buildings, terrain, vehicles) and a range of imaging conditions (high and low altitude, a variety of sensors, ranges of illumination, weather conditions, snow and ground cover). The scenarios provide a test of generality and relevance.

SAIL-IU has made significant progress in several areas of Image Understanding during the contract period:
1. development of ACRONYM, an advanced model-based interpretation system;
2. first demonstration of its interpretation system, ACRONYM, in recognition of a Lockheed L1011 in photos of San Francisco airport;
3. progress in algorithms for a high performance stereo vision system;
4. development of an algorithm for finding "ribbons" in images;
5. progress toward a new algorithm for finding edges in images.


SAIL-IU research has found applications in the Passive Navigation Project and in the Autonomous Terminal Homing program.

5.1 Summary

SAIL-IU research has concentrated on building a powerful interpretation system (ACRONYM) including a stereo vision system. ACRONYM integrates a wide range of image understanding capabilities, including those from other laboratories.

ACRONYM has successfully identified an L1011 in an aerial photograph of San Francisco airport [Brooks 79c]. Figure 1 shows ACRONYM's model of an L1011. Figure 2 shows an aerial photograph of an L1011 at a passenger terminal at San Francisco airport. Figure 3 shows an edge description of the airport scene. Figure 4 shows large "ribbons" described by a "ribbon" finder. On the basis of these ribbons, a tentative identification was made. Figure 5 shows the result of a search for smaller features which verify the identification; these smaller features are horizontal stabilizers and engine pods. Identification of the aircraft would be significant even if it were done by a special purpose program to identify aircraft. The fact that it was done automatically from models by a general purpose model-based vision system makes it a special accomplishment. Implementation of ACRONYM is far along. Relatively powerful versions of all major components exist.

Figure 1. ACRONYM's model of an L1011
Figure 2. Aerial photograph of an L1011 at San Francisco airport
Figure 3. Edge description of the airport scene
Figure 4. Large "ribbons" found by the "ribbon" finder
Figure 5. Smaller features verifying the identification

ACRONYM contains subsystems for geometric modeling, geometric reasoning, prediction, planning, description, and interpretation. Geometric modeling produces models of objects as Object Graphs (figure 1). The geometric modeling system uses generalized cylinder primitives which provide compact representation of complex shapes, including curved surfaces. It produces the symbolic and generic models needed for geometric reasoning. The modeling system has a geometric editor to aid in constructing models. Geometric reasoning is used in the prediction, planning, description, and interpretation subsystems. Geometric reasoning is implemented as a rule-based system which uses Object Graphs, Observability Graphs, and Picture Graphs, which represent models, expectations, and image descriptions.

Prediction is the process of going from symbolic models to signals or to symbolic descriptions closely derived from signals. Prediction produces Observability Graphs, which represent symbolic summaries of expected observables produced by "ribbon" finders, edge finders, and surface and shape descriptor routines. The most useful predictions are quasi-invariant observables, valid for a wide range of viewpoints and valid for classes of objects, i.e. generic with respect to object class and viewpoint.

Interpretation is the process of mapping descriptions of images and surfaces (Picture Graph) to predictions of appearances (Observability Graph) and to objects (Object Graph). Description is the process of going from signal to symbol, from images to edges, ribbons, and surfaces.

A key part of any system is the descriptive component, the subsystem which obtains "ribbon", edge, surface, and shape descriptions from images and sequences of images. The edge finder of Nevatia and Babu of USC was transported to SAIL at considerable effort [Nevatia and Babu 78]. Its output alone was not sufficient, but an algorithm was developed for linking edge elements into well-formed regions [Brooks 79b]. Edges were linked by a best first search, with heuristics to select candidate edges for linking and to prune the search tree. Meanwhile, an extension is underway of the techniques in the Binford-Horn edge finder [Horn 72].

Development of stereo vision has progressed rapidly. A new stereo mapping subsystem has been developed which generates depth maps which are much more complete and more accurate than those of previous systems [Baker 80]. It was preceded by two earlier stereo mapping systems [Gennery 77], [Arnold 78]. Meanwhile, algorithms are being developed for a new stereo mapping system. Significant results have been obtained for geometric constraints on correspondence in stereo images which will lead to powerful and robust stereo matching capabilities.
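The geometric leverage in stereo mapping includes the standard triangulation relation (a textbook identity, not a result of the systems cited): for parallel cameras with focal length f and baseline B, a scene point at depth Z appears in the two images with disparity

    d \;=\; x_l - x_r \;=\; \frac{fB}{Z}, \qquad\text{so}\qquad Z \;=\; \frac{fB}{d},

and the epipolar constraint restricts the search for each match to a single image line, which is one of the geometric constraints on correspondence referred to above.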

5.2 Research Objectives

The Image Understanding program aims to contribute to powerful techniques in support of these missions: 1. Object classification in photointerpretation and photointelligence in the Image Understanding Testbed, in cooperation with the Defense Mapping Agency, relevant to DMA activities. 2. Stereo reconstruction of cultural areas for the Testbed. 3. Stereo reconstruction for making reference maps of cultural areas in support of Autonomous Terminal Homing; stereo image matching and stereo TERCOM (terrain comparison) for guidance in terminal homing. 4. Tracking of linear features and stereo image matching in cultural areas, for navigation with passive sensors.

Technologies which are essential for those missions are: object classification, stereo mapping of cultural areas, stereo matching, and linear feature description. A later section describes scientific problems which are central to those techniques. Interpretation and object classification is an enormous problem to which both predictive and descriptive techniques contribute. Both are being integrated in ACRONYM. Both stereo surface features and linear image feature description are important. While adequate techniques exist for stereo mapping of terrain, stereo mapping of buildings presents difficult problems. Linear feature description is important in its own right, and also supports the other techniques. Recent progress with linear feature description promises significant payoff for all parts of automated photointerpretation. The following paragraphs describe photointerpretation scenarios.

Photointerpretation

In a typical scenario, a photointerpreter (PI) instructs ACRONYM to monitor selected airfields frequently and report the number and type of aircraft. He names models of aircraft and airfields from a database.

ACRONYM monitors building complexes in order to alert a PI to changes which might be significant, while ignoring insignificant changes. A PI selects a building complex to be monitored. ACRONYM builds a depth map and spatial model of the complex. The PI labels some objects in the image as buildings, oil tanks, radar antennas, etc., and helps the system augment or improve its model. ACRONYM incorporates results of stereo mapping of new coverage in its model (sides seen from different views, different shadow clues). ACRONYM maps depth changes and interprets them. The PI instructs ACRONYM to ignore vehicle motion, painting, snow, rain, and vegetation surface changes. ACRONYM alerts the PI when depth changes are observed and presents an interpretation, with a registered stereo pair showing changes in depth.

A PI instructs ACRONYM and the Stanford Stereo System to cue on raised objects which stand above surrounding terrain, or to cue on cultural objects.

ACRONYM monitors an area for the location, number, and types of new buildings. ACRONYM monitors a region for offroad staging areas for vehicles. It reports the number and type of vehicles and their activities.

Traditionally, DMA has produced maps and charts. More and more, DMA needs to provide digital data bases and to provide individualized information to particular users. DMA seeks quicker response, higher throughput, better response to user needs, and greater availability of data. Sensing technology provides a flood of image input which current operations cannot handle within reasonable costs. Recording and presenting results of interpretation pose great problems. Tactical situations require quick turnaround, which is difficult with the current system. DMA plans to change to a digital system to meet these demands. Much of the sensor data will be digital directly. The Image Understanding Testbed and this research are expected to contribute to the demonstration of candidate capabilities and to aid in the evaluation process which determines the design of future DMA systems.

SAIL-IU research is expected to contribute to developing interactive aids which enhance the productivity of PIs by taking over a burden of routine cueing, screening, monitoring, measuring, counting, and recording. Automated systems offer the possibility of extensive, continuous observation of tactical situations, with instantaneous reports.

Passive Navigation and Terminal Homing

SAIL-IU has a subcontract to Lockheed on navigation of low-flying autonomous aircraft with passive sensors. Passive image sensing will improve survivability, and provide a capability to fly a flexible course and to navigate with high accuracy with small memory requirements. The approach is intended to make navigation insensitive to changes and errors in its reference data base.

SAIL-IU is preparing to work with the Autonomous Terminal Homing program in development of improved stereo compilation algorithms, in assessment and integration of stereo algorithms from the Image Understanding community, and in working with contractors of the terminal homing program. One terminal homing contractor acknowledges a major contribution to their work from the Image Understanding program.

Scientific Objectives

A photointerpreter solves a puzzle by piecing together clues from a history of images and background information. He relies heavily on spatial information from stereo imaging and shadows, and spatial knowledge about structures, together with collateral reports about structures. The interpretation system should provide these three capabilities: interpreting spatial structures, using multiple cues, and stereo mapping. The SAIL-IU approach is that multiple cues and collateral reports provide knowledge at different levels about geometric structures, and that using multiple cues requires reasoning among different levels of spatial structure.

The interpretation system should be generalizable in the following sense: interpretation systems for similar tasks should use the same basic system with different models; interpretation systems for widely different tasks should be assembled from a core of common modules and a few modules which are task specific. Algorithms should solve well-defined generic subproblems which are common to a wide range of interpretation tasks. ACRONYM uses modules based on a hierarchy of representations: context level, object level, volume level, surface level, image structure level, and image level. See figures 6 and 7. These representations and the maps between them are the basis for a vision language. The scenarios outlined above cover a broad range of objects and contexts which will help to develop a general system and maintain generality.

The interpretation system should have a natural interface which makes it natural and easy for a PI to set up a new interpretation task. The PI should be able to program a new task in a week without expertise as a vision programmer. Users and ACRONYM communicate in a common language of geometric models, in a high level modeling language. Users supply models of object shape and surface descriptors. The high level modeling language provides a bridge to natural language. SAIL-IU aims to provide the underlying geometric representation of shape and spatial relations upon which to build a linguistic system, but not to work on natural language for geometric description. That is better left to those working in natural language.

The system should have generic interpretation, generic with respect to object class and generic with respect to viewing conditions. That is, it should represent and recognize object classes, e.g. jet passenger aircraft, and should deal with ranges of viewing conditions, e.g. variations in viewing position and orientation, object orientation, illumination, sensors, and surface markings. It is important information that different configurations of the same aircraft are closely related, instead of being separate, unrelated objects.

Summary

ACRONYM includes the following mechanisms:

segmented part/whole representations of generic parts;
interpretation at the level of volumes;
coarse-to-fine interpretation for efficiency;
a clear representation hierarchy;
rule-based spatial reasoning;
a high level modeling language.


Figure 6. Representation Levels
[Diagram: parallel PREDICTION (model, reference) and DESCRIPTION (percept, sensed) hierarchies -- Context, Object, Volume, Surface, Image Structure, Image -- connected through the Planner and Interpreter, with the modeler and user supplying models, and stereo, edge map, and correlation processes supplying sensed image data.]

Figure 7. Representations and Levels

Level             Representation
Context           Objects
Object            Volumes, Surfaces, Functions
Volume            Generalized Cones, Spheres, Surfaces
Surface           Ribbons
Image Structure   Ribbons, Curves, Texture, Vertices
Image Features    Edge elements, Spots
Pixels

Generic interpretation poses a major conceptual challenge. Traditional classification of individual objects is not plausible for cueing and screening buildings and vehicles. In traditional classification, each configuration of a truck with load would be a separate template. It is difficult to anticipate all loads that a truck might carry. Buildings are made up of combinations of basic structure elements; for an unstructured matching approach the complexity is combinatorial in the components, while for a segmented, structured matching approach, complexity is additive. The required generalization is to go from two dimensional to three dimensional representations, from unstructured to structured, segmented matching, and to use generic primitives.

Interpretation for generic viewing conditions is equally important. Volume parts of an object are invariant with respect to viewing conditions, but there is no invariance at the image level. One traditional classification approach is to have a dense sampling of separate views of an object. ACRONYM interprets at the level of volume parts rather than at the image level. This means that parts of an image need to have the same three-dimensional interpretation, not the same appearance; thus images can vary widely.

Efficiency

ACRONYM uses a coarse-to-fine interpretation mechanism which first matches large, obvious parts, then finer detail. The part/whole representation embodies a hierarchy of detail, from coarse to fine. For example, an aircraft has fuselage and wings at the top level; attached to them are engine pods, tail, and horizontal stabilizer. ACRONYM uses this inherent structure as a guide to efficient strategies for identification. It is planned to incorporate a set of rules for efficient ordering of evaluation.

5.3 ACRONYM progress

ACRONYM integrates powerful shape representation with rule-based geometric reasoning in an inherently three-dimensional system. ACRONYM has a geometric reasoning capability based on a rule-based backward-chaining production rule system modeled after MYCIN [Davis 1975]. Contact with the MYCIN group has contributed enormously to this effort. ACRONYM has a number of new capabilities which are advances over previous vision systems. ACRONYM uses three dimensional models and matches at the level of three dimensions. Previous vision systems have been quasi-two-dimensional and have largely interpreted at the level of images [Barrow 76], [Kanade 77]. ACRONYM makes strong use of image shape in the form of ribbons and relations between them in selecting initial candidates for interpretation; previous systems have used largely pointwise properties like color at a pixel for initial hypotheses. ACRONYM addresses generic model classes and generic viewing, and makes extensive use of shape. Representation is central to ACRONYM: analytic representation of the three-dimensional shape of objects and their two-dimensional appearances, and representations of tasks and programs. ACRONYM takes a structural approach to interpretation. ACRONYM represents objects by segmenting them into part/whole graphs whose parts are volumes represented as generalized cones. Generalized cones are snakelike parts whose cross sections change smoothly along a space curve called the axis. Figure 8 shows a collection of generalized cone primitives from ACRONYM. Surfaces are represented as ribbons, which are specializations of generalized cones to surfaces. An aid to interpretation is a formalization of spatial matching.
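The primitive is easy to state computationally (our sketch; ACRONYM's cones allow curved axes and non-circular, deforming cross sections, where this toy version sweeps a circle of varying radius along a straight axis):

    import math

    def generalized_cone(radius_fn, length, n_axis=20, n_circle=16):
        """Surface points of a circular generalized cone along the z axis."""
        points = []
        for i in range(n_axis + 1):
            s = length * i / n_axis            # arc length along the axis
            r = radius_fn(s)                   # cross-section size at s
            for j in range(n_circle):
                a = 2 * math.pi * j / n_circle
                points.append((r * math.cos(a), r * math.sin(a), s))
        return points

    cylinder = generalized_cone(lambda s: 1.0, 10.0)             # constant radius
    tapered = generalized_cone(lambda s: 1.0 - s / 20.0, 10.0)   # cone-like taper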


[Figure 8. Generalized Cone Examples]

Segmentation is a recurring problem at all levels in vision systems. An important limitation on image understanding systems is the "signal to symbol" transformation. A vigorous program is being pursued to develop image shape descriptors and surface shape descriptors. Both edge-based and stereo segmentation are being pursued.

The user inputs models to ACRONYM in a compact high level language with the aid of a model editor. In the predictive paradigm, ACRONYM determines symbolic observables of the appearance of objects from their models, particularly those that are quasi-invariant, i.e. approximately stable over a large range of objects and viewing classes. ACRONYM plans a sequence of observations to tentatively identify the object. ACRONYM then determines model parameters to match models with observed objects in order to make detailed verification.

Development of ACRONYM has stressed goal-directed interpretation (top-down) for problems like aircraft monitoring. Generalized cone representations of parts of objects embody powerful notions of local structure and well-formed objects. In the descriptive paradigm, ACRONYM will use quasi-invariants and generalized cone models to build up ribbon descriptions of images and generalized cone descriptions of spatial structure [Nevatia and Binford 77]. None of the mechanisms of that earlier work have yet been included in ACRONYM; however, that work will be resumed. There is an intimate connection between predictive and descriptive parts in that description uses very general forms of prediction.

ACRONYM has four major parts: modeling system, Predictor and Planner, Interpreter, and image descriptor system.

Predictor and Planner

The Predictor and Planner uses geometric reasoning to predict the appearance of object parts and to make a plan (program) for interpretation. It works from a 3d model of an object and the context of the object (e.g. an aircraft is on the ground on an airfield), from which it derives generic and specific expectations for surface and image observables.

A prototype Predictor and Planner is complete. Sixty rules have been built for the first task, identification of aircraft, enough that ACRONYM identified an L-1011 using the rule base. Most of these rules will be valuable for the next object class, vehicles. We expect that after the first four or five object classes the rule base will be nearly complete and well-structured, and that few additional rules will be necessary for other object classes.

The rule interpreter has been augmented to allow more compact and more powerful rules, and to incorporate the information contained in the Object Graph and Observability Graph. We are extending the rule interpreter with a more general pattern directed invocation scheme. Explicit binding will make a cleaner and simpler structure for the person writing the rules and for meta-rules. Control of the reasoning process now includes forward reasoning with an agenda with preconditions on items. The rule base is being restructured.

The reasoning system is a production rule system with the addition of the Object Graph and the Observability Graph. It is consequent-driven (i.e. backward-chained); initially the system was modeled after MYCIN [Davis 1975]. Much of ACRONYM's knowledge is maintained in the Object Graph and Observability Graph, not just in the rules. Thus, changes were made which make the Predictor and Planner heavily dependent on use of the Object Graph. Changes to the rule interpreter allow a clean way to do within the rules what were previously side effects. The backward chaining strategy finds all rules which might satisfy the current goal; they are tried in turn until one succeeds. If all premises (antecedents) of a rule are satisfied, the side effects are carried out and the consequents are asserted. Previously the rule interpreter could deal only with antecedents and consequents, not side effects; much of the action was outside the rule base, so no reasoning about side effects could be included in writing meta-rules. The new scheme is essential for further development of geometric reasoning.
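That control loop is small enough to sketch. The following is a minimal Python illustration of consequent-driven (backward) chaining with antecedents, consequents, and side effects; the Rule record and fact set are assumptions for the sketch, not ACRONYM's actual structures:

    # Sketch only: backward chaining in the style described above.
    # Assumes an acyclic rule base; `facts` is a set of asserted facts.
    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        antecedents: list                 # goals that must all be satisfied
        consequents: list                 # facts asserted when the rule fires
        side_effects: list = field(default_factory=list)   # callables on facts

    def satisfy(goal, rules, facts):
        """Try to establish `goal` by backward-chaining through `rules`."""
        if goal in facts:
            return True
        for rule in rules:
            if goal in rule.consequents:
                # Rules are tried in turn until one succeeds.
                if all(satisfy(p, rules, facts) for p in rule.antecedents):
                    for effect in rule.side_effects:
                        effect(facts)     # side effects now visible to the rules
                    facts.update(rule.consequents)
                    return True
        return False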

Rules can be categorized into classes: rules about the camera model, rules for deducing simple shapes, rules to calculate relative positions of objects, rules for symbolic arithmetic, rules that build Observability Graph nodes, and rules that transfer values from the Object Graph to the memory of assertions. There are rules to handle quantification. Other rules determine model parameters from image measurements. Thus far, rules deal only with simple generalized cones, flat straight ribbons (such as runways), and right circular cylinders.

Interpreter

The Interpreter is a graph-matching procedure which uses relaxation. It is relatively well-developed; however, minor problems were discovered in working through examples. Changes were made for more efficient matching by instantiating relations between nodes rather than single nodes. The Interpreter knows nothing about geometry, nothing about perception; it is now a domain-independent graph matcher. Progress has been made in redesign of the Interpreter to introduce geometric knowledge and rule-based geometric reasoning, and to divide the effort more between planning in advance and reasoning at the time of interpretation. In the future, the Interpreter will probably merge with the Predictor and Planner. This will be necessary in using rules for efficiency, and in integrating bottom-up descriptive mechanisms. Merging the Interpreter into the rule-based mechanism will make it easier to program.

We are formalizing interpretation in terms of transformational invariance, an extension of the principle of translational invariance which underlies the generalized cone representation.

The Interpreter performs a graph match by first matching nodes of the Observability Graph, then matching arcs. It uses relaxation to establish consistent assignments. The Interpreter is capable of doing much more than the Predictor and Planner can ask of it now; however, it is very hard to program to use that generality. The Interpreter is really programmed by the Observability Graph. Control can be programmed into the Observability Graph, but that form of control is not explicit; it is difficult for the Predictor and Planner to control it indirectly. Thus other forms of control are being considered. It is thought that dynamic planning will extend the power of the interpretation process.
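A toy Python version of node-then-arc matching with relaxation, with user-supplied compatibility predicates, looks like the following. This is a generic discrete-relaxation sketch, not the Interpreter's code:

    # Sketch only: graph matching by relaxation (candidate pruning).
    def graph_match(model_nodes, model_arcs, obs_nodes, node_ok, arc_ok):
        # Node phase: every compatible observed node is a candidate.
        candidates = {m: {o for o in obs_nodes if node_ok(m, o)}
                      for m in model_nodes}
        # Arc phase: relax until the candidate sets stop changing.
        changed = True
        while changed:
            changed = False
            for m1, m2 in model_arcs:
                for o1 in list(candidates[m1]):
                    # o1 survives only if some candidate for m2 supports the arc.
                    if not any(arc_ok(o1, o2) for o2 in candidates[m2]):
                        candidates[m1].discard(o1)
                        changed = True
        return candidates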

Modeling System

The user communicates with the modeling system to build models of objects and their contexts. A high level modeling language has been built which reflects powerful underlying modeling primitives. Models are built of primitives which are generalized cones. Construction is simplified by having a library of prototype volumes and typical spines, cross sections, and sweeping rules from which to build generalized cones. The modeling system helps to construct models by modifying other models. Models are defined with a hierarchy of levels of detail; thus fine detail can be modeled at little cost.

ACRONYM requires models with generic parts, and generic and symbolic predictions of appearance. If ACRONYM were not concerned with interpretation, if it were only a graphics system, conventional modeling and graphics systems would be adequate.

Generalized cones are defined by a cross section, a space curve called the spine, and a sweeping rule which defines the transformation of the cross section along the spine. Cross sections may be made of straight lines and arcs of circles; spines may be made of straight lines and circular arcs; sweeping rules are constant or linear. The model parser accepts descriptions of cross sections, axes, and sweeping rules, with some cases symbolically defined. The class of generalized cones fully implemented includes: cross sections composed of straight lines and circular arcs with straight axes and linear sweeping rules; and convex polygonal cross sections with axes made of straight lines and circular arcs, with linear sweeping rules.
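As a data structure, a cone of this implemented class is just the triple described above. The Python record layout below is an illustrative assumption; the field names and encodings are not ACRONYM's:

    # Sketch only: a generalized cone as cross section + spine + sweeping rule.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class CrossSection:
        # Closed contour of straight lines and circular arcs.
        segments: List[Tuple[str, tuple]]     # ("line", ...) or ("arc", ...)

    @dataclass
    class Spine:
        kind: str                             # "straight" or "circular-arc"
        params: tuple                         # endpoints, or center/radius/angles

    @dataclass
    class GeneralizedCone:
        cross_section: CrossSection
        spine: Spine
        sweep: Callable[[float], float]       # cross-section scale along the
                                              # spine; constant or linear here

    # A right circular cylinder: circular cross section, straight spine,
    # constant sweeping rule.
    cylinder = GeneralizedCone(
        CrossSection([("arc", ((0.0, 0.0), 1.0, 0.0, 360.0))]),
        Spine("straight", ((0, 0, 0), (0, 0, 5))),
        lambda s: 1.0)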

New capabilities simplify the task of the user and expand ACRONYM's modeling primitives. An interactive geometric editor (MODITOR) has been built to assist the user and give him easy control over model building. MODITOR operates at the textual level: it aids the user in editing the Object Graph, filling and changing slots in generalized cone descriptors. Work is underway to make the geometric editor operate at the geometric level of manipulating objects, specifying translations and rotations, sweeping cross sections along curves, etc. There will be a third level, an intelligent editor which uses the rule-based system to manipulate relations between parts in a natural way.

We have implemented a new class of generalized cones in ACRONYM. In order to describe parts of the L-1011 fuselage and especially wings, it was found useful to implement generalized cones with cross sections which are not perpendicular to the cone axis. The need can be seen most clearly in aircraft wings: wing cross sections are natural along streamlines, that is fore and aft, while the wing surface is continuous along the direction of the swept wing, which is not orthogonal to streamlines. We have made progress toward extending our mathematical analysis for representation to a powerful subclass of generalized cones which is complete in the sense of splines, generating surfaces made of pieces which are continuous with continuous tangent and curvature at joints.

The user gets feedback from the modeling system by a graphics display of models. The system has a line display which has back surface suppression but not full hidden surface suppression. Its solution for apparent edges (e.g. the edges of a cylinder which are not true edges) and back surface suppression is analytic and very fast. Work is underway toward a full hidden surface algorithm.

Stereo and Feature Descriptors

Effective surface description involves stereo mapping to produce a depth map, and surface segmentation to produce a symbolic description of surfaces as separate planes, cylinders, etc. Gennery has built a symbolic description system which takes the output of his stereo mapping system, determines the ground plane, segments objects above the ground plane, then describes those objects as ellipsoids or clusters of ellipsoids [Gennery 1979].

Stereo

Baker has developed an edge-based stereo mapping system which searches for consistent edge pairings line-by-line along epipolar lines. It then rejects pairings which violate interline continuity. It determines edges by a very simple step of taking zero crossings of the "laterally inhibited" signal, the difference from the local average. Edges are obtained by zero crossings of 1x7 and 7x1 bar masks. Implementation of the system was motivated by a desire for speed; it executes in about 30 seconds for 256x256 images. Edges are paired on the basis of matching contrast or intensity on one side of the edge. For each edge there is a set of possible matches. Formerly, a branch and bound search process was used to establish correspondence. That has been replaced by a Viterbi algorithm dynamic programming method. The Viterbi algorithm is much faster; however, it returns only a single best solution, which may be error sensitive. That is, the locally optimal solution for single lines may not be globally consistent. A later stage removes some of these errors. The program uses a coarse to fine search procedure like the binary search correlator introduced by [Moravec 77], however limited to single epipolar lines. Branch and bound search and dynamic programming both sought solutions which maximize the number of edges matched; among solutions with equal numbers of edge matches, they sought solutions which minimize the sum of squared errors of intensities of intervals to either side. Pairings for individual epipolar lines are tested for consistency by testing depth continuity. The consistency procedure follows connected edges in both images, calculating local mean and standard deviation of disparity, and removing pairings by a sequence of tests on disparity.
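The core of the pairing step can be written as an order-preserving dynamic program. The Python sketch below maximizes only the number of matches (the tie-breaking on interval intensities described above is omitted), and the edge features and similarity test are illustrative assumptions:

    # Sketch only: order-preserving edge pairing along one epipolar line,
    # a stand-in for the Viterbi / dynamic programming formulation.
    def pair_edges(left, right, similar):
        """left/right: per-edge feature tuples along one scan line;
        similar(a, b): whether two edges are allowed to match.
        Returns the maximum number of order-preserving matches."""
        n, m = len(left), len(right)
        best = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                skip = max(best[i - 1][j], best[i][j - 1])
                take = (best[i - 1][j - 1] + 1
                        if similar(left[i - 1], right[j - 1]) else 0)
                best[i][j] = max(skip, take)
        return best[n][m]

    # Edges as (position, contrast); matched on similar contrast.
    matches = pair_edges([(3, +12), (9, -5)], [(5, +11), (13, -6)],
                         lambda a, b: abs(a[1] - b[1]) <= 3)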

SAIL is designing an advanced stereo mapping system which will produce measurements of surface boundaries accurate to optical and quantization limits (0.12 pixels rms with reasonable contrast). The new stereo system is still in the design phase. It is based on a coarse-to-fine matching scheme first used by Moravec in 1974. That binary search strategy leads to efficiency and also eliminates ambiguities by including context of the area being matched. The new system will use both edge correspondence [Arnold 1978] and area correspondence [Moravec 1977, Gennery 1977].

It will use successive refinement, a different form of coarse-to-fine correspondence: successive iterations concentrate on matching features and areas which have no correspondence yet. The new system will make surface interpretations with an improved measure for interpretation. It will have improved handling of curved edges and will make use of ribbon surfaces to describe curved surfaces.

We are making a theoretical analysis of consistency constraints which follow from obscuration and simplicity assumptions. We are also making a statistical analysis of noise effects for line finding to maintain tight constraints. Frame to frame coherence is being incorporated for image sequences.

Edge Segmentation

Progress has been made in extending concepts of the Binford-Horn edge finding and curve linking program. Binford-Horn was based on several phases: 1. Detect edges using directional differences of the "laterally-inhibited" signal (subtraction of the local average, similar to difference of Gaussians but with step functions). 2. Localize edges from zero crossings of the "laterally inhibited" signal; in essence, detection based on a large odd component, localization at the zero of the even component. 3. Reject or keep segments by tracing them with a sequential detection process.
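In one dimension the first two phases reduce to a few lines of modern code. This sketch assumes a box-filter window (width 7, echoing the bar masks mentioned above) and illustrates the idea only; it is not the Binford-Horn implementation:

    # Sketch only: lateral inhibition = signal minus local average;
    # candidate edges lie at zero crossings of that difference.
    import numpy as np

    def laterally_inhibited(signal, window=7):
        kernel = np.ones(window) / window          # step-function smoothing
        return signal - np.convolve(signal, kernel, mode="same")

    def zero_crossings(x):
        # Indices where the inhibited signal changes sign.
        return np.where(np.diff(np.sign(x)) != 0)[0]

    row = np.array([10.0, 10, 10, 10, 60, 60, 60, 60])
    edges = zero_crossings(laterally_inhibited(row))
    # The crossing nearest the step (between indices 3 and 4) is the edge;
    # crossings at the array borders are padding artifacts.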

An intermediate level edge-finding scheme is being implemented based on simplification of Binford-Horn. This intermediate level system is aimed toward VLSI implementation.

The Goal-Directed Ribbon Finder

Brooks has made a goal-directed module meant to be guided by ACRONYM [Brooks 1979b]. It is intended to be invoked many times with different heuristics and parameters, based on previous results from edge linking and interpretation. Thus, it is a family of goal-directed descriptor mechanisms. It uses line segments from [Nevatia and Babu 78]. An example is shown in Figure 3. An example of its ribbon output is shown in Figure 4.

There are four phases of the algorithm: 1. linking segments into candidate contours; 2. discarding unsatisfactory candidates; 3. finding ribbon descriptors for contours and the regions they bound; and 4. disambiguating redundant descriptions of a single region of the image.

Linking edges into a contour is formulated as a tree search. The linking phase selects starting nodes for the search, e.g. restricting starting segments to a part of an image, or a certain direction. Edge segments are hashed by their image location; tests for proximity and image location are thus efficient. A best-first search is made from the starting node. Candidates for the tree search are generated by heuristics; candidate nodes are pruned by another set of heuristics. Search is limited by expanding only a given number of nodes (typically 2). Heuristics for generating candidates are primarily smooth continuation and proximity. Pruning heuristics include a crude test for convexity.
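A stripped-down version of that search, with the candidate-generation and scoring heuristics left as caller-supplied functions (illustrative assumptions, not Brooks' code):

    # Sketch only: best-first linking of edge segments into a contour.
    import heapq, itertools

    def link_contour(start, neighbors, score, max_expansions=200):
        """neighbors(seg): candidate continuations (smooth, nearby segments);
        score(path): higher for better partial contours."""
        tie = itertools.count()                  # tie-breaker for the heap
        frontier = [(-score([start]), next(tie), [start])]
        best = [start]
        for _ in range(max_expansions):
            if not frontier:
                break
            neg, _, path = heapq.heappop(frontier)
            if -neg > score(best):
                best = path
            for seg in neighbors(path[-1]):
                if seg not in path:              # crude pruning heuristic
                    new = path + [seg]
                    heapq.heappush(frontier, (-score(new), next(tie), new))
        return best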

Contours are linked by local properties. A set of heuristics screens these contours, and ribbon descriptors are fit to contours which survive. The program describes the class of ribbons bounded by two straight lines. Boundary directions are found from maxima of a histogram of edge directions; two straight lines are fit to those boundary segments.
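The boundary-direction step might look like the following (the bin width and the least-squares line fit are assumptions for the sketch):

    # Sketch only: dominant boundary directions from an edge-direction
    # histogram, plus a least-squares line fit to the chosen segments.
    import numpy as np

    def dominant_directions(angles_deg, bins=36):
        hist, edges = np.histogram(np.asarray(angles_deg) % 180.0,
                                   bins=bins, range=(0.0, 180.0))
        strongest = np.argsort(hist)[-2:]          # the two histogram maxima
        return [(edges[i] + edges[i + 1]) / 2.0 for i in strongest]

    def fit_boundary_line(points):
        # y = a*x + b through the endpoints of the selected segments.
        x, y = np.asarray(points, dtype=float).T
        a, b = np.polyfit(x, y, 1)
        return a, b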

Often there will be multiple ribbons which overlap the same area. In some cases, several ribbons will be useful. In the example shown, the rear engine pod and the fuselage overlap; both are useful. Disambiguation is done by the Interpreter.

About 30 seconds of KL-10 CPU time was required for the example shown. The system has developmental baggage and has not been optimized.

ACRONYM performance

The total system for identification of the L-1011 required about 110k 36-bit words. Some of this is necessary at the time of model-building, some at the time of planning, some at execution time. The modeler contains about 5k of parser code and 23k for graphics and tv buffer. The predictor and planner requires about 2k of code, the matcher about 10k of code, the ribbon finder about 7k of code. The LISP system requires about 45k with record package and i/o. Data structures require about 15k. A guess is that 200 rules will be required for aircraft monitoring and eventually 500 for a complete system. At an average of 30 list cells for each rule, 500 rules would occupy 15,000 cells, or 15-30k words for the rule base at one or two words per cell. A minimal execution system without planning is about 70k. Since then ACRONYM has grown considerably. The ribbon finder requires about 30 sec, matching about 1 sec; the Predictor and Planner requires 1 second uncompiled, the modeler 1 second with file i/o of 1 second, and graphics of 0.5 seconds.

Passive Navigation

The Stanford stereo system is being used in the Navigation with Passive Sensors program [Firschein 1979]. It has obtained automatic registration and a camera model for pairs of images from the Night Vision Lab terrain model. Lockheed is evaluating the stereo system for two roles: first, terrain mapping of relative elevations for use in navigation by terrain matching (passive TERCOM); second, to measure V/H in a navigation system which flies by dead reckoning between landmarks. In the latter role, the stereo system can function on a non-stabilized platform. Baker has demonstrated good stereo reconstruction of terrain from the NVL terrain model. Terrain elevation data are not yet available from NVL to verify quantitatively the quality of the elevation reconstruction.

The other part of the passive navigation program is tracking of linear features. The work on extensions of Binford-Horn edge-finding and curve-linking is of key importance here; however, the work is not yet complete.

5.4 Autonomous Navigation

Hans Moravec recently completed a dissertation on visual navigation based on experiments with a robot vehicle. The Stanford AI Lab cart is a card-table sized mobile robot controlled remotely through a radio link, and equipped with a TV camera and transmitter. A computer has been programmed to drive the cart through cluttered indoor and outdoor spaces, gaining its knowledge about the world entirely from images broadcast by the onboard TV system.


The cart deduces the three dimensional location of objects around it, and its own motion among them, by noting their apparent relative shifts in successive images obtained from the moving TV camera. It maintains a model of the location of the ground, and registers objects it has seen as potential obstacles if they are sufficiently above the surface, but not too high. It plans a path to a user-specified destination which avoids these obstructions. This plan is changed as the moving cart perceives new obstacles on its journey.

The system is moderately reliable, but very slow. The cart moves about one meter every ten to fifteen minutes, in lurches. After rolling a meter, it stops, takes some pictures and thinks about them for a long time. Then it plans a new path, executes a little of it, and pauses again.

The program has successfully driven the cart through several 20 meter indoor courses (each taking about five hours) complex enough to necessitate three or four avoiding swerves. A less successful outdoor run, in which the cart swerved around two obstacles but collided with a third, was also done. Harsh lighting (very bright surfaces next to very dark shadows) resulting in poor pictures, and movement of shadows during the cart's creeping progress, were major reasons for the poorer outdoor performance. These obstacle runs have been filmed (minus the very dull pauses).

A typical run of the avoider system begins with a calibration of the cart's camera. The cart is parked in a standard position in front of a wall of spots. A calibration program notes the disparity in position of the spots in the image seen by the camera with their position predicted from an idealized model of the situation. It calculates a distortion correction polynomial which relates these positions, and which is used in subsequent ranging calculations.
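The fit itself is ordinary linear least squares over a polynomial basis. The quadratic basis below is an assumption for illustration; the actual degree of the polynomial is not stated here:

    # Sketch only: fit a distortion-correction polynomial from observed vs.
    # predicted spot positions, by linear least squares.
    import numpy as np

    def fit_distortion(observed, predicted):
        """observed, predicted: (N, 2) arrays of image coordinates.
        Returns coefficients mapping observed -> predicted (quadratic basis)."""
        x, y = observed[:, 0], observed[:, 1]
        basis = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        coef, *_ = np.linalg.lstsq(basis, predicted, rcond=None)
        return coef              # (6, 2): one column per corrected coordinate

    def correct(points, coef):
        x, y = points[:, 0], points[:, 1]
        basis = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        return basis @ coef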

The cart is then manually driven to its obstacle course. Typically this is either in the large room in which it lives, or a stretch of the driveway which encircles the AI lab. Chairs, boxes, cardboard constructions and assorted debris serve as obstacles in the room. Outdoors the course contains curbing, trees, parked cars and signposts as well.

The obstacle avoiding program is started. It begins by asking for the cart's destination, relative to its current position and heading. After being told, say, 50 meters forward and 20 to the right, it begins its maneuvers.

It activates a mechanism which moves the TV camera, and digitizes about nine pictures as the camera slides (in precise steps) from one side to the other along a 50 cm track.

A subroutine called the interest operator (described below) is applied to one of these pictures. It picks out 30 or so particularly distinctive regions (features) in this picture. Another routine called the correlator looks for these same regions in the other frames. A program called the camera solver determines the three dimensional position of the features with respect to the cart from their apparent movement from image to image.

The navigator plans a path to the destination which avoids all the perceived features by a large safety margin. The program then sends steering and drive commands to the cart to move it about a meter along the planned path. The cart's response to such commands is not very precise.

After the step forward the camera is operated as before, and nine new images are acquired. The control program uses a version of the correlator to find as many of the features from the previous location as possible in the new pictures, and applies the camera solver. The program then deduces the cart's actual motion during the step from the apparent three dimensional shift of these features.

The motion of the cart as a whole is larger and less constrained than the precise slide of the camera. The images between steps forward can vary greatly, and the correlator is usually unable to find many of the features it wants. The interest operator/correlator/camera solver combination is used to find new features to replace lost ones.

The three dimensional location of any new features found is added to the program's model of the world. The navigator is invoked to generate a new path that avoids all known features, and the cart is commanded to take another step forward.

This continues until the cart arrives at its destination or until some disaster terminates the program.
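Structurally, one cycle of the avoider is the following Python sketch. Every capability is passed in as a callable, since this is only a summary of the control flow described above, not Moravec's program:

    # Sketch only: one perceive-plan-move cycle of the obstacle avoider.
    def avoider_step(world, pose, destination, sense, track, solve, plan, drive):
        """world: accumulated 3-d feature positions; pose: cart position/heading.
        sense(): nine images from the sliding camera.
        track(images): feature tracks from the interest operator + correlator.
        solve(tracks): camera solver, giving 3-d feature positions.
        plan(world, pose, destination): path avoiding all known features.
        drive(path): move about a meter; returns the cart's new pose."""
        images = sense()
        tracks = track(images)
        world.extend(solve(tracks))           # register new potential obstacles
        path = plan(world, pose, destination)
        return drive(path)                    # imprecise; actual motion is
                                              # re-measured on the next cycle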

Previous Results: Stereo

Research has demonstrated high potential for utility of passive imaging techniques for high resolution depth measurement. Passive techniques have important advantages over active ranging techniques in hostile environments. Sequences of images from a moving aircraft have been used to find the ground plane and separate objects from ground. The system should be effective with camouflaged surfaces. Accuracy of 2" height error has been attained for 3" horizontal pixel size on the ground, with a 60 degree baseline. On a general purpose computer, the process requires about 15 seconds with no guidance information. That can likely be reduced by at least a factor of 2. With accurate guidance information, the time required is estimated to be about 250 msec (most missions would probably be in this category). The system is self-calibrating and reliable.

Automatic Registration

The system includes a solution to the problem of determining the stereo camera model from information within the pair of pictures. Imagine an aircraft approaching a runway. As it moves, objects on both sides appear to move radially outward from a center, the fixed point; the fixed point is the instantaneous direction of motion. The pilot knows that the point which does not appear to move is where he will touch down, unless he changes direction. The distance between views and the apparent displacement of points allow calculation of the distance of each point from the observer and from the vehicle path. The touchdown point can be calculated from the trajectory of centers. That is precisely what is done by the camera transform solver in the system [Gennery 1977]. It determines the transform from one view to another in a sequence of views from a moving observer. The program first orients itself in the scene and finds a model for the transform between the two cameras. If two views are an accurately calibrated stereo pair, this operation is not necessary. If accurate guidance information is available, this operation can be speeded up enormously. The program finds a camera transform model by finding a sample of features of interest in one image and matching them with their corresponding view in the other image. The Interest Operator requires about 75 msec for a 256x256 frame. Interesting features are areas (typically 8x8) which can be localized in two dimensions without a camera transform. The operator chooses those areas with large variance along all grid directions. That is roughly equivalent to a large drop in autocorrelation along all grid directions, which means that the area can be localized closely.
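The variance test is compact enough to sketch directly (8x8 windows as above; the scoring and selection details are assumptions for the sketch):

    # Sketch only: interest operator keeping windows whose sum of squared
    # directional differences is large along all four grid directions.
    import numpy as np

    def directional_variances(patch):
        """Variance-like measures along x, y, and the two diagonals."""
        d = [patch[:, 1:] - patch[:, :-1],          # horizontal
             patch[1:, :] - patch[:-1, :],          # vertical
             patch[1:, 1:] - patch[:-1, :-1],       # diagonal
             patch[1:, :-1] - patch[:-1, 1:]]       # anti-diagonal
        return [float(np.sum(di * di)) for di in d]

    def interest(image, size=8):
        """Score each window by its weakest direction; high scores can be
        localized in two dimensions."""
        scores = {}
        for r in range(0, image.shape[0] - size + 1, size):
            for c in range(0, image.shape[1] - size + 1, size):
                scores[(r, c)] = min(directional_variances(
                    image[r:r + size, c:c + size]))
        return scores          # choose the windows with the largest scores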

The binary search correlator matches the features in the other image by a coarse to fine strategy: it has versions of the picture at resolutions of 256x256, 128x128, 64x64, 32x32, and 16x16. It first matches by a small search on a very coarse version (16x16) of the image. It then performs a search in the next finer version of the image, in the neighborhood of the best match in the previous image. That step is repeated until the full resolution is reached. The matching process requires only 50 msec for a match of a single feature no matter where it is in the image. If the camera transform is known, search is necessary only along a line in the image. In this case, search is about a factor of seven faster. If the depth of neighboring points is used as a starting point for the search, the match is another factor of seven faster. It is planned to incorporate those speedups; neither is now used.
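A compact rendering of that strategy follows; the sum-of-squared-differences measure, window size, and refinement radius are assumptions, not the original's parameters:

    # Sketch only: coarse-to-fine (binary search) correlation over pyramids.
    import numpy as np

    def pyramid(image, levels=5):
        out = [np.asarray(image, dtype=float)]
        for _ in range(levels - 1):
            a = out[-1]
            a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
            out.append((a[0::2, 0::2] + a[1::2, 0::2] +
                        a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
        return out[::-1]                       # coarsest version first

    def ssd(a, b):
        return float(np.sum((a - b) ** 2))

    def coarse_to_fine_match(src_pyr, dst_pyr, feat_rc, win=4, radius=2):
        """feat_rc: feature location at full resolution in the source image.
        A win x win window around it is matched at each pyramid level."""
        levels = len(src_pyr)
        r = c = None
        for k, (src, dst) in enumerate(zip(src_pyr, dst_pyr)):
            scale = 2 ** (levels - 1 - k)      # full resolution -> this level
            fr = min(max(feat_rc[0] // scale, 0), src.shape[0] - win)
            fc = min(max(feat_rc[1] // scale, 0), src.shape[1] - win)
            feat = src[fr:fr + win, fc:fc + win]
            if r is None:                      # small full search, coarsest level
                rows = range(dst.shape[0] - win + 1)
                cols = range(dst.shape[1] - win + 1)
            else:                              # refine near the previous match
                r, c = 2 * r, 2 * c
                rows = range(max(0, r - radius),
                             min(dst.shape[0] - win, r + radius) + 1)
                cols = range(max(0, c - radius),
                             min(dst.shape[1] - win, c + radius) + 1)
            _, r, c = min((ssd(feat, dst[i:i + win, j:j + win]), i, j)
                          for i in rows for j in cols)
        return r, c                            # match location at full resolution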

30 Image Ujlderstanding

The matching has about 10% errors. It encounters fewer ambiguities than brute force matching, since not only must the feature match, but the surrounding context must also match. The procedure should not work for parts of scenes where the background of objects (context) changes drastically from one view to another. This is true only at very wide angles and close range. Aerial views are mostly planar, so failure of matching should not be a problem, nor has it been in practice. The process requires about 50k of 36-bit words now. It is possible to implement the coarse-to-fine search strategy in a raster scan and keep only a portion of each image in core. This would cut memory size by a large amount, but it has not been done.

Given corresponding views of at least five points which are non-degenerate (i.e. no collinear and planar degeneracies), the relative transform of the two views can be found. It is not necessary to know the position of these points, only which views correspond. The transform is determined except for a scale factor, the length of the stereo baseline. That does not affect subsequent matching of two views using the transform, and the scale factor can often be determined from known scene distances or guidance information. If the scene is nearly flat, then certain parameters are ill-determined. However, that does not affect the accuracy of measuring heights using the transform. If the scene is nearly flat, then a special simplified form of the solver can be used.

The special case version has been used on some images. It is much faster than the full transform solver. Part of the job of the transform solver is to deal with mistaken matches. The procedure calculates an error matrix for each point and iterates by throwing out wild points. It calculates an error matrix from which errors in depths of point pairs are calculated. The solver uses typically 12 points and requires about 300 msec per point. It requires about 20k of memory. With accurate guidance information, this operation would not be necessary. However, it can be used directly to find the instantaneous direction of the vehicle. As mentioned above, as the vehicle moves, points in the image appear to move radially away from the center, which is the instantaneous direction of the vehicle. Three angles relate the coordinate system of one view with the other, and two angles specify the direction of the instantaneous direction of motion.

The stereo system was used to register a pair of pictures of the Pittsburgh skyline. Initially, the solution failed to converge; when several erroneous matches were excluded, the solution converged well.

Ground Plane

The camera transform model makes it economical to make a denser depth map. A point in one view corresponds to a ray in space which corresponds to a line in the other view. The search is limited to this line, and in addition, nearby points usually have about the same disparity as their neighbors. Thus, search is limited to a small interval on a line. A high resolution correlator has been developed which interpolates to the best match, and which calculates the precision of the match based on statistics of the area.
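One standard way to interpolate to the best match is a parabola fit through the match scores around the best integer position; whether this is the exact interpolation used here is not stated, so the sketch is illustrative only:

    # Sketch only: sub-pixel peak offset from three match scores at
    # offsets -1, 0, +1 around the best integer match (parabola fit).
    def subpixel_offset(s_minus, s_zero, s_plus):
        denom = s_minus - 2.0 * s_zero + s_plus
        if denom == 0.0:
            return 0.0                 # degenerate (flat) case
        return 0.5 * (s_minus - s_plus) / denom   # offset within one pixel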

The system then finds a best ground plane or ground parabola to the depth points, in the least squares sense. It gives no weight to points above the ground plane; it expects many of those. It includes points below the ground plane and near the ground plane. Since points below the ground plane may be wild points, they are edited out in an iterative procedure. Of course, there may be holes. If they are small, there is no problem. If the hole is big, it becomes the ground plane. The ground plane finder requires 5 msec per point.
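A sketch of that asymmetric, iterative editing (the tolerances and iteration count are assumptions, and only the plane case is shown, not the parabola):

    # Sketch only: fit a ground plane z = a*x + b*y + c to depth points,
    # ignoring points well above the plane and iteratively editing out
    # wild points far below it. Assumes at least three usable points.
    import numpy as np

    def ground_plane(points, above_tol=0.2, below_tol=0.5, iters=5):
        pts = np.asarray(points, dtype=float)       # (N, 3) columns x, y, z
        keep = np.ones(len(pts), dtype=bool)
        for _ in range(iters):
            x, y, z = pts[keep].T
            A = np.column_stack([x, y, np.ones_like(x)])
            (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
            resid = pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c)
            # No weight to points above the plane; drop far-below wild points.
            keep = (resid < above_tol) & (resid > -below_tol)
            if keep.sum() < 3:
                break
        return a, b, c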

Edge-Based Stereo

Feature-based stereo using edges increases the accuracy with which boundaries of depth discontinuities can be found by about a factor of 25. It also provides additional information about surface markings which is not available in stereo based on area correlation. Feature-based stereo is potentially fast, although for the moment the binary search correlator is faster. Edge-based techniques have not been developed very far, and would benefit from "smart sensor" technology. A new technique has been developed to use edge features in stereo. Edges are linked along smooth curves in 3d (in the image coordinates and in depth). The new technique is used in the object modeling and recognition modules of the system. Those edges out of the ground plane delimit bodies, if isolated.

In a stereo pair of images of an aircraft at San Francisco airport, the succession of edges matched in stereo as a function of height showed a separation in height between wing tips and wing roots of an L-1011.

Vehicle Location


Arnold [1977] demonstrated progress toward vehicle location and identification. The system registered images of a suburban parking lot and obtained the stereo camera model. It separated the vehicles from the ground and succeeded in describing the projection of a car by a rectangle of approximately the right size and orientation. The length and width of the car were accurate to about 5% by inspection.


Recognition of Complex Objects


Nevatia developed a system which took depth maps of a doll, a toy horse, a glove, a hammer and a ring from a laser triangulation system [Nevatia 1974]. These depth maps were described in terms of generalized cones which were organized into part/whole structures. The system recognized different views of these objects with articulation of limbs and some obscuration. The system addressed important issues of representation: indexing into a subclass of similar objects in visual memory instead of matching against all stored models, and determining generalized cone descriptors from surface descriptors.


5.5 References

[Arnold 1977] Arnold, R.D.; "Spatial Understanding"; Proc Image Understanding Workshop, April 1977.

[Arnold 1978] Arnold, R.D.; "Local Context in Matching Edges for Stereo Vision"; Proc Image Understanding Workshop, Boston, May 1978.

[Barrow 1976] Barrow, H.G., Tenenbaum, J.M.; "MSYS: A System for Reasoning about Scenes"; SRI AI Center Tech Note 121, April 1976.

[Binford 1979] Binford, T.O., Brooks, R.A.; "Geometric Reasoning in ACRONYM"; Proc Image Understanding Workshop, Palo Alto, Calif, Apr 1979.

[Brooks 1978a] Brooks, R.A., Greiner, R., Binford, T.O.; "A Model-Based Vision System"; Proc Image Understanding Workshop, Boston, May 1978.

[Brooks 1978b] Brooks, R.A., Greiner, R., Binford, T.O.; "Progress Report on a Model-Based Vision System"; Proc Image Understanding Workshop, Carnegie-Mellon, Nov 1978.

[Brooks 1979b] Brooks, R.A.; "Goal-Directed Edge Linking and Ribbon Finding"; Proc Image Understanding Workshop, Palo Alto, Calif, Apr 1979.

[Brooks 79c] Brooks, R.A., Greiner, R., Binford, T.O.; "ACRONYM: A Model-Based Vision System"; Proc Int Jt Conf on AI, Aug 1979.

[Davis 1975] Davis, R., Buchanan, B., Shortliffe, E.; "Production Rules as a Representation for a Knowledge-Based Consultation Program"; Stanford AI Memo AIM-266, 1975.

[Firschein 1979] Firschein, O., Gennery, D., Milgram, D., Pearson, J.J.; "Progress in Navigation Using Passively Sensed Images"; Proc ARPA Image Understanding Workshop, Palo Alto, 1979.


[Gennery 1976] Gennery, D.B.; "Stereo Camera Solver"; Stanford AI Lab Internal Note, 1976.

[Gennery 1977] Gennery, D.B.; "A Stereo Vision System"; Proc Image Understanding Workshop, October 1977.

[Gennery 1977a] Gennery, D.B.; "Ground Plane Finder"; Stanford AI Lab Internal Note, 1977.

[Gennery 1979] Gennery, D.B.; "Object Detection and Measurement Using Stereo Vision"; draft of Feb 1979, submitted to International Joint Conf on AI, Aug 1979.

[Hannah 1974] Hannah, M.J.; "Computer Matching of Areas in Stereo Images"; Artificial Intelligence Laboratory, Stanford University, Memo AIM-239, July 1974.

[Horn 72] Horn, B.K.P.; "The Binford-Horn Edge Finder"; MIT AI Memo 285, revised December 1973.

[Kanade 1977] Kanade, T.; "Model Representations and Control Structures in Image Understanding"; Proc IJCAI-5, Boston, 1977.

[Marr and Poggio] Marr, D., Poggio, T.; "A Theory of Human Stereo Vision"; AI Memo 451, MIT, Nov 1977.

[Moravec 1977] Moravec, H.P.; "Towards Automatic Visual Obstacle Avoidance"; Proc 5th International Joint Conf on Artificial Intelligence, MIT, Boston, Aug 1977.

[Moravec 1979] Moravec, H.P.; "Visual Mapping by a Robot Rover"; draft of Feb 1979, submitted to Int Joint Conf on AI, Aug 1979.

[Nevatia 1974] Nevatia, R.K.; "Structured Descriptions of Complex Curved Objects for Recognition and Visual Memory"; Stanford AI Lab Memo AIM-250.

[Nevatia 76] Nevatia, R.K.; "Depth Measurement by Motion Stereo"; Comp. Graph. and Image Proc., Vol. 5, 1976, pp. 203-214.

[Nevatia and Binford 1977] Nevatia, R.K., Binford, T.O.; "Description and Recognition of Curved Objects"; Artificial Intelligence 9, 77, 1977.

[Nevatia and Babu 1978] Nevatia, R.K., Babu, K.R.; "Linear Feature Extraction"; Proc ARPA Image Understanding Workshop, CMU, Nov 1978.

[Panton 1978] Panton, D.J. (CDC); "A Flexible Approach to Digital Stereo Mapping"; Photogrammetric Engineering and Remote Sensing, V 44, p. 1499, Dec 1978.

[Rubin 1978] Rubin, S.M.; "The ARGOS Image Understanding System"; Proc ARPA Image Understanding Workshop, CMU, Nov 1978.

[Tenenbaum 1976] Tenenbaum, J.M., Barrow, H.G.; "Experiments in Interpretation-Guided Segmentation"; SRI AI Center Tech Note 123.

[Waltz 1972] Waltz, D.; "Generating Semantic Descriptions from Drawings of Scenes with Shadows"; MIT AI Tech Rept AI-TR-271, 1972. Also "Understanding Line Drawings of Scenes with Shadows"; in "The Psychology of Computer Vision", P.H. Winston, ed; McGraw-Hill, 1975.


Appendix A
Theses

Theses that have been published by this laboratory are listed here. Several earned degrees at institutions other than Stanford, as noted. This list is kept in diskfile THESES [BIB,DOC] @SU-AI.

D. Raj. Reddy, AIM-43
An Approach to Computer Speech Recognition by Direct Analysis of the Speech Wave,
Ph.D. in Computer Science,
September 1966.

S. Persson, AIM-46
Some Sequence Extrapolating Programs: a Study of Representation and Modeling in Inquiring Systems,
Ph.D. in Computer Science, University of California, Berkeley,
September 1966.

Bruce Buchanan, AIM-47
Logics of Scientific Discovery,
Ph.D. in Philosophy, University of California, Berkeley,
December 1966.

James Painter, AIM-44
Semantic Correctness of a Compiler for an Algol-like Language,
Ph.D. in Computer Science,
March 1967.

William Wichman, AIM-56
Use of Optical Feedback in the Computer Control of an Arm,
Eng. in Electrical Engineering,
August 1967.

Monte Callero, AIM-58
An Adaptive Command and Control System Utilizing Heuristic Learning Processes,
Ph.D. in Operations Research,
December 1967.

Donald Kaplan, AIM-60
The Formal Theoretic Analysis of Strong Equivalence for Elemental Properties,
Ph.D. in Computer Science,
July 1968.

Barbara Huberman, AIM-65
A Program to Play Chess End Games,
Ph.D. in Computer Science,
August 1968.

Donald Pieper, AIM-72
The Kinematics of Manipulators under Computer Control,
Ph.D. in Mechanical Engineering,
October 1968.

Donald Waterman, AIM-74
Machine Learning of Heuristics,
Ph.D. in Computer Science,
December 1968.

Roger Schank, AIM-83
A Conceptual Dependency Representation for a Computer Oriented Semantics,
Ph.D. in Linguistics, University of Texas,
March 1969.

Pierre Vicens, AIM-85
Aspects of Speech Recognition by Computer,
Ph.D. in Computer Science,
March 1969.

Victor D. Scheinman, AIM-92
Design of Computer Controlled Manipulator,
Eng. in Mechanical Engineering,
June 1969.

Claude Cordell Green, AIM-96
The Application of Theorem Proving to Question-answering Systems,
Ph.D. in Electrical Engineering,
August 1969.

James J. Horning, AIM-98
A Study of Grammatical Inference,
Ph.D. in Computer Science,
August 1969.


Michael E. Kahn, AIM-106
The Near-minimum-time Control of Open-loop Articulated Kinematic Chains,
Ph.D. in Mechanical Engineering,
December 1969.

Joseph Becker, AIM-119
An Information-processing Model of Intermediate-level Cognition,
Ph.D. in Computer Science,
May 1972.

Irwin Sobel, AIM-121
Camera Models and Machine Perception,
Ph.D. in Electrical Engineering,
May 1970.

Michael D. Kelly, AIM-130
Visual Identification of People by Computer,
Ph.D. in Computer Science,
July 1970.

Gilbert Falk, AIM-132
Computer Interpretation of Imperfect Line Data as a Three-dimensional Scene,
Ph.D. in Electrical Engineering,
August 1970.

Jay Martin Tenenbaum, AIM-134
Accommodation in Computer Vision,
Ph.D. in Electrical Engineering,
September 1970.

Lynn H. Quam, AIM-144
Computer Comparison of Pictures,
Ph.D. in Computer Science,
May 1971.

Robert E. Kling, AIM-147
Reasoning by Analogy with Applications to Heuristic Problem Solving: a Case Study,
Ph.D. in Computer Science,
August 1971.

Rodney Albert Schmidt Jr., AIM-149
A Study of the Real-time Control of a Computer-driven Vehicle,
Ph.D. in Electrical Engineering,
August 1971.


Jonathan Leonard Ryder, AIM-155
Heuristic Analysis of Large Trees as Generated in the Game of Go,
Ph.D. in Computer Science,
December 1971.

Jean M. Cadiou, AIM-163
Recursive Definitions of Partial Functions and their Computations,
Ph.D. in Computer Science,
April 1972.

Gerald Jacob Agin, AIM-173
Representation and Description of Curved Objects,
Ph.D. in Computer Science,
October 1972.

Francis Lockwood Morris, AIM-174
Correctness of Translations of Programming Languages - an Algebraic Approach,
Ph.D. in Computer Science,
August 1972.

Richard Paul, AIM-177
Modelling, Trajectory Calculation and Servoing of a Computer Controlled Arm,
Ph.D. in Computer Science,
November 1972.

Aharon Gill, AIM-178
Visual Feedback and Related Problems in Computer Controlled Hand Eye Coordination,
Ph.D. in Electrical Engineering,
October 1972.

Ruzena Bajcsy, AIM-180
Computer Identification of Textured Visual Scenes,
Ph.D. in Computer Science,
October 1972.

Ashok Chandra, AIM-188
On the Properties and Applications of Programming Schemas,
Ph.D. in Computer Science,
March 1973.


Gunnar Rutger Grape, AIM-201
Model Based (Intermediate Level) Computer Vision,
Ph.D. in Computer Science,
May 1973.

Yoram Yakimovsky, AIM-209
Scene Analysis Using a Semantic Base for Region Growing,
Ph.D. in Computer Science,
July 1973.

Jean E. Vuillemin, AIM-218
Proof Techniques for Recursive Programs,
Ph.D. in Computer Science,
October 1973.

Daniel C. Swinehart, AIM-230
COPILOT: A Multiple Process Approach to Interactive Programming Systems,
Ph.D. in Computer Science,
May 1974.

James Gips, AIM-231
Shape Grammars and their Uses,
Ph.D. in Computer Science,
May 1974.

Charles J. Rieger III, AIM-233
Conceptual Memory: A Theory and Computer Program for Processing the Meaning Content of Natural Language Utterances,
Ph.D. in Computer Science,
June 1974.

Christopher K. Riesbeck, AIM-238
Computational Understanding: Analysis of Sentences and Context,
Ph.D. in Computer Science,
June 1974.

Marsha Jo Hannah, AIM-239
Computer Matching of Areas in Stereo Images,
Ph.D. in Computer Science,
July 1974.


James R. Low, AIM-242
Automatic Coding: Choice of Data Structures,
Ph.D. in Computer Science,
August 1974.

Jack Buchanan, AIM-245
A Study in Automatic Programming,
Ph.D. in Computer Science,
May 1974.

Neil Goldman, AIM-247
Computer Generation of Natural Language From a Deep Conceptual Base,
Ph.D. in Computer Science,
January 1974.

Bruce Baumgart, AIM-249
Geometric Modeling for Computer Vision,
Ph.D. in Computer Science,
October 1974.

Ramakant Nevatia, AIM-250
Structured Descriptions of Complex Curved Objects for Recognition and Visual Memory,
Ph.D. in Electrical Engineering,
October 1974.

Edward H. Shortliffe, AIM-251
MYCIN: A Rule-Based Computer Program for Advising Physicians Regarding Antimicrobial Therapy Selection,
Ph.D. in Medical Information Sciences,
October 1974.

Malcolm C. Newey, AIM-257
Formal Semantics of LISP With Applications to Program Correctness,
Ph.D. in Computer Science,
January 1975.

Hanan Samet, AIM-259
Automatically Proving the Correctness of Translations Involving Optimized Code,
Ph.D. in Computer Science,
May 1975.


David Canfield Smith, AIM-260
PYGMALION: A Creative Programming Environment,
Ph.D. in Computer Science,
June 1975.

Sundaram Ganapathy, AIM-272
Reconstruction of Scenes Containing Polyhedra From Stereo Pair of Views,
Ph.D. in Computer Science,
December 1975.

Linda Gail Hemphill, AIM-273
A Conceptual Approach to Automated Language Understanding and Belief Structures: with Disambiguation of the Word 'For',
Ph.D. in Linguistics,
May 1975.

Norihisa Suzuki, AIM-279
Automatic Verification of Programs with Complex Data Structures,
Ph.D. in Computer Science,
February 1976.

Russell Taylor, AIM-282
Synthesis of Manipulator Control Programs From Task-Level Specifications,
Ph.D. in Computer Science,
July 1976.

Randall Davis, AIM-283
Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases,
Ph.D. in Computer Science,
July 1976.

Rafael Finkel, AIM-284
Constructing and Debugging Manipulator Programs,
Ph.D. in Computer Science,
August 1976.

Douglas Lenat, AIM-286
AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search,
Ph.D. in Computer Science,
July 1976.

Michael Roderick, AIM-287
Discrete Control of a Robot Arm,
Engineer in Electrical Engineering,
August 1976.

Robert C. Bolles, AIM-295
Verification Vision Within a Programmable Assembly System,
Ph.D. in Computer Science,
December 1976.

Robert Cartwright, AIM-296
Practical Formal Semantic Definition and Verification Systems,
Ph.D. in Computer Science,
December 1976.

Todd Wagner, AIM-304
Hardware Verification,
Ph.D. in Computer Science,
September 1977.

William Faught, AIM-305
Motivation and Intensionality in a Computer Simulation Model,
Ph.D. in Computer Science,
September 1977.

David Barstow, AIM-308
Automatic Construction of Algorithms,
Ph.D. in Computer Science,
December 1977.

Bruce E. Shimano, AIM-313
The Kinematic Design and Force Control of Computer Controlled Manipulators,
Ph.D. in Mechanical Engineering,
March 1978.

Jerrold M. Ginsparg, AIM-316
Natural Language Processing in an Automatic Programming Domain,
Ph.D. in Computer Science,
June 1978.

Robert Elliot Filman, AIM-327
The Interaction of Observation and Inference,
Ph.D. in Computer Science,
April 1979.


David Edward Wilkins, AIM-329
Using Patterns and Plans to Solve Problems and Control Search,
Ph.D. in Computer Science,
June 1979.

Elaine Kant, AIM-331
Efficiency Considerations in Program Synthesis: A Knowledge-Based Approach,
Ph.D. in Computer Science,
July 1979.

Brian P. McCune, AIM-333
Building Program Models Incrementally from Informal Descriptions,
Ph.D. in Computer Science,
October 1979.



Appendix B
Film Reports

Prints of the following films are available for distribution. This list is kept in diskfile FILMS [BIB,DOC] @SU-AI.

1. Art Eisenson and Gary Feldman, Ellis D. Kroptechev and Zeus, his Marvelous Time-sharing System, 16mm B&W with sound, 15 minutes, March 1967.

The advantages of time-sharing over standard batch processing are revealed through the good offices of the Zeus time-sharing system on a PDP-1 computer. Our hero, Ellis, is saved from a fate worse than death. Recommended for mature audiences only.

2. Gary Feldman, Butterfinger, 16mm color with sound, 8 minutes, March 1968.

Describes the state of the hand-eye system at the Artificial Intelligence Project in the fall of 1967. The PDP-6 computer getting visual information from a television camera and controlling an electrical-mechanical arm solves simple tasks involving stacking blocks. The techniques of recognizing the blocks and their positions as well as controlling the arm are briefly presented. Rated "G".

3. Raj Reddy, Dave Espar and Art Eisenson, Hear Here, 16mm color with sound, 15 minutes, March 1969.

Describes the state of the speech recognition project as of Spring, 1969. A discussion of the problems of speech recognition is followed by two real time demonstrations of the current system. The first shows the computer learning to recognize phrases and the second shows how the hand-eye system may be controlled by voice commands. Commands as complicated as 'Pick up the small block in the lower lefthand corner' are recognized and the tasks are carried out by the computer controlled arm.


4. Gary Feldman and Donald Pieper, Avoid, 16mm color, silent, 5 minutes, March 1969.

An illustration of Pieper's Ph.D. thesis. The problem is to move the computer controlled mechanical arm through a space filled with one or more known obstacles. The film shows the arm as it moves through various cluttered environments with fairly good success.

5. Richard Paul and Karl Pingle, Instant Insanity, 16mm color, silent, 6 minutes, August, 1971.

Shows the hand/eye system solving the puzzle Instant Insanity. Sequences include finding and recognizing cubes, color recognition and object manipulation. [Made to accompany a paper presented at the 1971 IJCAI. May be hard to understand without a narrator.]

6. Suzanne Kandra, Motion and Vision, 16mm color, sound, 22 minutes, November 1972.

A technical presentation of three research projects completed in 1972: advanced arm control by R. P. Paul [AIM-177], visual feedback control by A. Gill [AIM-178], and representation and description of curved objects by G. Agin [AIM-173]. Drags a bit.

7. Larry Ward, Computer Interactive Picture Processing, (MARS Project), 16mm color, sound, 8 min., Fall 1972.

This film describes an automated picture differencing technique for analyzing the variable surface features on Mars using data returned by the Mariner 9 spacecraft. The system uses a time-shared, terminal oriented PDP-10 computer. The film proceeds at a breathless pace. Don't blink, or you will miss an entire scene.

8. D.I. Okhotsimsky, et al, Display Simulations of 6-Legged Walking, Institute of Applied Mathematics - USSR Academy of Science, (titles translated by Stanford AI Lab and edited by Suzanne Kandra), 16mm black and white, silent, 10 minutes, 1972.

A display simulation of a 6-legged ant-like walker getting over various obstacles. The research is aimed at a planetary rover that would get around by walking. This cartoon favorite beats Mickey Mouse hands down. Or rather, feet down.

9. Richard Paul, Karl Pingle, and Bob Bolles, Automated Pump Assembly, 16mm color, silent (runs at sound speed!), 7 minutes, April, 1973.

Shows the hand-eye system assembling a simple pump, using vision to locate the pump body and to check for errors. The parts are assembled and screws inserted, using some special tools designed for the arm. Some titles are included to help explain the film.

10. Terry Winograd, Dialog with a robot, MIT AI Lab, 16mm black and white, silent, 20 minutes, 1971.

Presents a natural language dialog with a simulated robot block-manipulation system. The dialog is substantially the same as that in Understanding Natural Language (T. Winograd, Academic Press, 1972). No explanatory or narrative material is on the film.

11. Karl Pingle, Lou Paul, and Bob Bolles, Programmable Assembly, Three Short Examples, 16mm color, sound, 8 minutes, October 1974.

The first segment demonstrates the arm's ability to dynamically adjust for position and orientation changes. The task is to mount a bearing and seal on a crankshaft. Next, the arm is shown changing tools and recovering from a run-time error. Finally, a cinematic first: two arms cooperating to assemble a hinge.

12. Brian Harvey, Display Terminals at Stanford, 16mm B&W, sound, 13 minutes, May 1975.

Although there are many effective programs to use display terminals for special graphics applications, very few general purpose timesharing systems provide good support for using display terminals in normal text display applications. This film shows a session using the display system at the Stanford AI Lab, explaining how the display support features in the Stanford monitor enhance the user's control over his job and facilitate the writing of display-effective user programs.

13. M. Shahid Mujtaba, POINTY - An Interactive System for Assembly, 16mm color, sound, 10 minutes, December 1977.

POINTY is an interactive programming system that uses a mechanical manipulator as a measuring tool to determine the position and orientation of various parts laid out in a work station. Positions may be determined precisely by means of a sharp pointed tool held in the manipulator hand, or by using the finger touch sensors and moving the arm to the desired points either manually or under computer control. Arbitrary orientations may be determined from the location of three points. The data generated may be referred to symbolically, so that the programmer is freed from having to think in terms of the numerical values of object locations. The data is saved in a computer file for later use in a program to assemble the parts.

This film illustrates the use of POINTY instructions to collect the position data of two parts of a water valve assembly. It shows the use of multiple points to determine orientations, the procedure followed to obtain the data, and how the programmer may refer to the data symbolically. Finally, the arm is shown putting together the water valve assembly.



Appendix C
External Publications

Articles and books by project members that have appeared since July 1973 are listed here alphabetically by lead author. Earlier publications are given in our ten-year report [Memo AIM-228] and in diskfile PUBS.OLD [BIB,DOC] @SU-AI. The list below is kept in PUBS [BIB,DOC] @SU-AI.

1. Agin, Gerald J., Thomas O. Binford, Computer Description of Curved Objects, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

2. Agin, G.J., T.O. Binford, Representation and Description of Curved Objects, IEEE Transactions on Computers, Vol. C-25, 440, April 1976.

3. Aiello, L., Aiello, M. and Weyhrauch, R.W., Pascal in LCF: Semantics and Examples of Proofs, Theoretical Computer Science, 5, 1977.

4. Aiello, Luigia and G. Prini, Design of a personal computing system, Proc. Annual Conf. of the Canadian Information Processing Society, Victoria (British Columbia), May 12-14, 1980.

5. Aiello, Luigia and R. Weyhrauch, Using meta-theoretic Reasoning to do Algebra, Proc. Automatic Deduction Conference, Les Arcs (France), July 8-11, 1980.

6. Aiello, Mario, Richard Weyhrauch, Checking Proofs in the Metamathematics of First Order Logic, Adv. Papers of 4th Int. Joint Conference on Artificial Intelligence, Vol. 1, pp. 1-8, September 1975.

7. Arnold, R.D., Local Context in Matching Edges for Stereo Vision, Proc. Image Understanding Workshop, Boston, May 1978.


8. Ashcroft, Edward, Zohar Manna, Amir Pnueli, Decidable Properties of Monadic Functional Schemas, J. ACM, Vol. 20, No. 3, pp. 489-499, July 1973.

9. Ashcroft, Edward, Zohar Manna, Translating Program Schemas to While-schemas, SIAM Journal on Computing, Vol. 4, No. 2, pp. 125-146, June 1975.

10. Bajcsy, Ruzena, Computer Description of Textured Scenes, Proc. Third Int. Joint Conf. on Artificial Intelligence, Stanford U., 1973.

11. Barstow, David, Elaine Kant, Observations on the Interaction between Coding and Efficiency Knowledge in the PSI Program Synthesis System, Proc. 2nd Int. Conf. on Software Engineering, IEEE Computer Society, Long Beach, California, October 1976.

12. Barstow, David, A Knowledge-Based System for Automatic Program Construction, Proc. Int. Joint Conf. on A.I., August 1977.

13. Biermann, A.W., R.I. Baum, F.E. Petry, Speeding Up the Synthesis of Programs from Traces, IEEE Trans. Computers, February 1975.

14. Bobrow, Daniel, Terry Winograd, An Overview of KRL, a Knowledge Representation Language, J. Cognitive Science, Vol. 1, No. 1, 1977.

15. Bobrow, Dan, Terry Winograd, & KRL Research Group, Experience with KRL-0: One Cycle of a Knowledge Representation Language, Proc. Int. Joint Conf. on A.I., August 1977.

16. Bolles, Robert C., Verification Vision for Programmable Assembly, Proc. Int. Joint Conf. on A.I., August 1977.


17. Brooks, R., R. Greiner, and T.O. Binford,A Model-Based Vision System; Proc. ImageUnderstanding Workshop, Boston, May1978.

18. Cartwright, Robert S., Derek C. Oppen,Unrestricted Procedure Calls in Hoare’sLogic, PYOC. Fifth ACM Symposium onPrinciples of Programming Languages,January 1978.

19. Cartwright, Robert and John McCarthy,First Order Programming Logic, Pm. 6thAnnual ACM Symp. on Principles ofProgramming Languages, San Antonio,Texas, January 1979.

20. Chandra, Ashok, Zohar Manna, On thePower of Programming Features,Computer Languages, Vol. 1, No. 3, pp.2 19-232, September 1975.

21.’ Chowning, John M., The Synthesis ofComplex Audio Spectra by means ofFrequency Modulation, J. AudioEngineering Society, September 1973.

22. Clark, Douglas, and Green, C. Cordell, AnEmpirical Study of List Structure inLISP, Communications of the ACM, Volume20, Number 2, February 1977.

23. Colby, Kenneth M., Artifiiai Paranoia: AComputer Simulation of the Paranoid Mode,Pergamon Press, N.Y., 1974.

24. Colby, K.M. and Parkison, R.C. Pattern-matching rules for the Recognition ofNatural Language Dialogue Expressions,American Journal of ComputationalLinguistics, 1, September 1974.

25. Creary, Lewis C., Propositional Attitudes:Fregean Representation and SimylativeReasoning, Proc. 6th ht. Joint Conferenceon Art+ial Intelligence, Tokyo, Japan,A uguse 1979.

26. Dershowitt, Nachum, Zohar Manna, OnAutomating Structural Progranlnling,Colloques IRIA on Proving and ImprovingPrograms, A rc-et-Senans, France, pp.. 167-193, July 1975.

27. Dershowitt, Nachum, Zohar Manna, TheEvolution of Programs: a system forautomatic program modification, IEEETrans. Software Eng., Vol. 3, No. 5, pp.377-385, November 1977.

28. Dershowitz, Nachum, Zohar Manna,Inference Rules for Program Annotation,Proc. 3rd ht. Conf. on SoftwareEngineering, Atlanta, Ca., pp. 158-167, May1978.

29. Dobrotin, Boris M., Victor D. Scheinman,Design of a Computer ControlledManipulator for Robot Research, Pm.Third ht. Joint Con.. on ArtifiialIntelligence, Stanford U., 1973.

30. Enea, Horace, Kenneth Mark Colby,Idiolectic Language-Analysis forUnderstanding Doctor-Patient Dialogues,Proceedings of the Third International JointConference on Artifiial Intelligence,Stanford University, August 1973.

31. Faught, William S., Affect as Motivationfor Cognitive and Conative Processes, Ah.Papers of 4th ht. Joint Conference onArtifiial Intelligence, Vol. 2, pp. 893-899,September 1975.

32. Feldman, Jerome A., James R. Low,Comlnent on Brent’s Scatter StorageAlgorithm, Comm. ACM, November 1973.

33. Feldman, Jerome A., Yoram Yakimovsky,Decision Theory and ArtificialIntelligence: I A Semantics-based RegionAnalyzer, Artifdal Intelligence J., Vol. 5,No. 4, Winter 1974.

34. Finkel, Raphael, Russell Taylor, Robert Bolles, Richard Paul, Jerome Feldman, An Overview of AL, a Programming System for Automation, Adv. Papers of 4th Int. Joint Conference on Artificial Intelligence, Vol. 2, pp. 758-765, September 1975.

35. Floyd, Robert, Louis Steinberg, An Adaptive Algorithm for Spatial Greyscale, Proc. Society for Information Display, Volume 17, Number 2, pp. 75-77, Second Quarter 1976.

36. Fuller, Samuel H., Forest Baskett, An Analysis of Drum Storage Units, J. ACM, Vol. 22, No. 1, January 1975.

37. Funt, Brian, WHISPER: A Problem-solving System Utilizing Diagrams and a Parallel Processing Retina, Proc. Int. Joint Conf. on A.I., August 1977.

38. Gennery, Don, A Stereo Vision System for an Autonomous Vehicle, Proc. Int. Joint Conf. on A.I., August 1977.

39. Gennery, D. B., A Stereo Vision System for Autonomous Vehicles, Proc. Image Understanding Workshop, Palo Alto, Oct. 1977.

40. German, Steven, Automating Proofs of the Absence of Common Runtime Errors, Proc. 5th ACM Symposium on Principles of Programming Languages, pp. 105-118, January 1978.

41. Goad, Chris, Monadic Infinitary Propositional Logic: A Special Operator, Reports on Mathematical Logic, 10, 1978.

42. Goad, Chris, Proofs as Descriptions of Computations, Proc. Automatic Deduction Conference, Les Arcs (France), July 8-11, 1980.

43. Goldman, Neil M., Sentence Paraphrasing from a Conceptual Base, Comm. ACM, February 1975.

44. Goldman, Ron, Recent Work with the AL System, Proc. Int. Joint Conf. on A.I., August 1977.

45. Green, Cordell, David Barstow, Some Rules for the Automatic Synthesis of Programs, Adv. Papers of 4th Int. Joint Conference on Artificial Intelligence, Vol. 1, pp. 232-239, September 1975.

46. Green, Cordell, and Barstow, David, Some Rules for the Automatic Synthesis of Programs, Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, Volume 1, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, September 1975, pages 232-239.

47. Green, Cordell, The Design of the PSI Program Synthesis System, Proc. 2nd Int. Conf. on Software Engineering, IEEE Computer Society, Long Beach, California, October 1976.

48. Green, Cordell, The PSI Program Synthesis System, ACM '76: Proceedings of the Annual Conference, Association for Computing Machinery, New York, New York, October 1976, pages 74-75.

49. Green, C. C., and Barstow, D. R., A Hypothetical Dialogue Exhibiting a Knowledge Base for a Program Understanding System, in Elcock, E. W., and Michie, D., editors, Machine Intelligence 9: Machine Representations of Knowledge, Ellis Horwood, Ltd., and John Wiley and Sons, Inc., New York, New York, 1976.

50. Green, C. C., A Summary of the PSI Program Synthesis System, Proc. Int. Joint Conf. on A.I., August 1977.

51. Harvey, Brian, Increasing Programmer Power at Stanford with Display Terminals, Minutes of the DECsystem-10 Spring-75 DECUS Meeting, Digital Equipment Computer Users Society, Maynard, Mass., 1975.


52. Hieronymus, J. L., N. J. Miller, A. L. Samuel, The Amanuensis Speech Recognition System, Proc. IEEE Symposium on Speech Recognition, April 1974.

53. Hieronymus, J. L., Pitch Synchronous Acoustic Segmentation, Proc. IEEE Symposium on Speech Recognition, April 1974.

54. Hilf, Franklin, Use of Computer Assistance in Enhancing Dialog Based Social Welfare, Public Health, and Educational Services in Developing Countries, Proc. 2nd Jerusalem Conf. on Info. Technology, July 1974.

55. Hilf, Franklin, Dynamic Content Analysis, Archives of General Psychiatry, January 1975.

56. Hueckel, Manfred H., A Local Visual Operator which Recognizes Edges and Lines, J. ACM, October 1973.

57. Igarashi, S., R. L. London, D. C. Luckham, Automatic Program Verification I: Logical Basis and its Implementation, Acta Informatica, Vol. 4, pp. 145-182, March 1975.

58. Ishida, Tatsuzo, Force Control in Coordination of Two Arms, Proc. Int. Joint Conf. on A.I., August 1977.

59. Kant, Elaine, The Selection of Efficient Implementations for a High-level Language, Proc. SIGART-SIGPLAN Symp. on A.I. and Prog. Lang., August 1977.

60. Karp, Richard A., David C. Luckham, Verification of Fairness in an Implementation of Monitors, Proc. 2nd Int. Conf. on Software Engineering, pp. 40-46, October 1976.


61. Katz, Shmuel, Zohar Manna, A Heuristic Approach to Program Verification, Proc. Third Int. Joint Conf. on Artificial Intelligence, Stanford University, pp. 500-512, August 1973.

62. Katz, Shmuel, Zohar Manna, Towards Automatic Debugging of Programs, Proc. Int. Conf. on Reliable Software, Los Angeles, April 1975.

63. Katz, Shmuel, Zohar Manna, Logical Analysis of Programs, Comm. ACM, Vol. 19, No. 4, pp. 188-206, April 1976.

64. Katz, Shmuel, Zohar Manna, A Closer Look at Termination, Acta Informatica, Vol. 5, pp. 333-352, April 1977.

65. Lenat, Douglas B., BEINGS: Knowledge as Interacting Experts, Adv. Papers of 4th Int. Joint Conference on Artificial Intelligence, Vol. 1, pp. 126-133, September 1975.

66. Luckham, David C., Automatic Problem Solving, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

67. Luckham, David C., Jack R. Buchanan, Automatic Generation of Programs Containing Conditional Statements, Proc. AISB Summer Conference, U. Sussex, July 1974.

68. Luckham, David C., Nori Suzuki, Proof of Termination within a Weak Logic of Programs, Acta Informatica, Vol. 8, No. 1, pp. 21-36, March 1977.

69. Luckham, David C., Program Verification and Verification-oriented Programming, Proc. I.F.I.P. Congress '77, August 1977.

70. Manna, Zohar, Program Schemas, in Currents in the Theory of Computing (A. V. Aho, Ed.), Prentice-Hall, Englewood Cliffs, N. J., 1973.


71. Manna, Zohar, Stephen Ness, Jean Vuillemin, Inductive Methods for Proving Properties of Programs, Comm. ACM, Vol. 16, No. 8, pp. 491-502, August 1973.

72. Manna, Zohar, Automatic Programming, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

73. Manna, Zohar, Mathematical Theory of Computation, McGraw-Hill, New York, 1974.

74. Manna, Zohar, Amir Pnueli, Axiomatic Approach to Total Correctness, Acta Informatica, Vol. 3, pp. 243-263, 1974.

75. Manna, Zohar, Richard Waldinger, Knowledge and Reasoning in Program Synthesis, Artificial Intelligence, Vol. 6, pp. 175-208, 1975.

76. Manna, Zohar, Adi Shamir, The Theoretical Aspects of the Optimal Fixpoint, SIAM Journal of Computing, Vol. 5, No. 3, pp. 414-426, September 1976.

77. Manna, Zohar, Richard Waldinger, The Automatic Synthesis of Recursive Programs, Proc. SIGART-SIGPLAN Symp. on A.I. and Prog. Lang., August 1977.

78. Manna, Zohar, Richard Waldinger, The Automatic Synthesis of Systems of Recursive Programs, Proc. Int. Joint Conf. on A.I., August 1977.

79. Manna, Zohar, Adi Shamir, The Optimal-Fixpoint Approach to Recursive Programs, Comm. ACM, Vol. 20, No. 11, pp. 824-831, November 1977.

80. Manna, Zohar, Richard Waldinger (eds.), Studies in Automatic Programming Logic, American Elsevier, New York, NY, 1977.

81. Manna, Zohar, Richard Waldinger, Is 'Sometime' Sometimes Better than 'Always'? Intermittent Assertions in Proving Program Correctness, Comm. ACM, Vol. 21, No. 2, pp. 159-172, February 1978.

82. Manna, Zohar, Adi Shamir, The Convergence of Functions to Fixpoints of Recursive Definitions, Theoretical Computer Science J., Vol. 6, pp. 109-141, March 1978.

83. Manna, Zohar, Richard Waldinger, The Logic of Computer Programming, IEEE Trans. Software Eng., Vol. SE-4, No. 5, pp. 199-224, May 1978.

84. Manna, Zohar, Richard Waldinger, The Synthesis of Structure-changing Programs, Proc. 3rd Int. Conf. on Software Eng., Atlanta, Ga., May 1978.

85. Manna, Zohar, Richard Waldinger, The DEDALUS System, Proc. National Computer Conf., Anaheim, CA, June 1978.

86. McCarthy, John, Mechanical Servants for Mankind, Britannica Yearbook of Science and the Future, 1973.

87. McCarthy, John, Book Review: Artificial Intelligence: A General Survey by Sir James Lighthill, Artificial Intelligence, Vol. 5, No. 3, Fall 1974.

88. McCarthy, John, Modeling Our Minds, Science Year 1975, The World Book Science Annual, Field Enterprises Educational Corporation, Chicago, 1974.

89. McCarthy, John, Proposed Criterion for a Cipher to be Probable-word-proof, Comm. ACM, February 1975.

90. McCarthy, John, An Unreasonable Book, a review of Computer Power and Human Reason by Joseph Weizenbaum (W.H. Freeman and Co., San Francisco, 1976), SIGART Newsletter No. 58, June 1976.

91. McCarthy, John, Review: Computer Power and Human Reason, by Joseph Weizenbaum (W.H. Freeman and Co., San Francisco, 1976), in Physics Today, 1977.


92. McCarthy, John, Another SAMEFRINGE, SIGART Newsletter No. 61, February 1977.

93. McCarthy, John, The Home Information Terminal, The Grolier Encyclopedia, 1977.

94. McCarthy, John, M. Sato, T. Hayashi, S. Igarashi, On the Model Theory of Knowledge, Proc. Int. Joint Conf. on A.I., August 1977.

95. McCarthy, John, Epistemological Problems of Artificial Intelligence, Proc. Int. Joint Conf. on A.I., August 1977.

96. McCarthy, John, History of LISP, Proc. ACM Conf. on History of Programming Languages, 1978.

97. McCarthy, John, Representation of Recursive Programs in First Order Logic, Proc. Int. Conf. on Mathematical Studies of Information Processing, Kyoto, Japan, 1978.

98. McCarthy, John, Ascribing Mental Qualities to Machines, Philosophical Perspectives in Artificial Intelligence, Martin Ringle (ed.), Humanities Press, 1978.

99. McCune, Brian, The PSI Program Model Builder: Synthesis of Very High-level Programs, Proc. SIGART-SIGPLAN Symp. on A.I. and Prog. Lang., August 1977.

100. Miller, N. J., Pitch Detection by Data Reduction, Proc. IEEE Symposium on Speech Recognition, April 1974.

101. Moore, Robert C., Reasoning about Knowledge and Action, Proc. Int. Joint Conf. on A.I., August 1977.

102. Moorer, James A., The Optimum Comb Method of Pitch Period Analysis of Continuous Speech, IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-22, No. 5, October 1974.

103. Moorer, James A., On the Transcription of Musical Sound by Computer, USA-JAPAN Computer Conference, August 1975.

104. Morales, Jorge J., Interactive Theorem Proving, Proc. ACM National Conference, August 1973.

105. Moravec, Hans, Towards Automatic Visual Obstacle Avoidance, Proc. Int. Joint Conf. on A.I., August 1977.

106. Moravec, Hans, A Non-Synchronous Orbital Skyhook, J. Astronautical Sciences, Vol. XXV, No. 4, pp. 307-322, Oct.-Dec. 1977.

107. Moravec, Hans, Today's Computers, Intelligent Machines and Our Future, Analog, Vol. 99, No. 2, pp. 59-84, February 1979.

108. Moravec, Hans P., Visual Mapping by a Robot Rover, Proc. 6th Int. Joint Conf. on Artificial Intelligence, Tokyo, Japan, August 1979, pp. 589-601.

109. Moravec, Hans, A Non-synchronous Orbital Skyhook, J. Astronautical Sciences, Vol. 26, No. 1, 1978.

110. Moravec, Hans P., Fully Interconnecting Multiple Computers with Pipelined Sorting Nets, IEEE Trans. on Computers, October 1979.

111. Moravec, Hans, Cable Cars in the Sky, The Endless Frontier, Vol. 1, Jerry Pournelle, ed., Grosset & Dunlap, Ace books, November 1979.

112. Moravec, Hans, The Endless Frontier and The Thinking Machine, The Endless Frontier, Vol. 2, Jerry Pournelle, ed., Grosset & Dunlap, Ace books, 1980. (to appear)

113. Nelson, C. G., Oppen, D. C., Fast Decision Algorithms based on UNION and FIND, Proc. 18th Annual IEEE Symposium on Foundations of Computer Science, October 1977.


114. Nelson, C. G., Derek Oppen, A Simplifier Based on Fast Decision Algorithms, Proc. Fifth ACM Symposium on Principles of Programming Languages, January 1978.

115. Nelson, C. G. and Oppen, D., Simplification by Cooperating Decision Procedures, ACM Transactions on Programming Languages and Systems, Oct. 1979.

116. Nevatia, Ramakant, Thomas O. Binford, Structured Descriptions of Complex Objects, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

117. Nevatia, R., T. O. Binford, Structured Descriptions of Complex Objects, Artificial Intelligence, 1977.

118. Newell, A., Cooper, F. S., Forgie, J. W., Green, C. C., Klatt, D. H., Medress, M. F., Neuburg, E. P., O'Malley, M. H., Reddy, D. R., Ritea, B., Shoup, J. E., Walker, D. E., and Woods, W. A., Considerations for a Follow-On ARPA Research Program for Speech Understanding Systems, Information Processing Techniques Office, Advanced Research Projects Agency, Department of Defense, Arlington, Virginia, August 1975.

119. Oppen, Derek, S. A. Cook, Proving Assertions about Programs that Manipulate Data Structures, Acta Informatica, Vol. 4, No. 2, pp. 127-144, 1975.

120. Oppen, Derek C., Reasoning about Recursive Data Structures, Proc. Fifth ACM Symposium on Principles of Programming Languages, January 1978.

121. Oppen, Derek C., A Superexponential Bound on the Complexity of Presburger Arithmetic, Journal of Computer and Systems Sciences, June 1978.

122. Oppen, Derek, Convexity, Complexity, and Combinations of Theories, Proc. 5th Symposium on Automated Deduction, January 1979.

123. Phillips, Jorge, T. H. Bredt, Design and Verification of Real-time Systems, Proc. 2nd Int. Conf. on Software Engineering, IEEE Computer Society, Long Beach, California, October 1976.

124. Phillips, Jorge, Program Inference from Traces using Multiple Knowledge Sources, Proc. Int. Joint Conf. on A.I., August 1977.

125. Phillips, Jorge V., LISP and Its Semantics, Comunicaciones Tecnicas (in Spanish), Blue Series: monographs, Center for Research in Applied Mathematics and Systems, National University of Mexico, Mexico City, Mexico, June 1978.

Phillips, Jorge V., Abstract Programming, Colombia Electronica (in Spanish), Volume IV-2 1979, Apartado Aereo 1825, Bogota, Colombia, June 1979.

126. Polak, Wolfgang, An Exercise in Automatic Program Verification, IEEE Trans. on Software Engineering, Vol. SE-5, Sept. 1979.

127. Quam, Lynn, Robert Tucker, Botond Eross, J. Veverka and Carl Sagan, Mariner 9 Picture Differencing at Stanford, Sky and Telescope, August 1973.

128. Rubin, Jeff, Computer Communication via the Dial-up Network, Minutes of the DECsystem-10 Spring-75 DECUS Meeting, Digital Equipment Computer Users Society, Maynard, Mass., 1975.

129. Sagan, Carl, J. Veverka, P. Fox, R. Dubisch, R. French, P. Gierasch, L. Quam, J. Lederberg, E. Levinthal, R. Tucker, B. Eross, J. Pollack, Variable Features on Mars II: Mariner 9 Global Results, J. Geophys. Res., 78, 4163-4196, 1973.


130. Samet, Hanan, Proving the Correctness of Heuristically Optimized Code, Comm. ACM, July 1978.

131. Schank, Roger C., Neil Goldman, Charles J. Rieger III, Chris Riesbeck, MARGIE: Memory, Analysis, Response Generation and Inference on English, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

132. Schank, Roger C., Kenneth Colby (eds.), Computer Models of Thought and Language, W. H. Freeman, San Francisco, 1973.

133. Schank, Roger, The Conceptual Analysis of Natural Language, in R. Rustin (ed.), Natural Language Processing, Algorithmics Press, New York, 1973.

134. Schank, Roger, Charles J. Rieger III, Inference and Computer Understanding of Natural Language, Artificial Intelligence J., Vol. 5, No. 4, Winter 1974.

135. Schank, Roger C., Neil M. Goldman, Charles J. Rieger III, Christopher K. Riesbeck, Inference and Paraphrase by Computer, J. ACM, Vol. 22, No. 3, July 1975.

136. Shaw, David E., William R. Swartout, C. Cordell Green, Inferring LISP Programs from Examples, Adv. Papers of 4th Int. Joint Conference on Artificial Intelligence, Vol. 1, pp. 260-267, September 1975.

137. Shortliffe, Edward H., Davis, Randall, Axline, Stanton G., Buchanan, Bruce G., Green, C. Cordell, and Cohen, Stanley N., Computer-Based Consultations in Clinical Therapeutics: Explanation and Rule Acquisition Capabilities of the MYCIN System, Computers and Biomedical Research, Volume 8, Number 3, June 1975, pages 303-320.


138. Smith, David Canfield, Horace J. Enea, Backtracking in MLISP2, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

139. Smith, Leland, Editing and Printing Music by Computer, J. Music Theory, Fall 1973.

140. Sobel, Irwin, On Calibrating Computer Controlled Cameras for Perceiving 3-D Scenes, Proc. Third Int. Joint Conf. on Artificial Intelligence, Stanford U., 1973; also in Artificial Intelligence J., Vol. 5, No. 2, Summer 1974.

141. Suzuki, N., Verifying Programs by Algebraic and Logical Reduction, Proc. Int. Conf. on Reliable Software, Los Angeles, Calif., April 1975, in ACM SIGPLAN Notices, Vol. 10, No. 6, pp. 473-481, June 1975.

142. Tesler, Lawrence G., Horace J. Enea, David C. Smith, The LISP70 Pattern Matching System, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

143. Thomas, Arthur J., Puccetti on Machine Pattern Recognition, Brit. J. Philosophy of Science, 26:227-232, 1975.

144. Tracy, Kathleen B., Elaine C. Montague, Richard P. Gabriel, Barbara E. Kent, Computer-Assisted Diagnosis of Orthopedic Gait Disorders, Physical Therapy, Vol. 59, No. 3, pp. 268-277, March 1979.

145. Veverka, J., Carl Sagan, Lynn Quam, R. Tucker, B. Eross, Variable Features on Mars III: Comparison of Mariner 1969 and Mariner 1971 Photography, Icarus, 21, 317-368, 1974.


146. von Henke, F. W., D. C. Luckham, A Methodology for Verifying Programs, Proc. Int. Conf. on Reliable Software, Los Angeles, Calif., April 1975, in ACM SIGPLAN Notices, Vol. 10, No. 6, pp. 156-164, June 1975.

147. Wilks, Yorick, The Stanford Machine Translation and Understanding Project, in R. Rustin (ed.), Natural Language Processing, Algorithmics Press, New York, 1973.

148. Wilks, Yorick, Understanding Without Proofs, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

149. Wilks, Yorick, Annette Herskovits, An Intelligent Analyser and Generator of Natural Language, Proc. Int. Conf. on Computational Linguistics, Pisa, Italy; Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

150. Wilks, Yorick, The Computer Analysis of Philosophical Arguments, CIRPHO, Vol. 1, No. 1, September 1973.

151. Wilks, Yorick, An Artificial Intelligence Approach to Machine Translation, in Schank and Colby (eds.), Computer Models of Thought and Language, W. H. Freeman, San Francisco, 1973.

152. Wilks, Yorick, One Small Head - Models and Theories in Linguistics, Foundations of Language, Vol. 10, No. 1, January 1974.

153. Wilks, Yorick, Preference Semantics, in E. Keenan (ed.), Proc. 1973 Colloquium on Formal Semantics of Natural Language, Cambridge, U.K., 1974.

154. Wilks, Yorick, The XGP Computer-driven Printer at Stanford, Bulletin of Assoc. for Literary and Linguistic Computing, Vol. 2, No. 2, Summer 1974.

155. Wilks, Y., Semantic Procedures and Information, in Studies in the Foundations of Communication, R. Posner (ed.), Springer, Berlin, forthcoming.

156. Wilks, Yorick, A Preferential, Pattern-Seeking Semantics for Natural Language Inference, Artificial Intelligence J., Vol. 6, No. 1, Spring 1975.

157. Wilks, Y., An Intelligent Analyser and Understander of English, Comm. ACM, May 1975.

158. Winograd, Terry, A Process Model of Language Understanding, in Schank and Colby (eds.), Computer Models of Thought and Language, W. H. Freeman, San Francisco, 1973.

159. Winograd, Terry, The Processes of Language Understanding, in Benthall (ed.), The Limits of Human Nature, Allen Lane, London, 1973.

160. Winograd, Terry, Language and the Nature of Intelligence, in G. J. Dalenoort (ed.), Process Models for Psychology, Rotterdam Univ. Press, 1973.

161. Winograd, Terry, Breaking the Complexity Barrier (again), Proc. SIGPLAN-SIGIR Interface Meeting, 1973; ACM SIGPLAN Notices, 10:1, pp. 13-30, January 1975.

162. Winograd, Terry, Artificial Intelligence - When Will Computers Understand People?, Psychology Today, May 1974.

163. Winograd, Terry, Frame Representations and the Procedural - Declarative Controversy, in D. Bobrow and A. Collins, eds., Representation and Understanding: Studies in Cognitive Science, Academic Press, 1975.

164. Winograd, Terry, Reactive Systems, Coevolution Quarterly, September 1975.


165. Winograd, Terry, Parsing Natural Language via Recursive Transition Net, in Raymond Yeh (ed.), Applied Computation Theory, Prentice-Hall, 1976.

166. Winograd, Terry, Computer Memories - a Metaphor for Human Memory, in Charles Cofer (ed.), Models of Human Memory, Freeman, 1976.

167. Yakimovsky, Yoram, Jerome A. Feldman, A Semantics-Based Decision Theoretic Region Analyzer, Proceedings of the Third International Joint Conference on Artificial Intelligence, Stanford University, August 1973.

168. Yolks, Warwick, There's Always Room at the Top, or How Frames Gave My Life Meaning, SIGART Newsletter, No. 53, August 1975.


Appendix D
Abstracts of Recent Reports

Abstracts are given here for Artificial Intelligence Memos published since 1976. For earlier years, see our ten-year report [Memo AIM-228] or diskfile AIMS.OLD [BIB,DOC] @SU-AI. The abstracts below are kept in diskfile AIMS [BIB,DOC] @SU-AI and the titles of both earlier and more recent A.I. Memos are in AIMLST [BIB,DOC] @SU-AI.

In the listing below, there are up to three numbers given for each report: an "AIM" number on the left, a "CS" (Computer Science) number in the middle, and an NTIS stock number (often beginning "AD...") on the right. Special symbols preceding the "AIM" number indicate availability at this writing, as follows:

+ hard copy or microfiche,
- microfiche only,
* out-of-stock.

If there is no special symbol, then it is available in hard copy only. Reports that are in stock may be requested from:

Documentation Services
Artificial Intelligence Laboratory
Stanford University
Stanford, California 94305

Rising costs and restrictions on the use of research funds for printing reports have made it necessary to charge for reports at their replacement cost. By doing so, we will be able to reprint popular reports rather than simply declaring them "out of print".

Alternate Sources

Alternatively, reports may be ordered (for a nominal fee) in either hard copy or microfiche from:

National Technical Information Service
P. O. Box 1553
Springfield, Virginia 22161

If there is no NTIS number given, then they may or may not have the report. In requesting copies in this case, give them both the "AIM-" and "CS-nnn" numbers, with the latter enlarged into the form "STAN-CS-yy-nnn", where "yy" is the last two digits of the year of publication (for example, CS-558, published in 1976, becomes STAN-CS-76-558).

Memos that are also Ph.D. theses are so marked below and may be ordered from:

University Microfilms
P. O. Box 1346
Ann Arbor, Michigan 48106

For people with access to the ARPA Network, the texts of some A.I. Memos are stored online in the Stanford A.I. Laboratory disk file. These are designated below by "Diskfile: <file name>" appearing in the header.

- AIM-277 CS-542 ADA027454
Zohar Manna, Adi Shamir,
The Theoretical Aspects of the Optimal Fixedpoint,
24 pages, March 1976.

In this paper we define a new type of fixedpoint of recursive definitions and investigate some of its properties. This optimal fixedpoint (which always uniquely exists) contains, in some sense, the maximal amount of "interesting" information which can be extracted from the recursive definition, and it may be strictly more defined than the program's least fixedpoint. This fixedpoint can be the basis for assigning a new semantics to recursive programs.

+ AIM-278 CS-549 ADA027455
David Luckham, Norihisa Suzuki,
Automatic Program Verification V: Verification-Oriented Proof Rules for Arrays, Records and Pointers,
48 pages, March 1976. Cost: $3.05

A practical method is presented for automating in a uniform way the verification of Pascal programs that operate on the standard Pascal data structures ARRAY, RECORD, and POINTER. New assertion language primitives are introduced for describing computational effects of operations on these data structures. Axioms defining the semantics of the new primitives are given. Proof rules for standard Pascal operations on pointer variables are then defined in terms of the extended assertion language. Similar rules for records and arrays are special cases. An extensible axiomatic rule for the Pascal memory allocation operation, NEW, is also given.

These rules have been implemented in the Stanford Pascal program verifier. Examples illustrating the verification of programs which operate on list structures implemented with pointers and records are discussed. These include programs with side-effects.
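To make the flavor of such proof rules concrete, here is the standard treatment of array assignment from the verification literature (a sketch of the well-known rule, not necessarily the exact formulation used in this memo). Assignment to an element is modeled as assignment of a new value to the whole array:

\[
\{\, P[\,a \leftarrow \mathrm{update}(a, e, t)\,] \,\}\;\; a[e] := t \;\;\{\, P \,\}
\]

where update(a, e, t) denotes the array that agrees with a everywhere except at index e, where its value is t, and P[a <- ...] denotes P with every occurrence of a replaced by that expression. Treating the array as a single value is what lets ordinary assignment-style reasoning extend to subscripted variables.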

- AIM-279 CS-552
Norihisa Suzuki,
Automatic Verification of Programs with Complex Data Structures,
Thesis: Ph.D. in Computer Science,
194 pages, February 1976.

The problem of checking whether programs work correctly or not has been troubling programmers since the earliest days of computing. Studies have been conducted to formally define semantics of programming languages and derive proof rules for correctness of programs.

Some experimental systems have been built to mechanically verify programs based on these proof rules. However, these systems are yet far from attacking real programs in a real environment. Many problems covering the ranges from theory to artificial intelligence and programming languages must be solved in order to make program verification a practical tool. First, we must be able to verify a complete practical programming language. One of the important features of real programming languages which is not treated in early experimental systems is complex data structures. Next, we have to study specification methods. In order to verify programs we have to express what we intend to do by the programs. In many cases we are not sure what we want to verify and how we should express them. These specification methods are not independent of the proof rules. Third, we have to construct an efficient prover so that we can interact with the verification process. It is expected that repeated verification attempts will be necessary because programs and specifications may have errors at first try. So the time to complete one verification attempt is very important in a real environment.

We have chosen Pascal as the target language. The semantics and proof rules are studied by Hoare & Wirth and Igarashi, London & Luckham. However, they have not treated complex data structures obtained from arrays, records, and pointers. In order to express the state of the data structures concisely and express the effects of statements we introduced special assertion language primitives and new proof rules. We defined new methods of introducing functions and predicates to write assertions so that we can express simplification rules and proof search strategies. We introduced a special language to document properties of these functions and predicates. These methods enable users to express assertions in natural ways so that verification becomes easier. The theorem prover is constructed so that it will be efficient for proving a type of formulas which appear very often as verification conditions.

We have successfully verified many programs. Using our new proof rules and specification methods we have proved properties of sorting programs such as permutation and stability which have been thought to be hard to prove. We see no theoretical as well as practical problems in verifying sorting programs. We have also verified programs which manipulate pointers. These programs change their data structures so that usually verification conditions tend to be complex and hard to read. Some study about the complexity problem seems necessary.

The verifier has been used extensively by various users, and is probably the most widely used verifier implemented so far. There is yet a great deal of research necessary in order to fill the gap between the current verifier and standard programming tools like editors and compilers.


This dissertation was submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

+ AIM-280 CS-555
David D. Grossman,
Monte Carlo Simulation of Tolerancing in Discrete Parts Manufacturing and Assembly,
25 pages, May 1976. Cost: $2.40

The assembly of discrete parts is strongly affected by imprecise components, imperfect fixtures and tools, and inexact measurements. It is often necessary to design higher precision into the manufacturing and assembly process than is functionally needed in the final product. Production engineers must trade off between alternative ways of selecting individual tolerances in order to achieve minimum cost, while preserving product integrity. This paper describes a comprehensive Monte Carlo method for systematically analysing the stochastic implications of tolerancing and related forms of imprecision. The method is illustrated by four examples, one of which is chosen from the field of assembly by computer controlled manipulators.
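The following is a minimal sketch of this kind of analysis, with the part dimensions, tolerances, and fit criterion all invented for illustration (they are not taken from the report): it estimates by repeated sampling how often a toleranced pin fails to fit a toleranced hole.

import random

def assembly_failure_rate(trials=100_000):
    # Hypothetical example: pin nominal diameter 9.95, hole nominal
    # diameter 10.00, each varying with a standard deviation chosen so
    # that three sigma roughly equals a +/-0.03 tolerance band.
    failures = 0
    for _ in range(trials):
        pin = random.gauss(9.95, 0.01)
        hole = random.gauss(10.00, 0.01)
        if pin >= hole:        # interference: the pin does not fit
            failures += 1
    return failures / trials

print(assembly_failure_rate())

Tightening either tolerance lowers the estimated failure rate, which is exactly the cost-versus-precision trade-off the method is meant to quantify.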

+ AIM-281.1 CS-558 AD-A042 507
Zohar Manna, Richard Waldinger,
Is 'sometime' sometimes better than 'always'? Intermittent assertions in proving program correctness,
41 pages, June 1976, revised March 1977. Cost: $2.85

This paper explores a technique for proving the correctness and termination of programs simultaneously. This approach, which we call the intermittent-assertion method, involves documenting the program with assertions that must be true at some time when control is passing through the corresponding point, but that need not be true every time. The method, introduced by Knuth and further developed by Burstall, promises to provide a valuable complement to the more conventional methods.

We first introduce and illustrate the technique with a number of examples. We then show that a correctness proof using the invariant assertion method or the subgoal induction method can always be expressed using intermittent assertions instead, but that the reverse is not always the case. The method can also be used just to prove termination, and any proof of termination using the conventional well-founded sets approach can be rephrased as a proof using intermittent assertions. Finally, we show how the method can be applied to prove the validity of program transformations and the correctness of continuously operating programs.
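As a small illustration of the distinction (a hypothetical example, not one of the paper's), the first comment below states an invariant assertion, which must hold on every pass through its control point, while the second states an intermittent assertion, which need only hold on some pass:

def find_min(a):
    # Find the minimum of a non-empty list a.
    m = a[0]
    for i in range(1, len(a)):
        # Invariant ("always"): m == min(a[:i]) holds on every pass.
        assert m == min(a[:i])
        m = min(m, a[i])
        # Intermittent ("sometime"): m == min(a) holds on at least one
        # pass through this point -- the pass that visits the minimum
        # element -- and that single visit is enough for the proof.
    return m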

This is a revised and simplified version of a previous paper with the same title (AIM-281, June 1976).

+ AIM-282 CS-560
Russell Taylor,
Synthesis of Manipulator Control Programs from Task-level Specifications,
Thesis: Ph.D. in Computer Science,
229 pages, July 1976. Cost: $8.10

This research is directed towards automatic generation of manipulator control programs from task-level specifications. The central assumption is that much manipulator-level coding is a process of adapting known program constructs to particular tasks, in which coding decisions are made by well-defined computations based on planning information. For manipulator programming, the principal elements of planning information are: (1) descriptive information about the objects being manipulated; (2) situational information describing the execution-time environment; and (3) action information defining the task and the semantics of the execution-time environment.

A standard subtask in mechanical assembly, insertion of a pin into a hole, is used to focus the technical issues of automating manipulator coding decisions. This task is first analyzed from the point of view of a human programmer writing in the target language, AL, to identify the specific coding decisions required and the planning information required to make them. Then, techniques for representing this information in a computationally useful form are developed. Objects are described by attribute graphs, in which the nodes contain shape information, the links contain structural information, and properties of the links contain location information. Techniques are developed for representing object locations by parameterized mathematical expressions in which free scalar variables correspond to degrees of freedom and for deriving such descriptions from symbolic relations between object features. Constraints linking the remaining degrees of freedom are derived and used to predict maximum variations. Differential approximations are used to predict errors in location values. Finally, procedures are developed which use this planning information to generate AL code automatically.

The AL system itself performs a number of coding functions not normally found in algebraic compilers. These functions and the planning information required to support them are also discussed.

- AIM-283 CS-552
Randall Davis,
Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases,
Thesis: Ph.D. in Computer Science,
304 pages, July 1976.

The creation and management of large knowledge bases has become a central problem of artificial intelligence research as a result of two recent trends: an emphasis on the use of large stores of domain specific knowledge as a base for high performance programs, and a concentration on problems taken from real world settings. Both of these mean an emphasis on the accumulation and management of large collections of knowledge, and in many systems embodying these trends much time has been spent on building and maintaining such knowledge bases. Yet there has been little discussion or analysis of the concomitant problems. This thesis attempts to define some of the issues involved, and explores steps taken toward solving a number of the problems encountered. It describes the organization, implementation, and operation of a program called TEIRESIAS, designed to make possible the interactive transfer of expertise from a human expert to the knowledge base of a high performance program, in a dialog conducted in a restricted subset of natural language.

The two major goals set were (i) to make it possible for an expert in the domain of application to "educate" the performance program directly, and (ii) to ease the task of assembling and maintaining large amounts of knowledge.

The central theme of this work is the exploration and use of what we have labelled meta level knowledge. This takes several different forms as its use is explored, but can be summed up generally as "knowing what you know". It makes possible a system which has both the capacity to use its knowledge directly, and the ability to examine it, abstract it, and direct its application.

We report here on the full extent of the capabilities it makes possible, and document cases where its lack has resulted in significant difficulties. Chapter 3 describes efforts to enable a program to explain its actions, by giving it a model of its control structure and an understanding of its representations. Chapter 5 documents the use of abstracted models of knowledge (rule models) as a guide to acquisition. Chapter 6 demonstrates the utility of describing to a program the structure of its representations (using data structure schemata). Chapter 7 describes the use of strategies in the form of meta rules, which contain knowledge about the use of knowledge.

- AIM-284 CS-567
Rafael Finkel,
Constructing and Debugging Manipulator Programs,
Thesis: Ph.D. in Computer Science,
171 pages, August 1976.


This thesis presents results of work done at the Stanford Artificial Intelligence Laboratory in the field of robotics. The goal of the work is to program mechanical manipulators to accomplish a range of tasks, especially those found in the context of automated assembly. The thesis has three chapters describing significant work in this domain. The first chapter is a textbook that lays a theoretical framework for the principal issues involved in computer control of manipulators, including types of manipulators, specification of destinations, trajectory specification and planning, methods of interpolation, force feedback, force application, adaptive control, collision avoidance, and simultaneous control of several manipulators. The second chapter is an implementation manual for the AL manipulator programming language. The goals of the language are discussed, the language is defined, the compiler described, and the execution environment detailed. The language has special facilities for condition monitoring, data types that represent coordinate systems, and affixment structures that allow coordinate systems to be linked together. Programmable side effects play a large role in the implementation of these features. This chapter closes with a detailed programming example that displays how the constructs of the language assist in formulating and encoding the manipulation task. The third chapter discusses the problems involved in programming in the AL language, including program preparation, compilation, and especially debugging. A debugger, ALAID, is designed to make use of the complex environment of AL. Provision is made to take advantage of the multiple-processor, multiple-process, real-time, interactive nature of the problem. The principal conclusion is that the debugger can fruitfully act as a uniform supervisor for the entire process of program preparation and as the means of communication between cooperating processors.

- AIM-285 CS-568 PB-259 130/3WC
T. O. Binford, D. D. Grossman, C. R. Lui, R. C. Bolles, R. A. Finkel, M. S. Mujtaba, M. D. Roderick, B. E. Shimano, R. H. Taylor, R. H. Goldman, J. P. Jarvis, V. D. Scheinman, T. A. Gafford,
Exploratory Study of Computer Integrated Assembly Systems, Progress Report 3,
336 pages, August 1976.

The Computer Integrated Assembly Systems project is concerned with developing the software technology of programmable assembly devices, including computer controlled manipulators and vision systems. A complete hardware system has been implemented that includes manipulators with tactile sensors and TV cameras, tools, fixtures, and auxiliary devices, a dedicated minicomputer, and a time-shared large computer equipped with graphic display terminals. An advanced software system called AL has been developed that can be used to program assembly applications. Research currently underway includes refinement of AL, development of improved languages and interactive programming techniques for assembly and vision, extension of computer vision to areas which are currently infeasible, geometric modeling of objects and constraints, assembly simulation, control algorithms, and adaptive methods of calibration.

+ AIM-285.4 CS-568 PB-259 130/3WC
T. O. Binford, C. R. Lui, G. Gini, M. Gini, I. Glaser, T. Ishida, M. S. Mujtaba, E. Nakano, H. Nabavi, E. Panofsky, B. E. Shimano, R. Goldman, V. D. Scheinman, D. Schmelling, T. A. Gafford,
Exploratory Study of Computer Integrated Assembly Systems, Progress Report 4,
255 pages, June 1977. Cost: $8.85

The Computer Integrated Assembly Systems project is concerned with developing the software technology of programmable assembly devices. A primary part of the research has been designing and building the AL language for assembly. A first level version of AL is now implemented and debugged, with user interfaces. Some of the steps involved in completing the system are described. The AL parser has been completed and is documented in this report. A preliminary interface with vision is in operation. Several hardware projects to support software development have been completed. One of the two Stanford arms has been rebuilt. An electronic interface for the other arm has been completed. Progress on other hardware aspects of the AL systems is reported.

Several extensions to AL are described. A new interactive program for building models by teaching is running and undergoing further development. Algorithms for force compliance have been derived; a software system for force compliance has been implemented and is running in the AL runtime system. New algorithms have been derived for cooperative manipulation using two manipulators. Preliminary results are described for path calculation; these results are steps along the way to a runtime path calculator which will be important in making an export version of AL.

Results are described in analysis of several complex assemblies. These results show that two manipulators are necessary in a significant fraction of assembly operations.

Studies are described which focus on making AL meet the realities of industrial research and production use. Results of a questionnaire of leading industrial and research laboratories are presented. A summary is presented of the Workshop on Software for Assembly, held immediately before the NSF-RANN Conference at IITRI, Nov. 1976.

- AIM-286 CS-570
Douglas Lenat,
AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search,
Thesis: Ph.D. in Computer Science,
350 pages, July 1976.

A program, called "AM", is described which models one aspect of elementary mathematics research: developing new concepts under the guidance of a large body of heuristic rules. "Mathematics" is considered as a type of intelligent behavior, not as a finished product.

+ AIM-287 CS-571
Michael Roderick,
Discrete Control of a Robot Arm,
Thesis: Engineer in Electrical Engineering,
98 pages, August 1976. Cost: $4.45

The primary goal of this thesis was to determine the feasibility of operating the Stanford robot arm at reduced sample rates. A secondary goal was to reduce the effects of variations in inertia and sampling rates on the control system's stability.

A discrete arm model was initially developed to illustrate the effects of inertia and sampling rate variations on the present control system. Modifications were then suggested for reducing these effects. Finally, a method was demonstrated for reducing the arm sampling rate from its present value of 60 hertz to approximately 45 hertz without significantly affecting the arm's performance.

+ AIM-288 CS-572
Robert Filman, Richard Weyhrauch,
An FOL Primer,
36 pages, September 1976. Cost: $2.70

This primer is an introduction to FOL, an interactive proof checker for first order logic. Its examples can be used to learn the FOL system, or read independently for a flavor of our style of interactive proof checking. Several example proofs are presented, successively increasing in the complexity of the FOL commands employed.

FOL runs on the computer at the Stanford Artificial Intelligence Laboratory. It can be used over the ARPA net after arrangements have been made with Richard Weyhrauch (network address RWW@SU-AI).



+ AIM-289 CS-574
John Reiser (ed.),
SAIL,
178 pages, August 1976. Cost: $6.70

SAIL is a high-level programming language for the PDP-10 computer. It includes an extended ALGOL 60 compiler and a companion set of execution-time routines. In addition to ALGOL, the language features: (1) flexible linking to hand-coded machine language algorithms, (2) complete access to the PDP-10 I/O facilities, (3) a complete system of compile-time arithmetic and logic as well as a flexible macro system, (4) a high-level debugger, (5) records and references, (6) sets and lists, (7) an associative data structure, (8) independent processes, (9) procedure variables, (10) user modifiable error handling, (11) backtracking, and (12) interrupt facilities.

This manual describes the SAIL language and the execution-time routines for the typical SAIL user: a non-novice programmer with some knowledge of ALGOL. It lies somewhere between being a tutorial and a reference manual.

+ AIM-290 CS-575 AD-A042 494
Nancy W. Smith,
SAIL Tutorial,
54 pages, November 1976. Cost: $3.20

This TUTORIAL is designed for a beginning user of Sail, an ALGOL-like language for the PDP-10. The first part covers the basic statements and expressions of the language; remaining topics include macros, records, conditional compilation, and input/output. Detailed examples of Sail programming are included throughout, and only a minimum of programming background is assumed.

- AIM-291 CS-577 A044713
Bruce Buchanan, Joshua Lederberg, John McCarthy,
Three Reviews of J. Weizenbaum's Computer Power and Human Reason,
28 pages, November 1976.

Three reviews of Joseph Weizenbaum's Computer Power and Human Reason (W.H. Freeman and Co., San Francisco, 1976) are reprinted from other sources. A reply by Weizenbaum to McCarthy's review is also reprinted.

+ AIM-292 CS-580
Terry Winograd,
Towards a Procedural Understanding of Semantics,
30 pages, October 1976. Cost: $2.55

The term "procedural semantics" has been used in a variety of ways, not all compatible, and not all comprehensible. In this paper, I have chosen to apply the term to a broad paradigm for studying semantics (and in fact, all of linguistics). This paradigm has developed in a context of writing computer programs which use natural language, but it is not a theory of computer programs or programming techniques. It is "procedural" because it looks at the underlying structure of language as fundamentally shaped by the nature of processes for language production and comprehension. It is based on the belief that there is a level of explanation at which there are significant similarities between the psychological processes of human language use and the computational processes in computer programs we can construct and study. Its goal is to develop a body of theory at this level. This approach necessitates abandoning or modifying several currently accepted doctrines, including the way in which distinctions have been drawn between "semantics" and "pragmatics" and between "performance" and "competence".

The paper has three major sections. It first lays out the paradigm assumptions which guide the enterprise, and elaborates a model of cognitive processing and language use. It then illustrates how some specific semantic problems might be approached from a procedural perspective, and contrasts the procedural approach with formal structural and truth conditional approaches. Finally, it discusses the goals of linguistic theory and the nature of the linguistic explanation.


Much of what is presented here is a speculation about the nature of a paradigm yet to be developed. This paper is an attempt to be evocative rather than definitive; to convey intuitions rather than to formulate crucial arguments which justify this approach over others. It will be successful if it suggests some ways of looking at language which lead to further understanding.

- AIM-293 CS-581 AD-A042 508
Daniel Bobrow, Terry Winograd,
An Overview of KRL,
40 pages, November 1976.

This paper describes KRL, a Knowledge Representation Language designed for use in understander systems. It outlines both the general concepts which underlie our research and the details of KRL-0, an experimental implementation of some of these concepts. KRL is an attempt to integrate procedural knowledge with a broad base of declarative forms. These forms provide a variety of ways to express the logical structure of the knowledge, in order to give flexibility in associating procedures (for memory and reasoning) with specific pieces of knowledge, and to control the relative accessibility of different facts and descriptions. The formalism for declarative knowledge is based on structured conceptual objects with associated descriptions. These objects form a network of memory units with several different sorts of linkages, each having well-specified implications for the retrieval process. Procedures can be associated directly with the internal structure of a conceptual object. This procedural attachment allows the steps for a particular operation to be determined by characteristics of the specific entities involved.

The control structure of KRL is based on the belief that the next generation of intelligent programs will integrate data-directed and goal-directed processing by using multi-processing. It provides for a priority-ordered multi-process agenda with explicit (user-provided) strategies for scheduling and resource allocation. It provides procedure directories which operate along with process frameworks to allow procedural parameterization of the fundamental system processes for building, comparing, and retrieving memory structures. Future development of KRL will include integrating procedure definition with the descriptive formalism.

+ AIM-294 CS-586 AD-A042 516
Nachum Dershowitz, Zohar Manna,
The Evolution of Programs: A System for Automatic Program Modification,
45 pages, December 1976. Cost: $2.95

An attempt is made to formulate techniques of program modification, whereby a program that achieves one result can be transformed into a new program that uses the same principles to achieve a different goal. For example, a program that uses the binary search paradigm to calculate the square-root of a number may be modified to divide two numbers in a similar manner, or vice versa.

Program debugging is considered as a special case of modification: if a program computes wrong results, it must be modified to achieve the intended results. The application of abstract program schemata to concrete problems is also viewed from the perspective of modification techniques.

We have embedded this approach in a running implementation; our methods are illustrated with several examples that have been performed by it.
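For concreteness, here is a sketch in the spirit of the paper's own square-root/division example (hypothetical code, not the system's actual input or output). The two routines share the binary-search skeleton; only the comparisons that drive the search change:

def sqrt(x, eps=1e-6):
    # Binary-search square root of x >= 0.
    lo, hi = 0.0, 1.0
    while hi * hi < x:         # grow the bracket until it contains sqrt(x)
        hi *= 2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= x:     # test specific to the square-root goal
            lo = mid
        else:
            hi = mid
    return lo

def divide(x, y, eps=1e-6):
    # The same skeleton with only the tests changed now computes
    # x / y for x >= 0 and y > 0.
    lo, hi = 0.0, 1.0
    while hi * y < x:          # grow the bracket until it contains x / y
        hi *= 2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * y <= x:       # test specific to the division goal
            lo = mid
        else:
            hi = mid
    return lo

Transforming one routine into the other is the kind of modification the system is intended to automate.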

+ AIM-295 CS-591
Robert C. Bolles,
Verification Vision Within a Programmable Assembly System,
Thesis: Ph.D. in Computer Science,
245 pages, December 1976. Cost: $8.55

The long-range goal of this research is to simplify visual information processing by computer. The research reported in this thesis concentrates on a subclass of visual information processing referred to as verification vision (abbreviated VV). VV includes a significant portion of the visual feedback tasks required within programmable assembly. There are several types of information available in VV tasks that can facilitate the solution of such tasks. The main question addressed in this thesis is how to use all of this information to perform the task efficiently. Two steps are involved in answering this question: (1) formalize the types of tasks, available information, and quantities of interest and (2) formulate combination rules that use the available information to estimate the quantities of interest.

The combination rules that estimate confidences are based upon Bayes' theorem. They are general enough to handle operators that are not completely reliable, i.e., operators that may find any one of several features or a surprise. The combination rules that estimate precisions are based upon a least-squares technique. They use the expected precisions of the operators to check the structural consistency of a set of matches and to estimate the resulting precisions about the points of interest. An interactive VV system based upon these ideas has been implemented. It makes it possible for a person who is not an expert in vision research to program visual feedback tasks. This system helps the programmer select potentially useful operator/feature pairs, provides a training session to gather statistics on the behavior of the operators, automatically ranks the operator/feature pairs according to their expected contributions, and performs the desired task. The VV system has also been interfaced to the AL control system for the mechanical arms and has been tested on tasks that involve a combination of touch, force, and visual feedback.
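A minimal sketch of a Bayes'-theorem confidence-combination step of the kind described, with the function and all numbers invented for illustration (the thesis's actual rules are more general):

def update_confidence(prior, p_detect_if_present, p_detect_if_absent, detected):
    # One Bayes step: revise the confidence that a feature is present
    # after an unreliable operator reports a detection or a miss.
    if detected:
        like_present, like_absent = p_detect_if_present, p_detect_if_absent
    else:
        like_present, like_absent = 1 - p_detect_if_present, 1 - p_detect_if_absent
    evidence = prior * like_present + (1 - prior) * like_absent
    return prior * like_present / evidence

# A 70% prior confidence and a detection by an operator with a 90% hit
# rate and a 10% false-alarm rate yields roughly 95% confidence:
print(update_confidence(0.7, 0.9, 0.1, True))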

+ AIM-296 CS-592
Robert Cartwright,
Practical Formal Semantic Definition and Verification Systems,
Thesis: Ph.D. in Computer Science,
158 pages, December 1976. Cost: $6.15

Despite the fact that computer scientists have developed a variety of formal methods for proving computer programs correct, the formal verification of a non-trivial program is still a formidable task. Moreover, the notion of proof is so imprecise in most existing verification systems that the validity of the proofs generated is open to question. With an aim toward rectifying these problems, the research discussed in this dissertation attempts to accomplish the following objectives:

1. To develop a programming language which is sufficiently powerful to express many interesting algorithms clearly and succinctly, yet simple enough to have a tractable formal semantic definition.

2. To completely specify both proof theoretic and model theoretic formal semantics for this language using the simplest possible abstractions.

3. To develop an interactive program verification system for the language which automatically performs as many of the straightforward steps in a verification as possible.

The first part of the dissertation describes the motivation for creating TYPED LISP, a variant of PURE LISP including a flexible data type definition facility allowing the programmer to create arbitrary recursive types. It is argued that a powerful data type definition facility not only simplifies the task of writing programs, but reduces the complexity of the complementary task of verifying those programs.

The second part of the thesis formally defines the semantics of TYPED LISP. Every function symbol defined in a program P is identified with a function symbol in a first order predicate calculus language Lp. Both a standard model Mp and a natural deduction system Np are defined for the language Lp. In the standard model, each function symbol is interpreted by the least call-by-value fixed-point of its defining equation. An informal meta-mathematical proof of the consistency of the model Mp and the deductive system Np is given.


The final part of the dissertation describes an interactive verification system implementing the natural deduction system Np.

The verification system includes:

1. A subgoaler which applies rules specified by the user to reduce the proof of the current goal (or theorem) to the proof of one or more subgoals.

2. A powerful simplifier which automatically proves many non-trivial goals by utilizing user-supplied lemmas as well as the rules of Np.

With a modest amount of user guidance, the verification system has proved a number of interesting, non-trivial theorems including the total correctness of an algorithm which sorts by successive merging, the total correctness of the McCarthy-Painter compiler for expressions, the termination of a unification algorithm and the equivalence of an iterative algorithm and a recursive algorithm for counting the leaves of a tree. Several of these proofs are included in an appendix.

- AIM-297 CS-610
Terry Winograd,
A Framework for Understanding Discourse,
24 pages, April 1977.

There is a great deal of excitement in linguistics, cognitive psychology, and artificial intelligence today about the potential of understanding discourse. Researchers are studying a group of problems in natural language which have been largely ignored or finessed in the mainstream of language research over the past fifteen years. They are looking into a wide variety of phenomena, and although results and observations are scattered, it is apparent that there are many interrelationships. While the field is not yet at a stage where it is possible to lay out a precise unifying theory, this paper attempts to provide a beginning framework for studying discourse. Its main goal is to establish a general context and give a feeling for the problems through examples and references. Its four sections attempt to:

Delimit the range of problems covered by the term "discourse."

Characterize the basic structure of natural language based on a notion of communication.

Propose a general approach to formalisms for describing the phenomena and building theories about them.

Lay out an outline of the different schemas involved in generating and comprehending language.

- AIM-298 CS-611 ADA046703
Zohar Manna, Richard Waldinger,
The Logic of Computer Programming,
90 pages, June 1977.

Techniques derived from mathematical logic promise to provide an alternative to the conventional methodology for constructing, debugging, and optimizing computer programs. Ultimately, these techniques are intended to lead to the automation of many of the facets of the programming process.

In this paper, we provide a unified tutorial exposition of the logical techniques, illustrating each with examples. We assess the strengths and limitations of each technique as a practical programming aid and report on attempts to implement these methods in experimental systems.

+ AIM-299 CS-614 ADA049760
Zohar Manna, Adi Shamir,
The Convergence of Functions to Fixedpoints of Recursive Definitions,
45 pages, May 1977. Cost: $2.95

The classical method for constructing the least fixedpoint of a recursive definition is to generate a sequence of functions whose initial element is the totally undefined function and which converges to the desired least fixedpoint.


This method, due to Kleene, cannot be generalized to allow the construction of other fixedpoints.

In this paper we present an alternate definition of convergence and a new fixedpoint access method of generating sequences of functions for a given recursive definition. The initial function of the sequence can be an arbitrary function, and the sequence will always converge to a fixedpoint that is "close" to the initial function. This defines a monotonic mapping from the set of partial functions onto the set of all fixedpoints of the given recursive definition.
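In symbols, the classical construction referred to above generates (in one standard notation, not necessarily the paper's own)

\[
f_0 = \Omega, \qquad f_{i+1} = \tau[f_i], \qquad f_{\mathrm{lfp}} = \lim_{i \to \infty} f_i,
\]

where \Omega is the totally undefined function and \tau is the functional determined by the recursive definition. The method of this paper replaces \Omega by an arbitrary initial function and redefines convergence so that the resulting sequence approaches a fixedpoint "close" to that starting function.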

- AIM-300 CS-617
Terry Winograd,
On some Contested Suppositions of Generative Linguistics about the Scientific Study of Language,
25 pages, May 1977.

This paper is a response to a recently published paper which asserts that current work in artificial intelligence is not relevant to the development of theories of language. The authors of that paper declare that workers in AI have misconstrued what the goals of an explanatory theory of language should be, and that there is no reason to believe that the development of programs which could understand language in some domain could contribute to the development of such theories. This paper concentrates on the assumptions underlying their view of science and language. It draws on the notion of “scientific paradigms” as elaborated by Thomas Kuhn, pointing out the ways in which views of what a science should be are shaped by unprovable assumptions. It contrasts the procedural paradigm (within which artificial intelligence research is based) to the currently dominant paradigm typified by the work of Chomsky. It describes the ways in which research in artificial intelligence will increase our understanding of human language, and through an analogy with biology, raises some questions about the plausibility of the Chomskian view of language and the science of linguistics.

+ AIM-301 CS-624 ADA044231
Lester Earnest, et al.,
Recent Research in Computer Science,
118 pages, June 1977. Cost: $5.00

This report summarizes recent accomplishments in six related areas: (1) basic AI research and formal reasoning, (2) image understanding, (3) mathematical theory of computation, (4) program verification, (5) natural language understanding, and (6) knowledge based programming.

+ AIM-302 CS-630 ADA049761
Zohar Manna, Richard Waldinger,
Synthesis: Dreams => Programs,
119 pages, October 1977. Cost: $5.05

Deductive techniques are presented for deriving programs systematically from given specifications. The specifications express the purpose of the desired program without giving any hint of the algorithm to be employed. The basic approach is to transform the specifications repeatedly according to certain rules, until a satisfactory program is produced. The rules are guided by a number of strategic controls. These techniques have been incorporated in a running program synthesis system, called DEDALUS.

Many of the transformation rules represent knowledge about the program's subject domain (e.g. numbers, lists, sets); some represent the meaning of the constructs of the specification language and the target programming language; and a few rules represent basic programming principles. Two of these principles, the conditional-formation rule and the recursion-formation rule, account for the introduction of conditional expressions and of recursive calls into the synthesized program. The termination of the program is ensured as new recursive calls are formed.
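As a hedged illustration of these two rules (an invented example, not one from the paper), a membership test over lists might be synthesized as follows; the case analysis is contributed by conditional formation, the recursive call by recursion formation:

```python
# Hypothetical target program of the LISP-like kind DEDALUS produces,
# written here in Python for readability.
def member(x, l):
    if not l:                  # conditional-formation: empty-list case
        return False
    if x == l[0]:              # conditional-formation: head matches
        return True
    return member(x, l[1:])    # recursion-formation: the subgoal member(x, tail)
                               # is an instance of the original specification
```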

Two extensions of the recursion-formation rule are discussed: a procedure-formation rule, which admits the introduction of auxiliary subroutines in the course of the synthesis process, and a generalization rule, which causes the specifications to be extended to represent a more general problem that is nevertheless easier to solve.

The techniques of this paper are illustrated with a sequence of examples of increasing complexity; programs are constructed for list processing, numerical computation, and sorting. These techniques are compared with the methods of “structured programming” and with recent work on “program transformation”.

The DEDALUS system accepts specifications expressed in a high-level language, including set notation, logical quantification, and a rich vocabulary drawn from a variety of subject domains. The system attempts to transform the specifications into a recursive, LISP-like target program. Over one hundred rules have been implemented, each expressed as a small program in the QLISP language.

- AIM-303 CS-631 ADA050806
Nachum Dershowitz, Zohar Manna,
Inference Rules for Program Annotation,
46 pages, October 1977.

Methods are presented whereby an Algol-like program, given together with its specifications, may be documented automatically. This documentation expresses invariant relationships that hold between program variables at intermediate points in the program, and explains the actual workings of the program regardless of whether the program is correct. Thus this documentation can be used for proving the correctness of the program, or may serve as an aid in the debugging of an incorrect program.

The annotation techniques are formulated as Hoare-like inference rules which derive invariants from the assignment statements, from the control structure of the program, or, heuristically, from suggested invariants. The application of these rules is demonstrated by two examples which have run on our implemented system.
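For flavor, here is a small annotated program of the kind such rules produce documentation for (an invented example, not one of the report's two); the invariant can be derived from the initial assignments and the loop body:

```python
# Hypothetical example: integer division by repeated subtraction.
def divide(a, b):
    assert a >= 0 and b > 0
    q, r = 0, a
    # Invariant derived from the assignments: a == q*b + r and r >= 0.
    while r >= b:
        assert a == q * b + r and r >= 0
        q, r = q + 1, r - b
    # The invariant plus the exit condition gives the correctness statement:
    assert a == q * b + r and 0 <= r < b
    return q, r
```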

+ AIM-304 CS-632 ADA048684
Todd Wagner,
Hardware Verification,
Thesis: PhD in Computer Science,
102 pages, September 1977. Cost: $4.55

Methods for detecting logical errors in computer hardware designs using symbolic manipulation instead of digital simulation are discussed. A non-procedural register transfer language is proposed that is suitable for describing how a digital circuit should perform. This language can also be used to describe each of the components used in the design. Transformations are presented which should enable the designer to either prove or disprove that the set of interconnected components correctly satisfies the specifications for the overall system.

The problem of detecting timing anomalies such as races, hazards, and oscillations is addressed. Also explored are some interesting relationships between the problems of hardware verification and program verification. Finally, the results of using an existing proof checking program on some digital circuits are presented. Although the theorem proving approach is not very efficient for simple circuits, it becomes increasingly attractive as circuits become more complex. This is because the theorem proving approach can use complicated component specifications without reducing them to the gate level.

+ AIM-305 CS-633 ADA048660
William Faught,
Motivation and Intensionality in a Computer Simulation Model,
Thesis: Ph.D. in Computer Science,
104 pages, September 1977. Cost: $4.60

This dissertation describes a computer simulation model of paranoia. The model mimics the behavior of a patient participating in a psychiatric interview by answering questions, introducing its own topics, and responding to negatively-valued (e.g., threatening or shame-producing) situations.


The focus of this work is on the motivational mechanisms required to instigate and direct the modelled behavior.

The major components of the model are:

(1) A production system (PS) formalism accounting for the instigation and guidance of behavior as a function of internal (affective) and external (real-world) environmental factors. Each rule in the PS is either an action pattern (AP) or an interpretation pattern (IP). Both may have either affect (emotion) conditions, external variables, or outputs of other patterns as their initial conditions (left-hand sides). The PS activates all rules whose left-hand sides are true, selects the one with the highest affect, and performs the action specified by the right-hand side.

(2) A model of affects (emotions) as an anticipation mechanism based on a small number of basic pain-pleasure factors. Primary activation (raising an affect's strength) occurs when the particular condition for the affect is anticipated (e.g., anticipation of pain for the fear affect). Secondary activation occurs when an internal construct (AP, IP, belief) is used and its associated affect is processed.

(3) A formalism for intensional behavior (directed by internal models) requiring a dual representation of symbol and concept. An intensional object (belief) can be accessed either by sensing it in the environment (concept) or by its name (token). Similarly, an intensional action (intention) can be specified either by its conditions in the immediate environment (concept) or by its name (token).

Issues of intelligence, psychopathological modelling, and artificial intelligence programming are discussed. The paranoid phenomenon is found to be explainable as an extremely skewed use of normal processes. Applications of these constructs are found to be useful in AI programs dealing with error recovery, incompletely specified input data, and natural language specification of tasks to perform.
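A minimal sketch of the rule-selection cycle in component (1), with invented names and a simplified numeric affect model (an illustration, not the dissertation's code):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # left-hand side over the environment
    action: Callable[[dict], None]      # right-hand side
    affect: float                       # strength of the associated affect

def cycle(rules, env):
    """Activate all rules whose left-hand sides hold, then fire the one
    with the highest affect, as the abstract describes."""
    active = [r for r in rules if r.condition(env)]
    if not active:
        return None
    chosen = max(active, key=lambda r: r.affect)
    chosen.action(env)
    return chosen.name

# Example: a threatening question raises fear, which selects a defensive reply.
rules = [
    Rule('answer', lambda e: e.get('question'), lambda e: e.update(reply='answer'), 0.2),
    Rule('deflect', lambda e: e.get('threat'), lambda e: e.update(reply='deflect'), 0.9),
]
env = {'question': True, 'threat': True}
assert cycle(rules, env) == 'deflect'
```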

+ AIM-306 CS-639 ADA053175
Cordell Green, David Barstow,
On Program Synthesis Knowledge,
63 pages, November 1977. Cost: $3.45

This paper presents a body of program synthesis knowledge dealing with array operations, space reutilization, the divide and conquer paradigm, conversion from recursive paradigms to iterative paradigms, and ordered set enumerations. Such knowledge can be used for the synthesis of efficient and in-place sorts including quicksort, mergesort, sinking sort, and bubble sort, as well as other ordered set operations such as set union, element removal, and element addition. The knowledge is explicated to a level of detail such that it is possible to codify this knowledge as a set of program synthesis rules for use by a computer-based synthesis system. The use and content of this set of programming rules is illustrated herein by the methodical synthesis of bubble sort, sinking sort, quicksort, and mergesort.

+ AIM-307 CS-640 ADA053176
Zohar Manna, Richard Waldinger,
Structured Programming Without Recursion,
10 pages, December 1977. Cost: $2.00

There is a tendency in presentations of structured programming to avoid the use of recursion as a repetitive construct, and to favor instead the iterative loop constructs. For instance, in his recent book, “A Discipline of Programming,” Dijkstra bars recursion from his exemplary programming language, declaring that “I don't like to crack an egg with a sledgehammer, no matter how effective the sledgehammer is for doing so.”

In introducing an iterative loop, the advocates of structured programming advise that we first find an invariant assertion and a termination function, and then construct the body of the loop so as to reduce the value of the termination function while maintaining the truth of the invariant assertion. The decision when to introduce a loop, and the choice of an appropriate invariant assertion and termination function, are not dictated by the method, but are left to the intuition of the programmer.

Unfortunately, at each stage in the derivation of a program, there are innumerable conditions and functions that could be adopted as the invariant assertion and termination function of a loop. With so many plausible candidates around, a correct selection requires an act of precognitive insight.

As an alternative, we advocate a method of loop formation in which the loop is represented as a recursive procedure rather than as an iterative construct. A recursive procedure is formed when a subgoal in the program's derivation is found to be an instance of a higher-level goal. The decision to introduce the new procedure, its purpose, and the choice of the termination function are all dictated by the structure of the derivation.

The directness of this recursion-formation approach stems from the use of recursion rather than iteration as a repetitive construct. Recursion is an ideal vehicle for systematic program construction; in avoiding its use, the advocates of structured programming have been driven to less natural means.
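To illustrate the contrast (our example, not the paper's): in the iterative version the invariant and termination function must be supplied from outside, while in the recursive version the structure of the derivation itself dictates the call.

```python
def gcd_iterative(a, b):
    # Invariant (chosen by the programmer): gcd(a, b) is unchanged;
    # termination function (also chosen by the programmer): b.
    while b != 0:
        a, b = b, a % b
    return a

def gcd_recursive(a, b):
    # Recursion formation: the subgoal gcd(b, a % b) is an instance
    # of the top-level goal, so a recursive call is introduced.
    return a if b == 0 else gcd_recursive(b, a % b)
```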

+ AIM-308 CS-641 ADA053184
David Barstow,
Automatic Construction of Algorithms,
Thesis: Ph.D. in Computer Science,
220 pages, December 1977. Cost: $7.85

Despite the wealth of programming knowledge available in the form of textbooks and articles, comparatively little effort has been applied to the codification of this knowledge into machine-usable form. The research reported here has involved the explication of certain kinds of programming knowledge to a sufficient level of detail that it can be used effectively by a machine in the task of constructing concrete implementations of abstract algorithms in the domain of symbolic programming.
Knowledge about several aspects of symbolic programming has been expressed as a collection of four hundred refinement rules. The rules deal primarily with collections and mappings and ways of manipulating such structures, including several enumeration, sorting and searching techniques. The principal representation techniques covered include the representation of sets as linked lists and arrays (both ordered or unordered), and the representation of mappings as tables, sets of pairs, property list markings, and inverted mappings (indexed by range element). In addition to these general constructs, many low-level programming details are covered (such as the use of variables to store values).

To test the correctness and utility of these rules, a computer system (called PECOS) has been designed and implemented. Algorithms are specified to PECOS in a high-level language for symbolic programming. By repeatedly applying rules from its knowledge base, PECOS gradually refines the abstract specification into a concrete implementation in the target language. When several rules are applicable in the same situation, a refinement sequence can be split. Thus, PECOS can actually construct a variety of different implementations for the same abstract algorithm.

PECOS has successfully implemented algorithms in several application domains, including sorting and concept formation, as well as algorithms for solving the reachability problem in graph theory and for generating prime numbers. PECOS's ability to construct programs from such varied domains suggests both the generality of the rules in its knowledge base and the viability of the knowledge-based approach to automatic programming.

+ AIM-309 CS-646
Greg Nelson, Derek Oppen,
Efficient Decision Procedures Based on Congruence Closure,
15 pages, January 1978. Cost: $2.15

We define the notion of the congruence closure of a relation on a graph and give a simple algorithm for computing it. We then give decision procedures for the quantifier-free theory of equality and the quantifier-free theory of LISP list structure, both based on this algorithm. The procedures are fast enough to be practical in mechanical theorem proving: each procedure determines the satisfiability of a conjunction of length n of literals in time O(n^2). We also show that if the axiomatization of the theory of list structure is changed slightly, the problem of determining the satisfiability of a conjunction of literals becomes NP-complete. We have implemented the decision procedures in our simplifier for the Stanford Pascal Verifier.

An earlier version of this paper appeared in the Proceedings of the 18th Annual Symposium on Foundations of Computer Science, Providence, October 1977.
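The idea can be seen in a naive fixpoint sketch (ours, and far less efficient than the paper's graph-based algorithm): represent ground terms as nested tuples, merge equivalence classes with union-find, and propagate congruence until nothing changes.

```python
class CC:
    def __init__(self):
        self.parent = {}

    def find(self, t):                       # union-find with path compression
        self.parent.setdefault(t, t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def union(self, s, t):
        self.parent[self.find(s)] = self.find(t)

    def subterms(self, t, acc):
        acc.add(t)
        if isinstance(t, tuple):             # ('f', arg1, ...) is an application
            for a in t[1:]:
                self.subterms(a, acc)

    def close(self, equations):
        terms = set()
        for s, t in equations:
            self.subterms(s, terms)
            self.subterms(t, terms)
            self.union(s, t)
        changed = True
        while changed:                       # congruence: equal args => equal apps
            changed = False
            apps = [t for t in terms if isinstance(t, tuple)]
            for i, s in enumerate(apps):
                for t in apps[i + 1:]:
                    if (s[0] == t[0] and len(s) == len(t)
                            and self.find(s) != self.find(t)
                            and all(self.find(a) == self.find(b)
                                    for a, b in zip(s[1:], t[1:]))):
                        self.union(s, t)
                        changed = True

# Classic example: f(f(a)) = a and f(f(f(a))) = a together force f(a) = a.
cc = CC()
a = 'a'; fa = ('f', a); ffa = ('f', fa); fffa = ('f', ffa)
cc.close([(ffa, a), (fffa, a)])
assert cc.find(fa) == cc.find(a)
```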

+ AIM-310 CS-651 ADA058601
Nachum Dershowitz, Zohar Manna,
Proving Termination with Multiset Orderings,
33 pages, March 1978. Cost: $2.65

A common tool for proving the termination of programs is the well-founded set, a set ordered in such a way as to admit no infinite descending sequences. The basic approach is to find a termination function that maps the elements of the program into some well-founded set, such that the value of the termination function is continually reduced throughout the computation. All too often, the termination functions required are difficult to find and are of a complexity out of proportion to the program under consideration. However, by providing more sophisticated well-founded sets, the corresponding termination functions can be simplified.

Given a well-founded set S, we consider multisets over S, “sets” that admit multiple occurrences of elements taken from S. We define an ordering on all finite multisets over S that is induced by the given ordering on S. This multiset ordering is shown to be well-founded.

The value of the multiset ordering is that it permits the use of relatively simple and intuitive termination functions in otherwise difficult termination proofs. In particular, we apply the multiset ordering to provide simple proofs of the termination of production systems, programs defined in terms of sets of rewriting rules.
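One standard way to state the ordering (a sketch with invented helper names): M exceeds N when the two multisets differ and every element that N gains is dominated by some element that M loses. Over the naturals, for instance, {5} exceeds {4, 4, 4}, so a step that replaces one element by many smaller ones still decreases.

```python
from collections import Counter

def multiset_greater(m, n):
    """Multiset ordering over a base order: m >> n iff the multisets differ
    and every element n gains is exceeded by some element m loses."""
    m, n = Counter(m), Counter(n)
    gains, losses = n - m, m - n            # multiset differences
    if not gains and not losses:
        return False                        # equal multisets
    return all(any(x > y for x in losses) for y in gains)

assert multiset_greater([5], [4, 4, 4])     # replacing 5 by smaller 4s decreases
assert multiset_greater([3, 1], [3])        # dropping an element also decreases
assert not multiset_greater([4], [5])
```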

+ AIM-311 CS-652
Greg Nelson, Derek C. Oppen,
Simplification by Cooperating Decision Procedures,
20 pages, April 1978. Cost: $2.25

We describe a simplifier for use in program manipulation and verification. The simplifier finds a normal form for any expression over the language consisting of individual variables, the usual boolean connectives, equality, the conditional function cond (denoting if-then-else), the numerals, the arithmetic functions and predicates +, -, and ≤, the LISP constants, functions and predicates nil, car, cdr, cons and atom, the functions store and select for storing into and selecting from arrays, and uninterpreted function symbols. Individual variables range over the union of the reals, the set of arrays, LISP list structure and the booleans true and false.

The simplifier is complete; that is, it simplifies every valid formula to true. Thus it is also a decision procedure for the quantifier-free theory of reals, arrays and list structure under the above functions and predicates.

The organization of the simplifier is based on a method for combining decision procedures for several theories into a single decision procedure for a theory combining the original theories. More precisely, given a set S of functions and predicates over a fixed domain, a satisfiability program for S is a program which determines the satisfiability of conjunctions of literals (signed atomic formulas) whose predicate and function symbols are in S. We give a general procedure for combining satisfiability programs for sets S and T into a single satisfiability program for S ∪ T, given certain conditions on S and T.


The simplifier described in this paper is currently used in the Stanford Pascal Verifier.
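This combination method is the ancestor of theory combination in modern SMT solvers, so the flavor of the decided language can be shown with one of them (a sketch assuming the z3-solver Python package; the particular formula is our invention):

```python
# The formula mixes arithmetic, arrays, and an uninterpreted function,
# as in the paper's language.
from z3 import (Ints, IntSort, Array, Store, Select, Function,
                Solver, Implies, Not, unsat)

x, y = Ints('x y')
a = Array('a', IntSort(), IntSort())
f = Function('f', IntSort(), IntSort())

# Valid claim: if x = y, then f(select(store(a, x, 0), y)) = f(0).
claim = Implies(x == y, f(Select(Store(a, x, 0), y)) == f(0))

s = Solver()
s.add(Not(claim))            # validity check: the negation must be unsatisfiable
assert s.check() == unsat
```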

+ AIM-312 CS-657 ADA065502
John McCarthy, Masahiko Sato, Takeshi Hayashi, Shigeru Igarashi,
On the Model Theory of Knowledge,
11 pages, April 1978. Cost: $2.00

Another language for expressing “knowing that” is given together with axioms and rules of inference and a Kripke type semantics. The formalism is extended to time-dependent knowledge. Completeness and decidability theorems are given. The problem of the wise men with spots on their foreheads and the problem of the unfaithful wives are expressed in the formalism and solved.
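In a Kripke-type semantics of this kind, “agent i knows p” holds at a world exactly when p holds at every world i considers possible from it. A toy evaluation (invented encoding, purely illustrative):

```python
def knows(agent, prop, world, access, valuation):
    """access[agent][world]: worlds the agent considers possible there;
    valuation[world]: the propositions true at that world."""
    return all(prop in valuation[w] for w in access[agent][world])

# Two worlds that differ on 'spot'; A cannot tell them apart, B can.
access = {'A': {1: {1, 2}, 2: {1, 2}}, 'B': {1: {1}, 2: {2}}}
valuation = {1: {'spot'}, 2: set()}
assert not knows('A', 'spot', 1, access, valuation)   # A does not know it
assert knows('B', 'spot', 1, access, valuation)       # B does
```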

+ AIM-313 CS-660
Bruce E. Shimano,
The Kinematic Design and Force Control of Computer Controlled Manipulators,
Thesis: Ph.D. in Mechanical Engineering,
140 pages, March 1978. Cost: $5.65

The research presented in this dissertation concerns two problems involving mechanical manipulators which are operated under computer control: (1) the design and analysis of the structural parameters of manipulators with respect to their effect on work space, and (2) the real time control of manipulators using force feedback.

The positioning properties of mechanical manipulators are considered in terms of the motion of lines which are used to represent the manipulator's links and axes. A systematic method is developed for deriving equations which yield the normal distance, relative angle of twist, and moment between any two axes of rotation of a manipulator or between an axis of rotation and a link axis. This method is applied to the study of manipulators with at most three revolute axes, and conclusions are drawn concerning the dependence of the manipulator's range of motion on the dimensions of its links.

The possible number of extreme positions for a two link mechanism is studied, and it is shown how the selection of certain special link geometries affects the ability of the manipulator to maneuver around obstacles. In addition, a new geometric relationship between the extreme normal distance between the revolute axes of a manipulator and the positions of its intermediate axes is derived. This condition is applied to the development of an efficient means of determining the extreme distance positions given that the geometry of the manipulator is specified. It is also applied to the reverse problem, i.e. of synthesizing the link dimensions of a manipulator given that one or more extreme distances is specified. Solutions to both problems are presented for manipulators containing an arbitrary number of links.

By interpreting the line analysis in terms of forces and torques, the previous results regarding displacements and changes in orientation are shown to be directly applicable to the analysis of joint and link force loads.

+ AIM-314 CS-678
Derek C. Oppen,
Reasoning About Recursively Defined Data Structures,
15 pages, July 1978. Cost: $2.15

A decision algorithm is given for the quantifier-free theory of recursively defined data structures which, for a conjunction of length n, decides its satisfiability in time linear in n. The first-order theory of recursively defined data structures, in particular the first-order theory of LISP list structure (the theory of CONS, CAR and CDR), is shown to be decidable but not elementary recursive.

+ AIM-315 CS-687 ADA065698
Richard W. Weyhrauch,
Prolegomena to a Theory of Formal Reasoning,
41 pages, December 1978. Cost: $2.85

This paper is an introduction to the mechanization of a theory of reasoning. Currently formal systems are out of favor with the AI community. The aim of this paper is to explain how formal systems can be used in AI by explaining how traditional ideas of logic can be mechanized in a practical way. The paper presents several new ideas. Each of these is illustrated by giving simple examples of how the idea is mechanized in the reasoning system FOL. That is, this is not just theory; there is an existing running implementation of these ideas.

In this paper: 1) we show how to mechanize the notion of model using the idea of a simulation structure and explain why this is particularly important to AI, 2) we show how to mechanize the notion of satisfaction, 3) we present a very general evaluator for first order expressions, which subsumes PROLOG and which we propose as a natural way of thinking about logic programming, 4) we show how to formalize metatheory, 5) we describe reflection principles, which connect theories to their metatheories in a way new to AI, 6) we show how these ideas can be used to dynamically extend the strength of FOL by “implementing” subsidiary deduction rules, and how this in turn can be extended to provide a method of describing and proving theorems about heuristics for using these rules, 7) we discuss one notion of what it could mean for a computer to learn and give an example, 8) we describe a new kind of formal system that has the property that it can reason about its own properties, 9) we give examples of all of the above.

- AIM-316 CS-671
Jerrold M. Ginsparg,
Natural Language Processing in an Automatic Programming Domain,
Thesis: Ph.D. in Computer Science,
172 pages, June 1978.

This paper is about communicating with computers in English. In particular, it describes an interface system which allows a human user to communicate with an automatic programming system in an English dialogue.

The interface consists of two parts. The first is a parser called Reader. Reader was designed to facilitate writing English grammars which are nearly deterministic in that they consider a very small number of parse paths during the processing of a sentence. This efficiency is primarily derived from using a single parse structure to represent more than one syntactic interpretation of the input sentence.

The second part of the interface is an interpreter which represents Reader's output in a form that can be used by a computer program without linguistic knowledge. The interpreter is responsible for asking questions of the user, processing the user's replies, building a representation of the program the user's replies describe, and supplying the parser with any of the contextual information or general knowledge it needs while parsing.

+ AIM-317 CS-675
Donald E. Knuth,
Tau Epsilon Chi, A System for Technical Text,
200 pages, September 1978. Cost: $7.30

This is the user manual for TEX, a new document compiler at SU-AI, intended to be an advance in computer typesetting. It is primarily of interest to the Stanford user community and to people contemplating the installation of TEX at their computer center.

+ AIM-318 CS-688
Zohar Manna,
Six Lectures on the Logic of Computer Programming,
54 pages, November 1978. Cost: $3.20

These are notes from six tutorial lectures given at the NSF Regional Conference, Rensselaer Polytechnic Institute, May 29-June 2, 1978. The lectures deal with various aspects of the logic of computer programming: partial correctness of programs, termination of programs, total correctness of programs, systematic program annotation, synthesis of programs and termination of production systems.


+ AIM-319 CS-689
Charles G. Nelson,
An n^(log n) algorithm for the two-variable-per-constraint linear programming satisfiability problem,
20 pages, December 1978. Cost: $2.25


A simple algorithm is described which determines the satisfiability over the reals of a conjunction of linear inequalities, none of which contains more than two variables. In the worst case the algorithm requires time O(mn^(⌈log₂ n⌉+3) log n), where n is the number of variables and m the number of inequalities. Several considerations suggest that the algorithm may be useful in practice: it is simple to implement, it is fast for some important special cases, and if the inequalities are satisfiable it provides valuable information about their solution set. The algorithm is particularly suited to applications in mechanical program verification. This is also known as Stanford Verification Group Report 10.

+ AIM-320 CS-690 ADA065558
Zohar Manna,
A Deductive Approach to Program Synthesis,
30 pages, December 1978. Cost: $2.55

Program synthesis is the systematic derivation of a program from given specifications. A deductive approach to program synthesis is presented for the construction of recursive programs. This approach regards program synthesis as a theorem-proving task and relies on a theorem-proving method that combines the features of transformation rules, unification, and mathematical induction within a single framework.

+ AIM-321 CS-695 ADA06562
McCarthy, et al.,
Recent Research in Artificial Intelligence and Programming Methodology,
94 pages, November 1978. Cost: $4.35

Summarizes recent research in the following areas: artificial intelligence and formal reasoning, mathematical theory of computation and program synthesis, program verification, image understanding, and knowledge based programming.

+ AIM-322 CS-716
Georgeff, Michael,
A Framework for Control in Production Systems,
35 pages, January 1979. Cost: $2.70

A formal model for representing control in production systems is defined. The formalism allows control to be directly specified independently of the conflict resolution scheme, and thus allows the issues of control and nondeterminism to be treated separately. Unlike previous approaches, it allows control to be examined within a uniform and consistent framework.

It is shown that the formalism provides a basis for implementing control constructs which, unlike existing schemes, retain all the properties desired of a knowledge based system: modularity, flexibility, extensibility and explanatory capacity. Most importantly, it is shown that these properties are not a function of the lack of control constraints, but of the type of information allowed to establish these constraints.

Within the formalism it is also possible to provide a meaningful notion of the power of control constructs. This enables the types of control required in production systems to be examined and the capacity of various schemes to meet these requirements to be determined.

Schemes for improving system efficiency and resolving nondeterminism are examined, and devices for representing such meta-level knowledge are described. In particular, the objectification of control information is shown to provide a better paradigm for problem solving and for talking about problem solving. It is also shown that the notion of control provides a basis for a theory of transformation of production systems, and that this provides a uniform and consistent approach to problems involving subgoal protection.

+ AIM-323 CS-718
Shahid Mujtaba, Ron Goldman,
AL Users' Manual,
136 pages, January 1979. Cost: $5.50

This document describes the current state of the AL system now in operation at the Stanford Artificial Intelligence Laboratory, and teaches the reader how to use it. The system consists of AL, a high-level programming language for manipulator control useful in industrial assembly research; POINTY, an interactive system for specifying representation of parts; and ALAID, an interactive debugger for AL.

+ AIM-324 CS-717
Cartwright, Robert, John McCarthy,
Recursive Programs as Functions in a First Order Theory,
32 pages, March 1979. Cost: $2.60

Pure Lisp style recursive function programs are represented in a new way by sentences and schemata of first order logic. This permits easy and natural proofs of extensional properties of such programs by methods that generalize structural induction. It also systematizes known methods such as recursion induction, subgoal induction, and inductive assertions by interpreting them as first order axiom schemata. We discuss the metatheorems justifying the representation and techniques for proving facts about specific programs. We also give a simpler version of the Goedel-Kleene way of representing computable functions by first order sentences.

+ AIM-325 CS-724
McCarthy, John,
First Order Theories of Individual Concepts and Propositions,
19 pages, March 1979. Cost: $2.25

We discuss first order theories in which individual concepts are admitted as mathematical objects along with the things that reify them. This allows very straightforward formalizations of knowledge, belief, wanting, and necessity in ordinary first order logic without modal operators. Applications are given in philosophy and in artificial intelligence.

+ AIM-326 CS-725
McCarthy, John,
Ascribing Mental Qualities to Machines,
25 pages, March 1979. Cost: $2.40

Ascribing mental qualities like beliefs, intentions and wants to a machine is sometimes correct if done conservatively and is sometimes necessary to express what is known about its state. We propose some new definitional tools for this: definitions relative to an approximate theory and second order structural definitions.

+ AIM-327 CS-727
Filman, Robert Elliot,
The Interaction of Observation and Inference,
Thesis: PhD in Computer Science,
232 pages, April 1979. Cost: $8.20

An intelligent computer program must have both a representation of its knowledge, and a mechanism for manipulating that knowledge in a reasoning process. This thesis is an examination of the problem of formalizing the expression and solution of reasoning problems in a machine manipulable form. It is particularly concerned with analyzing the interaction of the standard form of deductive steps with an observational analogy obtained by performing computation in a semantic model.

Consideration in this dissertation is centered on the world of retrograde analysis chess, a particularly rich domain for both observational tasks and long deductive sequences.

A formalization is embodied in its axioms, and a major portion of this dissertation is devoted to both axiomatizing the rules of chess, and discussing and comparing the representational decisions involved in that axiomatization. Consideration was given to not only the necessity for these particular choices (and possible alternatives) but also the implications of these results for designers of representational systems for other domains.

Using a reasoning system for first order logic, “FOL”, a detailed proof of the solution of a difficult retrograde chess puzzle was constructed. The close correspondence between this “formal” solution to the problem, and an “informal, descriptive” analysis a human might present was shown.

The proof and axioms were then examined for their relevance to general epistemological formalisms. The importance of several different mechanisms was considered. These included: 1) retaining both the notion of “current status” (typically embodied as the current chessboard) and that of “historical state” (a hypothetical game played to reach the desired place), 2) evaluating functional and predicate objects in the semantic model (the chess eye), 3) the value of “induction schemas” as partial solutions to frame problems, 4) the retention of explicit undefined elements within the representation, 5) the importance of manipulating multiple representations of objects, and 6) a comparison of state vector and modal representations.

+ AIM-328 CS-743
Bulnes-Rozas, Juan Bautista,
GOAL: A Goal Oriented Command Language for Interactive Proof Constructions,
Thesis: PhD in Computer Science,
178 pages, June 1979. Cost: $6.70

This thesis represents a contribution to the development of practical computer systems for interactive construction of formal proofs. Beginning with a summary of current research in automatic theorem proving, goal oriented systems, proof checking, and program verification, this work aims at bridging the gap between proof checking and theorem proving.

Specifically, it describes a system GOAL for the First Order Logic proof checker FOL. GOAL helps the user of FOL in the creation of long proofs in three ways: 1) as a facility for structured, top down proof construction; 2) as a semi-automatic theorem prover; and 3) as an extensible environment for the programming of theorem proving heuristics.

In GOAL, the user defines top level goals. These are then recursively decomposed into subgoals. The main part of a goal is a well formed formula that one desires to prove, but goals also include assertions, simplification sets, and other information. Goals can be tried by three different types of elements: matchers, tactics, and strategies.

The matchers attempt to prove a goal directly, that is, without reducing it into subgoals, by calling decision procedures of FOL. Successful application of a matcher causes the proved goal to be added to the FOL proof.

A tactic reduces a goal into one or more subgoals. Each tactic is the inverse of some inference rule of FOL; the goal structure records all the necessary information so that the appropriate inference rule is called when all the subgoals of a goal are proved. In this way the goal tree unwinds automatically, producing a FOL proof of the top level goal from the proofs of its leaves.

The strategies are programmed sequences of applications of tactics and matchers. They do not interface with FOL directly. Instead, they simulate a virtual user of FOL. They can call the tactics, matchers, other strategies, or themselves recursively. The success of this approach to theorem proving is documented by one heuristic strategy that has proved a number of theorems in Zermelo-Fraenkel Axiomatic Set Theory. Analysis of this strategy leads to a discussion of some trade offs related to the use of assertions and simplification sets in goal oriented theorem proving.
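A minimal goal/tactic engine in the spirit of this description (names and encoding are our invention, not GOAL's): a matcher proves a goal directly, while a tactic reduces it to subgoals together with a justification that unwinds the goal tree once the subgoals are proved.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    formula: str

def try_prove(goal, matchers, tactics):
    """Try matchers (direct proof) first, then tactics (reduction to subgoals)."""
    for matcher in matchers:
        if matcher(goal):
            return f"proved {goal.formula} by matcher"
    for tactic in tactics:
        result = tactic(goal)
        if result is None:
            continue
        subgoals, justify = result
        proofs = [try_prove(g, matchers, tactics) for g in subgoals]
        if all(proofs):
            return justify(proofs)       # the goal tree unwinds automatically
    return None

# Example: conjunctions split into subgoals; atoms are proved from axioms.
axioms = {'p', 'q'}

def axiom_matcher(goal):
    return goal.formula in axioms

def and_tactic(goal):
    if '&' not in goal.formula:
        return None
    left, right = goal.formula.split('&', 1)
    return [Goal(left), Goal(right)], \
        lambda proofs: f"and-intro({proofs[0]}; {proofs[1]})"

print(try_prove(Goal('p&q'), [axiom_matcher], [and_tactic]))
```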

The user can add new tactics, matchers, and strategies to GOAL. These additions cause the language to be extended in a uniform way. The description of new strategies is done easily, at a fairly high level, and no faulty deduction is possible. Perhaps the main contribution of GOAL is a high level environment for easy programming of new theorem proving applications in the First Order Predicate Calculus.

The thesis ends with two appendixes presenting complete proofs of Ramsey's theorem in axiomatic Set Theory and of the correctness of the Takeuchi function.

(It is planned that both FOL and GOAL will be made available over the ARPANET this year. Inquiries regarding their use should be addressed to Dr. R. Weyhrauch at the Stanford Artificial Intelligence Laboratory, SU-AI.)

+ AIM-329 CS-747 AD-A076-872
David Edward Wilkins,
Using Patterns and Plans to Solve Problems and Control Search,
Thesis: PhD in Computer Science,
264 pages, June 1979. Cost: $9.10

The type of reasoning done by human chess masters has not been done by computer programs. The purpose of this research is to investigate the extent to which knowledge can replace and support search in selecting a chess move and to delineate the issues involved. This has been carried out by constructing a program, PARADISE (PAttern Recognition Applied to Directing SEarch), which finds the best move in tactically sharp middle game positions from the games of chess masters.

PARADISE plays chess by doing a large amount of static, knowledge-based reasoning and constructing a small search tree (tens to hundreds of nodes) to confirm that a particular move is best, both characteristics of human chess masters. A “Production-Language” has been developed for expressing chess knowledge in the form of productions, and the knowledge base contains about 200 productions written in this language. The actions of the rules post concepts in the data base while the conditions match patterns in the chess position and data base. The patterns are complex to match (search may be involved). The knowledge base was built incrementally, relying on the system's ability to explain its reasoning and the ease of writing and modifying productions. The productions are structured to provide “concepts” to reason with, methods of controlling pattern instantiation, and means of focusing the system's attention on relevant parts of the knowledge base. PARADISE knows why it believes its concepts and may reject them after more analysis.

PARADISE uses its knowledge base to control the search, and to do a static analysis of a new position which produces concepts and plans. Once a plan is formulated, it guides the tree search for several ply by focussing the analysis at each node on a small group of productions. Expensive static analyses are rarely done at new positions created during the search. Plans may be elaborated and expanded as the search proceeds. These plans (and the use made of them) are more sophisticated than those in previous AI systems. Through plans, PARADISE uses the knowledge applied during a previous analysis to control the search for many nodes.

By using the knowledge base to help control the search, PARADISE has developed an efficient best-first search which uses different strategies at the top level. The value of each move is a range which is gradually narrowed by doing best-first searches until it can be shown that one move is best. By using a global view of the search tree, information gathered during the search, and information produced by static analyses, the program produces enough terminations to force convergence of the search. PARADISE does not place a depth limit on the search (or any other artificial effort limit). The program incorporates many cutoffs that are not useful in less knowledge-oriented programs (e.g., restrictions are placed on concatenation of plans, accurate forward prunes can be made on the basis of static analysis results, a causality facility determines how a move can affect a line already searched using patterns generated during the previous search).

PARADISE has found combinations as deep as 19 ply and performs well when tested on 100 standard problems. The modifiability of the knowledge base is excellent. Developing a search strategy which converges and using a large knowledge base composed of patterns of this complexity to achieve expert performance on such a complex problem is an advance on the state of the art.

+ AIM-330 CS-751
Zohar Manna, Amir Pnueli,
The Modal Logic of Programs,
36 pages, September 1979. Cost: $2.70

We explore the general framework of Modal Logic and its applicability to program reasoning. We relate the basic concepts of Modal Logic to the programming environment: the concept of “world” corresponds to a program state, and the concept of “accessibility relation” corresponds to the relation of derivability between states during execution. Thus we adopt the Temporal interpretation of Modal Logic. The variety of program properties expressible within the modal formalism is demonstrated.

The first axiomatic system studied, the sometime system, is adequate for proving total correctness and ‘eventuality’ properties. However, it is inadequate for proving invariance properties. The stronger nexttime system, obtained by adding the next operator, is shown to be adequate for invariances as well.
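As a hedged illustration in modern notation (ours, not necessarily the paper's symbols), with \Box for “always”, \Diamond for “sometime” and \bigcirc for “next”:

```latex
% An eventuality property, provable in the sometime system:
%   if p holds now, q holds at some later state.
p \;\rightarrow\; \Diamond q

% The induction-like invariance principle, which needs the next operator:
%   if p holds now and is preserved by every step, p holds always.
\bigl(p \,\wedge\, \Box(p \rightarrow \bigcirc p)\bigr) \;\rightarrow\; \Box p
```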

+ AIM-331 CS-755
Elaine Kant,
Efficiency Considerations in Program Synthesis: A Knowledge-Based Approach,
Thesis: PhD in Computer Science,
160 pages, July 1979. Cost: $6.20

This thesis presents a framework for using efficiency knowledge to guide program synthesis when stepwise refinement is the primary synthesis technique. The practicality of the framework is demonstrated via the implementation of a particular search-control system called LIBRA. The framework is also one model for how people write and analyze programs.


LIBRA complements an interactive program-synthesis project by adding efficiency knowledge to the basic coding knowledge and by automating the implementation-selection process. The program specifications include size and frequency notes, the performance measure to be minimized, and some limits on the time and space of the synthesis task itself. LIBRA selects algorithms and data representations and decides whether to use “optimizing” transformations by applying knowledge about time and storage costs of program components. Cost estimates are obtained by incremental, algebraic program analysis.

Some of the most interesting aspects of the system are explicit rules about plausible implementations, constraint setting, multiple-exit-loop analysis, the comparison of optimistic and achievable cost estimates to identify important decisions, and resource allocation on the basis of importance. LIBRA has automatically guided the construction of programs that classify, retrieve information, sort, and compute prime numbers.

+ AIM-332 CS-762 AD-A083-229
Donald Knuth,
METAFONT: A System for Alphabet Design,
110 pages, September 1979. Cost: $4.80

This 1s the user’s manual for METAFONT, acompanion to the TEX typesetting system. Thesystem makes it fairly easy to define highquality fonts of type in a machine-independentmanner; a user writes “programs” in a newlanguage developed for this purpose. Byvarying parameters of a design, an unlimitednumber of typefaces can be obtained from asingle set of programs. The manual alsosketches the algorithms used by the system todraw the character shapes.

+ AIM-333 CS-772
Brian P. McCune,
Building Program Models Incrementally from Informal Descriptions,
Thesis: PhD in Computer Science,
146 pages, October 1979. Cost: $5.80


Program acquisition is the transformation of a program specification into an executable, but not necessarily efficient, program that meets the given specification. This thesis presents a solution to one aspect of the program acquisition problem: the incremental construction of program models from informal descriptions. The key to the solution is a framework for incremental program acquisition that includes (1) a formal language for expressing program fragments that contain informalities, (2) a control structure for the incremental recognition and assimilation of such fragments, and (3) a knowledge base of rules for acquiring programs specified with informalities.

The thesis describes a LISP based computer system called the Program Model Builder (abbreviated “PMB”), which receives informal program fragments incrementally and assembles them into a very high level program model that is complete, semantically consistent, unambiguous, and executable. The program specification comes in the form of partial program fragments that arrive in any order and may exhibit such informalities as inconsistencies and ambiguous references. Possible sources of fragments are a natural language parser or a parser for a surface form of the fragments. PMB produces a program model that is a complete and executable computer program. The program fragment language used for specifications is a superset of the language in which program models are built. This program modelling language is a very high level programming language for symbolic processing that deals with such information structures as sets and mappings.

PMB has expertise in the general area of simple symbolic computations, but PMB is designed to be independent of more specific programming domains and particular program specification techniques at the user level. However, the specifications given to PMB must still be algorithmic in nature. Because of the very high level nature of the program model produced, PMB also operates independently from implementation details such as the target computer and low level language. PMB has been tested both as a module of the PSI program synthesis system and independently. Models built as part of PSI have been acquired via natural language dialogs and execution traces and have been automatically coded into LISP by other PSI modules. PMB has successfully built a number of moderately complex programs for symbolic computation.

By design the user is allowed to have control of the specification process. Therefore PMB must handle program fragments interactively and incrementally. Interesting problems arise because these informal fragments may arrive in an arbitrary order, may convey an arbitrarily small amount of new information, and may be incomplete, semantically inconsistent, and ambiguous. To allow the current point of focus to change, a program reference language has been designed for expressing patterns that specify what part of the model a fragment refers to. Various combinations of syntactic and semantic reference patterns in the model may be specified.

The recognition paradigm used by PMB is a form of subgoaling that allows the parts of the program to be specified in an order chosen by the user, rather than dictated by the system. Knowledge is represented as a set of data driven antecedent rules of two types, response rules and demons, which are triggered respectively by either the input of new fragments or changes in the partial program model. In processing a fragment, a response rule may update the partial program model and create new subgoals with associated response rules. To process subgoals that are completely internal to PMB, demon rules are created that delay execution until their prerequisite information in the program model has been filled in by response rules or perhaps other demons.
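A sketch of that control structure with response rules and demons as plain callables (the names are invented; PMB itself was far richer):

```python
def build(fragments, response_rules, demons):
    """Response rules fire on each incoming fragment; demons wait until the
    prerequisite parts of the partial model have been filled in."""
    model = {}
    pending = list(demons)           # each demon: (prerequisites_filled, fire)
    for fragment in fragments:
        for rule in response_rules:
            rule(fragment, model)    # may update the model or add information
        progress = True
        while progress:              # firing one demon may enable another
            progress = False
            for demon in list(pending):
                ready, fire = demon
                if ready(model):
                    fire(model)
                    pending.remove(demon)
                    progress = True
    return model

# Example: a demon that names the program only after its type is known.
rules = [lambda frag, m: m.update(frag)]
demons = [(lambda m: 'type' in m,
           lambda m: m.setdefault('name', 'a_' + m['type']))]
print(build([{'type': 'sort'}], rules, demons))   # {'type': 'sort', 'name': 'a_sort'}
```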

PMB has a knowledge base of rules for handling modelling language constructs, processing informalities in fragments, monitoring the consistency of the model, and transforming the program to canonical form. Response rules and simple demons are procedural. Compound demons have more complex antecedents that test more than one object in the program model. Compound demons are declarative antecedent patterns that are expanded automatically into procedural form.

+ AIM-334 CS-788
John McCarthy,
CIRCUMSCRIPTION - A Form of Non-Monotonic Reasoning,
15 pages, February 1980. Cost: $2.15

Humans and intelligent computer programs must often jump to the conclusion that the objects they can determine to have certain properties or relations are the only objects that do. Circumscription formalizes such conjectural reasoning.
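For reference, the schema at the heart of predicate circumscription can be written as follows (the standard first-order formulation, with A(P) the conjunction of facts involving P and \Phi a predicate parameter):

```latex
% Circumscribing P in A(P): any \Phi that satisfies the facts and is
% contained in P must already contain P, so P is minimal.
A(\Phi) \;\wedge\; \forall x\,\bigl(\Phi(x) \rightarrow P(x)\bigr)
  \;\rightarrow\; \forall x\,\bigl(P(x) \rightarrow \Phi(x)\bigr)
```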

+ AIM-335 CS-796
Arthur Samuel,
Essential E,
33 pages, March 1980. Cost: $2.65

This is an introductory manual describing the display-oriented text editor E that is available on the Stanford A.I. Laboratory PDP-10 computer. The present manual is intended to be used as an aid for the beginner as well as for experienced computer users who either are unfamiliar with the E editor or use it infrequently. Reference is made to the two on-line manuals that help the beginner to get started and that provide a complete description of the editor for the experienced user.

E is commonly used for writing computer programs and for preparing reports and memoranda. It is not a document editor, although it does provide some facilities for getting a document into a pleasing format. The primary emphasis is on speed, both in terms of the number of key strokes required of the user and in terms of the demands made on the computer system. At the same time, E is easy to learn and it offers a large range of facilities that are not available on many editors.