
Source: liu.diva-portal.org/smash/get/diva2:576838/FULLTEXT01.pdf (uploaded 2012-12-13)

Institutionen för datavetenskap Department of Computer and Information Science

Master Thesis

Extended MetaModelica Based Integrated

Compiler Generator

by

ARUNKUMAR PALANISAMY

LIU-IDA/LITH-EX-A—12/058—SE

2012-10-18

Linköpings universitet

SE-581 83 Linköping, Sweden

Linköpings universitet

581 83 Linköping


Linköpings universitet

Institutionen för datavetenskap

Master Thesis

Extended MetaModelica Based Integrated

Compiler Generator

by

ARUNKUMAR PALANISAMY

LIU-IDA/LITH-EX-A—12/058—SE

2012-10-18

Supervisor: Olena Rogovchenko

Dept. of Computer and Information Science

Examiner: Prof. Peter Fritzson

Dept. of Computer and Information Science


Abstract

OMCCp is a new generation (not yet released) of the OpenModelica Compiler-Compiler parser generator, containing an LALR parser generator implemented in the MetaModelica language, with parsing tables generated by the tools Flex and GNU Bison. It also provides good error handling and is integrated with the MetaModelica semantics specification language.

The main contribution of this master thesis project is the development of a new version of OMCCp with complete support for an extended Modelica grammar, yielding a complete OMCCp-based Modelica parser. The implemented parser has been tested and the results analyzed. This new, enhanced generation of OMCCp improves on the previous version: it supports Modelica as well as the language extensions for MetaModelica, ParModelica, and optimization problem specification. Moreover, the generated parsers are about three times faster than those from the old OMCCp.


Acknowledgements

I would like to take this opportunity to thank several important people who made it possible to complete this thesis. First, I thank my examiner, Professor Peter Fritzson, who gave me the opportunity to work on this thesis and believed in my capabilities. I also thank my supervisor, Olena Rogovchenko, who kept track of my progress every week and encouraged me to complete my work faster, and my technical supervisor, Martin Sjölund, who provided great technical assistance in coding and enough guidance to make this work possible. I am happy to be part of the OpenModelica project, which has given me the opportunity to learn a new language and contribute to the development of this open source project.

I thank the IDA administration (Department of Computer and Information Science) for providing me with the office and resources essential for my daily work.

I would also like to thank my family, especially my mother and brother, who offered their support when I started my work and encouraged me daily through difficult situations, which made me feel comfortable. Finally, I thank my fellow master thesis students Goutham and Alachew for their valuable advice.


Table of contents

1. Introduction
1.1. OpenModelica
1.2. OMCCp
1.3. ANTLR
1.4. Project Goal
1.5. Approach
1.6. Intended readers

2. Theoretical Background
2.1. Modelica
2.2. MetaModelica
2.2.1. Matchcontinue expression
2.2.2. Uniontype
2.2.3. List
2.3. Compiler Construction
2.3.1. Compiler phases
2.3.2. Front end - Lexical analysis
2.3.3. Front end - Syntax analysis
2.3.4. LALR Parser

3. Existing Technologies
3.1. Flex
3.2. Bison

4. Implementation
4.1. Problem Statement
4.2. Proposed Solution
4.3. OMCCp Design Architecture
4.4. New parser
4.5. Error Handler
4.6. OMCCp vs. ANTLR Error Handler
4.7. Discussions

5. Testing
5.1. Sample input and output
5.2. Analysis of results
5.3. Improvements from the previous version

6. User Guide
6.1. Getting started
6.2. OMCCp commands

7. Conclusion
7.1. Accomplishments
7.2. Future work

Bibliography

Appendices


List of figures

2.3.1. Compiler phases
2.3.2. Lexer component
2.3.2.a) Finite state automaton
2.3.3. Parser component
2.3.3.a) Construction of PDA
2.3.3.b) Construction of AST
2.3.3.c) Construction of DFA
2.3.3.d) Action and goto table
2.3.3.e) Construction of AST without look-ahead
2.3.3.f) Construction of AST with look-ahead
4.3. Layout of OMCCp
5.2.a) Parsing time results, old OMCCp vs. new OMCCp
5.2.b) Parsing time results, new OMCCp vs. ANTLR


Chapter 1

Introduction

Before getting into the technical aspects of the project, I would like to give some general information about this thesis. The idea for this thesis work came from the OpenModelica developers, who analyzed the problems of the previous implementation of OMCCp and proposed reworking the parser, so that the effort of the previous work would not go in vain, with the goal of developing OMCCp into the main parser tool for the OpenModelica project, replacing ANTLR. The thesis required knowledge of compiler construction techniques; the advanced compiler construction course that I had chosen in my curriculum was the main motivation for taking on this work, since it gave me good background knowledge, and the lab exercises in the course taught me how to write code for a lexer and a parser. Even with this background, my lack of knowledge of Modelica made the start very slow, and I had difficulties installing the tools and software required for the work. Learning Modelica, however, is not a difficult task, as there are plenty of resources and materials provided by the Open Source Modelica Consortium (OSMC). Once I understood the language, the next part of the work went rather rapidly. I then had some difficulty understanding the actual workings of OMCCp; given the complexity of its design, it was initially tough to understand what the real problem was and what I should do, but the OpenModelica developers always provided help and made me comfortable whenever I found it difficult. After the first three weeks I understood my task, progressed at a rapid pace, and was able to complete the implementation within the 60 to 90 days proposed in my initial thesis plan. I assume the people reading this report are those who are going to do some work on OMCCp, and I assure you that, with experience, the work will be very interesting and fun.


1.1 OpenModelica

The OpenModelica project develops a modeling and simulation environment based on the Modelica language. OpenModelica uses OMC (the OpenModelica Compiler), which translates code written in the Modelica language into generated C or C++ code that runs the simulation for the Modelica models. Several subsystems are integrated with OpenModelica, including the following [1].

OMEdit

OMEdit is a user-friendly graphical modeling tool which gives users an easy way to create models by adding components (drag and drop), connecting and editing the components, simulating models, and plotting the results. It also provides both textual and graphical representations of user-defined models, which makes the work easier [1].

OMShell

OMShell is an interactive terminal window integrated with the OpenModelica project which offers users an interactive approach: models are simply loaded and simulated by means of commands [1].

OMNotebook

OMNotebook is a lightweight notebook editor which can be used to compile and run Modelica code by evaluating the code written in its cells [1].

1.2 OMCCp (OpenModelica Compiler-Compiler Parser)

OMCCp (the OpenModelica Compiler-Compiler parser generator) is a new generation parser tool implemented entirely in MetaModelica, an extension of the Modelica language. The OpenModelica project currently uses the ANTLR (ANother Tool for Language Recognition) tool for generating the AST. In this thesis we present an alternative tool called OMCCp, a new generation enhanced parser in which lexical analysis and syntax analysis are implemented as separate phases, improving the efficiency of the compiler, and which uses MetaModelica in the new bootstrapped OpenModelica compiler for the RML semantics implementation. The tool also contains good error handling. OMCCp uses an LALR parser to generate the Abstract Syntax Tree (AST) [2].


1.3 ANTLR (ANother Tool for Language Recognition)

The OpenModelica project currently uses the ANTLR tool to generate the Abstract Syntax Tree (AST). ANTLR uses an LL(k) parser to generate the AST. Over the years the tool has shown well-known disadvantages such as memory overhead, poor error handling, and lack of type checking. The tool combines lexical and syntax analysis in a single pass, which decreases the efficiency of the parser; in general, parser performance is higher when lexical analysis is kept separate from syntax analysis. The ANTLR tool is connected to OMC through an external C interface [2].

1.4 Project Goal

The goal of this master thesis is to write a new parser front end, OMCCp (the OMCCp lexer and OMCCp parser), replacing the previous parser, to generate a MetaModelica Abstract Syntax Tree as input to the new bootstrapped OpenModelica compiler.

The results expected from this thesis are:

1) A working OMCCp lexer and parser integrated with MetaModelica in the new bootstrapped OpenModelica compiler.
2) A tested OMCCp.
3) Improvements in performance compared to the previous parser.
4) A completed Modelica grammar for a complete OMCCp-based Modelica parser.
5) Release of the updated OMCCp.

1.5 Approach

The first step of this thesis is a literature study of Modelica and MetaModelica and, most importantly, of compiler construction techniques. Several lectures and presentation slides for learning Modelica are available on the www.openmodelica.org website; they contribute to a better understanding of the Modelica language syntax and are also the first step in the construction of the new parser.


Since OMCCp is implemented in MetaModelica, an extension of the Modelica language, it is important to familiarize oneself with the language constructs. Online courses are available on the www.openmodelica.org website providing sample exercises and methods for writing MetaModelica code; the exercises are highly recommended before starting the implementation. A MetaModelica guide [5] is also available which covers all the MetaModelica constructs and built-in functions.

The next part is to learn about the different phases of a compiler, which need to be known before implementation. There are two phases, namely the front end and the back end. In this thesis we focus on the front end, which requires knowledge of lexical analysis and syntax analysis.

Lexical analysis, or scanning, is the first stage in the front end of the compiler. In this phase the scanner receives the source code as a character stream and translates it into a list of tokens, based on rules written in the form of regular expressions. The tool used for generating the tokens is Flex.

Syntax analysis, or parsing, is the next phase in the compiler front end; it takes the tokens generated by the lexer as its input and creates an intermediate form called the Abstract Syntax Tree (AST). The Bison or Yacc tool is used to generate the parser that builds the AST.

1.6 Intended Readers

The reader of this document is someone interested in core compiler construction work, especially in building the front end of a compiler. This document provides developers with important information about the OpenModelica project OMCCp.


Chapter 2

Theoretical background

This chapter provides the theoretical knowledge required for understanding this work. It covers the topics of Modelica, MetaModelica, and compiler construction techniques.

2.1 Modelica

Modelica is an object-oriented, equation-based language for modeling and simulation. It can be used for multi-domain, component-based modeling of systems containing mechanical, electrical, electronic, hydraulic, thermal, control, electric power, and other components. The effort is supported by OSMC, a non-profit organization [3].

Example

class HelloWorld
  Real x(start = 1);
  parameter Real a = 1;
equation
  der(x) = -a * x;
end HelloWorld;

An important point about the language is that the number of unknown variables must equal the number of equations, with some exceptions: parameter and constant variables need not be defined in the equation section. In the above example we created a new model (class) HelloWorld which contains a variable x of type Real and a parameter variable a of type Real. In the equation section we defined the variable x, so the number of variables matches the number of equations [3].


2.2 MetaModelica

MetaModelica is an extension of the Modelica language which is used to model the semantics of the Modelica language. MetaModelica was developed as part of the OpenModelica project to provide a platform for developing the Modelica compiler. Some of the MetaModelica language constructs used in this implementation are discussed below [4] [5].

2.2.1 Matchcontinue

The matchcontinue expression is similar to the switch statement in C, with the difference that a matchcontinue expression returns a value and also supports pattern matching [4][5].

Example

function stringToInt
  input String s;
  output Integer x;
algorithm
  x := matchcontinue s
    case "one" then 1;
    case "two" then 2;
    case "three" then 3;
    else 0;
  end matchcontinue;
end stringToInt;

In the above example, given a String s, the function returns the number corresponding to the string. The underscore '_' can be used as a pattern matching any value. The matchcontinue expression contains a series of case blocks; each case is tried in order, and if a case matches, its value is returned; if it does not match, the next case block is tried until a match is found. A matchcontinue expression can also return more than one value.


2.2.2 Uniontype

The uniontype is a collection of one or more record types. A uniontype can be recursive and can refer to other uniontypes. It is one of the important constructs of MetaModelica used in this implementation, mainly in the construction of the Abstract Syntax Tree (AST) [4][5].

Example

uniontype Exp
  record INT Integer x1; end INT;
  record NEG Exp x1; end NEG;
  record ADD Exp x1; Exp x2; end ADD;
end Exp;

In the above example we created a new uniontype Exp which includes a collection of record types, namely INT, NEG, and ADD. A record type is a restricted class type in Modelica which does not contain equations. Suppose we have an expression like 6 + 44. The abstract syntax tree will be constructed as:

ADD(INT(6), INT(44))

When the parser finds the expression 6 + 44, it sees that the expression is an add operation and selects the record ADD, and that the numbers are of integer type and selects the record INT, both from the uniontype Exp, thereby constructing the AST.
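For comparison, the same AST shape can be sketched in Python (an illustration only; the thesis itself uses MetaModelica records, and the class names below simply mirror the uniontype above):

```python
from dataclasses import dataclass

# Each record of the MetaModelica uniontype becomes one small class.
@dataclass
class Exp:
    pass

@dataclass
class INT(Exp):
    x1: int

@dataclass
class NEG(Exp):
    x1: Exp

@dataclass
class ADD(Exp):
    x1: Exp
    x2: Exp

def eval_exp(e: Exp) -> int:
    """Walk the AST recursively, mirroring a match over the record types."""
    if isinstance(e, INT):
        return e.x1
    if isinstance(e, NEG):
        return -eval_exp(e.x1)
    if isinstance(e, ADD):
        return eval_exp(e.x1) + eval_exp(e.x2)
    raise ValueError("unknown node")

# The AST a parser would build for the expression 6 + 44:
tree = ADD(INT(6), INT(44))
print(eval_exp(tree))  # 50
```

The recursive evaluator plays the role that a match expression over the uniontype would play in MetaModelica.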


2.2.3 List

The list construct in MetaModelica is used to build lists and to take them apart; lists are similar to arrays. The cons operator '::' is used both for adding an element to the front of a list and, in a pattern, for splitting a list into its head and tail [4][5][2].

Example:

list<Integer> a = {1, 2, 3};
i :: a = a;
a = i :: a;

In the above example we create a list a of Integer type which contains the elements 1 to 3.

Retrieve operation: on line 2, the statement i :: a = a pattern-matches the list, storing its head element 1 in the variable i and the remaining list {2, 3} back in the variable a.

Add operation: on line 3, the statement a = i :: a adds the item i to the front of the list a, i.e. it puts the element 1 back, giving {1, 2, 3} again.


2.3 Compiler Construction

2.3.1 Compiler phases

Fig: 2.3.1. Compiler phases [6] [7].

Lexical Analyzer

Source code

Syntax Analyzer

Semantic Analyzer

Tokens

AST

Front end

Code optimizer

Type checked

AST

Intermediate code

Optimized AST

Intermediate code

Intermediate code

Assembly code

Back end


This implementation strongly requires compiler construction knowledge. Anyone interested in building a new compiler of their own must follow the six stages described below. Generally a compiler consists of two phases, namely [6] [7]:

1. Front end
2. Back end

Front end

The front end includes three stages, namely:

1. Lexical analyzer
2. Syntax analyzer
3. Semantic analyzer

Back end

The back end includes three stages, namely:

1. Code optimizer
2. Intermediate code generator
3. Code generator

The front end and back end depend on each other: each stage of the compiler takes as input the output of the previous stage. Let us now see what each stage performs.

1. Lexical Analyzer

Input: Source code

Output: Tokens

The lexical analyzer, also called the scanner, is the first stage of building a compiler. The lexical analyzer takes the source code as a stream of characters and outputs tokens. The token stream is identified based on rules written in the form of regular expressions [6] [7].

2. Syntax Analyzer

Input: Tokens

Output: Abstract Syntax Tree (AST)


The syntax analyzer is the second stage of building a compiler. The syntax analyzer, also called the parser, takes tokens as its input from the previous stage and outputs an Abstract Syntax Tree, but only if the tokens conform to the rules of the language; otherwise it reports error messages [6] [7].

3. Semantic Analyzer

Input: AST

Output: type checked AST

The semantic analyzer is the third stage of the compiler. It takes as input the AST produced by the syntax analyzer, performs type checking over it, and outputs a type-checked AST; otherwise it reports error messages [6] [7].

4. Code optimizer

Input: type-checked AST

Output: optimized AST

Code optimization is the fourth stage of the compiler. It takes the type-checked AST from the semantic analyzer, performs code optimization over it, and outputs an optimized AST. More about code optimization can be found in the compiler construction book [6] [7].

5. Intermediate code generator

Input: optimized AST

Output: intermediate code

Intermediate code generation is the fifth stage of the compiler. It takes its input from the code optimizer and outputs intermediate code, which is close to the final code [6] [7].

6. Code generator

Input: intermediate code

Output: machine or assembly code

The code generator is the final stage of the compiler. It takes the intermediate code as input and outputs machine code or assembly code, the final target code [6] [7].

In this implementation we are interested in the front-end phase of the compiler, so we will discuss the front-end stages in detail. The lines above give basic knowledge to readers interested in building a new compiler. More about the compiler stages, especially the back end, can be read in the book: Aho, Lam, Sethi, Ullman, Compilers: Principles, Techniques, and Tools, Second Edition, Addison-Wesley, 2006.

2.3.2 Front-end - lexical analyzer

The lexical analyzer, also called the scanner, is the first stage in compiler construction. As defined above, it takes source code as input and outputs tokens based on rules written in the form of regular expressions. By identifying the tokens it reduces the complexity of the following compiler stages. The scanner outputs each token based on the first rule that matches, so the order of the rules is essential to avoid ambiguity.

The lexical analysis in this implementation is performed by a program called the lexer [6] [7] [2].
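As a sketch of this first-match behaviour, here is a tiny regular-expression scanner in Python. The token names and rules are hypothetical, chosen only to illustrate why rule order matters; the real OMCCp lexer is generated from a Flex-style specification, and Flex additionally prefers the longest match rather than merely the first rule.

```python
import re

# Ordered token rules: the keyword rule must come before the general
# identifier rule, otherwise "parameter" would be matched as an IDENT.
TOKEN_RULES = [
    ("WHITESPACE", r"\s+"),
    ("KEYWORD",    r"\b(?:model|equation|end|parameter)\b"),
    ("IDENT",      r"[A-Za-z_][A-Za-z0-9_]*"),
    ("NUMBER",     r"\d+(?:\.\d+)?"),
    ("OPERATOR",   r"[=+\-*/;()]"),
]

def tokenize(source: str):
    """Scan the source left to right, emitting the first rule that matches."""
    tokens, pos = [], 0
    while pos < len(source):
        for name, pattern in TOKEN_RULES:
            m = re.match(pattern, source[pos:])
            if m:
                if name != "WHITESPACE":          # skip layout characters
                    tokens.append((name, m.group()))
                pos += m.end()
                break
        else:
            raise SyntaxError(f"illegal character at position {pos}")
    return tokens

print(tokenize("parameter Real a = 1;"))
```

Swapping the KEYWORD and IDENT rules would silently change the output, which is exactly the ambiguity the ordering requirement avoids.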

Lexer

Fig. 2.3.2. Lexer component (source code → OMCCp lexer program, which runs a finite automaton (DFA) derived from the regular expressions → tokens).


The lexer is a program which takes the source code and outputs tokens. To generate the tokens, the lexer runs a deterministic finite automaton (DFA) built from the rules written in the form of regular expressions. We use a DFA in this implementation because we want a single transition path between states and, more importantly, to avoid ambiguity; a nondeterministic finite automaton (NFA) can always be converted to a DFA. Let us see how a DFA works.

Deterministic Finite Automaton (DFA)

A DFA is a 5-tuple (Q, Σ, δ, q0, F) [8] [6], where:

Q - set of states
Σ - input alphabet
δ - transition function, δ: Q × Σ → Q
q0 - start state
F - set of final (accepting) states

Example: the language {wa | w ∈ {a, b}*}, i.e. strings over {a, b} that end in a.

Input: abbaaba

Fig. 2.3.2.a) Finite state automaton (two states q0 and q1; reading a leads to q1, reading b leads to q0).


In the example we have a regular expression describing words over the alphabet {a, b}, repeated zero or more times, with the condition that the word must end with a. When we give the input string abbaaba, the automaton runs according to these rules and, if the string is accepted, the token is output.

For the above input string we instantiate the 5-tuple (Q, Σ, δ, q0, F) as:

Q = {q0, q1}
Σ = {a, b}
δ = Q × Σ → Q, namely (q0, a) → q1, (q0, b) → q0, (q1, a) → q1, (q1, b) → q0
q0 = start state, F = {q1}

The automaton starts in the start state q0, reads a and moves to q1, reads b and moves back to q0, reads b and stays in q0, reads a and goes to q1, reads a again and stays in q1, reads b and goes to q0, and finally reads the last a and reaches q1. Since q1 is the final state, the string is accepted and the token is output [8] [6].
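The acceptance check just traced can be sketched directly from the 5-tuple (illustrative code only, not part of OMCCp):

```python
# DFA for {wa | w in {a,b}*}: strings over {a, b} that end in a.
ALPHABET = {"a", "b"}
DELTA = {
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q0",
}
START, FINAL = "q0", {"q1"}

def accepts(word: str) -> bool:
    """Run the transition function symbol by symbol from the start state."""
    state = START
    for ch in word:
        if ch not in ALPHABET:
            return False          # symbol outside the alphabet: reject
        state = DELTA[(state, ch)]
    return state in FINAL         # accept iff the run ends in a final state

print(accepts("abbaaba"))  # True: the run visits q0 q1 q0 q0 q1 q1 q0 q1
print(accepts("abbaab"))   # False: the input ends in b, so the run stops in q0
```

The dictionary DELTA is exactly the transition function δ written out as a table, which is also how generated lexers store their automata.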

2.3.3 Front end - syntax analyzer

The syntax analyzer, also called the parser, is the second stage of the front end. It takes the tokens from the lexer as its input and checks whether they are constructed according to the rules of the language. If they are, it creates the Abstract Syntax Tree (AST); otherwise it reports error messages. Since lexical analysis is kept separate from syntax analysis, the complexity of the parser is reduced. The AST is used as input for the back end of the compiler [6] [7].

Syntax analysis is performed by the component called the parser.

Parser

The parser is a program which takes the tokens as its input, runs the LALR algorithm over the list of tokens, and generates the AST with the help of parse tables and stack states. The architecture of the parser is discussed below.


Fig. 2.3.3. Parser component (tokens → LALR parser, driven by the parse table and stack states → AST).

The parser uses the parsing table to determine the next state, based on the look-ahead token and the current stack state. In this way it constructs the AST by means of two operations, namely shift and reduce. It uses a PDA to perform these two operations.

Context-free grammar (CFG)

The rules of the parser are represented in the form of a context-free grammar. A CFG is represented as a four-tuple (V, Σ, R, S) [9][6], where:

• V - set of nonterminals
• Σ - set of terminals
• R - set of rules (productions)
• S - start symbol

Eg: 1. S -> aS | X
2. X -> ab

In the above grammar the left-hand sides of the rules are strictly nonterminals, generally written in capital letters. The right-hand side contains a combination of terminals and nonterminals; a terminal is usually written in lower case, and terminals can also include operators and symbols like $, #, etc. In the above example the terminals are a and b.



Push Down Automata (PDA)

A push-down automaton (PDA) uses a stack to construct the AST: it reads tokens and pushes them onto the stack (the shift operation), and when the tokens on top of the stack form the right-hand side of a rule, it replaces them by the rule's left-hand symbol and creates the corresponding abstract syntax tree node (the reduce operation). It uses the stack states and the tokens to decide the next action, and continues to build the AST until an accept state is reached, with the help of the parsing table [10] [6] [2].

Let us see how the PDA works with a small example:

1. S -> aS | X
2. X -> ab

Consider parsing the input string aab with this grammar. The parser shifts the first a onto the stack, then shifts the second a, then shifts b. The top two stack entries, ab, now match the right-hand side of rule 2, so the parser reduces them to X and builds the corresponding AST node. X matches the alternative S -> X and is reduced to S; finally the remaining aS on the stack is reduced to S by rule 1, and the accept state is reached. At each step the stack state and the next token determine the action. Below is a diagrammatic representation of the PDA stack and the constructed AST.

Fig. 2.3.3.a) Construction of PDA (successive stack contents during the shift and reduce steps).
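The shift/reduce walkthrough above can be sketched in Python. This is a simplification: it reduces greedily whenever the top of the stack matches a rule body, which happens to work for this toy grammar but is not how a table-driven LALR parser decides between shifting and reducing.

```python
# Grammar: S -> aS | X,  X -> ab.  AST nodes are (head, children) tuples.
RULES = [
    ("X", ["a", "b"]),   # rule 2: X -> ab
    ("S", ["X"]),        # rule 1, second alternative: S -> X
    ("S", ["a", "S"]),   # rule 1, first alternative:  S -> aS
]

def symbol(item):
    """Grammar symbol of a stack entry: a raw token or a built subtree."""
    return item if isinstance(item, str) else item[0]

def parse(tokens):
    stack = []
    tokens = list(tokens)
    while True:
        reduced = False
        for head, body in RULES:
            n = len(body)
            if len(stack) >= n and [symbol(x) for x in stack[-n:]] == body:
                # Reduce: replace the handle with an AST node.
                stack[-n:] = [(head, stack[-n:])]
                reduced = True
                break
        if reduced:
            continue
        if tokens:
            stack.append(tokens.pop(0))   # shift the next input token
        elif len(stack) == 1 and symbol(stack[0]) == "S":
            return stack[0]               # accept: one S covers the input
        else:
            raise SyntaxError("cannot parse input")

print(parse("aab"))  # ('S', ['a', ('S', [('X', ['a', 'b'])])])
```

The printed tuple is the AST sketched in Fig. 2.3.3.b): an outer S node over a and an inner S, which in turn covers the X built from ab.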


Fig. 2.3.3.b) Construction of AST.

2.3.4 LALR Parser

The look-ahead left-to-right (LALR) parser used in this implementation is a type of LR parser. LALR parsers belong to the category of bottom-up parsers, which are more efficient than top-down parsers. An LR parser reads the input from left to right without backtracking; since backtracking is avoided, time is saved, and this type of parser ideally suits our implementation. To avoid backtracking the LR parser uses a look-ahead of k input symbols to decide whether to shift or reduce. Hence an LR parser is usually written LR(k), where k denotes the look-ahead; in general k = 1 [11] [6] [2].

LR parsers also handle a larger class of grammars than LL parsers. An LL parser must commit to a production early, based only on the tokens already seen, which can lead to wrong results, whereas an LR(k) parser waits, using the look-ahead to find the appropriate pattern in the input before committing to an action. LR parsers can therefore handle large grammars without errors and are better suited for error handling than LL parsers [11] [6] [2].

The LR parser is deterministic: it produces a single correct parse without backtracking or ambiguity. The construction of the LR parser is driven by the two actions shift and reduce, using a parse table constructed from a DFA. Let us now see with a small example how the LR parse tables are constructed.

Construction of LR parser

Let us consider the following grammar rules

1. S’->S$

2. S->aS

3. S->X

4. X->ab

The DFA construction is represented below

Fig. 2.3.3.c) Construction of DFA. (Item sets reconstructed from the figure: state 1: S' -> .S$, S -> .aS, S -> .X, X -> .ab; state 2, on S: S' -> S.$, accept; state 3, on a: S -> a.S, X -> a.b, S -> .aS, S -> .X, X -> .ab; state 4, on X: S -> X.; state 5, on S from state 3: S -> aS.; state 6, on b from state 3: X -> ab.)

As said earlier, LR parsers are deterministic; the canonical LR items, namely the action and goto tables, are built based on the DFA. The construction of the DFA begins by traversing every production with a dot placed in front of its right-hand side. For example, in the first rule, S' -> .S$, a dot is placed before the nonterminal S. The ultimate goal of the LR parser is to parse the entire right-hand side of the

S’->.S$

S->.aS

S->.X

X->.ab

1

S’->S.$

S->a.S

X->a.b

S->.aS

S->.X

X->.ab S->X.

Accept

S->aS.

X->ab.

2

4

5

6

3

S

a

a

Page 26: Institutionen för datavetenskapliu.diva-portal.org/smash/get/diva2:576838/FULLTEXT01.pdf · 2012-12-13 · Institutionen för datavetenskap Department of Computer and Information

19

productions (i.e.) in this production the dot should reach the end of the symbol ‘$’

which can be represented as (S’-> S$.) insists that the rule is processed completely by

the parser. The above diagram represents the various states in which the DFA is

constructed to parse all the four productions. For constructing the DFA we have to

follow certain rules with respect where the dot is placed.

1) If the dot is placed in front of a non-terminal (written in capital letters), we write down all the productions of that non-terminal. For example, in state 1 the dot is placed in front of the non-terminal S (S'->.S$), so the state also contains all the productions of S (S->.aS, S->.X); the dot is then in front of the non-terminal X, so we also add the productions of X (X->.ab).

2) If the dot is placed in front of a terminal (small letters), we continue by moving the dot past that symbol.
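Rule 1) above, adding the productions of any non-terminal that appears directly after a dot, is exactly the LR(0) closure operation. The following is a minimal sketch in Python over the example grammar; the item encoding (lhs, rhs, dot position) and all names are our own, not taken from OMCCp:

```python
# A minimal sketch of the LR(0) closure operation for the example grammar.
# Items are encoded as (lhs, rhs, dot); this encoding is our own choice.
GRAMMAR = {
    "S'": [("S", "$")],
    "S":  [("a", "S"), ("X",)],
    "X":  [("a", "b")],
}

def closure(items):
    """Repeatedly add the productions of every non-terminal that appears
    directly after a dot (rule 1 in the text above)."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:  # dot before a non-terminal
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return items

# State 1 of the DFA: the closure of the start item S' -> .S$
state1 = closure({("S'", ("S", "$"), 0)})
print(sorted(state1))
```

Running this yields the four items of state 1 (S'->.S$, S->.aS, S->.X, X->.ab), matching the state listing above.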

The states above, in which every production is eventually traversed completely, are used for the construction of the parse table (action and goto). For more details about the construction of the DFA, see http://en.wikipedia.org/wiki/LR_parser.

Construction of action and goto table

Once the DFA is constructed, we can build the action and goto tables from its states. The action table records the action taken on a particular token, which is one of two operations:

1. Shift, written 'S' followed by the state to move to next. For example, S3 means that on reading that particular input we perform a shift operation and move to state 3.

2. Reduce, written 'R' followed by the number of the grammar rule to reduce by. For example, R4 means that on reading that particular input we perform a reduce operation using rule 4 (X->ab): the right-hand side is popped from the stack, and the goto table then determines the next state.

The action table holds the actions performed on the set of terminals, and the goto table holds the transitions performed on the set of non-terminals. Below is the table built from the DFA.


        |      Action        |  GoTo
  State |   a     b     $    |  S   X
  ------+--------------------+---------
    1   |  S3                |  2   4
    2   |             accept |
    3   |  S3    S6          |  5   4
    4   |  R3    R3    R3    |
    5   |  R2    R2    R2    |
    6   |  R4    R4    R4    |

Fig 2.3.3 d): Action and goto table
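To make the table concrete, here is a small table-driven shift/reduce loop in Python over exactly this action/goto table. The dictionary encoding and the choice to fill each reduce action across all terminals are our own; OMCCp itself queries tables generated by Flex and Bison:

```python
# Table-driven LR parsing loop over the action/goto table above.
ACTION = {
    (1, 'a'): ('s', 3),
    (2, '$'): ('acc', 0),
    (3, 'a'): ('s', 3), (3, 'b'): ('s', 6),
    (4, 'a'): ('r', 3), (4, 'b'): ('r', 3), (4, '$'): ('r', 3),
    (5, 'a'): ('r', 2), (5, 'b'): ('r', 2), (5, '$'): ('r', 2),
    (6, 'a'): ('r', 4), (6, 'b'): ('r', 4), (6, '$'): ('r', 4),
}
GOTO = {(1, 'S'): 2, (1, 'X'): 4, (3, 'S'): 5, (3, 'X'): 4}
RULES = {2: ('S', 2), 3: ('S', 1), 4: ('X', 2)}  # rule no -> (lhs, |rhs|)

def lr_parse(inp):
    """Return True when inp derives from S under grammar rules 1-4."""
    stack = [1]                      # state stack, start state 1
    tokens = list(inp) + ['$']
    i = 0
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:              # blank table entry: syntax error
            return False
        kind, arg = act
        if kind == 'acc':
            return True
        if kind == 's':              # shift: push the next state, consume a token
            stack.append(arg)
            i += 1
        else:                        # reduce: pop |rhs| states, then goto on lhs
            lhs, n = RULES[arg]
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], lhs)])

print(lr_parse("aab"), lr_parse("ab"), lr_parse("b"))
```

For input "ab", for instance, the loop shifts to states 3 and 6, reduces by R4 (X->ab) into state 4, reduces by R3 (S->X) into state 2, and accepts on $.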

The construction of an LALR parser is very similar to the construction above, with only a minor modification made to the parse table: two different states that perform almost the same actions are combined in order to save table space. For example, if states 3 and 4 have the same actions on the same inputs, the two rows are merged into a single row labelled 34, with their corresponding actions joined. To learn more about this construction, see Aho, Lam, Sethi, Ullman, Compilers: Principles, Techniques, and Tools, Second Edition, Addison-Wesley, 2006.

Look ahead

Look-ahead is very important in parsing a grammar, as it avoids ambiguity and errors. The parser inspects upcoming tokens before deciding which rule to apply. Look-ahead has two advantages:

a) It helps to avoid conflicts and produce the correct result.

b) It avoids duplicate states and eliminates the need for an extra stack.

Now let us see with an example how look-ahead works. Consider the following grammar rules [12].

1: E → E + E

2: E → E * E

3: E → number

4: + has less precedence than *


Input: 1 + 2 * 3 (= 7)

Case 1: Construction of AST without look-ahead

The parser pushes the input token '1' onto the stack, finds a match with rule 3 (E->number), and replaces '1' with 'E'. It then pushes '+' and '2', replacing '2' with 'E' by rule 3 as well. At this point it does not consult rule 4, which says that * has higher precedence than +, meaning it should not yet apply rule 1 to the input '1+2'. But since no look-ahead is performed, it reduces with rule 1 (E->E+E), leaving the result 3 on the stack. Next it pushes '*' and finally '3', replaces '3' with 'E' by rule 3, so the stack holds E*E, and applies rule 2 (E->E*E), giving the final result 9, which is wrong. Below is a pictorial representation of the stack [12].

Fig 2.3.3 e): Stack without look-ahead [12]

(The figure shows the successive stack contents: 1 → E; E + 2 → E + E → E, the premature reduce giving 3; then E * 3 → E * E → E, giving 9.)

Case 2: Construction of AST with look-ahead

The parser pushes the input token '1' onto the stack, finds a match with rule 3, and replaces '1' with 'E'. It then pushes '+' and '2', replacing '2' with 'E' by rule 3, so the stack holds E+E. Before reducing, it now examines the incoming token (the look-ahead): rule 4 says that * has higher precedence than +, so rule 1 must not be applied to '1+2' yet. Since look-ahead is performed, no reduce operation is done on '1+2', and the stack still holds E+E. Next it pushes '*' and finally '3', replaces '3' with 'E' by rule 3, so the stack holds E+E*E. It applies rule 2 (E->E*E), giving 6, after which the stack holds E+E; it then applies rule 1 (E->E+E), which gives the correct result 7. Below is a pictorial representation of the stack [12].

Fig 2.3.3 f): Stack with look-ahead [12]

The above example is taken from Wikipedia; for further details see http://en.wikipedia.org/wiki/Parsing#Lookahead.

(The figure shows the successive stack contents, with the reduce deferred because of the look-ahead: 1 → E; E + 2 → E + E; E + E * 3 → E + E * E → E + E (2 * 3 = 6) → E (1 + 6 = 7).)
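The precedence-driven shift-or-reduce decision described in Case 2 can be sketched as a tiny operator-precedence evaluator in Python. This is a simplification of what a real LALR parser does with precedence declarations; the function and table names are our own:

```python
# Sketch of the look-ahead decision from Case 2: reduce only when the
# operator already on the stack binds at least as tightly as the
# look-ahead operator; otherwise keep shifting.
PREC = {'+': 1, '*': 2}

def evaluate(tokens):
    vals, ops = [], []
    def reduce_top():
        b, a, op = vals.pop(), vals.pop(), ops.pop()
        vals.append(a + b if op == '+' else a * b)
    for tok in tokens:
        if isinstance(tok, int):
            vals.append(tok)                       # shift a number
        else:
            while ops and PREC[ops[-1]] >= PREC[tok]:
                reduce_top()                       # stacked op binds tighter
            ops.append(tok)                        # shift the operator
    while ops:                                     # reduce whatever is left
        reduce_top()
    return vals[0]

print(evaluate([1, '+', 2, '*', 3]))  # 7, not 9
```

With the look-ahead '*', the '+' is kept on the stack until 2 * 3 has been reduced, reproducing the correct result of Case 2.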

Chapter 3

3 Existing Technologies

In this chapter we discuss the existing tools used to generate the lexer and the parser, namely Flex and GNU Bison.

3.1 Flex

Flex (fast lexical analyzer) is a tool used to generate a lexical analyzer, or scanner, and serves as an alternative to lex. Flex makes the first stage of the compiler easier by automating the generation of tokens. The input to Flex is generally a file with a ".l" extension; in our implementation the input file is "lexerModelica.l", which contains the rules written in the form of regular expressions. When the user supplies input (test cases), the scanner reads the input, matches it against the regular expressions, and outputs tokens. Flex generates a C file ("lexerModelica.c") containing the arrays and the algorithm that run the DFA over the input. The tool avoids the complexity of recognizing tokens in the right order of priority, handles large sets of rules, and thus reduces the complexity of the next compiler stage, the parser. Flex is mostly used in combination with Bison, as it provides tokens to Bison directly, avoiding ambiguity. More information about Flex, including the manual, can be found at http://flex.sourceforge.net/manual/ [13] [2] [7].
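The rule-matching behavior described above can be imitated in a few lines of Python. This is only an illustrative analogue of a Flex scanner: the token names and patterns are invented, and Python's ordered alternation only approximates Flex's longest-match discipline, where rule order is a tie-breaker:

```python
import re

# Illustrative analogue of a Flex scanner: named patterns tried in order
# at each input position, whitespace skipped. Token names and patterns
# are invented for this sketch.
TOKEN_SPEC = [
    ('NUMBER', r'\d+'),
    ('IDENT',  r'[a-zA-Z_][a-zA-Z0-9_]*'),
    ('OP',     r'[+*=;]'),
    ('SKIP',   r'\s+'),
]
MASTER = re.compile('|'.join(f'(?P<{n}>{p})' for n, p in TOKEN_SPEC))

def tokenize(src):
    tokens = []
    for m in MASTER.finditer(src):
        if m.lastgroup != 'SKIP':            # drop whitespace
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 42;"))
```

The output is the token stream that would be handed to the parser, e.g. IDENT, OP, NUMBER, OP for the input shown.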


3.2 GNU Bison

Mostly referred to simply as Bison, it is a parser generator, usually used together with Flex. Bison receives the tokens produced by Flex as input, checks whether they are constructed according to the rules of the grammar, and creates the Abstract Syntax Tree (AST). The rules are written as a context-free grammar. Bison generates an LALR parser, the algorithm we use to construct the AST. The tool generates C code containing the transition arrays and the algorithm that runs the PDA to create the AST. More information about GNU Bison can be found at http://flex.sourceforge.net/manual/Bison-Bridge.html. The manual gives very good information about the generated C files, how the transition arrays and LALR tables are constructed, and how the parser decides whether to shift or reduce by querying the table [14] [7] [2].


Chapter 4

4 Implementation

In this chapter we discuss how OMCCp is implemented: we identify the problem statement, propose a solution for the identified problem, and then discuss the OMCCp design and architecture and the important code changes.

4.1 Problem Statement

The design of OMCCp was started in 2011 by the OpenModelica developers. The development and coding were begun by a master thesis student who designed the structure and layout of OMCCp. When the code was finished, the implementation had problems: it was highly dependent on the OpenModelica project directory "Compiler", which includes the front-end and back-end components. During the development of OMCCp, the OpenModelica developers made many significant changes to the "Compiler" utility files in order to improve the efficiency of OMC. One significant change is that the compiler utility file "rtopts.mo" was removed from the directory (Compiler->util->rtopts.mo). Since OMCCp was developed against the removed file, it failed to work with the newer version and does not support it, which also prevented the release of OMCCp [2].

4.2 Proposed Solution

The solution we propose in this thesis is to rebuild OMCCp to follow the changes made to the compiler utility models, so that it supports the new version. We reuse the old version of OMCCp as a starting point to understand the code and identify the flaws which made OMCCp fail. We implement the parser entirely in MetaModelica, with lexical and semantic analysis in separate phases; by separating these phases we improve the efficiency of the parser and the performance of the compiler. We use the LALR algorithm to create the AST, which performs better than other parsing algorithms and also avoids ambiguity. We also implement the full Modelica grammar, so that OMCCp parses all types of Modelica and MetaModelica syntax and constructs.


4.3 OMCCp Design and Architecture

Fig 4.3) Layout of OMCCp

OMCCp Lexical Analyzer (generated with FLEX):
  input files:     lexerModelica.l, Lexer.mo, LexerCode.tmo
  generated files: LexerCodeModelica.mo, LexerGenerator.mo, LexerModelica.mo,
                   LexTableModelica.mo, TokenModelica.mo

OMCCp Syntax Analyzer (generated with BISON):
  input files:     parserModelica.y, Parser.mo, ParseCode.tmo
  generated files: ParserCodeModelica.mo, ParserGenerator.mo, ParserModelica.mo,
                   ParseTableModelica.mo

Common files: Main.mo, SCRIPT.mos, OMCC.mos


The design of OMCCp consists of two processes, namely:

1) the OMCCp lexical analysis, and
2) the OMCCp syntax analysis.

We use the Flex tool for generating the lexical analyzer. The main files in the lexical analysis are "lexerModelica.l" and "Lexer.mo". As said earlier, OMCCp is entirely implemented in MetaModelica. The Flex tool reads the scanner file "lexerModelica.l", identifies the tokens, and generates a C file ("lexerModelica.c"). We use the C file to generate the MetaModelica code (files with the .mo extension), listed in the diagram above under "generated files".

We use the Bison tool for generating the syntax analyzer. The main files in the syntax analysis are "parserModelica.y" and "Parser.mo". Bison takes "parserModelica.y" as its input and generates a C file ("parserModelica.c"); from the generated C file we generate the MetaModelica code shown in the diagram above.

The file "Main.mo" is the main file containing the run-time system calls that start the translation process; from it we make function calls to start the lexer and the parser. The file "SCRIPT.mos" loads the compiler utility files and the test files.

In this report we discuss the important changes made during the development. For a more detailed explanation of the code, please see the thesis "A MetaModelica based parser generator applied to Modelica" by Edgar Alonso Lopez-Rojas, available from https://www.openmodelica.org/index.php/research/master-theses.


4.4 New Parser

In this part we discuss the changes made to support the new version. Before getting into the changes, we present the main models of OMCCp that start the translation.

Main.mo

public function main
  input list<String> inStringLst;
protected
  list<OMCCTypes.Token> tokens;
  ParserModelica.AstTree astTreeModelica;
algorithm
  _ := matchcontinue (inStringLst)
    local
      list<String> args;
      String filename, parser;
      Boolean result;
    case (args as _::_)
      equation
        {filename, parser} = Flags.new(args);
        "Modelica" = parser;
        false = (0 == stringLength(filename));
        print("\nParsing Modelica with file " + filename + "\n");
        // call the lexer
        print("\nstarting lexer");
        tokens = LexerModelica.scan(filename, false);
        print("\n Tokens processed:");
        print(intString(listLength(tokens)));
        // call the parser
        print("\nstarting parser");
        (result, astTreeModelica) = ParserModelica.parse(tokens, filename, true);
        print("\n");
        // print the AST or the accumulated error messages
        if result then
          print("\nSUCCEED");
        else
          print("\n" + Error.printMessagesStr());
        end if;
      then ();
  end matchcontinue;
end main;


The above code is taken from the OMCCp main model "Main.mo". This is the main model in which we make the run-time system calls that start the translation process, namely the lexer and the parser. The model contains a function "main" which calls the lexer and the parser in turn. The function has an input component, a list of type String, which receives the arguments as strings. In the protected section we have a list of type "OMCCTypes.Token", used for printing the tokens identified by the lexer (see the model "TokenModelica.mo" for the list of tokens), and "ParserModelica.AstTree astTreeModelica" to hold the AST. In the equation section we take the input from the user and call the lexer "LexerModelica.scan" with filename (the input or test-case file given by the user) as an argument, as can be seen in the code above. The results of the lexer are stored in "tokens". Once the lexer has produced the tokens, we call the parser "ParserModelica.parse", giving the tokens identified by the lexer as input. The parser checks that the tokens are formed according to the rules of the language; then we print the AST, or an error message. For a more detailed version of this model, refer to the Appendix A section.

LexerModelica.mo "function scan"

function scan "Scan starts the lexical analysis, loads the tables and consumes the program to output the tokens"
  input String fileName "input source code file";
  input Boolean debug "flag to activate the debug mode";
  output list<OMCCTypes.Token> tokens "return list of tokens";
algorithm
  // load program
  (tokens) := match (fileName, debug)
    local
      list<OMCCTypes.Token> resTokens;
      list<Integer> streamInteger;
    case (_, _)
      equation
        streamInteger = loadSourceCode(fileName);
        resTokens = lex(fileName, streamInteger, debug);
      then (resTokens);
  end match;
end scan;


The above code is taken from the model "LexerModelica.mo", which contains the function "scan". The model "Main.mo" makes its first system call to this function to start the lexical analysis. The function starts the scanning process, loads the tables from the arrays created by the Flex tool, and consumes the program to output the tokens. It has two input components, "fileName" of type String and "debug" of type Boolean, and one output component, "tokens" of type list. In the algorithm section we first load the program by passing the input components; then, in the equation section, we load the source code file given by the user using the function call "loadSourceCode(fileName)", and start the scanning process with another function call, "lex", which reads the characters of the given file and outputs them as tokens. Only the most important parts of the code are shown here; for more details, refer to the Appendix A section.
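What loadSourceCode is described to do, turning the source file into the stream of integers that lex consumes, can be sketched in a couple of lines of Python (the function name is taken from the text above; this is not the OMCCp code):

```python
# Sketch of loadSourceCode's described behavior: convert source text into
# a stream of integer character codes for the lexer to consume.
def load_source_code(text):
    return [ord(ch) for ch in text]

print(load_source_code("ab"))  # [97, 98]
```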

LexerModelica.mo "function lex"

function lex
  input String fileName "input source code file";
  input list<Integer> program "source code as a stream of Integers";
  input Boolean debug "flag to activate the debug mode";
  output list<OMCCTypes.Token> tokens "return list of tokens";
protected
  list<Integer> program1 := program "working copy, since an input cannot be reassigned";
algorithm
  tokens := {};
  if (debug) then
    print("\n TOTAL Chars:");
    print(intString(listLength(program1)));
  end if;
  while (List.isEmpty(program1)==false) loop
    if (debug) then
      print("\nChars remaining:");
      print(intString(listLength(program1)));
    end if;
    // ... token recognition elided: characters are consumed from program1
    //     and the recognized tokens are accumulated in tokens ...
  end while;
  tokens := listReverse(tokens);
end lex;


The above code is taken from the model "LexerModelica.mo", which contains the function "lex". The function "scan" calls this function after loading the input source code file and converting it into a stream of integers. In the algorithm section, the function reads the characters of the source file from a list and outputs tokens until every identified character of the given file has been consumed; the token list is then put into the right order with the built-in function listReverse. Only the most important parts of the code are shown here; for more details, refer to the Appendix A section.

ParserModelica.mo "function parse"

function parse "realize the syntax analysis over the list of tokens and generate the AST"
  input list<OMCCTypes.Token> tokens "list of tokens from the lexer";
  input String fileName "file name of the source code";
  input Boolean debug "flag to activate the debug mode";
  output Boolean result "true when the parse succeeds";
  output ParseCodeModelica.AstTree ast "AST tree that is returned when the result output is true";
protected
  list<OMCCTypes.Token> tokens1 := tokens "working copy of the token list";
algorithm
  // env (the parser environment) and pt (the parse table) are initialized
  // in the full code; see the Appendix A section
  while (List.isEmpty(tokens1)==false) loop
    if (debug) then
      print("\nTokens remaining:");
      print(intString(listLength(tokens1)));
    end if;
    (tokens1, env, result, ast) := processToken(tokens1, env, pt);
    if (result==false) then
      break;
    end if;
  end while;
  if (debug) then
    printAny(ast);
  end if;
end parse;


The above code is taken from the model "ParserModelica.mo", which contains the function "parse". The model "Main.mo" makes its second system call to this function to start the syntax analysis. The function "parse" performs the syntax analysis over the list of tokens given as input from the lexer and generates the AST. It has three input components and, besides the success flag, one output component; the first input is the list of tokens generated by the lexer. In the algorithm section we check, for each remaining token, that the token stream conforms to the grammar rules with the help of the processToken function. If the result is correct, we print the AST. Only the most important parts of the code are shown here; for more details, refer to the Appendix A section.

Function statements

Modelica functions have specific input and output components, with declared data types. Other variables, which are neither input nor output components, must be declared under the keyword protected. Moreover, an input component cannot be assigned to directly from another parameter.

Old version

function printBuffer
  input list<Integer> inList;
  output String outList;
  Integer c;
algorithm
  outList := "";
  while (Util.isListEmpty(inList)==false) loop
    c::inList := inList;
    outList := outList + intStringChar(c);
  end while;
end printBuffer;


The above code is taken from the package "Lexer.mo"; the function is used for printing the tokens. The code has errors and warnings, pointed out by the shadings. The function contains one input component, a list of Integer named inList, and one output component of type String named outList; then we have a plain variable 'c' of type Integer. This declaration gives a warning in the newer version (variables which are not input or output components must be protected). In the algorithm section we have a while loop which checks whether the list is empty and then pops the head of the input component inList into the variable 'c', which gives an error (an input component cannot be assigned to).

New version

The modified code is pointed out with shading. The warning is cleared by declaring the variable 'c' of type Integer under the keyword protected, and the error is cleared by copying the input list into a plain protected list of Integer named inList1 (list<Integer> inList1 := inList).

function printBuffer
  input list<Integer> inList;
  output String outList;
protected
  list<Integer> inList1 := inList;
  Integer c;
algorithm
  outList := "";
  while (List.isEmpty(inList1)==false) loop
    c::inList1 := inList1;
    outList := outList + intStringChar(c);
  end while;
end printBuffer;
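The logic of printBuffer, consuming a list of character codes and building the output string, can be sketched in Python for comparison (the cons-pop while loop becomes a plain for loop; this is an analogue, not the OMCCp code):

```python
# Python analogue of printBuffer: turn a list of character codes back
# into the string they spell.
def print_buffer(codes):
    out = ""
    for c in codes:
        out += chr(c)   # chr plays the role of intStringChar
    return out

print(print_buffer([72, 105]))  # 'Hi'
```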


The function changes above are essential to support the newer version of OpenModelica.

Modelica identifiers

There are two types of identifiers in Modelica. The first, of the form [a-zA-Z_][a-zA-Z0-9_]*, is commonly parsed by all types of parser. But Modelica also has a second kind of identifier, Q-IDENT, which represents special characters enclosed in single quotes ('any character within the quotes'). Q-IDENT is used in particular in enumeration statements, mostly in the Modelica electrical component packages. The older version did not support this special type of enumeration statement written within quotes.

Enumeration statements which include Q-IDENT

The shaded statements below use the special Modelica identifier form (Q-IDENT), which the old OMCCp could not parse; its error handling would report the token and suggest replacing it with some other alternative. In the new implementation we wrote the new rule in "lexerModelica.l", setting its priority relative to the ordinary identifier rule. Since Q-IDENT is only used on rare occasions, we gave it lower priority, without affecting the actual identifier rule. An example follows:

class test
  String str;
  type test = enumeration('4' "word", '1' "alpha");
  test v;
equation
  str = v.'4';
  str = v.'1';
end test;


Here the ordinary identifier rule keeps its higher priority, and Q-IDENT, which matches text in single quotes, is given lower priority without affecting the actual identifier rule; we also decided not to write a completely separate rule for Q-IDENT, as that would add overhead to the parser. With the new rule, OMCCp is able to handle all such statements.
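The priority between the two identifier forms can be illustrated with a small Python sketch. The regular expressions are simplified stand-ins for the lexer rules (in particular, escaped quotes inside a Q-IDENT are not handled):

```python
import re

# IDENT is tried before Q-IDENT, mirroring the priority described above.
# Patterns are simplified illustrations, not the actual lexer rules.
IDENT  = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*")
QIDENT = re.compile(r"'[^']*'")

def classify(lexeme):
    if IDENT.fullmatch(lexeme):
        return 'IDENT'
    if QIDENT.fullmatch(lexeme):
        return 'QIDENT'
    return None

print(classify("test"), classify("'4'"))
```

With this ordering, 'test' is recognized as an ordinary identifier while the quoted '4' from the enumeration example falls through to Q-IDENT.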

Next we look at the important changes made in the parser. One of the important changes during this implementation is that the compiler utility file at (Compiler->util->rtopts.mo) has been removed. The old parser was implemented to handle all its operations through this file, which caused an overhead problem in the parser; the new parser is therefore adapted to the newly added utility files, which are loaded in SCRIPT.mos.

The modified lexer definitions in "lexerModelica.l" referred to in the identifier discussion above are:

letter [a-zA-Z]
wild [ _ ]
digit [0-9]
digits {digit}+
ident (({letter}|{wild})|({letter}|{digit}|{wild}))({letter}|{digit}|{wild})*

Old version

SCRIPT.mos

loadFile("../../Compiler/FrontEnd/Absyn.mo");
loadFile("../../Compiler/Util/Error.mo");
loadFile("../../Compiler/Util/ErrorExt.mo");
loadFile("../../Compiler/FrontEnd/Dump.mo");
loadFile("../../Compiler/Util/Print.mo");
loadFile("../../Compiler/Util/RTOpts.mo");
loadFile("../../Compiler/Util/Util.mo");
loadFile("../../Compiler/Util/System.mo");

The SCRIPT.mos file loads all the packages required to run OMCCp and is passed as an argument. As we can see in the listing, the old parser loads RTOpts.mo, which has since been removed because of an overhead problem: functions for quite different operations, such as string manipulation and list operations, were all written in that one file. The modified version of rtopts.mo has been split into several files, which are listed below.

New version

SCRIPT.mos

loadFile("Types.mo");
loadFile("../../../Compiler/FrontEnd/Absyn.mo");
loadFile("../../../Compiler/Util/Error.mo");
loadFile("../../../Compiler/Util/ErrorExt.mo");
loadFile("../../../Compiler/FrontEnd/Dump.mo");
loadFile("../../../Compiler/Util/Print.mo");
loadFile("../../../Compiler/Util/Flags.mo");
loadFile("../../../Compiler/Global/Global.mo");
loadFile("../../../Compiler/Util/Pool.mo");
loadFile("../../../Compiler/Util/Debug.mo");
loadFile("../../../Compiler/Util/List.mo");
loadFile("../../../Compiler/Util/Settings.mo");
loadFile("../../../Compiler/Util/Corba.mo");
loadFile("../../../Compiler/Util/Name.mo");
loadFile("../../../Compiler/Util/Scope.mo");
loadFile("../../../Compiler/Util/Util.mo");
loadFile("../../../Compiler/Util/System.mo");

These models are developed by the OpenModelica developers, so we need not get into the details of their code; we only have to find which functions replace those used by the old version. A few example modifications are listed below.

Main.mo

case args as _::_
  equation
    {filename, parser} = RTOpts.args(args);
    "Modelica" = parser;
    false = (0 == stringLength(filename));

The listing above is taken from "Main.mo", where the actual translation starts. The function call RTOpts.args was used to read the strings given as arguments; since rtopts.mo has been removed, the task is to find an alternative, which is provided by the new utility file "Flags.mo". The replacement is:

case args as _::_
  equation
    {filename, parser} = Flags.new(args);
    "Modelica" = parser;
    false = (0 == stringLength(filename));

The new utility file "Flags.mo" contains a function named "new" which reads the strings given as arguments and outputs them. The function new from the package Flags.mo is listed below.

Flags.mo

public function new
  "Create a new flags structure and read the given arguments."
  input list<String> inArgs;
  output list<String> outArgs;
algorithm
  _ := loadFlags();
  outArgs := readArgs(inArgs);
end new;

The change above is one example of how the modifications were made to support the new version. We will not discuss every change in this report, as it would be impossible to include the entire set of code changes.

Performing the entire modification was a difficult task considering the complexity of the implementation: the whole of OMCCp contains about 15,000 lines of code, and it would be very hard to go through every line to find each problem. The good thing about OMCCp is that it contains very good error handling, so based on the previous report work we compile the code, obtain the list of errors, and from that list understand the code and find possible replacements consistent with the new implementation.


The next important thing we discuss is the new features added in the file "Absyn.mo" (path: Compiler->frontend->Absyn.mo). This is an important change to mention, as we use this package to construct the abstract syntax tree. The OpenModelica developers recently wrote code for parallelization of the compiler, which directly affects OMCCp. The developers are also working on optimizations of the compiler (under development), which will be an advantage to OMCCp once that work is finished.

Now we discuss the changes made. The Absyn.mo package includes a new union type, Parallelism, which directly affects the union type ElementAttributes, the type we use for the construction of the AST. The code below is the newly added union type.


As we can see, the new union type was added as part of the effort to parallelize the compiler. Since we do not perform any kind of parallelization in OMCCp, we need not worry about the first two record types, PARGLOBAL and PARLOCAL; they are used in other projects.

Absyn.mo

public
uniontype Parallelism "Parallelism"
  record PARGLOBAL "Global variables for CUDA and OpenCL" end PARGLOBAL;
  record PARLOCAL "Shared for CUDA and local for OpenCL" end PARLOCAL;
  record NON_PARALLEL "Non parallel/Normal variables" end NON_PARALLEL;
end Parallelism;

public
uniontype ElementAttributes "- Component attributes"
  record ATTR
    Boolean flowPrefix "flow";
    Boolean streamPrefix "stream";
    // Boolean inner_ "inner";
    // Boolean outer_ "outer";
    Parallelism parallelism "for OpenCL/CUDA parglobal, parlocal ...";
    Variability variability "variability ; parameter, constant etc.";
    Direction direction "direction";
    ArrayDim arrayDim "arrayDim";
  end ATTR;
end ElementAttributes;


The code above is taken from the package "Absyn.mo". We can see that under the record ATTR the union type Parallelism is referenced, and this addition affects the parser when creating the abstract syntax tree. Our task now is to find an appropriate record type and pass it as a parameter to the union type ElementAttributes. One such modification can be seen below. Another important point in this context is that we make these changes directly in the main file rather than in the generated files, since changes made to generated files are lost, and cause extra work, when the code is regenerated after new changes. One such change is discussed below.

parserModelica.y

classdefderived : EQUALS typespec elementargs2 comment
  { $$[ClassDef] = Absyn.DERIVED($2[TypeSpec],
      Absyn.ATTR(false, false, Absyn.NON_PARALLEL(), Absyn.VAR(), Absyn.BIDIR(), {}),
      $3[ElementArgs], SOME($4[Comment])); }

We make the change directly in the main file parserModelica.y. The listing above is one small example of how the modification should be performed. Since we have to adapt to the changes made in "Absyn.mo", we have to find the affected code in the parser; as said before, OMCCp contains good error handling, so we ran the code to locate the affected parts. The record ATTR is used for the construction of the AST, so we have to choose a proper record type of the union type Parallelism; we safely choose the record NON_PARALLEL and pass it to the main record ATTR when constructing the AST.
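To see what this change amounts to, the affected union types can be mirrored in a small Python sketch (purely illustrative: the real types are MetaModelica uniontypes in Absyn.mo, and the fields shown here as strings, such as variability and direction, are themselves union types in the real code):

```python
from dataclasses import dataclass, field

class Parallelism:
    """Base marker mirroring the Absyn.mo uniontype Parallelism."""

@dataclass
class NON_PARALLEL(Parallelism):
    """Non parallel/normal variables."""

@dataclass
class ATTR:
    """Mirror of Absyn.ElementAttributes.ATTR after the change: it now
    carries a Parallelism member between the prefixes and variability."""
    flowPrefix: bool
    streamPrefix: bool
    parallelism: Parallelism
    variability: str   # stands in for Absyn.Variability
    direction: str     # stands in for Absyn.Direction
    arrayDim: list = field(default_factory=list)

# What the semantic action in parserModelica.y effectively does:
# NON_PARALLEL() is the safe choice, since OMCCp does no parallelization.
attr = ATTR(False, False, NON_PARALLEL(), "VAR", "BIDIR", [])
```

The point of the sketch is that every construction site of ATTR in the parser must now supply one extra argument, and NON_PARALLEL is the neutral value.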

4.5 ERROR HANDLER

The error handler is an important part of this implementation. As said earlier, OMCCp contains very good error handling compared to ANTLR. The error handler aims to provide messages and hints to developers in the case of erroneous models. When an erroneous model is parsed, OMCCp presents the user with one of the following error handler messages, listed below.

1. Insert
2. Erase
3. Replace
4. Insert tokens at the end



5. Merge

Insert token

The insert token message hints the user with a possible replacement token when the parser finds an error token. The parser checks the list of candidate tokens against the error token; if a possible token is found, the new candidate token is selected and the message is displayed to the user. See Appendix D for the erroneous model and the error message display [2].

Erase token

The erase token message hints the user with a possible token to be removed when the parser finds the same token repeated one or several times; an error message is displayed asking the user to erase the extra repeated token. See Appendix D for the erroneous model and the error message display [2].

Replace token

The replace token message is very similar to insert token. When the parser finds an error token, it provides the user with possible replacement tokens by checking the list of candidate tokens against the error token. The insert and replace error handlers are used in combination to display good error messages to the user. See Appendix D for the erroneous model and the error message display [2].

Insert Tokens At End

The insert-tokens-at-end message is displayed when a model or class is not ended properly, mainly because of a missing class name, semicolon, or keyword end. See Appendix D for the erroneous model and the error message display [2].

Merge Tokens

The merge token message hints the user to combine tokens when a space inserted between tokens makes the parser treat them as two different tokens. In such cases an error message is displayed asking the user to eliminate the space and merge the two tokens into a single token. See Appendix D for the erroneous model and the error message display [2].
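The strategies can be illustrated with a small Python sketch (a hypothetical simplification: in the real OMCCp the expected-token and follow sets come from the LALR parse tables generated by GNU Bison, and the checks are done in MetaModelica):

```python
def suggest_fix(tokens, i, expected, follows):
    """Suggest a repair hint for an error at tokens[i].
    'expected' is the set of tokens the parser would accept at this point;
    'follows(t)' gives the tokens that may legally come after t. Both are
    hypothetical stand-ins for lookups in the LALR parse tables."""
    tok = tokens[i]
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    # Erase: the same token repeated one or several times.
    if i > 0 and tokens[i - 1] == tok:
        return ("erase", tok)
    # Merge: a space split one token into two (e.g. 'lo op' -> 'loop').
    if nxt is not None and tok + nxt in expected:
        return ("merge", tok + nxt)
    # Insert: an expected token placed before tok lets parsing continue.
    for cand in sorted(expected):
        if tok in follows(cand):
            return ("insert", cand)
    # Replace: substituting an expected token for tok fits the next token.
    for cand in sorted(expected):
        if nxt in follows(cand):
            return ("replace", cand)
    return ("error", tok)
```

For instance, when the parser sees 'x' at a point where only 'loop' is acceptable and 'x' may legally follow 'loop', the hint is to insert 'loop'.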

4.6 OMCCp vs ANTLR Error Handler

OMCCp contains a very good error handling mechanism compared with the ANTLR tool. The error handler plays a major role in saving time when locating errors in large models: OMCCp gives the user an immediate clue to the possible fix by displaying one of the error handling messages above. The ANTLR generator has a poor error handling strategy and does not give much of a clue about the possible fix or where the error occurred. We performed testing over the same set of erroneous models with both tools. Below is the sample script file used for testing the ANTLR parser and its error message reporting.

Parser.mos

runScript("LoadCompilerSources.mos");getErrorString();
loadString("function f
  input String str;
  output Real t;
protected
  Real r1;
algorithm
  _ := Interactive.ISTMTS({},false);
  r1 := 1.0;
  _ := Absyn.isDerCref(Absyn.REAL(r1));
  _ := Interactive.IALG(Absyn.ALGORITHMITEMANN(Absyn.ANNOTATION({})));
  _ := Interactive.IEXP(Absyn.REAL(r1));
  System.realtimeTick(1);
  _ := ParserExt.parse(str,1,\"UTF-8\",false);
  t := System.realtimeTock(1)*1000;
end f;");getErrorString();
{f(str) for str in {"error3.mo"}};
getErrorString();

The script file above is used to test the models with the ANTLR parser generator; the call to ParserExt.parse starts the parsing procedure with ANTLR. We pass the erroneous model "error3.mo" to the ANTLR parser and compare its error messages with those of OMCCp.


Error3.mo

class error_test
  int x,y,z,w;
algorithm
  while x <> 99
    x := (x+111) - (y/3);
    if x == 10 then
      y := 234;
    end if;
  end while;
end error_test;

The model above contains an error in the while statement, where the loop keyword is missing. Below we include the messages displayed by the two parsers.

ANTLR parser

Loaded all files without error
"true
"
""
true
""
{fail()}
""

OMCCp parser

Parsing Modelica with file ../../testsuite/omcc_test/error3.mo
starting lexer
Tokens processed:45
starting parser
[../../testsuite/omcc_test/error3.mo:9:3-9:4:writable] Error: Syntax error near: 'x', INSERT token 'LOOP'
args:../../testsuite/omcc_test/error3.mo
OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012
""

From the two results we can clearly see that the OMCCp parser gives better hints to the user than the ANTLR parser, which just gives the fail() message without indicating any kind of possible fix. The error handler implemented in OMCCp makes the tool more useful, and it can save time when working with very large models by reporting errors together with possible replacement messages whenever errors occur in a model. All the error handling messages of OMCCp are listed in Appendix D.

4.7 Discussions

We started the work of rebuilding OMCCp in February 2012, completed the implementation by June 2012, and made the latest commit to trunk in October 2012. We also committed the working copy of OMCCp to the main test suite of the OpenModelica project. If you run OMCCp to check your models, or are assigned new work on OMCCp, and errors are generated during compilation, you should assume that the OpenModelica developers have made changes to the compiler models; the parser then has to be re-adapted to the newly added features. Mostly you have to look into the front-end model "Absyn.mo" (OpenModelica/Compiler/Frontend/Absyn.mo).

The list below includes the other compiler Util models we used in the construction of OMCCp. If OMCCp is not working, the following models in the compiler have to be examined to correct the problem. In the path OpenModelica/Compiler/Util, we use the following models in our implementation:

1) Error.mo
2) ErrorExt.mo
3) Flags.mo
4) List.mo
5) Print.mo
6) System.mo
7) Util.mo


Chapter 5

Testing

In this chapter we test the newly developed OMCCp with test cases and analyze the results. At the end we compare the performance of the parser.

5.1 Sample input and output

We can pass all the test cases in SCRIPT.mos and obtain the results in the same SCRIPT.mos file at the end (note: look in the SCRIPT.mos file for the comment // add your test case). A sample example is shown below.

Input: Test1.mo, which tests the Modelica data types declared under a class:

class test1
  Integer c;
  Real a;
  parameter Real z=3.0;
  Boolean x;
  String t="test";
  constant Real pi=3.14;
  constant Integer x=1;
end test1;

Output:

We have included a few aspects of the output; for the detailed version of the output, look into the Appendix B section.

// Parsing Modelica with file ../../testsuite/omcc_test/Test1.mo

// starting lexer

// [TOKEN:CLASS 'class' (5:2-5:7)][TOKEN:IDENT 'test1' (5:8-5:13)][TOKEN:IDENT 'Integer' (6:2-

6:9)][TOKEN:IDENT 'c' (6:10-6:11)][TOKEN:SEMICOLON ';' (6:11-6:12)][TOKEN:IDENT 'Real'

(7:2-7:6)][TOKEN:IDENT 'a' (7:7-7:8)][TOKEN:SEMICOLON ';' (7:8-7:9)][TOKEN:PARAMETER

'parameter' (8:2-8:11)][TOKEN:IDENT 'Real' (8:12-8:16)][TOKEN:IDENT 'z' (8:17-

8:18)][TOKEN:EQUALS '=' (8:18-8:19)][TOKEN:UNSIGNED_REAL '3.0' (8:19-

8:22)][TOKEN:SEMICOLON ';' (8:22-8:23)][TOKEN:IDENT 'Boolean' (9:2-9:9)][TOKEN:IDENT 'x'

(9:10-9:11)][TOKEN:SEMICOLON ';' (9:11-9:12)][TOKEN:IDENT 'String' (10:2-

10:8)][TOKEN:IDENT 't' (10:9-10:10)][TOKEN:EQUALS '=' (10:10-10:11)][TOKEN:STRING '"test"'

(10:11-10:17)][TOKEN:SEMICOLON ';' (10:17-10:18)][TOKEN:CONSTANT 'constant' (11:2-

11:10)][TOKEN:IDENT 'Real' (11:11-11:15)][TOKEN:IDENT 'pi' (11:16-11:18)][TOKEN:EQUALS '='

(11:18-11:19)][TOKEN:UNSIGNED_REAL '3.14' (11:19-11:23)][TOKEN:SEMICOLON ';' (11:23-

11:24)][TOKEN:CONSTANT 'constant' (12:2-12:10)][TOKEN:IDENT 'Integer' (12:11-


12:18)][TOKEN:IDENT 'x' (12:19-12:20)][TOKEN:EQUALS '=' (12:20-12:21)][TOKEN:IDENT '1'

(12:21-12:22)][TOKEN:SEMICOLON ';' (12:22-12:23)][TOKEN:ENDCLASS 'end test1' (13:2-

13:11)][TOKEN:SEMICOLON ';' (13:11-13:12)]

// Tokens processed: 36

OMCCp starts the process by calling the lexer. From the input "Test1.mo" the lexer recognizes the characters and groups them under the appropriate tokens. For example, in the model "Test1.mo" we declared a few data types under class test1. When the lexer starts reading the input it first reads the keyword class and puts it under the token CLASS, which can be seen in the output above as [TOKEN:CLASS 'class' (5:2-5:7)]. The numbers (5:2-5:7) give the line (5) and the column range (2-7) where the token was recognized. The lexer proceeds likewise for the remaining characters and outputs them under the appropriate tokens. For the above input the lexer recognized 36 tokens.

// starting parser

// Parsing tokens ParseCode Modelica ...../../testsuite/omcc_test/Test1.mo

//

// Tokens remaining:36

// [State:0]{|0}

// [CLASS,'class'][n:1368-NT:1375] SHIFT1

// Tokens remaining:35

// [State:2]{|2|0}

// REDUCE3[Reducing(l:1,r:24)][nState:23]

// [State:23]{|23|0}

// [IDENT,'test1'][n:110-NT:191] SHIFT1

// Tokens remaining:34

// [State:38]{|38|23|0}

// [IDENT,'Integer'][n:818-NT:899] SHIFT1

// Tokens remaining:33

// [State:31]{|31|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|38|23|0}

// [IDENT,'c'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:33

// [State:98]{|98|38|23|0}

// [IDENT,'c'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:33

// [State:216]{|216|98|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:97]

// [State:97]{|97|38|23|0}

// [IDENT,'c'][n:197-NT:278] SHIFT1

Once the tokens are generated by the lexer, OMCCp calls the parser. The parser identifies each token and starts to construct the AST. For each token it queries the parsing table generated by GNU Bison and performs one of two operations, shift or reduce; the AST is constructed during the reduce operations. While performing each action for a token, the parser also queries the table for the next state to go to, which can be seen in the output above. It repeats these actions for the remaining tokens until it reaches an accept state, which indicates that parsing is done and the AST is constructed.
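The shift/reduce loop itself is generic and can be sketched in Python (illustrative only: the ACTION/GOTO tables below are hand-built for a toy grammar, whereas OMCCp queries the tables that GNU Bison generates for the full Modelica grammar):

```python
def lr_parse(tokens, action, goto, rules):
    """Table-driven LR parsing loop. action[(state, token)] is ('shift', s),
    ('reduce', r) or ('accept',); rules[r] = (lhs, rhs_len, build).
    AST nodes are built bottom-up during the reduce steps, as in OMCCp."""
    states, values = [0], []
    toks = list(tokens) + ["$"]          # '$' marks end of input
    i = 0
    while True:
        act = action[(states[-1], toks[i])]
        if act[0] == "shift":
            states.append(act[1])
            values.append(toks[i])
            i += 1
        elif act[0] == "reduce":
            lhs, n, build = rules[act[1]]
            args, values = values[len(values) - n:], values[:len(values) - n]
            del states[len(states) - n:]
            values.append(build(args))   # construct an AST node
            states.append(goto[(states[-1], lhs)])
        else:                            # accept: parsing is done
            return values[-1]

# Toy grammar S -> 'a' S | 'b', with hand-built LR tables.
ACTION = {(0, "a"): ("shift", 1), (0, "b"): ("shift", 2),
          (1, "a"): ("shift", 1), (1, "b"): ("shift", 2),
          (2, "$"): ("reduce", 2), (4, "$"): ("reduce", 1),
          (3, "$"): ("accept",)}
GOTO = {(0, "S"): 3, (1, "S"): 4}
RULES = {1: ("S", 2, lambda a: ("a", a[1])), 2: ("S", 1, lambda a: "b")}
```

The trace above is exactly this loop at work: each SHIFT pushes a state, each REDUCE pops rhs_len states, builds a node, and follows a GOTO entry.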

// construction of AST:

Absyn.Program.PROGRAM(classes = {Absyn.Class.CLASS(name = test1, partialPrefix = 0, finalPrefix

= 0, encapsulatedPrefix = 0, restriction = Absyn.Restriction.R_CLASS(), body =

Absyn.ClassDef.PARTS(typeVars = {NIL}, classAttrs = {NIL}, classParts =

{Absyn.ClassPart.PUBLIC(contents = {Absyn.ElementItem.ELEMENTITEM(element =

Absyn.Element.ELEMENT(finalPrefix = 0, redeclareKeywords = NONE(), innerOuter =

Absyn.InnerOuter.NOT_INNER_OUTER(), name = component, specification =

Absyn.ElementSpec.COMPONENTS(attributes = Absyn.ElementAttributes.ATTR(flowPrefix = 0,

streamPrefix = 0, parallelism = 0, variability = Absyn.Variability.VAR(), direction =

Absyn.Direction.BIDIR(), arrayDim = {NIL})

The AST is constructed with respect to the compiler front-end package "Absyn.mo". Each time the parser performs a reduce operation, "Absyn.mo" is consulted: the parser identifies the union type and constructs the corresponding AST node.

// ***************-ACCEPTED-***************

// SUCCEED

// args:../../testsuite/omcc_test/Test1.mo

// OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012

// ""

// endResult

The message above tells us that parsing succeeded for the given input model "Test1.mo". This is one example of how the parser output looks. We can perform multiple tests by passing the test cases in the SCRIPT.mos file, which looks as below.

SCRIPT.mos

// add your test cases here
{Main.main ({"../../testsuite/omcc_test/" + file," Modelica"}) for file in
  {"Test1.mo","Test2.mo","Test3.mo","Test4.mo","Test5.mo","Test6.mo","Test7.mo",
   "Test8.mo","Test9.mo","Test10.mo","Test11.mo","Test12.mo","Test13.mo","Test14.mo",
   "Test15.mo","Test16.mo","Test17.mo","Test18.mo","Test19.mo","Test20.mo","Test21.mo"}};


5.2 Analysis of the Result

We ran some 150-200 test cases and all of them passed, which is a very good result. We tested almost all types of Modelica language constructs, with test cases taken from the textbook "Principles of Object-Oriented Modeling and Simulation with Modelica" by Peter Fritzson [3]. These test cases are saved in the test suite under the folder "omcc_test"; additionally, we took many test cases from the test suite itself, so that we had enough test cases to check that the parser works correctly. Considering the complexity and overhead problems the parser had in the previous version, the newly implemented parser performs much faster. From the output we can confirm that the parser is efficient and that all test cases passed in good time. Below is a screenshot of the output.

From the screenshot we can see that the parser took only 23 seconds to parse 250 test cases and output the abstract syntax trees; more importantly, all the test cases passed, which is a very good result. We also performed testing over a large subset of Modelica grammar (>10,000 lines) and found that the parser takes more time and memory, but that is arguably acceptable; the parser still parses correctly, and in good time considering the size of the file. We also compared the timing performance of our parser with the old version; the table below shows the comparison.


Number of test cases    Old OMCCp (time in secs)    New OMCCp (time in secs)
21                      24                          9
100                     50                          16
250                     120                         25

Fig 5.2.a) Parsing time results between old OMCCp and new OMCCp
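The speedups implied by Fig 5.2.a can be checked with a quick computation (timings copied from the table above):

```python
# (test cases, old OMCCp seconds, new OMCCp seconds) from Fig 5.2.a
rows = [(21, 24, 9), (100, 50, 16), (250, 120, 25)]

speedups = {n: old / new for n, old, new in rows}
for n, s in speedups.items():
    print(f"{n} test cases: {s:.1f}x faster")
```

The speedup grows with the number of test cases, from roughly 2.7x at 21 cases to 4.8x at 250, suggesting the new parser also scales better across many files.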

From the table above we can clearly see that the newly implemented parser is much more time-efficient than the old OMCCp, which is a good result. We also compared the timing performance of our newly implemented parser with the ANTLR parser. We ran 50 test cases and analyzed the results. We performed the tests individually, as the ANTLR parser had memory overhead problems when passing many test cases in a single attempt. The table below shows the comparison between the new OMCCp and ANTLR.

Test cases   New OMCCp            ANTLR
Test1.mo     3.760501             1.3980394
Test2.mo     1.9562920000000001   0.7899635
Test3.mo     1.614373             0.800695
Test4.mo     2.857024             1.201042
Test5.mo     2.791135             Failed()
Test6.mo     2.6246790000000004   0.0960541
Test7.mo     3.1126380000000005   1.32764862478871
Test8.mo     3.75037              1.56521756133043
Test9.mo     4.285425             1.58379291027813
Test10.mo    6.239206             1.06514961465927
Test11.mo    2.5076780000000003   0.918013297994547
Test12.mo    3.6748920000000003   1.55348576199503
Test13.mo    3.795541             1.55201928707811
Test14.mo    6.011137000000001    1.63120893259201
Test15.mo    9.294233             1.65271723137357
Test16.mo    9.411837             1.68449085457359
Test17.mo    6.51426              1.69377852904745
Test18.mo    6.011074000000001    1.63414188242586
Test19.mo    5.5634380000000005   1.61458888353354
Test20.mo    3.6801550000000005   0.963962845391505
Test21.mo    6.381881             11.02702126681925
Test22.mo    5.787908000000001    1.883306724960674
Test23.mo    3.7386090000000003   13.46794139184112
Test24.mo    5.160052             1.94063514006302
Test25.mo    4.58291              1.36528814765643
Test26.mo    11.565446000000001   4.56472873635812
Test27.mo    16.048763            1.8604678446045
Test28.mo    6.865589000000001    1.43470129372417
Test29.mo    3.9414510000000003   1.40439414544107
Test30.mo    6.45792              1.51731271404423
Test31.mo    7.145268             0.90285972385299
Test32.mo    9.201868000000001    1.8179400720137
Test33.mo    9.107916000000001    1.05977253996389
Test34.mo    9.392864             1.15118280978549
Test35.mo    8.959991             1.57548288574889
Test36.mo    12.904014            1.12967451100394
Test37.mo    15.223904000000001   6.24405955452403
Test38.mo    7.636395             1.0426636659331
Test39.mo    7.725462             1.04315249090541
Test40.mo    7.258154000000001    1.66493785568127
Test41.mo    4.862293             2.90508681042683
Test42.mo    4.82479              5.5801619183838
Test43.mo    9.299007000000001    15.3203768645007
Test44.mo    3.7506190000000004   11.7553196377611
Test45.mo    6.43192100000000     15.222611870039
Test46.mo    6.45666600199        1.45556667788

Time in milliseconds (ms)

Large test cases:

Test cases   Size     OMCCp               ANTLR
Test47.mo    53 kb    2999.571836         37.0216481027236
Test48.mo    141 kb   9222.6054749        200.607389
Test49.mo    335 kb   29909.026248        240.778963
Test50.mo    515 kb   93393.57284300002   300.758278

Fig 5.2.b) Parsing time results between new OMCCp and ANTLR

From the tables above we can say that ANTLR is faster than OMCCp. Although ANTLR is faster, there are some positive points in the results. We ran a total of 50 test cases and achieved 100% success with OMCCp and 99% success with ANTLR. The scaling to large files is considerably better than in the old OMCCp. The testing was also done on a computer with limited resources (e.g. 2 GB RAM); with better resources (6 GB RAM etc.) OMCCp can perform faster than the times above. Even though ANTLR performed faster overall, 5 out of 50 test cases ran slower than OMCCp. The reason behind this measurement is that when passing many test cases (>20) to ANTLR there was a memory overhead problem, and in those situations ANTLR can perform slower than OMCCp. As for OMCCp, we passed more than 250 test cases in a single attempt and there was no memory overhead problem.

5.3 Improvements from Previous Version

When we started this thesis we used the report "OMCCp: A MetaModelica based parser generator applied to Modelica" by Edgar Alonso Lopez-Rojas [2] as the main reference, as the development of OMCCp began with his thesis work. The improvements made are listed below.


1) We use the rtest environment for testing our implementation, which was not used by the previous version. The results produced are accurate and clear.

2) We completed the lexer with "Q-ident", a special type of identifier used in Modelica that was not supported by the previous version.

3) We made the whole OMCCp work with the new version, added OMCCp to the main test suite, and made a release of OMCCp possible, in contrast to the failure of the previous implementation.

4) We allocated more time for testing and ensured that the parser parses all types of Modelica grammar correctly, whereas the previous version had not been tested properly.

5) We created a Makefile to make the execution process easier.

6) We ran a total of 250 test cases and all of them passed, which indicates that we covered almost the entire Modelica grammar, a 100% result where the previous version had failures.

7) The parsing time is very fast compared to the previous version: for a total of 250 test cases, more than 10,000 characters were output as tokens, and it took just 23 seconds to parse and construct the abstract syntax trees, a significant improvement over the previous parser, which failed to work.


Chapter 6

User Guide

In this chapter we discuss the tools and software that need to be installed to run OMCCp.

6.1 Getting Started

The tools needed for this implementation are Tortoise SVN, the Eclipse IDE with the MDT plug-in, OMDev, and the MinGW terminal if you are a Windows user. Follow the steps below to set up the environment.

1) Install the latest version of Tortoise SVN from http://tortoisesvn.net/

2) Create a new folder named OpenModelica on any of your drives (e.g. c:/OpenModelica), right-click on that folder, and perform the latest SVN checkout from https://www.openmodelica.org/index.php/developer/source-code

3) Install OMDev from https://www.openmodelica.org/index.php/developer/courses and follow the instruction steps found in install.txt

4) Install the latest version of Eclipse from www.eclipse.org

5) Install the MDT plug-in from https://www.openmodelica.org/index.php/developer/courses and follow the installation steps.

6) Note: the installation steps may vary for Windows, Linux, and Mac users, so be aware of the steps to be followed. All the information is provided on the main website www.openmodelica.org under Downloads.

6.2 Commands To Run the Code

After downloading the latest version of the OpenModelica source code from the SVN repository, check for the folders "omcc" and "omcc_test" in the testsuite directory (e.g. c:/OpenModelica/testsuite/omcc). These two folders contain the main implementation and the test cases for our parser. After verifying that both folders are present in the test suite, we are ready to compile OMCCp. Follow the steps below to successfully test the parser.


1) Open the MinGW terminal if you are a Windows user, or a shell terminal if you are a Linux user.

2) In the terminal window, go to the path OpenModelica/testsuite/openmodelica/omcc (e.g. cd c:/OpenModelica/testsuite/openmodelica/omcc).

3) At present the code is tested with the testing environment rtest, and the baseline is added to the Makefile. Every time you add a new test case, or whenever you run the code, you first have to set the baseline by typing make baseline in the terminal window.

4) Then type the command "make test" in the terminal window, and OMCCp will be compiled.


5) You can also test OMCCp without the Makefile by typing the command ../../../build/bin/omc +g=MetaModelica +d=rml SCRIPT.mos, and if you want you can redirect the output to a text file with ../../../build/bin/omc +g=MetaModelica +d=rml SCRIPT.mos > output.txt

6) Some C files and other lib files will be created after running make; you can run "make clean" to delete those generated files.

7) The parser runs and produces output for the test cases that looks like the sample output discussed in Chapter 5. You can also look at the output in SCRIPT.mos.

8) If you want to test your own models or other models, look for the file "SCRIPT.mos" in the folder omcc.

9) Open the file "SCRIPT.mos" and look for the comment // add your testfiles here.


Chapter 7

Conclusion

In this chapter we conclude the thesis based on the results achieved, together with some possible future work.

7.1 Accomplishment

As stated in Section 1.4, we accomplished our project goal with good results and end the thesis with the following output:

A working OMCCp lexer and parser integrated with MetaModelica in the new bootstrapped OpenModelica compiler

A tested OMCCp

Improvements in performance

A completed Modelica grammar for a complete OMCCp-based Modelica parser

A release of the updated OMCCp

From the results discussed in Chapter 5 we can clearly see that the implementation of the OpenModelica compiler-compiler parser generator (OMCCp) works fast and achieves a 100% result in testing. OMCCp has also been added to the main test suite and is ready to be used by everyone. Although there is a small amount of overhead when parsing a large subset of grammar, considering the total number of characters consumed (>300,000) the parser still managed to construct the abstract syntax tree in good parsing time. The good news is that OMCCp is very fast when parsing a normal subset of grammar, completing in under 1 second. Thus OMCCp can be a good candidate to replace the main parser of the OpenModelica project.

7.2 Future works

With this thesis work we accomplished our goals; still, during development we found that many improvements can be made, especially by the OpenModelica development team.


OMCCp is functionally independent: the lexer and parser do not depend on each other, which lets us test the lexer and parser separately. However, both the OMCCp lexer and parser depend heavily on the compiler utility files (OpenModelica/Compiler/Front-end and OpenModelica/Compiler/Util). A small change in any of these utility files directly affects OMCCp. Hence an abstract method or some kind of virtual functions (as in C++) could be created, so that we just make a function call to "Absyn.mo" and provide our own definition, or some kind of standard interface could be developed that can be called externally.

Now that OMCCp is stable, an RML grammar could be written and the RML constructs mapped to the internal MetaModelica AST generated by OMCCp [15][16][17].

Test the RML-based OMCCp on all the RML examples in the RML tutorial [15].

Materials for learning from the RML examples can be included in the newly updated OMCCp [15][16][17].


Bibliography

[1]. OpenModelica project. https://www.OpenModelica.org/ [last accessed : 25-8-2012]

[2]. Edgar Alonso Lopez-Rojas. OMCCp: A MetaModelica based parser generator applied to Modelica. Master's thesis, Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory, ISRN: LIU-IDA/LITH-EX-A--11/019--SE, May 2011.

[3]. Peter Fritzson. Principles of Object-Oriented Modeling and Simulation with Modelica. Wiley-IEEE Press, 2003.

[4]. Peter Fritzson, Adrian Pop and Martin Sjölund. Towards Modelica 4

Meta-Programming and Language Modeling with MetaModelica 2.0, Technical

reports Computer and Information Science Linköping University Electronic Press,

ISSN:1654-7233; 2011:10.

Available at: http://www.ep.liu.se/PubList/Default.aspx?SeriesID=2550

[5]. Peter Fritzson and Adrian Pop. Meta-Programming and Language Modeling

with MetaModelica 1.0. Technical reports Computer and Information Science

Linköping University Electronic Press, ISSN: 1654-7233. 2011:9.

Available at: http://www.ep.liu.se/PubList/Default.aspx?SeriesID=2550

[6]. Aho, Lam, Sethi, Ullman. Compilers: Principles, Techniques, and Tools, Second Edition. Addison-Wesley, 2006.

[7]. Compiler Construction laboratory assignments, 2011. Compendium, Bokakademin.

Linköping University, Department of Computer and Information Science.

[8]. DFA. http://en.wikipedia.org/wiki/Deterministic_finite_automaton

[last accessed : 20-7-2012]

[9]. CFG. http://en.wikipedia.org/wiki/Context-free_grammar.

[last accessed : 25-7-2012]

[10]. PDA. http://en.wikipedia.org/wiki/Pushdown_automaton.

[last accessed : 20-6-2012]

[11]. LALR. http://en.wikipedia.org/wiki/LALR_parser

[last accessed : 25-7-2012]

[12]. Look Ahead. http://en.wikipedia.org/wiki/Parsing#Lookahead

[last accessed : 25-7-2012]


[13]. Flex. http://en.wikipedia.org/wiki/Flex_lexical_analyser

[last accessed : 15-8-2012]

[14]. Bison. http://en.wikipedia.org/wiki/GNU_Bison

[last accessed : 15-8-2012]

[15]. Peter Fritzson. Developing Efficient Language Implementations from Structural

and Natural Semantics, March 2006. (RML-manual)

Available at: http://www.ida.liu.se/labs/pelab/rml/

[16]. Adrian Pop, Peter Fritzson. Debugging Natural Semantics Specifications. Sixth International Symposium on Automated and Analysis-Driven Debugging (AADEBUG 2005), September 19-21, 2005, Monterey, California.

[17]. Peter Fritzson, Adrian Pop, David Broman, Peter Aronsson. Formal Semantics

based Translator Generation and Tool Development in Practice. In Proceedings

of 20th Australian Software Engineering Conference (ASWEC 2009), Gold

Coast, Queensland, Australia, April 14 – 17, 2009.


Appendices


Appendix A

Main.mo

class Main

import Lexer;

import Parser;

import LexerModelica;

import ParserModelica;

import Flags;

import Util;

import System;

import Types;

public function main

"function: main

This is the main function that the MetaModelica Compiler (MMC) runtime

system calls to

start the translation."

input list<String> inStringLst;

protected

list<OMCCTypes.Token> tokens;

ParserModelica.AstTree astTreeModelica;

type Mcode_MCodeLst = list<Mcode.MCode>;

algorithm

_ := matchcontinue (inStringLst)

local

String ver_str,errstr,filename,parser,ast;

list<String> args_1,args,chars;

String s,str;

Boolean result;

case args as _::_

equation

{filename,parser} = Flags.new(args);

"Modelica" = parser;

false=(0==stringLength(filename));

print("\nParsing Modelica with file " + filename + "\n");

// call the lexer

print("\nstarting lexer");

tokens = LexerModelica.scan(filename,false);

print("\n");

// print(OMCCTypes.printTokens(tokens,""));

print("\n Tokens processed:");

print(intString(listLength(tokens)));

// call the parser

print("\nstarting parser");

(result,astTreeModelica) =

ParserModelica.parse(tokens,filename,true);

print("\n");


// printing the AST

if (result) then

print("\nSUCCEED");

else

print("\n" +Error.printMessagesStr());

end if;

print("\nargs:" + filename);

printUsage();

then ();

case {}

equation

print("no args");

printUsage();

then ();

case _

equation

print("\n**********Error*************");

print("\n" +Error.printMessagesStr());

printUsage();

then ();

end matchcontinue;

end main;

public function printUsage

protected

Integer n;

list<String> strs;

algorithm

print("\nOMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012");

protected function readSettings

"function: readSettings

author: x02lucpo

Checks if 'settings.mos' exists and uses handleCommand with runScript(...)
to execute it.
Checks if '-s <file>.mos' has been given as an argument and
returns Interactive.InteractiveSymbolTable which is used in the rest of
the loop"

input list<String> inStringLst;

output String str;

algorithm

str:=

matchcontinue (inStringLst)

local

list<String> args;

case (args)

equation

outSymbolTable = Interactive.emptySymboltable;

"" = Util.flagValue("-s",args);

// this is out-commented because automatically reading settings.mos

// can make a system bad

// outSymbolTable = readSettingsFile("settings.mos", Interactive.emptySymboltable);

then

outSymbolTable;

case (args)

equation

str = Util.flagValue("-s",args);

str = System.trim(str," \"");

outSymbolTable = readSettingsFile(str,

Interactive.emptySymboltable);

then

outSymbolTable;

end matchcontinue;

end readSettings;

end Main;


LexerModelica.mo

package LexerModelica // Generated by OMCC v0.9.2 (OpenModelica Compiler-Compiler) Copyright 2011 Open Source Modelica Consortium (OSMC) Fri May 4 16:09:39 2012

"Implements the DFA of OMCC"

import Types;

import LexTableModelica;

import LexerCodeModelica;

uniontype LexerTable

record LEXER_TABLE

array<Integer> accept;

array<Integer> ec;

array<Integer> meta;

array<Integer> base;

array<Integer> def;

array<Integer> nxt;

array<Integer> chk;

array<Integer> acclist;

end LEXER_TABLE;

end LexerTable;

uniontype Env

record ENV

Integer startSt,currSt;

Integer pos,sPos,ePos,linenr;

list<Integer> buff;

list<Integer> bkBuf;

list<Integer> stateSk;

Boolean isDebugging;

String fileName;

end ENV;

end Env;

function scan "Scan starts the lexical analysis, load the tables and consume

the program to output the tokens"

input String fileName "input source code file";

input Boolean debug "flag to activate the debug mode";

output list<OMCCTypes.Token> tokens "return list of tokens";

algorithm

// load program

(tokens) := match(fileName,debug)

local

list<OMCCTypes.Token> resTokens;

list<Integer> streamInteger;

case (_,_)

equation

streamInteger = loadSourceCode(fileName);

resTokens = lex(fileName,streamInteger,debug);

then (resTokens);

end match;

end scan;

function scanString "Scan starts the lexical analysis, load the tables and

consume the program to output the tokens"

input String fileSource "input source code file";

input Boolean debug "flag to activate the debug mode";

output list<OMCCTypes.Token> tokens "return list of tokens";

algorithm

// load program

(tokens) := match(fileSource,debug)

local

list<OMCCTypes.Token> resTokens;

list<Integer> streamInteger;

list<String> chars;


case (_,_)

equation

chars = stringListStringChar(fileSource);

//streamInteger = Util.listMap(chars, stringCharInt);

streamInteger = list(stringCharInt(c) for c in chars);

resTokens = lex("<StringSource>",streamInteger,debug);

then (resTokens);

end match;

end scanString;

function loadSourceCode

input String fileName "input source code file";

output list<Integer> program;

algorithm

(program) := match(fileName)

local

list<Integer> streamInteger;

list<String> chars;

case ("")

equation

print("Empty FileName");

then ({});

case (_)

equation

chars = stringListStringChar(System.readFile(fileName));

//streamInteger = Util.listMap(chars, stringCharInt);

streamInteger = list(stringCharInt(c) for c in chars);

then (streamInteger);

end match;

end loadSourceCode;

function lex "Scan starts the lexical analysis, load the tables and consume

the program to output the tokens"

input String fileName "input source code file";

input list<Integer> program "source code as a stream of Integers";

input Boolean debug "flag to activate the debug mode";

output list<OMCCTypes.Token> tokens "return list of tokens";

protected

list<Integer> program1:=program;

Integer r,cTok;

list<Integer> cProg;

list<String> chars;

array<Integer>

mm_accept,mm_ec,mm_meta,mm_base,mm_def,mm_nxt,mm_chk,mm_acclist;

Env env;

LexerTable lexTables;

algorithm

// load arrays

mm_accept := listArray(LexTableModelica.yy_accept);

mm_ec := listArray(LexTableModelica.yy_ec);

mm_meta := listArray(LexTableModelica.yy_meta);

mm_base := listArray(LexTableModelica.yy_base);

mm_def := listArray(LexTableModelica.yy_def);

mm_nxt := listArray(LexTableModelica.yy_nxt);

mm_chk := listArray(LexTableModelica.yy_chk);

mm_acclist := listArray(LexTableModelica.yy_acclist);

lexTables :=

LEXER_TABLE(mm_accept,mm_ec,mm_meta,mm_base,mm_def,mm_nxt,mm_chk,mm_acclist);

// Initialize the Env Variables

env := ENV(1,1,1,0,1,1,{},{},{1},debug,fileName);

if (debug==true) then

print("\nLexer analyzer LexerCodeModelica..." + fileName + "\n");

//printAny("\nLexer analyzer LexerCodeModelica..." + fileName + "\n");

end if;

tokens := {};


if (debug) then

print("\n TOTAL Chars:");

print(intString(listLength(program1)));

end if;

while (List.isEmpty(program1)==false) loop

if (debug) then

print("\nChars remaining:");

print(intString(listLength(program1)));

end if;

cTok::program1 := program1;

cProg := {cTok};

(tokens,env,cProg) := consume(env,cProg,lexTables,tokens);

if (List.isEmpty(cProg)==false) then

cTok::cProg := cProg;

program1 := cTok::program1;

end if;

end while;

tokens := listReverse(tokens);

end lex;

function consume

input Env env;

input list<Integer> program;

input LexerTable lexTables;

input list<OMCCTypes.Token> tokens;

output list<OMCCTypes.Token> resToken;

output Env env2;

output list<Integer> program2;

protected

list<Integer> program1:=program;

list<OMCCTypes.Token> tokens1:=tokens;

array<Integer>

mm_accept,mm_ec,mm_meta,mm_base,mm_def,mm_nxt,mm_chk,mm_acclist;

Integer mm_startSt,mm_currSt,mm_pos,mm_sPos,mm_ePos,mm_linenr;

list<Integer> buffer,bkBuffer,states;

String fileNm;

Integer c,cp,mm_finish,baseCond;

Boolean debug;

algorithm

LEXER_TABLE(accept=mm_accept,ec=mm_ec,meta=mm_meta,base=mm_base,

def=mm_def,nxt=mm_nxt,chk=mm_chk,acclist=mm_acclist) := lexTables;

ENV(startSt=mm_startSt,currSt=mm_currSt,pos=mm_pos,sPos=mm_sPos,ePos=mm_ePos,

linenr=mm_linenr,

buff=buffer,bkBuf=bkBuffer,stateSk=states,isDebugging=debug,fileName=fileNm)

:= env;

mm_finish := LexTableModelica.yy_finish;

baseCond := mm_base[mm_currSt];

if (debug==true) then

print("\nPROGRAM:{" + printBuffer(program) + "} ");

print("\nBUFFER:{" + printBuffer(buffer) + "} ");

print("\nBKBUFFER:{" + printBuffer(bkBuffer) + "} ");

print("\nSTATE STACK:{" + printStack(states,"") + "} ");

print("base:" + intString(baseCond) + " st:" + intString(mm_currSt)+" ");

end if;

(resToken,program2) := match (program1,tokens1)

local

Integer c,d,act,val,c2,curr2,fchar;

list<Integer> rest;

list<OMCCTypes.Token> lToken;

String sToken;

Boolean emptyToken;

Option<OMCCTypes.Token> otok;

case (_,_) // loop tokens

equation

cp::rest = program1;


buffer = cp::buffer;

mm_pos = mm_pos+1;

if (cp==10) then

mm_linenr = mm_linenr+1;

mm_ePos = mm_sPos;

mm_sPos = 0;

else

mm_sPos = mm_sPos+1;

end if;

if (debug==true) then

print("\n[Reading:'" + intStringChar(cp) +"' at p:" +

intString(mm_pos-1)

+ " line:"+ intString(mm_linenr) + " rPos:" + intString(mm_sPos)

+"]");

end if;

c = mm_ec[cp];

c2 = c;

curr2 = mm_currSt;

if (debug==true) then

print(" evalState Before[c" + intString(c2) + ",s"+

intString(curr2)+"]");

end if;

(mm_currSt,c) = evalState(lexTables,curr2,c2);

if (debug==true) then

print(" After[c" + intString(c) + ",s"+ intString(mm_currSt)+"]");

end if;

if (mm_currSt>0) then

curr2 = mm_base[mm_currSt];

// print("BASE:"+ intString(curr2)+"]");

mm_currSt = mm_nxt[curr2 + c];

// print("NEXT:"+ intString(mm_currSt)+"]");

else

mm_currSt = mm_nxt[c];

end if;

states = mm_currSt::states;

// printAny(states);

// print("[c" + intString(c) + ",s"+ intString(mm_currSt)+"]");

// print("[B:" + intString(mm_base[mm_currSt])+"]");

env2 =

ENV(mm_startSt,mm_currSt,mm_pos,mm_sPos,mm_ePos,mm_linenr,buffer,rest,states,debug,fileNm);

lToken = tokens1;

baseCond = mm_base[mm_currSt];

if (baseCond==mm_finish) then

if (debug==true) then

print("\n[RESTORE=" + intString(mm_accept[mm_currSt])

+ "]");

end if;

(env2,act) = findRule(lexTables,env2);

(otok,env2) = LexerCodeModelica.action(act,env2);

// read the env

ENV(startSt=mm_startSt,currSt=mm_currSt,pos=mm_pos,sPos=mm_sPos,ePos=mm_ePos,

linenr=mm_linenr,

buff=buffer,bkBuf=bkBuffer,stateSk=states,isDebugging=debug,fileName=fileNm) =

env2;

//restore the program

program2 = bkBuffer;


//restart current state

env2 =

ENV(mm_startSt,mm_startSt,mm_pos,mm_sPos,mm_pos,mm_linenr,buffer,{},{mm_startSt},debug,fileNm);

lToken = List.consOption(otok,tokens1);

if(debug) then

print("\n CountTokens:" +

intString(listLength(lToken)));

end if;

else

program2 = rest; // consume the character

end if;

then (lToken,program2);

end match;

end consume;

function findRule

input LexerTable lexTables;

input Env env;

output Env env2;

output Integer action;

protected

array<Integer>

mm_accept,mm_ec,mm_meta,mm_base,mm_def,mm_nxt,mm_chk,mm_acclist;

Integer mm_startSt,mm_currSt,mm_pos,mm_sPos,mm_ePos,mm_linenr;

list<Integer> buffer,bkBuffer,states;

String fileNm;

Integer lp,lp1,stCmp;

Boolean st,debug;

algorithm

LEXER_TABLE(accept=mm_accept,ec=mm_ec,meta=mm_meta,base=mm_base,

def=mm_def,nxt=mm_nxt,chk=mm_chk,acclist=mm_acclist) := lexTables;

ENV(startSt=mm_startSt,currSt=mm_currSt,pos=mm_pos,sPos=mm_sPos,ePos=mm_ePos,

linenr=mm_linenr,

buff=buffer,bkBuf=bkBuffer,stateSk=states,isDebugging=debug,fileName=fileNm)

:= env;

stCmp::_ := states;

lp := mm_accept[stCmp];

// stCmp::_ := states;

lp1 := mm_accept[stCmp+1];

st := intGt(lp,0) and intLt(lp,lp1);

// print("STATE:[" + intString(mm_currSt) + " pos:" + intString(mm_pos) + "]");

// printAny(st);

(env2,action) := match(states,st)

local

Integer act,cp;

list<Integer> restBuff,restStates;

case ({},_)

equation

act = mm_acclist[lp];

print("\nERROR:EMPTY STATE STACK");

then (env,act);

case (_,true)

equation

stCmp::_ = states;

lp = mm_accept[stCmp];

act = mm_acclist[lp];


then (env,act);

case (_,false)

equation

cp::restBuff = buffer;

bkBuffer = cp::bkBuffer;

mm_pos = mm_pos - 1;

mm_sPos = mm_sPos -1;

if (cp==10) then

mm_sPos = mm_ePos;

mm_linenr = mm_linenr-1;

end if;

//bkBuffer = cp::bkBuffer;

mm_currSt::restStates = states;

// printAny(restStates);

// print("Restore STATE:[" + intString(mm_currSt) + " pos:" + intString(mm_pos) + "]");

env2 =

ENV(mm_startSt,mm_currSt,mm_pos,mm_sPos,mm_ePos,mm_linenr,restBuff,bkBuffer,restStates,debug,fileNm);

(env2,act) = findRule(lexTables,env2);

then (env2,act);

end match;

end findRule;

function evalState

input LexerTable lexTables;

input Integer cState;

input Integer c;

output Integer new_state;

output Integer new_c;

protected

Integer cState1:=cState;

Integer c1:=c;

array<Integer>

mm_accept,mm_ec,mm_meta,mm_base,mm_def,mm_nxt,mm_chk,mm_acclist;

Integer val,val2,chk;

algorithm

LEXER_TABLE(accept=mm_accept,ec=mm_ec,meta=mm_meta,base=mm_base,

def=mm_def,nxt=mm_nxt,chk=mm_chk,acclist=mm_acclist) := lexTables;

chk := mm_base[cState1];

chk := chk + c1;

val := mm_chk[chk];

val2 := mm_base[cState1] + c1;

// print("{val2=" + intString(val2) + "}\n");

(new_state,new_c) := match (cState1==val)

local

Integer s,c2;

case (true)

then (cState1,c1);

case (false)

equation

cState1 = mm_def[cState1];

//print("[newS:" + intString(cState)+"]");

//c2 = c;

if ( cState1 >= LexTableModelica.yy_limit ) then

c1 = mm_meta[c1];

// print("META[c:" + intString(c)+"]");

end if;

if (cState1>0) then

(cState1,c1) = evalState(lexTables,cState1,c1);

end if;

then (cState1,c1);

end match;

end evalState;


function getInfo

input list<Integer> buff;

input Integer frPos;

input Integer flineNr;

input String programName;

output OMCCTypes.Info info;

protected

list<Integer> buff1:=buff;

Integer mm_linenr,mm_sPos;

Integer c;

algorithm

mm_sPos := frPos;

mm_linenr := flineNr;

while (List.isEmpty(buff1)==false) loop

c::buff1 := buff1;

if (c==10) then

mm_linenr := mm_linenr - 1;

mm_sPos := 0;

else

mm_sPos := mm_sPos - 1;

end if;

end while;

info :=

OMCCTypes.INFO(programName,false,mm_linenr,mm_sPos+1,flineNr,frPos+1,OMCCTypes.getTimeStamp());

/*if (true) then

print("\nTOKEN file:" +programName + " p(" + intString(mm_sPos) + ":" +

intString(mm_linenr) + ")-(" + intString(frPos) + ":" + intString(flineNr) +

")");

end if; */

end getInfo;

function printBuffer2

input list<Integer> inList;

input String cBuff;

output String outList;

protected

list<Integer> inList1:=inList;

String cBuff1:=cBuff;

algorithm

(outList) := match(inList,cBuff)

local

Integer c;

String new,tout;

list<Integer> rest;

case ({},_)

then (cBuff1);

else

equation

c::rest = inList1;

new = cBuff1 + intStringChar(c);

(tout) = printBuffer2(rest,new);

then (tout);

end match;

end printBuffer2;

function printBuffer

input list<Integer> inList;

output String outList;

protected

list<Integer> inList1:=inList;

Integer c;

algorithm

outList := "";

while (List.isEmpty(inList1)==false) loop

c::inList1 := inList1;

outList := outList + intStringChar(c);


end while;

end printBuffer;

function printStack

input list<Integer> inList;

input String cBuff;

output String outList;

protected

list<Integer> inList1:=inList;

String cBuff1:=cBuff;

algorithm

(outList) := match(inList,cBuff)

local

Integer c;

String new,tout;

list<Integer> rest;

case ({},_)

then (cBuff1);

else

equation

c::rest = inList1;

new = cBuff1 + "|" + intString(c);

(tout) = printStack(rest,new);

then (tout);

end match;

end printStack;

end LexerModelica;
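The evalState function in the listing above walks flex's compressed transition tables: the row for a state begins at yy_base[state], yy_chk verifies that a table slot really belongs to that state, and on a mismatch the lookup falls back to the yy_def (template) state and retries. A minimal Python sketch of that lookup follows, using a hypothetical hand-built two-state table rather than the generated LexTableModelica arrays (the yy_meta equivalence-class remapping is omitted):

```python
def eval_state(base, default, nxt, chk, state, ec):
    """Flex-style compressed DFA lookup: the row for `state` begins at
    base[state]; chk[] tells whether the slot really belongs to this state,
    otherwise fall back to the default (template) state and retry."""
    while chk[base[state] + ec] != state:
        state = default[state]
        if state == 0:          # jam: no transition anywhere in the chain
            return 0
    return nxt[base[state] + ec]

# Hypothetical two-state DFA over two character classes (index 0 unused):
# state 1 --class 1--> state 2; state 2 --class 1--> state 2;
# state 2 has no class-2 entry of its own and borrows it from state 1.
base    = [0, 0, 3]
default = [0, 0, 1]
nxt     = [0, 2, 0, 0, 2, 0]
chk     = [0, 1, 1, 0, 2, 0]
```

Here `eval_state(base, default, nxt, chk, 2, 2)` first fails the chk test for state 2, falls back to the template state 1, and resolves there, which is the same fallback chain evalState follows through mm_def.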


ParserModelica.mo

package ParserModelica "Generated by OMCC v0.9.2 (OpenModelica Compiler-Compiler) Copyright 2011 Open Source Modelica Consortium (OSMC) Fri May 4 16:09:39 2012"

import Types;

import ParseTableModelica;

import ParseCodeModelica;

import Absyn;

import Error;

uniontype Env

record ENV

OMCCTypes.Token crTk,lookAhTk;

list<Integer> state;

list<String> errMessages;

Integer errStatus,sState,cState;

list<OMCCTypes.Token> program,progBk;

ParseCodeModelica.AstStack astStack;

Boolean isDebugging;

list<Integer> stateBackup;

ParseCodeModelica.AstStack astStackBackup;

end ENV;

end Env;

uniontype ParseData

record PARSE_TABLE

array<Integer> translate;

array<Integer> prhs;

array<Integer> rhs;

array<Integer> rline;

array<String> tname;

array<Integer> toknum;

array<Integer> r1;

array<Integer> r2;

array<Integer> defact;

array<Integer> defgoto;

array<Integer> pact;

array<Integer> pgoto;

array<Integer> table;

array<Integer> check;

array<Integer> stos; // to be replaced

end PARSE_TABLE;

end ParseData;

/* When the error is positive the parser runs in recovery mode;
if the error is negative, the parser runs in testing candidate mode;
if the error is zero, then no error is present or has been recovered.
The error value decreases with each shifted token */

constant Integer maxErrShiftToken = 3;

constant Integer maxCandidateTokens = 4;

constant Integer maxErrRecShift = -5;

constant Integer ERR_TYPE_DELETE = 1;

constant Integer ERR_TYPE_INSERT = 2;

constant Integer ERR_TYPE_REPLACE = 3;

constant Integer ERR_TYPE_INSEND = 4;

constant Integer ERR_TYPE_MERGE = 5;

type AstTree = ParseCodeModelica.AstTree;

function parse "realize the syntax analysis over the list of tokens and

generates the AST tree"

input list<OMCCTypes.Token> tokens "list of tokens from the lexer";

input String fileName "file name of the source code";


input Boolean debug "flag to output debug messages that explain the states

of the machine while parsing";

output Boolean result "result of the parsing";

output ParseCodeModelica.AstTree ast "AST tree that is returned when the

result output is true";

protected

list<OMCCTypes.Token> tokens1:=tokens;

array<String> mm_tname;

array<Integer> mm_translate, mm_prhs, mm_rhs, mm_rline, mm_toknum, mm_r1,

mm_r2, mm_defact, mm_defgoto,

mm_pact, mm_pgoto, mm_table, mm_check, mm_stos;

ParseData pt;

Env env;

OMCCTypes.Token emptyTok;

ParseCodeModelica.AstStack astStk;

list<OMCCTypes.Token> rToks;

list<Integer> stateStk;

list<String> errStk;

//Boolean result;

algorithm

if (debug) then

print("\nParsing tokens ParseCodeModelica ..." + fileName + "\n");

end if;

mm_translate := listArray(ParseTableModelica.yytranslate);

mm_prhs := listArray(ParseTableModelica.yyprhs);

mm_rhs := listArray(ParseTableModelica.yyrhs);

mm_rline := listArray(ParseTableModelica.yyrline);

mm_tname := listArray(ParseTableModelica.yytname);

mm_toknum := listArray(ParseTableModelica.yytoknum);

mm_r1 := listArray(ParseTableModelica.yyr1);

mm_r2 := listArray(ParseTableModelica.yyr2);

mm_defact := listArray(ParseTableModelica.yydefact);

mm_defgoto := listArray(ParseTableModelica.yydefgoto);

mm_pact := listArray(ParseTableModelica.yypact);

mm_pgoto := listArray(ParseTableModelica.yypgoto);

mm_table := listArray(ParseTableModelica.yytable);

mm_check := listArray(ParseTableModelica.yycheck);

mm_stos := listArray(ParseTableModelica.yystos);

pt :=

PARSE_TABLE(mm_translate,mm_prhs,mm_rhs,mm_rline,mm_tname,mm_toknum,mm_r1,mm_r2,mm_defact,mm_defgoto,mm_pact,mm_pgoto,mm_table,mm_check,mm_stos);

stateStk := {0};

errStk := {};

astStk := ParseCodeModelica.initAstStack(astStk);

env :=

ENV(emptyTok,emptyTok,stateStk,errStk,0,0,0,tokens,{},astStk,debug,stateStk,astStk);

while (List.isEmpty(tokens1)==false) loop

if (debug) then

print("\nTokens remaining:");

print(intString(listLength(tokens1)));

end if;

// printAny("\nTokens remaining:");

// printAny(intString(listLength(tokens)));

(tokens1,env,result,ast) := processToken(tokens1,env,pt);

if (result==false) then

break;

end if;

end while;

if (debug) then

printAny(ast);

end if;


/*if (result==true) then

print("\n SUCCEED - (AST)");

else

print("\n FAILED PARSING");

end if;*/

end parse;

function addSourceMessage

input list<String> errStk;

input OMCCTypes.Info info;

algorithm

Error.addSourceMessage(Error.SYNTAX_ERROR,errStk,info);

//print(printSemStack(listReverse(errStk),""));

end addSourceMessage;

function printErrorMessages

input list<String> errStk;

algorithm

// print("\n ***ERROR(S) FOUND*** ");

// print(printSemStack(listReverse(errStk),""));

end printErrorMessages;

function processToken

input list<OMCCTypes.Token> tokens;

input Env env;

input ParseData pt;

output list<OMCCTypes.Token> rTokens;

output Env env2;

output Boolean result;

output ParseCodeModelica.AstTree ast;

protected

list<OMCCTypes.Token> tokens1:=tokens;

list<OMCCTypes.Token> tempTokens;

// parse tables

array<String> mm_tname;

array<Integer> mm_translate, mm_prhs, mm_rhs, mm_rline, mm_toknum, mm_r1,

mm_r2, mm_defact, mm_defgoto,

mm_pact, mm_pgoto, mm_table, mm_check, mm_stos;

// env variables

OMCCTypes.Token cTok,nTk;

ParseCodeModelica.AstStack astStk,astSkBk;

Boolean debug;

list<Integer> stateStk,stateSkBk;

list<String> errStk;

String astTmp;

Integer sSt,cSt,lSt,errSt,cFinal,cPactNinf,cTableNinf;

list<OMCCTypes.Token> prog,prgBk;

algorithm

PARSE_TABLE(translate=mm_translate,prhs=mm_prhs,rhs=mm_rhs,rline=mm_rline,tname=mm_tname,toknum=mm_toknum,r1=mm_r1,r2=mm_r2,defact=mm_defact,defgoto=mm_defgoto,pact=mm_pact,pgoto=mm_pgoto,table=mm_table,check=mm_check,stos=mm_stos) := pt;

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) := env;

if (debug) then

print("\n[State:" + intString(cSt) +"]{" + printStack(stateStk,"") +

"}\n");

end if;

env2 := env;

// Start the LALR(1) Parsing

cFinal := ParseTableModelica.YYFINAL;


cPactNinf := ParseTableModelica.YYPACT_NINF;

cTableNinf := ParseTableModelica.YYTABLE_NINF;

prog := tokens1;

// cFinal==cSt is a final state? then ACCEPT

// mm_pact[cSt]==cPactNinf if this REDUCE or ERROR

result := true;

(rTokens,result) :=

matchcontinue(tokens,env,pt,cFinal==cSt,mm_pact[cSt+1]==cPactNinf)

local

list<OMCCTypes.Token> rest;

list<Integer> vl;

OMCCTypes.Token c,nt;

Integer n,len,val,tok,tmTok,chkVal;

String nm,semVal;

Absyn.Ident idVal;

case ({},_,_,false,false)

equation

if (debug) then

print("\nNow at end of input:\n");

end if;

n = mm_pact[cSt+1];

rest = {};

if (debug) then

print("[n:" + intString(n) + "]");

end if;

if (n < 0 or ParseTableModelica.YYLAST < n or mm_check[n+1] <> 0)

then

//goto yydefault;

n = mm_defact[cSt+1];

if (n==0) then

// Error Handler

if (debug) then

print("\n Syntax Error found yyerrlab5:" + intString(errSt));

//printAny("\n Syntax Error found yyerrlab5:" + intString(errSt));

end if;

if (errSt>=0) then

(env2,semVal,result) = errorHandler(cTok,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

else

result=false;

end if;

end if;

if (debug) then

print(" REDUCE4");

end if;

env2=reduce(n,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

else

n = mm_table[n+1];

if (n<=0) then

if (n==0 or n==cTableNinf) then

// Error Handler

if (debug) then

print("\n Syntax Error found yyerrlab4:" +

intString(n));

end if;

if (errSt>=0) then

(env2,semVal,result) = errorHandler(cTok,env,pt);


else

result = false;

end if;

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

end if;

n = -n;

if (debug) then

print(" REDUCE5");

end if;

env2=reduce(n,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

else

if (debug) then

print(" SHIFT");

end if;

if (errSt<0) then // reduce the shift error lookup

if (debug) then

print("\n***-RECOVERY TOKEN INSERTED IS SHIFTED-***");

end if;

errSt = maxErrRecShift;

end if;

cSt = n;

stateStk = cSt::stateStk;

env2 =

ENV(c,nt,stateStk,errStk,errSt,sSt,cSt,rest,rest,astStk,debug,stateSkBk,astSkBk);

end if;

end if;

if (result==true and errSt>maxErrRecShift) then //stops when it finds an error

if (debug) then

print("\nReprocessing at the END");

end if;

(rest,env2,result,ast) = processToken(rest,env2,pt);

end if;

then ({},result);

case (_,_,_,true,_)

equation

if (debug) then

print("\n\n***************-ACCEPTED-***************\n");

end if;

result = true;

if (List.isEmpty(errStk)==false) then

printErrorMessages(errStk);

result = false;

end if;

ast = ParseCodeModelica.getAST(astStk);

then ({},result);

case (_,_,_,false,true)

equation

n = mm_defact[cSt+1];

if (n == 0) then

// Error Handler

if (debug) then

print("\n Syntax Error found yyerrlab3:" + intString(n));

end if;

if (errSt>=0) then

(env2,semVal,result) = errorHandler(cTok,env,pt);


ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

else

result = false;

end if;

end if;

// reduce;

if (debug) then

print("REDUCE3");

end if;

env2=reduce(n,env,pt);

if (result==true) then //stops when it finds an error

(rest,env2,result,ast) = processToken(tokens,env2,pt);

end if;

then (rest,result);

case (_,_,_,false,false)

equation

/* Do appropriate processing given the current state. Read a

lookahead token if we need one and don't already have one. */

c::rest = tokens1;

cTok = c;

OMCCTypes.TOKEN(id=tmTok,name=nm,value=vl) = c;

semVal = printBuffer(vl);

if (debug) then

print("[" + nm + ",'" + semVal +"']");

end if;

tok = translate(tmTok,pt);

/* First try to decide what to do without reference to lookahead

token. */

n = mm_pact[cSt+1];

if (debug) then

print("[n:" + intString(n) + "-");

end if;

n = n + tok;

if (debug) then

print("NT:" + intString(n) + "]");

end if;

chkVal = n+1;

if (chkVal<=0) then

chkVal = 1;

end if;

if (n < 0 or ParseTableModelica.YYLAST < n or mm_check[chkVal] <>

tok) then

//goto yydefault;

n = mm_defact[cSt+1];

if (n==0) then

// Error Handler

if (debug) then

print("\n Syntax Error found yyerrlab2:" +

intString(n));

end if;

if (errSt>=0) then

(env2,semVal,result) = errorHandler(cTok,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

else


errSt = maxErrRecShift;

result = false;

end if;

else

if (debug) then

print(" REDUCE2");

end if;

env2=reduce(n,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

rest = tokens1;

end if;

else

// try to get the value for the action in the table array

n = mm_table[n+1];

if (n<=0) then

//

if (n==0 or n==cTableNinf) then

// Error Handler

if (debug) then

print("\n Syntax Error found yyerrlab:" + intString(n));

end if;

if (errSt>=0) then

(env2,semVal,result) = errorHandler(cTok,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

else

result = false;

errSt = maxErrRecShift;

end if;

else

n = -n;

if (debug) then

print(" REDUCE");

end if;

env2=reduce(n,env,pt);

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,errMessages=errStk,errStatus=errSt,sState=sSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) = env2;

rest = tokens1;

end if;

else

if (debug) then

print(" SHIFT1");

end if;

cSt = n;

stateStk = cSt::stateStk;

idVal = semVal;

(astStk) = ParseCodeModelica.push(astStk,idVal,cTok);

astSkBk = astStk;

stateSkBk = stateStk;

if (errSt<>0) then // reduce the shift error lookup

errSt = errSt - 1;

end if;

env2 = ENV(c,nt,stateStk,errStk,errSt,sSt,cSt,rest,rest,astStk,debug,stateSkBk,astSkBk);

end if;

end if;

if (errSt<>0 or listLength(rest)==0) then


if ((result==true) and (errSt>maxErrRecShift)) then // stops when it finds an error

(rest,env2,result,ast) = processToken(rest,env2,pt);

end if;

end if;

then (rest,result);

end matchcontinue;

// return the AST

end processToken;

function errorHandler

input OMCCTypes.Token currTok;

input Env env;

input ParseData pt;

output Env env2;

output String errorMsg;

output Boolean result;

// env variables

protected

OMCCTypes.Token cTok,nTk;

ParseCodeModelica.AstStack astStk,astSkBk;

Boolean debug;

Integer sSt,cSt,errSt;

list<OMCCTypes.Token> prog,prgBk;

list<Integer> stateStk,stateSkBk;

list<String> errStk;

// parse tables

array<String> mm_tname;

array<Integer> mm_translate, mm_prhs, mm_rhs, mm_rline, mm_toknum, mm_r1,

mm_r2, mm_defact, mm_defgoto,

mm_pact, mm_pgoto, mm_table, mm_check, mm_stos;

list<String> redStk;

Integer numTokens;

String msg,semVal;

algorithm

PARSE_TABLE(translate=mm_translate,prhs=mm_prhs,rhs=mm_rhs,rline=mm_rline,tname=mm_tname,toknum=mm_toknum,r1=mm_r1,r2=mm_r2,defact=mm_defact,defgoto=mm_defgoto,pact=mm_pact,pgoto=mm_pgoto,table=mm_table,check=mm_check,stos=mm_stos) := pt;

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,sState=sSt,errMessages=errStk,errStatus=errSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) := env;

if (debug) then

print("\nERROR RECOVERY INITIATED:");

print("\n[State:" + intString(cSt) + "]{" + printStack(stateStk,"") + "}\n");

print("\n[StateStack Backup:{" + printStack(stateSkBk,"") + "}\n");

end if;

semVal := OMCCTypes.printToken(currTok);

(errorMsg,result) := matchcontinue(errSt==0,prog)

local

String erMsg,name;

list<String> candidates;

list<OMCCTypes.Token> rest;

Integer i,idTok;

OMCCTypes.Info info;

case (true,{}) //start error catching

equation


erMsg = OMCCTypes.printErrorToken(currTok);

// insert token

if (debug) then

print("\n Checking INSERT at the END token:");

//printAny("\n Checking INSERT at the END token:");

end if;

candidates = {};

candidates = checkCandidates(candidates,env,pt,3);

if (List.isEmpty(candidates)==false) then

erMsg = erMsg + ", INSERT at the End token " + printCandidateTokens(candidates,"");

end if;

errStk = erMsg::errStk;

OMCCTypes.TOKEN(loc=info) = currTok;

addSourceMessage(errStk,info);

printErrorMessages(errStk);

errSt = maxErrShiftToken;

then (erMsg,false); //end error catching

case (true,_) //start error catching

equation

//OMCCTypes.TOKEN(id=idTok) = currTok;

erMsg = OMCCTypes.printErrorToken(currTok);

if (debug) then

print("\n Check MERGE token until next token");

end if;

nTk::_ = prog;

OMCCTypes.TOKEN(id=idTok) = nTk;

if (checkToken(idTok,env,pt,5)==true) then

_::nTk::_ = prog;

erMsg = erMsg + ", MERGE tokens " + OMCCTypes.printShortToken(currTok) + " and " + OMCCTypes.printShortToken(nTk);

end if;

// insert token

if (debug) then

print("\n Checking INSERT token:");

end if;

candidates = {};

candidates = checkCandidates(candidates,env,pt,2);

if (List.isEmpty(candidates)==false) then

erMsg = erMsg + ", INSERT token " + printCandidateTokens(candidates,"");

//errStk = erMsg::errStk;

end if;

errSt = maxErrShiftToken;

// replace token

// erMsg = "Syntax Error near " + semVal;

if (debug) then

print("\n Checking REPLACE token:");

end if;

candidates = {};

candidates = checkCandidates(candidates,env,pt,3);

if (List.isEmpty(candidates)==false) then

erMsg = erMsg + ", REPLACE token with " + printCandidateTokens(candidates,"");

//errStk = erMsg::errStk;

end if;

errSt = maxErrShiftToken;

// try to suppress the token


// erMsg = "Syntax Error near " + semVal;

if (debug) then

print("\n Check ERASE token until next token");

end if;

nTk::_ = prog;

OMCCTypes.TOKEN(id=idTok) = nTk;

if (checkToken(idTok,env,pt,1)==true) then

erMsg = erMsg + ", ERASE token";

//errStk = erMsg::errStk;

end if;

//printAny(errStk);

if (List.isEmpty(errStk)==true) then

errStk = erMsg::{};

else

errStk = erMsg::errStk;

end if;

OMCCTypes.TOKEN(loc=info) = currTok;

addSourceMessage(errStk,info);

errSt = maxErrShiftToken;

then (erMsg,true); //end error catching

case (false,_) // add one more error

equation

printErrorMessages(errStk);

erMsg = OMCCTypes.printErrorToken(currTok);

then (erMsg,false);

end matchcontinue;

if (debug==true) then

print("\nERROR NUM:" + intString(errSt) +" DETECTED:\n" + errorMsg);

end if;

env2 := ENV(cTok,nTk,stateStk,errStk,errSt,sSt,cSt,prog,prgBk,astStk,debug,stateSkBk,astSkBk);

//env2 := env;

end errorHandler;

function checkCandidates

input list<String> candidates;

input Env env;

input ParseData pt;

input Integer action;

output list<String> resCandidates;

protected

list<String> candidates1:=candidates;

Integer n;

// env variables

OMCCTypes.Token cTok,nTk;

ParseCodeModelica.AstStack astStk,astSkBk;

Boolean debug;

Integer sSt,cSt,errSt;

list<OMCCTypes.Token> prog,prgBk;

list<Integer> stateStk,stateSkBk;

list<String> errStk;

// parse tables

array<String> mm_tname;

array<Integer> mm_translate, mm_prhs, mm_rhs, mm_rline, mm_toknum, mm_r1,

mm_r2, mm_defact, mm_defgoto,

mm_pact, mm_pgoto, mm_table, mm_check, mm_stos;

Integer numTokens,i,j=1;

String name,tokVal;

algorithm

PARSE_TABLE(tname=mm_tname) := pt;

resCandidates := candidates1;

numTokens := 255 + ParseTableModelica.YYNTOKENS - 1;

// exhaustive search over the tokens

for i in 258:numTokens loop

if (checkToken(i,env,pt,action)==true) then


//name := mm_tname[i-255];

if (j<=maxCandidateTokens) then

tokVal := getTokenSemValue(i-255,pt);

resCandidates := tokVal::resCandidates;

j := j+1;

else

i := numTokens+1;

end if;

end if;

end for;

end checkCandidates;

function checkToken

input Integer chkTok;

input Env env;

input ParseData pt;

input Integer action; // 1 delete 2 insert 3 replace

output Boolean result;

protected

Integer n;

// env variables

OMCCTypes.Token cTok,nTk;

ParseCodeModelica.AstStack astStk,astSkBk;

Boolean debug;

Integer sSt,cSt,errSt;

list<OMCCTypes.Token> prog,prgBk;

list<Integer> stateStk,stateSkBk;

list<String> errStk;

// parse tables

array<String> mm_tname;

array<Integer> mm_translate, mm_prhs, mm_rhs, mm_rline, mm_toknum, mm_r1,

mm_r2, mm_defact, mm_defgoto,

mm_pact, mm_pgoto, mm_table, mm_check, mm_stos;

Integer chk2;

Env env2;

OMCCTypes.Info info;

OMCCTypes.Token candTok;

algorithm

PARSE_TABLE(translate=mm_translate,prhs=mm_prhs,rhs=mm_rhs,rline=mm_rline,tname=mm_tname,toknum=mm_toknum,r1=mm_r1,r2=mm_r2,defact=mm_defact,defgoto=mm_defgoto,pact=mm_pact,pgoto=mm_pgoto,table=mm_table,check=mm_check,stos=mm_stos) := pt;

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,sState=sSt,errMessages=errStk,errStatus=errSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) := env;

if (debug) then

print("\n **** Checking TOKEN: " + intString(chkTok) + " action:" + intString(action));

//printAny("\n **** Checking TOKEN: " + intString(chkTok) + " action:" + intString(action));

end if;

// restore backup configuration and run the machine again to check candidate

if (List.isEmpty(prog)==false) then

cTok::prog := prog;

if (debug) then

print("\n **** Last token: " + OMCCTypes.printToken(cTok));

end if;

info := OMCCTypes.INFO("",false,1,1,1,1,OMCCTypes.getTimeStamp()); // fake position

candTok := OMCCTypes.TOKEN(mm_tname[chkTok-255],chkTok,{65},info);


else

if (debug) then

print("\n Creating Fake Token position");

end if;

info := OMCCTypes.INFO("",false,1,1,1,1,OMCCTypes.getTimeStamp()); // fake position

candTok := OMCCTypes.TOKEN(mm_tname[chkTok-255],chkTok,{65},info);

end if;

if (debug) then

print("\n **** Process candidate token: " + OMCCTypes.printToken(candTok) + " action: " + intString(action));

end if;

(prog) := matchcontinue(action)

local

list<Integer> value;

list<OMCCTypes.Token> lstTokens;

case (5) // Merge

equation

if (List.isEmpty(prog)==false) then

candTok::prog = prog;

value = OMCCTypes.getMergeTokenValue(cTok,candTok);

lstTokens = LexerModelica.lex("fileName",value,debug);

candTok::_ = lstTokens;

prog = candTok::prog;

end if;

then (prog);

case (2) // Insert

equation

prog = candTok::cTok::prog;

then (prog);

case (3) // replace

equation

prog = candTok::prog;

then (prog);

else then (prog);

end matchcontinue;

cSt::_ := stateSkBk;

errStk := {}; //reset errors

errSt := -1; // no errors reset

// backup the env variables to the last shifted token

env2 := ENV(cTok,nTk,stateSkBk,errStk,errSt,sSt,cSt,prog,prgBk,astSkBk,debug,stateSkBk,astSkBk);

//printAny(env2);

result := false;

if (debug) then

//print("\n\n*****ProcessTOKENS:" + OMCCTypes.printTokens(prog,"") + " check" + intString(chkTok));

end if;

//print("\n[State="+ intString(cSt) + " Stack Backup:{" + printStack(stateSkBk,"") + "}]\n");

//print("\n[StateStack Backup:{" + printStack(stateSkBk,"") + "}\n");

(_,_,result,_) := processToken(prog,env2,pt);

if (result and debug) then

print("\n **** Candidate TOKEN ADDED: " + intString(chkTok));

end if;

end checkToken;

function reduce

input Integer rule;

input Env env;


input ParseData pt;

output Env env2;

protected

// parse tables

array<String> mm_tname;

array<Integer> mm_translate, mm_prhs, mm_rhs, mm_rline, mm_toknum, mm_r1,

mm_r2, mm_defact, mm_defgoto,

mm_pact, mm_pgoto, mm_table, mm_check, mm_stos;

// env variables

OMCCTypes.Token cTok,nTk;

ParseCodeModelica.AstStack astStk;

ParseCodeModelica.AstStack astSkBk;

Boolean debug,error;

list<Integer> stateStk,stateSkBk;

list<String> errStk,redStk;

String astTmp,semVal,errMsg;

Integer errSt,sSt,cSt;

list<OMCCTypes.Token> prog,prgBk;

Integer i,len,val,n, nSt,chkVal;

algorithm

PARSE_TABLE(translate=mm_translate,prhs=mm_prhs,rhs=mm_rhs,rline=mm_rline,tname=mm_tname,toknum=mm_toknum,r1=mm_r1,r2=mm_r2,defact=mm_defact,defgoto=mm_defgoto,pact=mm_pact,pgoto=mm_pgoto,table=mm_table,check=mm_check,stos=mm_stos) := pt;

ENV(crTk=cTok,lookAhTk=nTk,state=stateStk,sState=sSt,errMessages=errStk,errStatus=errSt,cState=cSt,program=prog,progBk=prgBk,astStack=astStk,isDebugging=debug,stateBackup=stateSkBk,astStackBackup=astSkBk) := env;

if rule > 0 then

len := mm_r2[rule];

if (debug) then

print("[Reducing(l:" + intString(len) + ",r:" + intString(rule) + ")]");

end if;

redStk := {};

for i in 1:len loop

val::stateStk := stateStk;

end for;

if (errSt>=0) then

(astStk,error,errMsg) := ParseCodeModelica.actionRed(rule,astStk,mm_r2);

end if;

if (error) then

errStk := errMsg::errStk;

errSt := maxErrShiftToken;

end if;

cSt::_ := stateStk;

n := mm_r1[rule];

nSt := mm_pgoto[n - ParseTableModelica.YYNTOKENS + 1];

nSt := nSt + cSt;

chkVal := nSt +1;

if (chkVal<=0) then

chkVal := 1;

end if;

if ( (nSt >= 0) and (nSt <= ParseTableModelica.YYLAST) and (mm_check[chkVal] == cSt) ) then

cSt := mm_table[nSt+1];

else

cSt := mm_defgoto[n - ParseTableModelica.YYNTOKENS+1];

end if;

if (debug) then

print("[nState:" + intString(cSt) + "]");


end if;

stateStk := cSt::stateStk;

end if;

env2 := ENV(cTok,nTk,stateStk,errStk,errSt,sSt,cSt,prog,prgBk,astStk,debug,stateSkBk,astSkBk);

end reduce;

function translate

input Integer tok1;

input ParseData pt;

output Integer tok2;

protected

ParseData pt1:=pt;

array<Integer> mm_translate;

Integer maxT,uTok;

algorithm

PARSE_TABLE(translate=mm_translate) := pt1;

maxT := ParseTableModelica.YYMAXUTOK;

uTok := ParseTableModelica.YYUNDEFTOK;

(tok2) := matchcontinue(tok1<=maxT)

local

Integer res;

case (true)

equation

res = mm_translate[tok1];

//print("\nTRANSLATE TO:" + intString(res));

then (res);

case (false)

then (uTok);

end matchcontinue;

end translate;

function getTokenSemValue "retrieves semantic value from token id"

input Integer tokenId;

input ParseData pt;

output String tokenSemValue "returns semantic value of the token";

protected

array<String> values;

algorithm

if (List.isEmpty(ParseCodeModelica.lstSemValue)==true) then

PARSE_TABLE(tname=values) := pt;

else

values := listArray(ParseCodeModelica.lstSemValue);

end if;

tokenSemValue := "'" + values[tokenId] + "'";

end getTokenSemValue;

function printBuffer

input list<Integer> inList;

output String outList;

protected

list<Integer> inList1:=inList;

Integer c;

algorithm

outList := "";

while (List.isEmpty(inList1)==false) loop

c::inList1 := inList1;

outList := outList + intStringChar(c);

end while;

end printBuffer;

function printSemStack

input list<String> inList;

input String cBuff;


output String outList;

protected

list<String> inList1:=inList;

algorithm

(outList) := matchcontinue(inList,cBuff)

local

String c;

String new,tout;

list<String> rest;

case ({},_)

then (cBuff);

else

equation

c::rest = inList1;

new = cBuff + "\n" + c;

(tout) = printSemStack(rest,new);

then (tout);

end matchcontinue;

end printSemStack;

function printCandidateTokens

input list<String> inList;

input String cBuff;

output String outList;

protected

list<String> inList1:=inList;

String cBuff1:=cBuff;

algorithm

(outList) := matchcontinue(inList,cBuff)

local

String c;

String new,tout;

list<String> rest;

case ({},_)

equation

cBuff1 = System.substring(cBuff1,1,stringLength(cBuff1)-4);

then (cBuff1);

else

equation

c::rest = inList1;

new = cBuff1 + c + " or ";

(tout) = printCandidateTokens(rest,new);

then (tout);

end matchcontinue;

end printCandidateTokens;

function printStack

input list<Integer> inList;

input String cBuff;

output String outList;

protected

list<Integer> inList1:=inList;

algorithm

(outList) := matchcontinue(inList,cBuff)

local

Integer c;

String new,tout;

list<Integer> rest;

case ({},_)

then (cBuff);

else

equation

c::rest = inList1;

new = cBuff + "|" + intString(c);

(tout) = printStack(rest,new);

then (tout);

end matchcontinue;

end printStack;

end ParserModelica;
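The error-recovery strategy in errorHandler, checkCandidates and checkToken above restores the backed-up parser state and re-runs the parser on a modified token stream to test each candidate repair (INSERT, REPLACE, ERASE). The following is a hypothetical stand-alone sketch of that idea in Python, not OMCCp code: parses, suggest_repairs and the toy token vocabulary are illustrative inventions, and the real implementation re-runs the LALR machine from a state backup rather than reparsing from scratch.

```python
def parses(tokens):
    """Toy stand-in for processToken: accept only 'id = num ;' statements."""
    return tuple(tokens) == ("id", "=", "num", ";")

def suggest_repairs(tokens, vocabulary):
    """Try INSERT, REPLACE and ERASE at every position, keeping each
    candidate repair that lets the (toy) parser succeed."""
    repairs = []
    for i in range(len(tokens) + 1):
        for cand in vocabulary:                        # INSERT cand at position i
            if parses(tokens[:i] + [cand] + tokens[i:]):
                repairs.append(("INSERT", i, cand))
        if i < len(tokens):
            for cand in vocabulary:                    # REPLACE token i with cand
                if parses(tokens[:i] + [cand] + tokens[i + 1:]):
                    repairs.append(("REPLACE", i, cand))
            if parses(tokens[:i] + tokens[i + 1:]):    # ERASE token i
                repairs.append(("ERASE", i, None))
    return repairs

broken = ["id", "=", ";"]                              # missing the value token
print(suggest_repairs(broken, ["id", "=", "num", ";"]))
# prints [('INSERT', 2, 'num')]
```

As in checkCandidates, the search is exhaustive over the token vocabulary, so the real implementation caps the number of reported candidates (maxCandidateTokens) to keep error messages short.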


Appendix B

Sample input and output

Input

class test1

Integer c;

Real a;

parameter Real z=3.0;

Boolean x;

String t="test";

constant Real pi=3.14;

constant Integer x=1;

end test1;

Output

// Parsing Modelica with file ../../testsuite/omcc_test/Test1.mo

//

// starting lexer
// [TOKEN:CLASS 'class' (5:2-5:7)][TOKEN:IDENT 'test1' (5:8-5:13)]
// [TOKEN:IDENT 'Integer' (6:2-6:9)][TOKEN:IDENT 'c' (6:10-6:11)][TOKEN:SEMICOLON ';' (6:11-6:12)]
// [TOKEN:IDENT 'Real' (7:2-7:6)][TOKEN:IDENT 'a' (7:7-7:8)][TOKEN:SEMICOLON ';' (7:8-7:9)]
// [TOKEN:PARAMETER 'parameter' (8:2-8:11)][TOKEN:IDENT 'Real' (8:12-8:16)][TOKEN:IDENT 'z' (8:17-8:18)][TOKEN:EQUALS '=' (8:18-8:19)][TOKEN:UNSIGNED_REAL '3.0' (8:19-8:22)][TOKEN:SEMICOLON ';' (8:22-8:23)]
// [TOKEN:IDENT 'Boolean' (9:2-9:9)][TOKEN:IDENT 'x' (9:10-9:11)][TOKEN:SEMICOLON ';' (9:11-9:12)]
// [TOKEN:IDENT 'String' (10:2-10:8)][TOKEN:IDENT 't' (10:9-10:10)][TOKEN:EQUALS '=' (10:10-10:11)][TOKEN:STRING '"test"' (10:11-10:17)][TOKEN:SEMICOLON ';' (10:17-10:18)]
// [TOKEN:CONSTANT 'constant' (11:2-11:10)][TOKEN:IDENT 'Real' (11:11-11:15)][TOKEN:IDENT 'pi' (11:16-11:18)][TOKEN:EQUALS '=' (11:18-11:19)][TOKEN:UNSIGNED_REAL '3.14' (11:19-11:23)][TOKEN:SEMICOLON ';' (11:23-11:24)]
// [TOKEN:CONSTANT 'constant' (12:2-12:10)][TOKEN:IDENT 'Integer' (12:11-12:18)][TOKEN:IDENT 'x' (12:19-12:20)][TOKEN:EQUALS '=' (12:20-12:21)][TOKEN:IDENT '1' (12:21-12:22)][TOKEN:SEMICOLON ';' (12:22-12:23)]
// [TOKEN:ENDCLASS 'end test1' (13:2-13:11)][TOKEN:SEMICOLON ';' (13:11-13:12)]

// Tokens processed:36

// starting parser

// Parsing tokens ParseCodeModelica ...../../testsuite/omcc_test/Test1.mo

//

// Tokens remaining:36

// [State:0]{|0}

// [CLASS,'class'][n:1368-NT:1375] SHIFT1

// Tokens remaining:35

// [State:2]{|2|0}

// REDUCE3[Reducing(l:1,r:24)][nState:23]

// [State:23]{|23|0}

// [IDENT,'test1'][n:110-NT:191] SHIFT1

// Tokens remaining:34

// [State:38]{|38|23|0}

// [IDENT,'Integer'][n:818-NT:899] SHIFT1

// Tokens remaining:33

// [State:31]{|31|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]


// [State:33]{|33|38|23|0}

// [IDENT,'c'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:33

// [State:98]{|98|38|23|0}

// [IDENT,'c'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:33

// [State:216]{|216|98|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:97]

// [State:97]{|97|38|23|0}

// [IDENT,'c'][n:197-NT:278] SHIFT1

// Tokens remaining:32

// [State:31]{|31|97|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|97|38|23|0}

// [SEMICOLON,';'][n:339-NT:413] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:32

// [State:325]{|325|212|97|38|23|0}

// [SEMICOLON,';'][n:265-NT:339] REDUCE2[Reducing(l:2,r:153)][nState:211]

// Tokens remaining:32

// [State:211]{|211|97|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:32

// [State:324]{|324|211|97|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]

// [State:210]{|210|97|38|23|0}

// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:209]

// Tokens remaining:32

// [State:209]{|209|97|38|23|0}

// REDUCE3[Reducing(l:2,r:201)][nState:93]

// [State:93]{|93|38|23|0}

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]

// [State:82]{|82|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:31

// [State:199]{|199|82|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|38|23|0}

// [IDENT,'Real'][n:1020-NT:1101] SHIFT1

// Tokens remaining:30

// [State:31]{|31|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|81|38|23|0}

// [IDENT,'a'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:30

// [State:98]{|98|81|38|23|0}

// [IDENT,'a'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:30

// [State:216]{|216|98|81|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:97]

// [State:97]{|97|81|38|23|0}

// [IDENT,'a'][n:197-NT:278] SHIFT1

// Tokens remaining:29

// [State:31]{|31|97|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|97|81|38|23|0}

// [SEMICOLON,';'][n:339-NT:413] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:29

// [State:325]{|325|212|97|81|38|23|0}

// [SEMICOLON,';'][n:265-NT:339] REDUCE2[Reducing(l:2,r:153)][nState:211]

// Tokens remaining:29

// [State:211]{|211|97|81|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:29

// [State:324]{|324|211|97|81|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]

// [State:210]{|210|97|81|38|23|0}


// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:209]

// Tokens remaining:29

// [State:209]{|209|97|81|38|23|0}

// REDUCE3[Reducing(l:2,r:201)][nState:93]

// [State:93]{|93|81|38|23|0}

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|81|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]

// [State:82]{|82|81|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:28

// [State:199]{|199|82|81|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|81|38|23|0}

// [PARAMETER,'parameter'][n:1020-NT:1066] SHIFT1

// Tokens remaining:27

// [State:62]{|62|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:209)][nState:95]

// [State:95]{|95|81|81|38|23|0}

// [IDENT,'Real'][n:238-NT:319] REDUCE2[Reducing(l:1,r:203)][nState:94]

// Tokens remaining:27

// [State:94]{|94|81|81|38|23|0}

// [IDENT,'Real'][n:30-NT:111] SHIFT1

// Tokens remaining:26

// [State:31]{|31|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|94|81|81|38|23|0}

// [IDENT,'z'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:26

// [State:98]{|98|94|81|81|38|23|0}

// [IDENT,'z'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:26

// [State:216]{|216|98|94|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:207]

// [State:207]{|207|94|81|81|38|23|0}

// [IDENT,'z'][n:197-NT:278] SHIFT1

// Tokens remaining:25

// [State:31]{|31|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|207|94|81|81|38|23|0}

// [EQUALS,'='][n:339-NT:409] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:25

// [State:325]{|325|212|207|94|81|81|38|23|0}

// [EQUALS,'='][n:265-NT:335] SHIFT1

// Tokens remaining:24

// [State:411]{|411|325|212|207|94|81|81|38|23|0}

// [UNSIGNED_REAL,'3.0'][n:1602-NT:1659] SHIFT1

// Tokens remaining:23

// [State:112]{|112|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:294)][nState:145]

// [State:145]{|145|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:279)][nState:144]

// [State:144]{|144|411|325|212|207|94|81|81|38|23|0}

// [SEMICOLON,';'][n:222-NT:296] REDUCE2[Reducing(l:1,r:277)][nState:143]

// Tokens remaining:23

// [State:143]{|143|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:275)][nState:142]

// [State:142]{|142|411|325|212|207|94|81|81|38|23|0}

// [SEMICOLON,';'][n:187-NT:261] REDUCE2[Reducing(l:1,r:272)][nState:141]

// Tokens remaining:23

// [State:141]{|141|411|325|212|207|94|81|81|38|23|0}

// [SEMICOLON,';'][n:615-NT:689] REDUCE2[Reducing(l:1,r:270)][nState:140]

// Tokens remaining:23

// [State:140]{|140|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:268)][nState:139]

// [State:139]{|139|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:266)][nState:138]

// [State:138]{|138|411|325|212|207|94|81|81|38|23|0}


// [SEMICOLON,';'][n:365-NT:439] REDUCE2[Reducing(l:1,r:264)][nState:137]

// Tokens remaining:23

// [State:137]{|137|411|325|212|207|94|81|81|38|23|0}

// [SEMICOLON,';'][n:1-NT:75] REDUCE2[Reducing(l:1,r:256)][nState:169]

// Tokens remaining:23

// [State:169]{|169|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:235)][nState:470]

// [State:470]{|470|411|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:154)][nState:413]

// [State:413]{|413|325|212|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:152)][nState:211]

// [State:211]{|211|207|94|81|81|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:23

// [State:324]{|324|211|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]

// [State:210]{|210|207|94|81|81|38|23|0}

// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:320]

// Tokens remaining:23

// [State:320]{|320|207|94|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:200)][nState:93]

// [State:93]{|93|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]

// [State:82]{|82|81|81|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:22

// [State:199]{|199|82|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|81|81|38|23|0}

// [IDENT,'Boolean'][n:1020-NT:1101] SHIFT1

// Tokens remaining:21

// [State:31]{|31|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|81|81|81|38|23|0}

// [IDENT,'x'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:21

// [State:98]{|98|81|81|81|38|23|0}

// [IDENT,'x'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:21

// [State:216]{|216|98|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:97]

// [State:97]{|97|81|81|81|38|23|0}

// [IDENT,'x'][n:197-NT:278] SHIFT1

// Tokens remaining:20

// [State:31]{|31|97|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|97|81|81|81|38|23|0}

// [SEMICOLON,';'][n:339-NT:413] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:20

// [State:325]{|325|212|97|81|81|81|38|23|0}

// [SEMICOLON,';'][n:265-NT:339] REDUCE2[Reducing(l:2,r:153)][nState:211]

// Tokens remaining:20

// [State:211]{|211|97|81|81|81|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:20

// [State:324]{|324|211|97|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]

// [State:210]{|210|97|81|81|81|38|23|0}

// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:209]

// Tokens remaining:20

// [State:209]{|209|97|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:201)][nState:93]

// [State:93]{|93|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]


// [State:82]{|82|81|81|81|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:19

// [State:199]{|199|82|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|81|81|81|38|23|0}

// [IDENT,'String'][n:1020-NT:1101] SHIFT1

// Tokens remaining:18

// [State:31]{|31|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|81|81|81|81|38|23|0}

// [IDENT,'t'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:18

// [State:98]{|98|81|81|81|81|38|23|0}

// [IDENT,'t'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:18

// [State:216]{|216|98|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:97]

// [State:97]{|97|81|81|81|81|38|23|0}

// [IDENT,'t'][n:197-NT:278] SHIFT1

// Tokens remaining:17

// [State:31]{|31|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|97|81|81|81|81|38|23|0}

// [EQUALS,'='][n:339-NT:409] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:17

// [State:325]{|325|212|97|81|81|81|81|38|23|0}

// [EQUALS,'='][n:265-NT:335] SHIFT1

// Tokens remaining:16

// [State:411]{|411|325|212|97|81|81|81|81|38|23|0}

// [STRING,'"test"'][n:1602-NT:1697] SHIFT1

// Tokens remaining:15

// [State:68]{|68|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:333)][nState:150]

// [State:150]{|150|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:283)][nState:144]

// [State:144]{|144|411|325|212|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:222-NT:296] REDUCE2[Reducing(l:1,r:277)][nState:143]

// Tokens remaining:15

// [State:143]{|143|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:275)][nState:142]

// [State:142]{|142|411|325|212|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:187-NT:261] REDUCE2[Reducing(l:1,r:272)][nState:141]

// Tokens remaining:15

// [State:141]{|141|411|325|212|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:615-NT:689] REDUCE2[Reducing(l:1,r:270)][nState:140]

// Tokens remaining:15

// [State:140]{|140|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:268)][nState:139]

// [State:139]{|139|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:266)][nState:138]

// [State:138]{|138|411|325|212|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:365-NT:439] REDUCE2[Reducing(l:1,r:264)][nState:137]

// Tokens remaining:15

// [State:137]{|137|411|325|212|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:1-NT:75] REDUCE2[Reducing(l:1,r:256)][nState:169]

// Tokens remaining:15

// [State:169]{|169|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:235)][nState:470]

// [State:470]{|470|411|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:154)][nState:413]

// [State:413]{|413|325|212|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:152)][nState:211]

// [State:211]{|211|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:15

// [State:324]{|324|211|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]


// [State:210]{|210|97|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:209]

// Tokens remaining:15

// [State:209]{|209|97|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:201)][nState:93]

// [State:93]{|93|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]

// [State:82]{|82|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:14

// [State:199]{|199|82|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|81|81|81|81|38|23|0}

// [CONSTANT,'constant'][n:1020-NT:1030] SHIFT1

// Tokens remaining:13

// [State:48]{|48|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:210)][nState:95]

// [State:95]{|95|81|81|81|81|81|38|23|0}

// [IDENT,'Real'][n:238-NT:319] REDUCE2[Reducing(l:1,r:203)][nState:94]

// Tokens remaining:13

// [State:94]{|94|81|81|81|81|81|38|23|0}

// [IDENT,'Real'][n:30-NT:111] SHIFT1

// Tokens remaining:12

// [State:31]{|31|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|94|81|81|81|81|81|38|23|0}

// [IDENT,'pi'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:12

// [State:98]{|98|94|81|81|81|81|81|38|23|0}

// [IDENT,'pi'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:12

// [State:216]{|216|98|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:207]

// [State:207]{|207|94|81|81|81|81|81|38|23|0}

// [IDENT,'pi'][n:197-NT:278] SHIFT1

// Tokens remaining:11

// [State:31]{|31|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|207|94|81|81|81|81|81|38|23|0}

// [EQUALS,'='][n:339-NT:409] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:11

// [State:325]{|325|212|207|94|81|81|81|81|81|38|23|0}

// [EQUALS,'='][n:265-NT:335] SHIFT1

// Tokens remaining:10

// [State:411]{|411|325|212|207|94|81|81|81|81|81|38|23|0}

// [UNSIGNED_REAL,'3.14'][n:1602-NT:1659] SHIFT1

// Tokens remaining:9

// [State:112]{|112|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:294)][nState:145]

// [State:145]{|145|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:279)][nState:144]

// [State:144]{|144|411|325|212|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:222-NT:296] REDUCE2[Reducing(l:1,r:277)][nState:143]

// Tokens remaining:9

// [State:143]{|143|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:275)][nState:142]

// [State:142]{|142|411|325|212|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:187-NT:261] REDUCE2[Reducing(l:1,r:272)][nState:141]

// Tokens remaining:9

// [State:141]{|141|411|325|212|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:615-NT:689] REDUCE2[Reducing(l:1,r:270)][nState:140]

// Tokens remaining:9

// [State:140]{|140|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:268)][nState:139]

// [State:139]{|139|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:266)][nState:138]


// [State:138]{|138|411|325|212|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:365-NT:439] REDUCE2[Reducing(l:1,r:264)][nState:137]

// Tokens remaining:9

// [State:137]{|137|411|325|212|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:1-NT:75] REDUCE2[Reducing(l:1,r:256)][nState:169]

// Tokens remaining:9

// [State:169]{|169|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:235)][nState:470]

// [State:470]{|470|411|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:154)][nState:413]

// [State:413]{|413|325|212|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:152)][nState:211]

// [State:211]{|211|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:9

// [State:324]{|324|211|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]

// [State:210]{|210|207|94|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:320]

// Tokens remaining:9

// [State:320]{|320|207|94|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:200)][nState:93]

// [State:93]{|93|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]

// [State:82]{|82|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:8

// [State:199]{|199|82|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|81|81|81|81|81|38|23|0}

// [CONSTANT,'constant'][n:1020-NT:1030] SHIFT1

// Tokens remaining:7

// [State:48]{|48|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:210)][nState:95]

// [State:95]{|95|81|81|81|81|81|81|38|23|0}

// [IDENT,'Integer'][n:238-NT:319] REDUCE2[Reducing(l:1,r:203)][nState:94]

// Tokens remaining:7

// [State:94]{|94|81|81|81|81|81|81|38|23|0}

// [IDENT,'Integer'][n:30-NT:111] SHIFT1

// Tokens remaining:6

// [State:31]{|31|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:33]

// [State:33]{|33|94|81|81|81|81|81|81|38|23|0}

// [IDENT,'x'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

// Tokens remaining:6

// [State:98]{|98|94|81|81|81|81|81|81|38|23|0}

// [IDENT,'x'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

// Tokens remaining:6

// [State:216]{|216|98|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:214)][nState:207]

// [State:207]{|207|94|81|81|81|81|81|81|38|23|0}

// [IDENT,'x'][n:197-NT:278] SHIFT1

// Tokens remaining:5

// [State:31]{|31|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:212]

// [State:212]{|212|207|94|81|81|81|81|81|81|38|23|0}

// [EQUALS,'='][n:339-NT:409] REDUCE2[Reducing(l:0,r:220)][nState:325]

// Tokens remaining:5

// [State:325]{|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [EQUALS,'='][n:265-NT:335] SHIFT1

// Tokens remaining:4

// [State:411]{|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [IDENT,'1'][n:1602-NT:1683] SHIFT1

// Tokens remaining:3

// [State:31]{|31|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:332)][nState:149]


// [State:149]{|149|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:-24-NT:50] REDUCE2[Reducing(l:0,r:220)][nState:277]

// Tokens remaining:3

// [State:277]{|277|149|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:367-NT:441] REDUCE2[Reducing(l:2,r:304)][nState:227]

// Tokens remaining:3

// [State:227]{|227|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:295-NT:369] REDUCE2[Reducing(l:1,r:280)][nState:144]

// Tokens remaining:3

// [State:144]{|144|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:222-NT:296] REDUCE2[Reducing(l:1,r:277)][nState:143]

// Tokens remaining:3

// [State:143]{|143|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:275)][nState:142]

// [State:142]{|142|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:187-NT:261] REDUCE2[Reducing(l:1,r:272)][nState:141]

// Tokens remaining:3

// [State:141]{|141|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:615-NT:689] REDUCE2[Reducing(l:1,r:270)][nState:140]

// Tokens remaining:3

// [State:140]{|140|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:268)][nState:139]

// [State:139]{|139|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:266)][nState:138]

// [State:138]{|138|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:365-NT:439] REDUCE2[Reducing(l:1,r:264)][nState:137]

// Tokens remaining:3

// [State:137]{|137|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:1-NT:75] REDUCE2[Reducing(l:1,r:256)][nState:169]

// Tokens remaining:3

// [State:169]{|169|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:235)][nState:470]

// [State:470]{|470|411|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:154)][nState:413]

// [State:413]{|413|325|212|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:152)][nState:211]

// [State:211]{|211|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:63-NT:137] REDUCE2[Reducing(l:0,r:337)][nState:324]

// Tokens remaining:3

// [State:324]{|324|211|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:149)][nState:210]

// [State:210]{|210|207|94|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:329-NT:403] REDUCE2[Reducing(l:1,r:147)][nState:320]

// Tokens remaining:3

// [State:320]{|320|207|94|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:3,r:200)][nState:93]

// [State:93]{|93|81|81|81|81|81|81|38|23|0}

Absyn.Program.PROGRAM(classes =

{Absyn.Class.CLASS(name = test1, partialPrefix = 0, finalPrefix = 0,

encapsulatedPrefix = 0, restriction = Absyn.Restriction.R_CLASS(), body =

Absyn.ClassDef.PARTS(typeVars = {NIL}, classAttrs = {NIL}, classParts =

{Absyn.ClassPart.PUBLIC(contents = {Absyn.ElementItem.ELEMENTITEM(element =

Absyn.Element.ELEMENT(finalPrefix = 0, redeclareKeywords = NONE(), innerOuter

= Absyn.InnerOuter.NOT_INNER_OUTER(), name = component, specification =

Absyn.ElementSpec.COMPONENTS(attributes =

Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.VAR(), direction = Absyn.Direction.BIDIR(),

arrayDim = {NIL}), typeSpec = Absyn.TypeSpec.TPATH(path =

Absyn.Path.IDENT(name = Integer), arrayDim = NONE()), components =

{Absyn.ComponentItem.COMPONENTITEM(component = Absyn.Component.COMPONENT(name

= c, arrayDim = {NIL}, modification = NONE()), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 6, columnNumberStart = 2, lineNumberEnd = 6, columnNumberEnd

= 11, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime = 0, lastEditTime =

0)), constrainClass = NONE())), Absyn.ElementItem.ELEMENTITEM(element =

Absyn.Element.ELEMENT(finalPrefix = 0, redeclareKeywords = NONE(), innerOuter

= Absyn.InnerOuter.NOT_INNER_OUTER(), name = component, specification =

Absyn.ElementSpec.COMPONENTS(attributes =


Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.VAR(), direction = Absyn.Direction.BIDIR(),

arrayDim = {NIL}), typeSpec = Absyn.TypeSpec.TPATH(path =

Absyn.Path.IDENT(name = Real), arrayDim = NONE()), components =

{Absyn.ComponentItem.COMPONENTITEM(component = Absyn.Component.COMPONENT(name

= a, arrayDim = {NIL}, modification = NONE()), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 7, columnNumberStart = 2, lineNumberEnd = 7, columnNumberEnd

= 8, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime = 0, lastEditTime =

0)), constrainClass = NONE())), Absyn.ElementItem.ELEMENTITEM(element =

Absyn.Element.ELEMENT(finalPrefix = 0, redeclareKeywords = NONE(), innerOuter

= Absyn.InnerOuter.NOT_INNER_OUTER(), name = component, specification =

Absyn.ElementSpec.COMPONENTS(attributes =

Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.PARAM(), direction =

Absyn.Direction.BIDIR(), arrayDim = {NIL}), typeSpec =

Absyn.TypeSpec.TPATH(path = Absyn.Path.IDENT(name = Real), arrayDim = NONE()),

components = {Absyn.ComponentItem.COMPONENTITEM(component =

Absyn.Component.COMPONENT(name = z, arrayDim = {NIL}, modification =

SOME(Absyn.Modification.CLASSMOD(elementArgLst = {NIL}, eqMod =

Absyn.EqMod.EQMOD(exp = Absyn.Exp.REAL(value = 3), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 8, columnNumberStart = 18, lineNumberEnd = 8,

columnNumberEnd = 22, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)))))), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 8, columnNumberStart = 2, lineNumberEnd = 8, columnNumberEnd

= 22, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime = 0, lastEditTime =

0)), constrainClass = NONE())), Absyn.ElementItem.ELEMENTITEM(element =

Absyn.Element.ELEMENT(finalPrefix = 0, redeclareKeywords = NONE(), innerOuter

= Absyn.InnerOuter.NOT_INNER_OUTER(), name = component, specification =

Absyn.ElementSpec.COMPONENTS(attributes =

Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.VAR(), direction = Absyn.Direction.BIDIR(),

arrayDim = {NIL}), typeSpec = Absyn.TypeSpec.TPATH(path =

Absyn.Path.IDENT(name = Boolean), arrayDim = NONE()), components =

{Absyn.ComponentItem.COMPONENTITEM(component = Absyn.Component.COMPONENT(name

= x, arrayDim = {NIL}, modification = NONE()), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 9, columnNumberStart = 2, lineNumberEnd = 9, columnNumberEnd

= 11, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime = 0, lastEditTime =

0)), constrainClass = NONE())), Absyn.ElementItem.ELEMENTITEM(element =

Absyn.Element.ELEMENT(finalPrefix = 0, redeclareKeywords = NONE(), innerOuter

= Absyn.InnerOuter.NOT_INNER_OUTER(), name = component, specification =

Absyn.ElementSpec.COMPONENTS(attributes =

Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.VAR(), direction = Absyn.Direction.BIDIR(),

arrayDim = {NIL}), typeSpec = Absyn.TypeSpec.TPATH(path =

Absyn.Path.IDENT(name = String), arrayDim = NONE()), components =

{Absyn.ComponentItem.COMPONENTITEM(component = Absyn.Component.COMPONENT(name

= t, arrayDim = {NIL}, modification =

SOME(Absyn.Modification.CLASSMOD(elementArgLst = {NIL}, eqMod =

Absyn.EqMod.EQMOD(exp = Absyn.Exp.STRING(value = test), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 10, columnNumberStart = 10, lineNumberEnd = 10,

columnNumberEnd = 17, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)))))), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 10, columnNumberStart = 2, lineNumberEnd = 10,

columnNumberEnd = 17, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)), constrainClass = NONE())),

Absyn.ElementItem.ELEMENTITEM(element = Absyn.Element.ELEMENT(finalPrefix = 0,

redeclareKeywords = NONE(), innerOuter = Absyn.InnerOuter.NOT_INNER_OUTER(),

name = component, specification = Absyn.ElementSpec.COMPONENTS(attributes =


Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.CONST(), direction =

Absyn.Direction.BIDIR(), arrayDim = {NIL}), typeSpec =

Absyn.TypeSpec.TPATH(path = Absyn.Path.IDENT(name = Real), arrayDim = NONE()),

components = {Absyn.ComponentItem.COMPONENTITEM(component =

Absyn.Component.COMPONENT(name = pi, arrayDim = {NIL}, modification =

SOME(Absyn.Modification.CLASSMOD(elementArgLst = {NIL}, eqMod =

Absyn.EqMod.EQMOD(exp = Absyn.Exp.REAL(value = 3.14), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 11, columnNumberStart = 18, lineNumberEnd = 11,

columnNumberEnd = 23, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)))))), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 11, columnNumberStart = 2, lineNumberEnd = 11,

columnNumberEnd = 23, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)), constrainClass = NONE())),

Absyn.ElementItem.ELEMENTITEM(element = Absyn.Element.ELEMENT(finalPrefix = 0,

redeclareKeywords = NONE(), innerOuter = Absyn.InnerOuter.NOT_INNER_OUTER(),

name = component, specification = Absyn.ElementSpec.COMPONENTS(attributes =

Absyn.ElementAttributes.ATTR(flowPrefix = 0, streamPrefix = 0, parallelism =

0, variability = Absyn.Variability.CONST(), direction =

Absyn.Direction.BIDIR(), arrayDim = {NIL}), typeSpec =

Absyn.TypeSpec.TPATH(path = Absyn.Path.IDENT(name = Integer), arrayDim =

NONE()), components = {Absyn.ComponentItem.COMPONENTITEM(component =

Absyn.Component.COMPONENT(name = x, arrayDim = {NIL}, modification =

SOME(Absyn.Modification.CLASSMOD(elementArgLst = {NIL}, eqMod =

Absyn.EqMod.EQMOD(exp = Absyn.Exp.CREF(componentRef =

Absyn.ComponentRef.CREF_IDENT(name = 1, subscripts = {NIL})), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 12, columnNumberStart = 20, lineNumberEnd = 12,

columnNumberEnd = 22, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)))))), condition = NONE(), comment =

SOME(Absyn.Comment.COMMENT(annotation_ = NONE(), comment = NONE())))}), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 12, columnNumberStart = 2, lineNumberEnd = 12,

columnNumberEnd = 22, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)), constrainClass = NONE()))})}, comment = NONE()), info =

Absyn.Info.INFO(fileName = ../../testsuite/omcc_test/Test1.mo, isReadOnly = 0,

lineNumberStart = 5, columnNumberStart = 2, lineNumberEnd = 13,

columnNumberEnd = 11, buildTimes = Absyn.TimeStamp.TIMESTAMP(lastBuildTime =

0, lastEditTime = 0)))}, within_ = Absyn.Within.TOP(), globalBuildTimes =

Absyn.TimeStamp.TIMESTAMP(lastBuildTime = 1.347478e+09, lastEditTime =

1.347478e+09))

// REDUCE3[Reducing(l:1,r:140)][nState:85]

// [State:85]{|85|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:1,r:131)][nState:82]

// [State:82]{|82|81|81|81|81|81|81|38|23|0}

// [SEMICOLON,';'][n:249-NT:323] SHIFT1

// Tokens remaining:2

// [State:199]{|199|82|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:129)][nState:81]

// [State:81]{|81|81|81|81|81|81|81|38|23|0}

// [ENDCLASS,'end test1'][n:1020-NT:1140] REDUCE2[Reducing(l:1,r:127)][nState:198]

// Tokens remaining:2

// [State:198]{|198|81|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:128)][nState:198]

// [State:198]{|198|81|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:128)][nState:198]

// [State:198]{|198|81|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:128)][nState:198]

// [State:198]{|198|81|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:128)][nState:198]

// [State:198]{|198|81|81|38|23|0}

// REDUCE3[Reducing(l:2,r:128)][nState:198]

// [State:198]{|198|81|38|23|0}


// REDUCE3[Reducing(l:2,r:128)][nState:80]

// [State:80]{|80|38|23|0}

// REDUCE3[Reducing(l:1,r:47)][nState:76]

// [State:76]{|76|38|23|0}

// [ENDCLASS,'end test1'][n:972-NT:1092] REDUCE2[Reducing(l:1,r:45)][nState:75]

// Tokens remaining:2

// [State:75]{|75|38|23|0}

// [ENDCLASS,'end test1'][n:198-NT:318] SHIFT1

// Tokens remaining:1

// [State:196]{|196|75|38|23|0}

// REDUCE3[Reducing(l:2,r:14)][nState:72]

// [State:72]{|72|38|23|0}

// REDUCE3[Reducing(l:3,r:10)][nState:21]

// [State:21]{|21|0}

// REDUCE3[Reducing(l:1,r:9)][nState:20]

// [State:20]{|20|0}

// [SEMICOLON,';'][n:123-NT:197] SHIFT1

// [State:36]{|36|20|0}

//

// Now at end of input:

// [n:1410] REDUCE4[Reducing(l:2,r:5)][nState:19]

// Reprocesing at the END

// [State:19]{|19|0}

// REDUCE3[Reducing(l:1,r:2)][nState:17]

// [State:17]{|17|0}

//

// Now at end of input:

// [n:117] SHIFT

// Reprocesing at the END

// [State:34]{|34|17|0}

//

//

// ***************-ACCEPTED-***************

//

//

// SUCCEED

// args:../../testsuite/omcc_test/Test1.mo

// OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012

// ""

// endResult
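The trace above interleaves the parser's state stack (the `{|...|0}` lists) with SHIFT and REDUCE actions: SHIFT1 pushes the state reached on the current token and consumes it, while a REDUCE with length l and rule r pops l states and pushes the state found in the goto table for the reduced rule's nonterminal. A minimal sketch of such an LR driver loop, with invented toy tables (an illustration, not the OMCCp implementation):

```python
def parse(tokens, action, goto, rule_len, rule_lhs):
    """Toy LR driver; `tokens` must end with the '$' end marker."""
    stack = [0]                      # state stack, like {|...|0} in the trace
    pos = 0
    trace = []
    while True:
        state, tok = stack[-1], tokens[pos]
        op, arg = action[(state, tok)]
        if op == "shift":            # push the shifted state, consume the token
            trace.append(f"[State:{state}] SHIFT1")
            stack.append(arg)
            pos += 1
        elif op == "reduce":         # pop rule_len states, push the goto state
            l = rule_len[arg]
            if l:
                del stack[-l:]
            nstate = goto[(stack[-1], rule_lhs[arg])]
            trace.append(f"[State:{state}] REDUCE[l:{l},r:{arg}][nState:{nstate}]")
            stack.append(nstate)
        else:                        # accept
            trace.append("ACCEPTED")
            return trace

# Toy grammar with a single rule 1:  S -> 'a'  (tables hand-built)
action = {(0, "a"): ("shift", 2),
          (2, "$"): ("reduce", 1),
          (1, "$"): ("accept", 0)}
goto = {(0, "S"): 1}
rule_len = {1: 1}
rule_lhs = {1: "S"}

trace = parse(["a", "$"], action, goto, rule_len, rule_lhs)
# trace: ['[State:0] SHIFT1', '[State:2] REDUCE[l:1,r:1][nState:1]', 'ACCEPTED']
```

In OMCCp the `action` and `goto` tables are the ones generated by GNU Bison; the driver itself is written in MetaModelica.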


Appendix C

Modelica Grammar

lexerModelica.l

%{

%}

%x c_comment

%x c_linecomment

%x c_string

whitespace [ \t\n]+

letter [a-zA-Z]

wild [_]

digit [0-9]

digits {digit}+

ident (({letter}|{wild})|({letter}|{digit}|{wild}))({letter}|{digit}|{wild})*

exponent ([e]|[E])([+]|[-])?{digits}

real {digits}[\.]({digits})?({exponent})?

real2 {digits}{exponent}

real3 [\.]{digits}({exponent})?

endif "end"{whitespace}"if"

endfor "end"{whitespace}"for"

endwhile "end"{whitespace}"while"

endwhen "end"{whitespace}"when"

endmatch "end"{whitespace}"match"

endmatchcontinue "end"{whitespace}"matchcontinue"

endident "end"{whitespace}{ident}

sescape "\\\""

/* Lex style lexical syntax of tokens in the MODELICA language */

%%

{whitespace} ;

{real} return UNSIGNED_REAL;

{real2} return UNSIGNED_REAL;

{real3} { sToken = LexerModelica.printBuffer(listReverse(buffer));
          Error.addSourceMessage(6000, {"Treating " + sToken + " as 0" + sToken + ". This is not standard Modelica and only done for compatibility with old code. Support for this feature may be removed in the future."}, info);
        } return UNSIGNED_REAL; // throw a warning

{endif} return ENDIF;

{endfor} return ENDFOR;

{endwhile} return ENDWHILE;

{endwhen} return ENDWHEN;

{endmatchcontinue} return ENDMATCHCONTINUE;

{endmatch} return ENDMATCH;

{endident} return ENDCLASS;

"algorithm" return T_ALGORITHM;

"and" return T_AND;

"annotation" return T_ANNOTATION;

"block" return BLOCK;

"class" return CLASS;

"connect" return CONNECT;

"connector" return CONNECTOR;

"constant" return CONSTANT;

"discrete" return DISCRETE;

"der" return DER;

"defineunit" return DEFINEUNIT;


"each" return EACH;

"else" return ELSE;

"elseif" return ELSEIF;

"elsewhen" return ELSEWHEN;

"end" return T_END;

"enumeration" return ENUMERATION;

"equation" return EQUATION;

"encapsulated" return ENCAPSULATED;

"expandable" return EXPANDABLE;

"extends" return EXTENDS;

"constrainedby" return CONSTRAINEDBY;

"external" return EXTERNAL;

"false" return T_FALSE;

"final" return FINAL;

"flow" return FLOW;

"for" return FOR;

"function" return FUNCTION;

"if" return IF;

"import" return IMPORT;

"in" return T_IN;

"initial" return INITIAL;

"inner" return INNER;

"input" return T_INPUT;

"loop" return LOOP;

"model" return MODEL;

"not" return T_NOT;

"outer" return T_OUTER;

"operator" return OPERATOR;

"overload" return OVERLOAD;

"or" return T_OR;

"output" return T_OUTPUT;

"package" return T_PACKAGE;

"parameter" return PARAMETER;

"partial" return PARTIAL;

"protected" return PROTECTED;

"public" return PUBLIC;

"record" return RECORD;

"redeclare" return REDECLARE;

"replaceable" return REPLACEABLE;

"results" return RESULTS;

"then" return THEN;

"true" return T_TRUE;

"type" return TYPE;

"unsigned_real" return UNSIGNED_REAL;

"when" return WHEN;

"while" return WHILE;

"within" return WITHIN;

"return" return RETURN;

"break" return BREAK;

"(" return LPAR;

")" return RPAR;

"[" return LBRACK;

"]" return RBRACK;

"{" return LBRACE;

"}" return RBRACE;

"==" return EQEQ;

"=" return EQUALS;

"," return COMMA;

":=" return ASSIGN;

"::" return COLONCOLON;

":" return COLON;

";" return SEMICOLON;

"Code" return CODE;

"$Code" return CODE;

"$TypeName" return CODE_NAME;

"$Exp" return CODE_EXP;


"$Var" return CODE_VAR;

"pure" return PURE;

"impure" return IMPURE;

".+" return PLUS_EW;

".-" return MINUS_EW;

".*" return STAR_EW;

"./" return SLASH_EW;

".^" return POWER_EW;

"*" return STAR;

"-" return MINUS;

"+" return PLUS;

"<=" return LESSEQ;

"<>" return LESSGT;

"<" return LESS;

">" return GREATER;

">=" return GREATEREQ;

"^" return POWER;

"/" return SLASH;

"as" return AS;

"case" return CASE;

"equality" return EQUALITY;

"failure" return FAILURE;

"guard" return GUARD;

"local" return LOCAL;

"match" return MATCH;

"matchcontinue" return MATCHCONTINUE;

"uniontype" return UNIONTYPE;

"__" return ALLWILD;

"_" return WILD;

"subtypeof" return SUBTYPEOF;

"\%" return MOD;

"stream" return STREAM;

"\." return DOT;

%"[\"][^\"]*[\"]" return STRING;

{ident} return IDENT;

{digits} return UNSIGNED_INTEGER;

"\"" {

BEGIN(c_string) keepBuffer;

}

<c_string>

{

"\\"+"\"" { keepBuffer; }

"\\"+"\\" { keepBuffer; }

"\"" { BEGIN(INITIAL) return STRING; }

[^\n] {keepBuffer; }

\n {keepBuffer; }

}

"/\*" {

BEGIN(c_comment);

}

<c_comment>

{

"\*/" { BEGIN(INITIAL); }

"/\*" { yyerror("Suspicious comment"); }

[^\n] ;

\n ;


<<EOF>> {

yyerror("Unterminated comment");

yyterminate();

}

}

"//" {

BEGIN(c_linecomment) keepBuffer;

}

<c_linecomment>

{

\n { BEGIN(INITIAL); }

[^\n] ;

}

%%
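The token patterns above can be spot-checked by transliterating a few of the flex definitions into Python regular expressions. This is an illustrative approximation, not part of the thesis; `ident` is simplified here to the conventional letter-or-underscore-first form:

```python
import re

digits   = r"[0-9]+"
exponent = rf"[eE][+-]?{digits}"                      # flex: ([e]|[E])([+]|[-])?{digits}
real     = rf"{digits}\.(?:{digits})?(?:{exponent})?" # flex: {digits}[\.]({digits})?({exponent})?
real2    = rf"{digits}{exponent}"                     # flex: {digits}{exponent}
ident    = r"[A-Za-z_][A-Za-z0-9_]*"                  # simplified ident
endident = rf"end[ \t\n]+{ident}"                     # returned as ENDCLASS

assert re.fullmatch(real, "3.14")           # UNSIGNED_REAL, as in the Test1.mo trace
assert re.fullmatch(real2, "1e5")           # UNSIGNED_REAL
assert re.fullmatch(endident, "end test1")  # ENDCLASS, as in the trace above
assert not re.fullmatch(real, "314")        # plain digits lex as UNSIGNED_INTEGER
```

Because flex prefers the longest match, `end test1` is matched by `endident` as a single ENDCLASS token rather than as the keyword `end` followed by an identifier.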


parserModelica.y

%{

import Absyn;

/* Type Declarations */

type AstTree = Absyn.Program;

type Token = OMCCTypes.Token;

type Program = Absyn.Program;

type Within = Absyn.Within;

type lstClass = list<Absyn.Class>;

type Class = Absyn.Class;

type Ident = Absyn.Ident;

type Path = Absyn.Path;

type ClassDef = Absyn.ClassDef;

type ClassPart = Absyn.ClassPart;

type ClassParts = list<ClassPart>;

type Import = Absyn.Import;

type ElementItem = Absyn.ElementItem;

type ElementItems = list<Absyn.ElementItem>;

type Element = Absyn.Element;

type ElementSpec = Absyn.ElementSpec;

type ElementAttributes = Absyn.ElementAttributes;

type Comment = Absyn.Comment;

type Direction = Absyn.Direction;

type Exp = Absyn.Exp;

type Exps = list<Exp>;

type Matrix = list<list<Exp>>;

type Subscript = Absyn.Subscript;

type ArrayDim = list<Subscript>;

type Operator = Absyn.Operator;

type Case = Absyn.Case;

type Cases = list<Case>;

type MatchType = Absyn.MatchType;

type Restriction = Absyn.Restriction;

type InnerOuter = Absyn.InnerOuter;

type ComponentRef = Absyn.ComponentRef;

type Variability = Absyn.Variability;

type RedeclareKeywords = Absyn.RedeclareKeywords;

type NamedArg=Absyn.NamedArg;

type TypeSpec=Absyn.TypeSpec;

type TypeSpecs=list<TypeSpec>;

type ComponentItem=Absyn.ComponentItem;

type ComponentItems=list<ComponentItem>;

type Component=Absyn.Component;

type EquationItem = Absyn.EquationItem;

type EquationItems = list<EquationItem>;

type Equation = Absyn.Equation;

type Elseif = tuple<Exp, list<EquationItem>>;

type Elseifs = list<Elseif>;

type ForIterator= Absyn.ForIterator;

type ForIterators = list<ForIterator>;

type Elsewhen = tuple<Exp, list<EquationItem>>;

type Elsewhens = list<Elsewhen>;

type FunctionArgs = Absyn.FunctionArgs;

type NamedArgs = list<NamedArg>;

type AlgorithmItem = Absyn.AlgorithmItem;

type AlgorithmItems = list<AlgorithmItem>;

type Algorithm = Absyn.Algorithm;

type AlgElseif = tuple<Exp, list<AlgorithmItem>>;

type AlgElseifs = list<AlgElseif>;

type AlgElsewhen = tuple<Exp, list<AlgorithmItem>>;

type AlgElsewhens = list<AlgElsewhen>;

type ExpElseif = tuple<Exp, Exp>;

type ExpElseifs = list<ExpElseif>;

type EnumDef = Absyn.EnumDef;


type EnumLiteral = Absyn.EnumLiteral;

type EnumLiterals = list<EnumLiteral>;

type Modification = Absyn.Modification;

type Boolean3 = tuple<Boolean,Boolean,Boolean>;

type Boolean2 = tuple<Boolean,Boolean>;

type ElementArg = Absyn.ElementArg;

type ElementArgs = list<ElementArg>;

type Each = Absyn.Each;

type EqMod=Absyn.EqMod;

type ComponentCondition = Absyn.ComponentCondition;

type ExternalDecl = Absyn.ExternalDecl;

type Annotation = Absyn.Annotation;

type ConstrainClass= Absyn.ConstrainClass;

constant list<String> lstSemValue3 = {};

constant list<String> lstSemValue = {

"error", "$undefined", "ALGORITHM", "AND", "ANNOTATION",

"BLOCK", "CLASS", "CONNECT", "CONNECTOR", "CONSTANT", "DISCRETE", "DER",

"DEFINEUNIT", "EACH", "ELSE", "ELSEIF", "ELSEWHEN", "END",

"ENUMERATION", "EQUATION", "ENCAPSULATED", "EXPANDABLE", "EXTENDS",

"CONSTRAINEDBY", "EXTERNAL", "FALSE", "FINAL", "FLOW", "FOR",

"FUNCTION", "IF", "IMPORT", "IN", "INITIAL", "INNER", "INPUT",

"LOOP", "MODEL", "NOT", "OUTER", "OPERATOR", "OVERLOAD", "OR",

"OUTPUT", "PACKAGE", "PARAMETER", "PARTIAL", "PROTECTED", "PUBLIC",

"RECORD", "REDECLARE", "REPLACEABLE", "RESULTS", "THEN", "TRUE",

"TYPE", "REAL", "WHEN", "WHILE", "WITHIN", "RETURN", "BREAK",

".", "(", ")", "[", "]", "{", "}", "=",

"ASSIGN", "COMMA", "COLON", "SEMICOLON", "CODE", "CODE_NAME", "CODE_EXP",

"CODE_VAR", "PURE", "IMPURE", "Identity", "DIGIT", "INTEGER",

"*", "-", "+", "<=", "<>", "<", ">",

">=", "==", "^", "SLASH", "STRING", ".+", ".-",

".*", "./", ".*", "STREAM", "AS", "CASE", "EQUALITY",

"FAILURE", "GUARD", "LOCAL", "MATCH", "MATCHCONTINUE", "UNIONTYPE",

"ALLWILD", "WILD", "SUBTYPEOF", "COLONCOLON", "MOD", "ENDIF", "ENDFOR",

"ENDWHILE", "ENDWHEN", "ENDCLASS", "ENDMATCHCONTINUE", "ENDMATCH",

"$accept",

"program", "within", "classes_list", "class", "classprefix",

"encapsulated", "partial", "restriction", "classdef",

"classdefenumeration", "classdefderived", "enumeration", "enumlist",

"enumliteral", "classparts", "classpart", "restClass",

"algorithmsection", "algorithmitem", "algorithm", "if_algorithm",

"algelseifs", "algelseif", "when_algorithm", "algelsewhens",

"algelsewhen", "equationsection", "equationitem", "equation",

"when_equation", "elsewhens", "elsewhen", "foriterators", "foriterator",

"if_equation", "elseifs", "elseif", "elementItems", "elementItem",

"element", "componentclause", "componentitems", "componentitem",

"component", "modification", "redeclarekeywords", "innerouter",

"importelementspec", "classelementspec", "import", "elementspec",

"elementAttr", "variability", "direction", "typespec", "arrayComplex",

"typespecs", "arraySubscripts", "arrayDim", "functioncall",

"functionargs", "namedargs", "namedarg", "exp", "matchcont", "if_exp",

"expelseifs", "expelseif", "matchlocal", "cases", "case", "casearg",

"simpleExp", "headtail", "rangeExp", "logicexp", "logicterm",

"logfactor", "relterm", "addterm", "term", "factor", "expElement",

"tuple", "explist", "explist2", "cref", "woperator", "soperator",

"power", "relOperator", "path", "ident", "string", "comment"};

%}

%token T_ALGORITHM

%token T_AND

%token T_ANNOTATION

%token BLOCK

%token CLASS

%token CONNECT

%token CONNECTOR

%token CONSTANT

%token DISCRETE

%token DER

%token DEFINEUNIT

%token EACH

%token ELSE

%token ELSEIF

%token ELSEWHEN

%token T_END

%token ENUMERATION

%token EQUATION

%token ENCAPSULATED

%token EXPANDABLE

%token EXTENDS

%token CONSTRAINEDBY

%token EXTERNAL

%token T_FALSE

%token FINAL

%token FLOW

%token FOR

%token FUNCTION

%token IF

%token IMPORT

%token T_IN

%token INITIAL

%token INNER

%token T_INPUT

%token LOOP

%token MODEL

%token T_NOT

%token T_OUTER

%token OPERATOR

%token OVERLOAD

%token T_OR

%token T_OUTPUT

%token T_PACKAGE

%token PARAMETER

%token PARTIAL

%token PROTECTED

%token PUBLIC

%token RECORD

%token REDECLARE

%token REPLACEABLE

%token RESULTS

%token THEN

%token T_TRUE

%token TYPE

%token UNSIGNED_REAL

%token WHEN

%token WHILE

%token WITHIN

%token RETURN

%token BREAK

%token DOT

%token LPAR

%token RPAR

%token LBRACK

%token RBRACK

%token LBRACE

%token RBRACE

%token EQUALS

%token ASSIGN

%token COMMA

%token COLON

%token SEMICOLON

%token CODE

%token CODE_NAME

%token CODE_EXP

%token CODE_VAR

%token PURE

%token IMPURE

%token IDENT

%token DIGIT

%token UNSIGNED_INTEGER

%token STAR

%token MINUS

%token PLUS

%token LESSEQ

%token LESSGT

%token LESS

%token GREATER

%token GREATEREQ

%token EQEQ

%token POWER

%token SLASH

%token STRING

%token PLUS_EW

%token MINUS_EW

%token STAR_EW

%token SLASH_EW

%token POWER_EW

%token STREAM

%token AS

%token CASE

%token EQUALITY

%token FAILURE

%token GUARD

%token LOCAL

%token MATCH

%token MATCHCONTINUE

%token UNIONTYPE

%token ALLWILD

%token WILD

%token SUBTYPEOF

%token COLONCOLON

%token MOD

%token ENDIF

%token ENDFOR

%token ENDWHILE

%token ENDWHEN

%token ENDCLASS

%token ENDMATCHCONTINUE

%token ENDMATCH

//%expect 42

%%

/* Yacc BNF grammar of the Modelica+MetaModelica language */

program : classes_list

{ (absyntree)[Program] =

Absyn.PROGRAM($1[lstClass],Absyn.TOP(),Absyn.TIMESTAMP(System.getCurrentTime(),System.getCurrentTime())); }

| within classes_list

{ (absyntree)[Program] =

Absyn.PROGRAM($2[lstClass],$1[Within],Absyn.TIMESTAMP(System.getCurrentTime(),System.getCurrentTime())); }

within : WITHIN path SEMICOLON { $$[Within] =

Absyn.WITHIN($2[Path]); }

classes_list : class2 SEMICOLON { $$[lstClass] = $1[Class]::{}; }

| class2 SEMICOLON classes_list { $$[lstClass] =

$1[Class]::$3[lstClass]; }
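/* Illustrative input (hypothetical names, not part of the original listing)
   accepted by the program/within/classes_list rules above:

       within MyPackage;
       model M end M;

   yielding an Absyn.PROGRAM with one class inside Absyn.WITHIN(MyPackage). */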

/* restriction IDENT classdef T_END IDENT SEMICOLON

{ if (not stringEqual($2,$5) ) then

print(Types.printInfoError(info) + " Error: The identifier at start and end

are different '" + $2 + "'");

true = ($2 == $5);

end if; $$[Class] =

Absyn.CLASS($2,false,false,false,$1[Restriction],$3[ClassDef],info); }

*/

class2 : FINAL classprefix restriction IDENT classdef {

(v1Boolean,v2Boolean) = $2[Boolean2];

$$[Class] =

Absyn.CLASS($4,v2Boolean,true,v1Boolean,$3[Restriction],$5[ClassDef],info); }

| FINAL restriction IDENT classdef

{ $$[Class] =

Absyn.CLASS($3,false,true,false,$2[Restriction],$4[ClassDef],info); }

| class { $$[Class] = $1[Class]; }

class : restriction IDENT classdef

{ $$[Class] =

Absyn.CLASS($2,false,false,false,$1[Restriction],$3[ClassDef],info); }

| classprefix restriction IDENT classdef

{ (v1Boolean,v2Boolean) = $1[Boolean2];

$$[Class] =

Absyn.CLASS($3,v2Boolean,false,v1Boolean,$2[Restriction],$4[ClassDef],info); }

classdef : string ENDCLASS

{ $$[ClassDef] = Absyn.PARTS({},{},{},SOME($1)); }

| ENDCLASS

{ $$[ClassDef] = Absyn.PARTS({},{},{},NONE()); }

| classparts ENDCLASS

{ $$[ClassDef] =

Absyn.PARTS({},{},$1[ClassParts],NONE()); }

| string classparts ENDCLASS

{ $$[ClassDef] =

Absyn.PARTS({},{},$2[ClassParts],SOME($1)); }

| classdefenumeration

{ $$[ClassDef] = $1[ClassDef]; }

| classdefderived

{ $$[ClassDef] = $1[ClassDef]; };

classprefix : ENCAPSULATED partial

{ $$[Boolean2] = (true,$2[Boolean]); }

| PARTIAL

{ $$[Boolean2] = (false,true); }

// encapsulated : ENCAPSULATED { $$[Boolean] = true; }

// | /* empty */ { $$[Boolean] = false; }

partial : PARTIAL { $$[Boolean] = true; }

| /* empty */ { $$[Boolean] = false; }

final : FINAL { $$[Boolean] = true; }

| /* empty */ { $$[Boolean] = false; }

restriction : CLASS { $$[Restriction] = Absyn.R_CLASS(); }

| MODEL { $$[Restriction] =

Absyn.R_MODEL(); }

| RECORD { $$[Restriction] =

Absyn.R_RECORD(); }

| T_PACKAGE { $$[Restriction] =

Absyn.R_PACKAGE(); }

| TYPE { $$[Restriction] =

Absyn.R_TYPE(); }

| FUNCTION { $$[Restriction] =

Absyn.R_FUNCTION(Absyn.FR_NORMAL_FUNCTION()); }

| UNIONTYPE { $$[Restriction] =

Absyn.R_UNIONTYPE(); }

| BLOCK { $$[Restriction] =

Absyn.R_BLOCK(); }

| CONNECTOR { $$[Restriction] =

Absyn.R_CONNECTOR(); }

| EXPANDABLE CONNECTOR {

$$[Restriction] = Absyn.R_EXP_CONNECTOR(); }

| ENUMERATION { $$[Restriction] =

Absyn.R_ENUMERATION(); }

| OPERATOR RECORD { $$[Restriction] =

Absyn.R_OPERATOR_RECORD(); }

| OPERATOR { $$[Restriction] =

Absyn.R_OPERATOR(); }

classdefenumeration : EQUALS ENUMERATION LPAR enumeration RPAR comment

{ $$[ClassDef] =

Absyn.ENUMERATION($4[EnumDef],SOME($6[Comment])); }

classdefderived : EQUALS typespec elementargs2 comment

{ $$[ClassDef] =

Absyn.DERIVED($2[TypeSpec],Absyn.ATTR(false,false,Absyn.NON_PARALLEL(),Absyn.VAR(),Absyn.BIDIR(),{}),$3[ElementArgs],SOME($4[Comment])); }

| EQUALS elementAttr typespec elementargs2 comment

{ $$[ClassDef] =

Absyn.DERIVED($3[TypeSpec],$2[ElementAttributes],$4[ElementArgs],SOME($5[Comment])); }

enumeration : enumlist { $$[EnumDef] =

Absyn.ENUMLITERALS($1[EnumLiterals]); }

| COLON { $$[EnumDef] = Absyn.ENUM_COLON(); }

enumlist : enumliteral { $$[EnumLiterals] = $1[EnumLiteral]::{}; }

| enumliteral COMMA enumlist { $$[EnumLiterals] =

$1[EnumLiteral]::$3[EnumLiterals]; }

enumliteral : ident comment { $$[EnumLiteral] =

Absyn.ENUMLITERAL($1[Ident],SOME($2[Comment])); }

classparts : classpart { $$[ClassParts] = $1[ClassPart]::{}; }

| classpart classparts { $$[ClassParts] =

$1[ClassPart]::$2[ClassParts]; }

classpart : elementItems { $$[ClassPart] =

Absyn.PUBLIC($1[ElementItems]); }

| restClass { $$[ClassPart] = $1[ClassPart]; }

restClass : PUBLIC optelement { $$[ClassPart] =

Absyn.PUBLIC($2[ElementItems]); }

| PROTECTED optelement { $$[ClassPart] =

Absyn.PROTECTED($2[ElementItems]); }

| EQUATION optequationsection { $$[ClassPart] =

Absyn.EQUATIONS($2[EquationItems]); }

| T_ALGORITHM optalgorithmsection { $$[ClassPart] =

Absyn.ALGORITHMS($2[AlgorithmItems]); }

| initialClass { $$[ClassPart]=$1[ClassPart]; }

| external { $$[ClassPart]=$1[ClassPart]; }

initialClass : INITIAL EQUATION equationsection { $$[ClassPart] =

Absyn.INITIALEQUATIONS($3[EquationItems]); }

| INITIAL T_ALGORITHM algorithmsection { $$[ClassPart] =

Absyn.INITIALALGORITHMS($3[AlgorithmItems]); }

optelement : elementItems { $$[ElementItems]=$1[ElementItems]; }

| /* empty */ { $$[ElementItems]={}; }

optequationsection : equationsection {

$$[EquationItems]=$1[EquationItems]; }

| /* empty */ { $$[EquationItems]={}; }

optalgorithmsection : algorithmsection {

$$[AlgorithmItems]=$1[AlgorithmItems]; }

| /* empty */ { $$[AlgorithmItems]={}; }

external : EXTERNAL SEMICOLON { $$[ClassPart] =

Absyn.EXTERNAL(Absyn.EXTERNALDECL(NONE(),NONE(),NONE(),{},NONE()),NONE()); }

| EXTERNAL externalDecl SEMICOLON { $$[ClassPart] =

Absyn.EXTERNAL($2[ExternalDecl],NONE()); }

| EXTERNAL externalDecl SEMICOLON annotation SEMICOLON

{ $$[ClassPart] = Absyn.EXTERNAL($2[ExternalDecl],SOME($4[Annotation])); }

externalDecl : string { $$[ExternalDecl] =

Absyn.EXTERNALDECL(NONE(),SOME($1),NONE(),{},NONE()); }

| string annotation { $$[ExternalDecl] =

Absyn.EXTERNALDECL(NONE(),SOME($1),NONE(),{},SOME($2[Annotation])); }

| string cref EQUALS ident LPAR explist2 RPAR {

$$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($4[Ident]),SOME($1),SOME($2[ComponentRef]),$6[Exps],NONE()); }

| string cref EQUALS ident LPAR explist2 RPAR

annotation { $$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($4[Ident]),SOME($1),SOME($2[ComponentRef]),$6[Exps],SOME($8[Annotation])); }

| string ident LPAR explist2 RPAR annotation {

$$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($2[Ident]),SOME($1),NONE(),$4[Exps],SOME($6[Annotation])); }

| string ident LPAR explist2 RPAR { $$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($2[Ident]),SOME($1),NONE(),$4[Exps],NONE()); }

| cref EQUALS ident LPAR explist2 RPAR {

$$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($3[Ident]),NONE(),SOME($1[ComponentRef]),$5[Exps],NONE()); }

| cref EQUALS ident LPAR explist2 RPAR annotation {

$$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($3[Ident]),NONE(),SOME($1[ComponentRef]),$5[Exps],SOME($7[Annotation])); }

| ident LPAR explist2 RPAR annotation {

$$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($1[Ident]),NONE(),NONE(),$3[Exps],SOME($5[Annotation])); }

| ident LPAR explist2 RPAR { $$[ExternalDecl] =

Absyn.EXTERNALDECL(SOME($1[Ident]),NONE(),NONE(),$3[Exps],NONE()); }

/* ALGORITHMS */

algorithmsection : algorithmitem SEMICOLON { $$[AlgorithmItems] =

$1[AlgorithmItem]::{}; }

| algorithmitem SEMICOLON algorithmsection {

$$[AlgorithmItems] = $1[AlgorithmItem]::$3[AlgorithmItems]; }

algorithmitem : algorithm comment

{ $$[AlgorithmItem] =

Absyn.ALGORITHMITEM($1[Algorithm],SOME($2[Comment]),info); }

algorithm : simpleExp ASSIGN exp

{ $$[Algorithm] =

Absyn.ALG_ASSIGN($1[Exp],$3[Exp]); }

| cref functioncall

{ $$[Algorithm] =

Absyn.ALG_NORETCALL($1[ComponentRef],$2[FunctionArgs]); }

| RETURN

{ $$[Algorithm] = Absyn.ALG_RETURN(); }

| BREAK

{ $$[Algorithm] = Absyn.ALG_BREAK(); }

| if_algorithm

{ $$[Algorithm] = $1[Algorithm]; }

| when_algorithm

{ $$[Algorithm] = $1[Algorithm]; }

| FOR foriterators LOOP algorithmsection ENDFOR

{ $$[Algorithm] =

Absyn.ALG_FOR($2[ForIterators],$4[AlgorithmItems]); }

| WHILE exp LOOP algorithmsection ENDWHILE

{ $$[Algorithm] =

Absyn.ALG_WHILE($2[Exp],$4[AlgorithmItems]); }

if_algorithm : IF exp THEN ENDIF { $$[Algorithm] =

Absyn.ALG_IF($2[Exp],{},{},{}); } // warning empty if

| IF exp THEN algorithmsection ENDIF { $$[Algorithm] =

Absyn.ALG_IF($2[Exp],$4[AlgorithmItems],{},{}); }

| IF exp THEN algorithmsection ELSE algorithmsection

ENDIF { $$[Algorithm] =

Absyn.ALG_IF($2[Exp],$4[AlgorithmItems],{},$6[AlgorithmItems]); }

| IF exp THEN algorithmsection algelseifs ENDIF {

$$[Algorithm] = Absyn.ALG_IF($2[Exp],$4[AlgorithmItems],$5[AlgElseifs],{}); }

| IF exp THEN algorithmsection algelseifs ELSE

algorithmsection ENDIF { $$[Algorithm] =

Absyn.ALG_IF($2[Exp],$4[AlgorithmItems],$5[AlgElseifs],$7[AlgorithmItems]); }

algelseifs : algelseif { $$[AlgElseifs] = $1[AlgElseif]::{}; }

| algelseif algelseifs { $$[AlgElseifs] =

$1[AlgElseif]::$2[AlgElseifs]; }

algelseif : ELSEIF exp THEN algorithmsection { $$[AlgElseif] =

($2[Exp],$4[AlgorithmItems]); }

when_algorithm : WHEN exp THEN algorithmsection ENDWHEN

{ $$[Algorithm] =

Absyn.ALG_WHEN_A($2[Exp],$4[AlgorithmItems],{}); }

| WHEN exp THEN algorithmsection algelsewhens ENDWHEN

{ $$[Algorithm] =

Absyn.ALG_WHEN_A($2[Exp],$4[AlgorithmItems],$5[AlgElsewhens]); }

algelsewhens : algelsewhen { $$[AlgElsewhens] =

$1[AlgElsewhen]::{}; }

| algelsewhen algelsewhens { $$[AlgElsewhens] =

$1[AlgElsewhen]::$2[AlgElsewhens]; }

algelsewhen : ELSEWHEN exp THEN algorithmsection {

$$[AlgElsewhen] = ($2[Exp],$4[AlgorithmItems]); }

/* EQUATIONS */

equationsection : equationitem SEMICOLON { $$[EquationItems] =

$1[EquationItem]::{}; }

| equationitem SEMICOLON equationsection {

$$[EquationItems] = $1[EquationItem]::$3[EquationItems]; }

equationitem : equation comment

{ $$[EquationItem] =

Absyn.EQUATIONITEM($1[Equation],SOME($2[Comment]),info); }

equation : exp EQUALS exp

{ $$[Equation] =

Absyn.EQ_EQUALS($1[Exp],$3[Exp]); }

| if_equation

{ $$[Equation] = $1[Equation]; }

| when_equation

{ $$[Equation] = $1[Equation]; }

| CONNECT LPAR cref COMMA cref RPAR

{ $$[Equation] =

Absyn.EQ_CONNECT($3[ComponentRef],$5[ComponentRef]); }

| FOR foriterators LOOP equationsection ENDFOR

{ $$[Equation] =

Absyn.EQ_FOR($2[ForIterators],$4[EquationItems]); }

| cref functioncall { $$[Equation] =

Absyn.EQ_NORETCALL($1[ComponentRef],$2[FunctionArgs]); }

when_equation : WHEN exp THEN equationsection ENDWHEN

{ $$[Equation] =

Absyn.EQ_WHEN_E($2[Exp],$4[EquationItems],{}); }

| WHEN exp THEN equationsection elsewhens ENDWHEN

{ $$[Equation] =

Absyn.EQ_WHEN_E($2[Exp],$4[EquationItems],$5[Elsewhens]); }

elsewhens : elsewhen { $$[Elsewhens] = $1[Elsewhen]::{}; }

| elsewhen elsewhens { $$[Elsewhens] =

$1[Elsewhen]::$2[Elsewhens]; }

elsewhen : ELSEWHEN exp THEN equationsection { $$[Elsewhen] =

($2[Exp],$4[EquationItems]); }

foriterators : foriterator { $$[ForIterators] = $1[ForIterator]::{}; }

| foriterator COMMA foriterators { $$[ForIterators] =

$1[ForIterator]::$3[ForIterators]; }

foriterator : IDENT { $$[ForIterator] =

Absyn.ITERATOR($1,NONE(),NONE()); }

| IDENT T_IN exp { $$[ForIterator] =

Absyn.ITERATOR($1,NONE(),SOME($3[Exp])); }

if_equation : IF exp THEN equationsection ENDIF

{ $$[Equation] =

Absyn.EQ_IF($2[Exp],$4[EquationItems],{},{}); }

| IF exp THEN equationsection ELSE equationsection ENDIF

{ $$[Equation] =

Absyn.EQ_IF($2[Exp],$4[EquationItems],{},$6[EquationItems]); }

| IF exp THEN equationsection ELSE ENDIF

{ $$[Equation] =

Absyn.EQ_IF($2[Exp],$4[EquationItems],{},{}); }

| IF exp THEN equationsection elseifs ENDIF

{ $$[Equation] =

Absyn.EQ_IF($2[Exp],$4[EquationItems],$5[Elseifs],{}); }

| IF exp THEN equationsection elseifs ELSE

equationsection ENDIF

{ $$[Equation] =

Absyn.EQ_IF($2[Exp],$4[EquationItems],$5[Elseifs],$7[EquationItems]); }

| IF exp THEN equationsection elseifs ELSE ENDIF

{ $$[Equation] =

Absyn.EQ_IF($2[Exp],$4[EquationItems],$5[Elseifs],{}); }

elseifs : elseif { $$[Elseifs] = $1[Elseif]::{}; }

| elseif elseifs { $$[Elseifs] =

$1[Elseif]::$2[Elseifs]; }

elseif : ELSEIF exp THEN equationsection { $$[Elseif] =

($2[Exp],$4[EquationItems]); }

/* Expressions and Elements */

elementItems : elementItem { $$[ElementItems] = $1[ElementItem]::{}; }

| elementItem elementItems { $$[ElementItems] =

$1[ElementItem]::$2[ElementItems]; }

elementItem : element SEMICOLON { $$[ElementItem] =

Absyn.ELEMENTITEM($1[Element]); }

| annotation SEMICOLON { $$[ElementItem] =

Absyn.ANNOTATIONITEM($1[Annotation]); }

element : componentclause

{ $$[Element] = $1[Element]; }

| classElement2

{ $$[Element] = $1[Element]; }

| importelementspec

{ $$[Element] =

Absyn.ELEMENT(false,NONE(),Absyn.NOT_INNER_OUTER(),"IMPORT",$1[ElementSpec],info,NONE()); }

| extends

{ $$[Element] =

Absyn.ELEMENT(false,NONE(),Absyn.NOT_INNER_OUTER(),"EXTENDS",$1[ElementSpec],info,NONE()); }

| unitclause

{ $$[Element] = $1[Element]; }

unitclause : DEFINEUNIT ident { $$[Element] =

Absyn.DEFINEUNIT($2[Ident],{}); }

| DEFINEUNIT ident LPAR namedargs RPAR { $$[Element] =

Absyn.DEFINEUNIT($2[Ident],$4[NamedArgs]); }

classElement2 : classelementspec

{ $$[Element] =

Absyn.ELEMENT(false,NONE(),Absyn.NOT_INNER_OUTER(),"??",$1[ElementSpec],info,NONE()); }

| REDECLARE classelementspec

{ $$[Element] =

Absyn.ELEMENT(false,SOME(Absyn.REDECLARE()),Absyn.NOT_INNER_OUTER(),"CLASS",$2[ElementSpec],info,NONE()); }

componentclause : elementspec

{ $$[Element] =

Absyn.ELEMENT(false,NONE(),Absyn.NOT_INNER_OUTER(),"component",$1[ElementSpec],info,NONE()); }

| innerouter elementspec

{ $$[Element] =

Absyn.ELEMENT(false,NONE(),$1[InnerOuter],"INNEROUTTER ELEMENTSPEC",$2[ElementSpec],info,NONE()); }

| redeclarekeywords final innerouter elementspec

{ $$[Element] =

Absyn.ELEMENT($2[Boolean],SOME($1[RedeclareKeywords]),$3[InnerOuter],"REDE ELEMENTSPEC",$4[ElementSpec],info,NONE()); }

| redeclarekeywords final elementspec

{ $$[Element] =

Absyn.ELEMENT($2[Boolean],SOME($1[RedeclareKeywords]),Absyn.NOT_INNER_OUTER(),

"REDE ELEMENTSPEC",$3[ElementSpec],info,NONE()); }

| redeclarekeywords final elementspec

constraining_clause

{ $$[Element] =

Absyn.ELEMENT($2[Boolean],SOME($1[RedeclareKeywords]),Absyn.NOT_INNER_OUTER(),

"REDE ELEMENTSPEC",$3[ElementSpec],info,SOME($4[ConstrainClass])); }

| FINAL elementspec

{ $$[Element] =

Absyn.ELEMENT(true,NONE(),Absyn.NOT_INNER_OUTER(),"FINAL ELEMENTSPEC",$2[ElementSpec],info,NONE()); }

| FINAL innerouter elementspec

{ $$[Element] =

Absyn.ELEMENT(true,NONE(),$2[InnerOuter],"FINAL INNEROUTER ELEMENTSPEC",$3[ElementSpec],info,NONE()); }

componentitems : componentitem { $$[ComponentItems] =

$1[ComponentItem]::{}; }

| componentitem COMMA componentitems { $$[ComponentItems]

= $1[ComponentItem]::$3[ComponentItems]; }

componentitem : component comment { $$[ComponentItem] =

Absyn.COMPONENTITEM($1[Component],NONE(),SOME($2[Comment])); }

| component componentcondition comment { $$[ComponentItem] =

Absyn.COMPONENTITEM($1[Component],SOME($2[ComponentCondition]),SOME($3[Comment])); }

componentcondition : IF exp { $$[ComponentCondition] = $2[Exp]; }

component : ident arraySubscripts modification { $$[Component] =

Absyn.COMPONENT($1[Ident],$2[ArrayDim],SOME($3[Modification])); }

| ident arraySubscripts { $$[Component] =

Absyn.COMPONENT($1[Ident],$2[ArrayDim],NONE()); }

modification : EQUALS exp { $$[Modification] =

Absyn.CLASSMOD({},Absyn.EQMOD($2[Exp],info)); }

| ASSIGN exp { $$[Modification] =

Absyn.CLASSMOD({},Absyn.EQMOD($2[Exp],info)); }

| class_modification { $$[Modification] =

$1[Modification]; }

class_modification : elementargs

{ $$[Modification] =

Absyn.CLASSMOD($1[ElementArgs],Absyn.NOMOD()); }

| elementargs EQUALS exp

{ $$[Modification] =

Absyn.CLASSMOD($1[ElementArgs],Absyn.EQMOD($3[Exp],info)); }

annotation : T_ANNOTATION elementargs { $$[Annotation]=

Absyn.ANNOTATION($2[ElementArgs]); }

elementargs : LPAR argumentlist RPAR { $$[ElementArgs] =

$2[ElementArgs]; }

elementargs2 : LPAR argumentlist RPAR { $$[ElementArgs] =

$2[ElementArgs]; }

| /* empty */ { $$[ElementArgs] = {}; }

argumentlist : elementarg { $$[ElementArgs] = {$1[ElementArg]}; }

| elementarg COMMA argumentlist { $$[ElementArgs] =

$1[ElementArg]::$3[ElementArgs]; }

elementarg : element_mod_rep { $$[ElementArg] = $1[ElementArg]; }

| element_redec { $$[ElementArg] = $1[ElementArg]; }

element_mod_rep : element_mod { $$[ElementArg] = $1[ElementArg]; }

| element_rep { $$[ElementArg] = $1[ElementArg]; }

element_mod : eachprefix final cref

{ $$[ElementArg] =

Absyn.MODIFICATION($2[Boolean],$1[Each],$3[ComponentRef],NONE(),NONE(),info); }

| eachprefix final cref modification

{ $$[ElementArg] =

Absyn.MODIFICATION($2[Boolean],$1[Each],$3[ComponentRef],SOME($4[Modification]),NONE(),info); }

| eachprefix final cref string

{ $$[ElementArg] =

Absyn.MODIFICATION($2[Boolean],$1[Each],$3[ComponentRef],NONE(),SOME($4),info); }

| eachprefix final cref modification string

{ $$[ElementArg] =

Absyn.MODIFICATION($2[Boolean],$1[Each],$3[ComponentRef],SOME($4[Modification]),SOME($5),info); }

element_rep : REPLACEABLE eachprefix final classelementspec

{ $$[ElementArg] =

Absyn.REDECLARATION($3[Boolean],Absyn.REPLACEABLE(),$2[Each],$4[ElementSpec],NONE(),info); }

| REPLACEABLE eachprefix final elementspec2

{ $$[ElementArg] =

Absyn.REDECLARATION($3[Boolean],Absyn.REPLACEABLE(),$2[Each],$4[ElementSpec],NONE(),info); }

| REPLACEABLE eachprefix final classelementspec

constraining_clause

{ $$[ElementArg] =

Absyn.REDECLARATION($3[Boolean],Absyn.REDECLARE(),$2[Each],$4[ElementSpec],SOME($5[ConstrainClass]),info); }

| REPLACEABLE eachprefix final elementspec2

constraining_clause

{ $$[ElementArg] =

Absyn.REDECLARATION($3[Boolean],Absyn.REDECLARE(),$2[Each],$4[ElementSpec],SOME($5[ConstrainClass]),info); }

element_redec : REDECLARE eachprefix final classelementspec

{ $$[ElementArg] =

Absyn.REDECLARATION($3[Boolean],Absyn.REDECLARE(),$2[Each],$4[ElementSpec],NONE(),info); }

| REDECLARE eachprefix final elementspec2

{ $$[ElementArg] =

Absyn.REDECLARATION($3[Boolean],Absyn.REDECLARE(),$2[Each],$4[ElementSpec],NONE(),info); }

elementspec2 : elementAttr typespec componentitems2 /* arraydim from typespec should be in elementAttr arraydim */

{ ($1[ElementAttributes],$2[TypeSpec]) =

fixArray($1[ElementAttributes],$2[TypeSpec]);

$$[ElementSpec] =

Absyn.COMPONENTS($1[ElementAttributes],$2[TypeSpec],$3[ComponentItems]); }

| typespec componentitems2 /* arraydim from typespec should be in elementAttr arraydim */

{ (v1ElementAttributes,$1[TypeSpec]) =

fixArray(Absyn.ATTR(false,false,Absyn.NON_PARALLEL(),Absyn.VAR(),

Absyn.BIDIR(),{}),$1[TypeSpec]);

$$[ElementSpec] =

Absyn.COMPONENTS(v1ElementAttributes,$1[TypeSpec],$2[ComponentItems]); }

componentitems2 : component comment { $$[ComponentItems] =

{Absyn.COMPONENTITEM($1[Component],NONE(),SOME($2[Comment]))}; }

eachprefix : EACH { $$[Each]= Absyn.EACH(); }

| /* empty */ { $$[Each]= Absyn.NON_EACH(); }

redeclarekeywords : REDECLARE { $$[RedeclareKeywords] = Absyn.REDECLARE(); }

| REPLACEABLE { $$[RedeclareKeywords] =

Absyn.REPLACEABLE(); }

| REDECLARE REPLACEABLE { $$[RedeclareKeywords] =

Absyn.REDECLARE_REPLACEABLE(); }

innerouter : INNER { $$[InnerOuter] = Absyn.INNER(); }

| T_OUTER { $$[InnerOuter] = Absyn.OUTER(); }

| INNER T_OUTER { $$[InnerOuter] = Absyn.INNER_OUTER(); }

//| /* empty */ { $$[InnerOuter] = Absyn.NOT_INNER_OUTER(); }

importelementspec : import comment { $$[ElementSpec] =

Absyn.IMPORT($1[Import],SOME($2[Comment]),info); }

classelementspec : class { $$[ElementSpec] =

Absyn.CLASSDEF(false,$1[Class]); }

| REPLACEABLE class { $$[ElementSpec] =

Absyn.CLASSDEF(true,$2[Class]); }

import : IMPORT path { $$[Import] = Absyn.QUAL_IMPORT($2[Path]); }

| IMPORT path STAR_EW { $$[Import] =

Absyn.UNQUAL_IMPORT($2[Path]); }

| IMPORT ident EQUALS path { $$[Import] =

Absyn.NAMED_IMPORT($2[Ident],$4[Path]); }

extends : EXTENDS path elementargs2

{ $$[ElementSpec] =

Absyn.EXTENDS($2[Path],$3[ElementArgs],NONE()); }

| EXTENDS path elementargs2 annotation

{ $$[ElementSpec] =

Absyn.EXTENDS($2[Path],$3[ElementArgs],SOME($4[Annotation])); }

constraining_clause : extends { $$[ConstrainClass]=

Absyn.CONSTRAINCLASS($1[ElementSpec],NONE()); }

| CONSTRAINEDBY path elementargs2 { $$[ConstrainClass]=

Absyn.CONSTRAINCLASS(Absyn.EXTENDS($2[Path],$3[ElementArgs],NONE()),NONE()); }

elementspec : elementAttr typespec componentitems /* arraydim from typespec should be in elementAttr arraydim */

{ ($1[ElementAttributes],$2[TypeSpec]) =

fixArray($1[ElementAttributes],$2[TypeSpec]);

$$[ElementSpec] =

Absyn.COMPONENTS($1[ElementAttributes],$2[TypeSpec],$3[ComponentItems]); }

| typespec componentitems /* arraydim from typespec should be in elementAttr arraydim */

{ (v1ElementAttributes,$1[TypeSpec]) =

fixArray(Absyn.ATTR(false,false,Absyn.NON_PARALLEL(),Absyn.VAR(),

Absyn.BIDIR(),{}),$1[TypeSpec]);

$$[ElementSpec] =

Absyn.COMPONENTS(v1ElementAttributes,$1[TypeSpec],$2[ComponentItems]); }

elementAttr : direction

{ $$[ElementAttributes] =

Absyn.ATTR(false,false,Absyn.NON_PARALLEL(),Absyn.VAR(), $1[Direction],{}); }

| variability

{ $$[ElementAttributes] =

Absyn.ATTR(false,false,Absyn.NON_PARALLEL(),$1[Variability],

Absyn.BIDIR(),{}); }

| variability direction

{ $$[ElementAttributes] =

Absyn.ATTR(false,false,Absyn.NON_PARALLEL(),$1[Variability],

$2[Direction],{}); }

| STREAM variability direction

{ $$[ElementAttributes] =

Absyn.ATTR(false,true,Absyn.NON_PARALLEL(),$2[Variability],$3[Direction],{}); }

| FLOW variability direction

{ $$[ElementAttributes] =

Absyn.ATTR(true,false,Absyn.NON_PARALLEL(),$2[Variability],$3[Direction],{}); }

| FLOW direction

{ $$[ElementAttributes] =

Absyn.ATTR(true,false,Absyn.NON_PARALLEL(),Absyn.VAR(), $2[Direction],{}); }

| FLOW

{ $$[ElementAttributes] =

Absyn.ATTR(true,false,Absyn.NON_PARALLEL(),Absyn.VAR(),Absyn.BIDIR(),{}); }

variability : PARAMETER { $$[Variability] = Absyn.PARAM(); }

| CONSTANT { $$[Variability] = Absyn.CONST(); }

| DISCRETE { $$[Variability] = Absyn.DISCRETE(); }

// | /* empty */ { $$[Variability] = Absyn.VAR(); }

direction : T_INPUT { $$[Direction] = Absyn.INPUT(); }

| T_OUTPUT { $$[Direction] = Absyn.OUTPUT(); }

// | /* empty */ { $$[Direction] = Absyn.BIDIR(); }

/* Type specification */

typespec : path arraySubscripts { $$[TypeSpec] =

Absyn.TPATH($1[Path],SOME($2[ArrayDim])); }

| path arrayComplex { $$[TypeSpec] =

Absyn.TCOMPLEX($1[Path],$2[TypeSpecs],NONE()); }

arrayComplex : LESS typespecs GREATER { $$[TypeSpecs] = $2[TypeSpecs]; }

typespecs : typespec { $$[TypeSpecs] = $1[TypeSpec]::{}; }

| typespec COMMA typespecs { $$[TypeSpecs] =

$1[TypeSpec]::$3[TypeSpecs]; }

arraySubscripts : LBRACK arrayDim RBRACK { $$[ArrayDim] = $2[ArrayDim]; }

| /* empty */ { $$[ArrayDim] = {}; }

arrayDim : subscript { $$[ArrayDim] = $1[Subscript]::{}; }

| subscript COMMA arrayDim { $$[ArrayDim] =

$1[Subscript]::$3[ArrayDim]; }

subscript : exp { $$[Subscript] = Absyn.SUBSCRIPT($1[Exp]); }

| COLON { $$[Subscript] = Absyn.NOSUB(); }

/* function calls */

functioncall : LPAR functionargs RPAR { $$[FunctionArgs] =

$2[FunctionArgs]; }

functionargs : namedargs

{ $$[FunctionArgs] =

Absyn.FUNCTIONARGS({},$1[NamedArgs]); }

| functionargs2 { $$[FunctionArgs]= $1[FunctionArgs]; }

| functionargs3 { $$[FunctionArgs]= $1[FunctionArgs]; }

functionargs2 : explist2

{ $$[FunctionArgs] = Absyn.FUNCTIONARGS($1[Exps],{}); }

| explist2 COMMA namedargs

{ $$[FunctionArgs] =

Absyn.FUNCTIONARGS($1[Exps],$3[NamedArgs]); }

functionargs3 : exp FOR foriterators

{ $$[FunctionArgs] =

Absyn.FOR_ITER_FARG($1[Exp],$3[ForIterators]); }

namedargs : namedarg { $$[NamedArgs] = $1[NamedArg]::{}; }

| namedarg COMMA namedargs { $$[NamedArgs] =

$1[NamedArg]::$3[NamedArgs]; }

namedarg : ident EQUALS exp { $$[NamedArg] =

Absyn.NAMEDARG($1[Ident],$3[Exp]); }
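/* Illustrative input (hypothetical names, not part of the original listing)
   for the functionargs/namedargs rules above: a call argument list such as

       f(x, 2, scale = 0.5)

   parses "x, 2" via explist2 and "scale = 0.5" via namedarg, producing an
   Absyn.FUNCTIONARGS value with both positional and named arguments. */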

/* expressions */

exp : simpleExp { $$[Exp] = $1[Exp]; }

| if_exp { $$[Exp] = $1[Exp]; }

| matchcont { $$[Exp] = $1[Exp]; }

matchcont : MATCH exp cases ENDMATCH { $$[Exp] =

Absyn.MATCHEXP(Absyn.MATCH(),$2[Exp],{},$3[Cases],NONE()); }

| MATCH exp matchlocal cases ENDMATCH { $$[Exp] =

Absyn.MATCHEXP(Absyn.MATCH(),$2[Exp],$3[ElementItems],$4[Cases],NONE()); }

| MATCHCONTINUE exp cases ENDMATCHCONTINUE { $$[Exp] =

Absyn.MATCHEXP(Absyn.MATCHCONTINUE(),$2[Exp],{},$3[Cases],NONE()); }

| MATCHCONTINUE exp matchlocal cases ENDMATCHCONTINUE {

$$[Exp] =

Absyn.MATCHEXP(Absyn.MATCHCONTINUE(),$2[Exp],$3[ElementItems],$4[Cases],NONE()); }
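/* Illustrative MetaModelica input (hypothetical, not part of the original
   listing) for the matchcont/matchlocal/cases rules above:

       match x
         local Integer y;
         case 1 then 2;
         else then 0;
       end match

   which builds Absyn.MATCHEXP(Absyn.MATCH(), ...) with one local
   declaration, one case, and an else branch. */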

if_exp : IF exp THEN exp ELSE exp { $$[Exp] =

Absyn.IFEXP($2[Exp],$4[Exp],$6[Exp],{}); }

| IF exp THEN exp expelseifs ELSE exp { $$[Exp] =

Absyn.IFEXP($2[Exp],$4[Exp],$7[Exp],$5[ExpElseifs]); }

expelseifs : expelseif { $$[ExpElseifs] = $1[ExpElseif]::{}; }

| expelseif expelseifs { $$[ExpElseifs] =

$1[ExpElseif]::$2[ExpElseifs]; }

expelseif : ELSEIF exp THEN exp { $$[ExpElseif] = ($2[Exp],$4[Exp]); }

matchlocal : LOCAL elementItems { $$[ElementItems] = $2[ElementItems]; }

cases : case { $$[Cases] = $1[Case]::{}; }

| case cases { $$[Cases] = $1[Case]::$2[Cases]; }

case : CASE casearg THEN exp SEMICOLON

{ $$[Case] =

Absyn.CASE($2[Exp],info,{},{},$4[Exp],info,NONE(),info); }

| CASE casearg EQUATION THEN exp SEMICOLON

{ $$[Case] =

Absyn.CASE($2[Exp],info,{},{},$5[Exp],info,NONE(),info); }

| CASE casearg EQUATION equationsection THEN exp SEMICOLON

{ $$[Case] =

Absyn.CASE($2[Exp],info,{},$4[EquationItems],$6[Exp],info,NONE(),info); }

| ELSE THEN exp SEMICOLON

{ $$[Case] =

Absyn.ELSE({},{},$3[Exp],info,NONE(),info); }

| ELSE EQUATION equationsection THEN exp SEMICOLON

{ $$[Case] =

Absyn.ELSE({},$3[EquationItems],$5[Exp],info,NONE(),info); }

casearg : exp { $$[Exp] = $1[Exp]; }

simpleExp : logicexp { $$[Exp] = $1[Exp]; }

| rangeExp { $$[Exp] = $1[Exp]; }

| headtail { $$[Exp] = $1[Exp]; }

| ident AS simpleExp { $$[Exp] =

Absyn.AS($1[Ident],$3[Exp]); }

headtail : logicexp COLONCOLON logicexp { $$[Exp] =

Absyn.CONS($1[Exp],$3[Exp]); }

| logicexp COLONCOLON headtail { $$[Exp] =

Absyn.CONS($1[Exp],$3[Exp]); }

rangeExp : logicexp COLON logicexp { $$[Exp] =

Absyn.RANGE($1[Exp],NONE(),$3[Exp]); }

| logicexp COLON logicexp COLON logicexp { $$[Exp] =

Absyn.RANGE($1[Exp],SOME($3[Exp]),$5[Exp]); }
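/* Illustrative inputs (not part of the original listing) for the rangeExp
   rules above: "1:10" builds Absyn.RANGE(1,NONE(),10), while "0:2:10"
   builds Absyn.RANGE(0,SOME(2),10) with an explicit step expression. */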

logicexp : logicterm { $$[Exp] = $1[Exp]; }

| logicexp T_OR logicterm { $$[Exp] =

Absyn.LBINARY($1[Exp],Absyn.OR(),$3[Exp]); }

logicterm : logfactor { $$[Exp] = $1[Exp]; }

| logicterm T_AND logfactor { $$[Exp] =

Absyn.LBINARY($1[Exp],Absyn.AND(),$3[Exp]); }

logfactor : relterm { $$[Exp] = $1[Exp]; }

| T_NOT relterm { $$[Exp] =

Absyn.LUNARY(Absyn.NOT(),$2[Exp]); }

relterm : addterm { $$[Exp] = $1[Exp]; }

| addterm relOperator addterm { $$[Exp] =

Absyn.RELATION($1[Exp],$2[Operator],$3[Exp]); }

addterm : term { $$[Exp] = $1[Exp]; }

| unoperator term { $$[Exp] =

Absyn.UNARY($1[Operator],$2[Exp]); }

| addterm woperator term { $$[Exp] =

Absyn.BINARY($1[Exp],$2[Operator],$3[Exp]); }

term : factor { $$[Exp] = $1[Exp]; }

| term soperator factor { $$[Exp] =

Absyn.BINARY($1[Exp],$2[Operator],$3[Exp]); }

factor : expElement { $$[Exp] = $1[Exp]; }

| expElement power factor { $$[Exp] =

Absyn.BINARY($1[Exp],$2[Operator],$3[Exp]); }
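The chain of nonterminals above (logicexp down to factor) encodes operator precedence structurally: each level only refers to the next-tighter level, so no separate precedence table is needed. A minimal Python sketch (illustrative only, using a hypothetical toy grammar with just `+` and `*`, not the thesis's rules) shows the same layering in recursive-descent form:

```python
# Toy illustration of precedence layering: parse_term only calls
# parse_factor for its operands, so '*' binds tighter than '+'.
import re

def tokenize(s):
    return re.findall(r'\d+|[+*]', s)

def parse_term(toks):            # term : factor | term '+' factor
    v = parse_factor(toks)
    while toks and toks[0] == '+':
        toks.pop(0)
        v = ('+', v, parse_factor(toks))
    return v

def parse_factor(toks):          # factor : NUM | factor '*' NUM
    v = int(toks.pop(0))
    while toks and toks[0] == '*':
        toks.pop(0)
        v = ('*', v, int(toks.pop(0)))
    return v

print(parse_term(tokenize("1+2*3")))  # ('+', 1, ('*', 2, 3))
```

Because the `*` operands are consumed at the factor level, `1+2*3` parses as `1+(2*3)` without any explicit precedence declarations.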

expElement : number { $$[Exp] = $1[Exp]; }

| cref { $$[Exp] = Absyn.CREF($1[ComponentRef]); }

| T_FALSE { $$[Exp] = Absyn.BOOL(false); }

| T_TRUE { $$[Exp] = Absyn.BOOL(true); }

| string { $$[Exp] = Absyn.STRING($1); }

| tuple { $$[Exp] = $1[Exp]; }

| LBRACE explist2 RBRACE { $$[Exp] =

Absyn.ARRAY($2[Exps]); }

| LBRACE functionargs RBRACE { $$[Exp] =

Absyn.CALL(Absyn.CREF_IDENT("array",{}),$2[FunctionArgs]); }

| LBRACK matrix RBRACK { $$[Exp] =

Absyn.MATRIX($2[Matrix]); }

| cref functioncall { $$[Exp] =

Absyn.CALL($1[ComponentRef],$2[FunctionArgs]); }

| DER functioncall { $$[Exp] =

Absyn.CALL(Absyn.CREF_IDENT("der",{}),$2[FunctionArgs]); }

| INITIAL functioncall { $$[Exp] =

Absyn.CALL(Absyn.CREF_IDENT("initial",{}),$2[FunctionArgs]); }

| LPAR exp RPAR { $$[Exp] = $2[Exp]; }

| T_END { $$[Exp] = Absyn.END(); }


number : UNSIGNED_INTEGER { $$[Exp] =

Absyn.INTEGER(stringInt($1)); }

| UNSIGNED_REAL { $$[Exp] = Absyn.REAL(stringReal($1)); }

matrix : explist2 { $$[Matrix] = {$1[Exps]}; }

| explist2 SEMICOLON matrix { $$[Matrix] =

$1[Exps]::$3[Matrix]; }

tuple : LPAR explist RPAR { $$[Exp] = Absyn.TUPLE($2[Exps]); }

explist : exp COMMA exp { $$[Exps] = {$1[Exp],$3[Exp]}; }

| exp COMMA explist { $$[Exps] = $1[Exp]::$3[Exps]; }

| /* empty */ { $$[Exps] = {}; }

explist2 : exp { $$[Exps] = {$1[Exp]}; }

| explist2 COMMA exp { $$[Exps] =

listReverse($3[Exp]::listReverse($1[Exps])); }

| /* empty */ { $$[Exps] = {}; }
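The explist2 action above appends to an immutable cons list, which only supports cheap prepending, by reversing, prepending, and reversing back. A small Python sketch (not thesis code) of the same idiom:

```python
# Mirror of the explist2 action: listReverse($3 :: listReverse($1)).
def list_reverse(lst):
    """Stand-in for MetaModelica's listReverse on a plain Python list."""
    return lst[::-1]

def append_via_double_reverse(exps, exp):
    """Append exp to exps using only prepend (::) and reverse."""
    return list_reverse([exp] + list_reverse(exps))

print(append_via_double_reverse([1, 2], 3))  # [1, 2, 3]
```

Each append costs two list traversals, but it keeps the grammar rule left-recursive, which LALR parsers handle with constant stack depth.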

cref : ident arraySubscripts { $$[ComponentRef] =

Absyn.CREF_IDENT($1[Ident],$2[ArrayDim]); }

| ident arraySubscripts DOT cref { $$[ComponentRef] =

Absyn.CREF_QUAL($1[Ident],$2[ArrayDim],$4[ComponentRef]); }

| DOT cref { $$[ComponentRef] =

Absyn.CREF_FULLYQUALIFIED($2[ComponentRef]); }

| WILD { $$[ComponentRef] = Absyn.WILD();}

| ALLWILD { $$[ComponentRef] = Absyn.ALLWILD();}

unoperator : PLUS { $$[Operator] = Absyn.UPLUS(); }

| MINUS { $$[Operator] = Absyn.UMINUS(); }

| PLUS_EW { $$[Operator] = Absyn.UPLUS_EW(); }

| MINUS_EW { $$[Operator] = Absyn.UMINUS_EW(); }

woperator : PLUS { $$[Operator] = Absyn.ADD(); }

| MINUS { $$[Operator] = Absyn.SUB(); }

| PLUS_EW { $$[Operator] = Absyn.ADD_EW(); }

| MINUS_EW { $$[Operator] = Absyn.SUB_EW(); }

soperator : STAR { $$[Operator] = Absyn.MUL(); }

| SLASH { $$[Operator] = Absyn.DIV(); }

| STAR_EW { $$[Operator] = Absyn.MUL_EW(); }

| SLASH_EW { $$[Operator] = Absyn.DIV_EW(); }

power : POWER { $$[Operator] = Absyn.POW(); }

| POWER_EW { $$[Operator] = Absyn.POW_EW(); }

relOperator : LESS { $$[Operator] = Absyn.LESS(); }

| LESSEQ { $$[Operator] = Absyn.LESSEQ(); }

| GREATER { $$[Operator] = Absyn.GREATER(); }

| GREATEREQ { $$[Operator] = Absyn.GREATEREQ(); }

| EQEQ { $$[Operator] = Absyn.EQUAL(); }

| LESSGT { $$[Operator] = Absyn.NEQUAL(); }

path : ident { $$[Path] = Absyn.IDENT($1[Ident]); }

| ident DOT path { $$[Path] =

Absyn.QUALIFIED($1[Ident],$3[Path]); }

| DOT path { $$[Path] = Absyn.FULLYQUALIFIED($2[Path]);

}

ident : IDENT { $$[Ident] = $1; }

string : STRING { $$ = trimquotes($1); } // trim the quotes off the string

comment : string { $$[Comment] = Absyn.COMMENT(NONE(),SOME($1));

}


| string annotation { $$[Comment] =

Absyn.COMMENT(SOME($2[Annotation]),SOME($1)); }

| annotation { $$[Comment] =

Absyn.COMMENT(SOME($1[Annotation]),NONE()); }

| /* empty */ { $$[Comment] =

Absyn.COMMENT(NONE(),NONE()); }

%%

public function trimquotes

"removes the surrounding quotes from inString"

input String inString;

output String outString;

algorithm

if (stringLength(inString)>2) then

outString := System.substring(inString,2,stringLength(inString)-1);

else

outString := "";

end if;

end trimquotes;
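For clarity, a Python rendering of trimquotes (a sketch, not thesis code): MetaModelica's System.substring(s, 2, stringLength(s)-1) is 1-based and inclusive, which corresponds to s[1:-1] in Python.

```python
def trimquotes(s: str) -> str:
    """Strip the first and last character (the surrounding double
    quotes) of a lexed STRING token, as in the thesis's trimquotes."""
    if len(s) > 2:
        return s[1:-1]
    return ""  # '""' or shorter: nothing between the quotes

print(trimquotes('"hello"'))  # hello
```

Note that the guard `len(s) > 2` makes an empty string literal `""` (two characters) map to the empty string as well.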

function fixArray

input ElementAttributes v1ElementAttributes;

input TypeSpec v2TypeSpec;

output ElementAttributes v1ElementAttributes2;

output TypeSpec v2TypeSpec2;

protected

Boolean flowPrefix,b1,b2 "flow" ;

Boolean streamPrefix "stream" ;

// Boolean inner_ "inner";

// Boolean outer_ "outer";

Variability variability,v1 "variability ; parameter, constant etc." ;

Direction direction,d1 "direction" ;

Absyn.Parallelism parallelism;

ArrayDim arrayDim,a1 "arrayDim" ;

Path path,p1;

Option<ArrayDim> oa1;

algorithm

Absyn.ATTR(flowPrefix=b1,streamPrefix=b2,parallelism=Absyn.NON_PARALLEL(),variability=v1,direction=d1,arrayDim=a1) := v1ElementAttributes;

Absyn.TPATH(path=p1,arrayDim=oa1) :=v2TypeSpec;

a1 := match oa1

local ArrayDim l1;

case SOME(l1)

then (l1);

case NONE() then ({});

end match;

v1ElementAttributes2 := Absyn.ATTR(b1,b2,parallelism,v1,d1,a1);

v2TypeSpec2 := Absyn.TPATH(p1,NONE());

end fixArray;
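fixArray moves the array dimensions stored on the type specification over to the element attributes and clears them on the type specification. A minimal Python sketch of that transformation (the Attr/TPath classes below are hypothetical stand-ins for Absyn.ATTR and Absyn.TPATH, reduced to the relevant fields):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class Attr:            # stand-in for Absyn.ATTR (subset of fields)
    flow: bool
    stream: bool
    array_dim: list

@dataclass
class TPath:           # stand-in for Absyn.TPATH
    path: str
    array_dim: Optional[list]

def fix_array(attr: Attr, tspec: TPath):
    """Move the type spec's array dimensions onto the attributes,
    leaving the type spec with no dimensions (NONE() in the thesis)."""
    dims = tspec.array_dim if tspec.array_dim is not None else []
    return replace(attr, array_dim=dims), replace(tspec, array_dim=None)

a, t = fix_array(Attr(False, False, []), TPath("Real", [3]))
print(a.array_dim, t.array_dim)  # [3] None
```

As in the thesis code, the match on the optional dimension defaults to an empty dimension list when the type spec carries none.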

function printContentStack

input AstStack astStk;

protected

list<Token> skToken;

list<Path> skPath;

list<ClassDef> skClassDef;

list<Ident> skIdent;

list<Class> skClass;

list<Program> skProgram;

list<lstClass> sklstClass;

list<String> skString;


list<Integer> skInteger;

algorithm

ASTSTACK(stackToken=skToken,stackPath=skPath,stackClassDef=skClassDef,stackIdent=skIdent,stackClass=skClass,stackProgram=skProgram,stacklstClass=sklstClass,stackString=skString,stackInteger=skInteger) := astStk;

print("\n Stack content:");

print(" skToken:");

print(intString(listLength(skToken)));

print(" skPath:");

print(intString(listLength(skPath)));

print(" skClassDef:");

print(intString(listLength(skClassDef)));

print(" skIdent:");

print(intString(listLength(skIdent)));

print(" skClass:");

print(intString(listLength(skClass)));

print(" skProgram:");

print(intString(listLength(skProgram)));

print(" sklstClass:");

print(intString(listLength(sklstClass)));

print(" skString:");

print(intString(listLength(skString)));

print(" skInteger:");

print(intString(listLength(skInteger)));

end printContentStack;


Appendix D

Error handling

Error1.mo

// name: error1.mo

// keywords: This tests the error handler "insert" when a model is not defined with a class, block, record, etc., and hints the developer with messages

// status: incorrect

int x y z w;

while x <> 99 loop

x := (x+111) - (y/3);

if x == 10 then

y := 234;

end if;

end while;

output message

Parsing Modelica with file ../../testsuite/omcc_test/error1.mo

starting lexer

Tokens processed:38

starting parser

[../../testsuite/omcc_test/error1.mo:5:1-5:4:writable] Error: Syntax error near: 'int', INSERT

token 'ENUMERATION' or 'CONNECTOR' or 'CLASS' or 'BLOCK', REPLACE token with

'ENUMERATION' or 'CONNECTOR' or 'CLASS' or 'BLOCK'

args:../../testsuite/omcc_test/error1.mo

OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser

Generator-2012

""

Detailed version of the output with parsing stack states:

Parsing Modelica with file ../../testsuite/omcc_test/error1.mo

starting lexer

Tokens processed:38

starting parser

Parsing tokens ParseCodeModelica ...../../testsuite/omcc_test/error1.mo

Tokens remaining:38

[State:0]{|0}

[IDENT,'int'][n:1368-NT:1449]

Syntax Error found yyerrlab2:0


ERROR RECOVERY INITIATED:

[State:0]{|0}

[StateStack Backup:{|0}

Check MERGE token until next token

**** Checking TOKEN: 336 action:5

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:IDENT 'A' (1:1-1:1)] action: 5

Lexer analyzer LexerCodeModelica...fileName

TOTAL Chars:4

Chars remaining:4

PROGRAM:{i}

BUFFER:{}

BKBUFFER:{}

STATE STACK:{|1} base:0 st:1

[Reading:'i' at p:1 line:1 rPos:1] evalState Before[c40,s1] After[c40,s1]

Chars remaining:3

PROGRAM:{n}

BUFFER:{i}

BKBUFFER:{}

STATE STACK:{|42|1} base:106 st:42

[Reading:'n' at p:2 line:1 rPos:2] evalState Before[c44,s42] After[c44,s42]

Chars remaining:2

PROGRAM:{t}

BUFFER:{ni}

BKBUFFER:{}

STATE STACK:{|114|42|1} base:207 st:114

[Reading:'t' at p:3 line:1 rPos:3] evalState Before[c50,s114] After[c4,s508]

Chars remaining:1

PROGRAM:{x}

BUFFER:{tni}

BKBUFFER:{}

STATE STACK:{|81|114|42|1} base:898 st:81

[Reading:'x' at p:4 line:1 rPos:4] evalState Before[c54,s81] After[c4,s508]

[State:0]{|0}

[IDENT,'y'][n:1368-NT:1449]

Syntax Error found yyerrlab2:0

Checking INSERT token:

**** Checking TOKEN: 258 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_ALGORITHM 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[T_ALGORITHM,'A'][n:1368-NT:1371]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 259 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_AND 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[T_AND,'A'][n:1368-NT:1372]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 260 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_ANNOTATION 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[T_ANNOTATION,'A'][n:1368-NT:1373]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 261 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:BLOCK 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[BLOCK,'A'][n:1368-NT:1374] SHIFT1

[State:1]{|1|0}

REDUCE3[Reducing(l:1,r:31)][nState:23]

[State:23]{|23|0}

[IDENT,'int'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}


[IDENT,'x'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'y'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'y'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'y'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 261

**** Checking TOKEN: 262 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CLASS 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[CLASS,'A'][n:1368-NT:1375] SHIFT1

[State:2]{|2|0}

REDUCE3[Reducing(l:1,r:24)][nState:23]

[State:23]{|23|0}

[IDENT,'int'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'x'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'y'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'y'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'y'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 262

**** Checking TOKEN: 263 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONNECT 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[CONNECT,'A'][n:1368-NT:1376]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 264 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONNECTOR 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[CONNECTOR,'A'][n:1368-NT:1377] SHIFT1

[State:3]{|3|0}

REDUCE3[Reducing(l:1,r:32)][nState:23]

[State:23]{|23|0}

[IDENT,'int'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'x'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'y'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'y'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'y'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 264

**** Checking TOKEN: 265 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONSTANT 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[CONSTANT,'A'][n:1368-NT:1378]

Syntax Error found yyerrlab2:0


**** Checking TOKEN: 266 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:DISCRETE 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[DISCRETE,'A'][n:1368-NT:1379]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 267 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:DER 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[DER,'A'][n:1368-NT:1380]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 268 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:DEFINEUNIT 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[DEFINEUNIT,'A'][n:1368-NT:1381]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 269 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EACH 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[EACH,'A'][n:1368-NT:1382]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 270 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ELSE 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[ELSE,'A'][n:1368-NT:1383]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 271 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ELSEIF 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[ELSEIF,'A'][n:1368-NT:1384]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 272 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ELSEWHEN 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[ELSEWHEN,'A'][n:1368-NT:1385]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 273 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_END 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[T_END,'A'][n:1368-NT:1386]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 274 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ENUMERATION 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[ENUMERATION,'A'][n:1368-NT:1387] SHIFT1

[State:4]{|4|0}

REDUCE3[Reducing(l:1,r:34)][nState:23]

[State:23]{|23|0}

[IDENT,'int'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'x'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'y'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'y'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}


[IDENT,'y'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 274

**** Checking TOKEN: 275 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EQUATION 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[EQUATION,'A'][n:1368-NT:1388]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 276 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ENCAPSULATED 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[ENCAPSULATED,'A'][n:1368-NT:1389] SHIFT1

[State:5]{|5|0}

[IDENT,'int'][n:48-NT:129] REDUCE2[Reducing(l:0,r:21)][nState:25]

[State:25]{|25|5|0}

REDUCE3[Reducing(l:2,r:18)][nState:22]

[State:22]{|22|0}

[IDENT,'int'][n:207-NT:288]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 277 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EXPANDABLE 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[EXPANDABLE,'A'][n:1368-NT:1390] SHIFT1

[State:6]{|6|0}

[IDENT,'int'][n:41-NT:122]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 278 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EXTENDS 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[EXTENDS,'A'][n:1368-NT:1391]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 279 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONSTRAINEDBY 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[CONSTRAINEDBY,'A'][n:1368-NT:1392]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 280 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EXTERNAL 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[EXTERNAL,'A'][n:1368-NT:1393]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 281 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_FALSE 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[T_FALSE,'A'][n:1368-NT:1394]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 282 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FINAL 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[FINAL,'A'][n:1368-NT:1395] SHIFT1

[State:7]{|7|0}

[IDENT,'int'][n:1485-NT:1566]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 283 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FLOW 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[FLOW,'A'][n:1368-NT:1396]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 284 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]


**** Process candidate token: [TOKEN:FOR 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[FOR,'A'][n:1368-NT:1397]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 285 action:2

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FUNCTION 'A' (1:1-1:1)] action: 2

[State:0]{|0}

[FUNCTION,'A'][n:1368-NT:1398] SHIFT1

[State:8]{|8|0}

REDUCE3[Reducing(l:1,r:29)][nState:23]

[State:23]{|23|0}

[IDENT,'int'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'x'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'y'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'y'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'y'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 285

Checking REPLACE token:

**** Checking TOKEN: 258 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_ALGORITHM 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[T_ALGORITHM,'A'][n:1368-NT:1371]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 259 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_AND 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[T_AND,'A'][n:1368-NT:1372]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 260 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_ANNOTATION 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[T_ANNOTATION,'A'][n:1368-NT:1373]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 261 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:BLOCK 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[BLOCK,'A'][n:1368-NT:1374] SHIFT1

[State:1]{|1|0}

REDUCE3[Reducing(l:1,r:31)][nState:23]

[State:23]{|23|0}

[IDENT,'x'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'y'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'z'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'z'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'z'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 261

**** Checking TOKEN: 262 action:3


**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CLASS 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[CLASS,'A'][n:1368-NT:1375] SHIFT1

[State:2]{|2|0}

REDUCE3[Reducing(l:1,r:24)][nState:23]

[State:23]{|23|0}

[IDENT,'x'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'y'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'z'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'z'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'z'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 262

**** Checking TOKEN: 263 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONNECT 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[CONNECT,'A'][n:1368-NT:1376]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 264 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONNECTOR 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[CONNECTOR,'A'][n:1368-NT:1377] SHIFT1

[State:3]{|3|0}

REDUCE3[Reducing(l:1,r:32)][nState:23]

[State:23]{|23|0}

[IDENT,'x'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'y'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'z'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'z'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'z'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 264

**** Checking TOKEN: 265 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONSTANT 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[CONSTANT,'A'][n:1368-NT:1378]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 266 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:DISCRETE 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[DISCRETE,'A'][n:1368-NT:1379]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 267 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:DER 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[DER,'A'][n:1368-NT:1380]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 268 action:3


**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:DEFINEUNIT 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[DEFINEUNIT,'A'][n:1368-NT:1381]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 269 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EACH 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[EACH,'A'][n:1368-NT:1382]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 270 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ELSE 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[ELSE,'A'][n:1368-NT:1383]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 271 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ELSEIF 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[ELSEIF,'A'][n:1368-NT:1384]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 272 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ELSEWHEN 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[ELSEWHEN,'A'][n:1368-NT:1385]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 273 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_END 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[T_END,'A'][n:1368-NT:1386]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 274 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ENUMERATION 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[ENUMERATION,'A'][n:1368-NT:1387] SHIFT1

[State:4]{|4|0}

REDUCE3[Reducing(l:1,r:34)][nState:23]

[State:23]{|23|0}

[IDENT,'x'][n:110-NT:191] SHIFT1

[State:38]{|38|23|0}

[IDENT,'y'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'z'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'z'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'z'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 274

**** Checking TOKEN: 275 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EQUATION 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[EQUATION,'A'][n:1368-NT:1388]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 276 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:ENCAPSULATED 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[ENCAPSULATED,'A'][n:1368-NT:1389] SHIFT1


[State:5]{|5|0}

[IDENT,'x'][n:48-NT:129] REDUCE2[Reducing(l:0,r:21)][nState:25]

[State:25]{|25|5|0}

REDUCE3[Reducing(l:2,r:18)][nState:22]

[State:22]{|22|0}

[IDENT,'x'][n:207-NT:288]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 277 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EXPANDABLE 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[EXPANDABLE,'A'][n:1368-NT:1390] SHIFT1

[State:6]{|6|0}

[IDENT,'x'][n:41-NT:122]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 278 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EXTENDS 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[EXTENDS,'A'][n:1368-NT:1391]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 279 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:CONSTRAINEDBY 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[CONSTRAINEDBY,'A'][n:1368-NT:1392]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 280 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:EXTERNAL 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[EXTERNAL,'A'][n:1368-NT:1393]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 281 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:T_FALSE 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[T_FALSE,'A'][n:1368-NT:1394]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 282 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FINAL 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[FINAL,'A'][n:1368-NT:1395] SHIFT1

[State:7]{|7|0}

[IDENT,'x'][n:1485-NT:1566]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 283 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FLOW 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[FLOW,'A'][n:1368-NT:1396]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 284 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FOR 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[FOR,'A'][n:1368-NT:1397]

Syntax Error found yyerrlab2:0

**** Checking TOKEN: 285 action:3

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:FUNCTION 'A' (1:1-1:1)] action: 3

[State:0]{|0}

[FUNCTION,'A'][n:1368-NT:1398] SHIFT1

[State:8]{|8|0}

REDUCE3[Reducing(l:1,r:29)][nState:23]

[State:23]{|23|0}

[IDENT,'x'][n:110-NT:191] SHIFT1


[State:38]{|38|23|0}

[IDENT,'y'][n:818-NT:899] SHIFT1

[State:31]{|31|38|23|0}

REDUCE3[Reducing(l:1,r:332)][nState:33]

[State:33]{|33|38|23|0}

[IDENT,'z'][n:101-NT:182] REDUCE2[Reducing(l:1,r:329)][nState:98]

[State:98]{|98|38|23|0}

[IDENT,'z'][n:83-NT:164] REDUCE2[Reducing(l:0,r:220)][nState:216]

[State:216]{|216|98|38|23|0}

REDUCE3[Reducing(l:2,r:214)][nState:97]

[State:97]{|97|38|23|0}

[IDENT,'z'][n:197-NT:278] SHIFT1

**** Candidate TOKEN ADDED: 285

Check ERASE token until next token

**** Checking TOKEN: 336 action:1

**** Last token: [TOKEN:IDENT 'int' (5:1-5:4)]

**** Process candidate token: [TOKEN:IDENT 'A' (1:1-1:1)] action: 1

[State:0]{|0}

[IDENT,'x'][n:1368-NT:1449]

Syntax Error found yyerrlab2:0

ERROR NUM:3 DETECTED:

'int', INSERT token 'ENUMERATION' or 'CONNECTOR' or 'CLASS' or 'BLOCK',

REPLACE token with 'ENUMERATION' or 'CONNECTOR' or 'CLASS' or 'BLOCK'

[State:0]{|0}

[IDENT,'x'][n:1368-NT:1449]

Syntax Error found yyerrlab2:0

ERROR RECOVERY INITIATED:

[State:0]{|0}

[StateStack Backup:{|0}

ERROR NUM:3 DETECTED:

'x'

[../../testsuite/omcc_test/error1.mo:5:1-5:4:writable] Error: Syntax error

near: 'int', INSERT token 'ENUMERATION' or 'CONNECTOR' or 'CLASS' or 'BLOCK',

REPLACE token with 'ENUMERATION' or 'CONNECTOR' or 'CLASS' or 'BLOCK'

args:../../testsuite/omcc_test/error1.mo

OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and

Parser Generator-2012

""


Error2.mo

// name: error2.mo

// keywords: This tests the error handler "insert or replace" when variables, conditional statements, or loops are not defined under an equation or algorithm section

// status: incorrect

class error_test

int x,y,z,w;

while x <> 99 loop

x := (x+111) - (y/3);

if x == 10 then

y := 234;

end if;

end while;

end error_test;

output message

Parsing Modelica with file ../../testsuite/omcc_test/error2.mo

starting lexer

Tokens processed:45

starting parser

[../../testsuite/omcc_test/error2.mo:7:1-7:6:writable] Error: Syntax

error near: 'while', INSERT token 'ALGORITHM', REPLACE token with

'EQUATION' or 'ALGORITHM'

args:../../testsuite/omcc_test/error2.mo

OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer

and Parser Generator-2012

""


Error3.mo

// name: error3.mo
// keywords: This tests the error handler "insert" when a loop statement is
// missing the keyword "loop" and hints the user with a possible token to insert
// status: incorrect
class error_test
  int x,y,z,w;
algorithm
  while x <> 99
    x := (x+111) - (y/3);
    if x == 10 then
      y := 234;
    end if;
  end while;
end error_test;

output message

Parsing Modelica with file ../../testsuite/omcc_test/error3.mo
starting lexer
Tokens processed:45
starting parser
[../../testsuite/omcc_test/error3.mo:9:3-9:4:writable] Error: Syntax error near: 'x', INSERT token 'LOOP'
args:../../testsuite/omcc_test/error3.mo
OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012
""


Error4.mo

// name: error4.mo
// keywords: This tests the error handler "merge" when extra spaces are
// identified between two tokens and hints the user with possible tokens to merge
// status: incorrect
class error_test
  int x,y,z,w;
algorithm
  while x <> 99 loop
    x := (x+111) - (y/3);
    if x = = 10
      y := 234;
    end if;
  end while;
end error_test;

output message

Parsing Modelica with file ../../testsuite/omcc_test/error4.mo
starting lexer
Tokens processed:47
starting parser
[../../testsuite/omcc_test/error4.mo:10:9-10:10:writable] Error: Syntax error near: '=', MERGE tokens '=' and '='
args:../../testsuite/omcc_test/error4.mo
OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012
""
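The "merge" hint can be approximated as a purely lexical check: if the text of the offending token glued to its neighbour spells a known token, a stray space inside an operator is the likely cause. A hypothetical Python sketch follows; the token set and function name are illustrative assumptions, not OMCCp's actual code.

```python
# Hypothetical sketch of the "merge" recovery heuristic; names are
# illustrative, not the actual OMCCp/MetaModelica implementation.

# A few Modelica operator spellings the lexer knows; '==' is the
# equality operator that 'x = = 10' in error4.mo was meant to use.
KNOWN_TOKENS = {"==", "<>", ":=", "<=", ">=", "=", "<", ">"}

def merge_suggestion(tok, next_tok):
    """If concatenating two adjacent tokens yields a known token, the
    user probably typed a stray space inside an operator."""
    if tok + next_tok in KNOWN_TOKENS:
        return "MERGE tokens '%s' and '%s'" % (tok, next_tok)
    return None

# error4.mo contains 'x = = 10'; the second '=' triggers the error:
print(merge_suggestion("=", "="))  # MERGE tokens '=' and '='
```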


Error5.mo

// name: error5.mo
// keywords: This tests the error handler "erase" when a token is repeated
// and hints the user with a possible replacement
// status: incorrect
class error_test
  int x,y,z,w;
algorithm
  while x <> 99 loop
    x := (x+111) - (y/3);
    if x == 10 then then
      y := 234;
    end if;
  end while;
end error_test;

output message

Parsing Modelica with file ../../testsuite/omcc_test/error5.mo
starting lexer
Tokens processed:47
starting parser
[../../testsuite/omcc_test/error5.mo:10:20-10:24:writable] Error: Syntax error near: 'then', REPLACE token with '+' or '-' or '.' or 'NOT', ERASE token
args:../../testsuite/omcc_test/error5.mo
OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012
""
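The "ERASE token" part of the hint above can be seen as one more trial repair: drop the offending token, replay the parse, and offer ERASE if parsing then gets past the error point (here, the duplicated 'then'). A hypothetical Python sketch, assuming an oracle that stands in for replaying the real LALR parser:

```python
# Hypothetical sketch of the "erase" recovery check, not the actual
# OMCCp/MetaModelica code: remove the offending token and test whether
# the remaining stream is still a valid prefix of the grammar.

def erase_suggestion(tokens, error_index, can_parse_prefix):
    """can_parse_prefix: oracle telling whether a token list is a valid
    prefix of some sentence in the grammar."""
    repaired = tokens[:error_index] + tokens[error_index + 1:]
    if can_parse_prefix(repaired):
        return "ERASE token"
    return None

# Toy oracle: a prefix is valid as long as 'then' is never duplicated.
def no_double_then(tokens):
    return all(not (a == b == "then") for a, b in zip(tokens, tokens[1:]))

# The statement from error5.mo; index 5 is the second 'then':
stmt = ["if", "x", "==", "10", "then", "then", "y", ":=", "234"]
print(erase_suggestion(stmt, 5, no_double_then))  # ERASE token
```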


Error6.mo

// name: error6.mo
// keywords: This tests the error handler "Insert at end of token"
// status: incorrect
class error_test
  int x,y,z,w;
algorithm
  while x <> 99 loop
    x := (x+111) - (y/3);
    if x == 10 then
      y := 234;
    end if;
  end while;
end error_test

output message

Parsing Modelica with file ../../testsuite/omcc_test/error6.mo
starting lexer
Tokens processed:45
starting parser
[../../testsuite/omcc_test/error6.mo:14:1-14:15:writable] Error: Syntax error near: 'end error_test', INSERT at the End token 'SEMICOLON'
args:../../testsuite/omcc_test/error6.mo
OMCCp v0.9.2 (OpenModelica compiler-compiler Parser generator) Lexer and Parser Generator-2012
""



The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

© ARUNKUMAR PALANISAMY