
1

(BOCA) Bijzondere Onderwerpen Computer Architectuur (Special Topics in Computer Architecture)

Block A

Introduction

2

The aims of the course

• Show the relation between the algorithm and the architecture.

• Derive the architecture from the algorithm.

• Explain and formalize the design process.

• Explain the distinction between structure and behavior.

• Explain some architectures.

3

The design process

A design description may express:

• Behavior: expresses the relation between the input and the output value-streams of the system.

• Structure: describes how the system is decomposed into subsystems and how these subsystems are connected.

• Geometry: describes where the different parts are located.

Pure behavioral, structural or geometrical descriptions do not exist in practice.

4

Abstraction levels

Behavior: Application, Algorithm, Basic operator, Boolean logic, Physical level

Structure: Block level, Processing element, Basic block, Transistor

Geometry: Board level, Layout, Cell

5

The Design Process

(Figure: an idea is refined into a chain of specifications Spec 0, Spec 1, ..., Spec N; each refinement step is verified, at first by simulation only, later by simulation and formal verification.)

For practical reasons a specification must be executable.

The implementation i is the specification for the implementation i+1.

6

Descriptions

• Predicate logic

• Algebra (language Z, SDL (VDM))

• Process algebras: CCS, CSP, LOTOS

• VHDL, Verilog

• Silage, ......

7

Specification overloading

Specification overloading means that the specification gives a possibly unwanted implementation suggestion, i.e. the behavioral specification expresses structure.

In practice:

A behavioral specification always contains structure.

8

Same function, same behavior; different expressions, different structure, different designs.

Example: two algebraically equal expressions in a, b, x and z, one written as a sum of products and one in factored form, suggest two different arrangements of multipliers and adders, and hence two different designs.
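To make this concrete, here is a minimal C sketch with an illustrative pair of expressions (z = a·x² + b·x versus z = (a·x + b)·x; these particular expressions are chosen for illustration, not taken from the slide). Both compute the same value, but read as hardware they suggest different operator structures.

#include <stdio.h>

/* Illustrative only: two algebraically equal expressions for the same function. */
static int z_sum_of_products(int a, int b, int x) {
    return a * x * x + b * x;      /* suggests: two products feeding an adder */
}

static int z_factored(int a, int b, int x) {
    return (a * x + b) * x;        /* suggests: multiply, add, multiply in a chain (Horner style) */
}

int main(void) {
    printf("%d %d\n", z_sum_of_products(3, 4, 5), z_factored(3, 4, 5));   /* same value: 95 95 */
    return 0;
}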

9

Architecture

Definition:

Architecture is the way in which hardware and software is structured;

the structure is usually based on grandiose design philosophies.

Architecture deals with fundamental elements that affect the way a system operates and thus its capabilities and its limitations.

The New American Computer Dictionary

10

Our focus

• Array processors.

• Systolic arrays.

• Wave-front array processors.

• Architectures for embedded algorithms, such as digital signal processing algorithms.

11

Array processor

An array processor is a structure in which identical processing elements (PEs) are arranged regularly.

(Figure: a row of PEs (1 dimension) and a grid of PEs (2 dimensions).)

12

Array processor: 3 dimensions

(Figure: a three-dimensional arrangement of PEs.)

13

Systolic array

In a systolic array processor all communication paths contain at least one unit delay (register).

Delay constraints are local. Therefore the array can be extended without limit, without changing the cells.

(Figure: a 2-dimensional PE grid in which every edge contains a register or delay.)

14

Wave-front array

(Figure: a 2-dimensional grid of PEs.)

15

Array Processors

Can be approached from:

• Application
• Algorithm
• Architecture
• Technology

We will focus on Algorithm and Architecture:

Derive the architecture from the algorithm.

16

Array processors: Application areas

• Speech processing
• Image processing (video, medical, ...)
• Radar
• Weather
• Medical signal processing
• Geology
• ...

These applications require many simple calculations on a lot of data in a short time; general purpose processors do not provide sufficient processing power.

17

Example: video processing

1000 operations per pixel (which is not that much)
1024 × 1024 pixels per frame (high-density TV)
50 frames per second (100 Hz TV)

gives 50 G operations per second, with less than 1 Watt available.

A Pentium at 2 GHz delivers 2 G operations per second at more than 30 Watt,
so 25 Pentiums and 750 Watt would be required.
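As a quick sanity check of these numbers, a minimal C sketch; the figures 1000 operations/pixel, 1024 × 1024 pixels, 50 frames/s, 2 Gops/s and 30 Watt per Pentium are the slide's assumptions.

#include <stdio.h>

int main(void) {
    double ops_per_pixel    = 1000.0;
    double pixels_per_frame = 1024.0 * 1024.0;
    double frames_per_sec   = 50.0;

    double required = ops_per_pixel * pixels_per_frame * frames_per_sec;
    printf("required: %.1f Gops/s\n", required / 1e9);       /* ~52.4, rounded to 50 on the slide */

    double pentium_ops  = 2e9;                                /* 2 GHz Pentium: ~2 Gops/s */
    double pentium_watt = 30.0;
    double n = 50e9 / pentium_ops;                            /* using the slide's rounded 50 Gops/s */
    printf("Pentiums: %.0f, power: %.0f Watt\n", n, n * pentium_watt);   /* 25, 750 Watt */
    return 0;
}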

18

Description of the algorithms

In practice the algorithms are described (specified) in some programming language.

In our (toy) examples we use:

• programming languages
• algebraic descriptions

19

Examples of algorithms we will use:

Filter:

y_t = Σ_{i=0}^{N-1} h_i·x_{t-i}

Matrix algebra:

y_i = Σ_{j=0}^{N-1} c_{i,j}·x_j,   i.e.  y = C·x

Transformations, like the Fourier transform and the Z-transform

Sorting

. . . .

20

Graphs

Graphs are applicable for describing:

• behavior
• structure

Dependency graphs consist of:

• nodes, expressing operations or functions
• edges, expressing data dependencies or the flow of data

So, graphs are suitable to describe the design flow from algorithm to architecture.

21

Design flow example: Sorting

idea
↓
program (imperative)
↓
single assignment code (functional)
↓
recurrent relations
↓
dependency graph

22

Sorting: the idea

(Figure: inserting the value 8 into the sorted sequence 12 10 9 8 5 3 2 1: the elements smaller than the new value are shifted one position so that the empty place needed for it becomes available.)

23

(Figure: the insertion step, element by element. The new value x is compared with the array element m_j (x ? m_j); if x is the larger one, the two are swapped: y := m_j; m_j := x; x := y. The example shows 8 being inserted into the sequence 9 9 8 6 3 3 1.)

24

Sorting: inserting one element

Identical descriptions of swapping:

if (x >= m[j])
{ y = m[j];
  m[j] = x;
  x = y;
}

if (x >= m[j]) swap(m[j],x);

m[j],x = MaxMin(m[j],x);

Inserting an element into a sorted array of i elements such that the order is preserved:

m[i] = - infinite;
for(j = 0; j < i+1; j++)
{ m[j],x = MaxMin(m[j],x);
}

25

Sorting: The program

Sorting N elements in an array is composed from N times inserting an element into a sorted array of N elements such that the order is preserved. An empty array is ordered.

int in[0:N-1], x[0:N-1], m[0:N-1];

input:   for(int i = 0; i < N; i++)
         { x[i] = in[i]; m[i] = - infinite; }

body:    for(int i = 0; i < N; i++)
         { for(j = 0; j < i+1; j++)
           { m[j],x[i] = MaxMin(m[j],x[i]);}
         }

output:  for(int j = 0; j < N; j++)
         { out[j] = m[j];}
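A minimal runnable C sketch of this program, with the MaxMin notation written out as a compare-and-swap and INT_MIN standing in for -infinite; the concrete input values are only an example.

#include <stdio.h>
#include <limits.h>

#define N 4

int main(void) {
    int in[N] = { 5, 7, 4, 6 };                       /* example input */
    int x[N], m[N], out[N];

    /* input */
    for (int i = 0; i < N; i++) { x[i] = in[i]; m[i] = INT_MIN; }   /* INT_MIN plays the role of -infinite */

    /* body: m[j],x[i] = MaxMin(m[j],x[i]) written out as a compare-and-swap */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < i + 1; j++)
            if (x[i] >= m[j]) { int y = m[j]; m[j] = x[i]; x[i] = y; }

    /* output: m now holds the values in descending order */
    for (int j = 0; j < N; j++) { out[j] = m[j]; printf("%d ", out[j]); }
    printf("\n");                                      /* prints: 7 6 5 4 */
    return 0;
}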

26

Sorting: Towards ‘Single assignment’

Single assignment: each scalar variable is assigned only once.

Why? The goal is a data dependency graph:

- nodes expressing operations or functions
- edges expressing data dependencies or the flow of data

27

Sorting: Towards ‘Single assignment’

Single assignment: each scalar variable is assigned only once.

Why? Consider the code

x=a+b;
x=c*d;

Nodes: an adder with inputs a and b producing x, and a multiplier with inputs c and d producing x.

Graph: how do you connect these? Both nodes claim to produce the same variable x.

28

Sorting: Towards ‘Single assignment’

Single assignment: each scalar variable is assigned only once.

Why? The code

x=a+b;
x=c*d;

is a description that is already optimized towards an implementation (memory optimization: the variable x is reused). But fundamentally you produce two different values, e.g. x1 and x2.

29

Sorting: Towards ‘Single assignment’

Single assignment: each scalar variable is assigned only once.

for(int i = 0; i < N; i++)
{ for(j = 0; j < i+1; j++)
  { m[j],x[i] = MaxMin(m[j],x[i]);}
}

Start with m[j]: m[j] at loop index i depends on the value at loop index i-1, hence

for(int i = 0; i < N; i++)
{ for(j = 0; j < i+1; j++)
  { m[i,j],x[i] = MaxMin(m[i-1,j],x[i]);}
}

30

Sorting: Towards ‘Single assignment’

x[i] at loop index j depends on the value at loop index j-1:

for(int i = 0; i < N; i++)
{ for(j = 0; j < i+1; j++)
  { m[i,j],x[i] = MaxMin(m[i-1,j],x[i]);}
}

hence

for(int i = 0; i < N; i++)
{ for(j = 0; j < i+1; j++)
  { m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1]);}
}

31

Sorting: The algorithm in ‘single assignment’

All scalar variables are assigned only once: the algorithm satisfies the single assignment property.

int in[0:N-1], x[0:N-1,-1:N-1], m[-1:N-1,0:N-1];

input:   for(int i = 0; i < N; i++)
         { x[i,-1] = in[i]; m[i-1,i] = - infinite; }

body:    for(int i = 0; i < N; i++)
         { for(j = 0; j < i+1; j++)
           { m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1]);}
         }

output:  for(int j = 0; j < N; j++)
         { out[j] = m[N-1,j];}
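A minimal runnable C sketch of this single-assignment version; the +1 index offsets are only there because C arrays cannot start at index -1, and INT_MIN again stands in for -infinite.

#include <stdio.h>
#include <limits.h>

#define N 4

int main(void) {
    int in[N] = { 5, 7, 4, 6 };
    /* x[i][j+1] stands for x[i,j] (j = -1..N-1), m[i+1][j] stands for m[i,j] (i = -1..N-1). */
    int x[N][N + 1], m[N + 1][N], out[N];

    /* input */
    for (int i = 0; i < N; i++) {
        x[i][0] = in[i];          /* x[i,-1]  = in[i]      */
        m[i][i] = INT_MIN;        /* m[i-1,i] = - infinite */
    }

    /* body: m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1]) */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < i + 1; j++) {
            int mv = m[i][j];     /* m[i-1,j] */
            int xv = x[i][j];     /* x[i,j-1] */
            m[i + 1][j] = (mv >= xv) ? mv : xv;   /* max */
            x[i][j + 1] = (mv >= xv) ? xv : mv;   /* min */
        }

    /* output: out[j] = m[N-1,j] */
    for (int j = 0; j < N; j++) { out[j] = m[N][j]; printf("%d ", out[j]); }
    printf("\n");                                 /* prints: 7 6 5 4 */
    return 0;
}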

32

Sorting: The algorithm in ‘single assignment’

int in[0:N-1], x[0:N-1,-1:N-1], m[-1:N-1,0:N-1];

for(int i = 0; i < N; i++)
{ x[i,-1] = in[i]; m[i-1,i] = - infinite; }

for(int i = 0; i < N; i++)
{ for(j = 0; j < i+1; j++)
  { m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1]);}
}

(Figure: the corresponding dependency graph for n = 4: a triangular array of MaxMin nodes indexed by i and j. The inputs x0..x3 enter on the x[i,-1] edges, the -∞ constants enter on the m[i-1,i] edges, and the outputs leave on the m[n-1,j] edges. The highlighted node is i = 1, j = 0.)

33

Sorting: The algorithm in ‘single assignment’

(Figure: the dependency-graph trace for the input values 5, 7, 4, 6, continued one step.)

34

Sorting: The algorithm in ‘single assignment’

(Figure: the dependency-graph trace, continued one step further.)

35

Sorting: The algorithm in ‘single assignment’

(Figure: the dependency-graph trace, continued one step further.)

36

Sorting: The algorithm in ‘single assignment’

(Figure: the dependency-graph trace, continued one step further.)

37

Sorting: The algorithm in ‘single assignment’

(Figure: the dependency-graph trace, continued one step further.)

38

Sorting: Recurrent relation

A description in single assignment can be directly translated into a recurrent relation.

declaration:  in[0:N-1], out[0:N-1], x[0:N-1, -1:N-1], m[-1:N-1, 0:N-1];

body:         m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1])

area:         0 <= i < N;  0 <= j < i+1

input:        x[i,-1] = in[i];  m[i-1,i] = - infinite

output:       out[j] = m[N-1,j]

Notice that the order of these relations is arbitrary.

39

Sorting: Body in two dimensions

body:  m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1])

The body is executed for all i and j. Hence two dimensions.

(Figure: a MaxMin node with inputs m[i-1,j] and x[i,j-1] and outputs m[i,j] and x[i,j]; the i index runs vertically, the j index horizontally.)

40

Sorting: Body implementation

body:  m[i,j],x[i,j] = MaxMin(m[i-1,j],x[i,j-1])

if( m[i-1,j] <= x[i,j-1])
{ m[i,j] = x[i,j-1]; x[i,j] = m[i-1,j]; }
else
{ m[i,j] = m[i-1,j]; x[i,j] = x[i,j-1]; }

(Figure: a possible cell implementation: a comparator steering two multiplexers that route m[i-1,j] and x[i,j-1] to the outputs m[i,j] and x[i,j].)

41

Sorting: Implementation N = 4

(Figure: the dependency graph for N = 4: a triangular arrangement of PEs, PE = MaxMin, with i = 0..3 running vertically and j = 0..3 horizontally. The inputs x[0,-1] .. x[3,-1] enter at the left, the constants m[-1,0], m[0,1], m[1,2], m[2,3] = -∞ enter along the diagonal, and the sorted results leave at the bottom as m[3,0], m[3,1], m[3,2], m[3,3].)

42

Sorting: Example N = 4

(Figure: an example run of the N = 4 sorting array on concrete input values.)

43

Something on functions

Tuple: (a, b) with a ∈ A, b ∈ B.

Cartesian product A × B: the set of all tuples, A × B = { (a, b) | a ∈ A, b ∈ B }.

The number of tuples in the set is |A × B| = |A| · |B|.

If Q is a set and P is a subset of Q (P ⊆ Q), then the set of all subsets of Q is 2^Q.

The number of subsets of Q is 2^|Q|.

Hence, the set of all subsets of A × B is 2^(A × B), and the number of subsets of A × B is 2^(|A|·|B|).

44

Something on functions

Function F:  F ∈ Y^X.

Y^X is the set of all functions with domain X and co-domain Y.

F is a function in Y^X if and only if each element of the domain X is mapped by F onto a single element of the co-domain Y:

∀a: a ∈ X ⇒ F(a) ∈ Y

∀a,b,c: a ∈ X ∧ F(a) = b ∧ F(a) = c ⇒ b = c

F can be represented as a set of tuples (a, b) with a ∈ X and b ∈ Y.

Hence, Y^X ⊆ 2^(X × Y).

45

Functions, Arrays, Tuples, Sequences, ....

Arrays, tuples and sequences are all representations of the same set of functions

V^(D_{l,u})

in which D_{l,u} is a closed subset of the set of integers Z:

D_{l,u} = { z ∈ Z | l ≤ z ≤ u }

and V is some value co-domain.

So y = (y_0, y_1, y_2, ....., y_{N-1}) corresponds to y ∈ V^(D_{0,N-1}).

Hence, y_i, y(i) and y[i] are syntactically different notations for the function value in i.

46

Functions on more than one variable: Currying

A function on two variables can be represented in three different ways:

F(a, b),  F_a(b)  and  F_b(a),  with  F(a, b) = F_a(b) = F_b(a)

As a function on tuples:

F ∈ V^(A × B),   F = { ((a, b), v) | (a, b) ∈ A × B, v ∈ V },   v = F(a, b)

47

Functions on more than one variable: Currying

F(a, b) = F_a(b) = F_b(a)

Curried on a:   F* ∈ (V^B)^A,   F* = { (a, p) | a ∈ A, p ∈ V^B },
                F*(a) = F_a  with  F_a = { (b, v) | v = F(a, b) }

Curried on b:   F** ∈ (V^A)^B,  F** = { (b, q) | b ∈ B, q ∈ V^A },
                F**(b) = F_b  with  F_b = { (a, v) | v = F(a, b) }

48

Functions on more than one variable: Currying (Example)

v = F(a, b) = 2a + b,   a ∈ {0, 1, 2, 3},   b ∈ {0, 1, 2}

v:   b = 2:   2  4  6  8
     b = 1:   1  3  5  7
     b = 0:   0  2  4  6
          a = 0  1  2  3

F*(2) = F_2:   0 ↦ 4,  1 ↦ 5,  2 ↦ 6          (F_2(b) = 4 + b)

F**(1) = F_1:  0 ↦ 1,  1 ↦ 3,  2 ↦ 5,  3 ↦ 7   (F_1(a) = 2a + 1)

F(2, 1) = F_2(1) = F_1(2) = 5
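A minimal C sketch of this example. C has no closures, so the partially applied function F*(a) is represented here by a small struct holding the fixed argument; the names Fa, curry_a and apply_a are illustrative.

#include <stdio.h>

/* The example function v = F(a,b) = 2a + b. */
static int F(int a, int b) { return 2 * a + b; }

/* Curried form F* : A -> (B -> V). The partially applied function F*(a)
   is represented by a struct that stores the fixed first argument. */
typedef struct { int a; } Fa;
static Fa  curry_a(int a)       { Fa f = { a }; return f; }
static int apply_a(Fa f, int b) { return F(f.a, b); }        /* F*(a)(b) = F(a,b) */

int main(void) {
    Fa F2 = curry_a(2);                                      /* F*(2): b -> 4 + b */
    printf("%d %d\n", F(2, 1), apply_a(F2, 1));              /* both print 5      */
    return 0;
}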

49

Linear Time Invariant Systems

x → F → y

x and y are streams: functions of time z, with x ∈ V^Z and y ∈ V^Z.

F ∈ (V^Z)^(V^Z)

Time is represented by the set of integers Z, so F maps functions onto functions.

Obviously, this class of functions also models systems that cannot exist in reality, for example non-causal systems.

50

Adding functions

x and y are streams modeled by functions on Z: x_i ∈ V^Z.

Adding two streams is defined pointwise:

x3 = x1 + x2   ⟺   x3(z) = x1(z) + x2(z)  for all z ∈ Z

(Figure: two signals and their pointwise sum.)

51

Linear functions, linear systems

Definition:

A system F is called linear if

F(a·x1 + b·x2) = a·F(x1) + b·F(x2)

or:

y1 = a·F(x1),  y2 = b·F(x2)   ⇒   y1 + y2 = F(a·x1 + b·x2)

(Figure: if x1 produces y1 and x2 produces y2, then x1 + x2 produces y1 + y2.)

52

Time invariant systems

Definition:

A system F is called time invariant if

y1 = F(x1)  and  x2(t) = x1(t - t0)

imply that

y2 = F(x2)  satisfies  y2(t) = y1(t - t0)

(Figure: a shifted input x2 produces the correspondingly shifted output y2.)
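As a small illustration (not from the slides), a C sketch that checks the time-invariance property numerically for a simple 2-tap moving-average system F, by comparing F applied to a shifted input against the shifted output; the test signal and the shift t0 are arbitrary.

#include <stdio.h>

#define LEN 16

/* A simple LTI system: 2-tap moving average y(z) = (x(z) + x(z-1)) / 2. */
static void F(const double *x, double *y, int len) {
    for (int z = 0; z < len; z++)
        y[z] = (x[z] + (z > 0 ? x[z - 1] : 0.0)) / 2.0;
}

int main(void) {
    double x1[LEN] = { 0 }, x2[LEN] = { 0 }, y1[LEN], y2[LEN];
    int t0 = 3;                                           /* shift */

    x1[2] = 1.0; x1[3] = -2.0; x1[4] = 0.5;               /* some test input */
    for (int t = t0; t < LEN; t++) x2[t] = x1[t - t0];    /* x2(t) = x1(t - t0) */

    F(x1, y1, LEN);
    F(x2, y2, LEN);

    /* Time invariance: y2(t) must equal y1(t - t0). */
    int ok = 1;
    for (int t = t0; t < LEN; t++)
        if (y2[t] != y1[t - t0]) ok = 0;
    printf("time invariant on this input: %s\n", ok ? "yes" : "no");
    return 0;
}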

53

Linear time-invariant systems

Why?

Linear: Because they can easily be described

Time-invariant: Because electrical systems like transistors, resistors, capacitances and inductances satisfy this property.

54

The convolution algorithm

The behavior of a linear time-invariant system can be fully described by its impulse response h, i.e. the response on the output to a single unit pulse on the input.

The response y on the output to an input stream x then follows from:

y(z) = Σ_i x(i)·h(z - i)     or     y = x ∗ h

We will derive this convolution operation for time-discrete signals.

55

The convolution algorithm

Let the unit sample sequence δ be defined by

δ(i)(z) = 1 if z = i, and 0 otherwise

in which z represents time and i represents the location of the unit pulse; i, z ∈ Z, δ(i) ∈ V^Z, δ ∈ (V^Z)^Z.

(Figure: δ(i)(z) as a function of z: a single pulse of height 1 at z = i.)

56

The convolution algorithm

• Step 1: express x using a delta function

57

The convolution algorithm

Then

x = Σ_i x(i)·δ(i)

in which δ(i) is a function on Z and x(i) is a scalar.

(Figure: a signal x(z) decomposed into scaled, shifted unit pulses x(2)·δ(2), x(3)·δ(3), x(4)·δ(4), x(5)·δ(5), ....)

58

The convolution algorithm

• Step 1: express x using a delta function
• Step 2: rewrite time-shifted delta function

59

The convolution algorithm

Shifting a signal x over z0 gives y(z) = x(z - z0).

Hence δ(i)(z) = δ(0)(z - i).

(Figure: a signal x(z) and a shifted copy y(z).)

60

The convolution algorithm

• Step 1: express x using a delta function
• Step 2: rewrite time-shifted delta function
• Step 3: rewrite impulse response using time invariance property

61

The convolution algorithm

Consider a linear time-invariant system F.

Let h*(i) be the response of this system to the unit sample sequence δ(i):

δ(i) → F → h*(i)

F is time-invariant, so

h*(i)(z) = h*(0)(z - i)

62

The convolution algorithm

• Step 1: express x using a delta function
• Step 2: rewrite time-shifted delta function
• Step 3: rewrite impulse response using time invariance property
• Step 4: rewrite impulse response using linearity property

63

The convolution algorithm

Example:

δ(0)(z)   → F →  h*(0)(z)
-δ(1)(z)  → F →  -h*(1)(z)
½·δ(2)(z) → F →  ½·h*(2)(z)

In general (linearity and time invariance):

a·δ(i)(z) = a·δ(0)(z - i)   → F →   a·h*(i)(z) = a·h*(0)(z - i)

64

The convolution algorithm

• Step 1: express x using a delta function
• Step 2: rewrite time-shifted delta function
• Step 3: rewrite impulse response using time invariance property
• Step 4: rewrite impulse response using linearity property
• Step 5: rewrite general expression by means of algebraic manipulation using result from step 4

65

The convolution algorithm

x → F → y,   y = F(x)

h*(i) = F(δ(i))

h = h*(0)

in which h is called the impulse response of the system F.

66

The convolution algorithm

From the preceding we derive:

x = Σ_i x(i)·δ(i)

y = F(x) = F( Σ_i x(i)·δ(i) )

F is linear and x(i) is a scalar, hence

y = Σ_i x(i)·F(δ(i))

y = Σ_i x(i)·h*(i)        (x(i) is a scalar, h*(i) is a function on Z)

67

The convolution algorithm

continue

y = Σ_i x(i)·h*(i)

recall that addition of functions is pointwise, h3 = h1 + h2 ⟺ h3(z) = h1(z) + h2(z) for all z ∈ Z, so

y(z) = Σ_i x(i)·h*(i)(z)

recall h*(i)(z) = h*(0)(z - i), so

y(z) = Σ_i x(i)·h*(0)(z - i)

68

The convolution algorithm

continue

y(z) = Σ_i x(i)·h*(0)(z - i)

recall h = h*(0), so

y(z) = Σ_i x(i)·h(z - i)

This is called the convolution operation, denoted by y = x ∗ h.

We will apply this formula several times.

69

The convolution algorithm

continue

y(z) = Σ_i x(i)·h(z - i)

with j = z - i, we obtain:

y(z) = Σ_j x(z - j)·h(j)

and if the impulse response h is finite (bounded), i.e.

h(z) = 0  if  z < 0  or  z ≥ N

we get

y(z) = Σ_{j=0}^{N-1} x(z - j)·h(j)
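A minimal C sketch of this final formula, i.e. a finite impulse response filter; the array lengths, the example values of h and x, and the zero-padding of x for z - j < 0 are assumptions of the sketch.

#include <stdio.h>

#define N   4      /* length of the impulse response h  */
#define LEN 8      /* number of output samples computed */

int main(void) {
    double h[N]   = { 0.5, 0.25, 0.125, 0.125 };   /* example impulse response */
    double x[LEN] = { 1, 0, 0, 0, 2, 0, 0, 0 };    /* example input stream     */
    double y[LEN];

    /* y(z) = sum_{j=0}^{N-1} x(z-j) * h(j); x is taken as 0 for z-j < 0 */
    for (int z = 0; z < LEN; z++) {
        y[z] = 0.0;
        for (int j = 0; j < N; j++)
            if (z - j >= 0)
                y[z] += x[z - j] * h[j];
        printf("y(%d) = %g\n", z, y[z]);
    }
    return 0;
}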

70

Dependency Graphs and Signal Flow Graphs

The array processor description gives:

• the way in which the processors are arranged, and
• the way in which the data is communicated between the processing elements.

So we may consider it as a Dependency Graph or a Signal Flow Graph.

Hence, the graph describes the dependencies of the data that is communicated, or said differently:

The graph describes the way in which the data values at the outputs of a processing element depend on the data at the outputs of the other processing elements.

(Figure: a 2-dimensional grid of PEs.)

71

Dependency graphs and Signal Flow Graphs

Dependency Graph: all communicated values are scalars and the processing elements are functions on scalars. Each arrow carries only one value. Time does not play a role.

PE ∈ (V^N)^(V^N)

Signal Flow Graph: the communicated values are streams, i.e. functions on time, and the processing elements are functions on streams.

PE ∈ ((V^Z)^N)^((V^Z)^N)

V is the value domain, number of inputs = number of outputs = N; Z represents time.

72

Recurrent relations

For simple algorithms the transformation from single assignment code to a recurrent relation is simple.

Questions to answer:

• How do recurrent relations influence the dependency graph?

• How can recurrent relations be manipulated such that the behavior remains the same and the structure of the dependency graph is changed?

We will answer these questions by means of an example: the Matrix-Vector multiplication c = A·b.

73

Matrix Vector multiplication

Matrix-Vector multiplication c = A·b:

c_i = Σ_{j=0}^{N-1} a_{i,j}·b_j,    i = 0, ..., K-1

Recurrent relations:

s_{i,j} = s_{i,j-1} + a_{i,j}·b_j,   s_{i,-1} = 0,   c_i = s_{i,N-1}
with i = 0, ..., K-1 and j = 0, ..., N-1

Alternative (because + is associative):

s_{i,j} = s_{i,j+1} + a_{i,j}·b_j,   s_{i,N} = 0,   c_i = s_{i,0}
with i = 0, ..., K-1 and j = 0, ..., N-1
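A minimal C sketch of both recurrences; K, N and the matrix and vector values are illustrative, and the recurrence value s_{i,-1} resp. s_{i,N} is kept in a running scalar.

#include <stdio.h>

#define K 4   /* number of rows of A    */
#define N 3   /* number of columns of A */

int main(void) {
    int a[K][N] = { {1,2,3}, {4,5,6}, {7,8,9}, {1,0,1} };    /* example matrix */
    int b[N]    = { 1, 2, 3 };                               /* example vector */
    int c1[K], c2[K];

    for (int i = 0; i < K; i++) {
        /* s_{i,j} = s_{i,j-1} + a_{i,j}*b_j, s_{i,-1} = 0, c_i = s_{i,N-1} */
        int s = 0;
        for (int j = 0; j < N; j++) s = s + a[i][j] * b[j];
        c1[i] = s;

        /* alternative: s_{i,j} = s_{i,j+1} + a_{i,j}*b_j, s_{i,N} = 0, c_i = s_{i,0} */
        s = 0;
        for (int j = N - 1; j >= 0; j--) s = s + a[i][j] * b[j];
        c2[i] = s;

        printf("c[%d] = %d = %d\n", i, c1[i], c2[i]);        /* both orders give the same result */
    }
    return 0;
}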

74

Matrix Vector multiplication

The basic cell is described by:

s_{i,j} = s_{i,j-1} + a_{i,j}·b_j

We have two indices i and j, so the dependency graph can be described as a two-dimensional array.

(Figure: a PE at position (i,j) with inputs s_{i,j-1} and b_j and output s_{i,j}; internally a multiplier forms a_{i,j}·b_j and an adder adds it to s_{i,j-1}.)

75

DG-1 of the Matrix Vector multiplication

(Figure: a K × N array of PEs with K = 4 rows (i = 0..3) and N = 3 columns (j = 0..2). The s values flow horizontally: s_{i,-1} = 0 enters at the left and c_i = s_{i,2} leaves at the right. Each b_j is distributed to the whole column j.)

Relations:   s_{i,j} = s_{i,j-1} + a_{i,j}·b_j,   s_{i,-1} = 0,   c_i = s_{i,N-1}
             i ∈ {0,1,2,3} (K = 4),   j ∈ {0,1,2} (N = 3)

b0, b1 and b2 are global dependencies. Therefore this graph is called a globally recursive graph.

76

DG-2 of the Matrix Vector multiplication

(Figure: the same K × N array of PEs, but now the s values flow in the opposite direction: s_{i,N} = 0 enters at the right and c_i = s_{i,0} leaves at the left. Each b_j is again distributed to the whole column j.)

Relations:   s_{i,j} = s_{i,j+1} + a_{i,j}·b_j,   s_{i,N} = 0,   c_i = s_{i,0}
             i ∈ {0,1,2,3} (K = 4),   j ∈ {0,1,2} (N = 3)

77

Variable naming and index assignment

A variable associated to an arrow gets the indices of the processing element that delivers its value.

(Figure: the PE at position (i, j) has inputs a_{i,j-1}, b_{i-1,j-1} and c_{i-1,j} and outputs a_{i,j}, b_{i,j} and c_{i,j}.)

Local constants get the indices of the processing element that they are in.

(Figure: PE_{i,j} containing the local constant v_{i,j}.)

78

Recurrent relations: Conclusion

The associative operations + and · result in two different recurrent relations, and thus in two different dependency graphs.

Equation  y = Σ_{i=0}^{N-1} x_i  results in

s_i = s_{i-1} + x_i,   s_{-1} = 0,   y = s_{N-1}      with i = 0, 1, ..., N-1
or
s_i = s_{i+1} + x_i,   s_N = 0,      y = s_0          with i = 0, 1, ..., N-1

Equation  y = Π_{i=0}^{N-1} x_i  results in

s_i = s_{i-1}·x_i,   s_{-1} = 1,   y = s_{N-1}        with i = 0, 1, ..., N-1
or
s_i = s_{i+1}·x_i,   s_N = 1,      y = s_0            with i = 0, 1, ..., N-1

Other associative operations are for example ‘AND’ and ‘OR’.
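A minimal C sketch of the two accumulation orders for the sum (the values are illustrative); both recurrences compute the same y because + is associative.

#include <stdio.h>

#define N 5

int main(void) {
    int x[N] = { 3, 1, 4, 1, 5 };          /* example values */

    /* s_i = s_{i-1} + x_i, s_{-1} = 0, y = s_{N-1}: accumulate left to right */
    int s = 0;
    for (int i = 0; i < N; i++) s = s + x[i];
    int y1 = s;

    /* s_i = s_{i+1} + x_i, s_N = 0, y = s_0: accumulate right to left */
    s = 0;
    for (int i = N - 1; i >= 0; i--) s = s + x[i];
    int y2 = s;

    printf("%d %d\n", y1, y2);             /* same result: 14 14 */
    return 0;
}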

79

Changing global data dependencies into local data dependencies

Global data dependencies resist manipulating the dependency graph.

Global data dependencies:

c_i = Σ_{j=0}^{N-1} a_{i,j}·b_j

s_{i,j} = s_{i,j-1} + a_{i,j}·b_j,   s_{i,-1} = 0,   c_i = s_{i,N-1}

Local data dependencies:

s_{i,j} = s_{i,j-1} + a_{i,j}·d_{i-1,j},   d_{-1,j} = b_j,   d_{i,j} = d_{i-1,j},
s_{i,-1} = 0,   c_i = s_{i,N-1}

(Figure: the PE for the local version has an extra input d_{i-1,j} and output d_{i,j} in the i direction; s still flows in the j direction towards c_i.)

80

Changing global data dependencies into local data dependencies

So the matrix-vector multiplication c_i = Σ_{j=0}^{N-1} a_{i,j}·b_j becomes a locally recursive graph:

Relations:   s_{i,j} = s_{i,j-1} + a_{i,j}·d_{i-1,j}
             d_{-1,j} = b_j,   d_{i,j} = d_{i-1,j}
             s_{i,-1} = 0,   c_i = s_{i,N-1}
             i ∈ {0,1,2,3} (K = 4),   j ∈ {0,1,2} (N = 3)

(Figure: the b values enter at the top as b_j = d_{-1,j} and are passed down from PE to PE through the d edges; the s values flow horizontally from s_{i,-1} = 0 to c_i = s_{i,2}.)
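A minimal C sketch of this locally recursive version, with explicit s and d arrays mirroring the dependency graph; the +1 index offsets are only there because C arrays cannot start at index -1, and the matrix and vector values are illustrative.

#include <stdio.h>

#define K 4
#define N 3

int main(void) {
    int a[K][N] = { {1,2,3}, {4,5,6}, {7,8,9}, {1,0,1} };
    int b[N]    = { 1, 2, 3 };
    int s[K][N + 1];       /* s[i][j+1] stands for s_{i,j}, j = -1..N-1 */
    int d[K + 1][N];       /* d[i+1][j] stands for d_{i,j}, i = -1..K-1 */
    int c[K];

    for (int j = 0; j < N; j++) d[0][j] = b[j];         /* d_{-1,j} = b_j */
    for (int i = 0; i < K; i++) s[i][0] = 0;            /* s_{i,-1} = 0   */

    for (int i = 0; i < K; i++)
        for (int j = 0; j < N; j++) {
            s[i][j + 1] = s[i][j] + a[i][j] * d[i][j];  /* s_{i,j} = s_{i,j-1} + a_{i,j}*d_{i-1,j} */
            d[i + 1][j] = d[i][j];                      /* d_{i,j} = d_{i-1,j}                     */
        }

    for (int i = 0; i < K; i++) { c[i] = s[i][N]; printf("c[%d] = %d\n", i, c[i]); }
    return 0;
}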

81

Alternative transformation from global data dependencies to local data dependencies

Global data dependencies:

c_i = Σ_{j=0}^{N-1} a_{i,j}·b_j

s_{i,j} = s_{i,j-1} + a_{i,j}·b_j,   s_{i,-1} = 0,   c_i = s_{i,N-1}

Local data dependencies (b now entering from the other side):

s_{i,j} = s_{i,j-1} + a_{i,j}·d_{i+1,j},   d_{K,j} = b_j,   d_{i,j} = d_{i+1,j},
s_{i,-1} = 0,   c_i = s_{i,N-1}

(Figure: the PE now receives d_{i+1,j} from below and passes d_{i,j} upwards; s still flows horizontally towards c_i.)

82

Changing global data dependencies into local data dependencies

So the alternative locally recursive graph becomes:

Relations:   s_{i,j} = s_{i,j-1} + a_{i,j}·d_{i+1,j}
             d_{K,j} = b_j,   d_{i,j} = d_{i+1,j}
             s_{i,-1} = 0,   c_i = s_{i,N-1}
             i ∈ {0,1,2,3} (K = 4),   j ∈ {0,1,2} (N = 3)

(Figure: the b values now enter at the bottom as b_j = d_{4,j} and are passed upwards from PE to PE through the d edges; the s values flow horizontally from s_{i,-1} = 0 to c_i = s_{i,2}.)

83

Shift-invariant graph

Consider an N-dimensional dependency graph with processing elements PE at locations (i,j,k, ...), with base (1,0,0,...), (0,1,0,...), (0,0,1,...), ... .

If, for any (i,j,k, ...) and for any input x of the PE at (i,j,k, ...) that is delivered by the output x of the PE at (p,q,r, ...), the input x of the PE at (i,j+1,k, ...) is delivered by the output x of the PE at (p,q+1,r, ...), then the graph is called shift-invariant in the direction (0,1,0,...).

(Figure: two example graphs, one shift-invariant in the directions i and j, one shift-invariant in the direction i only.)

84

Shift-invariant graphs (Examples)

(Figure: four example graphs: one shift-invariant in the directions i and j, one shift-invariant in the direction j only, and two shift-invariant in no direction.)

85

Shift-invariant graphs

Because the inputs and outputs often negatively influence the shift-invariance property, the inputs and outputs are treated separately.

Hence, we always distinguish between

• Input edges,

• Output edges and

• Intermediate edges

86

Dependency Graphs

Conclusions:

Associative operations give two alternative DG’s.

Input, output and intermediate edges will be treated separately.

Transformation from global to local dependencies gives two alternative DG’s.
