
Page 1: [IEEE 2013 11th East-West Design and Test Symposium (EWDTS) - Rostov-on-Don, Russia (2013.09.27-2013.09.30)] East-West Design & Test Symposium (EWDTS 2013) - High-level test program

High-level Test Program Generation Strategies for Processors

Shima Hoseinzadeh

Department of Computer Engineering,

Science and Research branch Islamic Azad University,

Hesarak,1477893855, Tehran ,Iran

[email protected]

Mohammad Hashem Haghbayan

School of Electrical and Computer Engineering

Engineering Colleges, Campus 2

University of Tehran, 1450 North Amirabad, 14395-515

Tehran, Iran

[email protected]

Abstract

This paper brings together reliability and testability and introduces rules for generating high-level test macros for processors. These rules help to generate higher-quality test macros; at the same time, their complements can serve as a reference guide for a programmer who wants to write more reliable code. The basic idea behind these rules is that more testable code is less reliable, and vice versa. The empirical results show the effect of these rules in generating high-quality high-level test macros, and show that applying them yields less reliable overall code. Conversely, a programmer can invert these guidelines to produce less testable but more reliable programs.

Keywords: Test generation, Processor testing, Test program

I. INTRODUCTION

As the complexity and integration density of digital logic designs increase, susceptibility to internal defects, including post-fabrication faults and aging faults, increases as well. Therefore, many methods for testing chips after fabrication and during operation have been developed [1]. One conventional method of testing digital designs is generating test patterns at the bit level using the stuck-at fault model. At the same time, as logic density on silicon wafers grows in large, complex designs, many methods have been proposed for testing designs functionally using high-level test macros and/or high-level fault models [2,3].

In other branches of computer engineering, many methods have been proposed to build fault-tolerant designs. The main purpose of such circuits is to provide features that protect the design from faults, such as aging faults or transient faults, or at least to alert the user so the fault can be handled [4]. This concept is discussed in the literature on reliable system design and reliability factors [5,6]. In general, the fault model used in reliability work differs based on the assumed fault, and can be a simple stuck-at fault model, a transient fault model, a glitch with a specific duration, etc. In this paper we consider a stuck-at aging fault model, but the proposed rules can be shown to apply to other fault models as well.

High-level test macros can be used for two purposes: 1) covering all stuck-at or transient faults in online or offline post-silicon testing of a design, and 2) guiding designers and compilers to generate code with a lower capacity for propagating faults to the primary outputs, which leads to more reliable code. In other words, we look at high-level test macros from two different aspects: on one hand, the ability of a test macro to propagate faults to the primary outputs, which leads to high-quality test vectors; on the other hand, the inability of such code to mask occurred faults, which leads to less reliable code. In this paper, we therefore propose some general rules that help a compiler or a designer generate code that is well suited to functional testing of a design, along with the complementary rules to avoid when the goal is highly reliable code.

II. PROGRAMMING LEVELS

High-level programming can be considered at different levels and from different aspects: from reordering and changing instruction types at the assembly and compiler levels, up to choosing between different algorithms for the same application at the highest level of project development.

Figure 1 shows the process of developing a programming project, from designing the algorithm down to the bit-level implementation. As discussed above, functional test macro generation can be done at each of the mentioned levels, independently or together. For example, at the algorithm design level, the designer can choose an algorithm that achieves higher fault coverage. As a simple case study, consider the Fibonacci function, which can be implemented either recursively or with loop iteration. In this phase, the best algorithm for testability, or conversely for reliability, should be selected.

978-1-4799-2096-9/13/$31.00 ©2013 IEEE

//Example for more testable code
int a = 3;
int b = 4;
int c = 5;
c += a * b;
return c;

----------------------------

//Example for more reliable code
int a = 3;
int b = 4;
int c = 5;
int sum = 0;
int i;
for (i = 0; i < b; i++)
    sum += a; /* a * b by repeated addition; only the adder is used */
return sum + c;

The same process can be applied at each level. In this paper, we focus on some simple rules at the high-level programming level by which the designer can adapt his style of programming to achieve a more testable or a more reliable program.

Fig.1. The levels of programming

As a simple example, suppose task A can be divided into three tasks B, C, and D, which can run on machines X, Y, and Z with aging-defect probabilities Px(t), Py(t), and Pz(t), where t is the time since fabrication. If we run all three tasks on one machine (for example, machine X), the computation is more reliable than if the tasks are scheduled on three separate machines: assigning the tasks to a single machine that has been tested recently is more reliable than using several machines without considering their run time. So the first strategy is more reliable. The second strategy, which schedules the tasks on different machines, makes the tasks exercise a wider set of logic, so it is more testable: it activates more paths in the system.

From this simple example we can conclude that the two strategies give us different testability and reliability. In the following sections, some rules are presented that help achieve either goal.

III. PROGRAMMING RULES FOR TEST MACRO GENERATION

According to the above discussion, test macro generation can be considered at different levels of the design and the compiler, for functional online and offline testing and for reliability. In this section we present some general observations on high-level coding (not design level or assembly level) for improving the quality of test macros for the same application, from basic facts up to more complex situations.

Observation 1 (using the same variable vs. different variables): In high-level programming, using different variables instead of reusing the same variable increases the quality of a high-level test macro.

The compiler allocates different locations for different variables; in many cases the variables are assigned to internal registers that are activated during the execution of the program, which increases the probability of fault propagation from that region. Conversely, for more reliable programs, reusing the same variable appears to be the better choice. More testable and more reliable example codes according to this rule are shown in Fig. 2.

Fig.2. Using the same variable vs. different variables

Observation 2 (using various components vs. a single component): In high-level code, using various components increases test macro quality.

Using various hardware components leads to better fault propagation, which increases test quality. To obtain more reliable code, it is better for the designer to use simpler code that exercises fewer components. Fig. 3 shows two simple methods for calculating the multiply-accumulate of three values; in the more reliable version only the adder is used.

Fig.3. Using different components vs. using the same components

//Example for more testable code

int a = 3;

int b = 4;

int c;

c = a + b;

return c;

----------------------------

//Example for more reliable code

int a = 3;

int b = 4;

b = a + b;

return b;


//Example for more testable code

long int a = 3;

long int b = 4;

long int c;

c = a + b;

return c;

----------------------------

//Example for more reliable code

int a = 3;

int b = 4;

int c;

c = a + b;

return c;

//Example for more testable code
struct node{int x; node* next;} *node1, *node2;
node1 = (node*) malloc( sizeof(struct node) );
node2 = (node*) malloc( sizeof(struct node) );
node1->next = node2;
node1->x = 5;
node1->next->x = 6;
cout << node2->x;

----------------------

//Example for more reliable code
int intarray[2];
intarray[0] = 5;
intarray[1] = 6;

//Example for more testable code
int mac (int a, int b, int c){
    return (a * b) + c;
}
int main (){
    int x = 3;
    int y = 4;
    int z = 5;
    int w;
    w = mac (x, y, z);
    return w;
}

----------------------------

//Example for more reliable code
#define MACRO_1 (x * y) + z
int main (){
    int x = 3;
    int y = 4;
    int z = 5;
    int w;
    w = MACRO_1;
    return w;
}

Observation 3 (using a linked list vs. using an array): When testing is the goal, shuffling the data during the program is essential.

Shuffling the data during the program increases the probability of fault detection. Hence, whenever memory is allocated dynamically during the execution of the program, the quality of the application in detecting faults increases. Therefore, using dynamic memory increases the quality of test macros compared with array structures. Fig. 4 shows an example of code using dynamic memory and code using array-type memory.

Fig. 4. Using array vs. using link list

On the other hand, for developing more reliable programs, using array data structures is better.

Observation 4 (large bit length vs. small bit length): Using variables with a larger bit length increases the quality of a test macro.

When variables are declared with a larger bit length (for example, long int instead of int in C), more bits of the processor's arithmetic units are involved in computing on the data, so more of the potential faults can be propagated to the output. Therefore, the larger the bit lengths, the higher the quality of the test macros. An example of this rule is shown in Fig. 5.

Fig.5. Using large bit length vs. small bit length

On the other hand, the designer of a reliable application reduces the bit lengths as much as possible to achieve better reliability.

Observation 5 (using function calls vs. using macros): Function calls provide better fault coverage than macros.

Although using macros increases the length of the program, and thereby covers more faults in the program counter and program memory, implementing the same operations as functions also covers the logic related to the stack and some special cases of the controller. In addition, since function calls involve local variable definitions, the possibility of covering faults in other parts of the register file (and the input/output memory bus) increases. According to the experimental results, function calls cover a larger number of faults but are less reliable than macros. Fig. 6 gives an example of more testable and more reliable code for this rule.

Fig. 6. Using call functions vs. using macros

IV. EXPERIMENTAL RESULTS

To evaluate the applicability of the proposed rules experimentally, we used an industrial 8-bit processor as the case study. The processor contains 32 registers in its register file, and its architecture is very close to that of AVR microcontrollers. The processor has more than 130 instructions and an intelligent C compiler. First, the HDL code is converted to a netlist using the XST synthesis tool from Xilinx ISE. After generating the atomic netlist [11], components are mapped to a predefined library for fault simulation. A parallel fault simulator is used for simulating the processor.

On the other side, we used an industrial compiler to generate the assembly code from C. Fig. 7 shows the assembly code of Fig. 2 generated by the compiler. As the observability of the processor is limited, we used store instructions to improve the fault coverage in both


;//Example for more testable code
; 0000 00AE  int c;
; 0000 00AF  c = a + b;
0000c2 931a  ST   -Y,R17
0000c3 930a  ST   -Y,R16
; a -> Y+4
; b -> Y+2
; c -> R16,R17
0000c4 81ea  LDD  R30,Y+2
0000c5 81fb  LDD  R31,Y+2+1
0000c6 81ac  LDD  R26,Y+4
0000c7 81bd  LDD  R27,Y+4+1
0000c8 0fea  ADD  R30,R26
0000c9 1ffb  ADC  R31,R27
0000ca 018f  MOVW R16,R30
; 0000 00B0  return c;
0000cb 01f8  MOVW R30,R16
0000cc 8119  LDD  R17,Y+1
0000cd 8108  LDD  R16,Y+0
0000ce 9626  ADIW R28,6
0000cf 9508  RET

more testable and more reliable macros. In addition, to demonstrate the effect of the mentioned rules, we used a version of the processor in which all register file outputs are connected to the outputs of the design. This improves the observability of faults and shows the propagation of faults to the next state.

Fig.7. Assembly code generated by the compiler for the code in Fig. 2

Table 1 shows the fault coverage results for the more testable code vs. the more reliable code for each rule of the previous section. Table 2 shows the fault coverage results for the processor with improved observability. In both cases, for the same algorithm, the code following the proposed testability rules achieved better coverage. Changing the netlist to obtain better observability gives the second processor more collapsed faults. Since the total number of collapsed faults is large, even a large number of detected faults produces only a small increase in overall fault coverage; this small increase, however, corresponds to high coverage in particular components such as the ALU and the register file.

Table 1. Fault simulation results for the processor
(#collapsed faults: 33493)

                  More testable code     More reliable code
                  #detected    FC        #detected    FC
Code of Fig. 2      2718       8.1%        2630       7.8%
Code of Fig. 3      2726       8.1%        2522       7.5%
Code of Fig. 4      2819       8.4%        2630       7.8%
Code of Fig. 5      2121       6.3%        2002       5.9%
Code of Fig. 6      2234       6.6%        2122       6.3%

Table 2. Fault simulation results for the processor with increased observability
(#collapsed faults: 35177)

                  More testable code     More reliable code
                  #detected    FC        #detected    FC
Code of Fig. 2      6104       17.3%       6002       17.0%
Code of Fig. 3      6117       17.3%       5930       16.8%
Code of Fig. 4      6530       18.5%       6112       17.3%
Code of Fig. 5      6212       17.6%       6170       17.5%
Code of Fig. 6      6105       17.3%       6009       17.0%

V. CONCLUSION AND FUTURE WORKS

In this paper, some general rules for obtaining more testable high-level macros were proposed. A test macro programmer can use these rules to generate more testable code; conversely, the complements of the proposed rules can be used to generate more reliable code. Fault coverage results on an industrial case study were presented. As the density and area of digital designs continue to increase, this kind of functional testing may come to replace deterministic methods in the future, and it can also help in developing new fault models for processors.

VI. REFERENCES

[1] Z. Navabi, Digital System Test and Testable Design: Using HDL Models and Architectures, Springer, 2010.
[2] M. H. Haghbayan, S. Karamati, F. Javaheri, Z. Navabi, "Test Pattern Selection and Compaction for Sequential Circuits in an HDL Environment," Asian Test Symposium, 2010, pp. 53-56.
[3] D. Sabena, M. S. Reorda, L. Sterpone, "A new SBST algorithm for testing the register file of VLIW processors," DATE 2012, pp. 412-417.
[4] B. Khodabandeloo, S. A. Hoseini, S. Taheri, M. H. Haghbayan, M. R. Babaei, Z. Navabi, "Online Test Macro Scheduling and Assignment in MPSoC Design," Asian Test Symposium, 2011, pp. 148-15
[5] D. Zhu, R. Melhem, D. Mosse, "The effects of energy management on reliability in real-time embedded systems," in Proc. of International Conference on Computer Aided Design, pp. 35-40, 2004.
[6] F. Firouzi, M. E. Salehi, F. Wang, A. Azarpeyvand, S. M. Fakhraie, "Reliability considerations in dynamic voltage and frequency scheduling schemes," in Proc. of IEEE International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS).
[7] M. H. Haghbayan, S. Safari, Z. Navabi, "Power constraint testing for multi-clock domain SoCs using concurrent hybrid BIST," DDECS 2012, pp. 42-45.
[8] P. Kabiri, Z. Navabi, "Effective RT-level software-based self-testing of embedded processor cores," DDECS 2012, pp. 209-212.
[9] M. Grosso, W. Javier, P. Holguin, E. Sánchez, M. S. Reorda, A. P. Tonda, J. V. Medina, "Software-Based Testing for System Peripherals," J. Electronic Testing, 28(2): 189-200, 2012.
[10] S. Yang, W. Wang, T. Lu, W. Wolf, N. Vijaykrishnan, Y. Xie, "Case study of reliability-aware and low-power design," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 16, no. 7, July 2008.
[11] J. F. Ziegler, "Terrestrial cosmic ray intensities," IBM Journal of Research and Development, vol. 42, no. 1, pp. 117-139, 1998.